Deep Learning for Vision Systems
MOHAMED ELGENDY

MANNING
SHELTER ISLAND

For online information and ordering of this and other Manning books, please visit www.manning.com. The publisher offers discounts on this book when ordered in quantity. For more information, please contact: Special Sales Department, Manning Publications Co., 20 Baldwin Road, PO Box 761, Shelter Island, NY 11964. Email: orders@manning.com

©2020 by Manning Publications Co. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in the book, and Manning Publications was aware of a trademark claim, the designations have been printed in initial caps or all caps.

Recognizing the importance of preserving what has been written, it is Manning's policy to have the books we publish printed on acid-free paper, and we exert our best efforts to that end. Recognizing also our responsibility to conserve the resources of our planet, Manning books are printed on paper that is at least 15 percent recycled and processed without the use of elemental chlorine.

Development editor: Jenny Stout
Technical development editor: Alain Couniot
Review editor: Ivan Martinović
Production editor: Lori Weidert
Copy editor: Tiffany Taylor
Proofreader: Keri Hales
Technical proofreader: Al Krinker
Typesetter: Dennis Dalinnik
Cover designer: Marija Tudor

ISBN: 9781617296192
Printed in the United States of America

To my mom, Huda, who taught me perseverance and kindness
To my dad, Ali, who taught me patience and purpose
To my loving and supportive wife, Amanda, who always inspires me to keep climbing
To my two-year-old daughter, Emily, who teaches me every day that AI still has a long way to go to catch up with even the tiniest humans

contents

preface · acknowledgments · about this book · about the author · about the cover illustration

Part 1: Deep learning foundation

1 Welcome to computer vision
 1.1 Computer vision: What is visual perception? · Vision systems · Sensing devices · Interpreting devices
 1.2 Applications of computer vision: Image classification · Object detection and localization · Generating art (style transfer) · Creating images · Face recognition · Image recommendation system
 1.3 Computer vision pipeline: The big picture
 1.4 Image input: Image as functions · How computers see images · Color images
 1.5 Image preprocessing: Converting color images to grayscale to reduce computation complexity
 1.6 Feature extraction: What is a feature in computer vision? · What makes a good (useful) feature? · Extracting features (handcrafted vs. automatic extracting)
 1.7 Classifier learning algorithm

2 Deep learning and neural networks
 2.1 Understanding perceptrons: What is a perceptron? · How does the perceptron learn? · Is one neuron enough to solve complex problems?
 2.2 Multilayer perceptrons: Multilayer perceptron architecture · What are hidden layers? · How many layers, and how many nodes in each layer? · Some takeaways from this section
 2.3 Activation functions: Linear transfer function · Heaviside step function (binary classifier) · Sigmoid/logistic function · Softmax function · Hyperbolic tangent function (tanh) · Rectified linear unit · Leaky ReLU
 2.4 The feedforward process: Feedforward calculations · Feature learning
 2.5 Error functions: What is the error function? · Why do we need an error function? · Error is always positive · Mean square error · Cross-entropy · A final note on errors and weights
 2.6 Optimization algorithms: What is optimization? · Batch gradient descent · Stochastic gradient descent · Mini-batch gradient descent · Gradient descent takeaways
 2.7 Backpropagation: What is backpropagation? · Backpropagation takeaways

3 Convolutional neural networks
 3.1 Image classification using MLP: Input layer · Hidden layers · Output layer · Putting it all together · Drawbacks of MLPs for processing images
 3.2 CNN architecture: The big picture · A closer look at feature extraction · A closer look at classification
 3.3 Basic components of a CNN: Convolutional layers · Pooling layers or subsampling · Fully connected layers
 3.4 Image classification using CNNs: Building the model architecture · Number of parameters (weights)
 3.5 Adding dropout layers to avoid overfitting: What is overfitting? · What is a dropout layer? · Why do we need dropout layers? · Where does the dropout layer go in the CNN architecture?
 3.6 Convolution over color images (3D images): How do we perform a convolution on a color image? · What happens to the computational complexity?
 3.7 Project: Image classification for color images

4 Structuring DL projects and hyperparameter tuning
 4.1 Defining performance metrics: Is accuracy the best metric for evaluating a model? · Confusion matrix · Precision and recall · F-score
 4.2 Designing a baseline model
 4.3 Getting your data ready for training: Splitting your data for train/validation/test · Data preprocessing
 4.4 Evaluating the model and interpreting its performance: Diagnosing overfitting and underfitting · Plotting the learning curves
 4.5 Improving the network and tuning hyperparameters: Collecting more data vs. tuning hyperparameters · Parameters vs. hyperparameters · Neural network hyperparameters · Network architecture
 4.6 Learning and optimization: Learning rate and decay schedule · A systematic approach to find the optimal learning rate · Learning rate decay and adaptive learning · Mini-batch size
 4.7 Optimization algorithms: Gradient descent with momentum · Adam · Number of epochs and early stopping criteria · Early stopping
 4.8 Regularization techniques to avoid overfitting: L2 regularization · Dropout layers · Data augmentation
 4.9 Batch normalization: The covariate shift problem · Covariate shift in neural networks · How does batch normalization work? · Batch normalization implementation in Keras · Batch normalization recap
 4.10 Project: Achieve high accuracy on image classification

Part 2: Image classification and detection

5 Advanced CNN architectures
 5.1 CNN design patterns
 5.2 LeNet-5: LeNet architecture · LeNet-5 implementation in Keras · Setting up the learning hyperparameters · LeNet performance on the MNIST dataset
 5.3 AlexNet: AlexNet architecture · Novel features of AlexNet · AlexNet implementation in Keras · Setting up the learning hyperparameters · AlexNet performance
 5.4 VGGNet: Novel features of VGGNet · VGGNet configurations · Learning hyperparameters
 5.5 Inception and GoogLeNet: Novel features of Inception · Inception module: Naive version · Inception module with dimensionality reduction · Inception architecture · GoogLeNet in Keras · Learning hyperparameters · Inception performance on the CIFAR dataset
 5.6 ResNet: Novel features of ResNet · Residual blocks · ResNet implementation in Keras · Learning hyperparameters · ResNet performance on the CIFAR dataset

6 Transfer learning
 6.1 What problems does transfer learning solve?
 6.2 What is transfer learning?
 6.3 How transfer learning works: How do neural networks learn features? · Transferability of features extracted at later layers
 6.4 Transfer learning approaches: Using a pretrained network as a classifier · Using a pretrained network as a feature extractor · Fine-tuning
 6.5 Choosing the appropriate level of transfer learning: Scenario 1: Target dataset is small and similar to the source dataset · Scenario 2: Target dataset is large and similar to the source dataset · Scenario 3: Target dataset is small and different from the source dataset · Scenario 4: Target dataset is large and different from the source dataset · Recap of the transfer learning scenarios
 6.6 Open source datasets: MNIST · Fashion-MNIST · CIFAR · ImageNet · MS COCO · Google Open Images · Kaggle
 6.7 Project 1: A pretrained network as a feature extractor
 6.8 Project 2: Fine-tuning

7 Object detection with R-CNN, SSD, and YOLO
 7.1 General object detection framework: Region proposals · Network predictions · Non-maximum suppression (NMS) · Object-detector evaluation metrics
 7.2 Region-based convolutional neural networks (R-CNNs): R-CNN · Fast R-CNN · Faster R-CNN · Recap of the R-CNN family
 7.3 Single-shot detector (SSD): High-level SSD architecture · Base network · Multi-scale feature layers · Non-maximum suppression
 7.4 You only look once (YOLO): How YOLOv3 works · YOLOv3 architecture
 7.5 Project: Train an SSD network in a self-driving car application: Step 1: Build the model · Step 2: Model configuration · Step 3: Create the model · Step 4: Load the data · Step 5: Train the model · Step 6: Visualize the loss · Step 7: Make predictions

Part 3: Generative models and visual embeddings

8 Generative adversarial networks (GANs)
 8.1 GAN architecture: Deep convolutional GANs (DCGANs) · The discriminator model · The generator model · Training the GAN · GAN minimax function
 8.2 Evaluating GAN models: Inception score · Fréchet inception distance (FID) · Which evaluation scheme to use
 8.3 Popular GAN applications: Text-to-photo synthesis · Image-to-image translation (Pix2Pix GAN) · Image super-resolution GAN (SRGAN) · Ready to get your hands dirty?
 8.4 Project: Building your own GAN

9 DeepDream and neural style transfer
 9.1 How convolutional neural networks see the world: Revisiting how neural networks work · Visualizing CNN features
 9.2 DeepDream: How the DeepDream algorithm works · DeepDream implementation in Keras
 9.3 Neural style transfer: Content loss · Style loss · Total variance loss · Network training

10 Visual embeddings
 10.1 Applications of visual embeddings: Face recognition · Image recommendation systems · Object re-identification
 10.2 Learning embedding
 10.3 Loss functions: Problem setup and formalization · Cross-entropy loss · Contrastive loss · Triplet loss · Naive implementation and runtime analysis of losses
 10.4 Mining informative data: Dataloader · Informative data mining: Finding useful triplets · Batch all (BA) · Batch hard (BH) · Batch weighted (BW) · Batch sample (BS)
 10.5 Project: Train an embedding network: Fashion: Get me items similar to this · Vehicle re-identification · Implementation · Testing a trained model
 10.6 Pushing the boundaries of current accuracy

appendix A: Getting set up
index

preface

Two years ago, I decided to write a book to teach deep learning for computer vision from an intuitive perspective.
My goal was to develop a comprehensive resource that takes learners from knowing only the basics of machine learning to building advanced deep learning algorithms that they can apply to solve complex computer vision problems.

The problem: In short, as of this moment, there are no books out there that teach deep learning for computer vision the way I wanted to learn about it. As a beginner machine learning engineer, I wanted to read one book that would take me from point A to point Z. I planned to specialize in building modern computer vision applications, and I wished that I had a single resource that would teach me everything I needed to do two things: 1) use neural networks to build an end-to-end computer vision application, and 2) be comfortable reading and implementing research papers to stay up-to-date with the latest industry advancements.

I found myself jumping between online courses, blogs, papers, and YouTube videos to create a comprehensive curriculum for myself. It's challenging to try to comprehend what is happening under the hood on a deeper level: not just a basic understanding, but how the concepts and theories make sense mathematically. It was impossible to find one comprehensive resource that (horizontally) covered the most important topics I needed to learn to work on complex computer vision applications while also diving deep enough (vertically) to help me understand the math that makes the magic work.

As a beginner, I searched but couldn't find anything to meet these needs. So now I've written it. My goal has been to write a book that not only teaches the content I wanted when I was starting out, but also levels up your ability to learn on your own.
My solution is a comprehensive book that dives deep both horizontally and vertically:

- Horizontally: This book explains most topics that an engineer needs to learn to build production-ready computer vision applications, from neural networks and how they work to the different types of neural network architectures and how to train, evaluate, and tune the network.
- Vertically: The book dives a level or two deeper than the code and explains intuitively (and gently) how the math works under the hood, to empower you to be comfortable reading and implementing research papers or even inventing your own techniques.

At the time of writing, I believe this is the only deep learning for vision systems resource that is taught this way. Whether you are looking for a job as a computer vision engineer, want to gain a deeper understanding of advanced neural network algorithms in computer vision, or want to build your own product or startup, I wrote this book with you in mind. I hope you enjoy it.

acknowledgments

This book was a lot of work. No, make that really a lot of work! But I hope you will find it valuable. There are quite a few people I'd like to thank for helping me along the way.

I would like to thank the people at Manning who made this book possible: publisher Marjan Bace and everyone on the editorial and production teams, including Jennifer Stout, Tiffany Taylor, Lori Weidert, Katie Tennant, and many others who worked behind the scenes. Many thanks go to the technical peer reviewers led by Alain Couniot (Al Krinker, Albert Choy, Alessandro Campeis, Bojan Djurkovic, Burhan ul haq, David Fombella Pombal, Ishan Khurana, Ita Cirovic Donev, Jason Coleman, Juan Gabriel Bono, Juan José Durillo Barrionuevo, Michele Adduci, Millad Dagdoni, Peter Hraber, Richard Vaughan, Rohit Agarwal, Tony Holdroyd, Tymoteusz Wolodzko, and Will Fuger) and the active readers who contributed their feedback in the book forums.
Their contributions included catching typos, code errors, and technical mistakes, as well as making valuable topic suggestions. Each pass through the review process and each piece of feedback implemented through the forum topics shaped and molded the final version of this book.

Finally, thank you to the entire Synapse Technology team. You've created something that's incredibly cool. Thank you to Simanta Guatam, Aleksandr Patsekin, Jay Patel, and others for answering my questions and brainstorming ideas for the book.

about this book

Who should read this book

If you know the basic machine learning framework, can hack around in Python, and want to learn how to build and train advanced, production-ready neural networks to solve complex computer vision problems, I wrote this book for you. The book was written for anyone with intermediate Python experience and basic machine learning understanding who wishes to explore training deep neural networks and learn to apply deep learning to solve computer vision problems.

When I started writing the book, my primary goal was as follows: "I want to write a book to grow readers' skills, not teach them content." To achieve this goal, I had to keep an eye on two main tenets:

1. Teach you how to learn. I don't want to read a book that just goes through a set of scientific facts. I can get that on the internet for free. If I read a book, I want to finish it having grown my skillset so I can study the topic further. I want to learn how to think about the presented solutions and come up with my own.
2. Go very deep. If I'm successful in satisfying the first tenet, that makes this one easy. If you learn how to learn new concepts, that allows me to dive deep without worrying that you might fall behind.
This book doesn't avoid the math part of the learning, because understanding the mathematical equations will empower you with the best skill in the AI world: the ability to read research papers, compare innovations, and make the right decisions about implementing new concepts in your own problems. But I promise to introduce only the mathematical concepts you need, and I promise to present them in a way that doesn't interrupt your flow, so you can follow the concepts without the math if you prefer.

How this book is organized: A roadmap

This book is structured into three parts. The first part explains deep learning in detail as a foundation for the remaining topics. I strongly recommend that you not skip this section, because it dives deep into neural network components and definitions and explains all the notions required to understand how neural networks work under the hood. After reading part 1, you can jump directly to topics of interest in the remaining chapters. Part 2 explains deep learning techniques to solve object classification and detection problems, and part 3 explains deep learning techniques to generate images and visual embeddings. In several chapters, practical projects implement the topics discussed.

About the code

All of this book's code examples use open source frameworks that are free to download. We will be using Python, TensorFlow, Keras, and OpenCV. Appendix A walks you through the complete setup. I also recommend that you have access to a GPU if you want to run the book projects on your machine, because chapters 6–10 contain more complex projects to train deep networks that would take a long time on a regular CPU. Another option is to use a cloud environment like Google Colab for free, or other paid options.

Examples of source code occur both in numbered listings and in line with normal text.
In both cases, source code is formatted in a fixed-width font like this to separate it from ordinary text. Sometimes code is also in bold to highlight code that has changed from previous steps in the chapter, such as when a new feature adds to an existing line of code.

In many cases, the original source code has been reformatted; we've added line breaks and reworked indentation to accommodate the available page space in the book. In rare cases, even this was not enough, and listings include line-continuation markers (➥). Additionally, comments in the source code have often been removed from the listings when the code is described in the text. Code annotations accompany many of the listings, highlighting important concepts.

The code for the examples in this book is available for download from the Manning website at www.manning.com/books/deep-learning-for-vision-systems and from GitHub at https://github.com/moelgendy/deep_learning_for_vision_systems.

liveBook discussion forum

Purchase of Deep Learning for Vision Systems includes free access to a private web forum run by Manning Publications where you can make comments about the book, ask technical questions, and receive help from the author and from other users. To access the forum, go to https://livebook.manning.com/#!/book/deep-learning-for-vision-systems/discussion. You can also learn more about Manning's forums and the rules of conduct at https://livebook.manning.com/#!/discussion.

Manning's commitment to our readers is to provide a venue where a meaningful dialogue between individual readers and between readers and the author can take place. It is not a commitment to any specific amount of participation on the part of the author, whose contribution to the forum remains voluntary (and unpaid). We suggest you try asking the author some challenging questions lest his interest stray!
The forum and the archives of previous discussions will be accessible from the publisher's website as long as the book is in print.

about the author

Mohamed Elgendy is the vice president of engineering at Rakuten, where he is leading the development of its AI platform and products. Previously, he served as head of engineering at Synapse Technology, building proprietary computer vision applications to detect threats at security checkpoints worldwide. At Amazon, Mohamed built and managed the central AI team that serves as a deep learning think tank for Amazon engineering teams like AWS and Amazon Go. He also developed the deep learning for computer vision curriculum at Amazon's Machine University. Mohamed regularly speaks at AI conferences like Amazon's DevCon, O'Reilly's AI conference, and Google's I/O.

about the cover illustration

The figure on the cover of Deep Learning for Vision Systems depicts Ibn al-Haytham, an Arab mathematician, astronomer, and physicist who is often referred to as "the father of modern optics" due to his significant contributions to the principles of optics and visual perception. The illustration is modified from the frontispiece of a fifteenth-century edition of Johannes Hevelius's work Selenographia.

In his book Kitab al-Manazir (Book of Optics), Ibn al-Haytham was the first to explain that vision occurs when light reflects from an object and then passes to one's eyes. He was also the first to demonstrate that vision occurs in the brain, rather than in the eyes, and many of these concepts are at the heart of modern vision systems. You will see the correlation when you read chapter 1 of this book. Ibn al-Haytham has been a great inspiration for me as I work and innovate in this field.
By honoring his memory on the cover of this book, I hope to inspire fellow practitioners that our work can live and inspire others for thousands of years.

Part 1: Deep learning foundation

Computer vision is a technological area that's been advancing rapidly thanks to the tremendous advances in artificial intelligence and deep learning that have taken place in the past few years. Neural networks now help self-driving cars to navigate around other cars, pedestrians, and other obstacles; and recommender agents are getting smarter about suggesting products that resemble other products. Face-recognition technologies are becoming more sophisticated, too, enabling smartphones to recognize faces before unlocking a phone or a door.

Computer vision applications like these and others have become a staple in our daily lives. However, by moving beyond the simple recognition of objects, deep learning has given computers the power to imagine and create new things, like art that didn't exist previously, new human faces, and other objects. Part 1 of this book looks at the foundations of deep learning, different forms of neural networks, and structured projects that go a bit further with concepts like hyperparameter tuning.

1 Welcome to computer vision

Hello! I'm very excited that you are here. You are making a great decision: to grasp deep learning (DL) and computer vision (CV). The timing couldn't be more perfect. CV is an area that's been advancing rapidly, thanks to the huge AI and DL advances of recent years. Neural networks are now allowing self-driving cars to figure out where other cars and pedestrians are and navigate around them.
We are using CV applications in our daily lives more and more with all the smart devices in our homes, from security cameras to door locks. CV is also making face recognition work better than ever: smartphones can recognize faces for unlocking, and smart locks can unlock doors. I wouldn't be surprised if sometime in the near future, your couch or television is able to recognize specific people in your house and react according to their personal preferences. It's not just about recognizing objects: DL has given computers the power to imagine and create new things like artwork; new objects; and even unique, realistic human faces.

This chapter covers
- Components of the vision system
- Applications of computer vision
- Understanding the computer vision pipeline
- Preprocessing images and extracting features
- Using classifier learning algorithms

The main reason that I'm excited about deep learning for computer vision, and what drew me to this field, is how rapid advances in AI research are enabling new applications to be built every day and across different industries, something not possible just a few years ago. The unlimited possibilities of CV research are what inspired me to write this book. By learning these tools, perhaps you will be able to invent new products and applications. Even if you end up not working on CV per se, you will find many concepts in this book useful for some of your DL algorithms and architectures. That is because while the main focus is CV applications, this book covers the most important DL architectures, such as artificial neural networks (ANNs), convolutional neural networks (CNNs), generative adversarial networks (GANs), transfer learning, and many more, which are transferable to other domains like natural language processing (NLP) and voice user interfaces (VUIs).
The high-level layout of this chapter is as follows:

- Computer vision intuition: We will start with visual perception intuition and learn the similarities between human and machine vision systems. We will look at how vision systems have two main components: a sensing device and an interpreting device. Each is tailored to fulfill a specific task.
- Applications of CV: Here, we will take a bird's-eye view of the DL algorithms used in different CV applications. We will then discuss vision in general for different creatures.
- Computer vision pipeline: Finally, we will zoom in on the second component of vision systems: the interpreting device. We will walk through the sequence of steps taken by vision systems to process and understand image data. These steps are referred to as a computer vision pipeline. The CV pipeline is composed of four main steps: image input, image preprocessing, feature extraction, and an ML model to interpret the image. We will talk about image formation and how computers see images. Then, we will quickly review image-processing techniques and extracting features.

Ready? Let's get started!

1.1 Computer vision

The core concept of any AI system is that it can perceive its environment and take actions based on its perceptions. Computer vision is concerned with the visual perception part: it is the science of perceiving and understanding the world through images and videos by constructing a physical model of the world so that an AI system can then take appropriate actions. For humans, vision is only one aspect of perception. We perceive the world through our sight, but also through sound, smell, and our other senses. It is similar with AI systems: vision is just one way to understand the world. Depending on the application you are building, you select the sensing device that best captures the world.
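The four pipeline steps just outlined (image input, preprocessing, feature extraction, and an ML model) can be sketched end to end in a few lines. The following is a minimal, hypothetical illustration in plain Python, not the book's code: the function names (`load_image`, `to_grayscale`, `extract_features`, `classify`), the tiny hardcoded "image," and the threshold "classifier" are all invented for this sketch, with the threshold standing in for where a real ML model would go.

```python
# A toy illustration of the four-step computer vision pipeline:
# input -> preprocessing -> feature extraction -> ML model (interpretation).
# Hypothetical sketch for intuition only; the "classifier" is just a
# brightness threshold standing in for a trained model.

def load_image():
    # Step 1 (image input): a tiny 2x2 "image" of (R, G, B) pixels.
    return [[(200, 200, 200), (180, 180, 180)],
            [(40, 40, 40), (60, 60, 60)]]

def to_grayscale(image):
    # Step 2 (preprocessing): luminance-weighted grayscale conversion,
    # the same kind of color-to-grayscale step chapter 1.5 discusses.
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in image]

def extract_features(gray):
    # Step 3 (feature extraction): one handcrafted feature, mean intensity.
    pixels = [p for row in gray for p in row]
    return {"mean_intensity": sum(pixels) / len(pixels)}

def classify(features):
    # Step 4 (interpretation): a stand-in "model" with a fixed threshold.
    return "bright scene" if features["mean_intensity"] > 127 else "dark scene"

image = load_image()
prediction = classify(extract_features(to_grayscale(image)))
print(prediction)  # prints "dark scene" (the 2x2 image averages to 120)
```

Later chapters replace the handcrafted feature and fixed threshold with features and decisions learned by a neural network, but the shape of the pipeline stays the same.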
1.1.1 What is visual perception?

Visual perception, at its most basic, is the act of observing patterns and objects through sight or visual input. With an autonomous vehicle, for example, visual perception means understanding the surrounding objects and their specific details, such as pedestrians, or whether there is a particular lane the vehicle needs to be centered in, and detecting traffic signs and understanding what they mean. That's why the word perception is part of the definition. We are not just looking to capture the surrounding environment. We are trying to build systems that can actually understand that environment through visual input.

1.1.2 Vision systems

In past decades, traditional image-processing techniques were considered CV systems, but that is not totally accurate. A machine processing an image is completely different from that machine understanding what's happening within the image, which is not a trivial task. Image processing is now just a piece of a bigger, more complex system that aims to interpret image content.

Human vision systems

At the highest level, vision systems are pretty much the same for humans, animals, insects, and most living organisms. They consist of a sensor or an eye to capture the image and a brain to process and interpret the image. The system then outputs a prediction of the image components based on the data extracted from the image (figure 1.1).

Let's see how the human vision system works. Suppose we want to interpret the image of dogs in figure 1.1. We look at it and directly understand that the image consists of a bunch of dogs (three, to be specific).
It comes pretty naturally to us to classify and detect objects in this image because we have been trained over the years to identify dogs.

[Figure 1.1 The human vision system uses the eye and brain to sense and interpret an image: the eye is the sensing device that captures images of the environment, and the brain is the interpreting device that understands the image content (here, "dogs" and "grass").]

Suppose someone shows you a picture of a dog for the first time. You definitely don't know what it is. Then they tell you that this is a dog. After a couple of experiments like this, you will have been trained to identify dogs. Now, in a follow-up exercise, they show you a picture of a horse. When you look at the image, your brain starts analyzing the object's features: hmmm, it has four legs, a long face, long ears. Could it be a dog? "Wrong: this is a horse," you're told. Then your brain adjusts some parameters in its algorithm to learn the differences between dogs and horses. Congratulations! You just trained your brain to classify dogs and horses. Can you add more animals to the equation, like cats, tigers, cheetahs, and so on? Definitely. You can train your brain to identify almost anything.

The same is true of computers. You can train machines to learn and identify objects, but humans are much more intuitive than machines. It takes only a few images for you to learn to identify most objects, whereas with machines it takes thousands or, in more complex cases, millions of image samples to learn to identify objects.

AI vision systems

Scientists were inspired by the human vision system and in recent years have done an amazing job of copying visual ability with machines.
To mimic the human vision system, we need the same two main components: a sensing device to mimic the function of the eye and a powerful algorithm to mimic the brain's function of interpreting and classifying image content (figure 1.2).

The ML perspective
Let's look at the previous example from the machine learning perspective. You learned to identify dogs by looking at examples of several dog-labeled images. This approach is called supervised learning: labeled data is data for which you already know the target answer. You were shown a sample image of a dog and told that it was a dog, and your brain learned to associate the features you saw with this label: dog. You were then shown a different object, a horse, and asked to identify it. At first, your brain thought it was a dog, because you hadn't seen horses before, and your brain confused horse features with dog features. When you were told that your prediction was wrong, your brain adjusted its parameters to learn horse features: "Yes, both have four legs, but the horse's legs are longer. Longer legs indicate a horse." We can run this experiment many times until the brain makes no mistakes. This is called training by trial and error.

1.1.3 Sensing devices
Vision systems are designed to fulfill a specific task. An important aspect of design is selecting the best sensing device to capture the surroundings of a specific environment, whether that is a camera, radar, X-ray, CT scan, Lidar, or a combination of devices that together provide the full scene needed to fulfill the task at hand.

Let's look at the autonomous vehicle (AV) example again. The main goal of the AV vision system is to allow the car to understand the environment around it and move from point A to point B safely and in a timely manner.
To fulfill this goal, vehicles are equipped with a combination of cameras and sensors that can detect 360 degrees of movement (pedestrians, cyclists, vehicles, roadwork, and other objects) from up to three football fields away. Here are some of the sensing devices usually used in self-driving cars to perceive the surrounding area:

- Lidar, a radar-like technique, uses invisible pulses of light to create a high-resolution 3D map of the surrounding area.
- Cameras can see street signs and road markings but cannot measure distance.
- Radar can measure distance and velocity but cannot see in fine detail.

Medical diagnosis applications use X-rays or CT scans as sensing devices, and agricultural vision systems may need yet another type of radar to capture the landscape. There are a variety of vision systems, each designed to perform a particular task. The first step in designing a vision system is to identify the task it is built for; this is something to keep in mind when designing end-to-end vision systems.

Recognizing images
Animals, humans, and insects all have eyes as sensing devices. But not all eyes have the same structure, output image quality, and resolution; they are tailored to the specific needs of the creature. Bees, for instance, and many other insects, have compound eyes that consist of multiple lenses (as many as 30,000 lenses in a single compound eye). Compound eyes have low resolution, which makes them not so good at recognizing objects at a far distance. But they are very sensitive to motion, which is essential for survival while flying at high speed. Bees don't need high-resolution pictures: their vision systems are built to allow them to pick up the smallest movements while flying fast. Compound eyes are low resolution but sensitive to motion.

Figure 1.2 The components of the computer vision system are a sensing device and an interpreting device.

1.1.4 Interpreting devices
Computer vision algorithms are typically employed as interpreting devices. The interpreter is the brain of the vision system. Its role is to take the output image from the sensing device and learn features and patterns to identify objects. So we need to build a brain. Simple! Scientists were inspired by how our brains work and tried to reverse engineer the central nervous system to get some insight into how to build an artificial brain. Thus, artificial neural networks (ANNs) were born (figure 1.3).

In figure 1.3, we can see an analogy between biological neurons and artificial systems: both contain a main processing element, a neuron, with input signals (x1, x2, ..., xn) and an output. The learning behavior of biological neurons inspired scientists to create a network of neurons that are connected to each other. Imitating how information is processed in the human brain, each artificial neuron fires a signal to all the neurons that it's connected to when enough of its input signals are activated. Thus, neurons have a very simple mechanism on the individual level (as you will see in the next chapter); but when you have millions of these neurons stacked in layers and connected together, with each neuron connected to thousands of other neurons, a learning behavior emerges. Building a multilayer neural network is called deep learning (figure 1.4).

DL methods learn representations through a sequence of transformations of data through layers of neurons. In this book, we will explore different DL architectures, such as ANNs and convolutional neural networks, and how they are used in CV applications.
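The firing mechanism just described can be sketched in a few lines of Python. This is a minimal illustration only: the weights, bias values, and step activation are made-up choices for the example, not the book's implementation.

```python
# A minimal sketch of an artificial neuron (hypothetical weights and bias):
# it "fires" (outputs 1) when enough weighted input accumulates.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if z > 0 else 0          # step activation: fire or stay silent

# Stacking neurons into connected layers is the "deep" in deep learning:
# each layer's outputs become the next layer's inputs.
def layer(inputs, weight_rows, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]
```

With the toy weights above, a neuron with bias -1.0 fires only when both of its two inputs are active, which is exactly the "enough of its input signals are activated" behavior described in the text.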
Figure 1.3 The similarities between biological neurons and artificial systems: dendrites bring information in from other neurons and synapses pass the output onward, while the artificial neuron takes inputs x1, x2, ..., xn and produces an output f(x).

Figure 1.4 Deep learning involves layers of neurons in a network: input, layers of neurons, output.

CAN MACHINE LEARNING ACHIEVE BETTER PERFORMANCE THAN THE HUMAN BRAIN?
Well, if you had asked me this question 10 years ago, I would've probably said no, machines cannot surpass the accuracy of a human. But let's take a look at the following two scenarios:

- Suppose you were given a book of 10,000 dog images, classified by breed, and you were asked to learn the properties of each breed. How long would it take you to study the 130 breeds in 10,000 images? And if you were given a test of 100 dog images and asked to label them based on what you learned, how many of the 100 would you get right? A neural network that is trained in a couple of hours can achieve more than 95% accuracy.
- On the creation side, a neural network can study the patterns in the strokes, colors, and shading of a particular piece of art. Based on this analysis, it can then transfer the style from the original artwork into a new image and create a new piece of original art within a few seconds.

Recent AI and DL advances have allowed machines to surpass human visual ability in many image classification and object detection applications, and this capacity is rapidly expanding to many other applications. But don't take my word for it. In the next section, we'll discuss some of the most popular CV applications using DL technology.
1.2 Applications of computer vision
Computers became able to recognize human faces in images decades ago, and now AI systems are rivaling the ability of humans to classify objects in photos and videos. Thanks to the dramatic evolution in both computational power and the amount of data available, AI and DL have managed to achieve superhuman performance on many complex visual perception tasks like image search and captioning, image and video classification, and object detection. Moreover, deep neural networks are not restricted to CV tasks: they are also successful at natural language processing and voice user interface tasks. In this book, we'll focus on visual applications that are applied in CV tasks.

DL is used in many computer vision applications to recognize objects and their behavior. In this section, I'm not going to attempt to list all the CV applications that are out there; I would need an entire book for that. Instead, I'll give you a bird's-eye view of some of the most popular DL algorithms and their possible applications across different industries, among them autonomous cars, drones, robots, in-store cameras, and medical diagnostic scanners that can detect lung cancer in early stages.

1.2.1 Image classification
Image classification is the task of assigning to an image a label from a predefined set of categories. A convolutional neural network is a neural network type that truly shines in processing and classifying images in many different applications:

Lung cancer diagnosis — Lung cancer is a growing problem. The main reason lung cancer is very dangerous is that when it is diagnosed, it is usually in the middle or late stages. When diagnosing lung cancer, doctors typically use their eyes to examine CT scan images, looking for small nodules in the lungs.
In the early stages, the nodules are usually very small and hard to spot. Several CV companies decided to tackle this challenge using DL technology. Almost every lung cancer starts as a small nodule, and these nodules appear in a variety of shapes that doctors take years to learn to recognize. Doctors are very good at identifying mid- and large-size nodules, such as 6–10 mm. But when nodules are 4 mm or smaller, doctors sometimes have difficulty identifying them. DL networks, specifically CNNs, are now able to learn these features automatically from X-ray and CT scan images and detect small nodules early, before they become deadly (figure 1.5).

Traffic sign recognition — Traditionally, standard CV methods were employed to detect and classify traffic signs, but this approach required time-consuming manual work to handcraft important features in images. Instead, by applying DL to this problem, we can create a model that reliably classifies traffic signs, learning to identify the most appropriate features for this problem by itself (figure 1.6).

NOTE Increasing numbers of image classification tasks are being solved with convolutional neural networks. Due to their high recognition rate and fast execution, CNNs have enhanced most CV tasks, both pre-existing and new. Just like the cancer diagnosis and traffic sign examples, you can feed tens or hundreds of thousands of images into a CNN to label them into as many classes as you want. Other image classification examples include identifying people and objects; classifying different animals (like cats versus dogs versus horses) and different breeds of animals; identifying types of land suitable for agriculture; and so on. In short, if you have a set of labeled images, convolutional networks can classify them into a set of predefined classes.

Figure 1.5 Vision systems are now able to learn patterns in X-ray and CT scan images to identify tumors in earlier stages of development.
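To make "assigning a label from a predefined set of categories" concrete, here is a toy classifier. It is emphatically not a CNN: the 2D feature space, the class centroids, and their values are invented purely for illustration of what a classifier's input/output contract looks like.

```python
import math

# Hypothetical class "centroids" in a made-up 2D feature space.
centroids = {
    "cat":   (0.2, 0.9),
    "dog":   (0.8, 0.7),
    "horse": (0.9, 0.1),
}

def classify(feature_vector):
    """Assign the label whose centroid is nearest to the feature vector."""
    return min(centroids,
               key=lambda label: math.dist(feature_vector, centroids[label]))
```

A real CNN learns both the feature space and the decision boundaries from thousands of labeled images; this sketch only shows the final step of mapping a feature vector to one of a fixed set of class labels.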
1.2.2 Object detection and localization
Image classification problems are the most basic applications for CNNs. In these problems, each image contains only one object, and our task is to identify it. But if we aim to reach human levels of understanding, we have to add complexity to these networks so they can recognize multiple objects and their locations in an image. To do that, we can build object detection systems like YOLO (you only look once), SSD (single-shot detector), and Faster R-CNN, which not only classify images but also can locate and detect each object in images that contain multiple objects. These DL systems can look at an image, break it up into smaller regions, and label each region with a class, so that a variable number of objects in a given image can be localized and labeled (figure 1.7). You can imagine that such a task is a basic prerequisite for applications like autonomous systems.

1.2.3 Generating art (style transfer)
Neural style transfer, one of the most interesting CV applications, is used to transfer the style from one image to another. The basic idea of style transfer is this: you take one image, say, of a city, and then apply a style of art to that image, say, The Starry Night (by Vincent Van Gogh), and output the same city from the original image, but looking as though it was painted by Van Gogh (figure 1.8).

This is actually a neat application. The astonishing thing, if you know any painters, is that it can take days or even weeks to finish a painting, and yet here is an application that can paint a new image inspired by an existing style in a matter of seconds.

Figure 1.6 Vision systems can detect traffic signs with very high performance.
1.2.4 Creating images
Although the earlier examples are truly impressive CV applications of AI, this is where I see the real magic happening: the magic of creation. In 2014, Ian Goodfellow invented a new DL model that can imagine new things: generative adversarial networks (GANs). The name makes them sound a little intimidating, but I promise you that they are not. A GAN is an evolved CNN architecture that is considered a major advancement in DL, so when you understand CNNs, GANs will make a lot more sense to you.

Figure 1.7 Deep learning systems can detect and label objects in an image (here, a bicycle, clouds, and a pedestrian).

Figure 1.8 Style transfer from Van Gogh's The Starry Night onto the original image, producing a piece of art that feels as though it was created by the original artist.

GANs are sophisticated DL models that generate stunningly accurate synthesized images of objects, people, and places, among other things. If you give them a set of images, they can make entirely new, realistic-looking images. For example, StackGAN is one of the GAN architecture variations that can use a textual description of an object to generate a high-resolution image of the object matching that description. This is not just running an image search on a database: these "photos" have never been seen before and are totally imaginary (figure 1.9).

The GAN is one of the most promising advancements in machine learning in recent years. Research into GANs is new, and the results are overwhelmingly promising. Most of the applications of GANs so far have been for images. But it makes you wonder: if machines are given the power of imagination to create pictures, what else can they create?
In the future, will your favorite movies, music, and maybe even books be created by computers? The ability to synthesize one data type (text) into another (image) will eventually allow us to create all sorts of entertainment using only detailed text descriptions.

GANs create artwork
In October 2018, an AI-created painting called The Portrait of Edmond Belamy sold for $432,500. The artwork features a fictional person named Edmond de Belamy, possibly French and, to judge by his dark frock coat and plain white collar, a man of the church.

Figure 1.9 Generative adversarial networks (GANs) can create new, "made-up" images from a set of existing images; here, birds generated from text descriptions such as "This small blue bird has a short, pointy beak and brown on its wings" and "This bird is completely red with black wings and a pointy beak."

1.2.5 Face recognition
Face recognition (FR) allows us to exactly identify or tag an image of a person. Day-to-day applications include searching for celebrities on the web and auto-tagging friends and family in images. Face recognition is a form of fine-grained classification. The famous Handbook of Face Recognition (Li et al., Springer, 2011) categorizes two modes of an FR system:

- Face identification — Face identification involves one-to-many matches that compare a query face image against all the template images in the database to determine the identity of the query face. Another face recognition scenario involves a watchlist check by city authorities, where a query face is matched to a list of suspects (one-to-few matches).
- Face verification — Face verification involves a one-to-one match that compares a query face image against a template face image whose identity is being claimed (figure 1.10).

1.2.6 Image recommendation system
In this task, a user seeks to find similar images with respect to a given query image.
Shopping websites provide product suggestions (via images) based on the selection of a particular product, for example, showing a variety of shoes similar to those the user selected. An example of an apparel search is shown in figure 1.11.

GANs create artwork (continued)
The artwork was created by a team of three 25-year-old French students using GANs. The network was trained on a dataset of 15,000 portraits painted between the fourteenth and twentieth centuries, and then it created one of its own. The team printed the image, framed it, and signed it with part of a GAN algorithm.

Figure 1.10 Example of face verification (left) and face identification (right)

Figure 1.11 Apparel search. The leftmost image in each row is the query/clicked image, and the subsequent columns show similar apparel. (Source: Liu et al., 2016.)

1.3 Computer vision pipeline: The big picture
Okay, now that I have your attention, let's dig one level deeper into CV systems. Remember that earlier in this chapter, we discussed how vision systems are composed of two main components: sensing devices and interpreting devices (figure 1.12 offers a reminder). In this section, we will take a look at the pipeline the interpreting-device component uses to process and understand images. Applications of CV vary, but a typical vision system uses a sequence of distinct steps to process and analyze image data. These steps are referred to as a computer vision pipeline.
Many vision applications follow the flow of acquiring images and data, processing that data, performing some analysis and recognition steps, and then finally making a prediction based on the extracted information (figure 1.13). Let's apply the pipeline in figure 1.13 to an image classifier example. Suppose we have an image of a motorcycle, and we want the model to predict the probability of the object from the following classes: motorcycle, car, and dog (see figure 1.14).

Figure 1.12 Focusing on the interpreting device in computer vision systems

Figure 1.13 The computer vision pipeline: (1) input data (images, or videos as image frames); (2) preprocessing to get the data ready (standardizing images, color transformation, and more); (3) feature extraction to find distinguishing information about the image; (4) an ML model that learns from the extracted features to predict and classify objects.

DEFINITIONS An image classifier is an algorithm that takes in an image as input and outputs a label or "class" that identifies that image. A class (also called a category) in machine learning is the output category of your data.

Here is how the image flows through the classification pipeline:

1. A computer receives visual input from an imaging device like a camera. This input is typically captured as an image or a sequence of images forming a video.
2. Each image is then sent through some preprocessing steps whose purpose is to standardize the images. Common preprocessing steps include resizing an image, blurring, rotating, changing its shape, or transforming the image from one color space to another, such as from color to grayscale.
Only by standardizing the images, for example making them the same size, can you then compare them and further analyze them.

3. We extract features. Features are what help us define objects, and they are usually information about object shape or color. For example, some features that distinguish a motorcycle are the shape of the wheels, the headlights, the mudguards, and so on. The output of this process is a feature vector, a list of unique shapes that identify the object.
4. The features are fed into a classification model. This step looks at the feature vector from the previous step and predicts the class of the image. Pretend that you are the classifier model for a few minutes, and let's go through the classification process. You look at the list of features in the feature vector one by one and try to determine what's in the image:
   a. First you see a wheel feature; could this be a car, a motorcycle, or a dog? Clearly it is not a dog, because dogs don't have wheels (at least, normal dogs, not robots). Then this could be an image of a car or a motorcycle.
   b. You move on to the next feature, the headlights. There is a higher probability that this is a motorcycle than a car.
   c. The next feature is rear mudguards; again, there is a higher probability that it is a motorcycle.
   d. The object has only two wheels; this is closer to a motorcycle.
   e. And you keep going through all the features, like the body shape, pedal, and so on, until you arrive at a best guess of the object in the image.

The output of this process is the probability of each class.

Figure 1.14 Using the machine learning model to predict the probability of the motorcycle object from the motorcycle, car, and dog classes: P(motorcycle) = 0.85, P(car) = 0.14, P(dog) = 0.01.
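The four pipeline stages can be sketched as plain functions. This is a hard-coded toy, not a real model: the feature names and class probabilities simply mirror the motorcycle example from the text.

```python
# Toy walkthrough of the four CV pipeline stages (illustrative values only).
def preprocess(image):
    return image                      # stage 2: standardize (a no-op here)

def extract_features(image):
    # stage 3: a feature vector of distinguishing shapes
    return ["wheel", "headlights", "rear mudguard", "two wheels"]

def classify(features):
    # stage 4: score each predefined class against the features
    return {"motorcycle": 0.85, "car": 0.14, "dog": 0.01}

def pipeline(image):                  # stage 1 is the image acquisition itself
    return classify(extract_features(preprocess(image)))
```

In a real system, each stage is learned or engineered rather than hard-coded, but the contract is the same: an image goes in, and a probability per predefined class comes out.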
As you can see in our example, the dog has the lowest probability, 1%, whereas there is an 85% probability that this is a motorcycle. The model was able to predict the right class with the highest probability, but it is still a little confused about distinguishing between cars and motorcycles: it predicted that there is a 14% chance this is an image of a car. Since we know that it is a motorcycle, we can say that our ML classification algorithm is 85% accurate. Not bad! To improve this accuracy, we may need to do more of step 1 (acquire more training images), step 2 (more preprocessing to remove noise), step 3 (extract better features), or step 4 (change the classifier algorithm and tune some hyperparameters), or even allow more training time. The many different approaches we can take to improve the performance of our model all lie in one or more of the pipeline steps.

That was the big picture of how images flow through the CV pipeline. Next, we'll zoom in one level deeper on each of the pipeline steps.

1.4 Image input
In CV applications, we deal with image or video data. Let's talk about grayscale and color images for now; in later chapters, we will talk about videos, since videos are just stacked sequential frames of images.

1.4.1 Image as functions
An image can be represented as a function of two variables x and y, which define a two-dimensional area. A digital image is made of a grid of pixels. The pixel is the raw building block of an image. Every image consists of a set of pixels whose values represent the intensity of light that appears in a given place in the image. Let's take a look at the motorcycle example again after applying the pixel grid to it (figure 1.15).

Figure 1.15 A 32 × 16 grayscale image: F(12, 13) = 255 is a white pixel, F(18, 9) = 190 is a gray pixel, and F(20, 7) = 0 is a black pixel. Images consist of raw building blocks called pixels.
The pixel values represent the intensity of light that appears in a given place in the image.

The image in figure 1.15 has a size of 32 × 16. This means the dimensions of the image are 32 pixels wide and 16 pixels tall. The x-axis goes from 0 to 31, and the y-axis from 0 to 15. Overall, the image has 512 (32 × 16) pixels. In this grayscale image, each pixel contains a value that represents the intensity of light at that specific pixel. The pixel values range from 0 to 255. Since the pixel value represents the intensity of light, the value 0 represents very dark pixels (black), 255 is very bright (white), and the values in between represent the intensity on the grayscale.

You can see that the image coordinate system is similar to the Cartesian coordinate system: images are two-dimensional and lie on the x-y plane, with the origin (0, 0) at the top left of the image. To represent a specific pixel, we use the notation F as a function and x, y as the location of the pixel in x- and y-coordinates. For example, the pixel located at x = 12 and y = 13 is white; this is represented by the function F(12, 13) = 255. Similarly, the pixel (20, 7) that lies on the front of the motorcycle is black, represented as F(20, 7) = 0.

Grayscale => F(x, y) gives the intensity at position (x, y)

That was for grayscale images. How about color images? In color images, instead of representing the value of the pixel by just one number, the value is represented by three numbers, one for the intensity of each color in the pixel. In an RGB system, for example, the value of the pixel is represented by three numbers: the intensity of red, the intensity of green, and the intensity of blue. There are other color systems for images, like HSV and Lab; all follow the same concept when representing the pixel value (more on color images soon).
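A small NumPy sketch makes the F(x, y) notation concrete. The array sizes and pixel values mirror figure 1.15; the indexing convention img[y, x] (row first) is an assumption of this sketch, since NumPy indexes rows before columns.

```python
import numpy as np

# A 32-wide, 16-tall grayscale image as a 2D array (height x width), all black.
img = np.zeros((16, 32), dtype=np.uint8)
img[13, 12] = 255                          # F(12, 13) = 255 -> white pixel
img[9, 18] = 190                           # F(18, 9) = 190 -> gray pixel

def F(x, y):
    """Intensity of the pixel at (x, y); note the [y, x] array order."""
    return int(img[y, x])

# Operating on the image function yields a new image G(x, y); for
# example, halving every intensity darkens the whole image.
G = (0.5 * img).astype(np.uint8)
```

Transformations like the darkening step apply one equation to every pixel at once, which is exactly the image-as-function view described in the text.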
Here is the function representing color images in the RGB system:

Color image in RGB => F(x, y) = [ red(x, y), green(x, y), blue(x, y) ]

Thinking of an image as a function is very useful in image processing. We can treat an image as a function F(x, y) and operate on it mathematically to transform it into a new image function G(x, y). Let's take a look at the image transformation examples in table 1.1.

Table 1.1 Image transformation example functions

Application                                                        Transformation
Darken the image.                                                  G(x, y) = 0.5 * F(x, y)
Brighten the image.                                                G(x, y) = 2 * F(x, y)
Move an object down 150 pixels.                                    G(x, y) = F(x, y + 150)
Remove the gray in an image to transform it into black and white.  G(x, y) = { 0 if F(x, y) < 130, 255 otherwise }

1.4.2 How computers see images
When we look at an image, we see objects, landscape, colors, and so on. That's not the case with computers. Consider figure 1.16. Your human brain can process it and immediately know that it is a picture of a motorcycle. To a computer, the image looks like a 2D matrix of pixel values, which represent intensities across the color spectrum. There is no context here, just a massive pile of data.

The image in figure 1.16 is of size 24 × 24. This size indicates the width and height of the image: there are 24 pixels horizontally and 24 vertically, for a total of 576 (24 × 24) pixels. If the image is 700 × 500, then the dimensionality of the matrix will be (700, 500), where each element in the matrix represents the intensity of brightness in that pixel. Zero represents black, and 255 represents white.

1.4.3 Color images
In grayscale images, each pixel represents the intensity of only one color, whereas in the standard RGB system, color images have three channels (red, green, and blue).
In other words, color images are represented by three matrices: one represents the intensity of red in each pixel, one represents green, and one represents blue (figure 1.17).

Figure 1.16 A computer sees images as matrices of values. The values represent the intensity of the pixels across the color spectrum; for example, grayscale pixel values range from 0 for black to 255 for white.

Figure 1.17 Color images are represented by red, green, and blue channels, and matrices can be used to indicate those colors' intensity.

As you can see in figure 1.17, the color image is composed of three channels: red, green, and blue. Now the question is, how do computers see this image? Again, they see a matrix; but unlike grayscale images, where we had only one channel, here we have three matrices stacked on top of each other, which is why it's a 3D matrix. The dimensionality of a 700 × 700 color image is (700, 700, 3). Let's say the first matrix represents the red channel; then each element of that matrix represents the intensity of red in that pixel, and likewise with green and blue. Each pixel in a color image has three numbers (0 to 255) associated with it, representing the intensity of red, green, and blue in that particular pixel.

If we take the pixel (0, 0) as an example, we will see that it represents the top-left pixel of the image of green grass: F(0, 0) = [11, 102, 35]. When we view this pixel in the color image, it looks like figure 1.18. The example in figure 1.19 shows some shades of the color green and their RGB values.

Figure 1.18 An image of green grass is actually made of three colors of varying intensity: red 11 + green 102 + blue 35 = forest green (11, 102, 35).
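A short NumPy sketch of the same idea. The 2 × 2 image is invented for the example; only the (0, 0) forest-green pixel value comes from the text.

```python
import numpy as np

# A tiny 2x2 RGB image: shape (height, width, 3), one matrix per channel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [11, 102, 35]                 # forest-green grass pixel

r, g, b = (int(v) for v in img[0, 0])     # F(0, 0) = [red, green, blue]
red_channel = img[:, :, 0]                # the red-intensity matrix

# The same three intensities written as a hex color code.
hex_code = "#{:02X}{:02X}{:02X}".format(r, g, b)
```

Slicing out `img[:, :, 0]` gives exactly the "first matrix represents the red channel" view described above, and the hex code matches the forest-green swatch in figure 1.19.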
Figure 1.19 Different shades of green mean different intensities of the three image colors (red, green, blue):

- Forest green: HEX #0B6623, RGB (11, 102, 35)
- Olive green: HEX #708238, RGB (112, 130, 56)
- Jungle green: HEX #29AB87, RGB (41, 171, 135)
- Mint green: HEX #98FB98, RGB (152, 251, 152)
- Lime green: HEX #C7EA46, RGB (199, 234, 70)
- Jade green: HEX #00A86B, RGB (0, 168, 107)
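The three-channel representation described above can be sketched with NumPy (the library the book's later code samples use). The 700 × 700 size and the forest-green pixel value come from the chapter's own example; building the image with `np.zeros` is just an illustrative choice:

```python
import numpy as np

# A 700 x 700 color image is a 3D matrix of shape (height, width, 3):
# one channel each for red, green, and blue, with intensities 0-255.
image = np.zeros((700, 700, 3), dtype=np.uint8)

# Set the top-left pixel (0, 0) to forest green: R=11, G=102, B=35.
image[0, 0] = [11, 102, 35]

print(image.shape)    # (700, 700, 3)
print(image[0, 0])    # the [R, G, B] values at pixel (0, 0)
```

Indexing `image[0, 0]` returns the three intensity values for that pixel, matching the chapter's F(0, 0) = [11, 102, 35] example.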
We can’t write a unique algorithm for each of the conditions in which an image is taken; thus, when we acquire an image, we convert it into a form that allows a general algorithm to solve it. The following subsections describe some data-preprocessing techniques.

1.5.1 Converting color images to grayscale to reduce computation complexity

Sometimes you will find it useful to remove unnecessary information from your images to reduce space or computational complexity. For example, suppose you want to convert your colored images to grayscale, because for many objects, color is not

How do computers see color?
Computers see an image as matrices. Grayscale images have one channel (gray); thus, we can represent grayscale images in a 2D matrix, where each element represents the intensity of brightness in that particular pixel. Remember, 0 means black and 255 means white. Color images, in contrast, have three channels: red, green, and blue. We can represent color images in a 3D matrix where the depth is three.

We’ve also seen how images can be treated as functions of space. This concept allows us to operate on images mathematically and change or extract information from them. Treating images as functions is the basis of many image-processing techniques, such as converting color to grayscale or scaling an image. Each of these steps applies mathematical equations to transform an image pixel by pixel:

Grayscale: f(x, y) gives the intensity at position (x, y)
Color image: f(x, y) = [red(x, y), green(x, y), blue(x, y)]
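The functional view in the sidebar can be sketched in code. A common way to compute a grayscale intensity from the three channels is a weighted average; the luma weights below (0.299, 0.587, 0.114) are one standard choice (ITU-R BT.601) assumed here for illustration, not something this chapter prescribes:

```python
import numpy as np

def to_grayscale(color_image):
    """Convert an (H, W, 3) RGB image into an (H, W) grayscale image.

    Uses the common ITU-R BT.601 luma weights (an assumption here;
    a plain average of the three channels is another frequent choice).
    """
    r = color_image[:, :, 0].astype(float)
    g = color_image[:, :, 1].astype(float)
    b = color_image[:, :, 2].astype(float)
    return 0.299 * r + 0.587 * g + 0.114 * b

# A 2 x 2 toy color image: pure red, pure green, pure blue, and white.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

gray = to_grayscale(img)
print(gray.shape)   # (2, 2) -- one channel instead of three
```

Note how the output drops from three channels to one: each pixel's position (x, y) now maps to a single intensity, exactly the f(x, y) form above.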
Remember that color images are represented in three channels, which means that converting them to grayscale reduces the amount of pixel data that needs to be processed (figure 1.20). In this example, you can see how patterns of brightness and darkness (intensity) can be used to define the shape and characteristics of many objects. However, in other applications, color is important to define certain objects, like skin cancer detection, which relies heavily on skin color (red rashes).

Standardizing images—As you will see in chapter 3, one important constraint that exists in some ML algorithms, such as CNNs, is the need to resize the images in your dataset to unified dimensions. This implies that your images must be preprocessed and scaled to have identical widths and heights before being fed to the learning algorithm.

Data augmentation—Another common preprocessing technique involves augmenting the existing dataset with modified versions of the existing images. Scaling, rotations, and other affine transformations are typically used to enlarge your dataset and expose the neural network to a wide variety of variations of

Figure 1.20 Converting color images to grayscale reduces the amount of pixel data that needs to be processed. This can be a good approach for applications that do not rely heavily on the color information lost in the conversion.
In short, any adjustments that you need to apply to your dataset are part of preprocessing. You will select

When is color important?
Converting an image to grayscale might not be a good decision for some problems. There are a number of applications for which color is very important: for example, building a diagnostic system to identify red skin rashes in medical images. This application relies heavily on the intensity of the red color in the skin. Removing colors from the image will make it harder to solve this problem. In general, color images provide very helpful information in many medical applications.

Another example of the importance of color in images is lane-detection applications in a self-driving car, where the car has to identify the difference between yellow and white lines, because they are treated differently. Grayscale images do not provide enough information to distinguish between the yellow and white lines: in grayscale, a yellow line and a white line look alike, so grayscale-based image processors cannot differentiate between them.

The rule of thumb to identify the importance of colors in your problem is to look at the image with the human eye. If you are able to identify the object you are looking for in a gray image, then you probably have enough information to feed to your model. If not, then you definitely need more information (colors) for your model. The same rule can be applied to most of the other preprocessing techniques that we will discuss.
No free lunch theorem
This is a phrase that was introduced by David Wolpert and William Macready in “No Free Lunch Theorems for Optimization” (IEEE Transactions on Evolutionary Computation 1, 67). You will often hear this said when a team is working on an ML project. It means that no one prescribed recipe fits all models. When working on ML projects, you will need to make many choices like building your neural network architecture, tuning hyperparameters, and applying the appropriate data preprocessing techniques. While there are some rule-of-thumb approaches to tackle certain problems, there is really no single recipe that is guaranteed to work well in all situations.

Figure 1.21 Image-augmentation techniques (flip/rotate, de-texturize, de-colorize, edge enhancement, salient edge maps, and so on) create modified versions of the input image to provide more examples for the ML model to learn from.
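A minimal sketch of the flip/rotate augmentations from figure 1.21, using plain NumPy array operations. Real pipelines typically use a dedicated library and add scaling, shifts, and other affine transforms; this toy version only shows the idea:

```python
import numpy as np

def augment(image):
    """Return a few modified copies of an image.

    A minimal sketch of figure 1.21's flip/rotate augmentations:
    horizontal flip, vertical flip, and a 90-degree rotation.
    """
    return [
        np.fliplr(image),   # mirror left-right
        np.flipud(image),   # mirror top-bottom
        np.rot90(image),    # rotate 90 degrees counterclockwise
    ]

# A toy 4 x 4 "image" of pixel intensities.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)

for variant in augment(img):
    print(variant.shape)    # each variant is still 4 x 4
```

Each variant is a new training example: same object, different orientation, which is exactly the variety the text says the network should be exposed to.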
1.6.1 What is a feature in computer vision?

In CV, a feature is a measurable piece of data in your image that is unique to that specific object. It may be a distinct color or a specific shape such as a line, edge, or image segment. A good feature is used to distinguish objects from one another. For example, if I give you a feature like a wheel and ask you to guess whether an object is a motorcycle or a dog, what would your guess be? A motorcycle. Correct! In this case, the wheel is a strong feature that clearly distinguishes between motorcycles and dogs. However, if I give you the same feature (a wheel) and ask you to guess whether an object is a bicycle or a motorcycle, this feature is not strong enough to distinguish between those objects. You need to look for more features, like a mirror, license plate, or maybe a pedal, that collectively describe an object.

In ML projects, we want to transform the raw data (image) into a feature vector to show to our learning algorithm, which can learn the characteristics of the object (figure 1.22). In the figure, we feed the raw input image of a motorcycle into a feature extraction algorithm. Let’s treat the feature extraction algorithm as a black box for now; we will come back to it. For now, we need to know that the extraction algorithm produces a vector that contains a list of features. This feature vector is a 1D array that makes a robust representation of the object.

You must make certain assumptions about the dataset and the problem you are trying to solve. For some datasets, it is best to convert the colored images to grayscale, while for other datasets, you might need to keep or adjust the color images. The good news is that, unlike traditional machine learning, DL algorithms require minimum data preprocessing because, as you will see soon, neural networks do most of the heavy lifting in processing an image and extracting features.
1.6.2 What makes a good (useful) feature?

Machine learning models are only as good as the features you provide. That means coming up with good features is an important job in building ML models. But what makes a good feature? And how can you tell?

Feature generalizability
It is important to point out that figure 1.22 reflects features extracted from just one motorcycle. A very important characteristic of a feature is repeatability. The feature should be able to detect motorcycles in general, not just this specific one. So in real-world problems, a feature is not an exact copy of a piece of the input image. If we take the wheel feature, for example, the feature doesn’t look exactly like the wheel of one particular motorcycle. Instead, it looks like a circular shape with some patterns that identify wheels in all images in the training dataset. When the feature extractor sees thousands of images of motorcycles, it recognizes patterns that define wheels in general, regardless of where they appear in the image and what type of motorcycle they are part of.

Figure 1.22 Example input image fed to a feature-extraction algorithm to find patterns within the image and create the feature vector. A feature learned after looking at thousands of images captures general patterns, unlike a feature copied after looking at one image: features need to detect general patterns.
There is a lot of variation in the dog world. So let’s evaluate this feature across different values in both breeds’ populations. Let’s visualize the height distribution on a toy example in the histogram in figure 1.24. From the histogram, we can see that if the dog’s height is 20 inches or less, there is more than an 80% probability that the dog is a Labrador. On the other side of the histogram, if we look at dogs that are taller than 30 inches, we can be pretty confident

Figure 1.23 Example of Greyhound and Labrador dogs

Figure 1.24 A visualization of the height distribution (number of dogs versus height in inches) on a toy dogs dataset
TIP Similar to what we did earlier with color conversion (color versus grayscale), to figure out which features you should use for a specific problem, do a thought experiment. Pretend you are the classifier. If you want to differentiate between Greyhounds and Labradors, what information do you need to know? You might ask about the hair length, the body size, the color, and so on.

For another quick example of a non-useful feature to drive this idea home, let’s look at dog eye color. For this toy example, imagine that we have only two eye colors, blue and brown. Figure 1.25 shows what a histogram might look like for this example.

Figure 1.25 A visualization of the eye color distribution (blue eyes versus brown eyes for each breed) in a toy dogs dataset
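The thought experiment above can be sketched as a toy rule-based classifier: height carries information at the extremes, while eye color carries none and is ignored entirely. The 20- and 30-inch thresholds are the toy values from the histogram discussion:

```python
def classify(height_inches, eye_color):
    """Toy rule-based guess over two candidate features.

    Height is informative outside the ambiguous 20-30 inch middle
    range; eye_color is accepted but deliberately ignored, because
    its distribution is roughly 50/50 for both breeds.
    """
    if height_inches <= 20:
        return "likely Labrador"
    if height_inches >= 30:
        return "likely Greyhound"
    # 20 < height < 30: height alone cannot decide, and eye color
    # adds no information, so we would need other features here.
    return "need more features"

print(classify(18, "brown"))   # likely Labrador
print(classify(32, "blue"))    # likely Greyhound
print(classify(25, "brown"))   # need more features
```

The middle branch is exactly why real ML projects use multiple features: no single feature classifies all objects on its own.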
Some of the handcrafted feature sets are these:

- Histogram of oriented gradients (HOG)
- Haar Cascades
- Scale-invariant feature transform (SIFT)
- Speeded-Up Robust Features (SURF)

What makes a good feature for object recognition?
A good feature will help us recognize an object in all the ways it may appear. Characteristics of a good feature follow:

- Identifiable
- Easily tracked and compared
- Consistent across different scales, lighting conditions, and viewing angles
- Still visible in noisy images or when only part of an object is visible

Figure 1.26 Traditional machine learning algorithms require handcrafted feature extraction: input → handcrafted feature extraction → learning algorithm (SVM or AdaBoost) → output (car / not a car).
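To give a flavor of handcrafted features, here is a much-simplified sketch of the idea behind HOG: pool gradient orientations, weighted by gradient magnitude, into a histogram. Real HOG implementations (for example, in OpenCV or scikit-image) add cells, block normalization, and careful binning; this toy version is only illustrative:

```python
import numpy as np

def orientation_histogram(gray, bins=9):
    """A much-simplified sketch of the idea behind HOG features.

    Computes per-pixel gradient orientations and pools them into a
    small magnitude-weighted histogram. Not a real HOG descriptor.
    """
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx) % np.pi      # orientations in [0, pi)
    hist, _ = np.histogram(angle, bins=bins,
                           range=(0, np.pi), weights=magnitude)
    return hist / (hist.sum() + 1e-9)       # normalized feature vector

# A toy image with a vertical edge: all gradients point horizontally.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
print(orientation_histogram(img).shape)     # (9,)
```

The output is a short 1D feature vector summarizing edge directions, which is the kind of handcrafted representation that would then be fed to an SVM or AdaBoost classifier.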
The patterns with the highest appearance frequency will have higher weights and are considered more useful features. Features with the lowest weights will have very little impact on the output. This learning process will be discussed in deeper detail in the next chapter.

Figure 1.27 A deep neural network passes the input image through its layers to automatically extract features and classify the object. No handcrafted features are needed. A neuron weights the different features (x1, x2, x3, …, xn, with weights w1, w2, w3, w4, …) to reflect their importance in identifying the object.
1.7 Classifier learning algorithm

Here is what we have discussed so far regarding the classifier pipeline:

- Input image—We’ve seen how images are represented as functions, and that computers see images as a 2D matrix for grayscale images and a 3D matrix (three channels) for colored images.
- Image preprocessing—We discussed some image-preprocessing techniques to clean up our dataset and make it ready as input to the ML algorithm.
- Feature extraction—We converted our large dataset of images into a vector of useful features that uniquely describe the objects in the image.

Figure 1.28 Extracting and consolidating features from thousands of images into one feature vector to be fed to the classifier
The last layer of the neural network usually acts as the classifier that outputs the class label.

Figure 1.29 Input images pass through the layers of a neural network so it can learn features layer by layer. The feature extraction layers detect patterns in the image (early layers detect simple patterns, then later layers detect patterns within patterns, and so on, until they create the feature vector), and the classification layer looks at the feature vector extracted by the previous layers and fires the upper node if it sees the features of a motorcycle or the lower node if it doesn’t.

Summary
- Both human and machine vision systems contain two basic components: a sensing device and an interpreting device.
- The interpreting process consists of four steps: input the data, preprocess it, do feature extraction, and produce a machine learning model.
We also discussed that in traditional ML algorithms, we manually extract features that produce a vector of features to be classified by the learning algorithm, whereas in deep learning (DL), neural networks act as both the feature extractor and the classifier. A neural network automatically recognizes patterns and extracts features from the image and classifies them into labels (figure 2.1). In this chapter, we will take a short pause from the CV context to open the DL algorithm box from figure 2.1. We will dive deeper into how neural networks learn features and make predictions. Then, in the next chapter, we will come

This chapter covers
- Understanding perceptrons and multilayer perceptrons
- Working with the different types of activation functions
- Training networks with feedforward, error functions, and error optimization
- Performing backpropagation
You will see that building a neural network requires making necessary design decisions: choosing an optimizer, cost function, and activation functions, as well as designing the architecture of the network, including how many layers should be connected to each other and how many neurons should be in each layer. Ready? Let’s get started!

2.1 Understanding perceptrons

Let’s take a look at the artificial neural network (ANN) diagram from chapter 1 (figure 2.2). You can see that ANNs consist of many neurons that are structured in layers to perform some kind of calculations and predict an output. This architecture can be

Figure 2.1 Traditional ML algorithms require manual feature extraction (input → feature extractor → features vector → traditional ML algorithm → output). A deep neural network automatically extracts features by passing the input image through its layers (input → deep learning algorithm → output).
A biological neuron receives electrical signals from its dendrites, modulates the electrical signals in various amounts, and then fires an output signal through its synapses only when the total strength of the input signals exceeds a certain threshold. The output is then fed to another neuron, and so forth. To model the biological neuron phenomenon, the artificial neuron performs two consecutive functions: it calculates the weighted sum of the inputs to represent the total strength of the input signals, and it applies a step function to the result to determine whether to fire the output 1 if the signal exceeds a certain threshold or 0 if the signal doesn’t exceed the threshold. As we discussed in chapter 1, not all input features are equally useful or important. To represent that, each input node is assigned a weight value, called its connection weight, to reflect its importance.

Figure 2.2 An artificial neural network consists of layers of nodes, or neurons, connected with edges.
In common representations of neural networks, the weights are represented by lines or edges from the input node to the perceptron. For example, if you are predicting a house price based on a set of features like size, neighborhood, and number of rooms, there are three input features (x1, x2, and x3). Each of these inputs will have a different weight value that represents its effect on the final decision. For example, if the size of the house has double the effect on the price compared with the neighborhood, and the neighborhood has double the effect compared with the number of rooms, you will see weights something like 8, 4, and 2, respectively. How the connection values are assigned and how the learning happens is the core of the neural network training process. This is what we will discuss for the rest of this chapter.

Figure 2.3 Artificial neurons were inspired by biological neurons: dendrites carry information coming from other neurons into the neuron, and synapses carry its output to other neurons.
WEIGHTED SUM FUNCTION
Also known as a linear combination, the weighted sum function is the sum of all inputs multiplied by their weights, plus a bias term. This function produces a straight line, represented by the following equation:

z = Σ(xi · wi) + b
z = x1 · w1 + x2 · w2 + x3 · w3 + … + xn · wn + b

Here is how we implement the weighted sum in Python, where X is the input vector, w is the weights vector, and b is the bias (the y-intercept):

import numpy as np
z = np.dot(w.T, X) + b

Figure 2.4 Input vectors are fed to the neuron, with weights assigned to represent importance. The calculations performed within the neuron are the weighted sum and the activation function.

What is a bias in the perceptron, and why do we add it?
Let's brush up on some linear algebra concepts to help understand what's happening under the hood. A straight line is represented by the equation y = mx + b, where b is the y-intercept. To define a line, you need two things: the slope of the line and a point on the line. The bias is that point on the y-axis. Bias allows you to move the line up and down on the y-axis to better fit the prediction to the data. Without the bias (b), the line always has to pass through the origin (0, 0), and you will get a poorer fit. To visualize the importance of bias, imagine trying to separate a set of circles from a star using a line that must pass through the origin (0, 0); in general, it is not possible. The input layer can be given a bias by introducing an extra input node that always has a value of 1.
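As a concrete, made-up example, here is the weighted sum computed with NumPy for the house-price features discussed earlier, using the illustrative weights 8, 4, and 2 (the feature values and bias are invented for this sketch):

```python
import numpy as np

# Hypothetical feature vector: [size, neighborhood score, number of rooms].
X = np.array([3.0, 2.0, 4.0])
w = np.array([8.0, 4.0, 2.0])  # size matters twice as much as neighborhood, etc.
b = 1.0                        # bias (the y-intercept)

z = np.dot(w, X) + b           # weighted sum: 24 + 8 + 8 + 1
print(z)                       # 41.0
```

Note that for 1-D arrays, `np.dot(w, X)` and the book's `np.dot(w.T, X)` are equivalent, since transposing a 1-D NumPy array is a no-op.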
In neural networks, the value of the bias (b) is treated as an extra weight and is learned and adjusted by the neuron to minimize the cost function, as we will see in the following sections of this chapter. The input layer can be given a bias by introducing an extra input that always has a value of 1.

(Figure: the equation of a straight line, y = mx + b, and a perceptron with inputs x1 … xm, weights w1 … wm, and an extra bias input fixed at 1 with weight w0 feeding the net input function and activation function.)

STEP ACTIVATION FUNCTION
In both artificial and biological neural networks, a neuron does not just output the bare input it receives. Instead, there is one more step, called an activation function; this is the decision-making unit of the brain. In ANNs, the activation function takes the same weighted sum input as before, z = Σ(xi · wi) + b, and activates (fires) the neuron if the weighted sum is higher than a certain threshold. This activation happens based on the activation function's calculations. Later in this chapter, we'll review the different types of activation functions and their general purpose in the broader context of neural networks. The simplest activation function used by the perceptron algorithm is the step function, which produces a binary output (0 or 1). It basically says that if the summed input is greater than 0, the neuron "fires" (output = 1); otherwise (summed input ≤ 0), it doesn't fire (output = 0) (figure 2.5).

This is how the step function looks in Python:

def step_function(z):
    if z <= 0:
        return 0
    else:
        return 1

Figure 2.5 The step function produces a binary output (0 or 1): output = 0 if w · x + b ≤ 0, and 1 if w · x + b > 0.
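The bias-as-an-extra-input trick can be checked numerically. This is a small sketch with made-up numbers (not from the book) showing that appending a constant input of 1 and folding b into the weights vector gives the same weighted sum:

```python
import numpy as np

x = np.array([2.0, 3.0])        # made-up inputs
w = np.array([0.5, -1.0])       # made-up weights
b = 0.25                        # bias

z_explicit = np.dot(w, x) + b   # bias kept as a separate term

# Fold the bias in as weight w0 on an extra input fixed at 1
x_aug = np.concatenate(([1.0], x))
w_aug = np.concatenate(([b], w))
z_folded = np.dot(w_aug, x_aug)

print(z_explicit, z_folded)     # both -1.75
```

This is why the bias can be learned exactly like any other weight.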
Here, z is the weighted sum: z = Σ(xi · wi) + b.

2.1.2 How does the perceptron learn?
The perceptron uses trial and error to learn from its mistakes. It uses the weights as knobs, tuning their values up and down until the network is trained (figure 2.6). The perceptron's learning logic goes like this:

1 The neuron calculates the weighted sum and applies the activation function to make a prediction ŷ. This is called the feedforward process: ŷ = activation(Σ(xi · wi) + b)
2 It compares the output prediction with the correct label to calculate the error: error = y – ŷ
3 It then updates the weights. If the prediction is too high, it adjusts the weights to make a lower prediction the next time, and vice versa.
4 Repeat!

This process is repeated many times, and the neuron continues to update the weights to improve its predictions until step 2 produces a very small error (close to zero), which means the neuron's prediction is very close to the correct value. At this point, we can stop the training and save the weight values that yielded the best results, to apply to future cases where the outcome is unknown.

Figure 2.6 Weights are tuned up and down during the learning process to optimize the value of the loss function.

2.1.3 Is one neuron enough to solve complex problems?
The short answer is no, but let's see why. The perceptron is a linear function. This means the trained neuron will produce a straight line that separates our data. Suppose we want to train a perceptron to predict whether a player will be accepted into the college squad. We collect all the data from previous years and train the perceptron to predict whether players will be accepted based on only two features (height and age).
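The four learning steps above can be sketched as a training loop. The book doesn't give this code; the following is a minimal illustration that trains a perceptron on the logical AND problem (a tiny, linearly separable dataset), using the step activation and the simple error-driven weight update just described. The learning rate and epoch count are arbitrary choices:

```python
# Perceptron trial-and-error learning on the AND problem (a sketch).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]

w = [0.0, 0.0]
b = 0.0
lr = 0.1  # learning rate (an arbitrary choice)

def predict(x):
    z = x[0] * w[0] + x[1] * w[1] + b   # step 1: weighted sum...
    return 1 if z > 0 else 0            # ...and step activation

for epoch in range(20):                  # step 4: repeat many times
    for xi, yi in zip(X, y):
        error = yi - predict(xi)         # step 2: compare with the label
        # step 3: nudge the weights toward a better prediction next time
        w[0] += lr * error * xi[0]
        w[1] += lr * error * xi[1]
        b += lr * error

print([predict(xi) for xi in X])  # [0, 0, 0, 1]
```

Because AND is linearly separable, this loop converges in a handful of epochs; once every error is zero, the weights stop changing.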
The trained perceptron finds the best weight and bias values to produce the straight line that best separates the accepted players from the non-accepted ones (the best fit). The line has this equation:

z = height · w1 + age · w2 + b

After training is complete, we can start using the perceptron to make predictions for new players. When we get a player who is 150 cm tall and 12 years old, we compute the equation above with the values (150, 12). When plotted on a graph (figure 2.7), the point falls below the line: the neuron predicts that this player will not be accepted. If it falls above the line, the player will be accepted.

Figure 2.7 Linearly separable data can be separated by a straight line.

In figure 2.7, the single perceptron works fine because our data was linearly separable. This means the training data can be separated by a straight line. But life isn't always that simple. What happens when we have a more complex dataset that cannot be separated by a straight line (a nonlinear dataset)? As you can see in figure 2.8, a single straight line will not separate our training data; we say that it does not fit our data. We need a more complex network for more complex data like this. What if we built a network with two perceptrons? This would produce two lines. Would that help us separate the data better?

Okay, this is definitely better than the straight line. But I still see some mispredictions. Can we add more neurons to make the function fit better? Now you are getting it. Conceptually, the more neurons we add, the better the network will fit our training data. In fact, if we add too many neurons, the network will overfit the training data (not good). But we will talk about this later.
The general rule here is that the more complex our network is, the better it learns the features of our data.

2.2 Multilayer perceptrons
We saw that a single perceptron works great with simple datasets that can be separated by a line. But, as you can imagine, the real world is much more complex than that. This is where neural networks can show their full potential.

Linear vs. nonlinear problems
Linear datasets — The data can be split with a single straight line.
Nonlinear datasets — The data cannot be split with a single straight line. We need more than one line to form a shape that splits the data.
Look at some 2D data: in a linear problem, the stars and dots can easily be classified by drawing a single straight line; in nonlinear data, a single line will not separate both shapes.

Figure 2.8 In a nonlinear dataset, a single straight line cannot separate the training data. A network with two perceptrons can produce two lines and help separate the data further in this example.

To split a nonlinear dataset, we need more than one line. This means we need to come up with an architecture that uses tens or hundreds of neurons in our neural network. Let's look at the example in figure 2.9. Remember that a perceptron is a linear function that produces a straight line. So in order to fit this data, we try to create a triangle-like shape that splits the dark dots. It looks like three lines would do the job. Figure 2.9 is an example of a small neural network that is used to model nonlinear data.
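To make the "more than one line" idea concrete, consider the classic XOR problem (not the book's triangle example, but the same principle): no single straight line separates XOR's two classes, yet two perceptron-style units plus one combining unit solve it. The weights below are set by hand for illustration, not learned:

```python
# Hand-wired two-layer network that computes XOR, a problem no single
# perceptron (single line) can solve. Weights chosen by hand for illustration.

def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # line 1: fires when x1 OR x2
    h2 = step(x1 + x2 - 1.5)        # line 2: fires when x1 AND x2
    return step(h1 - h2 - 0.5)      # combine: "OR but not AND" = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))      # outputs 0, 1, 1, 0
```

Each hidden unit draws one line; the output unit combines the two half-planes into a region no single line could carve out.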
In this network, we used three neurons stacked together in one layer, called a hidden layer — so called because we don't see the output of these neurons during the training process.

2.2.1 Multilayer perceptron architecture
We've seen how a neural network can be designed to have more than one neuron. Let's expand on this idea with a more complex dataset. The diagram in figure 2.10 is from the TensorFlow playground website (https://playground.tensorflow.org). We try to model a spiral dataset to distinguish between two classes. In order to fit this dataset, we need to build a neural network that contains tens of neurons. A very common neural network architecture is to stack the neurons in layers, called hidden layers; each layer has some number n of neurons. Layers are connected to each other by weight connections. This leads to the multilayer perceptron (MLP) architecture in the figure. The main components of the neural network architecture are as follows:

Input layer — Contains the feature vector.
Hidden layers — The neurons are stacked in hidden layers. They are called "hidden" layers because we don't see or control the input going into these layers or the output coming out of them. All we do is feed the feature vector to the input layer and see the output coming out of the output layer.
Weight connections (edges) — Weights are assigned to each connection between the nodes to reflect the importance of their influence on the final output prediction. In graph network terms, these are called edges connecting the nodes.

Figure 2.9 A perceptron is a linear function that produces a straight line. So to fit this data, we need three perceptrons to create a triangle-like shape that splits the dark dots.

Output layer — We get the answer or prediction from our model from the output layer.
Depending on the setup of the neural network, the final output may be a real-valued output (a regression problem) or a set of probabilities (a classification problem). This is determined by the type of activation function we use in the neurons of the output layer. We'll discuss the different types of activation functions in the next section.

We've discussed the input layer, weights, and output layer. The next area of this architecture is the hidden layers.

2.2.2 What are hidden layers?
This is where the core of the feature-learning process takes place. When you look at the hidden layer nodes in figure 2.10, you see that the early layers detect simple patterns to learn low-level features (straight lines). Later layers detect patterns within patterns to learn more complex features and shapes, then patterns within patterns within patterns, and so on. This concept will come in handy when we discuss convolutional networks in later chapters. For now, know that, in neural networks, we stack hidden layers to learn complex features from each other until we fit our data. So when you are designing your neural network, if your network is not fitting the data, the solution could be adding more hidden layers.

2.2.3 How many layers, and how many nodes in each layer?
As a machine learning engineer, you will mostly be designing your network and tuning its hyperparameters. While there is no single prescribed recipe that fits all models, we will try throughout this book to build your hyperparameter-tuning intuition, as well as recommend some starting points.

Figure 2.10 TensorFlow playground example representation of feature learning in a deep neural network with six hidden layers; the outputs of each layer are the new features learned after that layer.
The number of layers and the number of neurons in each layer are among the most important hyperparameters you will set when designing neural networks. A network can have one or more hidden layers (technically, as many as you want). Each layer has one or more neurons (again, as many as you want). Your main job, as a machine learning engineer, is to design these layers. Usually, when we have two or more hidden layers, we call this a deep neural network. The general rule is this: the deeper your network is, the more it will fit the training data. But too much depth is not a good thing, because the network can fit the training data so closely that it fails to generalize when you show it new data (overfitting); it also becomes more computationally expensive. So your job is to build a network that is neither too simple (one neuron) nor too complex for your data. It is recommended that you read about different neural network architectures that have been successfully implemented by others, to build an intuition about what is too simple for your problem. Start from that point (maybe three to five layers if you are training on a CPU) and observe the network's performance. If it is performing poorly (underfitting), add more layers. If you see signs of overfitting (discussed later), decrease the number of layers. To build a sense of how neural networks perform when you add more layers, play around with the TensorFlow playground (https://playground.tensorflow.org).

Fully connected layers
It is important to call out that the layers in classical MLP network architectures are fully connected to the next hidden layer: each node in a layer is connected to all nodes in the previous layer. This is called a fully connected network. These edges are the weights that represent the importance of each node to the output value.
(Figure: a fully connected network with an input layer, two hidden layers, and an output layer.)

In later chapters, we will discuss other variations of neural network architecture (like convolutional and recurrent networks). For now, know that this is the most basic neural network architecture, and it can be referred to by any of these names: ANN, MLP, fully connected network, or feedforward network. Let's do a quick exercise to find out how many edges we have in our example. Suppose we designed an MLP network with four input nodes, two hidden layers of five neurons each, and three output nodes:

Weights_0_1: (4 nodes in the input layer) × (5 nodes in layer 1) + 5 biases (1 bias per neuron) = 25 edges
Weights_1_2: 5 × 5 nodes + 5 biases = 30 edges
Weights_2_output: 5 × 3 nodes + 3 biases = 18 edges
Total edges (weights) in this network = 73

We have a total of 73 weights in this very simple network. The values of these weights are randomly initialized, and then the network performs feedforward and backpropagation to learn the weight values that best fit our model to the training data.
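The edge count above can be computed generically. Here is a small helper (not from the book) that counts weights and biases for any stack of fully connected layer sizes:

```python
# Count weights + biases in a fully connected network given the layer sizes.
def count_params(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weights, plus one bias per neuron
    return total

# The example network: 4 inputs, two hidden layers of 5, and 3 outputs
print(count_params([4, 5, 5, 3]))  # 73
```

This is exactly the arithmetic that Keras performs per layer in the `model.summary()` output shown next.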
To see the number of weights in this network, try building this simple network in Keras:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(5, input_dim=4),
    Dense(5),
    Dense(3)
])

And print the model summary:

model.summary()

The output will be as follows:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 5)                 25
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 30
_________________________________________________________________
dense_2 (Dense)              (None, 3)                 18
=================================================================
Total params: 73
Trainable params: 73
Non-trainable params: 0

2.2.4 Some takeaways from this section
Let's recap what we've discussed so far:

- We talked about the analogy between biological and artificial neurons: both have inputs and a neuron that does some calculations to modulate the input signals and create an output.
- We zoomed in on the artificial neuron's calculations to explore its two main functions: the weighted sum and the activation function.
- We saw that the network assigns random initial weights to all the edges. These weight parameters reflect the usefulness (or importance) of these features for the output prediction.
- Finally, we saw that perceptrons contain a single neuron. They are linear functions that produce a straight line to split linear data. In order to split more complex (nonlinear) data, we need more than one neuron in our network, forming a multilayer perceptron.
- The MLP architecture contains input features, connection weights, hidden layers, and an output layer.
- We discussed the high-level process of how the perceptron learns.
The learning process is a repetition of three main steps: feedforward calculations to produce a prediction (weighted sum and activation), calculating the error, and backpropagating the error to update the weights and minimize the error.

We should also keep in mind some important points about neural network hyperparameters:

Number of hidden layers — You can have as many layers as you want, each with as many neurons as you want. The general idea is that the more neurons you have, the better your network will learn the training data. But if you have too many neurons, this may lead to a phenomenon called overfitting: the network learns the training set so well that it memorizes it instead of learning its features, and thus fails to generalize. To get the appropriate number of layers, start with a small network and observe its performance. Then start adding layers until you get satisfying results.
Activation function — There are many types of activation functions, the most popular being ReLU and softmax. It is recommended that you use ReLU activation in the hidden layers and softmax for the output layer (you will see how this is implemented in most projects in this book).
Error function — Measures how far the network's prediction is from the true label. Mean squared error is common for regression problems, and cross-entropy is common for classification problems.
Optimizer — Optimization algorithms are used to find the optimal weight values that minimize the error. There are several optimizer types to choose from. In this chapter, we discuss batch gradient descent, stochastic gradient descent, and mini-batch gradient descent. Adam and RMSprop are two other popular optimizers that we don't discuss.
Batch size — Mini-batch size is the number of sub-samples given to the network, after which a parameter update happens.
Bigger batch sizes learn faster but require more memory. A good default for batch size is 32; also try 64, 128, 256, and so on.
Number of epochs — The number of times the entire training dataset is shown to the network during training. Increase the number of epochs until the validation accuracy starts decreasing, even as training accuracy increases (overfitting).
Learning rate — One of the optimizer's input parameters that we tune. Theoretically, a learning rate that is too small is guaranteed to reach the minimum error (if you train for infinite time). A learning rate that is too big speeds up learning but is not guaranteed to find the minimum error. The default learning-rate value of the optimizer in most DL libraries is a reasonable start for getting decent results. From there, go down or up by one order of magnitude. We will discuss the learning rate in detail in chapter 4.

2.3 Activation functions
When you are building your neural network, one of the design decisions you will need to make is which activation function to use for your neurons' calculations. Activation functions are also referred to as transfer functions or nonlinearities because they transform the linear combination of the weighted sum into a nonlinear model. An activation function is placed at the end of each perceptron to decide whether to activate the neuron.

More on hyperparameters
Other hyperparameters that we have not discussed yet include dropout and regularization. We will discuss hyperparameter tuning in detail in chapter 4, after we cover convolutional neural networks in chapter 3. In general, the best way to tune hyperparameters is by trial and error. By getting your hands dirty with your own projects, as well as learning from other existing neural network architectures, you will start to develop intuition about good starting points for your hyperparameters.
Learn to analyze your network's performance and understand which hyperparameter you need to tune for each symptom. And this is what we are going to do in this book: by understanding the reasoning behind these hyperparameters and observing the network's performance in the projects at the end of the chapters, you will develop a feel for which hyperparameter to tune for a particular effect. For example, if you see that your error value is not decreasing and keeps oscillating, you might fix that by reducing the learning rate. Or, if you see that the network is performing poorly at learning the training data, the network may be underfitting, and you need to build a more complex model by adding more neurons and hidden layers.

Why use activation functions at all? Why not just calculate the weighted sum of our network and propagate that through the hidden layers to produce an output? The purpose of the activation function is to introduce nonlinearity into the network. Without it, a multilayer perceptron performs no better than a single perceptron, no matter how many layers we add. Activation functions are also needed to restrict the output value to a certain finite range. Let's revisit the example of predicting whether a player gets accepted (figure 2.11). First, the model calculates the weighted sum and produces the linear function (z):

z = height · w1 + age · w2 + b

The output of this function is unbounded: z could literally be any number. We use an activation function to map the prediction values to a finite range. In this example, we use a step function: if z > 0, the point is above the line (accepted), and if z < 0, it is below the line (rejected). So without the activation function, we just have a linear function that produces a number, but no decision is made in this perceptron.
The activation function is what decides whether to fire this perceptron. In principle there are infinitely many possible activation functions, and the last few years have seen a lot of progress in the creation of state-of-the-art activations. However, relatively few activations account for the vast majority of activation needs. Let's dive deeper into some of the most common types.

Figure 2.11 This example revisits the prediction of whether a player gets accepted, from section 2.1.

2.3.1 Linear transfer function
A linear transfer function, also called an identity function, passes the signal through unchanged. In practical terms, the output equals the input, which means we don't actually have an activation function. So no matter how many layers our neural network has, all it is doing is computing a linear activation function or, at most, scaling the weighted average coming in. It doesn't transform the input into a nonlinear function.

activation(z) = z = w · x + b

The composition of two linear functions is a linear function, so unless you throw a nonlinear activation function into your neural network, you are not computing any interesting functions, no matter how deep you make your network. No learning here! To understand why, let's calculate the derivative of the activation z(x) = w · x + b, where w = 4 and b = 0. When we plot this function, it looks like figure 2.12. The derivative of z(x) = 4x is z'(x) = 4 (figure 2.13). The derivative of a linear function is constant: it does not depend on the input value x. This means that every time we do a backpropagation, the gradient will be the same. And this is a big problem: we are not really improving the error, since the gradient is pretty much the same.

Figure 2.12 The plot of the activation function f(x) = 4x.
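The claim that stacking linear layers buys nothing can be checked numerically. This sketch (not from the book, with random made-up weights) shows that two linear layers collapse into one equivalent linear layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with no activation function, using made-up random weights
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)

x = rng.normal(size=4)

# Forward pass through both linear layers
out_two_layers = W2 @ (W1 @ x + b1) + b2

# The same computation collapsed into a single linear layer
W = W2 @ W1
b = W2 @ b1 + b2
out_one_layer = W @ x + b

print(np.allclose(out_two_layers, out_one_layer))  # True
```

No matter how many linear layers you stack, the result is always expressible as one weight matrix and one bias, which is why a nonlinearity between layers is essential.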
This will be clearer when we discuss backpropagation later in this chapter.

Figure 2.13 The plot of the derivative: for z(x) = 4x, z'(x) = 4.

2.3.2 Heaviside step function (binary classifier)
The step function produces a binary output. It basically says that if the input z > 0, the neuron fires (output y = 1); otherwise (z ≤ 0), it doesn't fire (output y = 0). It is mainly used in binary classification problems like true or false, spam or not spam, and pass or fail (figure 2.14).

Figure 2.14 Step functions are commonly used in binary classification problems because they transform the input into a 0 or a 1: output = 0 if w · x + b ≤ 0, and 1 if w · x + b > 0.

2.3.3 Sigmoid/logistic function
This is one of the most common activation functions. It is often used in binary classifiers to predict the probability of a class when you have two classes. The sigmoid squishes all values to a probability between 0 and 1, which reduces extreme values or outliers in the data without removing them. Sigmoid or logistic functions convert infinite continuous variables (ranging from –∞ to +∞) into simple probabilities between 0 and 1. It is also called the S-shape curve because, when plotted, it produces an S-shaped curve.
While the step function is used to produce a discrete answer (pass or fail), the sigmoid is used to produce the probability of passing and the probability of failing (figure 2.15):

σ(z) = 1 / (1 + e^(–z))

Here is how the sigmoid is implemented in Python:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

Figure 2.15 While the step function is used to produce a discrete answer (pass or fail), the sigmoid is used to produce the probability of passing or failing.

Just-in-time linear algebra (optional)
Let's take a deeper dive into the math side of the sigmoid function to understand the problem it helps solve and how the sigmoid equation is derived. Suppose we are trying to predict whether patients have diabetes based on only one feature: their age. When we plot the data we have about our patients, we get a linear model:

z = β0 + β1 · age

(Figure: the linear model we get when we plot our patient data, with predicted probability p on the y-axis and age on the x-axis.)

In this plot, you can observe the balance of probabilities that should go from 0 to 1. Note that when patients are below the age of 25, the predicted probabilities are negative, while they are higher than 1 (100%) when patients are older than 43. This is a clear example of why linear functions do not work in most cases. Now, how do we fix this to give us probabilities within the range 0 < probability < 1? First, we need to eliminate all the negative probability values. The exponential function is a great solution for this problem, because the exponent of anything (and I mean anything) is always positive. So let's apply it to our linear equation to calculate the probability (p):

p = exp(z) = exp(β0 + β1 · age)

This equation ensures that we always get probabilities greater than 0. Now, what about the values that are higher than 1?
We need to do something about them. With proportions, any number divided by a number greater than it gives a result smaller than 1. So let's do exactly that to the previous equation: we divide it by its own value plus a small value, either 1 or a (possibly very small) value we'll call epsilon (ε):

p = exp(z) / (exp(z) + ε)

With ε = 1, dividing the numerator and denominator by exp(z) gives

p = 1 / (1 + exp(–z))

When we plot the probability of this equation, we get the S shape of the sigmoid function, where the probability is no longer below 0 or above 1. As patients' ages grow, the probability asymptotically approaches 1; and as ages decrease, the function asymptotically approaches 0, but it never leaves the 0 < p < 1 range. This is the plot of the sigmoid function and logistic regression.

2.3.4 Softmax function
The softmax function is a generalization of the sigmoid function. It is used to obtain classification probabilities when we have more than two classes. It forces the outputs of a neural network to sum to 1, with each output between 0 and 1. A very common use case in deep learning problems is to predict a single class out of many options (more than two). The softmax equation is as follows:

σ(xj) = e^(xj) / Σi e^(xi)

Figure 2.16 shows an example of the softmax function.

Figure 2.16 The softmax function transforms the input values to probability values between 0 and 1 (for example, inputs 1.2, 0.9, and 0.4 become roughly 0.46, 0.34, and 0.20).
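The book doesn't show softmax in code at this point; here is a minimal NumPy sketch, checked against the example values in figure 2.16:

```python
import numpy as np

def softmax(x):
    # Exponentiate, then normalize so the outputs sum to 1
    e = np.exp(x)
    return e / e.sum()

probs = softmax(np.array([1.2, 0.9, 0.4]))  # the inputs from figure 2.16
print(probs.round(2))   # [0.46 0.34 0.21]
print(probs.sum())      # 1.0
```

The last probability is about 0.205, which prints as 0.21 here and appears as 0.20 in the figure; the small difference is just rounding. (A production implementation would subtract `x.max()` before exponentiating for numerical stability, a standard trick not shown in the book.)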
TIP Softmax is the go-to function that you will often use at the output layer of a classifier when you are working on a problem where you need to predict a class among more than two classes. Softmax works fine if you are classifying two classes as well; it will basically behave like a sigmoid function. By the end of this section, I'll give you my recommendations about when to use each activation function.

2.3.5 Hyperbolic tangent function (tanh)
The hyperbolic tangent function is a shifted version of the sigmoid function. Instead of squeezing the signal values between 0 and 1, tanh squishes all values into the range –1 to 1. Tanh almost always works better than the sigmoid function in hidden layers because it has the effect of centering your data so that its mean is close to zero rather than 0.5, which makes learning a little easier for the next layer:

tanh(x) = sinh(x) / cosh(x) = (e^x – e^(–x)) / (e^x + e^(–x))

One downside of both the sigmoid and tanh functions is that when z is very large or very small, the gradient (or derivative, or slope) of the function becomes very small (close to zero), which slows down gradient descent (figure 2.17). This is where the ReLU activation function (explained next) provides a solution.

Figure 2.17 If z is very large or very small, the gradient (derivative or slope) of this function becomes very small (close to zero).

2.3.6 Rectified linear unit
The rectified linear unit (ReLU) activation function activates a node only if the input is above zero. If the input is below zero, the output is always zero. When the input is higher than zero, it has a linear relationship with the output variable. The ReLU function is represented as follows:

f(x) = max(0, x)
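As a quick sketch (not from the book), the tanh formula from section 2.3.5 can be implemented directly and checked against NumPy's built-in `np.tanh`, confirming that its outputs stay within (–1, 1):

```python
import numpy as np

def tanh_manual(x):
    # tanh(x) = (e^x - e^-x) / (e^x + e^-x)
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

x = np.linspace(-4, 4, 9)
print(np.allclose(tanh_manual(x), np.tanh(x)))     # True
print(np.all(np.abs(np.tanh(x)) < 1))              # True: outputs stay in (-1, 1)
```

Evaluating the formula near x = ±4 also illustrates the vanishing-gradient issue: the curve is nearly flat there, so its slope is close to zero.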
At the time of writing, ReLU is considered the state-of-the-art activation function because it works well in many different situations, and it tends to train better than sigmoid and tanh in hidden layers (figure 2.18). Here is how ReLU is implemented in Python:

def relu(x):
    if x < 0:
        return 0
    else:
        return x

Figure 2.18 The ReLU function eliminates all negative values of the input by transforming them into zeros.

2.3.7 Leaky ReLU
One disadvantage of ReLU activation is that the derivative is equal to zero when (x) is negative. Leaky ReLU is a ReLU variation that tries to mitigate this issue. Instead of having the function be zero when x < 0, leaky ReLU introduces a small negative slope (around 0.01) when (x) is negative. It usually works better than the ReLU function, although it's not used as much in practice. Take a look at the leaky ReLU graph in figure 2.19; can you see the leak?

f(x) = max(0.01x, x)

Why 0.01? Some people like to use this as another hyperparameter to tune, but that would be overkill, since you already have other, bigger problems to worry about. Feel free to try different values (0.1, 0.01, 0.002) in your model and see how they work.

Figure 2.19 Instead of having the function be zero when x < 0, leaky ReLU introduces a small negative slope (around 0.01) when (x) is negative.

Here is how leaky ReLU is implemented in Python:

def leaky_relu(x):
    if x < 0:
        return x * 0.01
    else:
        return x

Table 2.1 summarizes the various activation functions we've discussed in this section.

Table 2.1 A cheat sheet of the most common activation functions (plots omitted; see the figures earlier in this section)

- Linear transfer function (identity function): The signal passes through it unchanged; it remains a linear function. Almost never used. Equation: f(x) = x
- Heaviside step function (binary classifier): Produces a binary output of 0 or 1. Mainly used in binary classification to give a discrete value. Equation: output = 0 if w · x + b ≤ 0; 1 if w · x + b > 0
- Sigmoid/logistic function: Squishes all values to a probability between 0 and 1, which reduces extreme values or outliers in the data. Usually used to classify two classes. Equation: σ(z) = 1 / (1 + e^(–z))
- Softmax function: A generalization of the sigmoid function. Used to obtain classification probabilities when we have more than two classes. Equation: σ(x_j) = e^(x_j) / Σ_i e^(x_i)
- Hyperbolic tangent function (tanh): Squishes all values to the range –1 to 1. Almost always works better than the sigmoid function in hidden layers. Equation: tanh(x) = sinh(x) / cosh(x) = (e^x – e^(–x)) / (e^x + e^(–x))
- Rectified linear unit (ReLU): Activates a node only if the input is above zero. Always recommended for hidden layers; better than tanh. Equation: f(x) = max(0, x)
- Leaky ReLU: Instead of being zero when x < 0, introduces a small negative slope (around 0.01) when (x) is negative. Equation: f(x) = max(0.01x, x)

2.4 The feedforward process
Now that you understand how to stack perceptrons in layers, connect them with weights/edges, perform a weighted sum function, and apply activation functions, let's implement the complete forward-pass calculations to produce a prediction output. The process of computing the linear combination and applying the activation function is called feedforward. We briefly discussed feedforward several times in the previous sections; let's take a deeper look at what happens in this process. The term feedforward is used to imply the forward direction in which the information flows from the input layer through the hidden layers, all the way to the output layer. This process happens through the implementation of two consecutive functions: the weighted sum and the activation function. In short, the forward pass is the calculations through the layers to make a prediction.
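The two consecutive functions just described, the weighted sum and the activation, can be sketched for a single neuron. This is a minimal illustration with made-up inputs, weights, and bias, using sigmoid as the activation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One neuron's forward pass: weighted sum first, then activation.
x = [0.5, 0.3, 0.2]    # inputs (hypothetical values)
w = [0.4, 0.7, -0.2]   # weights (hypothetical values)
b = 0.1                # bias

z = sum(wi * xi for wi, xi in zip(w, x)) + b  # weighted sum: w . x + b
a = sigmoid(z)                                 # activation: squash into (0, 1)
print(z, a)  # z = 0.47, a ≈ 0.615
```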
Hyperparameter alert
Due to the number of activation functions, it may appear to be an overwhelming task to select the appropriate activation function for your network. While it is important to select a good activation function, I promise this is not going to be a challenging task when you design your network. There are some rules of thumb that you can start with, and then you can tune the model as needed. If you are not sure what to use, here are my two cents about choosing an activation function:

- For hidden layers: In most cases, you can use the ReLU activation function (or leaky ReLU) in hidden layers, as you will see in the projects that we build throughout this book. It is increasingly becoming the default choice because it is a bit faster to compute than other activation functions. More importantly, it reduces the likelihood of the gradient vanishing, because it does not saturate for large input values, as opposed to the sigmoid and tanh activation functions, which saturate at ~1. Remember, the gradient is the slope: when the function plateaus, there is no slope, and the gradient starts to vanish. This makes it harder to descend to the minimum error (we will talk more about this phenomenon, called vanishing/exploding gradients, in later chapters).
- For the output layer: The softmax activation function is generally a good choice for most classification problems where the classes are mutually exclusive. The sigmoid function serves the same purpose when you are doing binary classification. For regression problems, you can simply use no activation function at all, since the weighted sum node produces the continuous output that you need: for example, if you want to predict house pricing based on the prices of other houses in the same neighborhood.

Let's take a look at the simple three-layer neural network in figure 2.20 and explore each of its components:

- Layers: This network consists of an input layer with three input features and three hidden layers with 3, 4, and 1 neurons, respectively.
- Weights and biases (w, b): The edges between nodes are assigned random weights denoted as Wab(n), where (n) indicates the layer number and (ab) indicates the weighted edge connecting the ath neuron in layer (n) to the bth neuron in the previous layer (n – 1). For example, W23(2) is the weight that connects the second node in layer 2 to the third node in layer 1 (a22 to a13). (Note that you may see different notations for Wab(n) in other DL literature, which is fine as long as you follow one convention for your entire network.) The biases are treated similarly to weights because they are randomly initialized and their values are learned during the training process. So, for convenience, from this point forward we are going to represent the biases with the same notation that we gave the weights (w). In DL literature, you will mostly find all weights and biases represented as (w) for simplicity.
- Activation functions σ(x): In this example, we are using the sigmoid function σ(x) as the activation function.
- Node values (a): We will calculate the weighted sum, apply the activation function, and assign this value to the node amn, where n is the layer number and m is the node index in the layer. For example, a23 means node number 2 in layer 3.
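The weight notation above can be mirrored in code. This is a sketch, not the book's code: each layer's weights are stored as a matrix whose row index is the neuron in layer n and whose column index is the neuron in layer n – 1, with randomly initialized values and shapes matching figure 2.20.

```python
import numpy as np

rng = np.random.default_rng(0)

# One weight matrix per layer, following the W_ab(n) convention:
# row a = neuron in layer n, column b = neuron in layer n - 1.
W = {1: rng.standard_normal((3, 3)),   # W(1): layer 1 (3 neurons) x inputs (3)
     2: rng.standard_normal((4, 3)),   # W(2): layer 2 (4 neurons) x layer 1 (3)
     3: rng.standard_normal((1, 4))}   # W(3): layer 3 (1 neuron)  x layer 2 (4)

# W23(2): the weight connecting node 2 in layer 2 to node 3 in layer 1
# (row a = 2, column b = 3, converted to 0-based indices).
w_23_2 = W[2][2 - 1, 3 - 1]
print(w_23_2)
print({n: W[n].shape for n in W})  # {1: (3, 3), 2: (4, 3), 3: (1, 4)}
```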
Figure 2.20 A simple three-layer neural network: an input layer with three features (x1, x2, x3), layer 1 with 3 neurons, layer 2 with 4 neurons, and layer 3 with 1 neuron.

2.4.1 Feedforward calculations
We have all we need to start the feedforward calculations:

a1(1) = σ(w11(1)x1 + w21(1)x2 + w31(1)x3)
a2(1) = σ(w12(1)x1 + w22(1)x2 + w32(1)x3)
a3(1) = σ(w13(1)x1 + w23(1)x2 + w33(1)x3)

Then we do the same calculations for layer 2 (a1(2), a2(2), a3(2), and a4(2)), all the way to the output prediction in layer 3:

ŷ = a1(3) = σ(w11(3)a1(2) + w12(3)a2(2) + w13(3)a3(2) + w14(3)a4(2))

And there you have it! You just calculated the feedforward pass of a three-layer neural network. Let's take a moment to reflect on what we just did. Look at how many equations we need to solve for such a small network. What happens when we have a more complex problem with hundreds of nodes in the input layer and hundreds more in the hidden layers? It is more efficient to use matrices to pass through multiple inputs at once. Doing this allows for big computational speedups, especially when using tools like NumPy, where we can implement this with one line of code. Let's see how the matrix computation looks (figure 2.21). All we did here is simply stack the inputs and weights in matrices and multiply them together. The intuitive way to read this equation is from right to left. Start at the far right and follow along:

- We stack all the inputs together in one vector (row, column), in this case (3, 1).
- We multiply the input vector by the weights matrix from layer 1 (W(1)) and then apply the sigmoid function.
- We multiply the result by the weights matrix for layer 2 ⇒ σ · W(2), and then for layer 3 ⇒ σ · W(3).
If we have a fourth layer, you multiply the result from step 3 by σ · W(4), and so on, until we get the final prediction output ŷ. Here is a simplified representation of this matrix formula:

ŷ = σ · W(3) · σ · W(2) · σ · W(1) · (x)

2.4.2 Feature learning
The nodes in the hidden layers (ai) are the new features that are learned after each layer. For example, if you look at figure 2.20, you see that we have three feature inputs (x1, x2, and x3). After computing the forward pass in the first layer, the network learns patterns, and these features are transformed into three new features with different values (a1(1), a2(1), and a3(1)). Then, in the next layer, the network learns patterns within the patterns and produces new features (a1(2), a2(2), a3(2), and a4(2)), and so forth. The features produced after each layer are not totally understood, and we don't see them, nor do we have much control over them. It is part of the neural network magic. That's why they are called hidden layers. What we do is this: we look at the final output prediction and keep tuning some parameters until we are satisfied by the network's performance. To reiterate, let's see this in a small example. In figure 2.22, you see a small neural network that estimates the price of a house based on three features: how many bedrooms it has, how big it is, and which neighborhood it is in. You can see that the original input feature values 3, 2,000, and 1 were transformed into new feature values after performing the feedforward process in the first layer (a1, a2, a3, and a4). Then they were transformed again into a prediction output value (ŷ). When training a neural network, we see the prediction output and compare it with the true price to calculate the error, and we repeat the process until we get the minimum error.
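The simplified matrix formula can be sketched with NumPy. This is an illustration, not the book's code: the weights are randomly initialized, the input values are made up, biases are omitted for brevity, and the shapes follow figure 2.20.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Shapes follow figure 2.20: 3 inputs, then layers of 3, 4, and 1 neurons.
W1 = rng.standard_normal((3, 3))  # layer 1: 3 neurons, each fed by 3 inputs
W2 = rng.standard_normal((4, 3))  # layer 2: 4 neurons, each fed by 3 values
W3 = rng.standard_normal((1, 4))  # layer 3: 1 neuron, fed by 4 values

x = np.array([0.2, 0.7, 0.1])     # a hypothetical input vector

# y_hat = sigma(W3 . sigma(W2 . sigma(W1 . x))), reading right to left
a1 = sigmoid(W1 @ x)
a2 = sigmoid(W2 @ a1)
y_hat = sigmoid(W3 @ a2)
print(y_hat)  # a single prediction between 0 and 1
```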
To help visualize the feature-learning process, let's take another look at figure 2.9 (repeated here in figure 2.23) from the TensorFlow playground. You can see that the first layer learns basic features like lines and edges. The second layer begins to learn more complex features, like corners. The process continues until the last layers of the network learn even more complex feature shapes, like circles and spirals, that fit the dataset.

Figure 2.21 Reading from right to left, we stack the inputs together in one vector, multiply the input vector by the weights matrix from layer 1, apply the sigmoid function, and multiply the result by the weight matrices of the following layers.

That is how a neural network learns new features: via the network's hidden layers. First, they recognize patterns in the data. Then, they recognize patterns within patterns; then patterns within patterns within patterns, and so on. The deeper the network is, the more it learns about the training data.

Figure 2.22 A small neural network to estimate the price of a house based on three features: how many bedrooms it has, how big it is, and which neighborhood it is in (inputs 3, 2,000, and a neighborhood mapped to an ID of 1; hidden-layer features a1 through a4; output ŷ, the final price estimate).
Figure 2.23 Learning features in multiple hidden layers: two input features feed six hidden layers (6, 6, 6, 6, 6, and 2 neurons); the nodes in each layer are the new features learned after that layer.

Vectors and matrices refresher
If you understood the matrix calculations we just did in the feedforward discussion, feel free to skip this sidebar. If you are still not convinced, hang tight: this sidebar is for you.

The feedforward calculations are a set of matrix multiplications. While you will not do these calculations by hand, because there are a lot of great DL libraries that do them for you with just one line of code, it is valuable to understand the mathematics that happens under the hood so you can debug your network. And because this is simple and interesting, let's quickly review matrix calculations.

Let's start with some basic definitions of matrix dimensions:
- A scalar is a single number.
- A vector is an array of numbers.
- A matrix is a 2D array.
- A tensor is an n-dimensional array with n > 2.

We will follow the conventions used in most mathematical literature:
- Scalars are written in lowercase and italics: for instance, n.
- Vectors are written in lowercase, italics, and bold type: for instance, x.
- Matrices are written in uppercase, italics, and bold: for instance, X.
- Matrix dimensions are written as (row × column).

Multiplication:
- Scalar multiplication: Simply multiply the scalar number by all the numbers in the matrix. Note that scalar multiplication doesn't change the matrix dimensions. For example (rows separated by semicolons): 2 · [10 4; 6 3] = [2 · 10  2 · 4; 2 · 6  2 · 3] = [20 8; 12 6].
- Matrix multiplication: When multiplying two matrices, such as (row 1 × column 1) × (row 2 × column 2), column 1 and row 2 must be equal to each other, and the product will have the dimensions (row 1 × column 2). For example, [3 4 2] (1 × 3) multiplied by [13 9 7; 8 7 4; 6 4 0] (3 × 3) gives [x y z] (1 × 3), where x = 3 · 13 + 4 · 8 + 2 · 6 = 83, and the same for y = 63 and z = 37.

Now that you know the matrix multiplication rules, pull out a piece of paper and work through the dimensions of the matrices in the earlier neural network example (the matrix equation in figure 2.21).

The last thing I want you to understand about matrices is transposition. With transposition, you can convert a row vector to a column vector and vice versa: the shape (m × n) is inverted and becomes (n × m). The superscript (A^T) is used for transposed matrices. For example, A = [2; 8] (2 × 1) has A^T = [2 8] (1 × 2), and A = [1 4 7; 2 5 8; 3 6 9] has A^T = [1 2 3; 4 5 6; 7 8 9].

2.5 Error functions
So far, you have learned how to implement the forward pass in neural networks to produce a prediction that consists of the weighted sum plus activation operations. Now, how do we evaluate the prediction that the network just produced? More importantly, how do we know how far this prediction is from the correct answer (the label)? The answer: measure the error. The selection of an error function is another important aspect of the design of a neural network. Error functions are also referred to as cost functions or loss functions, and these terms are used interchangeably in DL literature.

2.5.1 What is the error function?
The error function is a measure of how "wrong" the neural network prediction is with respect to the expected output (the label). It quantifies how far we are from the correct solution. If we have a high loss, our model is not doing a good job; the smaller the loss, the better the job the model is doing. The larger the loss, the more our model needs to be trained to increase its accuracy.

2.5.2 Why do we need an error function?
Calculating error is an optimization problem, something all machine learning engineers love (mathematicians, too). Optimization problems focus on defining an error function and trying to optimize its parameters to get the minimum error (more on optimization in the next section). For now, know that, in general, when we are working on an optimization problem, if we are able to define the error function for the problem, we have a very good shot at solving it by running optimization algorithms to minimize the error function. In optimization problems, our ultimate goal is to find the optimum variables (weights) that minimize the error function as much as we can. If we don't know how far from the target we are, how will we know what to change in the next iteration? The process of minimizing this error is called error function optimization; we will review several optimization methods in the next section. For now, all we need from the error function is a measure of how far we are from the correct prediction, or how much we missed the desired degree of performance.

2.5.3 Error is always positive
Consider this scenario: suppose we have two data points that we are trying to get our network to predict correctly. If the first gives an error of 10 and the second gives an error of –10, then our average error is zero! This is misleading because "error = 0" means our network is producing perfect predictions when, in fact, it missed by 10 twice. We don't want that.
We want the error of each prediction to be positive, so the errors don't cancel each other out when we take the average error. Think of an archer aiming at a target and missing by 1 inch. We are not really concerned about which direction they missed; all we need to know is how far each shot is from the target. A visualization of the loss functions of two separate models plotted over time is shown in figure 2.24. You can see that model #1 is doing a better job of minimizing error, whereas model #2 starts off better until epoch 6 and then plateaus. Different loss functions will give different errors for the same prediction, and thus have a considerable effect on the performance of the model. A thorough discussion of loss functions is outside the scope of this book. Instead, we will focus on the two most commonly used: mean squared error (and its variations), usually used for regression problems, and cross-entropy, used for classification problems.

2.5.4 Mean squared error
Mean squared error (MSE) is commonly used in regression problems that require the output to be a real value (like house pricing). Instead of just comparing the prediction output with the label (ŷi – yi), the error is squared and averaged over the number of data points, as you see in this equation:

E(W, b) = (1/N) Σ(i=1 to N) (ŷi – yi)²

MSE is a good choice for a few reasons. The square ensures the error is always positive, and larger errors are penalized more than smaller errors. Also, it makes the math nice, which is always a plus. The notation in the formula is listed in table 2.2. MSE is quite sensitive to outliers, since it squares the error value. This might not be an issue for the specific problem that you are solving. In fact, this sensitivity to outliers might be beneficial in some cases.
For example, if you are predicting a stock price, you would want to take outliers into account, and sensitivity to outliers would be a good thing. In other scenarios, you wouldn't want to build a model that is skewed by outliers, such as when predicting house prices in a city; in that case, you are more interested in the median and less in the mean. A variation of MSE called mean absolute error (MAE) was developed for just this purpose. It averages the absolute error over the entire dataset without taking the square of the error:

E(W, b) = (1/N) Σ(i=1 to N) |ŷi – yi|

Figure 2.24 A visualization of the loss functions of two separate models plotted over time.

Table 2.2 Meanings of notation used in regression problems
- E(W, b): The loss function. Also annotated as J(W, b) in other literature.
- W: Weights matrix. In some literature, the weights are denoted by the theta sign (θ).
- b: Biases vector.
- N: Number of training examples.
- ŷi: Prediction output. Also notated as h(w, b)(X) in some DL literature.
- yi: The correct output (the label).
- (ŷi – yi): Usually called the residual.

2.5.5 Cross-entropy
Cross-entropy is commonly used in classification problems because it quantifies the difference between two probability distributions. For example, suppose that for a specific training instance, we are trying to classify a dog image out of three possible classes (dogs, cats, fish). The true distribution for this training instance is as follows:

P(cat) = 0.0, P(dog) = 1.0, P(fish) = 0.0

We can interpret this "true" distribution to mean that the training instance has 0% probability of being class A, 100% probability of being class B, and 0% probability of being class C. Now, suppose our machine learning algorithm predicts the following probability distribution:

P(cat) = 0.2, P(dog) = 0.3, P(fish) = 0.5

How close is the predicted distribution to the true distribution? That is what the cross-entropy loss function determines. We can use this formula:

E(W, b) = –Σ(i=1 to m) yi log(pi)

where (y) is the target probability, (p) is the predicted probability, and (m) is the number of classes. The sum is over the three classes: cat, dog, and fish. In this case, the loss is 1.2:

E = –(0.0 · log(0.2) + 1.0 · log(0.3) + 0.0 · log(0.5)) = 1.2

So that is how "wrong" or "far away" our prediction is from the true distribution. Let's do this one more time, just to show how the loss changes when the network makes better predictions. In the previous example, we showed the network an image of a dog, and it predicted that the image was 30% likely to be a dog, which was very far from the target prediction. In later iterations, the network learns some patterns and gets the predictions a little better, up to 50%:

P(cat) = 0.3, P(dog) = 0.5, P(fish) = 0.2

Then we calculate the loss again:

E = –(0.0 · log(0.3) + 1.0 · log(0.5) + 0.0 · log(0.2)) = 0.69

You see that when the network makes a better prediction (dog is up to 50% from 30%), the loss decreases from 1.2 to 0.69. In the ideal case, when the network predicts that the image is 100% likely to be a dog, the cross-entropy loss will be 0 (feel free to try the math). To calculate the cross-entropy error across all the training examples (n), we use this general formula:

E(W, b) = –Σ(i=1 to n) Σ(j=1 to m) yij log(pij)

NOTE It is important to note that you will not be doing these calculations by hand. Understanding how things work under the hood gives you better intuition when you are designing your neural network.
In DL projects, we usually use libraries like TensorFlow, PyTorch, and Keras, where the error function is generally a parameter choice.

2.5.6 A final note on errors and weights
As mentioned before, in order for the neural network to learn, it needs to minimize the error function as much as possible (0 is ideal). The lower the error, the higher the accuracy of the model in predicting values. How do we minimize the error? Let's look at the following perceptron example, with a single input, to understand the relationship between the weight and the error: a single input (x) connected by a weight (w) to a node that produces the output f(x).

Suppose the input x = 0.3, and its label (goal prediction) y = 0.8. The prediction output (ŷ) of this perceptron is calculated as follows:

ŷ = w · x = w · 0.3

And the error, in its simplest form, is calculated by comparing the prediction ŷ and the label y:

error = |ŷ – y| = |(w · x) – y| = |w · 0.3 – 0.8|

If you look at this error function, you will notice that the input (x) and the goal prediction (y) are fixed values. They will never change for these specific data points. The only two variables that we can change in this equation are the error and the weight. Now, if we want to get to the minimum error, which variable can we play with? Correct: the weight! The weight acts as a knob that the network needs to adjust up and down until it gets the minimum error. This is how the network learns: by adjusting the weights. When we plot the error function with respect to the weight, we get the graph shown in figure 2.25. As mentioned before, we initialize the network with random weights. The weight lies somewhere on this curve, and our mission is to make it descend the curve to its optimal value with the minimum error. The process of finding the goal weights of the neural network happens by adjusting the weight values in an iterative process using an optimization algorithm.
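The error measures from this section can be sketched in plain Python. This is illustrative code, not the book's; the natural logarithm is assumed, which reproduces the worked cross-entropy values 1.2 and 0.69.

```python
import math

def mse(y_hat, y):
    # Mean squared error: average of squared residuals.
    return sum((p - t) ** 2 for p, t in zip(y_hat, y)) / len(y)

def mae(y_hat, y):
    # Mean absolute error: average of absolute residuals.
    return sum(abs(p - t) for p, t in zip(y_hat, y)) / len(y)

def cross_entropy(y, p):
    # E = -sum(y_i * log(p_i)) over the classes; y is the true distribution.
    return -sum(t * math.log(q) for t, q in zip(y, p))

# The dog/cat/fish example: true distribution vs. two successive predictions.
true_dist = [0.0, 1.0, 0.0]
print(cross_entropy(true_dist, [0.2, 0.3, 0.5]))  # ≈ 1.20
print(cross_entropy(true_dist, [0.3, 0.5, 0.2]))  # ≈ 0.69
```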
Figure 2.25 The network learns by adjusting weight. When we plot the error function with respect to weight, we get this type of graph.

2.6 Optimization algorithms
Training a neural network involves showing the network many examples (a training dataset); the network makes predictions through feedforward calculations and compares them with the correct labels to calculate the error. Finally, the neural network needs to adjust the weights (on all edges) until it gets the minimum error value, which means maximum accuracy. Now, all we need to do is build algorithms that can find the optimum weights for us.

2.6.1 What is optimization?
Ahh, optimization! A topic that is dear to my heart, and dear to every machine learning engineer (mathematicians, too). Optimization is a way of framing a problem to maximize or minimize some value. The best thing about computing an error function is that doing so turns the neural network into an optimization problem where our goal is to minimize the error. Suppose you want to optimize your commute from home to work. First, you need to define the metric that you are optimizing (the error function). Maybe you want to optimize the cost of the commute, or the time, or the distance. Then, based on that specific loss function, you work on minimizing its value by changing some parameters. Changing the parameters to minimize (or maximize) a value is called optimization. If you choose the loss function to be the cost, maybe you will choose a longer commute that takes two hours, or (hypothetically) you might walk for five hours to minimize the cost. On the other hand, if you want to optimize the time spent commuting, maybe you will spend $50 to take a cab that decreases the commute time to 20 minutes.
Based on the loss function you defined, you can start changing your parameters to get the results you want.

TIP In neural networks, optimizing the error function means updating the weights and biases until we find the optimal weights, or the best values for the weights to produce the minimum error.

Let's look at the space that we are trying to optimize. In a neural network of the simplest form, a perceptron with one input, we have only one weight. We can easily plot the error (which we are trying to minimize) with respect to this weight, represented by the 2D curve in figure 2.26 (repeated from earlier). But what if we have two weights? If we graph all the possible values of the two weights, we get a 3D plane of the error (figure 2.27). What about more than two weights? Your network will probably have hundreds or thousands of weights (because each edge in your network has its own weight value). Since we humans can only visualize a maximum of 3 dimensions, it is impossible for us to visualize error graphs when we have 10 weights, not to mention hundreds or thousands of weight parameters. So, from this point on, we will study the error function using the 2D or 3D plane of the error. In order to optimize the model, our goal is to search this space to find the best weights that will achieve the lowest possible error.

Why do we need an optimization algorithm? Can't we just brute-force through a lot of weight values until we get the minimum error?
Suppose we used a brute-force approach where we just tried a lot of different possible weights (say 1,000 values) and found the weight that produced the minimum error. Could that work? Well, theoretically, yes.
This approach might work when we have very few inputs and only one or two neurons in our network. Let me try to convince you that this approach wouldn't scale.

Figure 2.26 The error function with respect to its weight for a single perceptron is a 2D curve.

Figure 2.27 Graphing all possible values of two weights gives a 3D error plane.

Let's take a look at a scenario with a very simple neural network. Suppose we want to predict house prices based on only four features (inputs) and one hidden layer of five neurons (see figure 2.28). As you can see, we have 20 edges (weights) from the input to the hidden layer, plus 5 weights from the hidden layer to the output prediction, totaling 25 weight variables that need to be adjusted for optimum values. To brute-force our way through a simple neural network of this size, trying 1,000 different values for each weight, we would have a total of 10^75 combinations:

1,000 × 1,000 × . . . × 1,000 (25 times) = 1,000^25 = 10^75 combinations

Figure 2.28 If we want to predict house prices based on only four features (inputs) and one hidden layer of five neurons, we'll have 20 edges (weights) from the input to the hidden layer, plus 5 weights from the hidden layer to the output prediction.

Let's say we were able to get our hands on the fastest supercomputer in the world: Sunway TaihuLight, which operates at a speed of 93 petaflops ⇒ 93 × 10^15 floating-point operations per second (FLOPs).
In the best-case scenario, this supercomputer would need

10^75 / (93 × 10^15) = 1.08 × 10^58 seconds = 3.42 × 10^50 years

That is a huge number: it's longer than the universe has existed. Who has that kind of time to wait for the network to train? Remember that this is a very simple neural network that usually takes a few minutes to train using smart optimization algorithms. In the real world, you will be building more complex networks that have thousands of inputs and tens of hidden layers, and you will be required to train them in a matter of hours (or days, or sometimes weeks). So we have to come up with a different approach to find the optimal weights.

Hopefully I have convinced you that brute-forcing through the optimization process is not the answer. Now, let's study the most popular optimization algorithm for neural networks: gradient descent. Gradient descent has several variations: batch gradient descent (BGD), stochastic gradient descent (SGD), and mini-batch gradient descent (MB-GD).

2.6.2 Batch gradient descent

The general definition of a gradient (also known as a derivative) is that it is the function that tells you the slope or rate of change of the line that is tangent to the curve at any given point. It is just a fancy term for the slope or steepness of the curve (figure 2.29). Gradient descent simply means updating the weights iteratively to descend the slope of the error curve until we get to the point with minimum error. Let's take a look at the error function that we introduced earlier with respect to the weights. At the initial weight point, we calculate the derivative of the error function to get the slope (direction) of the next step.
We keep repeating this process to take steps down the curve until we reach the minimum error (figure 2.30).

Figure 2.29 A gradient is the function that describes the rate of change of the line that is tangent to a curve at any given point.

HOW DOES GRADIENT DESCENT WORK?
To visualize how gradient descent works, let's plot the error function in a 3D graph (figure 2.31) and go through the process step by step. The random initial weight (starting weight) is at point A, and our goal is to descend this error mountain to the goal w1 and w2 weight values, which produce the minimum error value. The way we do that is by taking a series of steps down the curve until we get the minimum error. In order to descend the error mountain, we need to determine two things for each step:

- The step direction (gradient)
- The step size (learning rate)

Figure 2.30 Gradient descent takes incremental steps to descend the error function.

Figure 2.31 The random initial weight (starting weight) is at point A. We descend the error mountain to the w1 and w2 weight values that produce the minimum error value.

THE DIRECTION (GRADIENT)
Suppose you are standing on the top of the error mountain at point A. To get to the bottom, you need to determine the step direction that results in the deepest descent (has the steepest slope). And what is the slope, again? It is the derivative of the curve.
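The loop just described (compute the slope at the current weight, step downhill, repeat) can be sketched in a few lines of Python. The cost curve J(w) = (w - 13)^2, the starting weight, and the learning rate here are all invented for illustration:

```python
# Toy 1D cost curve J(w) = (w - 13)**2 with its minimum at w = 13.
def slope(w):
    return 2 * (w - 13)        # derivative dJ/dw: the step direction

w = 0.0                        # starting weight
alpha = 0.1                    # learning rate (step size)

for _ in range(100):
    w = w - alpha * slope(w)   # take one step down the curve

print(round(w, 4))             # 13.0: we have reached the goal weight
```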
So if you are standing on top of that mountain, you need to look at all the directions around you and find out which direction will result in the deepest descent (1, 2, 3, or 4, for example). Let's say it is direction 3; we choose that way. This brings us to point B, and we restart the process (calculate feedforward and error) and find the direction of deepest descent, and so forth, until we get to the bottom of the mountain. This process is called gradient descent. By taking the derivative of the error with respect to the weight (dE/dw), we get the direction that we should take.

Now there's one thing left. The gradient only determines the direction. How large should the size of the step be? It could be a 1-foot step or a 100-foot jump. This is what we need to determine next.

THE STEP SIZE (LEARNING RATE α)
The learning rate is the size of each step the network takes when it descends the error mountain, and it is usually denoted by the Greek letter alpha (α). It is one of the most important hyperparameters that you tune when you train your neural network (more on that later). A larger learning rate means the network will learn faster (since it is descending the mountain with larger steps), and smaller steps mean slower learning. Well, this sounds simple enough. Let's use large learning rates and complete the neural network training in minutes instead of waiting for hours. Right? Not quite. Let's take a look at what could happen if we set a very large learning rate value.

In figure 2.32, you are starting at point A. When you take a large step in the direction of the arrow, instead of descending the error mountain, you end up at point B, on the other side. Then another large step takes you to C, and so forth. The error will keep oscillating and will never descend. We will talk more later about tuning the learning rate and how to determine if the error is oscillating.
But for now, you need to know this: if you use a very small learning rate, the network will eventually descend the mountain and will get to the minimum error. But this training will take longer (maybe weeks or months). On the other hand, if you use a very large learning rate, the network might keep oscillating and never train. So we usually initialize the learning rate value to 0.1 or 0.01 and see how the network performs, and then tune it further.

Figure 2.32 Setting a very large learning rate causes the error to oscillate and never descend.

PUTTING DIRECTION AND STEP TOGETHER
By multiplying the direction (derivative) by the step size (learning rate), we get the change of the weight for each step:

Δwi = –α (dE/dwi)

We add the minus sign because the derivative always calculates the slope in the upward direction. Since we need to descend the mountain, we go in the opposite direction of the slope:

wnext-step = wcurrent + Δw

Calculus refresher: Calculating the partial derivative
The derivative is the study of change. It measures the steepness of a curve at a particular point on a graph. It looks like mathematics has given us just what we are looking for. On the error graph, we want to find the steepness of the curve at the exact weight point. Thank you, math!

[A plot of f(x) = x^2 illustrates the gradient at x = 2: we want to find the steepness of the curve at the exact weight point.]

Other terms for derivative are slope and rate of change.
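To see the oscillation problem concretely, here is the same kind of toy descent run with three different learning rates (the cost J(w) = w^2 is again invented; its minimum is at w = 0):

```python
# Gradient descent on J(w) = w**2, whose derivative is dJ/dw = 2w,
# run with small, well-chosen, and oversized learning rates.
def descend(alpha, steps=20, w=5.0):
    for _ in range(steps):
        w = w - alpha * (2 * w)
    return w

print(descend(0.01))   # small alpha: slow but steady progress toward 0
print(descend(0.4))    # well-chosen alpha: converges quickly to ~0
print(descend(1.1))    # too large: the weight overshoots farther every step
```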
If the error function is denoted as E(x), then the derivative of the error function with respect to the weight is denoted as

(d/dw) E(x), or just dE/dw

This formula shows how much the total error will change when we change the weight. Luckily, mathematicians created some rules for us to calculate the derivative. Since this is not a mathematics book, we will not discuss the proof of the rules. Instead, we will start applying these rules at this point to calculate our gradient. Here are the basic derivative rules:

- Constant rule: (d/dx)(c) = 0
- Constant multiple rule: (d/dx)[c f(x)] = c f'(x)
- Power rule: (d/dx)(x^n) = n x^(n–1)
- Sum rule: (d/dx)[f(x) + g(x)] = f'(x) + g'(x)
- Difference rule: (d/dx)[f(x) – g(x)] = f'(x) – g'(x)
- Product rule: (d/dx)[f(x)g(x)] = f(x)g'(x) + g(x)f'(x)
- Quotient rule: (d/dx)[f(x)/g(x)] = [g(x)f'(x) – f(x)g'(x)] / [g(x)]^2
- Chain rule: (d/dx) f(g(x)) = f'(g(x)) g'(x)

Let's take a look at a simple function to apply the derivative rules:

f(x) = 10x^5 + 4x^7 + 12x

We can apply the power, constant-multiple, and sum rules to get df/dx, also denoted as f'(x):

f'(x) = 50x^4 + 28x^6 + 12

To get an intuition of what this means, let's plot f(x) and use this simple function to apply the derivative rules. To get the slope at any point, we can compute f'(x) at that point. So f'(2) gives us the slope of the tangent line at x = 2, and f'(6) gives the slope of the tangent line at x = 6. Get it?

For a last example of derivatives, let's apply the power rule to calculate the derivative of the sigmoid function:

σ'(x) = (d/dx) [1 / (1 + e^(–x))]
      = (d/dx) (1 + e^(–x))^(–1)
      = –(1 + e^(–x))^(–2) · (–e^(–x))
      = e^(–x) / (1 + e^(–x))^2
      = [1 / (1 + e^(–x))] · [e^(–x) / (1 + e^(–x))]
      = σ(x) · (1 – σ(x))

Note that you don't need to memorize the derivative rules, nor do you need to calculate the derivatives of the functions yourself. Thanks to the awesome DL community, we have great libraries that will compute these functions for you in just one line of code. But it is valuable to understand how things are happening under the hood. If you want to write out the derivative of the sigmoid activation function in code, it will look like this:

import numpy as np

def sigmoid(x):
    return 1/(1+np.exp(-x))

def sigmoid_derivative(x):
    return sigmoid(x) * (1 - sigmoid(x))

PITFALLS OF BATCH GRADIENT DESCENT
Gradient descent is a very powerful algorithm to get to the minimum error. But it has two major pitfalls. First, not all cost functions look like the simple bowls we saw earlier. There may be holes, ridges, and all sorts of irregular terrain that make reaching the minimum error very difficult. Consider figure 2.33, where the error function is a little more complex and has ups and downs.

Figure 2.33 Complex error functions are represented by more complex curves with many local minima values. Our goal is to reach the global minimum value.

Remember that during weight initialization, the starting point is randomly selected. What if the starting point of the gradient descent algorithm is as shown in this figure? The error will start descending the small mountain on the right and will indeed reach a minimum value. But this minimum value, called the local minimum, is not the lowest possible error value for this error function. It is the minimum value for the local mountain where the algorithm randomly started. Instead, we want to get to the lowest possible error value, the global minimum.

Second, batch gradient descent uses the entire training set to compute the gradients at every step. Remember this loss function?
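You can sanity-check these rule-based derivatives numerically. Here the hand-derived f'(x) from the example above is compared against a finite-difference slope:

```python
# f(x) = 10x^5 + 4x^7 + 12x and its rule-based derivative f'(x).
def f(x):
    return 10 * x**5 + 4 * x**7 + 12 * x

def f_prime(x):
    return 50 * x**4 + 28 * x**6 + 12

# Numerical slope at x = 2 via a central finite difference
h = 1e-6
numeric = (f(2 + h) - f(2 - h)) / (2 * h)

print(f_prime(2))       # 2604
print(round(numeric))   # 2604: matches the derivative rules
```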
L(W, b) = (1/N) Σ (i = 1 to N) (ŷi – yi)^2

This means that if your training set (N) has 100,000,000 (100 million) records, the algorithm needs to sum over 100 million records just to take one step. That is computationally very expensive and slow. And this is why this algorithm is also called batch gradient descent: it uses the entire training data in one batch. One possible approach to solving these two problems is stochastic gradient descent. We'll take a look at SGD in the next section.

2.6.3 Stochastic gradient descent

In stochastic gradient descent, the algorithm randomly selects data points and goes through the gradient descent one data point at a time (figure 2.34). This provides many different weight starting points and descends all the mountains to calculate their local minima. Then the minimum of all these local minima is the global minimum. This sounds very intuitive; that is the concept behind the SGD algorithm. Stochastic is just a fancy word for random.

Figure 2.34 The stochastic gradient descent algorithm randomly selects data points across the curve and descends all of them to find the local minima.

Stochastic gradient descent is probably the most-used optimization algorithm for machine learning in general and for deep learning in particular. While gradient descent measures the loss and gradient over the full training set to take one step toward the minimum, SGD randomly picks one instance in the training set for each step and calculates the gradient based only on that single instance.
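To make "sum over the entire training set" concrete, here is that loss computed for a made-up linear model ŷ = w·x + b on a large synthetic dataset; batch gradient descent pays this full-pass cost for every single weight update:

```python
import numpy as np

# Synthetic data: 1,000,000 samples from y = 3x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
y = 3 * x + 1 + rng.normal(scale=0.1, size=x.shape)

def batch_loss(w, b):
    """L(w, b) = (1/N) * sum((y_hat_i - y_i)^2) over ALL N samples."""
    y_hat = w * x + b                # predictions for every record
    return np.mean((y_hat - y) ** 2)

# Good weights score near the noise floor; bad weights score much worse.
print(batch_loss(3.0, 1.0))   # roughly 0.01 (the noise variance)
print(batch_loss(0.0, 0.0))   # much larger
```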
Let's take a look at the pseudocode of both GD and SGD to get a better understanding of the differences between these algorithms:

GD (batch):
1. Take all the data.
2. Compute the gradient.
3. Update the weights and take a step down.
4. Repeat for n number of epochs (iterations).
Result: a smooth path for GD down the error curve.

Stochastic GD:
1. Randomly shuffle the samples in the training set.
2. Pick one data instance.
3. Compute the gradient.
4. Update the weights and take a step down.
5. Pick another data instance.
6. Repeat for n number of epochs (training iterations).
Result: an oscillating path for SGD down the error curve.

Because we take a step after we compute the gradient for the entire training data in batch GD, you can see that the path down the error is smooth and almost a straight line. In contrast, due to the stochastic (random) nature of SGD, the path toward the global cost minimum is not direct but may zigzag if we visualize the cost surface in a 2D space. That is because in SGD, every iteration tries to better fit just a single training example, which makes it a lot faster but does not guarantee that every step takes us a step down the curve. It will arrive close to the global minimum and, once it gets there, it will continue to bounce around, never settling down. In practice, this isn't a problem, because ending up very close to the global minimum is good enough for most practical purposes. SGD almost always performs better and faster than batch GD.

2.6.4 Mini-batch gradient descent

Mini-batch gradient descent (MB-GD) is a compromise between BGD and SGD. Instead of computing the gradient from one sample (SGD) or all samples (BGD), we divide the training set into mini-batches from which to compute the gradient (a common mini-batch size is k = 256). MB-GD converges in fewer iterations than BGD because we update the weights more frequently; and MB-GD lets us use vectorized operations over each mini-batch, which typically results in a computational performance gain over SGD.
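The SGD pseudocode above maps almost line for line onto code. This sketch fits a single made-up weight (true value 2.5) with one-sample updates; the data and the learning rate are invented for illustration:

```python
import numpy as np

# One epoch of stochastic gradient descent for the toy model y_hat = w * x.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 2.5 * x                              # true weight is 2.5

w = 0.0
alpha = 0.05                             # learning rate

for i in rng.permutation(len(x)):        # shuffle, then pick one instance at a time
    y_hat = w * x[i]                     # feedforward on a single sample
    grad = 2 * (y_hat - y[i]) * x[i]     # gradient of (y_hat - y)^2 w.r.t. w
    w = w - alpha * grad                 # update the weight, take a step down

print(round(w, 3))                       # lands very near the true weight 2.5
```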
2.6.5 Gradient descent takeaways

There is a lot going on here, so let's sum it up, shall we? Here is how gradient descent is summarized in my head:

- There are three types: batch, stochastic, and mini-batch.
- All follow the same concept:
  - Find the direction of the steepest slope: the derivative of the error with respect to the weight (dE/dwi).
  - Set the learning rate (or step size). The algorithm will compute the slope, but you will set the learning rate as a hyperparameter that you will tune by trial and error.
  - Start the learning rate at 0.01, and then go down to 0.001, 0.0001, 0.00001. The lower you set your learning rate, the more certain you are to descend to the minimum error (if you train for an infinite time). Since we don't have infinite time, 0.01 is a reasonable start, and then we go down from there.
- Batch GD updates the weights after computing the gradient of all the training data. This can be computationally very expensive when the data is huge. It doesn't scale well.
- Stochastic GD updates the weights after computing the gradient of a single instance of the training data. SGD is faster than BGD and usually reaches very close to the global minimum.
- Mini-batch GD is a compromise between batch and stochastic, using neither all the data nor a single instance. Instead, it takes a group of training instances (called a mini-batch), computes the gradient on them and updates the weights, and then repeats until it processes all the training data. In most cases, MB-GD is a good starting point.
  - batch_size is a hyperparameter that you will tune. This will come up again in the hyperparameter-tuning section in chapter 4. But typically, you can start experimenting with batch_size = 32, 64, 128, 256.
  - Don't confuse batch_size with epochs. An epoch is a full cycle over all the training data.
The batch is the number of training samples in the group for which we are computing the gradient. For example, if we have 1,000 samples in our training data and set batch_size = 256, then epoch 1 = batch 1 (256 samples) plus batch 2 (256 samples) plus batch 3 (256 samples) plus batch 4 (232 samples).

Finally, you need to know that a lot of variations of gradient descent have been developed over the years, and this is a very active area of research. Some of the most popular enhancements are

- Nesterov accelerated gradient
- RMSprop
- Adam
- Adagrad

Don't worry about these optimizers now. In chapter 4, we will discuss tuning techniques to improve your optimizers in more detail. I know that was a lot, but stay with me. These are the main things I want you to remember from this section:

- How gradient descent works (slope plus step size)
- The difference between batch, stochastic, and mini-batch GD
- The GD hyperparameters that you will tune: learning rate and batch_size

If you've got this covered, you are good to move to the next section. And don't worry a lot about hyperparameter tuning. I'll cover network tuning in more detail in coming chapters and in almost all the projects in this book.

2.7 Backpropagation

Backpropagation is the core of how neural networks learn.
Up until this point, you learned that training a neural network typically happens by the repetition of the following three steps:

1. Feedforward: get the linear combination (weighted sum), and apply the activation function to get the output prediction (ŷ):

   ŷ = σ(W(3) · σ(W(2) · σ(W(1) · x)))

2. Compare the prediction with the label to calculate the error or loss function:

   E(W, b) = (1/N) Σ (i = 1 to N) |ŷi – yi|

3. Use a gradient descent optimization algorithm to compute the Δw that optimizes the error function:

   Δwi = –α (dE/dwi)

Then backpropagate the Δw through the network to update the weights:

   Wnew = Wold – α (∂Error/∂W)

where Wnew is the new weight, Wold is the old weight, α is the learning rate, and ∂Error/∂W is the derivative of the error with respect to the weight.

In this section, we will dive deeper into the final step: backpropagation.

2.7.1 What is backpropagation?

Backpropagation, or the backward pass, means propagating derivatives of the error with respect to each specific weight from the last layer (output) back to the first layer (inputs) to adjust the weights. By propagating the Δw backward from the prediction node (ŷ) all the way through the hidden layers and back to the input layer, the weights get updated:

   wnext-step = wcurrent + Δw

This will take the error one step down the error mountain. Then the cycle starts again (steps 1 to 3) to update the weights and take the error another step down, until we get to the minimum error.

Backpropagation might sound clearer when we have only one weight: we simply adjust the weight by adding the Δw to the old weight, wnew = w – α (dE/dw). But it gets complicated when we have a multilayer perceptron (MLP) network with many weight variables.
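The three-step cycle can be written out for the smallest possible case: a single sigmoid neuron with one weight. The input, label, starting weight, and learning rate here are all invented for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x, y = 2.0, 1.0          # one training example: input and label
w = -1.0                 # starting weight
alpha = 0.5              # learning rate

for _ in range(200):
    y_hat = sigmoid(w * x)                               # 1. feedforward
    error = (y_hat - y) ** 2                             # 2. compare with the label
    dE_dw = 2 * (y_hat - y) * y_hat * (1 - y_hat) * x    # 3. dE/dw via the chain rule
    w = w + (-alpha * dE_dw)                             # backpropagate: w_new = w + Δw

print(round(float(sigmoid(w * x)), 3))   # prediction pushed close to the label 1.0
```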
To make this clearer, consider the scenario in figure 2.35.

Figure 2.35 Backpropagation becomes complicated when we have a multilayer perceptron (MLP) network with many weight variables.

How do we compute the change of the total error with respect to w1,3? Remember that dE/dw1,3 basically says, "How much will the total error change when we change the parameter w1,3?" We learned how to compute dE/dw2,1 by applying the derivative rules on the error function. That is straightforward because w2,1 is directly connected to the error function. But to compute the derivatives of the total error with respect to the weights all the way back to the input, we need a calculus rule called the chain rule. Figure 2.36 shows how backpropagation uses the chain rule to flow the gradients in the backward direction through the network.

Calculus refresher: Chain rule in derivatives
Back again to calculus. Remember the derivative rules that we listed earlier? One of the most important rules is the chain rule. Let's dive deep into it to see how it is implemented in backpropagation:

   Chain rule: (d/dx) f(g(x)) = f'(g(x)) g'(x)

The chain rule is a formula for calculating the derivatives of functions that are composed of functions inside other functions. It is also called the outside-inside rule:

   (d/dx) f(g(x)) = derivative of the outside function (evaluated at the inside function) × derivative of the inside function

The chain rule says, "When composing functions, the derivatives just multiply." That is going to be very useful for us when implementing backpropagation, because feedforwarding is just composing a bunch of functions, and backpropagation is taking the derivative at each piece of this function.
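The "derivatives just multiply" claim is easy to verify numerically. For a hypothetical composition E = f(g(x)) with f(u) = u^2 and g(x) = sin(x):

```python
import math

x = 0.7
# Chain rule: dE/dx = f'(g(x)) * g'(x) = 2*sin(x) * cos(x)
outside = 2 * math.sin(x)            # derivative of the outside function at g(x)
inside = math.cos(x)                 # derivative of the inside function
chain = outside * inside             # "the derivatives just multiply"

# Finite-difference check of the same derivative
h = 1e-6
E = lambda t: math.sin(t) ** 2
numeric = (E(x + h) - E(x - h)) / (2 * h)

print(abs(chain - numeric) < 1e-6)   # True: both give the same slope
```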
To implement the chain rule in backpropagation, all we are going to do is multiply a bunch of partial derivatives to get the effect of the error all the way back to the input. Here is how it works. First, remember that our goal is to propagate the error backward all the way to the input layer. So in the following example, where the signal flows x → A → B → E, we want to calculate dE/dx, which is the effect of the total error on the input (x):

   dE/dx = (dE/dB) · (dB/dA) · (dA/dx)

All we do here is multiply the upstream gradient by the local gradient all the way until we get to the target value.

Now let's apply the chain rule to calculate the derivative of the error with respect to the third weight on the first input, w1,3(1), where the (1) means layer 1, and w1,3 means node number 1 and weight number 3:

   dE/dw1,3(1) = (dE/dw2,1(4)) × (dw2,1(4)/dw2,2(3)) × (dw2,2(3)/dw3,2(2)) × (dw3,2(2)/dw1,3(1))

The equation might look complex at the beginning, but all we are doing really is multiplying the partial derivatives of the edges, starting from the output node all the way backward to the input node. All the notation is what makes this look complex, but once you understand how to read w1,3(1), the backward-pass equation reads like this:

   The error backpropagated to the edge w1,3(1) = effect of the error on edge 4 · effect on edge 3 · effect on edge 2 · effect on the target edge

There you have it. That is the backpropagation technique used by neural networks to update the weights to best fit the problem.

Figure 2.36 Backpropagation uses the chain rule to flow gradients back through the network.
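Putting the whole backward pass together, here is a minimal NumPy sketch of backpropagation through one hidden layer. The architecture (3 inputs, 4 hidden sigmoid units, 1 output), the data, and all hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = rng.normal(size=(1, 3))          # one training example
y = np.array([[1.0]])                # its label

W1 = 0.5 * rng.normal(size=(3, 4))   # layer 1 weights (input -> hidden)
W2 = 0.5 * rng.normal(size=(4, 1))   # layer 2 weights (hidden -> output)
alpha = 0.5

for _ in range(300):
    # Forward pass (composing functions)
    h = sigmoid(x @ W1)
    y_hat = sigmoid(h @ W2)
    # Backward pass (multiplying local derivatives, output -> input)
    d_out = 2 * (y_hat - y) * y_hat * (1 - y_hat)   # dE/d(output pre-activation)
    dW2 = h.T @ d_out                               # gradient for layer 2 weights
    d_hid = (d_out @ W2.T) * h * (1 - h)            # error flowed back to the hidden layer
    dW1 = x.T @ d_hid                               # gradient for layer 1 weights
    # Gradient descent updates
    W2 -= alpha * dW2
    W1 -= alpha * dW1

print(y_hat.item() > 0.9)            # the prediction has been driven toward the label
```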
2.7.2 Backpropagation takeaways

- Backpropagation is a learning procedure for neurons.
- Backpropagation repeatedly adjusts the weights of the connections in the network to minimize the cost function (the difference between the actual output vector and the desired output vector).
- As a result of the weight adjustments, hidden layers come to represent important features other than the features represented in the input layer.
- For each layer, the goal is to find a set of weights that ensures that for each input vector, the output vector produced is the same as (or close to) the desired output vector. The difference in values between the produced and desired outputs is called the error function.
- The backward pass (backpropagation; figure 2.37) starts at the end of the network, backpropagates or feeds the errors back, recursively applies the chain rule to compute gradients all the way to the inputs of the network, and then updates the weights.

To reiterate, the goal of a typical neural network problem is to discover a model that best fits our data. Ultimately, we want to minimize the cost or loss function by choosing the best set of weight parameters.

Figure 2.37 The forward pass calculates the output prediction (left). The backward pass passes the derivative of the error backward to update the weights (right).

Summary

- Perceptrons work fine for datasets that can be separated by one straight line (linear operation).
- Nonlinear datasets that cannot be modeled by a straight line need a more complex neural network that contains many neurons. Stacking neurons in layers creates a multilayer perceptron.
- The network learns by the repetition of three main steps: feedforward, calculate error, and optimize weights.
- Parameters are variables that are updated by the network during the training process, like weights and biases. These are tuned automatically by the model during training.
- Hyperparameters are variables that you tune, such as the number of layers, activation functions, loss functions, optimizers, early stopping, and learning rate. We tune these before training the model.

Convolutional neural networks

This chapter covers
- Classifying images using MLPs
- Working with the CNN architecture to classify images
- Understanding convolution on color images

Previously, we talked about artificial neural networks (ANNs), also known as multilayer perceptrons (MLPs), which are basically layers of neurons stacked on top of each other that have learnable weights and biases. Each neuron receives some inputs, which are multiplied by their weights, with nonlinearity applied via activation functions. In this chapter, we will talk about convolutional neural networks (CNNs), which are considered an evolution of the MLP architecture that performs a lot better with images. The high-level layout of this chapter is as follows:

1. Image classification with MLP: We will start with a mini project to classify images using an MLP topology and examine how a regular neural network architecture processes images. You will learn about the MLP architecture's drawbacks when processing images and why we need a new, creative neural network architecture for this task.
2. Understanding CNNs: We will explore convolutional networks to see how they extract features from images and classify objects. You will learn about the three main components of CNNs: the convolutional layer, the pooling layer, and the fully connected layer.
Then we will apply this knowledge in another mini project to classify images with CNNs.
3. Color images: We will compare how computers see color images versus grayscale images, and how convolution is implemented over color images.
4. Image classification project: We will apply all that you learn in this chapter in an end-to-end image classification project to classify color images with CNNs.

The basic concepts of how the network learns and optimizes parameters are the same with both MLPs and CNNs:

- Architecture: MLPs and CNNs are composed of layers of neurons stacked on top of each other. CNNs have different structures (convolutional versus fully connected layers), as we are going to see in the coming sections.
- Weights and biases: In convolutional and fully connected layers, inference works the same way. Both have weights and biases that are initially randomly generated, and their values are learned by the network. The main difference between them is that the weights in MLPs are in a vector form, whereas in convolutional layers, weights take the form of convolutional filters or kernels.
- Hyperparameters: As with MLPs, when we design CNNs we will always specify the error function, activation function, and optimizer. All the hyperparameters explained in the previous chapters remain the same; we will add some new ones that are specific to CNNs.
- Training: Both networks learn the same way. First they perform a forward pass to get predictions; second, they compare the prediction with the true label to get the loss function (y – ŷ); and finally, they optimize parameters using gradient descent, backpropagate the error to all the weights, and update their values to minimize the loss function.

Ready? Let's get started!

3.1 Image classification using MLP

Let's recall the MLP architecture from chapter 2. Neurons are stacked in layers on top of each other, with weight connections.
The MLP architecture consists of an input layer, one or more hidden layers, and an output layer (figure 3.1). This section uses what you know about MLPs from chapter 2 to solve an image classification problem using the MNIST dataset. The goal of this classifier will be to classify images of digits from 0 to 9 (10 classes). To begin, let's look at the three main components of our MLP architecture: the input layer, the hidden layers, and the output layer.

Figure 3.1 The MLP architecture consists of layers of neurons connected by weight connections.

3.1.1 Input layer

When we work with 2D images, we need to preprocess them into something the network can understand before feeding them to the network. First, let's see how computers perceive images. In figure 3.2, we have an image 28 pixels wide × 28 pixels high. This image is seen by the computer as a 28 × 28 matrix, with pixel values ranging from 0 to 255 (0 for black, 255 for white, and the range in between for grayscale).
Figure 3.2 The computer sees this image as a 28 × 28 matrix of pixel values ranging from 0 to 255.

Since MLPs only take as input 1D vectors with dimensions (1, n), they cannot take a raw 2D image matrix with dimensions (x, y). To fit the image in the input layer, we first need to transform our image into one large vector with the dimensions (1, n) that contains all the pixel values in the image. This process is called image flattening. In this example, the total number (n) of pixels in this image is 28 × 28 = 784.
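That flattening step is a one-liner outside of any framework. Here is a NumPy sketch with fake pixel data (this is exactly what the Keras Flatten layer will do for us inside the model):

```python
import numpy as np

# A fake 28 x 28 grayscale image: one byte per pixel, values 0-255.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)

# Flatten the 2D matrix into one long (1, 784) row vector.
flat = image.reshape(1, -1)      # -1 lets NumPy infer 28 * 28 = 784

print(image.shape)               # (28, 28)
print(flat.shape)                # (1, 784)
```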
Then, in order to feed this image to our network, we need to flatten the 28 × 28 matrix into one long vector with dimensions (1, 784). The input vector looks like this:

x = [row1, row2, row3, ..., row28]

This means the input layer in this example will have a total of 784 nodes: x1, x2, ..., x784. Here is how we flatten an input image in Keras:

from keras.models import Sequential   # As before, imports the Keras library
from keras.layers import Flatten      # Imports a layer called Flatten to convert the image matrix into a vector

model = Sequential()                  # Defines the model
model.add(Flatten(input_shape=(28, 28)))  # Adds the Flatten layer, also known as the input layer

Visualizing input vectors
To help visualize the flattened input vector, let's look at a much smaller (4, 4) matrix, with its pixels labeled x1 through x16, row by row:

Row 1:  x1   x2   x3   x4
Row 2:  x5   x6   x7   x8
Row 3:  x9   x10  x11  x12
Row 4:  x13  x14  x15  x16

The input (x) is a flattened vector with the dimensions (1, 16). So, if we have pixel values of 0 for black and 255 for white, the input vector will be as follows:

Input = [0, 255, 255, 255, 0, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0]

The Flatten layer in Keras handles this process for us. It takes the 2D image matrix input and converts it into a 1D vector. Note that the Flatten layer must be supplied the shape of the input image as a parameter. Now the image is ready to be fed to the neural network. What's next? Hidden layers.

3.1.2 Hidden layers
As discussed in the previous chapter, a neural network can have one or more hidden layers (technically, as many as you want). Each layer has one or more neurons (again, as many as you want). Your main job as a neural network engineer is to design these layers. For the sake of this example, let's say you decide to arbitrarily design the network with two hidden layers, each having 512 nodes, and don't forget to add the ReLU activation function for each hidden layer.
As in the previous chapter, let's add two fully connected (also known as dense) layers, using Keras:

from keras.layers import Dense            # Imports the Dense layer

model.add(Dense(512, activation='relu'))  # Adds two Dense layers
model.add(Dense(512, activation='relu'))  # with 512 nodes each

3.1.3 Output layer
The output layer is pretty straightforward. In classification problems, the number of nodes in the output layer should equal the number of classes that you are trying to detect. In this problem, we are classifying 10 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), so we need to add one last Dense layer that contains 10 nodes:

model.add(Dense(10, activation='softmax'))

Choosing an activation function
In chapter 2, we discussed the different types of activation functions in detail. As a DL engineer, you will often have many choices to make when building your network; choosing the activation function that is most suitable for the problem you are solving is one of them. While there is no single best answer that fits all problems, in most cases the ReLU function performs best in the hidden layers; and for most classification problems where classes are mutually exclusive, softmax is generally a good choice in the output layer. The softmax function gives us the probability that the input image depicts one of the (n) classes.

3.1.4 Putting it all together
When we put all these layers together, we get a neural network like the one in figure 3.3.
Here is how it looks in Keras:

from keras.models import Sequential        # Imports the Keras library
from keras.layers import Flatten, Dense    # Imports the Flatten and Dense layers

model = Sequential()                       # Defines the neural network architecture
model.add(Flatten(input_shape=(28, 28)))   # Adds the Flatten (input) layer
model.add(Dense(512, activation='relu'))   # Adds 2 hidden layers with 512 nodes each. Using the ReLU
model.add(Dense(512, activation='relu'))   # activation function is recommended in hidden layers.
model.add(Dense(10, activation='softmax')) # Adds 1 output Dense layer with 10 nodes. Softmax is recommended in the output layer for multiclass classification problems.
model.summary()                            # Prints a summary of the model architecture

When you run this code, you will see the model summary printed as shown in figure 3.4. You can see that the output of the Flatten layer is a vector with 784 nodes, as discussed before, since we have 784 pixels in each 28 × 28 image. As designed, the hidden layers produce 512 nodes each; and, finally, the output layer (dense_3) produces a layer with 10 nodes.

Figure 3.3 The neural network we create by combining the input layer (784 nodes), two hidden layers (512 nodes each), and the output layer (10 nodes, where each output node gives the probability that the image depicts a 0, a 1, ..., or a 9).

The Param # field represents the number of parameters (weights) produced at each layer. These are the weights that will be adjusted and learned during the training process. They are calculated as follows:

1 Params after the flatten layer = 0, because this layer only flattens the image to a vector for feeding into the input layer. No weights have been added yet.
2 Params after layer 1 = (784 nodes in the input layer) × (512 nodes in hidden layer 1) + (512 connections to biases) = 401,920.
3 Params after layer 2 = (512 nodes in hidden layer 1) × (512 nodes in hidden layer 2) + (512 connections to biases) = 262,656.
4 Params after layer 3 = (512 nodes in hidden layer 2) × (10 nodes in the output layer) + (10 connections to biases) = 5,130.
5 Total params in the network = 401,920 + 262,656 + 5,130 = 669,706.

This means that in this tiny network, we have a total of 669,706 parameters (weights and biases) that the network needs to learn and whose values it needs to tune to optimize the error function. This is a huge number for such a small network. You can see how this number would grow out of control if we added more nodes and layers or used bigger images. This is one of the two major drawbacks of MLPs that we will discuss next.

MLPs vs. CNNs
If you train the example MLP on the MNIST dataset, you will get pretty good results (close to 96% accuracy, compared to 99% with CNNs). But MLPs and CNNs do not usually yield comparable results. The MNIST dataset is special because it is very clean and perfectly preprocessed: all images have the same size and are centered in a 28 × 28 pixel grid, and the dataset contains only grayscale images. The task would be much harder if the images had color or if the digits were skewed or not centered.

Figure 3.4 The model summary:

Layer (type)          Output Shape    Param #
flatten_1 (Flatten)   (None, 784)     0
dense_1 (Dense)       (None, 512)     401920
dense_2 (Dense)       (None, 512)     262656
dense_3 (Dense)       (None, 10)      5130

Total params: 669,706
Trainable params: 669,706
Non-trainable params: 0

3.1.5 Drawbacks of MLPs for processing images
We are nearly ready to talk about the topic of this chapter: CNNs. But first, let's discuss the two major problems in MLPs that convolutional networks are designed to fix.
If you try the example MLP architecture with a slightly more complex dataset like CIFAR-10, as we will do in the project at the end of this chapter, the network will perform very poorly (around 30–40% accuracy). It performs even worse with more complex datasets. On messy real-world image data, CNNs truly outshine MLPs.

SPATIAL FEATURE LOSS
Flattening a 2D image into a 1D vector input loses the spatial features of the image. As we saw in the mini project earlier, before feeding an image to the hidden layers of an MLP, we must flatten the image matrix to a 1D vector. This means throwing away all the 2D information contained in the image. Treating an input as a simple vector of numbers with no special structure might work well for 1D signals, but in 2D images it leads to information loss, because the network doesn't relate the pixel values to each other when trying to find patterns. MLPs have no knowledge of the fact that these pixel numbers were originally spatially arranged in a grid and are connected to each other. CNNs, on the other hand, do not require a flattened image. We can feed the raw image matrix of pixels to a CNN, and the CNN will understand that pixels that are close to each other are more heavily related than pixels that are far apart.

Let's oversimplify things to see the importance of spatial features in an image. Suppose we are trying to teach a neural network to identify the shape of a square, and suppose the pixel value 1 is white and 0 is black. When we draw a white square on a black background, the matrix looks like figure 3.5:

1 1 0 0
1 1 0 0
0 0 0 0
0 0 0 0

Figure 3.5 If the pixel value 1 is white and 0 is black, this is what our matrix looks like for identifying a square.

Since MLPs take a 1D vector as input, we have to flatten the 2D image to a 1D vector. The input vector of figure 3.5 looks like this:

Input vector = [1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

When the training is complete, the network will learn to identify a square only when the input nodes x1, x2, x5, and x6 are fired. But what happens when we have new images with square shapes located in different areas of the image, as shown in figure 3.6? The MLP will have no idea that these are squares, because the network didn't learn the square shape as a feature. Instead, it learned the input nodes that, when fired, might indicate a square. If we want our network to learn squares, we need a lot of square shapes located everywhere in the image. You can see how this solution won't scale for complex problems.

Figure 3.6 Square shapes in different areas of the image.

Here is another example of feature learning: if we want to teach a neural network to recognize cats, then ideally we want the network to learn all the shapes of cat features (ears, nose, eyes, and so on) regardless of where they appear in the image. This only happens when the network looks at the image as a set of pixels that, when close to each other, are heavily related. The mechanism of how CNNs learn will be explained in detail in this chapter, but figure 3.7 shows how the network learns features throughout its layers.

Figure 3.7 CNNs learn the image features through their layers: earlier layers learn simple features like curves and edges, while later layers learn more complex features like an ear, nose, or eye, leading to the output prediction "Cat."

FULLY CONNECTED (DENSE) LAYERS
MLPs are composed of dense layers that are fully connected to each other. Fully connected means every node in one layer is connected to all nodes of the previous layer and all nodes in the next layer.
In this scenario, each neuron in a layer has one trainable parameter (weight) per neuron in the previous layer. While this is not a big problem for the MNIST dataset, because its images are really small (28 × 28), what happens when we try to process larger images? For example, an image with dimensions 1,000 × 1,000 yields 1 million parameters for each node in the first hidden layer. So if the first hidden layer has 1,000 neurons, this yields 1 billion parameters, even in such a small network. You can imagine the computational complexity of optimizing 1 billion parameters after only the first layer. This number increases drastically when we have tens or hundreds of layers. It can get out of control pretty fast and will not scale.

CNNs, on the other hand, use locally connected layers, as figure 3.8 shows: nodes are connected to only a small subset of the previous layer's nodes. Locally connected layers use far fewer parameters than densely connected layers, as you will see.

Figure 3.8 (Left) A fully connected neural network, where all neurons are connected to all pixels of the image. (Right) A locally connected network, where only a subset of pixels is connected to each neuron. These subsets are called sliding windows.

WHAT DOES IT ALL MEAN?
The loss of information caused by flattening a 2D image matrix to a 1D vector, and the computational complexity of fully connected layers with larger images, suggest that we need an entirely new way of processing image input, one where the 2D information is not entirely lost. This is where convolutional networks come in. CNNs accept the full image matrix as input, which significantly helps the network understand the patterns contained in the pixel values.
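The parameter arithmetic behind this drawback is easy to check. The following sketch (my own illustration, not the book's code) recomputes the parameter counts for the MNIST MLP from section 3.1.4 and then scales the same arithmetic up to a 1,000 × 1,000 image:

```python
def dense_params(n_in, n_out):
    # A fully connected layer: one weight per (input node, output node)
    # pair, plus one bias per output node.
    return n_in * n_out + n_out

# The MNIST MLP from section 3.1: 784 -> 512 -> 512 -> 10
mnist_total = (dense_params(784, 512)
               + dense_params(512, 512)
               + dense_params(512, 10))
print(mnist_total)  # 669706, matching the model summary

# Scaling up: a flattened 1,000 x 1,000 image feeding a
# 1,000-neuron first hidden layer.
big_first_layer = dense_params(1_000_000, 1000)
print(big_first_layer)  # 1000001000 -- over a billion parameters in one layer
```

The second number is the scaling problem in a nutshell: the cost of a dense layer grows with the pixel count of the image, which is exactly what locally connected (convolutional) layers avoid.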
3.2 CNN architecture
Regular neural networks contain multiple layers that allow each layer to find successively more complex features, and CNNs work the same way. The first convolutional layer learns basic features (edges and lines); the next layer learns features that are a little more complex (circles, squares, and so on); the following layer finds even more complex features (like parts of a face, a car wheel, or dog whiskers); and so on. You will see this demonstrated shortly. For now, know that the CNN architecture follows the same pattern as other neural networks: we stack neurons in hidden layers on top of each other; weights are randomly initialized and learned during network training; and we apply activation functions, calculate the error (y – ŷ), and backpropagate the error to update the weights. This process is the same. The difference is that we use convolutional layers instead of regular fully connected layers for the feature-learning part.

3.2.1 The big picture
Before we look at the CNN architecture in detail, let's back up for a moment to see the big picture (figure 3.9). Remember the image classification pipeline we discussed in chapter 1? Before deep learning (DL), we used to manually extract features from images and then feed the resulting feature vector to a classifier (a regular ML algorithm like an SVM). With the magic that neural networks provide, we can replace the manual work of step 3 in figure 3.9 with a neural network (MLP or CNN) that does both feature learning and classification (steps 3 and 4).

Figure 3.9 The image classification pipeline consists of four components: (1) input data (images, or video image frames); (2) preprocessing to get the data ready (standardize images, color transformation, and more); (3) feature extraction, to find distinguishing information about the image; and (4) an ML model that learns from the extracted features to predict and classify objects.

We saw earlier, in the digit-classification project, how to use an MLP to learn features and classify an image (steps 3 and 4 together). It turned out that our issue with fully connected layers was not the classification part; fully connected layers do that very well. Our issue was in the way fully connected layers process the image to learn features. Let's get a little creative: we'll keep what's working and modify what's not. The fully connected layers aren't doing a good job of feature extraction (step 3), so let's replace them there with locally connected layers (convolutional layers). On the other hand, fully connected layers do a great job of classifying the extracted features (step 4), so let's keep them for the classification part. The high-level architecture of a CNN thus looks like figure 3.10:

Input layer
Convolutional layers for feature extraction
Fully connected layers for classification
Output prediction

Remember, we are still talking about the big picture; we will dive into each of these components soon. In figure 3.10, suppose we are building a CNN to classify images into two classes: the digits 3 and 7. Look at the figure, and follow along with these steps:

1 Feed the raw image to the convolutional layers.
2 The image passes through the CNN layers to detect patterns and extract features called feature maps. The output of this step is then flattened to a vector of the learned features of the image. Notice that the image dimensions shrink after each layer, and the number of feature maps (the layer depth) increases, until we have a long array of small features in the last layer of the feature-extraction part.
Conceptually, you can think of this step as the neural network learning to represent more abstract features of the original image.

Figure 3.10 The CNN architecture consists of the following: input layer; convolutional layers for feature extraction, producing feature maps that are then flattened; fully connected layers for classification; and the output prediction (here, the digits 3 and 7).

3 The flattened feature vector is fed to the fully connected layers to classify the extracted features of the image.
4 The neural network fires the node that represents the correct prediction of the image. Note that in this example, we are classifying two classes (3 and 7), so the output layer will have two nodes: one to represent the digit 3, and one for the digit 7.

DEFINITION The basic idea of neural networks is that neurons learn features from the input. In CNNs, a feature map is the output of one filter applied to the previous layer. It is called a feature map because it is a mapping of where a certain kind of feature is found in the image. CNNs look for features such as straight lines, edges, or even objects. Whenever they spot these features, they report them to the feature map. Each feature map looks for something specific: one could be looking for straight lines, another for curves.

3.2.2 A closer look at feature extraction
You can think of the feature-extraction step as breaking a large image into smaller pieces of features and stacking them into a vector. For example, an image of the digit 3 is one image (depth = 1) and is broken into smaller images that contain specific features of the digit 3 (figure 3.11). If it is broken into four features, then the depth equals 4.
As the image passes through the CNN layers, it shrinks in dimensions, and the layer gets deeper because it contains more images of smaller features. Note that this is just a metaphor to help visualize the feature-extraction process. CNNs don't literally break an image into pieces. Instead, they extract meaningful features that separate the object from other images in the training set and stack them in an array of features.

Figure 3.11 An image is broken into smaller images that contain distinctive features.

3.2.3 A closer look at classification
After feature extraction is complete, we add fully connected layers (a regular MLP) to look at the features vector and say, "The first feature (top) has what looks like an edge: this could be a 3, or a 7, or maybe an ugly 2. I'm not sure; let's look at the second feature. Hmm, this is definitely not a 7, because it has a curve," and so on, until the MLP is confident that the image is the digit 3.

How CNNs learn patterns
It is important to note that a CNN doesn't go from the image input to the features vector directly in one layer. This usually happens over tens or hundreds of layers, as you will see later in this chapter. The feature-learning process happens step by step, after each hidden layer. So the first layer usually learns very basic features like lines and edges; the second assembles those lines into recognizable shapes, corners, and circles; and then, in the deeper layers, the network learns more complex shapes such as human hands, eyes, ears, and so on. For example, here is a simplified version of how CNNs learn faces.
A simplified version of how CNNs learn faces: the early layers detect patterns in the image to learn low-level features like edges, and the later layers detect patterns within patterns to learn more complex features like parts of the face (low-level features overlap to form mid-level features, which in turn overlap to form high-level features), then patterns within patterns within patterns, and so on:

Input image + Layer 1 ⇒ patterns
            + Layer 2 ⇒ patterns within patterns
            + Layer 3 ⇒ patterns within patterns within patterns
            ... and so on

This concept will come in handy when we discuss more advanced CNN architectures in later chapters. For now, know that in neural networks, we stack hidden layers to learn patterns from each other until we have an array of meaningful features that identify the image.

3.3 Basic components of a CNN
Without further ado, let's discuss the main components of a CNN architecture. There are three main types of layers that you will see in almost every convolutional network (figure 3.12):

1 Convolutional layer (CONV)
2 Pooling layer (POOL)
3 Fully connected layer (FC)

CNN text representation
The text representation of the architecture in figure 3.12 goes like this:

INPUT ⇒ CONV ⇒ RELU ⇒ POOL ⇒ CONV ⇒ RELU ⇒ POOL ⇒ FC ⇒ SOFTMAX

Note that the ReLU and softmax activation functions are not really standalone layers; they are the activation functions used in the previous layer. The reason they are shown this way in the text representation is to call out that the CNN designer uses the ReLU activation function in the convolutional layers and the softmax activation in the fully connected layer. So this represents a CNN architecture that contains two convolutional layers plus one fully connected layer. You can add as many convolutional and fully connected layers as you see fit.
The convolutional layers are for feature learning or extraction, and the fully connected layers are for classification.

Figure 3.12 The basic components of convolutional networks are convolutional layers and pooling layers, which perform feature extraction, and fully connected layers, which perform classification. In this example, a 28 × 28 input passes through a convolution with a 5 × 5 kernel (giving 24 × 24), 2 × 2 max pooling (12 × 12), another 5 × 5 convolution (8 × 8), and 2 × 2 max pooling (4 × 4); the result is flattened and fed to fully connected layers whose softmax output gives a probability for each digit class, P(one) through P(nine).

Now that we've seen the full architecture of a convolutional network, let's dive deeper into each of the layer types to get a deeper understanding of how they work. Then, at the end of this section, we will put them all back together.

3.3.1 Convolutional layers
A convolutional layer is the core building block of a convolutional neural network. Convolutional layers act like a feature-finder window that slides over the image pixel by pixel to extract meaningful features that identify the objects in the image.

WHAT IS CONVOLUTION?
In mathematics, convolution is an operation on two functions that produces a third, modified function. In the context of CNNs, the first function is the input image, and the second function is the convolutional filter. We perform some mathematical operations to produce a modified image with new pixel values. Let's zoom in on the first convolutional layer to see how it processes an image (figure 3.13). By sliding the convolutional filter over the input image, the network breaks the image into little chunks and processes those chunks individually to assemble the modified image, called a feature map.
Keeping this diagram in mind, here are some facts about convolution filters:

- The small 3 × 3 matrix in the middle of figure 3.13 is the convolution filter, also called a kernel.
- The kernel slides over the original image pixel by pixel and performs some math calculations to get the values of the new "convolved" image on the next layer.
- The area of the image that the filter convolves is called the receptive field (see figure 3.14).

Figure 3.13 A 3 × 3 convolutional filter sliding over the input image. The filter

  -1  0  1
  -2  0  2
  -1  0  1

is applied to the current 3 × 3 receptive field

   3  0  1
   2  6  2
   2  4  1

giving the destination pixel of the convolved image:
(–1 × 3) + (0 × 0) + (1 × 1) + (–2 × 2) + (0 × 6) + (2 × 2) + (–1 × 2) + (0 × 4) + (1 × 1) = –3

What are the kernel values? In CNNs, the convolution matrix is the weights. This means the kernel values are also randomly initialized and learned by the network, so you will not have to worry about assigning them.

CONVOLUTIONAL OPERATIONS
The math should look familiar from our discussion of MLPs. Remember how we multiplied the input by the weights and summed them all together to get the weighted sum?

weighted sum = x1 · w1 + x2 · w2 + x3 · w3 + ... + xn · wn + b

We do the same thing here, except that in CNNs, the neurons and weights are structured in a matrix shape. So we multiply each pixel in the receptive field by the corresponding pixel in the convolution filter and sum them all together to get the value of the center pixel in the new image (figure 3.15). This is the same matrix dot product we saw in chapter 2:

(93 × –1) + (139 × 0) + (101 × 1) + (26 × –2) + (252 × 0) + (196 × 2) + (135 × –1) + (240 × 0) + (48 × 1) = 243

The filter (or kernel) slides over the whole image.
Each time, we multiply every corresponding pixel element-wise and then add them all together to create a new image with new pixel values. This convolved image is called a feature map or activation map.

Figure 3.14 The kernel slides over the original image pixel by pixel and calculates the convolved image on the next layer. The convolved area is called the receptive field.

Figure 3.15 Multiplying each pixel in the receptive field by the corresponding pixel in the convolution filter and summing them gives the value of the center pixel in the new image.

Applying filters to learn features
Let's not lose sight of the initial goal: we are doing all this so the network extracts features from the image. How does applying filters lead toward this goal? In image processing, filters are used to filter out unwanted information or to amplify features in an image. These filters are matrices of numbers that are convolved with the input image to modify it. Look at this edge-detection filter:

K =   0  -1   0
     -1   4  -1
      0  -1   0

When this kernel (K) is convolved with the input image F(x, y), it creates a new convolved image (a feature map) that amplifies the edges: input image * K = convolved image (feature map).

We are basically done with the concept of a filter. That is all there is to it! Now let's take a look at the convolutional layer as a whole: each convolutional layer contains one or more convolutional filters.
The number of filters in each convolutional layer determines the depth of the next layer, because each filter produces its own feature map (convolved image). Let's look at the convolutional layers in Keras to see how they work:

from keras.layers import Conv2D

model.add(Conv2D(filters=16, kernel_size=2, strides=1, padding='same',
                 activation='relu'))

(continued)
To understand how the convolution happens, let's zoom in on a small piece of the image. The calculation below computes the value of one pixel; we compute the values of all the pixels by sliding the kernel over the input image pixel by pixel and applying the same convolution process. These kernels are often called weights because they determine how important a pixel is in forming the new output image. Similar to what we discussed about MLPs and weights, these weights represent the importance of a feature in the output; in images, the input features are the pixel values.

Other filters can be applied to detect different types of features. For example, some filters detect horizontal edges, others detect vertical edges, and still others detect more complex shapes like corners, and so on. The point is that these filters, when applied in the convolutional layers, yield the feature-learning behavior we discussed earlier: first they learn simple features like edges and straight lines, and later layers learn more complex features.

Applying the edge-detection kernel at one position of the input image, a 3 × 3 window whose rows are [120, 140, 120], [225, 220, 205], and [225, 250, 230]:

(0 × 120) + (–1 × 140) + (0 × 120) + (–1 × 225) + (4 × 220) + (–1 × 205) + (0 × 225) + (–1 × 250) + (0 × 230) = 60

The new value of the middle pixel in the convolved image is 60.
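Both worked examples in this section can be reproduced with a few lines of NumPy. This sketch (not from the book; the patch values are read off the worked examples) applies the element-wise multiply-and-sum to the 3 × 3 receptive field from figure 3.13 and to the edge-kernel window above:

```python
import numpy as np

def convolve_at(patch, kernel):
    # Element-wise multiply the receptive field by the kernel and sum:
    # the "weighted sum" described in the text (strictly, a cross-correlation,
    # which is what CNN libraries actually compute).
    return int(np.sum(patch * kernel))

# The 3 x 3 receptive field and vertical-edge kernel from figure 3.13.
patch = np.array([[3, 0, 1],
                  [2, 6, 2],
                  [2, 4, 1]])
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])
print(convolve_at(patch, kernel))  # -3

# The edge-detection kernel K and the 3 x 3 window from the example above.
patch2 = np.array([[120, 140, 120],
                   [225, 220, 205],
                   [225, 250, 230]])
kernel2 = np.array([[0, -1, 0],
                    [-1, 4, -1],
                    [0, -1, 0]])
print(convolve_at(patch2, kernel2))  # 60
```

Sliding this same calculation across every position of the input image produces the full feature map.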
The pixel value is > 0, which means that a small edge has been detected.

And there you have it: one line of code creates the convolutional layer. We will see where this line fits in the full code later in this chapter; for now, let's stay focused on the convolutional layer. As you can see from the code, the convolutional layer takes five main arguments. As mentioned in chapter 2, it is recommended that we use the ReLU activation function in the neural network's hidden layers, so that's one argument out of the way. The remaining four hyperparameters control the size and depth of the output volume:

- Filters: the number of convolutional filters in each layer. This determines the depth of the layer's output.
- Kernel size: the size of the convolutional filter matrix. Sizes vary: 2 × 2, 3 × 3, 5 × 5, and so on.
- Strides.
- Padding.

We will discuss strides and padding in the next section; first, let's look at the number of filters and the kernel size.

NOTE As you learned in chapter 2 on deep learning, hyperparameters are the knobs you tune (increase and decrease) when configuring your neural network to improve performance.

NUMBER OF FILTERS IN THE CONVOLUTIONAL LAYER
Each convolutional layer has one or more filters. To understand this, let's review MLPs from chapter 2. Remember how we stacked neurons in hidden layers, and each hidden layer had some number (n) of neurons (also called hidden units)? Figure 3.16 shows the MLP diagram from chapter 2.

Figure 3.16 Neurons are stacked in hidden layers, and each hidden layer has n neurons (hidden units).

Similarly, with CNNs, the convolutional layers are the hidden layers.
To increase the number of neurons in the hidden layers, we increase the number of kernels in the convolutional layers. Each kernel unit is considered a neuron. For example, if we have a 3 × 3 kernel in a convolutional layer, this means we have 9 hidden units in this layer. When we add another 3 × 3 kernel, we have 18 hidden units. Add another one, and we have 27, and so on. So, by increasing the number of kernels in a convolutional layer, we increase the number of hidden units, which makes our network more complex and able to detect more complex patterns. The same was true when we added more neurons (hidden units) to the hidden layers in the MLP. Figure 3.17 provides a representation of the CNN layers that shows the number-of-kernels idea.

Figure 3.17 Representation of the CNN layers that shows the number-of-kernels idea: input image ⇒ convolution layer (C1) ⇒ feature pooling layer (P2) ⇒ convolution layer (C3) ⇒ feature pooling layer (P4) ⇒ ... ⇒ convolution layer (CN–3) ⇒ feature pooling layer (PN–2) ⇒ classification layers.

KERNEL SIZE
Remember that a convolution filter is also known as a kernel. It is a matrix of weights that slides over the image to extract features. The kernel size refers to the dimensions of the convolution filter (width times height; figure 3.18). kernel_size is one of the hyperparameters that you will set when building a convolutional layer. Like most neural network hyperparameters, there is no single best answer that fits all problems. The intuition is that smaller filters capture very fine details of the image, while bigger filters miss those minute details.

Remember that filters contain the weights that will be learned by the network. So, theoretically, the bigger the kernel_size, the deeper the network, which means the better it learns. However, this comes with higher computational complexity and might lead to overfitting.
Kernel filters are almost always square and range from the smallest at 2 × 2 to the largest at 5 × 5. Theoretically, you can use bigger filters, but this is not preferred because it results in losing important image details.

Figure 3.18 The kernel size refers to the dimensions of the convolution filter (e.g., 3 × 3 or 5 × 5).

STRIDES AND PADDING
You will usually think of these two hyperparameters together, because they both control the shape of a convolutional layer's output. Let's see how:

Strides: the amount by which the filter slides over the image. For example, to slide the convolution filter one pixel at a time, the strides value is 1. If we want to jump two pixels at a time, the strides value is 2. Strides of 3 or more are rare in practice. Jumping pixels produces spatially smaller output volumes. Strides of 1 will make the output image roughly the same height and width as the input image, while strides of 2 will make the output image roughly half the size of the input image. I say "roughly" because it depends on what you set the padding parameter to do with the edge of the image.

Tuning
I don't want you to get overwhelmed with all the hyperparameter tuning. Deep learning is an art as well as a science. I can't emphasize this enough: most of your work as a DL engineer will be spent not building the actual algorithms, but rather designing your network architecture and setting, experimenting with, and tuning your hyperparameters. A great deal of research today is focused on trying to find the optimal topologies and parameters for a CNN, given a type of problem. Fortunately, the problem of tuning hyperparameters doesn't have to be as hard as it might seem. Throughout the book, I will point out good starting values for hyperparameters and help you develop an instinct for evaluating your model and analyzing its results to know which knob (hyperparameter) you need to tune (increase or decrease).
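The "roughly" in the strides discussion follows standard convolution arithmetic. Here is a plain-Python sketch (the helper conv_output_size is illustrative, not from the book; it assumes Keras-style 'same' and 'valid' padding behavior):

```python
import math

def conv_output_size(n, kernel, stride, padding):
    # One spatial dimension (height or width) of a convolution's output.
    # 'same' pads the border so the output is ceil(n / stride);
    # 'valid' uses no padding: floor((n - kernel) / stride) + 1.
    if padding == 'same':
        return math.ceil(n / stride)
    return (n - kernel) // stride + 1

print(conv_output_size(28, 3, 1, 'same'))   # 28: strides of 1 keep the size
print(conv_output_size(28, 3, 2, 'same'))   # 14: strides of 2 halve it
print(conv_output_size(28, 3, 1, 'valid'))  # 26: no padding trims the border
```

This is why strides of 1 with padding of same preserve the input's width and height, which matters for the padding discussion that follows.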
Padding: often called zero-padding because we add zeros around the border of an image (figure 3.19). Padding is most commonly used to preserve the spatial size of the input volume so the input and output width and height are the same. This way, we can use convolutional layers without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height and width would shrink as we went to deeper layers.

NOTE The goal when using the strides and padding hyperparameters is one of two things: keep all the important details of the image and transfer them to the next layer (when the strides value is 1 and the padding value is same), or ignore some of the spatial information of the image to make the processing computationally more affordable. Note that we will add the pooling layer (discussed next) to reduce the size of the image and focus on the extracted features. For now, know that the strides and padding hyperparameters control the behavior of the convolutional layer and the size of its output: whether to pass on all image details or ignore some of them.

3.3.2 Pooling layers or subsampling
Adding more convolutional layers increases the depth of the output layer, which increases the number of parameters that the network needs to optimize (learn). You can see that adding several convolutional layers (usually tens or even hundreds) will produce a huge number of parameters (weights). This increase in network dimensionality increases the time and space complexity of the mathematical operations that take place in the learning process. This is where pooling layers come in handy. Subsampling, or pooling, helps reduce the size of the network by reducing the number of parameters passed to the next layer.
The pooling operation resizes its input by applying a summary statistical function, such as a maximum or average, to reduce the overall number of parameters passed on to the next layer.

Figure 3.19 Zero-padding adds zeros around the border of the image. Padding = 2 adds two layers of zeros around the border.

The goal of the pooling layer is to downsample the feature maps produced by the convolutional layer into a smaller number of parameters, thus reducing computational complexity. It is a common practice to add pooling layers after every one or two convolutional layers in the CNN architecture (figure 3.20).

MAX POOLING VS. AVERAGE POOLING
There are two main types of pooling layers: max pooling and average pooling. We will discuss max pooling first. Similar to convolutional kernels, max pooling kernels are windows of a certain size and strides value that slide over the image. The difference with max pooling is that the windows don't contain weights or any other values. All they do is slide over the feature map created by the previous convolutional layer and select the max pixel value in each window to pass along to the next layer, ignoring the remaining values. In figure 3.21, you see a pooling filter with a size of 2 × 2 and strides of 2 (the filter jumps 2 pixels when sliding over the image). This pooling layer reduces the feature map size from 4 × 4 down to 2 × 2:

1  9  6  4
5  4  7  8        9  8
5  1  2  9   ⇒   7  9
6  7  6  0

When we do that to all the feature maps in the convolutional layer, we get maps of smaller dimensions (width times height), but the depth of the layer is kept the same, because we apply the pooling filter to each of the feature maps produced by the previous layer's filters.
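The max pooling step in figure 3.21 can be sketched in plain Python (the helper max_pool_2x2 is illustrative, not from the book; the sample values are the 4 × 4 feature map from the figure):

```python
def max_pool_2x2(feature_map):
    # Slide a 2x2 window with strides of 2 and keep only the max of each window
    rows, cols = len(feature_map), len(feature_map[0])
    return [[max(feature_map[i][j], feature_map[i][j + 1],
                 feature_map[i + 1][j], feature_map[i + 1][j + 1])
             for j in range(0, cols, 2)]
            for i in range(0, rows, 2)]

fmap = [[1, 9, 6, 4],
        [5, 4, 7, 8],
        [5, 1, 2, 9],
        [6, 7, 6, 0]]
print(max_pool_2x2(fmap))  # [[9, 8], [7, 9]]
```

Note that the window has no weights: nothing here is learned, which is why pooling layers add no parameters.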
So if the convolutional layer has three feature maps, the output of the pooling layer will also have three feature maps, but of smaller size (figure 3.22).

Figure 3.20 Pooling layers are commonly added after every one or two convolutional layers: INPUT ⇒ CONV ⇒ POOL ⇒ CONV ⇒ POOL ⇒ ...

Figure 3.21 A 2 × 2 pooling filter with strides of 2, reducing the feature map from 4 × 4 down to 2 × 2

Figure 3.22 If the convolutional layer has three feature maps, the pooling layer's output will also have three feature maps, but smaller.

Global average pooling is a more extreme type of dimensionality reduction. Instead of setting a window size and strides, global average pooling calculates the average value of all the pixels in the feature map (figure 3.23). You can see in figure 3.24 that the global average pooling layer takes a 3D array and turns it into a vector.

WHY USE A POOLING LAYER?
As you can see from the examples we have discussed, pooling layers reduce the dimensionality of our convolutional layers. The reason it is important to reduce dimensionality is that in complex projects, CNNs contain many convolutional layers, and each has tens or hundreds of convolutional filters (kernels). Since the kernels contain the parameters (weights) that the network learns, the dimensionality of our convolutional layers can get out of control very quickly. Adding pooling layers helps keep the important features and pass them along to the next layer, while shrinking image dimensionality.

Figure 3.23 Global average pooling calculates the average value of all the pixels in a feature map (for the 4 × 4 map in figure 3.21, that average is 5).
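Global average pooling is easy to sketch in plain Python (illustrative helper, not from the book; the sample values are the 4 × 4 feature map from figure 3.21):

```python
def global_average_pool(feature_map):
    # Collapse an entire feature map into a single value: the mean of every pixel
    values = [pixel for row in feature_map for pixel in row]
    return sum(values) / len(values)

fmap = [[1, 9, 6, 4],
        [5, 4, 7, 8],
        [5, 1, 2, 9],
        [6, 7, 6, 0]]
print(global_average_pool(fmap))  # 5.0
```

Applied to each feature map independently, this is how a 4 × 4 × 3 volume becomes a 3-element vector.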
Think of pooling layers as image-compressing programs. They reduce the image resolution while keeping the image's important features (figure 3.25).

Figure 3.24 The global average pooling layer turns a 3D array into a vector (e.g., 4 × 4 × 3 becomes 1 × 1 × 3).

CONVOLUTIONAL AND POOLING LAYERS RECAP
Let's review what we have done so far. Up until this point, we used a series of convolutional and pooling layers to process an image and extract meaningful features that are specific to the images in the training dataset. To summarize how we got here:

1 The raw image is fed to the convolutional layer, which is a set of kernel filters that slide over the image to extract features.

2 The convolutional layer has the following attributes that we need to configure:

from keras.layers import Conv2D

model.add(Conv2D(filters=16, kernel_size=2, strides=1, padding='same', activation='relu'))

– filters is the number of kernel filters in each layer (the depth of the hidden layer).
– kernel_size is the size of the filter (aka kernel). Usually 2, 3, or 5.
– strides is the amount by which the filter slides over the image. A strides value of 1 or 2 is usually recommended as a good start.

Pooling vs. strides and padding
The main purpose of pooling and strides is to reduce the number of parameters in the neural network. The more parameters we have, the more computationally expensive the training process will be. Many people dislike the pooling operation and think that we can get away without it in favor of tuning strides and padding in the convolutional layer. For example, "Striving for Simplicity: The All Convolutional Net"a proposes discarding the pooling layer in favor of an architecture that consists only of repeated convolutional layers. To reduce the size of the representation, the authors suggest occasionally using larger strides in the convolutional layer.
Discarding pooling layers has also been found helpful in training good generative models, such as generative adversarial networks (GANs), which we will discuss in chapter 10. It seems likely that future architectures will feature very few to no pooling layers. But for now, pooling layers are still widely used to downsample images from one layer to the next.

a Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller, "Striving for Simplicity: The All Convolutional Net," https://arxiv.org/abs/1412.6806.

Figure 3.25 Pooling layers reduce image resolution while keeping the image's important features (original vs. downsampled).

– padding adds columns and rows of zero values around the border of the image to preserve the image size in the next layer.
– activation of relu is strongly recommended in the hidden layers.

3 The pooling layer has the following attributes that we need to configure:

from keras.layers import MaxPooling2D

model.add(MaxPooling2D(pool_size=(2, 2), strides=2))

And we keep adding pairs of convolutional and pooling layers to achieve the required depth for our "deep" neural network.

Visualize what happens after each layer
After the convolutional layers, the image (usually) keeps its width and height dimensions, but it gets deeper and deeper after each layer. Why? Remember the cutting-the-image-into-pieces-of-features analogy we mentioned earlier? That is what's happening after the convolutional layer. For example, suppose the input image is 28 × 28 (as in the MNIST dataset). When we add a CONV_1 layer (with filters of 4, strides of 1, and padding of same), the output will have the same width and height but a depth of 4 (28 × 28 × 4). Now we add a CONV_2 layer with the same hyperparameters but more filters (12), and we get a deeper output: 28 × 28 × 12.
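This shape bookkeeping can be written as a toy tracker in plain Python (hypothetical helpers, not from the book; they assume 'same' padding with strides of 1, and 2 × 2 pooling with strides of 2, which halves width and height as described next):

```python
def conv_same(shape, filters):
    # 'same' padding with strides of 1 keeps height/width; depth = number of filters
    h, w, _ = shape
    return (h, w, filters)

def pool_2x2(shape):
    # 2x2 pooling with strides of 2 halves height and width, keeps depth
    h, w, d = shape
    return (h // 2, w // 2, d)

shape = (28, 28, 1)           # MNIST-sized input
shape = conv_same(shape, 4)   # CONV_1 -> (28, 28, 4)
shape = conv_same(shape, 12)  # CONV_2 -> (28, 28, 12)
shape = pool_2x2(shape)       # POOL   -> (14, 14, 12)
print(shape)  # (14, 14, 12)
```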
After the pooling layers, the image keeps its depth but shrinks in width and height. For example: input image (28 × 28) ⇒ CONV-1 (4 kernels) ⇒ 28 × 28 × 4 ⇒ CONV-2 (12 kernels) ⇒ 28 × 28 × 12, while each POOL layer cuts the width and height down (e.g., 28 × 28 ⇒ POOL ⇒ 14 × 14).

3.3.3 Fully connected layers
After passing the image through the feature-learning process using convolutional and pooling layers, we have extracted all the features and put them in a long tube. Now it is time to use these extracted features to classify images. We will use the regular neural network architecture, the MLP, that we discussed in chapter 2.

WHY USE FULLY CONNECTED LAYERS?
MLPs work great for classification problems. The reason we used convolutional layers in this chapter is that MLPs lose a lot of valuable information when extracting features from an image, because we have to flatten the image before feeding it to the network, whereas convolutional layers can process raw images. Now we have the features extracted, and after we flatten them, we can use a regular MLP to classify them. We discussed the MLP architecture thoroughly in chapter 2: nothing new here. To reiterate, here are the fully connected layers (figure 3.26):

Input flattened vector: to feed the features tube to the MLP for classification, we flatten it to a vector with the dimensions (1, n), as illustrated in figure 3.26. For example, if the features tube has the dimensions 5 × 5 × 40, the flattened vector will be (1, 1000).

Hidden layer: we add one or more fully connected layers, and each layer has one or more neurons (similar to what we did when we built regular MLPs).

Output layer: chapter 2 recommended using the softmax activation function for classification problems involving more than two classes. In this example, we are classifying digits from 0 to 9: 10 classes.
The number of neurons in the output layer is equal to the number of classes; thus, the output layer will have 10 nodes.

Putting the convolutional and pooling layers together, we get something like this: feature maps 1 ⇒ CONV and POOL layers ⇒ feature maps 2 ⇒ CONV and POOL layers ⇒ feature maps 3, and so on. This keeps happening until, at the end, we have a long tube of small images that contain all the features in the original image. The output of the convolutional and pooling layers is a features tube (5 × 5 × 40) that is almost ready to be classified. We use 40 here as an example for the depth of the features tube, as in 40 feature maps. The last step is to flatten this tube before feeding it to the fully connected layer for classification. As discussed earlier, the flattened layer will have the dimensions (1, m), where m = 5 × 5 × 40 = 1,000 neurons.

MLPs and fully connected layers
Remember from chapter 2 that multilayer perceptrons (MLPs) are also called fully connected layers, because all the nodes from one layer are connected to all the nodes in the previous and next layers. They are also called dense layers. The terms MLP, fully connected, dense, and sometimes feedforward are used interchangeably to refer to the regular neural network architecture.

Figure 3.26 Fully connected layers for an MLP: the (5 × 5 × 40) features tube is flattened into a (1, 1000) input vector, fed through an FC layer (n = 64), and classified by a softmax output layer (n = 10) that produces probabilities P0 through P9.

3.4 Image classification using CNNs
Okay, you are now fully equipped to build your own CNN model to classify images.
For this mini project, we will use the MNIST dataset: a simple problem, but one that will help build the foundation for the more complex problems in the following chapters. (The MNIST dataset is like "Hello World" for deep learning.)

NOTE Regardless of which DL library you decide to use, the concepts are pretty much the same. You start with designing the CNN architecture in your mind or on a piece of paper, and then you begin stacking layers on top of each other and setting their parameters. Both Keras and MXNet (along with TensorFlow, PyTorch, and other DL libraries) have pros and cons that we will discuss later, but the concepts are similar. So for the rest of this book, we will be working mostly with Keras, with a little overview of other libraries here and there.

3.4.1 Building the model architecture
This is the part of your project where you define and build the CNN model architecture. To look at the full code of the project, which includes image preprocessing, training, and evaluating the model, go to the book's GitHub repo at https://github.com/moelgendy/deep_learning_for_vision_systems and open the mnist_cnn notebook, or go to the book's website: www.manning.com/books/deep-learning-for-vision-systems or www.computerVisionBook.com. At this point, we are concerned with the code that builds the model architecture.
At the end of this chapter, we will build an end-to-end image classifier and dive deeper into the other pieces:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

# Builds the model object
model = Sequential()

# CONV_1: adds a convolutional layer with ReLU activation and depth = 32 kernels
model.add(Conv2D(32, kernel_size=(3, 3), strides=1, padding='same',
                 activation='relu', input_shape=(28, 28, 1)))

# POOL_1: downsamples the image to choose the best features
model.add(MaxPooling2D(pool_size=(2, 2)))

# CONV_2: increases the depth to 64
model.add(Conv2D(64, (3, 3), strides=1, padding='same', activation='relu'))

# POOL_2: more downsampling
model.add(MaxPooling2D(pool_size=(2, 2)))

# Flatten, since there are too many dimensions; we only want a classification output
model.add(Flatten())

# FC_1: fully connected to get all relevant data
model.add(Dense(64, activation='relu'))

# FC_2: outputs a softmax to squash the matrix into output probabilities for the 10 classes
model.add(Dense(10, activation='softmax'))

# Prints the model architecture summary
model.summary()

When you run this code, you will see the model summary printed as in figure 3.27. Following are some general observations before we look at the model summary:

We need to pass the input_shape argument to the first convolutional layer only. After that, we don't need to declare the input shape to the model, since the output of the previous layer is the input of the current layer and is already known to the model.

You can see that the output of every convolutional and pooling layer is a 3D tensor of shape (None, height, width, channels). The height and width values are pretty straightforward: they are the dimensions of the image at this layer. The channels value represents the depth of the layer: the number of feature maps in each layer.
The first value in this tuple, set to None, is the number of images processed in this layer. Keras sets it to None, which means this dimension is variable and accepts any batch_size.

As you can see in the Output Shape column, as you go deeper through the network, the image dimensions shrink and the depth increases, as we discussed earlier in this chapter.

Notice the total number of params (weights) that the network needs to optimize: 220,234, compared to 669,706 for the MLP network we created earlier in this chapter. We were able to cut it down to almost a third.

Layer (type)                   Output Shape         Param #
conv2d_1 (Conv2D)              (None, 28, 28, 32)   320
max_pooling2d_1 (MaxPooling2)  (None, 14, 14, 32)   0
conv2d_2 (Conv2D)              (None, 14, 14, 64)   18496
max_pooling2d_2 (MaxPooling2)  (None, 7, 7, 64)     0
flatten_1 (Flatten)            (None, 3136)         0
dense_1 (Dense)                (None, 64)           200768
dense_2 (Dense)                (None, 10)           650

Total params: 220,234
Trainable params: 220,234
Non-trainable params: 0

Figure 3.27 The printed model summary

Let's take a look at the model summary line by line:

CONV_1: We know the input shape: (28 × 28 × 1). Look at the output shape of conv2d_1: (28 × 28 × 32). Since we set the strides parameter to 1 and padding to same, the dimensions of the input image did not change. But the depth increased to 32. Why? Because we added 32 filters in this layer. Each filter produces one feature map.

POOL_1: The input of this layer is the output of its previous layer: (28 × 28 × 32). After the pooling layer, the image dimensions shrink and the depth stays the same. Since we used a 2 × 2 pool, the output shape is (14 × 14 × 32).

CONV_2: As before, the convolutional layer increases depth and keeps the spatial dimensions. The input from the previous layer is (14 × 14 × 32). Since the filters in this layer are set to 64, the output is (14 × 14 × 64).
POOL_2: The same 2 × 2 pool, keeping the depth and shrinking the dimensions. The output is (7 × 7 × 64).

Flatten: Flattening a features tube that has dimensions of (7 × 7 × 64) converts it into a flat vector of dimensions (1, 3136).

Dense_1: We set this fully connected layer to have 64 neurons, so the output is 64.

Dense_2: This is the output layer, which we set to 10 neurons since we have 10 classes.

3.4.2 Number of parameters (weights)
Okay, now we know how to build the model and read the summary line by line to see how the image shape changes as it passes through the network layers. One important thing remains: the Param # column on the right in the model summary.

WHAT ARE THE PARAMETERS?
Parameters is just another name for weights. These are the things that your network learns. As we discussed in chapter 2, the network's goal is to update the weight values during the gradient descent and backpropagation processes until it finds the optimal parameter values that minimize the error function.

HOW ARE THESE PARAMETERS CALCULATED?
In an MLP, the layers are fully connected to each other, so the weight connections (edges) are simply calculated by multiplying the numbers of neurons in adjacent layers. In CNNs, weight calculations are not as straightforward. Fortunately, there is an equation for this:

number of params = number of filters × kernel width × kernel height × depth of the previous layer + number of filters (for biases)

Let's apply this equation in an example. Suppose we want to calculate the parameters at the second convolutional layer of the previous mini project. Here is the code for CONV_2 again:

model.add(Conv2D(64, (3, 3), strides=1, padding='same', activation='relu'))

Since we know that the depth of the previous layer is 32:

⇒ Params = 64 × 3 × 3 × 32 + 64 = 18,496

Note that the pooling layers do not add any parameters.
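Using the same formula, plus the standard fully connected rule (inputs × neurons + one bias per neuron), a plain-Python sketch can reproduce every nonzero row of the model summary (illustrative helpers, not from the book):

```python
def conv_params(filters, kernel_h, kernel_w, prev_depth):
    # One weight per kernel position and input channel, plus one bias per filter
    return filters * kernel_h * kernel_w * prev_depth + filters

def dense_params(inputs, neurons):
    # Fully connected: one weight per input-neuron pair, plus one bias per neuron
    return inputs * neurons + neurons

conv1 = conv_params(32, 3, 3, 1)       # 320
conv2 = conv_params(64, 3, 3, 32)      # 18496
dense1 = dense_params(7 * 7 * 64, 64)  # 200768
dense2 = dense_params(64, 10)          # 650

# Pooling and flatten layers add nothing, so these four rows are the whole total
print(conv1 + conv2 + dense1 + dense2)  # 220234
```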
Hence, you will see that the Param # value is 0 for the pooling layers in the model summary. The same is true for the flatten layer: no extra weights are added (figure 3.28). When we add up all the values in the Param # column, we get the total number of parameters that this network needs to optimize: 220,234.

Figure 3.28 Pooling and flatten layers don't add parameters, so Param # is 0 for pooling and flatten layers in the model summary.

TRAINABLE AND NON-TRAINABLE PARAMS
In the model summary, you will see the total number of params and, below it, the numbers of trainable and non-trainable params. The trainable params are the weights that the neural network needs to optimize during the training process. In this example, all our params are trainable (figure 3.29). In later chapters, we will talk about using a pretrained network and combining it with your own network for faster and more accurate results: in such a case, you may decide to freeze some layers because they are pretrained, so not all of the network's params will be trained. This is useful for understanding the memory and space complexity of your model before starting the training process; more on that later. For now, all our params are trainable.

Figure 3.29 All of our params are trainable and need to be optimized during training (Total params: 220,234; Trainable params: 220,234; Non-trainable params: 0).

3.5 Adding dropout layers to avoid overfitting
So far, you have been introduced to the three main layers of CNNs: convolutional, pooling, and fully connected. You will find these three layer types in almost every CNN architecture. But that's not all of them; there are additional layers that you can add to avoid overfitting.
3.5.1 What is overfitting?
The main cause of poor performance in machine learning is either overfitting or underfitting the data. Underfitting is just what the name implies: the model fails to fit the training data. This happens when the model is too simple to fit the data: for example, using one perceptron to classify a nonlinear dataset. Overfitting, on the other hand, means fitting the data too much: memorizing the training data rather than really learning its features. This happens when we build a super network that fits the training dataset perfectly (very low error during training) but fails to generalize to other data samples that it hasn't seen before. With overfitting, the network performs very well on the training dataset but poorly on the test dataset (figure 3.30).

In machine learning, we don't want to build models that are so simple that they underfit the data or so complex that they overfit it. We want to use other techniques to build a neural network that is just right for our problem. To address that, we will discuss dropout layers next.

3.5.2 What is a dropout layer?
A dropout layer is one of the most commonly used layers to prevent overfitting. Dropout turns off a percentage of the neurons (nodes) that make up a layer of your network (figure 3.31). This percentage is a hyperparameter that you tune when you build your network. By "turns off," I mean these neurons are not included in a particular forward or backward pass. It may seem counterintuitive to throw away a connection in your network, but as a network trains, some nodes can dominate others or end up making large mistakes. Dropout gives you a way to balance your network so that every node works equally toward the same goal, and if one makes a mistake, it won't dominate the behavior of your model.
You can think of dropout as a technique that makes a network resilient: it makes all the nodes work well as a team by making sure no node is too weak or too strong.

Figure 3.30 Underfitting (left): the model doesn't represent the data very well. Just right (middle): the model fits the data very well. Overfitting (right): the model fits the data too much, so it won't be able to generalize to unseen examples.

3.5.3 Why do we need dropout layers?
Neurons develop codependency on each other during training, which limits the individual power of each neuron and leads to overfitting of the training data. To really understand why dropout is effective, let's take a closer look at the MLP in figure 3.31 and think about what the nodes in each layer really represent. The first layer (far left) is the input layer that contains the input features. The second layer contains the features learned from the patterns of the previous layer when multiplied by the weights. The following layer is then patterns learned within patterns, and so on. Each neuron represents a certain feature that, when multiplied by a weight, is transformed into another feature. When we randomly turn off some of these nodes, we force the other nodes to learn patterns without relying on only one or two features, because any feature can be randomly dropped out at any point. This spreads the weights out among all the features, leading to more trained neurons. Dropout helps reduce interdependent learning among the neurons.

In that sense, it helps to view dropout as a form of ensemble learning. In ensemble learning, we train a number of weaker classifiers separately, and then we use them at test time by averaging the responses of all ensemble members.
Since each classifier has been trained separately, it has learned different aspects of the data, and their mistakes (errors) are different. Combining them helps to produce a stronger classifier, which is less prone to overfitting. Intuition An analogy that helps me understand dropout is training your biceps with a bar. When lifting a bar with both arms, we tend to rely on our stronger arm to lift a little more weight than our weaker arm. Our stronger arm will end up getting more training than the other and develop a larger muscle:Dropout Figure 3.31 Dropout turns off a percentage of the neurons that make up a network layer." deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"127 Adding dropout layers to avoid overfitting 3.5.4 Where does the dropout layer go in the CNN architecture? As you have learned in this chapter, a standard CNN consists of alternating convolu- tional and pooling layers, ending with fully connected layers. To prevent overfitting, it’s become standard practice after you flatten the image to inject a few dropout layersDropout means mixing up our workout (training) a little. We tie our right arm and train our left arm only. Then we tie the left arm and train the right arm only. Then we mix it up and go back to the bar with both arms, and so on. After some time, you will see that you have developed both of your biceps: This is exactly what happens when we train neural networks. Sometimes part of the network has very large weights and dominates all the training, while another part of the network doesn’t get much training. What dropout does is turn off some neurons and let the rest of the neurons train. 
Then, in the next epoch, it turns off other neurons, and the process continues. In the figure, one arm ended up too strong because it received too much training, while the other stayed weak because it didn't get as much training.

These dropout layers go between the fully connected layers at the end of the architecture. Why? Because dropout is known to work well in the fully connected layers of convolutional neural nets. Its effect in convolutional and pooling layers is, however, not yet well studied:

    CNN architecture: ... CONV ⇒ POOL ⇒ Flatten ⇒ DO ⇒ FC ⇒ DO ⇒ FC

Let's see how we use Keras to add a dropout layer to our previous model:

    # CNN and POOL layers
    # ...
    # ...
    model.add(Flatten())
    model.add(Dropout(rate=0.3))                 # randomly drops 30% of the units
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(rate=0.5))                 # randomly drops 50% of the units
    model.add(Dense(10, activation='softmax'))
    model.summary()

As you can see, the dropout layer takes rate as an argument: the fraction of the input units to drop. For example, if we set rate to 0.3, 30% of the neurons in this layer will be randomly dropped in each epoch. So if we have 10 nodes in a layer, 3 of these neurons will be turned off and 7 will be trained. The 3 neurons are randomly selected; in the next epoch, other randomly selected neurons are turned off, and so on. Since we do this randomly, some neurons may be turned off more than others, and some may never be turned off. This is okay, because we do it many times, so that on average each neuron gets almost the same treatment. Note that this rate is another hyperparameter that we tune when building our CNN.

3.6 Convolution over color images (3D images)
Remember from chapter 1 that computers see grayscale images as 2D matrices of pixels (figure 3.32). To a computer, the image looks like a 2D matrix of the pixels' values, which represent intensities across the color spectrum. There is no context here, just a massive pile of data.
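To make the 2D-matrix view concrete, here is a minimal NumPy sketch with a hypothetical 4 × 4 "image" (the intensity values are arbitrary, chosen only for illustration):

```python
import numpy as np

# A hypothetical 4 x 4 grayscale "image": nothing but a 2D matrix
# of intensity values (the numbers are arbitrary).
img = np.array([
    [ 8, 49, 81, 52],
    [22, 24, 32, 47],
    [24, 21, 78, 16],
    [84, 19,  4,  4],
], dtype=np.uint8)

print(img.shape)   # (4, 4) -- height x width, no color channel
print(img[0, 2])   # 81 -- the intensity of a single pixel
```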
Color images, on the other hand, are interpreted by the computer as 3D matrices with height, width, and depth. In the case of RGB images (red, green, and blue), the depth is three: one channel for each color. For example, a 28 × 28 color image will be seen by the computer as a 28 × 28 × 3 matrix. Think of this as a stack of three 2D matrices, one each for the red, green, and blue channels of the image. Each of the three matrices represents the intensity values of its color. When they are stacked, they create a complete color image (figure 3.33).

NOTE  For generalization, we represent images as a 3D array: height × width × depth. For grayscale images, depth is 1; for color images, depth is 3.

3.6.1 How do we perform a convolution on a color image?
Similar to what we did with grayscale images, we slide the convolutional kernel over the image and compute the feature maps. Now the kernel is itself three-dimensional: one depth slice for each color channel (figure 3.34).
[Figure 3.32 places a grayscale photo ("what we see") next to the grid of pixel intensity values the computer actually receives ("what computers see").]
Figure 3.32 To a computer, an image looks like a 2D matrix of pixel values.

[Figure 3.33 splits a color image into its red, green, and blue channel matrices; for example, the top-left pixel F(0, 0) = [11, 102, 35] combines a red intensity of 11, a green intensity of 102, and a blue intensity of 35.]
Figure 3.33 Color images are represented by three matrices. Each matrix represents the intensity values of its color. Stacking them creates a complete color image.
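The stacking described in figure 3.33 is easy to reproduce in NumPy. This sketch builds a hypothetical 28 × 28 RGB image whose top-left pixel matches the figure's F(0, 0) = [11, 102, 35] example:

```python
import numpy as np

# Three hypothetical 28 x 28 channel matrices (red, green, blue intensities).
red   = np.zeros((28, 28), dtype=np.uint8)
green = np.zeros((28, 28), dtype=np.uint8)
blue  = np.zeros((28, 28), dtype=np.uint8)

# Reproduce the figure's example pixel: F(0, 0) = [11, 102, 35].
red[0, 0], green[0, 0], blue[0, 0] = 11, 102, 35

# Stacking along the last axis gives the height x width x depth layout.
img = np.stack([red, green, blue], axis=-1)
print(img.shape)   # (28, 28, 3)
print(img[0, 0])   # [ 11 102  35]
```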
To perform convolution, we do the same thing we did before, except that now our sum has three times as many terms. Let's see how (figure 3.35):

- Each color channel has its own corresponding slice of the filter.
- Each filter slice slides over its channel, multiplies every corresponding pixel element-wise, and then adds the products together to compute the convolved value for that channel. This is similar to what we did previously.
- We then add the three values to get the value of a single node in the convolved image, or feature map. And don't forget to add the bias value of 1.
- Then we slide the filter over by one or more pixels (based on the strides value) and do the same thing.
- We continue this process until we have computed the values of all nodes in the feature map.

3.6.2 What happens to the computational complexity?
Note that if we pass a 3 × 3 filter over a grayscale image, we have a total of 9 parameters (weights) per filter, as already demonstrated. With color images, every filter is itself a 3D filter, so every filter has (height × width × depth) = (3 × 3 × 3) = 27 parameters. You can see how network complexity increases when processing color images, because the network has to optimize more parameters; color images also take up more memory space.

Color images contain more information than grayscale images. This can add unnecessary computational complexity and memory usage. However, color images are also genuinely useful for certain classification tasks. That's why, in some use cases, you, as a computer vision engineer, will use your judgment about whether to convert your color images to grayscale where color doesn't really matter: for many objects, color is not needed to recognize and interpret an image, and grayscale can be enough to recognize objects.
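Whichever you choose, the per-channel multiply-sum-add recipe from above can be sketched in plain NumPy. The function name and the random stand-in data are mine; the bias of 1 and the 27-weight count follow the text:

```python
import numpy as np

def conv_color(image, kernel, bias=1.0):
    # 'Valid' convolution of an (H, W, 3) image with a (kh, kw, 3) kernel,
    # stride 1: at every position, multiply element-wise across all three
    # channels, sum everything, and add the bias -- one feature-map value.
    kh, kw, _ = kernel.shape
    H, W, _ = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw, :]
            out[i, j] = np.sum(patch * kernel) + bias
    return out

rng = np.random.default_rng(0)
image  = rng.random((7, 7, 3))   # a stand-in color image
kernel = rng.random((3, 3, 3))   # one 3D filter

fmap = conv_color(image, kernel)
print(fmap.shape)    # (5, 5) -- one 2D feature map per 3D filter
print(kernel.size)   # 27 weights (3 x 3 x 3) vs. 9 for a grayscale 3 x 3 filter
```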
[Figure 3.34 shows the 3D kernel as three 3 × 3 slices of +1/–1 weights, one slice per color channel.]
Figure 3.34 We slide the convolutional kernel over the image and compute the feature maps. For a color image, the kernel is itself 3D.

In figure 3.36, you can see how patterns of light and dark in an object (intensity) can be used to define its shape and characteristics. In other applications, however, color is important for defining certain objects: for example, skin cancer detection relies heavily on skin color (red rashes). In general, when it comes to CV applications like identifying cars, people, or skin cancer, you can decide whether color information is important by thinking about your own vision. If the identification problem is easier in color for us humans, it's likely easier for an algorithm to see color images, too.

[Figure 3.35 works through one kernel position in detail: a zero-padded 7 × 7 × 3 input volume is convolved with a 3 × 3 × 3 filter w0; the red, green, and blue slices produce per-channel sums (for example, –1, 1, and 0), which are added together with a bias of +1 to give one value of the output feature map.]
Figure 3.35 Performing convolution

Note that in figure 3.35, we added only one filter (containing 3 channels), which produced one feature map. As with grayscale images, each filter we add produces its own feature map. In the CNN in figure 3.37, we have an input image of dimensions (7 × 7 × 3).
We add two convolution filters of dimensions (3 × 3). The output feature map has a depth of 2, since we added two filters, similar to what we did with grayscale images.

An important closing note on CNN architecture
I strongly recommend looking at existing architectures, since many people have already done the work of throwing things together and seeing what works. Practically speaking, unless you are working on research problems, you should start with a CNN architecture that has already been built by other people to solve problems similar to yours, and then tune it further to fit your data. In chapter 4, we will explain how to diagnose your network's performance and discuss tuning strategies to improve it. In chapter 5, we will discuss the most popular CNN architectures and examine how other researchers built them. What I want you to take from this section is, first, a conceptual understanding of how a CNN is built; and, second, that more layers lead to more neurons, which lead to more learning behavior. But this comes with computational cost, so you should always consider the size and complexity of your training data (many layers may not be necessary for a simple task).

[Figure 3.36 shows grayscale scenes labeled Bicycle, Clouds, and Pedestrian.]
Figure 3.36 Patterns of light and dark in an object (intensity) can be used to define its shape and characteristics in a grayscale image.

3.7 Project: Image classification for color images
Let's take a look at an end-to-end image classification project. In this project, we will train a CNN to classify images from the CIFAR-10 dataset (www.cs.toronto.edu/~kriz/cifar.html). CIFAR-10 is an established CV dataset used for object recognition. It is a subset of the 80 Million Tiny Images dataset(1) and consists of 60,000 (32 × 32) color

1. Antonio Torralba, Rob Fergus, and William T.
Freeman, "80 Million Tiny Images: A Large Data Set for Nonparametric Object and Scene Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence (November 2008), https://doi.org/10.1109/TPAMI.2008.128.

images containing 1 of 10 object classes, with 6,000 images per class.

[Figure 3.37 traces the full example: a zero-padded 7 × 7 × 3 input volume is convolved with two 3 × 3 × 3 filters, each with its own 1 × 1 × 1 bias (b0 and b1), producing a 3 × 3 × 2 output volume with slices o[:, :, 0] and o[:, :, 1].]
Figure 3.37 Our input image has dimensions (7 × 7 × 3), and we add two convolution filters of dimensions (3 × 3). The output feature map has a depth of 2.

Now, fire up your notebook, and let's get started.

STEP 1: LOAD THE DATASET
The first step is to load the dataset into our train and test objects. Luckily, Keras provides the CIFAR dataset for us to load using the load_data() method. All we have to do is import keras.datasets and then load the data:

    import keras
    from keras.datasets import cifar10

    # load the preshuffled train and test data
    (x_train, y_train), (x_test, y_test) = cifar10.load_data()

    import numpy as np
    import matplotlib.pyplot as plt
    %matplotlib inline

    # visualize the first 36 training images
    fig = plt.figure(figsize=(20, 5))
    for i in range(36):
        ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
        ax.imshow(np.squeeze(x_train[i]))

STEP 2: IMAGE PREPROCESSING
Based on your dataset and the problem you are solving, you will need to do some data cleanup and preprocessing to get it ready for your learning model.
A cost function has the shape of a bowl, but it can be an elongated bowl if the features have very different scales. Figure 3.38 shows gradient descent on a training set where features 1 and 2 have the same scale (on the left), and on a training set where feature 1 has much smaller values than feature 2 (on the right).

TIP  When using gradient descent, you should ensure that all features have a similar scale; otherwise, it will take much longer to converge.

[Figure 3.38 contrasts the two cases for features F1 and F2: normalized features give a uniform bowl, non-normalized features give an elongated one.]
Figure 3.38 Normalized features are on the same scale, represented by a uniform bowl (left). Non-normalized features are not on the same scale, represented by an elongated bowl (right).

Rescale the images
Rescale the input images by dividing the pixel values by 255, mapping [0, 255] to [0, 1]:

    x_train = x_train.astype('float32')/255
    x_test = x_test.astype('float32')/255

Prepare the labels (one-hot encoding)
In this chapter and throughout the book, we discuss how computers process input images by converting them into numeric values in the form of matrices of pixel intensities. But what about the labels? How are the labels understood by computers? Every image in our dataset has a specific label that explains (in text) how the image is categorized. In this particular dataset, for example, the labels are categorized by the following 10 classes: ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']. We need to convert these text labels into a form that can be processed by computers. Computers are good with numbers, so we will do something called one-hot encoding.
One-hot encoding is a process by which categorical variables are converted into a numeric form. Suppose the dataset looks like the following:

    Image      Label
    image_1    dog
    image_2    automobile
    image_3    airplane
    image_4    truck
    image_5    bird

After one-hot encoding, we have the following:

               airplane  bird  cat  deer  dog  frog  horse  ship  truck  automobile
    image_1       0       0     0    0     1    0      0     0      0        0
    image_2       0       0     0    0     0    0      0     0      0        1
    image_3       1       0     0    0     0    0      0     0      0        0
    image_4       0       0     0    0     0    0      0     0      1        0
    image_5       0       1     0    0     0    0      0     0      0        0

Luckily, Keras has a method that does just that for us:

    from keras.utils import np_utils

    num_classes = len(np.unique(y_train))

    # one-hot encode the labels
    y_train = keras.utils.to_categorical(y_train, num_classes)
    y_test = keras.utils.to_categorical(y_test, num_classes)

Split the dataset for training and validation
In addition to splitting our data into train and test datasets, it is standard practice to further split the training data into training and validation datasets (figure 3.39). Why? Because each split is used for a different purpose:

- Training dataset: the sample of data used to train the model.
- Validation dataset: the sample of data used to provide an unbiased evaluation of model fit on the training dataset while tuning model hyperparameters. The evaluation becomes more biased as skill on the validation dataset is incorporated into the model configuration.
- Test dataset: the sample of data used to provide an unbiased evaluation of the final model fit on the training dataset.
Here is the Keras code:

    # break the training set into training and validation sets
    (x_train, x_valid) = x_train[5000:], x_train[:5000]
    (y_train, y_valid) = y_train[5000:], y_train[:5000]

    # print the shape of the training set
    print('x_train shape:', x_train.shape)

    # print the number of training, validation, and test images
    print(x_train.shape[0], 'train samples')
    print(x_test.shape[0], 'test samples')
    print(x_valid.shape[0], 'validation samples')

The label matrix
One-hot encoding converts the (1 × n) label vector to a label matrix of dimensions (n × 10), where n is the number of sample images. So, if we have 1,000 images in our dataset, the label vector has dimensions (1 × 1000); after one-hot encoding, the label matrix has dimensions (1000 × 10). That's why, when we define our network architecture in the next step, we will make the output softmax layer contain 10 nodes, where each node represents the probability of one of our classes.

[Figure 3.39 shows the data bar split into Train, Validation, and Test segments.]
Figure 3.39 Splitting the data into training, validation, and test subsets

STEP 3: DEFINE THE MODEL ARCHITECTURE
You learned that the core building block of CNNs (and neural networks in general) is the layer. Most DL projects consist of stacking together simple layers that implement a form of data distillation. As you learned in this chapter, the main CNN layers are convolution, pooling, fully connected, and activation functions. How do you decide on the network architecture? How many convolutional layers should you create? How many pooling layers? In my opinion, it is very helpful to read about some of the most popular architectures (AlexNet, ResNet, Inception) and extract the key ideas leading to the design decisions.
Looking at how these state-of-the-art architectures are built, and playing with your own projects, will help you build an intuition about the CNN architecture that best suits the problem you are solving. We will discuss the most popular CNN architectures in chapter 5. Until then, here is what you need to know:

- The more layers you add, the better (at least theoretically) your network will learn, but this comes at the cost of increasing computational and memory space complexity, because it increases the number of parameters to optimize. You also face the risk of the network overfitting your training set.
- As the input image goes through the network layers, its depth increases and its dimensions (width, height) shrink, layer by layer.
- In general, two or three 3 × 3 convolutional layers followed by a 2 × 2 pooling layer can be a good start for smaller datasets. Add more convolutional and pooling layers until your image is a reasonable size (say, 4 × 4 or 5 × 5), and then add a couple of fully connected layers for classification.
- You need to set several hyperparameters (like filters, kernel_size, and padding). Remember that you do not need to reinvent the wheel: instead, look in the literature to see which hyperparameters usually work for others. Choose an architecture that worked well for someone else as a starting point, and then tune these hyperparameters to fit your situation. The next chapter is dedicated to looking at what has worked well for others.

[Figure: for an image of a cat, the CNN layers feed an FC softmax output layer (n = 10) that assigns probabilities such as P(cat) = 0.85, P(dog) = 0.09, P(horse) = 0.04, P(bird) = 0.01, P(deer) = 0.01, and 0 for the remaining classes.]

The architecture shown in figure 3.40 is called AlexNet: it's a popular CNN architecture that won the ImageNet challenge in 2012 (more details on AlexNet in chapter 5).
The AlexNet CNN architecture is composed of five convolutional and pooling layers and three fully connected layers. Let's try a smaller version of AlexNet and see how it performs on our dataset (figure 3.41). Based on the results, we might add more layers. Our architecture will stack three convolutional layers and two fully connected (dense) layers, as follows:

    CNN: INPUT ⇒ CONV_1 ⇒ POOL_1 ⇒ CONV_2 ⇒ POOL_2 ⇒ CONV_3 ⇒ POOL_3 ⇒ DO ⇒ FC ⇒ DO ⇒ FC (softmax)

Learning to work with layers and hyperparameters
I don't want you to get hung up on setting hyperparameters when building your first CNNs. One of the best ways to gain an instinct for how to put layers and hyperparameters together is to see concrete examples of how others have done it. Most of your work as a DL engineer will involve building your architecture and tuning its parameters. The main takeaways from this chapter are these:

- Understand how the main CNN layers work (convolution, pooling, fully connected, dropout) and why they exist.
- Understand what each hyperparameter does (number of filters in the convolutional layer, kernel size, strides, and padding).
- Understand, in the end, how to implement any given architecture in Keras.

If you are able to replicate this project on your own dataset, you are good to go. In chapter 5, we will review several state-of-the-art architectures and see what worked for them.
Each score will be the probability that the current image belongs to one of our 10 image classes:

[Figure 3.40 sketches the AlexNet layout: a 224 × 224 × 3 input, an 11 × 11 convolution with a stride of 4, then 5 × 5 and 3 × 3 convolutions interleaved with max pooling (feature maps of 55 × 55, 27 × 27, and 13 × 13 with channel depths of 48, 128, 192, 192, and 128 per branch), ending in dense layers of 2048, 2048, and 1000 units.]
Figure 3.40 AlexNet architecture

[Figure 3.41 shows our smaller CNN: input, conv layer 1, pooling layer 1, conv layer 2, pooling layer 2, conv layer 3, pooling layer 3 (the feature extractor), then a fully connected layer and a 10-node output layer (the classifier).]
Figure 3.41 We will build a small CNN consisting of three convolutional layers and two dense layers.

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

    model = Sequential()

    # first convolutional and pooling layers; note that we need to define
    # input_shape in the first convolutional layer only
    model.add(Conv2D(filters=16, kernel_size=2, padding='same',
                     activation='relu', input_shape=(32, 32, 3)))
    model.add(MaxPooling2D(pool_size=2))

    # second convolutional and pooling layers with a ReLU activation function
    model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))

    # third convolutional and pooling layers
    model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))

    # dropout layer with a 30% rate to avoid overfitting
    model.add(Dropout(0.3))

    # flatten the last feature map into a vector of features
    model.add(Flatten())

    # first fully connected layer
    model.add(Dense(500, activation='relu'))

    # another dropout layer with a 40% rate
    model.add(Dropout(0.4))

    # output layer: fully connected with 10 nodes and softmax activation
    # to give probabilities for the 10 classes
    model.add(Dense(10, activation='softmax'))

    # print a summary of the model architecture
    model.summary()

When you run this cell, you will see the model architecture and how the dimensions of the feature maps change with every successive layer, as illustrated in figure 3.42. We discussed previously how to understand this summary. As you can see, our model has 528,054 parameters (weights and biases) to train. We also discussed previously how this number is calculated.
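The 528,054 figure can be checked by hand: each conv filter has kernel × kernel weights per input channel plus one bias, and each dense unit has one weight per input plus one bias. A quick sketch (the helper names are mine):

```python
def conv_params(filters, kernel, in_depth):
    # each filter: kernel*kernel weights per input channel, plus 1 bias
    return filters * (kernel * kernel * in_depth + 1)

def dense_params(units, in_units):
    # each unit: one weight per input, plus 1 bias
    return units * (in_units + 1)

total = (conv_params(16, 2, 3)            # conv2d_1:    208
         + conv_params(32, 2, 16)         # conv2d_2:   2080
         + conv_params(64, 2, 32)         # conv2d_3:   8256
         + dense_params(500, 4 * 4 * 64)  # dense_1:  512500 (flattened 4 x 4 x 64 = 1024)
         + dense_params(10, 500))         # dense_2:    5010

print(total)   # 528054
```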
STEP 4: COMPILE THE MODEL
The last step before training our model is to define three more hyperparameters: a loss function, an optimizer, and the metrics to monitor during training and testing.

- Loss function: how the network measures its performance on the training data.
- Optimizer: the mechanism the network uses to update its parameters (weights and biases) to yield the minimum loss value. It is usually one of the variants of stochastic gradient descent, explained in chapter 2.
- Metrics: the list of metrics to be evaluated by the model during training and testing. Typically, we use metrics=['accuracy'].

Feel free to revisit chapter 2 for more details on the exact purpose and different types of loss functions and optimizers. Here is the code to compile the model:

    model.compile(loss='categorical_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy'])

STEP 5: TRAIN THE MODEL
We are now ready to train the network.
In Keras, this is done via a call to the network's .fit() method (as in fitting the model to the training data):

    from keras.callbacks import ModelCheckpoint

    # save the weights whenever the validation loss improves
    checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5',
                                   verbose=1, save_best_only=True)

    hist = model.fit(x_train, y_train, batch_size=32, epochs=100,
                     validation_data=(x_valid, y_valid),
                     callbacks=[checkpointer], verbose=2, shuffle=True)

When you run this cell, the training starts, and the verbose output shown in figure 3.43 displays one epoch at a time. Since 100 epochs of output do not fit on one page, the screenshot shows only the first 13 epochs; when you run this in your notebook, the output will keep going for all 100 epochs.

    Layer (type)                   Output Shape          Param #
    conv2d_1 (Conv2D)              (None, 32, 32, 16)    208
    max_pooling2d_1 (MaxPooling2)  (None, 16, 16, 16)    0
    conv2d_2 (Conv2D)              (None, 16, 16, 32)    2080
    max_pooling2d_2 (MaxPooling2)  (None, 8, 8, 32)      0
    conv2d_3 (Conv2D)              (None, 8, 8, 64)      8256
    max_pooling2d_3 (MaxPooling2)  (None, 4, 4, 64)      0
    dropout_1 (Dropout)            (None, 4, 4, 64)      0
    flatten_1 (Flatten)            (None, 1024)          0
    dense_1 (Dense)                (None, 500)           512500
    dropout_2 (Dropout)            (None, 500)           0
    dense_2 (Dense)                (None, 10)            5010

    Total params: 528,054
    Trainable params: 528,054
    Non-trainable params: 0

Figure 3.42 Model summary
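The output shapes in the summary follow a simple rule: padding='same' with stride 1 keeps the spatial size, and each 2 × 2 max pooling halves it. A quick sketch:

```python
# padding='same' with stride 1 preserves the spatial size,
# and each 2 x 2 max pooling halves it: 32 -> 16 -> 8 -> 4.
size = 32
sizes = [size]
for _ in range(3):   # the three conv + pool blocks
    size //= 2       # conv ('same') keeps the size; pooling halves it
    sizes.append(size)

print(sizes)                # [32, 16, 8, 4]
print(sizes[-1] ** 2 * 64)  # 1024 flattened features, matching the summary
```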
From epochs 1 through 6, you can see that the model saves the weights after each epoch, because the validation loss keeps improving. So at the end of each of these epochs, we save the weights that are considered the best weights so far.

    Train on 45000 samples, validate on 5000 samples
    Epoch 1/100
    Epoch 00000: val_loss improved from inf to 1.35820, saving model to model.weights.best.hdf5
    46s - loss: 1.6192 - acc: 0.4140 - val_loss: 1.3582 - val_acc: 0.5166
    Epoch 2/100
    Epoch 00001: val_loss improved from 1.35820 to 1.22245, saving model to model.weights.best.hdf5
    53s - loss: 1.2881 - acc: 0.5402 - val_loss: 1.2224 - val_acc: 0.5644
    Epoch 3/100
    Epoch 00002: val_loss improved from 1.22245 to 1.12096, saving model to model.weights.best.hdf5
    49s - loss: 1.1630 - acc: 0.5879 - val_loss: 1.1210 - val_acc: 0.6046
    Epoch 4/100
    Epoch 00003: val_loss improved from 1.12096 to 1.10724, saving model to model.weights.best.hdf5
    56s - loss: 1.0928 - acc: 0.6160 - val_loss: 1.1072 - val_acc: 0.6134
    Epoch 5/100
    Epoch 00004: val_loss improved from 1.10724 to 0.97377, saving model to model.weights.best.hdf5
    52s - loss: 1.0413 - acc: 0.6382 - val_loss: 0.9738 - val_acc: 0.6596
    Epoch 6/100
    Epoch 00005: val_loss improved from 0.97377 to 0.95501, saving model to model.weights.best.hdf5
    50s - loss: 1.0090 - acc: 0.6484 - val_loss: 0.9550 - val_acc: 0.6768
    Epoch 7/100
    Epoch 00006: val_loss improved from 0.95501 to 0.94448, saving model to model.weights.best.hdf5
    49s - loss: 0.9967 - acc: 0.6561 - val_loss: 0.9445 - val_acc: 0.6828
    Epoch 8/100
    Epoch 00007: val_loss did not improve
    61s - loss: 0.9934 - acc: 0.6604 - val_loss: 1.1300 - val_acc: 0.6376
    Epoch 9/100
    Epoch 00008: val_loss improved from 0.94448 to 0.91779, saving model to model.weights.best.hdf5
    49s - loss: 0.9858 - acc: 0.6672 - val_loss: 0.9178 - val_acc: 0.6882
    Epoch 10/100
    Epoch 00009: val_loss did not improve
    50s - loss: 0.9839 - acc: 0.6658 - val_loss: 0.9669 - val_acc: 0.6748
    Epoch 11/100
    Epoch 00010: val_loss improved from 0.91779
    to 0.91570, saving model to model.weights.best.hdf5
    49s - loss: 1.0002 - acc: 0.6624 - val_loss: 0.9157 - val_acc: 0.6936
    Epoch 12/100
    Epoch 00011: val_loss did not improve
    54s - loss: 1.0001 - acc: 0.6659 - val_loss: 1.1442 - val_acc: 0.6646
    Epoch 13/100
    Epoch 00012: val_loss did not improve
    56s - loss: 1.0161 - acc: 0.6633 - val_loss: 0.9702 - val_acc: 0.6788

Figure 3.43 The first 13 epochs of training

At epoch 7, val_loss went up to 1.1300 from 0.9445, which means it did not improve, so the network did not save the weights at this epoch. If you stopped the training now and loaded the weights from epoch 6, you would get the best results achieved so far during training. The same is true for epoch 8: val_loss decreases, so the network saves the weights as the best values. At epoch 9 there is no improvement, and so forth. If you stop your training after 12 epochs and load the best weights, the network will load the weights saved after epoch 10 (val_loss = 0.9157 and val_acc = 0.6936). This means you can expect to get accuracy on the test data close to 69%.

STEP 6: LOAD THE MODEL WITH THE BEST VAL_ACC
Now that the training is complete, we use the Keras method load_weights() to load into our model the weights that yielded the best validation accuracy score:

    model.load_weights('model.weights.best.hdf5')

Keep your eye on these common phenomena
- val_loss is oscillating. If val_loss bounces up and down, you might want to decrease the learning-rate hyperparameter. For example, if you see val_loss going from 0.8 to 0.9, to 0.7, to 1.0, and so on, this might mean that your learning rate is too high to descend the error mountain. Try decreasing the learning rate and letting the network train for a longer time.
- val_loss is not improving (underfitting).
If val_loss is not decreasing, this might mean your model is too simple to fit the data (underfitting). In that case, you may want to build a more complex model by adding more hidden layers to help the network fit the data.
- loss is decreasing, but val_loss has stopped improving. This means your network has started to overfit the training data and is failing to decrease the error on the validation data. In this case, consider using a technique to prevent overfitting, like dropout layers. There are other techniques to avoid overfitting, as we will discuss in the next chapter.

[Figure: two loss curves illustrate that a big learning rate makes val_loss oscillate, while a small learning rate descends smoothly. If val_loss oscillates, the learning rate may be too high.]

STEP 7: EVALUATE THE MODEL
The last step is to evaluate our model and calculate the accuracy value: the percentage of the time our model correctly predicts the image classification:

    score = model.evaluate(x_test, y_test, verbose=0)
    print('\n', 'Test accuracy:', score[1])

When you run this cell, you will get an accuracy of about 70%. That is not bad, but we can do a lot better. Try playing with the CNN architecture by adding more convolutional and pooling layers, and see whether you can improve your model. In the next chapter, we will discuss strategies for setting up your DL project and tuning hyperparameters to improve the model's performance. At the end of chapter 4, we will revisit this project to apply these strategies and improve the accuracy to above 90%.

Summary
- MLPs, ANNs, dense, and feedforward all refer to the regular fully connected neural network architecture that we discussed in chapter 2.
- MLPs usually work well for 1D inputs, but they perform poorly with images for two main reasons. First, they only accept feature inputs in vector form, with dimensions (1 × n). This requires flattening the image, which loses its spatial information.
Second, MLPs are composed of fully connected layers that yield millions or even billions of parameters when processing bigger images. This increases the computational complexity and does not scale for many image problems.
- CNNs shine in image processing because they take the raw image matrix as input without having to flatten the image. They are composed of locally connected layers called convolutional filters, as opposed to the MLPs' dense layers.
- CNNs are composed of three main layer types: the convolutional layer for feature extraction, the pooling layer to reduce network dimensionality, and the fully connected layer for classification.
- The main cause of poor prediction performance in machine learning is either overfitting or underfitting the data. Underfitting means that the model is too simple and fails to fit (learn) the training data. Overfitting means that the model is so complex that it memorizes the training data and fails to generalize to test data it hasn't seen before.
- A dropout layer is added to prevent overfitting. Dropout turns off a percentage of the neurons (nodes) that make up a layer of our network.

Structuring DL projects and hyperparameter tuning

This chapter concludes the first part of this book, providing a foundation for deep learning (DL). In chapter 2, you learned how to build a multilayer perceptron (MLP). In chapter 3, you learned about a neural network architecture topology that is very commonly used in computer vision (CV) problems: convolutional neural networks (CNNs). In this chapter, we will wrap up this foundation by discussing how to structure your machine learning (ML) project from start to finish. You will learn strategies to quickly and efficiently get your DL systems working, analyze the results, and improve network performance. As you might have already noticed from the previous projects, DL is a very empirical process.
It relies on running experiments and observing model performance more than on having one go-to formula for success that fits all problems. We often have an initial idea for a solution, code it up, run the experiment to see how it did, and then use the outcome of this experiment to refine our ideas.

This chapter covers
- Defining performance metrics
- Designing baseline models
- Preparing training data
- Evaluating a model and improving its performance

When building and tuning a neural network, you will find yourself making many seemingly arbitrary decisions:
- What is a good architecture to start with?
- How many hidden layers should you stack?
- How many hidden units or filters should go in each layer?
- What is the learning rate?
- Which activation function should you use?
- Which yields better results: getting more data or tuning hyperparameters?

In this chapter, you will learn the following:
- Defining the performance metrics for your system —In addition to model accuracy, you will use other metrics like precision, recall, and F-score to evaluate your network.
- Designing a baseline model —You will choose an appropriate neural network architecture to run your first experiment.
- Getting your data ready for training —In real-world problems, data comes in messy, not ready to be fed to a neural network. In this section, you will massage your data to get it ready for learning.
- Evaluating your model and interpreting its performance —When training is complete, you analyze your model's performance to identify bottlenecks and narrow down improvement options. This means diagnosing which of the network components are performing worse than expected and identifying whether poor performance is due to overfitting, underfitting, or a defect in the data.
- Improving the network and tuning hyperparameters —Finally, we will dive deep into the most important hyperparameters to help develop your intuition about which hyperparameters you need to tune. You will use tuning strategies to make incremental changes based on your diagnosis from the previous step.

TIP With more practice and experimentation, DL engineers and researchers build intuition over time about the most effective ways to make improvements. My advice is to get your hands dirty and try different architectures and approaches to develop your hyperparameter-tuning skills.

Ready? Let's get started!

4.1 Defining performance metrics
Performance metrics allow us to evaluate our system. When we develop a model, we want to find out how well it is working. The simplest way to measure the "goodness" of our model is its accuracy. The accuracy metric measures how many times our model made the correct prediction. So, if we test the model with 100 input samples and it makes the correct prediction 90 times, the model is 90% accurate. Here is the equation used to calculate model accuracy:

accuracy = correct predictions / total number of examples

4.1.1 Is accuracy the best metric for evaluating a model?
We have been using accuracy as a metric for evaluating our model in earlier projects, and it works fine in many cases. But consider the following problem: you are designing a medical diagnosis test for a rare disease. Suppose that only one in every million people has this disease. Without any training, or even building a system at all, if you hardcode the output to always be negative (no disease found), your system will achieve 99.9999% accuracy. Is that good? The accuracy might sound fantastic, but the system will never capture the patients who actually have the disease. This means the accuracy metric is not suitable for measuring the "goodness" of this model.
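As a quick sketch (my own illustration, not from the book), the accuracy metric can be computed directly from label/prediction pairs:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# 100 samples with 90 correct predictions (hypothetical labels for illustration)
y_true = [1] * 100
y_pred = [1] * 90 + [0] * 10
print(accuracy(y_true, y_pred))  # → 0.9
```

Note how the hardcoded always-negative classifier from the rare-disease example would score near-perfect accuracy under this same formula, which is exactly why accuracy alone can mislead.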
We need other evaluation metrics that measure different aspects of the model's prediction ability.

4.1.2 Confusion matrix
To set the stage for other metrics, we will use a confusion matrix: a table that describes the performance of a classification model. The confusion matrix itself is relatively simple to understand, but the related terminology can be a little confusing at first. Once you understand it, you'll find that the concept is really intuitive and makes a lot of sense. Let's go through it step by step.

The goal is to describe model performance from angles other than prediction accuracy. For example, suppose we are building a classifier to predict whether a patient is sick or healthy. The expected classifications are either positive (the patient is sick) or negative (the patient is healthy). We run our model on 1,000 patients and enter the model predictions in table 4.1.

Table 4.1 Running our model to predict healthy vs. sick patients

                                Predicted sick (positive)    Predicted healthy (negative)
Sick patients (positive)        100  True positives (TP)      30  False negatives (FN)
Healthy patients (negative)      70  False positives (FP)    800  True negatives (TN)

Let's now define the most basic terms, which are whole numbers (not rates):
- True positives (TP) —The model correctly predicted yes (the patient has the disease).
- True negatives (TN) —The model correctly predicted no (the patient does not have the disease).
- False positives (FP) —The model falsely predicted yes, but the patient actually does not have the disease (in some literature known as a Type I error, or error of the first kind).
- False negatives (FN) —The model falsely predicted no, but the patient actually does have the disease (in some literature known as a Type II error, or error of the second kind).

The patients the model predicts as negative (no disease) are the ones it believes are healthy, and we can send them home without further care. The patients the model predicts as positive (have the disease) are the ones we will send for further investigation. Which mistake would we rather make? Mistakenly diagnosing someone as positive (has the disease) and sending them for more investigation is not as bad as mistakenly diagnosing someone as negative (healthy) and sending them home at risk to their life. So for this problem we care most about the number of false negatives (FN): we want to find all the sick people, even if the model accidentally classifies some healthy people as sick. The metric that captures this is called recall.

4.1.3 Precision and recall
Recall (also known as sensitivity) tells us how many of the actually sick patients our model correctly diagnosed as sick. It is penalized every time the model incorrectly diagnoses a sick patient as negative (a false negative, FN). Recall is calculated by the following equation:

Recall = TP / (TP + FN)

Precision (also known as positive predictive value) complements recall. It tells us how many of the patients the model diagnosed as sick actually are sick. It is penalized every time the model incorrectly diagnoses a healthy patient as positive (a false positive, FP). Precision is calculated by the following equation:

Precision = TP / (TP + FP)

Identifying an appropriate metric
It is important to note that although in the health-diagnostics example we decided that recall is the better metric, other use cases require different metrics, like precision. To identify the most appropriate metric for your problem, ask yourself which of the two possible false predictions is more consequential: a false positive or a false negative.
If your answer is FP, then you are looking for precision. If FN is more significant, then recall is your answer.

4.1.4 F-score
In many cases, we want to summarize the performance of a classifier with a single metric that represents both recall and precision. To do so, we can combine precision (p) and recall (r) into a single F-score metric. In mathematics, this is called the harmonic mean of p and r:

F-score = 2pr / (p + r)

The F-score gives a good overall representation of how your model is performing. Let's take a look at the health-diagnostics example again. We agreed that this is a high-recall problem. But what if the model is doing really well on FN, giving us a high recall score, yet performing poorly on FP, giving us a low precision score? Doing poorly on FP means that, in order not to miss any sick patients, the model mistakenly diagnoses a lot of healthy patients as sick, to be on the safe side. So, while recall might be more important for this problem, it is good to look at the model through both scores, precision and recall, together.

NOTE Defining the model evaluation metric is a necessary step because it will guide your approach to improving the system. Without clearly defined metrics, it can be difficult to tell whether changes to an ML system result in progress or not.

4.2 Designing a baseline model
Now that you have selected the metrics you will use to evaluate your system, it is time to establish a reasonable end-to-end system for training your model. Depending on the problem you are solving, you need to design a baseline that suits your network type and architecture.
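Before moving on to the baseline, the metrics from section 4.1 can be tied together numerically. Using the counts from table 4.1 (TP = 100, FN = 30, FP = 70, TN = 800), a short sketch (my own illustration):

```python
tp, fn, fp, tn = 100, 30, 70, 800  # counts from table 4.1

accuracy = (tp + tn) / (tp + tn + fp + fn)
recall = tp / (tp + fn)        # sensitivity: share of sick patients found
precision = tp / (tp + fp)     # share of "sick" predictions that are right
f_score = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} recall={recall:.3f} "
      f"precision={precision:.3f} F-score={f_score:.3f}")
# accuracy is 0.900, recall ≈ 0.769, precision ≈ 0.588, F-score ≈ 0.667
```

Note how the model looks strong on accuracy (90%) but much weaker on precision, which is exactly the kind of gap the confusion-matrix metrics are designed to expose.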
Consider a spam email classifier, for example. Which of the two false predictions would you care about more: falsely classifying a non-spam email as spam, in which case it gets lost, or falsely classifying a spam email as non-spam, in which case it makes its way to the inbox? I believe you would care more about the former. You don't want the receiver to lose an email because your model misclassified it as spam. We want to catch all spam, but it is very bad to lose a non-spam email. In this example, precision is a suitable metric to use. In some applications, you might care about both precision and recall at the same time; that is where the F-score comes in, as explained next.

               Precision   Recall   F-score
Classifier A   95%         90%      92.4%
Classifier B   98%         85%      91%

In this step, you will want to answer questions like these:
- Should I use an MLP or a CNN (or an RNN, explained later in the book)?
- Should I use other object-detection techniques like YOLO or SSD (explained in later chapters)?
- How deep should my network be?
- Which activation type will I use?
- What kind of optimizer should I use?
- Do I need to add any other regularization layers, like dropout or batch normalization, to avoid overfitting?

If your problem is similar to another problem that has been studied extensively, you will do well to first copy the model and algorithm already known to perform best for that task. You can even use a model that was trained on a different dataset for your own problem without having to train it from scratch. This is called transfer learning and will be discussed in detail in chapter 6.

For example, in the last chapter's project, we used the architecture of the popular AlexNet as a baseline model. Figure 4.1 shows the architecture of the AlexNet deep CNN, with the dimensions of each layer.
The input layer is followed by five convolutional layers (CONV1 through CONV5); the output of the fifth convolutional layer is fed into two fully connected layers (FC6 and FC7), and the output layer is a fully connected layer (FC8) with a softmax function:

INPUT ⇒ CONV1 ⇒ POOL1 ⇒ CONV2 ⇒ POOL2 ⇒ CONV3 ⇒ CONV4 ⇒ CONV5 ⇒ POOL3 ⇒ FC6 ⇒ FC7 ⇒ SOFTMAX_8

Figure 4.1 The AlexNet architecture consists of five convolutional layers and three FC layers.

Looking at the AlexNet architecture, you will find all the network hyperparameters you need to get started with your own model:
- Network depth (number of layers): 5 convolutional layers plus 3 fully connected layers
- Layers' depth (number of filters): CONV1 = 96, CONV2 = 256, CONV3 = 384, CONV4 = 384, CONV5 = 256
- Filter sizes: 11 × 11, 5 × 5, 3 × 3, 3 × 3, 3 × 3
- ReLU as the activation function in the hidden layers (CONV1 all the way to FC7)
- Max pooling layers after CONV1, CONV2, and CONV5
- FC6 and FC7 with 4,096 neurons each
- FC8 with 1,000 neurons, using a softmax activation function

NOTE In the next chapter, we will discuss some of the most popular CNN architectures along with their code implementations in Keras. We will look at networks like LeNet, AlexNet, VGG, ResNet, and Inception that will build your understanding of what architecture works best for different problems and perhaps inspire you to invent your own CNN architecture.

4.3 Getting your data ready for training
We have defined the performance metrics we will use to evaluate our model and have built the architecture of our baseline model. Let's get our data ready for training.
It is important to note that this process varies a lot based on the problem and the data you have. Here, I'll explain the basic data-massaging techniques that you need to perform before training your model. I'll also help you develop an instinct for what "ready data" looks like, so you can determine which preprocessing techniques you need.

4.3.1 Splitting your data for train/validation/test
When we train an ML model, we split the data into train and test datasets (figure 4.2). We use the training dataset to train the model and update the weights, and then we evaluate the model against the test dataset, which it hasn't seen before. The golden rule is this: never use the test data for training. The reason we never show the test samples to the model while training is to make sure the model is not cheating. We show the model the training samples so it learns their features, and then we test how it generalizes on a dataset it has never seen, to get an unbiased evaluation of its performance.

Figure 4.2 Splitting the data into training and testing datasets

WHAT IS THE VALIDATION DATASET?
After each epoch during the training process, we need to evaluate the model's accuracy and error to see how it is performing and tune its parameters. If we used the test dataset to evaluate the model during training, we would break our golden rule of never using the test data during training. The test data is used only to evaluate the final performance of the model after training is complete. So we make an additional split, called a validation dataset, to evaluate and tune parameters during training (figure 4.3). Once the model has completed training, we test its final performance on the test dataset.
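The three-way split described above can be sketched in pure Python (my own illustration, using a hypothetical 60/20/20 ratio like those discussed later in this section):

```python
import random

def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=42):
    """Shuffle the data, then slice it into train/validation/test subsets."""
    data = list(data)
    random.Random(seed).shuffle(data)  # seeded so the split is reproducible
    n = len(data)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = data[:n_test]
    val = data[n_test:n_test + n_val]
    train = data[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(range(1000))
print(len(train), len(val), len(test))  # → 600 200 200
```

Shuffling before slicing matters: if the data is sorted by class, a plain slice would put entire classes into a single subset.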
Take a look at this pseudo code for model training:

for each epoch
    for each training data instance
        propagate error through the network
        adjust the weights
    calculate the accuracy and error over the training data
    for each validation data instance
        calculate the accuracy and error over the validation data

As we saw in the project in chapter 3, when we train the model, we get train_loss, train_acc, val_loss, and val_acc after each epoch (figure 4.4). We use these values to analyze the network's performance and diagnose overfitting and underfitting, as you will see in section 4.4.

Figure 4.3 An additional split, called a validation dataset, is used to evaluate the model during training, while the test subset is kept for the final test after training.

Epoch 1/100
Epoch 00000: val_loss improved from inf to 1.35820, saving model to model.weights.best.hdf5
46s - loss: 1.6192 - acc: 0.4140 - val_loss: 1.3582 - val_acc: 0.5166
Epoch 2/100
Epoch 00001: val_loss improved from 1.35820 to 1.22245, saving model to model.weights.best.hdf5
53s - loss: 1.2881 - acc: 0.5402 - val_loss: 1.2224 - val_acc: 0.5644

Figure 4.4 Training results after each epoch

WHAT IS A GOOD TRAIN/VALIDATION/TEST DATA SPLIT?
Traditionally, an 80/20 or 70/30 split between train and test datasets was used in ML projects. When we add the validation dataset, that becomes 60/20/20 or 70/15/15. But that was back when an entire dataset held just tens of thousands of samples. With the huge amounts of data we have now, sometimes 1% each for the validation and test sets is enough. For example, if our dataset contains 1 million samples, 10,000 samples is very reasonable for each of the test and validation sets, because it doesn't make sense to hold back several hundred thousand samples of your dataset.
It is better to use that data for model training. So, to recap: if you have a relatively small dataset, the traditional ratios might be okay; but if you are dealing with a large dataset, it is fine to make your validation and test sets much smaller.

4.3.2 Data preprocessing
Before you feed your data to the neural network, you need to do some data cleanup and processing to get it ready for your learning model. There are several preprocessing techniques to choose from, based on the state of your dataset and the problem you are solving. The good news about neural networks is that they require minimal data preprocessing: given a large amount of training data, they are able to extract and learn features from raw data, unlike traditional ML techniques. That said, preprocessing still might be required to improve performance or to work within specific limitations of the neural network, such as converting images to grayscale, image resizing, normalization, and data augmentation. In this section, we'll go through these preprocessing concepts; we'll see their code implementations in the project at the end of the chapter.

Be sure datasets are from the same distribution
An important thing to be aware of when splitting your data is to make sure your train/validation/test datasets come from the same distribution. Suppose you are building a car classifier that will be deployed on cell phones to detect car models. Keep in mind that DL networks are data-hungry, and the common rule of thumb is that the more data you have, the better your model will perform. So, to source your data, you decide to crawl the internet for car images, which are all high-quality, professionally framed images. You train your model and tune it, you achieve satisfying results on your test dataset, and you are ready to release the model to the world, only to discover that it performs poorly on real-life images taken by phone cameras.
This happens because your model was trained and tuned to achieve good results on high-quality images, so it fails to generalize to real-life images that may be blurry, lower resolution, or otherwise different in character. In more technical terms, your training and validation datasets are composed of high-quality images, whereas the production (real-life) images are lower quality. It is therefore very important to add lower-quality images to your train and validation datasets: the train/validation/test datasets should come from the same distribution.

IMAGE GRAYSCALING
We talked in chapter 3 about how color images are represented by three matrices, versus only one matrix for grayscale images; color images add computational complexity through their many parameters. If your problem doesn't require color, you can make a judgment call to convert all your images to grayscale and save on computational complexity. A good rule of thumb here is human-level performance: if you can identify the object with your own eyes in a grayscale image, then a neural network will probably be able to do the same.

IMAGE RESIZING
One limitation of neural networks is that they require all input images to have the same shape. If you are using MLPs, for example, the number of nodes in the input layer must equal the number of pixels in the image (remember how, in chapter 3, we flattened the image to feed it to the MLP). The same is true for CNNs: you need to set the input shape of the first convolutional layer. To demonstrate, here is the Keras code to add the first CNN layer:

model.add(Conv2D(filters=16, kernel_size=2, padding='same',
                 activation='relu', input_shape=(32, 32, 3)))

As you can see, we have to define the shape of the image at the first convolutional layer.
For example, if we have three images with dimensions of 32 × 32, 28 × 28, and 64 × 64, we have to resize them all to one size before feeding them to the model.

DATA NORMALIZATION
Data normalization is the process of rescaling your data so that each input feature (each pixel, in the image case) has a similar data distribution. Raw images are often composed of pixels with varying scales (ranges of values): one image may have pixel values ranging from 0 to 255, while another ranges from 20 to 200. Although not strictly required, normalizing the pixel values is preferred because it boosts learning performance and makes the network converge faster.

To make learning faster for your neural network, your data should have the following characteristics:
- Small values —Typically, most values should be in the [0, 1] range.
- Homogeneous —All pixels should have values in the same range.

A common way to normalize is to subtract the mean from each pixel and then divide the result by the standard deviation. The distribution of such data resembles a Gaussian curve centered at zero. To demonstrate the normalization process, figure 4.5 illustrates the operation in a scatterplot.

TIP Make sure you normalize your training and test data using the same mean and standard deviation, because you want both to go through the same transformation and be rescaled in exactly the same way. You will see how this is implemented in the project at the end of this chapter.

With non-normalized data, the cost function will likely look like a squished, elongated bowl; after you normalize your features, the cost function will look more symmetric. Figure 4.6 shows the cost function for two features, F1 and F2. As you can see, with normalized features, the gradient descent (GD) algorithm heads straight toward the minimum error, reaching it quickly.
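Concretely, the mean/standard-deviation normalization described above can be sketched with NumPy (my own illustration; note how, per the TIP above, the statistics come from the training data only):

```python
import numpy as np

rng = np.random.default_rng(0)
# A hypothetical batch of 4 grayscale 8 x 8 images with pixel values 0-255
train = rng.integers(0, 256, size=(4, 8, 8)).astype(np.float64)
test = rng.integers(0, 256, size=(2, 8, 8)).astype(np.float64)

# Compute statistics on the TRAINING data only...
mean, std = train.mean(), train.std()

# ...and apply the same transformation to both splits
train_norm = (train - mean) / std
test_norm = (test - mean) / std
# train_norm now has mean ≈ 0 and standard deviation ≈ 1
```

Reusing the training-set mean and std on the test set is deliberate: the test data should pass through exactly the same transformation the model saw during training.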
But with non-normalized features, GD oscillates across the direction of the minimum error and takes a long march down the error mountain. It will eventually reach the minimum, but it takes longer to converge.

TIP Why does GD oscillate with non-normalized features? If we don't normalize our data, the range of values will likely differ from feature to feature, so a single learning rate produces corrections of proportionally different sizes in each dimension. This forces GD to oscillate across the direction of the minimum error and take a longer path down the error surface.

Figure 4.5 To normalize data, we subtract the mean from each pixel and divide the result by the standard deviation.

Figure 4.6 Normalized features help the GD algorithm head straight toward the minimum error, reaching it quickly (left). With non-normalized features, GD oscillates across the direction of the minimum error and reaches the minimum more slowly (right).

IMAGE AUGMENTATION
Data augmentation will be discussed in more detail later in this chapter, when we cover regularization techniques. For now, know that it is another preprocessing technique in your toolbelt, to use when needed.

4.4 Evaluating the model and interpreting its performance
After the baseline model is established and the data is preprocessed, it is time to train the model and measure its performance.
After training is complete, you need to determine whether there are bottlenecks, diagnose which components are performing poorly, and determine whether the poor performance is due to overfitting, underfitting, or a defect in the training data.

One of the main criticisms of neural networks is that they are "black boxes": even when they work very well, it is hard to understand why. Many efforts are being made to improve the interpretability of neural networks, and this field is likely to evolve rapidly in the next few years. In this section, I'll show you how to diagnose neural networks and analyze their behavior.

4.4.1 Diagnosing overfitting and underfitting
After running your experiment, you want to observe its performance, determine whether bottlenecks are impacting it, and look for indicators of areas you need to improve. The main cause of poor performance in ML is either overfitting or underfitting the training dataset. We talked about overfitting and underfitting in chapter 3, but now we will dive a little deeper into how to detect when the system is fitting the training data too much (overfitting) and when it is too simple to fit the data (underfitting):
- Underfitting means the model is too simple: it fails to learn the training data, so it performs poorly even on the training data. One example of underfitting is using a single perceptron to classify the two classes of shapes in figure 4.7. As you can see, a straight line does not split the data accurately.
- Overfitting means the model is too complex for the problem at hand. Instead of learning features that fit the training data, it actually memorizes the training data. So it performs very well on the training data but fails to generalize when tested on new data that it hasn't seen before.
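The single-perceptron limitation can be made concrete with a tiny brute-force check (my own example, not the book's figure 4.7): on the XOR pattern, no straight-line decision boundary classifies all four points, so the best any linear model can do is 3 out of 4.

```python
import itertools

# XOR: the classic linearly inseparable toy dataset
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]

def linear_accuracy(w1, w2, b):
    """Accuracy of the linear rule: predict 1 when w1*x + w2*y + b > 0."""
    preds = [int(w1 * x + w2 * y + b > 0) for x, y in points]
    return sum(p == t for p, t in zip(preds, labels)) / 4

# Brute-force search over a grid of candidate decision lines
grid = [i / 2 for i in range(-8, 9)]  # -4.0 to 4.0 in steps of 0.5
best = max(linear_accuracy(w1, w2, b)
           for w1, w2, b in itertools.product(grid, repeat=3))
print(best)  # → 0.75: no line gets all four XOR points right
```

A single perceptron is exactly such a linear rule, which is why it underfits data that is not linearly separable; adding hidden layers lifts this limitation.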
Figure 4.7 An example of underfitting

In figure 4.8, you see that the model fits the data too well: it splits the training data perfectly, but this kind of fit will fail to generalize. We want to build a model that is just right for the data: not so complex that it overfits, and not so simple that it underfits. In figure 4.9, you see that the model misses one data sample, but it looks much more likely to generalize to new data.

TIP An analogy I like for overfitting and underfitting is a student studying for an exam. Underfitting is when the student doesn't study well and fails the exam. Overfitting is when the student memorizes the book: they answer correctly when asked questions from the book, but answer poorly when asked anything outside it. The student failed to generalize. What we want is a student who learns from the book (the training data) well enough to generalize when asked questions related to the material.

To diagnose underfitting and overfitting, the two values to focus on during training are the training error and the validation error:
- If the model does very well on the training set but relatively poorly on the validation set, it is overfitting. For example, if train_error is 1% and val_error is 10%, the model has likely memorized the training dataset and is failing to generalize on the validation set. In this case, consider tuning your hyperparameters to avoid overfitting, and iteratively train, test, and evaluate until you achieve acceptable performance.
- If the model performs poorly on the training set, it is underfitting.
Figure 4.8 An example of overfitting

Figure 4.9 A model that is just right for the data and will generalize

For example, if train_error is 14% and val_error is 15%, the model might be too simple and failing to learn the training set. You might consider adding more hidden layers, training longer (more epochs), or trying a different neural network architecture. In the next section, we will discuss several hyperparameter-tuning techniques to avoid overfitting and underfitting.

4.4.2 Plotting the learning curves
Instead of reading the training verbose output and comparing error numbers, you can diagnose overfitting and underfitting by plotting your training and validation errors throughout training, as in figure 4.10.

Figure 4.10A shows that the network improves the loss value (that is, it learns) on the training data but fails to generalize to the validation data. Learning on the validation data progresses in the first couple of epochs and then flattens out, and may even reverse. This is a form of overfitting. Note that this graph shows the network is actually learning on the training data, a good sign that training is happening. So you don't need to add more hidden units or build a more complex model; if anything, your network is too complex for your data, learning so much that it memorizes the data and fails to generalize to new data. In this case, your next step might be to collect more data or apply techniques to avoid overfitting.

Using human-level performance to identify a Bayes error rate
We talked about achieving satisfying performance, but how can we know whether performance is good or not? We need a realistic baseline to compare the training and validation errors against, in order to know whether we are improving.
Ideally, a 0% error rate is great, but it is not a realistic target for all problems and may even be impossible. That is why we need to define a Bayes error rate. The Bayes error rate is the best possible error our model can achieve (theoretically). Since humans are usually very good at visual tasks, we can use human-level performance as a proxy for the Bayes error. For example, if you are working on a relatively simple task like classifying dogs and cats, humans are very accurate, so the human error rate will be very low: say, 0.5%. We then compare the train_error of our model with this value. If our model's accuracy is 95%, that's not satisfying performance, and the model might be underfitting. On the other hand, suppose we are working on a task that is harder for humans, like building a medical image classification model for radiologists. The human error rate could be a little higher here: say, 5%. Then a model that is 95% accurate is actually doing a good job. Of course, this is not to say that DL models can never surpass human performance: on the contrary. But human-level performance is a good baseline for gauging whether a model is doing well. (Note that the example error percentages are just arbitrary numbers for the sake of the example.)

Figure 4.10B shows that the network performs poorly on both the training and validation data. In this case, your network is not learning. You don't need more data, because the network is too simple to learn from the data you already have. Your next step is to build a more complex model.

Figure 4.10C shows that the network is doing a good job of learning the training data and generalizing to the validation data. This means there is a good chance that the network will perform well out in the wild on test data.
4.4.3 Exercise: Building, training, and evaluating a network
Before we move on to hyperparameter tuning, let's run a quick experiment to see how we split the data and build, train, and visualize the model results. You can find an exercise notebook for this at www.manning.com/books/deep-learning-for-vision-systems or www.computervisionbook.com. In this exercise, we will do the following:

- Create toy data for our experiment
- Split the data into 80% training and 20% testing datasets
- Build the MLP neural network
- Train the model
- Evaluate the model
- Visualize the results

Figure 4.10 (A) The network improves the loss value on the training data but fails to generalize on the validation data. (B) The network performs poorly on both the training and validation data. (C) The network learns the training data and generalizes to the validation data.

Here are the steps:

1 Import the dependencies:

   from sklearn.datasets import make_blobs   # scikit-learn utility to generate sample data
   from keras.utils import to_categorical    # converts a class vector to a binary class matrix (one-hot encoding)
   from keras.models import Sequential       # neural network models library
   from keras.layers import Dense            # network layers library
   from matplotlib import pyplot             # visualization library

2 Use make_blobs from scikit-learn to generate a toy dataset with only two features and three label classes:

   X, y = make_blobs(n_samples=1000, centers=3, n_features=2, cluster_std=2, random_state=2)

3 Use to_categorical from Keras to one-hot-encode the labels:

   y = to_categorical(y)

4 Split the dataset into 80% training data and 20% test data.
Note that we did not create a validation dataset in this example, for simplicity:

   n_train = 800
   train_X, test_X = X[:n_train, :], X[n_train:, :]
   train_y, test_y = y[:n_train], y[n_train:]
   print(train_X.shape, test_X.shape)
   >> (800, 2) (200, 2)

5 Develop the model architecture; here, a very simple, two-layer MLP network (figure 4.11 shows the model summary):

   model = Sequential()
   # Two input dimensions because we have two features; ReLU activation for the hidden layer
   model.add(Dense(25, input_dim=2, activation='relu'))
   # Softmax activation for the output layer, with three nodes because we have three classes
   model.add(Dense(3, activation='softmax'))
   # Cross-entropy loss (explained in chapter 2) and the adam optimizer (explained in the next section)
   model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
   model.summary()

Figure 4.11 Model summary

   Layer (type)      Output Shape    Param #
   dense_1 (Dense)   (None, 25)      75
   dense_2 (Dense)   (None, 3)       78
   Total params: 153
   Trainable params: 153
   Non-trainable params: 0

6 Train the model for 1,000 epochs:

   history = model.fit(train_X, train_y, validation_data=(test_X, test_y), epochs=1000, verbose=1)

7 Evaluate the model:

   _, train_acc = model.evaluate(train_X, train_y)
   _, test_acc = model.evaluate(test_X, test_y)
   print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
   >> Train: 0.825, Test: 0.819

8 Plot the learning curves of model accuracy (figure 4.12):

   pyplot.plot(history.history['accuracy'], label='train')
   pyplot.plot(history.history['val_accuracy'], label='test')
   pyplot.legend()
   pyplot.show()

Let's evaluate the network. Looking at the learning curve in figure 4.12, you can see that both train and test curves fit the data with similar behavior.
This means the network is not overfitting; overfitting would be indicated by the train curve doing well while the test curve does not. But could the network be underfitting? Maybe: 82% accuracy on a very simple dataset like this is poor performance. To improve this neural network, I would try building a more complex network and experimenting with other techniques for addressing underfitting.

Figure 4.12 The learning curves: both train and test curves fit the data with similar behavior.

4.5 Improving the network and tuning hyperparameters
After you run your training experiment and diagnose overfitting and underfitting, you need to decide whether it is more effective to spend your time tuning the network, cleaning up and processing your data, or collecting more data. The last thing you want is to spend a few months working in one direction only to find that it barely improves network performance. So, before discussing the different hyperparameters to tune, let's answer this question first: should you collect more data?

4.5.1 Collecting more data vs. tuning hyperparameters
We know that deep neural networks thrive on lots of data. With that in mind, ML novices often throw more data at the learning algorithm as their first attempt to improve its performance. But collecting and labeling more data is not always feasible and, depending on your problem, can be very costly. Plus, it might not even be that effective.

NOTE While efforts are being made to automate some of the data-labeling process, at the time of writing, most labeling is done manually, especially in CV problems. By manually, I mean that actual human beings look at each image and label the images one by one (this is called human in the loop).
Here is another layer of complexity: if you are labeling lung X-ray images to detect a certain tumor, for example, you need qualified physicians to diagnose the images. This costs a lot more than hiring people to classify dogs and cats. So collecting more data might be a good solution for some accuracy issues and might increase the model's robustness, but it is not always feasible. In other scenarios, collecting more data is much more effective than improving the learning algorithm. So it would be nice to have quick and effective ways to figure out whether it is better to collect more data or tune the model hyperparameters. The process I use to make this decision is as follows:

1 Determine whether the performance on the training set is acceptable as-is.
2 Visualize and observe the performance of these two metrics: training accuracy (train_acc) and validation accuracy (val_acc).
3 If the network yields poor performance on the training dataset, this is a sign of underfitting. There is no reason to gather more data, because the learning algorithm is not fully using the training data that is already available. Instead, try tuning the hyperparameters or cleaning up the training data.
4 If performance on the training set is acceptable but is much worse on the validation dataset, then the network is overfitting your training data and failing to generalize to the validation set. In this case, collecting more data could be effective.

TIP When evaluating model performance, the goal is to categorize the high-level problem. If it's a data problem, spend more time on data preprocessing or collecting more data. If it's a learning algorithm problem, try to tune the network.

4.5.2 Parameters vs. hyperparameters
Let's not get parameters confused with hyperparameters. Hyperparameters are the variables that we set and tune.
Parameters are the variables that the network learns and updates during training, with no direct manipulation from us. In neural networks, the parameters are the weights and biases that are optimized automatically during the backpropagation process to produce the minimum error. In contrast, hyperparameters are variables that are not learned by the network. They are set by the ML engineer before training the model and then tuned. These are the variables that define the network structure and determine how the network is trained. Hyperparameter examples include the learning rate, batch size, number of epochs, and number of hidden layers, among others discussed in the next section.

4.5.3 Neural network hyperparameters
DL algorithms come with several hyperparameters that control many aspects of the model's behavior. Some hyperparameters affect the time and memory cost of running the algorithm, and others affect the model's prediction ability. The challenge with hyperparameter tuning is that there are no magic numbers that work for every problem. This is related to the no free lunch theorem that we referred to in chapter 1. Good hyperparameter values depend on the dataset and the task at hand. Choosing the best hyperparameters and knowing how to tune them require an understanding of what each hyperparameter does. In this section, you will build your intuition about why you would want to nudge a hyperparameter one way or another, and I'll propose good starting values for some of the most effective hyperparameters.

Turning the knobs
Think of hyperparameters as knobs on a closed box (the neural network).
Our job is to set and tune the knobs to yield the best performance. The hyperparameters (learning rate, number of epochs, number of hidden units, mini-batch size, and so on) are knobs that act as the network–human interface.

Generally speaking, we can categorize neural network hyperparameters into three main categories:

- Network architecture
  – Number of hidden layers (network depth)
  – Number of neurons in each layer (layer width)
  – Activation type
- Learning and optimization
  – Learning rate and decay schedule
  – Mini-batch size
  – Optimization algorithms
  – Number of training iterations or epochs (and early stopping criteria)
- Regularization techniques to avoid overfitting
  – L2 regularization
  – Dropout layers
  – Data augmentation

We discussed all of these hyperparameters in chapters 2 and 3 except the regularization techniques. Next, we will cover them quickly, with a focus on understanding what happens when we tune each knob up or down and how to know which hyperparameter to tune.

4.5.4 Network architecture
First, let's talk about the hyperparameters that define the neural network architecture:

- Number of hidden layers (representing the network depth)
- Number of neurons in each layer, also known as hidden units (representing the network width)
- Activation functions

DEPTH AND WIDTH OF THE NEURAL NETWORK
Whether you are designing an MLP, CNN, or other neural network, you need to decide on the number of hidden layers in your network (depth) and the number of neurons in each layer (width). Together, they describe the learning capacity of the network. The goal is to set these numbers large enough for the network to learn the data's features. A smaller network might underfit, and a larger network might overfit.
To find a "large enough" network, pick a starting point, observe the performance, and then tune up or down. The more complex the dataset, the more learning capacity the model will need to learn its features. Take a look at the three datasets in figure 4.13. If you give the model too much learning capacity (too many hidden units), it may tend to overfit the data and memorize the training set. If your model is overfitting, you might want to decrease the number of hidden units.

Generally, it is good to add hidden neurons until the validation error no longer improves. The trade-off is that it is computationally expensive to train deeper networks. Having a small number of units may lead to underfitting, while having more units is usually not harmful given appropriate regularization (like dropout and the other techniques discussed later in this chapter). Try playing around with the TensorFlow playground (https://playground.tensorflow.org) to develop more intuition. Experiment with different architectures, and gradually add more layers and more units in the hidden layers while observing the network's learning behavior.

ACTIVATION TYPE
Activation functions (discussed extensively in chapter 2) introduce nonlinearity into our neurons. Without activations, our neurons would pass linear combinations (weighted sums) to each other and never solve any nonlinear problems. This is a very active area of research: every few weeks we are introduced to new types of activations, and many are available. But at the time of writing, ReLU and its variations (like Leaky ReLU) perform best in hidden layers. In the output layer, it is very common to use the softmax function for classification problems, with the number of neurons equal to the number of classes in your problem.
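These depth and width choices translate directly into a trainable-parameter count. Here is a small pure-Python sketch (my own helper, not from the book) that reproduces the 153-parameter count from the exercise's model summary in figure 4.11; each Dense layer contributes fan_in × units weights plus one bias per unit.

```python
def mlp_param_count(layer_sizes):
    """Count trainable parameters of a fully connected (Dense) MLP.

    layer_sizes lists the number of units per layer, starting with the
    input dimension, e.g. [2, 25, 3] for the two-feature, three-class
    model in the exercise. Each Dense layer has (fan_in * units) weights
    plus one bias per unit.
    """
    return sum(fan_in * units + units
               for fan_in, units in zip(layer_sizes, layer_sizes[1:]))

# The model from figure 4.11: 2 inputs -> 25 hidden units -> 3 outputs
print(mlp_param_count([2, 25, 3]))   # 153, matching model.summary()

# Doubling the hidden-layer width grows the count to 303:
print(mlp_param_count([2, 50, 3]))   # 303
```

Tuning width or depth up is therefore a direct increase in learning capacity, and in the compute and memory cost of training.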
Layers and parameters
When considering the number of hidden layers and units in your neural network architecture, it is useful to think in terms of the number of parameters in the network and their effect on computational complexity. The more neurons in your network, the more parameters the network has to optimize. (In chapter 3, we learned how to print the model summary to see the total number of parameters that will be trained.)

Figure 4.13 The more complex the dataset, the more learning capacity the model will need to learn its features: a very simple dataset can be separated by a single perceptron, a medium-complexity dataset can be separated by adding a few more neurons, and a complex dataset needs a lot of neurons to separate the data.

4.6 Learning and optimization
Now that we have built our network architecture, it is time to discuss the hyperparameters that determine how the network learns and optimizes its parameters to achieve the minimum error.

4.6.1 Learning rate and decay schedule

   The learning rate is the single most important hyperparameter, and one should always make sure that it has been tuned. If there is only time to optimize one hyperparameter, then this is the hyperparameter that is worth tuning.
   —Yoshua Bengio

The learning rate (lr value) was covered extensively in chapter 2. As a refresher, let's think about how gradient descent (GD) works. The GD optimizer searches for the optimal weight values that yield the lowest possible error. When setting up our optimizer, we need to define the step size it takes when it descends the error mountain. This step size is the learning rate: it represents how fast or slow the optimizer descends the error curve.
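The effect of the step size is easy to see on a toy one-weight error curve. Here is a minimal pure-Python sketch, assuming the toy error function E(w) = w² (my own choice for illustration; its gradient is 2w, and for this curve a learning rate of 0.5 happens to reach the minimum in a single step):

```python
def gradient_descent(lr, w0=1.0, steps=20):
    """Minimize the toy error E(w) = w**2 (gradient dE/dw = 2*w),
    starting from w0 and returning the final weight."""
    w = w0
    for _ in range(steps):
        w = w - lr * 2 * w   # the GD update: step size (lr) times the gradient
    return w

# For this toy curve, lr = 0.5 lands on the minimum in one step.
print(gradient_descent(lr=0.5, steps=1))   # 0.0
# A smaller lr still converges, in more, smaller steps:
print(gradient_descent(lr=0.1))            # 0.011529...
# An lr more than twice the ideal value diverges: |w| grows every step.
print(abs(gradient_descent(lr=1.5)))       # 1048576.0
```

These three runs preview the behaviors discussed next: the "miraculously correct" step size, the slow-but-safe small step size, and divergence when the step size is too large.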
When we plot the cost function with only one weight, we get the oversimplified U-curve in figure 4.14, where the weight is randomly initialized at a point on the curve.

Layers and parameters (continued)
Based on your hardware setup for the training process (computational power and memory), you can determine whether you need to reduce the number of parameters. To reduce the number of training parameters, you can do one of the following:

- Reduce the depth and width of the network (hidden layers and units). This reduces the number of training parameters and, hence, the neural network's complexity.
- Add pooling layers, or tweak the strides and padding of the convolutional layers to reduce the feature map dimensions. This lowers the number of parameters.

These are just examples to help you see how you will look at the number of training parameters in real projects and the trade-offs you will need to make. Complex networks have a large number of training parameters, which in turn demand significant computational power and memory. The best way to build your baseline architecture is to look at the popular architectures available for solving similar problems and start from there; evaluate the performance, tune the hyperparameters, and repeat. Remember how we were inspired by AlexNet to design our CNN in the image classification project in chapter 3. In the next chapter, we will explore some of the most popular CNN architectures, like LeNet, AlexNet, VGG, ResNet, and Inception.

The GD calculates the gradient to find the direction that reduces the error (the derivative). In figure 4.14, the descending direction is to the right. The GD takes a step down the curve after each iteration (epoch). Now, as you can see in figure 4.15, if we make a miraculously correct choice of the learning rate value, we land on the best weight value that minimizes the error in only one step.
This is an impossible case that I'm using for elaboration purposes; let's call it the ideal lr value. If the learning rate is smaller than the ideal lr value, the model can continue to learn by taking smaller steps down the error curve until it finds the optimal weight value (figure 4.16). A much smaller learning rate means the model will eventually converge but will take longer.

Figure 4.14 When we plot the cost function with only one weight, we get an oversimplified U-curve.

Figure 4.15 If we make a miraculously correct choice of the learning rate value, we land on the best weight value that minimizes the error in only one step.

Figure 4.16 A learning rate smaller than the ideal lr value: the model takes smaller steps down the error curve.

If the learning rate is larger than the ideal lr value, the optimizer overshoots the optimal weight value in the first step and then overshoots again on the other side in the next step (figure 4.17). This could still yield a lower error than what we started with and converge to a reasonable value, but not the lowest error that we are trying to reach. If the learning rate is much larger than the ideal lr value (more than twice as large), the optimizer not only overshoots the ideal weight, but gets farther and farther from the minimum error (figure 4.18). This phenomenon is called divergence.

Too-high vs. too-low learning rate
Setting the learning rate high or low is a trade-off between optimizer speed and performance. A too-low lr requires many epochs to converge, often too many. Theoretically, if the learning rate is too small, the algorithm is guaranteed to eventually converge, if kept running for an infinite time.
On the other hand, a too-high lr might get us to a lower error value faster because we take bigger steps down the error curve, but there is a greater chance that the algorithm will oscillate and diverge away from the minimum. So, ideally, we want to pick the lr that is just right (optimal): one that swiftly reaches the minimum point without being so big that it might diverge.

Figure 4.17 A learning rate larger than the ideal lr value: the optimizer overshoots the optimal weight value.

Figure 4.18 A learning rate much larger than the ideal lr value: the optimizer gets farther from the minimum error.

4.6.2 A systematic approach to finding the optimal learning rate
The optimal learning rate depends on the topology of your loss landscape, which in turn depends on both your model architecture and your dataset. Whether you are using Keras, TensorFlow, PyTorch, or any other DL library, the optimizer's default learning rate is a good start and usually leads to decent results. Each optimizer type has its own default value; read the documentation of the DL library that you are using to find the default for your optimizer. If your model doesn't train well, you can play around with the lr variable using the usual suspects (0.1, 0.01, 0.001, 0.0001, 0.00001, and 0.000001) to improve performance or speed up training by searching for an optimal learning rate. The way to debug this is to look at the validation loss values in the training verbose output:

- If val_loss decreases after each step, that's good. Keep training until it stops improving.
- If training is complete and val_loss is still decreasing, then maybe the learning rate was so small that it didn't converge yet.
In this case, you can do one of two things:

  – Train again with the same learning rate but with more training iterations (epochs), giving the optimizer more time to converge.
  – Increase the lr value a little and train again.

- If val_loss starts to increase or oscillate up and down, then the learning rate is too high and you need to decrease its value.

When plotting the loss value against the number of training iterations (epochs), you will notice the following:

- Much smaller lr—The loss keeps decreasing but needs a lot more time to converge.
- Larger lr—The loss achieves a better value than what we started with, but is still far from optimal.
- Much larger lr—The loss might initially decrease, but then it starts to increase as the weight values get farther and farther away from the optimal values.
- Good lr—The loss decreases consistently until it reaches the minimum possible value.

The difference between very high, high, good, and low learning rates (loss plotted against epochs)

4.6.3 Learning rate decay and adaptive learning
Finding the learning rate value that is just right for your problem is an iterative process: you start with a static lr value, wait until training is complete, evaluate, and then tune. Another way to tune your learning rate is to set a learning rate decay, a method by which the learning rate changes during training. It often performs better than a static value and drastically reduces the time required to get optimal results. By now, it's clear that lower learning rate values give us a better chance of reaching a lower error point, but training takes longer. In some cases, training takes so long that it becomes infeasible. A good trick is to implement a decay rate in our learning rate.
The decay rate tells our network to automatically decrease the lr throughout the training process. For example, we can decrease the lr by a constant value (x) every (n) steps. This way, we can start with a higher value to take bigger steps toward the minimum and then gradually decrease the learning rate every (n) epochs to avoid overshooting the minimum. One way to accomplish this is to reduce the learning rate linearly (linear decay). For example, you can decrease it by half every five epochs, as shown in figure 4.19. Another way is to decrease the lr exponentially (exponential decay). For example, you can multiply it by 0.1 every eight epochs (figure 4.20). With exponential decay, the network will converge a lot slower than with linear decay, but it will eventually converge.

Figure 4.19 Decreasing the lr by half every five epochs

Other learning algorithms have an adaptive learning rate (adaptive learning). These algorithms use a heuristic approach that automatically updates the lr when training stalls. This means not only decreasing the lr when needed, but also increasing it when improvements are too slow (a too-small lr). Adaptive learning usually works better than the other learning rate–setting strategies. Adam and Adagrad are examples of adaptive learning optimizers; more on adaptive optimizers later in this chapter.

4.6.4 Mini-batch size
Mini-batch size is another hyperparameter that you need to set and tune in the optimizer algorithm. The batch_size hyperparameter has a big effect on the resource requirements and speed of the training process.
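The two decay schedules just described (halve the lr every five epochs; multiply it by 0.1 every eight epochs) can be sketched as plain schedule functions of the epoch number. This is only a sketch: the function names and default values are my own, and in Keras such a function would typically be plugged into a LearningRateScheduler callback (the exact callback signature varies by Keras version, so check your framework's documentation).

```python
def step_decay(epoch, initial_lr=1.0, drop=0.5, every=5):
    """Halve the learning rate every `every` epochs (figure 4.19's schedule)."""
    return initial_lr * drop ** (epoch // every)

def exp_decay(epoch, initial_lr=1.0, factor=0.1, every=8):
    """Multiply the learning rate by `factor` every `every` epochs
    (figure 4.20's schedule)."""
    return initial_lr * factor ** (epoch // every)

print([step_decay(e) for e in range(0, 20, 5)])   # [1.0, 0.5, 0.25, 0.125]
print(exp_decay(8))                               # 0.1
# In Keras, a schedule like this is typically used via a callback, e.g.:
# model.fit(..., callbacks=[keras.callbacks.LearningRateScheduler(step_decay)])
```

Plotting either function against the epoch number reproduces the staircase shapes of figures 4.19 and 4.20.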
Figure 4.20 Multiplying the lr by 0.1 every eight epochs

To understand the mini-batch, let's back up to the three GD types that we explained in chapter 2 (batch, stochastic, and mini-batch):

- Batch gradient descent (BGD)—We feed the entire dataset to the network at once, apply the feedforward process, calculate the error, calculate the gradient, and backpropagate to update the weights. The optimizer calculates the gradient by looking at the error generated after it sees all the training data, and the weights are updated only once per epoch. So, in this case, the mini-batch size equals the entire training dataset. The main advantage of BGD is that it has relatively low noise and takes bigger steps toward the minimum (see figure 4.21). The main disadvantages are that it can take too long to process the entire training dataset at each step, especially when training on big data, and that it requires a huge amount of memory for large datasets, which might not be available. BGD can be a good option if you are training on a small dataset.
- Stochastic gradient descent (SGD)—Also called online learning. We feed the network a single instance of the training data at a time and use this one instance to do the forward pass, calculate the error, calculate the gradient, and backpropagate to
This noise can be reduced by using a smaller learning rate, so, on average, it takes you in a good direc- tion and almost always performs better than BGD. With SGD you get to make progress quickly and usually reach very close to the global minimum. The main disadvantage is that by calculating the GD for one instance at a time, you lose the speed gain that comes with matrix multiplication in the training calculations.Batch gradient descent (BGD) Low noise on its path to the minimum error Figure 4.21 Batch GD with low noise on its path to the minimum error Stochastic (GD) High noise and oscillates on its path to the minimum error Figure 4.22 Stochastic GD with high noise that oscillates on its path to the minimum error" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"173 Learning and optimization To recap BGD and SGD, on one extreme, if you set your mini-batch size to 1 (stochas- tic training), the optimizer will take a step down the error curve after computing the gradient for every single instance of the training data. This is good, but you lose the increased speed of using matrix multiplication. On the other extreme, if your mini- batch size is your entire training dataset, then you are using BGD. It takes too long to make a step toward the minimum error when processing large datasets. Between the two extremes, there is mini-batch GD. Mini-batch gradient descent (MB-GD) —A compromise between batch and stochas- tic GD. Instead of computing the gradient from one sample (SGD) or all train- ing samples (BGD), we divide the training sample into mini-batches to compute the gradient from. This way, we can take advantage of matrix multiplication for faster training and start making progress instead of having to wait to train the entire training set. Guidelines for choosing mini-batch size First, if you have a small dataset (around less than 2,000), you might be better off using BGD. You can train the entire dataset quite fast. 
For large datasets, you can use a scale of mini-batch size values. A typical starting value is 64 or 128. You can then tune it up or down along the scale 32, 64, 128, 256, 512, 1024, doubling as needed to speed up training. But make sure that your mini-batch fits in your CPU/GPU memory. Mini-batch sizes of 1024 and larger are possible but quite rare. A larger mini-batch size allows a computational boost from matrix multiplication in the training calculations, but that comes at the expense of needing more memory for the training process and generally more computational resources. The following figure shows the relationship between batch size, computational resources, and the number of epochs needed for neural network training:

The relationship between batch size, computational resources per epoch, and the epochs required to find good W, b values, from stochastic through mini-batch to batch training

4.7 Optimization algorithms
In the history of DL, many researchers have proposed optimization algorithms and shown that they work well for some problems, but most of them subsequently proved not to generalize to the wide range of neural networks that we might want to train. Over time, the DL community has come to feel that the GD algorithm and some of its variants work well. So far, we have discussed batch, stochastic, and mini-batch GD. We learned that choosing a proper learning rate can be challenging: a too-small learning rate leads to painfully slow convergence, while a too-large learning rate can hinder convergence and cause the loss function to fluctuate around the minimum or even diverge. We need more creative solutions to further optimize GD.

NOTE Optimizer types are well explained in the documentation of most DL frameworks.
In this section, I'll explain the concepts behind two of the most popular gradient-descent-based optimizers, Momentum and Adam, which really stand out and have been shown to work well across a wide range of DL architectures. This will help you build a good foundation for diving deeper into other optimization algorithms. For more about optimization algorithms, read "An overview of gradient descent optimization algorithms" by Sebastian Ruder (https://arxiv.org/pdf/1609.04747.pdf).

4.7.1 Gradient descent with momentum
Recall that SGD ends up with some oscillations in the vertical direction on its way to the minimum error (figure 4.23). These oscillations slow down the convergence process and make it harder to use larger learning rates, which could result in your algorithm overshooting and diverging. To reduce these oscillations, a technique called momentum was invented that lets GD navigate along the relevant directions and softens the oscillations in the irrelevant directions. In other words, it makes learning slower along the vertical-direction oscillations and faster along the horizontal-direction progress, which helps the optimizer reach the target minimum much faster. This is similar to the idea of momentum in classical physics: when a snowball rolls down a hill, it accumulates momentum, going faster and faster. In the same way, our momentum term increases for dimensions whose gradients point in the same direction and reduces updates for dimensions whose gradients change direction. This leads to faster convergence and reduced oscillation.

Figure 4.23 SGD oscillates in the vertical direction on its way to the target minimum while making progress toward the minimum in the horizontal direction.

4.7.2 Adam
Adam stands for adaptive moment estimation. Adam keeps an exponentially decaying average of past gradients, similar to momentum.
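That "decaying average of past gradients" is exactly the velocity term that momentum adds. Here is a minimal pure-Python sketch of one common momentum formulation (the velocity accumulates a decaying average of gradients and then updates the weight), applied to a toy error E(w) = (w - 3)**2. The function names and constants are mine, for illustration only:

```python
def grad(w):
    """Gradient of the toy error E(w) = (w - 3)**2."""
    return 2.0 * (w - 3.0)

w, velocity = 0.0, 0.0
alpha, beta = 0.1, 0.9   # learning rate and momentum coefficient
for _ in range(300):
    velocity = beta * velocity - alpha * grad(w)  # decaying average of past gradients
    w = w + velocity                              # momentum-smoothed weight update
# w ends up very close to the minimum at w = 3
```

With beta = 0 this reduces to plain gradient descent; larger beta values smooth out oscillations, at the cost of taking longer to slow down near the minimum.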
Whereas momentum can be seen as a ball rolling down a slope, Adam behaves like a heavy ball with friction that slows the momentum down and keeps it under control. Adam usually outperforms other optimizers because it helps train a neural network model much more quickly than the techniques we have seen so far. Again, we have new hyperparameters to tune, but the good news is that the default values of the major DL frameworks often work well, so you may not need to tune them at all, except for the learning rate, which is not an Adam-specific hyperparameter:

keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0)

The authors of Adam propose these default values:
- The learning rate needs to be tuned.
- For the momentum term β1, a common choice is 0.9.
- For the RMSprop term β2, a common choice is 0.999.
- ε is set to 10^-8.

How the math works in momentum
The math here is really simple and straightforward. Momentum is built by adding a velocity term to the equation that updates the weight:

w_new = w_old - α (dE/dw_i)                    (original update rule)
w_new = w_old - α (dE/dw_i) + velocity term    (new rule after adding velocity)

That is, new weight = old weight - learning rate × gradient + velocity term, where the velocity term equals the weighted average of the past gradients.

4.7.3 Number of epochs and early stopping criteria
A training iteration, or epoch, is one full cycle in which the model sees the entire training dataset. The epochs hyperparameter defines how many iterations our network continues training. The more training iterations, the more our model learns the features of our training data. To diagnose whether your network needs more or fewer training epochs, keep your eyes on the training and validation error values. The intuitive way to think about this is that we want to continue training as long as the error value is decreasing, correct?
Let's take a look at the sample verbose output from a network training run (figure 4.24):

Epoch 1, Training Error: 5.4353, Validation Error: 5.6394
Epoch 2, Training Error: 5.1364, Validation Error: 5.2216
Epoch 3, Training Error: 4.7343, Validation Error: 4.8337

Figure 4.24 Sample verbose output of the first epochs. Both training and validation errors are improving.

You can see that both training and validation errors are decreasing. This means the network is still learning, so it doesn't make sense to stop the training at this point; the network is clearly still making progress toward the minimum error. Let's let it train for six more epochs and observe the results (figure 4.25):

Epoch 6, Training Error: 3.7312, Validation Error: 3.8324
Epoch 7, Training Error: 3.5324, Validation Error: 3.7215
Epoch 8, Training Error: 3.7343, Validation Error: 3.8337

Figure 4.25 The training error is still improving, but the validation error starts oscillating from epoch 8 onward.

It looks like the training error is doing well and still improving. That's good: the network is improving on the training set. However, if you look at epochs 8 and 9, you will see that val_error has started to oscillate and increase. Improving train_error while val_error stops improving means the network is starting to overfit the training data and failing to generalize to the validation data. Let's plot the training and validation errors (figure 4.26). You can see that both the training and validation errors were improving at first, but then the validation error started to increase, leading to overfitting.

Figure 4.26 The model complexity graph: validation and training error versus number of epochs, with an underfitting region, a "just right" region, and an overfitting region where train_error keeps improving while val_error does not.
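The decision of when to stop can be automated by tracking the best validation error seen so far and halting once it has failed to improve for a given number of epochs. A pure-Python sketch of that bookkeeping (the helper name early_stop_epoch and the loss values are mine, for illustration):

```python
def early_stop_epoch(val_losses, patience):
    """Return the epoch index at which patience-based stopping would halt,
    or the last epoch if the criterion is never triggered."""
    best_loss = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:                  # improvement: reset the clock
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:  # no improvement for `patience` epochs
            return epoch
    return len(val_losses) - 1

# Validation loss improves until epoch 2, then degrades; with patience=2,
# training halts two epochs after the best one.
stop = early_stop_epoch([5.0, 4.0, 3.0, 3.1, 3.2, 3.3], patience=2)
```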
We need to find a way to stop the training just before it starts to overfit. This technique is called early stopping.

4.7.4 Early stopping
Early stopping is an algorithm widely used to determine the right time to stop the training process before overfitting happens. It simply monitors the validation error value and stops the training when the value starts to increase. Here is the early stopping function in Keras:

EarlyStopping(monitor='val_loss', min_delta=0, patience=20)

The EarlyStopping function takes the following arguments:

- monitor: The metric you monitor during training. Usually we want to keep an eye on val_loss, because it represents our internal testing of model performance. If the network is doing well on the validation data, it will probably do well on test data and in production.
- min_delta: The minimum change that qualifies as an improvement. There is no standard value for this variable. To decide on a min_delta value, run a few epochs, see the change in error and validation accuracy, and define min_delta according to that rate of change. The default value of 0 works pretty well in many cases.
- patience: How many epochs the algorithm should wait before stopping the training if the error does not improve. For example, if we set patience equal to 1, the training will stop at the epoch where the error increases. We should be a little flexible, though, because it is very common for the error to oscillate a little and then continue improving; we can instead stop the training if it hasn't improved in the last 10 or 20 epochs.

TIP The good thing about early stopping is that it allows you to worry less about the epochs hyperparameter. You can set a high number of epochs and let the early stopping algorithm take care of stopping the training when the error stops improving.

4.8 Regularization techniques to avoid overfitting
If you observe that your neural network is overfitting the training data, your network might be too complex and need to be simplified.
One of the first techniques you should try is regularization. In this section, we will discuss three of the most common regularization techniques: L2 regularization, dropout, and data augmentation.

4.8.1 L2 regularization
The basic idea of L2 regularization is that it penalizes the error function by adding a regularization term to it. This, in turn, reduces the weight values of the hidden units, making them very small (close to zero), which helps simplify the model.

Let's see how regularization works. First, we update the error function by adding the regularization term:

error function_new = error function_old + regularization term

Note that you can use any of the error functions explained in chapter 2, like MSE or cross-entropy. Now, let's take a look at the regularization term:

L2 regularization term = (λ / 2m) × Σ w²

where lambda (λ) is the regularization parameter, m is the number of instances, and w is the weight. The updated error function looks like this:

error function_new = error function_old + (λ / 2m) × Σ w²

Why does L2 regularization reduce overfitting? Well, let's look at how the weights are updated during the backpropagation process. We learned in chapter 2 that the optimizer calculates the derivative of the error, multiplies it by the learning rate, and subtracts this value from the old weight. Here is the backpropagation equation that updates the weights:

W_new = W_old - α (∂Error/∂W_x)

where W_new is the new weight, W_old is the old weight, α is the learning rate, and ∂Error/∂W_x is the derivative of the error with respect to the weight. Since we add the regularization term to the error function, the new error becomes larger than the old error. This means its derivative (∂Error/∂W_x) is also bigger, leading to a smaller W_new. L2 regularization is also known as weight decay, because it forces the weights to decay toward zero (but not exactly zero).
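Here is a tiny pure-Python sketch of the two formulas above: the penalty term, and the weight update once the derivative of the penalty, (λ/m)·w, is added to the gradient. All numbers are made up for illustration:

```python
def l2_term(weights, lam, m):
    """L2 regularization term: (lambda / 2m) * sum of squared weights."""
    return (lam / (2 * m)) * sum(w * w for w in weights)

def update_weight(w, grad, alpha, lam, m):
    """One gradient step on the regularized error. The derivative of the
    L2 term with respect to w is (lambda / m) * w, so the weight decays."""
    return w - alpha * (grad + (lam / m) * w)

weights = [0.5, -1.0, 2.0]
penalty = l2_term(weights, lam=0.1, m=10)                        # 0.005 * 5.25 = 0.02625
w_next = update_weight(1.0, grad=0.0, alpha=0.1, lam=0.5, m=1)   # 0.95: decays even with zero gradient
```

Notice that with lam = 0 the update reduces to the plain backpropagation rule, which is exactly why a larger λ shrinks the weights more aggressively.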
Reducing weights leads to a simpler neural network. To see how this works, consider: if the regularization term is so large that, when multiplied by the learning rate, it equals W_old, then the new weight becomes zero. This cancels the effect of that neuron, leading to a simpler neural network with fewer effective neurons.

This is what L2 regularization looks like in Keras. When adding a hidden layer to your network, add the kernel_regularizer argument with the L2 regularizer (lambda_val stands for your λ hyperparameter):

model.add(Dense(units=16, kernel_regularizer=regularizers.l2(lambda_val), activation='relu'))

The lambda value is a hyperparameter that you can tune. The default value of your DL library usually works well. If you still see signs of overfitting, increase the lambda hyperparameter to reduce the model complexity.

4.8.2 Dropout layers
Dropout is another regularization technique that is very effective for simplifying a neural network and avoiding overfitting. We discussed dropout extensively in chapter 3. The dropout algorithm is fairly simple: at every training iteration, every neuron has a probability p of being temporarily ignored (dropped out) during that training iteration. This means it may be active again during subsequent iterations. While it is counterintuitive to intentionally pause the learning of some of the network's neurons, it is quite surprising how well this technique works. The probability p is a hyperparameter called the dropout rate and is typically set in the range of 0.3 to 0.5. Start with 0.3, and if you see signs of overfitting, increase the rate.

TIP I like to think of dropout as tossing a coin every morning with your team to decide who will do a specific critical task. After a few iterations, all your team members will have learned how to do this task, rather than relying on a single member to get it done.
The team would become much more resilient to change.

In practice, L2 regularization does not make the weights exactly zero; it just makes them smaller to reduce their effect. A large regularization parameter (λ) leads to negligible weights, and when the weights are negligible, the model does not learn much from those units. This makes the network simpler and thus reduces overfitting: L2 regularization reduces the weights and simplifies the network.

Both L2 regularization and dropout aim to reduce network complexity by reducing the effectiveness of its neurons. The difference is that dropout completely cancels the effect of some neurons with every iteration, while L2 regularization just reduces the weight values to lower the neurons' effectiveness. Both lead to a more robust, resilient neural network and reduce overfitting, and it is recommended that you use both types of regularization techniques in your network.

4.8.3 Data augmentation
One way to avoid overfitting is to obtain more data. Since this is not always a feasible option, we can augment our training data by generating new instances of the same images with some transformations applied. Data augmentation can be an inexpensive way to give your learning algorithm more training data and therefore reduce overfitting. The many image-augmentation techniques include flipping, rotation, scaling, zooming, changing lighting conditions, and many other transformations that you can apply to your dataset to provide a variety of images to train on. In figure 4.27, you can see some of these transformation techniques applied to an image of the digit 6, creating 20 new images that the network can learn from.
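Flipping is the easiest of these transformations to picture. Treating a grayscale image as a list of pixel rows, a pure-Python sketch (the helper names are mine; in practice a library such as Keras's ImageDataGenerator does this for you):

```python
def flip_horizontal(image):
    """Mirror each row, left to right."""
    return [list(reversed(row)) for row in image]

def flip_vertical(image):
    """Reverse the order of the rows, top to bottom."""
    return list(reversed(image))

img = [[1, 2],
       [3, 4]]
flip_horizontal(img)  # [[2, 1], [4, 3]]
flip_vertical(img)    # [[3, 4], [1, 2]]
```

Each transformed copy keeps the same label as the original, which is what makes augmentation essentially free training data.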
The main advantage of synthesizing images like this is that now you have more data (20×) telling your algorithm that if an image is the digit 6, it is still the digit 6 even if it is flipped vertically or horizontally or rotated. This makes the model more robust at detecting the number 6 in any form and shape. Data augmentation is considered a regularization technique because allowing the network to see many variants of the object reduces its dependence on the original form of the object during feature learning, which makes the network more resilient when tested on new data.

Figure 4.27 Various image augmentation techniques applied to an image of the digit 6

Data augmentation in Keras looks like this:

from keras.preprocessing.image import ImageDataGenerator

# ImageDataGenerator generates batches of new image data. It takes the
# transformation types as arguments; here, we set horizontal and vertical
# flip to True. See the Keras documentation (or your DL library) for more
# transformation arguments.
datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True)

# Compute the data augmentation on the training set
datagen.fit(training_set)

4.9 Batch normalization
Earlier in this chapter, we talked about data normalization to speed up learning. The normalization techniques we discussed focus on preprocessing the training set before feeding it to the input layer. If the input layer benefits from normalization, why not do the same thing for the extracted features in the hidden units, which are changing all the time, and get even more improvement in training speed and network resilience (figure 4.28)? This process is called batch normalization (BN).

4.9.1 The covariate shift problem
Before we define covariate shift, let's take a look at an example that illustrates the problem batch normalization (BN) confronts. Suppose you are building a cat classifier, and you train your algorithm on images of white cats only. When you test this classifier on images of cats with different colors, it will not perform well. Why?
Because the model has been trained on a training set with a specific distribution (white cats). When the distribution changes in the test set, it confuses the model (figure 4.29).

Figure 4.28 Batch normalization normalizes the extracted features in the hidden units. The activations of each layer are essentially the input to the following layers, so why not normalize these values too?

Figure 4.29 Graph A is the training set of only white cats, and graph B is the testing set with cats of various colors. The circles represent the cat images, and the stars represent the non-cat images.

We should not expect a model trained on the data in graph A to do very well with the new distribution in graph B. The idea of a change in data distribution goes by the fancy name covariate shift.

DEFINITION If a model is learning to map dataset X to label y, then if the distribution of X changes, this is known as covariate shift. When that happens, you might need to retrain your learning algorithm.

4.9.2 Covariate shift in neural networks
To understand how covariate shift happens in neural networks, consider the simple four-layer MLP in figure 4.30. Let's look at the network from the perspective of the third layer (L3). Its inputs are the activation values in L2 (a_1^2, a_2^2, a_3^2, and a_4^2), which are the features extracted from the previous layers.

Figure 4.30 A simple four-layer MLP. L1 features are input to the L2 layer, and the same is true for layers 2, 3, and 4: the activation values in layer 2 are inputs to layer 3.

L3 is trying to map these inputs to ŷ to make it as close as possible to the label y. While the third layer is doing that, the network is adapting the values of the parameters in the previous layers. As the parameters (w, b) change in layer 1, the activation values in the second layer change too. So from the perspective of the third hidden layer, the values of the second hidden layer are changing all the time: the MLP is suffering from the problem of covariate shift. Batch norm reduces the degree of change in the distribution of the hidden unit values, making these values more stable so that the later layers of the neural network have firmer ground to stand on.

NOTE It is important to realize that batch normalization does not cancel or reduce the change in the hidden unit values. What it does is ensure that the distribution of that change remains the same: even if the exact values of the units change, the mean and variance do not.

4.9.3 How does batch normalization work?
In their 2015 paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" (https://arxiv.org/abs/1502.03167), Sergey Ioffe and Christian Szegedy proposed the BN technique to reduce covariate shift. Batch normalization adds an operation in the neural network just before the activation function of each layer to do the following:

1. Zero-center the inputs
2. Normalize the zero-centered inputs
3. Scale and shift the results

This operation lets the model learn the optimal scale and mean of the inputs for each layer.
How the math works in batch normalization

1. To zero-center the inputs, the algorithm needs to calculate the input mean and standard deviation (the input here means the current mini-batch, hence the term batch normalization):

   μ_B ← (1/m) Σ_{i=1..m} x_i                 (mini-batch mean)
   σ_B² ← (1/m) Σ_{i=1..m} (x_i - μ_B)²       (mini-batch variance)

where m is the number of instances in the mini-batch, μ_B is the mean, and σ_B is the standard deviation over the current mini-batch.

2. Normalize the input:

   x̂_i ← (x_i - μ_B) / √(σ_B² + ε)

where x̂ is the zero-centered and normalized input. Note the extra variable ε: a tiny number (typically 10^-5) added to avoid division by zero if σ is zero in some estimates.

3. Scale and shift the results.

4.9.4 Batch normalization implementation in Keras
It is important to know how batch normalization works so you can get a better understanding of what your code is doing. But when using BN in your network, you don't have to implement all these details yourself; implementing BN is often done by adding one line of code in any DL framework. In Keras, you add batch normalization to your neural network by adding a BN layer after a hidden layer, to normalize its results before they are fed to the next layer. The following code snippet shows how to add BN layers when building your neural network:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers.normalization import BatchNormalization

model = Sequential()

model.add(Dense(hidden_units, activation='relu'))   # first hidden layer
model.add(BatchNormalization())                     # normalizes the results of layer 1
# If you are adding dropout, it is preferable to add it after the batch norm
# layer, because you don't want the nodes that are randomly turned off to
# miss the normalization step.
model.add(Dropout(0.5))

model.add(Dense(units, activation='relu'))          # second hidden layer
model.add(BatchNormalization())                     # normalizes the results of layer 2

model.add(Dense(2, activation='softmax'))           # output layer
In step 3, we multiply the normalized input x̂_i by a variable γ to scale it and add β to shift it:

y_i ← γ x̂_i + β

where y_i is the output of the BN operation, scaled and shifted. Notice that BN introduces two new learnable parameters to the network, γ and β, so our optimization algorithm updates them just as it updates the weights and biases. In practice, this means you may find that training is rather slow at first, while GD searches for the optimal scales and offsets for each layer, but it accelerates once reasonably good values have been found.

4.9.5 Batch normalization recap
The intuition I hope you'll take away from this discussion is that BN applies the normalization process not just to the input layer but also to the values in the hidden layers of a neural network. This weakens the coupling of the learning process between earlier and later layers, allowing each layer of the network to learn more independently. From the perspective of the later layers in the network, the earlier layers don't get to shift around as much, because they are constrained to have the same mean and variance. This makes the job of learning easier for the later layers. This happens by ensuring that the hidden units have a standardized distribution (mean and variance), controlled by two explicit parameters, γ and β, which the learning algorithm sets during training.
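The three steps of the batch norm math can be checked with a short pure-Python sketch (γ = 1 and β = 0 here, so the output is just the normalized input; in a real network γ and β are learned):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Zero-center, normalize, then scale and shift a mini-batch of scalars."""
    m = len(batch)
    mean = sum(batch) / m                                 # step 1: mini-batch mean
    var = sum((x - mean) ** 2 for x in batch) / m         # step 1: mini-batch variance
    normalized = [(x - mean) / math.sqrt(var + eps) for x in batch]   # step 2
    return [gamma * x_hat + beta for x_hat in normalized] # step 3: scale and shift

out = batch_norm([1.0, 2.0, 3.0, 4.0])
# With gamma=1 and beta=0, the output has (approximately) zero mean and unit variance.
```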
4.10 Project: Achieve high accuracy on image classification
In this project, we will revisit the CIFAR-10 classification project from chapter 3 and apply some of the improvement techniques from this chapter to increase the accuracy from ~65% to ~90%. You can follow along with this example by visiting the book's website, www.manning.com/books/deep-learning-for-vision-systems or www.computervisionbook.com, to see the code notebook. We will accomplish the project in these steps:

1. Import the dependencies.
2. Get the data ready for training:
   - Download the data from the Keras library.
   - Split it into train, validation, and test datasets.
   - Normalize the data.
   - One-hot encode the labels.
3. Build the model architecture. In addition to the regular convolutional and pooling layers from chapter 3, we add the following to our architecture:
   - A deeper neural network to increase learning capacity
   - Dropout layers
   - L2 regularization on our convolutional layers
   - Batch normalization layers
4. Train the model.
5. Evaluate the model.
6. Plot the learning curve.

Let's see how this is implemented.

STEP 1: IMPORT DEPENDENCIES
Here's the Keras code to import the needed dependencies:

import keras
# Keras modules to download the datasets, preprocess images, and build network components
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.utils import np_utils
from keras.layers import Dense, Activation, Flatten, Dropout, BatchNormalization, Conv2D, MaxPooling2D
from keras.callbacks import ModelCheckpoint
from keras import regularizers, optimizers
import numpy as np                # numpy for math operations
from matplotlib import pyplot     # matplotlib to visualize results

STEP 2: GET THE DATA READY FOR TRAINING
Keras has some datasets available for us to download and experiment with.
These datasets are usually preprocessed and almost ready to be fed to the neural network. In this project, we use the CIFAR-10 dataset, which consists of 50,000 32 × 32 color training images labeled over 10 categories, plus 10,000 test images. Check the Keras documentation for more datasets, like CIFAR-100, MNIST, and Fashion-MNIST.

Keras provides the CIFAR-10 dataset already split into training and testing sets. We load them and then split the training dataset into 45,000 images for training and 5,000 images for validation, as explained earlier in this chapter:

# Download and split the data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

# Break the training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]

Let's print the shape of x_train, x_valid, and x_test:

print('x_train =', x_train.shape)
print('x_valid =', x_valid.shape)
print('x_test =', x_test.shape)

>> x_train = (45000, 32, 32, 3)
>> x_valid = (5000, 32, 32, 3)
>> x_test = (10000, 32, 32, 3)

The format of the shape tuple is as follows: (number of instances, width, height, channels).
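The slicing above can be sanity-checked with plain Python lists standing in for the image arrays:

```python
x_train = list(range(50000))   # stand-in for the 50,000 CIFAR-10 training images
x_valid, x_train = x_train[:5000], x_train[5000:]
len(x_train), len(x_valid)     # (45000, 5000)
```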
Normalize the data
We normalize the pixel values of our images by subtracting the mean from each pixel and then dividing the result by the standard deviation:

mean = np.mean(x_train, axis=(0, 1, 2, 3))
std = np.std(x_train, axis=(0, 1, 2, 3))
x_train = (x_train - mean) / (std + 1e-7)
x_valid = (x_valid - mean) / (std + 1e-7)
x_test = (x_test - mean) / (std + 1e-7)

One-hot encode the labels
To one-hot encode the labels in the train, validation, and test datasets, we use the to_categorical function in Keras:

num_classes = 10
y_train = np_utils.to_categorical(y_train, num_classes)
y_valid = np_utils.to_categorical(y_valid, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)

Data augmentation
For augmentation, we will arbitrarily go with the following transformations: rotation, width and height shift, and horizontal flip. When you are working on your own problems, view the images the network missed or made poor detections on, and try to understand why it is not performing well on them. Then create your hypothesis and experiment with it. For example, if the missed images are of shapes that are rotated, you might want to try rotation augmentation. You would apply it, experiment, evaluate, and repeat. You will come to your decisions purely from analyzing your data and understanding the network's performance:

# Data augmentation
datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    vertical_flip=False
)

# Compute the data augmentation on the training set
datagen.fit(x_train)

STEP 3: BUILD THE MODEL ARCHITECTURE
In chapter 3, we built an architecture inspired by AlexNet (3 CONV + 2 FC).
In this project, we will build a deeper network for increased learning capacity (6 CONV + 1 FC). The network has the following configuration:

- Instead of adding a pooling layer after each convolutional layer, we add one after every two convolutional layers. This idea was inspired by VGGNet, a popular neural network architecture developed by the Visual Geometry Group (University of Oxford). VGGNet is explained in chapter 5.
- Also inspired by VGGNet, we set the kernel_size of our convolutional layers to 3 × 3 and the pool_size of the pooling layers to 2 × 2.
- We add dropout layers after every other convolutional layer, with p ranging from 0.2 to 0.4.
- A batch normalization layer is added after each convolutional layer to normalize the input of the following layer.
- In Keras, L2 regularization is added in the convolutional layer code.

Here's the code:

# Number of hidden units variable. We declare it here and reuse it in the
# convolutional layers to make it easy to update from one place.
base_hidden_units = 32

# L2 regularization hyperparameter (lambda)
weight_decay = 1e-4

# Create a sequential model (a linear stack of layers)
model = Sequential()

# CONV1. We define input_shape here because this is the first convolutional
# layer; we don't need to do that for the remaining layers. L2 regularization
# is added to the layer via kernel_regularizer.
model.add(Conv2D(base_hidden_units, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay),
                 input_shape=x_train.shape[1:]))
model.add(Activation('relu'))      # ReLU activation for all hidden layers
model.add(BatchNormalization())    # batch normalization layer

# CONV2
model.add(Conv2D(base_hidden_units, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# POOL + Dropout
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))            # dropout layer with 20% probability

# CONV3: number of hidden units = 64
model.add(Conv2D(base_hidden_units * 2, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# CONV4
model.add(Conv2D(base_hidden_units * 2, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# POOL + Dropout
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))

# CONV5: number of hidden units = 128
model.add(Conv2D(base_hidden_units * 4, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# CONV6
model.add(Conv2D(base_hidden_units * 4, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# POOL + Dropout
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.4))

# FC7
model.add(Flatten())   # flatten the feature map into a 1D feature vector (explained in chapter 3)
# 10 units because the dataset has 10 class labels; softmax is used for the
# output layer (explained in chapter 2)
model.add(Dense(10, activation='softmax'))

model.summary()        # print the model summary

The model summary is shown in figure 4.31.
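You can verify the Param # column of the summary by hand: a Conv2D layer has (kernel_height × kernel_width × input_channels + 1) × filters parameters (the +1 is the per-filter bias), and a Dense layer has (inputs + 1) × units. A quick pure-Python check (the helper names are mine):

```python
def conv2d_params(kernel, in_channels, filters):
    """Weights (kernel*kernel*in_channels per filter) plus one bias per filter."""
    return (kernel * kernel * in_channels + 1) * filters

def dense_params(inputs, units):
    return (inputs + 1) * units

first = conv2d_params(3, 3, 32)       # conv2d_1: 3x3 kernels over 3 RGB channels -> 896
last = dense_params(4 * 4 * 128, 10)  # dense_1: flattened 2,048 features -> 20,490
```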
Figure 4.31 Model summary:

Layer (type)                   Output Shape         Param #
conv2d_1 (Conv2D)              (None, 32, 32, 32)   896
activation_1 (Activation)      (None, 32, 32, 32)   0
batch_normalization_1 (Batch)  (None, 32, 32, 32)   128
conv2d_2 (Conv2D)              (None, 32, 32, 32)   9248
activation_2 (Activation)      (None, 32, 32, 32)   0
batch_normalization_2 (Batch)  (None, 32, 32, 32)   128
max_pooling2d_1 (MaxPooling2)  (None, 16, 16, 32)   0
dropout_1 (Dropout)            (None, 16, 16, 32)   0
conv2d_3 (Conv2D)              (None, 16, 16, 64)   18496
activation_3 (Activation)      (None, 16, 16, 64)   0
batch_normalization_3 (Batch)  (None, 16, 16, 64)   256
conv2d_4 (Conv2D)              (None, 16, 16, 64)   36928
activation_4 (Activation)      (None, 16, 16, 64)   0
batch_normalization_4 (Batch)  (None, 16, 16, 64)   256
max_pooling2d_2 (MaxPooling2)  (None, 8, 8, 64)     0
dropout_2 (Dropout)            (None, 8, 8, 64)     0
conv2d_5 (Conv2D)              (None, 8, 8, 128)    73856
activation_5 (Activation)      (None, 8, 8, 128)    0
batch_normalization_5 (Batch)  (None, 8, 8, 128)    512
conv2d_6 (Conv2D)              (None, 8, 8, 128)    147584
activation_6 (Activation)      (None, 8, 8, 128)    0
batch_normalization_6 (Batch)  (None, 8, 8, 128)    512
max_pooling2d_3 (MaxPooling2)  (None, 4, 4, 128)    0
dropout_3 (Dropout)            (None, 4, 4, 128)    0
flatten_1 (Flatten)            (None, 2048)         0
dense_1 (Dense)                (None, 10)           20490

STEP 4: TRAIN THE MODEL
Before we jump into the training code, let's discuss the strategy behind some of the hyperparameter settings:

- batch_size: This is the mini-batch hyperparameter that we covered in this chapter. The higher the batch_size, the faster your algorithm learns. You can start with a mini-batch of 64 and double this value to speed up training. I tried 256 on my machine and got the following error, which means my machine was running out of memory.
I then lowered it back to 128:

    Resource exhausted: OOM when allocating tensor with shape[256,128,4,4]

epochs: I started with 50 training iterations and found that the network was still improving. So I kept adding more epochs and observing the training results. In this project, I was able to achieve >90% accuracy after 125 epochs. As you will see soon, there is still room for improvement if you let it train longer.

Optimizer: I used the Adam optimizer. See section 4.7 to learn more about optimization algorithms.

NOTE It is important to note that I'm using a GPU for this experiment. The training took around 3 hours. It is recommended that you use your own GPU or a cloud computing service to get the best results. If you don't have access to a GPU, I recommend that you try a smaller number of epochs, or plan to leave your machine training overnight or even for a couple of days, depending on your CPU specifications.

Let's see the training code:

batch_size = 128    # mini-batch size
epochs = 125        # number of training iterations

# Path of the file where the best weights will be saved; save_best_only=True
# saves the weights only when there is an improvement
checkpointer = ModelCheckpoint(filepath='model.125epochs.hdf5',
                               verbose=1,
                               save_best_only=True)

# Adam optimizer with a learning rate of 0.0001
optimizer = keras.optimizers.Adam(lr=0.0001, decay=1e-6)

# Cross-entropy loss function (explained in chapter 2)
model.compile(loss='categorical_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy'])

# datagen.flow performs real-time data augmentation on the CPU in parallel
# with training the model on the GPU. The checkpointer callback saves the
# model weights; you can add other callbacks, such as early stopping.
history = model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                              callbacks=[checkpointer],
                              steps_per_epoch=x_train.shape[0] // batch_size,
                              epochs=epochs,
                              verbose=2,
                              validation_data=(x_valid, y_valid))
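One way to read the steps_per_epoch line: the integer division x_train.shape[0] // batch_size simply converts the training-set size into a number of weight updates per epoch. A quick illustration, assuming a hypothetical 45,000-image training set (for example, CIFAR-10's 50,000 images minus a 5,000-image validation split; the exact split is an assumption here, not something stated above):

```python
train_size = 45_000  # hypothetical training-set size, just for illustration

for batch_size in (64, 128, 256):
    steps_per_epoch = train_size // batch_size  # weight updates per epoch
    print(batch_size, steps_per_epoch)
```

Doubling the batch size halves the number of updates per epoch, which is why larger batches make each epoch cheaper, as long as the batch fits in GPU memory (the OOM error above is what happens when it doesn't).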
When you run this code, you will see the verbose output of the network training for each epoch. Keep your eyes on the loss and val_loss values to analyze the network and diagnose bottlenecks. Figure 4.32 shows the verbose output of epochs 121 to 125:

Epoch 121/125
Epoch 00120: val_loss did not improve
30s - loss: 0.4471 - acc: 0.8741 - val_loss: 0.4124 - val_acc: 0.8886
Epoch 122/125
Epoch 00121: val_loss improved from 0.40342 to 0.40327, saving model to model.125epochs.hdf5
31s - loss: 0.4510 - acc: 0.8719 - val_loss: 0.4033 - val_acc: 0.8934
Epoch 123/125
Epoch 00122: val_loss improved from 0.40327 to 0.40112, saving model to model.125epochs.hdf5
30s - loss: 0.4497 - acc: 0.8735 - val_loss: 0.4031 - val_acc: 0.8959
Epoch 124/125
Epoch 00122: val_loss did not improve
30s - loss: 0.4497 - acc: 0.8725 - val_loss: 0.4162 - val_acc: 0.8894
Epoch 125/125
Epoch 00122: val_loss did not improve
30s - loss: 0.4471 - acc: 0.8734 - val_loss: 0.4025 - val_acc: 0.8959

Figure 4.32  Verbose output of epochs 121 to 125

STEP 5: EVALUATE THE MODEL

To evaluate the model, we use the Keras evaluate function and print the results:

scores = model.evaluate(x_test, y_test, batch_size=128, verbose=1)
print('\nTest result: %.3f loss: %.3f' % (scores[1] * 100, scores[0]))

>> Test result: 90.260 loss: 0.398

Plot learning curves

Plot the learning curves to analyze the training performance and diagnose overfitting and underfitting (figure 4.33):

pyplot.plot(history.history['acc'], label='train')
pyplot.plot(history.history['val_acc'], label='test')
pyplot.legend()
pyplot.show()

Figure 4.33  Learning curves: training and test accuracy plotted over the 125 training epochs

Further improvements

Accuracy of 90% is pretty good, but you can still
improve further. Here are some ideas you can experiment with:

More training epochs: Notice that the network was still improving at epoch 123. You can increase the number of epochs to 150 or 200 and let the network train longer.

Deeper network: Try adding more layers to increase the model's complexity, which increases its learning capacity.

Lower learning rate: Decrease the lr (if you do, you should train longer).

A different CNN architecture: Try something like Inception or ResNet (explained in detail in the next chapter). You can get up to 95% accuracy with the ResNet neural network after 200 epochs of training.

Transfer learning: In chapter 6, we will explore the technique of using a network pretrained on a large dataset to get better results with a fraction of the training time.

Summary

- The general rule of thumb is that the deeper your network is, the better it learns.
- At the time of writing, ReLU performs best in the hidden layers, and softmax performs best in the output layer.
- Stochastic gradient descent usually succeeds in finding a minimum. But if you need fast convergence and are training a complex neural network, it's safer to go with Adam.
- Usually, the more you train, the better the result.
- L2 regularization and dropout work well together to reduce network complexity and overfitting.

Part 2  Image classification and detection

Rapid advances in AI research are enabling new applications to be built every day, across different industries, that weren't possible just a few years ago. By learning these tools, you will be empowered to invent new products and applications yourself. Even if you end up not working on computer vision per se, many of the concepts here are useful for deep learning algorithms and architectures in general. After working our way through the foundations of deep learning in part 1, it's time to build a machine learning project to see what you've learned.
Here, we'll cover strategies for quickly and efficiently getting deep learning systems working, analyzing results, and improving network performance, digging into advanced convolutional neural networks, transfer learning, and object detection.

Advanced CNN architectures

Welcome to part 2 of this book. Part 1 presented the foundations of neural network architectures and covered multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). We wrapped up part 1 with strategies for structuring your deep neural network projects and tuning their hyperparameters to improve network performance. In part 2, we will build on this foundation to develop computer vision (CV) systems that solve complex image classification and object detection problems.

In chapters 3 and 4, we talked about the main components of CNNs and about setting hyperparameters such as the number of hidden layers, the learning rate, and the optimizer. We also discussed other techniques for improving network performance, like regularization, augmentation, and dropout. In this chapter, you will see how these elements come together to build a convolutional network. I will walk you through five of the most popular CNNs, each cutting edge in its time, and you will see how their designers thought about building, training, and improving networks. We will start with LeNet, developed in 1998, which performed fairly well at recognizing handwritten characters.
This chapter covers
- Working with CNN design patterns
- Understanding the LeNet, AlexNet, VGGNet, Inception, and ResNet network architectures

You will see how CNN architectures have evolved since then to deeper CNNs like AlexNet and VGGNet, and beyond to more advanced and super-deep networks like Inception and ResNet, developed in 2014 and 2015, respectively.

For each CNN architecture, you will learn the following:

Novel features: We will explore the novel features that distinguish each network from the others and the specific problems its creators were trying to solve.

Network architecture: We will cover the architecture and components of each network and see how they come together to form the end-to-end network.

Network code implementation: We will walk step by step through the network implementations using the Keras deep learning (DL) library. The goal of this section is for you to learn how to read research papers and implement new architectures as the need arises.

Setting up learning hyperparameters: After you implement a network architecture, you need to set up the hyperparameters of the learning algorithm that you learned about in chapter 4 (optimizer, learning rate, weight decay, and so on). We will implement the learning hyperparameters as presented in each network's original research paper. In this section, you will see how performance evolved from one network to another over the years.

Network performance: Finally, you will see how each network performed on benchmark datasets like MNIST and ImageNet, as reported in its research paper.

The three main objectives of this chapter are as follows:

Understanding the architecture and learning hyperparameters of advanced CNNs. You will be implementing simpler CNNs like AlexNet and VGGNet for simple- to medium-complexity problems.
For very complex problems, you might want to use deeper networks like Inception and ResNet.

Understanding the novel features of each network and the reasons it was developed. Each succeeding CNN architecture solves a specific limitation of the previous one. After reading about the five networks in this chapter (and their research papers), you will have built a strong foundation for reading and understanding new networks as they emerge.

Learning how CNNs have evolved, and their designers' thought processes. This will help you build an instinct for what works well and what problems may arise when building your own network.

In chapter 3, you learned about the basic building blocks of CNNs: convolutional layers, pooling layers, and fully connected layers. As you will see in this chapter, much recent CV research has focused on how to put these basic building blocks together to form effective CNNs. One of the best ways to develop your intuition is to examine and learn from these architectures (similar to how most of us learned to write code by reading other people's code). To get the most out of this chapter, you are encouraged to read the research papers linked in each section before you read my explanation. What you have learned in part 1 of this book fully equips you to start reading research papers written by pioneers in the AI field. Reading and implementing research papers is by far one of the most valuable skills you will build from reading this book.

TIP Personally, I feel that going through a research paper, interpreting the crux of it, and implementing the code is a very important skill every DL enthusiast and practitioner should possess. Practically implementing research ideas brings out the author's thought process and also helps transform those ideas into real-world industry applications.
I hope that, by reading this chapter, you will get comfortable reading research papers and implementing their findings in your own work. The fast-paced evolution of this field requires us to always stay up to date with the latest research. What you learn in this book (or in other publications) now will not be the latest and greatest in three or four years, maybe even sooner. The most valuable asset I want you to take away from this book is a strong DL foundation that empowers you to get out into the real world, read the latest research, and implement it yourself. Are you ready? Let's get started!

5.1 CNN design patterns

Before we jump into the details of the common CNN architectures, we are going to look at some common design choices for CNNs. It might seem at first that there are far too many choices to make; every time we learn about something new in deep learning, we get more hyperparameters to design. So it is good to be able to narrow down our choices by looking at some common patterns created by pioneering researchers in the field, so that we can understand their motivations and start from where they left off rather than doing things completely at random:

Pattern 1, feature extraction and classification: Convolutional nets are typically composed of two parts: the feature extraction part, which consists of a series of convolutional layers, and the classification part, which consists of a series of fully connected layers (figure 5.1). This is pretty much always the case with ConvNets, from LeNet and AlexNet up to the very recent CNNs that have come out in the past few years, like Inception and ResNet.

Figure 5.1  Convolutional nets generally include a feature extraction part (convolutional layers) followed by a classification part (fully connected layers).
Pattern 2, image depth increases and dimensions decrease: The input data at each layer is an image. With each layer, we apply a new convolutional layer over a new image. This pushes us to think of an image in a more generic way. First, you see that each image is a 3D object that has a height, width, and depth. Depth is referred to as the color channel, where depth is 1 for grayscale images and 3 for color images. In the later layers, the images still have depth, but they are not colors per se: they are feature maps that represent the features extracted from the previous layers. That's why the depth increases as we go deeper through the network layers. In figure 5.2, the depth of an image is equal to 96; this represents the number of feature maps in the layer. So, that's one pattern you will always see: the image depth increases, and the dimensions decrease.

Pattern 3, fully connected layers: This generally isn't as strict a pattern as the previous two, but it's very helpful to know. Typically, all the fully connected layers in a network either have the same number of hidden units or decrease at each layer. It is rare to find a network where the number of units in the fully connected layers increases at each layer. Research has found that keeping the number of units constant doesn't hurt the neural network, so it may be a good approach if you want to limit the number of choices you have to make when designing your network: all you have to do is pick a number of units per layer and apply it to all your fully connected layers.

Now that you understand the basic CNN patterns, let's look at some architectures that have implemented them. Most of these architectures are famous because they performed well in the ImageNet competition.
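Pattern 2 can be checked with the convolution arithmetic from chapter 3: a 'same'-padded, stride-1 convolution keeps the height and width and sets the depth to the number of filters, while a 2 × 2 pooling layer halves the height and width. The toy stack below (input size and filter counts are illustrative, not taken from any particular network) shows the depth growing as the dimensions shrink:

```python
def same_conv(h, w, d, filters):
    # 'same' padding, stride 1: spatial size unchanged, depth = number of filters
    return h, w, filters

def pool_2x2(h, w, d):
    # 2x2 pooling with stride 2: spatial size halved, depth unchanged
    return h // 2, w // 2, d

shape = (32, 32, 3)            # a small color input image
for filters in (32, 64, 128):  # depth increases...
    shape = same_conv(*shape, filters)
    shape = pool_2x2(*shape)
    print(shape)               # ...while the dimensions decrease
```

After three conv/pool stages the volume has gone from 32 × 32 × 3 to 4 × 4 × 128: exactly the depth-up, dimensions-down pattern.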
Figure 5.2  Image depth increases, and the dimensions decrease. (The input image is H × W × 3 color channels {R, G, B}; after the first convolution, with a stride of 4, the volume is H × W × 96 feature maps, and max pooling plus later convolutions shrink the spatial dimensions while the depth grows to 256.)

ImageNet is a famous benchmark that contains millions of images; DL and CV researchers use the ImageNet dataset to compare algorithms. More on that later.

NOTE The snippets in this chapter are not meant to be runnable. The goal is to show you how to implement the specifications defined in a research paper. Visit the book's website (www.manning.com/books/deep-learning-for-vision-systems) or GitHub repo (https://github.com/moelgendy/deep_learning_for_vision_systems) for the full executable code.

Now, let's get started with the first network we are going to discuss in this chapter: LeNet.

5.2 LeNet-5

In 1998, LeCun et al. introduced a pioneering CNN called LeNet-5.[1] The LeNet-5 architecture is straightforward, and the components are not new to you (they were new back in 1998); you learned about convolutional, pooling, and fully connected layers in chapter 3. The architecture is composed of five weight layers, hence the name LeNet-5: three convolutional layers and two fully connected layers.

DEFINITION We refer to the convolutional and fully connected layers as weight layers because they contain trainable weights, as opposed to pooling layers, which don't contain any weights. The common convention is to use the number of weight layers to describe the depth of the network. For example, AlexNet (explained next) is said to be eight layers deep because it contains five convolutional and three fully connected layers. We care more about weight layers mainly because they reflect the model's computational complexity.
5.2.1 LeNet architecture

The architecture of LeNet-5 is shown in figure 5.3:

INPUT IMAGE ⇒ C1 ⇒ TANH ⇒ S2 ⇒ C3 ⇒ TANH ⇒ S4 ⇒ C5 ⇒ TANH ⇒ FC6 ⇒ SOFTMAX7

where C is a convolutional layer, S is a subsampling or pooling layer, and FC is a fully connected layer.

Notice that Yann LeCun and his team used tanh as an activation function instead of the currently state-of-the-art ReLU. In 1998, ReLU had not yet been used in the context of DL, and it was more common to use tanh or sigmoid as the activation function in the hidden layers. Without further ado, let's implement LeNet-5 in Keras.

[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-Based Learning Applied to Document Recognition," Proceedings of the IEEE 86 (11): 2278–2324, http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf.

5.2.2 LeNet-5 implementation in Keras

To implement LeNet-5 in Keras, read the original paper and follow the architecture information on pages 6–8. Here are the main takeaways for building the LeNet-5 network:

Number of filters in each convolutional layer: As you can see in figure 5.3 (and as defined in the paper), the depth (number of filters) of each convolutional layer is as follows: C1 has 6 filters, C3 has 16, and C5 has 120.

Kernel size of each convolutional layer: The paper specifies that kernel_size is 5 × 5.

Subsampling (pooling) layers: A subsampling (pooling) layer is added after each convolutional layer. The receptive field of each unit is a 2 × 2 area (that is, pool_size is 2). Note that the LeNet-5 creators used average pooling, which computes the average value of its inputs, instead of the max pooling layer we used in our earlier projects, which passes on the maximum value of its inputs. You can try both, if you are interested, to see the difference. For this experiment, we are going to follow the paper's architecture.
Activation function: As mentioned before, the creators of LeNet-5 used the tanh activation function for the hidden layers, because symmetric functions were believed to yield faster convergence compared to sigmoid functions (figure 5.4).

Figure 5.3  LeNet architecture: input 28 × 28 ⇒ C1 feature maps 28 × 28 × 6 ⇒ S2 feature maps 14 × 14 × 6 ⇒ C3 feature maps 10 × 10 × 16 ⇒ S4 feature maps 5 × 5 × 16 ⇒ C5 layer (120) ⇒ F6 layer (84) ⇒ output (10), alternating convolutions and subsampling, followed by full connections.

Figure 5.4  The LeNet architecture consists of convolutional kernels of size 5 × 5 (stride 1); average pooling layers (size 2, stride 2); a tanh activation function; and three fully connected layers with 120, 84, and 10 neurons, respectively.

Now let's put that in code to build the LeNet-5 architecture:

from keras.models import Sequential
from keras.layers import Conv2D, AveragePooling2D, Flatten, Dense

# Instantiate an empty sequential model
model = Sequential()

# C1 Convolutional Layer
model.add(Conv2D(filters=6, kernel_size=5, strides=1, activation='tanh',
                 input_shape=(28, 28, 1), padding='same'))

# S2 Pooling Layer
model.add(AveragePooling2D(pool_size=2, strides=2, padding='valid'))

# C3 Convolutional Layer
model.add(Conv2D(filters=16, kernel_size=5, strides=1, activation='tanh',
                 padding='valid'))

# S4 Pooling Layer
model.add(AveragePooling2D(pool_size=2, strides=2, padding='valid'))

# C5 Convolutional Layer
model.add(Conv2D(filters=120, kernel_size=5, strides=1, activation='tanh',
                 padding='valid'))

# Flatten the CNN output so we can feed it to the fully connected layers
model.add(Flatten())

# FC6 Fully Connected Layer
model.add(Dense(units=84, activation='tanh'))

# FC7 Output layer with softmax activation
model.add(Dense(units=10, activation='softmax'))

model.summary()     # prints the model summary (figure 5.5)

LeNet-5 is a small neural network by
today's standards. It has 61,706 parameters, compared to the millions of parameters in more modern networks, as you will see later in this chapter.

A note when reading the papers discussed in this chapter

When you read the LeNet-5 paper, just know that it is harder to read than the others we will cover in this chapter. Most of the ideas mentioned in this section are in sections 2 and 3 of the paper. The later sections of the paper talk about something called the graph transformer network, which isn't widely used today. So if you do try to read the paper, I recommend focusing on section 2, which covers the LeNet architecture and the learning details, and then maybe taking a quick look at section 3, which includes a number of experiments and results that are pretty interesting. I recommend starting with the AlexNet paper (discussed in section 5.3), followed by the VGGNet paper (section 5.4), and then the LeNet paper. It is a good classic to look at once you have gone over the other ones.

5.2.3 Setting up the learning hyperparameters

LeCun and his team used scheduled decay learning, where the value of the learning rate was decreased using the following schedule: 0.0005 for the first two epochs, 0.0002 for the next three epochs, 0.00005 for the next four, and 0.00001 thereafter. In the paper, the authors trained their network for 20 epochs. Let's build a lr_schedule function with this schedule.
The method takes an integer epoch number as an argument and returns the learning rate (lr):

def lr_schedule(epoch):
    # lr is 0.0005 for the first two epochs, 0.0002 for the next three
    # (3 to 5), 0.00005 for the next four (6 to 9), then 0.00001
    # thereafter (more than 9)
    if epoch <= 2:
        lr = 5e-4
    elif epoch > 2 and epoch <= 5:
        lr = 2e-4
    elif epoch > 5 and epoch <= 9:
        lr = 5e-5
    else:
        lr = 1e-5
    return lr

The model summary printed in the previous section is shown in figure 5.5.

Figure 5.5  LeNet-5 model summary

Layer (type)                    Output Shape        Param #
conv2d_1 (Conv2D)               (None, 28, 28, 6)   156
average_pooling2d_1 (Average)   (None, 14, 14, 6)   0
conv2d_2 (Conv2D)               (None, 10, 10, 16)  2416
average_pooling2d_2 (Average)   (None, 5, 5, 16)    0
conv2d_3 (Conv2D)               (None, 1, 1, 120)   48120
flatten_1 (Flatten)             (None, 120)         0
dense_1 (Dense)                 (None, 84)          10164
dense_2 (Dense)                 (None, 10)          850

Total params: 61,706
Trainable params: 61,706
Non-trainable params: 0

We use the lr_schedule function in the following code snippet to compile the model:

from keras.callbacks import ModelCheckpoint, LearningRateScheduler

lr_scheduler = LearningRateScheduler(lr_schedule)

checkpoint = ModelCheckpoint(filepath='path_to_save_file/file.hdf5',
                             monitor='val_acc',
Try to re-run this experiment with the ReLU activation function in the hidden layers, and observe the difference in the net- work performance. 5.3 AlexNet LeNet performs very well on the MNIST dataset. But it turns out that the MNIST data- set is very simple because it contains grayscale images (1 channel) and classifies into only 10 classes, which makes it an easier challenge. The main motivation behind Alex- Net was to build a deeper network that can learn more complex functions. AlexNet (figure 5.6) was the winner of the ILSVRC image classification competi- tion in 2012. Krizhevsky et al. created the neural network architecture and trained it on 1.2 million high-resolution images into 1,000 different classes of the ImageNet dataset.2 AlexNet was state of the art at its time because it was the first real “deep” net- work that opened the door for the CV community to seriously consider convolutional networks in their applications. We will explain deeper networks later in this chapter, like VGGNet and ResNet, but it is good to see how ConvNets evolved and the main drawbacks of AlexNet that were the main motivation for the later networks. As you can see in figure 5.6, AlexNet has a lot of similarities to LeNet but is much deeper (more hidden layers) and bigger (more filters per layer). They have similar building blocks: a series of convolutional and pooling layers stacked on top of each other followed by fully connected layers and a softmax. We’ve seen that LeNet has around 61,000 parameters, whereas AlexNet has about 60 million parameters and 2Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Communications of the ACM 60 (6): 84–90, https:/ /dl.acm.org/doi/10.1145/3065386 ." deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"204 CHAPTER 5Advanced CNN architectures 650,000 neurons, which gives it a larger learning capacity to understand more complex features. 
This allowed AlexNet to achieve remarkable performance in the ILSVRC image classification competition in 2012. ImageNet and ILSVRC ImageNet ( http:/ /image-net.org/index ) is a large visual database designed for use in visual object recognition software research. It is aimed at labeling and categorizing images into almost 22,000 categories based on a defined set of words and phrases. The images were collected from the web and labeled by humans using Amazon’s Mechanical Turk crowdsourcing tool. At the time of this writing, there are over 14 mil- lion images in the ImageNet project. To organize such a massive amount of data, the creators of ImageNet followed the WordNet hierarchy where each meaningful word/ phrase in WordNet is called a synonym set (synset for short). Within the ImageNet project, images are organized according to these synsets, with the goal being to have 1,000+ images per synset. The ImageNet project runs an annual software contest called the ImageNet Large Scale Visual Recognition Challenge (ILSVRC, www.image-net.org/challenges/LSVRC ), where software programs compete to correctly classify and detect objects and scenes. We will use the ILSVRC challenge as a benchmark to compare different networks’ performance.33 1313 384Input CONV1 96 3224224 11 1155 5555CONV2 CONV3 CONV4 CONV5 FC6 FC7 FC8 Input image (RGB) Stride of 4Max poolingMax poolingMax pooling25633 2727 33 13 1313 13 384 256 4096 4096Dense Dense Dense 1000 Image input 5 convolution layers 3 fully connected layers Figure 5.6 AlexNet architecture" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"205 AlexNet 5.3.1 AlexNet architecture You saw a version of the AlexNet architecture in the project at the end of chapter 3. The architecture is pretty straightforward. 
It consists of: Convolutional layers with the following kernel sizes: 11 × 11, 5 × 5, and 3 × 3 Max pooling layers for images downsampling Dropout layers to avoid overfitting Unlike LeNet, ReLU activation functions in the hidden layers and a softmax activation in the output layer AlexNet consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. The architec- ture can be represented in text as follows: INPUT IMAGE ⇒ CONV1 ⇒ POOL2 ⇒ CONV3 ⇒ POOL4 ⇒ CONV5 ⇒ CONV6 ⇒ CONV7 ⇒ POOL8 ⇒ FC9 ⇒ FC10 ⇒ SOFTMAX7 5.3.2 Novel features of AlexNet Before AlexNet, DL was starting to gain traction in speech recognition and a few other areas. But AlexNet was the milestone that convinced a lot of people in the CV community to take a serious look at DL and demonstrate that it really works in CV. AlexNet presented some novel features that were not used in previous CNNs (like LeNet). You are already familiar with all of them from the previous chapters, so we’ll go through them quickly here. RELU ACTIVATION FUNCTION AlexNet uses ReLu for the nonlinear part instead of the tanh and sigmoid functions that were the earlier standard for traditional neural networks (like LeNet). ReLu was used in the hidden layers of the AlexNet architecture because it trains much faster. This is because the derivative of the sigmoid function becomes very small in the saturating region, and therefore the updates applied to the weights almost vanish. This phenome- non is called the vanishing gradient problem . ReLU is represented by this equation: f(x) = max(0, x) It’s discussed in detail in chapter 2. The vanishing gradient problem Certain activation functions, like the sigmoid function, squish a large input space into a small input space between 0 and 1 (–1 to 1 for tanh activations). Therefore, a large change in the input of the sigmoid function causes a small change in the output. 
As a result, the derivative becomes very small:" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"206 CHAPTER 5Advanced CNN architectures DROPOUT LAYER As explained in chapter 3, dropout layers are used to prevent the neural network from overfitting. The neurons that are “dropped out” do not contribute to the forward pass and do not participate in backpropagation. This means every time an input is pre- sented, the neural network samples a different architecture, but all of these architec- tures share the same weights. This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. Therefore, the neuron is forced to learn more robust features that are useful in con- junction with many different random subsets of the other neurons. Krizhevsky et al. used dropout with a probability of 0.5 in the two fully connected layers. DATA AUGMENTATION One popular and very effective approach to avoid overfitting is to artificially enlarge the dataset using label-preserving transformations. This happens by generating new instances of the training images with transformations like image rotation, flipping, scaling, and many more. Data augmentation is explained in detail in chapter 4. LOCAL RESPONSE NORMALIZATION AlexNet uses local response normalization. It is different from the batch normaliza- tion technique (explained in chapter 4). Normalization helps to speed up conver- gence. Nowadays, batch normalization is used instead of local response normalization; we will use BN in our implementation in this chapter.(continued) We will talk more about the vanishing gradient phenomenon later in this chapter when we look at the ResNet architecture.–10 –8 –6 –4 –2 0 x2468 1 00.70.9 0.8 0.6 0.5 0.2 0.1Sigmoid Derivative of sigmoid 0.30.4 The vanishing gradient problem: a large change in the input of the sigmoid function causes a negligible change in the output." 
WEIGHT REGULARIZATION

Krizhevsky et al. used a weight decay of 0.0005. Weight decay is another term for the L2 regularization technique explained in chapter 4. This approach reduces the overfitting of the DL neural network model on the training data and allows the network to generalize better on new data:

model.add(Conv2D(32, (3,3), kernel_regularizer=l2(λ)))

The lambda (λ) value is a weight decay hyperparameter that you can tune. If you still see overfitting, you can reduce it by increasing the lambda value. In this case, Krizhevsky and his team found that a small decay value of 0.0005 was good enough for the model to learn.

TRAINING ON MULTIPLE GPUS

Krizhevsky et al. used a GTX 580 GPU with only 3 GB of memory. It was state-of-the-art at the time but not large enough to train on the 1.2 million training examples in the dataset. Therefore, the team developed a complicated way to spread the network across two GPUs. The basic idea was that many of the layers were split across two different GPUs that communicated with each other. You don't need to worry about these details today: there are far more advanced ways to train deep networks on distributed GPUs, as we will discuss later in this book.

5.3.3 AlexNet implementation in Keras

Now that you've learned the basic components of AlexNet and its novel features, let's apply them to build the AlexNet neural network. I suggest that you read the architecture description on page 4 of the original paper and follow along. As depicted in figure 5.7, the network contains eight weight layers: the first five are convolutional, and the remaining three are fully connected. The output of the last fully connected layer is fed to a 1,000-way softmax that produces a distribution over the 1,000 class labels.

NOTE AlexNet input starts with 227 × 227 × 3 images.
If you read the paper, you will notice that it refers to a dimension volume of 224 × 224 × 3 for the input images, but the numbers work out only for 227 × 227 × 3 images (figure 5.7). This could be a typing mistake in the paper.

The layers are stacked together as follows:

- CONV1 —The authors used a large kernel size (11). They also used a large stride (4), which makes the input dimensions shrink by roughly a factor of 4 (from 227 × 227 to 55 × 55). We calculate the spatial dimensions of the output as follows: (227 – 11)/4 + 1 = 55. The depth is the number of filters in the convolutional layer (96), so the output dimensions are 55 × 55 × 96.
- POOL with a filter size of 3 × 3 —This reduces the dimensions from 55 × 55 to 27 × 27: (55 – 3)/2 + 1 = 27. The pooling layer doesn't change the depth of the volume, so the output dimensions are 27 × 27 × 96.

Similarly, we can calculate the output dimensions of the remaining layers:

- CONV2 —Kernel size = 5, depth = 256, and stride = 1
- POOL —Size = 3 × 3, which downsamples its input dimensions from 27 × 27 to 13 × 13
- CONV3 —Kernel size = 3, depth = 384, and stride = 1
- CONV4 —Kernel size = 3, depth = 384, and stride = 1
- CONV5 —Kernel size = 3, depth = 256, and stride = 1
- POOL —Size = 3 × 3, which downsamples its input from 13 × 13 to 6 × 6
- Flatten layer —Flattens the dimension volume 6 × 6 × 256 to 1 × 9,216
- FC with 4,096 neurons
- FC with 4,096 neurons
- Softmax layer with 1,000 neurons
Figure 5.7 AlexNet contains eight weight layers: five convolutional and three fully connected. Two fully connected layers contain 4,096 neurons each, and the output is fed to a 1,000-neuron softmax. The per-layer dimension calculations shown in the figure are (227 – 11)/4 + 1 = 55, (55 – 3)/2 + 1 = 27, (27 + 2×2 – 5)/1 + 1 = 27, (27 – 3)/2 + 1 = 13, (13 + 2×1 – 3)/1 + 1 = 13 (three times), and (13 – 3)/2 + 1 = 6.

NOTE You might be wondering how Krizhevsky and his team decided to implement this configuration. Setting up the right values of network hyperparameters like kernel size, depth, stride, and pooling size is tedious and requires a lot of trial and error. The idea remains the same: we want to apply many weight layers to increase the model's capacity to learn more complex functions. We also need to add pooling layers in between to downsample the input dimensions, as discussed in chapter 2. With that said, setting up the exact hyperparameters is one of the challenges of CNNs. VGGNet (explained next) solves this problem by implementing a uniform layer configuration to reduce the amount of trial and error when designing your network.

Note that all of the convolutional layers are followed by a batch normalization layer, and all of the hidden layers are followed by ReLU activations.
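The dimension arithmetic above can be checked with a few lines of Python. This conv_out helper (a sketch of mine, not part of Keras) applies the standard formula (n + 2p – f)/s + 1 to trace AlexNet's spatial sizes:

```python
def conv_out(n, f, stride=1, pad=0):
    """Output spatial size of a conv/pool layer: (n + 2*pad - f) // stride + 1."""
    return (n + 2 * pad - f) // stride + 1

# Tracing the AlexNet spatial dimensions from figure 5.7:
n = conv_out(227, 11, stride=4)       # CONV1: 55
n = conv_out(n, 3, stride=2)          # POOL:  27
n = conv_out(n, 5, stride=1, pad=2)   # CONV2: 27
n = conv_out(n, 3, stride=2)          # POOL:  13
n = conv_out(n, 3, stride=1, pad=1)   # CONV3: 13
n = conv_out(n, 3, stride=1, pad=1)   # CONV4: 13
n = conv_out(n, 3, stride=1, pad=1)   # CONV5: 13
n = conv_out(n, 3, stride=2)          # POOL:  6
print(n)  # 6
```

The final 6 × 6 spatial grid with depth 256 is what gets flattened into the 9,216-element vector feeding the fully connected layers.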
Now, let's put that in code to build the AlexNet architecture:

# Imports the Keras model, layers, and regularizers
from keras.models import Sequential
from keras.regularizers import l2
from keras.layers import Conv2D, Flatten, Dense, Activation, MaxPool2D, BatchNormalization, Dropout

# Instantiates an empty sequential model
model = Sequential()

# 1st layer (CONV + pool + batchnorm)
model.add(Conv2D(filters=96, kernel_size=(11,11), strides=(4,4),
    padding='valid', input_shape=(227,227,3)))
# The activation function can be added in its own layer or within the
# Conv2D function, as we did in previous implementations.
model.add(Activation('relu'))
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2)))
model.add(BatchNormalization())

# 2nd layer (CONV + pool + batchnorm)
model.add(Conv2D(filters=256, kernel_size=(5,5), strides=(1,1),
    padding='same', kernel_regularizer=l2(0.0005)))
model.add(Activation('relu'))
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2), padding='valid'))
model.add(BatchNormalization())

# layer 3 (CONV + batchnorm)
# Note that the AlexNet authors did not add a pooling layer here.
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1),
    padding='same', kernel_regularizer=l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# layer 4 (CONV + batchnorm), similar to layer 3
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1),
    padding='same', kernel_regularizer=l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# layer 5 (CONV + batchnorm)
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1),
    padding='same', kernel_regularizer=l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2), padding='valid'))

# Flattens the CNN output to feed it to the fully connected layers
model.add(Flatten())

# layer 6 (Dense layer + dropout)
model.add(Dense(units=4096, activation='relu'))
model.add(Dropout(0.5))

# layer 7 (Dense layer + dropout)
model.add(Dense(units=4096, activation='relu'))
model.add(Dropout(0.5))

# layer 8 (softmax output layer)
model.add(Dense(units=1000, activation='softmax'))

# Prints the model summary
model.summary()

When you print the model summary, you will see that the number of total parameters is 62 million:

Total params: 62,383,848
Trainable params: 62,381,096
Non-trainable params: 2,752

NOTE Both LeNet and AlexNet have many hyperparameters to tune. The authors of those networks had to go through many experiments to set the kernel size, strides, and padding for each layer, which makes the networks harder to understand and manage. VGGNet (explained next) solves this problem with a very simple, uniform architecture.

5.3.4 Setting up the learning hyperparameters

AlexNet was trained for 90 epochs, which took six days on two Nvidia GeForce GTX 580 GPUs running simultaneously. This is why you will see that the network is split into two pipelines in the original paper. Krizhevsky et al. started with an initial learning rate of 0.01 and a momentum of 0.9. The learning rate is then divided by 10 when the validation error stops improving:

# Reduces the learning rate (by a factor of sqrt(0.1)) when the
# validation error plateaus
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1))

# Sets the SGD optimizer with lr of 0.01 and momentum of 0.9
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9)

# Compiles the model
model.compile(loss='categorical_crossentropy', optimizer=optimizer,
    metrics=['accuracy'])

# Trains the model, passing reduce_lr via the callbacks argument
model.fit(X_train, y_train, batch_size=128, epochs=90,
    validation_data=(X_test, y_test), verbose=2, callbacks=[reduce_lr])

5.3.5 AlexNet performance

AlexNet significantly outperformed all the prior competitors in the 2012 ILSVRC challenge. It achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry that year, which used other traditional classifiers. This huge improvement in performance attracted the CV community's attention to the potential of convolutional networks to solve complex vision problems and led to more advanced CNN architectures, as you will see in the following sections of this chapter.

Top-1 and top-5 error rates?
Top-1 and top-5 are terms used mostly in research papers to describe the accuracy of an algorithm on a given classification task. The top-1 error rate is the percentage of the time that the classifier did not give the correct class the highest score, and the top-5 error rate is the percentage of the time that the classifier did not include the correct class among its top five guesses.

Let's apply this in an example. Suppose there are 100 classes, and we show the network an image of a cat. The classifier outputs a score or confidence value for each class, as follows:

1 Cat: 70%
2 Dog: 20%
3 Horse: 5%
4 Motorcycle: 4%
5 Car: 0.6%
6 Plane: 0.4%

This means the classifier was able to correctly predict the true class of the image in the top-1. Try the same experiment for 100 images and observe how many times the classifier missed the true label; that's your top-1 error rate. The same idea holds for the top-5 error rate. In the example, if the true label is Horse, then the classifier missed the true label in the top-1 but caught it within the first five predicted classes (that is, the top-5).
Calculate how many times the classifier missed the true label in the top five predictions, and that's your top-5 error rate. Ideally, we want the model to always predict the correct class in the top-1. But top-5 gives a more holistic evaluation of the model's performance by measuring how close the model is to the correct prediction when it misses.

5.4 VGGNet

VGGNet was developed in 2014 by the Visual Geometry Group at Oxford University (hence the name VGG).3 Its building components are exactly the same as those in LeNet and AlexNet, except that VGGNet is an even deeper network with more convolutional, pooling, and dense layers. Other than that, no new components are introduced here.

VGGNet, also known as VGG16, consists of 16 weight layers: 13 convolutional layers and 3 fully connected layers. Its uniform architecture makes it appealing in the DL community because it is very easy to understand.

5.4.1 Novel features of VGGNet

We've seen how challenging it can be to set up CNN hyperparameters like kernel size, padding, strides, and so on. VGGNet's novel concept is a simple architecture containing uniform components (convolutional and pooling layers). It improves on AlexNet by replacing the large kernel-sized filters (11 and 5 in the first and second convolutional layers, respectively) with multiple 3 × 3 kernel-sized filters, one after another. The architecture is composed of a series of uniform convolutional building blocks followed by a unified pooling layer, where:

- All convolutional layers use 3 × 3 kernel-sized filters with a strides value of 1 and a padding value of same.
- All pooling layers have a 2 × 2 pool size and a strides value of 2.
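Because every convolutional and pooling layer follows these same two rules, an entire VGG configuration can be summarized by its per-block conv counts and filter depths. A small sketch (the vgg_plan helper is hypothetical, not from the book) expands such a spec into an ordered layer list:

```python
def vgg_plan(blocks, fc=(4096, 4096, 1000)):
    """Expand a VGG block spec into an ordered layer list.
    Each (num_convs, filters) block is num_convs 3x3 'same'-padded
    convolutions (stride 1) followed by one 2x2 max pool with stride 2."""
    layers = []
    for num_convs, filters in blocks:
        layers += [('conv3', filters)] * num_convs
        layers.append(('maxpool', None))
    layers += [('fc', units) for units in fc]
    return layers

# VGG16: 13 convolutional + 3 fully connected = 16 weight layers
vgg16 = vgg_plan([(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)])
weight_layers = sum(1 for t, _ in vgg16 if t != 'maxpool')
print(weight_layers)  # 16
```

Pooling layers carry no weights, so only the conv and FC entries count toward the "16" in VGG16.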
Simonyan and Zisserman decided to use a smaller 3 × 3 kernel to allow the network to extract finer-level features of the image compared to AlexNet's large kernels (11 × 11 and 5 × 5). The idea is that, for a given convolutional receptive field, multiple stacked smaller kernels are better than one larger kernel, because having multiple nonlinear layers increases the depth of the network, which enables it to learn more complex features at a lower cost (fewer learning parameters).

For example, in their experiments, the authors noticed that a stack of two 3 × 3 convolutional layers (without spatial pooling in between) has an effective receptive field of 5 × 5, and three 3 × 3 convolutional layers have the effect of a 7 × 7 receptive field. So, first, by using 3 × 3 convolutions with greater depth, you get the benefit of more nonlinear rectification layers (ReLU), which makes the decision function more discriminative. Second, this decreases the number of training parameters: a three-layer stack of 3 × 3 convolutions over C channels is parameterized by 3(3²C²) = 27C² weights, compared to a single 7 × 7 convolutional layer, which requires 7²C² = 49C² weights, or 81% more parameters.

3 Karen Simonyan and Andrew Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," 2014, https://arxiv.org/pdf/1409.1556v6.pdf.

This unified configuration of the convolutional and pooling components simplifies the neural network architecture, which makes it very easy to understand and implement. The VGGNet architecture is developed by stacking 3 × 3 convolutional layers, with 2 × 2 pooling layers inserted after several convolutional layers. This is followed by the traditional classifier, which is composed of fully connected layers and a softmax, as depicted in figure 5.8.
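The 27C² vs. 49C² comparison above is easy to verify with a couple of helper functions (a sketch; the function names are mine):

```python
def stacked_3x3_params(num_layers, channels):
    """Weights in a stack of num_layers 3x3 convolutions with C channels
    in and C channels out (biases ignored): num_layers * 3*3*C*C."""
    return num_layers * 3 * 3 * channels ** 2

def single_kxk_params(k, channels):
    """Weights in a single k x k convolution over C channels in and out."""
    return k * k * channels ** 2

C = 64  # illustrative channel count
three_stack = stacked_3x3_params(3, C)  # 27 * C^2
single_7x7 = single_kxk_params(7, C)    # 49 * C^2
print(single_7x7 / three_stack)         # ~1.81, i.e. 81% more parameters
```

The ratio 49/27 is independent of C, so the 81% saving holds at any channel width.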
5.4.2 VGGNet configurations

Simonyan and Zisserman created several configurations for the VGGNet architecture, as shown in figure 5.9. All of the configurations follow the same generic design. Configurations D and E are the most commonly used and are called VGG16 and VGG19, referring to their number of weight layers. Each block contains a series of 3 × 3 convolutional layers with similar hyperparameter configurations, followed by a 2 × 2 pooling layer.

Table 5.1 lists the number of learning parameters (in millions) for each configuration. VGG16 yields ~138 million parameters; VGG19, which is a deeper version of VGGNet, has more than 144 million parameters. VGG16 is more commonly used because it performs almost as well as VGG19 but with fewer parameters.

Receptive field
As explained in chapter 3, the receptive field is the effective area of the input image on which the output depends.

Figure 5.8 VGGNet-16 architecture: blocks of 3 × 3 convolutions (64, 64; 128, 128; 256 × 3; 512 × 3; 512 × 3 filters) separated by Pool/2 layers, followed by two FC 4096 layers and a 1,000-way softmax, with ReLU nonlinearities throughout.

Table 5.1 VGGNet architecture parameters (in millions)

Network:           A, A-LRN | B   | C   | D   | E
No. of parameters: 133      | 133 | 134 | 138 | 144

Figure 5.9 VGGNet architecture configurations, from A (11 weight layers) and A-LRN through B and C to D (16 weight layers) and E (19 weight layers)

VGG16 IN KERAS

Configurations D (VGG16) and E (VGG19) are the most commonly used configurations because they are deeper networks that can learn more complex functions. So, in this chapter, we will implement configuration D, which has 16 weight layers. VGG19 (configuration E) can be implemented similarly by adding a fourth convolutional layer to the third, fourth, and fifth blocks, as you can see in figure 5.9. This chapter's downloadable code includes a full implementation of both VGG16 and VGG19.

Note that Simonyan and Zisserman used the following regularization techniques to avoid overfitting:

- L2 regularization with a weight decay of 5 × 10⁻⁴. For simplicity, this is not added to the implementation that follows.
- Dropout regularization for the first two fully connected layers, with the dropout ratio set to 0.5.
The Keras code is as follows:

# Instantiates an empty sequential model
model = Sequential()

# block #1
model.add(Conv2D(filters=64, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same', input_shape=(224,224,3)))
model.add(Conv2D(filters=64, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(MaxPool2D((2,2), strides=(2,2)))

# block #2
model.add(Conv2D(filters=128, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(Conv2D(filters=128, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(MaxPool2D((2,2), strides=(2,2)))

# block #3
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(MaxPool2D((2,2), strides=(2,2)))

# block #4
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(MaxPool2D((2,2), strides=(2,2)))

# block #5
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1),
    activation='relu', padding='same'))
model.add(MaxPool2D((2,2), strides=(2,2)))

# block #6 (classifier)
model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1000, activation='softmax'))

# Prints the model summary
model.summary()

When you print the model summary, you will see that the number of total parameters is ~138 million:

Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0

5.4.3 Learning hyperparameters

Simonyan and Zisserman followed a training procedure similar to that of AlexNet: training is carried out using mini-batch gradient descent with a momentum of 0.9. The learning rate is initially set to 0.01 and then decreased by a factor of 10 when the validation set accuracy stops improving.

5.4.4 VGGNet performance

VGG16 achieved a top-5 error rate of 8.1% on the ImageNet dataset, compared to 15.3% achieved by AlexNet. VGG19 did even better: it was able to achieve a top-5 error rate of ~7.4%. It is worth noting that in spite of the larger number of parameters and the greater depth of VGGNet compared to AlexNet, VGGNet required fewer epochs to converge due to the implicit regularization imposed by its greater depth and smaller convolutional filter sizes.

5.5 Inception and GoogLeNet

The Inception network came to the world in 2014, when a group of researchers at Google published their paper "Going Deeper with Convolutions."4 The main hallmark of this architecture is building a deeper neural network while improving the utilization of the computing resources inside the network. One particular incarnation of the Inception network is called GoogLeNet and was used in the team's submission for ILSVRC 2014. It uses a network 22 layers deep (deeper than VGGNet) while reducing the number of parameters 12 times (from ~138 million to ~13 million) and achieving significantly more accurate results. The network used a CNN inspired by the classical networks (AlexNet and VGGNet) but implemented a novel element dubbed the inception module.

5.5.1 Novel features of Inception

Szegedy et al.
took a different approach when designing their network architecture. As we've seen in the previous networks, there are some architectural decisions that you need to make for each layer when you are designing a network, such as these:

- The kernel size of the convolutional layer —We've seen in previous architectures that the kernel size varies: 1 × 1, 3 × 3, 5 × 5, and, in some cases, 11 × 11 (as in AlexNet). When designing a convolutional layer, we find ourselves trying to pick and tune the kernel size of each layer so that it fits our dataset. Recall from chapter 3 that smaller kernels capture finer details of the image, whereas bigger filters leave out minute details.
- When to use the pooling layer —AlexNet uses pooling layers every one or two convolutional layers to downsize spatial features. VGGNet applies pooling after every two, three, or four convolutional layers as the network gets deeper.

Configuring the kernel size and positioning the pool layers are decisions we make mostly by trial and error, experimenting to get optimal results. Inception says, "Instead of choosing a desired filter size in a convolutional layer and deciding where to place the pooling layers, let's apply all of them together in one block and call it the inception module." That is, rather than stacking layers on top of each other as in classical architectures, Szegedy and his team suggest that we create an inception module consisting of several convolutional layers with different kernel sizes. The architecture is then developed by stacking inception modules on top of each other. Figure 5.10 shows how classical convolutional networks are architected versus the Inception network.
4 Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich, "Going Deeper with Convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1–9, 2015, http://mng.bz/YryB.

From the diagram, you can observe the following:

- In classical architectures like LeNet, AlexNet, and VGGNet, we stack convolutional and pooling layers on top of each other to build the feature extractors. At the end, we add the dense fully connected layers to build the classifier.
- In the Inception architecture, we start with a convolutional layer and a pooling layer, stack inception modules and pooling layers to build the feature extractors, and then add the regular dense classifier layers.

We've been treating the inception modules as black boxes to understand the bigger picture of the Inception architecture. Now, we will unpack the inception module to understand how it works.

5.5.2 Inception module: Naive version

The inception module is a combination of four layers:

- 1 × 1 convolutional layer
- 3 × 3 convolutional layer
- 5 × 5 convolutional layer
- 3 × 3 max-pooling layer

The outputs of these layers are concatenated into a single output volume that forms the input of the next stage. The naive representation of the inception module is shown in figure 5.11. The diagram may look a little overwhelming, but the idea is simple to understand. Let's follow along with this example:

Figure 5.10 Classical convolutional networks vs. the Inception network

1 Suppose we have an input dimensional volume from the previous layer of size 32 × 32 × 200.
2 We feed this input to four layers simultaneously:
   – 1 × 1 convolutional layer with depth = 64 and padding = same. The output of this kernel is 32 × 32 × 64.
   – 3 × 3 convolutional layer with depth = 128 and padding = same. Output = 32 × 32 × 128.
   – 5 × 5 convolutional layer with depth = 32 and padding = same. Output = 32 × 32 × 32.
   – 3 × 3 max-pooling layer with padding = same and strides = 1. Output = 32 × 32 × 32.
3 We concatenate the depths of the four outputs to create one output volume of dimensions 32 × 32 × 256.

Now we have an inception module that takes an input volume of 32 × 32 × 200 and outputs a volume of 32 × 32 × 256.

NOTE In the previous example, we use a padding value of same. In Keras, padding can be set to same or valid, as we saw in chapter 3. The same value pads the input such that the output has the same length as the original input. We do that because we want the output width and height dimensions to match the input's, and we want all four branch outputs in the inception module to have matching spatial dimensions to simplify the depth concatenation process.
Now we can just add up the depths of all the outputs to concatenate them into one output volume to be fed to the next layer in our network.

Figure 5.11 Naive representation of an inception module: the 1 × 1 (output 32 × 32 × 64), 3 × 3 (32 × 32 × 128), 5 × 5 (32 × 32 × 32), and 3 × 3 max-pooling (32 × 32 × 32) branches read the 32 × 32 × 200 input from the previous layer, and a filter concatenation merges them into a 32 × 32 × 256 output.

5.5.3 Inception module with dimensionality reduction

The naive representation of the inception module that we just saw has a big computational cost problem that comes with processing larger filters like the 5 × 5 convolutional layer.

To get a better sense of the compute problem with the naive representation, let's calculate the number of operations that will be performed for the 5 × 5 convolutional layer in the previous example. The input volume with dimensions of 32 × 32 × 200 is fed to the 5 × 5 convolutional layer of 32 filters, each of dimensions 5 × 5 × 200. This means the total number of multiplications the computer needs to perform is (32 × 32 × 200) multiplied by (5 × 5 × 32), which is more than 163 million operations. While modern computers can perform this many operations, it is still pretty expensive. This is where dimensionality reduction layers become very useful.

DIMENSIONALITY REDUCTION LAYER (1 × 1 CONVOLUTIONAL LAYER)

The 1 × 1 convolutional layer can reduce the operational cost from 163 million operations to about a tenth of that, which is why it is called a reduce layer. The idea is to add a 1 × 1 convolutional layer before the bigger kernels, like the 3 × 3 and 5 × 5 convolutional layers, to reduce their input depth, which in turn reduces the number of operations.

Let's look at an example. Suppose we have an input dimension volume of 32 × 32 × 200. We then add a 1 × 1 convolutional layer with a depth of 16.
This reduces the dimension volume from 200 to 16 channels. We can then apply the 5 × 5 convolutional layer to this output, which has much less depth (figure 5.12).

Figure 5.12 Dimensionality reduction is used to reduce the computational cost by reducing the depth of the layer: a CONV 1 × 1 bottleneck layer with 16 filters (cost: (32 × 32 × 16) × (1 × 1 × 200) = 3.2 million) feeds a CONV 5 × 5 with 32 filters (cost: (32 × 32 × 32) × (5 × 5 × 16) = 13.1 million), for a total computational cost of 16.3 million.

Notice that the 32 × 32 × 200 input is processed through the two convolutional layers and outputs a volume of dimensions 32 × 32 × 32, which is the same as that produced without applying the dimensionality reduction layer. But here, instead of processing the 5 × 5 convolutional layer on the entire 200 channels of the input volume, we take this huge volume and shrink its representation to a much smaller intermediate volume that has only 16 channels.

Now, let's look at the computational cost involved in this operation and compare it to the 163 million multiplications that we got before applying the reduce layer:

Computation = operations in the 1 × 1 convolutional layer + operations in the 5 × 5 convolutional layer
            = (32 × 32 × 16) × (1 × 1 × 200) + (32 × 32 × 32) × (5 × 5 × 16)
            = 3.2 million + 13.1 million

The total number of multiplications in this operation is 16.3 million, which is a tenth of the 163 million multiplications that we calculated without the reduce layers.
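The cost arithmetic above can be reproduced with a small sketch (the conv_cost helper is mine, not from the book); it counts one kernel-sized dot product per output element:

```python
def conv_cost(out_h, out_w, out_depth, k_h, k_w, in_depth):
    """Multiplications in a convolutional layer: one k_h x k_w x in_depth
    dot product per output element."""
    return out_h * out_w * out_depth * k_h * k_w * in_depth

# Naive module: 5x5 conv (32 filters) applied directly to the 32 x 32 x 200 input
naive = conv_cost(32, 32, 32, 5, 5, 200)
print(naive)  # 163,840,000 (~163 million)

# With the 1x1 reduce layer (depth 200 -> 16) in front of the 5x5 conv
reduce_1x1 = conv_cost(32, 32, 16, 1, 1, 200)  # ~3.2 million
conv_5x5 = conv_cost(32, 32, 32, 5, 5, 16)     # ~13.1 million
print(reduce_1x1 + conv_5x5)  # 16,384,000 (~16.3 million, a tenth of the naive cost)
```

The output volume is 32 × 32 × 32 either way; only the amount of arithmetic needed to produce it changes.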
The 1 × 1 convolutional layer
The idea of the 1 × 1 convolutional layer is that it preserves the spatial dimensions (height and width) of the input volume but changes the number of channels of the volume (depth). For example, convolving a 6 × 6 × 32 input with 1 × 1 filters produces a 6 × 6 × (number of filters) output: the spatial dimensions are preserved, but the depth changes.

1 × 1 convolutional layers are also known as bottleneck layers, because the bottleneck is the smallest part of a bottle, and reduce layers reduce the dimensionality of the network, making it look like a bottleneck.

IMPACT OF DIMENSIONALITY REDUCTION ON NETWORK PERFORMANCE

You might be wondering whether shrinking the representation size so dramatically hurts the performance of the neural network. Szegedy et al. ran experiments and found that as long as you implement the reduce layer in moderation, you can shrink the representation size significantly without hurting performance, and save a lot of computation.

Now, let's put the reduce layers into action and build a new inception module with dimensionality reduction. To do that, we keep the same concept of concatenating the four branches from the naive representation. We add a 1 × 1 convolutional reduce layer before the 3 × 3 and 5 × 5 convolutional layers to reduce their computational cost. We also add a 1 × 1 convolutional layer after the 3 × 3 max-pooling layer, because pooling layers don't reduce the depth of their inputs, so we need to apply the reduce layer to their output before the concatenation (figure 5.13).

We add dimensionality reduction prior to bigger convolutional layers to allow for increasing the number of units at each stage significantly without an uncontrolled blowup in computational complexity at later stages.
Furthermore, the design follows the practical intuition that visual information should be processed at various scales and then aggregated so that the next stage can abstract features from the different scales simultaneously.

RECAP OF INCEPTION MODULES
To summarize, if you are building a layer of a neural network and you don't want to have to decide what filter size to use in the convolutional layers or when to add pooling layers, the inception module lets you use them all and concatenate the depth of all the outputs. This is called the naive representation of the inception module.

Figure 5.13 Building an inception module with dimensionality reduction: the 1 × 1, 3 × 3, and 5 × 5 convolutions and the 3 × 3 max pooling all take the previous layer as input (with 1 × 1 reduce layers before the 3 × 3 and 5 × 5 convolutions and after the pooling), and their outputs are depth-concatenated.

We then run into the problem of computational cost that comes with using large filters. Here, we use a 1 × 1 convolutional layer called the reduce layer that reduces the computational cost significantly. We add reduce layers before the 3 × 3 and 5 × 5 convolutional layers and after the max-pooling layer to create an inception module with dimensionality reduction.

5.5.4 Inception architecture
Now that we understand the components of the inception module, we are ready to build the Inception network architecture. We use the dimension reduction representation of the inception module, stack inception modules on top of each other, and add a 3 × 3 pooling layer in between for downsampling, as shown in figure 5.14. We can stack as many inception modules as we want to build a very deep convolutional network.
In the original paper, the team built a specific incarnation of the inception module and called it GoogLeNet. They used this network in their submission for the ILSVRC 2014 competition.

Figure 5.14 We build the Inception network by adding a stack of inception modules (each ending in a DepthConcat layer) on top of each other, with 3 × 3 max-pooling layers in between.

The GoogLeNet architecture is shown in figure 5.15. As you can see, GoogLeNet uses a stack of a total of nine inception modules and a max pooling layer every several blocks to reduce dimensionality. To simplify this implementation, we are going to break down the GoogLeNet architecture into three parts:

Part A: Identical to the AlexNet and LeNet architectures; contains a series of convolutional and pooling layers.
Part B: Contains nine inception modules stacked as follows: two inception modules + pooling layer + five inception modules + pooling layer + two inception modules.
Part C: The classifier part of the network, consisting of the fully connected and softmax layers.

Figure 5.15 The full GoogLeNet model consists of three parts: the first part has the classical CNN architecture like AlexNet and LeNet, the second part is a stack of inception modules separated by 3 × 3 max pooling layers, and the third part is the traditional fully connected classifier.

5.5.5 GoogLeNet in Keras
Now, let's implement the GoogLeNet architecture in Keras (figure 5.16). Notice that the inception module takes the features from the previous module as input, passes them through four routes, concatenates the depth of the output of all four routes, and then passes the concatenated output to the next module. The four routes are as follows:

1 × 1 convolutional layer
1 × 1 convolutional layer + 3 × 3 convolutional layer
1 × 1 convolutional layer + 5 × 5 convolutional layer
3 × 3 pooling layer + 1 × 1 convolutional layer

First we'll build the inception_module function.
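Before the Keras version, the depth-concatenation step itself can be checked in isolation with plain NumPy (a sketch of the idea, not the book's code). All four routes keep the same spatial grid, so concatenation simply adds up the depths; the shapes below are inception 3a's:

```python
import numpy as np

# Outputs of the four routes of inception 3a: identical 28 x 28 spatial
# grids with depths 64, 128, 32, and 32 (one entry per route's filter count)
routes = [np.zeros((28, 28, depth)) for depth in (64, 128, 32, 32)]

# Depth concatenation: stack along the channel axis
output = np.concatenate(routes, axis=2)
print(output.shape)  # (28, 28, 256): depths add up, spatial size is unchanged
```

In the Keras code, the same operation is concatenate([...], axis=3); the axis is 3 rather than 2 because Keras tensors carry a leading batch dimension.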
It takes the number of filters of each convolutional layer as an argument and returns the concatenated output:

def inception_module(x, filters_1x1,
                     filters_3x3_reduce, filters_3x3,
                     filters_5x5_reduce, filters_5x5,
                     filters_pool_proj, name=None):
    # 1 x 1 route: takes its input directly from the previous layer
    conv_1x1 = Conv2D(filters_1x1, kernel_size=(1, 1), padding='same',
                      activation='relu', kernel_initializer=kernel_init,
                      bias_initializer=bias_init)(x)

    # 3 x 3 route = 1 x 1 CONV + 3 x 3 CONV
    pre_conv_3x3 = Conv2D(filters_3x3_reduce, kernel_size=(1, 1),
                          padding='same', activation='relu',
                          kernel_initializer=kernel_init,
                          bias_initializer=bias_init)(x)
    conv_3x3 = Conv2D(filters_3x3, kernel_size=(3, 3), padding='same',
                      activation='relu', kernel_initializer=kernel_init,
                      bias_initializer=bias_init)(pre_conv_3x3)

    # 5 x 5 route = 1 x 1 CONV + 5 x 5 CONV
    pre_conv_5x5 = Conv2D(filters_5x5_reduce, kernel_size=(1, 1),
                          padding='same', activation='relu',
                          kernel_initializer=kernel_init,
                          bias_initializer=bias_init)(x)
    conv_5x5 = Conv2D(filters_5x5, kernel_size=(5, 5), padding='same',
                      activation='relu', kernel_initializer=kernel_init,
                      bias_initializer=bias_init)(pre_conv_5x5)

    # pool route = POOL + 1 x 1 CONV
    pool_proj = MaxPool2D((3, 3), strides=(1, 1), padding='same')(x)
    pool_proj = Conv2D(filters_pool_proj, (1, 1), padding='same',
                       activation='relu', kernel_initializer=kernel_init,
                       bias_initializer=bias_init)(pool_proj)

    # Concatenate the depth of the four routes
    output = concatenate([conv_1x1, conv_3x3, conv_5x5, pool_proj],
                         axis=3, name=name)
    return output

Figure 5.16 The inception module of GoogLeNet

GOOGLENET ARCHITECTURE
Now that the inception_module
function is ready, let's build the GoogLeNet architecture from figure 5.16. To get the values of the inception_module function's arguments, we will go through figure 5.17, which represents the hyperparameters set up as implemented by Szegedy et al. in the original paper. (Note that "#3 × 3 reduce" and "#5 × 5 reduce" in the figure represent the 1 × 1 filters in the reduction layers that are used before the 3 × 3 and 5 × 5 convolutional layers.)

Figure 5.17 Hyperparameters implemented by Szegedy et al. in the original Inception paper:

type            patch size/  output size  depth  #1x1  #3x3    #3x3  #5x5    #5x5  pool  params  ops
                stride                                 reduce        reduce        proj
convolution     7x7/2        112x112x64   1                                             2.7K    34M
max pool        3x3/2        56x56x64     0
convolution     3x3/1        56x56x192    2            64      192                      112K    360M
max pool        3x3/2        28x28x192    0
inception (3a)               28x28x256    2      64    96      128   16      32    32   159K    128M
inception (3b)               28x28x480    2      128   128     192   32      96    64   380K    304M
max pool        3x3/2        14x14x480    0
inception (4a)               14x14x512    2      192   96      208   16      48    64   364K    73M
inception (4b)               14x14x512    2      160   112     224   24      64    64   437K    88M
inception (4c)               14x14x512    2      128   128     256   24      64    64   463K    100M
inception (4d)               14x14x528    2      112   144     288   32      64    64   580K    119M
inception (4e)               14x14x832    2      256   160     320   32      128   128  840K    170M
max pool        3x3/2        7x7x832      0
inception (5a)               7x7x832      2      256   160     320   32      128   128  1072K   54M
inception (5b)               7x7x1024     2      384   192     384   48      128   128  1388K   71M
avg pool        7x7/1        1x1x1024     0
dropout (40%)                1x1x1024     0
linear                       1x1x1000     1                                             1000K   1M
softmax                      1x1x1000     0

Now, let's go through the implementations of parts A, B, and C.

PART A: BUILDING THE BOTTOM PART OF THE NETWORK
Let's build the bottom part of the network. This part consists of a 7 × 7 convolutional layer ⇒ 3 × 3 pooling layer ⇒ 1 × 1 convolutional layer ⇒ 3 × 3 convolutional layer ⇒ 3 × 3 pooling layer, as you can see in figure 5.18.
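These stride-2 layers determine the spatial size that the first inception modules see: with 'same' padding, a layer's output spatial size is ceil(input / stride). We can track how part A shrinks the 224 × 224 input by hand (the helper below is mine, not from the book):

```python
import math

def same_padding_size(size, stride):
    """Output spatial size of a conv/pool layer with 'same' padding."""
    return math.ceil(size / stride)

size = 224
for layer, stride in [('conv 7x7/2', 2), ('max pool 3x3/2', 2),
                      ('conv 1x1/1', 1), ('conv 3x3/1', 1),
                      ('max pool 3x3/2', 2)]:
    size = same_padding_size(size, stride)
    print(layer, size)
# 224 -> 112 -> 56 -> 56 -> 56 -> 28: inception 3a receives 28 x 28 feature
# maps, matching the 28x28x192 row of the hyperparameter table
```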
In the original network, a LocalResponseNorm layer (local response normalization, as in AlexNet) is used to help speed up convergence. Nowadays, batch normalization is used instead. Here is the Keras code for part A:

# input layer with size = 224 x 224 x 3
input_layer = Input(shape=(224, 224, 3))

kernel_init = keras.initializers.glorot_uniform()
bias_init = keras.initializers.Constant(value=0.2)

x = Conv2D(64, (7, 7), padding='same', strides=(2, 2), activation='relu',
           name='conv_1_7x7/2', kernel_initializer=kernel_init,
           bias_initializer=bias_init)(input_layer)
x = MaxPool2D((3, 3), padding='same', strides=(2, 2),
              name='max_pool_1_3x3/2')(x)
x = BatchNormalization()(x)
x = Conv2D(64, (1, 1), padding='same', strides=(1, 1), activation='relu')(x)
x = Conv2D(192, (3, 3), padding='same', strides=(1, 1), activation='relu')(x)
x = BatchNormalization()(x)
x = MaxPool2D((3, 3), padding='same', strides=(2, 2))(x)

Figure 5.18 The bottom part of the network: CONV 7 × 7 + 2(S) ⇒ MaxPool 3 × 3 + 2(S) ⇒ LocalRespNorm ⇒ CONV 1 × 1 + 1(V) ⇒ CONV 3 × 3 + 1(S) ⇒ LocalRespNorm ⇒ MaxPool 3 × 3 + 2(S)

PART B: BUILDING THE INCEPTION MODULES AND MAX-POOLING LAYERS
To build inception modules 3a and 3b and the first max-pooling layer, we use table 5.2 to start.
Table 5.2 Inception modules 3a and 3b

Type            #1 × 1   #3 × 3 reduce   #3 × 3   #5 × 5 reduce   #5 × 5   Pool proj
Inception (3a)  64       96              128      16              32       32
Inception (3b)  128      128             192      32              96       64

The code is as follows:

x = inception_module(x, filters_1x1=64, filters_3x3_reduce=96,
                     filters_3x3=128, filters_5x5_reduce=16, filters_5x5=32,
                     filters_pool_proj=32, name='inception_3a')

x = inception_module(x, filters_1x1=128, filters_3x3_reduce=128,
                     filters_3x3=192, filters_5x5_reduce=32, filters_5x5=96,
                     filters_pool_proj=64, name='inception_3b')

x = MaxPool2D((3, 3), padding='same', strides=(2, 2))(x)

Similarly, let's create inception modules 4a, 4b, 4c, 4d, and 4e and the max-pooling layer:

x = inception_module(x, filters_1x1=192, filters_3x3_reduce=96,
                     filters_3x3=208, filters_5x5_reduce=16, filters_5x5=48,
                     filters_pool_proj=64, name='inception_4a')

x = inception_module(x, filters_1x1=160, filters_3x3_reduce=112,
                     filters_3x3=224, filters_5x5_reduce=24, filters_5x5=64,
                     filters_pool_proj=64, name='inception_4b')

x = inception_module(x, filters_1x1=128, filters_3x3_reduce=128,
                     filters_3x3=256, filters_5x5_reduce=24, filters_5x5=64,
                     filters_pool_proj=64, name='inception_4c')

x = inception_module(x, filters_1x1=112, filters_3x3_reduce=144,
                     filters_3x3=288, filters_5x5_reduce=32, filters_5x5=64,
                     filters_pool_proj=64, name='inception_4d')

x = inception_module(x, filters_1x1=256, filters_3x3_reduce=160,
                     filters_3x3=320, filters_5x5_reduce=32, filters_5x5=128,
                     filters_pool_proj=128, name='inception_4e')

x = MaxPool2D((3, 3), padding='same', strides=(2, 2),
              name='max_pool_4_3x3/2')(x)

Now, let's create modules 5a and 5b:

x = inception_module(x, filters_1x1=256, filters_3x3_reduce=160,
                     filters_3x3=320, filters_5x5_reduce=32, filters_5x5=128,
                     filters_pool_proj=128, name='inception_5a')

x = inception_module(x, filters_1x1=384, filters_3x3_reduce=192,
                     filters_3x3=384, filters_5x5_reduce=48,
filters_5x5=128, filters_pool_proj=128, name='inception_5b')

PART C: BUILDING THE CLASSIFIER PART
In their experiments, Szegedy et al. found that adding a 7 × 7 average pooling layer improved the top-1 accuracy by about 0.6%. They then added a dropout layer with 40% probability to reduce overfitting:

x = AveragePooling2D(pool_size=(7, 7), strides=1, padding='valid')(x)
x = Dropout(0.4)(x)
x = Dense(10, activation='softmax', name='output')(x)

5.5.6 Learning hyperparameters
The team used an SGD optimizer with momentum of 0.9. They also implemented a fixed learning rate decay schedule: a 4% decrease every 8 epochs. An example of how to implement training specifications similar to the paper's is as follows:

epochs = 25
initial_lrate = 0.01

# Implements the learning rate decay function
def decay(epoch, steps=100):
    initial_lrate = 0.01
    drop = 0.96
    epochs_drop = 8
    lrate = initial_lrate * math.pow(drop,
                                     math.floor((1 + epoch) / epochs_drop))
    return lrate

lr_schedule = LearningRateScheduler(decay, verbose=1)

sgd = SGD(lr=initial_lrate, momentum=0.9, nesterov=False)

model.compile(loss='categorical_crossentropy', optimizer=sgd,
              metrics=['accuracy'])

model.fit(X_train, y_train, batch_size=256, epochs=epochs,
          validation_data=(X_test, y_test), callbacks=[lr_schedule],
          verbose=2, shuffle=True)

5.5.7 Inception performance on the CIFAR dataset
GoogLeNet was the winner of the ILSVRC 2014 competition. It achieved a top-5 error rate of 6.67%, which was very close to human-level performance and much better than previous CNNs like AlexNet and VGGNet.

5.6 ResNet
The Residual Neural Network (ResNet) was developed in 2015 by a group from the Microsoft Research team.5 They introduced a novel residual module architecture with skip connections. The network also features heavy batch normalization for the hidden layers.
This technique allowed the team to train very deep neural networks with 50, 101, and 152 weight layers while still having lower complexity than smaller networks like VGGNet (19 layers). ResNet was able to achieve a top-5 error rate of 3.57% in the ILSVRC 2015 competition, which beat the performance of all prior ConvNets.

5.6.1 Novel features of ResNet
Looking at how neural network architectures evolved from LeNet, AlexNet, VGGNet, and Inception, you might have noticed that the deeper the network, the larger its learning capacity, and the better it extracts features from images. This mainly happens because very deep networks are able to represent very complex functions, which allows the network to learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers).

Earlier in this chapter, we saw deep neural networks like VGGNet-19 (19 layers) and GoogLeNet (22 layers). Both performed very well in the ImageNet challenge. But can we build even deeper networks? We learned from chapter 4 that one downside of adding too many layers is that doing so makes the network more prone to overfit the training data. This is not a major problem, because we can use regularization techniques like dropout, L2 regularization, and batch normalization to avoid overfitting. So, if we can take care of the overfitting problem, wouldn't we want to build networks that are 50, 100, or even 150 layers deep?

The answer is yes. We definitely should try to build very deep neural networks. We need to fix just one other problem to unblock the capability of building super-deep networks: a phenomenon called vanishing gradients.

5 Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, "Deep Residual Learning for Image Recognition," 2015, http://arxiv.org/abs/1512.03385.

Vanishing and exploding gradients
The problem with very deep networks is that the signal required to change the weights becomes very small at earlier layers.
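A toy numeric illustration of this (mine, not from the book): if every layer rescales the backpropagated error signal by roughly a constant factor, the signal reaching the first layers shrinks, or grows, exponentially with depth.

```python
def signal_at_first_layer(factor_per_layer, n_layers, signal_at_output=1.0):
    """Toy model of backpropagation: the error signal is rescaled
    once per layer on its way back from the output to the input."""
    signal = signal_at_output
    for _ in range(n_layers):
        signal *= factor_per_layer
    return signal

print(signal_at_first_layer(0.9, 50))  # ~0.005: vanishing gradient
print(signal_at_first_layer(1.1, 50))  # ~117: exploding gradient
```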
To understand why, let's consider the gradient descent process explained in chapter 2. As the network backpropagates the gradient of the error from the final layer back to the first layer, it is multiplied by the weight matrix at each step; thus the gradient can decrease exponentially quickly to zero, leading to a vanishing gradient phenomenon that prevents the earlier layers from learning. As a result, the network's performance gets saturated or even starts to degrade rapidly. In other cases, the gradient grows exponentially quickly and "explodes" to take very large values. This phenomenon is called exploding gradients.

To solve the vanishing gradient problem, He et al. created a shortcut that allows the gradient to be directly backpropagated to earlier layers. These shortcuts are called skip connections: they are used to flow information from earlier layers in the network to later layers, creating an alternate shortcut path for the gradient to flow through. Another important benefit of the skip connections is that they allow the model to learn an identity function, which ensures that the layer will perform at least as well as the previous layer (figure 5.19).

At left in figure 5.19 is the traditional stacking of convolutional layers one after the other. On the right, we still stack convolutional layers as before, but we also add the original input to the output of the convolutional block. This is a skip connection. We then add both signals: skip connection + main path.

Note that the shortcut arrow points to the end of the second convolutional layer, not after it. The reason is that we add both paths before we apply the ReLU activation function of this layer. As you can see in figure 5.20, the x signal is passed along the shortcut path and then added to the main path, f(x).
Then, we apply the ReLU activation to f(x) + x to produce the output signal: relu(f(x) + x).

Figure 5.19 Traditional network without skip connections (left); network with a skip connection (right)

Figure 5.20 Adding the paths (shortcut path = x, main path = f(x)) and applying the ReLU activation function, relu(f(x) + x), to solve the vanishing gradient problem that usually comes with very deep networks

The code implementation of the skip connection is straightforward:

X_shortcut = X                 # stores the shortcut value, equal to the input x

# Main path operations: CONV + ReLU + CONV
X = Conv2D(filters=F1, kernel_size=(3, 3), strides=(1, 1))(X)
X = Activation('relu')(X)
X = Conv2D(filters=F1, kernel_size=(3, 3), strides=(1, 1))(X)

X = Add()([X, X_shortcut])     # adds both paths together
X = Activation('relu')(X)      # applies the ReLU activation function

This combination of the skip connection and convolutional layers is called a residual block. Similar to the Inception network, ResNet is composed of a series of these residual blocks that are stacked on top of each other (figure 5.21). From the figure, you can observe the following:

Feature extractors: To build the feature extractor part of ResNet, we start with a convolutional layer and a pooling layer and then stack residual blocks on top of each other to build the network. When we are designing our ResNet network, we can add as many residual blocks as we want to build even deeper networks.

Figure 5.21 Classical CNN architecture (left).
The Inception network consists of a set of inception modules (middle). The residual network consists of a set of residual blocks (right).

Classifiers: The classification part is still the same as we learned for other networks: fully connected layers followed by a softmax.

Now that you know what a skip connection is and you are familiar with the high-level architecture of ResNet, let's unpack residual blocks to understand how they work.

5.6.2 Residual blocks
A residual module consists of two branches:

Shortcut path (figure 5.22): Connects the input to an addition with the second branch.
Main path: A series of convolutions and activations. The main path consists of three convolutional layers with ReLU activations. We also add batch normalization to each convolutional layer to reduce overfitting and speed up training. The main path architecture looks like this: [CONV ⇒ BN ⇒ ReLU] × 3.

Similar to what we explained earlier, the shortcut path is added to the main path right before the activation function of the last convolutional layer. Then we apply the ReLU function after adding the two paths.

Notice that there are no pooling layers in the residual block. Instead, He et al. decided to do dimension downsampling using bottleneck 1 × 1 convolutional layers, similar to the Inception network. So, each residual block starts with a 1 × 1 convolutional layer that downsamples the input depth, followed by a 3 × 3 convolutional layer and another 1 × 1 convolutional layer that restores the output depth. This is a good technique to keep control of the volume dimensions across many layers. This configuration is called a bottleneck residual block.

When we are stacking residual blocks on top of each other, the volume dimensions change from one block to another.
And as you might recall from the matrices introduction in chapter 2, to be able to perform matrix addition operations, the matrices should have similar dimensions. To fix this problem, we need to downsample the shortcut path as well before merging both paths.

Figure 5.22 The output of the main path is added to the input value through the shortcut before they are fed to the ReLU function.

We do that by adding a bottleneck layer (1 × 1 convolutional layer + batch normalization) to the shortcut path, as shown in figure 5.23. This is called the reduce shortcut.

Before we jump into the code implementation, let's recap the discussion of residual blocks:

Residual blocks contain two paths: the shortcut path and the main path.
The main path consists of three convolutional layers, and we add a batch normalization layer to each of them:
– 1 × 1 convolutional layer
– 3 × 3 convolutional layer
– 1 × 1 convolutional layer
There are two ways to implement the shortcut path:
– Regular shortcut: Add the input dimensions to the main path.
– Reduce shortcut: Add a convolutional layer in the shortcut path before merging with the main path.

When we are implementing the ResNet network, we will use both regular and reduce shortcuts. This will be clearer when you see the full implementation. But for now, we will implement a bottleneck_residual_block function that takes a reduce Boolean argument. When reduce is True, this means we want to use the reduce shortcut; otherwise, it will implement the regular shortcut.
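To see why the reduce shortcut is necessary, it helps to track shapes by hand. A small sketch (the helper is mine and ignores padding details), using the numbers of the first block of stage 3, which receives a 56 × 56 × 256 volume:

```python
def conv_output_shape(h, w, n_filters, stride=1):
    """Shape after a conv layer: the stride divides the spatial size,
    and the filter count sets the output depth."""
    return (h // stride, w // stride, n_filters)

# Main path of the reduce block: strided 1x1 (128), 3x3 (128), 1x1 (512)
shape = conv_output_shape(56, 56, 128, stride=2)  # (28, 28, 128)
shape = conv_output_shape(*shape[:2], 128)        # (28, 28, 128)
shape = conv_output_shape(*shape[:2], 512)        # (28, 28, 512)

# The raw shortcut is still (56, 56, 256), so Add() would fail; the reduce
# shortcut's strided 1x1 conv fixes both the spatial size and the depth:
shortcut = conv_output_shape(56, 56, 512, stride=2)
print(shape == shortcut)  # True
```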
The bottleneck_residual_block function takes the following arguments:

X: Input tensor of shape (number of samples, height, width, channel)
kernel_size: Integer specifying the size of the middle convolutional layer's window for the main path
filters: Python list of integers defining the number of filters in the convolutional layers of the main path
reduce: Boolean; True identifies the reduce shortcut
s: Integer (strides)

The function returns X: the output of the residual block, which is a tensor of shape (height, width, channel).

Figure 5.23 To reduce the input dimensionality, we add a bottleneck layer (1 × 1 convolutional layer + batch normalization) to the shortcut path. This is called the reduce shortcut.
The function is as follows:

def bottleneck_residual_block(X, kernel_size, filters, reduce=False, s=2):
    # Unpack the tuple to retrieve the filters of each convolutional layer
    F1, F2, F3 = filters

    # Save the input value to add back to the main path later
    X_shortcut = X

    if reduce:
        # To reduce the spatial size, apply a 1 x 1 convolutional layer to
        # the shortcut path; both convolutions must use the same strides
        X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1),
                            strides=(s, s))(X_shortcut)
        X_shortcut = BatchNormalization(axis=3)(X_shortcut)

        # First component of main path: strides match the shortcut strides
        X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s),
                   padding='valid')(X)
        X = BatchNormalization(axis=3)(X)
        X = Activation('relu')(X)
    else:
        # First component of main path
        X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1),
                   padding='valid')(X)
        X = BatchNormalization(axis=3)(X)
        X = Activation('relu')(X)

    # Second component of main path
    X = Conv2D(filters=F2, kernel_size=kernel_size, strides=(1, 1),
               padding='same')(X)
    X = BatchNormalization(axis=3)(X)
    X = Activation('relu')(X)

    # Third component of main path
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1),
               padding='valid')(X)
    X = BatchNormalization(axis=3)(X)

    # Final step: add the shortcut value to the main path,
    # then pass the sum through a ReLU activation
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    return X

5.6.3 ResNet implementation in Keras
You've learned a lot about residual blocks so far. Let's add these blocks on top of each other to build the full ResNet architecture. Here, we will implement ResNet50: a version of the ResNet architecture that contains 50 weight layers (hence the name). You
can use the same approach to develop ResNet with 18, 34, 101, and 152 layers by following the architecture in figure 5.24 from the original paper.

We know from the previous section that each residual module contains three convolutional layers, and we now can compute the total number of weight layers inside the ResNet50 network as follows:

Stage 1: 7 × 7 convolutional layer = 1 layer
Stage 2: 3 residual blocks, each containing [1 × 1 convolutional layer + 3 × 3 convolutional layer + 1 × 1 convolutional layer] = 9 convolutional layers
Stage 3: 4 residual blocks = total of 12 convolutional layers
Stage 4: 6 residual blocks = total of 18 convolutional layers
Stage 5: 3 residual blocks = total of 9 convolutional layers
Fully connected softmax layer = 1 layer

When we sum all these layers together, we get a total of 50 weight layers that describe the architecture of ResNet50. Similarly, you can compute the number of weight layers in the other ResNet versions.
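The same arithmetic extends to the other variants in figure 5.24: each bottleneck block contributes 3 weight layers (the plain blocks of ResNet18/34 contribute 2), plus the initial 7 × 7 convolution and the final fully connected layer. A quick check (the helper name is mine):

```python
def resnet_depth(blocks_per_stage, layers_per_block=3):
    """conv1 + (blocks x conv layers across stages 2-5) + the final FC layer."""
    return 1 + layers_per_block * sum(blocks_per_stage) + 1

print(resnet_depth([3, 4, 6, 3]))                      # 50  (ResNet50)
print(resnet_depth([3, 4, 23, 3]))                     # 101 (ResNet101)
print(resnet_depth([3, 8, 36, 3]))                     # 152 (ResNet152)
print(resnet_depth([2, 2, 2, 2], layers_per_block=2))  # 18  (ResNet18)
print(resnet_depth([3, 4, 6, 3], layers_per_block=2))  # 34  (ResNet34)
```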
NOTE In the following implementation, we use the residual block with reduce shortcut at the beginning of each stage to reduce the spatial size of the output from the previous layer. Then we use the regular shortcut for the remaining layers of that stage. Recall from our implementation of the bottleneck_residual_block function that we will set the argument reduce to True to apply the reduce shortcut.

Figure 5.24 Architecture of several ResNet variations from the original paper. Every variant starts with a 7 × 7, 64-filter convolution with stride 2 (output 112 × 112) followed by a 3 × 3 max pool with stride 2, and ends with average pooling, a 1000-d fully connected layer, and softmax. For the 50-layer network implemented next, the stages in between are:

Layer name   Output size   50-layer blocks
conv1        112 x 112     7 x 7, 64, stride 2
             56 x 56       3 x 3 max pool, stride 2
conv2_x      56 x 56       [1 x 1, 64 | 3 x 3, 64 | 1 x 1, 256] x 3
conv3_x      28 x 28       [1 x 1, 128 | 3 x 3, 128 | 1 x 1, 512] x 4
conv4_x      14 x 14       [1 x 1, 256 | 3 x 3, 256 | 1 x 1, 1024] x 6
conv5_x      7 x 7         [1 x 1, 512 | 3 x 3, 512 | 1 x 1, 2048] x 3
             1 x 1         average pool, 1000-d fc, softmax

The other variants use the same stages with different block types and repeat counts: the 18- and 34-layer networks use two-layer [3 x 3, 3 x 3] blocks repeated (2, 2, 2, 2) and (3, 4, 6, 3) times respectively, while the 101- and 152-layer networks use the bottleneck blocks repeated (3, 4, 23, 3) and (3, 8, 36, 3) times. FLOPs range from 1.8 x 10^9 (18-layer) to 11.3 x 10^9 (152-layer).

Now let's follow the 50-layer architecture from figure 5.24 to build the ResNet50 network.
We build a ResNet50 function that takes input_shape and classes as arguments and outputs the model:

def ResNet50(input_shape, classes):
    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Stage 1
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(X_input)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = bottleneck_residual_block(X, 3, [64, 64, 256], reduce=True, s=1)
    X = bottleneck_residual_block(X, 3, [64, 64, 256])
    X = bottleneck_residual_block(X, 3, [64, 64, 256])

    # Stage 3
    X = bottleneck_residual_block(X, 3, [128, 128, 512], reduce=True, s=2)
    X = bottleneck_residual_block(X, 3, [128, 128, 512])
    X = bottleneck_residual_block(X, 3, [128, 128, 512])
    X = bottleneck_residual_block(X, 3, [128, 128, 512])

    # Stage 4
    X = bottleneck_residual_block(X, 3, [256, 256, 1024], reduce=True, s=2)
    X = bottleneck_residual_block(X, 3, [256, 256, 1024])
    X = bottleneck_residual_block(X, 3, [256, 256, 1024])
    X = bottleneck_residual_block(X, 3, [256, 256, 1024])
    X = bottleneck_residual_block(X, 3, [256, 256, 1024])
    X = bottleneck_residual_block(X, 3, [256, 256, 1024])

    # Stage 5
    X = bottleneck_residual_block(X, 3, [512, 512, 2048], reduce=True, s=2)
    X = bottleneck_residual_block(X, 3, [512, 512, 2048])
    X = bottleneck_residual_block(X, 3, [512, 512, 2048])

    # AVGPOOL
    X = AveragePooling2D((1, 1))(X)

    # Output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes))(X)

    # Create the model
    model = Model(inputs=X_input, outputs=X, name='ResNet50')

    return model

5.6.4 Learning hyperparameters
He et al. followed a training procedure similar to that of AlexNet: the training is carried out using mini-batch gradient descent with momentum of 0.9.
The team set the learning rate to start with a value of 0.1 and then decreased it by a factor of 10 when the validation error stopped improving. They also used L2 regularization with a weight decay of 0.0001 (not implemented in this chapter for simplicity). As you saw in the earlier implementation, they used batch normalization right after each convolutional layer and before the activation to speed up training:

from keras.callbacks import ReduceLROnPlateau

# Set the training parameters
epochs = 200
batch_size = 256

# min_lr is the lower bound on the learning rate, and factor is the
# factor by which the learning rate will be reduced
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1),
                              patience=5, min_lr=0.5e-6)

# Compile the model
sgd = SGD(lr=0.1, momentum=0.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd,
              metrics=['accuracy'])

# Train the model, passing reduce_lr via callbacks in the training method
model.fit(X_train, Y_train, batch_size=batch_size,
          validation_data=(X_test, Y_test), epochs=epochs,
          callbacks=[reduce_lr])

5.6.5 ResNet performance on the CIFAR dataset
Similar to the other networks explained in this chapter, the performance of ResNet models is benchmarked based on their results in the ILSVRC competition. ResNet-152 won first place in the 2015 classification competition with a top-5 error rate of 4.49% with a single model and 3.57% using an ensemble of models. This was much better than all the other networks, such as GoogLeNet (Inception), which achieved a top-5 error rate of 6.67%. ResNet also won first place in many object detection and image localization challenges, as we will see in chapter 7. More importantly, the residual blocks concept in ResNet opened the door to new possibilities for efficiently training super-deep neural networks with hundreds of layers.

Using open source implementations
Now that you have learned some of the most popular CNN architectures, I want to share some practical advice on how to use them. It turns out that a lot of these neural networks are difficult or finicky to replicate, because details of tuning hyperparameters such as learning rate decay make a real difference in performance.
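Learning rate decay is a good example of such a detail. The ReduceLROnPlateau schedule above approximates the paper's "divide by 10 on plateau" rule in two steps, since factor = sqrt(0.1): two consecutive plateau-triggered reductions multiply the rate by 0.1. A standalone sketch of the schedule's arithmetic (mine, not Keras code):

```python
import math

def lr_after_plateaus(initial_lr, n_plateaus, factor=math.sqrt(0.1),
                      min_lr=0.5e-6):
    """Learning rate after n plateau-triggered reductions, floored at min_lr."""
    return max(initial_lr * factor ** n_plateaus, min_lr)

print(lr_after_plateaus(0.1, 0))              # 0.1: the initial rate
print(round(lr_after_plateaus(0.1, 2), 6))    # 0.01: two reductions = /10
print(lr_after_plateaus(0.1, 50))             # 5e-07: clamped at min_lr
```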
DL researchers can even have a hard time replicating someone else’s polished work based on reading their paper.

Summary
■ Classical CNN architectures stack convolutional and pooling layers on top of each other, with different configurations for their layers.
■ LeNet consists of five weight layers: three convolutional and two fully connected layers, with a pooling layer after the first and second convolutional layers.
■ AlexNet is deeper than LeNet and contains eight weight layers: five convolutional and three fully connected layers.
■ VGGNet solved the problem of setting up the hyperparameters of the convolutional and pooling layers by creating a uniform configuration for them to be used across the entire network.
■ Inception tried to solve the same problem as VGGNet: instead of having to decide which filter size to use and where to add the pooling layer, Inception says, “Let’s use them all.”
■ ResNet followed the same approach as Inception and created residual blocks that, when stacked on top of each other, form the network architecture. ResNet attempted to solve the vanishing gradient problem that made learning plateau or degrade when training very deep neural networks. The ResNet team introduced skip connections that allow information to flow from earlier layers in the network to later layers, creating an alternate shortcut path for the gradient to flow through.
■ The fundamental breakthrough with ResNet was that it allowed us to train extremely deep neural networks with hundreds of layers.

Fortunately, many DL researchers routinely open source their work on the internet.
A simple search for the network implementation on GitHub will point you toward implementations in several DL libraries that you can clone and train. If you can locate the author’s implementation, you can usually get going much faster than by trying to re-implement a network from scratch—although sometimes, re-implementing from scratch can be a good exercise, like what we did earlier.

6 Transfer learning

This chapter covers
■ Understanding the transfer learning technique
■ Using a pretrained network to solve your problem
■ Understanding network fine-tuning
■ Exploring open source image datasets for training a model
■ Building two end-to-end transfer learning projects

Transfer learning is one of the most important techniques of deep learning. When building a vision system to solve a specific problem, you usually need to collect and label a huge amount of data to train your network. You can build convnets, as you learned in chapter 3, and start the training from scratch; that is an acceptable approach. But what if you could download an existing neural network that someone else has tuned and trained, and use it as a starting point for your new task? Transfer learning allows you to do just that. You can download an open source model that someone else has already trained and tuned and use their optimized parameters (weights) as a starting point to train your model on a smaller dataset for a given task. This way, you can train your network a lot faster and achieve higher results.

DL researchers and practitioners have posted many research papers and open source projects of trained algorithms that they have worked on for weeks and months and trained on GPUs to get state-of-the-art results on an array of problems.
Often, the fact that someone else has done this work and gone through the painful high-performance research process means you can download an open source architecture and weights and use them as a good start for your own neural network. This is transfer learning: the transfer of knowledge from a pretrained network in one domain to your own problem in a different domain. In this chapter, I will explain transfer learning and outline reasons why using it is important. I will also detail different transfer learning scenarios and how to use them. Finally, we will see examples of using transfer learning to solve real-world problems. Ready? Let’s get started!

6.1 What problems does transfer learning solve?
As the name implies, transfer learning means transferring what a neural network has learned from being trained on a specific dataset to another related problem (figure 6.1).

Figure 6.1 Transfer learning is the transfer of the knowledge (extracted features) that the network has acquired from one task to a new task. In the context of neural networks, the acquired knowledge is the extracted features.

Transfer learning is currently very popular in the field of DL because it enables you to train deep neural networks with comparatively little data in a short training time. The importance of transfer learning comes from the fact that in most real-world problems, we typically do not have millions of labeled images to train such complex models. The idea is pretty straightforward. First we train a deep neural network on a very large amount of data. During the training process, the network extracts a large number of useful features that can be used to detect objects in this dataset. We then transfer these extracted features (feature maps) to a new network and train this new network on our new dataset to solve a different problem. Transfer learning is a great way to shortcut the process of collecting and training huge amounts of data simply by reusing the
model weights from pretrained models that were developed for standard CV benchmark datasets, such as the ImageNet image-recognition tasks. Top-performing models can be downloaded and used directly, or integrated into a new model for your own CV problems.

The question is, why would we want to use transfer learning? Why don’t we just train a neural network directly on our new dataset to solve our problem? To answer this question, we first need to know the main problems that transfer learning solves. We’ll discuss those now; then I’ll go into the details of how transfer learning works and the different approaches to apply it.

Deep neural networks are immensely data-hungry and rely on huge amounts of labeled data to achieve high performance. In practice, very few people train an entire convolutional network from scratch. This is due to two main problems:
■ Data problem—Training a network from scratch requires a lot of data in order to get decent results, which is not feasible in most cases. It is relatively rare to have a dataset of sufficient size to solve your problem. It is also very expensive to acquire and label data: this is mostly a manual process done by humans capturing images and labeling them one by one, which makes it a nontrivial task.
■ Computation problem—Even if you are able to acquire hundreds of thousands of images for your problem, it is computationally very expensive to train a deep neural network on millions of images because doing so usually requires weeks of training on multiple GPUs. Also keep in mind that training a neural network is an iterative process. So, even if you happen to have the computing power required to train a complex neural network, spending weeks experimenting with different hyperparameters in each training iteration until you finally reach satisfactory results will make the project very costly.
Additionally, an important benefit of using transfer learning is that it helps the model generalize its learnings and avoid overfitting. When you apply a DL model in the wild, it is faced with countless conditions it may never have seen before and does not know how to deal with; each client has its own preferences and generates data that is different from the data used for training. The model is asked to perform well on many tasks that are related to but not exactly similar to the task it was trained for.

For example, when you deploy a car classifier model to production, people usually have different camera types, each with its own image quality and resolution. Also, images can be taken during different weather conditions. These image nuances vary from one user to another. To train the model on all these different cases, you either have to account for every case and acquire a lot of images to train the network on, or try to build a more robust model that is better at generalizing to new use cases. This is what transfer learning does. Since it is not realistic to account for all the cases the model may face in the wild, transfer learning can help us deal with novel scenarios. It is necessary for production-scale use of DL that goes beyond tasks and domains where labeled data is plentiful. Transferring features extracted from another network that has seen millions of images will make our model less prone to overfit and help it generalize better when faced with novel scenarios. You will be able to fully grasp this concept when we explain how transfer learning works in the following sections.

6.2 What is transfer learning?
Armed with the understanding of the problems that transfer learning solves, let’s look at its formal definition.
Transfer learning is the transfer of the knowledge (feature maps) that the network has acquired from one task, where we have a large amount of data, to a new task where data is not abundantly available. It is generally used where a neural network model is first trained on a problem similar to the problem that is being solved. One or more layers from the trained model are then used in a new model trained on the problem of interest.

As we discussed earlier, to train an image classifier that will achieve image classification accuracy near to or above the human level, we’ll need massive amounts of data, large compute power, and lots of time on our hands. I’m sure most of us don’t have all these things. Knowing that this would be a problem for people with little-to-no resources, researchers built state-of-the-art models that were trained on large image datasets like ImageNet, MS COCO, Open Images, and so on, and then shared their models with the general public for reuse. This means you should never have to train an image classifier from scratch again, unless you have an exceptionally large dataset and a very large computation budget to train everything from scratch by yourself. Even if that is the case, you might be better off using transfer learning to fine-tune the pretrained network on your large dataset. Later in this chapter, we will discuss the different transfer learning approaches, and you will understand what fine-tuning means and why it is better to use transfer learning even when you have a large dataset. We will also talk briefly about some of the popular datasets mentioned here.

NOTE When we talk about training a model from scratch, we mean that the model starts with zero knowledge of the world, and the model’s structure and parameters begin as random guesses. Practically speaking, this means the weights of the model are randomly initialized, and they need to go through a training process to be optimized.
The intuition behind transfer learning is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic representation of the visual world. We can then use the feature maps it has learned, without having to train on a large dataset, by transferring what it learned to our model and using that as a base starting model for our own task.

  In transfer learning, we first train a base network on a base dataset and task, and then we repurpose the learned features, or transfer them to a second target network to be trained on a target dataset and task. This process will tend to work if the features are general, meaning suitable to both base and target tasks, instead of specific to the base task.
  —Jason Yosinski et al.¹

Let’s jump directly to an example to get a better intuition for how to use transfer learning. Suppose we want to train a model that classifies dog and cat images, and we have only two classes in our problem: dog and cat. We need to collect hundreds of thousands of images for each class, label them, and train our network from scratch. Another option is to transfer knowledge from another pretrained network. First, we need to find a dataset that has similar features to our problem at hand. This involves spending some time exploring different open source datasets to find the one closest to our problem. For the sake of this example, let’s use ImageNet, since we are already familiar with it from the previous chapter and it has a lot of dog and cat images. So the pretrained network is familiar with dog and cat features and will require minimum training. (Later in this chapter, we will explore other datasets.) Next, we need to choose a network that has been trained on ImageNet and achieved good results. In chapter 5, we learned about state-of-the-art architectures like VGGNet, GoogLeNet, and ResNet.
Any of them would work fine. For this example, we will go with a VGG16 network that has been trained on the ImageNet dataset. To adapt the VGG16 network to our problem, we are going to download it with the pretrained weights, remove the classifier part, add our own classifier, and then retrain the new network (figure 6.2). This is called using a pretrained network as a feature extractor. We will discuss the different types of transfer learning later in this chapter.

DEFINITION A pretrained model is a network that has been previously trained on a large dataset, typically on a large-scale image classification task. We can either use the pretrained model directly as is to run our predictions, or use the pretrained feature extraction part of the network and add our own classifier. The classifier here could be one or more dense layers or even traditional ML algorithms like support vector machines (SVMs).

¹ Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson, “How Transferable Are Features in Deep Neural Networks?” Advances in Neural Information Processing Systems 27 (Dec. 2014): 3320–3328, https://arxiv.org/abs/1411.1792.

To fully understand how to use transfer learning, let’s implement this example in Keras. (Luckily, Keras has a set of pretrained networks that are ready for us to download and use: the complete list of models is at https://keras.io/api/applications.) Here are the steps:
■ Freeze the weights in the feature extraction layers.
■ Remove the classifier.
■ Add a softmax layer with 2 units.

Figure 6.2 Example of applying transfer learning to a VGG16 network. We freeze the feature extraction part of the network and remove the classifier part. Then we add our new classifier softmax layer with two hidden units.

1 Download the open source code of the VGG16 network and its weights to create our base model, and remove the classification layers from the VGG network (FC_4096 > FC_4096 > Softmax_1000):

    # Import the VGG16 model from Keras
    from keras.applications.vgg16 import VGG16

    # Download the model’s pretrained ImageNet weights and save them in the
    # variable base_model; include_top is False to ignore the fully
    # connected classifier part on top of the model
    base_model = VGG16(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3))

    base_model.summary()

2 When you print a summary of the base model, you will notice that we downloaded the exact VGG16 architecture that we implemented in chapter 5. This is a fast approach to download popular networks that are supported by the DL library you are using. Alternatively, you can build the network yourself, as we did in chapter 5, and download the weights separately. I’ll show you how in the project at the end of this chapter.
But for now, let’s look at the base_model summary that we just downloaded:

    Layer (type)                 Output Shape           Param #
    =============================================================
    input_1 (InputLayer)         (None, 224, 224, 3)    0
    block1_conv1 (Conv2D)        (None, 224, 224, 64)   1792
    block1_conv2 (Conv2D)        (None, 224, 224, 64)   36928
    block1_pool (MaxPooling2D)   (None, 112, 112, 64)   0
    block2_conv1 (Conv2D)        (None, 112, 112, 128)  73856
    block2_conv2 (Conv2D)        (None, 112, 112, 128)  147584
    block2_pool (MaxPooling2D)   (None, 56, 56, 128)    0
    block3_conv1 (Conv2D)        (None, 56, 56, 256)    295168
    block3_conv2 (Conv2D)        (None, 56, 56, 256)    590080
    block3_conv3 (Conv2D)        (None, 56, 56, 256)    590080
    block3_pool (MaxPooling2D)   (None, 28, 28, 256)    0
    block4_conv1 (Conv2D)        (None, 28, 28, 512)    1180160
    block4_conv2 (Conv2D)        (None, 28, 28, 512)    2359808
    block4_conv3 (Conv2D)        (None, 28, 28, 512)    2359808
    block4_pool (MaxPooling2D)   (None, 14, 14, 512)    0
    block5_conv1 (Conv2D)        (None, 14, 14, 512)    2359808
    block5_conv2 (Conv2D)        (None, 14, 14, 512)    2359808
    block5_conv3 (Conv2D)        (None, 14, 14, 512)    2359808
    block5_pool (MaxPooling2D)   (None, 7, 7, 512)      0
    =============================================================
    Total params: 14,714,688
    Trainable params: 14,714,688
    Non-trainable params: 0

Notice that this downloaded architecture does not contain the classifier part (three fully connected layers) at the top of the network because we set the include_top argument to False. More importantly, notice the number of trainable and non-trainable parameters in the summary. The downloaded network as is makes all the network parameters trainable. As you can see, our base_model has more than 14 million trainable parameters. Next, we want to freeze all the downloaded layers and add our own classifier.

3 Freeze the feature extraction layers that have been trained on the ImageNet dataset. Freezing layers means freezing their trained weights to prevent them from being retrained when we run our training:

    # Iterate through the layers and lock them to make them non-trainable
    for layer in base_model.layers:
        layer.trainable = False

    base_model.summary()

The model summary is omitted in this case for brevity, as it is similar to the previous one.
The difference is that all the weights have been frozen, the trainable parameters are now equal to zero, and all the parameters of the frozen layers are non-trainable:

    Total params: 14,714,688
    Trainable params: 0
    Non-trainable params: 14,714,688

4 Add our own classification dense layer. Here, we will add a softmax layer with two units because we have only two classes in our problem (see figure 6.3):

    # Import Keras modules
    from keras.layers import Dense, Flatten
    from keras.models import Model

    # Use the get_layer method to save the last layer of the network
    last_layer = base_model.get_layer('block5_pool')
    # Save the output of the last layer to be the input of the next layer
    last_output = last_layer.output

    # Flatten the classifier input, which is the output of the last layer
    # of the VGG16 model
    x = Flatten()(last_output)
    # Add our new softmax layer with two hidden units
    x = Dense(2, activation='softmax', name='softmax')(x)

5 Build a new_model that takes the input of the base model as its input and the output of the last softmax layer as an output. The new model is composed of all the feature extraction layers in VGGNet with the pretrained weights, plus our new, untrained, softmax layer.
In other words, when we train the model, we are only going to train the softmax layer in this example to detect the specific features of our new problem (dog versus cat).

Figure 6.3 Remove the classifier part of the network, and add a softmax layer with two hidden nodes.

    # Instantiate a new_model using Keras’s Model class
    new_model = Model(inputs=base_model.input, outputs=x)

    # Print the new_model summary
    new_model.summary()

    Layer (type)                 Output Shape           Param #
    =============================================================
    input_1 (InputLayer)         (None, 224, 224, 3)    0
    block1_conv1 (Conv2D)        (None, 224, 224, 64)   1792
    block1_conv2 (Conv2D)        (None, 224, 224, 64)   36928
    block1_pool (MaxPooling2D)   (None, 112, 112, 64)   0
    block2_conv1 (Conv2D)        (None, 112, 112, 128)  73856
    block2_conv2 (Conv2D)        (None, 112, 112, 128)  147584
    block2_pool (MaxPooling2D)   (None, 56, 56, 128)    0
    block3_conv1 (Conv2D)        (None, 56, 56, 256)    295168
    block3_conv2 (Conv2D)        (None, 56, 56, 256)    590080
    block3_conv3 (Conv2D)        (None, 56, 56, 256)    590080
    block3_pool (MaxPooling2D)   (None, 28, 28, 256)    0
    block4_conv1 (Conv2D)        (None, 28, 28, 512)    1180160
    block4_conv2 (Conv2D)        (None, 28, 28, 512)    2359808
    block4_conv3 (Conv2D)        (None, 28, 28, 512)    2359808
    block4_pool (MaxPooling2D)   (None, 14, 14, 512)    0
    block5_conv1 (Conv2D)        (None, 14, 14, 512)    2359808
    block5_conv2 (Conv2D)        (None, 14, 14, 512)    2359808
    block5_conv3 (Conv2D)        (None, 14, 14, 512)    2359808
    block5_pool (MaxPooling2D)   (None, 7, 7, 512)      0
    flatten_layer (Flatten)      (None, 25088)          0
    softmax (Dense)              (None, 2)              50178
    =============================================================
    Total params: 14,764,866
    Trainable params: 50,178
    Non-trainable params: 14,714,688

Training the new model is a lot faster than training the network from scratch. To verify that, look at the number of trainable params in this model (~50,000) compared to the number of non-trainable params in the network (~14 million).
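Both sides of that comparison can be verified with quick arithmetic: the new softmax layer holds one weight per flattened input per unit plus one bias per unit, and the frozen base is the sum of the parameters of VGG16’s 13 convolutional layers. A pure-Python check (the layer math here mirrors the summaries above):

```python
# Trainable part: the softmax layer on the flattened block5_pool output
flattened = 7 * 7 * 512                 # 25,088 values into the classifier
units = 2                               # two classes: dog and cat
trainable = flattened * units + units   # weights + biases
print(trainable)                        # 50178

# Non-trainable part: the 13 frozen 3x3 conv layers of the VGG16 base.
# A conv layer with c_in input channels and c_out filters holds
# 3*3*c_in*c_out weights plus c_out biases; pooling layers hold none.
out_channels = [64, 64, 128, 128, 256, 256, 256,
                512, 512, 512, 512, 512, 512]
frozen = 0
c_in = 3                                # RGB input image
for c_out in out_channels:
    frozen += 3 * 3 * c_in * c_out + c_out
    c_in = c_out
print(frozen)                           # 14714688
```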
These “non-trainable” parameters are already trained on a large dataset, and we froze them to use the extracted features in our problem. With this new model, we don’t have to train the entire VGGNet from scratch because we only have to deal with the newly added softmax layer. Additionally, we get much better performance with transfer learning because the new model has been trained on millions of images (the ImageNet dataset plus our small dataset). This allows the network to understand the finer details of object nuances, which in turn makes it generalize better on new, previously unseen images. Note that in this example, we only explored the part where we build the model, to show how transfer learning is used. At the end of this chapter, I’ll walk you through two end-to-end projects to demonstrate how to train the new network on your small dataset. But now, let’s see how transfer learning works.

6.3 How transfer learning works
So far, we learned what the transfer learning technique is and the main problems it solves. We also saw an example of how to take a pretrained network that was trained on ImageNet and transfer its learnings to our specific task. Now, let’s see why transfer learning works, what is really being transferred from one problem to another, and how a network that is trained on one dataset can perform well on a different, possibly unrelated, dataset. The following quick questions are reminders from previous chapters to get us to the core of what is happening in transfer learning:

1 What is really being learned by the network during training? The short answer is: feature maps.
2 How are these features learned? During the backpropagation process, the weights are updated until we get to the optimized weights that minimize the error function.
3 What is the relationship between features and weights?
A feature map is the result of passing the weights filter on the input image during the convolution process (figure 6.4).

Figure 6.4 Example of generating a feature map by applying a convolutional kernel to the input image

4 What is really being transferred from one network to another? To transfer features, we download the optimized weights of the pretrained network. These weights are then reused as the starting point for the training process and retrained to adapt to the new problem.

Okay, let’s dive into the details to understand what we mean when we say pretrained network. When we’re training a convolutional neural network, the network extracts features from an image in the form of feature maps: outputs of each layer in a neural network after applying the weights filter. They are representations of the features that exist in the training set. They are called feature maps because they map where a certain kind of feature is found in the image. CNNs look for features such as straight lines, edges, and even objects. Whenever they spot these features, they report them to the feature map. Each weight filter is looking for something different that is reflected in the feature maps: one filter could be looking for straight lines, another for curves, and so on (figure 6.5).

Now, recall that neural networks iteratively update their weights during the training cycle of feedforward and backpropagation. We say the network has been trained when we go through a series of training iterations and hyperparameter tuning until the network yields satisfactory results. When training is complete, we output two main items: the network architecture and the trained weights.
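The convolution behind these feature maps is simple enough to sketch in plain Python: slide a small weights filter over the image and, at each position, sum the element-wise products. The result of that pass is the feature map. The kernel below is a hypothetical vertical-edge detector, not the one pictured in figure 6.4:

```python
def convolve2d(image, kernel):
    """Valid convolution (no padding, stride 1); the output is a feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    fmap = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Sum of element-wise products of the kernel and the image patch
            fmap[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                             for a in range(kh) for b in range(kw))
    return fmap

# A tiny image whose right half is bright, and a vertical-edge kernel
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
kernel = [[-1, 1],
          [-1, 1]]

print(convolve2d(image, kernel))  # the middle column lights up at the edge
```

The feature map is strongest exactly where the vertical edge sits, which is what “mapping where a feature is found” means in practice.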
So, when we say that we are going to use a pretrained network, we mean that we will download the network architecture together with the weights. During training, the model learns only the features that exist in this training dataset. But when we download large models (like Inception) that have been trained on huge datasets (like ImageNet), all the features that have already been extracted from these large datasets are now available for us to use. I find that really exciting because these pretrained models have spotted other features that weren’t in our dataset and will help us build better convolutional networks.

In vision problems, there’s a huge amount of stuff for neural networks to learn about the training dataset. There are low-level features like edges, corners, round shapes, curvy shapes, and blobs; and then there are mid- and higher-level features like eyes, circles, squares, and wheels. There are many details in the images that CNNs can pick up on—but if we have only 1,000 images or even 25,000 images in our training dataset, this may not be enough data for the model to learn all those things. By using a pretrained network, we can basically download all this knowledge into our neural network to give it a huge and much faster start with even higher performance levels.

Figure 6.5 The network extracts features from an image in the form of feature maps. They are representations of the features that exist in the training set after applying the weight filters.

6.3.1 How do neural networks learn features?
A neural network learns the features in a dataset step by step in increasing levels of complexity, one layer after another. These are called feature maps. The deeper you go through the network layers, the more image-specific features are learned.
In figure 6.6, the first layer detects low-level features such as edges and curves. The output of the first layer becomes input to the second layer, which produces higher-level features like semicircles and squares. The next layer assembles the output of the previous layer into parts of familiar objects, and a subsequent layer detects the objects. As we go through more layers, the network yields an activation map that represents more complex features. As we go deeper into the network, the filters begin to be more responsive to a larger region of the pixel space. Higher-level layers amplify aspects of the received inputs that are important for discrimination and suppress irrelevant variations.

Figure 6.6 An example of how CNNs detect low-level generic features (edges, blobs, etc.) at the early layers of the network, mid-level features that combine edges and are more specific to the training dataset, and high-level features that are very specific to the training dataset. The deeper you go through the network layers, the more image-specific features are learned.

Consider the example in figure 6.6. Suppose we are building a model that detects human faces. We notice that the network learns low-level features like lines, edges, and blobs in the first layer. These low-level features appear not to be specific to a particular dataset or task; they are general features that are applicable to many datasets and tasks. The mid-level layers assemble those lines to be able to recognize shapes, corners, and circles. Notice that the extracted features start to get a little
As we go deeper through the network, features eventually transition from general to specific and, by the last layer of the network, form high-level features that are very specific to our task: we start seeing the parts of human faces that distinguish one person from another.

Now, let's take this example and compare the feature maps extracted from four models that are trained to classify faces, cars, elephants, and chairs (see figure 6.7). Notice that the earlier layers' features are very similar for all four models. They represent low-level features like edges, lines, and blobs. This means models trained on one task capture similar relations in the data in the earlier layers of the network and can easily be reused for different problems in other domains. The deeper we go into the network, the more specific the features become, until the network overfits its training data and it becomes harder to generalize to different tasks.

The lower-level features are almost always transferable from one task to another because they contain generic information about the structure and nature of how images look. Transferring information like lines, dots, curves, and small parts of objects is very valuable for helping the network learn faster, and with less data, on the new task.

Figure 6.7 Feature maps extracted from four models that are trained to classify faces, cars, elephants, and chairs

6.3.2 Transferability of features extracted at later layers
The transferability of features extracted at later layers depends on the similarity of the original and new datasets. All images have shapes and edges, so the early layers are usually transferable between different domains.
We can only identify differences between objects when we start extracting higher-level features: say, the nose on a face or the tires on a car. Only then can we say, "Okay, this is a person, because it has a nose. And this is a car, because it has tires." Based on the similarity of the source and target domains, we can decide whether to transfer only the low-level features from the source domain, or the high-level features, or something in between. This is motivated by the observation that the later layers of the network become progressively more specific to the details of the classes contained in the original dataset, as we discuss in the next section.

DEFINITIONS The source domain is the original dataset that the pretrained network was trained on. The target domain is the new dataset that we want to train the network on.

6.4 Transfer learning approaches
There are three major transfer learning approaches: using a pretrained network as a classifier, using a pretrained network as a feature extractor, and fine-tuning. Each approach can be effective and can save significant time in developing and training a deep CNN model. It may not be clear in advance which use of a pretrained model will yield the best results on your new CV task, so some experimentation may be required. In this section, we explain these three scenarios and give examples of how to implement them.

6.4.1 Using a pretrained network as a classifier
Using a pretrained network as a classifier doesn't involve freezing any layers or doing extra model training. Instead, we take a network that was trained on a similar problem and deploy it directly to our task. The pretrained model is used as-is to classify new images, with no changes applied to it and no extra training. All we do is download the network architecture and its pretrained weights and then run predictions directly on our new data.
In this case, we are saying that the domain of our new problem is very similar to the one that the pretrained network was trained on, so the network is ready to be deployed. In the dog breed example, we could have used a VGG16 network that was trained on the ImageNet dataset directly to run predictions. ImageNet already contains a lot of dog images, so a significant portion of the representational power of the pretrained network may be devoted to features that are specific to differentiating between dog breeds.

Let's see how to use a pretrained network as a classifier. In this example, we use a VGG16 network that was pretrained on the ImageNet dataset to classify the image of the German Shepherd dog in figure 6.8. The steps are as follows:

1  Import the necessary libraries:

       from keras.preprocessing.image import load_img
       from keras.preprocessing.image import img_to_array
       from keras.applications.vgg16 import preprocess_input
       from keras.applications.vgg16 import decode_predictions
       from keras.applications.vgg16 import VGG16

2  Download the pretrained VGG16 model and its ImageNet weights.
   We set include_top to True because we want to use the entire network, including its classifier:

       model = VGG16(weights="imagenet", include_top=True,
                     input_shape=(224, 224, 3))

3  Load and preprocess the input image:

       image = load_img('path/to/image.jpg', target_size=(224, 224))  # load the image from a file
       image = img_to_array(image)                  # convert the image pixels to a NumPy array
       image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))  # reshape into a batch of one
       image = preprocess_input(image)              # prepare the image for the VGG model

Figure 6.8 A sample image of a German Shepherd that we will use to run predictions

4  Now our input image is ready for us to run predictions:

       yhat = model.predict(image)        # predict the probability across all output classes
       label = decode_predictions(yhat)   # convert the probabilities to class labels
       label = label[0][0]                # retrieve the most likely result
       print('%s (%.2f%%)' % (label[1], label[2]*100))   # print the classification

When you run this code, you get the following output:

       >> German_shepherd (99.72%)

You can see that the model was already trained to predict the correct dog breed with a high confidence score (99.72%). This is because the ImageNet dataset has more than 20,000 labeled dog images classified into 120 classes. Go to the book's website to play with the code yourself on your own images: www.manning.com/books/deep-learning-for-vision-systems or www.computervisionbook.com. Feel free to explore the classes available in ImageNet and run this experiment on your own images.

6.4.2 Using a pretrained network as a feature extractor
This approach is similar to the dog breed example that we implemented earlier in this chapter: we take a CNN pretrained on ImageNet, freeze its feature extraction part, remove the classifier part, and add our own new, dense classifier layers. In figure 6.9, we use a pretrained VGG16 network, freeze the weights in all 13 convolutional layers, and replace the old classifier with a new one to be trained from scratch.
We usually go with this scenario when our new task is similar to the original dataset that the pretrained network was trained on. Since the ImageNet dataset has a lot of dog and cat examples, the feature maps that the network has learned contain a lot of dog and cat features that are very applicable to our new task. This means we can reuse the high-level features that were extracted from the ImageNet dataset. To do that, we freeze all the layers from the pretrained network and train only the classifier part that we just added, on the new dataset.

This approach is called using a pretrained network as a feature extractor because we freeze the feature extraction part to transfer all the learned feature maps to our new problem. We add only a new classifier, which will be trained from scratch, on top of the pretrained model, so that we can repurpose the previously learned feature maps for our dataset.

Figure 6.9 Load a pretrained VGG16 network (13 blocks of 3 × 3 convolutions with pooling), freeze the weights in the feature extraction layers, remove the old classifier (FC 4096, FC 4096, softmax 1000), and add your own classifier (FC 4096, FC 4096, and a softmax layer with 2 units).

We remove the classification part of the pretrained network because it is often very specific to the original classification task, and subsequently it is specific to the set of classes on which the model was trained. For example, ImageNet has 1,000
classes. The classifier part has been trained to fit the training data to classify it into those 1,000 classes. But in our new problem, say cats versus dogs, we have only two classes. So it is a lot more effective to train a new classifier from scratch to fit these two classes.

6.4.3 Fine-tuning
So far, we've seen two basic approaches to using a pretrained network in transfer learning: using it as a classifier or as a feature extractor. We generally use these approaches when the target domain is somewhat similar to the source domain. But what if the target domain is different from the source domain? What if it is very different? Can we still use transfer learning? Yes. Transfer learning works well even when the domains are very different; we just need to extract the correct feature maps from the source domain and fine-tune them to fit the target domain.

Figure 6.10 shows the different approaches to transferring knowledge from a pretrained network. If you download the entire network with no changes and just run predictions, you are using the network as a classifier. If you freeze only the convolutional layers, you are using the pretrained network as a feature extractor and transferring all of its high-level feature maps to your domain.

The formal definition of fine-tuning is freezing a few of the network layers that are used for feature extraction, and jointly training both the non-frozen layers and the newly added classifier layers of the pretrained model. It is called fine-tuning because when we retrain the feature extraction layers, we fine-tune the higher-order feature representations to make them more relevant for the new task's dataset.
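The freeze-then-jointly-train pattern can be illustrated with a minimal pure-Python sketch. The Layer class and the layer names here are hypothetical stand-ins for real network layers; in Keras you would flip the real layer.trainable attribute in exactly the same way:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    trainable: bool = True

def freeze_up_to(layers, boundary):
    """Freeze every layer up to and including `boundary`;
    leave the later layers (and the new classifier) trainable."""
    freezing = True
    for layer in layers:
        layer.trainable = not freezing
        if layer.name == boundary:
            freezing = False  # everything after the boundary stays trainable
    return layers

# Hypothetical network: four feature-map stages plus a freshly added classifier
net = [Layer("feature_maps_1"), Layer("feature_maps_2"),
       Layer("feature_maps_3"), Layer("feature_maps_4"),
       Layer("new_classifier")]

frozen = freeze_up_to(net, "feature_maps_2")
print([(l.name, l.trainable) for l in frozen])
# [('feature_maps_1', False), ('feature_maps_2', False),
#  ('feature_maps_3', True), ('feature_maps_4', True), ('new_classifier', True)]
```

Moving the boundary deeper or shallower is exactly the fine-tuning decision discussed in the rest of this section.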
In more practical terms, if we freeze feature maps 1 and 2 in figure 6.10, the new network will take feature maps 2 as its input and start learning from that point, adapting the features of the later layers to the new dataset. This saves the network the time it would have spent learning feature maps 1 and 2.

Figure 6.10 The network learns features through its layers (feature maps 1 through 4, then flatten and classifier); the fine-tuning range spans from retraining the entire network to freezing at any feature-map level. In transfer learning, we decide where to freeze a pretrained network to preserve the learned features. For example, if we freeze the network at the feature maps of layer 3, we preserve what it has learned in layers 1, 2, and 3.

As we discussed earlier, feature maps that are extracted early in the network are generic, and they get progressively more specific as we go deeper. This means feature maps 4 in figure 6.10 are very specific to the source domain. Based on the similarity of the two domains, we can decide to freeze the network at the appropriate level of feature maps:

- If the domains are similar, we might want to freeze the network up to the last feature-map level (feature maps 4, in the example).
- If the domains are very different, we might decide to freeze the pretrained network after feature maps 1 and retrain all the remaining layers.

Between these two possibilities lies a range of fine-tuning options: we can retrain the entire network, or freeze it at the level of feature maps 1, 2, 3, or 4 and retrain the remainder. We typically decide the appropriate level of fine-tuning by trial and error.
But there are guidelines we can follow to intuitively choose the fine-tuning level for the pretrained network. The decision is a function of two factors: the amount of data we have and the level of similarity between the source and target domains. We explain these factors and the four possible scenarios for choosing the appropriate level of fine-tuning in section 6.5.

WHY IS FINE-TUNING BETTER THAN TRAINING FROM SCRATCH?
When we train a network from scratch, we usually initialize the weights randomly and apply a gradient descent optimizer to find the set of weights that minimizes our error function (as discussed in chapter 2). Since these weights start at random values, there is no guarantee that they begin anywhere near the desired optimal values, and if the initial values are far from the optimum, the optimizer will take a long time to converge. This is where fine-tuning can be very useful. The pretrained network's weights have already been optimized on its own dataset, so when we use this network for our problem, we start with the weight values it ended with, and the network converges much faster than it would from random initialization. We are basically fine-tuning already-optimized weights to fit our new problem instead of training the entire network from scratch with random weights. Even if we decide to retrain the entire pretrained network, starting from the trained weights will converge faster than training from randomly initialized weights.

USING A SMALLER LEARNING RATE WHEN FINE-TUNING
It's common to use a smaller learning rate for the ConvNet weights that are being fine-tuned than for the (randomly initialized) weights of the new linear classifier that computes the class scores on the new dataset.
This is because we expect the ConvNet weights to be relatively good already, so we don't want to distort them too quickly or too much (especially while the new classifier above them is being trained from random initialization).

6.5 Choosing the appropriate level of transfer learning
Recall that early convolutional layers extract generic features and become more specific to the training data the deeper we go through the network. With that in mind, we can choose the level of detail for feature extraction from an existing pretrained model. For example, if a new task is quite different from the source domain of the pretrained network (for example, different from ImageNet), then perhaps the output of the pretrained model after the first few layers would be appropriate. If a new task is similar to the source domain, then perhaps the output from layers much deeper in the model can be used, or even the output of the fully connected layer prior to the softmax layer.

As mentioned earlier, choosing the appropriate level of transfer learning is a function of two important factors:

- Size of the target dataset (small or large): When we have a small dataset, the network probably won't learn much from training more layers, so it will tend to overfit the new data. In this case, we most likely want to do less fine-tuning and rely more on the source dataset.
- Domain similarity of the source and target datasets: How similar is our new problem to the domain of the original dataset? For example, if your problem is to classify cars and boats, ImageNet could be a good option because it contains a lot of images with similar features. On the other hand, if your problem is to classify lung cancer in X-ray images, this is a completely different domain that will likely require a lot of fine-tuning.
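As a quick illustration, these two rules of thumb can be captured in a tiny lookup function. This is a pure-Python sketch (the function name and the exact approach strings are just for illustration; they paraphrase table 6.1 later in this section):

```python
def transfer_learning_approach(target_size, similarity):
    """Rule-of-thumb approach given the target dataset size
    ('small' or 'large') and the domain similarity between source
    and target datasets ('similar' or 'different')."""
    rules = {
        ("small", "similar"):   "use the pretrained network as a feature extractor",
        ("large", "similar"):   "fine-tune through the full network",
        ("small", "different"): "fine-tune from activations earlier in the network",
        ("large", "different"): "fine-tune through the entire network",
    }
    return rules[(target_size, similarity)]

# A small dataset of cars and boats vs. ImageNet: similar domain
print(transfer_learning_approach("small", "similar"))
# use the pretrained network as a feature extractor
```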
These two factors lead to four major scenarios:

1  The target dataset is small and similar to the source dataset.
2  The target dataset is large and similar to the source dataset.
3  The target dataset is small and very different from the source dataset.
4  The target dataset is large and very different from the source dataset.

Let's discuss these scenarios one by one to learn the common rules of thumb for navigating our options.

6.5.1 Scenario 1: Target dataset is small and similar to the source dataset
Since the original dataset is similar to our new dataset, we can expect that the higher-level features in the pretrained ConvNet are relevant to our dataset as well. So it is usually best to freeze the feature extraction part of the network and retrain only the classifier. Another reason it might not be a good idea to fine-tune the network is that our new dataset is small. If we fine-tune the feature extraction layers on a small dataset, we force the network to overfit our data. This is harmful because, by definition, a small dataset doesn't contain enough information to cover all possible features of its objects, which makes the model fail to generalize to new, previously unseen data. So in this case, the more fine-tuning we do, the more prone the network is to overfitting the new data.

For example, suppose all the images in our new dataset contain dogs in a specific weather environment, say snow. If we fine-tuned on this dataset, we would force the new network to pick up features like snow and a white background as dog-specific features, making it fail to classify dogs in other weather conditions. Thus the general rule of thumb is: if you have a small amount of data, be careful of overfitting when you fine-tune your pretrained network.
6.5.2 Scenario 2: Target dataset is large and similar to the source dataset
Since both domains are similar, we can freeze the feature extraction part and retrain the classifier, similar to what we did in scenario 1. But since we have more data in the new domain, we can get a performance boost from fine-tuning all or part of the pretrained network, with more confidence that we won't overfit. Fine-tuning through the entire network is not really needed, because the higher-level features are related (since the datasets are similar). A good start is to freeze approximately the first 60–80% of the pretrained network and retrain the rest on the new data.

6.5.3 Scenario 3: Target dataset is small and different from the source dataset
Since the datasets are different, it might not be best to freeze the higher-level features of the pretrained network, because they contain features specific to the source dataset. Instead, it would work better to retrain layers from somewhere earlier in the network, or to freeze nothing and fine-tune the entire network. However, since you have a small dataset, fine-tuning the entire network might not be a good idea, because doing so makes it prone to overfitting. A midway solution works better in this case: a good start is to freeze approximately the first third or half of the pretrained network. After all, the early layers contain very generic feature maps that will be useful for your dataset even if it is very different.

6.5.4 Scenario 4: Target dataset is large and different from the source dataset
Since the new dataset is large, you might be tempted to train the entire network from scratch and not use transfer learning at all. However, in practice it is often still very beneficial to initialize the weights from a pretrained model, as we discussed earlier; doing so makes the model converge faster.
In this case, we have a large dataset that gives us the confidence to fine-tune through the entire network without having to worry about overfitting.

6.5.5 Recap of the transfer learning scenarios
We've explored the two main factors that help us decide which transfer learning approach to use: the size of our data and the similarity between the source and target datasets. These two factors give us the four major scenarios defined in table 6.1, and figure 6.11 summarizes the guidelines for the appropriate fine-tuning level to use in each scenario.

Table 6.1 Transfer learning scenarios

  Scenario   Size of the target data   Similarity of the datasets   Approach
  1          Small                     Similar                      Pretrained network as a feature extractor
  2          Large                     Similar                      Fine-tune through the full network
  3          Small                     Very different               Fine-tune from activations earlier in the network
  4          Large                     Very different               Fine-tune through the entire network

6.6 Open source datasets
The CV research community has been pretty good about posting datasets on the internet. When you hear names like ImageNet, MS COCO, Open Images, MNIST, and CIFAR, these are datasets that people have posted online and that many computer vision researchers have used as benchmarks to train their algorithms and achieve state-of-the-art results.

Scenario 1: You have a small dataset that is similar to the source dataset.
Scenario 2: You have a large dataset that is similar to the source dataset.
Scenario 3: You have a small dataset that is different from the source dataset.
Scenario 4: You have a large dataset that is different from the source dataset.
Figure 6.11 Guidelines for the appropriate fine-tuning level to use in each of the four scenarios

In this section, we review some of the popular open source datasets to help guide your search for the most suitable dataset for your problem. Keep in mind that the datasets listed in this chapter are the most popular in the CV research community at the time of writing; we do not intend to provide a comprehensive list of all the open source datasets out there. A great many image datasets are available, and the number grows every day. Before starting your project, I encourage you to do your own research and explore the available datasets.

6.6.1 MNIST
MNIST (http://yann.lecun.com/exdb/mnist) stands for Modified National Institute of Standards and Technology. It contains labeled images of handwritten digits from 0 to 9, and the goal of the dataset is to classify those digits. MNIST has been popular with the research community for benchmarking classification algorithms; in fact, it is considered the "hello, world!" of image datasets. But nowadays the MNIST dataset is comparatively simple, and a basic CNN can achieve more than 99% accuracy, so MNIST is no longer considered a benchmark for CNN performance. We implemented a CNN classification project using the MNIST dataset in chapter 3; feel free to go back and review it.

MNIST consists of 60,000 training images and 10,000 test images. All are grayscale (one channel), and each image is 28 pixels high and 28 pixels wide. Figure 6.12 shows some sample images from the MNIST dataset.
Figure 6.12 Samples from the MNIST dataset

6.6.2 Fashion-MNIST
Fashion-MNIST was created with the intention of replacing the original MNIST dataset, which has become too simple for modern convolutional networks. The data is stored in the same format as MNIST, but instead of handwritten digits, it contains 60,000 training images and 10,000 test images of 10 fashion clothing classes: t-shirt/top, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot. Visit https://github.com/zalandoresearch/fashion-mnist to explore and download the dataset. Figure 6.13 shows a sample of the represented classes.

Figure 6.13 Sample images from the Fashion-MNIST dataset, labeled with classes such as ankle boot, t-shirt/top, dress, pullover, sneaker, sandal, trouser, shirt, coat, and bag

6.6.3 CIFAR
CIFAR-10 (www.cs.toronto.edu/~kriz/cifar.html) is considered another benchmark dataset for image classification in the CV and ML literature. CIFAR images are more complex than those in MNIST in the sense that MNIST images are all grayscale with perfectly centered objects, whereas CIFAR images are color (three channels) with dramatic variation in how the objects appear. The CIFAR-10 dataset consists of 32 × 32 color images in 10 classes, with 6,000 images per class: 50,000 training images and 10,000 test images. Figure 6.14 shows the classes in the dataset.

CIFAR-100 is the bigger brother of CIFAR-10: it contains 100 classes with 600 images each, grouped into 20 superclasses. Each image comes with a fine label (the class to which it belongs) and a coarse label (the superclass to which it belongs).
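The small benchmark datasets above can be summarized in a quick reference sketch. The counts and shapes come from the text above (the CIFAR-100 train/test split matches CIFAR-10); the dict layout itself is just for illustration:

```python
# Benchmark datasets summarized from this section: (train, test, image shape, classes)
datasets = {
    "MNIST":         {"train": 60_000, "test": 10_000, "shape": (28, 28, 1), "classes": 10},
    "Fashion-MNIST": {"train": 60_000, "test": 10_000, "shape": (28, 28, 1), "classes": 10},
    "CIFAR-10":      {"train": 50_000, "test": 10_000, "shape": (32, 32, 3), "classes": 10},
    "CIFAR-100":     {"train": 50_000, "test": 10_000, "shape": (32, 32, 3), "classes": 100},
}

def values_per_image(name):
    """Raw input values per image: height * width * channels."""
    h, w, c = datasets[name]["shape"]
    return h * w * c

# A CIFAR image carries roughly 4x the raw values of an MNIST image
print(values_per_image("MNIST"), values_per_image("CIFAR-10"))  # 784 3072
```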
Figure 6.14 Sample images from the CIFAR-10 dataset (classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck)

6.6.4 ImageNet
We've discussed the ImageNet dataset several times in the previous chapters and used it extensively in chapter 5 and this chapter, but for completeness of this list, we discuss it here as well. At the time of writing, ImageNet is considered the current benchmark and is widely used by CV researchers to evaluate their classification algorithms.

ImageNet is a large visual database designed for use in visual object recognition research. It aims at labeling and categorizing images into almost 22,000 categories based on a defined set of words and phrases. The images were collected from the web and labeled by humans via Amazon's Mechanical Turk crowdsourcing tool. At the time of this writing, the ImageNet project contains over 14 million images. To organize such a massive amount of data, the creators of ImageNet followed the WordNet hierarchy: each meaningful word or phrase in WordNet is called a synonym set (synset for short). Within the ImageNet project, images are organized according to these synsets, with the goal of having 1,000+ images per synset. Figure 6.15 shows a collage of ImageNet examples put together by Stanford University.

The CV community usually refers to the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) when talking about ImageNet. In this challenge, software programs compete to correctly classify and detect objects and scenes. We will use the ILSVRC challenge as a benchmark to compare the performance of different networks.

6.6.5 MS COCO
MS COCO (http://cocodataset.org) is short for Microsoft Common Objects in Context.
It is an open source database that aims to enable future research in object detection, instance segmentation, image captioning, and localizing person keypoints.

Figure 6.15 A collage of ImageNet examples compiled by Stanford University

MS COCO contains 328,000 images. More than 200,000 of them are labeled, and they include 1.5 million object instances across 80 object categories that would be easily recognizable by a 4-year-old. The original research paper by the creators of the dataset describes the motivation for and content of this dataset.2 Figure 6.16 shows a sample of the dataset provided on the MS COCO website.

6.6.6 Google Open Images
Open Images (https://storage.googleapis.com/openimages/web/index.html) is an open source image database created by Google. It contains more than 9 million images as of this writing. What makes it stand out is that these images are mostly of complex scenes that span thousands of object classes. Additionally, more than 2 million of these images are hand-annotated with bounding boxes, making Open Images by far the largest existing dataset with object-location annotations (see figure 6.17). In this subset of images, there are ~15.4 million bounding boxes covering 600 classes of objects. Similar to ImageNet and the ILSVRC, Open Images has its own competition, the Open Images Challenge (http://mng.bz/aRQz).

6.6.7 Kaggle
In addition to the datasets listed in this section, Kaggle (www.kaggle.com) is another great source of datasets. Kaggle is a website that hosts ML and DL challenges where people from all around the world can participate and submit algorithms for evaluation.

You are strongly encouraged to explore these datasets, and to search for the many other open source datasets that appear every day, to gain a better understanding of the classes and use cases they support.
We mostly use ImageNet in this chapter's projects; throughout the book, we will also use MS COCO, especially in chapter 7.

2 Tsung-Yi Lin, Michael Maire, Serge Belongie, et al., "Microsoft COCO: Common Objects in Context" (February 2015), https://arxiv.org/pdf/1405.0312.pdf.

Figure 6.16 A sample of the MS COCO dataset (Image copyright © 2015, COCO Consortium, used by permission under the Creative Commons Attribution 4.0 License.)

6.7 Project 1: A pretrained network as a feature extractor
In this project, we use a very small amount of data to train a classifier that detects images of dogs and cats. This is a pretty simple project, but the goal of the exercise is to see how to implement transfer learning when you have a very small dataset and the target domain is similar to the source domain (scenario 1). As explained in this chapter, in this case we use the pretrained convolutional network as a feature extractor: we freeze the feature extractor part of the network, add our own classifier, and then retrain the network on our new, small dataset.

One other important takeaway from this project is learning how to preprocess custom data and make it ready for training your neural network. In previous projects, we used the CIFAR and MNIST datasets: they are preprocessed by Keras, so all we had to do was download them from the Keras library and use them directly to train the network. This project provides a tutorial on how to structure your data repository and use the Keras library to get your data ready.

Visit the book's website at www.manning.com/books/deep-learning-for-vision-systems or www.computervisionbook.com to download the code notebook and the dataset used for this project.
Since we are using transfer learning, the training does not require high computation power, so you can run this notebook on your personal computer; you don't need a GPU.

For this implementation, we'll be using VGG16. Although it didn't record the lowest error in the ILSVRC, I found that it worked well for the task and was quicker to train than other models. I got an accuracy of about 96%, but feel free to use GoogLeNet or ResNet to experiment and compare results.

Figure 6.17 Annotated images from the Open Images dataset, taken from the Google AI Blog (Vittorio Ferrari, "An Update to Open Images—Now with Bounding-Boxes," July 2017, http://mng.bz/yyVG). The annotations include labels such as tree, person, clothing, human head, human face, human nose, building, shelf, furniture, bed, table, and chair.

The process for using a pretrained model as a feature extractor is well established:

1  Import the necessary libraries.
2  Preprocess the data to make it ready for the neural network.
3  Load pretrained weights from the VGG16 network trained on a large dataset.
4  Freeze all the weights in the convolutional layers (the feature extraction part). Remember, which layers to freeze is adjusted depending on the similarity of the new task to the original dataset. In our case, we observed that ImageNet has a lot of dog and cat images, so the network has already been trained to extract the detailed features of our target objects.
5  Replace the fully connected layers of the network with a custom classifier. You can add as many fully connected layers as you see fit, and each can have as many hidden units as you want. For a simple problem like this, we will just add one hidden layer with 64 units. You can observe the results and tune the size up if the model is underfitting or down if it is overfitting.
For the softmax layer, the number of units must equal the number of classes (two, in our case).
6 Compile the network, and run the training process on the new data of cats and dogs to optimize the model for the smaller dataset.
7 Evaluate the model.

Now, let's go through these steps and implement this project:

1 Import the necessary libraries:

    from keras.preprocessing.image import ImageDataGenerator
    from keras.preprocessing import image
    from keras.applications import imagenet_utils
    from keras.applications import vgg16
    from keras.applications import mobilenet
    from keras.optimizers import Adam, SGD
    from keras.metrics import categorical_crossentropy
    from keras.layers import Dense, Flatten, Dropout, BatchNormalization
    from keras.models import Model
    from sklearn.metrics import confusion_matrix
    import itertools
    import matplotlib.pyplot as plt
    %matplotlib inline

2 Preprocess the data to make it ready for the neural network. Keras has an ImageDataGenerator class that lets us easily perform image augmentation on the fly; you can read about it at https://keras.io/api/preprocessing/image. In this example, we use ImageDataGenerator to generate our image tensors, but for simplicity, we do not apply image augmentation. The ImageDataGenerator class has a method called flow_from_directory() that reads images from folders; it expects your data directory to be structured as in figure 6.18. I have the data structured in the book's code so it's ready for you to use with flow_from_directory().
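If you are preparing your own dataset instead, the expected layout (one subfolder per class under each split) can be created with a few lines of standard-library Python. This is a hedged sketch: the class names class_a and class_b follow figure 6.18 and are placeholders for your own labels.

```python
# Sketch: build the directory skeleton that flow_from_directory() expects,
# with one subfolder per class under each split. The split and class names
# here mirror figure 6.18; substitute your own.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp()) / "data"
for split in ("train", "valid", "test"):
    for cls in ("class_a", "class_b"):
        (root / split / cls).mkdir(parents=True)

# List the created structure relative to the data root.
layout = sorted(p.relative_to(root).as_posix() for p in root.rglob("*"))
print(layout)
```

Dropping your image files into the class subfolders is then all flow_from_directory() needs; it infers the class labels from the folder names.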
Figure 6.18 The required directory structure for your dataset when using the flow_from_directory() method: a Data folder containing Train, Valid, and Test folders, each with one subfolder per class (Class_a holding class_a_1.jpg ... class_a_500.jpg, Class_b holding class_b_1.jpg ... class_b_500.jpg), plus a Test_folder holding images such as test_1.jpg. ImageDataGenerator generates batches of tensor image data with real-time data augmentation; the data is looped over in batches. In this example, we won't be doing any image augmentation.

Now, load the data into train_path, valid_path, and test_path variables, and then generate the train, valid, and test batches:

    train_path = 'data/train'
    valid_path = 'data/valid'
    test_path = 'data/test'

    train_batches = ImageDataGenerator().flow_from_directory(train_path,
        target_size=(224,224), batch_size=10)
    valid_batches = ImageDataGenerator().flow_from_directory(valid_path,
        target_size=(224,224), batch_size=30)
    test_batches = ImageDataGenerator().flow_from_directory(test_path,
        target_size=(224,224), batch_size=50, shuffle=False)

3 Load pretrained weights from the VGG16 network trained on a large dataset. As in the examples earlier in this chapter, we download the VGG16 network from Keras with its weights pretrained on the ImageNet dataset. Remember that we want to remove the classifier part of this network, so we set the parameter include_top=False:

    base_model = vgg16.VGG16(weights="imagenet", include_top=False,
                             input_shape=(224,224,3))

4 Freeze all the weights in the convolutional layers (the feature-extraction part). We freeze the convolutional layers of the base_model created in the previous step to use it as a feature extractor, and then add a classifier on top of it in the next step:

    for layer in base_model.layers:   # iterate through the layers and lock them
        layer.trainable = False       # to make them non-trainable

5 Add the new classifier, and build the new model. We add a few layers on top of the base model.
In this example, we add one fully connected layer with 64 hidden units and a softmax layer with 2 units. We also add batch norm and dropout layers to avoid overfitting:

    last_layer = base_model.get_layer('block5_pool')   # save the last layer of the network
    last_output = last_layer.output                    # its output becomes the input of the next layer
    x = Flatten()(last_output)                         # flatten the classifier input (output of VGG16)
    x = Dense(64, activation='relu', name='FC_2')(x)   # one fully connected layer with 64 units
    x = BatchNormalization()(x)                        # batch norm and dropout to avoid overfitting
    x = Dropout(0.5)(x)
    x = Dense(2, activation='softmax', name='softmax')(x)
    new_model = Model(inputs=base_model.input, outputs=x)   # instantiate new_model with Keras's Model class
    new_model.summary()

    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    input_1 (InputLayer)         (None, 224, 224, 3)       0
    block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
    block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
    block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
    block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
    block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
    block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
    block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
    block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
    block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
    block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
    block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
    block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
    block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
    block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
    block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
    block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
    block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
    block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
    flatten_1 (Flatten)          (None, 25088)             0
    FC_2 (Dense)                 (None, 64)                1605696
    batch_normalization_1 (Batch (None, 64)                256
    dropout_1 (Dropout)          (None, 64)                0
    softmax (Dense)              (None, 2)                 130
    =================================================================
    Total params: 16,320,770
    Trainable params: 1,605,954
    Non-trainable params: 14,714,816
    _________________________________________________________________

6 Compile the model, and run the training process:

    new_model.compile(Adam(lr=0.0001), loss='categorical_crossentropy',
                      metrics=['accuracy'])

    new_model.fit_generator(train_batches, steps_per_epoch=4,
                            validation_data=valid_batches, validation_steps=2,
                            epochs=20, verbose=2)

When you run the previous code snippet, the verbose training output is printed after each epoch as follows:

    Epoch 1/20 - 28s - loss: 1.0070 - acc: 0.6083 - val_loss: 0.5944 - val_acc: 0.6833
    Epoch 2/20 - 25s - loss: 0.4728 - acc: 0.7754 - val_loss: 0.3313 - val_acc: 0.8605
    Epoch 3/20 - 30s - loss: 0.1177 - acc: 0.9750 - val_loss: 0.2449 - val_acc: 0.8167
    Epoch 4/20 - 25s - loss: 0.1640 - acc: 0.9444 - val_loss: 0.3354 - val_acc: 0.8372
    Epoch 5/20 - 29s - loss: 0.0545 - acc: 1.0000 - val_loss: 0.2392 - val_acc: 0.8333
    Epoch 6/20 - 25s - loss: 0.0941 - acc: 0.9505 - val_loss: 0.2019 - val_acc: 0.9070
    Epoch 7/20 - 28s - loss: 0.0269 - acc: 1.0000 - val_loss: 0.1707 - val_acc: 0.9000
    Epoch 8/20 - 26s - loss: 0.0349 - acc: 0.9917 - val_loss: 0.2489 - val_acc: 0.8140
    Epoch 9/20 - 28s - loss: 0.0435 - acc: 0.9891 - val_loss: 0.1634 - val_acc: 0.9000
    Epoch 10/20 - 26s - loss: 0.0349 - acc: 0.9833 - val_loss: 0.2375 - val_acc: 0.8140
    Epoch 11/20 - 28s - loss: 0.0288 - acc: 1.0000 - val_loss: 0.1859 - val_acc: 0.9000
    Epoch 12/20 - 29s - loss: 0.0234 - acc: 0.9917 - val_loss: 0.1879 - val_acc: 0.8372
    Epoch 13/20 - 32s - loss: 0.0241 - acc: 1.0000 - val_loss: 0.2513 - val_acc: 0.8500
    Epoch 14/20 - 29s - loss: 0.0120 - acc: 1.0000 - val_loss: 0.0900 - val_acc: 0.9302
    Epoch 15/20 - 36s - loss: 0.0189 - acc: 1.0000 - val_loss: 0.1888 - val_acc: 0.9000
    Epoch 16/20 - 30s - loss: 0.0142 - acc: 1.0000 - val_loss: 0.1672 - val_acc: 0.8605
    Epoch 17/20 - 29s - loss: 0.0160 - acc: 0.9917 - val_loss: 0.1752 - val_acc: 0.8667
    Epoch 18/20 - 25s - loss: 0.0126 - acc: 1.0000 - val_loss: 0.1823 - val_acc: 0.9070
    Epoch 19/20 - 29s - loss: 0.0165 - acc: 1.0000 - val_loss: 0.1789 - val_acc: 0.8833
    Epoch 20/20 - 25s - loss: 0.0112 - acc: 1.0000 - val_loss: 0.1743 - val_acc: 0.8837

Notice that the model trained very quickly using regular CPU computing power. Each epoch took approximately 25 to 29 seconds, so the model took less than 10 minutes to train for 20 epochs.

7 Evaluate the model. First, let's define the load_dataset() method that we will use to convert our dataset into tensors:

    from sklearn.datasets import load_files
    from keras.utils import np_utils
    import numpy as np

    def load_dataset(path):
        data = load_files(path)
        paths = np.array(data['filenames'])
        targets = np_utils.to_categorical(np.array(data['target']))
        return paths, targets

    test_files, test_targets = load_dataset('small_data/test')

Then, we create test_tensors to evaluate the model on them:

    from keras.preprocessing import image
    from keras.applications.vgg16 import preprocess_input
    from tqdm import tqdm

    def path_to_tensor(img_path):
        img = image.load_img(img_path, target_size=(224, 224))   # load an RGB image as a PIL.Image.Image type
        x = image.img_to_array(img)        # convert the PIL image to a 3D tensor with shape (224, 224, 3)
        return np.expand_dims(x, axis=0)   # convert to a 4D tensor with shape (1, 224, 224, 3) and return it

    def paths_to_tensor(img_paths):
        list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
        return np.vstack(list_of_tensors)

    test_tensors = preprocess_input(paths_to_tensor(test_files))

Now we can run Keras's evaluate() method to calculate the model accuracy:

    print('\nTesting loss: {:.4f}\nTesting accuracy: {:.4f}'.format(
        *new_model.evaluate(test_tensors, test_targets)))

    Testing loss: 0.1042
    Testing accuracy: 0.9579

The model achieved an accuracy of 95.79% in less than 10 minutes of training. This is very good, given our very small dataset.
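As a quick sanity check on the summary() numbers above, the trainable-parameter count can be reproduced by hand: the Flatten output is 7 × 7 × 512 = 25,088; the new Dense layer has 25,088 × 64 weights plus 64 biases; batch norm contributes 2 × 64 trainable parameters (gamma and beta; its other 128 moving statistics are non-trainable); and the 2-unit softmax adds 64 × 2 + 2. Pure arithmetic, no Keras needed:

```python
# Reproduce the trainable-parameter count reported by new_model.summary().
flat = 7 * 7 * 512              # output of block5_pool, flattened -> 25088
fc2 = flat * 64 + 64            # Dense(64): weights + biases -> 1,605,696
bn_trainable = 2 * 64           # batch norm gamma and beta (trainable half of 256)
softmax = 64 * 2 + 2            # Dense(2): weights + biases -> 130

trainable = fc2 + bn_trainable + softmax
print(trainable)  # 1605954, matching "Trainable params: 1,605,954"
```

The remaining 14,714,688 VGG16 convolutional parameters plus the 128 non-trainable batch-norm statistics give the 14,714,816 non-trainable parameters in the summary.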
6.8 Project 2: Fine-tuning

In this project, we explore scenario 3, discussed earlier in this chapter, where the target dataset is small and very different from the source dataset. The goal of this project is to build a sign language classifier that distinguishes 10 classes: the sign language digits from 0 to 9. Figure 6.19 shows a sample of our dataset.

Figure 6.19 A sample from the sign language dataset

Following are the details of our dataset:

- Number of classes = 10 (digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9)
- Image size = 100 × 100
- Color space = RGB
- 1,712 images in the training set
- 300 images in the validation set
- 50 images in the test set

It is very noticeable how small our dataset is. If you try to train a network from scratch on this very small dataset, you will not achieve good results. On the other hand, by using transfer learning we were able to achieve an accuracy higher than 98%, even though the source and target domains are very different.

NOTE Please take this evaluation with a grain of salt, because the network hasn't been thoroughly tested with a lot of data: we have only 50 test images in this dataset. Transfer learning is expected to achieve good results anyway, but I wanted to highlight this fact.

Visit the book's website at www.manning.com/books/deep-learning-for-vision-systems or www.computervisionbook.com to download the source code notebook and the dataset used for this project. As in project 1, the training does not require high computation power, so you can run this notebook on your personal computer; you don't need a GPU.
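Although the source images are 100 × 100 RGB, the network still consumes 224 × 224 inputs stacked into 4D batch tensors of shape (batch, height, width, channels), exactly as in project 1. Adding the batch dimension to a single image tensor is one numpy call (a generic sketch, not part of the book's notebook):

```python
import numpy as np

# A single RGB image as a (height, width, channels) tensor, after resizing
# to the 224 x 224 input size the VGG16-based model expects.
img = np.zeros((224, 224, 3), dtype=np.float32)

# Add a leading batch dimension so the model sees a 4D tensor.
batch = np.expand_dims(img, axis=0)
print(batch.shape)  # (1, 224, 224, 3)
```

Stacking many such 4D tensors with np.vstack (as paths_to_tensor() did in project 1) produces a full (N, 224, 224, 3) batch.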
For ease of comparison with the previous project, we will again use the VGG16 network trained on the ImageNet dataset. The process to fine-tune a pretrained network is as follows:

1 Import the necessary libraries.
2 Preprocess the data to make it ready for the neural network.
3 Load pretrained weights from the VGG16 network trained on a large dataset (ImageNet).
4 Freeze part of the feature-extraction section of the network.
5 Add the new classifier layers.
6 Compile the network, and run the training process to optimize the model for the smaller dataset.
7 Evaluate the model.

Now let's implement this project:

1 Import the necessary libraries:

    from keras.preprocessing.image import ImageDataGenerator
    from keras.preprocessing import image
    from keras.applications import imagenet_utils
    from keras.applications import vgg16
    from keras.optimizers import Adam, SGD
    from keras.metrics import categorical_crossentropy
    from keras.layers import Dense, Flatten, Dropout, BatchNormalization
    from keras.models import Model
    from sklearn.metrics import confusion_matrix
    import itertools
    import matplotlib.pyplot as plt
    %matplotlib inline

2 Preprocess the data to make it ready for the neural network. As in project 1, we use the ImageDataGenerator class from Keras and the flow_from_directory() method to preprocess our data (again without image augmentation). The data is already structured for you, so you can directly create your tensors:

    train_path = 'dataset/train'
    valid_path = 'dataset/valid'
    test_path = 'dataset/test'

    train_batches = ImageDataGenerator().flow_from_directory(train_path,
        target_size=(224,224), batch_size=10)
    valid_batches = ImageDataGenerator().flow_from_directory(valid_path,
        target_size=(224,224), batch_size=30)
    test_batches = ImageDataGenerator().flow_from_directory(test_path,
        target_size=(224,224), batch_size=50, shuffle=False)

    Found 1712 images belonging to 10 classes.
    Found 300 images belonging to 10 classes.
    Found 50 images belonging to 10 classes.

3 Load pretrained weights from the VGG16 network trained on a large dataset (ImageNet). We download the VGG16 architecture from the Keras library with ImageNet weights. Note that we use the parameter pooling='avg' here: global average pooling is applied to the output of the last convolutional layer, so the output of the model is a 2D tensor. We use this as an alternative to the Flatten layer before adding the fully connected layers:

    base_model = vgg16.VGG16(weights="imagenet", include_top=False,
                             input_shape=(224,224,3), pooling='avg')

4 Freeze some of the feature-extraction section, and fine-tune the rest on our new training data. The level of fine-tuning is usually determined by trial and error. VGG16 has 13 convolutional layers: you can freeze them all or freeze just a few, depending on how similar your data is to the source data. In the sign language case, the new domain is very different from the source domain, so we start by fine-tuning only the last five layers; if we don't get satisfying results, we can fine-tune more. It turns out that after training the new model, we got 98% accuracy, so this was a good level of fine-tuning. In other cases, if you find that your network doesn't converge, try fine-tuning more layers.
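A quick aside on the pooling='avg' argument from step 3: it collapses the (7, 7, 512) output of block5_pool into a 512-long vector by averaging each feature map over its spatial positions, which is why no Flatten layer is needed later. A minimal numpy sketch of the operation:

```python
import numpy as np

# One sample's final convolutional feature maps: (height, width, channels),
# matching block5_pool's (7, 7, 512) output for a single image.
feature_maps = np.random.rand(7, 7, 512)

# Global average pooling: average each channel over its 7x7 spatial grid.
gap = feature_maps.mean(axis=(0, 1))
print(gap.shape)  # (512,)
```

Compare this with project 1, where Flatten produced a much larger 7 × 7 × 512 = 25,088-dimensional input to the classifier; global average pooling keeps the classifier small.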
    for layer in base_model.layers[:-5]:   # iterate through the layers and lock them,
        layer.trainable = False            # except for the last five layers

    base_model.summary()
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    input_1 (InputLayer)         (None, 224, 224, 3)       0
    block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
    block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
    block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
    block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
    block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
    block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
    block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
    block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
    block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
    block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
    block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
    block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
    block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
    block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
    block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
    block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
    block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
    block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
    global_average_pooling2d_1 ( (None, 512)               0
    =================================================================
    Total params: 14,714,688
    Trainable params: 7,079,424
    Non-trainable params: 7,635,264
    _________________________________________________________________

5 Add the new classifier layers, and build the new model:

    last_output = base_model.output   # save the output of base_model as input of the next layer
    x = Dense(10, activation='softmax', name='softmax')(last_output)   # new softmax layer with 10 units
    new_model = Model(inputs=base_model.input, outputs=x)   # instantiate new_model with Keras's Model class
    new_model.summary()               # print the new_model summary

    Layer (type)                 Output Shape              Param #
    =================================================================
    input_1 (InputLayer)         (None, 224, 224, 3)       0
    block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
    block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
    block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
    block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
    block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
    block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
    block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
    block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
    block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
    block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
    block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
    block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
    block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
    block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
    block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
    block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
    block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
    block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
    global_average_pooling2d_1 ( (None, 512)               0
    softmax (Dense)              (None, 10)                5130
    =================================================================
    Total params: 14,719,818
    Trainable params: 7,084,554
    Non-trainable params: 7,635,264
    _________________________________________________________________

6 Compile the network, and run the training process to optimize the model for the smaller dataset:

    new_model.compile(Adam(lr=0.0001), loss='categorical_crossentropy',
                      metrics=['accuracy'])

    from keras.callbacks import ModelCheckpoint
    checkpointer = ModelCheckpoint(filepath='signlanguage.model.hdf5',
                                   save_best_only=True)

    history = new_model.fit_generator(train_batches, steps_per_epoch=18,
                                      validation_data=valid_batches,
                                      validation_steps=3, epochs=20,
                                      verbose=1, callbacks=[checkpointer])

    Epoch 1/20
    18/18 [==============================] - 40s 2s/step - loss: 3.2263 - acc: 0.1833 - val_loss: 2.0674 - val_acc: 0.1667
    Epoch 2/20
    18/18 [==============================] - 41s 2s/step - loss: 2.0311 - acc: 0.1833 - val_loss: 1.7330 - val_acc: 0.3000
    Epoch 3/20
    18/18 [==============================] - 42s 2s/step - loss: 1.5741 - acc: 0.4500 - val_loss: 1.5577 - val_acc: 0.4000
    Epoch 4/20
    18/18 [==============================] - 42s 2s/step - loss: 1.3068 - acc: 0.5111 - val_loss: 0.9856 - val_acc: 0.7333
    Epoch 5/20
    18/18 [==============================] - 43s 2s/step - loss: 1.1563 - acc: 0.6389 - val_loss: 0.7637 - val_acc: 0.7333
    Epoch 6/20
    18/18 [==============================] - 41s 2s/step - loss: 0.8414 - acc: 0.6722 - val_loss: 0.7550 - val_acc: 0.8000
    Epoch 7/20
    18/18 [==============================] - 41s 2s/step - loss: 0.5982 - acc: 0.8444 - val_loss: 0.7910 - val_acc: 0.6667
    Epoch 8/20
    18/18 [==============================] - 41s 2s/step - loss: 0.3804 - acc: 0.8722 - val_loss: 0.7376 - val_acc: 0.8667
    Epoch 9/20
    18/18 [==============================] - 41s 2s/step - loss: 0.5048 - acc: 0.8222 - val_loss: 0.2677 - val_acc: 0.9000
    Epoch 10/20
    18/18 [==============================] - 39s 2s/step - loss: 0.2383 - acc: 0.9276 - val_loss: 0.2844 - val_acc: 0.9000
    Epoch 11/20
    18/18 [==============================] - 41s 2s/step - loss: 0.1163 - acc: 0.9778 - val_loss: 0.0775 - val_acc: 1.0000
    Epoch 12/20
    18/18 [==============================] - 41s 2s/step - loss: 0.1377 - acc: 0.9667 - val_loss: 0.5140 - val_acc: 0.9333
    Epoch 13/20
    18/18 [==============================] - 41s 2s/step - loss: 0.0955 - acc: 0.9556 - val_loss: 0.1783 - val_acc: 0.9333
    Epoch 14/20
    18/18 [==============================] - 41s 2s/step - loss: 0.1785 - acc: 0.9611 - val_loss: 0.0704 - val_acc: 0.9333
    Epoch 15/20
    18/18 [==============================] - 41s 2s/step - loss: 0.0533 - acc: 0.9778 - val_loss: 0.4692 - val_acc: 0.8667
    Epoch 16/20
    18/18 [==============================] - 41s 2s/step - loss: 0.0809 - acc: 0.9778 - val_loss: 0.0447 - val_acc: 1.0000
    Epoch 17/20
    18/18 [==============================] - 41s 2s/step - loss: 0.0834 - acc: 0.9722 - val_loss: 0.0284 - val_acc: 1.0000
    Epoch 18/20
    18/18 [==============================] - 41s 2s/step - loss: 0.1022 - acc: 0.9611 - val_loss: 0.0177 - val_acc: 1.0000
    Epoch 19/20
    18/18 [==============================] - 41s 2s/step - loss: 0.1134 - acc: 0.9667 - val_loss: 0.0595 - val_acc: 1.0000
    Epoch 20/20
    18/18 [==============================] - 39s 2s/step - loss: 0.0676 - acc: 0.9777 - val_loss: 0.0862 - val_acc: 0.9667

Notice the training time of each epoch in the verbose output. The model trained very quickly using regular CPU computing power: each epoch took approximately 40 seconds, so it took the model less than 15 minutes to train for 20 epochs.

7 Evaluate the accuracy of the model.
Similar to the previous project, we create a load_dataset() method to build test_targets and test_tensors, and then use the evaluate() method from Keras to run inference on the test images and get the model accuracy:

    print('\nTesting loss: {:.4f}\nTesting accuracy: {:.4f}'.format(
        *new_model.evaluate(test_tensors, test_targets)))

    Testing loss: 0.0574
    Testing accuracy: 0.9800

A deeper level of evaluating your model involves creating a confusion matrix. We explained the confusion matrix in chapter 4: it is a table that is often used to describe the performance of a classification model on the test dataset, providing a deeper understanding of how the model performed. See chapter 4 for details on the different model evaluation metrics. Now, let's build the confusion matrix for our model (see figure 6.20):

    from sklearn.metrics import confusion_matrix
    import numpy as np

    cm_labels = ['0','1','2','3','4','5','6','7','8','9']
    cm = confusion_matrix(np.argmax(test_targets, axis=1),
                          np.argmax(new_model.predict(test_tensors), axis=1))

    plt.imshow(cm, cmap=plt.cm.Blues)
    plt.colorbar()
    indexes = np.arange(len(cm_labels))
    for i in indexes:
        for j in indexes:
            plt.text(j, i, cm[i, j])
    plt.xticks(indexes, cm_labels, rotation=90)
    plt.xlabel('Predicted label')
    plt.yticks(indexes, cm_labels)
    plt.ylabel('True label')
    plt.title('Confusion matrix')
    plt.show()

To read this confusion matrix, look at each number on the Predicted Label axis and check whether its images were correctly classified under the matching number on the True Label axis. For example, look at number 0: all five images were classified as 0, and no images were mistakenly classified as any other number.
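Beyond eyeballing the matrix, per-class precision and recall fall straight out of its rows and columns: recall for class i is cm[i, i] divided by its row sum (true instances of that class), and precision is cm[i, i] divided by its column sum (predictions of that class). A small sketch using a hypothetical 3-class matrix, not the actual sign-language results:

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = true labels, cols = predicted.
cm = np.array([[5, 0, 0],
               [0, 4, 1],
               [0, 0, 5]])

recall = np.diag(cm) / cm.sum(axis=1)      # per-class recall (row-wise)
precision = np.diag(cm) / cm.sum(axis=0)   # per-class precision (column-wise)
print(recall)     # [1.  0.8 1. ]
print(precision)  # [1.  1.  0.83333333]
```

The same two lines applied to the 10 × 10 matrix computed above would give per-digit precision and recall for the sign language classifier.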
Figure 6.20 Confusion matrix for the sign language classifier: five correct predictions per class along the diagonal, except for one image of the digit 8 predicted as 7

Similarly, go through the rest of the numbers on the Predicted Label axis. You will notice that the model made correct predictions for all the test images except one: it mistakenly classified an image of the number 8 as the number 7.

Summary

- Transfer learning is usually the go-to approach when starting a classification or object detection project, especially when you don't have a lot of training data.
- Transfer learning migrates the knowledge learned from the source dataset to the target dataset, saving training time and computational cost.
- A neural network learns the features in your dataset step by step, in increasing levels of complexity. The deeper you go through the network layers, the more image-specific the learned features are. Early layers learn low-level features like lines, blobs, and edges; the output of the first layer becomes input to the second layer, which produces higher-level features. Subsequent layers assemble these into parts of familiar objects, and a later layer detects the objects themselves.
- The three main transfer learning approaches are using a pretrained network as a classifier, using a pretrained network as a feature extractor, and fine-tuning.
- Using a pretrained network as a classifier means using the network directly to classify new images, without freezing layers or applying further training.
- Using a pretrained network as a feature extractor means freezing the feature extraction part of the network and retraining the new classifier.
- Fine-tuning means freezing a few of the network layers that are used for feature extraction, and jointly training both the non-frozen layers and the newly added classifier layers of the pretrained model.
- The transferability of features from one network to another is a function of the size of the target dataset and the domain similarity between the source and target data.
- Generally, fine-tuning uses a smaller learning rate for the pretrained parameters, while training the output layer from scratch can use a larger learning rate.

7 Object detection with R-CNN, SSD, and YOLO

This chapter covers
- Understanding image classification vs. object detection
- Understanding the general framework of object detection projects
- Using object detection algorithms like R-CNN, SSD, and YOLO

In the previous chapters, we explained how we can use deep neural networks for image classification tasks. In image classification, we assume that there is only one main target object in the image, and the model's sole focus is to identify the target category. However, in many situations, we are interested in multiple targets in the image. We want to not only classify them, but also obtain their specific positions in the image. In computer vision, we refer to such tasks as object detection. Figure 7.1 explains the difference between image classification and object detection tasks.

Object detection is a CV task that involves two main tasks: localizing one or more objects within an image and classifying each object in the image (see table 7.1). This is done by drawing a bounding box around each identified object with its predicted class. This means the system doesn't just predict the class of the image, as in image classification tasks; it also predicts the coordinates of the bounding box that fits the detected object. This is a challenging CV task because it requires both successful object localization, in order to locate and draw a bounding box around each object in an image, and object classification, to predict the correct class of the object that was localized.

Object detection is widely used in many fields. For example, in self-driving technology, we need to plan routes by identifying the locations of vehicles, pedestrians, roads, and obstacles in a captured video image. Robots often perform this type of task to detect targets of interest. And systems in the security field need to detect abnormal targets, such as intruders or bombs.

Table 7.1 Image classification vs. object detection
- Image classification — The goal is to predict the type or class of an object in an image. Input: an image with a single object. Output: a class label (cat, dog, etc.). Example output: class probability (for example, 84% cat).
- Object detection — The goal is to predict the locations of objects in an image via bounding boxes, and the classes of the located objects. Input: an image with one or more objects. Output: one or more bounding boxes (defined by coordinates) and a class label for each bounding box. Example output for an image with two objects: box1 coordinates (x, y, w, h) and class probability; box2 coordinates and class probability.
Note that the bounding box coordinates (x, y, w, h) are as follows: (x, y) are the coordinates of the bounding-box center point, and (w, h) are the width and height of the box.

Figure 7.1 Image classification vs. object detection tasks. In classification tasks, the classifier outputs the class probability (cat), whereas in object detection tasks, the detector outputs the bounding box coordinates that localize the detected objects (four boxes in this example) and their predicted classes (two cats, one duck, and one dog).

This chapter's layout is as follows:
1. We will explore the general framework of object detection algorithms.
2. We will dive deep into three of the most popular detection algorithms: the R-CNN family of networks, SSD, and the YOLO family of networks.
3. We will use what we've learned in a real-world project to train an end-to-end object detector.
By the end of this chapter, we will have gained an understanding of how DL is applied to object detection, and how the different object detection models inspire and diverge from one another. Let's get started!

7.1 General object detection framework
Before we jump into object detection systems like R-CNN, SSD, and YOLO, let's discuss the general framework of these systems to understand the high-level workflow that DL-based systems follow to detect objects, and the metrics they use to evaluate detection performance. Don't worry about the code implementation details of object detectors yet. The goal of this section is to give you an overview of how different object detection systems approach this task, and to introduce you to a new way of thinking about this problem and a set of new concepts, to set you up to understand the DL architectures that we will explain in sections 7.2, 7.3, and 7.4.

Typically, an object detection framework has four components:
1. Region proposal — An algorithm or a DL model is used to generate regions of interest (RoIs) to be further processed by the system.
These are regions that the network believes might contain an object; the output is a large number of bounding boxes, each of which has an objectness score. Boxes with large objectness scores are then passed along the network layers for further processing.
2. Feature extraction and network predictions — Visual features are extracted for each of the bounding boxes. They are evaluated, and it is determined whether and which objects are present in the proposals, based on visual features (for example, by an object classification component).
3. Non-maximum suppression (NMS) — In this step, the model has likely found multiple bounding boxes for the same object. NMS helps avoid repeated detection of the same instance by combining overlapping boxes into a single bounding box for each object.
4. Evaluation metrics — Similar to the accuracy, precision, and recall metrics in image classification tasks (see chapter 4), object detection systems have their own metrics to evaluate detection performance. In this section, we will explain the most popular metrics, like mean average precision (mAP), the precision-recall curve (PR curve), and intersection over union (IoU).

Now, let's dive one level deeper into each one of these components to build an intuition about what their goals are.

7.1.1 Region proposals
In this step, the system looks at the image and proposes RoIs for further analysis. RoIs are regions that the system believes have a high likelihood of containing an object, called the objectness score (figure 7.2). Regions with high objectness scores are passed to the next steps; regions with low scores are abandoned.
There are several approaches to generate region proposals. Originally, the selective search algorithm was used to generate object proposals; we will talk more about this algorithm when we discuss the R-CNN network.
Other approaches use more complex visual features extracted from the image by a deep neural network to generate regions (for example, based on the features from a DL model). We will talk in more detail about how different object detection systems approach this task. The important thing to note is that this step produces a lot (thousands) of bounding boxes to be further analyzed and classified by the network. During this step, the network analyzes these regions in the image and classifies each region as foreground (object) or background (no object) based on its objectness score. If the objectness score is above a certain threshold, then this region is considered foreground and pushed forward in the network. Note that this threshold is configurable based on your problem. If the threshold is too low, your network will exhaustively generate all possible proposals, and you will have a better chance of detecting all objects in the image. On the flip side, this is very computationally expensive and will slow down detection. So, the trade-off with generating region proposals is the number of regions versus computational complexity, and the right approach is to use problem-specific information to reduce the number of RoIs.

Figure 7.2 Regions of interest (RoIs) proposed by the system. Regions with a high objectness score represent areas with a high likelihood of containing objects (foreground), and the ones with a low objectness score are ignored because they have a low likelihood of containing objects (background).

7.1.2 Network predictions
This component includes the pretrained CNN network that is used for feature extraction: it extracts features from the input image that are representative of the task at hand, and uses these features to determine the class of the image.
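The foreground/background thresholding described in section 7.1.1 can be sketched in a few lines. The (box, score) proposal format, the example scores, and the 0.5 default threshold below are illustrative assumptions, not code from the book:

```python
# Sketch: keep only region proposals whose objectness score clears a threshold.
# The (box, score) format and the example values are illustrative assumptions.

def filter_proposals(proposals, objectness_threshold=0.5):
    """Return proposals classified as foreground (objectness above threshold).

    proposals: list of (box, score) tuples, where box = (x, y, w, h).
    """
    return [(box, score) for box, score in proposals if score >= objectness_threshold]

# A lower threshold keeps more proposals (better recall, more computation);
# a higher threshold keeps fewer (faster, but objects may be missed).
proposals = [((10, 10, 50, 80), 0.91), ((200, 40, 30, 30), 0.08), ((60, 60, 40, 40), 0.55)]
foreground = filter_proposals(proposals, objectness_threshold=0.5)
print(len(foreground))  # 2 of the 3 proposals survive
```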
In object detection frameworks, people typically use pretrained image classification models to extract visual features, as these tend to generalize fairly well. For example, a model trained on the MS COCO or ImageNet dataset is able to extract fairly generic features.
In this step, the network analyzes all the regions that have been identified as having a high likelihood of containing an object and makes two predictions for each region:
- Bounding-box prediction — The coordinates that locate the box surrounding the object. The bounding box coordinates are represented as the tuple (x, y, w, h), where x and y are the coordinates of the center point of the bounding box and w and h are the width and height of the box.
- Class prediction — The classic softmax function that predicts the class probability for each object.
Since thousands of regions are proposed, each object will always have multiple bounding boxes surrounding it with the correct classification. For example, take a look at the image of the dog in figure 7.3. The network was clearly able to find the object (dog) and successfully classify it. But the detection fired a total of five times because the dog was present in the five RoIs produced in the previous step: hence the five bounding boxes around the dog in the figure.

Figure 7.3 The bounding-box detector produces more than one bounding box for an object. We want to consolidate these boxes into one bounding box that fits the object the most.

Although the detector was able to successfully locate the dog in the image and classify it correctly, this is not exactly what we need. We need just one bounding box for each object for most problems; in some problems, we only want the one box that fits the object the most. What if we are building a system to count dogs in an image? Our current system will count five dogs.
We don't want that. This is when the non-maximum suppression technique comes in handy.

7.1.3 Non-maximum suppression (NMS)
As you can see in figure 7.4, one of the problems of an object detection algorithm is that it may find multiple detections of the same object. So, instead of creating only one bounding box around the object, it draws multiple boxes for the same object. NMS is a technique that makes sure the detection algorithm detects each object only once. As the name implies, NMS looks at all the boxes surrounding an object to find the box that has the maximum prediction probability, and it suppresses or eliminates the other boxes (hence the name).
The general idea of NMS is to reduce the number of candidate boxes to only one bounding box for each object. For example, if the object in the frame is fairly large and more than 2,000 object proposals have been generated, it is quite likely that some of them will have significant overlap with each other and with the object.

Figure 7.4 Multiple regions are proposed for the same object (predictions before NMS). After applying non-maximum suppression, only the box that fits the object the best remains; the rest are ignored, as they have large overlaps with the selected box.

Let's see the steps of how the NMS algorithm works:
1. Discard all bounding boxes that have predictions that are less than a certain threshold, called the confidence threshold. This threshold is tunable, which means a box will be suppressed if its prediction probability is less than the set threshold.
2. Look at all the remaining boxes, and select the bounding box with the highest probability.
3. Calculate the overlap of the remaining boxes that have the same class prediction. Bounding boxes that have high overlap with each other and that predict the same class are averaged together.
This overlap metric is called intersection over union (IoU); IoU is explained in detail in the next section.
4. Suppress any box whose IoU with the selected box is larger than a certain threshold (called the NMS threshold). Usually the NMS threshold is equal to 0.5, but it is tunable as well if you want to output fewer or more bounding boxes.
NMS techniques are typically standard across the different detection frameworks, but this is an important step that may require tweaking hyperparameters, such as the confidence threshold and the NMS threshold, based on the scenario.

7.1.4 Object-detector evaluation metrics
When evaluating the performance of an object detector, we use two main evaluation metrics: frames per second and mean average precision.

FRAMES PER SECOND (FPS) TO MEASURE DETECTION SPEED
The most common metric used to measure detection speed is the number of frames per second (FPS). For example, Faster R-CNN operates at only 7 FPS, whereas SSD operates at 59 FPS. In benchmarking experiments, you will see the authors of a paper state their network results as "network X achieves mAP of Y% at Z FPS," where X is the network name, Y is the mAP percentage, and Z is the FPS.

MEAN AVERAGE PRECISION (MAP) TO MEASURE NETWORK PRECISION
The most common evaluation metric used in object recognition tasks is mean average precision (mAP). It is a percentage from 0 to 100, and higher values are typically better, but its value is different from the accuracy metric used in classification. To understand how mAP is calculated, you first need to understand intersection over union (IoU) and the precision-recall curve (PR curve). Let's explain IoU and the PR curve and then come back to mAP.

INTERSECTION OVER UNION (IOU)
This measure evaluates the overlap between two bounding boxes: the ground truth bounding box (B_ground truth) and the predicted bounding box (B_predicted). By applying the IoU, we can tell whether a detection is valid (True Positive) or not (False Positive).
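The NMS procedure from section 7.1.3 can be sketched in plain Python. The (box, score, class) detection format and the example values are illustrative assumptions; also note that this sketch keeps the highest-scoring box per object and suppresses same-class boxes that overlap it heavily, rather than averaging them:

```python
def to_corners(box):
    """Convert the chapter's (x, y, w, h) center format to corner coordinates."""
    x, y, w, h = box
    return x - w / 2, y - h / 2, x + w / 2, y + h / 2

def iou(box_a, box_b):
    """Intersection over union of two center-format (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = to_corners(box_a)
    bx1, by1, bx2, by2 = to_corners(box_b)
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(detections, confidence_threshold=0.5, nms_threshold=0.5):
    """Greedy NMS over (box, score, class_label) detections."""
    # Step 1: discard detections below the confidence threshold.
    kept = [d for d in detections if d[1] >= confidence_threshold]
    # Step 2: repeatedly select the remaining box with the highest probability.
    kept.sort(key=lambda d: d[1], reverse=True)
    selected = []
    while kept:
        best = kept.pop(0)
        selected.append(best)
        # Steps 3-4: drop same-class boxes whose overlap (IoU) with the
        # selected box exceeds the NMS threshold.
        kept = [d for d in kept
                if d[2] != best[2] or iou(d[0], best[0]) < nms_threshold]
    return selected

detections = [
    ((50, 50, 40, 40), 0.90, "dog"),    # two heavily overlapping dog boxes...
    ((52, 52, 40, 40), 0.75, "dog"),
    ((150, 150, 30, 30), 0.80, "cat"),  # ...and one cat box elsewhere
    ((50, 50, 40, 40), 0.30, "dog"),    # below the confidence threshold
]
kept = nms(detections)
print([(d[1], d[2]) for d in kept])  # [(0.9, 'dog'), (0.8, 'cat')]
```

Raising the NMS threshold lets more overlapping boxes survive; lowering the confidence threshold admits more candidates into the selection loop.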
Figure 7.5 illustrates the IoU between a ground truth bounding box and a predicted bounding box.
The intersection over union value ranges from 0 (no overlap at all) to 1 (the two bounding boxes overlap each other 100%). The higher the overlap between the two bounding boxes (IoU value), the better (figure 7.6).
To calculate the IoU of a prediction, we need the following:
- The ground truth bounding box (B_ground truth): the hand-labeled bounding box created during the labeling process
- The predicted bounding box (B_predicted) from our model
We calculate IoU by dividing the area of overlap by the area of the union, as in the following equation:

IoU = Area of overlap / Area of union = (B_ground truth ∩ B_predicted) / (B_ground truth ∪ B_predicted)

Figure 7.5 The IoU score is the overlap between the ground truth bounding box and the predicted bounding box.

Figure 7.6 IoU scores range from 0 (no overlap) to 1 (100% overlap). The higher the overlap (IoU) between the two bounding boxes, the better: poor (IoU: 0.4034), good (IoU: 0.7330), excellent (IoU: 0.9264).

IoU is used to define a correct prediction, meaning a prediction (True Positive) that has an IoU greater than some threshold. This threshold is a tunable value depending on the challenge, but 0.5 is a standard value. For example, some challenges, like Microsoft COCO, use mAP@0.5 (IoU threshold of 0.5) or mAP@0.75 (IoU threshold of 0.75). If the IoU value is above this threshold, the prediction is considered a True Positive (TP); and if it is below the threshold, it is considered a False Positive (FP).
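The IoU equation translates directly into code. For brevity, this sketch assumes axis-aligned boxes given as corner coordinates (x1, y1, x2, y2); converting from the chapter's center format (x, y, w, h) is a small extra step. The box values are made-up examples:

```python
def iou(box_a, box_b):
    """IoU = area of overlap / area of union, for (x1, y1, x2, y2) corner boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(pred_box, gt_box, iou_threshold=0.5):
    """A detection counts as a True Positive when its IoU clears the threshold."""
    return iou(pred_box, gt_box) >= iou_threshold

ground_truth = (20, 20, 80, 80)
print(iou(ground_truth, ground_truth))                     # identical boxes: 1.0
print(is_true_positive((30, 30, 90, 90), ground_truth))    # large overlap: True
print(is_true_positive((70, 70, 130, 130), ground_truth))  # small overlap: False
```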
PRECISION-RECALL CURVE (PR CURVE)
With TP and FP defined, we can now calculate the precision and recall of our detection for a given class across the testing dataset. As explained in chapter 4, we calculate the precision and recall as follows (recall that FN stands for False Negative):

Recall = TP / (TP + FN)
Precision = TP / (TP + FP)

After calculating the precision and recall for all classes, the PR curve is then plotted as shown in figure 7.7. The PR curve is a good way to evaluate the performance of an object detector as the confidence is changed, by plotting a curve for each object class. A detector is considered good if its precision stays high as recall increases, which means that if you vary the confidence threshold, the precision and recall will still be high. On the other hand, a poor detector needs to increase the number of FPs (lower precision) in order to achieve a high recall. That's why the PR curve usually starts with high precision values, decreasing as recall increases.

Figure 7.7 A precision-recall curve is used to evaluate the performance of an object detector.

Now that we have the PR curve, we can calculate the average precision (AP) by calculating the area under the curve (AUC). Finally, the mAP for object detection is the average of the AP calculated for all the classes. It is also important to note that some research papers use AP and mAP interchangeably.

RECAP
To recap, the mAP is calculated as follows:
1. Get each bounding box's associated objectness score (the probability of the box containing an object).
2. Calculate precision and recall.
3. Compute the PR curve for each class by varying the score threshold.
4. Calculate the AP: the area under the PR curve. In this step, the AP is computed for each class.
5. Calculate the mAP: the average AP over all the different classes.
The last thing to note about mAP is that it is more complicated to calculate than other traditional metrics like accuracy. The good news is that you don't need to compute mAP values yourself: most DL object detection implementations handle computing the mAP for you, as you will see later in this chapter.
Now that we understand the general framework of object detection algorithms, let's dive deeper into three of the most popular. In this chapter, we will discuss the R-CNN family of networks, SSD, and YOLO networks in detail to see how object detectors have evolved over time. We will also examine the pros and cons of each network so you can choose the most appropriate algorithm for your problem.

7.2 Region-based convolutional neural networks (R-CNNs)
The R-CNN family of object detection techniques, usually referred to as R-CNNs (short for region-based convolutional neural networks), was developed by Ross Girshick et al. in 2014.[1] The R-CNN family expanded to include Fast R-CNN[2] and Faster R-CNN[3] in 2015 and 2016, respectively. In this section, I'll quickly walk you through the evolution of the R-CNN family from R-CNN to Fast R-CNN to Faster R-CNN, and then we will dive deeper into the Faster R-CNN architecture and code implementation.

[1] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," 2014, http://arxiv.org/abs/1311.2524.
[2] Ross Girshick, "Fast R-CNN," 2015, http://arxiv.org/abs/1504.08083.
[3] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," 2016, http://arxiv.org/abs/1506.01497.
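The mAP recap above can be sketched end to end for a single class. The detections here (a confidence score plus a TP/FP flag against ground truth) are made-up illustrative data, and AP is approximated by simple rectangular integration under the PR points; real benchmarks such as PASCAL VOC and COCO each apply their own interpolation rules:

```python
def precision_recall_curve(detections, num_ground_truths):
    """detections: (score, is_true_positive) pairs for one class.
    Returns (precisions, recalls), sweeping the score threshold from high to low."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    precisions, recalls = [], []
    for _, is_tp in detections:
        tp += int(is_tp)
        fp += int(not is_tp)
        precisions.append(tp / (tp + fp))       # Precision = TP / (TP + FP)
        recalls.append(tp / num_ground_truths)  # Recall = TP / (TP + FN)
    return precisions, recalls

def average_precision(precisions, recalls):
    """Area under the PR curve by rectangular integration over recall steps."""
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# Made-up detections for one class, with 3 ground-truth objects in the test set.
dets = [(0.9, True), (0.8, True), (0.7, False), (0.6, True), (0.5, False)]
prec, rec = precision_recall_curve(dets, num_ground_truths=3)
ap = average_precision(prec, rec)
# mAP would be the mean of such APs over all classes.
print(round(ap, 3))  # 0.917
```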
7.2.1 R-CNN
R-CNN is the least sophisticated region-based architecture in its family, but it is the basis for understanding how the whole family of object-recognition algorithms works. It was one of the first large, successful applications of convolutional neural networks to the problem of object detection and localization, and it paved the way for the other advanced detection algorithms. The approach was demonstrated on benchmark datasets, achieving then-state-of-the-art results on the PASCAL VOC-2012 dataset and the ILSVRC 2013 object detection challenge. Figure 7.8 shows a summary of the R-CNN model architecture.
The R-CNN model consists of four components:
- Extract regions of interest — Also known as extracting region proposals. These regions have a high probability of containing an object. An algorithm called selective search scans the input image to find regions that contain blobs, and proposes them as RoIs to be processed by the next modules in the pipeline. The proposed RoIs are then warped to have a fixed size; they usually vary in size, but as we learned in previous chapters, CNNs require a fixed input image size.
- Feature extraction module — We run a pretrained convolutional network on top of the region proposals to extract features from each candidate region. This is the typical CNN feature extractor that we learned about in previous chapters.
- Classification module — We train a classifier like a support vector machine (SVM), a traditional machine learning algorithm, to classify candidate detections based on the features extracted in the previous step.
- Localization module — Also known as a bounding-box regressor. Let's take a step back to understand regression. ML problems are categorized as classification or regression problems.
Classification algorithms output discrete, predefined classes (dog, cat, elephant), whereas regression algorithms output continuous value predictions. In this module, we want to predict the location and size of the bounding box that surrounds the object. The bounding box is represented by four values: the x and y coordinates of the box's origin (x, y), and the width and height of the box (w, h). Putting this together, the regressor predicts the four real-valued numbers that define the bounding box as the tuple (x, y, w, h).

Figure 7.8 Summary of the R-CNN model architecture: the input image; extraction of regions of interest (RoIs) using the selective search algorithm; a warped region fed to a pretrained CNN to extract features; and a classifier and bounding-box regressor ("Person? yes. TV monitor? no. Airplane? no."). (Modified from Girshick et al., "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation.")

Selective search
Selective search is a greedy search algorithm that is used to provide region proposals that potentially contain objects. It tries to find areas that might contain an object by combining similar pixels and textures into rectangular boxes. Selective search combines the strength of both the exhaustive search algorithm (which examines all possible locations in the image) and the bottom-up segmentation algorithm (which hierarchically groups similar regions) to capture all possible object locations.
The selective search algorithm works by applying a segmentation algorithm to find blobs in an image, in order to figure out what could be an object (see the accompanying figure).
Bottom-up segmentation recursively combines these groups of regions together into larger ones to create about 2,000 areas to be investigated, as follows:
1. The similarities between all neighboring regions are calculated.
2. The two most similar regions are grouped together, and new similarities are calculated between the resulting region and its neighbors.
3. This process is repeated until the entire object is covered in a single region.
Note that a review of the selective search algorithm and how it calculates region similarity is outside the scope of this book. If you are interested in learning more about this technique, you can refer to the original paper.[a] For the purpose of understanding R-CNNs, you can treat the selective search algorithm as a black box that intelligently scans the image and proposes RoI locations for us to use.

[Sidebar figures: The selective search algorithm looks for blob-like areas in the image to extract regions; the segmentation algorithm defines blobs that could be objects, and selective search then selects these areas to be passed along for further investigation. A second example shows bottom-up segmentation combining similar regions in every iteration (input image, after the first iteration, after a few iterations, proposed regions) until the entire object is covered in a single region.]

[a] J.R.R. Uijlings, K.E.A. van de Sande, T. Gevers, and A.W.M. Smeulders, "Selective Search for Object Recognition," 2012, www.huppelen.nl/publications/selectiveSearchDraft.pdf.

Figure 7.9 illustrates the R-CNN architecture in an intuitive way. As you can see, the network first proposes RoIs, then extracts features, and then classifies those regions based on their features. In essence, we have turned object detection into an image classification problem.

Figure 7.9 Illustration of the R-CNN architecture: (1) the selective search algorithm is used to extract RoIs from the input image; (2) the extracted regions are warped before being fed to the ConvNet; (3) each region is forwarded through the pretrained ConvNet to extract features; (4) the network produces bounding-box and classification predictions. Each proposed RoI is passed through the CNN to extract features, followed by a bounding-box regressor and an SVM classifier to produce the network output prediction.

TRAINING R-CNNS
We learned in the previous section that R-CNNs are composed of four modules: selective search region proposal, feature extractor, classifier, and bounding-box regressor. All of the R-CNN modules need to be trained except the selective search algorithm. So, in order to train R-CNNs, we need to do the following:
1. Train the feature extractor CNN. This is a typical CNN training process. We either train a network from scratch, which rarely happens, or fine-tune a pretrained network, as we learned to do in chapter 6.
2. Train the SVM classifier. The SVM algorithm is not covered in this book, but it is a traditional ML classifier that is no different from DL classifiers in the sense that it needs to be trained on labeled data.
3. Train the bounding-box regressors. This model outputs four real-valued numbers for each of the K object classes to tighten the region bounding boxes.
Looking through the R-CNN learning steps, you can easily see that training an R-CNN model is expensive and slow. The training process involves training three separate modules without much shared computation. This multistage pipeline training is one of the disadvantages of R-CNNs, as we will see next.
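The bottom-up grouping loop from the selective-search sidebar can be sketched on a toy example. Real selective search operates on 2-D image segments with color, texture, size, and fill similarities; the 1-D strip of segments and the single brightness feature here are simplifying assumptions for illustration:

```python
# Sketch of the bottom-up grouping loop from the selective-search sidebar,
# on a toy 1-D "strip" of segments. The single brightness feature is an
# illustrative assumption; real selective search uses richer similarities.

def group_regions(strip):
    """strip: list of (start, end, brightness) segments, left to right.
    Repeatedly merges the two most similar neighbors until one region covers
    everything; returns every region seen along the way as a proposal."""
    regions = list(strip)
    proposals = [(s, e) for s, e, _ in regions]
    while len(regions) > 1:
        # 1. Compute similarity between all neighboring regions
        #    (here: smaller brightness difference = more similar).
        diffs = [abs(regions[i][2] - regions[i + 1][2]) for i in range(len(regions) - 1)]
        # 2. Merge the most similar pair; its feature becomes a length-weighted mean.
        i = diffs.index(min(diffs))
        (s1, e1, b1), (s2, e2, b2) = regions[i], regions[i + 1]
        w1, w2 = e1 - s1, e2 - s2
        merged = (s1, e2, (b1 * w1 + b2 * w2) / (w1 + w2))
        regions[i:i + 2] = [merged]
        proposals.append((merged[0], merged[1]))
        # 3. Repeat until the whole strip is a single region.
    return proposals

# Two bright segments on the left, two dark ones on the right.
strip = [(0, 4, 0.9), (4, 8, 0.85), (8, 12, 0.1), (12, 16, 0.15)]
region_proposals = group_regions(strip)
print(region_proposals[-1])  # the final region spans the whole strip: (0, 16)
```

Note how the similar bright segments merge first, then the dark ones, before the final merge covers the whole strip; every intermediate region is kept as a proposal, mirroring how selective search accumulates candidate object locations at multiple scales.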
DISADVANTAGES OF R-CNN
R-CNN is very simple to understand, and it achieved state-of-the-art results when it first came out, especially when using deep ConvNets to extract features. However, it is not actually a single end-to-end system that learns to localize via a deep neural network. Rather, it is a combination of standalone algorithms, added together to perform object detection. As a result, it has the following notable drawbacks:
- Object detection is very slow. For each image, the selective search algorithm proposes about 2,000 RoIs to be examined by the entire pipeline (CNN feature extractor and classifier). This is very computationally expensive, because it performs a ConvNet forward pass for each object proposal without sharing computation, which makes it incredibly slow. This high computation need means R-CNN is not a good fit for many applications, especially real-time applications that require fast inference, like self-driving cars.
- Training is a multi-stage pipeline. As discussed earlier, R-CNNs require training three modules: the CNN feature extractor, the SVM classifier, and the bounding-box regressors. Thus the training process is very complex and not end-to-end.
- Training is expensive in terms of space and time. When training the SVM classifier and bounding-box regressor, features are extracted from each object proposal in each image and written to disk. With very deep networks, such as VGG16, the training process for a few thousand images takes days using GPUs. The training process is expensive in space as well, because the extracted features require hundreds of gigabytes of storage.
What we need is an end-to-end DL system that fixes the disadvantages of R-CNN while improving its speed and accuracy.
7.2.2 Fast R-CNN
Fast R-CNN was an immediate descendant of R-CNN, developed in 2015 by Ross Girshick. Fast R-CNN resembled the R-CNN technique in many ways but improved on its detection speed, while also increasing detection accuracy, through two main changes:
- Instead of starting with the region proposal module and then using the feature extraction module, like R-CNN, Fast R-CNN applies the CNN feature extractor first to the entire input image and then proposes regions. This way, we run only one ConvNet over the entire image instead of 2,000 ConvNets over 2,000 overlapping regions.
- It extends the ConvNet's job to do the classification part as well, by replacing the traditional SVM machine learning algorithm with a softmax layer. This way, we have a single model to perform both tasks: feature extraction and object classification.

FAST R-CNN ARCHITECTURE
As shown in figure 7.10, Fast R-CNN generates region proposals based on the last feature map of the network, not from the original image like R-CNN. As a result, we can train just one ConvNet for the entire image. In addition, instead of training many different SVM algorithms to classify each object class, a single softmax layer outputs the class probabilities directly. Now we only have one neural net to train, as opposed to one neural net plus many SVMs.
The architecture of Fast R-CNN consists of the following modules:
1. Feature extractor module — The network starts with a ConvNet to extract features from the full image.
2. RoI extractor — The selective search algorithm proposes about 2,000 region candidates per image.
3. RoI pooling layer — This is a new component that was introduced to extract a fixed-size window from the feature map before feeding the RoIs to the fully connected layers. It uses max pooling to convert the features inside any valid RoI into a small feature map with a fixed spatial extent of height × width (H × W).
The RoI pooling layer will be explained in more detail in the Faster R-CNN section; for now, understand that it is applied on the last feature map layer extracted from the CNN, and its goal is to extract fixed-size RoIs to feed to the fully connected layers and then the output layers.
4. Two-head output layer — The model branches into two heads:
   - A softmax classifier layer that outputs a discrete probability distribution per RoI
   - A bounding-box regressor layer that predicts offsets relative to the original RoI

MULTI-TASK LOSS FUNCTION IN FAST R-CNNS
Since Fast R-CNN is an end-to-end learning architecture that learns the class of an object as well as the associated bounding-box position and size, the loss is a multi-task loss. With multi-task loss, the output has the softmax classifier and bounding-box regressor, as shown in figure 7.10. In any optimization problem, we need to define a loss function that our optimizer algorithm tries to minimize (chapter 2 gives more details about optimization and loss functions). In object detection problems, our goal is to optimize for two goals: object classification and object localization. Therefore, we have two loss functions in this problem: L_cls for the classification loss, and L_loc for the bounding-box prediction defining the object location. A Fast R-CNN network has two sibling output layers with two loss functions:
- Classification — The first sibling layer outputs a discrete probability distribution (per RoI) over K + 1 categories (we add one class for the background). The probability p is computed by a softmax over the K + 1 outputs of a fully connected layer. The classification loss function is the log loss for the true class u:

L_cls(p, u) = -log p_u

where u is the true label, u ∈ {0, 1, 2, ..., K} (with u = 0 the background class), and p is the discrete probability distribution per RoI over the K + 1 classes.
Figure 7.10 The Fast R-CNN architecture consists of a feature extractor ConvNet, an RoI extractor (selective search), an RoI pooling layer, fully connected layers, and a two-head output layer (a softmax classifier and a bounding-box regressor). The proposed RoIs have different sizes and become fixed-size RoIs after the RoI pooling layer. Note that, unlike R-CNN, Fast R-CNN applies the feature extractor to the entire input image before applying the region proposal module.

299 Region-based convolutional neural networks (R-CNNs)

- Regression —The second sibling layer outputs bounding box regression offsets v = (x, y, w, h) for each of the K object classes. The loss function is the bounding-box loss for the true class u:

Lloc(tu, v) = Σ i ∈ {x, y, w, h} L1smooth(tiu – vi)

where:
– v is the true bounding box, v = (x, y, w, h).
– tu is the predicted bounding box correction, tu = (txu, tyu, twu, thu).
– L1smooth is the bounding box loss that measures the difference between tiu and vi using the smooth L1 loss function. It is a robust function and is claimed to be less sensitive to outliers than other regression losses like L2.

The overall loss function is

L = Lcls + Lloc
L(p, u, tu, v) = Lcls(p, u) + [u ≥ 1] Lloc(tu, v)

Note that the indicator [u ≥ 1] is added before the regression loss so that the location term is 0 when the inspected region doesn't contain any object, only background. It is a way of ignoring the bounding box regression when the classifier labels the region as background. The indicator function [u ≥ 1] is defined as

[u ≥ 1] = 1 if u ≥ 1; 0 otherwise

DISADVANTAGES OF FAST R-CNN
Fast R-CNN is much faster in terms of testing time, because we don't have to feed 2,000 region proposals to the convolutional neural network for every image. Instead, a convolution operation is done only once per image, and a feature map is generated from it.
Training is also faster, because all the components are in one CNN network: the feature extractor, the object classifier, and the bounding-box regressor. However, one big bottleneck remains: the selective search algorithm for generating region proposals is very slow and runs separately, in another model. The last step toward a complete end-to-end object detection system using DL is to find a way to combine the region proposal algorithm into our end-to-end DL network. This is what Faster R-CNN does, as we will see next.

300 CHAPTER 7 Object detection with R-CNN, SSD, and YOLO

7.2.3 Faster R-CNN
Faster R-CNN is the third iteration of the R-CNN family, developed in 2016 by Shaoqing Ren et al. Similar to Fast R-CNN, the image is provided as input to a convolutional network that produces a convolutional feature map. But instead of using a selective search algorithm on the feature map to identify the region proposals, a region proposal network (RPN) is used to predict the region proposals as part of the training process. The predicted region proposals are then reshaped using an RoI pooling layer and used to classify the image within each proposed region and to predict the offset values for the bounding boxes. These improvements both reduce the number of region proposals and accelerate the test-time operation of the model to near real-time, with then-state-of-the-art performance.

FASTER R-CNN ARCHITECTURE
The architecture of Faster R-CNN can be described in terms of two main networks:
- Region proposal network (RPN) —Selective search is replaced by a ConvNet that proposes RoIs, from the last feature maps of the feature extractor, to be considered for investigation. The RPN has two outputs: the objectness score (object or no object) and the box location.
- Fast R-CNN —It consists of the typical components of Fast R-CNN:
– Base network for the feature extractor: a typical pretrained CNN model that extracts features from the input image
– RoI pooling layer to extract fixed-size RoIs
– Output layer that contains two fully connected layers: a softmax classifier to output the class probability, and a bounding-box regression layer for the bounding box predictions
As you can see in figure 7.11, the input image is presented to the network, and its features are extracted via a pretrained CNN. These features are sent, in parallel, to two different components of the Faster R-CNN architecture:
- The RPN, to determine where in the image a potential object could be. At this point, we do not know what the object is, just that there is potentially an object at a certain location in the image.
- RoI pooling, to extract fixed-size windows of features. The output is then passed into two fully connected layers: one for the object classifier and one for the bounding box coordinate predictions, to obtain our final localizations.
This architecture achieves an end-to-end trainable, complete object detection pipeline where all of the required components are inside the network: the base network feature extractor, region proposal, RoI pooling, object classification, and the bounding-box regressor.

301 Region-based convolutional neural networks (R-CNNs)

BASE NETWORK TO EXTRACT FEATURES
Similar to Fast R-CNN, the first step is to use a pretrained CNN and slice off its classification part. The base network is used to extract features from the input image. We covered how this works in detail in chapter 6. In this component, you can use any of the popular CNN architectures, depending on the problem you are trying to solve. The original Faster R-CNN paper used ZF4 and VGG5 networks pretrained on ImageNet; but since then, there have been lots of different networks with varying numbers of weights.
For example, MobileNet,6 a smaller, efficient network architecture optimized for speed, has approximately 3.3 million parameters, whereas ResNet-152 (152 layers), once the state of the art in the ImageNet classification competition, has around 60 million. Most recently, new architectures like DenseNet7 are both improving results and reducing the number of parameters.

4 Matthew D. Zeiler and Rob Fergus, "Visualizing and Understanding Convolutional Networks," 2013, http://arxiv.org/abs/1311.2901.
5 Karen Simonyan and Andrew Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," 2014, http://arxiv.org/abs/1409.1556.
6 Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam, "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," 2017, http://arxiv.org/abs/1704.04861.
7 Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger, "Densely Connected Convolutional Networks," 2016, http://arxiv.org/abs/1608.06993.

Figure 7.11 The Faster R-CNN architecture has two main components: an RPN that identifies regions that may contain objects of interest and their approximate location, and a Fast R-CNN network that classifies objects and refines their location, defined using bounding boxes. The two components share the convolutional layers of the pretrained VGG16.

As we learned in earlier chapters, each convolutional layer creates abstractions based on the previous information: the first layer usually learns edges, the second finds patterns in edges to activate for more complex shapes, and so forth.
Eventually, we end up with a convolutional feature map that can be fed to the RPN to extract regions that contain objects.

REGION PROPOSAL NETWORK (RPN)
The RPN identifies regions that could potentially contain objects of interest, based on the last feature map of the pretrained convolutional neural network. An RPN is also known as an attention network, because it guides the network's attention to interesting regions in the image. Faster R-CNN uses an RPN to bake the region proposal directly into the R-CNN architecture instead of running a selective search algorithm to extract RoIs. The architecture of the RPN is composed of two layers (figure 7.12):
- A 3 × 3 fully convolutional layer with 512 channels
- Two parallel 1 × 1 convolutional layers: a classification layer used to predict whether the region contains an object (the score of it being background or foreground), and a regression layer for bounding box prediction

VGGNet vs. ResNet
Nowadays, ResNet architectures have mostly replaced VGG as a base network for extracting features. The obvious advantage of ResNet over VGG is that it has many more layers (it is deeper), giving it more capacity to learn very complex features. This is true for the classification task and should be equally true in the case of object detection. In addition, ResNet makes it easy to train deep models through residual connections and batch normalization, which had not been invented when VGG was first released. Please revisit chapter 5 for a more detailed review of the different CNN architectures.
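The two-layer RPN head just described can be sketched in Keras as follows. This is a minimal illustration, assuming k = 9 anchors and a VGG-style 512-channel feature map; the layer names are mine, not from the paper's implementation.

```python
from tensorflow.keras import Input, Model, layers

k = 9  # anchor boxes per sliding-window location (as in Ren et al.)
feature_map = Input(shape=(None, None, 512))  # last feature map of the base network

# 3 x 3 "sliding window" convolution with 512 channels
x = layers.Conv2D(512, (3, 3), padding='same', activation='relu',
                  name='rpn_conv')(feature_map)

# Two parallel 1 x 1 convolutions:
objectness = layers.Conv2D(2 * k, (1, 1), name='rpn_cls')(x)  # 2k objectness scores
box_deltas = layers.Conv2D(4 * k, (1, 1), name='rpn_reg')(x)  # 4k box coordinates

rpn = Model(feature_map, [objectness, box_deltas])
# Per spatial location, the head emits 18 objectness scores and 36 coordinates.
```

Because the head is itself fully convolutional, it runs over feature maps of any spatial size and produces the 2k scores and 4k coordinates at every sliding-window position.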
Figure 7.12 Convolutional implementation of an RPN architecture, where k is the number of anchors: a 3 × 3 CONV layer (pad 1, 512 output channels) feeds two parallel 1 × 1 CONV layers with 2k and 4k output channels.

303 Region-based convolutional neural networks (R-CNNs)

The 3 × 3 convolutional layer is applied on the last feature map of the base network: a sliding window of size 3 × 3 is passed over the feature map. The output is then passed to two 1 × 1 convolutional layers: a classifier and a bounding-box regressor. Note that the classifier and the regressor of the RPN are not trying to predict the class of the object and its bounding box; that comes later, after the RPN. Remember, the goal of the RPN is to determine whether the region has an object to be investigated afterward by the fully connected layers. In the RPN, we use a binary classifier to predict the objectness score of the region: the probability of the region being foreground (containing an object) or background (not containing an object). It basically looks at the region and asks, "Does this region contain an object?" If the answer is yes, the region is passed along for further investigation by RoI pooling and the final output layers (see figure 7.13).

How does the regressor predict the bounding box? To answer this question, let's first define the bounding box. It is the box that surrounds the object, identified by the tuple (x, y, w, h), where x and y are the coordinates of the center of the bounding box in the image, and w and h are its width and height. Researchers have found that defining the (x, y) coordinates of the center point can be challenging, because we have to enforce rules to make sure the network predicts values inside the boundaries of the image.
Instead, we can create reference boxes called anchor boxes in the image and make the regression layer predict offsets from these boxes, called deltas (Δx, Δy, Δw, Δh), to adjust the anchor boxes so they better fit the object, giving the final proposals (figure 7.14).

Fully convolutional networks (FCNs)
One important aspect of object detection networks is that they should be fully convolutional. A fully convolutional neural network contains no fully connected layers, which are typically found at the end of a network prior to making output predictions. In the context of image classification, removing the fully connected layers is normally accomplished by applying average pooling across the entire volume prior to using a single dense softmax classifier to output the final predictions. An FCN has two main benefits:
- It is faster, because it contains only convolution operations and no fully connected layers.
- It can accept images of any spatial resolution (width and height), provided the image and network fit into the available memory.
Being an FCN makes the network invariant to the size of the input image. However, in practice, we might want to stick to a constant input size due to issues that only become apparent when implementing the algorithm. A significant such problem is that if we want to process images in batches (because images in batches can be processed in parallel by the GPU, leading to speed boosts), all of the images must have a fixed height and width.

Anchor boxes
Using a sliding-window approach, the RPN generates k regions for each location in the feature map. These regions are represented as anchor boxes. The anchors are centered in the middle of their corresponding sliding window and differ in scale and aspect ratio to cover a wide variety of objects.
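The delta-based adjustment in figure 7.14 can be sketched as follows, using the center/log-space box parameterization from the R-CNN papers (the helper name and the numbers are my own illustration):

```python
import numpy as np

def apply_deltas(anchor, deltas):
    """Adjust an anchor box (x, y, w, h), centered at (x, y),
    by predicted offsets (dx, dy, dw, dh).

    Center/log-space parameterization:
    x' = x + dx*w,  y' = y + dy*h,  w' = w*exp(dw),  h' = h*exp(dh)
    """
    x, y, w, h = anchor
    dx, dy, dw, dh = deltas
    return (x + dx * w, y + dy * h, w * np.exp(dw), h * np.exp(dh))

anchor = (50.0, 50.0, 32.0, 64.0)  # a 32 x 64 anchor centered at (50, 50)
box = apply_deltas(anchor, (0.1, -0.05, 0.2, 0.0))
# The regressor nudges the center and rescales width/height to fit the object.
```

Predicting deltas relative to a fixed anchor, rather than raw (x, y) coordinates, is what keeps the regression targets small and well behaved.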
They are fixed bounding boxes placed throughout the image, used for reference when first predicting object locations.

Figure 7.13 The RPN classifier predicts the objectness score: the probability of a region containing an object (high objectness score, foreground) or a background (low objectness score).

Figure 7.14 Illustration of predicting the delta shift: the offsets (Δx, Δy, Δw, Δh) adjust the anchor box to a predicted bounding box with a new center (x, y), new width, and new height.

305 Region-based convolutional neural networks (R-CNNs)

In their paper, Ren et al. generated nine anchor boxes that all have the same center but three different aspect ratios and three different scales. Figure 7.15 shows an example of how anchor boxes are applied: anchors are placed at the center of the sliding windows, and each window has k anchor boxes with the anchor at their center.

Training the RPN
The RPN is trained to classify an anchor box to output an objectness score and to approximate the four coordinates of the object (location parameters). It is trained using bounding boxes labeled by human annotators; a labeled box is called the ground truth. For each anchor box, the overlap value (p) is computed, which indicates how much the anchor overlaps with the ground-truth bounding boxes:

p = 1 if IoU > 0.7; –1 if IoU < 0.3; 0 otherwise

If an anchor has high overlap with a ground-truth bounding box, it is likely that the anchor box includes an object of interest, and it is labeled as positive with respect to the object-versus-no-object classification task. Similarly, if an anchor has small overlap with a ground-truth bounding box, it is labeled as negative.
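The labeling rule just described (positive above 0.7 IoU, negative below 0.3, ignored otherwise) can be sketched in a few lines. The corner-coordinate box format and function names here are my own illustration, not the book's code:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def anchor_label(anchor, ground_truth):
    """Label an anchor: 1 = positive, -1 = negative, 0 = ignored in training."""
    overlap = iou(anchor, ground_truth)
    if overlap > 0.7:
        return 1
    if overlap < 0.3:
        return -1
    return 0

gt = (10, 10, 50, 50)
print(anchor_label((12, 12, 52, 52), gt))  # high overlap -> 1
print(anchor_label((60, 60, 90, 90), gt))  # no overlap   -> -1
```

Anchors that land in the middle band (IoU between 0.3 and 0.7) contribute nothing to the RPN's training objective.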
Figure 7.15 Anchors are placed at the center of each sliding window; each anchor has k anchor boxes of varying sizes, and the IoU is calculated to select the bounding box that overlaps the most with the ground-truth bounding box.

306 CHAPTER 7 Object detection with R-CNN, SSD, and YOLO

During the training process, the positive and negative anchors are passed as input to two fully connected layers, corresponding to the classification of anchors as containing an object or no object and to the regression of location parameters (four coordinates), respectively. For the k anchors at a location, the RPN outputs 2k scores and 4k coordinates. Thus, for example, if the number of anchors per sliding window (k) is 9, the RPN outputs 18 objectness scores and 36 location coordinates (figure 7.16).

FULLY CONNECTED LAYER
The output fully connected layer takes two inputs: the feature maps coming from the base ConvNet and the RoIs coming from the RPN. It then classifies the selected regions and outputs their predicted class and bounding box parameters. The object classification layer in Faster R-CNN uses softmax activation, while the location

RPN as a standalone application
An RPN can be used as a standalone application. For example, in problems with a single class of objects, the objectness probability can be used as the final class probability. This is because, in such a case, foreground means single class and background means not a single class. The reason you would want to use an RPN for cases like single-class detection is the gain in speed in both training and prediction.
Since the RPN is a very simple network that only uses convolutional layers, the prediction time can be faster than using the classification base network.

Figure 7.16 Region proposal network: a sliding window over the CONV feature map feeds a 256-d intermediate layer, which branches into a cls layer with 2k scores and a reg layer with 4k coordinates for the k anchor boxes.

307 Region-based convolutional neural networks (R-CNNs)

regression layer uses linear regression over the coordinates defining the location as a bounding box. All of the network parameters are trained together using a multi-task loss.

MULTI-TASK LOSS FUNCTION
Similar to Fast R-CNN, Faster R-CNN is optimized for a multi-task loss function that combines the losses of classification and bounding box regression:

L = Lcls + Lloc
L({pi}, {ti}) = (1/Ncls) Σi Lcls(pi, p*i) + (λ/Nloc) Σi p*i · L1smooth(ti – t*i)

The loss equation might look a little overwhelming at first, but it is simpler than it appears. Understanding it is not necessary to be able to run and train Faster R-CNN, so feel free to skip this section. But I encourage you to power through this explanation, because it will add a lot of depth to your understanding of how the optimization process works under the hood. Let's go through the symbols first; see table 7.2. Now that you know the definitions of the symbols, let's try to read the multi-task loss function again. To help understand this equation, just for a moment, ignore the normalization terms and the (i) terms. Here's the simplified loss function for each instance (i):

Loss = Lcls(p, p*) + p* · L1smooth(t – t*)

Table 7.2 Multi-task loss function symbols

pi and p*i —pi is the predicted probability of anchor (i) being an object, and p*i is the binary ground truth (0 or 1) of the anchor being an object.
ti and t*i —ti is the four predicted parameters that define the bounding box, and t*i is the ground-truth parameters.
Ncls —Normalization term for the classification loss.
Ren et al. set it to a mini-batch size of ~256.
Nloc —Normalization term for the bounding box regression. Ren et al. set it to the number of anchor locations, ~2,400.
Lcls(pi, p*i) —The log loss function over two classes. We can easily translate a multi-class classification into a binary classification by predicting whether a sample is a target object: Lcls(pi, p*i) = –p*i log pi – (1 – p*i) log (1 – pi)
L1smooth —As described in section 7.2.2, the bounding box loss measures the difference between the predicted and true location parameters (ti, t*i) using the smooth L1 loss function. It is a robust function and is claimed to be less sensitive to outliers than other regression losses like L2.
λ —A balancing parameter, set to ~10 in Ren et al. (so that the Lcls and Lloc terms are roughly equally weighted).

308 CHAPTER 7 Object detection with R-CNN, SSD, and YOLO

This simplified function is the summation of two loss functions: the classification loss and the location loss (bounding box). Let's look at them one at a time:
- The idea of any loss function is that it subtracts the predicted value from the true value to find the amount of error. The classification loss is the cross-entropy function explained in chapter 2, nothing new. It is a log loss function that calculates the error between the prediction probability (p) and the ground truth (p*): Lcls(pi, p*i) = –p*i log pi – (1 – p*i) log (1 – pi)
- The location loss is the difference between the predicted and true location parameters (ti, t*i), measured using the smooth L1 loss function. The difference is then multiplied by p*, the ground-truth probability of the region containing an object. If the region is not an object, p* is 0, which eliminates the entire location loss for non-object regions.
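Putting the symbols together, the full multi-task loss can be sketched as follows. This is my illustration, not the paper's code, using the Ncls ≈ 256, Nloc ≈ 2,400, and λ = 10 values quoted above:

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1: 0.5x^2 for |x| < 1, |x| - 0.5 otherwise."""
    x = np.abs(x)
    return np.where(x < 1, 0.5 * x**2, x - 0.5)

def rpn_loss(p, p_star, t, t_star, n_cls=256, n_loc=2400, lam=10.0):
    """Faster R-CNN multi-task loss over a batch of anchors.

    p: predicted objectness probabilities, shape (N,)
    p_star: binary ground-truth labels, shape (N,)
    t, t_star: predicted / true box parameters, shape (N, 4)
    """
    eps = 1e-12  # avoid log(0)
    l_cls = -(p_star * np.log(p + eps) + (1 - p_star) * np.log(1 - p + eps))
    l_loc = smooth_l1(t - t_star).sum(axis=1)
    # The location term is zeroed for anchors whose ground truth is background.
    return l_cls.sum() / n_cls + lam * (p_star * l_loc).sum() / n_loc

p = np.array([0.9, 0.2])
p_star = np.array([1.0, 0.0])
t = np.array([[0.1, 0.0, 0.0, 0.0], [5.0, 5.0, 5.0, 5.0]])
t_star = np.zeros((2, 4))
loss = rpn_loss(p, p_star, t, t_star)
# The second anchor's (wildly wrong) box contributes nothing: its p* is 0.
```

Note how multiplying by p* implements the "ignore box regression for background" rule from the text.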
Finally, we add the values of both losses to create the multi-task loss function: L = Lcls + Lloc. There you have it: the multi-task loss for each instance (i). Put back the (i) and Σ symbols to calculate the summation of the losses over all instances.

7.2.4 Recap of the R-CNN family
Table 7.3 recaps the evolution of the R-CNN architecture:
- R-CNN —Bounding boxes are proposed by the selective search algorithm. Each is warped, and features are extracted via a deep convolutional neural network such as AlexNet, before a final set of object classifications and bounding box predictions is made with linear SVMs and linear regressors.
- Fast R-CNN —A simplified design with a single model. An RoI pooling layer is used after the CNN to consolidate regions. The model predicts both class labels and RoIs directly.
- Faster R-CNN —A fully end-to-end DL object detector. It replaces the selective search algorithm with a region proposal network that interprets features extracted from the deep CNN and learns to propose RoIs directly.

309 Region-based convolutional neural networks (R-CNNs)

Table 7.3 The evolution of the R-CNN family of networks from R-CNN to Fast R-CNN to Faster R-CNN

mAP on the PASCAL Visual Object Classes Challenge 2007: R-CNN, 66.0%; Fast R-CNN, 66.9%; Faster R-CNN, 66.9%

Features:
- R-CNN: (1) Applies selective search to extract RoIs (~2,000) from each image. (2) A ConvNet is used to extract features from each of the ~2,000 regions extracted. (3) Uses classification and bounding box predictions.
- Fast R-CNN: Each image is passed only once to the CNN, and feature maps are extracted. (1) A ConvNet is used to extract feature maps from the input image. (2) Selective search is used on these maps to generate predictions.
This way, we run only one ConvNet over the entire image instead of ~2,000 ConvNets over 2,000 overlapping regions.
- Faster R-CNN: Replaces the selective search method with a region proposal network, which makes the algorithm much faster. An end-to-end DL network.

Limitations:
- R-CNN: High computation time, as each region is passed to the CNN separately. Also uses three different models for making predictions.
- Fast R-CNN: Selective search is slow; hence, computation time is still high.
- Faster R-CNN: Object proposal takes time. And as there are different systems working one after the other, the performance of each system depends on how the previous system performed.

Test time per image: R-CNN, 50 seconds; Fast R-CNN, 2 seconds; Faster R-CNN, 0.2 seconds
Speed-up from R-CNN: 1x; 25x; 250x

In the R-CNN pipeline pictured in table 7.3: (1) the selective search algorithm is used to extract RoIs from the input image; (2) the extracted regions are warped before being fed to the ConvNet; (3) each region is forwarded through the pretrained ConvNet to extract features; and (4) the network produces bounding-box and classification predictions from the SVMs and bounding-box regressor.

R-CNN LIMITATIONS
As you might have noticed, each paper proposes improvements to the seminal work done in R-CNN to develop a faster network, with the goal of achieving real-time object detection. The achievements displayed through this body of work are truly amazing, yet none of these architectures manages to create a real-time object detector.
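The speed-up column in table 7.3 follows directly from the test times per image; a quick check:

```python
# Test time per image from table 7.3, in tenths of a second
test_time_tenths = {'R-CNN': 500, 'Fast R-CNN': 20, 'Faster R-CNN': 2}

# Speed-up relative to R-CNN
speedup = {name: test_time_tenths['R-CNN'] // t
           for name, t in test_time_tenths.items()}
print(speedup)  # {'R-CNN': 1, 'Fast R-CNN': 25, 'Faster R-CNN': 250}
```

Even at 0.2 seconds per image (about 5 frames per second), Faster R-CNN still falls short of real-time video rates.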
Without going into too much detail, the following problems have been identified with these networks:
- Training the data is unwieldy and takes too long.
- Training happens in multiple phases (for example, training the region proposal network versus the classifier).
- The network is too slow at inference time.
Fortunately, in the last few years, new architectures have been created to address the bottlenecks of R-CNN and its successors, enabling real-time object detection. The most famous are the single-shot detector (SSD) and you only look once (YOLO), which we will explain in sections 7.3 and 7.4.

MULTI-STAGE VS. SINGLE-STAGE DETECTORS
Models in the R-CNN family are all region-based. Detection happens in two stages, and thus these models are called two-stage detectors:
1 The model proposes a set of RoIs using selective search or an RPN. The proposed regions are sparse because the potential bounding-box candidates can be infinite.
2 A classifier processes only the region candidates.
One-stage detectors take a different approach. They skip the region proposal stage and run detection directly over a dense sampling of possible locations. This approach is faster and simpler but can potentially drag down performance a bit. In the next two sections, we will examine the SSD and YOLO one-stage object detectors. In general, single-stage detectors tend to be less accurate than two-stage detectors but are significantly faster.

7.3 Single-shot detector (SSD)
The SSD paper was released in 2016 by Wei Liu et al.8 The SSD network reached new records in terms of performance and precision for object detection tasks, scoring over 74% mAP at 59 FPS on standard datasets such as PASCAL VOC and Microsoft COCO.

8 Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg, "SSD: Single Shot MultiBox Detector," 2016, http://arxiv.org/abs/1512.02325.
311 Single-shot detector (SSD)

We learned earlier that the R-CNN family are multi-stage detectors: the network first predicts the objectness score of the bounding box and then passes this box through a classifier to predict the class probability. In single-stage detectors like SSD and YOLO (discussed in section 7.4), the convolutional layers make both predictions directly in one shot: hence the name single-shot detector. The image is passed once through the network, and the objectness score for each bounding box is predicted using logistic regression to indicate the level of overlap with the ground truth. If the bounding box overlaps 100% with the ground truth, the objectness score is 1; if there is no overlap, the objectness score is 0. We then set a threshold value (0.5) that says, "If the objectness score is above 50%, this bounding box likely has an object of interest, and we keep its predictions; if it is less than 50%, we ignore them."

7.3.1 High-level SSD architecture
The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object-class instances in those boxes, followed by a non-maximum suppression (NMS) step to produce the final detections. The architecture of the SSD model is composed of three main parts:
- Base network to extract feature maps —A standard pretrained network used for high-quality image classification, truncated before any classification layers. In their paper, Liu et al. used a VGG16 network. Other networks, like VGG19 and ResNet, can be used and should produce good results.
- Multi-scale feature layers —A series of convolution filters added after the base network. These layers decrease in size progressively to allow predictions of detections at multiple scales.
- Non-maximum suppression —NMS is used to eliminate overlapping boxes and keep only one box for each object detected.
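The NMS step can be sketched as a greedy procedure (my illustration, not the book's code; the 0.5 overlap threshold is an assumption): keep the highest-scoring box, discard boxes that overlap it too much, and repeat.

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.

    boxes: list of (x1, y1, x2, y2); scores: one confidence per box.
    Returns the indices of the boxes to keep.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / float(area(a) + area(b) - inter)

    # Visit boxes from highest to lowest confidence.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Drop remaining boxes that overlap the kept box too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

The two heavily overlapping boxes collapse into the single higher-scoring one, while the distant box survives; this is how thousands of raw detections are reduced to one per object.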
As you can see in figure 7.17, layers 4_3, 7, 8_2, 9_2, 10_2, and 11_2 make predictions that are fed directly to the NMS layer. We will talk about why these layers progressively decrease in size in section 7.3.3. For now, let's follow along to understand the end-to-end flow of data in SSD.

Measuring detector speed (FPS: frames per second)
As discussed at the beginning of this chapter, the most common metric for measuring detection speed is the number of frames per second. For example, Faster R-CNN operates at only 7 frames per second (FPS). There have been many attempts to build faster detectors by attacking each stage of the detection pipeline, but so far, significantly increased speed has come only at the cost of significantly decreased detection accuracy. In this section, you will see why single-stage networks like SSD can achieve faster detections that are more suitable for real-time detection. For benchmarking, SSD300 achieves 74.3% mAP at 59 FPS, while SSD512 achieves 76.8% mAP at 22 FPS, which outperforms Faster R-CNN (73.2% mAP at 7 FPS). SSD300 refers to an input image of size 300 × 300, and SSD512 refers to an input image of size 512 × 512.

312 CHAPTER 7 Object detection with R-CNN, SSD, and YOLO

You can see in figure 7.17 that the network makes a total of 8,732 detections per class, which are then fed to an NMS layer to reduce them to one detection per object. Where did the number 8,732 come from? To make detection more accurate, different feature-map layers also go through a small 3 × 3 convolution for object detection. For example, Conv4_3 is of size 38 × 38 × 512, and a 3 × 3 convolution is applied to it. There are four bounding boxes at each location, and each bounding box has (number of classes + 4 box values) outputs. Suppose there are 20 object classes plus 1 background class; then the number of output bounding boxes is 38 × 38 × 4 = 5,776.
Similarly, we calculate the number of bounding boxes for the other convolutional layers:
- Conv7: 19 × 19 × 6 = 2,166 boxes (6 boxes for each location)
- Conv8_2: 10 × 10 × 6 = 600 boxes (6 boxes for each location)
- Conv9_2: 5 × 5 × 6 = 150 boxes (6 boxes for each location)
- Conv10_2: 3 × 3 × 4 = 36 boxes (4 boxes for each location)
- Conv11_2: 1 × 1 × 4 = 4 boxes (4 boxes for each location)
If we sum them up, we get 5,776 + 2,166 + 600 + 150 + 36 + 4 = 8,732 boxes. This is a huge number of boxes for our detector to produce, which is why we apply NMS to reduce the number of output boxes. As you will see in section 7.4, YOLO ends with 7 × 7 locations and two bounding boxes for each location: 7 × 7 × 2 = 98 boxes.

Figure 7.17 The SSD architecture is composed of a base network (VGG16, through the Conv5_3 layer, on a 300 × 300 input image), extra convolutional feature layers (Conv6 through Conv11_2, shrinking from 19 × 19 down to 1 × 1) for object detection, and a non-maximum suppression (NMS) layer for the final detections. Note that convolution layers 7, 8_2, 9_2, 10_2, and 11_2 make predictions that are directly fed to the NMS layer, for a total of 8,732 detections per class. (Source: Liu et al., 2016.)

313 Single-shot detector (SSD)

Now, let's dive a little deeper into each component of the SSD architecture.

7.3.2 Base network
As you can see in figure 7.17, the SSD architecture builds on the VGG16 architecture after slicing off the fully connected classification layers (VGG16 is explained in detail in chapter 5).
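The detection-count arithmetic above (totaling 8,732 boxes) can be verified with a short script; the layer names and boxes-per-location values are taken from the text:

```python
# (feature-map size, boxes per location) for each SSD prediction layer
prediction_layers = {
    'Conv4_3':  (38, 4),
    'Conv7':    (19, 6),
    'Conv8_2':  (10, 6),
    'Conv9_2':  (5, 6),
    'Conv10_2': (3, 4),
    'Conv11_2': (1, 4),
}
counts = {name: size * size * k for name, (size, k) in prediction_layers.items()}
total = sum(counts.values())
print(counts['Conv4_3'], total)  # 5776 8732
```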
VGG16 was used as the base network because of its strong performance in high-quality image classification tasks and its popularity for problems where transfer learning helps to improve results. Instead of the original VGG fully connected layers, a set of supporting convolutional layers (from Conv6 onward) was added, enabling feature extraction at multiple scales and progressively decreasing the size of the input to each subsequent layer. Following is a simplified Keras implementation of the VGG16 network used in SSD. You will not need to implement this from scratch; my goal in including this snippet is to show you that this is a typical VGG16 network like the one implemented in chapter 5 (here, x is the input tensor that the first layer is applied to):

conv1_1 = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
conv1_2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv1_1)
pool1 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv1_2)

What does the output prediction look like?
For each feature, the network predicts the following:
- 4 values that describe the bounding box (x, y, w, h)
- 1 value for the objectness score
- C values that represent the probability of each class
That's a total of 5 + C prediction values. Suppose there are four object classes in our problem. Then each prediction is a vector that looks like this: [x, y, w, h, objectness score, C1, C2, C3, C4]. An example visualization of the output prediction when we have four classes in our problem:
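As a concrete illustration of the 5 + C layout, here is one such prediction vector for a four-class problem (a sketch with made-up values, not the book's code):

```python
# One prediction for a problem with C = 4 classes:
# [x, y, w, h, objectness, C1, C2, C3, C4]
num_classes = 4
prediction = [0.5, 0.4, 0.2, 0.3,     # bounding box (x, y, w, h)
              0.9,                    # objectness score
              0.05, 0.8, 0.1, 0.05]  # per-class probabilities C1..C4

print(len(prediction))  # 9, i.e. 5 + C values in total
```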
The convolutional layer predicts the bounding box coordinates, objectness score (Pobj), and four class probabilities: C1, C2, C3, and C4.

conv2_1 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool1)
conv2_2 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv2_1)
pool2 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv2_2)
conv3_1 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool2)
conv3_2 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv3_1)
conv3_3 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv3_2)
pool3 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv3_3)
conv4_1 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool3)
conv4_2 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv4_1)
conv4_3 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv4_2)
pool4 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv4_3)
conv5_1 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4)
conv5_2 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5_1)
conv5_3 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5_2)
pool5 = MaxPooling2D(pool_size=(3, 3), strides=(1, 1), padding='same')(conv5_3)

You saw VGG16 implemented in Keras in chapter 5. The two main takeaways from adding it here are as follows:
- Layer conv4_3 will be used again to make direct predictions.
- Layer pool5 will be fed to the next layer (conv6), which is the first of the multi-scale feature layers.

HOW THE BASE NETWORK MAKES PREDICTIONS
Consider the following example. Suppose you have the image in figure 7.18, and the network's job is to draw bounding boxes around all the boats in the image.
The process goes as follows:
1 Similar to the anchors concept in R-CNN, SSD overlays a grid of anchors on the image. For each anchor, the network creates a set of bounding boxes at its center. In SSD, anchors are called priors.
2 The base network looks at each bounding box as a separate image. For each bounding box, the network asks, "Is there a boat in this box?" In other words, "Did I extract any features of a boat in this box?"
3 When the network finds a bounding box that contains boat features, it sends the box's coordinate prediction and object classification to the NMS layer.
4 NMS eliminates all the boxes except the one that overlaps most with the ground-truth bounding box.

Figure 7.18 The SSD base network looks at the anchor boxes to find features of a boat. Solid boxes indicate that the network has found boat features; dotted boxes indicate no boat features.

NOTE Liu et al. used VGG16 because of its strong performance in complex image classification tasks. You can use other networks, like the deeper VGG19 or ResNet, for the base network, and they should perform as well if not better in accuracy; but they could be slower if you choose to implement a deeper network. MobileNet is a good choice if you want a balance between a complex, high-performing deep network and speed.

Now, on to the next component of the SSD architecture: multi-scale feature layers.

7.3.3 Multi-scale feature layers
These are convolutional feature layers added to the end of the truncated base network. They decrease in size progressively to allow predictions of detections at multiple scales.

MULTI-SCALE DETECTIONS
To understand the goal of the multi-scale feature layers and why they vary in size, let's look at the image of horses in figure 7.19. As you can see, the base network may be able to detect the horse features in the background, but it may fail to detect the horse that is closest to the camera.

Figure 7.19 Horses at different scales in an image. The horses that are far from the camera are easier to detect because they are small and can fit inside the priors (anchor boxes). The base network might fail to detect the horse closest to the camera because it needs a different scale of anchors to create priors that cover more identifiable features.

To understand why, take a close look at the dotted bounding box and try to imagine this box alone, outside the context of the full image (see figure 7.20). Can you see horse features in the bounding box in figure 7.20? No. To deal with objects of different scales in an image, some methods suggest preprocessing the image at different sizes and combining the results afterward (figure 7.21). However, by using convolution layers that vary in size, we can use feature maps from several different layers in a single network for prediction; this mimics the same effect while also sharing parameters across all object scales. As a CNN gradually reduces the spatial dimensions, the resolution of the feature maps also decreases. SSD uses the lower-resolution layers to detect larger-scale objects: for example, the 4 × 4 feature maps are used for larger objects. To visualize this, imagine that the network reduces the image dimensions until all of the horses fit inside its bounding boxes (figure 7.22).
The multi-scale feature layers shrink the image dimensions while keeping the bounding-box sizes fixed, so that the boxes can fit the larger horse.

Figure 7.20 An isolated horse feature

Figure 7.21 Lower-resolution feature maps (4 × 4) detect larger-scale objects (right); higher-resolution feature maps (8 × 8) detect smaller-scale objects (left).

In reality, convolutional layers do not literally reduce the size of the image; this is just an illustration to help us understand the concept intuitively. The image is not simply resized: it goes through the convolutional process, so it won't look anything like itself anymore. It will look like a completely random image, but it will preserve its features. The convolutional process is explained in detail in chapter 3.

Using multi-scale feature maps improves network accuracy significantly. Liu et al. ran an experiment to measure the advantage gained by adding the multi-scale feature layers. Figure 7.23 shows the accuracy with different numbers of feature-map layers used for object detection. Notice that accuracy drops from 74.3% mAP when predictions are sourced from all six layers to 62.4% with one source layer.

Figure 7.22 Multi-scale feature layers reduce the spatial dimensions of the input image to detect objects at different scales. In this image, you can see that the new priors are effectively zoomed out to cover more identifiable features of the horse close to the camera.

Prediction source layers                                  mAP (use boundary boxes: yes / no)   # boxes
conv4_3, conv7, conv8_2, conv9_2, conv10_2, conv11_2      74.3 / 63.4                          8,732
conv4_3, conv7, conv8_2, conv9_2, conv10_2                74.6 / 63.1                          8,764
conv4_3, conv7, conv8_2, conv9_2                          73.8 / 68.4                          8,942
conv4_3, conv7, conv8_2                                   70.7 / 69.2                          9,864
conv4_3, conv7                                            64.2 / 64.4                          9,025
conv7                                                     62.4 / 64.0                          8,664

Figure 7.23 Effects of using multiple output layers, from the original paper. The detector's accuracy (mAP) increases as the authors add multi-scale features. (Source: Liu et al., 2016.)

When using only the conv7 layer for prediction, performance is the worst, reinforcing the message that it is critical to spread boxes of different scales over different layers.

ARCHITECTURE OF THE MULTI-SCALE LAYERS
Liu et al. decided to add six convolutional layers that decrease in size. They arrived at this design with a lot of tuning and trial and error until it produced the best results. As you saw in figure 7.17, convolutional layers 6 and 7 are pretty straightforward: conv6 has a kernel size of 3 × 3, and conv7 has a kernel size of 1 × 1. Layers 8 through 11, on the other hand, are treated more like blocks, where each block consists of two convolutional layers with kernel sizes 1 × 1 and 3 × 3. Here is the Keras implementation of layers 6 through 11 (you can see the full implementation in the book's downloadable code):

# conv6 and conv7
conv6 = Conv2D(1024, (3, 3), dilation_rate=(6, 6), activation='relu', padding='same')(pool5)
conv7 = Conv2D(1024, (1, 1), activation='relu', padding='same')(conv6)
# conv8 block
conv8_1 = Conv2D(256, (1, 1), activation='relu', padding='same')(conv7)
conv8_2 = Conv2D(512, (3, 3), strides=(2, 2), activation='relu', padding='valid')(conv8_1)
# conv9 block
conv9_1 = Conv2D(128, (1, 1), activation='relu', padding='same')(conv8_2)
conv9_2 = Conv2D(256, (3, 3), strides=(2, 2), activation='relu', padding='valid')(conv9_1)
# conv10 block
conv10_1 = Conv2D(128, (1, 1), activation='relu', padding='same')(conv9_2)
conv10_2 = Conv2D(256, (3, 3), strides=(1, 1), activation='relu', padding='valid')(conv10_1)
# conv11 block
conv11_1 = Conv2D(128, (1, 1), activation='relu', padding='same')(conv10_2)
conv11_2 = Conv2D(256, (3, 3), strides=(1, 1), activation='relu', padding='valid')(conv11_1)

As mentioned before, if you
are not working in research or academia, you most probably won't need to implement object detection architectures yourself. In most cases, you will download an open source implementation and build on it for your own problem. I added these code snippets only to help you internalize the information discussed about the different layer architectures.

Atrous (or dilated) convolutions
Dilated convolutions introduce another parameter to convolutional layers: the dilation rate. It defines the spacing between the values in a kernel. A 3 × 3 kernel with a dilation rate of 2 has the same field of view as a 5 × 5 kernel while using only nine parameters: imagine taking a 5 × 5 kernel and deleting every second column and row. This delivers a wider field of view at the same computational cost. Dilated convolutions are particularly popular in the field of real-time segmentation. Use them if you need a wide field of view and cannot afford multiple convolutions or larger kernels. The following code builds a dilated 3 × 3 convolution layer with a dilation rate of 2 using Keras:

Conv2D(1024, (3, 3), dilation_rate=(2, 2), activation='relu', padding='same')

Next, we discuss the third and last component of the SSD architecture: NMS.

7.3.4 Non-maximum suppression
Given the large number of boxes generated by the detection layers per class during a forward pass of SSD at inference time, it is essential to prune most of the bounding boxes by applying the NMS technique (explained earlier in this chapter). Boxes with a confidence score and IoU below certain thresholds are discarded, and only the top N predictions are kept (figure 7.24). This ensures that only the most likely predictions are retained by the network, while the noisier ones are removed.

How does SSD use NMS to prune the bounding boxes? SSD sorts the predicted boxes by their confidence scores. Starting from the top-confidence prediction, SSD calculates the IoU between each box and all previously kept boxes of the same class. (The IoU threshold value is tunable; Liu et al. chose 0.45 in their paper.) Boxes with IoU above the threshold are discarded because they overlap too much with another box that has a higher confidence score, so they are most likely detecting the same object. At most, we keep the top 200 predictions per image.

Figure 7.24 Non-maximum suppression reduces the number of bounding boxes to only one box for each object.

7.4 You only look once (YOLO)
Similar to the R-CNN family, YOLO is a family of object detection networks developed by Joseph Redmon et al. and improved over the years through the following versions:
- YOLOv1, published in 2016⁹ — Called "unified, real-time object detection" because it is a single detection network that unifies the two components of a detector: the object detector and the class predictor.
- YOLOv2 (also known as YOLO9000), published later in 2016¹⁰ — Capable of detecting over 9,000 objects; hence the name. It was trained on the ImageNet and COCO datasets and achieved 16% mAP, which is not good; but it was very fast at test time.
- YOLOv3, published in 2018¹¹ — Significantly larger than the previous models; it achieved a mAP of 57.9%, the best result yet from the YOLO family of object detectors.
The YOLO family is a series of end-to-end DL models designed for fast object detection, and it was among the first attempts to build a fast real-time object detector. It is one of the faster object detection algorithms out there.
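Before moving on, the NMS pruning procedure from section 7.3.4 can be sketched in plain Python/NumPy. This is a simplified, single-class sketch (not the book's or the paper's implementation); boxes are assumed to be in [xmin, ymin, xmax, ymax] format, and the threshold values follow the text above:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [xmin, ymin, xmax, ymax] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.45, top_k=200):
    """Keep the highest-scoring boxes; drop boxes overlapping a kept box too much."""
    order = np.argsort(scores)[::-1]  # highest confidence first
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(int(i))
        if len(keep) == top_k:
            break
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the second box overlaps the first too much
```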
Although the accuracy of these models is close to, but not as good as, that of R-CNNs, they are popular for object detection because of their detection speed, often demonstrated in real time on video or camera feed input.

9 Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," 2016, http://arxiv.org/abs/1506.02640.
10 Joseph Redmon and Ali Farhadi, "YOLO9000: Better, Faster, Stronger," 2016, http://arxiv.org/abs/1612.08242.
11 Joseph Redmon and Ali Farhadi, "YOLOv3: An Incremental Improvement," 2018, http://arxiv.org/abs/1804.02767.

The creators of YOLO took a different approach than the previous networks. YOLO does not undergo the region proposal step used in R-CNNs. Instead, it predicts over a limited number of bounding boxes by splitting the input into a grid of cells; each cell directly predicts a bounding box and an object classification. The result is a large number of candidate bounding boxes that are consolidated into a final prediction using NMS (figure 7.25). YOLOv1 proposed the general architecture, YOLOv2 refined the design and made use of predefined anchor boxes to improve the bounding-box proposals, and YOLOv3 further refined the model architecture and training process. In this section, we are going to focus on YOLOv3 because it is currently the state-of-the-art architecture in the YOLO family.

7.4.1 How YOLOv3 works
The YOLO network splits the input image into a grid of S × S cells. If the center of the ground-truth box falls into a cell, that cell is responsible for detecting the existence of that object.
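The "responsible cell" rule can be sketched in a few lines (an illustration with made-up values, not the book's code): for an S × S grid, the cell whose row and column contain the box center is the one that must detect the object.

```python
def responsible_cell(cx, cy, img_w, img_h, S=13):
    """Return (row, col) of the grid cell containing the box center (cx, cy)."""
    col = int(cx / img_w * S)  # which of the S columns the center falls into
    row = int(cy / img_h * S)  # which of the S rows the center falls into
    return row, col

# A ground-truth box centered at (208, 100) in a 416 x 416 image:
print(responsible_cell(208, 100, 416, 416))  # (3, 6) on the 13 x 13 grid
```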
Each grid cell predicts B bounding boxes, each with an objectness score and class predictions, as follows:
- Coordinates of B bounding boxes — Similar to previous detectors, YOLO predicts four coordinates for each bounding box (bx, by, bw, bh), where x and y are offsets relative to the cell location.
- Objectness score (P0) — Indicates the probability that the cell contains an object. The objectness score is passed through a sigmoid function to be treated as a probability with a value range between 0 and 1. It is calculated as follows: P0 = Pr(containing an object) × IoU(pred, truth)
- Class prediction — If the bounding box contains an object, the network predicts the probability of K classes, where K is the total number of classes in your problem.

Figure 7.25 YOLO splits the image into grids, predicts objects for each grid cell, and then uses NMS to finalize predictions.

It is important to note that before v3, YOLO used a softmax function for the class scores. In v3, Redmon et al. decided to use a sigmoid instead. The reason is that softmax imposes the assumption that each box has exactly one class, which is often not the case: if an object belongs to one class, softmax guarantees it cannot belong to another. While this assumption holds for some datasets, it may not when we have classes like Woman and Person. A multilabel approach models the data more accurately. As you can see in figure 7.26, for each bounding box B, the prediction looks like this: [(bounding box coordinates), (objectness score), (class predictions)].
We've learned that the bounding box coordinates are four values, plus one value for the objectness score and K values for the class predictions; each bounding box therefore carries 5 + K values. With B boxes per cell and S × S cells in the grid, the total number of predicted values is

Total predicted values = S × S × B × (5 + K)

Figure 7.26 Example of a YOLOv3 workflow when applying a 13 × 13 grid to the input image. The input image is split into 169 cells. Each cell predicts B bounding boxes, each with an objectness score and class predictions. In this example, the cell at the center of the ground truth makes predictions for three boxes (B = 3). Each prediction has the following attributes: bounding box coordinates (tx, ty, tw, th), objectness score (Po), and class predictions (P1, P2, …, Pc).

PREDICTIONS ACROSS DIFFERENT SCALES
Look closely at figure 7.26. Notice that the prediction feature map has three boxes; you might wonder why. Similar to the anchors concept in SSD, YOLOv3 uses nine anchors to allow prediction at three different scales per cell. The detection layer makes detections at feature maps of three different sizes, with strides 32, 16, and 8, respectively. This means that with an input image of size 416 × 416, we make detections on grids of 13 × 13, 26 × 26, and 52 × 52 (figure 7.27). The 13 × 13 layer is responsible for detecting large objects, the 26 × 26 layer detects medium objects, and the 52 × 52 layer detects smaller objects. This results in the prediction of three bounding boxes for each cell (B = 3).
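The three grid sizes follow directly from the strides; a quick numeric check (not the book's code):

```python
input_size = 416
strides = [32, 16, 8]

# Each detection scale divides the input resolution by its stride
grids = [input_size // s for s in strides]
print(grids)  # [13, 26, 52]

# With B = 3 boxes per cell, the number of boxes predicted per scale:
boxes_per_scale = [g * g * 3 for g in grids]
print(boxes_per_scale)  # [507, 2028, 8112]
```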
That's why in figure 7.26 the prediction feature map predicts Box 1, Box 2, and Box 3. The bounding box responsible for detecting the dog is the one whose anchor has the highest IoU with the ground-truth box.

NOTE Detections at different layers help address the issue of detecting small objects, a frequent complaint about YOLOv2. The upsampling layers help the network preserve and learn fine-grained features, which are instrumental for detecting small objects.

The network downsamples the input image until the first detection layer, where a detection is made using the feature maps of a layer with stride 32. Then the layers are upsampled by a factor of 2 and concatenated with feature maps of previous layers that have identical feature-map sizes; another detection is made at the layer with stride 16. The same upsampling procedure is repeated, and a final detection is made at the layer with stride 8.

Figure 7.27 Prediction feature maps at three different scales: 13 × 13, 26 × 26, and 52 × 52

YOLOV3 OUTPUT BOUNDING BOXES
For an input image of size 416 × 416, YOLO predicts ((52 × 52) + (26 × 26) + (13 × 13)) × 3 = 10,647 bounding boxes. That is a huge number of boxes for an output. In our dog example, we have only one object, and we want only one bounding box around it. How do we reduce the boxes from 10,647 down to 1? First, we filter the boxes by their objectness score: generally, boxes with scores below a threshold are ignored. Second, we use NMS to cure the problem of multiple detections of the same object; for example, all three bounding boxes of the grid cell at the center of the object may detect it, or adjacent cells may detect the same object.

7.4.2 YOLOv3 architecture
Now that you understand how YOLO works, going through the architecture will be simple and straightforward.
YOLO is a single neural network that unifies object detection and classification into one end-to-end network. Its architecture was inspired by the GoogLeNet (Inception) model for feature extraction, but instead of the Inception modules, YOLO uses 1 × 1 reduction layers followed by 3 × 3 convolutional layers. Redmon and Farhadi called this DarkNet (figure 7.28).

Figure 7.28 High-level architecture of YOLO

YOLOv2 used a custom deep architecture, Darknet-19, an originally 19-layer network supplemented with 11 more layers for object detection. With its 30-layer architecture, YOLOv2 often struggled with small object detections, which was attributed to the loss of fine-grained features as the layers downsampled the input. YOLOv2's architecture was also still lacking some of the most important elements that are now standard in most state-of-the-art algorithms: residual blocks, skip connections, and upsampling. YOLOv3 incorporates all of these updates.

YOLOv3 uses a variant of DarkNet called Darknet-53 (figure 7.29), a 53-layer network trained on ImageNet. For the task of detection, 53 more layers are stacked onto it, giving YOLOv3 a 106-layer fully convolutional underlying architecture. This is the reason YOLOv3 is slower than YOLOv2, but the extra depth comes with a great boost in detection accuracy.

FULL ARCHITECTURE OF YOLOV3
We just learned that YOLOv3 makes predictions across three different scales. This becomes a lot clearer when you see the full architecture, shown in figure 7.30. The input image goes through the Darknet-53 feature extractor, and then the image is downsampled by the network until layer 79.
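The 1 × 1 reduction followed by a 3 × 3 convolution, wrapped in a skip connection, is the basic residual building block of Darknet-53. A minimal Keras sketch of one such block follows; it is an illustration only (the real Darknet-53 uses batch normalization and leaky ReLU rather than plain ReLU, and the filter counts here are assumed for the example):

```python
from tensorflow.keras import layers

def darknet_residual_block(x, filters):
    """1x1 reduction -> 3x3 convolution, added back to the input (skip connection)."""
    shortcut = x
    y = layers.Conv2D(filters // 2, (1, 1), padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, (3, 3), padding='same', activation='relu')(y)
    return layers.Add()([shortcut, y])

# Example: a 64-channel feature map keeps its spatial shape through the block
inputs = layers.Input(shape=(52, 52, 64))
outputs = darknet_residual_block(inputs, 64)
print(outputs.shape)  # (None, 52, 52, 64)
```

Because the block's output has the same shape as its input, these blocks can be stacked many times, which is how Darknet-53 reaches its depth.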
The network branches out and continues to downsample the image until it makes its first prediction at layer 82. This detection is made on a grid scale of 13 × 13, which, as we explained before, is responsible for detecting large objects. Next, the feature map from layer 79 is upsampled by 2× to 26 × 26 and concatenated with the feature map from layer 61; the second detection is then made by layer 94, on a grid scale of 26 × 26, which is responsible for detecting medium objects. Finally, a similar procedure is followed again: the feature map from layer 91 goes through a few upsampling convolutional layers before being depth-concatenated with a feature map from layer 36. The third prediction is made by layer 106, on a grid scale of 52 × 52, which is responsible for detecting small objects.

Figure 7.29 Darknet-53 feature extractor architecture. (Source: Redmon and Farhadi, 2018.)

7.5 Project: Train an SSD network in a self-driving car application
The code for this project was created by Pierluigi Ferrari in his GitHub repository (https://github.com/pierluigiferrari/ssd_keras). The project was adapted for this chapter; you can find this implementation with the book's downloadable code.
Figure 7.30 YOLOv3 network architecture. (Inspired by the diagram in Ayoosh Kathuria's post "What's new in YOLO v3?" Medium, 2018, http://mng.bz/lGN2.)

Note that for this project, we are going to build a smaller SSD network called SSD7: a seven-layer version of the SSD300 network. It is important to note that while an SSD7 network yields acceptable results, it is not an optimized network architecture. The goal is just to build a low-complexity network that is fast enough for you to train on your personal computer. It took me around 20 hours to train this network on the road traffic dataset; training could take a lot less time on a GPU.

NOTE The original repository created by Pierluigi Ferrari comes with implementation tutorials for the SSD7, SSD300, and SSD512 networks. I encourage you to check it out.

In this project, we will use a toy dataset created by Udacity. You can visit Udacity's GitHub repository for more information on the dataset (https://github.com/udacity/self-driving-car/tree/master/annotations). It has more than 22,000 labeled images and 5 object classes: car, truck, pedestrian, bicyclist, and traffic light. All of the images have been resized to a height of 300 pixels and a width of 480 pixels. You can download the dataset as part of the book's code.

NOTE The GitHub data repository is owned by Udacity, and it may be updated after this writing.
To avoid any confusion, I downloaded the dataset that I used to create this project and provide it with the book's code so you can replicate the results. What makes this dataset very interesting is that these are real images taken while driving in Mountain View, California, and neighboring cities during daylight conditions. No image cleanup was done. Take a look at the image examples in figure 7.31.

Figure 7.31 Example images from the Udacity self-driving dataset. (Image copyright © 2016 Udacity, published under the MIT License.)

As stated on Udacity's page, the dataset was labeled by CrowdAI and Autti. You can find the labels in CSV format, split into three files: training, validation, and test datasets. The labeling format is straightforward. For example:

frame                     xmin  xmax  ymin  ymax  class_id
1478019952686311006.jpg   237   251   143   155   1

xmin, xmax, ymin, and ymax are the bounding box coordinates, class_id is the correct label, and frame is the image name.

Data annotation using LabelImg
If you are annotating your own data, there are several open source labeling applications that you can use, like LabelImg (https://pypi.org/project/labelImg). They are very easy to set up and use. (Example: using the LabelImg application to annotate images.)

7.5.1 Step 1: Build the model
Before jumping into the model training, take a close look at the build_model method in the keras_ssd7.py file. This file builds a Keras model with the SSD architecture. As we learned earlier in this chapter, the model consists of convolutional feature layers and a number of convolutional predictor layers that take their input from different feature layers. Here is what the build_model method looks like; please read the comments in the keras_ssd7.py file to understand the arguments passed:

def build_model(image_size,
                mode='training',
                l2_regularization=0.0,
                min_scale=0.1,
                max_scale=0.9,
                scales=None,
                aspect_ratios_global=[0.5, 1.0, 2.0],
                aspect_ratios_per_layer=None,
                two_boxes_for_ar1=True,
                clip_boxes=False,
                variances=[1.0, 1.0, 1.0, 1.0],
                coords='centroids',
                normalize_coords=False,
                subtract_mean=None,
                divide_by_stddev=None,
                swap_channels=False,
                confidence_thresh=0.01,
                iou_threshold=0.45,
                top_k=200,
                nms_max_output_size=400,
                return_predictor_sizes=False)

7.5.2 Step 2: Model configuration
In this section, we set the model configuration parameters. First we set the height, width, and number of color channels that we want the model to accept as image input. If your input images have a different size than defined here, or if your images have non-uniform sizes, you must use the data generator's image transformations (resizing and/or cropping) so that your images end up having the required input size before they are fed to the model:

img_height = 300         # Height of the input images
img_width = 480          # Width of the input images
img_channels = 3         # Number of color channels of the input images
intensity_mean = 127.5   # Set to your preference (maybe None). The current settings
intensity_range = 127.5  # transform the input pixel values to the interval [-1, 1].

The number of classes is the number of positive classes in your dataset: for example, 20 for PASCAL VOC or 80 for COCO. Class ID 0 must always be reserved for the background class:

n_classes = 5                            # Number of classes in our dataset
scales = [0.08, 0.16, 0.32, 0.64, 0.96]  # Explicit list of anchor box scaling factors;
                                         # if passed, it overrides min_scale and max_scale
aspect_ratios = [0.5, 1.0, 2.0]          # List of aspect ratios for the anchor boxes
steps = None     # To set the step sizes for the anchor box grids manually; not recommended
offsets = None   # To set the offsets for the anchor box grids manually; not recommended
List of aspect ratios for the anchor boxesIn case you’d like to set the step sizes for the anchor box grids manually; not recommended In case you’d like to set the offsets for the anchor box grids manually; not recommended" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"330 CHAPTER 7Object detection with R-CNN, SSD, and YOLO two_boxes_for_ar1 = True clip_boxes = False variances = [1.0, 1.0, 1.0, 1.0] normalize_coords = True 7.5.3 Step 3: Create the model Now we call the build_model() function to build our model: model = build_model(image_size=(img_height, img_width, img_channels), n_classes=n_classes, mode= 'training' , l2_regularization=0.0005, scales=scales, aspect_ratios_global=aspect_ratios, aspect_ratios_per_layer= None, two_boxes_for_ar1=two_boxes_for_ar1, steps=steps, offsets=offsets, clip_boxes=clip_boxes, variances=variances, normalize_coords=normalize_coords, subtract_mean=intensity_mean, divide_by_stddev=intensity_range) You can optionally load saved weights. If you don’t want to load weights, skip the fol- lowing code snippet: model.load_weights('', by_name=True) Instantiate an Adam optimizer and the SSD loss function, and compile the model. Here, we will use a custom Keras function called SSDLoss . It implements the multi- task log loss for classification and smooth L1 loss for localization. 
neg_pos_ratio and alpha are set as in the SSD paper (Liu et al., 2016):

adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)

7.5.4 Step 4: Load the data
To load the data, follow these steps:

1 Instantiate two DataGenerator objects: one for training and one for validation:

train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)

2 Parse the image and label lists for the training and validation datasets:

images_dir = 'path_to_downloaded_directory'
train_labels_filename = 'path_to_dataset/labels_train.csv'
val_labels_filename = 'path_to_dataset/labels_val.csv'

train_dataset.parse_csv(images_dir=images_dir,
                        labels_filename=train_labels_filename,
                        input_format=['image_name', 'xmin', 'xmax',
                                      'ymin', 'ymax', 'class_id'],  # ground truth
                        include_classes='all')
val_dataset.parse_csv(images_dir=images_dir,
                      labels_filename=val_labels_filename,
                      input_format=['image_name', 'xmin', 'xmax',
                                    'ymin', 'ymax', 'class_id'],
                      include_classes='all')

# Gets the number of samples in the training and validation datasets
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()

print("Number of images in the training dataset: \t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset: \t{:>6}".format(val_dataset_size))

This cell should print out the size of your training and validation datasets as follows:

Number of images in the training dataset:    18000
Number of images in the validation dataset:   4241

3 Set the batch size:

batch_size = 16

As you learned in chapter 4, you can increase the batch size to get a boost in computing speed, depending on the hardware you are using for this training.

4 Define the data augmentation process:

data_augmentation_chain = DataAugmentationConstantInputSize(
    random_brightness=(-48, 48, 0.5),
    random_contrast=(0.5, 1.8, 0.5),
    random_saturation=(0.5, 1.8, 0.5),
    random_hue=(18, 0.5),
    random_flip=0.5,
    random_translate=((0.03, 0.5), (0.03, 0.5), 0.5),
    random_scale=(0.5, 2.0, 0.5),
    n_trials_max=3,
    clip_boxes=True,
    overlap_criterion='area',
    bounds_box_filter=(0.3, 1.0),
    bounds_validator=(0.5, 1.0),
    n_boxes_min=1,
    background=(0, 0, 0))

5 Instantiate an encoder that can encode ground-truth labels into the format needed by the SSD loss function.
Here, the encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes:

predictor_sizes = [model.get_layer('classes4').output_shape[1:3],
                   model.get_layer('classes5').output_shape[1:3],
                   model.get_layer('classes6').output_shape[1:3],
                   model.get_layer('classes7').output_shape[1:3]]

ssd_input_encoder = SSDInputEncoder(img_height=img_height,
                                    img_width=img_width,
                                    n_classes=n_classes,
                                    predictor_sizes=predictor_sizes,
                                    scales=scales,
                                    aspect_ratios_global=aspect_ratios,
                                    two_boxes_for_ar1=two_boxes_for_ar1,
                                    steps=steps,
                                    offsets=offsets,
                                    clip_boxes=clip_boxes,
                                    variances=variances,
                                    matching_type='multi',
                                    pos_iou_threshold=0.5,
                                    neg_iou_limit=0.3,
                                    normalize_coords=normalize_coords)

6 Create the generator handles that will be passed to Keras's fit_generator() function:

train_generator = train_dataset.generate(batch_size=batch_size,
                                         shuffle=True,
                                         transformations=[data_augmentation_chain],
                                         label_encoder=ssd_input_encoder,
                                         returns={'processed_images', 'encoded_labels'},
                                         keep_images_without_gt=False)

val_generator = val_dataset.generate(batch_size=batch_size,
                                     shuffle=False,
                                     transformations=[],
                                     label_encoder=ssd_input_encoder,
                                     returns={'processed_images', 'encoded_labels'},
                                     keep_images_without_gt=False)

7.5.5 Step 5: Train the model
Everything is set, and we are ready to train our SSD7 network. We've already chosen an optimizer and a learning rate and set the batch size; now let's set the remaining training parameters and train the network. There are no new parameters here that you haven't learned already. We will set the model checkpoint, early stopping, and learning rate reduction:

model_checkpoint = ModelCheckpoint(
    filepath='ssd7_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
    monitor='val_loss',
    verbose=1,
    save_best_only=True,
    save_weights_only=False,
    mode='auto',
    period=1)

csv_logger = CSVLogger(filename='ssd7_training_log.csv',
                       separator=',',
                       append=True)

# Early stopping if val_loss did not improve for 10 consecutive epochs
early_stopping = EarlyStopping(monitor='val_loss',
                               min_delta=0.0,
                               patience=10,
                               verbose=1)

# Learning rate reduction when val_loss plateaus
reduce_learning_rate = ReduceLROnPlateau(monitor='val_loss',
                                         factor=0.2,
                                         patience=8,
                                         verbose=1,
                                         epsilon=0.001,
                                         cooldown=0,
                                         min_lr=0.00001)

callbacks = [model_checkpoint, csv_logger, early_stopping, reduce_learning_rate]

Set one epoch to consist of 1,000 training steps. I've arbitrarily set the number of epochs to 20 here. This does not necessarily mean that 20,000 training steps is the optimum number. Depending on the model, dataset, learning rate, and so on, you might have to train much longer (or less) to achieve convergence:

initial_epoch = 0
final_epoch = 20
steps_per_epoch = 1000

# Starts training
history = model.fit_generator(generator=train_generator,
                              steps_per_epoch=steps_per_epoch,
                              epochs=final_epoch,
                              callbacks=callbacks,
                              validation_data=val_generator,
                              validation_steps=ceil(val_dataset_size / batch_size),
                              initial_epoch=initial_epoch)

7.5.6 Step 6: Visualize the loss
Let's visualize the loss and val_loss values to look at how the training and validation loss evolved and check whether our training is going in the right direction (figure 7.32):

plt.figure(figsize=(20, 12))
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend(loc='upper right', prop={'size': 24})

If you're resuming previous
training, set initial_epoch and final_epoch accordingly in the fit_generator() call above.

Figure 7.32 Visualized loss and val_loss values during SSD7 training for 20 epochs

7.5.7 Step 7: Make predictions
Now let's make some predictions on the validation dataset with the trained model. For convenience, we'll use the validation generator that we've already set up. Feel free to change the batch size:

# 1. Set the generator for the predictions.
predict_generator = val_dataset.generate(batch_size=1,
                                         shuffle=True,
                                         transformations=[],
                                         label_encoder=None,
                                         returns={'processed_images',
                                                  'processed_labels',
                                                  'filenames'},
                                         keep_images_without_gt=False)

# 2. Generate samples.
batch_images, batch_labels, batch_filenames = next(predict_generator)

# 3. Make a prediction.
y_pred = model.predict(batch_images)

# 4. Decode the raw prediction y_pred.
y_pred_decoded = decode_detections(y_pred,
                                   confidence_thresh=0.5,
                                   iou_threshold=0.45,
                                   top_k=200,
                                   normalize_coords=normalize_coords,
                                   img_height=img_height,
                                   img_width=img_width)

np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print('   class   conf   xmin    ymin    xmax    ymax')
print(y_pred_decoded[0])

This code snippet prints the predicted bounding boxes along with their class and the level of confidence for each one, as shown in figure 7.33:

   class   conf   xmin    ymin    xmax    ymax
[[   1.    0.93  131.96  152.12  159.29  172.3 ]
 [   1.    0.88   52.39  151.89   87.44  179.34]
 [   1.    0.88  262.65  140.26  286.45  164.05]
 [   1.    0.6   234.53  148.43  267.19  170.34]
 [   1.    0.58   73.2   153.51   91.79  175.64]
 [   1.    0.5   225.06  130.93  274.15  169.79]
 [   2.    0.6   266.38  116.4   282.23  173.16]]

Figure 7.33 Predicted bounding boxes, confidence level, and class

When we draw these predicted boxes onto the image, as shown in figure 7.34, each predicted box has its confidence next to the category name. The ground-truth boxes are also drawn onto the image for comparison.

Figure 7.34 Predicted boxes drawn onto the image (six cars and one jeep, each labeled with its confidence)

Summary
- Image classification is the task of predicting the type or class of an object in an image.
- Object detection is the task of predicting the location of objects in an image via bounding boxes and the classes of the located objects.
- The general framework of object detection systems consists of four main components: region proposals, feature extraction and predictions, non-maximum suppression, and evaluation metrics.
- Object detection algorithms are evaluated using two main metrics: frames per second (FPS) to measure the network's speed, and mean average precision (mAP) to measure the network's precision.
- The three most popular object detection systems are the R-CNN family of networks, SSD, and the YOLO family of networks.
- The R-CNN family of networks has three main variations: R-CNN, Fast R-CNN, and Faster R-CNN. R-CNN and Fast R-CNN use a selective search algorithm to propose RoIs, whereas Faster R-CNN is an end-to-end DL system that uses a region proposal network to propose RoIs.
- The YOLO family of networks includes YOLOv1, YOLOv2 (or YOLO9000), and YOLOv3.
- R-CNN is a multi-stage detector: it separates the process to predict the objectness score of the bounding box and the object class into two different stages.
- SSD and YOLO are single-stage detectors: the image is passed once through the network to predict the objectness score and the object class.
- In general, single-stage detectors tend to be less accurate than two-stage detectors but are significantly faster.

Part 3 Generative models and visual embeddings

At this point, we've covered a lot of ground about how deep neural networks can help us understand image features and perform deterministic tasks on them, like object classification and detection. Now it's time to turn our focus to a different, slightly more advanced area of computer vision and deep learning: generative models. These neural network models actually create new content that didn't exist before: new people, new objects, a new reality, like magic! We train these models on a dataset from a specific domain, and then they create new images with objects from the same domain that look close to the real data. In this part of the book, we'll cover both training and image generation, as well as look at neural style transfer and the cutting edge of what's happening in visual embeddings.

8 Generative adversarial networks (GANs)

Generative adversarial networks (GANs) are a new type of neural architecture introduced by Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio, in 2014.[1] GANs have been called "the most interesting idea in the last 10 years in ML" by Yann LeCun, Facebook's AI research director. The excitement is well justified. The most notable feature of GANs is their capacity to create hyperrealistic images, videos, music, and text.
For example, except for the far-right column, none of the faces shown on the right side of figure 8.1 belong to real humans; they are all fake. The same is true for the handwritten digits on the left side of the figure. This shows a GAN's ability to learn features from the training images and imagine its own new images using the patterns it has learned.

This chapter covers
- Understanding the basic components of GANs: generative and discriminative models
- Evaluating generative models
- Learning about popular vision applications of GANs
- Building a GAN model

[1] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, "Generative Adversarial Networks," 2014, http://arxiv.org/abs/1406.2661.

We've learned in the past chapters how deep neural networks can be used to understand image features and perform deterministic tasks on them, like object classification and detection. In this part of the book, we will talk about a different type of application for deep learning in the computer vision world: generative models. These are neural network models that are able to imagine and produce new content that hasn't been created before. They can imagine new worlds, new people, and new realities in a seemingly magical way. We train generative models by providing a training dataset in a specific domain; their job is to create images that contain new objects from the same domain that look like the real data.

For a long time, humans have had an advantage over computers: the ability to imagine and create. Computers have excelled at solving problems like regression, classification, and clustering. But with the introduction of generative networks, researchers can make computers generate content of the same or higher quality than that created by their human counterparts.
By learning to mimic any distribution of data, computers can be taught to create worlds that are similar to our own in any domain: images, music, speech, prose. They are robot artists, in a sense, and their output is impressive. GANs are also seen as an important stepping stone toward achieving artificial general intelligence (AGI): an artificial system capable of matching human cognitive capacity and acquiring expertise in virtually any domain, from images, to language, to the creative skills needed to compose sonnets.

Naturally, this ability to generate new content makes GANs look a little bit like magic, at least at first sight. In this chapter, we will only attempt to scratch the surface of what is possible with GANs. We will get past the apparent magic of GANs and dive into the architectural ideas and math behind these models, to provide the necessary theoretical knowledge and practical skills to continue exploring whichever facet of this field you find most interesting. Not only will we discuss the fundamental notions that GANs rely on, but we will also implement and train an end-to-end GAN and go through it step by step. Let's get started!

Figure 8.1 Illustration of GANs' abilities by Goodfellow and co-authors. These are samples generated by GANs after training on two datasets: MNIST and the Toronto Faces Dataset (TFD). In both cases, the right-most column contains true data. This shows that the produced data is really generated and not merely memorized by the network. (Source: Goodfellow et al., 2014.)

8.1 GAN architecture
GANs are based on the idea of adversarial training. The GAN architecture basically consists of two neural networks that compete against each other:
- The generator tries to convert random noise into observations that look as if they have been sampled from the original dataset.
- The discriminator tries to predict whether an observation comes from the original dataset or is one of the generator's forgeries.

This competitiveness helps them mimic any distribution of data. I like to think of the GAN architecture as two boxers fighting (figure 8.2): in their quest to win the bout, both are learning each other's moves and techniques. They start with little knowledge about their opponent, and as the match goes on, they learn and become better.

Figure 8.2 A fight between two adversarial networks: the generator generates images from the features learned in the training dataset, and the discriminator predicts whether the image is real or fake

Another analogy will help drive home the idea: think of a GAN as the opposition of a counterfeiter and a cop in a game of cat and mouse, where the counterfeiter is learning to pass false notes, and the cop is learning to detect them (figure 8.3). Both are dynamic: as the counterfeiter learns to perfect creating false notes, the cop is in training and getting better at detecting the fakes. Each side learns the other's methods in a constant escalation.

As you can see in the architecture diagram in figure 8.4, a GAN takes the following steps:
1 The generator takes in random numbers and returns an image.
2 This generated image is fed into the discriminator alongside a stream of images taken from the actual, ground-truth dataset.
3 The discriminator takes in both real and fake images and returns probabilities: numbers between 0 and 1, with 1 representing a prediction of authenticity and 0 representing a prediction of fake.

If you take a close look at the generator and discriminator networks, you will notice that the generator network is an inverted ConvNet that starts with a flattened vector.
The images are upscaled until they are similar in size to the images in the training dataset. We will dive deeper into the generator architecture later in this chapter; I just wanted you to notice this phenomenon now.

Figure 8.3 The GAN's generator and discriminator models are like a counterfeiter and a police officer

Figure 8.4 The GAN architecture is composed of generator and discriminator networks. Note that the discriminator network is a typical CNN where the convolutional layers reduce in size until they get to the flattened layer. The generator network, on the other hand, is an inverted CNN that starts with the flattened vector: the convolutional layers increase in size until they form the dimension of the input images.

8.1.1 Deep convolutional GANs (DCGANs)
In the original GAN paper in 2014, multi-layer perceptron (MLP) networks were used to build the generator and discriminator networks. However, since then, it has been proven that convolutional layers give greater predictive power to the discriminator, which in turn enhances the accuracy of the generator and the overall model. This type of GAN is called a deep convolutional GAN (DCGAN) and was developed by Alec Radford et al. in 2016.[2] Now all GAN architectures contain convolutional layers, so the "DC" is implied when we talk about GANs; for the rest of this chapter, we refer to DCGANs as both GANs and DCGANs. You can also go back to chapters 2 and 3 to learn more about the differences between MLP and CNN networks and why CNNs are preferred for image problems. Next, let's dive deeper into the architecture of the discriminator and generator networks.

8.1.2 The discriminator model
As explained earlier, the goal of the discriminator is to predict whether an image is real or fake.
This is a typical supervised classification problem, so we can use the traditional classifier network that we learned about in the previous chapters. The network consists of stacked convolutional layers, followed by a dense output layer with a sigmoid activation function. We use a sigmoid activation function because this is a binary classification problem: the goal of the network is to output prediction probability values that range between 0 and 1, where 0 means the image produced by the generator is fake and 1 means it is 100% real. The discriminator is a normal, well-understood classification model.

As you can see in figure 8.5, training the discriminator is pretty straightforward. We feed the discriminator labeled images: fake (or generated) and real images. The real images come from the training dataset, and the fake images are the output of the generator model.

[2] Alec Radford, Luke Metz, and Soumith Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks," 2016, http://arxiv.org/abs/1511.06434.

Figure 8.5 The discriminator for the GAN: real images from the training dataset and fake images from the generator pass through the discriminator network's convolutional layers, and a sigmoid function outputs a realness probability.

Now, let's implement the discriminator network in Keras. At the end of this chapter, we will compile all the code snippets together to build an end-to-end GAN. We will first implement a discriminator_model function.
In this code snippet, the shape of the image input is 28 × 28; you can change it as needed for your problem:

def discriminator_model():
    # Instantiates a sequential model named discriminator
    discriminator = Sequential()
    # First convolutional layer
    discriminator.add(Conv2D(32, kernel_size=3, strides=2,
                             input_shape=(28, 28, 1), padding="same"))
    discriminator.add(LeakyReLU(alpha=0.2))  # Leaky ReLU activation function
    discriminator.add(Dropout(0.25))         # Dropout with a 25% dropout probability
    # Second convolutional layer, with zero padding and a batch normalization
    # layer for faster learning and higher accuracy
    discriminator.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
    discriminator.add(ZeroPadding2D(padding=((0, 1), (0, 1))))
    discriminator.add(BatchNormalization(momentum=0.8))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(Dropout(0.25))
    # Third convolutional layer with batch normalization, leaky ReLU, and dropout
    discriminator.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
    discriminator.add(BatchNormalization(momentum=0.8))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(Dropout(0.25))
    # Fourth convolutional layer with batch normalization, leaky ReLU, and dropout
    discriminator.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
    discriminator.add(BatchNormalization(momentum=0.8))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(Dropout(0.25))
    # Flattens the network and adds the output dense layer
    # with a sigmoid activation function
    discriminator.add(Flatten())
    discriminator.add(Dense(1, activation='sigmoid'))
    discriminator.summary()  # Prints the model summary

    img_shape = (28, 28, 1)  # Sets the input image shape
    img = Input(shape=img_shape)
    # Runs the discriminator model to get the output probability
    probability = discriminator(img)
    # Returns a model that takes the image as input
    # and produces the probability output
    return Model(img, probability)

The output summary of the discriminator model is shown in figure 8.6. As you might have noticed, there is nothing new: the discriminator model follows the regular pattern of the traditional CNN networks that we learned about in chapters 3, 4, and 5. We stack convolutional, batch normalization, activation, and dropout layers to create our model. All of these layers have hyperparameters that we tune when we are training the network. For your own implementation, you can tune these hyperparameters and add or remove layers as you see fit. Tuning CNN hyperparameters is explained in detail in chapters 3 and 4.

Layer (type)                   Output Shape          Param #
conv2d_1 (Conv2D)              (None, 14, 14, 32)    320
leaky_re_lu_1 (LeakyReLU)      (None, 14, 14, 32)    0
dropout_1 (Dropout)            (None, 14, 14, 32)    0
conv2d_2 (Conv2D)              (None, 7, 7, 64)      18496
zero_padding2d_1 (ZeroPaddin   (None, 8, 8, 64)      0
batch_normalization_1 (Batch   (None, 8, 8, 64)      256
leaky_re_lu_2 (LeakyReLU)      (None, 8, 8, 64)      0
dropout_2 (Dropout)            (None, 8, 8, 64)      0
conv2d_3 (Conv2D)              (None, 4, 4, 128)     73856
batch_normalization_2 (Batch   (None, 4, 4, 128)     512
leaky_re_lu_3 (LeakyReLU)      (None, 4, 4, 128)     0
dropout_3 (Dropout)            (None, 4, 4, 128)     0
conv2d_4 (Conv2D)              (None, 4, 4, 256)     295168
batch_normalization_3 (Batch   (None, 4, 4, 256)     1024
leaky_re_lu_4 (LeakyReLU)      (None, 4, 4, 256)     0
dropout_4 (Dropout)            (None, 4, 4, 256)     0
flatten_1 (Flatten)            (None, 4096)          0
dense_1 (Dense)                (None, 1)             4097

Total params: 393,729
Trainable params: 392,833
Non-trainable params: 896

Figure 8.6 The output summary for the discriminator model

In the output summary in figure 8.6, note that the width and height of the output feature maps decrease in size, whereas the depth increases in size.
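You can sanity-check the Param # column in figure 8.6 yourself: a Conv2D layer with a k × k kernel has k · k · in_channels · filters weights plus one bias per filter; a Dense layer has in_units · out_units weights plus out_units biases; and each BatchNormalization layer carries 4 parameters per channel (gamma and beta, which are trainable, plus the non-trainable moving mean and variance). A quick check, with helper names that are my own, not Keras's:

```python
# Hypothetical helpers for counting Keras layer parameters.
def conv2d_params(kernel, in_channels, filters):
    # kernel weights + one bias per filter
    return kernel * kernel * in_channels * filters + filters

def dense_params(in_units, out_units):
    # weight matrix + one bias per output unit
    return in_units * out_units + out_units

def batchnorm_params(channels):
    # gamma, beta (trainable) + moving mean, moving variance (non-trainable)
    return 4 * channels

print(conv2d_params(3, 1, 32))       # conv2d_1: 320
print(conv2d_params(3, 32, 64))      # conv2d_2: 18496
print(dense_params(4 * 4 * 256, 1))  # dense_1: 4097
```

The non-trainable count, 896, is the sum of the moving means and variances: 2 × (64 + 128 + 256).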
This is the expected behavior for traditional CNN networks, as we've seen in previous chapters. Let's see what happens to the feature maps' size in the generator network in the next section.

8.1.3 The generator model
The generator takes in some random data and tries to mimic the training dataset to generate fake images. Its goal is to trick the discriminator by trying to generate images that are perfect replicas of the training dataset. As it is trained, it gets better and better after each iteration. But the discriminator is being trained at the same time, so the generator has to keep improving as the discriminator learns its tricks.

As you can see in figure 8.7, the generator model looks like an inverted ConvNet. The generator takes a vector input with some random noise data and reshapes it into a cube volume that has a width, height, and depth. This volume is meant to be treated as a feature map that will be fed to several convolutional layers that create the final image.

UPSAMPLING TO SCALE FEATURE MAPS
Traditional convolutional neural networks use pooling layers to downsample input images. In order to scale the feature maps up, we use upsampling layers that scale the image dimensions by repeating each row and column of the input pixels. Keras has an upsampling layer (UpSampling2D) that scales the image dimensions by taking a scaling factor (size) as an argument:

keras.layers.UpSampling2D(size=(2, 2))

This line of code repeats every row and column of the image matrix two times, because the scaling factor is set to (2, 2); see figure 8.8.
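Outside Keras, you can reproduce this row-and-column repetition (what UpSampling2D does in its default nearest-neighbor mode) with two calls to numpy's repeat. This sketch is just an illustration of the idea, not the layer's implementation:

```python
import numpy as np

x = np.array([[1, 2],
              [3, 4]])

# Repeat each row, then each column, twice: the (2, 2) upsampling of figure 8.8.
up = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
print(up)
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```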
If the scaling factor is (3, 3), the upsampling layer repeats each row and column of the input matrix three times, as shown in figure 8.9.

Figure 8.8 Upsampling example when the scaling size is (2, 2):

Input = 1, 2        Output = 1, 1, 2, 2
        3, 4                 1, 1, 2, 2
                             3, 3, 4, 4
                             3, 3, 4, 4

Figure 8.9 Upsampling example when the scaling size is (3, 3):

[[1. 1. 1. 2. 2. 2.]
 [1. 1. 1. 2. 2. 2.]
 [1. 1. 1. 2. 2. 2.]
 [3. 3. 3. 4. 4. 4.]
 [3. 3. 3. 4. 4. 4.]
 [3. 3. 3. 4. 4. 4.]]

Figure 8.7 The generator model of the GAN: a random noise input vector is reshaped to 7 × 7 × 128, upsampled to 14 × 14 × 128, then to 28 × 28 × 64, and finally mapped to a 28 × 28 × 1 image.

When we build the generator model, we keep adding upsampling layers until the size of the feature maps is similar to that of the training dataset. You will see how this is implemented in Keras in the next section. Now, let's build the generator_model function that builds the generator network:

def generator_model():
    # Instantiates a sequential model named generator
    generator = Sequential()
    # Dense layer with 128 * 7 * 7 neurons, taking a noise vector of
    # length 100. We use 100 here to create a simple network.
    generator.add(Dense(128 * 7 * 7, activation="relu", input_dim=100))
    # Reshapes the image dimensions to 7 × 7 × 128
    generator.add(Reshape((7, 7, 128)))
    # Upsampling layer to double the image dimensions to 14 × 14
    generator.add(UpSampling2D(size=(2, 2)))
    # Convolutional + batch normalization layers
    generator.add(Conv2D(128, kernel_size=3, padding="same"))
    generator.add(BatchNormalization(momentum=0.8))
    generator.add(Activation("relu"))
    # Upsamples the image dimensions to 28 × 28. We don't add upsampling
    # after this point because 28 × 28 is equal to the image size in the
    # MNIST dataset. You can adjust this for your own problem.
    generator.add(UpSampling2D(size=(2, 2)))
    # Convolutional + batch normalization layers
    generator.add(Conv2D(64, kernel_size=3, padding="same"))
    generator.add(BatchNormalization(momentum=0.8))
    generator.add(Activation("relu"))
    # Convolutional layer with filters = 1
    generator.add(Conv2D(1, kernel_size=3, padding="same"))
    generator.add(Activation("tanh"))
    generator.summary()  # Prints the model summary

    # Generates the input noise vector of length = 100
    noise = Input(shape=(100,))
    # Runs the generator model to create the fake image
    fake_image = generator(noise)
    # Returns a model that takes the noise vector as input
    # and outputs the fake image
    return Model(noise, fake_image)

The output summary of the generator model is shown in figure 8.10. In the code snippet, the only new component is the upsampling layer, which doubles its input dimensions by repeating pixels. Similar to the discriminator, we stack convolutional layers on top of each other and add other optimization layers like BatchNormalization. The key difference in the generator model is that it starts with a flattened vector; the images are upsampled until they have dimensions similar to the training dataset. All of these layers have hyperparameters that we tune when we are training the network. For your own implementation, you can tune these hyperparameters and add or remove layers as you see fit.

Notice the change in the output shape after each layer. It starts as a 1D vector of 6,272 neurons, which we reshape to a 7 × 7 × 128 volume; the width and height are then upsampled twice, to 14 × 14 followed by 28 × 28.
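As a quick sanity check of those shapes, and of the Dense layer's parameter count reported in figure 8.10, you can trace the tensor through the network by hand. The helper names below are mine, not Keras's:

```python
# Hypothetical helpers: trace the generator's tensor shapes and check the
# first layer's parameter count.
def dense_params(in_units, out_units):
    return in_units * out_units + out_units  # weights + biases

def upsample_shape(shape, size=2):
    h, w, c = shape
    return (h * size, w * size, c)

# 100-d noise -> 6,272 dense units -> reshape -> upsample twice
assert 128 * 7 * 7 == 6272
shape = (7, 7, 128)
shape = upsample_shape(shape)  # (14, 14, 128)
shape = upsample_shape(shape)  # (28, 28, 128); the Conv2D layers then shrink
                               # the depth to 64 and finally to 1
print(shape, dense_params(100, 6272))  # (28, 28, 128) 633472
```

The 633,472 figure matches the dense_2 row of the model summary: 100 × 6,272 weights plus 6,272 biases.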
The depth decreases from 128 to 64 and finally to 1 because this network is built to deal with the grayscale MNIST dataset project that we will implement later in this chapter. If you are building a generator model to generate color images, you should set the filters in the last convolutional layer to 3.

Layer (type)                   Output Shape          Param #
dense_2 (Dense)                (None, 6272)          633472
reshape_1 (Reshape)            (None, 7, 7, 128)     0
up_sampling2d_1 (UpSampling2   (None, 14, 14, 128)   0
conv2d_5 (Conv2D)              (None, 14, 14, 128)   147584
batch_normalization_4 (Batch   (None, 14, 14, 128)   512
activation_1 (Activation)      (None, 14, 14, 128)   0
up_sampling2d_2 (UpSampling2   (None, 28, 28, 128)   0
conv2d_6 (Conv2D)              (None, 28, 28, 64)    73792
batch_normalization_5 (Batch   (None, 28, 28, 64)    256
activation_2 (Activation)      (None, 28, 28, 64)    0
conv2d_7 (Conv2D)              (None, 28, 28, 1)     577
activation_3 (Activation)      (None, 28, 28, 1)     0

Total params: 856,193
Trainable params: 855,809
Non-trainable params: 384

Figure 8.10 The output summary of the generator model

8.1.4 Training the GAN
Now that we've learned about the discriminator and generator models separately, let's put them together to train an end-to-end generative adversarial network. The discriminator is trained to become a better classifier, to maximize the probability of assigning the correct label both to training examples (real) and to images generated by the generator (fake): for example, the police officer becomes better at differentiating between fake and real currency. The generator, on the other hand, is trained to become a better forger, to maximize its chances of fooling the discriminator. Both networks are getting better at what they do.

The process of training GAN models involves two processes:
1 Train the discriminator. This is a straightforward supervised training process.
The network is given labeled images coming from the generator (fake) and from the training data (real), and it learns to classify between real and fake images with a sigmoid prediction output. Nothing new here.

2  Train the generator. This process is a little tricky. The generator model cannot be trained alone like the discriminator: it needs the discriminator model to tell it whether it did a good job of faking images. So we create a combined network to train the generator, composed of both the discriminator and generator models.

Think of the training processes as two parallel lanes. One lane trains the discriminator alone, and the other lane is the combined model that trains the generator. The GAN training process is illustrated in figure 8.11. As you can see in the figure, when training the combined model we freeze the weights of the discriminator, because this model focuses only on training the generator.

Figure 8.11 The process flow to train GANs. Discriminator training: real data and fake data from the generator feed the discriminator, which makes a binary real/fake classification, and the model is updated. Generator training: an input vector feeds the generator, whose fake image feeds the discriminator (training frozen) for a binary real/fake classification, and the generator's weights are updated.

We will discuss the intuition behind this idea when we explain the generator training process. For now, just know that we need to build and train two models: one for the discriminator alone, and one containing both the discriminator and generator. Both processes follow the traditional neural network training process explained in chapter 2: it starts with the feedforward process, then makes predictions, and calculates and backpropagates the error.
When training the discriminator, the error is backpropagated to the discriminator model to update its weights; in the combined model, the error is backpropagated to the generator to update its weights. During the training iterations, we follow the same neural network training procedure to observe the network's performance and tune its hyperparameters until we see that the generator is achieving satisfying results for our problem. This is when we can stop the training and deploy the generator model. Now, let's see how we compile the discriminator and the combined networks to train the GAN model.

TRAINING THE DISCRIMINATOR

As we said before, this is a straightforward process. First, we build the model from the discriminator_model method that we created earlier in this chapter. Then we compile the model using the binary_crossentropy loss function and an optimizer of your choice (we use Adam in this example). Let's see the Keras implementation that builds and compiles the discriminator. Please note that this code snippet is not meant to be compilable on its own; it is here for illustration. At the end of this chapter, you can find the full code of this project:

    discriminator = discriminator_model()
    discriminator.compile(loss='binary_crossentropy',
                          optimizer='adam',
                          metrics=['accuracy'])

We can train the model by creating random training batches using Keras' train_on_batch method to run a single gradient update on a single batch of data. Here, valid is an array of ones and fake an array of zeros, the target labels for real and generated images:

    valid = np.ones((batch_size, 1))    # Target labels for real images
    fake = np.zeros((batch_size, 1))    # Target labels for generated images

    noise = np.random.normal(0, 1, (batch_size, 100))  # Sample noise
    gen_imgs = generator.predict(noise)                # Generates a batch of new images

    # Train the discriminator (real classified as ones and generated as zeros)
    d_loss_real = discriminator.train_on_batch(imgs, valid)
    d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)

TRAINING THE GENERATOR (COMBINED MODEL)

Here is the one tricky part in training GANs: training the generator.
While the discriminator can be trained in isolation, the generator needs the discriminator in order to be trained. For this, we build a combined model that contains both the generator and the discriminator, as shown in figure 8.12. When we want to train the generator, we freeze the weights of the discriminator model, because the generator and discriminator have different loss functions pulling in different directions. If we don't freeze the discriminator weights, the discriminator will be pulled in the same direction the generator is learning, making it more likely to predict generated images as real, which is not the desired outcome.

Freezing the weights of the discriminator model doesn't affect the existing discriminator model that we compiled earlier when we were training the discriminator. Think of it as having two discriminator models; this is not actually the case, but it is easier to imagine. Now, let's build the combined model:

    generator = generator_model()    # Builds the generator
    z = Input(shape=(100,))
    img = generator(z)               # The generator takes noise as input and generates an image
    discriminator.trainable = False  # Freezes the weights of the discriminator model
    valid = discriminator(img)       # The discriminator takes generated images and determines their validity
    combined = Model(z, valid)       # The combined model (stacked generator and discriminator)
                                     # trains the generator to fool the discriminator

Now that we have built the combined model, we can proceed with the training process as normal. We compile the combined model with a binary_crossentropy loss function and an Adam optimizer, then train the generator (we want the discriminator to mistake the generated images for real ones):

    combined.compile(loss='binary_crossentropy', optimizer=optimizer)
    g_loss = combined.train_on_batch(noise, valid)

TRAINING EPOCHS

In the project at the end of the chapter, you will see that the previous code snippets are placed inside a loop to perform the training for a certain number of epochs. For each epoch, the two compiled models (discriminator and combined) are trained simultaneously.
During the training process, both the generator and discriminator improve.

Figure 8.12 Illustration of the combined model that contains both the generator and discriminator models: random noise feeds the generator, the resulting fake image feeds the discriminator, which produces an output (e.g., 0.3), and feedback flows back to the generator through backpropagation.

You can observe the performance of your GAN by printing out the results after each epoch (or a set of epochs) to see how the generator is doing at generating synthetic images. Figure 8.13 shows an example of the evolution of the generator's performance throughout its training process on the MNIST dataset. In the example, epoch 0 starts with random noise data that doesn't yet represent the features in the training dataset. As the GAN model goes through the training, its generator gets better and better at creating high-quality imitations of the training dataset that can fool the discriminator. Manually observing the generator's performance is a good way to evaluate system performance, decide on the number of epochs, and choose when to stop training. We'll look more at GAN evaluation techniques in section 8.2.

8.1.5 GAN minimax function

GAN training is more of a zero-sum game than an optimization problem. In zero-sum games, the total utility score is divided among the players: an increase in one player's score results in a decrease in another player's score. In AI, this is called minimax game theory. Minimax is a decision-making algorithm, typically used in turn-based, two-player games.
The goal of the algorithm is to find the optimal next move. One player, called the maximizer, works to get the maximum possible score; the other player, called the minimizer, tries to get the lowest score by counter-moving against the maximizer.

Figure 8.13 The generator gets better at mimicking the handwritten digits of the MNIST dataset throughout its training from epoch 0 to epoch 9,500 (panels shown at epochs 0, 1,500, 2,500, 3,500, 5,500, 7,500, and 9,500).

GANs play a minimax game in which the entire network attempts to optimize the function V(D, G) in the following equation:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

Here D(x) is the discriminator's output for real data x, and D(G(z)) is its output for generated fake data G(z). The goal of the discriminator (D) is to maximize the probability of assigning the correct label to the image. The generator's (G) goal, on the other hand, is to minimize the chances of getting caught. So, we train D to maximize the probability of assigning the correct label to both training examples and samples from G, and we simultaneously train G to minimize log(1 - D(G(z))). In other words, D and G play a two-player minimax game with the value function V(D, G).

Like any other mathematical equation, the preceding one looks terrifying to anyone who isn't well versed in the math behind it, but the idea it represents is simple yet powerful: it is just a mathematical representation of the two competing objectives of the discriminator and the generator models. Let's go through the symbols first (table 8.1) and then explain the equation. The discriminator takes its input from two sources:

- Data from the generator, G(z): this is fake data (z). The discriminator's output for it is denoted D(G(z)).
- Real input from the real training data, x: the discriminator's output for it is denoted D(x).
Minimax game theory
In a two-person, zero-sum game, a person can win only if the other player loses; no cooperation is possible. This game theory is widely used in games such as tic-tac-toe, backgammon, mancala, chess, and so on. The maximizer player tries to get the highest score possible, while the minimizer player tries to do the opposite and get the lowest score possible. In a given game state, if the maximizer has the upper hand, the score will tend to be a positive value; if the minimizer has the upper hand in that state, the score will tend to be a negative value. The values are calculated by heuristics that are unique to every type of game.

Table 8.1 Symbols used in the minimax equation

Symbol     Explanation
G          Generator.
D          Discriminator.
z          Random noise fed to the generator (G).
G(z)       The generator takes the random noise data (z) and tries to reconstruct the real images.
D(G(z))    The discriminator's (D) output for the generator's images.
D(x)       The discriminator's probability output for real data.

To simplify the minimax equation, the best way to look at it is to break it down into two components: the discriminator training function and the generator training (combined model) function.
During the training process, we created two training flows, each with its own error function:

- One for the discriminator alone, which aims to maximize the minimax function by pushing the predictions for real data as close as possible to 1:

  \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)]

- One for the combined model that trains the generator, which aims to minimize the minimax function by pushing 1 - D(G(z)) as close as possible to 0 (that is, pushing the predictions for fake data toward 1):

  \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

Now that we understand the equation's symbols and have a better understanding of how the minimax function works, let's look at the function again:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

where the first term is the error from the discriminator model training and the second term is the error from the combined model training.

The goal of the minimax objective function V(D, G) is to maximize D(x) on the true data distribution and minimize D(G(z)) on the fake data distribution. To achieve this, we use the log-likelihoods of D(x) and 1 - D(G(z)) in the objective function; taking the log simply ensures that the closer we are to an incorrect value, the more we are penalized.

Early in the GAN training process, the discriminator will reject fake data from the generator with high confidence, because the fake images are very different from the real training data: the generator hasn't learned yet. As we train the discriminator to maximize the probability of assigning the correct labels to both real examples and fake images from the generator, we simultaneously train the generator to minimize the discriminator's classification error on the generated fake data. The discriminator wants to maximize its objective such that D(x) is close to 1 for real data and D(G(z)) is close to 0 for fake data. The generator, on the other hand, wants to minimize its objective such that D(G(z)) is close to 1, so that the discriminator is fooled into thinking the generated G(z) is real.
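To make the two terms concrete, here is a toy evaluation of V(D, G) with single-sample "expectations" and hypothetical discriminator outputs (the numbers are illustrative, not from a trained model):

```python
import math

# V(D, G) with one sample per expectation: log D(x) + log(1 - D(G(z))).
def v(d_real, d_fake):
    return math.log(d_real) + math.log(1 - d_fake)

# Early in training, D confidently separates real (0.9) from fake (0.1),
# so the value D is maximizing is relatively high.
early = v(0.9, 0.1)        # log 0.9 + log 0.9

# At the theoretical equilibrium, D outputs 0.5 for everything,
# and V bottoms out at log(1/4).
equilibrium = v(0.5, 0.5)  # log 0.5 + log 0.5

print(round(early, 4), round(equilibrium, 4))  # -0.2107 -1.3863
```

A confident discriminator keeps V high; a perfectly fooled discriminator drives it down to log(1/4), which is exactly the equilibrium the adversarial training is pulling toward.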
We stop the training when the fake data generated by the generator is recognized by the discriminator as real data. (In the minimax function, the first expectation term is the error from the discriminator model training, and the second is the error from the combined model training.)

8.2 Evaluating GAN models

Deep learning neural network models used for classification and detection problems are trained with a loss function until convergence. A GAN generator model, on the other hand, is trained using a discriminator that learns to classify images as real or generated. As we learned in the previous section, the generator and discriminator models are trained together to maintain an equilibrium. As such, no objective loss function is used to train the GAN generator model, and there is no way to objectively assess the progress of the training or the relative or absolute quality of the model from loss alone. This means models must be evaluated based on the quality of the generated synthetic images, by manually inspecting them.

A good way to identify evaluation techniques is to review research papers and the techniques the authors used to evaluate their GANs. Tim Salimans et al. (2016) evaluated their GAN's performance by having human annotators manually judge the visual quality of the synthesized samples.3 They created a web interface and hired annotators on Amazon Mechanical Turk (MTurk) to distinguish between generated data and real data. One downside of using human annotators is that the metric varies depending on the setup of the task and the motivation of the annotators. The team also found that results changed drastically when they gave annotators feedback about their mistakes: by learning from such feedback, annotators became better able to point out the flaws in generated images, giving a more pessimistic quality assessment.
Other non-manual approaches were used by Salimans et al. and by other researchers that we will discuss in this section. In general, there is no consensus about a correct way to evaluate a given GAN generator model. This makes it challenging for researchers and practitioners to do the following:

- Select the best GAN generator model during a training run; in other words, decide when to stop training.
- Choose generated images that demonstrate the capability of a GAN generator model.
- Compare and benchmark GAN model architectures.
- Tune the model hyperparameters and configuration and compare results.

Finding quantifiable ways to understand a GAN's progress and output quality is still an active area of research. A suite of qualitative and quantitative techniques has been developed to assess the performance of a GAN model based on the quality and diversity of the generated synthetic images. Two commonly used evaluation metrics for image quality and diversity are the inception score and the Fréchet inception distance (FID). In this section, you will discover techniques for evaluating GAN models based on generated synthetic images.

3 Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen, "Improved Techniques for Training GANs," 2016, http://arxiv.org/abs/1606.03498.

8.2.1 Inception score

The inception score is based on a heuristic: realistic samples should be classifiable when passed through a network pretrained on ImageNet, such as Inception (hence the name inception score). The idea is really simple. The heuristic relies on two values:

- High predictability of the generated image: we apply a pretrained Inception classifier model to every generated image and get its softmax prediction. If the generated image is good enough, it should give us a high predictability score.
- Diverse generated samples: no single class should dominate the distribution of the generated images.

A large number of generated images are classified using the model; specifically, the probability of each image belonging to each class is predicted. The probabilities are then summarized in the score to capture both how much each image looks like a known class and how diverse the set of images is across the known classes. If both of these traits are satisfied, the inception score is large. A higher inception score indicates better-quality generated images.

8.2.2 Fréchet inception distance (FID)

The FID score was proposed and used by Martin Heusel et al. in 2017 as an improvement over the existing inception score.4 Like the inception score, the FID score uses the Inception model, this time to capture specific feature activations of an input image. These activations are calculated for a collection of real and generated images. The activations for each collection are summarized as a multivariate Gaussian, and the distance between these two distributions is then calculated using the Fréchet distance, also called the Wasserstein-2 distance.

An important note is that the FID needs a decent sample size to give good results (the suggested size is 50,000 samples). If you use too few samples, you will end up overestimating your actual FID, and the estimates will have a large variance. A lower FID score indicates more realistic images that match the statistical properties of real images.

8.2.3 Which evaluation scheme to use

Both measures (the inception score and FID) are easy to implement and calculate on batches of generated images. As such, the practice of systematically generating images and saving models during training can and should continue to be used to allow post hoc model selection. Diving deep into the inception score and FID is out of the scope of this book.
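Although a full treatment is out of scope, the mechanics of both scores fit in a short numpy sketch. The functions below assume the Inception softmax probabilities and feature activations have already been computed; the pretrained model itself is omitted, and the sample sizes and shapes here are illustrative only:

```python
import numpy as np

def inception_score(probs):
    """probs: (n_images, n_classes) softmax outputs of a pretrained classifier.
    IS = exp(mean KL(p(y|x) || p(y))): high when each prediction is confident
    (quality) and the marginal over classes is spread out (diversity)."""
    p_y = probs.mean(axis=0, keepdims=True)  # marginal class distribution
    kl = (probs * (np.log(probs) - np.log(p_y))).sum(axis=1)
    return float(np.exp(kl.mean()))

def fid(feat_real, feat_fake):
    """Fréchet distance between Gaussians fitted to feature activations:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 sqrt(C1 C2))."""
    mu1, mu2 = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    c1 = np.cov(feat_real, rowvar=False)
    c2 = np.cov(feat_fake, rowvar=False)
    # Matrix square root of c1 @ c2 via eigendecomposition.
    vals, vecs = np.linalg.eig(c1 @ c2)
    sqrt_c1c2 = (vecs * np.sqrt(np.abs(vals))) @ np.linalg.inv(vecs)
    return float(((mu1 - mu2) ** 2).sum()
                 + np.trace(c1 + c2 - 2 * sqrt_c1c2).real)

# Confident, diverse predictions over 3 classes give a score near 3;
# uniform predictions give the worst-case score of 1.
sharp = np.full((3, 3), 0.01)
np.fill_diagonal(sharp, 0.98)
print(round(inception_score(sharp), 2))         # 2.68
print(inception_score(np.full((6, 3), 1 / 3)))  # 1.0

# Identical feature collections give a (numerically) zero FID.
feats = np.random.default_rng(0).normal(size=(100, 4))
print(fid(feats, feats) < 1e-6)  # True
```

In practice you would use a battle-tested implementation (and, as noted above, tens of thousands of samples); the sketch is only meant to show what each score measures.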
As mentioned earlier, this is an active area of research, and as of the time of writing there is no consensus in the industry about the one best approach to evaluate GAN performance. Different scores assess various aspects of the image-generation process, and it is unlikely that a single score can cover all of them. The goal of this section is to expose you to some techniques that have been developed in recent years to automate the GAN evaluation process, but manual evaluation is still widely used. When you are getting started, it is a good idea to begin with manual inspection of generated images in order to evaluate and select generator models. Developing GAN models is complex enough for both beginners and experts; manual inspection can get you a long way while you refine your model implementation and test model configurations.

Other researchers are taking different approaches by using domain-specific evaluation metrics. For example, Konstantin Shmelkov and his team (2018) used two measures based on image classification, GAN-train and GAN-test, which approximate the recall (diversity) and precision (image quality) of GANs, respectively.5

4 Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter, "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium," 2017, http://arxiv.org/abs/1706.08500.

8.3 Popular GAN applications

Generative modeling has come a long way in the last five years. The field has developed to the point where it is expected that the next generation of generative models will be more comfortable creating art than humans. GANs now have the power to help solve problems in industries like healthcare, automotive, fine arts, and many others.
In this section, we will learn about some of the use cases of adversarial networks and which GAN architecture is used for each application. The goal of this section is not to implement the variations of the GAN network, but to provide some exposure to potential applications of GAN models and resources for further reading.

8.3.1 Text-to-photo synthesis

Synthesizing high-quality images from text descriptions is a challenging problem in CV. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. A GAN architecture built for this application is the stacked generative adversarial network (StackGAN).6 Zhang et al. were able to generate 256 × 256 photo-realistic images conditioned on text descriptions. StackGANs work in two stages (figure 8.14):

- Stage-I: StackGAN sketches the primitive shape and colors of the object based on the given text description, yielding low-resolution images.
- Stage-II: StackGAN takes the output of Stage-I and a text description as input and generates high-resolution images with photorealistic details. It is able to rectify defects in the images created in Stage-I and add compelling details through the refinement process.

5 Konstantin Shmelkov, Cordelia Schmid, and Karteek Alahari, "How Good Is My GAN?" 2018, http://arxiv.org/abs/1807.09499.
6 Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris Metaxas, "StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks," 2016, http://arxiv.org/abs/1612.03242.

8.3.2 Image-to-image translation (Pix2Pix GAN)

Image-to-image translation is defined as translating one representation of a scene into another, given sufficient training data.
It is inspired by the language translation analogy: just as an idea can be expressed in many different languages, a scene may be rendered as a grayscale image, an RGB image, a semantic label map, an edge sketch, and so on. In figure 8.15, image-to-image translation is demonstrated on a range of applications, such as converting street-scene segmentation labels to real images, grayscale to color images, sketches of products to product photographs, and day photographs to night ones.

Pix2Pix is a member of the GAN family designed by Phillip Isola et al. in 2016 for general-purpose image-to-image translation.7 The Pix2Pix network architecture is similar to the general GAN concept: it consists of a generator model for outputting new synthetic images that look realistic, and a discriminator model that classifies images as real (from the dataset) or fake (generated). The training process is also similar to that used for GANs: the discriminator model is updated directly, whereas the generator model is updated via the discriminator model.

7 Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros, "Image-to-Image Translation with Conditional Adversarial Networks," 2016, http://arxiv.org/abs/1611.07004.

Figure 8.14 (a) Stage-I: given text descriptions (e.g., "This bird is white with some black on its head and wings, and has a long orange beak"), StackGAN sketches rough shapes and basic colors of objects, yielding low-resolution 64 × 64 images. (b) Stage-II takes Stage-I results and text descriptions as inputs and generates high-resolution 256 × 256 images with photorealistic details. (Source: Zhang et al., 2016.)
As such, the two models are trained simultaneously in an adversarial process in which the generator seeks to better fool the discriminator and the discriminator seeks to better identify the counterfeit images.

The novel idea of Pix2Pix networks is that they learn a loss function adapted to the task and data at hand, which makes them applicable in a wide variety of settings. They are a type of conditional GAN (cGAN), where the generation of the output image is conditioned on an input source image. The discriminator is provided with both a source image and the target image and must determine whether the target is a plausible transformation of the source image.

The results of the Pix2Pix network are really promising for many image-to-image translation tasks. Visit https://affinelayer.com/pixsrv to play more with the Pix2Pix network; this site has an interactive demo created by Isola and team in which you can convert sketched edges of cats or products to photos, and façades to real images.

Figure 8.15 Examples of Pix2Pix applications taken from the original paper: black-and-white input to color output, edge input to photo output, and day input to night output.

8.3.3 Image super-resolution GAN (SRGAN)

A certain type of GAN model can be used to convert low-resolution images into high-resolution images. This type is called a super-resolution generative adversarial network (SRGAN) and was introduced by Christian Ledig et al. in 2016.8 Figure 8.16 shows how SRGAN is able to create a very high-resolution image.

8.3.4 Ready to get your hands dirty?

GAN models have huge potential for creating and imagining new realities that have never existed before. The applications mentioned in this chapter are just a few examples to give you an idea of what GANs can do today. New applications come out every few weeks and are worth trying.
If you are interested in getting your hands dirty with more GAN applications, visit the amazing Keras-GAN repository at https://github.com/eriklindernoren/Keras-GAN, maintained by Erik Linder-Norén. It includes many GAN models created using Keras and is an excellent resource for Keras examples. Much of the code in this chapter was inspired by and adapted from this repository.

8.4 Project: Building your own GAN

In this project, you'll build a GAN that uses convolutional layers in the generator and discriminator, called a deep convolutional GAN (DCGAN) for short. The DCGAN architecture was first explored by Alec Radford et al. (2016), as discussed in section 8.1.1, and has seen impressive results in generating new images. You can follow along with the implementation in this chapter or run the code in the project notebook available with this book's downloadable code.

8 Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, et al., "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network," 2016, http://arxiv.org/abs/1609.04802.

Figure 8.16 SRGAN converting a low-resolution image (original) to a high-resolution image (SRGAN output). (Source: Ledig et al., 2016.)

In this project, you'll be training the DCGAN on the Fashion-MNIST dataset (https://github.com/zalandoresearch/fashion-mnist). Fashion-MNIST consists of 60,000 grayscale training images and a test set of 10,000 images (figure 8.17). Each 28 × 28 grayscale image is associated with a label from 10 classes. Fashion-MNIST is intended to serve as a direct replacement for the original MNIST dataset for benchmarking machine learning algorithms.
I chose grayscale images for this project because training convolutional networks on one-channel grayscale images requires less computational power than training on three-channel color images, which makes it easier for you to train on a personal computer without a GPU. The dataset is broken into 10 fashion categories. The class labels are as follows:

Label  Description
0      T-shirt/top
1      Trouser
2      Pullover
3      Dress
4      Coat
5      Sandal
6      Shirt
7      Sneaker
8      Bag
9      Ankle boot

Figure 8.17 Fashion-MNIST dataset examples

STEP 1: IMPORT LIBRARIES

As always, the first thing to do is to import all the libraries we use in this project:

    from __future__ import print_function, division
    from keras.datasets import fashion_mnist
    from keras.layers import Input, Dense, Reshape, Flatten, Dropout
    from keras.layers import BatchNormalization, Activation, ZeroPadding2D
    from keras.layers.advanced_activations import LeakyReLU
    from keras.layers.convolutional import UpSampling2D, Conv2D
    from keras.models import Sequential, Model
    from keras.optimizers import Adam
    import numpy as np
    import matplotlib.pyplot as plt

STEP 2: DOWNLOAD AND VISUALIZE THE DATASET

Keras makes the Fashion-MNIST dataset available for us to download with just one command: fashion_mnist.load_data(). Here, we download the dataset and rescale the training set to the range -1 to 1 to allow the model to converge faster (see the "Data normalization" section in chapter 4 for more details on image scaling):

    (training_data, _), (_, _) = fashion_mnist.load_data()
    X_train = training_data / 127.5 - 1.
    X_train = np.expand_dims(X_train, axis=3)

Just for the fun of it, let's visualize the image matrix (figure 8.18):

    def visualize_input(img, ax):
        ax.imshow(img, cmap='gray')
        width, height = img.shape
        thresh = img.max() / 2.5
        for x in range(width):
            for y in range(height):
                ax.annotate(str(round(img[x][y], 2)), xy=(y, x),
                            horizontalalignment='center',
                            verticalalignment='center',
                            color='white' if img[x][y] < thresh else 'black')

            if loss_value > max_loss:
                break
            print('...Loss value at', i, ':', loss_value)
            x += step * grad_values
        return x

Now we are ready to develop our DeepDream algorithm. The process is as follows:

1  Load the input image.
2  Define the number of scales, from smallest to largest.
3  Resize the input image to the smallest scale.
4  For each scale, starting with the smallest, apply the following:
   - The gradient ascent function
   - Upscaling to the next scale
   - Re-injection of the details that were lost during the upscale process
5  Stop the process when we are back to the original size.

First, we set the algorithm parameters:

    step = 0.01         # Gradient ascent step size
    num_octave = 3      # Number of scales at which we run gradient ascent
    octave_scale = 1.4  # Size ratio between scales
    iterations = 20     # Number of iterations
    max_loss = 10.

Note that playing with these hyperparameters will allow you to achieve new effects. Let's define the input image that we want to use to create our dream. For this example, I downloaded an image of the Golden Gate Bridge in San Francisco (see figure 9.14); feel free to use an image of your own.
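Before looking at the full loop, here is how those parameters translate into a schedule of image scales, using a hypothetical 400 × 600 input (the actual input is whatever image you load):

```python
# Computes the successive octave shapes for a hypothetical 400 x 600 image,
# using the num_octave and octave_scale parameter values defined above.
octave_scale = 1.4
num_octave = 3

original_shape = (400, 600)
successive_shapes = [original_shape]
for i in range(1, num_octave):
    shape = tuple(int(dim / (octave_scale ** i)) for dim in original_shape)
    successive_shapes.append(shape)
successive_shapes = successive_shapes[::-1]  # smallest scale first

# Gradient ascent runs at each scale, from the smallest up to the original.
print(successive_shapes)  # [(204, 306), (285, 428), (400, 600)]
```

Each octave divides both dimensions by another factor of 1.4, so the image roughly halves in area at every step down.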
Figure 9.15 shows the DeepDream output image.Gradient ascent step size Number of scales at which we run gradient ascent Size ratio between scales Number of iterations Figure 9.14 Example input image" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"391 DeepDream Here’s the Keras code: base_image_path = 'input.jpg' img = preprocess_image(base_image_path) original_shape = img.shape[1:3] successive_shapes = [original_shape] for i in range(1, num_octave): shape = tuple([int(dim / (octave_scale ** i)) for dim in original_shape]) successive_shapes.append(shape) successive_shapes = successive_shapes[::-1] original_img = np.copy(img) shrunk_original_img = resize_img(img, successive_shapes[0]) for shape in successive_shapes: print('Processing image shape' , shape) img = resize_img(img, shape) img = gradient_ascent(img, iterations=iterations, step=step, max_loss=max_loss) upscaled_shrunk_original_img = resize_img(shrunk_original_img, shape) same_size_original = resize_img(original_img, shape) lost_detail = same_size_original - upscaled_shrunk_original_img img += lost_detail shrunk_original_img = resize_img(original_img, shape) phil_img = deprocess_image(np.copy(img)) save_img( 'deepdream_output/dream_at_scale_' + str(shape) + '.png', phil_img) final_img = deprocess_image(np.copy(img)) save_img( 'final_dream.png' , final_img) Figure 9.15 DeepDream output Defines the path to the input image Saves the result to disk" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"392 CHAPTER 9DeepDream and neural style transfer 9.3 Neural style transfer So far, we have learned how to visualize specific filters in a network. We also learned how to manipulate features of an input image to create dream-like hallucinogenic images using the DeepDream algorithm. In this section, we explore a new type of artistic image that ConvNets can create using neural style transfer : the technique of transfer- ring the style from one image to another. 
The goal of the neural style transfer algorithm is to take the style of an image (style image) and apply it to the content of another image (content image). Style in this context means texture, colors, and other visual patterns in the image, and content is the higher-level macrostructure of the image. The result is a combined image that contains both the content of the content image and the style of the style image. For example, let's look at figure 9.16. The objects in the content image (like dolphins, fish, and plants) are kept in the combined image but with the specific texture of the style image (blue and yellow brushstrokes).

Figure 9.16 Example of neural style transfer

The idea of neural style transfer was introduced by Leon A. Gatys et al. in 2015.⁴ The concept of style transfer, which is tightly related to texture generation, had a long history in the image-processing community prior to that; but as it turns out, the DL-based implementations of style transfer offer results unparalleled by what had been previously achieved with traditional CV techniques, and they triggered an amazing renaissance in creative CV applications.

⁴ Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge, "A Neural Algorithm of Artistic Style," 2015, http://arxiv.org/abs/1508.06576.

Among the different neural network techniques that create art (like DeepDream), style transfer is the closest to my heart. DeepDream can create cool hallucination-like images, but it can be disturbing sometimes. Plus, as a DL engineer, it is not easy to intentionally create a specific piece of art that you have in your mind. Style transfer, on the other hand, lets an artistic engineer mix the content they want from one image with their favorite painting to create something they have imagined. It is a really cool technique that, in the hands of an artist engineer, can be used to create beautiful art on par with that produced by professional painters.

The main idea behind implementing style transfer is the same as the one central to all DL algorithms, as explained in chapter 2: we first define a loss function to define what we aim to achieve, and then we work on optimizing this function. In style-transfer problems, we know what we want to achieve: conserving the content of the original image while adopting the style of the reference image. Now all we need to do is to define both content and style in a mathematical representation, and then define an appropriate loss function to minimize.

The key notion in defining the loss function is to remember that we want to preserve content from one image and style from another:

Content loss — Calculate the loss between the content image and the combined image. Minimizing this loss means the combined image will have more content from the original image.

Style loss — Calculate the loss in style between the style image and the combined image. Minimizing this loss means the combined image will have style similar to the style image.

Noise loss — This is called the total variation loss. It measures the noise in the combined image. Minimizing this loss creates an image with higher spatial smoothness.

Here is the equation of the total loss:

total_loss = [style(style_image) - style(combined_image)]
           + [content(original_image) - content(combined_image)]
           + total_variation_loss

NOTE The original style transfer paper by Gatys et al. (2015) does not include the total variation loss.
After experimentation, the researchers found that the network generated better, more aesthetically pleasing style transfers when they encouraged spatial smoothness across the output image.

Now that we have a big-picture idea of how the neural style transfer algorithm works, we are going to dive deeper into each type of loss to see how it is derived and coded in Keras. We will then understand how to train a neural style transfer network to minimize the total_loss function that we just defined.

9.3.1 Content loss

The content loss measures how different two images are in terms of subject matter and the overall placement of content. In other words, two images that contain similar scenes should have a smaller loss value than two images that contain completely different scenes. Image subject matter and content placement are measured by scoring images based on higher-level feature representations in the ConvNet, such as dolphins, plants, and water. Identifying these features is the whole premise behind deep neural networks: these networks are trained to extract the content of an image and learn the higher-level features at the deeper layers by recognizing patterns in simpler features from the previous layers. With that said, we need a deep neural network that has been trained to extract the features of the content image so that we can tap into a deep layer of the network to extract high-level features.

To calculate the content loss, we measure the mean squared error between the output for the content image and the combined image.
By trying to minimize this error, the network tries to add more content to the combined image to make it more and more similar to the original content image:

Content loss = Σ [content(original_image) – content(combined_image)]²

Minimizing the content loss function ensures that we preserve the content of the original image and re-create it in the combined image.

To calculate the content loss, we feed both the content and style images into a pretrained network and select a deep layer from which to extract high-level features. We then calculate the mean squared error between both images. Let's see how we calculate the content loss between two images in Keras.

NOTE The code snippets in this section are adapted from the neural style transfer example in the official Keras documentation (https://keras.io/examples/generative/neural_style_transfer/). If you want to re-create this project and experiment with different parameters, I suggest that you work from Keras' GitHub repository as a starting point (http://mng.bz/GVzv) or run the adapted code available for download with this book.

First, we define two Keras variables to hold the content image and style image, and we create a placeholder tensor that will contain the generated combined image:

content_image_path = '/path_to_images/content_image.jpg'   # paths to the content and style images
style_image_path = '/path_to_images/style_image.jpg'
content_image = K.variable(preprocess_image(content_image_path))   # tensor representations of our images
style_image = K.variable(preprocess_image(style_image_path))
combined_image = K.placeholder((1, img_nrows, img_ncols, 3))

Now, we concatenate the three images into one input tensor and feed it to the VGG19 neural network. Note that when we load the VGG19 model, we set the include_top parameter to False because we don't need to include the classification fully connected layers for this task.
This is because we are only interested in the feature-extraction part of the network:

input_tensor = K.concatenate([content_image, style_image, combined_image],
                             axis=0)   # combines the three images into a single Keras tensor
model = vgg19.VGG19(input_tensor=input_tensor, weights='imagenet',
                    include_top=False)   # builds the VGG19 network with our three images as input, loaded with pretrained ImageNet weights

Similar to what we did in section 9.1, we now select the network layer we want to use to calculate the content loss. We want to choose a deep layer to make sure it contains higher-level features of the content image. If you choose an earlier layer of the network (like block 1 or block 2), the network won't be able to transfer the full content from the original image because the earlier layers extract low-level features like lines, edges, and blobs. In this example, we choose the second convolutional layer in block 5 (block5_conv2):

outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])   # symbolic outputs of each key layer (we gave them unique names)
layer_features = outputs_dict['block5_conv2']

Now we can extract the features from the layer that we chose from the input tensor:

content_image_features = layer_features[0, :, :, :]
combined_features = layer_features[2, :, :, :]

Finally, we create the content_loss function that calculates the mean squared error between the content image and the combined image.
We create an auxiliary loss function designed to preserve the features of the content_image and transfer them to the combined_image:

def content_loss(content_image, combined_image):
    return K.sum(K.square(combined_image - content_image))   # sum of squared errors between the two feature tensors

content_loss = content_weight * content_loss(content_image_features,
                                             combined_features)   # content_loss is scaled by a weighting parameter

Weighting parameters

In this code implementation, you will see three weighting parameters: content_weight, style_weight, and total_variation_weight. These are scaling parameters that we choose and pass in as inputs. They describe the importance of content, style, and noise in our output image. For example, if we set style_weight = 100 and content_weight = 1, we are implying that we are willing to sacrifice a bit of the content for a more artistic style transfer. Similarly, a higher total_variation_weight implies higher spatial smoothness.

9.3.2 Style loss

As we mentioned before, style in this context means texture, colors, and other visual patterns in the image.

MULTIPLE LAYERS TO REPRESENT STYLE FEATURES

Defining the style loss is a little more challenging than defining the content loss. For the content loss, we cared only about the higher-level features that are extracted at the deeper levels, so we only needed to choose one layer from the VGG19 network to preserve its features. For the style loss, on the other hand, we want to choose multiple layers because we want to obtain a multi-scale representation of the image style.
We want to capture the image style at lower-level layers, mid-level layers, and higher-level layers. This allows us to capture the texture and style of our style image and exclude the global arrangement of objects in the content image.

GRAM MATRIX TO MEASURE JOINTLY ACTIVATED FEATURE MAPS

The Gram matrix numerically measures how much two feature maps are jointly activated. Our goal is to build a loss function that captures the style and texture of multiple layers in a CNN. To do that, we need to compute the correlations between the activation layers in our CNN. This correlation can be captured by computing the Gram matrix (the feature-wise outer product) between the activations. To calculate the Gram matrix of a feature map, we flatten the feature map and calculate the dot product:

def gram_matrix(x):
    features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    gram = K.dot(features, K.transpose(features))
    return gram

Let's build the style_loss function. It calculates the Gram matrix for a set of layers throughout the network for both the style and combined images.
It then compares the similarities of style and texture between them by calculating the sum of squared errors:

def style_loss(style, combined):
    S = gram_matrix(style)
    C = gram_matrix(combined)
    channels = 3
    size = img_nrows * img_ncols
    return K.sum(K.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))

In this example, we are going to calculate the style loss over five layers: the first convolutional layer in each of the five blocks of the VGG19 network (note that if you change the feature layers, the network will preserve different styles):

feature_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1',
                  'block4_conv1', 'block5_conv1']

Finally, we loop through these feature_layers to calculate the style loss. Note that we accumulate the result in a separate variable (style_loss_value) so that we don't overwrite the style_loss function while we are still calling it inside the loop:

style_loss_value = 0.0
for layer_name in feature_layers:
    layer_features = outputs_dict[layer_name]
    style_reference_features = layer_features[1, :, :, :]
    combination_features = layer_features[2, :, :, :]
    sl = style_loss(style_reference_features, combination_features)
    style_loss_value += (style_weight / len(feature_layers)) * sl   # scales the style loss by a weighting parameter and the number of layers
style_loss = style_loss_value

During training, the network works on minimizing the loss between the style of the output image (combined image) and the style of the input style image. This forces the style of the combined image to correlate with the style image.

9.3.3 Total variation loss

The total variation loss is the measure of noise in the combined image. The network's goal is to minimize this loss function in order to minimize the noise in the output image. Let's create the total_variation_loss function that calculates how noisy an image is. This is what we are going to do:

1 Shift the image one pixel to the right, and calculate the sum of the squared error between the shifted image and the original.
2 Repeat step 1, this time shifting the image one pixel down.
The sum of these two terms (a and b) is the total variation loss:

def total_variation_loss(x):
    a = K.square(
        x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, 1:, :img_ncols - 1, :])
    b = K.square(
        x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, :img_nrows - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))

tv_loss = total_variation_weight * total_variation_loss(combined_image)   # scales the total variation loss by the weighting parameter

Finally, we calculate the overall loss of our problem, which is the sum of the content, style, and total variation losses:

total_loss = content_loss + style_loss + tv_loss

9.3.4 Network training

Now that we have defined the total loss function for our problem, we can run the GD optimizer to minimize this loss function. First we create an Evaluator class that contains methods that calculate the overall loss, as described previously, and the gradients of the loss with respect to the input image:

class Evaluator(object):
    def __init__(self):
        self.loss_value = None
        self.grad_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

evaluator = Evaluator()

Next, we use the methods in our evaluator class in the training process.
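The Evaluator has this slightly odd structure because the SciPy optimizer we call next requests the loss and the gradients through two separate callbacks, while eval_loss_and_grads computes both in a single pass. The class computes them once, caches the gradients, and hands them back on the second call. Here is a self-contained sketch of that caching pattern on a toy quadratic loss (the quadratic is a stand-in for the style-transfer loss, not the book's eval_loss_and_grads):

```python
import numpy as np

def toy_loss_and_grads(x):
    # stand-in for eval_loss_and_grads: f(x) = sum(x^2), grad = 2x
    return float(np.sum(x ** 2)), 2.0 * x

class Evaluator(object):
    """Computes loss and grads in one pass; serves them via two calls."""
    def __init__(self):
        self.loss_value = None
        self.grad_values = None

    def loss(self, x):
        assert self.loss_value is None        # loss must be requested first
        loss_value, grad_values = toy_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None    # grads served from the cache
        grad_values = np.copy(self.grad_values)
        self.loss_value = None                # reset for the next iteration
        self.grad_values = None
        return grad_values

evaluator = Evaluator()
x = np.array([1.0, -2.0])
loss = evaluator.loss(x)       # computes loss AND caches gradients
grads = evaluator.grads(x)     # returns the cached gradients, then resets
```

The asserts enforce the calling order the optimizer follows: loss first, then grads, once per iteration.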
To minimize the total loss function, we use the SciPy-based (https://scipy.org) optimization method scipy.optimize.fmin_l_bfgs_b:

from scipy.optimize import fmin_l_bfgs_b

iterations = 1000   # trains for 1,000 iterations

x = preprocess_image(content_image_path)   # the training process is initialized with content_image as the first iteration of the combined image

for i in range(iterations):
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                     fprime=evaluator.grads,
                                     maxfun=20)   # runs SciPy-based (L-BFGS) optimization over the pixels of the generated image to minimize total_loss
    img = deprocess_image(x.copy())
    fname = result_prefix + '_at_iteration_%d.png' % i
    save_img(fname, img)   # saves the current generated image

TIP When training your own neural style transfer network, keep in mind that content images that do not require high levels of detail work better and are known to create visually appealing or recognizable artistic images. In addition, style images that contain a lot of texture are better than flat style images: flat images (like a white background) will not produce aesthetically appealing results because there is not much texture to transfer.

Summary

- CNNs learn the information in the training set through successive filters. Each layer of the network deals with features at a different level of abstraction, so the complexity of the features generated depends on the layer's location in the network. Earlier layers learn low-level features; the deeper the layer is in the network, the more identifiable the extracted features are.

- Once a network is trained, we can run it in reverse to adjust the original image slightly so that a given output neuron (such as the one for faces or certain animals) yields a higher confidence score.
This technique can be used for visualizations to better understand the emergent structure of the neural network, and it is the basis for the DeepDream concept.

- DeepDream processes the input image at different scales called octaves. At each scale, we pass the image through the DeepDream algorithm, upscale it for the next octave, and re-inject the image details that were lost during upscaling.

- The DeepDream algorithm is similar to the filter-visualization algorithm. It runs the ConvNet in reverse to generate output based on the representations extracted by the network.

- DeepDream differs from filter visualization in that it needs an input image and maximizes an entire layer, not specific filters within the layer. This allows DeepDream to mix together a large number of features at once.

- DeepDream is not specific to images; it can be used for speech, music, and more.

- Neural style transfer is a technique that trains the network to preserve the style (texture, color, patterns) of the style image and the content of the content image. The network then creates a new combined image that combines the style of the style image and the content of the content image.

- Intuitively, if we minimize the content, style, and variation losses, we get a new image that contains low variance in content and style from the content and style images, respectively, and low noise.

- Different values for the content weight, style weight, and total variation weight will give you different results.

10 Visual embeddings

BY RATNESH KUMAR

Obtaining meaningful relationships between images is a vital building block for many applications that touch our lives every day, such as face recognition and image search algorithms.
This chapter covers
- Expressing similarity between images via loss functions
- Training CNNs to achieve a desired embedding function with high accuracy
- Using visual embeddings in real-world applications

Ratnesh Kumar obtained his PhD from the STARS team at Inria, France, in 2014. While working on his PhD, he focused on problems in video understanding: video segmentation and multiple object tracking. He also has a Bachelor of Engineering from Manipal University, India, and a Master of Science from the University of Florida at Gainesville. He has co-authored several scientific publications on learning visual embeddings for re-identifying objects in camera networks.

To tackle such problems, we need to build an algorithm that can extract relevant features from images and subsequently compare them using their corresponding features. In the previous chapters, we learned that we can use convolutional neural networks (CNNs) to extract meaningful features from an image. This chapter will use our understanding of CNNs to jointly train a visual embedding layer. In this chapter's context, visual embedding refers to the last fully connected layer (prior to a loss layer) appended to a CNN. Joint training refers to training both the embedding layer and the CNN parameters together.

This chapter explores the nuts and bolts of training and using visual embeddings for large-scale, image-based query-retrieval systems, such as the applications of visual embeddings shown in figure 10.1. To perform this task, we first need to project (embed) our database of images onto a vector space (embedding). This way, comparisons between images can be performed by measuring their pairwise distances in this embedding space. This is the high-level idea of visual embedding systems.
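To see why "extract features, then compare" beats comparing raw pixels, consider a toy numpy sketch (hypothetical 1-D "images"; the intensity histogram is a hand-crafted stand-in for learned CNN features): shifting a pattern by two pixels makes the raw pixel distance large, while the histogram feature does not change at all.

```python
import numpy as np

# Two "images": the same bright blob, shifted by two pixels
img_a = np.array([0, 0, 9, 9, 9, 0, 0, 0, 0, 0], dtype=float)
img_b = np.roll(img_a, 2)    # same content, different position

# Naive comparison: pixel-wise Euclidean distance (large despite same content)
pixel_dist = np.linalg.norm(img_a - img_b)

def feature(img):
    # shift-invariant stand-in for a learned feature extractor
    hist, _ = np.histogram(img, bins=4, range=(0, 10))
    return hist.astype(float)

# Feature-based comparison: identical features, zero distance
feature_dist = np.linalg.norm(feature(img_a) - feature(img_b))
print(pixel_dist, feature_dist)
```

A learned embedding plays the role of `feature` here, but captures far richer invariances than a histogram can.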
DEFINITION An embedding is a vector space, typically of lower dimension than the input space, that preserves relative dissimilarity (in the input space). We use the terms vector space and embedding space interchangeably. In the context of this chapter, the last fully connected layer of a trained CNN is this vector (embedding) space. As an example, a fully connected layer of 128 neurons corresponds to a vector space of 128 dimensions.

For a reliable comparison among images, the embedding function needs to capture a desired input similarity measure. This embedding function can be learned using various approaches; one of the popular ways is to use a deep CNN. Figure 10.2 illustrates the high-level process of using CNNs to create an embedding.

Figure 10.1 Example applications we encounter in everyday life when working with images: a machine comparing two images (left); querying the database to find images similar to the input image (right). Comparing two images is a non-trivial task and is key to many applications relating to meaningful image search.

In the following section, we explore some example applications of using visual embeddings for large-scale query-retrieval systems. Then we will dive deeper into the different components of visual embedding systems: loss functions, mining informative data, and training and testing the embedding network. Subsequently, we will use these concepts to solve our chapter project on building visual embedding–based query-retrieval systems. Thereafter, we will explore approaches to push the boundaries of the project's network accuracy. By the end of this chapter, you will be able to train a CNN to obtain a reliable and meaningful embedding and use it in real-world applications.
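The definition can be made concrete with a toy numpy sketch (hypothetical 128-dimensional embeddings, not produced by a real CNN): if the embedding preserves relative dissimilarity, two views of the same object map to nearby vectors, and comparison reduces to a distance computation.

```python
import numpy as np

rng = np.random.default_rng(0)
F = 128    # embedding dimension, e.g. a 128-neuron fully connected layer

# Hypothetical embeddings: two views of the same object, plus another object
anchor = rng.normal(size=F)
same_object = anchor + 0.05 * rng.normal(size=F)   # small perturbation
other_object = rng.normal(size=F)                  # unrelated vector

def distance(u, v):
    # Euclidean distance in the embedding space
    return float(np.linalg.norm(u - v))

d_same = distance(anchor, same_object)
d_other = distance(anchor, other_object)
print(d_same < d_other)
```

The rest of the chapter is about how to train a CNN so that its last fully connected layer actually behaves this way on real images.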
10.1 Applications of visual embeddings

Let's look at some practical day-to-day information-retrieval algorithms that use the concept of visual embeddings. Some of the prominent applications for retrieving similar images given an input query include face recognition (FR), image recommendation, and object re-identification systems.

10.1.1 Face recognition

FR is about automatically identifying or tagging an image with the exact identities of the persons in the image. Day-to-day applications include searching for celebrities on the web, auto-tagging friends and family in images, and many more. Recognition is a form of fine-grained classification. The Handbook of Face Recognition [1] categorizes two modes of an FR system (figure 10.3 compares them):

Face identification — One-to-many matches that compare a query face image against all the template images in the database to determine the identity of the query face. For example, city authorities can check a watch list to match a query to a list of suspects (one-to-few matches). Another fun example is automatically tagging users in photos they appear in, a feature implemented by major social network platforms.

Face verification — One-to-one match that compares a query face image against a template face image whose identity is being claimed.

Figure 10.2 Using CNNs to obtain an embedding from an input image

10.1.2 Image recommendation systems

In this task, the user seeks to find similar images with respect to a given query image. Shopping websites provide product suggestions (via images) based on the selection of a particular product, such as showing all kinds of shoes that are similar to the ones a user selected. Figure 10.4 shows an example in the context of apparel search. Note that the similarity between two images varies depending on the context of choosing the similarity measure.
The embedding of an image differs based on the type of similarity measure chosen. Some examples of similarity measures are color similarity and semantic similarity:

Color similarity — The retrieved images have similar colors, as shown in figure 10.5. This measure is used in applications like retrieving similarly colored paintings, similarly colored shoes (not necessarily of the same style), and many more.

Semantic similarity — The retrieved images have the same semantic properties, as shown in figure 10.6. In our earlier example of shoe retrieval, the user expects to see suggestions of shoes having the same semantics as high-heeled shoes. You can be creative and decide to incorporate color similarity with semantics for more meaningful suggestions.

Figure 10.3 Face-verification and face-identification systems: an example of a face-verification system comparing one-on-one matches to identify whether or not the image is Sundar (left); an example of a face-identification system comparing one-to-many matches to identify all images (right). Despite the objective-level difference between verification and identification, they both rely on a good embedding function that captures meaningful differences between faces. (The figure was inspired by [2].)

Figure 10.4 Apparel search. The leftmost image in each row is the query image, and the subsequent columns show various apparel that look similar to it. (Images in this figure are taken from [3].)

Figure 10.5 Similarity example where cars are differentiated by their color. Notice that the similarly colored cars are closer in this illustrative two-dimensional embedding space.
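The color-similarity case of figure 10.5 can be sketched with a toy numpy example: represent each "image" by its mean RGB value (a hypothetical hand-crafted embedding, far simpler than a learned one) and compare with Euclidean distance, so similarly colored cars end up close together.

```python
import numpy as np

# Hypothetical mean-RGB "embeddings" for car images (channel values in 0-255)
red_car = np.array([200.0, 30.0, 40.0])
dark_red_car = np.array([160.0, 25.0, 35.0])
white_car = np.array([240.0, 240.0, 235.0])

def color_distance(u, v):
    # Euclidean distance in this 3-D color-embedding space
    return float(np.linalg.norm(u - v))

# The two red cars sit much closer together than red and white
d_red_pair = color_distance(red_car, dark_red_car)
d_red_white = color_distance(red_car, white_car)
print(d_red_pair < d_red_white)
```

A semantic-similarity embedding would instead place all sports cars near each other regardless of color, which is exactly why the choice of similarity measure shapes the embedding.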
10.1.3 Object re-identification

An example of object re-identification is security camera networks (CCTV monitoring), as depicted in figure 10.7. The security operator may be interested in querying a particular person and finding out their location in all the cameras. The system is required to identify a moving object in one camera and then re-identify the object across cameras to establish a consistent identity.

Figure 10.6 Example of identity embeddings. Cars with similar features are projected closer to each other in the embedding space.

Figure 10.7 Multi-camera dataset showing the presence of a person (queried) across cameras. (Source: [4].)

This problem is commonly known as person re-identification. Notice that it is similar to a face-verification system, where we are interested in capturing whether any two people in separate cameras are the same, without needing to know exactly who a person is.

One of the central aspects of all these applications is the reliance on an embedding function that captures and preserves the input's similarity (and dissimilarity) in the output embedding space. In the following sections, we will delve into designing appropriate loss functions and sampling (mining) informative data points to guide the training of a CNN toward a high-quality embedding function. But before we jump into the details of creating an embedding, let's answer this question: why do we need to embed? Can't we just use the images directly? Let's review the bottlenecks of this naive approach of directly using image pixel values as an embedding.
The embedding dimensionality in this approach (assuming all images are high definition) would be 1920 × 1080; represented in a computer's memory in double precision, this is computationally prohibitive for both storage and retrieval given any meaningful time requirements. Moreover, most embeddings need to be learned in a supervised setting, as the a priori semantics for comparison are not known (that is where we unleash the power of CNNs to extract meaningful and relevant semantics). Any learning algorithm on such a high-dimensional embedding space will suffer from the curse of dimensionality: as the dimensionality increases, the volume of the space increases so fast that the available data becomes sparse.

The geometry and data distribution of natural data are non-uniform and concentrate around low-dimensional structures. Hence, using the image size as the data dimensionality is overkill (let alone the exorbitant computational complexity and redundancy). Therefore, our goal in learning an embedding is twofold: learning the required semantics for comparison, and achieving a low(er) dimensionality of the embedding space.

10.2 Learning embedding

Learning an embedding function involves defining a desired criterion to measure similarity; it can be based on color, the semantics of the objects present in an image, or purely data-driven in a supervised form. Since knowing the right semantics (for comparing images) a priori is difficult, supervised learning is more popular. Instead of hand-crafting similarity-criteria features, in this chapter we will focus on the supervised, data-driven learning of embeddings, wherein we assume we are given a training set.

Figure 10.8 depicts a high-level architecture to learn an embedding using a deep CNN. The process to learn an embedding is straightforward:

1 Choose a CNN architecture. Any suitable CNN architecture can be used. In practice, the last fully connected layer is used to determine the embedding.
Hence the size of this fully connected layer determines the dimension of the embedding vector space. Depending on the size of the training dataset, it may be prudent to use pretraining with, for example, the ImageNet dataset.

2 Choose a loss function. Popular loss functions are contrastive and triplet loss. (These are explained in section 10.3.)

3 Choose a dataset sampling (mining) method. Naively feeding all possible samples from the dataset is wasteful and prohibitive. Hence we need to resort to sampling (mining) informative data points to train our CNN. We will learn various sampling techniques in section 10.4.

4 During test time, the last fully connected layer acts as the embedding of the corresponding image.

Now that we have reviewed the big picture of the training and inference process for learning an embedding, we will delve into defining useful loss functions to express our desired embedding objectives.

10.3 Loss functions

We learned in chapter 2 that optimization problems require the definition of a loss function to minimize. Learning an embedding is no different from any other DL problem: we first define a loss function that we need to minimize, and then we train a neural network to choose the parameter (weight) values that yield the minimum error value. In this section, we will look more deeply at key embedding loss functions: cross-entropy, contrastive, and triplet. First we will formalize the problem setup. Then we will explore the different loss functions and their mathematical formulas.

Figure 10.8 An illustration of the learning machinery (top) and the (test) process outline (bottom).
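At test time, step 4 turns retrieval into nearest-neighbor search over embedding vectors. A minimal numpy sketch (hypothetical precomputed embeddings standing in for last-layer CNN outputs):

```python
import numpy as np

rng = np.random.default_rng(1)
N, F = 100, 16    # database size and embedding dimension (hypothetical)

# Hypothetical embeddings of N database images, one row per image
database = rng.normal(size=(N, F))

# A query whose embedding is a slightly perturbed copy of image 42's
query = database[42] + 0.01 * rng.normal(size=F)

# Rank the whole database by Euclidean distance to the query
distances = np.linalg.norm(database - query, axis=1)
ranking = np.argsort(distances)

top3 = ranking[:3]    # indices of the three most similar database images
print(top3[0])
```

Because the embedding is low-dimensional, this ranking stays cheap even for large databases, which is exactly the efficiency argument made above.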
10.3.1 Problem setup and formalization

To understand loss functions for learning embedding and eventually train a CNN (for this loss), let's first formalize the input ingredients and desired output characteristics. This formalization will be used in later sections to understand and categorize various loss functions in a succinct manner. For the purposes of this conversation, our dataset can be represented as follows:

$\chi = \{(x_i, y_i)\}_{i=1}^{N}$

N is the number of training images, $x_i$ is the input image, and $y_i$ is its corresponding label. Our objective is to create an embedding

$f(x; \theta): \mathbb{R}^D \rightarrow \mathbb{R}^F$

to map images in $\mathbb{R}^D$ onto a feature (embedding) space in $\mathbb{R}^F$ such that images of similar identity are metrically close in this feature space (and vice versa for images of dissimilar identities):

$\theta^* = \arg\min_{\theta} \mathcal{L}(f(\theta; \chi))$

where $\theta$ is the parameter set of the learning function. Let $D(x_i, x_j): \mathbb{R}^F \times \mathbb{R}^F \rightarrow \mathbb{R}$ be the metric measuring the distance of images $x_i$ and $x_j$ in the embedding space. For simplicity, we drop the input labels and denote $D(x_i, x_j)$ as $D_{ij}$. The value $y_{ij} = 1$ indicates that samples $i$ and $j$ belong to the same class, and $y_{ij} = 0$ indicates samples of different classes.

Once we train an embedding network for its optimal parameters, we desire the learned function to have the following characteristics:

- An embedding should be invariant to viewpoints, illumination, and shape changes in the object.
- From a practical deployment standpoint, computation of the embedding and the ranking should be efficient. This calls for a low-dimension vector space (embedding). The bigger this space is, the more computation is required to compare any two images, which in turn affects the time complexity.

Popular choices for learning an embedding are cross-entropy loss, contrastive loss, and triplet loss. The subsequent sections will introduce and formalize these losses.
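Before moving to the losses, here is a small NumPy sketch of the two ingredients just defined: the distance matrix $D_{ij}$ (using Euclidean distance) and the same-class indicator $y_{ij}$. The embeddings and labels below are toy values invented for the example.

```python
import numpy as np

def pairwise_distances(E):
    """Euclidean distance matrix D, where D[i, j] = ||E[i] - E[j]||_2
    for embeddings E of shape (num_images, F)."""
    diff = E[:, None, :] - E[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Three toy embeddings in a 2-D embedding space (F = 2).
E = np.array([[0.0, 0.0],
              [3.0, 4.0],
              [0.0, 1.0]])
D = pairwise_distances(E)
print(D[0, 1])  # 5.0  (a 3-4-5 triangle)
print(D[0, 2])  # 1.0

# y_ij = 1 for same-class pairs, 0 otherwise:
labels = np.array([0, 1, 0])
y = (labels[:, None] == labels[None, :]).astype(int)
print(y[0, 2], y[0, 1])  # 1 0
```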
10.3.2 Cross-entropy loss

Learning an embedding can also be formulated as a fine-grained classification problem, and the corresponding CNN can be trained using the popular cross-entropy loss (explained in detail in chapter 2). The following equation expresses cross-entropy loss, where $p(y_{ik} \mid f(x_i; \theta))$ represents the posterior probability of class $k$ for image $x_i$. In the CNN literature, softmax loss implies a softmax layer trained in a discriminative regime using cross-entropy loss:

$\mathcal{L}(\chi) = -\sum_{i=1}^{N} \sum_{k=1}^{C} y_{ik} \log p(y_{ik} \mid f(x_i; \theta))$

During training, a fully connected (embedding) layer is added prior to the loss layer. Each identity is considered a separate category, and the number of categories is equal to the number of identities in the training set. Once the network is trained using classification loss, the final classification layer is stripped off, and an embedding is obtained from the new final layer of the network (figure 10.9).

By minimizing the cross-entropy loss, the parameters ($\theta$) of the CNN are chosen such that the estimated probability is close to 1 for the correct class and close to 0 for all other classes. Since the target of the cross-entropy loss is to categorize features into predefined classes, the performance of such a network is usually poor when compared to losses that incorporate similarity (and dissimilarity) constraints directly in the embedding space during training. Furthermore, learning becomes computationally prohibitive when considering datasets of, for example, 1 million identities. (Imagine a loss layer with 1 million neurons!) Nevertheless, pretraining a network with cross-entropy loss (on a viable subset of the dataset, such as a subset of 1,000 identities) is a popular strategy, which in turn makes embedding losses converge faster. We will explore this further while mining informative samples during training in section 10.4.

Figure 10.9 An illustration of how cross-entropy loss is used to train an embedding layer (fully connected). The right side demonstrates the inference process and outlines the disconnect between training and inference in straightforward usage of cross-entropy loss for learning an embedding. (This figure is adapted from [5].)

NOTE One of the disadvantages of the cross-entropy loss is the disconnect between training and inference. Hence, it generally performs poorly when compared with embedding learning losses (contrastive and triplet). These losses explicitly try to preserve relative distances from the input image space to the embedding space.

10.3.3 Contrastive loss

Contrastive loss optimizes the training objective by encouraging all similar class instances to come infinitesimally close to each other, while forcing instances from other classes to move far apart in the output embedding space (we say infinitesimally here because a CNN can't be trained with exactly zero loss). Using our problem formalization, this loss is defined as

$\ell_{\text{contrastive}}(i, j) = y_{ij} D_{ij}^2 + (1 - y_{ij})\,[\alpha - D_{ij}^2]_+$

Note that $[\cdot]_+ = \max(0, \cdot)$ in the loss function indicates a hinge loss, and $\alpha$ is a predetermined threshold (margin) determining the maximum loss for when the two samples $i$ and $j$ are in different classes. Geometrically, this implies that two samples of different classes contribute to the loss only if the distance between them in the embedding space is less than this margin. $D_{ij}$, as noted in the formulation, refers to the distance between the two samples $i$ and $j$ in the embedding space.
This loss is also known as Siamese loss, because we can visualize it as a twin network with shared parameters, where each of the two CNNs is fed an image. Contrastive loss was employed in the seminal work by Chopra et al. [6] for the face-verification problem, where the objective is to verify whether two presented faces belong to the same identity. An illustration of this loss is provided in the context of face recognition in figure 10.10.

Notice that the choice of the margin $\alpha$ is the same for all dissimilar classes. Manmatha et al. [7] analyze the impact: this choice of $\alpha$ implies that for dissimilar identities, visually diverse classes are embedded in the same feature space as the visually similar ones. This assumption is stricter when compared to triplet loss (explained next) and restricts the structure of the embedding manifold, which subsequently makes learning tougher. The training complexity per epoch is $O(N^2)$ for a dataset of N samples, as this loss requires traversing all pairs of samples to compute the contrastive loss.

10.3.4 Triplet loss

Inspired by the seminal work on metric learning for nearest neighbor classification by Weinberger et al. [8], FaceNet (Schroff et al. [9]) proposed a modification suited for query-retrieval tasks called triplet loss. Triplet loss forces data points from the same class to be closer to each other than they are to a data point from another class. Unlike contrastive loss, triplet loss adds context to the loss function by considering both positive and negative pair distances from the same point. Mathematically, with respect to our problem formalization from earlier, triplet loss can be formulated as follows:

$\ell_{\text{triplet}}(a, p, n) = [D_{ap} - D_{an} + \alpha]_+$

Note that $D_{ap}$ represents the distance between the anchor and a positive sample, while $D_{an}$ is the distance between the anchor and a negative sample.
Figure 10.11 illustrates the computation of the loss term using an anchor, a positive sample, and a negative sample. Upon successful training, the hope is that all same-class pairs end up closer than pairs from different classes. Because computing triplet loss requires three terms, the training complexity per epoch is $O(N^3)$, which is computationally prohibitive on practical datasets.

Figure 10.10 Computing contrastive loss requires two images. When the two images are of the same class, the optimization tries to put them closer in the embedding space, and vice versa when the images belong to different classes.

Figure 10.11 Computing triplet loss requires three samples. The goal of learning is to embed the samples of the same class closer than samples belonging to different classes.

The computational complexity of triplet and contrastive losses has motivated a host of sampling approaches for efficient optimization and convergence. Let's review the complexity of implementing these losses in a naive and straightforward manner.

10.3.5 Naive implementation and runtime analysis of losses

Consider a toy example with the following specifications:

- Number of identities (N): 100
- Number of samples per identity (S): 10

If we implement the losses in a naive manner (see figure 10.12), it leads to the following per-epoch (inner for loop,¹ in figure 10.12) training complexity:

(¹ In practice, this step gets unwound into two for loops due to host memory limitations.)

- Cross-entropy loss — This is a relatively straightforward loss. In an epoch, it just needs to traverse all samples. Hence our complexity here is $O(N \times S) = O(10^3)$.
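Both pairwise losses defined above can be written in a few lines of plain Python. This is a sketch, not the book's reference implementation; the distances and margins below are toy values chosen so the arithmetic is easy to follow.

```python
def contrastive_loss(d_ij, y_ij, alpha=1.0):
    """y_ij * D_ij^2 + (1 - y_ij) * [alpha - D_ij^2]_+  for one pair.
    d_ij: embedding-space distance; y_ij: 1 if same class, else 0."""
    return y_ij * d_ij ** 2 + (1 - y_ij) * max(0.0, alpha - d_ij ** 2)

def triplet_loss(d_ap, d_an, alpha=0.25):
    """[D_ap - D_an + alpha]_+  for one (anchor, positive, negative) triplet."""
    return max(0.0, d_ap - d_an + alpha)

# Similar pair far apart: high contrastive loss.
print(contrastive_loss(2.0, 1))   # 4.0
# Dissimilar pair inside the margin: positive loss.
print(contrastive_loss(0.5, 0))   # 0.75
# Trivial triplet: the negative is already far beyond the positive.
print(triplet_loss(0.5, 1.5))     # 0.0
# Informative triplet: the negative is within positive distance + margin.
print(triplet_loss(1.0, 0.5))     # 0.75
```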
- Contrastive loss — This loss visits all pairwise distances, so the complexity is quadratic in the total number of samples ($N \times S$): that is, $O(100 \times 10 \times 100 \times 10) = O(10^6)$.
- Triplet loss — For every loss computation, we need to visit three samples, so the worst-case complexity is cubic in the total number of samples: that is, $O(10^9)$.

Despite the ease of computing cross-entropy loss, its performance is relatively low when compared to other embedding losses. Some intuitive explanations are pointed out in section 10.3.2. In recent academic works (such as [10, 11, 13]), triplet loss has generally given better results than contrastive loss when provided with appropriate hard data mining, which we will explain in the next section.

NOTE In the following sections, we refer to triplet loss, owing to its higher performance over contrastive loss in several academic works.

One important point to notice is that few of the $O(10^9)$ triplets contribute to the loss in a strong manner. In practice, during a training epoch, most of the triplets are trivial: the current network is already at a low loss on them, and hence the anchor-positive pairs of these trivial triplets are already much closer (in the embedding space) than the anchor-negative pairs. These trivial triplets do not add meaningful information to update the network parameters, thereby stagnating convergence. Furthermore, there are far fewer informative triplets than trivial triplets, which in turn washes out the contribution of the informative triplets.

To improve the computational complexity of triplet enumeration and convergence, we need an efficient strategy for enumerating triplets and feeding the CNN (during training) informative triplet samples (without trivial triplets). This process of selecting informative triplets is called mining.
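The back-of-the-envelope counts from the toy example above can be reproduced with simple arithmetic:

```python
# Per-epoch loss-term counts for the toy example: N = 100 identities,
# S = 10 samples each, so N * S = 1,000 images in total.
N, S = 100, 10
total = N * S

cross_entropy_terms = total        # visit every sample once
contrastive_pairs   = total ** 2   # every ordered pair of samples
triplet_terms       = total ** 3   # every ordered triple of samples

print(cross_entropy_terms)  # 1000       ~ O(10^3)
print(contrastive_pairs)    # 1000000    ~ O(10^6)
print(triplet_terms)        # 1000000000 ~ O(10^9)
```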
Mining informative data points is the essence of this chapter and is discussed in the following sections. A popular strategy to tackle this cubic complexity is to enumerate triplets in the following manner:

1 Construct a triplet set using only the current batch constructed by the dataloader.
2 Mine an informative triplet subset from this set.

The next section looks at this strategy in detail.

Algorithm 1: Naive implementation of training for learning an embedding.
Result: a trained CNN with a desirable embedding size.
Initialization: dataset, a CNN, a loss function, embedding dimension
while numEpochs > 0 do
    for all dataset samples do
        Compute any one of the losses (cross-entropy, contrastive, triplet) over all possible data samples.
    end
    numEpochs -= 1
end
Figure 10.12 Algorithm 1, for a naive implementation

10.4 Mining informative data

So far, we have seen how triplet and contrastive losses are computationally prohibitive for practical dataset sizes. In this section, we take a deep dive into the key steps of training a CNN with triplet loss and learn how to improve the training convergence and computational complexity.

The straightforward implementation in figure 10.12 is classified under offline training, as the selection of a triplet must consider the full dataset and therefore cannot be done on the fly while training a CNN. As we noted earlier, this approach of computing valid triplets is inefficient and computationally infeasible for DL datasets. To deal with this complexity, FaceNet [9] proposes online batch-based triplet mining. The authors construct a batch on the fly and mine triplets within this batch, ignoring the rest of the dataset outside the batch. This strategy proved effective and led to state-of-the-art accuracy in face recognition.
Let's summarize the information flow during a training epoch (see figure 10.13). During training, mini-batches are constructed from the dataset, and valid triplets are subsequently identified for each sample in the mini-batch. These triplets are then used to update the loss, and the process iterates until all the batches are exhausted, thereby completing an epoch.

Similar to FaceNet, OpenFace [37] proposed a training scheme wherein the dataloader constructs a training batch of predefined statistics, and embeddings for the batch are computed on the GPU. Subsequently, valid triplets are generated on the CPU to compute the loss. In the next subsection, we look into an improved dataloader that can give us good batch statistics to mine triplets. Subsequently, we will explore how we can efficiently mine good, informative triplets to improve training convergence.

Figure 10.13 Information flow during an online training process. The dataloader samples a random subset of training data to the GPU. Subsequently, triplets are computed to update the loss.

10.4.1 Dataloader

Let's examine the dataloader's setup and its role in training with triplet loss. The dataloader selects a random subset from the dataset and is crucial to mining informative triplets. If we resort to a trivial dataloader that chooses a random subset (mini-batch) of the dataset, it may not result in good class diversity for finding many triplets. For example, randomly selecting a batch with only one category will not yield any valid triplets and thus will result in a wasteful batch iteration. We must take care at the dataloader level to have well-distributed batches to mine triplets.

NOTE The requirement for better convergence at the dataloader level is to form a batch with enough class diversity to facilitate the triplet mining step in figure 10.13.
A general and effective approach to training is to first mine a set of B triplets, so that B terms contribute to the triplet loss. Once the set of B triplets is chosen, their images are stacked to form a batch of 3B images (B anchors, B positives, and B negatives), and subsequently 3B embeddings are computed to update the loss. Hermans et al. [11], in their impressive work revisiting triplet loss, point out the under-utilization of valid triplets in the online generation presented in the previous section: a set of 3B images (B anchors, B positives, B negatives) contains a total of $2B(3B - 2) = 6B^2 - 4B$ valid triplets, so using only B of them is under-utilization.

In light of this discussion, to use the triplets more efficiently, Hermans et al. propose a key organizational modification at the dataloader level: construct a batch by randomly sampling P identities from dataset $\chi$ and subsequently sampling K images (randomly) for each identity, resulting in a batch of PK images. Using this dataloader (with appropriate triplet mining), the authors demonstrate state-of-the-art accuracy on the task of person re-identification. We look more at the mining techniques introduced in [11] in the following subsections. Using this organizational modification, Kumar et al. [10, 12] demonstrate state-of-the-art results for the task of vehicle re-identification across many diverse datasets.

Computing the number of valid triplets in stacked 3B images of B triplets

To understand the computation of the number of valid triplets in a stack of 3B images (that is, B anchors, B positives, B negatives), let's assume we have exactly one pair of the same class. This implies we could choose 3B - 2 negatives for an (anchor, positive) pair. There are 2B possible anchor-positive pairs in this set, leading to a total of 2B(3B - 2) valid triplets. The following figure shows an example.

An example with B = 3. Circles with the same pattern are of the same class.
Since only the first two columns have a possible positive sample, there are a total of 2B (six) anchors. After selecting an anchor, we are left with 3B - 2 (seven) negatives, implying a sum total of 2B(3B - 2) triplets.

Owing to the superior results on re-identification tasks, [11] has become one of the mainstays of the recognition literature, and its batch construction (dataloader) is now standard in practice. The default recommendation for the batch size is P = 18, K = 4, leading to 72 samples.

Now that we have built an efficient dataloader for mining triplets, we are ready to explore various techniques for mining informative triplets while training a CNN. In the following sections, we first look at hard data mining in general and subsequently focus on online generation (mining) of informative triplets following the batch construction approach in [11].

10.4.2 Informative data mining: Finding useful triplets

Mining informative samples while training a machine learning model is an important problem, and many solutions exist in the academic literature. We take a quick peek at them here. A popular sampling approach for finding informative samples is hard data mining, which is used in many CV applications such as object detection and action localization. Hard data mining is a bootstrapping technique used in iterative training of a model: at every iteration, the current model is applied to a validation set to mine hard data on which the model performs poorly. Only this hard data is then presented to the optimizer, which increases the ability of the model to learn effectively and converge faster to an optimum. On the flip side, if a model is only presented with hard data, which could consist of outliers, its ability to discriminate outliers from normal data suffers, stalling the training progress. An outlier in a dataset could be the result of mislabeling or a sample captured with poor image quality.

In the context of triplet loss, a hard negative sample is one that is closer to the anchor (as this sample would incur a high loss). Similarly, a hard positive sample is one that is far from the anchor in the embedding space. To deal with outliers during hard data sampling, FaceNet [9] proposed semi-hard sampling, which mines moderate triplets that are neither too hard nor too trivial, so as to get meaningful gradients during training. This is done by using the margin parameter: only negatives that lie in the margin and are farther than the selected positive for an anchor are considered (see figure 10.14), thereby ignoring negatives that are too easy and too hard. However, this in turn adds an additional burden on training: tuning an additional hyperparameter.

Computing the number of valid triplets

Let's make this concept clearer with a working example of computing the number of valid triplets in a batch. Assume we have selected a random batch of size PK:

- P = 10 different classes
- K = 4 samples per class

Using these values, we have the following batch statistics:

- Total number of anchors = PK = 40
- Number of positive samples per anchor = K - 1 = 3
- Number of negative samples per anchor = K(P - 1) = 9 × 4 = 36
- Total number of valid triplets = the product of the previous results = 40 × 3 × 36 = 4,320

Taking a peek at upcoming concepts on mining informative triplets, notice that for each anchor, we have a set of positive samples and a set of negative samples. We argued earlier that many triplets are non-informative, and hence in the subsequent sections we look at various ways to filter out the important triplets. More precisely, we examine techniques that help select informative subsets of positive and negative samples (for an anchor).
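The closed-form count from this sidebar can be cross-checked by brute-force enumeration. In this sketch, the batch is reduced to a list of class labels; the label layout is an assumption standing in for a real PK batch.

```python
import itertools

P, K = 10, 4                                       # P identities, K samples each
labels = [c for c in range(P) for _ in range(K)]   # batch of P*K = 40 labels

# Closed form: anchors * positives-per-anchor * negatives-per-anchor.
closed_form = (P * K) * (K - 1) * (K * (P - 1))

# Brute-force check: enumerate all (anchor, positive, negative) index triples.
brute = sum(1
            for a, p, n in itertools.product(range(P * K), repeat=3)
            if a != p and labels[a] == labels[p] and labels[n] != labels[a])

print(closed_form, brute)  # 4320 4320
```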
This ad hoc strategy of semi-hard negatives is put into practice with a large batch size of 1,800 images, with triplets enumerated on the CPU. Notice that with the default batch size (72 images) in [11], it is possible to enumerate the set of valid triplets efficiently on the GPU.

Figure 10.14 Margin: grading triplets into hard, semi-hard, and easy. This illustration (in the context of face recognition) is for an anchor and a corresponding negative sample. Therefore, negative samples that are closer to the anchor are hard.

Figure 10.15 illustrates the hardness of a triplet. Remember that a positive sample is harder if the network (at a training epoch) puts this sample far from its anchor in the embedding space. Similarly, in a plot of distances from the anchor to negative data, the samples closer (less distant) to the anchor are harder. As a reminder, here is the triplet loss function for an anchor (a), positive (p), and negative (n):

$\ell_{\text{triplet}}(a, p, n) = [D_{ap} - D_{an} + \alpha]_+$

Figure 10.15 Hard-positive and hard-negative data. The plot shows the distances of positive samples (top) and negative samples (bottom) with respect to an anchor (at a particular epoch). The hardness of samples increases as we move from left to right on both plots.

Having explored the concept of hard data and its pitfalls, we will now explore various online triplet mining techniques for our batch. Once a batch (of size PK) is constructed by the dataloader, there are PK possible anchors. How to find positive and negative data for these anchors is the crux of mining techniques.
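A PK dataloader of the kind described above can be sketched in a few lines of plain Python. The dataset layout (a dict mapping identity to image ids), the naming scheme, and the toy sizes are assumptions made for this example, not the book's implementation.

```python
import random
from collections import Counter

def pk_batch(dataset, P, K, rng=random):
    """Sample a batch of P identities with K images each.
    dataset: dict mapping identity -> list of image ids."""
    identities = rng.sample(sorted(dataset), P)   # P distinct identities
    batch = []
    for ident in identities:
        batch.extend(rng.sample(dataset[ident], K))  # K images per identity
    return batch

# Toy dataset: 30 identities with 8 images each.
data = {i: [f"id{i}_img{j}" for j in range(8)] for i in range(30)}
batch = pk_batch(data, P=10, K=4, rng=random.Random(0))

ids = Counter(name.split("_")[0] for name in batch)
print(len(batch), len(ids), set(ids.values()))  # 40 10 {4}
```

By construction, every batch contains enough class diversity (P identities) and enough same-class pairs (K images per identity) for the triplet-mining step that follows.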
First we look at two simple and effective online triplet mining techniques: batch all (BA) and batch hard (BH).

10.4.3 Batch all (BA)

In the context of a batch, batch all (BA) refers to using all possible and valid triplets; that is, we perform no ranking or selection of triplets. In implementation terms, for an anchor, this loss is computed by summing across all possible valid triplets. For a batch of PK images, since BA selects all triplets, the number of terms updating the triplet loss is PK(K - 1)(K(P - 1)).

Using this approach, all samples (triplets) are equally important; hence it is straightforward to implement. On the other hand, BA can potentially lead to information averaging out. In general, many valid triplets are trivial (at a low loss, or non-informative), and only a few are informative. Summing across all valid triplets with equal weights averages out the contribution of the informative triplets. Hermans et al. [11] experienced this averaging out and reported it in the context of person re-identification.

10.4.4 Batch hard (BH)

As opposed to BA, batch hard (BH) considers only the hardest data for an anchor. For each possible anchor in a batch, BH computes the loss with exactly one hardest positive data item and one hardest negative data item. Notice that here, the hardness of a data point is relative to the anchor. For a batch of PK images, since BH selects only one positive and one negative per anchor, the number of terms updating the triplet loss is PK (the total number of possible anchors).

BH is robust to information averaging out, because trivial (easier) samples are ignored. However, it can struggle to disambiguate outliers: outliers can creep in due to incorrect annotations, and the model tries hard to converge on them, thereby jeopardizing training quality.
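Here is one possible NumPy sketch of BA and BH for a precomputed distance matrix. The mean-reduction over loss terms, the toy distances, and the margin are choices made for this example, not the book's reference implementation.

```python
import numpy as np

def mine_losses(D, labels, alpha=0.2):
    """Mean batch-all (BA) and batch-hard (BH) triplet losses for one batch.
    D: (n, n) matrix of embedding-space distances; labels: (n,) class ids."""
    n = len(labels)
    same = labels[:, None] == labels[None, :]
    ba_terms, bh_terms = [], []
    for a in range(n):
        pos = [j for j in range(n) if same[a, j] and j != a]
        neg = [j for j in range(n) if not same[a, j]]
        if not pos or not neg:
            continue
        # BA: every valid (positive, negative) combination for this anchor.
        ba_terms += [max(0.0, D[a, p] - D[a, q] + alpha)
                     for p in pos for q in neg]
        # BH: only the hardest positive (farthest) and hardest negative (closest).
        hardest_p = max(D[a, p] for p in pos)
        hardest_n = min(D[a, q] for q in neg)
        bh_terms.append(max(0.0, hardest_p - hardest_n + alpha))
    return sum(ba_terms) / len(ba_terms), sum(bh_terms) / len(bh_terms)

labels = np.array([0, 0, 1, 1])
D = np.array([[0.0, 0.3, 0.2, 0.9],
              [0.3, 0.0, 0.8, 0.7],
              [0.2, 0.8, 0.0, 0.4],
              [0.9, 0.7, 0.4, 0.0]])
ba, bh = mine_losses(D, labels)
print(ba > 0, bh >= ba)  # True True: BH keeps only hard triplets, so its mean is >= BA's
```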
In addition, when a network that has not been pretrained is used with BH, the hardness of a sample (with respect to an anchor) cannot be determined reliably. There is no way to gain this information during training, because the "hardest" sample is effectively a random sample, and this can lead to a stall in training. This is reported in [9], and also when BH is applied to train a network from scratch in the context of vehicle re-identification in [10].

To visually understand BA and BH, let's look again at our figure illustrating the distances of the anchor to all positive and negative data (figure 10.16). BA performs no selection and uses all five samples to compute a final loss, whereas BH uses only the hardest available data (ignoring all the rest). Figure 10.17 shows the algorithm outline for computing BH and BA.

Figure 10.16 Illustration of hard data: distances of positive samples from an anchor (at a particular epoch) (left); distances of negative samples from an anchor (right). BA takes all samples into account, whereas BH takes samples only at the far-right bar (the hardest-positive data for this mini-batch).

Algorithm 2: Algorithm outline for sampling informative data points.
Result: a trained CNN with a desirable embedding size.
Initialization: dataset, a CNN, a loss function, embedding dimension, batch size (PK)
while a valid batch remains do
    numAnchors = PK
    while numAnchors > 0 do
        Select an anchor in [0 ... PK], without replacement;
        BA: compute the loss over all possible valid triplets for this anchor; OR
        BH: compute the loss using the hardest positive and hardest negative for this anchor;
        numAnchors -= 1
    end
end
Figure 10.17 Algorithm for computing BA and BH

An alternative formalization of triplet loss

Ristani et al., in their famous paper on features for multi-camera re-identification [13], unify various batch-sampling techniques under one expression. In a batch, let a be an anchor sample, and let N(a) and P(a) represent a subset of negative and positive samples for the corresponding anchor a. The triplet loss can then be written as

$\ell_{\text{triplet}}(a) = \Big[\alpha + \sum_{p \in P(a)} w_p D_{ap} - \sum_{n \in N(a)} w_n D_{an}\Big]_+$

For an anchor sample a, $w_p$ represents the weight (importance) of a positive sample p; similarly, $w_n$ signifies the importance of a negative sample n. The total loss in an epoch is then obtained by

$\mathcal{L}(\theta; \chi) = \sum_{B \in \text{all batches}} \sum_{a \in B} \ell_{\text{triplet}}(a)$

In this formulation, BA and BH can be integrated as shown in the following figure (see also table 10.1 in the following section). The y-axis in this figure represents the selection weight.

Plot showing selection weights for positive samples with respect to an anchor. For BA, all samples are equally important, while BH gives importance to only the hardest samples (the rest are ignored).

10.4.5 Batch weighted (BW)

BA is a straightforward sampling that weights all samples uniformly. This uniform weight distribution can ignore the contribution of important tough samples, as these samples are typically outnumbered by trivial, easy samples. To mitigate this issue with BA, Ristani et al. [13] employ a batch weighted (BW) weighting scheme: a sample is weighted based on its distance from the corresponding anchor, thereby giving more importance to informative (harder) samples than to trivial samples. The corresponding weights for positive and negative data are shown in table 10.1. Figure 10.18 demonstrates the weighting of samples in this technique.

10.4.6 Batch sample (BS)

Another sampling technique is batch sample (BS); it is actively discussed on the implementation page of Hermans et al. [11] and has been used for state-of-the-art vehicle re-identification by Kumar et al. [10]. BS uses the distribution of anchor-to-sample distances to mine² positive and negative data for an anchor (see figure 10.19). This technique thereby avoids sampling outliers when compared with BH, and it also hopes to determine the most relevant sample, as the sampling is done using a distances-to-anchor distribution.

(² Categorically. For an example in TensorFlow, see http://mng.bz/zjvQ.)

Table 10.1 Snapshot of various ways to mine a good positive $x_p$ and negative $x_n$ [10]. BS and BW are explored in the upcoming sections with examples.

- All (BA): $w_p = 1$; $w_n = 1$. All samples are weighted uniformly.
- Hard (BH): $x_p = \arg\max_{x \in P(a)} D_{ax}$; $x_n = \arg\min_{x \in N(a)} D_{ax}$. Pick the one hardest sample.
- Sample (BS): $x_p \sim \text{multinomial}\{D_{ax} : x \in P(a)\}$; $x_n \sim \text{multinomial}\{-D_{ax} : x \in N(a)\}$. Pick one from the multinomial distribution.
- Weighted (BW): $w_p = \dfrac{e^{D_{ap}}}{\sum_{x \in P(a)} e^{D_{ax}}}$; $w_n = \dfrac{e^{-D_{an}}}{\sum_{x \in N(a)} e^{-D_{ax}}}$. Weights are assigned based on the sample's distance from the anchor.

Figure 10.18 BW illustration of selecting positive data for the anchor in the left plot. In this case, all five positive samples are used (as in BA), but a weight is assigned to each sample. Unlike BA, which weighs every sample equally, the plot at right weighs each sample in proportion to its distance from the anchor. This effectively means we pay more attention to a positive sample that is farther from the anchor (and thus is harder and more informative).
Negative data for this anchor is chosen in the same manner, but with the weighting reversed.

Now, let's unpack these ideas by working through a project and diving deeper into the machinery required for training and testing a CNN for an embedding.

10.5 Project: Train an embedding network

In this project, we put our concepts into practice by building an image-based query retrieval system. We chose two problems that are popular in the visual embedding literature and have been actively studied to find better solutions:

- Shopping dilemma — Find me apparel that is similar to a query item.
- Re-identification — Find similar cars in a database; that is, identify a car from different viewpoints (cameras).

Regardless of the task, the training, inference, and evaluation processes are the same. Here are some of the ingredients for successfully training an embedding network:

- Training set — We follow a supervised learning approach with annotations underlining the inherent similarity measure. The dataset can be organized into a set of folders, where each folder determines the identity/category of the images. The objective is that images belonging to the same category are kept closer to one another in the embedding space, and vice versa for images in separate categories.
- Testing set — The test set is usually split into two sets: query and gallery (often, academic papers refer to the gallery set as the test set). The query set consists of images that are used as queries. Each image in the gallery set is ranked (retrieved) against

Figure 10.19 BS illustration of selecting a positive data item for an anchor. Similarly to BH, the aim is to find one positive data item (for the anchor in the left plot) that is informative and not an outlier.
BH would take the hardest data item, which could lead to finding outliers. BS uses the distances as a distribution to mine a sample in a categorical fashion, thereby selecting a sample that is informative and that may not be an outlier. (Note that this is a random multinomial selection; we chose the third sample here just to illustrate the concept.)
every query image. If the embedding is learned perfectly, the top-ranked (retrieved) items for a query all belong to the same class.
Distance metric—To express similarity between two images in an embedding space, we use the Euclidean (L2) distance between the respective embeddings.
Evaluation—To quantitatively evaluate a trained model, we use the top-k accuracy and mean average precision (mAP) metrics explained in chapters 4 and 7, respectively. For each object in a query set, the aim is to retrieve a similar identity from the test set (gallery set). AP(q) for a query image q is defined as

AP(q) = ( Σk P(k) × δk ) / Ngt(q)

where P(k) represents precision at rank k, Ngt(q) is the total number of true retrievals for q, and δk is a Boolean indicator function: its value is 1 when the matching of query image q to a test image is correct at rank ≤ k. Correct retrieval means the ground-truth label for both query and test images is the same. The mAP is then computed as an average over all query images:

mAP = ( Σq AP(q) ) / Q

where Q is the total number of query images.
The following sections look at both tasks in more detail.
10.5.1 Fashion: Get me items similar to this
The first task is to determine whether two images taken in a shop belong to the same clothing item. Shopping objects (clothes, shoes) related to fashion are key areas of visual search in industrial applications such as image-recommendation engines that recommend products similar to what a shopper is looking for. Liu et al.
[3] introduced one of the largest datasets (DeepFashion) for shopping image-retrieval tasks. This benchmark contains 54,642 images of 11,735 clothing items from the popular Forever 21 catalog. The dataset comprises 25,000 training images and about 26,000 test images, split across query and gallery sets; figure 10.20 shows sample images.
10.5.2 Vehicle re-identification
Re-identification is the task of matching the appearance of objects in and across camera networks. A usual pipeline here involves a user seeking all instances of a query object's presence in all cameras within a network. For example, a traffic regulator may be looking for a particular car across a city-wide camera network. Other examples are person and face re-identification, which are mainstays in security and biometrics.
This task uses the famous VeRi dataset from Liu et al. [14, 36]. This dataset encompasses 40,000 bounding-box annotations of 776 cars (identities) across 20 cameras in traffic surveillance scenes; figure 10.21 shows sample images. Each vehicle is captured by 2 to 18 cameras in various viewpoints and varying illuminations. Notably, the viewpoints are not restricted to only front/rear but also include side views, thereby
Figure 10.20 Each row indicates a particular category and corresponding similar images. A perfectly learned embedding would make embeddings of images in each row closer to each other than any two images across columns (which belong to different apparel categories). (Images in this figure are taken from the DeepFashion dataset [3].)
Figure 10.21 Each row indicates a vehicle class. Similar to the apparel task, the goal (training an embedding CNN) here is to push the embeddings of the same class closer than the embeddings of different classes.
(Images in this figure are from the VeRi dataset [14].)
making this a challenging dataset. The annotations include make and model of vehicles, color, and inter-camera relations and trajectory information. We will use only category (or identity) level annotations; we will not use attributes like make, model, and spatio-temporal location. Incorporating more information during training could help gain accuracy, but this is beyond the scope of this chapter. However, the last part of the chapter references some cool new developments in incorporating multi-source information for learning embeddings.
10.5.3 Implementation
This project uses the GitHub codebase of triplet learning (https://github.com/VisualComputingInstitute/triplet-reid/tree/sampling) attached to [11]. Dataset preprocessing and a summary of steps are available with the book's downloadable code; go to the project's Jupyter notebook to follow along with a step-by-step tutorial of the project implementation. TensorFlow users are encouraged to look at the blog post "Triplet Loss and Online Triplet Mining in TensorFlow" by Olivier Moindrot (https://omoindrot.github.io/triplet-loss) to understand various ways of implementing triplet loss.
Training a deep CNN involves several key hyperparameters, and we briefly discuss them here. Following is a summary of the hyperparameters we set for this project:
Pre-training is performed on the ImageNet dataset [15].
Input image size is 224 × 224.
Meta-architecture: We use MobileNet-v1 [16], which requires 569 million MACs (fused multiplication-and-addition operations). This architecture has 4.24 million parameters and achieves a top-1 accuracy of 70.9% on ImageNet's image classification benchmark, with an input image size of 224 × 224.
Optimizer: We use the Adam optimizer [17] with default hyperparameters (ε = 10⁻³, β1 = 0.9, β2 = 0.999).
Initial learning rate is set to 0.0003.
Data augmentation is performed in an online fashion using a standard image-flip operation.
Batch size is 18 (P) randomly sampled identities, with 4 (K) samples per identity, for a total of 18 × 4 = 72 samples in a batch.
Margin: The authors replaced the hinge function [·]+ with a smooth variation called softplus: ln(1 + exp(·)). Our experiments also apply softplus instead of using a hard margin.
Embedding dimension corresponds to the dimension of the last fully connected layer. We fix this to 128 units for all experiments. Using a lower embedding size is helpful for computational efficiency.
DEFINITION In computing, the multiply–accumulate operation is a common step that computes the product of two numbers and adds that product to an accumulator. The hardware unit that performs the operation is known as a multiplier–accumulator (MAC, or MAC unit); the operation itself is also often referred to as MAC or a MAC operation.
10.5.4 Testing a trained model
To test a trained model, each dataset presents two files: a query set and a gallery set. These sets can be used to compute the evaluation metrics mentioned earlier: mAP and top-k accuracy. While evaluation metrics are a good summary, we also look at the results visually. To this end, we take random images in a query set and find (plot) the top-k retrievals from the gallery set. The following subsections show quantitative and qualitative results of using various mining techniques from this chapter.
TASK 1: IN-SHOP RETRIEVAL
Let's look at sample retrievals from the learned embeddings in figure 10.22. The results look visually pleasing: the top retrievals are from the same class as the query.
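The mAP and top-k numbers reported in this section follow the definitions given earlier. As an illustration (this is our own NumPy sketch with our own function names and toy data, not the project's codebase), both metrics can be computed from gallery labels ranked by Euclidean (L2) distance to each query:

```python
import numpy as np

def rank_gallery(query_emb, gallery_emb, gallery_labels):
    """For each query, sort gallery labels by Euclidean (L2) distance, closest first."""
    d = np.linalg.norm(query_emb[:, None, :] - gallery_emb[None, :, :], axis=-1)
    return gallery_labels[np.argsort(d, axis=1)]

def top_k_accuracy(ranked_labels, query_labels, k):
    """Fraction of queries with at least one correct match within the top-k ranks."""
    return float(np.mean([q in r[:k] for r, q in zip(ranked_labels, query_labels)]))

def mean_average_precision(ranked_labels, query_labels):
    """mAP = (1/Q) * sum_q AP(q), with AP(q) = sum_k P(k) * delta_k / Ngt(q)."""
    aps = []
    for ranks, q in zip(ranked_labels, query_labels):
        correct, ap = 0, 0.0
        for k, label in enumerate(ranks, start=1):
            if label == q:            # delta_k = 1: correct retrieval at rank k
                correct += 1
                ap += correct / k     # P(k): precision at rank k
        n_gt = correct                # total true matches in the fully ranked gallery
        aps.append(ap / n_gt if n_gt else 0.0)
    return float(np.mean(aps))

# Toy query/gallery split: two identities with well-separated embeddings
gallery_emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
gallery_labels = np.array([0, 0, 1, 1])
query_emb = np.array([[0.0, 0.1], [5.0, 5.1]])
query_labels = np.array([0, 1])

ranked = rank_gallery(query_emb, gallery_emb, gallery_labels)
```

For this well-separated toy embedding, both top-1 accuracy and mAP come out perfect; degrading the embedding (mixing identities) lowers both, which is exactly what tables 10.2 and 10.3 measure at scale.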
The network does reasonably well at inferring different views of the same query in the top ranks.
A note on comparisons to state-of-the-art approaches
Before diving into comparisons, remember that training a deep neural network requires tuning several hyperparameters. This may in turn lead to pitfalls when comparing several algorithms: for example, an approach could perform better simply because its underlying CNN performs favorably on the same pretrained dataset(s). Other such choices are the training algorithm (such as vanilla SGD or a more sophisticated Adam) and many other parameters that we have seen throughout this book. You must delve deeper into an algorithm's machinery to see the complete picture.
Figure 10.22 Sample retrievals from the fashion dataset using various embedding approaches. Each row shows a query image and the top-5 retrievals for this query image. An X indicates an incorrect retrieval.
Table 10.2 outlines the performance of triplet loss under various sampling scenarios. BW outperforms all other sampling approaches. Top-1 accuracy is quite good in this case: we were able to retrieve the same class of fashion object in the very first retrieval, with accuracy over 87%. Notice that with this evaluation setup, the top-k accuracy for k > 1 is (monotonically) higher. Our results compare favorably with the state-of-the-art results. Using attention-based ensemble (ABE) [18], a diverse set of ensembles is trained that attend to parts of the image. Boosting independent embeddings robustly (BIER) [19] trains an ensemble of metric CNNs with a shared feature representation as an online gradient boosting problem. Noticeably, this ensemble framework does not introduce any additional parameters (and works with any differentiable loss).
TASK 2: VEHICLE RE-IDENTIFICATION
Kumar et al.
[12] recently performed an exhaustive evaluation of the sampling variants for optimizing triplet loss. The results are summarized in table 10.3 with comparisons to several state-of-the-art approaches. Noticeably, the authors perform favorably compared to state-of-the-art approaches without using any other information sources, such as spatio-temporal distances and attributes. Qualitative results are shown in figure 10.23, demonstrating the robustness of embeddings with respect to the viewpoints. Notice that the retrieval has the desired property of being viewpoint-invariant, as different views of the same vehicle are retrieved into the top-5 ranks.

Table 10.2 Performance of various sampling approaches on the in-shop retrieval task

Method               top-1  top-2  top-5  top-10  top-20
Batch all            83.79  89.81  94.40  96.38   97.55
Batch hard           86.40  91.22  95.43  96.85   97.83
Batch sample         86.62  91.36  95.36  96.72   97.84
Batch weighted       87.70  92.26  95.77  97.22   98.09
Capsule embeddings   33.90  –      –      75.20   84.60
ABE [18]             87.30  –      –      96.70   97.90
BIER [19]            76.90  –      –      92.80   95.20

Table 10.3 Comparison of various proposed approaches on the VeRi dataset. An asterisk (*) indicates the usage of spatio-temporal information.

Method            mAP    top-1  top-5
Batch sample      67.55  90.23  96.42
Batch hard        65.10  87.25  94.76
Batch all         66.91  90.11  96.01
Batch weighted    67.02  89.99  96.54
GSTE [20]         59.47  96.24  98.97
VAMI [21]         50.13  77.03  90.82
VAMI+ST* [21]     61.32  85.92  91.84
Path-LSTM* [22]   58.27  83.49  90.04
PAMTRI (RS) [23]  63.76  90.70  94.40
PAMTRI (All) [23] 71.88  92.86  96.97
MSVR [24]         49.30  88.56  –
AAVER [25]        61.18  88.97  94.70

Figure 10.23 Sample retrievals on the VeRi dataset using various embedding approaches.
Each row indicates a query image and the top-5 retrievals for it. An X indicates an incorrect retrieval.
To gauge the pros and cons of various approaches in the literature, let's conceptually examine the competing approaches in vehicle re-identification:
Kanaci et al. [26] proposed cross-level vehicle re-identification (CLVR) on the basis of using classification loss with model labels (see figure 10.24) to train a fine-grained vehicle categorization network. This setup is similar to the one we saw in section 10.3.2 and figure 10.9. The authors did not perform an evaluation on the VeRi dataset. You are encouraged to refer to this paper to understand the performance on other vehicle re-identification datasets.
Group-sensitive triplet embedding (GSTE) by Bai et al. [20] is a novel training process that clusters intra-class variations using K-Means. This helps with more guided training at the expense of an additional parameter for the K-Means clustering.
Pose-aware multi-task learning (PAMTRI) by Tang et al. [23] trains a network for embedding in a multi-task regime using keypoint annotations in conjunction with synthetic data (thereby tackling keypoint annotation requirements). PAMTRI (All) achieves the best results on this dataset. PAMTRI (RS) uses a mix of real and synthetic data for learning the embedding, and PAMTRI (All) additionally uses vehicle keypoints and attributes in a multi-task learning framework.
Adaptive attention for vehicle re-identification (AAVER) by Khorramshahi et al. [25] is a recent work wherein the authors construct a dual-path network for extracting global and local features. These are then concatenated to form a final embedding. The proposed embedding loss is minimized using identity and keypoint orientation annotations.
A training procedure for viewpoint attentive multi-view inference (VAMI) by Zhou et al.
[21] includes a generative adversarial network (GAN) and multi-view attention learning. The authors conjecture that being able to synthesize (generate using a GAN) multiple viewpoint views would help learn a better final embedding.
Figure 10.24 Cross-level vehicle re-identification (CLVR). (Source: [24].)
With Path-LSTM, Shen et al. [22] employ the generation of several path proposals for their spatio-temporal regularization and require an additional LSTM to rank these proposals.
Kanaci et al. [24] proposed multi-scale vehicle representation (MSVR) for re-identification by exploiting a pyramid-based DL method. MSVR learns vehicle re-identification-sensitive feature representations from an image pyramid with a network architecture of multiple branches, all of which are optimized concurrently.
Table 10.4 gives a snapshot summary of these approaches with respect to their key hyperparameters.
Usually, license plates are a globally unique identifier. However, with the standard installation of traffic cameras, license plates are difficult to extract; hence, visual features are required for vehicle re-identification. If two cars are of the same make, model, and color, then visual features cannot disambiguate them (unless there are some distinctive marks such as text or scratches). In these tough scenarios, only spatio-temporal information (like GPS information) can help. To learn more, you are encouraged to look into recently proposed datasets by Tang et al. [27].
10.6 Pushing the boundaries of current accuracy
Deep learning is an evolving field, and novel approaches to training are being introduced every day.
This section provides ideas for improving the current level of embeddings and some recently introduced tips and tricks for training a deep CNN:
Re-ranking—After obtaining an initial ranking of gallery images (for an input query image), re-ranking uses a post-processing step with the aim of improving the ranking of relevant images. This is a powerful, widely used step in many re-identification and information-retrieval systems.
A popular approach in re-identification is by Zhong et al. [28] (see figure 10.25). Given a probe p and a gallery set, the appearance feature (embedding) and k-reciprocal feature are extracted for each person. The original distance d and Jaccard distance Jd are calculated for each pair of a probe person and a gallery person. The final distance is then computed as the combination of d and Jd and used to obtain the proposed ranking list. A recent work in vehicle re-identification, AAVER [25], boosts mAP accuracy by 5% with re-ranking as a post-processing step.
DEFINITION The Jaccard distance is computed between two sets of data; it is the complement of the Jaccard index, which expresses the size of the intersection over the size of the union of the two sets.

Table 10.4 Summary of some important hyperparameters and labeling used during training

Method            ED    Annotations
Ours              128   ID
GSTE [20]         1024  ID
VAMI [21]         2048  ID + A
PAMTRI (All) [23] 1024  ID + K + A
MSVR [24]         2048  ID
AAVER [25]        2048  ID + K

Note: ED = embedding dimension; K = keypoints; A = attributes.

Tips and tricks—Luo et al. [29] demonstrated powerful baseline performance on the task of person re-identification. The authors follow the same batch construction from Hermans et al. [11] (studied in this chapter) and use tricks such as data augmentation, a warm-up learning rate, and label smoothing, to name a few. Noticeably, the authors perform favorably compared to many state-of-the-art methods.
You are encouraged to apply these general tricks for training a CNN for any recognition-related task.
DEFINITIONS The warm-up learning rate refers to a learning rate scheduler strategy that modulates the learning rate linearly over a predefined number of initial training epochs. Label smoothing modulates the cross-entropy loss so the resulting loss is less overconfident on the training set, thereby helping with model generalization and preventing overfitting. This is particularly useful for small-scale datasets.
Figure 10.25 Re-ranking proposal by Zhong et al.: the original (appearance) distance and the Jaccard distance over k-reciprocal features are aggregated into a final distance, improving the example's ranking AP from 9.05% to 51.98%. (Source: [28].)
Attention—In this chapter, we focused on learning the embedding in a global fashion: that is, we did not explicitly guide the network to attend to, for example, discriminative parts of an object. Some of the prominent works employing attention are Liu et al. [30] and Chen et al. [31]. Employing attention could also help improve the cross-domain performance of a re-identification network, as demonstrated in [32].
Guiding training with more information—The state-of-the-art comparisons in table 10.3 briefly touched on works incorporating information from multiple sources: identity, attributes (such as the make and model of a vehicle), and spatio-temporal information (GPS location of each query and gallery image). Ideally, including more information helps obtain higher accuracy. However, this comes at the expense of labeling data with annotations. A reasonable approach for training with a multi-attribute setup is to use multi-task learning (MTL).
Often, the loss becomes conflicting; this is resolved by weighting the tasks appropriately (using cross-validation). An MTL framework that resolves this conflicting-loss scenario using multi-objective optimization is by Sener et al. [33]. Some popular works of MTL in the context of face, person, and vehicle categorization are by Ranjan et al. [34], Ling et al. [35], and Tang et al. [23].
Summary
Image-retrieval systems require the learning of visual embeddings (a vector space). Any pair of images can be compared using their geometric distance in this embedding space.
To learn embeddings using a CNN, there are three popular loss functions: cross-entropy, triplet, and contrastive.
Naive training of triplet loss is computationally prohibitive. Hence we use batch-based informative sample mining: batch all, batch hard, batch sample, and batch weighted.
References
1 S.Z. Li and A.K. Jain. 2011. Handbook of Face Recognition. Springer Science & Business Media. https://www.springer.com/gp/book/9780857299314.
2 V. Gupta and S. Mallick. 2019. "Face Recognition: An Introduction for Beginners." Learn OpenCV. April 16, 2019. https://www.learnopencv.com/face-recognition-an-introduction-for-beginners.
3 Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. 2016. "DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations." IEEE Conference on Computer Vision and Pattern Recognition (CVPR). http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html.
4 T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang. 2016. "Joint Detection and Identification Feature Learning for Person Search." http://arxiv.org/abs/1604.01850.
5 Y. Zhai, X. Guo, Y. Lu, and H. Li. 2018. "In Defense of the Classification Loss for Person Re-Identification." http://arxiv.org/abs/1809.05864.
6 S. Chopra, R. Hadsell, and Y. LeCun. 2005.
"Learning a Similarity Metric Discriminatively, with Application to Face Verification." In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 1: 539–46. https://doi.org/10.1109/CVPR.2005.202.
7 C.-Y. Wu, R. Manmatha, A.J. Smola, and P. Krähenbühl. 2017. "Sampling Matters in Deep Embedding Learning." http://arxiv.org/abs/1706.07567.
8 Q. Weinberger and L.K. Saul. 2009. "Distance Metric Learning for Large Margin Nearest Neighbor Classification." The Journal of Machine Learning Research 10: 207–244. https://papers.nips.cc/paper/2795-distance-metric-learning-for-large-margin-nearest-neighbor-classification.pdf.
9 F. Schroff, D. Kalenichenko, and J. Philbin. 2015. "FaceNet: A Unified Embedding for Face Recognition and Clustering." In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 815–23. https://ieeexplore.ieee.org/document/7298682.
10 R. Kumar, E. Weill, F. Aghdasi, and P. Sriram. 2019. "Vehicle Re-Identification: An Efficient Baseline Using Triplet Embedding." https://arxiv.org/pdf/1901.01015.pdf.
11 A. Hermans, L. Beyer, and B. Leibe. 2017. "In Defense of the Triplet Loss for Person Re-Identification." http://arxiv.org/abs/1703.07737.
12 R. Kumar, E. Weill, F. Aghdasi, and P. Sriram. 2020. "A Strong and Efficient Baseline for Vehicle Re-Identification Using Deep Triplet Embedding." Journal of Artificial Intelligence and Soft Computing Research 10 (1): 27–45. https://content.sciendo.com/view/journals/jaiscr/10/1/article-p27.xml.
13 E. Ristani and C. Tomasi. 2018. "Features for Multi-Target Multi-Camera Tracking and Re-Identification." http://arxiv.org/abs/1803.10859.
14 X. Liu, W. Liu, T. Mei, and H. Ma. 2018. "PROVID: Progressive and Multimodal Vehicle Reidentification for Large-Scale Urban Surveillance." IEEE Transactions on Multimedia 20 (3): 645–58. https://doi.org/10.1109/TMM.2017.2751966.
15 J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. 2009.
"ImageNet: A Large-Scale Hierarchical Image Database." In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–55. http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5206848.
16 A.G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. 2017. "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications." http://arxiv.org/abs/1704.04861.
17 D.P. Kingma and J. Ba. 2014. "Adam: A Method for Stochastic Optimization." http://arxiv.org/abs/1412.6980.
18 W. Kim, B. Goyal, K. Chawla, J. Lee, and K. Kwon. 2018. "Attention-Based Ensemble for Deep Metric Learning." In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 760–777. https://arxiv.org/abs/1804.00382.
19 M. Opitz, G. Waltner, H. Possegger, and H. Bischof. 2017. "BIER—Boosting Independent Embeddings Robustly." In 2017 IEEE International Conference on Computer Vision (ICCV), 5199–5208. https://ieeexplore.ieee.org/document/8237817.
20 Y. Bai, Y. Lou, F. Gao, S. Wang, Y. Wu, and L. Duan. 2018. "Group-Sensitive Triplet Embedding for Vehicle Reidentification." IEEE Transactions on Multimedia 20 (9): 2385–99. https://ieeexplore.ieee.org/document/8265213.
21 Y. Zhou and L. Shao. 2018. "Viewpoint-Aware Attentive Multi-View Inference for Vehicle Re-Identification." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6489–98. https://ieeexplore.ieee.org/document/8578777.
22 Y. Shen, T. Xiao, H. Li, S. Yi, and X. Wang. 2017. "Learning Deep Neural Networks for Vehicle Re-ID with Visual-Spatio-Temporal Path Proposals." In 2017 IEEE International Conference on Computer Vision (ICCV), 1918–27. https://ieeexplore.ieee.org/document/8237472.
23 Z. Tang, M. Naphade, S. Birchfield, J. Tremblay, W. Hodge, R. Kumar, S. Wang, and X. Yang. 2019.
"PAMTRI: Pose-Aware Multi-Task Learning for Vehicle Re-Identification Using Highly Randomized Synthetic Data." In Proceedings of the IEEE International Conference on Computer Vision, 211–20. http://openaccess.thecvf.com/content_ICCV_2019/html/Tang_PAMTRI_Pose-Aware_Multi-Task_Learning_for_Vehicle_Re-Identification_Using_Highly_Randomized_ICCV_2019_paper.html.
24 A. Kanacı, X. Zhu, and S. Gong. 2017. "Vehicle Reidentification by Fine-Grained Cross-Level Deep Learning." In BMVC AMMDS Workshop, 2:772–88. https://arxiv.org/abs/1809.09409.
25 P. Khorramshahi, A. Kumar, N. Peri, S.S. Rambhatla, J.-C. Chen, and R. Chellappa. 2019. "A Dual-Path Model with Adaptive Attention for Vehicle Re-Identification." http://arxiv.org/abs/1905.03397.
26 A. Kanacı, X. Zhu, and S. Gong. 2017. "Vehicle Reidentification by Fine-Grained Cross-Level Deep Learning." In BMVC AMMDS Workshop, 2:772–88. http://www.eecs.qmul.ac.uk/~xiatian/papers.
27 Z. Tang, M. Naphade, M.-Y. Liu, X. Yang, S. Birchfield, S. Wang, R. Kumar, D. Anastasiu, and J.-N. Hwang. 2019. "CityFlow: A City-Scale Benchmark for Multi-Target Multi-Camera Vehicle Tracking and Re-Identification." In 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). http://arxiv.org/abs/1903.09254.
28 Z. Zhong, L. Zheng, D. Cao, and S. Li. 2017. "Re-Ranking Person Re-Identification with K-Reciprocal Encoding." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3652–3661. https://arxiv.org/abs/1701.08398.
29 H. Luo, Y. Gu, X. Liao, S. Lai, and W. Jiang. 2019. "Bag of Tricks and a Strong Baseline for Deep Person Re-Identification." In 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. https://arxiv.org/abs/1903.07071.
30 H. Liu, J. Feng, M. Qi, J. Jiang, and S. Yan. 2016.
"End-to-End Comparative Attention Networks for Person Re-Identification." IEEE Transactions on Image Processing 26 (7): 3492–3506. https://arxiv.org/abs/1606.04404.
31 G. Chen, C. Lin, L. Ren, J. Lu, and J. Zhou. 2019. "Self-Critical Attention Learning for Person Re-Identification." In Proceedings of the IEEE International Conference on Computer Vision, 9637–46. http://openaccess.thecvf.com/content_ICCV_2019/html/Chen_Self-Critical_Attention_Learning_for_Person_Re-Identification_ICCV_2019_paper.html.
32 H. Liu, J. Cheng, S. Wang, and W. Wang. 2019. "Attention: A Big Surprise for Cross-Domain Person Re-Identification." http://arxiv.org/abs/1905.12830.
33 O. Sener and V. Koltun. 2018. "Multi-Task Learning as Multi-Objective Optimization." In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 525–36. http://dl.acm.org/citation.cfm?id=3326943.3326992.
34 R. Ranjan, S. Sankaranarayanan, C.D. Castillo, and R. Chellappa. 2017. "An All-In-One Convolutional Neural Network for Face Analysis." In 2017 12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017), 17–24. https://arxiv.org/abs/1611.00851.
35 H. Ling, Z. Wang, P. Li, Y. Shi, J. Chen, and F. Zou. 2019. "Improving Person Re-Identification by Multi-Task Learning." Neurocomputing 347: 109–118. https://doi.org/10.1016/j.neucom.2019.01.027.
36 X. Liu, W. Liu, T. Mei, and H. Ma. 2016. "A Deep Learning-Based Approach to Progressive Vehicle Re-Identification for Urban Surveillance." In Computer Vision – ECCV 2016, 869–84. https://doi.org/10.1007/978-3-319-46475-6_53.
37 B. Amos, B. Ludwiczuk, M. Satyanarayanan, et al. 2016. "OpenFace: A General-Purpose Face Recognition Library with Mobile Applications." CMU School of Computer Science 6: 2. http://elijah.cs.cmu.edu/DOCS/CMU-CS-16-118.pdf.
appendix A Getting set up
All of the code in this book is written in Python 3, OpenCV, Keras, and TensorFlow. The process of setting up a DL environment on your computer is fairly involved and consists of the following steps, which this appendix covers in detail:
1 Download the code repository.
2 Install Anaconda.
3 Set up your DL environment: install all the packages that you need for projects in this book (NumPy, OpenCV, Keras, TensorFlow, and others).
4 [Optional] Set up the AWS EC2 environment. This step is optional if you want to train your networks on GPUs.
A.1 Downloading the code repository
All the code shown in this book can be downloaded from the book's website (www.manning.com/books/deep-learning-for-vision-systems) and also from GitHub (https://github.com/moelgendy/deep_learning_for_vision_systems) in the form of a Git repo. The GitHub repo contains a directory for each chapter. If you're unfamiliar with version control using Git and GitHub, you can review the bootcamp articles (https://help.github.com/categories/bootcamp) and/or beginning resources (https://help.github.com/articles/good-resources-for-learning-git-and-github) for learning these tools.
A.2 Installing Anaconda
Anaconda (https://anaconda.org) is a distribution of packages built for data science and ML projects. It comes with conda, a package and environment manager. You'll be using conda to create isolated environments for your projects that use different versions of your libraries. You'll also use it to install, uninstall, and update packages in your environments.
Note that Anaconda is a fairly large download (~600 MB) because it comes with the most common ML packages in Python.
If you don't need all the packages or need to conserve bandwidth or storage space, there is also Miniconda, a smaller distribution that includes only conda and Python. You can still install any of the available packages with conda; they just don't come preinstalled.
Follow these steps to install Anaconda on your computer:
1 Anaconda is available for Windows, macOS, and Linux. You can find the installers and installation instructions at www.anaconda.com/distribution. Choose the Python 3 version, because Python 2 has been deprecated as of January 2020. Choose the 64-bit installer if you have a 64-bit operating system; otherwise, go with the 32-bit installer. Go ahead and download the appropriate version.
2 Follow the installation through the graphical interface installer shown in figure A.1.
Figure A.1 Anaconda installer on macOS
3 After installation is complete, you're automatically in the default conda environment with all packages installed. You can check your install by entering conda list in your terminal to see the packages installed in the active environment.
A.3 Setting up your DL environment
Now you will create a new environment and install the packages that you will use for your projects. You will use conda as a package manager to install the libraries that you need. You are probably already familiar with pip; it's the default package manager for Python libraries. Conda is similar to pip, except that the available packages are focused around data science, while pip is for general use.
Conda is also a virtual environment manager. It's similar to other popular environment managers like virtualenv (https://virtualenv.pypa.io/en/stable) and pyenv (https://github.com/pyenv/pyenv). However, conda is not Python-specific like pip is: it can also install non-Python packages. It is a package manager for any software stack.
That being said, not all Python libraries are available from the Anaconda distribution and conda. You can (and will) still use pip alongside conda to install packages.

A.3.1 Setting up your development environment manually

Follow these steps to manually install all the libraries needed for the projects in this book. Otherwise, skip to the next section to install the environment created for you in the book’s GitHub repo.

1 On your terminal, create a new conda environment with Python 3 and call it deep_learning_for_vision_systems:

conda create -n deep_learning_for_vision_systems python=3

Note that to remove a conda environment, you use conda env remove -n <environment name>.

2 Activate your environment. You must activate the environment before installing your packages; this way, all packages are installed only for this environment:

conda activate deep_learning_for_vision_systems

Note that to deactivate an environment, you use conda deactivate. Now you are inside your new environment. To see the default packages installed in this environment, type conda list. Next, you will install the packages used for the projects in this book.

3 Install NumPy, pandas, and Matplotlib. These are very common ML packages that you will almost always use in your projects for math operations, data manipulation, and visualization tasks:

conda install numpy pandas matplotlib

Note that throughout these installs, you will be prompted to confirm before proceeding (Proceed ([y]/n)?). Type Y and press Enter to continue the installation.

4 Install Jupyter Notebook.
We use Jupyter notebooks in this book for easier development:

conda install jupyter notebook

5 Install OpenCV (the most popular open source CV library):

conda install -c conda-forge opencv

6 Install Keras:

pip install keras

7 Install TensorFlow:

pip install tensorflow

Now everything is complete and your environment is ready for development. If you want to view all the libraries installed in your environment, type conda list. These packages are kept separate from your other environments, so you can avoid version-conflict issues.

A.3.2 Using the conda environment in the book’s repo

1 Clone the book’s GitHub repository from https://github.com/moelgendy/deep_learning_for_vision_systems. The environment is located in the installer/application.yaml file:

cd installer

2 Create the deep_learning conda environment:

conda env create -f application.yaml

3 Activate the conda environment:

conda activate deep_learning

4 Launch your Jupyter notebook (make sure you are located in the root of the deep_learning_for_vision_systems repository):

jupyter notebook

Now you are ready to run the notebooks associated with the book.

A.3.3 Saving and loading environments

It is best practice to save your environment if you want to share it with others, so that they can install all the packages used in your code with the correct versions. To do that, you can save the packages to a YAML (https://yaml.org) file with this command:

conda env export > my_environment.yaml

This way, others can use the YAML file to replicate your environment on their machine using the following command:

conda env create -f my_environment.yaml

You can also export the list of packages in an environment to a .txt file and then include that file with your code.
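As an illustration of what such an exported package list contains and how a script might consume it, here is a minimal sketch. It is not part of the book’s repo: the two pinned entries and their versions are hypothetical, and it handles only the simple name==version lines that pip freeze emits.

```python
import importlib.util

# A tiny excerpt in the format produced by "pip freeze" (hypothetical versions).
FREEZE_OUTPUT = """\
numpy==1.19.5
opencv-python==4.2.0.34
"""

def parse_freeze(text):
    """Parse simple name==version lines into a {name: version} dict."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, version = line.partition("==")
        pins[name] = version
    return pins

def missing_packages(pins):
    """Return the pinned packages that cannot be imported in this environment."""
    # Distribution names and import names can differ (e.g. opencv-python -> cv2).
    import_names = {"opencv-python": "cv2"}
    absent = []
    for dist in pins:
        module = import_names.get(dist, dist)
        if importlib.util.find_spec(module) is None:
            absent.append(dist)
    return absent

pins = parse_freeze(FREEZE_OUTPUT)
print(pins)  # {'numpy': '1.19.5', 'opencv-python': '4.2.0.34'}
print(missing_packages(pins))
```

Running such a check after recreating an environment is a quick way to confirm that every dependency in the shared file actually resolves before you open the notebooks.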
This allows other people to easily load all the dependencies for your code. Pip has similar functionality with this command:

pip freeze > requirements.txt

You can find the environment details used for this book’s projects in the downloaded code, in the installer directory. You can use them to replicate my environment on your machine.

A.4 Setting up your AWS EC2 environment

Training and evaluating deep neural networks is a computationally intensive task, and its cost depends on your dataset size and the size of the neural network. All projects in this book are specifically designed around modest-sized problems and datasets so that you can train the networks on the CPU of your local machine. Still, some of these projects could take 20 hours or more to train, depending on your computer specifications and other parameters like the number of epochs and the neural network size. A faster alternative is to train on a graphics processing unit (GPU), a type of processor that supports greater parallelism. You can either build your own DL rig or use cloud services like Amazon AWS EC2. Many cloud service providers offer equivalent functionality, but EC2 is a reasonable default that is available to most beginners. In the next few sections, we’ll go over the steps from nothing to running a neural network on an Amazon server.

A.4.1 Creating an AWS account

Follow these steps:

1 Visit aws.amazon.com, and click the Create an AWS Account button. You will also need to choose a support plan; the free Basic Support Plan is fine. You might be asked to provide credit card information, but you won’t be charged for anything yet.
2 Launch an EC2 instance:
a Go to the EC2 Management Console (https://console.aws.amazon.com/ec2/v2/home), and click the Launch Instance button.
b Click AWS Marketplace.
c Search for Deep Learning AMI, and select the AMI that is suitable for your environment. An Amazon Machine Image (AMI) contains all the environment files and drivers for you to train on a GPU. It has cuDNN and many other packages required for the projects in this book. Any additional packages required for specific projects are detailed in the appropriate project instructions.
d Choose an instance type:
– Filter the instance list to show only GPU instances.
– Select the p2.xlarge instance type. This instance is powerful enough for our projects and not very expensive. Feel free to choose a more powerful instance if you are interested in trying one out.
– Click the Review and Launch button.
e Edit the security group. You will be running Jupyter notebooks in this book, which default to port 8888. To access this port, you need to open it on AWS by editing the security group:
– Select Create a New Security Group.
– Set Security Group Name to Jupyter.
– Click Add Rule, and set a Custom TCP Rule.
– Set Port Range to 8888.
– Select Anywhere as the Source.
– Click Review and Launch.
f Click the Launch button to launch your GPU instance. You’ll need to specify an authentication key pair to be able to access your instance. So, when you are launching the instance, make sure to select Create a New Key Pair and click the Download Key Pair button. This downloads a .pem file, which you’ll need in order to access your instance. Move the .pem file to a secure and easily remembered location on your computer; you’ll access your instance through the location you select. After the .pem file has been downloaded, click the Launch Instances button.

WARNING From this point on, AWS will charge you for running this EC2 instance. You can find the details on the EC2 On-Demand Pricing page (https://aws.amazon.com/ec2/pricing/on-demand). Most important, always remember to stop your instances when you are not using them.
Otherwise, they might keep running, and you’ll wind up with a large bill! AWS charges primarily for running instances, so most of the charges cease once you stop the instance. However, smaller storage charges continue to accrue until you terminate (delete) the instance.

A.4.2 Connecting remotely to your instance

Now that you have created your EC2 instance, go to your EC2 dashboard, select the instance, and start it, as shown in figure A.2. Allow a minute or two for the EC2 instance to launch. You will know it is ready when the instance Status Check shows “checks passed.” Scroll to the Description section, and make a note of the IPv4 Public IP address (in the format X.X.X.X) on the EC2 dashboard; you will need it in the next step to access your instance remotely.

Figure A.2 How to remotely connect to your instance

On your terminal, follow these steps to connect to your EC2 server:

1 Navigate to the location where you stored your .pem file from the previous section.
2 Type the following:

ssh -i YourKeyName.pem user@X.X.X.X

Here, user could be ubuntu or ec2-user, depending on the AMI; X.X.X.X is the IPv4 Public IP that you just saved from the EC2 instance description; and YourKeyName.pem is the name of your .pem file.

TIP If you see a “bad permissions” or “permission denied” error message regarding your .pem file, try executing chmod 400 path/to/YourKeyName.pem and then running the ssh command again.

A.4.3 Running your Jupyter notebook

The final step is to run your Jupyter notebook on the EC2 server. After you have accessed the instance remotely from your terminal, follow these steps:

1 Type the following command on your terminal:

jupyter notebook --ip=0.0.0.0 --no-browser

When you press Enter, you will get an access token, as shown in figure A.3.
Copy this token value, because you will use it in the next step.

2 On your browser, go to this URL: http://<IPv4 Public IP>:8888, where the IPv4 Public IP is the one you saved from the EC2 instance description. For example, if the public IP was 25.153.17.47, then the URL would be http://25.153.17.47:8888.
3 Enter the token key that you copied in step 1 into the token field, and click Log In (figure A.4).
4 Install the libraries that you will need for your projects, similarly to what you did in section A.3.1, but this time use pip install instead of conda install. For example, to install Keras, type pip install keras.

That’s it. You are now ready to start coding!

Figure A.3 Copy the token to run the notebook.
Figure A.4 Logging in

445 index Numerics 1 × 1 convolutional layer 220–221 A AAVER (adaptive attention for vehicle re-identification) 430 acc value 142 accuracy 431–433 as metric for evaluating models 147 improvements to 192 of image classification 185–192 building model architecture 187–189 evaluating models 191–192 importing dependencies 185 preparing data for training 186–187 training models 190–191 activation functions 51–60, 63, 200, 205 binary classifier 54 heaviside step function 54 leaky ReLU 59–60 linear transfer function 53 logistic function 55 ReLU 58–59 sigmoid function 55 softmax function 57 tanh 58–59 activation maps 108, 252 activation type 165 Adam (adaptive moment estimation) 175 Adam optimizer 190, 352 adaptive learning 170–171 adversarial training 343 AGI (artificial general intelligence) 342 AI vision systems 6 AlexNet 203–211 architecture of 205 data augmentation 206 dropout layers 206 features of 205–207 in Keras 207–210 learning hyperparameters in 210 local response normalization 206 performance 211 ReLU activation function 205 training on multiple GPUs 207 weight regularization 207 algorithms classifier learning algorithms 33–34 in DeepDream 385–387 alpha 330
AMI (Amazon Machine Images) 442 Anaconda 438–439 anchor boxes 303–305 AP (average precision) 292 artificial neural networks (ANNs) 4, 8, 37, 42, 49, 92 atrous convolutions 318 attention network 302 AUC (area under the curve) 292 augmenting data 180–181 for image classification 187 in AlexNet 206 images 156 average pooling 115–116, 200 AWS EC2 environment creating AWS account 441–442 Jupyter notebooks 443–444 remotely connect to instance 443 setting up 441–444" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 446 B background region 286, 306 backpropagation 86–90 backward pass 87 base networks 313–314 predicting with 314 to extract features 301–302 baseline models 149–150 base_model summary 246–247, 270 batch all (BA) 419 batch gradient descent (BGD) 77–85, 171 derivative 80 direction 79–80 gradient 79 learning rate 80 pitfalls of 82–83 step size 79–80 batch hard (BH) 419 batch normalization 181–185 covariate shift defined 181 –182 in neural networks 182 –183 in Keras 184 overview 183 batch normalization (BN) 206, 227, 230, 350 batch sample (BS) 421–423 batch weighted (BW) 421 batch_size hyperparameter 51, 85, 190 Bayes error rate 158 biases 63 BIER (boosting independent embeddings robustly) 428 binary classifier 54 binary_crossentropy function 352–353 block1_conv1 layer 378, 381 block3_conv2 layer 378 block5_conv2 layer 383, 395 block5_conv3 layer 378, 383 blocks. 
See residual blocks bottleneck layers 221 bottleneck residual block 233 bottleneck_residual_block function 234, 237 bottom-up segmentation algorithm 294–295 bounding box coordinates 322 bounding box prediction 287 bounding boxes in YOLOv3 324 predicting with regressors 303–304 bounding-box regressors 293, 296–297 build_discriminator function 367 build_model() function 328, 330C Cars Dataset, Stanford 372 categories 18 CCTV monitoring 405 cGAN (conditional GAN) 361 chain rule 88 channels value 122 CIFAR dataset 264–265 Inception performance on 229 ResNet performance on 238 CIFAR-10 dataset 99, 133, 185–186 class predictions 287, 322 classes 18 classes argument 237 Class_id label 328 classification 105 classification loss 308 classification module 18, 293, 298 classifier learning algorithms 33–34 classifiers 233 binary 54 in Keras 229 pretrained networks as 254–256 CLVR (cross-level vehicle re-identification) 430 CNNs (convolutional neural networks) adding dropout layers to avoid overfitting 124–128 advantages of 126 in CNN architecture 127 –128 overview of dropout layers 125 overview of overfitting 125 architecture of 102–105, 195–239 AlexNet 203 classification 105 feature extraction 104 GoogLeNet 217 –229 Inception 217 –229 LeNet-5 199 –203 ResNet 230 –238 VGGNet 212 –216 color images 128–132 computational complexity 130–132 convolution on 129 –130 convolutional layers 107–114 convolutional operations 108 –111 kernel size 112 –113 number of filters in 111 –112 overview of convolution 107 –108 padding 113 –114 strides 113 –114 design patterns 197–199 fully connected layers 119" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 447 CNNs (convolutional neural networks) (continued) image classification 92, 121–144 building model architecture 121 –122 number of parameters 123 –124 weights 123 –124 with color images 133 –144 with MLPs 93 –102 implementing feature visualizer 381–383 overview 102–103, 375–383 pooling layers 114–118 advantages of 
116 –117 convolutional layers 117 –118 max pooling vs. average pooling 115 –116 subsampling 114–118 visualizing features 377–381 coarse label 265 COCO datasets 320 code respositories 437 collecting data 162 color channel 198 color images 21–22, 128–132 computational complexity 130–132 converting to grayscale images 23–26 convolution on 129–130 image classification for 133–144 compiling models 140 –141 defining model architecture 137 –140 evaluating models 144 image preprocessing 134 –136 loading datasets 134 loading models with val_acc 143 training models 141 –143 color similarity 403 combined models 368–369 combined-image 395 compiling models 140–141 computation problem 242 computational complexity 130–132 computer vision. See CV (computer vision) conda activate deep_learning 440 conda list command 439 confidence threshold 289 confusion matrix 147–148 connection weights 38 content image 392 content loss 393–395 content_image 395 content_loss function 395 content_weight parameter 395 contrastive loss 410–411, 413 CONV_1 layer 122 CONV1 layer 207 CONV_2 layer 118, 123 CONV2 layer 208 Conv2D function 209CONV3 layer 208 CONV4 layer 208 CONV5 layer 208 ConvNet weights 259 convolution on color images 129–130 overview 107–108 convolutional layers 107–114, 117–118, 200, 212, 217 convolutional operations 108–111 kernel size 112–113 number of filters in 111–112 padding 113–114 strides 113–114 convolutional neural network 10 convolutional neural networks. 
See CNNs (convolutional neural networks) convolutional operations 108–111 correct prediction 291 cost functions 68 covariate shift defined 181–182 in neural networks 182–183 cross-entropy 71–72 cross-entropy loss 409–410 cuDNN 442 CV (computer vision) 3–35 applications of 10–15 creating images 13 –14 face recognition 15 image classification 10 –11 image recommendation systems 15 localization 12 neural style transfer 12 object detection 12 classifier learning algorithms 33–34 extracting features 27–33 automatically extracted features 31–33 handcrafted features 31 –33 features advantages of 33 overview 27 –31 image input 19–22 color images 21 –22 computer processing of images 21 images as functions 19 –20 image preprocessing 23–26 interpreting devices 8–10 pipeline 4, 17–19, 36 sensing devices 7 vision systems 5–6 AI vision systems 6 human vision systems 5 –6 visual perception 5" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 448 D DarkNet 324 Darknet-53 325 data augmenting 180–181 for image classification 187 in AlexNet 206 collecting 162 loading 331–332 mining 414–423 BA 419 BH 419 BS 421 –423 BW 421 dataloader 414 –416 finding useful triplets 416 –419 normalizing 154–155, 186 preparing for training 151–156, 186–187 preprocessing 153–156 augmenting images 156 grayscaling images 154 resizing images 154 splitting 151–153 data distillation 137 DataGenerator objects 331 dataloader 414–416 datasets downloading to GANs 364 Kaggle 267 loading 134 MNIST 203, 263 splitting for training 136 splitting for validation 136 validation datasets 152 DCGANs (deep convolutional generative adversar- ial networks) 345, 362, 365, 370 decay schedule 166–168 deep neural network 48 DeepDream 374, 384–399 algorithms in 385–387 in Keras 387–391 deltas 304 dendrites 38 dense layers 96, 120 See also fully connected layers Dense_1 layer 123 Dense_2 layer 123 dependencies, importing 185 deprocess_image(x) 383 derivatives 77, 80–81 design patterns 197–199 detection 
measuring speed of 289 multi-stage vs. single-stage 310diagnosing overfitting 156–158 underfitting 156–158 dilated convolutions 318 dilation rate 318 dimensionality reduction with Inception 220–223 1 × 1 convolutional layer 220–221 impact on network performance 222 direction 79–80 discriminator 343, 351 discriminator models 345–348 discriminator_model method 346, 352 discriminators in GANs 367 training 352 DL (deep learning) environments conda environment 440 loading environments 441 manual development environments 439–440 saving environments 441 setting up 439–441 dropout hyperparameter 51 dropout layers 179–180 adding to avoid overfitting 124–128 advantages of 126 in AlexNet 206 in CNN architecture 127–128 overview 125 dropout rate 179 dropout regularization 215 E early stopping 175–177 EC2 Management Console 442 EC2 On-Demand Pricing page 442 edges 46 embedding networks, training 423–431 finding similar items 424 implementation 426 object re-identification 424–426 testing trained models 427–431 object re-identification 428 –431 retrievals 427 –428 embedding space 401 endAnaconda 438 environments conda 440 developing manually 439–440 loading 441 saving 441 epochs 85, 169, 190 number of 51, 175–177 training 353–354" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 449 error functions 68–73 advantages of 69 cross-entropy 71–72 errors 72–73 mean squared error 70–71 overview 69 weights 72–73 errors 72–73 evaluate() method 191, 274, 280 evaluation schemes 358–359 Evaluator class 397 exhaustive search algorithm 294 exploding gradients 230 exponential decay 170 F f argument 234 face identification 15, 402 face recognition (FR) 15, 402 face verification 15, 402 false negatives (FN) 148–149, 291 false positives (FP) 148–149, 289 False setting 394 Fashion-MNIST 264, 363, 372 fashion_mnist.load_data() method 341 Fast R-CNNs (region-based convolutional neural networks) 297–299 architecture of 297 disadvantages of 299 multi-task loss function in 
298–299 Faster R-CNNs (region-based convolutional neural networks) architecture of 300 base network to extract features 301–302 fully connected layers 306–307 multi-task loss function 307–308 object detection with 300–308 RPNs 302–306 anchor boxes 304 –305 predicting bounding box with regressor 303–304 training 305 –306 FC layer 208 FCNs (fully convolutional networks) 48, 120, 303 feature extraction 104, 301–302 automatically 31–33 handcrafted features 31–33 feature extractors 232, 244, 256–258, 297 feature maps 103–104, 108, 241, 243, 250, 252, 396–397 feature vector 18 feature visualizer 381–383 feature_layers 397features advantages of 33 handcrafted 31–33 learning 65–66, 252–253 overview 27–31 transferring 254 visualizing 377–381 feedforward process 62–66 feedforward calculations 64 learning features 65–66 FID (Fréchet inception distance) 357–358 filter hyperparameter 138 filter_index 381 filters 111–112 filters argument 117, 234 fine label 265 fine-tuning 258–259 advantages of 259 learning rates when 259 transfer learning 274–282 .fit() method 141 fit_generator() function 332 Flatten layer 95–96, 123, 208, 276 flattened vector 119 FLOPs (floating-point operations per second) 77 flow_from_directory() method 269, 276 foreground region 286, 306 FPS (frames per second) 289, 311 freezing layers 247 F-score 149 fully connected layers 101, 119, 212, 306–307 functions images as 19–20 training 369–370 G gallery set 423 GANs (generative adversarial networks) 341–373, 430 applications for 359–362 image-to-image translation 360 –361 Pix2Pix GAN 360 –361 SRGAN 361 –362 text-to-photo synthesis 359 –360 architecture of 343–356 DCGANs 345 discriminator models 345 –348 generator models 348 –350 minimax function 354 –356 building 362–372 combined models 368–369 discriminators 367 downloading datasets 364" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 450 GANs (generative adversarial networks) (continued) evaluating models of 357–359 choosing 
evaluation scheme 358 –359 FID 358 inception score 358 generators 365–366 importing libraries 364 training 351–354, 370–372 discriminators 352 epochs 353 –354 generators 352 –353 training functions 369–370 visualizing datasets 364 generative models 342 generator models 348–350 generator_model function 349 generators 343, 351 in GANs 365–366 training 352–353 global average pooling 115 global minima 83 Google Open Images 267 GoogLeNet 217–229 architecture of 226–227 in Keras 225–229 building classifiers 229 building inception modules 228 –229 building max-pooling layers 228 –229 building network 227 learning hyperparameters in 229 GPUs (graphics processing units) 190, 207, 268, 296, 326, 372, 414, 441 gradient ascent 377 gradient descent (GD) 84–86, 155, 166–167, 184, 377 overview 78 with momentum 174–175 gradients function 382 gram matrix 396–397 graph transformer network 201 grayscaling converting color images 23–26 images 154 ground truth bounding box 289–290, 305 GSTE (group-sensitive triplet embedding) 430 H hard data mining 416 hard negative sample 417 hard positive sample 417 heaviside step function 54, 60 height value 122 hidden layers 46–47, 50, 62, 65, 119, 203 hidden units 111high-recall model 149 human in the loop 162 human vision systems 5–6 hyperbolic tangent function 61 hyperparameters learning in AlexNet 210 in GoogLeNet 229 in Inception 229 in LeNet-5 202 –203 in ResNet 238 in VGGNet 216 neural network hyperparameters 163–164 parameters vs. 163 tuning 162–165 collecting data vs. 162 neural network hyperparameters 163 –164 parameters vs. 
hyperparameters 163 I identity function 53, 60 if-else statements 30 image classification 10–11 for color images 133–144 compiling models 140 –141 defining model architecture 137 –140 evaluating models 144 image preprocessing 134 –136 loading datasets 134 loading models with val_acc 143 training models 141 –143 with CNNs 121–124 building model architecture 121 –122 number of parameters 123 –124 weights 123 –124 with high accuracy 185–192 building model architecture 187 –189 evaluating models 191 –192 importing dependencies 185 preparing data for training 186 –187 training models 190 –191 with MLPs 93–102 drawbacks of 99 –102 hidden layers 96 input layers 94 –96 output layers 96 image classifier 18 image flattening 95 image preprocessing 33 image recommendation systems 15, 403 ImageDataGenerator class 181, 269, 276 ImageNet 265–266 ImageNet Large Scale Visual Recognition Chal- lenge (ILSVRC) 204, 211, 224, 230, 266, 293" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 451 images 19–22 as functions 19–20 augmenting 156 color images 21–22 computer processing of 21 creating 13–14 grayscaling 154 preprocessing 23–26, 134–136 converting color to grayscale 23 –26 one-hot encoding 135 preparing labels 135 splitting datasets for training 136 splitting datasets for validation 136 rescaling 135 resizing 154 image-to-image translation 360–361 Inception 217–229 architecture of 223–224 features of 217–218 learning hyperparameters in 229 modules 222–223 naive version 218–219 performance on CIFAR dataset 229 with dimensionality reduction 220–223 1 × 1 convolutional layer 220 –221 impact on network performance 222 inception modules 217, 228–229, 324 inception scores 358 inception_module function 225–226 include_top argument 247, 255, 394 input image 33, 385 input layers 46, 62 input vector 39 input_shape argument 122, 188, 237 instances 443 interpreting devices 8–10 IoU (intersection over union) 289–291, 319 J Jaccard distance 432 joint training 401 
Jupyter notebooks 443–444 K K object classes 296 Kaggle datasets 267 Keras API AlexNet in 207–210 batch normalization in 184 DeepDream in 387–391 GoogLeNet in 225–229 building classifiers 229 building inception modules 228 –229building max-pooling layers 228 –229 building network 227 LeNet-5 in 200–201 ResNet in 235–237 keras.datasets 134 keras_ssd7.py file 328 kernel 107 kernel size 112–113 kernel_size hyperparameter 138, 187 L L2 regularization 177–179 label smoothing 432 labeled data 6 labeled images 11 LabelImg application 328 labels 135 lambda parameter 178 lambda value 207 layer_name 382 layers 47–48, 138 1 × 1 convolutional 220–221 dropout 179–180 adding to avoid overfitting 124 –128 advantages of 126 in AlexNet 206 in CNN architecture 127 –128 overview 125 fully connected 101, 119, 306–307 hidden 47 representing style features 396 Leaky ReLU 61–62, 165 learning 166–173 adaptive 170–171 decay schedule 166–168 embedding 406–407 features 65–66, 252–253 finding optimal learning rate 169–170 hyperparameters in AlexNet 210 in GoogLeNet 229 in Inception 229 in LeNet-5 202 –203 in ResNet 238 in VGGNet 216 mini-batch size 171–173 See also transfer learning learning curves, plotting 158–159, 191 learning rates 166–168 batch gradient descent 79–80 decay 170–171 derivative and 80 optimal, finding 169–170 when fine-tuning 259" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 452 LeNet-5 199–203 architecture of 199 in Keras 200–201 learning hyperparameters in 202–203 on MNIST dataset 203 libraries in GANs 364 linear combination 40 linear datasets 45 linear decay 170 linear transfer function 53, 60 load_data() method 134 load_dataset() method 273, 280 loading data 331–332 datasets 134 environments 441 models 143 local minima 83 local response normalization 206 localization 12 localization module 293 locally connected layers 101 LocalResponseNorm layer 227 location loss 308 logistic function 55, 61 loss content loss 393–395 runtime analysis of 
412–413 total variance 397 visualizing 334 loss functions 407–413 contrastive loss 410 cross-entropy loss 409–410 naive implementation 412–413 triplet loss 411–412 loss value 142–143, 191, 334 lr variable 169 lr_schedule function 202 M MAC (multiplier–accumulator) 426 MAC operation 426 machine learning human brain vs. 10 with handcrafted features 31 main path 233 make_blobs 160 matrices 67 matrix multiplication 67 max pooling 115–116, 200 max-pooling layers 228–229 mean absolute error (MAE) 71 mean average precision (mAP) 285, 289, 292, 317, 424, 427mean squared error (MSE) 70–71 Mechanical Turk crowdsourcing tool, Amazon 266 metrics 140 min_delta argument 177 mini-batch gradient descent (MB-GD) 77, 84–85, 173, 238 mini-batch size 171–173 minimax function 354–356 mining data 414–423 BA419 BH 419 BS421–423 BW 421 dataloader 414–416 finding useful triplets 416–419 mixed2 layer 389 mixed3 layer 389 mixed4 layer 389 mixed5 layer 389 MLPs (multilayer perceptrons) 45 architecture of 46–47 hidden layers 47 image classification with 93–102 drawbacks of 99 –102 hidden layers 96 input layers 94 –96 output layers 96 layers 47–48 nodes 47–48 MNIST (Modified National Institute of Standards and Technology) dataset 98, 101, 121, 203, 263, 354 models architecture of 121–122, 137–140, 187–189 building 328, 330 compiling 140–141 configuring 329 designing 149–150 evaluating 144, 156–161, 191–192 building networks 159 –161 diagnosing overfitting 156 –158 diagnosing underfitting 156 –158 evaluating networks 159 –161 plotting learning curves 158 –159, 191 training networks 159 –161 loading 143 of GANs choosing evaluation scheme 358 –359 evaluating 357 –359 FID 358 inception score 358 testing 427–431 object re-identification 428 –431 retrievals 427 –428 training 141–143, 190–191, 333" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 453 momentum, gradient descent with 174–175 monitor argument 177 MS COCO (Microsoft Common Objects in Context) 266–267 
multi-scale detections 315–318 multi-scale feature layers 315–319 architecture of 318–319 multi-scale detections 315–318 multi-scale vehicle representation (MSVR) 431 multi-stage detectors 310 multi-task learning (MTL) 433 multi-task loss function 298–299, 307–308 N naive implementation 412–413 naive representation 218–219, 222 n-dimensional array 67 neg_pos_ratio 330 networks 162–165, 222 architecture of 137–140 activation type 165 depth of neural networks 164 –165 improving 164 –165 width of neural networks 164 –165 building 159–161 evaluating 159–161 improving 162–165 in Keras 227 measuring precision of 289 predictions 287–288 pretrained as classifiers 254 –256 as feature extractors 256 –258 to extract features 301–302 training 159–161, 397–398 neural networks 36–91 activation functions 51–60 binary classifier 54 heaviside step function 54 leaky ReLU 59 –60 linear transfer function 53 logistic function 55 ReLU 58 –59 sigmoid function 55 softmax function 57 tanh 58 –59 backpropagation 86–90 covariate shift in 182–183 depth of 164–165 error functions 68–73 advantages of 69 cross-entropy 71 –72 errors 72 –73MSE 70 –71 overview 69 weights 72 –73 feedforward process 62–66 feedforward calculations 64 learning features 65 –66 hyperparameters in 163–164 learning features 252–253 multilayer perceptrons 45–51 architecture of 46 –47 hidden layers 47 layers 47 –48 nodes 47 –48 optimization 74–77 optimization algorithms 74–86 batch gradient descent 77 –83 gradient descent 85 –86 MB-GD 84 stochastic gradient descent 83 –84 overview 376–377 perceptrons 37–45 learning logic of 43 neurons 43 –45 overview 38 –42 width of 164–165 neural style transfer 12, 374, 392–399 content loss 393–395 network training 397–398 style loss 396–397 gram matrix for measuring jointly activated feature maps 396 –397 multiple layers for representing style features 396 total variance loss 397 neurons 8, 38, 40, 43–45, 206 new_model 248 NMS (non-maximum suppression) 285, 288–289, 319 no free lunch 
theorem 163 node values 63 nodes 46–48 noise loss 393 nonlinear datasets 44–45 nonlinearities 51 non-trainable params 124 normalizing data 154–155, 186 nstaller/application.yaml file 440 O oad_weights() method 143 object detection 12 framework 285–292 network predictions 287 –288 NMS 288 –289" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 454 object detection (continued) object-detector evaluation metrics 289 –292 region proposals 286 –287 with Fast R-CNNs 297–299 architecture of 297 disadvantages of 299 multi-task loss function in 298 –299 with Faster R-CNNs 300–308 architecture of 300 base network to extract features 301 –302 fully connected layers 306 –307 multi-task loss function 307 –308 RPNs 302 –306 with R-CNNs 283–297, 310–337 disadvantages of 296 –297 limitations of 310 multi-stage detectors vs. single-stage detectors 310 training 296 with SSD 283–310, 319–337 architecture of 311 –313 base networks 313 –314 multi-scale feature layers 315 –319 NMS 319 training SSD networks 326 –335 with YOLOv3 283–320, 325–337 architecture of 324 –325 overview 321 –324 object re-identification 405–406, 424–426, 428–431 object-detector evaluation metrics 289–292 FPS to measure detection speed 289 IoU 289–291 mAP to measure network precision 289 PR CURVE 291–292 objectness score 285–286, 313, 322 octaves 385–386 offline training 414 one-hot encoding 135, 187 online learning 171 Open Images Challenge 267 open source datasets 262–267 CIFAR 264–265 Fashion-MNIST 264 Google Open Images 267 ImageNet 265–266 Kaggle 267 MNIST 263 MS COCO 266–267 optimal weights 74 optimization 74–77 optimization algorithms 74–86, 174–177 Adam (adaptive moment estimation) 175 batch gradient descent 77–83 derivative 80 direction 79 –80gradient 79 learning rate 79 –80 pitfalls of 82 –83 step size 79 –80 early stopping 175–177 gradient descent 85–86 overview 78 with momentum 174 –175 MB-GD 84 number of epochs 175–177 stochastic gradient descent 83–84 optimization value 74 
optimized weights 250 optimizer 352 output layer 40, 47, 62, 119 Output Shape columns 122 outside-inside rule 88 overfitting adding dropout layers to avoid 124–128 diagnosing 156–158 overview 125 regularization techniques to avoid 177–181 augmenting data 180–181 dropout layers 179–180 L2 regularization 177–179 P padding 113–114, 118, 138, 212, 219 PAMTRI (pose aware multi-task learning) 430 parameters calculating 123–124 hyperparameters vs. 163 non-trainable params 124 number of 123–124 overview 123 trainable params 124 params non-trainable 124 trainable 124 PASCAL VOC-2012 dataset 293 Path-LSTM 431 patience variable 177 .pem file 442 perceptrons 37–45 learning logic of 43 neurons 43–45 overview 38–42 step activation function 42 weighted sum function 40 performance metrics 146–149 accuracy 147 confusion matrix 147–148 F-score 149" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 455 performance metrics (continued) precision 148 recall 148 person re-identification 406 pip install 444 Pix2Pix GAN (generative adversarial network) 360–361 plot_generated_images() function 370 plotting learning curves 158–159, 191 POOL layer 208 POOL_1 layer 123 POOL_2 layer 123 pooling layers 114–118, 203, 217 advantages of 116–117 convolutional layers 117–118 max pooling vs. average pooling 115–116 PR CURVE (precision-recall curve) 291–292 precision 148, 289 predictions 335 across different scales 323–324 bounding box with regressors 303–304 for networks 287–288 with base network 314 preprocessing data 153–156 augmenting images 156 grayscaling images 154 normalizing data 154–155 resizing images 154 images 23–26, 134–136 converting color images to grayscale images 23–26 one-hot encoding 135 preparing labels 135 splitting datasets for training 136 splitting datasets for validation 136 pretrained model 244 pretrained networks as classifiers 254–256 as feature extractors 256–258, 268–274 priors 314 Q query sets 423 Quick, Draw! 
dataset, Google 372 R rate of change 81 R-CNNs (region-based convolutional neural networks) disadvantages of 296–297 limitations of 310 multi-stage detectors vs. single-stage detectors 310 object detection with 283–297, 310–337 training 296 recall 148 receptive field 108, 213 reduce argument 235 reduce layer 220 reduce shortcut 234 region proposals 286–287 regions of interest (RoIs) 285–286, 293, 295–297, 306, 310 regression layer 299 regressors 303–304 regular shortcut 234 regularization techniques to avoid overfitting 177–181 augmenting data 180–181 dropout layers 179–180 L2 regularization 177–179 ReLU (rectified linear unit) 58–59, 61–62, 96, 106, 111, 118, 139, 151, 160, 165, 188, 199, 203, 205, 209, 212, 231, 366 activation functions 205 leaky 59–60 rescaling images 135 residual blocks 232–235 residual module architecture 230 residual notation 71 resizing images 154 ResNet (Residual Neural Network) 230–238 features of 230–233 in Keras 235–237 learning hyperparameters in 238 performance on CIFAR dataset 238 residual blocks 233–235 results, observing 370–372 retrievals 427–428 RGB (Red Green Blue) 21, 360 RoI extractor 297 RoI pooling layer 297, 300, 308 RoIs (regions of interest) 285–286, 293, 295–297, 306, 310 RPNs (region proposal networks) 302–306 anchor boxes 304–305 predicting bounding box with regressors 303–304 training 305–306 runtime analysis of losses 412–413 S s argument 235 save_interval 369 scalar 67 scalar multiplication 67 scales, predictions across 323–324" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 456 scipy.optimize.fmin_l_bfgs_b method 398 selective search algorithm 286, 294–295 semantic similarity 403 sensing devices 7 sensitivity 148 shortcut path 233 Siamese loss 410 sigmoid function 55, 61, 63, 205 single class 306 single-stage detectors 310 skip connections 230–231 sliding windows 101 slope 81 softmax function 50, 57, 61–62, 106, 189 Softmax layer 208, 248, 297 source domain 254 spatial features 99–100 
specificity 148 splitting data 151–153 datasets for training 136 for validation 136 SRGAN (super-resolution generative adversarial networks) 361 SSD (single-shot detector) architecture of 311–313 base network 313–314 multi-scale feature layers 315–319 architecture of multi-scale layers 318–319 multi-scale detections 315–318 non-maximum suppression 319 object detection with 283–310, 319–337 training networks 326–335 building models 328 configuring models 329 creating models 330 loading data 331–332 making predictions 335 training models 333 visualizing loss 334 SSDLoss function 330 ssh command 443 StackGAN (stacked generative adversarial network) 359 step activation function 42 step function 38 step functions. See heaviside step function step size 79–80 stochastic gradient descent (SGD) 77, 83–85, 171, 427 strides 113–114 style loss 396–397 gram matrix for measuring jointly activated feature maps 396–397 multiple layers for representing style features 396 style_loss function 393, 396 style_weight parameter 395 subsampling 114–118 supervised learning 6 suppression. 
See NMS (non-maximum suppression) synapses 38 synset (synonym set) 204, 266 T tanh (hyperbolic tangent function) 58–59 tanh activation function 200, 205 Tensorflow playground 165 tensors 67 testing trained model 427–431 object re-identification 428–431 retrievals 427–428 test_path variable 270 test_targets 280 test_tensors 280 text-to-photo synthesis 359–360 TN (true negatives) 147 to_categorical function 160, 187 top-1 error rate 211 top-5 error rate 211, 216 top-k accuracy 427 Toronto Faces Dataset (TFD) 342 total variance loss 397 total variation loss 393 total_loss function 393 total_variation_loss function 397 total_variation_weight parameter 395 TP (true positives) 147, 289 train() function 369–370 trainable params 124 train_acc value 152, 162 train_error value 157, 176 training AlexNet 207 by trial and error 6 discriminators 352 embedding networks 423–431 finding similar items 424 implementation 426 object re-identification 424–426 testing trained models 427–431 epochs 353–354 functions 369–370 GANs 351–354, 370–372 generators 352–353 models 141–143, 190–191, 333 networks 159–161, 397–398" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 457 training (continued) preparing data for 151–156, 186–187 augmenting data 187 normalizing data 186 one-hot encode labels 187 preprocessing data 153–156 splitting data 151–153 R-CNNs 296 RPNs 305–306 splitting datasets for 136 SSD networks 326–335 building models 328 configuring models 329 creating models 330 loading data 331–332 making predictions 335 training models 333 visualizing loss 334 train_loss value 152 train_on_batch method 352 train_path variable 270 transfer functions 51 in GANs 369–370 linear 53 transfer learning 150, 240–282 approaches to 254–259 using pretrained network as classifier 254–256 using pretrained network as feature extractor 256–258 choosing level of 260–262 when target dataset is large and different from source dataset 261 when target dataset is large and 
similar to source dataset 261 when target dataset is small and different from source 261 when target dataset is small and similar to source dataset 260–261 fine-tuning 258–259, 274–282 open source datasets 262–267 CIFAR 264–265 Fashion-MNIST 264 Google Open Images 267 ImageNet 265–266 Kaggle 267 MNIST 263 MS COCO 266–267 overview 243–254 neural networks learning features 252–253 transferring features 254 pretrained networks as feature extractors 268–274 when to use 241–243 transferring features 254 transposition 68 triplet loss 410–412 triplets, finding 416–419 tuning hyperparameters 162–165 collecting data vs. 162 neural network hyperparameters 163–164 parameters vs. hyperparameters 163 U underfitting 125, 143, 156–158 untrained layers 248 Upsampling layer 350 Upsampling2D layer 348 V val_acc 143 val_acc value 142–143, 152, 162 val_error value 157, 176 validation datasets overview 152 splitting 136 valid_path variable 270 val_loss value 142–143, 152, 169, 191, 334 VAMI (viewpoint attentive multi-view inference) 430 vanishing gradients 62, 230 vector space 67, 401 VeRi dataset 424–425, 428, 430 VGG16 configuration 213, 215–216, 311, 313, 381 VGG19 configuration 213–214, 314 VGGNet (Visual Geometry Group at Oxford University) 212–216 configurations 213–216 features of 212–213 learning hyperparameters in 216 performance 216 vision systems 5–6 AI 6 human 5–6 visual embedding layer 401 visual embeddings 400–436 applications of 402–406 face recognition 402 image recommendation systems 403 object re-identification 405–406 learning embedding 406–407 loss functions 407–413 contrastive loss 410 cross-entropy loss 409–410 naive implementation 412–413 runtime analysis of losses 412–413 triplet loss 411–412" deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf,"INDEX 458 visual embeddings (continued) mining informative data 414–423 BA 419 BH 419 BS 421–423 BW 421 dataloader 414–416 finding useful triplets 416–419 training embedding networks 423–431 
finding similar items 424 implementation 426 object re-identification 424–426 testing trained models 427–431 visual perception 5 visualizing datasets 364 features 377–381 loss 334 VUIs (voice user interfaces) 4 W warm-up learning rate 432 weight connections 46 weight decay 178 weight layers 199 weight regularization 207 weighted sum 38 weighted sum function 40 weights 72–73, 123–124 calculating parameters 123–124 non-trainable params 124 trainable params 124 weights vector 39 width value 122 X X argument 234 x_test 186 x_train 186 x_valid 186 Y YOLOv3 (you only look once) architecture of 324–325 object detection with 283–320, 325–337 overview 321–324 output bounding boxes 324 predictions across different scales 323–324 Z zero-padding 114"
To download their free eBook in PDF, ePub, and Kindle formats, owners of this book should visit www.manning.com/books/deep-learning-for-vision-systems $49.99 / Can $65.99 [INCLUDING eBOOK] Deep Learning for Vision Systems DATA SCIENCE/COMPUTER VISION MANNING “From text and object detection to DeepDream and facial recognition . . . this book is comprehensive, approachable, and relevant for modern applications of deep learning to computer vision systems!” —Bojan Djurkovic, DigitalOcean “Real-world problem solving without drowning you in details. It elaborates concepts bit by bit, making them easy to assimilate.” —Burhan Ul Haq, Audit XPRT “An invaluable and comprehensive tour for anyone looking to build real-world vision systems.” —Richard Vaughan, Purple Monkey Collective “Shows you what’s behind modern technologies that allow computers to see things.” —Alessandro Campeis, Vimar" Deep-Learning-with-PyTorch.pdf,"MANNING Eli Stevens, Luca Antiga, Thomas Viehmann Foreword by Soumith Chintala" Deep-Learning-with-PyTorch.pdf," PRODUCTION SERVER CLOUD PRODUCTION (ONNX, JIT TORCHSCRIPT) TRAINED MODEL TRAINING LOOP BATCH TENSOR SAMPLE TENSORS DATA SOURCE MULTIPROCESS DATA LOADING UNTRAINED MODEL DISTRIBUTED TRAINING ON MULTIPLE SERVERS/GPUS" Deep-Learning-with-PyTorch.pdf,"Deep Learning with PyTorch ELI STEVENS, LUCA ANTIGA, AND THOMAS VIEHMANN FOREWORD BY SOUMITH CHINTALA MANNING SHELTER ISLAND" Deep-Learning-with-PyTorch.pdf,"For online information and ordering of this and other Manning books, please visit www.manning.com. The publisher offers discounts on this book when ordered in quantity. For more information, please contact Special Sales Department Manning Publications Co. 20 Baldwin Road PO Box 761 Shelter Island, NY 11964 Email: orders@manning.com ©2020 by Manning Publications Co. All rights reserved. 
No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in the book, and Manning Publications was aware of a trademark claim, the designations have been printed in initial caps or all caps. Recognizing the importance of preserving what has been written, it is Manning’s policy to have the books we publish printed on acid-free paper, and we exert our best efforts to that end. Recognizing also our responsibility to conserve the resources of our planet, Manning books are printed on paper that is at least 15 percent recycled and processed without the use of elemental chlorine. Manning Publications Co., 20 Baldwin Road, PO Box 761, Shelter Island, NY 11964. Development editor: Frances Lefkowitz Technical development editor: Arthur Zubarev Review editor: Ivan Martinović Production editor: Deirdre Hiam Copyeditor: Tiffany Taylor Proofreader: Katie Tennant Technical proofreader: Kostas Passadis Typesetter: Gordan Salinovic Cover designer: Marija Tudor ISBN 9781617295263 Printed in the United States of America 1 2 3 4 5 6 7 8 9 10 - SP - 24 23 22 21 20 19" Deep-Learning-with-PyTorch.pdf," To my wife (this book would not have happened without her invaluable support and partnership), my parents (I would not have happened without them), and my children (this book would have happened a lot sooner but for them). Thank you for being my home, my foundation, and my joy. —Eli Stevens Same :-) But, really, this is for you, Alice and Luigi. —Luca Antiga To Eva, Rebekka, Jonathan, and David. 
—Thomas Viehmann " Deep-Learning-with-PyTorch.pdf," " Deep-Learning-with-PyTorch.pdf,"contents foreword xv preface xvii acknowledgments xix about this book xxi about the authors xxvii about the cover illustration xxviii PART 1 CORE PYTORCH ................................................1 1 Introducing deep learning and the PyTorch Library 3 1.1 The deep learning revolution 4 1.2 PyTorch for deep learning 6 1.3 Why PyTorch? 7 The deep learning competitive landscape 8 1.4 An overview of how PyTorch supports deep learning projects 10 1.5 Hardware and software requirements 13 Using Jupyter Notebooks 14 1.6 Exercises 15 1.7 Summary 15" Deep-Learning-with-PyTorch.pdf,"CONTENTS vi 2 Pretrained networks 16 2.1 A pretrained network that recognizes the subject of an image 17 Obtaining a pretrained network for image recognition 19 AlexNet 20■ResNet 22■Ready, set, almost run 22 Run! 25 2.2 A pretrained model that fakes it until it makes it 27 The GAN game 28■CycleGAN 29■A network that turns horses into zebras 30 2.3 A pretrained network that describes scenes 33 NeuralTalk2 34 2.4 Torch Hub 35 2.5 Conclusion 37 2.6 Exercises 38 2.7 Summary 38 3 It starts with a tensor 39 3.1 The world as floating-point numbers 40 3.2 Tensors: Multidimensional arrays 42 From Python lists to PyTorch tensors 42■Constructing our first tensors 43■The essence of tensors 43 3.3 Indexing tensors 46 3.4 Named tensors 46 3.5 Tensor element types 50 Specifying the numeric type with dtype 50■A dtype for every occasion 51■Managing a tensor’s dtype attribute 51 3.6 The tensor API 52 3.7 Tensors: Scenic views of storage 53 Indexing into storage 54■Modifying stored values: In-place operations 55 3.8 Tensor metadata: Size, offset, and stride 55 Views of another tensor’s storage 56■Transposing without copying 58■Transposing in higher dimensions 60 Contiguous tensors 60 3.9 Moving tensors to the GPU 62 Managing a tensor’s device attribute 63" Deep-Learning-with-PyTorch.pdf,"CONTENTS vii 3.10 NumPy
interoperability 64 3.11 Generalized tensors are tensors, too 65 3.12 Serializing tensors 66 Serializing to HDF5 with h5py 67 3.13 Conclusion 68 3.14 Exercises 68 3.15 Summary 68 4 Real-world data representation using tensors 70 4.1 Working with images 71 Adding color channels 72■Loading an image file 72 Changing the layout 73■Normalizing the data 74 4.2 3D images: Volumetric data 75 Loading a specialized format 76 4.3 Representing tabular data 77 Using a real-world dataset 77■Loading a wine data tensor 78 Representing scores 81■One-hot encoding 81■When to categorize 83■Finding thresholds 84 4.4 Working with time series 87 Adding a time dimension 88■Shaping the data by time period 89■Ready for training 90 4.5 Representing text 93 Converting text to numbers 94■One-hot-encoding characters 94 One-hot encoding whole words 96■Text embeddings 98 Text embeddings as a blueprint 100 4.6 Conclusion 101 4.7 Exercises 101 4.8 Summary 102 5 The mechanics of learning 103 5.1 A timeless lesson in modeling 104 5.2 Learning is just parameter estimation 106 A hot problem 107■Gathering some data 107■Visualizing the data 108■Choosing a linear model as a first try 108 5.3 Less loss is what we want 109 From problem back to PyTorch 110" Deep-Learning-with-PyTorch.pdf,"CONTENTS viii 5.4 Down along the gradient 113 Decreasing loss 113■Getting analytical 114■Iterating to fit the model 116■Normalizing inputs 119■Visualizing (again) 122 5.5 PyTorch’s autograd: Backpropagating all things 123 Computing the gradient automatically 123■Optimizers a la carte 127■Training, validation, and overfitting 131 Autograd nits and switching it off 137 5.6 Conclusion 139 5.7 Exercise 139 5.8 Summary 139 6 Using a neural network to fit the data 141 6.1 Artificial neurons 142 Composing a multilayer network 144■Understanding the error function 144■All we need is activation 145■More activation functions 147■Choosing the best activation function 148 What learning means for a neural network 149 6.2 The PyTorch nn 
module 151 Using __call__ rather than forward 152■Returning to the linear model 153 6.3 Finally a neural network 158 Replacing the linear model 158■Inspecting the parameters 159 Comparing to the linear model 161 6.4 Conclusion 162 6.5 Exercises 162 6.6 Summary 163 7 Telling birds from airplanes: Learning from images 164 7.1 A dataset of tiny images 165 Downloading CIFAR-10 166■The Dataset class 166 Dataset transforms 168■Normalizing data 170 7.2 Distinguishing birds from airplanes 172 Building the dataset 173■A fully connected model 174 Output of a classifier 175■Representing the output as probabilities 176■A loss for classifying 180■Training the classifier 182■The limits of going fully connected 189 7.3 Conclusion 191" Deep-Learning-with-PyTorch.pdf,"CONTENTS ix 7.4 Exercises 191 7.5 Summary 192 8 Using convolutions to generalize 193 8.1 The case for convolutions 194 What convolutions do 194 8.2 Convolutions in action 196 Padding the boundary 198■Detecting features with convolutions 200■Looking further with depth and pooling 202 Putting it all together for our network 205 8.3 Subclassing nn.Module 207 Our network as an nn.Module 208■How PyTorch keeps track of parameters and submodules 209■The functional API 210 8.4 Training our convnet 212 Measuring accuracy 214■Saving and loading our model 214 Training on the GPU 215 8.5 Model design 217 Adding memory capacity: Width 218■Helping our model to converge and generalize: Regularization 219■Going deeper to learn more complex structures: Depth 223■Comparing the designs from this section 228■It’s already outdated 229 8.6 Conclusion 229 8.7 Exercises 230 8.8 Summary 231 PART 2 LEARNING FROM IMAGES IN THE REAL WORLD: EARLY DETECTION OF LUNG CANCER .......................233 9 Using PyTorch to fight cancer 235 9.1 Introduction to the use case 236 9.2 Preparing for a large-scale project 237 9.3 What is a CT scan, exactly? 
238 9.4 The project: An end-to-end detector for lung cancer 241 Why can’t we just throw data at a neural network until it works? 245■What is a nodule? 249■Our data source: The LUNA Grand Challenge 251■Downloading the LUNA data 251" Deep-Learning-with-PyTorch.pdf,"CONTENTS x 9.5 Conclusion 252 9.6 Summary 253 10 Combining data sources into a unified dataset 254 10.1 Raw CT data files 256 10.2 Parsing LUNA’s annotation data 256 Training and validation sets 258■Unifying our annotation and candidate data 259 10.3 Loading individual CT scans 262 Hounsfield Units 264 10.4 Locating a nodule using the patient coordinate system 265 The patient coordinate system 265■CT scan shape and voxel sizes 267■Converting between millimeters and voxel addresses 268■Extracting a nodule from a CT scan 270 10.5 A straightforward dataset implementation 271 Caching candidate arrays with the getCtRawCandidate function 274■Constructing our dataset in LunaDataset.__init__ 275■A training/validation split 275■Rendering the data 277 10.6 Conclusion 277 10.7 Exercises 278 10.8 Summary 278 11 Training a classification model to detect suspected tumors 279 11.1 A foundational model and training loop 280 11.2 The main entry point for our application 282 11.3 Pretraining setup and initialization 284 Initializing the model and optimizer 285■Care and feeding of data loaders 287 11.4 Our first-pass neural network design 289 The core convolutions 290■The full model 293 11.5 Training and validating the model 295 The computeBatchLoss function 297■The validation loop is similar 299 11.6 Outputting performance metrics 300 The logMetrics function 301" Deep-Learning-with-PyTorch.pdf,"CONTENTS xi 11.7 Running the training script 304 Needed data for training 305■Interlude: The enumerateWithEstimate function 306 11.8 Evaluating the model: Getting 99.7% correct means we’re done, right? 
308 11.9 Graphing training metrics with TensorBoard 309 Running TensorBoard 309■Adding TensorBoard support to the metrics logging function 313 11.10 Why isn’t the model learning to detect nodules? 315 11.11 Conclusion 316 11.12 Exercises 316 11.13 Summary 316 12 Improving training with metrics and augmentation 318 12.1 High-level plan for improvement 319 12.2 Good dogs vs. bad guys: False positives and false negatives 320 12.3 Graphing the positives and negatives 322 Recall is Roxie’s strength 324■Precision is Preston’s forte 326 Implementing precision and recall in logMetrics 327■Our ultimate performance metric: The F1 score 328■How does our model perform with our new metrics? 332 12.4 What does an ideal dataset look like? 334 Making the data look less like the actual and more like the “ideal” 336■Contrasting training with a balanced LunaDataset to previous runs 341■Recognizing the symptoms of overfitting 343 12.5 Revisiting the problem of overfitting 345 An overfit face-to-age prediction model 345 12.6 Preventing overfitting with data augmentation 346 Specific data augmentation techniques 347■Seeing the improvement from data augmentation 352 12.7 Conclusion 354 12.8 Exercises 355 12.9 Summary 356 13 Using segmentation to find suspected nodules 357 13.1 Adding a second model to our project 358 13.2 Various types of segmentation 360" Deep-Learning-with-PyTorch.pdf,"CONTENTS xii 13.3 Semantic segmentation: Per-pixel classification 361 The U-Net architecture 364 13.4 Updating the model for segmentation 366 Adapting an off-the-shelf model to our project 367 13.5 Updating the dataset for segmentation 369 U-Net has very specific input size requirements 370■U-Net trade-offs for 3D vs. 
2D data 370■Building the ground truth data 371■Implementing Luna2dSegmentationDataset 378 Designing our training and validation data 382■Implementing TrainingLuna2dSegmentationDataset 383■Augmenting on the GPU 384 13.6 Updating the training script for segmentation 386 Initializing our segmentation and augmentation models 387 Using the Adam optimizer 388■Dice loss 389■Getting images into TensorBoard 392■Updating our metrics logging 396 Saving our model 397 13.7 Results 399 13.8 Conclusion 401 13.9 Exercises 402 13.10 Summary 402 14 End-to-end nodule analysis, and where to go next 404 14.1 Towards the finish line 405 14.2 Independence of the validation set 407 14.3 Bridging CT segmentation and nodule candidate classification 408 Segmentation 410■Grouping voxels into nodule candidates 411 Did we find a nodule? Classification to reduce false positives 412 14.4 Quantitative validation 416 14.5 Predicting malignancy 417 Getting malignancy information 417■An area under the curve baseline: Classifying by diameter 419■Reusing preexisting weights: Fine-tuning 422■More output in TensorBoard 428 14.6 What we see when we diagnose 432 Training, validation, and test sets 433 14.7 What next? Additional sources of inspiration (and data) 434 Preventing overfitting: Better regularization 434■Refined training data 437■Competition results and research papers 438" Deep-Learning-with-PyTorch.pdf,"CONTENTS xiii 14.8 Conclusion 439 Behind the curtain 439 14.9 Exercises 441 14.10 Summary 441 PART 3 DEPLOYMENT ................... ................... 
...............443 15 Deploying to production 445 15.1 Serving PyTorch models 446 Our model behind a Flask server 446■What we want from deployment 448■Request batching 449 15.2 Exporting models 455 Interoperability beyond PyTorch with ONNX 455■PyTorch’s own export: Tracing 456■Our server with a traced model 458 15.3 Interacting with the PyTorch JIT 458 What to expect from moving beyond classic Python/PyTorch 458 The dual nature of PyTorch as interface and backend 460■TorchScript 461■Scripting the gaps of traceability 464 15.4 LibTorch: PyTorch in C++ 465 Running JITed models from C++ 465■C++ from the start: The C++ API 468 15.5 Going mobile 472 Improving efficiency: Model design and quantization 475 15.6 Emerging technology: Enterprise serving of PyTorch models 476 15.7 Conclusion 477 15.8 Exercises 477 15.9 Summary 477 index 479 " Deep-Learning-with-PyTorch.pdf," " Deep-Learning-with-PyTorch.pdf,"foreword When we started the PyTorch project in mid-2016, we were a band of open source hackers who met online and wanted to write better deep learning software. Two of the three authors of this book, Luca Antiga and Thomas Viehmann, were instrumental in developing PyTorch and making it the success that it is today. Our goal with PyTorch was to build the most flexible framework possible to express deep learning algorithms. We executed with focus and had a relatively short development time to build a polished product for the developer market. This wouldn’t have been possible if we hadn’t been standing on the shoulders of giants. PyTorch derives a significant part of its codebase from the Torch7 project started in 2007 by Ronan Collobert and others, which has roots in the Lush programming language pioneered by Yann LeCun and Leon Bottou. This rich history helped us focus on what needed to change, rather than conceptually starting from scratch. It is hard to attribute the success of PyTorch to a single factor. 
The project offers a good user experience and enhanced debuggability and flexibility, ultimately making users more productive. The huge adoption of PyTorch has resulted in a beautiful ecosystem of software and research built on top of it, making PyTorch even richer in its experience. Several courses and university curricula, as well as a huge number of online blogs and tutorials, have been offered to make PyTorch easier to learn. However, we have seen very few books. In 2017, when someone asked me, “When is the PyTorch book going to be written?” I responded, “If it gets written now, I can guarantee that it will be outdated by the time it is completed.”" Deep-Learning-with-PyTorch.pdf,"FOREWORD xvi With the publication of Deep Learning with PyTorch, we finally have a definitive treatise on PyTorch. It covers the basics and abstractions in great detail, tearing apart the underpinnings of data structures like tensors and neural networks and making sure you understand their implementation. Additionally, it covers advanced subjects such as JIT and deployment to production (an aspect of PyTorch that no other book currently covers). The book also covers applications, taking you through the steps of using neural networks to help solve a complex and important medical problem. With Luca’s deep expertise in bioengineering and medical imaging, Eli’s practical experience creating software for medical devices and detection, and Thomas’s background as a PyTorch core developer, this journey is treated carefully, as it should be. All in all, I hope this book becomes your “extended” reference document and an important part of your library or workshop. 
SOUMITH CHINTALA, COCREATOR OF PYTORCH" Deep-Learning-with-PyTorch.pdf,"preface As kids in the 1980s, taking our first steps on our Commodore VIC 20 (Eli), the Sinclair Spectrum 48K (Luca), and the Commodore C16 (Thomas), we saw the dawn of personal computers, learned to code and write algorithms on ever-faster machines, and often dreamed about where computers would take us. We also were painfully aware of the gap between what computers did in movies and what they could do in real life, collectively rolling our eyes when the main character in a spy movie said, “Computer, enhance.” Later on, during our professional lives, two of us, Eli and Luca, independently challenged ourselves with medical image analysis, facing the same kind of struggle when writing algorithms that could handle the natural variability of the human body. There was a lot of heuristics involved when choosing the best mix of algorithms that could make things work and save the day. Thomas studied neural nets and pattern recognition at the turn of the century but went on to get a PhD in mathematics doing modeling. When deep learning came about at the beginning of the 2010s, making its initial appearance in computer vision, it started being applied to medical image analysis tasks like the identification of structures or lesions on medical images. It was at that time, in the first half of the decade, that deep learning appeared on our individual radars. It took a bit to realize that deep learning represented a whole new way of writing software: a new class of multipurpose algorithms that could learn how to solve complicated tasks through the observation of data." Deep-Learning-with-PyTorch.pdf,"PREFACE xviii To our kids-of-the-80s minds, the horizon of what computers could do expanded overnight, limited not by the brains of the best programmers, but by the data, the neural network architecture, and the training process. The next step was getting our hands dirty. 
Luca chose Torch 7 (http://torch.ch), a venerable precursor to PyTorch; it’s nimble, lightweight, and fast, with approachable source code written in Lua and plain C, a supportive community, and a long history behind it. For Luca, it was love at first sight. The only real drawback with Torch 7 was being detached from the ever-growing Python data science ecosystem that the other frameworks could draw from. Eli had been interested in AI since college,1 but his career pointed him in other directions, and he found other, earlier deep learning frameworks a bit too laborious to get enthusiastic about using them for a hobby project. So we all got really excited when the first PyTorch release was made public on January 18, 2017. Luca started contributing to the core, and Eli was part of the community very early on, submitting the odd bug fix, feature, or documentation update. Thomas contributed a ton of features and bug fixes to PyTorch and eventually became one of the independent core contributors. There was the feeling that something big was starting up, at the right level of complexity and with a minimal amount of cognitive overhead. The lean design lessons learned from the Torch 7 days were being carried over, but this time with a modern set of features like automatic differentiation, dynamic computation graphs, and NumPy integration. Given our involvement and enthusiasm, and after organizing a couple of PyTorch workshops, writing a book felt like a natural next step. The goal was to write a book that would have been appealing to our former selves getting started just a few years back. Predictably, we started with grandiose ideas: teach the basics, walk through end-to-end projects, and demonstrate the latest and greatest models in PyTorch. 
We soon realized that would take a lot more than a single book, so we decided to focus on our initial mission: devote time and depth to cover the key concepts underlying PyTorch, assuming little or no prior knowledge of deep learning, and get to the point where we could walk our readers through a complete project. For the latter, we went back to our roots and chose to demonstrate a medical image analysis challenge.

¹ Back when "deep" neural networks meant three hidden layers!

acknowledgments

We are deeply indebted to the PyTorch team. It is through their collective effort that PyTorch grew organically from a summer internship project to a world-class deep learning tool. We would like to mention Soumith Chintala and Adam Paszke, who, in addition to their technical excellence, worked actively toward adopting a "community first" approach to managing the project. The level of health and inclusiveness in the PyTorch community is a testament to their actions.

Speaking of community, PyTorch would not be what it is if not for the relentless work of individuals helping early adopters and experts alike on the discussion forum. Of all the honorable contributors, Piotr Bialecki deserves our particular badge of gratitude. Speaking of the book, a particular shout-out goes to Joe Spisak, who believed in the value that this book could bring to the community, and also Jeff Smith, who did an incredible amount of work to bring that value to fruition. Bruce Lin's work to excerpt part 1 of this text and provide it to the PyTorch community free of charge is also hugely appreciated.

We would like to thank the team at Manning for guiding us through this journey, always aware of the delicate balance between family, job, and writing in our respective lives. Thanks to Erin Twohey for reaching out and asking if we'd be interested in writing a book, and thanks to Michael Stephens for tricking us into saying yes. We told you we had no time!
Brian Hanafee went above and beyond a reviewer's duty. Arthur Zubarev and Kostas Passadis gave great feedback, and Jennifer Houle had to deal with our wacky art style. Our copy editor, Tiffany Taylor, has an impressive eye for detail; any mistakes are ours and ours alone. We would also like to thank our project editor, Deirdre Hiam, our proofreader, Katie Tennant, and our review editor, Ivan Martinović. There are also a host of people working behind the scenes, glimpsed only on the CC list of status update threads, and all necessary to bring this book to print. Thank you to every name we've left off this list! The anonymous reviewers who gave their honest feedback helped make this book what it is.

Frances Lefkowitz, our tireless editor, deserves a medal and a week on a tropical island after dragging this book over the finish line. Thank you for all you've done and for the grace with which you did it.

We would also like to thank our reviewers, who have helped to improve our book in many ways: Aleksandr Erofeev, Audrey Carstensen, Bachir Chihani, Carlos Andres Mariscal, Dale Neal, Daniel Berecz, Doniyor Ulmasov, Ezra Stevens, Godfred Asamoah, Helen Mary Labao Barrameda, Hilde Van Gysel, Jason Leonard, Jeff Coggshall, Kostas Passadis, Linnsey Nil, Mathieu Zhang, Michael Constant, Miguel Montalvo, Orlando Alejo Méndez Morales, Philippe Van Bergen, Reece Stevens, Srinivas K. Raman, and Yujan Shrestha.

To our friends and family, wondering what rock we've been hiding under these past two years: Hi! We missed you! Let's have dinner sometime.

about this book

This book has the aim of providing the foundations of deep learning with PyTorch and showing them in action in a real-life project. We strive to provide the key concepts underlying deep learning and show how PyTorch puts them in the hands of practitioners.
In the book, we try to provide intuition that will support further exploration, and in doing so we selectively delve into details to show what is going on behind the curtain. Deep Learning with PyTorch doesn't try to be a reference book; rather, it's a conceptual companion that will allow you to independently explore more advanced material online. As such, we focus on a subset of the features offered by PyTorch. The most notable absence is recurrent neural networks, but the same is true for other parts of the PyTorch API.

Who should read this book

This book is meant for developers who are or aim to become deep learning practitioners and who want to get acquainted with PyTorch. We imagine our typical reader to be a computer scientist, data scientist, or software engineer, or an undergraduate-or-later student in a related program. Since we don't assume prior knowledge of deep learning, some parts in the first half of the book may be a repetition of concepts that are already known to experienced practitioners. For those readers, we hope the exposition will provide a slightly different angle to known topics.

We expect readers to have basic knowledge of imperative and object-oriented programming. Since the book uses Python, you should be familiar with the syntax and operating environment. Knowing how to install Python packages and run scripts on your platform of choice is a prerequisite. Readers coming from C++, Java, JavaScript, Ruby, or other such languages should have an easy time picking it up but will need to do some catch-up outside this book. Similarly, being familiar with NumPy will be useful, if not strictly required. We also expect familiarity with some basic linear algebra, such as knowing what matrices and vectors are and what a dot product is.

How this book is organized: A roadmap

Deep Learning with PyTorch is organized in three distinct parts.
Part 1 covers the foundations, while part 2 walks you through an end-to-end project, building on the basic concepts introduced in part 1 and adding more advanced ones. The short part 3 rounds off the book with a tour of what PyTorch offers for deployment.

You will likely notice different voices and graphical styles among the parts. Although the book is a result of endless hours of collaborative planning, discussion, and editing, the act of writing and authoring graphics was split among the parts: Luca was primarily in charge of part 1 and Eli of part 2.² When Thomas came along, he tried to blend the style in part 3 and various sections here and there with the writing in parts 1 and 2. Rather than finding a minimum common denominator, we decided to preserve the original voices that characterized the parts.

Following is a breakdown of each part into chapters and a brief description of each.

PART 1

In part 1, we take our first steps with PyTorch, building the fundamental skills needed to understand PyTorch projects out there in the wild as well as starting to build our own. We'll cover the PyTorch API and some behind-the-scenes features that make PyTorch the library it is, and work on training an initial classification model. By the end of part 1, we'll be ready to tackle a real-world project.

Chapter 1 introduces PyTorch as a library and its place in the deep learning revolution, and touches on what sets PyTorch apart from other deep learning frameworks.

Chapter 2 shows PyTorch in action by running examples of pretrained networks; it demonstrates how to download and run models in PyTorch Hub.

Chapter 3 introduces the basic building block of PyTorch, the tensor, showing its API and going behind the scenes with some implementation details.

Chapter 4 demonstrates how different kinds of data can be represented as tensors and how deep learning models expect tensors to be shaped.
Chapter 5 walks through the mechanics of learning through gradient descent and how PyTorch enables it with automatic differentiation.

Chapter 6 shows the process of building and training a neural network for regression in PyTorch using the nn and optim modules.

Chapter 7 builds on the previous chapter to create a fully connected model for image classification and expand the knowledge of the PyTorch API.

Chapter 8 introduces convolutional neural networks and touches on more advanced concepts for building neural network models and their PyTorch implementation.

² A smattering of Eli's and Thomas's art appears in other parts; don't be shocked if the style changes mid-chapter!

PART 2

In part 2, each chapter moves us closer to a comprehensive solution to automatic detection of lung cancer. We'll use this difficult problem as motivation to demonstrate the real-world approaches needed to solve large-scale problems like cancer screening. It is a large project with a focus on clean engineering, troubleshooting, and problem solving.

Chapter 9 describes the end-to-end strategy we'll use for lung tumor classification, starting from computed tomography (CT) imaging.

Chapter 10 loads the human annotation data along with the images from CT scans and converts the relevant information into tensors, using standard PyTorch APIs.

Chapter 11 introduces a first classification model that consumes the training data introduced in chapter 10. We train the model and collect basic performance metrics. We also introduce using TensorBoard to monitor training.

Chapter 12 explores and implements standard performance metrics and uses those metrics to identify weaknesses in the training done previously. We then mitigate those flaws with an improved training set that uses data balancing and augmentation.
Chapter 13 describes segmentation, a pixel-to-pixel model architecture that we use to produce a heatmap of possible nodule locations that covers the entire CT scan. This heatmap can be used to find nodules on CT scans for which we do not have human-annotated data.

Chapter 14 implements the final end-to-end project: diagnosis of cancer patients using our new segmentation model followed by classification.

PART 3

Part 3 is a single chapter on deployment. Chapter 15 provides an overview of how to deploy PyTorch models to a simple web service, embed them in a C++ program, or bring them to a mobile phone.

About the code

All of the code in this book was written for Python 3.6 or later. The code for the book is available for download from Manning's website (www.manning.com/books/deep-learning-with-pytorch) and on GitHub (https://github.com/deep-learning-with-pytorch/dlwpt-code). Version 3.6.8 was current at the time of writing and is what we used to test the examples in this book. For example:

$ python
Python 3.6.8 (default, Jan 14 2019, 11:02:34)
[GCC 8.0.1 20180414] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

Command lines intended to be entered at a Bash prompt start with $ (for example, the $ python line in this example). Fixed-width inline code looks like self.

Code blocks that begin with >>> are transcripts of a session at the Python interactive prompt. The >>> characters are not meant to be considered input; text lines that do not start with >>> or ... are output. In some cases, an extra blank line is inserted before the >>> to improve readability in print. These blank lines are not included when you actually enter the text at the interactive prompt:

>>> print("Hello, world!")
Hello, world!

>>> print("Until next time...")
Until next time...

We also make heavy use of Jupyter Notebooks, as described in chapter 1, in section 1.5.1.
Code from a notebook that we provide as part of the official GitHub repository looks like this:

# In[1]:
print("Hello, world!")

# Out[1]:
Hello, world!

# In[2]:
print("Until next time...")

# Out[2]:
Until next time...

Almost all of our example notebooks contain the following boilerplate in the first cell (some lines may be missing in early chapters), which we skip including in the book after this point:

# In[1]:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.set_printoptions(edgeitems=2)
torch.manual_seed(123)

Otherwise, code blocks are partial or entire sections of .py source files.

Listing 15.1 main.py:5, def main

def main():
    print("Hello, world!")

if __name__ == '__main__':
    main()

(In the interactive-prompt transcripts shown earlier, the extra blank line before a >>> would not be present during an actual interactive session.)

Many of the code samples in the book are presented with two-space indents. Due to the limitations of print, code listings are limited to 80-character lines, which can be impractical for heavily indented sections of code. The use of two-space indents helps to mitigate the excessive line wrapping that would otherwise be present. All of the code available for download for the book (again, at www.manning.com/books/deep-learning-with-pytorch and https://github.com/deep-learning-with-pytorch/dlwpt-code) uses a consistent four-space indent.

Variables named with a _t suffix are tensors stored in CPU memory, _g are tensors in GPU memory, and _a are NumPy arrays.

Hardware and software requirements

Part 1 has been designed to not require any particular computing resources. Any recent computer or online computing resource will be adequate. Similarly, no certain operating system is required. In part 2, we anticipate that completing a full training run for the more advanced examples will require a CUDA-capable GPU.
The default parameters used in part 2 assume a GPU with 8 GB of RAM (we suggest an NVIDIA GTX 1070 or better), but the parameters can be adjusted if your hardware has less RAM available. The raw data needed for part 2's cancer-detection project is about 60 GB to download, and you will need a total of 200 GB (at minimum) of free disk space on the system that will be used for training. Luckily, online computing services recently started offering GPU time for free. We discuss computing requirements in more detail in the appropriate sections.

You need Python 3.6 or later; instructions can be found on the Python website (www.python.org/downloads). For PyTorch installation information, see the Get Started guide on the official PyTorch website (https://pytorch.org/get-started/locally). We suggest that Windows users install with Anaconda or Miniconda (https://www.anaconda.com/distribution or https://docs.conda.io/en/latest/miniconda.html). Other operating systems like Linux typically have a wider variety of workable options, with Pip being the most common package manager for Python. We provide a requirements.txt file that Pip can use to install dependencies. Since current Apple laptops do not include GPUs that support CUDA, the precompiled macOS packages for PyTorch are CPU-only. Of course, experienced users are free to install packages in the way that is most compatible with your preferred development environment.

liveBook discussion forum

Purchase of Deep Learning with PyTorch includes free access to a private web forum run by Manning Publications where you can make comments about the book, ask technical questions, and receive help from the authors and from other users. To access the forum, go to https://livebook.manning.com/#!/book/deep-learning-with-pytorch/discussion. You can learn more about Manning's forums and the rules of conduct at https://livebook.manning.com/#!/discussion.
Manning's commitment to our readers is to provide a venue where a meaningful dialogue between individual readers and between readers and the author can take place. It is not a commitment to any specific amount of participation on the part of the authors, whose contribution to the forum remains voluntary (and unpaid). We suggest you try asking them some challenging questions lest their interest stray! The forum and the archives of previous discussions will be accessible from the publisher's website as long as the book is in print.

Other online resources

Although this book does not assume prior knowledge of deep learning, it is not a foundational introduction to deep learning. We cover the basics, but our focus is on proficiency with the PyTorch library. We encourage interested readers to build up an intuitive understanding of deep learning either before, during, or after reading this book. Toward that end, Grokking Deep Learning (www.manning.com/books/grokking-deep-learning) is a great resource for developing a strong mental model and intuition about the mechanism underlying deep neural networks. For a thorough introduction and reference, we direct you to Deep Learning by Goodfellow et al. (www.deeplearningbook.org). And of course, Manning Publications has an extensive catalog of deep learning titles (www.manning.com/catalog#section-83) that cover a wide variety of topics in the space.

about the authors

Eli Stevens has spent the majority of his career working at startups in Silicon Valley, with roles ranging from software engineer (making enterprise networking appliances) to CTO (developing software for radiation oncology). At publication, he is working on machine learning in the self-driving-car industry.
Luca Antiga worked as a researcher in biomedical engineering in the 2000s, and spent the last decade as a cofounder and CTO of an AI engineering company. He has contributed to several open source projects, including the PyTorch core. He recently cofounded a US-based startup focused on infrastructure for data-defined software.

Thomas Viehmann is a machine learning and PyTorch specialty trainer and consultant based in Munich, Germany, and a PyTorch core developer. With a PhD in mathematics, he is not scared by theory, but he is thoroughly practical when applying it to computing challenges.

about the cover illustration

The figure on the cover of Deep Learning with PyTorch is captioned "Kardinian." The illustration is taken from a collection of dress costumes from various countries by Jacques Grasset de Saint-Sauveur (1757-1810), titled Costumes civils actuels de tous les peuples connus, published in France in 1788. Each illustration is finely drawn and colored by hand. The rich variety of Grasset de Saint-Sauveur's collection reminds us vividly of how culturally apart the world's towns and regions were just 200 years ago. Isolated from each other, people spoke different dialects and languages. In the streets or in the countryside, it was easy to identify where they lived and what their trade or station in life was just by their dress.

The way we dress has changed since then and the diversity by region, so rich at the time, has faded away. It is now hard to tell apart the inhabitants of different continents, let alone different towns, regions, or countries. Perhaps we have traded cultural diversity for a more varied personal life; certainly for a more varied and fast-paced technological life.
At a time when it is hard to tell one computer book from another, Manning celebrates the inventiveness and initiative of the computer business with book covers based on the rich diversity of regional life of two centuries ago, brought back to life by Grasset de Saint-Sauveur's pictures.

Part 1: Core PyTorch

Welcome to the first part of this book. This is where we'll take our first steps with PyTorch, gaining the fundamental skills needed to understand its anatomy and work out the mechanics of a PyTorch project.

In chapter 1, we'll make our first contact with PyTorch, understand what it is and what problems it solves, and how it relates to other deep learning frameworks. Chapter 2 will take us on a tour, giving us a chance to play with models that have been pretrained on fun tasks. Chapter 3 gets a bit more serious and teaches the basic data structure used in PyTorch programs: the tensor. Chapter 4 will take us on another tour, this time across ways to represent data from different domains as PyTorch tensors. Chapter 5 unveils how a program can learn from examples and how PyTorch supports this process. Chapter 6 provides the fundamentals of what a neural network is and how to build a neural network with PyTorch. Chapter 7 tackles a simple image classification problem with a neural network architecture. Finally, chapter 8 shows how the same problem can be cracked in a much smarter way using a convolutional neural network.

By the end of part 1, we'll have what it takes to tackle a real-world problem with PyTorch in part 2.

1 Introducing deep learning and the PyTorch Library

The poorly defined term artificial intelligence covers a set of disciplines that have been subjected to a tremendous amount of research, scrutiny, confusion, fantastical hype, and sci-fi fearmongering. Reality is, of course, far more sanguine.
This chapter covers
- How deep learning changes our approach to machine learning
- Understanding why PyTorch is a good fit for deep learning
- Examining a typical deep learning project
- The hardware you'll need to follow along with the examples

It would be disingenuous to assert that today's machines are learning to "think" in any human sense of the word. Rather, we've discovered a general class of algorithms that are able to approximate complicated, nonlinear processes very, very effectively, which we can use to automate tasks that were previously limited to humans.

For example, at https://talktotransformer.com, a language model called GPT-2 can generate coherent paragraphs of text one word at a time. When we fed it this very paragraph, it produced the following:

Next we're going to feed in a list of phrases from a corpus of email addresses, and see if the program can parse the lists as sentences. Again, this is much more complicated and far more complex than the search at the beginning of this post, but hopefully helps you understand the basics of constructing sentence structures in various programming languages.

That's remarkably coherent for a machine, even if there isn't a well-defined thesis behind the rambling. Even more impressively, the ability to perform these formerly human-only tasks is acquired through examples, rather than encoded by a human as a set of handcrafted rules. In a way, we're learning that intelligence is a notion we often conflate with self-awareness, and self-awareness is definitely not required to successfully carry out these kinds of tasks. In the end, the question of computer intelligence might not even be important. Edsger W. Dijkstra found that the question of whether machines could think was "about as relevant as the question of whether Submarines Can Swim."¹

That general class of algorithms we're talking about falls under the AI subcategory of deep learning, which deals with training mathematical entities named deep neural networks by presenting instructive examples. Deep learning uses large amounts of data to approximate complex functions whose inputs and outputs are far apart, like an input image and, as output, a line of text describing the input; or a written script as input and a natural-sounding voice reciting the script as output; or, even more simply, associating an image of a golden retriever with a flag that tells us "Yes, a golden retriever is present." This kind of capability allows us to create programs with functionality that was, until very recently, exclusively the domain of human beings.

1.1 The deep learning revolution

To appreciate the paradigm shift ushered in by this deep learning approach, let's take a step back for a bit of perspective. Until the last decade, the broader class of systems that fell under the label machine learning relied heavily on feature engineering. Features are transformations on input data that facilitate a downstream algorithm, like a classifier, to produce correct outcomes on new data. Feature engineering consists of coming up with the right transformations so that the downstream algorithm can solve a task. For instance, in order to tell ones from zeros in images of handwritten digits, we would come up with a set of filters to estimate the direction of edges over the image, and then train a classifier to predict the correct digit given a distribution of edge directions. Another useful feature could be the number of enclosed holes, as seen in a zero, an eight, and, particularly, loopy twos.

¹ Edsger W. Dijkstra, "The Threats to Computing Science," http://mng.bz/nPJ5.
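To make the "enclosed holes" feature concrete, here is a toy sketch of what such a handcrafted feature extractor might look like. It is not from the book's code; the function name and the tiny binary "images" are invented for illustration, using only the Python standard library. It counts background regions that cannot be reached from the image border, which is one simple way to define a hole:

```python
from collections import deque

def count_holes(grid):
    """Count enclosed background regions (holes) in a tiny binary
    digit image, where 1 = ink and 0 = background."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]

    def flood(r0, c0):
        # Mark every background cell 4-connected to (r0, c0).
        queue = deque([(r0, c0)])
        seen[r0][c0] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and not seen[nr][nc] and grid[nr][nc] == 0):
                    seen[nr][nc] = True
                    queue.append((nr, nc))

    # Background connected to the border is not enclosed, so flood it first.
    for r in range(rows):
        for c in range(cols):
            if (r in (0, rows - 1) or c in (0, cols - 1)) \
                    and grid[r][c] == 0 and not seen[r][c]:
                flood(r, c)

    # Every remaining unvisited background region is an enclosed hole.
    holes = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and not seen[r][c]:
                holes += 1
                flood(r, c)
    return holes
```

On a crude 3x3 "zero" (a ring of ink around one background cell) this yields 1, on a vertical-bar "one" it yields 0, and on a stacked-rings "eight" it yields 2. The point of the example is exactly the one made above: a human had to invent and implement this feature, whereas a deep network would discover useful representations from examples on its own.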
Deep learning, on the other hand, deals with finding such representations automatically, from raw data, in order to successfully perform a task. In the ones versus zeros example, filters would be refined during training by iteratively looking at pairs of examples and target labels. This is not to say that feature engineering has no place with deep learning; we often need to inject some form of prior knowledge in a learning system. However, the ability of a neural network to ingest data and extract useful representations on the basis of examples is what makes deep learning so powerful. The focus of deep learning practitioners is not so much on handcrafting those representations, but on operating on a mathematical entity so that it discovers representations from the training data autonomously. Often, these automatically created features are better than those that are handcrafted! As with many disruptive technologies, this fact has led to a change in perspective.

On the left side of figure 1.1, we see a practitioner busy defining engineering features and feeding them to a learning algorithm; the results on the task will be as good as the features the practitioner engineers. On the right, with deep learning, the raw data is fed to an algorithm that extracts hierarchical features automatically, guided by the optimization of its own performance on the task; the results will be as good as the ability of the practitioner to drive the algorithm toward its goal.

[Figure 1.1, "The paradigm shift": on the left, data plus handcrafted features feed a learning machine to produce an outcome; on the right, raw data feeds a deep learning machine that extracts representations itself. Caption: Deep learning exchanges the need to handcraft features for an increase in data and computational requirements.]
Starting from the right side in figure 1.1, we already get a glimpse of what we need to execute successful deep learning:

- We need a way to ingest whatever data we have at hand.
- We somehow need to define the deep learning machine.
- We must have an automated way, training, to obtain useful representations and make the machine produce desired outputs.

This leaves us with taking a closer look at this training thing we keep talking about. During training, we use a criterion, a real-valued function of model outputs and reference data, to provide a numerical score for the discrepancy between the desired and actual output of our model (by convention, a lower score is typically better). Training consists of driving the criterion toward lower and lower scores by incrementally modifying our deep learning machine until it achieves low scores, even on data not seen during training.

1.2 PyTorch for deep learning

PyTorch is a library for Python programs that facilitates building deep learning projects. It emphasizes flexibility and allows deep learning models to be expressed in idiomatic Python. This approachability and ease of use found early adopters in the research community, and in the years since its first release, it has grown into one of the most prominent deep learning tools across a broad range of applications.

As Python does for programming, PyTorch provides an excellent introduction to deep learning. At the same time, PyTorch has been proven to be fully qualified for use in professional contexts for real-world, high-profile work. We believe that PyTorch's clear syntax, streamlined API, and easy debugging make it an excellent choice for introducing deep learning. We highly recommend studying PyTorch for your first deep learning library. Whether it ought to be the last deep learning library you learn is a decision we leave up to you.
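Before diving into PyTorch itself, the training loop sketched in section 1.1 (drive a criterion toward lower scores by incrementally modifying the machine) can be illustrated in a few lines of plain Python. This is a deliberately tiny toy, not PyTorch code: the "machine" is a hypothetical one-parameter model y = w * x, the criterion is mean squared error, and the gradient is derived by hand, which is exactly the step PyTorch's automatic differentiation performs for us in later chapters:

```python
def criterion(w, xs, ys):
    """Mean squared error between model outputs w*x and reference data ys:
    a real-valued score where lower means a better fit."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def train(xs, ys, lr=0.01, steps=200):
    w = 0.0  # start the "machine" from an arbitrary parameter value
    for _ in range(steps):
        # d(criterion)/dw, derived analytically for this toy model;
        # a deep learning library computes this gradient automatically.
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # incrementally modify w to drive the score down
    return w

# Hypothetical data generated by the "true" parameter w = 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = train(xs, ys)
```

After training, w sits very close to 3 and the criterion score is far lower than at the starting point, which is the whole story of training in miniature: a real deep learning machine just has millions of parameters instead of one.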
At its core, the deep learning machine in figure 1.1 is a rather complex mathematical function mapping inputs to an output. To facilitate expressing this function, PyTorch provides a core data structure, the tensor, which is a multidimensional array that shares many similarities with NumPy arrays. Around that foundation, PyTorch comes with features to perform accelerated mathematical operations on dedicated hardware, which makes it convenient to design neural network architectures and train them on individual machines or parallel computing resources.

This book is intended as a starting point for software engineers, data scientists, and motivated students fluent in Python to become comfortable using PyTorch to build deep learning projects. We want this book to be as accessible and useful as possible, and we expect that you will be able to take the concepts in this book and apply them to other domains. To that end, we use a hands-on approach and encourage you to keep your computer at the ready, so you can play with the examples and take them a step further. By the time we are through with the book, we expect you to be able to take a data source and build out a deep learning project with it, supported by the excellent official documentation.

Although we stress the practical aspects of building deep learning systems with PyTorch, we believe that providing an accessible introduction to a foundational deep learning tool is more than just a way to facilitate the acquisition of new technical skills. It is a step toward equipping a new generation of scientists, engineers, and practitioners from a wide range of disciplines with working knowledge that will be the backbone of many software projects during the decades to come.

In order to get the most out of this book, you will need two things: some experience programming in Python.
We're not going to pull any punches on that one; you'll need to be up on Python data types, classes, floating-point numbers, and the like. The second is a willingness to dive in and get your hands dirty. We'll be starting from the basics and building up our working knowledge, and it will be much easier for you to learn if you follow along with us.

Deep Learning with PyTorch is organized in three distinct parts. Part 1 covers the foundations, examining in detail the facilities PyTorch offers to put the sketch of deep learning in figure 1.1 into action with code. Part 2 walks you through an end-to-end project involving medical imaging: finding and classifying tumors in CT scans, building on the basic concepts introduced in part 1, and adding more advanced topics. The short part 3 rounds off the book with a tour of what PyTorch offers for deploying deep learning models to production.

Deep learning is a huge space. In this book, we will be covering a tiny part of that space: specifically, using PyTorch for smaller-scope classification and segmentation projects, with image processing of 2D and 3D datasets used for most of the motivating examples. This book focuses on practical PyTorch, with the aim of covering enough ground to allow you to solve real-world machine learning problems, such as in vision, with deep learning or explore new models as they pop up in research literature. Most, if not all, of the latest publications related to deep learning research can be found in the arXiv public preprint repository, hosted at https://arxiv.org.²

1.3 Why PyTorch?

As we've said, deep learning allows us to carry out a very wide range of complicated tasks, like machine translation, playing strategy games, or identifying objects in cluttered scenes, by exposing our model to illustrative examples.
In order to do so in practice, we need tools that are flexible, so they can be adapted to such a wide range of problems, and efficient, to allow training to occur over large amounts of data in reasonable times; and we need the trained model to perform correctly in the presence of variability in the inputs. Let’s take a look at some of the reasons we decided to use PyTorch.

² We also recommend www.arxiv-sanity.com to help organize research papers of interest.

PyTorch is easy to recommend because of its simplicity. Many researchers and practitioners find it easy to learn, use, extend, and debug. It’s Pythonic, and while like any complicated domain it has caveats and best practices, using the library generally feels familiar to developers who have used Python previously.

More concretely, programming the deep learning machine is very natural in PyTorch. PyTorch gives us a data type, the Tensor, to hold numbers, vectors, matrices, or arrays in general. In addition, it provides functions for operating on them. We can program with them incrementally and, if we want, interactively, just like we are used to from Python. If you know NumPy, this will be very familiar.

But PyTorch offers two things that make it particularly relevant for deep learning: first, it provides accelerated computation using graphical processing units (GPUs), often yielding speedups in the range of 50x over doing the same calculation on a CPU. Second, PyTorch provides facilities that support numerical optimization on generic mathematical expressions, which deep learning uses for training. Note that both features are useful for scientific computing in general, not exclusively for deep learning. In fact, we can safely characterize PyTorch as a high-performance library with optimization support for scientific computing in Python.
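To make the tensor-and-operations idea concrete, here is a minimal sketch (assuming PyTorch is installed; the variable names are ours, not from the book) of creating a tensor, operating on it much as one would on a NumPy array, and optionally moving the computation to a GPU:

```python
import torch

# A 2x3 tensor of ones; elementwise arithmetic works like NumPy.
a = torch.ones(2, 3)
b = a * 2.0 + 1.0          # every entry becomes 3.0
total = b.sum().item()     # 6 entries of 3.0 -> 18.0

# Moving work to a GPU, when one is available, is a single call.
device = "cuda" if torch.cuda.is_available() else "cpu"
b = b.to(device)
print(total, b.device)
```

On a machine without a CUDA-capable GPU, the same code runs unchanged on the CPU, which is part of what makes PyTorch comfortable for incremental, interactive work.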
A design driver for PyTorch is expressivity, allowing a developer to implement complicated models without undue complexity being imposed by the library (it’s not a framework!). PyTorch arguably offers one of the most seamless translations of ideas into Python code in the deep learning landscape. For this reason, PyTorch has seen widespread adoption in research, as witnessed by the high citation counts at international conferences.³

PyTorch also has a compelling story for the transition from research and development into production. While it was initially focused on research workflows, PyTorch has been equipped with a high-performance C++ runtime that can be used to deploy models for inference without relying on Python, and can be used for designing and training models in C++. It has also grown bindings to other languages and an interface for deploying to mobile devices. These features allow us to take advantage of PyTorch’s flexibility and at the same time take our applications where a full Python runtime would be hard to get or would impose expensive overhead.

Of course, claims of ease of use and high performance are trivial to make. We hope that by the time you are in the thick of this book, you’ll agree with us that our claims here are well founded.

1.3.1 The deep learning competitive landscape
While all analogies are flawed, it seems that the release of PyTorch 0.1 in January 2017 marked the transition from a Cambrian-explosion-like proliferation of deep learning libraries, wrappers, and data-exchange formats into an era of consolidation and unification.

³ At the International Conference on Learning Representations (ICLR) 2019, PyTorch appeared as a citation in 252 papers, up from 87 the previous year and at the same level as TensorFlow, which appeared in 266 papers.
NOTE The deep learning landscape has been moving so quickly lately that by the time you read this in print, it will likely be out of date. If you’re unfamiliar with some of the libraries mentioned here, that’s fine.

At the time of PyTorch’s first beta release:
- Theano and TensorFlow were the premier low-level libraries, working with a model that had the user define a computational graph and then execute it.
- Lasagne and Keras were high-level wrappers around Theano, with Keras wrapping TensorFlow and CNTK as well.
- Caffe, Chainer, DyNet, Torch (the Lua-based precursor to PyTorch), MXNet, CNTK, DL4J, and others filled various niches in the ecosystem.

In the roughly two years that followed, the landscape changed drastically. The community largely consolidated behind either PyTorch or TensorFlow, with the adoption of other libraries dwindling, except for those filling specific niches. In a nutshell:
- Theano, one of the first deep learning frameworks, has ceased active development.
- TensorFlow:
  – Consumed Keras entirely, promoting it to a first-class API
  – Provided an immediate-execution “eager mode” that is somewhat similar to how PyTorch approaches computation
  – Released TF 2.0 with eager mode by default
- JAX, a library by Google that was developed independently from TensorFlow, has started gaining traction as a NumPy equivalent with GPU, autograd, and JIT capabilities.
- PyTorch:
  – Consumed Caffe2 for its backend
  – Replaced most of the low-level code reused from the Lua-based Torch project
  – Added support for ONNX, a vendor-neutral model description and exchange format
  – Added a delayed-execution “graph mode” runtime called TorchScript
  – Released version 1.0
  – Replaced CNTK and Chainer as the framework of choice by their respective corporate sponsors

TensorFlow has a robust pipeline to production, an extensive industry-wide community, and massive mindshare.
PyTorch has made huge inroads with the research and teaching communities, thanks to its ease of use, and has picked up momentum since, as researchers and graduates train students and move to industry. It has also built up steam in terms of production solutions. Interestingly, with the advent of TorchScript and eager mode, both PyTorch and TensorFlow have seen their feature sets start to converge with the other’s, though the presentation of these features and the overall experience is still quite different between the two.

1.4 An overview of how PyTorch supports deep learning projects
We have already hinted at a few building blocks in PyTorch. Let’s now take some time to formalize a high-level map of the main components that form PyTorch. We can best do this by looking at what a deep learning project needs from PyTorch.

First, PyTorch has the “Py” as in Python, but there’s a lot of non-Python code in it. Actually, for performance reasons, most of PyTorch is written in C++ and CUDA (www.geforce.com/hardware/technology/cuda), a C++-like language from NVIDIA that can be compiled to run with massive parallelism on GPUs. There are ways to run PyTorch directly from C++, and we’ll look into those in chapter 15. One of the motivations for this capability is to provide a reliable strategy for deploying models in production. However, most of the time we’ll interact with PyTorch from Python, building models, training them, and using the trained models to solve actual problems. Indeed, the Python API is where PyTorch shines in terms of usability and integration with the wider Python ecosystem. Let’s take a peek at the mental model of what PyTorch is.
As we already touched on, at its core, PyTorch is a library that provides multidimensional arrays, or tensors in PyTorch parlance (we’ll go into details on those in chapter 3), and an extensive library of operations on them, provided by the torch module. Both tensors and the operations on them can be used on the CPU or the GPU. Moving computations from the CPU to the GPU in PyTorch doesn’t require more than an additional function call or two. The second core thing that PyTorch provides is the ability of tensors to keep track of the operations performed on them and to analytically compute derivatives of an output of a computation with respect to any of its inputs. This is used for numerical optimization, and it is provided natively by tensors by virtue of dispatching through PyTorch’s autograd engine under the hood.

By having tensors and the autograd-enabled tensor standard library, PyTorch can be used for physics, rendering, optimization, simulation, modeling, and more; we’re very likely to see PyTorch used in creative ways throughout the spectrum of scientific applications. But PyTorch is first and foremost a deep learning library, and as such it provides all the building blocks needed to build neural networks and train them. Figure 1.2 shows a standard setup that loads data, trains a model, and then deploys that model to production.

The core PyTorch modules for building neural networks are located in torch.nn, which provides common neural network layers and other architectural components. Fully connected layers, convolutional layers, activation functions, and loss functions can all be found here (we’ll go into more detail about what all that means as we go through the rest of this book). These components can be used to build and initialize the untrained model we see in the center of figure 1.2.
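The derivative-tracking behavior just described can be seen in a few lines. This is a minimal sketch of autograd (the scalar function is made up for illustration): we mark a tensor as requiring gradients, compute with it, and ask for the derivative.

```python
import torch

# y = x^2 + 3x, so analytically dy/dx = 2x + 3.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x

y.backward()     # autograd walks back through the recorded operations
print(x.grad)    # 2*2 + 3 = 7
```

No derivative was written by hand; the tensor itself recorded the operations performed on it, which is exactly what training will rely on.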
In order to train our model, we need a few additional things: a source of training data, an optimizer to adapt the model to the training data, and a way to get the model and data to the hardware that will actually be performing the calculations needed for training the model.

At left in figure 1.2, we see that quite a bit of data processing is needed before the training data even reaches our model.⁴ First we need to physically get the data, most often from some sort of storage as the data source. Then we need to convert each sample from our data into something PyTorch can actually handle: tensors. This bridge between our custom data (in whatever format it might be) and a standardized PyTorch tensor is the Dataset class PyTorch provides in torch.utils.data. As this process is wildly different from one problem to the next, we will have to implement this data sourcing ourselves. We will look in detail at how to represent various types of data we might want to work with as tensors in chapter 4.

As data storage is often slow, in particular due to access latency, we want to parallelize data loading. But as the many things Python is well loved for do not include easy, efficient, parallel processing, we will need multiple processes to load our data, in order to assemble them into batches: tensors that encompass several samples. This is rather elaborate; but as it is also relatively generic, PyTorch readily provides all that magic in the DataLoader class. Its instances can spawn child processes to load data from a dataset in the background so that it’s ready and waiting for the training loop as soon as the loop can use it. We will meet and use Dataset and DataLoader in chapter 7.

⁴ And that’s just the data preparation that is done on the fly, not the preprocessing, which can be a pretty large part in practical projects.
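The Dataset/DataLoader relationship can be sketched with a toy example. The SquaresDataset class and its data below are made up purely for illustration; the point is the contract: a Dataset answers __len__ and __getitem__, and a DataLoader assembles those indexed samples into batch tensors.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """Toy dataset: sample i is (i, i^2), already converted to tensors."""
    def __init__(self, n):
        self.x = torch.arange(n, dtype=torch.float32).unsqueeze(1)
        self.y = self.x ** 2

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return self.x[i], self.y[i]

dataset = SquaresDataset(8)
# num_workers > 0 would spawn the background child processes described above.
loader = DataLoader(dataset, batch_size=4, shuffle=False)

batches = list(loader)
first_x, first_y = batches[0]
print(first_x.shape)  # a batch tensor stacking 4 one-feature samples
```

Real projects replace the toy __getitem__ with whatever file reading and decoding the data source requires; the training loop only ever sees batch tensors.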
Figure 1.2 Basic, high-level structure of a PyTorch project, with data loading, training, and deployment to production

With the mechanism for getting batches of samples in place, we can turn to the training loop itself at the center of figure 1.2. Typically, the training loop is implemented as a standard Python for loop. In the simplest case, the model runs the required calculations on the local CPU or a single GPU, and once the training loop has the data, computation can start immediately. Chances are this will be your basic setup, too, and it’s the one we’ll assume in this book.

At each step in the training loop, we evaluate our model on the samples we got from the data loader. We then compare the outputs of our model to the desired output (the targets) using some criterion or loss function. Just as it offers the components from which to build our model, PyTorch also has a variety of loss functions at our disposal. They, too, are provided in torch.nn. After we have compared our actual outputs to the ideal with the loss functions, we need to push the model a little to move its outputs to better resemble the target. As mentioned earlier, this is where the PyTorch autograd engine comes in; but we also need an optimizer doing the updates, and that is what PyTorch offers us in torch.optim. We will start looking at training loops with loss functions and optimizers in chapter 5 and then hone our skills in chapters 6 through 8 before embarking on our big project in part 2.
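The loop just described can be sketched in a few lines. This is a minimal, made-up example (the data and model are ours, chosen for brevity): a single linear layer fit to y = 2x, with the loss function coming from torch.nn and the optimizer from torch.optim.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(0, 1, 16).unsqueeze(1)  # 16 one-feature samples
y = 2.0 * x                                # targets: y = 2x

model = nn.Linear(1, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(500):
    optimizer.zero_grad()        # clear gradients from the previous step
    loss = loss_fn(model(x), y)  # compare model outputs to the targets
    loss.backward()              # autograd computes parameter gradients
    optimizer.step()             # the optimizer nudges the parameters

final_loss = float(loss_fn(model(x), y))
print(final_loss)
```

Every real training loop in this book has this same skeleton: zero the gradients, compute the loss, backpropagate, step the optimizer; only the model, data, and loss grow more elaborate.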
It’s increasingly common to use more elaborate hardware like multiple GPUs or multiple machines that contribute their resources to training a large model, as seen in the bottom center of figure 1.2. In those cases, torch.nn.parallel.DistributedDataParallel and the torch.distributed submodule can be employed to use the additional hardware.

The training loop might be the most unexciting yet most time-consuming part of a deep learning project. At the end of it, we are rewarded with a model whose parameters have been optimized on our task: the trained model depicted to the right of the training loop in the figure. Having a model to solve a task is great, but in order for it to be useful, we must put it where the work is needed. This deployment part of the process, depicted on the right in figure 1.2, may involve putting the model on a server or exporting it to load it to a cloud engine, as shown in the figure. Or we might integrate it with a larger application, or run it on a phone.

One particular step of the deployment exercise can be to export the model. As mentioned earlier, PyTorch defaults to an immediate execution model (eager mode). Whenever an instruction involving PyTorch is executed by the Python interpreter, the corresponding operation is immediately carried out by the underlying C++ or CUDA implementation. As more instructions operate on tensors, more operations are executed by the backend implementation.

PyTorch also provides a way to compile models ahead of time through TorchScript. Using TorchScript, PyTorch can serialize a model into a set of instructions that can be invoked independently from Python: say, from C++ programs or on mobile devices. We can think about it as a virtual machine with a limited instruction set, specific to tensor operations. This allows us to export our model, either as TorchScript to be used with the PyTorch runtime, or in a standardized format called ONNX.
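As a small taste of that export step, here is a sketch of tracing a model into TorchScript with torch.jit.trace (the tiny model and input here are made up for illustration): tracing records the tensor operations performed on an example input and yields a serializable module that no longer needs the Python class that defined it.

```python
import torch
import torch.nn as nn

# An arbitrary small model, purely illustrative.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
model.eval()

example = torch.rand(1, 4)
traced = torch.jit.trace(model, example)  # records the ops run on `example`

# traced.save("model.pt") would write a file loadable outside Python,
# e.g. from C++ via torch::jit::load.
out_eager = model(example)
out_traced = traced(example)
print(torch.allclose(out_eager, out_traced))
```

The traced module computes the same outputs as the eager one; the difference is that its instructions can now be shipped and invoked independently from Python.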
These features are at the basis of the production deployment capabilities of PyTorch. We’ll cover this in chapter 15.

1.5 Hardware and software requirements
This book will require coding and running tasks that involve heavy numerical computing, such as multiplication of large numbers of matrices. As it turns out, running a pretrained network on new data is within the capabilities of any recent laptop or personal computer. Even taking a pretrained network and retraining a small portion of it to specialize it on a new dataset doesn’t necessarily require specialized hardware. You can follow along with everything we do in part 1 of this book using a standard personal computer or laptop.

However, we anticipate that completing a full training run for the more advanced examples in part 2 will require a CUDA-capable GPU. The default parameters used in part 2 assume a GPU with 8 GB of RAM (we suggest an NVIDIA GTX 1070 or better), but those can be adjusted if your hardware has less RAM available. To be clear: such hardware is not mandatory if you’re willing to wait, but running on a GPU cuts training time by at least an order of magnitude (and usually it’s 40–50x faster). Taken individually, the operations required to compute parameter updates are fast (from fractions of a second to a few seconds) on modern hardware like a typical laptop CPU. The issue is that training involves running these operations over and over, many, many times, incrementally updating the network parameters to minimize the training error.

Moderately large networks can take hours to days to train from scratch on large, real-world datasets on workstations equipped with a good GPU. That time can be reduced by using multiple GPUs on the same machine, and even further on clusters of machines equipped with multiple GPUs.
Access to these setups is less prohibitive than it sounds, thanks to the offerings of cloud computing providers. DAWNBench (https://dawn.cs.stanford.edu/benchmark/index.html) is an interesting initiative from Stanford University aimed at providing benchmarks on training time and cloud computing costs related to common deep learning tasks on publicly available datasets. So, if there’s a GPU around by the time you reach part 2, then great. Otherwise, we suggest checking out the offerings from the various cloud platforms, many of which offer GPU-enabled Jupyter Notebooks with PyTorch preinstalled, often with a free quota. Google Colaboratory (https://colab.research.google.com) is a great place to start.

The last consideration is the operating system (OS). PyTorch has supported Linux and macOS from its first release, and it gained Windows support in 2018. Since current Apple laptops do not include GPUs that support CUDA, the precompiled macOS packages for PyTorch are CPU-only. Throughout the book, we will try to avoid assuming you are running a particular OS, although some of the scripts in part 2 are shown as if running from a Bash prompt under Linux. Those scripts’ command lines should convert to a Windows-compatible form readily. For convenience, code will be listed as if running from a Jupyter Notebook when possible.

For installation information, please see the Get Started guide on the official PyTorch website (https://pytorch.org/get-started/locally). We suggest that Windows users install with Anaconda or Miniconda (https://www.anaconda.com/distribution or https://docs.conda.io/en/latest/miniconda.html). Other operating systems like Linux typically have a wider variety of workable options, with Pip being the most common package manager for Python. We provide a requirements.txt file that pip can use to install dependencies.
Of course, experienced users are free to install packages in the way that is most compatible with their preferred development environment.

Part 2 has some nontrivial download bandwidth and disk space requirements as well. The raw data needed for the cancer-detection project in part 2 is about 60 GB to download, and when uncompressed it requires about 120 GB of space. The compressed data can be removed after decompressing it. In addition, due to caching some of the data for performance reasons, another 80 GB will be needed while training. You will need a total of 200 GB (at minimum) of free disk space on the system that will be used for training. While it is possible to use network storage for this, there might be training speed penalties if the network access is slower than local disk. Preferably you will have space on a local SSD to store the data for fast retrieval.

1.5.1 Using Jupyter Notebooks
We’re going to assume you’ve installed PyTorch and the other dependencies and have verified that things are working. Earlier we touched on the possibilities for following along with the code in the book. We are going to be making heavy use of Jupyter Notebooks for our example code. A Jupyter Notebook shows itself as a page in the browser through which we can run code interactively. The code is evaluated by a kernel, a process running on a server that is ready to receive code to execute and send back the results, which are then rendered inline on the page. A notebook maintains the state of the kernel, like variables defined during the evaluation of code, in memory until it is terminated or restarted. The fundamental unit with which we interact with a notebook is a cell: a box on the page where we can type code and have the kernel evaluate it (through the menu item or by pressing Shift-Enter). We can add multiple cells in a notebook, and the new cells will see the variables we created in the earlier cells.
The value returned by the last line of a cell will be printed right below the cell after execution, and the same goes for plots. By mixing source code, results of evaluations, and Markdown-formatted text cells, we can generate beautiful interactive documents. You can read everything about Jupyter Notebooks on the project website (https://jupyter.org).

At this point, you need to start the notebook server from the root directory of the code checkout from GitHub. How exactly starting the server looks depends on the details of your OS and how and where you installed Jupyter. If you have questions, feel free to ask on the book’s forum.⁵ Once started, your default browser will pop up, showing a list of local notebook files.

⁵ https://forums.manning.com/forums/deep-learning-with-pytorch

NOTE Jupyter Notebooks are a powerful tool for expressing and investigating ideas through code. While we think that they make for a good fit for our use case with this book, they’re not for everyone. We would argue that it’s important to focus on removing friction and minimizing cognitive overhead, and that’s going to be different for everyone. Use what you like during your experimentation with PyTorch.

Full working code for all listings from the book can be found at the book’s website (www.manning.com/books/deep-learning-with-pytorch) and in our repository on GitHub (https://github.com/deep-learning-with-pytorch/dlwpt-code).

1.6 Exercises
1 Start Python to get an interactive prompt.
  a What Python version are you using? We hope it is at least 3.6!
  b Can you import torch? What version of PyTorch do you get?
  c What is the result of torch.cuda.is_available()? Does it match your expectation based on the hardware you’re using?
2 Start the Jupyter notebook server.
  a What version of Python is Jupyter using?
  b Is the location of the torch library used by Jupyter the same as the one you imported from the interactive prompt?
1.7 Summary
- Deep learning models automatically learn to associate inputs and desired outputs from examples.
- Libraries like PyTorch allow you to build and train neural network models efficiently.
- PyTorch minimizes cognitive overhead while focusing on flexibility and speed. It also defaults to immediate execution for operations.
- TorchScript allows us to precompile models and invoke them not only from Python but also from C++ programs and on mobile devices.
- Since the release of PyTorch in early 2017, the deep learning tooling ecosystem has consolidated significantly.
- PyTorch provides a number of utility libraries to facilitate deep learning projects.

Pretrained networks

We closed our first chapter promising to unveil amazing things in this chapter, and now it’s time to deliver. Computer vision is certainly one of the fields that have been most impacted by the advent of deep learning, for a variety of reasons. The need to classify or interpret the content of natural images existed, very large datasets became available, and new constructs such as convolutional layers were invented and could be run quickly on GPUs with unprecedented accuracy. All of these factors combined with the internet giants’ desire to understand pictures taken by millions of users with their mobile devices and managed on said giants’ platforms. Quite the perfect storm.

We are going to learn how to use the work of the best researchers in the field by downloading and running very interesting models that have already been trained on open, large-scale datasets.
This chapter covers
- Running pretrained image-recognition models
- An introduction to GANs and CycleGAN
- Captioning models that can produce text descriptions of images
- Sharing models through Torch Hub

We can think of a pretrained neural network as similar to a program that takes inputs and generates outputs. The behavior of such a program is dictated by the architecture of the neural network and by the examples it saw during training, in terms of desired input-output pairs, or desired properties that the output should satisfy. Using an off-the-shelf model can be a quick way to jump-start a deep learning project, since it draws on expertise from the researchers who designed the model, as well as the computation time that went into training the weights. In this chapter, we will explore three popular pretrained models: a model that can label an image according to its content, another that can fabricate a new image from a real image, and a model that can describe the content of an image using proper English sentences. We will learn how to load and run these pretrained models in PyTorch, and we will introduce PyTorch Hub, a set of tools through which PyTorch models like the pretrained ones we’ll discuss can be easily made available through a uniform interface. Along the way, we’ll discuss data sources, define terminology like label, and attend a zebra rodeo.

If you’re coming to PyTorch from another deep learning framework, and you’d rather jump right into learning the nuts and bolts of PyTorch, you can get away with skipping to the next chapter. The things we’ll cover in this chapter are more fun than foundational and are somewhat independent of any given deep learning tool. That’s not to say they’re not important! But if you’ve worked with pretrained models in other deep learning frameworks, then you already know how powerful a tool they can be.
And if you’re already familiar with the generative adversarial network (GAN) game, you don’t need us to explain it to you. We hope you keep reading, though, since this chapter hides some important skills under the fun. Learning how to run a pretrained model using PyTorch is a useful skill, full stop. It’s especially useful if the model has been trained on a large dataset. We will need to get accustomed to the mechanics of obtaining and running a neural network on real-world data, and then visualizing and evaluating its outputs, whether we trained it or not.

2.1 A pretrained network that recognizes the subject of an image
As our first foray into deep learning, we’ll run a state-of-the-art deep neural network that was pretrained on an object-recognition task. There are many pretrained networks that can be accessed through source code repositories. It is common for researchers to publish their source code along with their papers, and often the code comes with weights that were obtained by training a model on a reference dataset. Using one of these models could enable us to, for example, equip our next web service with image-recognition capabilities with very little effort.

The pretrained network we’ll explore here was trained on a subset of the ImageNet dataset (http://imagenet.stanford.edu). ImageNet is a very large dataset of over 14 million images maintained by Stanford University. All of the images are labeled with a hierarchy of nouns that come from the WordNet dataset (http://wordnet.princeton.edu), which is in turn a large lexical database of the English language.

The ImageNet dataset, like several other public datasets, has its origin in academic competitions. Competitions have traditionally been some of the main playing fields where researchers at institutions and companies regularly challenge each other.
Among others, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has gained popularity since its inception in 2010. This particular competition is based on a few tasks, which can vary each year, such as image classification (telling what object categories the image contains), object localization (identifying objects’ position in images), object detection (identifying and labeling objects in images), scene classification (classifying a situation in an image), and scene parsing (segmenting an image into regions associated with semantic categories, such as cow, house, cheese, hat). In particular, the image-classification task consists of taking an input image and producing a list of 5 labels out of 1,000 total categories, ranked by confidence, describing the content of the image.

The training set for ILSVRC consists of 1.2 million images labeled with one of 1,000 nouns (for example, “dog”), referred to as the class of the image. In this sense, we will use the terms label and class interchangeably. We can take a peek at images from ImageNet in figure 2.1.

Figure 2.1 A small sample of ImageNet images

We are going to end up being able to take our own images and feed them into our pretrained model, as pictured in figure 2.2. This will result in a list of predicted labels for that image, which we can then examine to see what the model thinks our image is. Some images will have predictions that are accurate, and others will not!

The input image will first be preprocessed into an instance of the multidimensional array class torch.Tensor. It is an RGB image with height and width, so this tensor will have three dimensions: the three color channels, and two spatial image dimensions of a specific size. (We’ll get into the details of what a tensor is in chapter 3, but for now, think of it as being like a vector or matrix of floating-point numbers.)
Our model will take that processed input image and pass it into the pretrained network to obtain scores for each class. The highest score corresponds to the most likely class according to the weights. Each class is then mapped one-to-one onto a class label. That output is contained in a torch.Tensor with 1,000 elements, each representing the score associated with that class. Before we can do all that, we’ll need to get the network itself, take a peek under the hood to see how it’s structured, and learn about how to prepare our data before the model can use it.

Figure 2.2 The inference process

2.1.1 Obtaining a pretrained network for image recognition
As discussed, we will now equip ourselves with a network trained on ImageNet. To do so, we’ll take a look at the TorchVision project (https://github.com/pytorch/vision), which contains a few of the best-performing neural network architectures for computer vision, such as AlexNet (http://mng.bz/lo6z), ResNet (https://arxiv.org/pdf/1512.03385.pdf), and Inception v3 (https://arxiv.org/pdf/1512.00567.pdf). It also has easy access to datasets like ImageNet and other utilities for getting up to speed with computer vision applications in PyTorch. We’ll dive into some of these further along in the book. For now, let’s load up and run two networks: first AlexNet, one of the early breakthrough networks for image recognition; and then a residual network, ResNet for short, which won the ImageNet classification, detection, and localization competitions, among others, in 2015. If you didn’t get PyTorch up and running in chapter 1, now is a good time to do that.
The predefined models can be found in torchvision.models (code/p1ch2/2_pre_trained_networks.ipynb):

# In[1]:
from torchvision import models

We can take a look at the actual models:

# In[2]:
dir(models)

# Out[2]:
['AlexNet', 'DenseNet', 'Inception3', 'ResNet', 'SqueezeNet', 'VGG',
 ...
 'alexnet', 'densenet', 'densenet121',
 ...
 'resnet', 'resnet101', 'resnet152',
 ...
]

The capitalized names refer to Python classes that implement a number of popular models. They differ in their architecture, that is, in the arrangement of the operations occurring between the input and the output. The lowercase names are convenience functions that return models instantiated from those classes, sometimes with different parameter sets. For instance, resnet101 returns an instance of ResNet with 101 layers, resnet18 has 18 layers, and so on. We'll now turn our attention to AlexNet.

2.1.2 AlexNet

The AlexNet architecture won the 2012 ILSVRC by a large margin, with a top-5 test error rate (that is, the correct label must be in the top 5 predictions) of 15.4%. By comparison, the second-best submission, which wasn't based on a deep network, trailed at 26.2%. This was a defining moment in the history of computer vision: the moment when the community started to realize the potential of deep learning for vision tasks. That leap was followed by constant improvement, with more modern architectures and training methods getting top-5 error rates as low as 3%.

By today's standards, AlexNet is a rather small network, compared to state-of-the-art models. But in our case, it's perfect for taking a first peek at a neural network that does something, and for learning how to run a pretrained version of it on a new image. We can see the structure of AlexNet in figure 2.3. We don't yet have all the elements for understanding it, but we can already anticipate a few aspects.
First, each block consists of a bunch of multiplications and additions, plus a sprinkle of other functions in the output that we'll discover in chapter 5. We can think of it as a filter: a function that takes one or more images as input and produces other images as output. The way it does so is determined during training, based on the examples it has seen and on the desired outputs for those.

In figure 2.3, input images come in from the left and go through five stacks of filters, each producing a number of output images. After each filter, the images are reduced in size, as annotated. The images produced by the last stack of filters are laid out as a 4,096-element 1D vector and classified to produce 1,000 output probabilities, one for each output class.

In order to run the AlexNet architecture on an input image, we can create an instance of the AlexNet class. This is how it's done:

# In[3]:
alexnet = models.AlexNet()

At this point, alexnet is an object that can run the AlexNet architecture. It's not essential for us to understand the details of this architecture for now. For the time being, AlexNet is just an opaque object that can be called like a function.

Figure 2.3 The AlexNet architecture

By providing alexnet with some precisely sized input data (we'll see shortly what this input data should be), we will run a forward pass through the network. That is, the input will run through the first set of neurons, whose outputs will be fed to the next set of neurons, all the way to the final output. Practically speaking, assuming we have an input object of the right type, we can run the forward pass with output = alexnet(input). But if we did that, we would be feeding data through the whole network to produce … garbage!
That's because the network is uninitialized: its weights, the numbers by which inputs are multiplied and added, have not been trained on anything; the network itself is a blank (or rather, random) slate. We'd need to either train it from scratch or load weights from prior training, which we'll do now.

To this end, let's go back to the models module. We learned that the uppercase names correspond to classes that implement popular architectures for computer vision. The lowercase names, on the other hand, are functions that instantiate models with predefined numbers of layers and units and optionally download and load pretrained weights into them. Note that there's nothing essential about using one of these functions: they just make it convenient to instantiate the model with a number of layers and units that matches how the pretrained networks were built.

2.1.3 ResNet

Using the resnet101 function, we'll now instantiate a 101-layer convolutional neural network. Just to put things in perspective, before the advent of residual networks in 2015, achieving stable training at such depths was considered extremely hard. Residual networks pulled a trick that made it possible, and by doing so, beat several benchmarks in one sweep that year.

Let's create an instance of the network now. We'll pass an argument that will instruct the function to download the weights of resnet101 trained on the ImageNet dataset, with 1.2 million images and 1,000 categories:

# In[4]:
resnet = models.resnet101(pretrained=True)

While we're staring at the download progress, we can take a minute to appreciate that resnet101 sports 44.5 million parameters: that's a lot of parameters to optimize automatically!

2.1.4 Ready, set, almost run

OK, what did we just get? Since we're curious, we'll take a peek at what a resnet101 looks like. We can do so by printing the value of the returned model.
This gives us a textual representation of the same kind of information we saw in figure 2.3, providing details about the structure of the network. For now, this will be information overload, but as we progress through the book, we'll increase our ability to understand what this code is telling us:

# In[5]:
resnet

# Out[5]:
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): Bottleneck(
    ...
    )
  )
  (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0)
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)

What we are seeing here is modules, one per line. Note that they have nothing in common with Python modules: they are individual operations, the building blocks of a neural network. They are also called layers in other deep learning frameworks.

If we scroll down, we'll see a lot of Bottleneck modules repeating one after the other (101 of them!), containing convolutions and other modules. That's the anatomy of a typical deep neural network for computer vision: a more or less sequential cascade of filters and nonlinear functions, ending with a layer (fc) producing scores for each of the 1,000 output classes (out_features).

The resnet variable can be called like a function, taking as input one or more images and producing an equal number of scores for each of the 1,000 ImageNet classes. Before we can do that, however, we have to preprocess the input images so they are the right size and so that their values (colors) sit roughly in the same numerical range.
In order to do that, the torchvision module provides transforms, which allow us to quickly define pipelines of basic preprocessing functions:

# In[6]:
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )])

In this case, we defined a preprocess function that will scale the input image to 256 × 256, crop the image to 224 × 224 around the center, transform it to a tensor (a PyTorch multidimensional array: in this case, a 3D array with color, height, and width), and normalize its RGB (red, green, blue) components so that they have defined means and standard deviations. These need to match what was presented to the network during training, if we want the network to produce meaningful answers. We'll go into more depth about transforms when we dive into making our own image-recognition models in section 7.1.3.

We can now grab a picture of our favorite dog (say, bobby.jpg from the GitHub repo), preprocess it, and then see what ResNet thinks of it.
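One detail worth pausing on first: the Normalize step is plain per-channel arithmetic, (value - mean) / std. A pure-Python sketch on a single made-up pixel, using the ImageNet statistics from the listing above:

```python
# ImageNet channel statistics, as passed to transforms.Normalize above.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# One invented RGB pixel with all channels at mid-gray (0.5).
pixel = [0.5, 0.5, 0.5]

# Normalization shifts each channel by its mean and rescales by its std,
# so values end up near zero, on the scale the network saw during training.
normalized = [(p - m) / s for p, m, s in zip(pixel, mean, std)]
```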
We can start by loading an image from the local filesystem using Pillow (https://pillow.readthedocs.io/en/stable), an image-manipulation module for Python:

# In[7]:
from PIL import Image
img = Image.open("../data/p1ch2/bobby.jpg")

If we were following along from a Jupyter Notebook, the picture would be shown inline. Otherwise, we can invoke the show method, which will pop up a window with a viewer, to see the image shown in figure 2.4:

>>> img.show()

Figure 2.4 Bobby, our very special input image

Next, we can pass the image through our preprocessing pipeline:

# In[9]:
img_t = preprocess(img)

Then we can reshape, crop, and normalize the input tensor in a way that the network expects. We'll understand more of this in the next two chapters; hold tight for now:

# In[10]:
import torch
batch_t = torch.unsqueeze(img_t, 0)

We're now ready to run our model.

2.1.5 Run!

The process of running a trained model on new data is called inference in deep learning circles. In order to do inference, we need to put the network in eval mode:

# In[11]:
resnet.eval()

# Out[11]:
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): Bottleneck(
    ...
    )
  )
  (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0)
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)

If we forget to do that, some components of pretrained models, like batch normalization and dropout, will not produce meaningful answers, just because of the way they work internally.
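To see why eval mode matters, here is a minimal sketch of dropout's two behaviors on a toy tensor (not our ResNet); the sizes and dropout probability are invented for illustration:

```python
import torch
from torch import nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)      # a single dropout module, in isolation
x = torch.ones(1, 8)          # a toy all-ones input

drop.train()                  # training mode: randomly zeroes about half the
train_out = drop(x)           # entries and rescales the survivors by 1/(1-p)=2

drop.eval()                   # eval mode: dropout becomes a no-op
eval_out = drop(x)
print(eval_out)               # identical to x, deterministically
```

Calling resnet.eval() flips every such module in the model into this deterministic mode at once.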
Now that eval has been set, we're ready for inference:

# In[12]:
out = resnet(batch_t)
out

# Out[12]:
tensor([[ -3.4803, -1.6618, -2.4515, -3.2662, -3.2466, -1.3611,
          -2.0465, -2.5112, -1.3043, -2.8900, -1.6862, -1.3055,
          ...
           2.8674, -3.7442,  1.5085, -3.2500, -2.4894, -0.3354,
           0.1286, -1.1355,  3.3969,  4.4584]])

A staggering set of operations involving 44.5 million parameters has just happened, producing a vector of 1,000 scores, one per ImageNet class. That didn't take long, did it?

We now need to find out the label of the class that received the highest score. This will tell us what the model saw in the image. If the label matches how a human would describe the image, that's great! It means everything is working. If not, then either something went wrong during training, or the image is so different from what the model expects that the model can't process it properly, or there's some other similar issue.

To see the list of predicted labels, we will load a text file listing the labels in the same order they were presented to the network during training, and then we will pick out the label at the index that produced the highest score from the network. Almost all models meant for image recognition have output in a form similar to what we're about to work with. Let's load the file containing the 1,000 labels for the ImageNet dataset classes:

# In[13]:
with open('../data/p1ch2/imagenet_classes.txt') as f:
    labels = [line.strip() for line in f.readlines()]

At this point, we need to determine the index corresponding to the maximum score in the out tensor we obtained previously. We can do that using the max function in PyTorch, which outputs the maximum value in a tensor as well as the indices where that maximum value occurred:

# In[14]:
_, index = torch.max(out, 1)

We can now use the index to access the label.
Here, index is not a plain Python number, but a one-element, one-dimensional tensor (specifically, tensor([207])), so we need to get the actual numerical value to use as an index into our labels list using index[0]. We also use torch.nn.functional.softmax (http://mng.bz/BYnq) to normalize our outputs to the range [0, 1] and divide by the sum. That gives us something roughly akin to the confidence that the model has in its prediction. In this case, the model is 96% certain that what it's looking at is a golden retriever:

# In[15]:
percentage = torch.nn.functional.softmax(out, dim=1)[0] * 100
labels[index[0]], percentage[index[0]].item()

# Out[15]:
('golden retriever', 96.29334259033203)

Uh oh, who's a good boy?

Since the model produced scores, we can also find out what the second best, third best, and so on were. To do this, we can use the sort function, which sorts the values in ascending or descending order and also provides the indices of the sorted values in the original array:

# In[16]:
_, indices = torch.sort(out, descending=True)
[(labels[idx], percentage[idx].item()) for idx in indices[0][:5]]

# Out[16]:
[('golden retriever', 96.29334259033203),
 ('Labrador retriever', 2.80812406539917),
 ('cocker spaniel, English cocker spaniel, cocker', 0.28267428278923035),
 ('redbone', 0.2086310237646103),
 ('tennis ball', 0.11621569097042084)]

We see that the first four are dogs (redbone is a breed; who knew?), after which things start to get funny. The fifth answer, "tennis ball," is probably because there are enough pictures of tennis balls with dogs nearby that the model is essentially saying, "There's a 0.1% chance that I've completely misunderstood what a tennis ball is." This is a great example of the fundamental differences in how humans and neural networks view the world, as well as how easy it is for strange, subtle biases to sneak into our data.
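The softmax normalization used above is simple enough to compute by hand; here is a pure-Python sketch on a toy three-class score vector (the real output has 1,000 entries):

```python
import math

# Softmax: exponentiate each score, then divide by the sum, so the
# results lie in [0, 1] and add up to 1, like the percentages above.
scores = [3.0, 1.0, 0.2]
exps = [math.exp(s) for s in scores]
total = sum(exps)
probs = [e / total for e in exps]

assert abs(sum(probs) - 1.0) < 1e-9   # a proper probability distribution
# The largest score keeps the largest probability, so softmax never
# changes which class comes out on top; it only rescales confidences.
```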
Time to play! We can go ahead and interrogate our network with random images and see what it comes up with. How successful the network will be will largely depend on whether the subjects were well represented in the training set. If we present an image containing a subject outside the training set, it's quite possible that the network will come up with a wrong answer with pretty high confidence. It's useful to experiment and get a feel for how a model reacts to unseen data.

We've just run a network that won an image-classification competition in 2015. It learned to recognize our dog from examples of dogs, together with a ton of other real-world subjects. We'll now see how different architectures can achieve other kinds of tasks, starting with image generation.

2.2 A pretrained model that fakes it until it makes it

Let's suppose, for a moment, that we're career criminals who want to move into selling forgeries of "lost" paintings by famous artists. We're criminals, not painters, so as we paint our fake Rembrandts and Picassos, it quickly becomes apparent that they're amateur imitations rather than the real deal. Even if we spend a bunch of time practicing until we get a canvas that we can't tell is fake, trying to pass it off at the local art auction house is going to get us kicked out instantly. Even worse, being told "This is clearly fake; get out" doesn't help us improve! We'd have to randomly try a bunch of things, gauge which ones took slightly longer to recognize as forgeries, and emphasize those traits in our future attempts, which would take far too long.

Instead, we need to find an art historian of questionable moral standing to inspect our work and tell us exactly what it was that tipped them off that the painting wasn't legit. With that feedback, we can improve our output in clear, directed ways, until our sketchy scholar can no longer tell our paintings from the real thing.
Soon, we'll have our "Botticelli" in the Louvre, and their Benjamins in our pockets. We'll be rich!

While this scenario is a bit farcical, the underlying technology is sound and will likely have a profound impact on the perceived veracity of digital data in the years to come. The entire concept of "photographic evidence" is likely to become entirely suspect, given how easy it will be to automate the production of convincing, yet fake, images and video. The only key ingredient is data. Let's see how this process works.

2.2.1 The GAN game

In the context of deep learning, what we've just described is known as the GAN game, where two networks, one acting as the painter and the other as the art historian, compete to outsmart each other at creating and detecting forgeries. GAN stands for generative adversarial network, where generative means something is being created (in this case, fake masterpieces), adversarial means the two networks are competing to outsmart the other, and, well, network is pretty obvious. These networks are one of the most original outcomes of recent deep learning research.

Remember that our overarching goal is to produce synthetic examples of a class of images that cannot be recognized as fake. When mixed in with legitimate examples, a skilled examiner would have trouble determining which ones are real and which are our forgeries.

The generator network takes the role of the painter in our scenario, tasked with producing realistic-looking images, starting from an arbitrary input. The discriminator network is the amoral art inspector, needing to tell whether a given image was fabricated by the generator or belongs in a set of real images. This two-network design is atypical for most deep learning architectures but, when used to implement a GAN game, can lead to incredible results. Figure 2.5 shows a rough picture of what's going on.
The end goal for the generator is to fool the discriminator into mixing up real and fake images. The end goal for the discriminator is to find out when it's being tricked, but it also helps inform the generator about the identifiable mistakes in the generated images. At the start, the generator produces confused, three-eyed monsters that look nothing like a Rembrandt portrait. The discriminator is easily able to distinguish the muddled messes from the real paintings. As training progresses, information flows back from the discriminator, and the generator uses it to improve. By the end of training, the generator is able to produce convincing fakes, and the discriminator is no longer able to tell which is which.

Note that "Discriminator wins" or "Generator wins" shouldn't be taken literally: there's no explicit tournament between the two. However, both networks are trained based on the outcome of the other network, which drives the optimization of the parameters of each network.

This technique has proven itself able to lead to generators that produce realistic images from nothing but noise and a conditioning signal, like an attribute (for example, for faces: young, female, glasses on) or another image. In other words, a well-trained generator learns a plausible model for generating images that look real even when examined by humans.

2.2.2 CycleGAN

An interesting evolution of this concept is the CycleGAN. A CycleGAN can turn images of one domain into images of another domain (and back), without the need for us to explicitly provide matching pairs in the training set.

In figure 2.6, we have a CycleGAN workflow for the task of turning a photo of a horse into a zebra, and vice versa. Note that there are two separate generator networks, as well as two distinct discriminators.
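Returning to the GAN game for a moment, the alternating optimization it describes can be sketched as a toy game on 1-D "data" rather than images. Everything below (network sizes, learning rates, the target distribution) is invented for illustration and is not the training code of any real GAN:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy GAN game: "real" data are scalars drawn from N(4, 1).
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # painter
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # inspector

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(100):
    real = 4 + torch.randn(32, 1)     # a batch of genuine samples
    fake = G(torch.randn(32, 1))      # the generator paints from noise

    # Discriminator turn: label real as 1, fake as 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Generator turn: try to make the discriminator answer 1 on fakes.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()
```

The feedback from the discriminator reaches the generator through loss_g's gradients, which is the "information flows back" step described above.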
Figure 2.5 Concept of a GAN game: the discriminator judges real and generated images ("Real!" / "Fake!"), the generator gets better at making stuff up, and the discriminator gets better at not being fooled

Figure 2.6 A CycleGAN trained to the point that it can fool both discriminator networks

As the figure shows, the first generator learns to produce an image conforming to a target distribution (zebras, in this case) starting from an image belonging to a different distribution (horses), so that the discriminator can't tell if the image produced from a horse photo is actually a genuine picture of a zebra or not. At the same time, and here's where the Cycle prefix in the acronym comes in, the resulting fake zebra is sent through a different generator going the other way (zebra to horse, in our case), to be judged by another discriminator on the other side. Creating such a cycle stabilizes the training process considerably, which addresses one of the original issues with GANs.

The fun part is that at this point, we don't need matched horse/zebra pairs as ground truths (good luck getting them to match poses!). It's enough to start from a collection of unrelated horse images and zebra photos for the generators to learn their task, going beyond a purely supervised setting. The implications of this model go even further than this: the generator learns how to selectively change the appearance of objects in the scene without supervision about what's what. There's no signal indicating that manes are manes and legs are legs, but they get translated to something that lines up with the anatomy of the other animal.

2.2.3 A network that turns horses into zebras

We can play with this model right now.
The CycleGAN network has been trained on a dataset of (unrelated) horse images and zebra images extracted from the ImageNet dataset. The network learns to take an image of one or more horses and turn them all into zebras, leaving the rest of the image as unmodified as possible. While humankind hasn't held its breath over the last few thousand years for a tool that turns horses into zebras, this task showcases the ability of these architectures to model complex real-world processes with distant supervision. While they have their limits, there are hints that in the near future we won't be able to tell real from fake in a live video feed, which opens a can of worms that we'll duly close right now.

Playing with a pretrained CycleGAN will give us the opportunity to take a step closer and look at how a network, a generator in this case, is implemented. We'll use our old friend ResNet. We'll define a ResNetGenerator class offscreen. The code is in the first cell of the 3_cyclegan.ipynb file, but the implementation isn't relevant right now, and it's too complex to follow until we've gotten a lot more PyTorch experience. Right now, we're focused on what it can do, rather than how it does it. Let's instantiate the class with default parameters (code/p1ch2/3_cyclegan.ipynb):

# In[2]:
netG = ResNetGenerator()

The netG model has been created, but it contains random weights. We mentioned earlier that we would run a generator model that had been pretrained on the horse2zebra dataset, whose training set contains two sets of 1068 and 1335 images of horses and zebras, respectively. The dataset can be found at http://mng.bz/8pKP. The weights of the model have been saved in a .pth file, which is nothing but a pickle file of the model's tensor parameters.
We can load those into ResNetGenerator using the model's load_state_dict method:

# In[3]:
model_path = '../data/p1ch2/horse2zebra_0.4.0.pth'
model_data = torch.load(model_path)
netG.load_state_dict(model_data)

At this point, netG has acquired all the knowledge it achieved during training. Note that this is fully equivalent to what happened when we loaded resnet101 from torchvision in section 2.1.3; but the torchvision.resnet101 function hid the loading from us.

Let's put the network in eval mode, as we did for resnet101:

# In[4]:
netG.eval()

# Out[4]:
ResNetGenerator(
  (model): Sequential(
  ...
  )
)

Printing out the model as we did earlier, we can appreciate that it's actually pretty condensed, considering what it does. It takes an image, recognizes one or more horses in it by looking at pixels, and individually modifies the values of those pixels so that what comes out looks like a credible zebra. We won't recognize anything zebra-like in the printout (or in the source code, for that matter): that's because there's nothing zebra-like in there. The network is a scaffold; the juice is in the weights.

We're ready to load a random image of a horse and see what our generator produces. First, we need to import PIL and torchvision:

# In[5]:
from PIL import Image
from torchvision import transforms

Then we define a few input transformations to make sure data enters the network with the right shape and size:

# In[6]:
preprocess = transforms.Compose([transforms.Resize(256),
                                 transforms.ToTensor()])

Let's open a horse file (see figure 2.7):

# In[7]:
img = Image.open("../data/p1ch2/horse.jpg")
img

OK, there's a dude on the horse. (Not for long, judging by the picture.) Anyhow, let's pass it through preprocessing and turn it into a properly shaped variable:

# In[8]:
img_t = preprocess(img)
batch_t = torch.unsqueeze(img_t, 0)

We shouldn't worry about the details right now.
The important thing is that we follow from a distance. At this point, batch_t can be sent to our model:

# In[9]:
batch_out = netG(batch_t)

batch_out is now the output of the generator, which we can convert back to an image:

# In[10]:
out_t = (batch_out.data.squeeze() + 1.0) / 2.0
out_img = transforms.ToPILImage()(out_t)
# out_img.save('../data/p1ch2/zebra.jpg')
out_img

# Out[10]:

Oh, man. Who rides a zebra that way? The resulting image (figure 2.8) is not perfect, but consider that it is a bit unusual for the network to find someone (sort of) riding on top of a horse. It bears repeating that the learning process has not passed through direct supervision, where humans have delineated tens of thousands of horses or manually Photoshopped thousands of zebra stripes. The generator has learned to produce an image that would fool the discriminator into thinking that it was a zebra, and there was nothing fishy about the image (clearly the discriminator has never been to a rodeo).

Figure 2.7 A man riding a horse. The horse is not having it.

Many other fun generators have been developed using adversarial training or other approaches. Some of them are capable of creating credible human faces of nonexistent individuals; others can translate sketches into real-looking pictures of imaginary landscapes. Generative models are also being explored for producing real-sounding audio, credible text, and enjoyable music. It is likely that these models will be the basis of future tools that support the creative process.

On a serious note, it's hard to overstate the implications of this kind of work. Tools like the one we just downloaded are only going to become higher quality and more ubiquitous. Face-swapping technology, in particular, has gotten considerable media attention.
Searching for "deep fakes" will turn up a plethora of example content¹ (though we must note that there is a nontrivial amount of not-safe-for-work content labeled as such; as with everything on the internet, click carefully).

So far, we've had a chance to play with a model that sees into images and a model that generates new images. We'll end our tour with a model that involves one more, fundamental ingredient: natural language.

2.3 A pretrained network that describes scenes

In order to get firsthand experience with a model involving natural language, we will use a pretrained image-captioning model, generously provided by Ruotian Luo.² It is an implementation of the NeuralTalk2 model by Andrej Karpathy. When presented with a natural image, this kind of model generates a caption in English that describes the scene, as shown in figure 2.9. The model is trained on a large dataset of images along with a paired sentence description: for example, "A Tabby cat is leaning on a wooden table, with one paw on a laser mouse and the other on a black laptop."³

This captioning model has two connected halves. The first half of the model is a network that learns to generate "descriptive" numerical representations of the scene (Tabby cat, laser mouse, paw), which are then taken as input to the second half. That second half is a recurrent neural network that generates a coherent sentence by putting those numerical descriptions together. The two halves of the model are trained together on image-caption pairs.

¹ A relevant example is described in the Vox article "Jordan Peele's simulated Obama PSA is a double-edged warning against fake news," by Aja Romano; http://mng.bz/dxBz (warning: coarse language).
² We maintain a clone of the code at https://github.com/deep-learning-with-pytorch/ImageCaptioning.pytorch.

Figure 2.8 A man riding a zebra. The zebra is not having it.
The second half of the model is called recurrent because it generates its outputs (individual words) in subsequent forward passes, where the input to each forward pass includes the outputs of the previous forward pass. This generates a dependency of the next word on words that were generated earlier, as we would expect when dealing with sentences or, in general, with sequences.

2.3.1 NeuralTalk2

The NeuralTalk2 model can be found at https://github.com/deep-learning-with-pytorch/ImageCaptioning.pytorch. We can place a set of images in the data directory and run the following script:

python eval.py --model ./data/FC/fc-model.pth
    --infos_path ./data/FC/fc-infos.pkl --image_folder ./data

Let's try it with our horse.jpg image. It says, "A person riding a horse on a beach." Quite appropriate.

³ Andrej Karpathy and Li Fei-Fei, "Deep Visual-Semantic Alignments for Generating Image Descriptions," https://cs.stanford.edu/people/karpathy/cvpr2015.pdf.

Figure 2.9 Concept of a captioning model: a convolutional (image recognition) half feeds a recurrent (text generation) half, trained end-to-end on image-caption pairs

Now, just for fun, let's see if our CycleGAN can also fool this NeuralTalk2 model. Let's add the zebra.jpg image in the data folder and rerun the model: "A group of zebras are standing in a field." Well, it got the animal right, but it saw more than one zebra in the image. Certainly this is not a pose that the network has ever seen a zebra in, nor has it ever seen a rider on a zebra (with some spurious zebra patterns). In addition, it is very likely that zebras are depicted in groups in the training dataset, so there might be some bias that we could investigate. The captioning network hasn't described the rider, either. Again, it's probably for the same reason: the network wasn't shown a rider on a zebra in the training dataset.
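Stepping back to the recurrent half described at the start of this section, its word-by-word feedback loop can be sketched with a single GRU cell. The vocabulary size, dimensions, and untrained weights below are invented, so the "words" are just token indices rather than English:

```python
import torch
from torch import nn

torch.manual_seed(0)
vocab_size, hidden_size = 10, 16

# Each forward pass consumes the previously produced token: that feedback
# is what makes the text half of a captioning model "recurrent".
embed = nn.Embedding(vocab_size, hidden_size)
cell = nn.GRUCell(hidden_size, hidden_size)
to_logits = nn.Linear(hidden_size, vocab_size)

token = torch.tensor([0])                # a stand-in <start> token
h = torch.zeros(1, hidden_size)          # initial hidden state
generated = []
for _ in range(5):
    h = cell(embed(token), h)            # update state from the previous "word"
    token = to_logits(h).argmax(dim=1)   # greedily pick the next "word"
    generated.append(token.item())
```

In the real model, the hidden state is seeded from the image features produced by the convolutional half, so the sentence stays tied to the scene.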
In any case, this is an impressive feat: we generated a fake image with an impossible situation, and the captioning network was flexible enough to get the subject right.

We'd like to stress that something like this, which would have been extremely hard to achieve before the advent of deep learning, can be obtained with under a thousand lines of code, with a general-purpose architecture that knows nothing about horses or zebras, and a corpus of images and their descriptions (the MS COCO dataset, in this case). No hardcoded criterion or grammar: everything, including the sentence, emerges from patterns in the data.

The network architecture in this last case was, in a way, more complex than the ones we saw earlier, as it includes two networks. One is recurrent, but it was built out of the same building blocks, all of which are provided by PyTorch.

At the time of this writing, models such as these exist more as applied research or novelty projects, rather than something that has a well-defined, concrete use. The results, while promising, just aren't good enough to use … yet. With time (and additional training data), we should expect this class of models to be able to describe the world to people with vision impairment, transcribe scenes from video, and perform other similar tasks.

2.4 Torch Hub

Pretrained models have been published since the early days of deep learning, but until PyTorch 1.0, there was no way to ensure that users would have a uniform interface to get them. TorchVision was a good example of a clean interface, as we saw earlier in this chapter; but other authors, as we have seen for CycleGAN and NeuralTalk2, chose different designs.

PyTorch 1.0 saw the introduction of Torch Hub, which is a mechanism through which authors can publish a model on GitHub, with or without pretrained weights, and expose it through an interface that PyTorch understands.
This makes loading a pretrained model from a third party as easy as loading a TorchVision model.

All it takes for an author to publish a model through the Torch Hub mechanism is to place a file named hubconf.py in the root directory of the GitHub repository. The file has a very simple structure:

dependencies = ['torch', 'math']    # Optional list of modules the code depends on

# One or more functions to be exposed to users as entry points for the
# repository. These functions should initialize models according to the
# arguments and return them.
def some_entry_fn(*args, **kwargs):
    model = build_some_model(*args, **kwargs)
    return model

def another_entry_fn(*args, **kwargs):
    model = build_another_model(*args, **kwargs)
    return model

In our quest for interesting pretrained models, we can now search for GitHub repositories that include hubconf.py, and we'll know right away that we can load them using the torch.hub module.

Let's see how this is done in practice. To do that, we'll go back to TorchVision, because it provides a clean example of how to interact with Torch Hub. Let's visit https://github.com/pytorch/vision and notice that it contains a hubconf.py file. Great, that checks out. The first thing to do is to look in that file to see the entry points for the repo; we'll need to specify them later. In the case of TorchVision, there are two: resnet18 and resnet50. We already know what these do: they return an 18-layer and a 50-layer ResNet model, respectively. We also see that the entry-point functions include a pretrained keyword argument. If True, the returned models will be initialized with weights learned from ImageNet, as we saw earlier in the chapter.

Now we know the repo, the entry points, and one interesting keyword argument. That's about all we need to load the model using torch.hub, without even cloning the repo.
That's right, PyTorch will handle that for us:

import torch
from torch import hub

resnet18_model = hub.load('pytorch/vision:master',    # Name and branch of the GitHub repo
                          'resnet18',                 # Name of the entry-point function
                          pretrained=True)            # Keyword argument

This downloads a snapshot of the master branch of the pytorch/vision repo, along with the weights, to a local directory (by default, .torch/hub in our home directory) and runs the resnet18 entry-point function, which returns the instantiated model. Depending on the environment, Python may complain that a module is missing, like PIL. Torch Hub won't install missing dependencies, but it will report them to us so that we can take action.

At this point, we can invoke the returned model with proper arguments to run a forward pass on it, the same way we did earlier. The nice part is that now every model published through this mechanism will be accessible to us using the same modalities, well beyond vision.

Note that entry points are supposed to return models; but, strictly speaking, they are not forced to. For instance, we could have an entry point for transforming inputs and another one for turning the output probabilities into a text label. Or we could have an entry point for just the model, and another that includes the model along with the pre- and postprocessing steps. By leaving these options open, the PyTorch developers have provided the community with just enough standardization and a lot of flexibility. We'll see what patterns emerge from this opportunity.

Torch Hub is quite new at the time of writing, and there are only a few models published this way. We can find them by Googling "github.com hubconf.py." Hopefully the list will grow in the future, as more authors share their models through this channel.

2.5 Conclusion

We hope this was a fun chapter. We took some time to play with models created with PyTorch, which were optimized to carry out specific tasks. In fact, the more enterprising of us could already put one of these models behind a web server and start a business, sharing the profits with the original authors!4 Once we learn how these models are built, we will also be able to use the knowledge we gained here to download a pretrained model and quickly fine-tune it on a slightly different task.

We will also see how building models that deal with different problems on different kinds of data can be done using the same building blocks. One thing that PyTorch does particularly right is providing those building blocks in the form of an essential toolset; PyTorch is not a very large library from an API perspective, especially when compared with other deep learning frameworks.

This book does not focus on going through the complete PyTorch API or reviewing deep learning architectures; rather, we will build hands-on knowledge of these building blocks. This way, you will be able to consume the excellent online documentation and repositories on top of a solid foundation.

Starting with the next chapter, we'll embark on a journey that will enable us to teach our computer skills like those described in this chapter from scratch, using PyTorch. We'll also learn that starting from a pretrained network and fine-tuning it on new data, without starting from scratch, is an effective way to solve problems when the data points we have are not particularly numerous. This is one further reason pretrained networks are an important tool for deep learning practitioners to have. Time to learn about the first fundamental building block: tensors.

4 Contact the publisher for franchise opportunities!
2.6 Exercises

1 Feed the image of the golden retriever into the horse-to-zebra model.
  a What do you need to do to the image to prepare it?
  b What does the output look like?
2 Search GitHub for projects that provide a hubconf.py file.
  a How many repositories are returned?
  b Find an interesting-looking project with a hubconf.py. Can you understand the purpose of the project from the documentation?
  c Bookmark the project, and come back after you've finished this book. Can you understand the implementation?

2.7 Summary

- A pretrained network is a model that has already been trained on a dataset. Such networks can typically produce useful results immediately after loading the network parameters.
- By knowing how to use a pretrained model, we can integrate a neural network into a project without having to design or train it.
- AlexNet and ResNet are two deep convolutional networks that set new benchmarks for image recognition in the years they were released.
- Generative adversarial networks (GANs) have two parts, the generator and the discriminator, that work together to produce output indistinguishable from authentic items.
- CycleGAN uses an architecture that supports converting back and forth between two different classes of images.
- NeuralTalk2 uses a hybrid model architecture to consume an image and produce a text description of the image.
- Torch Hub is a standardized way to load models and weights from any project with an appropriate hubconf.py file.

3 It starts with a tensor

In the previous chapter, we took a tour of some of the many applications that deep learning enables. They invariably consisted of taking data in some form, like images or text, and producing data in another form, like labels, numbers, or more images or text. Viewed from this angle, deep learning really consists of building a system that can transform data from one representation to another.
This transformation is driven by extracting commonalities from a series of examples that demonstrate the desired mapping. For example, the system might note the general shape of a dog and the typical colors of a golden retriever. By combining the two image properties, the system can correctly map images with a given shape and color to the golden retriever label, instead of a black lab (or a tawny tomcat, for that matter). The resulting system can consume broad swaths of similar inputs and produce meaningful output for those inputs.

This chapter covers
- Understanding tensors, the basic data structure in PyTorch
- Indexing and operating on tensors
- Interoperating with NumPy multidimensional arrays
- Moving computations to the GPU for speed

The process begins by converting our input into floating-point numbers. We will cover converting image pixels to numbers, as we see in the first step of figure 3.1, in chapter 4 (along with many other types of data). But before we can get to that, in this chapter, we learn how to deal with all the floating-point numbers in PyTorch by using tensors.

3.1 The world as floating-point numbers

Since floating-point numbers are the way a network deals with information, we need a way to encode real-world data of the kind we want to process into something digestible by a network and then decode the output back to something we can understand and use for our purpose.

A deep neural network typically learns the transformation from one form of data to another in stages, which means the partially transformed data between each stage can be thought of as a sequence of intermediate representations. For image recognition, early representations can be things such as edge detection or certain textures like fur. Deeper representations can capture more complex structures like ears, noses, or eyes.
In general, such intermediate representations are collections of floating-point numbers that characterize the input and capture the data's structure in a way that is instrumental for describing how inputs are mapped to the outputs of the neural network. Such characterization is specific to the task at hand and is learned from relevant examples. These collections of floating-point numbers and their manipulation are at the heart of modern AI; we will see several examples of this throughout the book.

[Figure 3.1 A deep neural network learns how to transform an input representation (values of pixels) through intermediate representations to an output representation (probability of classes); similar inputs should lead to close representations, especially at deeper levels. (Note: The numbers of neurons and outputs are not to scale.)]

It's important to keep in mind that these intermediate representations (like those shown in the second step of figure 3.1) are the results of combining the input with the weights of the previous layer of neurons. Each intermediate representation is unique to the inputs that preceded it.

Before we can begin the process of converting our data to floating-point input, we must first have a solid understanding of how PyTorch handles and stores data: as input, as intermediate representations, and as output. This chapter will be devoted to precisely that.

To this end, PyTorch introduces a fundamental data structure: the tensor. We already bumped into tensors in chapter 2, when we ran inference on pretrained networks. For those who come from mathematics, physics, or engineering, the term tensor comes bundled with the notion of spaces, reference systems, and transformations between them.
For better or worse, those notions do not apply here. In the context of deep learning, tensors refer to the generalization of vectors and matrices to an arbitrary number of dimensions, as we can see in figure 3.2. Another name for the same concept is multidimensional array. The dimensionality of a tensor coincides with the number of indexes used to refer to scalar values within the tensor.

[Figure 3.2 Tensors are the building blocks for representing data in PyTorch: a scalar (0D), a vector (1D, e.g. X[2] = 5), a matrix (2D, e.g. X[1, 0] = 7), a 3D tensor (e.g. X[0, 2, 1] = 5), and in general an N-dimensional tensor addressed by N indices (e.g. X[1, 3, ..., 2] = 4).]

PyTorch is not the only library that deals with multidimensional arrays. NumPy is by far the most popular multidimensional array library, to the point that it has now arguably become the lingua franca of data science. PyTorch features seamless interoperability with NumPy, which brings with it first-class integration with the rest of the scientific libraries in Python, such as SciPy (www.scipy.org), Scikit-learn (https://scikit-learn.org), and Pandas (https://pandas.pydata.org).

Compared to NumPy arrays, PyTorch tensors have a few superpowers, such as the ability to perform very fast operations on graphical processing units (GPUs), distribute operations on multiple devices or machines, and keep track of the graph of
This includes things like how the data is stored in me mory, how certain operations can be per- formed on arbitraril y large tensors in constant time , and the aforementioned NumPy interoperability and GPU acceleration. Unders tanding the capabilities and API of ten- sors is important if they’re to become go-t o tools in our programming toolbox. In the next chapter, we’ll put this knowledge to good use and learn how to represent several different kinds of data in a way that enables learning with neural networks. 3.2 Tensors: Multidimensional arrays We have already learned that tensors are th e fundamental data structure in PyTorch. A tensor is an array: that is, a data structure that stores a collection of numbers that are accessible individually using an index, and that can be indexed with multiple indices. 3.2.1 From Python lists to PyTorch tensors Let’s see list indexing in action so we can compar e it to tensor indexing. Take a list of three numbers in Python (.code/p1ch3/1_tensors.ipynb): # In[1]: a = [1.0, 2.0, 1.0] We can access the first element of the list using the corresponding zero-based index: # In[2]: a[0] # Out[2]: 1.0 # In[3]: a[2] = 3.0 a # Out[3]: [1.0, 2.0, 3.0] It is not unusual for simple Python programs dealing with vectors of numbers, such as the coordinates of a 2D line, to use Python li sts to store the vectors. As we will see in the following chapter, using the more effici ent tensor data structure, many types of data—from images to time series, and even sentences—can be represented. By defin- ing operations over tensors, some of which we’ll explore in this chapter, we can slice and manipulate data expressively and efficien tly at the same time, even from a high- level (and not particularly fast) language such as Python. " Deep-Learning-with-PyTorch.pdf,"43 Tensors: Multidimensional arrays 3.2.2 Constructing our first tensors Let’s construct our first PyTorch tensor and se e what it looks like. 
It won't be a particularly meaningful tensor for now, just three ones in a column:

# In[4]:
import torch        # Imports the torch module
a = torch.ones(3)   # Creates a one-dimensional tensor of size 3 filled with 1s
a

# Out[4]:
tensor([1., 1., 1.])

# In[5]:
a[1]

# Out[5]:
tensor(1.)

# In[6]:
float(a[1])

# Out[6]:
1.0

# In[7]:
a[2] = 2.0
a

# Out[7]:
tensor([1., 1., 2.])

After importing the torch module, we call a function that creates a (one-dimensional) tensor of size 3 filled with the value 1.0. We can access an element using its zero-based index or assign a new value to it. Although on the surface this example doesn't differ much from a list of number objects, under the hood things are completely different.

3.2.3 The essence of tensors

Python lists or tuples of numbers are collections of Python objects that are individually allocated in memory, as shown on the left in figure 3.3. PyTorch tensors or NumPy arrays, on the other hand, are views over (typically) contiguous memory blocks containing unboxed C numeric types rather than Python objects. Each element is a 32-bit (4-byte) float in this case, as we can see on the right side of figure 3.3. This means storing a 1D tensor of 1,000,000 float numbers will require exactly 4,000,000 contiguous bytes, plus a small overhead for the metadata (such as dimensions and numeric type).

Say we have a list of coordinates we'd like to use to represent a geometrical object: perhaps a 2D triangle with vertices at coordinates (4, 1), (5, 3), and (2, 1). The example is not particularly pertinent to deep learning, but it's easy to follow.
Instead of having coordinates as numbers in a Python list, as we did earlier, we can use a one-dimensional tensor by storing Xs in the even indices and Ys in the odd indices, like this:

# In[8]:
points = torch.zeros(6)   # Using .zeros is just a way to get an appropriately sized array.
points[0] = 4.0           # We overwrite those zeros with the values we actually want.
points[1] = 1.0
points[2] = 5.0
points[3] = 3.0
points[4] = 2.0
points[5] = 1.0

We can also pass a Python list to the constructor, to the same effect:

# In[9]:
points = torch.tensor([4.0, 1.0, 5.0, 3.0, 2.0, 1.0])
points

# Out[9]:
tensor([4., 1., 5., 3., 2., 1.])

To get the coordinates of the first point, we do the following:

# In[10]:
float(points[0]), float(points[1])

# Out[10]:
(4.0, 1.0)

[Figure 3.3 Python object (boxed) numeric values versus tensor (unboxed array) numeric values: a Python list holds pointers to individually allocated float objects, while a tensor or array stores the values contiguously in memory.]

This is OK, although it would be practical to have the first index refer to individual 2D points rather than point coordinates. For this, we can use a 2D tensor:

# In[11]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
points

# Out[11]:
tensor([[4., 1.],
        [5., 3.],
        [2., 1.]])

Here, we pass a list of lists to the constructor. We can ask the tensor about its shape:

# In[12]:
points.shape

# Out[12]:
torch.Size([3, 2])

This informs us about the size of the tensor along each dimension.
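The unboxed, fixed-width storage contrasted in figure 3.3 can be illustrated without PyTorch at all, using the standard library's array module. This is a stdlib aside, not part of the book's code: a C-backed array of 32-bit floats behaves like a 1D float32 tensor's storage, with exactly 4 bytes per element.

```python
import sys
from array import array

# A C-backed array of 32-bit floats: every element occupies exactly
# 4 bytes, stored contiguously, much like a float32 tensor's storage.
points_flat = array('f', [4.0, 1.0, 5.0, 3.0, 2.0, 1.0])
payload_bytes = points_flat.itemsize * len(points_flat)
print(points_flat.itemsize, payload_bytes)  # 4 24

# A Python list of the same values stores six boxed float objects;
# each box alone is larger than 4 bytes (the exact size is
# implementation-dependent, typically 24 bytes on CPython).
print(sys.getsizeof(1.0) > points_flat.itemsize)  # True
```

Scaling this up, a million such elements occupy exactly 4,000,000 contiguous bytes of payload, matching the claim in section 3.2.3.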
We could also use zeros or ones to initialize the tensor, providing the size as a tuple:

# In[13]:
points = torch.zeros(3, 2)
points

# Out[13]:
tensor([[0., 0.],
        [0., 0.],
        [0., 0.]])

Now we can access an individual element in the tensor using two indices:

# In[14]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
points

# Out[14]:
tensor([[4., 1.],
        [5., 3.],
        [2., 1.]])

# In[15]:
points[0, 1]

# Out[15]:
tensor(1.)

This returns the Y-coordinate of the zeroth point in our dataset. We can also access the first element in the tensor as we did before to get the 2D coordinates of the first point:

# In[16]:
points[0]

# Out[16]:
tensor([4., 1.])

The output is another tensor that presents a different view of the same underlying data. The new tensor is a 1D tensor of size 2, referencing the values of the first row in the points tensor. Does this mean a new chunk of memory was allocated, values were copied into it, and the new memory was returned wrapped in a new tensor object? No, because that would be very inefficient, especially if we had millions of points. We'll revisit how tensors are stored later in this chapter when we cover views of tensors in section 3.7.

3.3 Indexing tensors

What if we need to obtain a tensor containing all points but the first? That's easy using range indexing notation, which also applies to standard Python lists.
Here's a reminder:

# In[53]:
some_list = list(range(6))
some_list[:]       # All elements in the list
some_list[1:4]     # From element 1 inclusive to element 4 exclusive
some_list[1:]      # From element 1 inclusive to the end of the list
some_list[:4]      # From the start of the list to element 4 exclusive
some_list[:-1]     # From the start of the list to one before the last element
some_list[1:4:2]   # From element 1 inclusive to element 4 exclusive, in steps of 2

To achieve our goal, we can use the same notation for PyTorch tensors, with the added benefit that, just as in NumPy and other Python scientific libraries, we can use range indexing for each of the tensor's dimensions:

# In[54]:
points[1:]       # All rows after the first; implicitly all columns
points[1:, :]    # All rows after the first; all columns
points[1:, 0]    # All rows after the first; first column
points[None]     # Adds a dimension of size 1, just like unsqueeze

In addition to using ranges, PyTorch features a powerful form of indexing, called advanced indexing, which we will look at in the next chapter.

3.4 Named tensors

The dimensions (or axes) of our tensors usually index something like pixel locations or color channels. This means when we want to index into a tensor, we need to remember the ordering of the dimensions and write our indexing accordingly. As data is transformed through multiple tensors, keeping track of which dimension contains what data can be error-prone.

To make things concrete, imagine that we have a 3D tensor like img_t from section 2.1.4 (we will use dummy data for simplicity here), and we want to convert it to grayscale.
We looked up typical weights for the colors to derive a single brightness value:1

# In[2]:
img_t = torch.randn(3, 5, 5)  # shape [channels, rows, columns]
weights = torch.tensor([0.2126, 0.7152, 0.0722])

We also often want our code to generalize, for example from grayscale images represented as 2D tensors with height and width dimensions to color images adding a third channel dimension (as in RGB), or from a single image to a batch of images. In section 2.1.4, we introduced an additional batch dimension in batch_t; here we pretend to have a batch of 2:

# In[3]:
batch_t = torch.randn(2, 3, 5, 5)  # shape [batch, channels, rows, columns]

So sometimes the RGB channels are in dimension 0, and sometimes they are in dimension 1. But we can generalize by counting from the end: they are always in dimension -3, the third from the end. The lazy, unweighted mean can thus be written as follows:

# In[4]:
img_gray_naive = img_t.mean(-3)
batch_gray_naive = batch_t.mean(-3)
img_gray_naive.shape, batch_gray_naive.shape

# Out[4]:
(torch.Size([5, 5]), torch.Size([2, 5, 5]))

But now we have the weights, too. PyTorch will allow us to multiply things that are the same shape, as well as shapes where one operand is of size 1 in a given dimension. It also appends leading dimensions of size 1 automatically. This is a feature called broadcasting.
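The broadcasting rule just described can be sketched in a few lines of plain Python. The helper below is hypothetical (not a PyTorch API): it computes the result shape the way broadcasting does, aligning shapes at their trailing dimensions, treating missing leading dimensions as size 1, and stretching any size-1 dimension to match the other operand.

```python
def broadcast_shape(*shapes):
    """Compute the broadcast result shape of the given shapes
    (a plain-Python sketch of the rule, not a PyTorch function)."""
    ndim = max(len(s) for s in shapes)
    # Prepend size-1 dimensions so all shapes have the same length.
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    result = []
    for dims in zip(*padded):
        sizes = {d for d in dims if d != 1}
        if len(sizes) > 1:
            raise ValueError(f"incompatible sizes: {dims}")
        result.append(sizes.pop() if sizes else 1)
    return tuple(result)

print(broadcast_shape((2, 3, 5, 5), (3, 1, 1)))  # (2, 3, 5, 5)
```

Applying it to the shapes used in this section, broadcast_shape((2, 3, 5, 5), (3, 1, 1)) gives (2, 3, 5, 5), the same shape PyTorch produces when multiplying the batch by the unsqueezed weights.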
batch_t of shape (2, 3, 5, 5) is multiplied by unsqueezed_weights of shape (3, 1, 1), resulting in a tensor of shape (2, 3, 5, 5), from which we can then sum the third dimension from the end (the three channels):

# In[5]:
unsqueezed_weights = weights.unsqueeze(-1).unsqueeze_(-1)
img_weights = (img_t * unsqueezed_weights)
batch_weights = (batch_t * unsqueezed_weights)
img_gray_weighted = img_weights.sum(-3)
batch_gray_weighted = batch_weights.sum(-3)
batch_weights.shape, batch_t.shape, unsqueezed_weights.shape

# Out[5]:
(torch.Size([2, 3, 5, 5]), torch.Size([2, 3, 5, 5]), torch.Size([3, 1, 1]))

1 As perception is not trivial to norm, people have come up with many weights. For example, see https://en.wikipedia.org/wiki/Luma_(video).

Because this gets messy quickly, and for the sake of efficiency, the PyTorch function einsum (adapted from NumPy) specifies an indexing mini-language2 giving index names to dimensions for sums of such products. As often in Python, broadcasting, a form of summarizing unnamed things, is done using three dots '...'; but don't worry too much about einsum, because we will not use it in the following:

# In[6]:
img_gray_weighted_fancy = torch.einsum('...chw,c->...hw', img_t, weights)
batch_gray_weighted_fancy = torch.einsum('...chw,c->...hw', batch_t, weights)
batch_gray_weighted_fancy.shape

# Out[6]:
torch.Size([2, 5, 5])

As we can see, there is quite a lot of bookkeeping involved. This is error-prone, especially when the locations where tensors are created and used are far apart in our code. This has caught the eye of practitioners, and so it has been suggested3 that dimensions be given names instead.

PyTorch 1.3 added named tensors as an experimental feature (see https://pytorch.org/tutorials/intermediate/named_tensor_tutorial.html and https://pytorch.org/docs/stable/named_tensor.html). Tensor factory functions such as tensor and rand take a names argument.
The names should be a sequence of strings:

# In[7]:
weights_named = torch.tensor([0.2126, 0.7152, 0.0722], names=['channels'])
weights_named

# Out[7]:
tensor([0.2126, 0.7152, 0.0722], names=('channels',))

When we already have a tensor and want to add names (but not change existing ones), we can call the method refine_names on it. Similar to indexing, the ellipsis (...) allows you to leave out any number of dimensions. With the rename sibling method, you can also overwrite or drop (by passing in None) existing names:

# In[8]:
img_named = img_t.refine_names(..., 'channels', 'rows', 'columns')
batch_named = batch_t.refine_names(..., 'channels', 'rows', 'columns')
print("img named:", img_named.shape, img_named.names)
print("batch named:", batch_named.shape, batch_named.names)

# Out[8]:
img named: torch.Size([3, 5, 5]) ('channels', 'rows', 'columns')
batch named: torch.Size([2, 3, 5, 5]) (None, 'channels', 'rows', 'columns')

2 Tim Rocktäschel's blog post "Einsum is All You Need—Einstein Summation in Deep Learning" (https://rockt.github.io/2018/04/30/einsum) gives a good overview.
3 See Sasha Rush, "Tensor Considered Harmful," Harvardnlp, http://nlp.seas.harvard.edu/NamedTensor.

For operations with two inputs, in addition to the usual dimension checks (whether sizes are the same, or whether one is 1 and can be broadcast to the other), PyTorch will now check the names for us. So far, it does not automatically align dimensions, so we need to do this explicitly.
The method align_as returns a tensor with missing dimensions added and existing ones permuted to the right order:

# In[9]:
weights_aligned = weights_named.align_as(img_named)
weights_aligned.shape, weights_aligned.names

# Out[9]:
(torch.Size([3, 1, 1]), ('channels', 'rows', 'columns'))

Functions accepting dimension arguments, like sum, also take named dimensions:

# In[10]:
gray_named = (img_named * weights_aligned).sum('channels')
gray_named.shape, gray_named.names

# Out[10]:
(torch.Size([5, 5]), ('rows', 'columns'))

If we try to combine dimensions with different names, we get an error:

gray_named = (img_named[..., :3] * weights_named).sum('channels')

RuntimeError: Error when attempting to broadcast dims ['channels', 'rows',
'columns'] and dims ['channels']: dim 'columns' and dim 'channels' are at the
same position from the right but do not match.

If we want to use tensors outside functions that operate on named tensors, we need to drop the names by renaming them to None. The following gets us back into the world of unnamed dimensions:

# In[12]:
gray_plain = gray_named.rename(None)
gray_plain.shape, gray_plain.names

# Out[12]:
(torch.Size([5, 5]), (None, None))

Given the experimental nature of this feature at the time of writing, and to avoid mucking around with indexing and alignment, we will stick to unnamed tensors in the remainder of the book. Named tensors have the potential to eliminate many sources of alignment errors, which, if the PyTorch forum is any indication, can be a source of headaches. It will be interesting to see how widely they will be adopted.

3.5 Tensor element types

So far, we have covered the basics of how tensors work, but we have not yet touched on what kinds of numeric types we can store in a Tensor. As we hinted at in section 3.2, using the standard Python numeric types can be suboptimal for several reasons:

- Numbers in Python are objects.
Whereas a floating-point number might require only, for instance, 32 bits to be represented on a computer, Python will convert it into a full-fledged Python object with reference counting, and so on. This operation, called boxing, is not a problem if we need to store a small number of numbers, but allocating millions gets very inefficient.

- Lists in Python are meant for sequential collections of objects. There are no operations defined for, say, efficiently taking the dot product of two vectors, or summing vectors together. Also, Python lists have no way of optimizing the layout of their contents in memory, as they are indexable collections of pointers to Python objects (of any kind, not just numbers). Finally, Python lists are one-dimensional, and although we can create lists of lists, this is again very inefficient.

- The Python interpreter is slow compared to optimized, compiled code. Performing mathematical operations on large collections of numerical data can be much faster using optimized code written in a compiled, low-level language like C.

For these reasons, data science libraries rely on NumPy or introduce dedicated data structures like PyTorch tensors, which provide efficient low-level implementations of numerical data structures and related operations on them, wrapped in a convenient high-level API. To enable this, the objects within a tensor must all be numbers of the same type, and PyTorch must keep track of this numeric type.

3.5.1 Specifying the numeric type with dtype

The dtype argument to tensor constructors (that is, functions like tensor, zeros, and ones) specifies the numerical data (d) type that will be contained in the tensor. The data type specifies the possible values the tensor can hold (integers versus floating-point numbers) and the number of bytes per value.4 The dtype argument is deliberately similar to the standard NumPy argument of the same name.
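The per-value byte counts that dtypes control correspond to fixed-width C types. As a quick stdlib cross-check (independent of PyTorch), the struct module reports the same widths for the matching C format codes:

```python
import struct

# struct format codes for the C types behind common dtypes:
# 'b' int8, 'h' int16, 'i' int32, 'q' int64,
# 'e' half (float16), 'f' float (float32), 'd' double (float64)
widths = {name: struct.calcsize(code)
          for code, name in [('b', 'int8'), ('h', 'int16'), ('i', 'int32'),
                             ('q', 'int64'), ('e', 'float16'),
                             ('f', 'float32'), ('d', 'float64')]}
print(widths)
# {'int8': 1, 'int16': 2, 'int32': 4, 'int64': 8,
#  'float16': 2, 'float32': 4, 'float64': 8}
```

A float32 value really is 4 bytes wide, which is why the 1,000,000-element tensor from section 3.2.3 needs 4,000,000 bytes.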
Here's a list of the possible values for the dtype argument:

- torch.float32 or torch.float: 32-bit floating-point
- torch.float64 or torch.double: 64-bit, double-precision floating-point
- torch.float16 or torch.half: 16-bit, half-precision floating-point
- torch.int8: signed 8-bit integers
- torch.uint8: unsigned 8-bit integers
- torch.int16 or torch.short: signed 16-bit integers
- torch.int32 or torch.int: signed 32-bit integers
- torch.int64 or torch.long: signed 64-bit integers
- torch.bool: Boolean

4 And signedness, in the case of uint8.

The default data type for tensors is 32-bit floating-point.

3.5.2 A dtype for every occasion

As we will see in future chapters, computations happening in neural networks are typically executed with 32-bit floating-point precision. Higher precision, like 64-bit, will not buy improvements in the accuracy of a model and will require more memory and computing time. The 16-bit, half-precision floating-point data type is not present natively in standard CPUs, but it is offered on modern GPUs. It is possible to switch to half precision to decrease the footprint of a neural network model if needed, with a minor impact on accuracy.

Tensors can be used as indexes in other tensors. In this case, PyTorch expects indexing tensors to have a 64-bit integer data type. Creating a tensor with integers as arguments, such as using torch.tensor([2, 2]), will create a 64-bit integer tensor by default. As such, we'll spend most of our time dealing with float32 and int64.

Finally, predicates on tensors, such as points > 1.0, produce bool tensors indicating whether each individual element satisfies the condition. These are the numeric types in a nutshell.

3.5.3 Managing a tensor's dtype attribute

In order to allocate a tensor of the right numeric type, we can specify the proper dtype as an argument to the constructor.
For example:

# In[47]:
double_points = torch.ones(10, 2, dtype=torch.double)
short_points = torch.tensor([[1, 2], [3, 4]], dtype=torch.short)

We can find out about the dtype for a tensor by accessing the corresponding attribute:

# In[48]:
short_points.dtype

# Out[48]:
torch.int16

We can also cast the output of a tensor creation function to the right type using the corresponding casting method, such as

# In[49]:
double_points = torch.zeros(10, 2).double()
short_points = torch.ones(10, 2).short()

or the more convenient to method:

# In[50]:
double_points = torch.zeros(10, 2).to(torch.double)
short_points = torch.ones(10, 2).to(dtype=torch.short)

Under the hood, to checks whether the conversion is necessary and, if so, does it. The dtype-named casting methods like float are shorthands for to, but the to method can take additional arguments that we’ll discuss in section 3.9.

When mixing input types in operations, the inputs are converted to the larger type automatically. Thus, if we want 32-bit computation, we need to make sure all our inputs are (at most) 32-bit:

# In[51]:
points_64 = torch.rand(5, dtype=torch.double)
points_short = points_64.to(torch.short)
points_64 * points_short  # works from PyTorch 1.3 onwards

# Out[51]:
tensor([0., 0., 0., 0., 0.], dtype=torch.float64)

3.6 The tensor API

At this point, we know what PyTorch tensors are and how they work under the hood. Before we wrap up, it is worth taking a look at the tensor operations that PyTorch offers. It would be of little use to list them all here. Instead, we’re going to get a general feel for the API and establish a few directions on where to find things in the online documentation at http://pytorch.org/docs.

First, the vast majority of operations on and between tensors are available in the torch module and can also be called as methods of a tensor object.
For instance, the transpose function we encountered earlier can be used from the torch module

# In[71]:
a = torch.ones(3, 2)
a_t = torch.transpose(a, 0, 1)
a.shape, a_t.shape

# Out[71]:
(torch.Size([3, 2]), torch.Size([2, 3]))

or as a method of the a tensor:

# In[72]:
a = torch.ones(3, 2)
a_t = a.transpose(0, 1)
a.shape, a_t.shape

# Out[72]:
(torch.Size([3, 2]), torch.Size([2, 3]))

There is no difference between the two forms; they can be used interchangeably.

NOTE The rand function initializes the tensor elements to random numbers between 0 and 1.

We mentioned the online docs earlier (http://pytorch.org/docs). They are exhaustive and well organized, with the tensor operations divided into groups:

- Creation ops: Functions for constructing a tensor, like ones and from_numpy
- Indexing, slicing, joining, mutating ops: Functions for changing the shape, stride, or content of a tensor, like transpose
- Math ops: Functions for manipulating the content of the tensor through computations
  - Pointwise ops: Functions for obtaining a new tensor by applying a function to each element independently, like abs and cos
  - Reduction ops: Functions for computing aggregate values by iterating through tensors, like mean, std, and norm
  - Comparison ops: Functions for evaluating numerical predicates over tensors, like equal and max
  - Spectral ops: Functions for transforming in and operating in the frequency domain, like stft and hamming_window
  - Other operations: Special functions operating on vectors, like cross, or matrices, like trace
  - BLAS and LAPACK operations: Functions following the Basic Linear Algebra Subprograms (BLAS) specification for scalar, vector-vector, matrix-vector, and matrix-matrix operations
- Random sampling: Functions for generating values by drawing randomly from probability distributions, like randn and normal
- Serialization: Functions for saving and loading tensors, like load and save
- Parallelism: Functions for
controlling the number of threads for parallel CPU execution, like set_num_threads

Take some time to play with the general tensor API. This chapter has provided all the prerequisites to enable this kind of interactive exploration. We will also encounter several of the tensor operations as we proceed with the book, starting in the next chapter.

3.7 Tensors: Scenic views of storage

It is time for us to look a bit closer at the implementation under the hood. Values in tensors are allocated in contiguous chunks of memory managed by torch.Storage instances. A storage is a one-dimensional array of numerical data: that is, a contiguous block of memory containing numbers of a given type, such as float (32 bits representing a floating-point number) or int64 (64 bits representing an integer). A PyTorch Tensor instance is a view of such a Storage instance that is capable of indexing into that storage using an offset and per-dimension strides.5

Multiple tensors can index the same storage even if they index into the data differently. We can see an example of this in figure 3.4. In fact, when we requested points[0] in section 3.2, what we got back is another tensor that indexes the same storage as the points tensor, just not all of it, and with different dimensionality (1D versus 2D). The underlying memory is allocated only once, however, so creating alternate tensor-views of the data can be done quickly regardless of the size of the data managed by the Storage instance.

5 Storage may not be directly accessible in future PyTorch releases, but what we show here still provides a good mental picture of how tensors work under the hood.

3.7.1 Indexing into storage

Let’s see how indexing into the storage works in practice with our 2D points.
The storage for a given tensor is accessible using the .storage property:

# In[17]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
points.storage()

# Out[17]:
 4.0
 1.0
 5.0
 3.0
 2.0
 1.0
[torch.FloatStorage of size 6]

Even though the tensor reports itself as having three rows and two columns, the storage under the hood is a contiguous array of size 6. In this sense, the tensor just knows how to translate a pair of indices into a location in the storage.

We can also index into a storage manually. For instance:

# In[18]:
points_storage = points.storage()
points_storage[0]

# Out[18]:
4.0

[Figure 3.4 Tensors are views of a Storage instance: multiple tensors with different shapes and offsets can reference the same underlying storage.]

# In[19]:
points.storage()[1]

# Out[19]:
1.0

We can’t index a storage of a 2D tensor using two indices. The layout of a storage is always one-dimensional, regardless of the dimensionality of any and all tensors that might refer to it.

At this point, it shouldn’t come as a surprise that changing the value of a storage leads to changing the content of its referring tensor:

# In[20]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
points_storage = points.storage()
points_storage[0] = 2.0
points

# Out[20]:
tensor([[2., 1.],
        [5., 3.],
        [2., 1.]])

3.7.2 Modifying stored values: In-place operations

In addition to the operations on tensors introduced in the previous section, a small number of operations exist only as methods of the Tensor object. They are recognizable from a trailing underscore in their name, like zero_, which indicates that the method operates in place by modifying the input instead of creating a new output tensor and returning it. For instance, the zero_ method zeros out all the elements of the input.
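The trailing-underscore convention applies across the API; here is a quick sketch (not one of the book's numbered listings) contrasting add with its in-place sibling add_:

```python
import torch

a = torch.ones(3, 2)

b = a.add(1.0)  # out-of-place: returns a new tensor and leaves a untouched
a.add_(1.0)     # in-place: modifies a itself (and returns it as well)

# Both tensors now hold all 2s, but only add_ changed the original a.
print(a.tolist())
print(b.tolist())
```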
Any method without the trailing underscore leaves the source tensor unchanged and instead returns a new tensor:

# In[73]:
a = torch.ones(3, 2)

# In[74]:
a.zero_()
a

# Out[74]:
tensor([[0., 0.],
        [0., 0.],
        [0., 0.]])

3.8 Tensor metadata: Size, offset, and stride

In order to index into a storage, tensors rely on a few pieces of information that, together with their storage, unequivocally define them: size, offset, and stride. How these interact is shown in figure 3.5. The size (or shape, in NumPy parlance) is a tuple indicating how many elements across each dimension the tensor represents. The storage offset is the index in the storage corresponding to the first element in the tensor. The stride is the number of elements in the storage that need to be skipped over to obtain the next element along each dimension.

3.8.1 Views of another tensor’s storage

We can get the second point in the tensor by providing the corresponding index:

# In[21]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
second_point = points[1]
second_point.storage_offset()

# Out[21]:
2

# In[22]:
second_point.size()

# Out[22]:
torch.Size([2])

The resulting tensor has offset 2 in the storage (since we need to skip the first point, which has two items), and the size is an instance of the Size class containing one element, since the tensor is one-dimensional.

[Figure 3.5 Relationship between a tensor’s offset, size, and stride: the pictured tensor is a view of a larger storage, with shape (3, 3), offset 1, and stride (3, 1).]
It’s important to note that this is the same information contained in the shape property of tensor objects:

# In[23]:
second_point.shape

# Out[23]:
torch.Size([2])

The stride is a tuple indicating the number of elements in the storage that have to be skipped when the index is increased by 1 in each dimension. For instance, our points tensor has a stride of (2, 1):

# In[24]:
points.stride()

# Out[24]:
(2, 1)

Accessing an element i, j in a 2D tensor results in accessing the storage_offset + stride[0] * i + stride[1] * j element in the storage. The offset will usually be zero; if this tensor is a view of a storage created to hold a larger tensor, the offset might be a positive value.

This indirection between Tensor and Storage makes some operations inexpensive, like transposing a tensor or extracting a subtensor, because they do not lead to memory reallocations. Instead, they consist of allocating a new Tensor object with a different value for size, storage offset, or stride.

We already extracted a subtensor when we indexed a specific point and saw the storage offset increasing. Let’s see what happens to the size and stride as well:

# In[25]:
second_point = points[1]
second_point.size()

# Out[25]:
torch.Size([2])

# In[26]:
second_point.storage_offset()

# Out[26]:
2

# In[27]:
second_point.stride()

# Out[27]:
(1,)

The bottom line is that the subtensor has one less dimension, as we would expect, while still indexing the same storage as the original points tensor.
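The indexing arithmetic above is easy to verify by hand; here is a small sketch (values chosen to match the running points example, not a book listing):

```python
import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
storage = points.storage()
offset = points.storage_offset()   # 0 for a freshly created tensor
stride = points.stride()           # (2, 1): next row skips 2, next column skips 1

# Element [i, j] lives at offset + stride[0]*i + stride[1]*j in the storage.
i, j = 2, 0
flat = offset + stride[0] * i + stride[1] * j
assert storage[flat] == points[i, j].item()

# The subtensor points[1] views the same storage with offset 2 and stride (1,).
second_point = points[1]
assert second_point.storage_offset() == 2
assert second_point.stride() == (1,)
```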
This also means changing the subtensor will have a side effect on the original tensor:

# In[28]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
second_point = points[1]
second_point[0] = 10.0
points

# Out[28]:
tensor([[ 4., 1.],
        [10., 3.],
        [ 2., 1.]])

This might not always be desirable, so we can eventually clone the subtensor into a new tensor:

# In[29]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
second_point = points[1].clone()
second_point[0] = 10.0
points

# Out[29]:
tensor([[4., 1.],
        [5., 3.],
        [2., 1.]])

3.8.2 Transposing without copying

Let’s try transposing now. Let’s take our points tensor, which has individual points in the rows and X and Y coordinates in the columns, and turn it around so that individual points are in the columns. We take this opportunity to introduce the t function, a shorthand alternative to transpose for two-dimensional tensors:

# In[30]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
points

# Out[30]:
tensor([[4., 1.],
        [5., 3.],
        [2., 1.]])

# In[31]:
points_t = points.t()
points_t

# Out[31]:
tensor([[4., 5., 2.],
        [1., 3., 1.]])

TIP To help build a solid understanding of the mechanics of tensors, it may be a good idea to grab a pencil and a piece of paper and scribble diagrams like the one in figure 3.5 as we step through the code in this section.

We can easily verify that the two tensors share the same storage

# In[32]:
id(points.storage()) == id(points_t.storage())

# Out[32]:
True

and that they differ only in shape and stride:

# In[33]:
points.stride()

# Out[33]:
(2, 1)

# In[34]:
points_t.stride()

# Out[34]:
(1, 2)

This tells us that increasing the first index by one in points (for example, going from points[0,0] to points[1,0]) will skip along the storage by two elements, while increasing the second index (from points[0,0] to points[0,1]) will skip along the storage by one.
In other words, the storage holds the elements in the tensor sequentially row by row.

We can transpose points into points_t, as shown in figure 3.6. We change the order of the elements in the stride. After that, increasing the row (the first index of the tensor) will skip along the storage by one, just like when we were moving along columns in points. This is the very definition of transposing. No new memory is allocated: transposing is obtained only by creating a new Tensor instance with a different stride ordering than the original.

[Figure 3.6 Transpose operation applied to a tensor: the stride changes from (3, 1) to (1, 3), while the storage stays the same.]

3.8.3 Transposing in higher dimensions

Transposing in PyTorch is not limited to matrices. We can transpose a multidimensional array by specifying the two dimensions along which transposing (flipping shape and stride) should occur:

# In[35]:
some_t = torch.ones(3, 4, 5)
transpose_t = some_t.transpose(0, 2)
some_t.shape

# Out[35]:
torch.Size([3, 4, 5])

# In[36]:
transpose_t.shape

# Out[36]:
torch.Size([5, 4, 3])

# In[37]:
some_t.stride()

# Out[37]:
(20, 5, 1)

# In[38]:
transpose_t.stride()

# Out[38]:
(1, 5, 20)

A tensor whose values are laid out in the storage starting from the rightmost dimension onward (that is, moving along rows for a 2D tensor) is defined as contiguous. Contiguous tensors are convenient because we can visit them efficiently in order without jumping around in the storage (improving data locality improves performance because of the way memory access works on modern CPUs). This advantage of course depends on the way algorithms visit.

3.8.4 Contiguous tensors

Some tensor operations in PyTorch only work on contiguous tensors, such as view, which we’ll encounter in the next chapter.
In that case, PyTorch will throw an informative exception and require us to call contiguous explicitly. It’s worth noting that calling contiguous will do nothing (and will not hurt performance) if the tensor is already contiguous.

In our case, points is contiguous, while its transpose is not:

# In[39]:
points.is_contiguous()

# Out[39]:
True

# In[40]:
points_t.is_contiguous()

# Out[40]:
False

We can obtain a new contiguous tensor from a non-contiguous one using the contiguous method. The content of the tensor will be the same, but the stride will change, as will the storage:

# In[41]:
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
points_t = points.t()
points_t

# Out[41]:
tensor([[4., 5., 2.],
        [1., 3., 1.]])

# In[42]:
points_t.storage()

# Out[42]:
 4.0
 1.0
 5.0
 3.0
 2.0
 1.0
[torch.FloatStorage of size 6]

# In[43]:
points_t.stride()

# Out[43]:
(1, 2)

# In[44]:
points_t_cont = points_t.contiguous()
points_t_cont

# Out[44]:
tensor([[4., 5., 2.],
        [1., 3., 1.]])

# In[45]:
points_t_cont.stride()

# Out[45]:
(3, 1)

# In[46]:
points_t_cont.storage()

# Out[46]:
 4.0
 5.0
 2.0
 1.0
 3.0
 1.0
[torch.FloatStorage of size 6]

Notice that the storage has been reshuffled in order for elements to be laid out row-by-row in the new storage. The stride has been changed to reflect the new layout.

As a refresher, figure 3.7 shows our diagram again. Hopefully it will all make sense now that we’ve taken a good look at how tensors are built.

3.9 Moving tensors to the GPU

So far in this chapter, when we’ve talked about storage, we’ve meant memory on the CPU. PyTorch tensors also can be stored on a different kind of processor: a graphics processing unit (GPU). Every PyTorch tensor can be transferred to (one of) the GPU(s) in order to perform massively parallel, fast computations.
All operations that will be performed on the tensor will be carried out using GPU-specific routines that come with PyTorch.

[Figure 3.7 Relationship between a tensor’s offset, size, and stride (repeating figure 3.5). Here the tensor is a view of a larger storage, like one that might have been allocated when creating a larger tensor.]

3.9.1 Managing a tensor’s device attribute

In addition to dtype, a PyTorch Tensor also has the notion of device, which is where on the computer the tensor data is placed. Here is how we can create a tensor on the GPU by specifying the corresponding argument to the constructor:

# In[64]:
points_gpu = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]], device='cuda')

We could instead copy a tensor created on the CPU onto the GPU using the to method:

# In[65]:
points_gpu = points.to(device='cuda')

Doing so returns a new tensor that has the same numerical data, but stored in the RAM of the GPU, rather than in regular system RAM. Now that the data is stored locally on the GPU, we’ll start to see the speedups mentioned earlier when performing mathematical operations on the tensor. In almost all cases, CPU- and GPU-based tensors expose the same user-facing API, making it much easier to write code that is agnostic to where, exactly, the heavy number crunching is running.
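One common way to exploit this API symmetry is to pick the device once and use it everywhere; a minimal sketch (this pattern is not one of the book's listings, and it assumes the standard torch.cuda.is_available check):

```python
import torch

# Fall back to the CPU when no CUDA device is present, so the same
# script runs unchanged on machines with and without a GPU.
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
points = points.to(device)

# The computation runs wherever the tensor lives.
result = 2 * points
print(result.device)
```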
If our machine has more than one GPU, we can also decide on which GPU we allocate the tensor by passing a zero-based integer identifying the GPU on the machine, such as

# In[66]:
points_gpu = points.to(device='cuda:0')

At this point, any operation performed on the tensor, such as multiplying all elements by a constant, is carried out on the GPU:

# In[67]:
points = 2 * points                        # multiplication performed on the CPU
points_gpu = 2 * points.to(device='cuda')  # multiplication performed on the GPU

PyTorch support for various GPUs
As of mid-2019, the main PyTorch releases only have acceleration on GPUs that have support for CUDA. PyTorch can run on AMD’s ROCm (https://rocm.github.io), and the master repository provides support, but so far, you need to compile it yourself. (Before the regular build process, you need to run tools/amd_build/build_amd.py to translate the GPU code.) Support for Google’s tensor processing units (TPUs) is a work in progress (https://github.com/pytorch/xla), with the current proof of concept available to the public in Google Colab: https://colab.research.google.com. Implementation of data structures and kernels on other GPU technologies, such as OpenCL, are not planned at the time of this writing.

Note that the points_gpu tensor is not brought back to the CPU once the result has been computed. Here’s what happened in this line:

1 The points tensor is copied to the GPU.
2 A new tensor is allocated on the GPU and used to store the result of the multiplication.
3 A handle to that GPU tensor is returned.

Therefore, if we also add a constant to the result

# In[68]:
points_gpu = points_gpu + 4

the addition is still performed on the GPU, and no information flows to the CPU (unless we print or access the resulting tensor).
In order to move the tensor back to the CPU, we need to provide a cpu argument to the to method, such as

# In[69]:
points_cpu = points_gpu.to(device='cpu')

We can also use the shorthand methods cpu and cuda instead of the to method to achieve the same goal:

# In[70]:
points_gpu = points.cuda()     # defaults to GPU index 0
points_gpu = points.cuda(0)
points_cpu = points_gpu.cpu()

It’s also worth mentioning that by using the to method, we can change the placement and the data type simultaneously by providing both device and dtype as arguments.

3.10 NumPy interoperability

We’ve mentioned NumPy here and there. While we do not consider NumPy a prerequisite for reading this book, we strongly encourage you to become familiar with NumPy due to its ubiquity in the Python data science ecosystem. PyTorch tensors can be converted to NumPy arrays and vice versa very efficiently. By doing so, we can take advantage of the huge swath of functionality in the wider Python ecosystem that has built up around the NumPy array type. This zero-copy interoperability with NumPy arrays is due to the storage system working with the Python buffer protocol (https://docs.python.org/3/c-api/buffer.html).

To get a NumPy array out of our points tensor, we just call

# In[55]:
points = torch.ones(3, 4)
points_np = points.numpy()
points_np

# Out[55]:
array([[1., 1., 1., 1.],
       [1., 1., 1., 1.],
       [1., 1., 1., 1.]], dtype=float32)

which will return a NumPy multidimensional array of the right size, shape, and numerical type. Interestingly, the returned array shares the same underlying buffer with the tensor storage. This means the numpy method can be effectively executed at basically no cost, as long as the data sits in CPU RAM. It also means modifying the NumPy array will lead to a change in the originating tensor.
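The buffer sharing just described is easy to check with a quick experiment (CPU tensors only; a sketch, not one of the book's listings):

```python
import torch

points = torch.ones(3, 4)
points_np = points.numpy()          # zero-copy: same underlying buffer

points_np[0, 0] = 0.0               # mutate through NumPy...
assert points[0, 0].item() == 0.0   # ...and the tensor sees the change

points[1, 1] = 7.0                  # mutate through the tensor...
assert points_np[1, 1] == 7.0       # ...and the array sees it too
```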
If the tensor is allocated on the GPU, PyTorch will make a copy of the content of the tensor into a NumPy array allocated on the CPU.

Conversely, we can obtain a PyTorch tensor from a NumPy array this way

# In[56]:
points = torch.from_numpy(points_np)

which will use the same buffer-sharing strategy we just described.

NOTE While the default numeric type in PyTorch is 32-bit floating-point, for NumPy it is 64-bit. As discussed in section 3.5.2, we usually want to use 32-bit floating-point, so we need to make sure we have tensors of dtype torch.float after converting.

3.11 Generalized tensors are tensors, too

For the purposes of this book, and for the vast majority of applications in general, tensors are multidimensional arrays, just as we’ve seen in this chapter. If we risk a peek under the hood of PyTorch, there is a twist: how the data is stored under the hood is separate from the tensor API we discussed in section 3.6. Any implementation that meets the contract of that API can be considered a tensor!

PyTorch will cause the right computation functions to be called regardless of whether our tensor is on the CPU or the GPU. This is accomplished through a dispatching mechanism, and that mechanism can cater to other tensor types by hooking up the user-facing API to the right backend functions. Sure enough, there are other kinds of tensors: some are specific to certain classes of hardware devices (like Google TPUs), and others have data-representation strategies that differ from the dense array style we’ve seen so far. For example, sparse tensors store only nonzero entries, along with index information. The PyTorch dispatcher on the left in figure 3.8 is designed to be extensible; the subsequent switching done to accommodate the various numeric types of figure 3.8 shown on the right is a fixed aspect of the implementation coded into each backend.
We will meet quantized tensors in chapter 15, which are implemented as another type of tensor with a specialized computational backend. Sometimes the usual tensors we use are called dense or strided to differentiate them from tensors using other memory layouts.

[Figure 3.8 The dispatcher in PyTorch is one of its key infrastructure bits.]

As with many things, the number of kinds of tensors has grown as PyTorch supports a broader range of hardware and applications. We can expect new kinds to continue to arise as people explore new ways to express and perform computations with PyTorch.

3.12 Serializing tensors

Creating a tensor on the fly is all well and good, but if the data inside is valuable, we will want to save it to a file and load it back at some point. After all, we don’t want to have to retrain a model from scratch every time we start running our program! PyTorch uses pickle under the hood to serialize the tensor object, plus dedicated serialization code for the storage. Here’s how we can save our points tensor to an ourpoints.t file:

# In[57]:
torch.save(points, '../data/p1ch3/ourpoints.t')

As an alternative, we can pass a file descriptor in lieu of the filename:

# In[58]:
with open('../data/p1ch3/ourpoints.t','wb') as f:
    torch.save(points, f)

Loading our points back is similarly a one-liner

# In[59]:
points = torch.load('../data/p1ch3/ourpoints.t')

or, equivalently,

# In[60]:
with open('../data/p1ch3/ourpoints.t','rb') as f:
    points = torch.load(f)

While we can quickly save tensors this way if we only want to load them with PyTorch, the file format itself is not interoperable: we can’t read the tensor with software other than PyTorch. Depending on the use case, this may or may not be a limitation, but we should learn how to save tensors interoperably for those times when it is. We’ll look next at how to do so.
3.12.1 Serializing to HDF5 with h5py

Every use case is unique, but we suspect needing to save tensors interoperably will be more common when introducing PyTorch into existing systems that already rely on different libraries. New projects probably won’t need to do this as often.

For those cases when you need to, however, you can use the HDF5 format and library (www.hdfgroup.org/solutions/hdf5). HDF5 is a portable, widely supported format for representing serialized multidimensional arrays, organized in a nested key-value dictionary. Python supports HDF5 through the h5py library (www.h5py.org), which accepts and returns data in the form of NumPy arrays. We can install h5py using

$ conda install h5py

At this point, we can save our points tensor by converting it to a NumPy array (at no cost, as we noted earlier) and passing it to the create_dataset function:

# In[61]:
import h5py

f = h5py.File('../data/p1ch3/ourpoints.hdf5', 'w')
dset = f.create_dataset('coords', data=points.numpy())
f.close()

Here 'coords' is a key into the HDF5 file. We can have other keys, even nested ones. One of the interesting things in HDF5 is that we can index the dataset while on disk and access only the elements we’re interested in. Let’s suppose we want to load just the last two points in our dataset:

# In[62]:
f = h5py.File('../data/p1ch3/ourpoints.hdf5', 'r')
dset = f['coords']
last_points = dset[-2:]

The data is not loaded when the file is opened or the dataset is required. Rather, the data stays on disk until we request the second-to-last and last rows in the dataset. At that point, h5py accesses those two rows and returns a NumPy array-like object encapsulating that region in that dataset that behaves like a NumPy array and has the same API. Owing to this fact, we can pass the returned object to the torch.from_numpy function to obtain a tensor directly.
Note that in this case, the data is copied over to the tensor’s storage:

# In[63]:
last_points = torch.from_numpy(dset[-2:])
f.close()

Once we’re finished loading data, we close the file. Closing the HDF5 file invalidates the datasets, and trying to access dset afterward will give an exception. As long as we stick to the order shown here, we are fine and can now work with the last_points tensor.

3.13 Conclusion

Now we have covered everything we need to get started with representing everything in floats. We’ll cover other aspects of tensors, such as creating views of tensors, indexing tensors with other tensors, and broadcasting (which simplifies performing element-wise operations between tensors of different sizes or shapes), as needed along the way.

In chapter 4, we will learn how to represent real-world data in PyTorch. We will start with simple tabular data and move on to something more elaborate. In the process, we will get to know more about tensors.

3.14 Exercises

1 Create a tensor a from list(range(9)). Predict and then check the size, offset, and stride.
  a Create a new tensor using b = a.view(3, 3). What does view do? Check that a and b share the same storage.
  b Create a tensor c = b[1:,1:]. Predict and then check the size, offset, and stride.
2 Pick a mathematical operation like cosine or square root. Can you find a corresponding function in the torch library?
  a Apply the function element-wise to a. Why does it return an error?
  b What operation is required to make the function work?
  c Is there a version of your function that operates in place?

3.15 Summary

- Neural networks transform floating-point representations into other floating-point representations. The starting and ending representations are typically human interpretable, but the intermediate representations are less so.
- These floating-point representations are stored in tensors.
- Tensors are multidimensional arrays; they are the basic data structure in PyTorch.
- PyTorch has a comprehensive standard library for tensor creation, manipulation, and mathematical operations.
- Tensors can be serialized to disk and loaded back.
- All tensor operations in PyTorch can execute on the CPU as well as on the GPU, with no change in the code.
- PyTorch uses a trailing underscore to indicate that a function operates in place on a tensor (for example, Tensor.sqrt_).

4 Real-world data representation using tensors

This chapter covers
- Representing real-world data as PyTorch tensors
- Working with a range of data types
- Loading data from a file
- Converting data to tensors
- Shaping tensors so they can be used as inputs for neural network models

In the previous chapter, we learned that tensors are the building blocks for data in PyTorch. Neural networks take tensors as input and produce tensors as outputs. In fact, all operations within a neural network and during optimization are operations between tensors, and all parameters (for example, weights and biases) in a neural network are tensors. Having a good sense of how to perform operations on tensors and index them effectively is central to using tools like PyTorch successfully. Now that you know the basics of tensors, your dexterity with them will grow as you make your way through the book.

Here’s a question that we can already address: how do we take a piece of data, a video, or a line of text, and represent it with a tensor in a way that is appropriate for training a deep learning model? This is what we’ll learn in this chapter. We’ll cover different types of data with a focus on the types relevant to this book and show how to represent that data as tensors. Then we’ll learn how to load the data from the most common on-disk formats and get a feel for those data types’ structure so we can see how to prepare them for training a neural network.
Often, our raw data won’t be perfectly formed for the problem we’d like to solve, so we’ll have a chance to practice our tensor-manipulation skills with a few more interesting tensor operations.

Each section in this chapter will describe a data type, and each will come with its own dataset. While we’ve structured the chapter so that each data type builds on the previous one, feel free to skip around a bit if you’re so inclined.

We’ll be using a lot of image and volumetric data through the rest of the book, since those are common data types and they reproduce well in book format. We’ll also cover tabular data, time series, and text, as those will also be of interest to a number of our readers. Since a picture is worth a thousand words, we’ll start with image data. We’ll then demonstrate working with a three-dimensional array using medical data that represents patient anatomy as a volume. Next, we’ll work with tabular data about wines, just like what we’d find in a spreadsheet. After that, we’ll move to ordered tabular data, with a time-series dataset from a bike-sharing program. Finally, we’ll dip our toes into text data from Jane Austen. Text data retains its ordered aspect but introduces the problem of representing words as arrays of numbers.

In every section, we will stop where a deep learning researcher would start: right before feeding the data to a model. We encourage you to keep these datasets; they will constitute excellent material for when we start learning how to train neural network models in the next chapter.

4.1 Working with images

The introduction of convolutional neural networks revolutionized computer vision (see http://mng.bz/zjMa), and image-based systems have since acquired a whole new set of capabilities. Problems that required complex pipelines of highly tuned algorithmic building blocks are now solvable at unprecedented levels of performance by training end-to-end networks using paired input-and-desired-output examples.
In order to participate in this revolution, we need to be able to load an image from common image formats and then transform the data into a tensor representation that has the various parts of the image arranged in the way PyTorch expects.

An image is represented as a collection of scalars arranged in a regular grid with a height and a width (in pixels). We might have a single scalar per grid point (the pixel), which would be represented as a grayscale image; or multiple scalars per grid point, which would typically represent different colors, as we saw in the previous chapter, or different features like depth from a depth camera.

Scalars representing values at individual pixels are often encoded using 8-bit integers, as in consumer cameras. In medical, scientific, and industrial applications, it is not unusual to find higher numerical precision, such as 12-bit or 16-bit. This allows a wider range or increased sensitivity in cases where the pixel encodes information about a physical property, like bone density, temperature, or depth.

4.1.1 Adding color channels

We mentioned colors earlier. There are several ways to encode colors into numbers.¹ The most common is RGB, where a color is defined by three numbers representing the intensity of red, green, and blue. We can think of a color channel as a grayscale intensity map of only the color in question, similar to what you'd see if you looked at the scene in question using a pair of pure red sunglasses. Figure 4.1 shows a rainbow, where each of the RGB channels captures a certain portion of the spectrum (the figure is simplified, in that it elides things like the orange and yellow bands being represented as a combination of red and green).
The red band of the rainbow is brightest in the red channel of the image, while the blue channel has both the blue band of the rainbow and the sky as high-intensity. Note also that the white clouds are high-intensity in all three channels.

[Figure 4.1 A rainbow, broken into red, green, and blue channels]

4.1.2 Loading an image file

Images come in several different file formats, but luckily there are plenty of ways to load images in Python. Let's start by loading a PNG image using the imageio module (code/p1ch4/1_image_dog.ipynb).

# In[2]:
import imageio

img_arr = imageio.imread('../data/p1ch4/image-dog/bobby.jpg')
img_arr.shape

# Out[2]:
(720, 1280, 3)

NOTE We'll use imageio throughout the chapter because it handles different data types with a uniform API. For many purposes, using TorchVision is a great default choice to deal with image and video data. We go with imageio here for somewhat lighter exploration.

At this point, img_arr is a NumPy array-like object with three dimensions: two spatial dimensions, width and height; and a third dimension corresponding to the red, green, and blue channels. Any library that outputs a NumPy array will suffice to obtain a PyTorch tensor. The only thing to watch out for is the layout of the dimensions. PyTorch modules dealing with image data require tensors to be laid out as C × H × W: channels, height, and width, respectively.

4.1.3 Changing the layout

We can use the tensor's permute method with the old dimensions for each new dimension to get to an appropriate layout. Given an input tensor H × W × C as obtained previously, we get a proper layout by having channel 2 first and then channels 0 and 1:

# In[3]:
img = torch.from_numpy(img_arr)
out = img.permute(2, 0, 1)

We've seen this previously, but note that this operation does not make a copy of the tensor data.

¹ This is something of an understatement: https://en.wikipedia.org/wiki/Color_model.
Instead, out uses the same underlying storage as img and only plays with the size and stride information at the tensor level. This is convenient because the operation is very cheap; but just as a heads-up: changing a pixel in img will lead to a change in out.

Note also that other deep learning frameworks use different layouts. For instance, originally TensorFlow kept the channel dimension last, resulting in an H × W × C layout (it now supports multiple layouts). This strategy has pros and cons from a low-level performance standpoint, but for our concerns, it doesn't make a difference as long as we reshape our tensors properly.

So far, we have described a single image. Following the same strategy we've used for earlier data types, to create a dataset of multiple images to use as an input for our neural networks, we store the images in a batch along the first dimension to obtain an N × C × H × W tensor.

As a slightly more efficient alternative to using stack to build up the tensor, we can preallocate a tensor of appropriate size and fill it with images loaded from a directory, like so:

# In[4]:
batch_size = 3
batch = torch.zeros(batch_size, 3, 256, 256, dtype=torch.uint8)

This indicates that our batch will consist of three RGB images 256 pixels in height and 256 pixels in width. Notice the type of the tensor: we're expecting each color to be represented as an 8-bit integer, as in most photographic formats from standard consumer cameras.
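Both claims above (permute returns a view sharing storage, and stacking produces an N × C × H × W batch) can be checked with a tiny synthetic example; the shapes and values here are made up for illustration, so no image files are needed:

```python
import torch

# A tiny synthetic H x W x C "image" (all zeros; values are arbitrary).
img = torch.zeros(2, 2, 3)
out = img.permute(2, 0, 1)   # C x H x W view; no data is copied

img[0, 0, 0] = 42.0          # modify a pixel in the original...
print(out[0, 0, 0])          # ...and the permuted view sees the change: tensor(42.)

# Stacking several C x H x W images yields an N x C x H x W batch.
imgs = [torch.zeros(3, 4, 4) for _ in range(5)]
batch = torch.stack(imgs)
print(batch.shape)           # torch.Size([5, 3, 4, 4])
```

Because out is a view, writing through either name affects the other; if an independent copy is needed, `out = img.permute(2, 0, 1).clone()` would detach the storage.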
We can now load all PNG images from an input directory and store them in the tensor:

# In[5]:
import os

data_dir = '../data/p1ch4/image-cats/'
filenames = [name for name in os.listdir(data_dir)
             if os.path.splitext(name)[-1] == '.png']
for i, filename in enumerate(filenames):
    img_arr = imageio.imread(os.path.join(data_dir, filename))
    img_t = torch.from_numpy(img_arr)
    img_t = img_t.permute(2, 0, 1)
    img_t = img_t[:3]    # Here we keep only the first three channels. Sometimes images also have an alpha channel indicating transparency, but our network only wants RGB input.
    batch[i] = img_t

4.1.4 Normalizing the data

We mentioned earlier that neural networks usually work with floating-point tensors as their input. Neural networks exhibit the best training performance when the input data ranges roughly from 0 to 1, or from -1 to 1 (this is an effect of how their building blocks are defined).

So a typical thing we'll want to do is cast a tensor to floating-point and normalize the values of the pixels. Casting to floating-point is easy, but normalization is trickier, as it depends on what range of the input we decide should lie between 0 and 1 (or -1 and 1). One possibility is to just divide the values of the pixels by 255 (the maximum representable number in 8-bit unsigned):

# In[6]:
batch = batch.float()
batch /= 255.0

Another possibility is to compute the mean and standard deviation of the input data and scale it so that the output has zero mean and unit standard deviation across each channel:

# In[7]:
n_channels = batch.shape[1]
for c in range(n_channels):
    mean = torch.mean(batch[:, c])
    std = torch.std(batch[:, c])
    batch[:, c] = (batch[:, c] - mean) / std

NOTE Here, we normalize just a single batch of images because we do not know yet how to operate on an entire dataset.
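The per-channel normalization above can be exercised end to end on a random synthetic batch (the batch shape and seed are made up for illustration), which lets us verify that each channel really ends up with roughly zero mean and unit standard deviation:

```python
import torch

# Synthetic batch of 4 RGB images, 8 x 8 pixels, as 8-bit integers.
torch.manual_seed(0)
batch = torch.randint(0, 256, (4, 3, 8, 8), dtype=torch.uint8)

# Cast to floating-point and scale to [0, 1].
batch = batch.float() / 255.0

# Standardize each channel to zero mean and unit standard deviation.
n_channels = batch.shape[1]
for c in range(n_channels):
    mean = torch.mean(batch[:, c])
    std = torch.std(batch[:, c])
    batch[:, c] = (batch[:, c] - mean) / std

print(batch[:, 0].mean())  # close to 0
print(batch[:, 0].std())   # close to 1
```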
In working with images, it is good practice to compute the mean and standard deviation on all the training data in advance and then subtract and divide by these fixed, precomputed quantities. We saw this in the preprocessing for the image classifier in section 2.1.4.

We can perform several other operations on inputs, such as geometric transformations like rotations, scaling, and cropping. These may help with training or may be required to make an arbitrary input conform to the input requirements of a network, like the size of the image. We will stumble on quite a few of these strategies in section 12.6. For now, just remember that you have image-manipulation options available.

4.2 3D images: Volumetric data

We've learned how to load and represent 2D images, like the ones we take with a camera. In some contexts, such as medical imaging applications involving, say, CT (computed tomography) scans, we typically deal with sequences of images stacked along the head-to-foot axis, each corresponding to a slice across the human body. In CT scans, the intensity represents the density of the different parts of the body (lungs, fat, water, muscle, and bone, in order of increasing density), mapped from dark to bright when the CT scan is displayed on a clinical workstation. The density at each point is computed from the amount of X-rays reaching a detector after crossing through the body, with some complex math to deconvolve the raw sensor data into the full volume.

CTs have only a single intensity channel, similar to a grayscale image. This means that often, the channel dimension is left out in native data formats; so, similar to the last section, the raw data typically has three dimensions. By stacking individual 2D slices into a 3D tensor, we can build volumetric data representing the 3D anatomy of a subject.
Unlike what we saw in figure 4.1, the extra dimension in figure 4.2 represents an offset in physical space, rather than a particular band of the visible spectrum.

[Figure 4.2 Slices of a CT scan, from the top of the head to the jawline: top of skull, brain, eyes, nose, teeth, spine]

Part 2 of this book will be devoted to tackling a medical imaging problem in the real world, so we won't go into the details of medical-imaging data formats. For now, it suffices to say that there's no fundamental difference between a tensor storing volumetric data versus image data. We just have an extra dimension, depth, after the channel dimension, leading to a 5D tensor of shape N × C × D × H × W.

4.2.1 Loading a specialized format

Let's load a sample CT scan using the volread function in the imageio module, which takes a directory as an argument and assembles all Digital Imaging and Communications in Medicine (DICOM) files² in a series in a NumPy 3D array (code/p1ch4/2_volumetric_ct.ipynb).

# In[2]:
import imageio

dir_path = "../data/p1ch4/volumetric-dicom/2-LUNG 3.0 B70f-04083"
vol_arr = imageio.volread(dir_path, 'DICOM')
vol_arr.shape

# Out[2]:
Reading DICOM (examining files): 1/99 files (1.0%) ... 99/99 files (100.0%)
  Found 1 correct series.
Reading DICOM (loading data): 31/99 ... 92/99 ... 99/99 (100.0%)
(99, 512, 512)

As was true in section 4.1.3, the layout is different from what PyTorch expects, due to having no channel information. So we'll have to make room for the channel dimension using unsqueeze:

# In[3]:
vol = torch.from_numpy(vol_arr).float()
vol = torch.unsqueeze(vol, 0)

vol.shape

# Out[3]:
torch.Size([1, 99, 512, 512])

At this point we could assemble a 5D dataset by stacking multiple volumes along the batch direction, just as we did in the previous section. We'll see a lot more CT data in part 2.
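The unsqueeze-then-stack pattern can be sketched without any DICOM files by standing in a small random D × H × W array for the scan (the dimensions here are made up; a real CT volume would be much larger, like the 99 × 512 × 512 one above):

```python
import torch

# Synthetic stand-in for a CT volume: depth x height x width.
vol_arr = torch.rand(10, 16, 16)

# Make room for the channel dimension: C x D x H x W.
vol = torch.unsqueeze(vol_arr.float(), 0)
print(vol.shape)    # torch.Size([1, 10, 16, 16])

# Stacking volumes along a new batch axis gives N x C x D x H x W.
batch = torch.stack([vol, vol])
print(batch.shape)  # torch.Size([2, 1, 10, 16, 16])
```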
² From the Cancer Imaging Archive's CPTAC-LSCC collection: http://mng.bz/K21K.

4.3 Representing tabular data

The simplest form of data we'll encounter on a machine learning job is sitting in a spreadsheet, CSV file, or database. Whatever the medium, it's a table containing one row per sample (or record), where columns contain one piece of information about our sample.

At first we are going to assume there's no meaning to the order in which samples appear in the table: such a table is a collection of independent samples, unlike a time series, for instance, in which samples are related by a time dimension.

Columns may contain numerical values, like temperatures at specific locations; or labels, like a string expressing an attribute of the sample, like "blue." Therefore, tabular data is typically not homogeneous: different columns don't have the same type. We might have a column showing the weight of apples and another encoding their color in a label.

PyTorch tensors, on the other hand, are homogeneous. Information in PyTorch is typically encoded as a number, typically floating-point (though integer types and Boolean are supported as well). This numeric encoding is deliberate, since neural networks are mathematical entities that take real numbers as inputs and produce real numbers as output through successive application of matrix multiplications and nonlinear functions.

4.3.1 Using a real-world dataset

Our first job as deep learning practitioners is to encode heterogeneous, real-world data into a tensor of floating-point numbers, ready for consumption by a neural network. A large number of tabular datasets are freely available on the internet; see, for instance, https://github.com/caesar0301/awesome-public-datasets. Let's start with something fun: wine!
The Wine Quality dataset is a freely available table containing chemical characterizations of samples of vinho verde, a wine from north Portugal, together with a sensory quality score. The dataset for white wines can be downloaded here: http://mng.bz/90Ol. For convenience, we also created a copy of the dataset on the Deep Learning with PyTorch Git repository, under data/p1ch4/tabular-wine.

The file contains a comma-separated collection of values organized in 12 columns preceded by a header line containing the column names. The first 11 columns contain values of chemical variables, and the last column contains the sensory quality score from 0 (very bad) to 10 (excellent). These are the column names in the order they appear in the dataset:

fixed acidity
volatile acidity
citric acid
residual sugar
chlorides
free sulfur dioxide
total sulfur dioxide
density
pH
sulphates
alcohol
quality

A possible machine learning task on this dataset is predicting the quality score from chemical characterization alone. Don't worry, though; machine learning is not going to kill wine tasting anytime soon. We have to get the training data from somewhere! As we can see in figure 4.3, we're hoping to find a relationship between one of the chemical columns in our data and the quality column. Here, we're expecting to see quality increase as sulfur decreases.

4.3.2 Loading a wine data tensor

Before we can get to that, however, we need to be able to examine the data in a more usable way than opening the file in a text editor. Let's see how we can load the data using Python and then turn it into a PyTorch tensor. Python offers several options for quickly loading a CSV file.
Three popular options are

The csv module that ships with Python
NumPy
Pandas

[Figure 4.3 The (we hope) relationship between sulfur and quality in wine]

The third option is the most time- and memory-efficient. However, we'll avoid introducing an additional library in our learning trajectory just because we need to load a file. Since we already introduced NumPy in the previous section, and PyTorch has excellent NumPy interoperability, we'll go with that. Let's load our file and turn the resulting NumPy array into a PyTorch tensor (code/p1ch4/3_tabular_wine.ipynb).

# In[2]:
import csv
import numpy as np

wine_path = "../data/p1ch4/tabular-wine/winequality-white.csv"
wineq_numpy = np.loadtxt(wine_path, dtype=np.float32, delimiter=";",
                         skiprows=1)
wineq_numpy

# Out[2]:
array([[ 7.  ,  0.27,  0.36, ...,  0.45,  8.8 ,  6.  ],
       [ 6.3 ,  0.3 ,  0.34, ...,  0.49,  9.5 ,  6.  ],
       [ 8.1 ,  0.28,  0.4 , ...,  0.44, 10.1 ,  6.  ],
       ...,
       [ 6.5 ,  0.24,  0.19, ...,  0.46,  9.4 ,  6.  ],
       [ 5.5 ,  0.29,  0.3 , ...,  0.38, 12.8 ,  7.  ],
       [ 6.  ,  0.21,  0.38, ...,  0.32, 11.8 ,  6.  ]], dtype=float32)

Here we just prescribe what the type of the 2D array should be (32-bit floating-point), the delimiter used to separate values in each row, and the fact that the first line should not be read since it contains the column names.
Let's check that all the data has been read

# In[3]:
col_list = next(csv.reader(open(wine_path), delimiter=';'))

wineq_numpy.shape, col_list

# Out[3]:
((4898, 12),
 ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar',
  'chlorides', 'free sulfur dioxide', 'total sulfur dioxide',
  'density', 'pH', 'sulphates',
  'alcohol', 'quality'])

and proceed to convert the NumPy array to a PyTorch tensor:

# In[4]:
wineq = torch.from_numpy(wineq_numpy)

wineq.shape, wineq.dtype

# Out[4]:
(torch.Size([4898, 12]), torch.float32)

At this point, we have a floating-point torch.Tensor containing all the columns, including the last, which refers to the quality score.³

³ As a starting point for a more in-depth discussion, refer to https://en.wikipedia.org/wiki/Level_of_measurement.

Continuous, ordinal, and categorical values

We should be aware of three different kinds of numerical values as we attempt to make sense of our data.³ The first kind is continuous values. These are the most intuitive when represented as numbers. They are strictly ordered, and a difference between various values has a strict meaning. Stating that package A is 2 kilograms heavier than package B, or that package B came from 100 miles farther away than A has a fixed meaning, regardless of whether package A is 3 kilograms or 10, or if B came from 200 miles away or 2,000. If you're counting or measuring something with units, it's probably a continuous value. The literature actually divides continuous values further: in the previous examples, it makes sense to say something is twice as heavy or three times farther away, so those values are said to be on a ratio scale. The time of day, on the other hand, does have the notion of difference, but it is not reasonable to claim that 6:00 is twice as late as 3:00; so time of day only offers an interval scale.
Next we have ordinal values. The strict ordering we have with continuous values remains, but the fixed relationship between values no longer applies. A good example of this is ordering a small, medium, or large drink, with small mapped to the value 1, medium 2, and large 3. The large drink is bigger than the medium, in the same way that 3 is bigger than 2, but it doesn't tell us anything about how much bigger. If we were to convert our 1, 2, and 3 to the actual volumes (say, 8, 12, and 24 fluid ounces), then they would switch to being interval values. It's important to remember that we can't "do math" on the values outside of ordering them; trying to average large = 3 and small = 1 does not result in a medium drink!

Finally, categorical values have neither ordering nor numerical meaning to their values. These are often just enumerations of possibilities assigned arbitrary numbers. Assigning water to 1, coffee to 2, soda to 3, and milk to 4 is a good example. There's no real logic to placing water first and milk last; they simply need distinct values to differentiate them. We could assign coffee to 10 and milk to -3, and there would be no significant change (though assigning values in the range 0..N-1 will have advantages for one-hot encoding and the embeddings we'll discuss in section 4.5.4). Because the numerical values bear no meaning, they are said to be on a nominal scale.

4.3.3 Representing scores

We could treat the score as a continuous variable, keep it as a real number, and perform a regression task, or treat it as a label and try to guess the label from the chemical analysis in a classification task.
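The sidebar's distinction can be made concrete with a small sketch that encodes its own examples (drink sizes as ordinal integers, beverages as arbitrary category indices); the dictionaries and the orders list are made up for illustration:

```python
import torch

# Ordinal: drink sizes keep their ordering, but not their distances.
sizes = ['small', 'medium', 'large']
size_to_index = {s: i + 1 for i, s in enumerate(sizes)}  # small=1, medium=2, large=3

# Categorical: beverages get arbitrary distinct integers, conventionally 0..N-1,
# which plays well with one-hot encoding and embeddings later.
beverages = ['water', 'coffee', 'soda', 'milk']
bev_to_index = {b: i for i, b in enumerate(beverages)}

orders = ['coffee', 'milk', 'water']
encoded = torch.tensor([bev_to_index[b] for b in orders])
print(encoded)  # tensor([1, 3, 0])
```

Note that averaging the ordinal codes (for instance, of a small and a large drink) would produce a number, but not a meaningful drink size, which is exactly the sidebar's warning.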
In both approaches, we will typically remove the score from the tensor of input data and keep it in a separate tensor, so that we can use the score as the ground truth without it being input to our model:

# In[5]:
data = wineq[:, :-1]    # Selects all rows and all columns except the last
data, data.shape

# Out[5]:
(tensor([[ 7.00,  0.27,  ...,  0.45,  8.80],
         [ 6.30,  0.30,  ...,  0.49,  9.50],
         ...,
         [ 5.50,  0.29,  ...,  0.38, 12.80],
         [ 6.00,  0.21,  ...,  0.32, 11.80]]), torch.Size([4898, 11]))

# In[6]:
target = wineq[:, -1]    # Selects all rows and the last column
target, target.shape

# Out[6]:
(tensor([6., 6.,  ..., 7., 6.]), torch.Size([4898]))

If we want to transform the target tensor in a tensor of labels, we have two options, depending on the strategy or what we use the categorical data for. One is simply to treat labels as an integer vector of scores:

# In[7]:
target = wineq[:, -1].long()
target

# Out[7]:
tensor([6, 6,  ..., 7, 6])

If targets were string labels, like wine color, assigning an integer number to each string would let us follow the same approach.

4.3.4 One-hot encoding

The other approach is to build a one-hot encoding of the scores: that is, encode each of the 10 scores in a vector of 10 elements, with all elements set to 0 but one, at a different index for each score. This way, a score of 1 could be mapped onto the vector (1,0,0,0,0,0,0,0,0,0), a score of 5 onto (0,0,0,0,1,0,0,0,0,0), and so on. Note that the fact that the score corresponds to the index of the nonzero element is purely incidental: we could shuffle the assignment, and nothing would change from a classification standpoint.

There's a marked difference between the two approaches. Keeping wine quality scores in an integer vector of scores induces an ordering on the scores, which might be totally appropriate in this case, since a score of 1 is lower than a score of 4.
It also induces some sort of distance between scores: that is, the distance between 1 and 3 is the same as the distance between 2 and 4. If this holds for our quantity, then great. If, on the other hand, scores are purely discrete, like grape variety, one-hot encoding will be a much better fit, as there's no implied ordering or distance. One-hot encoding is also appropriate for quantitative scores when fractional values in between integer scores, like 2.4, make no sense for the application, that is, when the score is either this or that.

We can achieve one-hot encoding using the scatter_ method, which fills the tensor with values from a source tensor along the indices provided as arguments:

# In[8]:
target_onehot = torch.zeros(target.shape[0], 10)

target_onehot.scatter_(1, target.unsqueeze(1), 1.0)

# Out[8]:
tensor([[0., 0.,  ..., 0., 0.],
        [0., 0.,  ..., 0., 0.],
        ...,
        [0., 0.,  ..., 0., 0.],
        [0., 0.,  ..., 0., 0.]])

Let's see what scatter_ does. First, we notice that its name ends with an underscore. As you learned in the previous chapter, this is a convention in PyTorch that indicates the method will not return a new tensor, but will instead modify the tensor in place. The arguments for scatter_ are as follows:

The dimension along which the following two arguments are specified
A column tensor indicating the indices of the elements to scatter
A tensor containing the elements to scatter or a single scalar to scatter (1, in this case)

In other words, the previous invocation reads, "For each row, take the index of the target label (which coincides with the score in our case) and use it as the column index to set the value 1.0." The end result is a tensor encoding categorical information.

The second argument of scatter_, the index tensor, is required to have the same number of dimensions as the tensor we scatter into.
Since target_onehot has two dimensions (4,898 × 10), we need to add an extra dummy dimension to target using unsqueeze:

# In[9]:
target_unsqueezed = target.unsqueeze(1)
target_unsqueezed

# Out[9]:
tensor([[6],
        [6],
        ...,
        [7],
        [6]])

The call to unsqueeze adds a singleton dimension, from a 1D tensor of 4,898 elements to a 2D tensor of size (4,898 × 1), without changing its contents: no extra elements are added; we just decided to use an extra index to access the elements. That is, we access the first element of target as target[0] and the first element of its unsqueezed counterpart as target_unsqueezed[0,0].

PyTorch allows us to use class indices directly as targets while training neural networks. However, if we wanted to use the score as a categorical input to the network, we would have to transform it to a one-hot-encoded tensor.

4.3.5 When to categorize

Now we have seen ways to deal with both continuous and categorical data. You may wonder what the deal is with the ordinal case discussed in the earlier sidebar. There is no general recipe for it; most commonly, such data is either treated as categorical (losing the ordering part, and hoping that maybe our model will pick it up during training if we only have a few categories) or continuous (introducing an arbitrary notion of distance). We will do the latter for the weather situation in figure 4.5. We summarize our data mapping in a small flow chart in figure 4.4.
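The whole scatter_-plus-unsqueeze recipe above can be verified on a handful of hypothetical scores (the five values below are made up; the real target has 4,898 entries):

```python
import torch

# Scores for five hypothetical samples, each in 0..9.
target = torch.tensor([6, 6, 1, 7, 6])

target_onehot = torch.zeros(target.shape[0], 10)
# unsqueeze(1) turns the 1D index tensor into a (5 x 1) column so its
# number of dimensions matches target_onehot, as scatter_ requires.
target_onehot.scatter_(1, target.unsqueeze(1), 1.0)

print(target_onehot[0])          # 1.0 at index 6, zeros elsewhere
print(target_onehot.sum(dim=1))  # each row contains exactly one 1
```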
[Figure 4.4 How to treat columns with continuous, ordinal, and categorical data: continuous values (for example, 3.1415) are used directly; ordinal values are treated as continuous if ordering is a priority, otherwise as categorical; categorical values use one-hot encoding or an embedding]

Let's go back to our data tensor, containing the 11 variables associated with the chemical analysis. We can use the functions in the PyTorch Tensor API to manipulate our data in tensor form. Let's first obtain the mean and standard deviations for each column:

# In[10]:
data_mean = torch.mean(data, dim=0)
data_mean

# Out[10]:
tensor([6.85e+00, 2.78e-01, 3.34e-01, 6.39e+00, 4.58e-02, 3.53e+01,
        1.38e+02, 9.94e-01, 3.19e+00, 4.90e-01, 1.05e+01])

# In[11]:
data_var = torch.var(data, dim=0)
data_var

# Out[11]:
tensor([7.12e-01, 1.02e-02, 1.46e-02, 2.57e+01, 4.77e-04, 2.89e+02,
        1.81e+03, 8.95e-06, 2.28e-02, 1.30e-02, 1.51e+00])

In this case, dim=0 indicates that the reduction is performed along dimension 0. At this point, we can normalize the data by subtracting the mean and dividing by the standard deviation, which helps with the learning process (we'll discuss this in more detail in chapter 5, in section 5.4.4):

# In[12]:
data_normalized = (data - data_mean) / torch.sqrt(data_var)
data_normalized

# Out[12]:
tensor([[ 1.72e-01, -8.18e-02,  ..., -3.49e-01, -1.39e+00],
        [-6.57e-01,  2.16e-01,  ...,  1.35e-03, -8.24e-01],
        ...,
        [-1.61e+00,  1.17e-01,  ..., -9.63e-01,  1.86e+00],
        [-1.01e+00, -6.77e-01,  ..., -1.49e+00,  1.04e+00]])

4.3.6 Finding thresholds

Next, let's start to look at the data with an eye to seeing if there is an easy way to tell good and bad wines apart at a glance.
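The per-column standardization relies on broadcasting: data_mean and the square root of data_var each have shape (11,), so the subtraction and division apply column by column across all rows. We can check this on a random stand-in for the wine matrix (the 100 × 11 shape and seed are made up for illustration):

```python
import torch

torch.manual_seed(0)
data = torch.rand(100, 11) * 10  # synthetic stand-in for the 11 chemical columns

data_mean = torch.mean(data, dim=0)       # shape (11,): one mean per column
data_var = torch.var(data, dim=0)         # shape (11,): one variance per column
data_normalized = (data - data_mean) / torch.sqrt(data_var)

# Each column now has (approximately) zero mean and unit variance.
print(data_normalized.mean(dim=0).abs().max())  # close to 0
print(data_normalized.var(dim=0))               # close to 1 in every column
```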
First, we're going to determine which rows in target correspond to a score less than or equal to 3:

# In[13]:
bad_indexes = target <= 3    # PyTorch also provides comparison functions, here torch.le(target, 3), but using operators seems to be a good standard.
bad_indexes.shape, bad_indexes.dtype, bad_indexes.sum()

# Out[13]:
(torch.Size([4898]), torch.bool, tensor(20))

Note that only 20 of the bad_indexes entries are set to True! By using a feature in PyTorch called advanced indexing, we can use a tensor with data type torch.bool to index the data tensor. This will essentially filter data to be only items (or rows) corresponding to True in the indexing tensor. The bad_indexes tensor has the same shape as target, with values of False or True depending on the outcome of the comparison between our threshold and each element in the original target tensor:

# In[14]:
bad_data = data[bad_indexes]
bad_data.shape

# Out[14]:
torch.Size([20, 11])

Note that the new bad_data tensor has 20 rows, the same as the number of rows with True in the bad_indexes tensor. It retains all 11 columns. Now we can start to get information about wines grouped into good, middling, and bad categories.
Let's take the .mean() of each column:

# In[15]:
bad_data = data[target <= 3]
mid_data = data[(target > 3) & (target < 7)]    # For Boolean NumPy arrays and PyTorch tensors, the & operator does a logical "and" operation.
good_data = data[target >= 7]

bad_mean = torch.mean(bad_data, dim=0)
mid_mean = torch.mean(mid_data, dim=0)
good_mean = torch.mean(good_data, dim=0)

for i, args in enumerate(zip(col_list, bad_mean, mid_mean, good_mean)):
    print('{:2} {:20} {:6.2f} {:6.2f} {:6.2f}'.format(i, *args))

# Out[15]:
 0 fixed acidity          7.60   6.89   6.73
 1 volatile acidity       0.33   0.28   0.27
 2 citric acid            0.34   0.34   0.33
 3 residual sugar         6.39   6.71   5.26
 4 chlorides              0.05   0.05   0.04
 5 free sulfur dioxide   53.33  35.42  34.55
 6 total sulfur dioxide 170.60 141.83 125.25
 7 density                0.99   0.99   0.99
 8 pH                     3.19   3.18   3.22
 9 sulphates              0.47   0.49   0.50
10 alcohol               10.34  10.26  11.42

It looks like we're on to something here: at first glance, the bad wines seem to have higher total sulfur dioxide, among other differences. We could use a threshold on total sulfur dioxide as a crude criterion for discriminating good wines from bad ones. Let's get the indexes where the total sulfur dioxide column is below the midpoint we calculated earlier, like so:

# In[16]:
total_sulfur_threshold = 141.83
total_sulfur_data = data[:,6]
predicted_indexes = torch.lt(total_sulfur_data, total_sulfur_threshold)

predicted_indexes.shape, predicted_indexes.dtype, predicted_indexes.sum()

# Out[16]:
(torch.Size([4898]), torch.bool, tensor(2727))

This means our threshold implies that just over half of all the wines are going to be high quality.
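The Boolean-mask grouping used above is worth a tiny self-contained check: masks built from comparisons select rows, and combining comparisons requires & (with parentheses, since & binds tighter than > and <). The eight scores and two feature columns below are made up for illustration:

```python
import torch

# Synthetic scores and a small 8 x 2 feature matrix.
target = torch.tensor([3, 5, 6, 7, 4, 8, 2, 6])
data = torch.arange(16, dtype=torch.float32).view(8, 2)

bad_data = data[target <= 3]                   # scores 3 and 2
mid_data = data[(target > 3) & (target < 7)]   # scores 5, 6, 4, 6
good_data = data[target >= 7]                  # scores 7 and 8

print(bad_data.shape[0], mid_data.shape[0], good_data.shape[0])  # 2 4 2
print(torch.mean(good_data, dim=0))  # per-column mean of the "good" rows
```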
Next, we'll need to get the indexes of the actually good wines:

# In[17]:
actual_indexes = target > 5

actual_indexes.shape, actual_indexes.dtype, actual_indexes.sum()

# Out[17]:
(torch.Size([4898]), torch.bool, tensor(3258))

Since there are about 500 more actually good wines than our threshold predicted, we already have hard evidence that it's not perfect. Now we need to see how well our predictions line up with the actual rankings. We will perform a logical "and" between our prediction indexes and the actual good indexes (remember that each is just an array of zeros and ones) and use that intersection of wines-in-agreement to determine how well we did:

# In[18]:
n_matches = torch.sum(actual_indexes & predicted_indexes).item()
n_predicted = torch.sum(predicted_indexes).item()
n_actual = torch.sum(actual_indexes).item()

n_matches, n_matches / n_predicted, n_matches / n_actual

# Out[18]:
(2018, 0.74000733406674, 0.6193984039287906)

We got around 2,000 wines right! Since we predicted 2,700 wines, this gives us a 74% chance that if we predict a wine to be high quality, it actually is. Unfortunately, there are 3,200 good wines, and we only identified 61% of them. Well, we got what we signed up for; that's barely better than random! Of course, this is all very naive: we know for sure that multiple variables contribute to wine quality, and the relationships between the values of these variables and the outcome (which could be the actual score, rather than a binarized version of it) is likely more complicated than a simple threshold on a single value.

Indeed, a simple neural network would overcome all of these limitations, as would a lot of other basic machine learning methods. We'll have the tools to tackle this problem after the next two chapters, once we have learned how to build our first neural network from scratch. We will also revisit how to better grade our results in chapter 12.
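The two ratios computed above are, in standard terminology, precision (matches over predictions) and recall (matches over actual positives). The same arithmetic can be replayed on ten hypothetical predictions and ground-truth labels (both tensors below are made up for illustration):

```python
import torch

# Hypothetical predictions and ground truth for 10 samples.
predicted_indexes = torch.tensor([1, 1, 0, 1, 0, 1, 0, 0, 1, 0], dtype=torch.bool)
actual_indexes    = torch.tensor([1, 0, 0, 1, 1, 1, 0, 0, 1, 1], dtype=torch.bool)

n_matches = torch.sum(actual_indexes & predicted_indexes).item()
n_predicted = torch.sum(predicted_indexes).item()
n_actual = torch.sum(actual_indexes).item()

precision = n_matches / n_predicted  # fraction of predicted positives that were right
recall = n_matches / n_actual        # fraction of actual positives we found
print(n_matches, precision, recall)  # 4 0.8 0.666...
```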
Let's move on to other data types for now.

4.4 Working with time series

In the previous section, we covered how to represent data organized in a flat table. As we noted, every row in the table was independent from the others; their order did not matter. Or, equivalently, there was no column that encoded information about what rows came earlier and what came later.

Going back to the wine dataset, we could have had a "year" column that allowed us to look at how wine quality evolved year after year. Unfortunately, we don't have such data at hand, but we're working hard on manually collecting the data samples, bottle by bottle. (Stuff for our second edition.) In the meantime, we'll switch to another interesting dataset: data from a Washington, D.C., bike-sharing system reporting the hourly count of rental bikes in 2011–2012 in the Capital Bikeshare system, along with weather and seasonal information (available here: http://mng.bz/jgOx). Our goal will be to take a flat, 2D dataset and transform it into a 3D one, as shown in figure 4.5.

[Figure 4.5 Transforming a 1D, multichannel dataset into a 2D, multichannel dataset by separating the date and hour of each sample into separate axes; each day holds the hourly values of weather, temperature, humidity, wind speed, bike count, and so on]
We want to change the row-per-hour organization so that we have one axis that increases at a rate of one day per index increment, and another axis that represents the hour of the day (independent of the date). The third axis will be our different columns of data (weather, temperature, and so on).

Let’s load the data (code/p1ch4/4_time_series_bikes.ipynb).

Listing 4.4 code/p1ch4/4_time_series_bikes.ipynb

# In[2]:
bikes_numpy = np.loadtxt(
    "../data/p1ch4/bike-sharing-dataset/hour-fixed.csv",
    dtype=np.float32,
    delimiter=",",
    skiprows=1,
    # Converts date strings to numbers corresponding
    # to the day of the month in column 1
    converters={1: lambda x: float(x[8:10])})
bikes = torch.from_numpy(bikes_numpy)
bikes

# Out[2]:
tensor([[1.0000e+00, 1.0000e+00,  ..., 1.3000e+01, 1.6000e+01],
        [2.0000e+00, 1.0000e+00,  ..., 3.2000e+01, 4.0000e+01],
        ...,
        [1.7378e+04, 3.1000e+01,  ..., 4.8000e+01, 6.1000e+01],
        [1.7379e+04, 3.1000e+01,  ..., 3.7000e+01, 4.9000e+01]])

For every hour, the dataset reports the following variables:

- Index of record: instant
- Day of month: day
- Season: season (1: spring, 2: summer, 3: fall, 4: winter)
- Year: yr (0: 2011, 1: 2012)
- Month: mnth (1 to 12)
- Hour: hr (0 to 23)
- Holiday status: holiday
- Day of the week: weekday
- Working day status: workingday
- Weather situation: weathersit (1: clear, 2: mist, 3: light rain/snow, 4: heavy rain/snow)
- Temperature in °C: temp
- Perceived temperature in °C: atemp
- Humidity: hum
- Wind speed: windspeed
- Number of casual users: casual
- Number of registered users: registered
- Count of rental bikes: cnt

In a time series dataset such as this one, rows represent successive time-points: there is a dimension along which they are ordered. Sure, we could treat each row as independent and try to predict the number of circulating bikes based on, say, a particular time of day regardless of what happened earlier.
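As an aside, the converters argument passed to np.loadtxt in listing 4.4 can be checked in isolation: it slices the day out of a date string, assuming the dataset’s YYYY-MM-DD format:

```python
# The lambda handed to np.loadtxt's converters argument, applied by hand.
# It assumes dates formatted as YYYY-MM-DD, as in the bike-sharing CSV.
to_day_of_month = lambda field: float(field[8:10])

day = to_day_of_month("2011-01-07")   # characters 8-10 are the day
```

Here the slice [8:10] picks out "07", which becomes 7.0.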
However, the existence of an ordering gives us the opportunity to exploit causal relationships across time. For instance, it allows us to predict bike rides at one time based on the fact that it was raining at an earlier time.

For the time being, we’re going to focus on learning how to turn our bike-sharing dataset into something that our neural network will be able to ingest in fixed-size chunks. This neural network model will need to see a number of sequences of values for each different quantity, such as ride count, time of day, temperature, and weather conditions: N parallel sequences of size C. C stands for channel, in neural network parlance, and is the same as column for 1D data like we have here. The N dimension represents the time axis, here one entry per hour.

4.4.2 Shaping the data by time period

We might want to break up the two-year dataset into wider observation periods, like days. This way we’ll have N (for number of samples) collections of C sequences of length L. In other words, our time series dataset would be a tensor of dimension 3 and shape N × C × L. The C would remain our 17 channels, while L would be 24: 1 per hour of the day. There’s no particular reason why we must use chunks of 24 hours, though the general daily rhythm is likely to give us patterns we can exploit for predictions. We could also use 7 × 24 = 168 hour blocks to chunk by week instead, if we desired. All of this depends, naturally, on our dataset having the right size—the number of rows must be a multiple of 24 or 168. Also, for this to make sense, we cannot have gaps in the time series.

Let’s go back to our bike-sharing dataset. The first column is the index (the global ordering of the data), the second is the date, and the sixth is the time of day. We have everything we need to create a dataset of daily sequences of ride counts and other exogenous variables.
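If the row count were not an exact multiple of our chunk length, one simple option (a sketch of our own; the book’s fixed dataset does not need it) is to drop the trailing partial chunk before reshaping:

```python
import torch

# Hypothetical: 17,519 hourly rows, one hour short of 730 full days.
hours = torch.zeros(17519, 17)
chunk = 24                                    # or 168 for weekly chunks
n_keep = hours.shape[0] - hours.shape[0] % chunk
daily = hours[:n_keep].view(-1, chunk, hours.shape[1])
```

Trimming to a multiple of the chunk length keeps view happy; it cannot invent or discard elements on its own.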
Our dataset is already sorted, but if it were not, we could use torch.sort on it to order it appropriately.

NOTE The version of the file we’re using, hour-fixed.csv, has had some processing done to include rows missing from the original dataset. We presume that the missing hours had zero bikes active (they were typically in the early morning hours).

All we have to do to obtain our daily hours dataset is view the same tensor in batches of 24 hours. Let’s take a look at the shape and strides of our bikes tensor:

# In[3]:
bikes.shape, bikes.stride()

# Out[3]:
(torch.Size([17520, 17]), (17, 1))

That’s 17,520 hours, 17 columns. Now let’s reshape the data to have 3 axes—day, hour, and then our 17 columns:

# In[4]:
daily_bikes = bikes.view(-1, 24, bikes.shape[1])
daily_bikes.shape, daily_bikes.stride()

# Out[4]:
(torch.Size([730, 24, 17]), (408, 17, 1))

What happened here? First, bikes.shape[1] is 17, the number of columns in the bikes tensor. But the real crux of this code is the call to view, which is really important: it changes the way the tensor looks at the same data as contained in storage.

As you learned in the previous chapter, calling view on a tensor returns a new tensor that changes the number of dimensions and the striding information, without changing the storage. This means we can rearrange our tensor at basically zero cost, because no data will be copied. Our call to view requires us to provide the new shape for the returned tensor. We use -1 as a placeholder for “however many indexes are left, given the other dimensions and the original number of elements.”

Remember also from the previous chapter that storage is a contiguous, linear container for numbers (floating-point, in this case). Our bikes tensor will have each row stored one after the other in its corresponding storage. This is confirmed by the output from the call to bikes.stride() earlier.
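A quick way to convince ourselves that view really copies nothing is to compare data pointers; this check is our own, run on a stand-in tensor of the same shape:

```python
import torch

bikes = torch.zeros(17520, 17)                # stand-in for the real tensor
daily = bikes.view(-1, 24, bikes.shape[1])

# Same underlying storage: view changed only shape and stride metadata.
same_storage = bikes.data_ptr() == daily.data_ptr()
```

Because both tensors point at the same memory, writing through either one would be visible through the other.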
For daily_bikes, the stride is telling us that advancing by 1 along the hour dimension (the second dimension) requires us to advance by 17 places in the storage (or one set of columns); whereas advancing along the day dimension (the first dimension) requires us to advance by a number of elements equal to the length of a row in the storage times 24 (here, 408, which is 17 × 24).

We see that the rightmost dimension is the number of columns in the original dataset. Then, in the middle dimension, we have time, split into chunks of 24 sequential hours. In other words, we now have N sequences of L hours in a day, for C channels. To get to our desired N × C × L ordering, we need to transpose the tensor:

# In[5]:
daily_bikes = daily_bikes.transpose(1, 2)
daily_bikes.shape, daily_bikes.stride()

# Out[5]:
(torch.Size([730, 17, 24]), (408, 1, 17))

Now let’s apply some of the techniques we learned earlier to this dataset.

4.4.3 Ready for training

The “weather situation” variable is ordinal. It has four levels: 1 for good weather, and 4 for, er, really bad. We could treat this variable as categorical, with levels interpreted as labels, or as a continuous variable. If we decided to go with categorical, we would turn the variable into a one-hot-encoded vector and concatenate the columns with the dataset.4

In order to make it easier to render our data, we’re going to limit ourselves to the first day for a moment. We initialize a zero-filled matrix with a number of rows equal to the number of hours in the day and number of columns equal to the number of weather levels:

# In[6]:
first_day = bikes[:24].long()
weather_onehot = torch.zeros(first_day.shape[0], 4)
first_day[:,9]

# Out[6]:
tensor([1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 2, 2, 2, 2])

Then we scatter ones into our matrix according to the corresponding level at each row.
Remember the use of unsqueeze to add a singleton dimension as we did in the previous sections:

# In[7]:
weather_onehot.scatter_(
    dim=1,
    # Decreases the values by 1 because weather situation
    # ranges from 1 to 4, while indices are 0-based
    index=first_day[:,9].unsqueeze(1).long() - 1,
    value=1.0)

# Out[7]:
tensor([[1., 0., 0., 0.],
        [1., 0., 0., 0.],
        ...,
        [0., 1., 0., 0.],
        [0., 1., 0., 0.]])

Our day started with weather “1” and ended with “2,” so that seems right.

Last, we concatenate our matrix to our original dataset using the cat function. Let’s look at the first of our results:

# In[8]:
torch.cat((bikes[:24], weather_onehot), 1)[:1]

# Out[8]:
tensor([[ 1.0000,  1.0000,  1.0000,  0.0000,  1.0000,  0.0000,  0.0000,
          6.0000,  0.0000,  1.0000,  0.2400,  0.2879,  0.8100,  0.0000,
          3.0000, 13.0000, 16.0000,  1.0000,  0.0000,  0.0000,  0.0000]])

4 This could also be a case where it is useful to go beyond the main path. Speculatively, we could also try to reflect “like categorical, but with order” more directly by generalizing one-hot encodings to mapping the ith of our four categories here to a vector that has ones in the positions 0…i and zeros beyond that. Or—similar to the embeddings we discussed in section 4.5.4—we could take partial sums of embeddings, in which case it might make sense to make those positive. As with many things we encounter in practical work, this could be a place where trying what works for others and then experimenting in a systematic fashion is a good idea.

Here we prescribed our original bikes dataset and our one-hot-encoded “weather situation” matrix to be concatenated along the column dimension (that is, 1). In other words, the columns of the two datasets are stacked together; or, equivalently, the new one-hot-encoded columns are appended to the original dataset.
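For reference, PyTorch also ships torch.nn.functional.one_hot, which builds the same kind of matrix without an explicit scatter_; a sketch of our own on toy weather values (the - 1 again shifts the 1-to-4 levels to 0-based indices):

```python
import torch
import torch.nn.functional as F

levels = torch.tensor([1, 1, 2, 3, 2])              # toy weathersit values
onehot = F.one_hot(levels - 1, num_classes=4).float()
# Row 0 encodes level 1 -> [1, 0, 0, 0];
# row 3 encodes level 3 -> [0, 0, 1, 0]
```

scatter_ is more general (it can write arbitrary values at arbitrary indices), but for plain one-hot encoding the dedicated helper is harder to get wrong.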
For cat to succeed, it is required that the tensors have the same size along the other dimensions—the row dimension, in this case. Note that our new last four columns are 1, 0, 0, 0, exactly as we would expect with a weather value of 1.

We could have done the same with the reshaped daily_bikes tensor. Remember that it is shaped (B, C, L), where L = 24. We first create the zero tensor, with the same B and L, but with the number of additional columns as C:

# In[9]:
daily_weather_onehot = torch.zeros(daily_bikes.shape[0], 4,
                                   daily_bikes.shape[2])
daily_weather_onehot.shape

# Out[9]:
torch.Size([730, 4, 24])

Then we scatter the one-hot encoding into the tensor in the C dimension. Since this operation is performed in place, only the content of the tensor will change:

# In[10]:
daily_weather_onehot.scatter_(
    1, daily_bikes[:,9,:].long().unsqueeze(1) - 1, 1.0)
daily_weather_onehot.shape

# Out[10]:
torch.Size([730, 4, 24])

And we concatenate along the C dimension:

# In[11]:
daily_bikes = torch.cat((daily_bikes, daily_weather_onehot), dim=1)

We mentioned earlier that this is not the only way to treat our “weather situation” variable. Indeed, its labels have an ordinal relationship, so we could pretend they are special values of a continuous variable. We could just transform the variable so that it runs from 0.0 to 1.0:

# In[12]:
daily_bikes[:, 9, :] = (daily_bikes[:, 9, :] - 1.0) / 3.0

As we mentioned in the previous section, rescaling variables to the [0.0, 1.0] interval or the [-1.0, 1.0] interval is something we’ll want to do for all quantitative variables, like temperature (column 10 in our dataset). We’ll see why later; for now, let’s just say that this is beneficial to the training process.

There are multiple possibilities for rescaling variables.
We can either map their range to [0.0, 1.0]:

# In[13]:
temp = daily_bikes[:, 10, :]
temp_min = torch.min(temp)
temp_max = torch.max(temp)
daily_bikes[:, 10, :] = ((daily_bikes[:, 10, :] - temp_min)
                         / (temp_max - temp_min))

or subtract the mean and divide by the standard deviation:

# In[14]:
temp = daily_bikes[:, 10, :]
daily_bikes[:, 10, :] = ((daily_bikes[:, 10, :] - torch.mean(temp))
                         / torch.std(temp))

In the latter case, our variable will have zero mean and unitary standard deviation. If our variable were drawn from a Gaussian distribution, 68% of the samples would sit in the [-1.0, 1.0] interval.

Great: we’ve built another nice dataset, and we’ve seen how to deal with time series data. For this tour d’horizon, it’s important only that we got an idea of how a time series is laid out and how we can wrangle the data into a form that a network will digest.

Other kinds of data look like a time series, in that there is a strict ordering. Top two on the list? Text and audio. We’ll take a look at text next, and the “Conclusion” section has links to additional examples for audio.

4.5 Representing text

Deep learning has taken the field of natural language processing (NLP) by storm, particularly using models that repeatedly consume a combination of new input and previous model output. These models are called recurrent neural networks (RNNs), and they have been applied with great success to text categorization, text generation, and automated translation systems. More recently, a class of networks called transformers with a more flexible way to incorporate past information has made a big splash. Previous NLP workloads were characterized by sophisticated multistage pipelines that included rules encoding the grammar of a language.5 Now, state-of-the-art work trains networks end to end on large corpora starting from scratch, letting those rules emerge from the data.
For the last several years, the most-used automated translation systems available as services on the internet have been based on deep learning.

Our goal in this section is to turn text into something a neural network can process: a tensor of numbers, just like our previous cases. If we can do that and later choose the right architecture for our text-processing job, we’ll be in the position of doing NLP with PyTorch. We see right away how powerful this all is: we can achieve state-of-the-art performance on a number of tasks in different domains with the same PyTorch tools; we just need to cast our problem in the right form. The first part of this job is reshaping the data.

5 Nadkarni et al., “Natural language processing: an introduction,” JAMIA, http://mng.bz/8pJP. See also https://en.wikipedia.org/wiki/Natural-language_processing.

4.5.1 Converting text to numbers

There are two particularly intuitive levels at which networks operate on text: at the character level, by processing one character at a time, and at the word level, where individual words are the finest-grained entities to be seen by the network. The technique with which we encode text information into tensor form is the same whether we operate at the character level or the word level. And it’s not magic, either. We stumbled upon it earlier: one-hot encoding.

Let’s start with a character-level example. First, let’s get some text to process. An amazing resource here is Project Gutenberg (www.gutenberg.org), a volunteer effort to digitize and archive cultural works and make them available for free in open formats, including plain text files. If we’re aiming at larger-scale corpora, the Wikipedia corpus stands out: it’s the complete collection of Wikipedia articles, containing 1.9 billion words and more than 4.4 million articles.
Several other corpora can be found at the English Corpora website (www.english-corpora.org). Let’s load Jane Austen’s Pride and Prejudice from the Project Gutenberg website: www.gutenberg.org/files/1342/1342-0.txt. We’ll just save the file and read it in (code/p1ch4/5_text_jane_austen.ipynb).

Listing 4.5 code/p1ch4/5_text_jane_austen.ipynb

# In[2]:
with open('../data/p1ch4/jane-austen/1342-0.txt', encoding='utf8') as f:
    text = f.read()

4.5.2 One-hot-encoding characters

There’s one more detail we need to take care of before we proceed: encoding. This is a pretty vast subject, and we will just touch on it. Every written character is represented by a code: a sequence of bits of appropriate length so that each character can be uniquely identified. The simplest such encoding is ASCII (American Standard Code for Information Interchange), which dates back to the 1960s. ASCII encodes 128 characters using 128 integers. For instance, the letter a corresponds to binary 1100001 or decimal 97, the letter b to binary 1100010 or decimal 98, and so on. The encoding fits in 8 bits, which was a big bonus in 1965.

NOTE 128 characters are clearly not enough to account for all the glyphs, accents, ligatures, and so on that are needed to properly represent written text in languages other than English. To this end, a number of encodings have been developed that use a larger number of bits as code for a wider range of characters. That wider range of characters was standardized as Unicode, which maps all known characters to numbers, with the representation
It is instrumental to limit the one-hot encoding to a character set that is useful fo r the text being analyze d. In our case, since we loaded text in English, it is safe to use ASCII and deal with a small encoding. We could also make all of the ch aracters lowercase, to redu ce the number of different characters in our encoding. Similarly, we could screen out punctuation, numbers, or other characters that aren’t relevant to our expected kind s of text. This may or may not make a practical difference to a neural netw ork, depending on the task at hand. At this point, we need to parse through the characters in the text and provide a one-hot encoding for each of them. Each ch aracter will be represented by a vector of length equal to the number of different ch aracters in the encoding. This vector will contain all zeros except a one at the index correspondin g to the location of the char- acter in the encoding. We first split our text into a list of li nes and pick an arbitrary line to focus on: # In[3]: lines = text.split('\n') line = lines[200]line # Out[3]: '“Impossible, Mr. Bennet, impossible, when I am not acquainted with him' Let’s create a tensor that can hold the total number of one-hot-encoded characters for the whole line: # In[4]: letter_t = torch.zeros(len(line), 128)letter_t.shape # Out[4]: torch.Size([70, 128]) Note that letter_t holds a one-hot-encoded characte r per row. Now we just have to set a one on each row in the correct position so that each row represents the correct character. The index where the one has to be set corresponds to the index of the char- acter in the encoding: # In[5]: for i, letter in enumerate(line.lower().strip()): letter_index = ord(letter) if ord(letter) < 128 else 0letter_t[i][letter_index] = 1128 hardcoded due to the limits of ASCII The text uses directional double quotes, which are not valid ASCII, so we screen them out here." 
4.5.3 One-hot encoding whole words

We have one-hot encoded our sentence into a representation that a neural network could digest. Word-level encoding can be done the same way by establishing a vocabulary and one-hot encoding sentences—sequences of words—along the rows of our tensor. Since a vocabulary has many words, this will produce very wide encoded vectors, which may not be practical. We will see in the next section that there is a more efficient way to represent text at the word level, using embeddings. For now, let’s stick with one-hot encodings and see what happens.

We’ll define clean_words, which takes text and returns it in lowercase and stripped of punctuation. When we call it on our “Impossible, Mr. Bennet” line, we get the following:

# In[6]:
def clean_words(input_str):
    punctuation = '.,;:"!?”“_-'
    word_list = input_str.lower().replace('\n',' ').split()
    word_list = [word.strip(punctuation) for word in word_list]
    return word_list

words_in_line = clean_words(line)
line, words_in_line

# Out[6]:
('“Impossible, Mr. Bennet, impossible, when I am not acquainted with him',
 ['impossible', 'mr', 'bennet', 'impossible', 'when', 'i',
  'am', 'not', 'acquainted', 'with', 'him'])

Next, let’s build a mapping of words to indexes in our encoding:

# In[7]:
word_list = sorted(set(clean_words(text)))
word2index_dict = {word: i for (i, word) in enumerate(word_list)}

len(word2index_dict), word2index_dict['impossible']

# Out[7]:
(7261, 3394)

Note that word2index_dict is now a dictionary with words as keys and an integer as a value. We will use it to efficiently find the index of a word as we one-hot encode it. Let’s now focus on our sentence: we break it up into words and one-hot encode it—that is, we populate a tensor with one one-hot-encoded vector per word.
We create an empty vector and assign the one-hot-encoded values of the word in the sentence:

# In[8]:
word_t = torch.zeros(len(words_in_line), len(word2index_dict))
for i, word in enumerate(words_in_line):
    word_index = word2index_dict[word]
    word_t[i][word_index] = 1
    print('{:2} {:4} {}'.format(i, word_index, word))

print(word_t.shape)

# Out[8]:
 0 3394 impossible
 1 4305 mr
 2  813 bennet
 3 3394 impossible
 4 7078 when
 5 3315 i
 6  415 am
 7 4436 not
 8  239 acquainted
 9 7148 with
10 3215 him
torch.Size([11, 7261])

At this point, the word_t tensor represents one sentence of length 11 in an encoding space of size 7,261, the number of words in our dictionary. Figure 4.6 compares the gist of our two options for splitting text (and using the embeddings we’ll look at in the next section).

The choice between character-level and word-level encoding leaves us to make a trade-off. In many languages, there are significantly fewer characters than words: representing characters has us representing just a few classes, while representing words requires us to represent a very large number of classes and, in any practical application, deal with words that are not in the dictionary. On the other hand, words convey much more meaning than individual characters, so a representation of words is considerably more informative by itself. Given the stark contrast between these two options, it is perhaps unsurprising that intermediate ways have been sought, found, and applied with great success: for example, the byte pair encoding method6 starts with a dictionary of individual letters but then iteratively adds the most frequently observed pairs to the dictionary until it reaches a prescribed dictionary size. Our example sentence might then be split into tokens like this:7

▁Im|pos|s|ible|,|▁Mr|.|▁B|en|net|,|▁impossible|,|▁when|▁I|▁am|▁not|
▁acquainted|▁with|▁him

6 Most commonly implemented by the subword-nmt and SentencePiece libraries.
The conceptual drawback is that the representation of a sequence of characters is no longer unique.
7 This is from a SentencePiece tokenizer trained on a machine translation dataset.

For most things, our mapping is just splitting by words. But the rarer parts—the capitalized Impossible and the name Bennet—are composed of subunits.

4.5.4 Text embeddings

One-hot encoding is a very useful technique for representing categorical data in tensors. However, as we have anticipated, one-hot encoding starts to break down when the number of items to encode is effectively unbound, as with words in a corpus. In just one book, we had over 7,000 items!

We certainly could do some work to deduplicate words, condense alternate spellings, collapse past and future tenses into a single token, and that kind of thing. Still, a general-purpose English-language encoding would be huge. Even worse, every time we encountered a new word, we would have to add a new column to the vector, which would mean adding a new set of weights to the model to account for that new vocabulary entry—which would be painful from a training perspective.

How can we compress our encoding down to a more manageable size and put a cap on the size growth? Well, instead of vectors of many zeros and a single one, we can

Figure 4.6 Three ways to encode a word
A vect or of, say, 100 floating-point numbers can indeed represent a large number of words. Th e trick is to find an effective way to map individual words into this 100-dimensional space in a way that facilitates downstream learning. This is called an embedding . In principle, we could simply iterate over our vocabulary and generate a set of 100 random floating-point numbers for each wo rd. This would work, in that we could cram a very large vocabulary into just 100 numbers, but it would forgo any concept ofdistance between words based on meanin g or context. A model using this word embedding would have to deal with very litt le structure in its in put vectors. An ideal solution would be to generate the embedding in such a way that wo rds used in similar contexts mapped to nearby regions of the embedding. Well, if we were to design a solution to this problem by hand, we might decide to build our embedding space by choosing to map basic nouns and adjectives along the axes. We can generate a 2D sp ace where axes map to nouns— fruit (0.0-0.33), flower (0.33-0.66), and dog (0.66-1.0)—and adjectives— red (0.0-0.2), orange (0.2-0.4), yellow (0.4-0.6), white (0.6-0.8), and brown (0.8-1.0). Our goal is to take actual fruit, flowers, and dogs and lay them out in the embedding. As we start embedding words, we can map apple to a number in the fruit and red quadrant. Likewise, we can easily map tangerine , lemon , lychee , and kiwi (to round out our list of colorful fruits). Then we can start on flowers, and assign rose, poppy , daffodil , lily, and … Hmm. Not many brown flowers out there. Well, sunflower can get flower , yel- low, and brown , and then daisy can get flower , white , and yellow . Perhaps we should update kiwi to map close to fruit, brown , and green . 8 For dogs and color, we can embed redbone near red; uh, fox perhaps for orange ; golden retriever for yellow , poodle for white , and … most kinds of dogs are brown . Now our embeddings look like figure 4. 7. 
While doing this manually isn’t really feasible for a large corpus, note that although we had an embedding size of 2, we described 15 different words besides the base 8 and could probably cram in quite a few more if we took the time to be creative about it.

As you’ve probably already guessed, this kind of work can be automated. By processing a large corpus of organic text, embeddings similar to the one we just discussed can be generated. The main differences are that there are 100 to 1,000 elements in the embedding vector and that axes do not map directly to concepts: rather, conceptually similar words map in neighboring regions of an embedding space whose axes are arbitrary floating-point dimensions.

While the exact algorithms9 used are a bit out of scope for what we’re wanting to focus on here, we’d just like to mention that embeddings are often generated using neural networks, trying to predict a word from nearby words (the context) in a sentence. In this case, we could start from one-hot-encoded words and use a (usually rather shallow) neural network to generate the embedding. Once the embedding was available, we could use it for downstream tasks.

8 Actually, with our 1D view of color, this is not possible, as sunflower’s yellow and brown will average to white—but you get the idea, and it does work better in higher dimensions.
9 One example is word2vec: https://code.google.com/archive/p/word2vec.

One interesting aspect of the resulting embeddings is that similar words end up not only clustered together, but also having consistent spatial relationships with other words.
For example, if we were to take the embedding vector for apple and begin to add and subtract the vectors for other words, we could begin to perform analogies like apple - red - sweet + yellow + sour and end up with a vector very similar to the one for lemon.

More contemporary embedding models—with BERT and GPT-2 making headlines even in mainstream media—are much more elaborate and are context sensitive: that is, the mapping of a word in the vocabulary to a vector is not fixed but depends on the surrounding sentence. Yet they are often used just like the simpler classic embeddings we’ve touched on here.

4.5.5 Text embeddings as a blueprint

Embeddings are an essential tool for when a large number of entries in the vocabulary have to be represented by numeric vectors. But we won’t be using text and text embeddings in this book, so you might wonder why we introduce them here. We believe that how text is represented and processed can also be seen as an example for dealing with categorical data in general. Embeddings are useful wherever one-hot encoding becomes cumbersome. Indeed, in the form described previously, they are an efficient way of representing one-hot encoding immediately followed by multiplication with the matrix containing the embedding vectors as rows.

Figure 4.7 Our manual word embeddings

In non-text applications, we usually do not have the ability to construct the embeddings beforehand, but we will start with the random numbers we eschewed earlier and consider improving them as part of our learning problem.
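As a sketch of that idea (ours, not one of the book’s listings): torch.nn.Embedding starts from exactly such random vectors, and its lookup table is an ordinary learnable parameter:

```python
import torch
import torch.nn as nn

# A learnable lookup table: 4 categories (e.g. our four weather levels),
# each mapped to a 3-dimensional vector initialized at random.
emb = nn.Embedding(num_embeddings=4, embedding_dim=3)

levels = torch.tensor([0, 2, 2, 1])    # 0-based category indices
vectors = emb(levels)                  # shape (4, 3), differentiable
```

Because emb.weight requires gradients, an optimizer would refine these vectors together with the rest of the model, which is precisely what “improving them as part of our learning problem” means.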
This is a standard technique—so much so that embeddings are a prominent alternative to one-hot encodings for any categorical data. On the flip side, even when we deal with text, improving the prelearned embeddings while solving the problem at hand has become a common practice.10

When we are interested in co-occurrences of observations, the word embeddings we saw earlier can serve as a blueprint, too. For example, recommender systems—customers who liked our book also bought …—use the items the customer already interacted with as the context for predicting what else will spark interest. Similarly, processing text is perhaps the most common, well-explored task dealing with sequences; so, for example, when working on tasks with time series, we might look for inspiration in what is done in natural language processing.

4.6 Conclusion

We’ve covered a lot of ground in this chapter. We learned to load the most common types of data and shape them for consumption by a neural network. Of course, there are more data formats in the wild than we could hope to describe in a single volume. Some, like medical histories, are too complex to cover here. Others, like audio and video, were deemed less crucial for the path of this book. If you’re interested, however, we provide short examples of audio and video tensor creation in bonus Jupyter Notebooks provided on the book’s website (www.manning.com/books/deep-learning-with-pytorch) and in our code repository (https://github.com/deep-learning-with-pytorch/dlwpt-code/tree/master/p1ch4).

Now that we’re familiar with tensors and how to store data in them, we can move on to the next step towards the goal of the book: teaching you to train deep neural networks! The next chapter covers the mechanics of learning for simple linear models.

4.7 Exercises

1 Take several pictures of red, blue, and green items with your phone or other digital camera (or download some from the internet, if a camera isn’t available).
  a Load each image, and convert it to a tensor.
  b For each image tensor, use the .mean() method to get a sense of how bright the image is.
  c Take the mean of each channel of your images. Can you identify the red, green, and blue items from only the channel averages?

10 This goes by the name fine-tuning.

2 Select a relatively large file containing Python source code.
  a Build an index of all the words in the source file (feel free to make your tokenization as simple or as complex as you like; we suggest starting with replacing r"[^a-zA-Z0-9_]+" with spaces).
  b Compare your index with the one we made for Pride and Prejudice. Which is larger?
  c Create the one-hot encoding for the source code file.
  d What information is lost with this encoding? How does that information compare to what's lost in the Pride and Prejudice encoding?

4.8 Summary
- Neural networks require data to be represented as multidimensional numerical tensors, often 32-bit floating-point.
- In general, PyTorch expects data to be laid out along specific dimensions according to the model architecture (for example, convolutional versus recurrent). We can reshape data effectively with the PyTorch tensor API.
- Thanks to how the PyTorch libraries interact with the Python standard library and surrounding ecosystem, loading the most common types of data and converting them to PyTorch tensors is convenient.
- Images can have one or many channels. The most common are the red-green-blue channels of typical digital photos.
- Many images have a per-channel bit depth of 8, though 12 and 16 bits per channel are not uncommon. These bit depths can all be stored in a 32-bit floating-point number without loss of precision.
- Single-channel data formats sometimes omit an explicit channel dimension.
- Volumetric data is similar to 2D image data, with the exception of adding a third dimension (depth).
- Converting spreadsheets to tensors can be very straightforward. Categorical- and ordinal-valued columns should be handled differently from interval-valued columns.
- Text or categorical data can be encoded to a one-hot representation through the use of dictionaries. Very often, embeddings give good, efficient representations.

The mechanics of learning

This chapter covers
- Understanding how algorithms can learn from data
- Reframing learning as parameter estimation, using differentiation and gradient descent
- Walking through a simple learning algorithm
- How PyTorch supports learning with autograd

With the blooming of machine learning that has occurred over the last decade, the notion of machines that learn from experience has become a mainstream theme in both technical and journalistic circles. Now, how is it exactly that a machine learns? What are the mechanics of this process, or, in other words, what is the algorithm behind it? From the point of view of an observer, a learning algorithm is presented with input data that is paired with desired outputs. Once learning has occurred, that algorithm will be capable of producing correct outputs when it is fed new data that is similar enough to the input data it was trained on. With deep learning, this process works even when the input data and the desired output are far from each other: when they come from different domains, like an image and a sentence describing it, as we saw in chapter 2.

5.1 A timeless lesson in modeling
Building models that allow us to explain input/output relationships dates back centuries at least. When Johannes Kepler, a German mathematical astronomer (1571–1630), figured out his three laws of planetary motion in the early 1600s, he based them on data collected by his mentor Tycho Brahe during naked-eye observations (yep, seen with the naked eye and written on a piece of paper).
Not having Newton's law of gravitation at his disposal (actually, Newton used Kepler's work to figure things out), Kepler extrapolated the simplest possible geometric model that could fit the data. And, by the way, it took him six years of staring at data that didn't make sense to him, together with incremental realizations, to finally formulate these laws.1 We can see this process in figure 5.1.

1 As recounted by physicist Michael Fowler: http://mng.bz/K2Ej.

[Figure 5.1 Johannes Kepler considers multiple candidate models that might fit the data at hand, settling on an ellipse.]

Kepler's first law reads: "The orbit of every planet is an ellipse with the Sun at one of the two foci." He didn't know what caused orbits to be ellipses, but given a set of observations for a planet (or a moon of a large planet, like Jupiter), he could estimate the shape (the eccentricity) and size (the semi-latus rectum) of the ellipse. With those two parameters computed from the data, he could tell where the planet might be during its journey in the sky. Once he figured out the second law ("A line joining a planet and the Sun sweeps out equal areas during equal intervals of time") he could also tell when a planet would be at a particular point in space, given observations in time.2

So, how did Kepler estimate the eccentricity and size of the ellipse without computers, pocket calculators, or even calculus, none of which had been invented yet? We can learn how from Kepler's own recollection, in his book New Astronomy, or from how J. V.
Field put it in his series of articles, "The origins of proof" (http://mng.bz/9007):

  Essentially, Kepler had to try different shapes, using a certain number of observations to find the curve, then use the curve to find some more positions, for times when he had observations available, and then check whether these calculated positions agreed with the observed ones.
  —J. V. Field

So let's sum things up. Over six years, Kepler
1 Got lots of good data from his friend Brahe (not without some struggle)
2 Tried to visualize the heck out of it, because he felt there was something fishy going on
3 Chose the simplest possible model that had a chance to fit the data (an ellipse)
4 Split the data so that he could work on part of it and keep an independent set for validation
5 Started with a tentative eccentricity and size for the ellipse and iterated until the model fit the observations
6 Validated his model on the independent observations
7 Looked back in disbelief

There's a data science handbook for you, all the way from 1609. The history of science is literally constructed on these seven steps. And we have learned over the centuries that deviating from them is a recipe for disaster.3

This is exactly what we will set out to do in order to learn something from data. In fact, in this book there is virtually no difference between saying that we'll fit the data or that we'll make an algorithm learn from data. The process always involves a function with a number of unknown parameters whose values are estimated from data: in short, a model.

We can argue that learning from data presumes the underlying model is not engineered to solve a specific problem (as was the ellipse in Kepler's work) and is instead capable of approximating a much wider family of functions. A neural network would have predicted Tycho Brahe's trajectories really well without requiring Kepler's flash of insight to try fitting the data to an ellipse.
However, Sir Isaac Newton would have had a much harder time deriving his laws of gravitation from a generic model.

2 Understanding the details of Kepler's laws is not needed to understand this chapter, but you can find more information at https://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion.
3 Unless you're a theoretical physicist ;).

In this book, we're interested in models that are not engineered for solving a specific narrow task, but that can be automatically adapted to specialize themselves for any one of many similar tasks using input and output pairs; in other words, general models trained on data relevant to the specific task at hand. In particular, PyTorch is designed to make it easy to create models for which the derivatives of the fitting error, with respect to the parameters, can be expressed analytically. No worries if this last sentence didn't make any sense at all; coming next, we have a full section that hopefully clears it up for you.

This chapter is about how to automate generic function-fitting. After all, this is what we do with deep learning (deep neural networks being the generic functions we're talking about), and PyTorch makes this process as simple and transparent as possible. In order to make sure we get the key concepts right, we'll start with a model that is a lot simpler than a deep neural network. This will allow us to understand the mechanics of learning algorithms from first principles in this chapter, so we can move to more complicated models in chapter 6.

5.2 Learning is just parameter estimation
In this section, we'll learn how we can take data, choose a model, and estimate the parameters of the model so that it will give good predictions on new data. To do so, we'll leave the intricacies of planetary motion and divert our attention to the second-hardest problem in physics: calibrating instruments.
Figure 5.2 shows the high-level overview of what we'll implement by the end of the chapter. Given input data and the corresponding desired outputs (ground truth), as well as initial values for the weights, the model is fed input data (forward pass), and a measure of the error is evaluated by comparing the resulting outputs to the ground truth. In order to optimize the parameters of the model (its weights), the change in the error following a unit change in weights (that is, the gradient of the error with respect to the parameters) is computed using the chain rule for the derivative of a composite function (backward pass). The value of the weights is then updated in the direction that leads to a decrease in the error. The procedure is repeated until the error, evaluated on unseen data, falls below an acceptable level. If what we just said sounds obscure, we've got a whole chapter to clear things up. By the time we're done, all the pieces will fall into place, and this paragraph will make perfect sense.

We're now going to take a problem with a noisy dataset, build a model, and implement a learning algorithm for it. When we start, we'll be doing everything by hand, but by the end of the chapter we'll be letting PyTorch do all the heavy lifting for us. When we finish the chapter, we will have covered many of the essential concepts that underlie training deep neural networks, even if our motivating example is very simple and our model isn't actually a neural network (yet!).

5.2.1 A hot problem
We just got back from a trip to some obscure location, and we brought back a fancy, wall-mounted analog thermometer. It looks great, and it's a perfect fit for our living room. Its only flaw is that it doesn't show units.
Not to worry, we've got a plan: we'll build a dataset of readings and corresponding temperature values in our favorite units, choose a model, adjust its weights iteratively until a measure of the error is low enough, and finally be able to interpret the new readings in units we understand.4

Let's try following the same process Kepler used. Along the way, we'll use a tool he never had available: PyTorch!

4 This task (fitting model outputs to continuous values in terms of the types discussed in chapter 4) is called a regression problem. In chapter 7 and part 2, we will be concerned with classification problems.

[Figure 5.2 Our mental model of the learning process]

5.2.2 Gathering some data
We'll start by making a note of temperature data in good old Celsius5 and measurements from our new thermometer, and figure things out. After a couple of weeks, here's the data (code/p1ch5/1_parameter_estimation.ipynb):

# In[2]:
t_c = [0.5, 14.0, 15.0, 28.0, 11.0, 8.0, 3.0, -4.0, 6.0, 13.0, 21.0]
t_u = [35.7, 55.9, 58.2, 81.9, 56.3, 48.9, 33.9, 21.8, 48.4, 60.4, 68.4]
t_c = torch.tensor(t_c)
t_u = torch.tensor(t_u)

Here, the t_c values are temperatures in Celsius, and the t_u values are our unknown units. We can expect noise in both measurements, coming from the devices themselves and from our approximate readings. For convenience, we've already put the data into tensors; we'll use it in a minute.

5 The author of this chapter is Italian, so please forgive him for using sensible units.

5.2.3 Visualizing the data
A quick plot of our data in figure 5.3 tells us that it's noisy, but we think there's a pattern here.
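Besides eyeballing the plot, we can quantify the "pattern" numerically. This sketch (not from the book) computes the Pearson correlation between the two series; a value close to 1 suggests an approximately linear relationship:

```python
import torch

t_c = torch.tensor([0.5, 14.0, 15.0, 28.0, 11.0, 8.0,
                    3.0, -4.0, 6.0, 13.0, 21.0])
t_u = torch.tensor([35.7, 55.9, 58.2, 81.9, 56.3, 48.9,
                    33.9, 21.8, 48.4, 60.4, 68.4])

def pearson(x, y):
    # Center both series, then take the cosine of the angle between them.
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / (xc.norm() * yc.norm())

r = pearson(t_u, t_c)
print(f"correlation: {r.item():.4f}")  # very close to 1 for this data
```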
NOTE Spoiler alert: we know a linear model is correct because the problem and data have been fabricated, but please bear with us. It's a useful motivating example to build our understanding of what PyTorch is doing under the hood.

[Figure 5.3 Our unknown data just might follow a linear model. (Scatter plot of temperature in °Celsius versus measurement.)]

5.2.4 Choosing a linear model as a first try
In the absence of further knowledge, we assume the simplest possible model for converting between the two sets of measurements, just like Kepler might have done. The two may be linearly related; that is, multiplying t_u by a factor and adding a constant, we may get the temperature in Celsius (up to an error that we omit):

t_c = w * t_u + b

Is this a reasonable assumption? Probably; we'll see how well the final model performs. We chose to name w and b after weight and bias, two very common terms for linear scaling and the additive constant; we'll bump into those all the time.6

OK, now we need to estimate w and b, the parameters in our model, based on the data we have. We must do it so that temperatures we obtain from running the unknown temperatures t_u through the model are close to temperatures we actually measured in Celsius. If that sounds like fitting a line through a set of measurements, well, yes, because that's exactly what we're doing. We'll go through this simple example using PyTorch and realize that training a neural network will essentially involve changing the model for a slightly more elaborate one, with a few (or a metric ton) more parameters.

Let's flesh it out again: we have a model with some unknown parameters, and we need to estimate those parameters so that the error between predicted outputs and measured values is as low as possible. We notice that we still need to exactly define a measure of the error.
Such a measure, which we refer to as the loss function, should be high if the error is high and should ideally be as low as possible for a perfect match. Our optimization process should therefore aim at finding w and b so that the loss function is at a minimum.

5.3 Less loss is what we want
A loss function (or cost function) is a function that computes a single numerical value that the learning process will attempt to minimize. The calculation of loss typically involves taking the difference between the desired outputs for some training samples and the outputs actually produced by the model when fed those samples. In our case, that would be the difference between the predicted temperatures t_p output by our model and the actual measurements: t_p - t_c.

We need to make sure the loss function makes the loss positive both when t_p is greater than and when it is less than the true t_c, since the goal is for t_p to match t_c. We have a few choices, the most straightforward being |t_p - t_c| and (t_p - t_c)^2. Based on the mathematical expression we choose, we can emphasize or discount certain errors. Conceptually, a loss function is a way of prioritizing which errors to fix from our training samples, so that our parameter updates result in adjustments to the outputs for the highly weighted samples instead of changes to some other samples' output that had a smaller loss.

Both of the example loss functions have a clear minimum at zero and grow monotonically as the predicted value moves further from the true value in either direction. Because the steepness of the growth also monotonically increases away from the minimum, both of them are said to be convex. Since our model is linear, the loss as a function of w and b is also convex.7 Cases where the loss is a convex function of the model parameters are usually great to deal with because we can find a minimum very efficiently

6 The weight tells us how much a given input influences the output.
The bias is what the output would be if all inputs were zero.
7 Contrast that with the function shown in figure 5.6, which is not convex.

through specialized algorithms. However, we will instead use less powerful but more generally applicable methods in this chapter. We do so because for the deep neural networks we are ultimately interested in, the loss is not a convex function of the inputs.

For our two loss functions |t_p - t_c| and (t_p - t_c)^2, as shown in figure 5.4, we notice that the square of the differences behaves more nicely around the minimum: the derivative of the error-squared loss with respect to t_p is zero when t_p equals t_c. The absolute value, on the other hand, has an undefined derivative right where we'd like to converge. This is less of an issue in practice than it looks like, but we'll stick to the square of differences for the time being.

It's worth noting that the square difference also penalizes wildly wrong results more than the absolute difference does. Often, having more slightly wrong results is better than having a few wildly wrong ones, and the squared difference helps prioritize those as desired.

5.3.1 From problem back to PyTorch
We've figured out the model and the loss function; we've already got a good part of the high-level picture in figure 5.2 figured out. Now we need to set the learning process in motion and feed it actual data. Also, enough with math notation; let's switch to PyTorch. After all, we came here for the fun. We've already created our data tensors, so now let's write out the model as a Python function:

# In[3]:
def model(t_u, w, b):
    return w * t_u + b

We're expecting t_u, w, and b to be the input tensor, weight parameter, and bias parameter, respectively.
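As a quick numeric companion to the comparison of |t_p - t_c| and (t_p - t_c)^2 above, this sketch (not from the book; the toy targets and predictions are invented) shows how the squared loss penalizes one wildly wrong prediction more than several slightly wrong ones with the same total absolute error:

```python
import torch

t_c = torch.tensor([0.0, 10.0, 20.0])

# Two hypothetical prediction sets with the same total absolute error (3):
small_misses = torch.tensor([1.0, 11.0, 21.0])   # off by 1 everywhere
one_wild_miss = torch.tensor([0.0, 10.0, 23.0])  # off by 3 in one place

for t_p in (small_misses, one_wild_miss):
    l1 = (t_p - t_c).abs().mean()        # mean absolute difference
    l2 = ((t_p - t_c) ** 2).mean()       # mean squared difference
    print(f"abs loss: {l1.item():.2f}  squared loss: {l2.item():.2f}")
```

Both prediction sets have a mean absolute loss of 1.0, but the squared loss jumps to 3.0 for the single wild miss, which is exactly the prioritization the text describes.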
[Figure 5.4 Absolute difference versus difference squared]

In our model, the parameters will be PyTorch scalars (aka zero-dimensional tensors), and the product operation will use broadcasting to yield the returned tensors. Anyway, time to define our loss:

# In[4]:
def loss_fn(t_p, t_c):
    squared_diffs = (t_p - t_c)**2
    return squared_diffs.mean()

Note that we are building a tensor of differences, taking their square element-wise, and finally producing a scalar loss function by averaging all of the elements in the resulting tensor. It is a mean square loss.

We can now initialize the parameters, invoke the model,

# In[5]:
w = torch.ones(())
b = torch.zeros(())

t_p = model(t_u, w, b)
t_p

# Out[5]:
tensor([35.7000, 55.9000, 58.2000, 81.9000, 56.3000, 48.9000, 33.9000,
        21.8000, 48.4000, 60.4000, 68.4000])

and check the value of the loss:

# In[6]:
loss = loss_fn(t_p, t_c)
loss

# Out[6]:
tensor(1763.8846)

We implemented the model and the loss in this section. We've finally reached the meat of the example: how do we estimate w and b such that the loss reaches a minimum? We'll first work things out by hand and then learn how to use PyTorch's superpowers to solve the same problem in a more general, off-the-shelf way.

Broadcasting
We mentioned broadcasting in chapter 3, and we promised to look at it more carefully when we need it. In our example, we have two scalars (zero-dimensional tensors) w and b, and we multiply them with and add them to vectors (one-dimensional tensors) like t_u. Usually (and in early versions of PyTorch, too) we can only use element-wise binary operations such as addition, subtraction, multiplication, and division for arguments of the same shape. The entries in matching positions in each of the tensors will be used to calculate the corresponding entry in the result tensor.
Broadcasting, which is popular in NumPy and adapted by PyTorch, relaxes this assumption for most binary operations. It uses the following rules to match tensor elements:
- For each index dimension, counted from the back, if one of the operands is size 1 in that dimension, PyTorch will use the single entry along this dimension with each of the entries in the other tensor along this dimension.
- If both sizes are greater than 1, they must be the same, and natural matching is used.
- If one of the tensors has more index dimensions than the other, the entirety of the other tensor will be used for each entry along these dimensions.

This sounds complicated (and it can be error-prone if we don't pay close attention, which is why we have named the tensor dimensions as shown in section 3.4), but usually, we can either write down the tensor dimensions to see what happens or picture what happens by using space dimensions to show the broadcasting, as in the following figure. Of course, this would all be theory if we didn't have some code examples:

# In[7]:
x = torch.ones(())
y = torch.ones(3, 1)
z = torch.ones(1, 3)
a = torch.ones(2, 1, 1)
print(f"shapes: x: {x.shape}, y: {y.shape}")
print(f"        z: {z.shape}, a: {a.shape}")
print("x * y:", (x * y).shape)
print("y * z:", (y * z).shape)
print("y * z * a:", (y * z * a).shape)

# Out[7]:
shapes: x: torch.Size([]), y: torch.Size([3, 1])
        z: torch.Size([1, 3]), a: torch.Size([2, 1, 1])
x * y: torch.Size([3, 1])
y * z: torch.Size([3, 3])
y * z * a: torch.Size([2, 3, 3])

5.4 Down along the gradient
We'll optimize the loss function with respect to the parameters using the gradient descent algorithm. In this section, we'll build our intuition for how gradient descent works from first principles, which will help us a lot in the future.
As we mentioned, there are ways to solve our example problem more efficiently, but those approaches aren't applicable to most deep learning tasks. Gradient descent is actually a very simple idea, and it scales up surprisingly well to large neural network models with millions of parameters.

Let's start with a mental image, which we conveniently sketched out in figure 5.5. Suppose we are in front of a machine sporting two knobs, labeled w and b. We are allowed to see the value of the loss on a screen, and we are told to minimize that value. Not knowing the effect of the knobs on the loss, we start fiddling with them and decide for each knob which direction makes the loss decrease. We decide to rotate both knobs in their direction of decreasing loss. Suppose we're far from the optimal value: we'd likely see the loss decrease quickly and then slow down as it gets closer to the minimum. We notice that at some point, the loss climbs back up again, so we invert the direction of rotation for one or both knobs. We also learn that when the loss changes slowly, it's a good idea to adjust the knobs more finely, to avoid reaching the point where the loss goes back up. After a while, eventually, we converge to a minimum.

5.4.1 Decreasing loss
Gradient descent is not that different from the scenario we just described. The idea is to compute the rate of change of the loss with respect to each parameter, and modify each parameter in the direction of decreasing loss.
Just like when we were fiddling with the knobs, we can estimate the rate of change by adding a small number to w and b and seeing how much the loss changes in that neighborhood:

# In[8]:
delta = 0.1

loss_rate_of_change_w = \
    (loss_fn(model(t_u, w + delta, b), t_c) -
     loss_fn(model(t_u, w - delta, b), t_c)) / (2.0 * delta)

[Figure 5.5 A cartoon depiction of the optimization process, where a person with knobs for w and b searches for the direction to turn the knobs that makes the loss decrease]

This is saying that in the neighborhood of the current values of w and b, a unit increase in w leads to some change in the loss. If the change is negative, then we need to increase w to minimize the loss, whereas if the change is positive, we need to decrease w. By how much? Applying a change to w that is proportional to the rate of change of the loss is a good idea, especially when the loss has several parameters: we apply a change to those that exert a significant change on the loss. It is also wise to change the parameters slowly in general, because the rate of change could be dramatically different at a distance from the neighborhood of the current w value. Therefore, we typically should scale the rate of change by a small factor. This scaling factor has many names; the one we use in machine learning is learning_rate:

# In[9]:
learning_rate = 1e-2

w = w - learning_rate * loss_rate_of_change_w

We can do the same with b:

# In[10]:
loss_rate_of_change_b = \
    (loss_fn(model(t_u, w, b + delta), t_c) -
     loss_fn(model(t_u, w, b - delta), t_c)) / (2.0 * delta)

b = b - learning_rate * loss_rate_of_change_b

This represents the basic parameter-update step for gradient descent. By reiterating these evaluations (and provided we choose a small enough learning rate), we will converge to an optimal value of the parameters for which the loss computed on the given data is minimal.
We'll show the complete iterative process soon, but the way we just computed our rates of change is rather crude and needs an upgrade before we move on. Let's see why and how.

5.4.2 Getting analytical
Computing the rate of change by using repeated evaluations of the model and loss in order to probe the behavior of the loss function in the neighborhood of w and b doesn't scale well to models with many parameters. Also, it is not always clear how large the neighborhood should be. We chose delta equal to 0.1 in the previous section, but it all depends on the shape of the loss as a function of w and b. If the loss changes too quickly compared to delta, we won't have a very good idea of in which direction the loss is decreasing the most.

What if we could make the neighborhood infinitesimally small, as in figure 5.6? That's exactly what happens when we analytically take the derivative of the loss with respect to a parameter. In a model with two or more parameters like the one we're dealing with, we compute the individual derivatives of the loss with respect to each parameter and put them in a vector of derivatives: the gradient.

COMPUTING THE DERIVATIVES
In order to compute the derivative of the loss with respect to a parameter, we can apply the chain rule and compute the derivative of the loss with respect to its input (which is the output of the model), times the derivative of the model with respect to the parameter:

d loss_fn / d w = (d loss_fn / d t_p) * (d t_p / d w)

Recall that our model is a linear function, and our loss is a sum of squares. Let's figure out the expressions for the derivatives.
[Figure 5.6 Differences in the estimated directions for descent when evaluating them at discrete locations versus analytically]

Recalling the expression for the loss:

# In[4]:
def loss_fn(t_p, t_c):
    squared_diffs = (t_p - t_c)**2
    return squared_diffs.mean()

Remembering that d x^2 / d x = 2 x, we get

# In[11]:
def dloss_fn(t_p, t_c):
    dsq_diffs = 2 * (t_p - t_c) / t_p.size(0)    # the division is from the derivative of mean
    return dsq_diffs

APPLYING THE DERIVATIVES TO THE MODEL
For the model, recalling that our model is

# In[3]:
def model(t_u, w, b):
    return w * t_u + b

we get these derivatives:

# In[12]:
def dmodel_dw(t_u, w, b):
    return t_u

# In[13]:
def dmodel_db(t_u, w, b):
    return 1.0

DEFINING THE GRADIENT FUNCTION
Putting all of this together, the function returning the gradient of the loss with respect to w and b is

# In[14]:
def grad_fn(t_u, t_c, t_p, w, b):
    dloss_dtp = dloss_fn(t_p, t_c)
    dloss_dw = dloss_dtp * dmodel_dw(t_u, w, b)
    dloss_db = dloss_dtp * dmodel_db(t_u, w, b)
    return torch.stack([dloss_dw.sum(), dloss_db.sum()])

The same idea expressed in mathematical notation is shown in figure 5.7. Again, we're averaging (that is, summing and dividing by a constant) over all the data points to get a single scalar quantity for each partial derivative of the loss.

5.4.3 Iterating to fit the model
We now have everything in place to optimize our parameters. Starting from a tentative value for a parameter, we can iteratively apply updates to it for a fixed number of iterations, or until w and b stop changing. There are several stopping criteria; for now, we'll stick to a fixed number of iterations.

THE TRAINING LOOP
Since we're at it, let's introduce another piece of terminology.
We call a training iteration during which we update the parameters for all of our training samples an epoch.

[Figure 5.7 The derivative of the loss function with respect to the weights. The summation is the reverse of the broadcasting we implicitly do when applying the parameters to an entire vector of inputs in the model.]

The complete training loop looks like this (code/p1ch5/1_parameter_estimation.ipynb):

# In[15]:
def training_loop(n_epochs, learning_rate, params, t_u, t_c):
    for epoch in range(1, n_epochs + 1):
        w, b = params
        t_p = model(t_u, w, b)               # forward pass
        loss = loss_fn(t_p, t_c)
        grad = grad_fn(t_u, t_c, t_p, w, b)  # backward pass
        params = params - learning_rate * grad
        print('Epoch %d, Loss %f' % (epoch, float(loss)))  # this logging line can be very verbose
    return params

The actual logging logic used for the output in this text is more complicated (see cell 15 in the same notebook: http://mng.bz/pBB8), but the differences are unimportant for understanding the core concepts in this chapter.

Now, let's invoke our training loop:

# In[17]:
training_loop(
    n_epochs = 100,
    learning_rate = 1e-2,
    params = torch.tensor([1.0, 0.0]),
    t_u = t_u,
    t_c = t_c)

# Out[17]:
Epoch 1, Loss 1763.884644
    Params: tensor([-44.1730,  -0.8260])
    Grad:   tensor([4517.2969,   82.6000])
Epoch 2, Loss 5802485.500000
    Params: tensor([2568.4014,   45.1637])
    Grad:   tensor([-261257.4219,   -4598.9712])
Epoch 3, Loss 19408035840.000000
    Params: tensor([-148527.7344,   -2616.3933])
    Grad:   tensor([15109614.0000,   266155.7188])
...
Epoch 10, Loss 90901154706620645225508955521810432.000000
    Params: tensor([3.2144e+17, 5.6621e+15])
    Grad:   tensor([-3.2700e+19, -5.7600e+17])
Epoch 11, Loss inf
    Params: tensor([-1.8590e+19, -3.2746e+17])
    Grad:   tensor([1.8912e+21, 3.3313e+19])

tensor([-1.8590e+19, -3.2746e+17])

OVERTRAINING
Wait, what happened?
Our training process literally blew up, leading to losses becoming inf. This is a clear sign that params is receiving updates that are too large, and their values start oscillating back and forth as each update overshoots and the next overcorrects even more. The optimization process is unstable: it diverges instead of converging to a minimum. We want to see smaller and smaller updates to params, not larger, as shown in figure 5.8.

[Figure 5.8 Top: Diverging optimization on a convex function (parabola-like) due to large steps. Bottom: Converging optimization with small steps.]

How can we limit the magnitude of learning_rate * grad? Well, that looks easy. We could simply choose a smaller learning_rate, and indeed, the learning rate is one of the things we typically change when training does not go as well as we would like.8 We usually change learning rates by orders of magnitude, so we might try with 1e-3 or 1e-4, which would decrease the magnitude of the updates by orders of magnitude. Let's go with 1e-4 and see how it works out:

8 The fancy name for this is hyperparameter tuning. Hyperparameter refers to the fact that we are training the model's parameters, but the hyperparameters control how this training goes. Typically these are more or less set manually. In particular, they cannot be part of the same optimization.

# In[18]:
training_loop(
    n_epochs = 100,
    learning_rate = 1e-4,
    params = torch.tensor([1.0, 0.0]),
    t_u = t_u,
    t_c = t_c)

# Out[18]:
Epoch 1, Loss 1763.884644
    Params: tensor([ 0.5483, -0.0083])
    Grad:   tensor([4517.2969,   82.6000])
Epoch 2, Loss 323.090546
    Params: tensor([ 0.3623, -0.0118])
    Grad:   tensor([1859.5493,   35.7843])
Epoch 3, Loss 78.929634
    Params: tensor([ 0.2858, -0.0135])
    Grad:   tensor([765.4667,  16.5122])
...
Epoch 10, Loss 29.105242
    Params: tensor([ 0.2324, -0.0166])
    Grad:   tensor([1.4803, 3.0544])
Epoch 11, Loss 29.104168
    Params: tensor([ 0.2323, -0.0169])
    Grad:   tensor([0.5781, 3.0384])
...
Epoch 99, Loss 29.023582
    Params: tensor([ 0.2327, -0.0435])
    Grad:   tensor([-0.0533, 3.0226])
Epoch 100, Loss 29.022669
    Params: tensor([ 0.2327, -0.0438])
    Grad:   tensor([-0.0532, 3.0226])

tensor([ 0.2327, -0.0438])

Nice—the behavior is now stable. But there's another problem: the updates to parameters are very small, so the loss decreases very slowly and eventually stalls. We could obviate this issue by making learning_rate adaptive: that is, changing it according to the magnitude of updates. There are optimization schemes that do that, and we'll see one toward the end of this chapter, in section 5.5.2.

However, there's another potential troublemaker in the update term: the gradient itself. Let's go back and look at grad at epoch 1 during optimization.

5.4.4  Normalizing inputs

We can see that the first-epoch gradient for the weight is about 50 times larger than the gradient for the bias. This means the weight and bias live in differently scaled spaces. If this is the case, a learning rate that's large enough to meaningfully update one will be so large as to be unstable for the other; and a rate that's appropriate for the other won't be large enough to meaningfully change the first. That means we're not going to be able to update our parameters unless we change something about our formulation of the problem. We could have individual learning rates for each parameter, but for models with many parameters, this would be too much to bother with; it's babysitting of the kind we don't like.

There's a simpler way to keep things in check: changing the inputs so that the gradients aren't quite so different. We can make sure the range of the input doesn't get too far from the range of -1.0 to 1.0, roughly speaking.
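The scale mismatch just described is easy to check numerically. The sketch below is not from the book's notebooks; it uses made-up readings in the spirit of t_u and t_c and compares the relative size of the two gradient components before and after rescaling the inputs:

```python
import torch

# Made-up readings in the spirit of t_u (unknown units) and t_c (Celsius)
t_u = torch.tensor([35.7, 55.9, 58.2, 81.9, 56.3])
t_c = torch.tensor([0.5, 11.0, 15.0, 28.0, 11.0])

def grads(t_u, t_c, w, b):
    t_p = w * t_u + b
    # Hand-derived derivatives of mean((t_p - t_c)^2) wrt w and b
    dw = (2 * (t_p - t_c) * t_u).mean()
    db = (2 * (t_p - t_c)).mean()
    return dw, db

dw, db = grads(t_u, t_c, 1.0, 0.0)
ratio_raw = abs(dw / db)           # weight grad dwarfs bias grad

dw, db = grads(0.1 * t_u, t_c, 1.0, 0.0)
ratio_norm = abs(dw / db)          # after rescaling, far more balanced

print(float(ratio_raw), float(ratio_norm))
```

With the raw inputs the weight gradient is tens of times larger than the bias gradient; after multiplying the inputs by 0.1 the two components end up within an order of magnitude of each other, so a single learning rate can serve both.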
In our case, we can achieve something close enough to that by simply multiplying t_u by 0.1:

# In[19]:
t_un = 0.1 * t_u

Here, we denote the normalized version of t_u by appending an n to the variable name. At this point, we can run the training loop on our normalized input:

# In[20]:
training_loop(
    n_epochs = 100,
    learning_rate = 1e-2,
    params = torch.tensor([1.0, 0.0]),
    t_u = t_un,     # We've updated t_u to our new, rescaled t_un.
    t_c = t_c)

# Out[20]:
Epoch 1, Loss 80.364342
    Params: tensor([1.7761, 0.1064])
    Grad:   tensor([-77.6140, -10.6400])
Epoch 2, Loss 37.574917
    Params: tensor([2.0848, 0.1303])
    Grad:   tensor([-30.8623, -2.3864])
Epoch 3, Loss 30.871077
    Params: tensor([2.2094, 0.1217])
    Grad:   tensor([-12.4631, 0.8587])
...
Epoch 10, Loss 29.030487
    Params: tensor([ 2.3232, -0.0710])
    Grad:   tensor([-0.5355, 2.9295])
Epoch 11, Loss 28.941875
    Params: tensor([ 2.3284, -0.1003])
    Grad:   tensor([-0.5240, 2.9264])
...
Epoch 99, Loss 22.214186
    Params: tensor([ 2.7508, -2.4910])
    Grad:   tensor([-0.4453, 2.5208])
Epoch 100, Loss 22.148710
    Params: tensor([ 2.7553, -2.5162])
    Grad:   tensor([-0.4446, 2.5165])

tensor([ 2.7553, -2.5162])

Even though we set our learning rate back to 1e-2, parameters don't blow up during iterative updates. Let's take a look at the gradients: they're of similar magnitude, so using a single learning_rate for both parameters works just fine. We could probably do a better job of normalization than a simple rescaling by a factor of 10, but since doing so is good enough for our needs, we're going to stick with that for now.

NOTE  The normalization here absolutely helps get the network trained, but you could make an argument that it's not strictly needed to optimize the parameters for this particular problem. That's absolutely true! This problem is small enough that there are numerous ways to beat the parameters into submission.
However, for larger, more sophisticated problems, normalization is an easy and effective (if not crucial!) tool to use to improve model convergence.

Let's run the loop for enough iterations to see the changes in params get small. We'll change n_epochs to 5,000:

# In[21]:
params = training_loop(
    n_epochs = 5000,
    learning_rate = 1e-2,
    params = torch.tensor([1.0, 0.0]),
    t_u = t_un,
    t_c = t_c,
    print_params = False)

params

# Out[21]:
Epoch 1, Loss 80.364342
Epoch 2, Loss 37.574917
Epoch 3, Loss 30.871077
...
Epoch 10, Loss 29.030487
Epoch 11, Loss 28.941875
...
Epoch 99, Loss 22.214186
Epoch 100, Loss 22.148710
...
Epoch 4000, Loss 2.927680
Epoch 5000, Loss 2.927648

tensor([ 5.3671, -17.3012])

Good: our loss decreases while we change parameters along the direction of gradient descent. It doesn't go exactly to zero; this could mean there aren't enough iterations to converge to zero, or that the data points don't sit exactly on a line. As we anticipated, our measurements were not perfectly accurate, or there was noise involved in the reading.

But look: the values for w and b look an awful lot like the numbers we need to use to convert Celsius to Fahrenheit (after accounting for our earlier normalization when we multiplied our inputs by 0.1). The exact values would be w=5.5556 and b=-17.7778. Our fancy thermometer was showing temperatures in Fahrenheit the whole time. No big discovery, except that our gradient descent optimization process works!

5.4.5  Visualizing (again)

Let's revisit something we did right at the start: plotting our data. Seriously, this is the first thing anyone doing data science should do.
Always plot the heck out of the data:

# In[22]:
%matplotlib inline
from matplotlib import pyplot as plt

t_p = model(t_un, *params)     # Remember that we're training on the normalized unknown units. We also use argument unpacking.

fig = plt.figure(dpi=600)
plt.xlabel("Temperature (°Fahrenheit)")
plt.ylabel("Temperature (°Celsius)")
plt.plot(t_u.numpy(), t_p.detach().numpy())     # But we're plotting the raw unknown values.
plt.plot(t_u.numpy(), t_c.numpy(), 'o')

We are using a Python trick called argument unpacking here: *params means to pass the elements of params as individual arguments. In Python, this is usually done with lists or tuples, but we can also use argument unpacking with PyTorch tensors, which are split along the leading dimension. So here, model(t_un, *params) is equivalent to model(t_un, params[0], params[1]).

This code produces figure 5.9. Our linear model is a good model for the data, it seems. It also seems our measurements are somewhat erratic. We should either call our optometrist for a new pair of glasses or think about returning our fancy thermometer.

Figure 5.9  The plot of our linear-fit model (solid line) versus our input data (circles)

5.5  PyTorch's autograd: Backpropagating all things

In our little adventure, we just saw a simple example of backpropagation: we computed the gradient of a composition of functions—the model and the loss—with respect to their innermost parameters (w and b) by propagating derivatives backward using the chain rule. The basic requirement here is that all functions we're dealing with can be differentiated analytically. If this is the case, we can compute the gradient—what we earlier called "the rate of change of the loss"—with respect to the parameters in one sweep.
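For our two-parameter linear model, that one-sweep computation can be written out by hand and sanity-checked against a finite difference. The following sketch is not from the book's notebooks and uses made-up data:

```python
import torch

t_u = torch.tensor([3.57, 5.59, 8.19])   # made-up inputs
t_c = torch.tensor([0.5, 11.0, 28.0])
w, b = 1.0, 0.0

def loss(w, b):
    return ((w * t_u + b - t_c) ** 2).mean()

# Chain rule by hand:
# dL/dw = mean(2 * (t_p - t_c) * t_u),  dL/db = mean(2 * (t_p - t_c))
t_p = w * t_u + b
dw = (2 * (t_p - t_c) * t_u).mean()
db = (2 * (t_p - t_c)).mean()

# Centered finite differences should agree closely on this quadratic loss
eps = 1e-3
dw_num = (loss(w + eps, b) - loss(w - eps, b)) / (2 * eps)
db_num = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)
print(float(dw), float(dw_num), float(db), float(db_num))
```

This is the whole of the "backward pass" for this model: two closed-form expressions, evaluated once.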
Even if we have a complicated model with millions of parameters, as long as our model is differentiable, computing the gradient of the loss with respect to the parameters amounts to writing the analytical expression for the derivatives and evaluating them once. Granted, writing the analytical expression for the derivatives of a very deep composition of linear and nonlinear functions is not a lot of fun.9 It isn't particularly quick, either.

9 Or maybe it is; we won't judge how you spend your weekend!

5.5.1  Computing the gradient automatically

This is when PyTorch tensors come to the rescue, with a PyTorch component called autograd. Chapter 3 presented a comprehensive overview of what tensors are and what functions we can call on them. We left out one very interesting aspect, however: PyTorch tensors can remember where they come from, in terms of the operations and parent tensors that originated them, and they can automatically provide the chain of derivatives of such operations with respect to their inputs. This means we won't need to derive our model by hand;10 given a forward expression, no matter how nested, PyTorch will automatically provide the gradient of that expression with respect to its input parameters.

10 Bummer! What are we going to do on Saturdays, now?

APPLYING AUTOGRAD

At this point, the best way to proceed is to rewrite our thermometer calibration code, this time using autograd, and see what happens. First, we recall our model and loss function.

Listing 5.1  code/p1ch5/2_autograd.ipynb

# In[3]:
def model(t_u, w, b):
    return w * t_u + b

# In[4]:
def loss_fn(t_p, t_c):
    squared_diffs = (t_p - t_c)**2
    return squared_diffs.mean()

Let's again initialize a parameters tensor:

# In[5]:
params = torch.tensor([1.0, 0.0], requires_grad=True)

USING THE GRAD ATTRIBUTE

Notice the requires_grad=True argument to the tensor constructor?
That argument is telling PyTorch to track the entire family tree of tensors resulting from operations on params. In other words, any tensor that will have params as an ancestor will have access to the chain of functions that were called to get from params to that tensor. In case these functions are differentiable (and most PyTorch tensor operations will be), the value of the derivative will be automatically populated as a grad attribute of the params tensor.

In general, all PyTorch tensors have an attribute named grad. Normally, it's None:

# In[6]:
params.grad is None

# Out[6]:
True

All we have to do to populate it is to start with a tensor with requires_grad set to True, then call the model and compute the loss, and then call backward on the loss tensor:

# In[7]:
loss = loss_fn(model(t_u, *params), t_c)
loss.backward()

params.grad

# Out[7]:
tensor([4517.2969, 82.6000])

At this point, the grad attribute of params contains the derivatives of the loss with respect to each element of params.

When we compute our loss while the parameters w and b require gradients, in addition to performing the actual computation, PyTorch creates the autograd graph with the operations (in black circles) as nodes, as shown in the top row of figure 5.10. When we call loss.backward(), PyTorch traverses this graph in the reverse direction to compute the gradients, as shown by the arrows in the bottom row of the figure.

ACCUMULATING GRAD FUNCTIONS

We could have any number of tensors with requires_grad set to True and any composition of functions. In this case, PyTorch would compute the derivatives of the loss throughout the chain of functions (the computation graph) and accumulate their values in the grad attribute of those tensors (the leaf nodes of the graph).

Alert! Big gotcha ahead. This is something PyTorch newcomers—and a lot of more experienced folks, too—trip up on regularly.
We just wrote accumulate, not store.

WARNING  Calling backward will lead derivatives to accumulate at leaf nodes. We need to zero the gradient explicitly after using it for parameter updates.

Let's repeat together: calling backward will lead derivatives to accumulate at leaf nodes. So if backward was called earlier, the loss is evaluated again, backward is called again (as in any training loop), and the gradient at each leaf is accumulated (that is, summed) on top of the one computed at the previous iteration, which leads to an incorrect value for the gradient.

In order to prevent this from occurring, we need to zero the gradient explicitly at each iteration. We can do this easily using the in-place zero_ method:

# In[8]:
if params.grad is not None:
    params.grad.zero_()

Figure 5.10  The forward graph and backward graph of the model as computed with autograd

NOTE  You might be curious why zeroing the gradient is a required step instead of zeroing happening automatically whenever we call backward. Doing it this way provides more flexibility and control when working with gradients in complicated models.

Having this reminder drilled into our heads, let's see what our autograd-enabled training code looks like, start to finish:

# In[9]:
def training_loop(n_epochs, learning_rate, params, t_u, t_c):
    for epoch in range(1, n_epochs + 1):
        if params.grad is not None:
            params.grad.zero_()

        t_p = model(t_u, *params)
        loss = loss_fn(t_p, t_c)
        loss.backward()

        with torch.no_grad():
            params -= learning_rate * params.grad

        if epoch % 500 == 0:
            print('Epoch %d, Loss %f' % (epoch, float(loss)))

    return params

Note that our code updating params is not quite as straightforward as we might have expected. There are two particularities. First, we are encapsulating the update in a no_grad context using the Python with statement.
This means that within the with block, the PyTorch autograd mechanism should look away:11 that is, not add edges to the forward graph. In fact, when we are executing this bit of code, the forward graph that PyTorch records is consumed when we call backward, leaving us with the params leaf node. But now we want to change this leaf node before we start building a fresh forward graph on top of it. While this use case is usually wrapped inside the optimizers we discuss in section 5.5.2, we will take a closer look when we see another common use of no_grad in section 5.5.4.

11 In reality, it will track that something changed params using an in-place operation.

Second, we update params in place. This means we keep the same params tensor around but subtract our update from it. When using autograd, we usually avoid in-place updates because PyTorch's autograd engine might need the values we would be modifying for the backward pass. Here, however, we are operating without autograd, and it is beneficial to keep the params tensor. Not replacing the parameters by assigning new tensors to their variable name will become crucial when we register our parameters with the optimizer in section 5.5.2.

Two remarks on the loop itself: the zeroing of the gradient could be done at any point in the loop prior to calling loss.backward(), and while the no_grad update is a somewhat cumbersome bit of code, as we'll see in the next section, it's not an issue in practice.
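The look-away behavior of no_grad can be seen in a minimal sketch (not from the book's notebooks): inside the block, the in-place update on a leaf tensor that requires grad is allowed; outside it, the very same statement is an error because it would corrupt the graph.

```python
import torch

params = torch.tensor([1.0, 0.0], requires_grad=True)

# Inside no_grad, autograd looks away: the in-place update on a leaf
# tensor that requires grad is permitted and not recorded in any graph
with torch.no_grad():
    params -= 0.1 * torch.tensor([1.0, 1.0])

# Outside no_grad, the same in-place update on a leaf that requires
# grad raises a RuntimeError
raised = False
try:
    params -= 0.1 * torch.tensor([1.0, 1.0])
except RuntimeError:
    raised = True

print(params, raised)
```

Note that the tensor keeps requires_grad=True throughout; no_grad only suspends the recording of operations while the block is active.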
Let's see if it works:

# In[10]:
training_loop(
    n_epochs = 5000,
    learning_rate = 1e-2,
    params = torch.tensor([1.0, 0.0], requires_grad=True),     # Adding requires_grad=True is key.
    t_u = t_un,     # Again, we're using the normalized t_un instead of t_u.
    t_c = t_c)

# Out[10]:
Epoch 500, Loss 7.860116
Epoch 1000, Loss 3.828538
Epoch 1500, Loss 3.092191
Epoch 2000, Loss 2.957697
Epoch 2500, Loss 2.933134
Epoch 3000, Loss 2.928648
Epoch 3500, Loss 2.927830
Epoch 4000, Loss 2.927679
Epoch 4500, Loss 2.927652
Epoch 5000, Loss 2.927647

tensor([ 5.3671, -17.3012], requires_grad=True)

The result is the same as we got previously. Good for us! It means that while we are capable of computing derivatives by hand, we no longer need to.

5.5.2  Optimizers a la carte

In the example code, we used vanilla gradient descent for optimization, which worked fine for our simple case. Needless to say, there are several optimization strategies and tricks that can assist convergence, especially when models get complicated.

We'll dive deeper into this topic in later chapters, but now is the right time to introduce the way PyTorch abstracts the optimization strategy away from user code: that is, the training loop we've examined. This saves us from the boilerplate busywork of having to update each and every parameter to our model ourselves. The torch module has an optim submodule where we can find classes implementing different optimization algorithms. Here's an abridged list (code/p1ch5/3_optimizers.ipynb):

# In[5]:
import torch.optim as optim

dir(optim)

# Out[5]:
['ASGD',
 'Adadelta',
 'Adagrad',
 'Adam',
 'Adamax',
 'LBFGS',
 'Optimizer',
 'RMSprop',
 'Rprop',
 'SGD',
 'SparseAdam',
 ...
]

Every optimizer constructor takes a list of parameters (aka PyTorch tensors, typically with requires_grad set to True) as the first input.
All parameters passed to the optimizer are retained inside the optimizer object so the optimizer can update their values and access their grad attribute, as represented in figure 5.11.

Figure 5.11  (A) Conceptual representation of how an optimizer holds a reference to parameters. (B) After a loss is computed from inputs, (C) a call to .backward leads to .grad being populated on parameters. (D) At that point, the optimizer can access .grad and compute the parameter updates.

Each optimizer exposes two methods: zero_grad and step. zero_grad zeroes the grad attribute of all the parameters passed to the optimizer upon construction. step updates the value of those parameters according to the optimization strategy implemented by the specific optimizer.

USING A GRADIENT DESCENT OPTIMIZER

Let's create params and instantiate a gradient descent optimizer:

# In[6]:
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-5
optimizer = optim.SGD([params], lr=learning_rate)

Here SGD stands for stochastic gradient descent. Actually, the optimizer itself is exactly a vanilla gradient descent (as long as the momentum argument is set to 0.0, which is the default). The term stochastic comes from the fact that the gradient is typically obtained by averaging over a random subset of all input samples, called a minibatch. However, the optimizer does not know if the loss was evaluated on all the samples (vanilla) or a random subset of them (stochastic), so the algorithm is literally the same in the two cases. Anyway, let's take our fancy new optimizer for a spin:

# In[7]:
t_p = model(t_u, *params)
loss = loss_fn(t_p, t_c)
loss.backward()

optimizer.step()

params

# Out[7]:
tensor([ 9.5483e-01, -8.2600e-04], requires_grad=True)

The value of params is updated upon calling step without us having to touch it ourselves!
What happens is that the optimizer looks into params.grad and updates params, subtracting learning_rate times grad from it, exactly as in our former hand-rolled code.

Ready to stick this code in a training loop? Nope! The big gotcha almost got us—we forgot to zero out the gradients. Had we called the previous code in a loop, gradients would have accumulated in the leaves at every call to backward, and our gradient descent would have been all over the place! Here's the loop-ready code, with the extra zero_grad at the correct spot (right before the call to backward):

# In[8]:
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-2
optimizer = optim.SGD([params], lr=learning_rate)

t_p = model(t_un, *params)
loss = loss_fn(t_p, t_c)

optimizer.zero_grad()     # As before, the exact placement of this call is somewhat arbitrary. It could be earlier in the loop as well.
loss.backward()
optimizer.step()

params

# Out[8]:
tensor([1.7761, 0.1064], requires_grad=True)

Perfect! See how the optim module helps us abstract away the specific optimization scheme? All we have to do is provide a list of params to it (that list can be extremely long, as is needed for very deep neural network models), and we can forget about the details.
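The gotcha is easy to reproduce in a few lines. This sketch is not from the book's notebooks; it uses made-up data with the same model and loss_fn shapes as the chapter:

```python
import torch

def model(t_u, w, b):
    return w * t_u + b

def loss_fn(t_p, t_c):
    return ((t_p - t_c) ** 2).mean()

t_u = torch.tensor([3.57, 5.59])   # made-up data
t_c = torch.tensor([0.5, 11.0])

params = torch.tensor([1.0, 0.0], requires_grad=True)

loss_fn(model(t_u, *params), t_c).backward()
first = params.grad.clone()        # one backward pass's worth of gradient

# Forgetting to zero: a second backward ADDS to the existing grad
loss_fn(model(t_u, *params), t_c).backward()
doubled = params.grad.clone()      # now exactly twice `first`

# Zeroing restores the expected behavior
params.grad.zero_()
loss_fn(model(t_u, *params), t_c).backward()
print(torch.allclose(doubled, 2 * first),
      torch.allclose(params.grad, first))   # True True
```

Since params is unchanged between the calls, each backward contributes an identical gradient, so the accumulated value is exactly double the correct one.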
Let's update our training loop accordingly:

# In[9]:
def training_loop(n_epochs, optimizer, params, t_u, t_c):
    for epoch in range(1, n_epochs + 1):
        t_p = model(t_u, *params)
        loss = loss_fn(t_p, t_c)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if epoch % 500 == 0:
            print('Epoch %d, Loss %f' % (epoch, float(loss)))

    return params

# In[10]:
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-2
optimizer = optim.SGD([params], lr=learning_rate)     # It's important that both params are the same object; otherwise the optimizer won't know what parameters were used by the model.

training_loop(
    n_epochs = 5000,
    optimizer = optimizer,
    params = params,
    t_u = t_un,
    t_c = t_c)

# Out[10]:
Epoch 500, Loss 7.860118
Epoch 1000, Loss 3.828538
Epoch 1500, Loss 3.092191
Epoch 2000, Loss 2.957697
Epoch 2500, Loss 2.933134
Epoch 3000, Loss 2.928648
Epoch 3500, Loss 2.927830
Epoch 4000, Loss 2.927680
Epoch 4500, Loss 2.927651
Epoch 5000, Loss 2.927648

tensor([ 5.3671, -17.3012], requires_grad=True)

Again, we get the same result as before. Great: this is further confirmation that we know how to descend a gradient by hand!

TESTING OTHER OPTIMIZERS

In order to test more optimizers, all we have to do is instantiate a different optimizer, say Adam, instead of SGD. The rest of the code stays as it is. Pretty handy stuff.

We won't go into much detail about Adam; suffice to say that it is a more sophisticated optimizer in which the learning rate is set adaptively. In addition, it is a lot less sensitive to the scaling of the parameters—so insensitive that we can go back to using
the original (non-normalized) input t_u, and even increase the learning rate to 1e-1, and Adam won't even blink:

# In[11]:
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-1
optimizer = optim.Adam([params], lr=learning_rate)     # New optimizer class

training_loop(
    n_epochs = 2000,
    optimizer = optimizer,
    params = params,
    t_u = t_u,     # We're back to the original t_u as our input.
    t_c = t_c)

# Out[11]:
Epoch 500, Loss 7.612903
Epoch 1000, Loss 3.086700
Epoch 1500, Loss 2.928578
Epoch 2000, Loss 2.927646

tensor([ 0.5367, -17.3021], requires_grad=True)

The optimizer is not the only flexible part of our training loop. Let's turn our attention to the model. In order to train a neural network on the same data and the same loss, all we would need to change is the model function. It wouldn't make particular sense in this case, since we know that converting Celsius to Fahrenheit amounts to a linear transformation, but we'll do it anyway in chapter 6. We'll see quite soon that neural networks allow us to remove our arbitrary assumptions about the shape of the function we should be approximating. Even so, we'll see how neural networks manage to be trained even when the underlying processes are highly nonlinear (such as in the case of describing an image with a sentence, as we saw in chapter 2).

We have touched on a lot of the essential concepts that will enable us to train complicated deep learning models while knowing what's going on under the hood: backpropagation to estimate gradients, autograd, and optimizing weights of models using gradient descent or other optimizers. Really, there isn't a lot more. The rest is mostly filling in the blanks, however extensive they are.

Next up, we're going to offer an aside on how to split our samples, because that sets up a perfect use case for learning how to better control autograd.
5.5.3  Training, validation, and overfitting

Johannes Kepler taught us one last thing that we didn't discuss so far, remember? He kept part of the data on the side so that he could validate his models on independent observations. This is a vital thing to do, especially when the model we adopt could potentially approximate functions of any shape, as in the case of neural networks. In other words, a highly adaptable model will tend to use its many parameters to make sure the loss is minimal at the data points, but we'll have no guarantee that the model behaves well away from or in between the data points. After all, that's what we're asking the optimizer to do: minimize the loss at the data points. Sure enough, if we had independent data points that we didn't use to evaluate our loss or descend along its negative gradient, we would soon find out that evaluating the loss at those independent data points would yield higher-than-expected loss. We have already mentioned this phenomenon, called overfitting.

The first action we can take to combat overfitting is recognizing that it might happen. In order to do so, as Kepler figured out in 1600, we must take a few data points out of our dataset (the validation set) and only fit our model on the remaining data points (the training set), as shown in figure 5.12. Then, while we're fitting the model, we can evaluate the loss once on the training set and once on the validation set. When we're trying to decide if we've done a good job of fitting our model to the data, we must look at both!

EVALUATING THE TRAINING LOSS

The training loss will tell us if our model can fit the training set at all—in other words, if our model has enough capacity to process the relevant information in the data.
If our mysterious thermometer somehow managed to measure temperatures using a logarithmic scale, our poor linear model would not have had a chance to fit those measurements and provide us with a sensible conversion to Celsius. In that case, our training loss (the loss we were printing in the training loop) would stop decreasing well before approaching zero.

Figure 5.12  Conceptual representation of a data-producing process and the collection and use of training data and independent validation data

A deep neural network can potentially approximate complicated functions, provided that the number of neurons, and therefore parameters, is high enough. The fewer the number of parameters, the simpler the shape of the function our network will be able to approximate. So, rule 1: if the training loss is not decreasing, chances are the model is too simple for the data. The other possibility is that our data just doesn't contain meaningful information that lets it explain the output: if the nice folks at the shop sell us a barometer instead of a thermometer, we will have little chance of predicting temperature in Celsius from just pressure, even if we use the latest neural network architecture from Quebec (www.umontreal.ca/en/artificialintelligence).

GENERALIZING TO THE VALIDATION SET

What about the validation set? Well, if the loss evaluated in the validation set doesn't decrease along with the training set, it means our model is improving its fit of the samples it is seeing during training, but it is not generalizing to samples outside this precise set. As soon as we evaluate the model at new, previously unseen points, the values of the loss function are poor. So, rule 2: if the training loss and the validation loss diverge, we're overfitting.
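Rule 1 can be demonstrated with a tiny, fully made-up experiment (not from the book's notebooks): a linear model trained on quadratic data stalls at a nonzero loss no matter how long we train, because no straight line can follow a parabola.

```python
import torch

# Made-up data: the target is a quadratic function of the input,
# so a straight line lacks the capacity to fit it
t_u = torch.tensor([-2.0, -1.0, 0.0, 1.0, 2.0])
t_c = t_u ** 2

params = torch.tensor([1.0, 0.0], requires_grad=True)
optimizer = torch.optim.SGD([params], lr=0.1)

for epoch in range(500):
    optimizer.zero_grad()
    t_p = params[0] * t_u + params[1]          # a linear model
    loss = ((t_p - t_c) ** 2).mean()
    loss.backward()
    optimizer.step()

# The best any line can do here is w=0, b=mean(t_c)=2, with MSE 2.8,
# so the training loss stalls far from zero
print(float(loss))
```

Seeing the loss plateau like this tells us the model is too simple for the data, not that we need more epochs.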
Let's delve into this phenomenon a little, going back to our thermometer example. We could have decided to fit the data with a more complicated function, like a piecewise polynomial or a really large neural network. It could generate a model meandering its way through the data points, as in figure 5.13, just because it pushes the loss very close to zero. Since the behavior of the function away from the data points does not increase the loss, there's nothing to keep the model in check for inputs away from the training data points.

Figure 5.13  Rather extreme example of overfitting

What's the cure, though? Good question. From what we just said, overfitting really looks like a problem of making sure the behavior of the model in between data points is sensible for the process we're trying to approximate. First of all, we should make sure we get enough data for the process. If we collected data from a sinusoidal process by sampling it regularly at a low frequency, we would have a hard time fitting a model to it.

Assuming we have enough data points, we should make sure the model that is capable of fitting the training data is as regular as possible in between them. There are several ways to achieve this. One is adding penalization terms to the loss function, to make it cheaper for the model to behave more smoothly and change more slowly (up to a point). Another is to add noise to the input samples, to artificially create new data points in between training data samples and force the model to try to fit those, too. There are several other ways, all of them somewhat related to these. But the best favor we can do to ourselves, at least as a first move, is to make our model simpler.
From an intuitive standpoint, a simpler model may not fit the training data as perfectly as a more complicated model would, but it will likely behave more regularly in between data points.

We've got some nice trade-offs here. On the one hand, we need the model to have enough capacity for it to fit the training set. On the other, we need the model to avoid overfitting. Therefore, in order to choose the right size for a neural network model in terms of parameters, the process is based on two steps: increase the size until it fits, and then scale it down until it stops overfitting.

We'll see more about this in chapter 12—we'll discover that our life will be a balancing act between fitting and overfitting. For now, let's get back to our example and see how we can split the data into a training set and a validation set. We'll do it by shuffling t_u and t_c the same way and then splitting the resulting shuffled tensors into two parts.

SPLITTING A DATASET

Shuffling the elements of a tensor amounts to finding a permutation of its indices. The randperm function does exactly this:

# In[12]:
n_samples = t_u.shape[0]
n_val = int(0.2 * n_samples)

shuffled_indices = torch.randperm(n_samples)

train_indices = shuffled_indices[:-n_val]
val_indices = shuffled_indices[-n_val:]

train_indices, val_indices     # Since these are random, don't be surprised if your values end up different from here on out.

# Out[12]:
(tensor([9, 6, 5, 8, 4, 7, 0, 1, 3]), tensor([ 2, 10]))

We just got index tensors that we can use to build training and validation sets starting from the data tensors:

# In[13]:
train_t_u = t_u[train_indices]
train_t_c = t_c[train_indices]

val_t_u = t_u[val_indices]
val_t_c = t_c[val_indices]

train_t_un = 0.1 * train_t_u
val_t_un = 0.1 * val_t_u

Our training loop doesn't really change.
We just want to additionally evaluate the validation loss at every epoch, to have a chance to recognize whether we're overfitting:

# In[14]:
def training_loop(n_epochs, optimizer, params,
                  train_t_u, val_t_u, train_t_c, val_t_c):
    for epoch in range(1, n_epochs + 1):
        train_t_p = model(train_t_u, *params)          # These two pairs of lines are the same
        train_loss = loss_fn(train_t_p, train_t_c)     # except for the train_* vs. val_* inputs.

        val_t_p = model(val_t_u, *params)
        val_loss = loss_fn(val_t_p, val_t_c)

        optimizer.zero_grad()
        train_loss.backward()     # Note that there is no val_loss.backward() here, since we don't want to train the model on the validation data.
        optimizer.step()

        if epoch <= 3 or epoch % 500 == 0:
            print(f"Epoch {epoch}, Training loss {train_loss.item():.4f},"
                  f" Validation loss {val_loss.item():.4f}")

    return params

# In[15]:
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-2
optimizer = optim.SGD([params], lr=learning_rate)

training_loop(
    n_epochs = 3000,
    optimizer = optimizer,
    params = params,
    train_t_u = train_t_un,     # Since we're using SGD again, we're back to using normalized inputs.
    val_t_u = val_t_un,
    train_t_c = train_t_c,
    val_t_c = val_t_c)

# Out[15]:
Epoch 1, Training loss 66.5811, Validation loss 142.3890
Epoch 2, Training loss 38.8626, Validation loss 64.0434
Epoch 3, Training loss 33.3475, Validation loss 39.4590
Epoch 500, Training loss 7.1454, Validation loss 9.1252
Epoch 1000, Training loss 3.5940, Validation loss 5.3110
Epoch 1500, Training loss 3.0942, Validation loss 4.1611
Epoch 2000, Training loss 3.0238, Validation loss 3.7693
Epoch 2500, Training loss 3.0139, Validation loss 3.6279
Epoch 3000, Training loss 3.0125, Validation loss 3.5756

tensor([ 5.1964, -16.7512], requires_grad=True)

Here we are not being entirely fair to our model. The validation set is really small, so the validation loss will only be meaningful up to a point.
In any case, we note that the validation loss is higher than our training loss, although not by an order of magnitude. We expect a model to perform better on the training set, since the model parameters are being shaped by the training set. Our main goal is to see both the training loss and the validation loss decreasing. While ideally both losses would be roughly the same value, as long as the validation loss stays reasonably close to the training loss, we know that our model is continuing to learn generalized things about our data. In figure 5.14, case C is ideal, while D is acceptable. In case A, the model isn't learning at all; and in case B, we see overfitting. We'll see more meaningful examples of overfitting in chapter 12.

Figure 5.14 Overfitting scenarios when looking at the training (solid line) and validation (dotted line) losses. (A) Training and validation losses do not decrease; the model is not learning due to no information in the data or insufficient capacity of the model. (B) Training loss decreases while validation loss increases: overfitting. (C) Training and validation losses decrease exactly in tandem. Performance may be improved further, as the model is not at the limit of overfitting. (D) Training and validation losses have different absolute values but similar trends: overfitting is under control.

5.5.4 Autograd nits and switching it off

From the previous training loop, we can appreciate that we only ever call backward on train_loss. Therefore, errors will only ever backpropagate based on the training set; the validation set is used to provide an independent evaluation of the accuracy of the model's output on data that wasn't used for training. The curious reader will have an embryo of a question at this point.
The model is evaluated twice, once on train_t_u and once on val_t_u, and then backward is called. Won't this confuse autograd? Won't backward be influenced by the values generated during the pass on the validation set? Luckily for us, this isn't the case. The first line in the training loop evaluates model on train_t_u to produce train_t_p. Then train_loss is evaluated from train_t_p. This creates a computation graph that links train_t_u to train_t_p to train_loss. When model is evaluated again on val_t_u, it produces val_t_p and val_loss. In this case, a separate computation graph will be created that links val_t_u to val_t_p to val_loss. Separate tensors have been run through the same functions, model and loss_fn, generating separate computation graphs, as shown in figure 5.15. The only tensors these two graphs have in common are the parameters. When we call backward on train_loss, we run backward on the first graph. In other words, we accumulate the derivatives of train_loss with respect to the parameters based on the computation generated from train_t_u.

Figure 5.15 Diagram showing how gradients propagate through a graph with two losses when .backward is called on one of them

If we (incorrectly) called backward on val_loss as well, we would accumulate the derivatives of val_loss with respect to the parameters on the same leaf nodes. Remember the zero_grad thing, whereby gradients are accumulated on top of each other every time we call backward unless we zero out the gradients explicitly? Well, here something very similar would happen: calling backward on val_loss would lead to gradients accumulating in the params tensor, on top of those generated during the train_loss.backward() call. In this case, we would effectively train our model on the whole dataset (both training and validation), since the gradient would depend on both.
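This accumulation hazard is easy to demonstrate on toy tensors. The sketch below uses a tiny linear model and made-up values (ours, not the book's data) to show how an extra backward call on a second loss piles gradients on top of the first:

```python
import torch

# Toy shared parameters, playing the role of params in the book's loop
params = torch.tensor([1.0, 0.0], requires_grad=True)

def model(t_u, w, b):
    return w * t_u + b

train_t_u = torch.tensor([1.0, 2.0])
val_t_u = torch.tensor([3.0])

# Two losses, two separate graphs, one shared leaf: params
train_loss = (model(train_t_u, *params) ** 2).mean()
val_loss = (model(val_t_u, *params) ** 2).mean()

train_loss.backward()                  # the correct call: training graph only
grad_train_only = params.grad.clone()

val_loss.backward()                    # the incorrect extra call: accumulates on top
grad_both = params.grad.clone()

print(grad_train_only)  # tensor([5., 3.])
print(grad_both)        # tensor([23., 9.])
```

Zeroing params.grad between steps, as optimizer.zero_grad() does in the training loop, is what keeps each training update uncontaminated by leftover gradients.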
Pretty interesting. There's another element for discussion here. Since we're not ever calling backward on val_loss, why are we building the graph in the first place? We could in fact just call model and loss_fn as plain functions, without tracking the computation. However optimized, building the autograd graph comes with additional costs that we could totally forgo during the validation pass, especially when the model has millions of parameters.

In order to address this, PyTorch allows us to switch off autograd when we don't need it, using the torch.no_grad context manager.12 We won't see any meaningful advantage in terms of speed or memory consumption on our small problem. However, for larger models, the differences can add up. We can make sure this works by checking the value of the requires_grad attribute on the val_loss tensor:

# In[16]:
def training_loop(n_epochs, optimizer, params, train_t_u, val_t_u,
                  train_t_c, val_t_c):
    for epoch in range(1, n_epochs + 1):
        train_t_p = model(train_t_u, *params)
        train_loss = loss_fn(train_t_p, train_t_c)

        with torch.no_grad():    # Context manager here
            val_t_p = model(val_t_u, *params)
            val_loss = loss_fn(val_t_p, val_t_c)
            assert val_loss.requires_grad == False    # Checks that our output requires_grad args are forced to False inside this block

        optimizer.zero_grad()
        train_loss.backward()
        optimizer.step()

Using the related set_grad_enabled context, we can also condition the code to run with autograd enabled or disabled, according to a Boolean expression, typically indicating whether we are running in training or inference mode. We could, for instance, define a calc_forward function that takes data as input and runs model and loss_fn with or without autograd according to a Boolean is_train argument:

# In[17]:
def calc_forward(t_u, t_c, is_train):
    with torch.set_grad_enabled(is_train):
        t_p = model(t_u, *params)
        loss = loss_fn(t_p, t_c)
    return loss

12 We should not think that using torch.no_grad necessarily implies that the outputs do not require gradients.
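Here is how calc_forward behaves once wired up. The model, loss, and data definitions below are a self-contained stand-in for the notebook state, with toy values of ours rather than the book's measurements:

```python
import torch

params = torch.tensor([1.0, 0.0], requires_grad=True)

def model(t_u, w, b):
    return w * t_u + b

def loss_fn(t_p, t_c):
    return ((t_p - t_c) ** 2).mean()

def calc_forward(t_u, t_c, is_train):
    # Build the autograd graph only when training
    with torch.set_grad_enabled(is_train):
        t_p = model(t_u, *params)
        loss = loss_fn(t_p, t_c)
    return loss

t_un = torch.tensor([3.57, 5.59])  # toy normalized inputs
t_c = torch.tensor([0.5, 14.0])

train_loss = calc_forward(t_un, t_c, is_train=True)
val_loss = calc_forward(t_un, t_c, is_train=False)

print(train_loss.requires_grad, val_loss.requires_grad)  # True False
```

The same forward code runs in both modes; only the is_train flag decides whether a graph is recorded.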
There are particular circumstances (involving views, as discussed in section 3.8.1) in which requires_grad is not set to False even when created in a no_grad context. It is best to use the detach function if we need to be sure.

5.6 Conclusion

We started this chapter with a big question: how is it that a machine can learn from examples? We spent the rest of the chapter describing the mechanism with which a model can be optimized to fit data. We chose to stick with a simple model in order to see all the moving parts without unneeded complications.

Now that we've had our fill of appetizers, in chapter 6 we'll finally get to the main course: using a neural network to fit our data. We'll work on solving the same thermometer problem, but with the more powerful tools provided by the torch.nn module. We'll adopt the same spirit of using this small problem to illustrate the larger uses of PyTorch. The problem doesn't need a neural network to reach a solution, but it will allow us to develop a simpler understanding of what's required to train a neural network.

5.7 Exercise

1 Redefine the model to be w2 * t_u ** 2 + w1 * t_u + b.
  a What parts of the training loop, and so on, need to change to accommodate this redefinition?
  b What parts are agnostic to swapping out the model?
  c Is the resulting loss higher or lower after training?
  d Is the actual result better or worse?

5.8 Summary

- Linear models are the simplest reasonable model to use to fit data.
- Convex optimization techniques can be used for linear models, but they do not generalize to neural networks, so we focus on stochastic gradient descent for parameter estimation.
- Deep learning can be used for generic models that are not engineered for solving a specific task, but instead can be automatically adapted to specialize themselves on the problem at hand.
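The detach escape hatch mentioned in the footnote can be checked in two lines; a minimal sketch:

```python
import torch

w = torch.tensor([5.0], requires_grad=True)
y = w * 2.0              # y is part of an autograd graph

# detach returns a tensor sharing the same data but guaranteed out of the graph
y_detached = y.detach()

print(y.requires_grad)           # True
print(y_detached.requires_grad)  # False
```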
- Learning algorithms amount to optimizing parameters of models based on observations. A loss function is a measure of the error in carrying out a task, such as the error between predicted outputs and measured values. The goal is to get the loss function as low as possible.
- The rate of change of the loss function with respect to the model parameters can be used to update the same parameters in the direction of decreasing loss.
- The optim module in PyTorch provides a collection of ready-to-use optimizers for updating parameters and minimizing loss functions.
- Optimizers use the autograd feature of PyTorch to compute the gradient for each parameter, depending on how that parameter contributes to the final output. This allows users to rely on the dynamic computation graph during complex forward passes.
- Context managers like with torch.no_grad(): can be used to control autograd's behavior.
- Data is often split into separate sets of training samples and validation samples. This lets us evaluate a model on data it was not trained on.
- Overfitting a model happens when the model's performance continues to improve on the training set but degrades on the validation set. This is usually due to the model not generalizing, and instead memorizing the desired outputs for the training set.

Using a neural network to fit the data

So far, we've taken a close look at how a linear model can learn and how to make that happen in PyTorch. We've focused on a very simple regression problem that used a linear model with only one input and one output. Such a simple example allowed us to dissect the mechanics of a model that learns, without getting overly distracted by the implementation of the model itself.
As we saw in the overview diagram in chapter 5, figure 5.2 (repeated here as figure 6.1), the exact details of a model are not needed to understand the high-level process that trains the model. Backpropagating errors to parameters and then updating those parameters by taking the gradient with respect to the loss is the same no matter what the underlying model is.

This chapter covers
- Nonlinear activation functions as the key difference compared with linear models
- Working with PyTorch's nn module
- Solving a linear-fit problem with a neural network

In this chapter, we will make some changes to our model architecture: we're going to implement a full artificial neural network to solve our temperature-conversion problem. We'll continue using our training loop from the last chapter, along with our Fahrenheit-to-Celsius samples split into training and validation sets.

We could start to use a quadratic model: rewriting model as a quadratic function of its input (for example, y = a * x**2 + b * x + c). Since such a model would be differentiable, PyTorch would take care of computing gradients, and the training loop would work as usual. That wouldn't be too interesting for us, though, because we would still be fixing the shape of the function.

This is the chapter where we begin to hook together the foundational work we've put in and the PyTorch features you'll be using day in and day out as you work on your projects. You'll gain an understanding of what's going on underneath the porcelain of the PyTorch API, rather than it just being so much black magic. Before we get into the implementation of our new model, though, let's cover what we mean by artificial neural network.

6.1 Artificial neurons

At the core of deep learning are neural networks: mathematical entities capable of representing complicated functions through a composition of simpler functions.
The term neural network is obviously suggestive of a link to the way our brain works.

Figure 6.1 Our mental model of the learning process, as implemented in chapter 5: inputs and desired outputs (ground truth) drive a forward pass that produces actual outputs given current weights; the errors (loss function) are propagated backward to change the weights so as to decrease errors, iterating with new inputs, plus a validation pass.

As a matter of fact, although the initial models were inspired by neuroscience,1 modern artificial neural networks bear only a slight resemblance to the mechanisms of neurons in the brain. It seems likely that both artificial and physiological neural networks use vaguely similar mathematical strategies for approximating complicated functions because that family of strategies works very effectively.

1 See F. Rosenblatt, "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain," Psychological Review 65(6), 386–408 (1958), https://pubmed.ncbi.nlm.nih.gov/13602029/.

NOTE We are going to drop the artificial and refer to these constructs as just neural networks from here forward.

The basic building block of these complicated functions is the neuron, as illustrated in figure 6.2. At its core, it is nothing but a linear transformation of the input (for example, multiplying the input by a number [the weight] and adding a constant [the bias]) followed by the application of a fixed nonlinear function (referred to as the activation function). Mathematically, we can write this out as o = f(w * x + b), with x as our input, w our weight or scaling factor, and b as our bias or offset. f is our activation function, set to the hyperbolic tangent, or tanh function here. In general, x and, hence, o can be simple scalars, or vector-valued (meaning holding many scalar values); and similarly, w
Figure 6.2 An artificial neuron: a linear transformation enclosed in a nonlinear function

can be a single scalar or matrix, while b is a scalar or vector (the dimensionality of the inputs and weights must match, however). In the latter case, the previous expression is referred to as a layer of neurons, since it represents many neurons via the multidimensional weights and biases.

6.1.1 Composing a multilayer network

A multilayer neural network, as represented in figure 6.3, is made up of a composition of functions like those we just discussed

x_1 = f(w_0 * x + b_0)
x_2 = f(w_1 * x_1 + b_1)
...
y = f(w_n * x_n + b_n)

where the output of a layer of neurons is used as an input for the following layer. Remember that w_0 here is a matrix, and x is a vector! Using a vector allows w_0 to hold an entire layer of neurons, not just a single weight.

6.1.2 Understanding the error function

An important difference between our earlier linear model and what we'll actually be using for deep learning is the shape of the error function. Our linear model and error-squared loss function had a convex error curve with a singular, clearly defined minimum. If we were to use other methods, we could solve for the parameters minimizing the error function automatically and definitively. That means that our parameter updates were attempting to estimate that singular correct answer as best they could.
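The single neuron o = f(w * x + b) and the layer composition from section 6.1.1 can be written out in a few lines of PyTorch. The weights, biases, and inputs below are toy choices of ours, picked only to make the shapes concrete:

```python
import torch

# A single neuron: scalar weight, bias, and tanh activation
x = torch.tensor(1.0)
w = torch.tensor(2.0)
b = torch.tensor(-1.0)
o = torch.tanh(w * x + b)   # o = f(w * x + b) = tanh(1.0)

# A two-layer composition: w_0 is a matrix holding a whole layer of neurons
torch.manual_seed(0)
x_vec = torch.tensor([1.0, 2.0])
w_0, b_0 = torch.randn(3, 2), torch.randn(3)  # 2 inputs -> 3 neurons
w_1, b_1 = torch.randn(1, 3), torch.randn(1)  # 3 neurons -> 1 output

x_1 = torch.tanh(w_0 @ x_vec + b_0)   # x_1 = f(w_0 * x + b_0)
y = torch.tanh(w_1 @ x_1 + b_1)       # y = f(w_1 * x_1 + b_1)

print(round(o.item(), 4))  # 0.7616
print(y.shape)             # torch.Size([1])
```

Note how the output of the first layer, x_1, is simply fed to the next layer as its input; stacking more (w, b) pairs extends the same pattern.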
Figure 6.3 A neural network with three layers

Neural networks do not have that same property of a convex error surface, even when using the same error-squared loss function! There's no single right answer for each parameter we're attempting to approximate. Instead, we are trying to get all of the parameters, when acting in concert, to produce a useful output. Since that useful output is only going to approximate the truth, there will be some level of imperfection. Where and how imperfections manifest is somewhat arbitrary, and by implication the parameters that control the output (and, hence, the imperfections) are somewhat arbitrary as well. This results in neural network training looking very much like parameter estimation from a mechanical perspective, but we must remember that the theoretical underpinnings are quite different.

A big part of the reason neural networks have non-convex error surfaces is due to the activation function. The ability of an ensemble of neurons to approximate a very wide range of useful functions depends on the combination of the linear and nonlinear behavior inherent to each neuron.

6.1.3 All we need is activation

As we have seen, the simplest unit in (deep) neural networks is a linear operation (scaling + offset) followed by an activation function. We already had our linear operation in our latest model; the linear operation was the entire model. The activation function plays two important roles:

- In the inner parts of the model, it allows the output function to have different slopes at different values, something a linear function by definition cannot do.
By trickily composing these differently sloped parts for many outputs, neural networks can approximate arbitrary functions, as we will see in section 6.1.6.2
- At the last layer of the network, it has the role of concentrating the outputs of the preceding linear operation into a given range.

Let's talk about what the second point means. Pretend that we're assigning a "good doggo" score to images. Pictures of retrievers and spaniels should have a high score, while images of airplanes and garbage trucks should have a low score. Bear pictures should have a lowish score, too, although higher than garbage trucks.

The problem is, we have to define a "high score": we've got the entire range of float32 to work with, and that means we can go pretty high. Even if we say "it's a 10-point scale," there's still the issue that sometimes our model is going to produce a score of 11 out of 10. Remember that under the hood, it's all sums of (w*x+b) matrix multiplications, and those won't naturally limit themselves to a specific range of outputs.

2 For an intuitive appreciation of this universal approximation property, you can pick a function from figure 6.5 and then build a building-block function that is almost zero in most parts and positive around x = 0 from scaled (including multiplied by negative numbers) and translated copies of the activation function. With scaled, translated, and dilated (squeezed along the X-axis) copies of this building-block function, you can then approximate any (continuous) function. In figure 6.6 the function in the middle row to the right could be such a building block. Michael Nielsen has an interactive demonstration in his online book Neural Networks and Deep Learning at http://mng.bz/Mdon.
CAPPING THE OUTPUT RANGE

We want to firmly constrain the output of our linear operation to a specific range so that the consumer of this output doesn't have to handle numerical inputs of puppies at 12/10, bears at –10, and garbage trucks at –1,000.

One possibility is to just cap the output values: anything below 0 is set to 0, and anything above 10 is set to 10. That's a simple activation function called torch.nn.Hardtanh (https://pytorch.org/docs/stable/nn.html#hardtanh, but note that the default range is –1 to +1).

COMPRESSING THE OUTPUT RANGE

Another family of functions that work well is torch.nn.Sigmoid, which includes 1 / (1 + e ** -x), torch.tanh, and others that we'll see in a moment. These functions have a curve that asymptotically approaches 0 or –1 as x goes to negative infinity, approaches 1 as x increases, and have a mostly constant slope at x == 0. Conceptually, functions shaped this way work well because there's an area in the middle of our linear function's output that our neuron (which, again, is just a linear function followed by an activation) will be sensitive to, while everything else gets lumped next to the boundary values. As we can see in figure 6.4, our garbage truck gets a score of –0.97, while bears and foxes and wolves end up somewhere in the –0.3 to 0.3 range.

Figure 6.4 Dogs, bears, and garbage trucks being mapped to how dog-like they are via the tanh activation function. The grizzly bear and good doggo fall in the sensitive region; the garbage truck sits in the undersaturated region (not dogs), and the very-much-dogs sit near the oversaturated end.

This results in garbage trucks being flagged as "not dogs," our good dog mapping to "clearly a dog," and our bear ending up somewhere in the middle.
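Both strategies can be tried side by side on a handful of raw scores; the score values below are made up for illustration:

```python
import torch

raw_scores = torch.tensor([-1000.0, -2.2, 0.1, 2.5, 12.0])

# Capping: Hardtanh, configured for a 0..10 scale (the default range is -1..+1)
cap = torch.nn.Hardtanh(min_val=0.0, max_val=10.0)
print(cap(raw_scores))         # tensor([ 0.0000,  0.0000,  0.1000,  2.5000, 10.0000])

# Compressing: tanh squashes everything into (-1, 1);
# the truck saturates near -1 while the bear stays near 0
print(torch.tanh(raw_scores))
```

Capping clips hard at the boundaries, while tanh keeps a smooth, differentiable transition, which matters once gradients have to flow back through the activation.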
In code, we can see the exact values:

>>> import math
>>> math.tanh(-2.2)    # Garbage truck
-0.9757431300314515
>>> math.tanh(0.1)     # Bear
0.09966799462495582
>>> math.tanh(2.5)     # Good doggo
0.9866142981514303

With the bear in the sensitive range, small changes to the bear will result in a noticeable change to the result. For example, we could switch from a grizzly to a polar bear (which has a vaguely more traditionally canine face) and see a jump up the Y-axis as we slide toward the "very much a dog" end of the graph. Conversely, a koala bear would register as less dog-like, and we would see a drop in the activated output. There isn't much we could do to the garbage truck to make it register as dog-like, though: even with drastic changes, we might only see a shift from –0.97 to –0.8 or so.

6.1.4 More activation functions

There are quite a few activation functions, some of which are shown in figure 6.5. In the first column, we see the smooth functions Tanh and Softplus, while the second column has "hard" versions of the activation functions to their left: Hardtanh and ReLU. ReLU (for rectified linear unit) deserves special note, as it is currently considered one of the best-performing general activation functions; many state-of-the-art results have used it.

Figure 6.5 A collection of common and not-so-common activation functions: Tanh, Hardtanh, Sigmoid, Softplus, ReLU, LeakyReLU

The Sigmoid activation function, also known as the logistic function, was widely used in early deep learning work but has since fallen out of common use except where we explicitly want to move to the 0…1 range: for example, when the output should be a probability.
Finally, the LeakyReLU function modifies the standard ReLU to have a small positive slope, rather than being strictly zero for negative inputs (typically this slope is 0.01, but it's shown here with slope 0.1 for clarity).

6.1.5 Choosing the best activation function

Activation functions are curious, because with such a wide variety of proven successful ones (many more than shown in figure 6.5), it's clear that there are few, if any, strict requirements. As such, we're going to discuss some generalities about activation functions that can probably be trivially disproved in the specific. That said, by definition,3 activation functions
- Are nonlinear. Repeated applications of (w*x+b) without an activation function results in a function of the same (affine linear) form. The nonlinearity allows the overall network to approximate more complex functions.
- Are differentiable, so that gradients can be computed through them. Point discontinuities, as we can see in Hardtanh or ReLU, are fine.

Without these characteristics, the network either falls back to being a linear model or becomes difficult to train. The following are true for the functions:
- They have at least one sensitive range, where nontrivial changes to the input result in a corresponding nontrivial change to the output. This is needed for training.
- Many of them have an insensitive (or saturated) range, where changes to the input result in little or no change to the output.

By way of example, the Hardtanh function could easily be used to make piecewise-linear approximations of a function by combining the sensitive range with different weights and biases on the input.
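These properties are easy to poke at with PyTorch's stock modules. The sketch below compares ReLU's hard zero for negative inputs with LeakyReLU's small negative slope (0.01, the typical value mentioned above):

```python
import torch

x = torch.tensor([-2.0, 0.0, 3.0])

relu = torch.nn.ReLU()
leaky = torch.nn.LeakyReLU(negative_slope=0.01)

print(relu(x))   # tensor([0., 0., 3.])
print(leaky(x))  # tensor([-0.0200,  0.0000,  3.0000])
```

For positive inputs the two agree exactly; for negative inputs ReLU is fully saturated at zero, while LeakyReLU still lets a small signal (and a small gradient) through.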
Often (but far from universally so), the activation function will have at least one of these:
- A lower bound that is approached (or met) as the input goes to negative infinity
- A similar-but-inverse upper bound for positive infinity

Thinking of what we know about how backpropagation works, we can figure out that the errors will propagate backward through the activation more effectively when the inputs are in the response range, while errors will not greatly affect neurons for which the input is saturated (since the gradient will be close to zero, due to the flat area around the output).

3 Of course, even these statements aren't always true; see Jakob Foerster, "Nonlinear Computation in Deep Linear Networks," OpenAI, 2019, http://mng.bz/gygE.

Put together, all this results in a pretty powerful mechanism: we're saying that in a network built out of linear + activation units, when different inputs are presented to the network, (a) different units will respond in different ranges for the same inputs, and (b) the errors associated with those inputs will primarily affect the neurons operating in the sensitive range, leaving other units more or less unaffected by the learning process. In addition, thanks to the fact that derivatives of the activation with respect to its inputs are often close to 1 in the sensitive range, estimating the parameters of the linear transformation through gradient descent for the units that operate in that range will look a lot like the linear fit we have seen previously.

We are starting to get a deeper intuition for how joining many linear + activation units in parallel and stacking them one after the other leads us to a mathematical object that is capable of approximating complicated functions.
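The claim about saturated neurons can be verified numerically: the derivative of tanh is essentially 1 near zero and essentially 0 deep in the saturated range. A minimal sketch using autograd:

```python
import torch

# One input in the sensitive range (0.0) and one deep in saturation (10.0)
x = torch.tensor([0.0, 10.0], requires_grad=True)
torch.tanh(x).sum().backward()

# d tanh/dx = 1 - tanh(x)**2: slope ~1 at x=0, vanishingly small at x=10
print(x.grad)
```

This is exactly why errors backpropagate effectively through units operating in the response range and barely touch units whose inputs are saturated.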
Different combinations of units will respond to inputs in different ranges, and the parameters for those units are relatively easy to optimize through gradient descent, since learning will behave a lot like that of a linear function until the output saturates.

6.1.6 What learning means for a neural network

Building models out of stacks of linear transformations followed by differentiable activations leads to models that can approximate highly nonlinear processes and whose parameters we can estimate surprisingly well through gradient descent. This remains true even when dealing with models with millions of parameters. What makes using deep neural networks so attractive is that it saves us from worrying too much about the exact function that represents our data, whether it is quadratic, piecewise polynomial, or something else. With a deep neural network model, we have a universal approximator and a method to estimate its parameters. This approximator can be customized to our needs, in terms of model capacity and its ability to model complicated input/output relationships, just by composing simple building blocks. We can see some examples of this in figure 6.6.

The four upper-left graphs show four neurons (A, B, C, and D), each with its own (arbitrarily chosen) weight and bias. Each neuron uses the Tanh activation function with a min of –1 and a max of 1. The varied weights and biases move the center point and change how drastically the transition from min to max happens, but they clearly all have the same general shape. The columns to the right of those show both pairs of neurons added together (A + B and then C + D). Here, we start to see some interesting properties that mimic a single layer of neurons. A + B shows a slight S curve, with the extremes approaching 0, but both a positive bump and a negative bump in the middle. Conversely, C + D has only a large positive bump, which peaks at a higher value than our single-neuron max of 1.
In the third row, we begin to compose our neurons as they would be in a two-layer network. Both C(A + B) and D(A + B) have the same positive and negative bumps that A + B shows, but the positive peak is more subtle. The composition of C(A + B) + D(A + B) shows a new property: two clearly negative bumps, and possibly a very subtle second positive peak as well, to the left of the main area of interest. All this with only four neurons in two layers!

Again, these neurons' parameters were chosen only to have a visually interesting result. Training consists of finding acceptable values for these weights and biases so that the resulting network correctly carries out a task, such as predicting likely temperatures given geographic coordinates and time of the year. By carrying out a task successfully, we mean obtaining a correct output on unseen data produced by the same data-generating process used for training data. A successfully trained network, through the values of its weights and biases, will capture the inherent structure of the data in the form of meaningful numerical representations that work correctly for previously unseen data.
Figure 6.6 Composing multiple linear units and tanh activation functions to produce nonlinear outputs. Panels: A: Tanh(-2 * x - 1.25); B: Tanh(1 * x + 0.75); A + B; C: Tanh(4 * x + 1.0); D: Tanh(-3 * x - 1.5); C + D; C(A + B); D(A + B); C(A + B) + D(A + B)

Let's take another step in our realization of the mechanics of learning: deep neural networks give us the ability to approximate highly nonlinear phenomena without having an explicit model for them. Instead, starting from a generic, untrained model, we specialize it on a task by providing it with a set of inputs and outputs and a loss function from which to backpropagate. Specializing a generic model to a task using examples is what we refer to as learning, because the model wasn't built with that specific task in mind; no rules describing how that task worked were encoded in the model.

For our thermometer example, we assumed that both thermometers measured temperatures linearly. That assumption is where we implicitly encoded a rule for our task: we hardcoded the shape of our input/output function; we couldn't have approximated anything other than data points sitting around a line. As the dimensionality of a problem grows (that is, many inputs to many outputs) and input/output relationships get complicated, assuming a shape for the input/output function is unlikely to work. The job of a physicist or an applied mathematician is often to come up with a functional description of a phenomenon from first principles, so that we can estimate the unknown parameters from measurements and get an accurate model of the world.
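The panel formulas from figure 6.6 can be reproduced in a few lines; this is our sketch of the figure's math, not code from the book:

```python
import torch

x = torch.linspace(-3, 3, 601)

# The four first-layer neurons, with the weights and biases from the panel titles
A = torch.tanh(-2 * x - 1.25)
B = torch.tanh(1 * x + 0.75)
C = lambda t: torch.tanh(4 * t + 1.0)
D = lambda t: torch.tanh(-3 * t - 1.5)

# Second layer: the summed output A + B feeds both C and D
out = C(A + B) + D(A + B)

print(out.shape)                      # torch.Size([601])
print(float(out.abs().max()) <= 2.0)  # True: a sum of two tanh outputs is bounded by 2
```

Plotting out against x (for instance with matplotlib) reproduces the bottom-right panel, two negative bumps included.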
Deep neural networks, on the other hand, are families of functions that have the ability to approximate a wide range of input/output relationships without necessarily requiring us to come up with an explanatory model of a phenomenon. In a way, we're renouncing an explanation in exchange for the possibility of tackling increasingly complicated problems. In another way, we sometimes lack the ability, information, or computational resources to build an explicit model of what we're presented with, so data-driven methods are our only way forward.

6.2 The PyTorch nn module

All this talking about neural networks is probably making you really curious about building one from scratch with PyTorch. Our first step will be to replace our linear model with a neural network unit. This will be a somewhat useless step backward from a correctness perspective, since we've already verified that our calibration only required a linear function, but it will still be instrumental for starting on a sufficiently simple problem and scaling up later.

PyTorch has a whole submodule dedicated to neural networks, called torch.nn. It contains the building blocks needed to create all sorts of neural network architectures. Those building blocks are called modules in PyTorch parlance (such building blocks are often referred to as layers in other frameworks). A PyTorch module is a Python class deriving from the nn.Module base class. A module can have one or more Parameter instances as attributes, which are tensors whose values are optimized during the training process (think w and b in our linear model). A module can also have one or more submodules (subclasses of nn.Module) as attributes, and it will be able to track their parameters as well.

NOTE The submodules must be top-level attributes, not buried inside list or dict instances!
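A minimal custom module (our sketch, not code from the book) shows nn.Module tracking both a Parameter attribute and a submodule's parameters automatically:

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1))  # a Parameter attribute
        self.linear = nn.Linear(1, 1)             # a submodule attribute

    def forward(self, x):
        return self.scale * self.linear(x)

m = TinyModel()

# Both the direct Parameter and the submodule's parameters are registered
print(sorted(name for name, _ in m.named_parameters()))
# ['linear.bias', 'linear.weight', 'scale']
```

This registration is what lets us later hand model.parameters() straight to an optimizer without listing tensors by hand.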
Otherwise, the optimizer will not be able to locate the submodules (and, hence, their parameters). For situations where your model requires a list or dict of submodules, PyTorch provides nn.ModuleList and nn.ModuleDict.

Unsurprisingly, we can find a subclass of nn.Module called nn.Linear, which applies an affine transformation to its input (via the parameter attributes weight and bias) and is equivalent to what we implemented earlier in our thermometer experiments. We'll now start precisely where we left off and convert our previous code to a form that uses nn.

6.2.1 Using __call__ rather than forward

All PyTorch-provided subclasses of nn.Module have their __call__ method defined. This allows us to instantiate an nn.Linear and call it as if it was a function, like so (code/p1ch6/1_neural_networks.ipynb):

# In[5]:
import torch.nn as nn

linear_model = nn.Linear(1, 1)    # We'll look into the constructor arguments in a moment.
linear_model(t_un_val)

# Out[5]:
tensor([[0.6018],
        [0.2877]], grad_fn=<AddmmBackward>)

Calling an instance of nn.Module with a set of arguments ends up calling a method named forward with the same arguments. The forward method is what executes the forward computation, while __call__ does other rather important chores before and after calling forward. So, it is technically possible to call forward directly, and it will produce the same output as __call__, but this should not be done from user code:

y = model(x)            # Correct!
y = model.forward(x)    # Silent error. Don't do it!

Here's the implementation of Module.__call__ (we left out the bits related to the JIT and made some simplifications for clarity; torch/nn/modules/module.py, line 483, class: Module):

def __call__(self, *input, **kwargs):
    for hook in self._forward_pre_hooks.values():
        hook(self, input)
    result = self.forward(*input, **kwargs)
    for hook in self._forward_hooks.values():
        hook_result = hook(self, input, result)
        # ...
    for hook in self._backward_hooks.values():
        # ...
    return result

As we can see, there are a lot of hooks that won't get called properly if we just use .forward(…) directly.

6.2.2 Returning to the linear model

Back to our linear model. The constructor to nn.Linear accepts three arguments: the number of input features, the number of output features, and whether the linear model includes a bias or not (defaulting to True, here):

# In[5]:
import torch.nn as nn

linear_model = nn.Linear(1, 1)    # The arguments are input size, output size, and bias defaulting to True.
linear_model(t_un_val)

# Out[5]:
tensor([[0.6018],
        [0.2877]], grad_fn=<AddmmBackward>)

The number of features in our case just refers to the size of the input and the output tensor for the module, so 1 and 1. If we used both temperature and barometric pressure as input, for instance, we would have two features in input and one feature in output. As we will see, for more complex models with several intermediate modules, the number of features will be associated with the capacity of the model.

We have an instance of nn.Linear with one input and one output feature. That only requires one weight and one bias:

# In[6]:
linear_model.weight

# Out[6]:
Parameter containing:
tensor([[-0.0674]], requires_grad=True)

# In[7]:
linear_model.bias

# Out[7]:
Parameter containing:
tensor([0.7488], requires_grad=True)

We can call the module with some input:

# In[8]:
x = torch.ones(1)
linear_model(x)

# Out[8]:
tensor([0.6814], grad_fn=<AddBackward0>)

Although PyTorch lets us get away with it, we don't actually provide an input with the right dimensionality. We have a model that takes one input and produces one output, but PyTorch nn.Module and its subclasses are designed to do so on multiple samples at the same time. To accommodate multiple samples, modules expect the zeroth dimension of the input to be the number of samples in the batch.
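The module machinery just described (Parameter attributes registered on an nn.Module subclass, with __call__ dispatching to forward) can be sketched with a minimal reimplementation of a linear layer. This is an illustrative sketch, not code from the book; the class name MyLinear is made up.

```python
import torch
import torch.nn as nn

class MyLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # Registering tensors as nn.Parameter makes them show up in
        # .parameters(), so the optimizer can find and update them.
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # The same affine transformation nn.Linear performs
        return x @ self.weight.t() + self.bias

model = MyLinear(1, 1)
y = model(torch.ones(10, 1))  # goes through __call__, which calls forward
print(y.shape)                # torch.Size([10, 1])
```

Calling model(...) rather than model.forward(...) keeps the hook machinery shown above in the loop, exactly as with the built-in modules.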
We encountered this concept in chapter 4, when we learned how to arrange real-world data into tensors.

BATCHING INPUTS
Any module in nn is written to produce outputs for a batch of multiple inputs at the same time. Thus, assuming we need to run nn.Linear on 10 samples, we can create an input tensor of size B × Nin, where B is the size of the batch and Nin is the number of input features, and run it once through the model. For example:

# In[9]:
x = torch.ones(10, 1)
linear_model(x)

# Out[9]:
tensor([[0.6814],
        [0.6814],
        [0.6814],
        [0.6814],
        [0.6814],
        [0.6814],
        [0.6814],
        [0.6814],
        [0.6814],
        [0.6814]], grad_fn=<AddmmBackward>)

Let's dig into what's going on here, with figure 6.7 showing a similar situation with batched image data. Our input is B × C × H × W with a batch size of 3 (say, images of a dog, a bird, and then a car), three channel dimensions (red, green, and blue), and an unspecified number of pixels for height and width. As we can see, the output is a tensor of size B × Nout, where Nout is the number of output features: four, in this case.

OPTIMIZING BATCHES
The reason we want to do this batching is multifaceted. One big motivation is to make sure the computation we're asking for is big enough to saturate the computing resources we're using to perform the computation. GPUs in particular are highly parallelized, so a single input on a small model will leave most of the computing units idle. By providing batches of inputs, the calculation can be spread across the otherwise-idle units, which means the batched results come back just as quickly as a single result would. Another benefit is that some advanced models use statistical information from the entire batch, and those statistics get better with larger batch sizes.

Back to our thermometer data, t_u and t_c were two 1D tensors of size B. Thanks to broadcasting, we could write our linear model as w * x + b, where w and b were two scalar parameters.
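The B × Nin in, B × Nout out convention can be checked directly with made-up sizes, here 3 samples, 2 input features, and 4 output features, echoing the shapes in figure 6.7; these numbers are chosen purely for illustration.

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 4)   # Nin = 2, Nout = 4
x = torch.ones(3, 2)      # a batch of B = 3 samples, each with 2 features
y = model(x)              # one call processes the whole batch
print(y.shape)            # torch.Size([3, 4]), that is, B x Nout
```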
This worked because we had a single input feature: if we had two, we would need to add an extra dimension to turn that 1D tensor into a matrix with samples in the rows and features in the columns.

That's exactly what we need to do to switch to using nn.Linear. We reshape our B inputs to B × Nin, where Nin is 1. That is easily done with unsqueeze:

# In[2]:
t_c = [0.5, 14.0, 15.0, 28.0, 11.0, 8.0, 3.0, -4.0, 6.0, 13.0, 21.0]
t_u = [35.7, 55.9, 58.2, 81.9, 56.3, 48.9, 33.9, 21.8, 48.4, 60.4, 68.4]
t_c = torch.tensor(t_c).unsqueeze(1)    # Adds the extra dimension at axis 1
t_u = torch.tensor(t_u).unsqueeze(1)

t_u.shape

# Out[2]:
torch.Size([11, 1])

[Figure 6.7 Three RGB images batched together and fed into a neural network (input B × C × H × W, batch of 3, channels red, green, and blue). The output is a batch of three vectors of size 4.]

We're done; let's update our training code. First, we replace our handmade model with nn.Linear(1, 1), and then we need to pass the linear model parameters to the optimizer:

# In[10]:
linear_model = nn.Linear(1, 1)    # This is just a redefinition from earlier.
optimizer = optim.SGD(
    linear_model.parameters(),    # This method call replaces [params].
    lr=1e-2)

Earlier, it was our responsibility to create parameters and pass them as the first argument to optim.SGD. Now we can use the parameters method to ask any nn.Module for a list of parameters owned by it or any of its submodules:

# In[11]:
linear_model.parameters()

# Out[11]:
<generator object Module.parameters at 0x...>

# In[12]:
list(linear_model.parameters())

# Out[12]:
[Parameter containing:
tensor([[0.7398]], requires_grad=True), Parameter containing:
tensor([0.7974], requires_grad=True)]

This call recurses into submodules defined in the module's init constructor and returns a flat list of all parameters encountered, so that we can conveniently pass it to the optimizer constructor as we did previously.

We can already figure out what happens in the training loop.
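The recursion into submodules can be seen with a toy wrapper module (the Wrapper name is invented for illustration): the wrapper defines no Parameters of its own, yet its parameters() call reports those of the nn.Linear it holds.

```python
import torch.nn as nn

class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        # A submodule stored as a top-level attribute, as the NOTE requires
        self.inner = nn.Linear(1, 1)

wrapped = Wrapper()
# parameters() recurses into self.inner and returns its weight and bias
print([p.shape for p in wrapped.parameters()])
```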
The optimizer is provided with a list of tensors that were defined with requires_grad = True; all Parameters are defined this way by definition, since they need to be optimized by gradient descent. When training_loss.backward() is called, grad is accumulated on the leaf nodes of the graph, which are precisely the parameters that were passed to the optimizer.

At this point, the SGD optimizer has everything it needs. When optimizer.step() is called, it will iterate through each Parameter and change it by an amount proportional to what is stored in its grad attribute. Pretty clean design.

Let's take a look at the training loop now:

# In[13]:
def training_loop(n_epochs, optimizer, model, loss_fn, t_u_train, t_u_val,
                  t_c_train, t_c_val):
    for epoch in range(1, n_epochs + 1):
        t_p_train = model(t_u_train)    # The model is now passed in, instead of the individual params.
        loss_train = loss_fn(t_p_train, t_c_train)

        t_p_val = model(t_u_val)
        loss_val = loss_fn(t_p_val, t_c_val)

        optimizer.zero_grad()
        loss_train.backward()
        optimizer.step()

        if epoch == 1 or epoch % 1000 == 0:
            print(f"Epoch {epoch}, Training loss {loss_train.item():.4f},"
                  f" Validation loss {loss_val.item():.4f}")

It hasn't changed practically at all, except that now we don't pass params explicitly to model since the model itself holds its Parameters internally.

There's one last bit that we can leverage from torch.nn: the loss. Indeed, nn comes with several common loss functions, among them nn.MSELoss (MSE stands for Mean Square Error), which is exactly what we defined earlier as our loss_fn. Loss functions in nn are still subclasses of nn.Module, so we will create an instance and call it as a function.
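As a quick sanity check (not from the book), we can confirm on made-up numbers that nn.MSELoss computes the same value as the handwritten loss from chapter 5, ((t_p - t_c) ** 2).mean():

```python
import torch
import torch.nn as nn

t_p = torch.tensor([1.0, 2.0, 3.0])   # fake predictions, for illustration
t_c = torch.tensor([1.5, 2.0, 2.0])   # fake targets

loss_fn = nn.MSELoss()                 # a Module: instantiate, then call
loss_nn = loss_fn(t_p, t_c)
loss_hand = ((t_p - t_c) ** 2).mean()  # the chapter 5 formula
print(loss_nn.item(), loss_hand.item())
```

Both calls yield the same mean squared error, which is why swapping one for the other in the training loop changes nothing numerically.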
In our case, we get rid of the handwritten loss_fn and replace it:

# In[15]:
linear_model = nn.Linear(1, 1)
optimizer = optim.SGD(linear_model.parameters(), lr=1e-2)

training_loop(
    n_epochs = 3000,
    optimizer = optimizer,
    model = linear_model,
    loss_fn = nn.MSELoss(),    # The loss function is also passed in; we are no longer using our handwritten loss function from earlier.
    t_u_train = t_un_train,
    t_u_val = t_un_val,
    t_c_train = t_c_train,
    t_c_val = t_c_val)

print()
print(linear_model.weight)
print(linear_model.bias)

# Out[15]:
Epoch 1, Training loss 134.9599, Validation loss 183.1707
Epoch 1000, Training loss 4.8053, Validation loss 4.7307
Epoch 2000, Training loss 3.0285, Validation loss 3.0889
Epoch 3000, Training loss 2.8569, Validation loss 3.9105

Parameter containing:
tensor([[5.4319]], requires_grad=True)
Parameter containing:
tensor([-17.9693], requires_grad=True)

Everything else input into our training loop stays the same. Even our results remain the same as before. Of course, getting the same results is expected, as a difference would imply a bug in one of the two implementations.

6.3 Finally a neural network

It's been a long journey; there has been a lot to explore for these 20-something lines of code we require to define and train a model. Hopefully by now the magic involved in training has vanished and left room for the mechanics. What we learned so far will allow us to own the code we write instead of merely poking at a black box when things get more complicated.

There's one last step left to take: replacing our linear model with a neural network as our approximating function. We said earlier that using a neural network will not result in a higher-quality model, since the process underlying our calibration problem was fundamentally linear.
However, it's good to make the leap from linear to neural network in a controlled environment so we won't feel lost later.

6.3.1 Replacing the linear model

We are going to keep everything else fixed, including the loss function, and only redefine model. Let's build the simplest possible neural network: a linear module, followed by an activation function, feeding into another linear module. The first linear + activation layer is commonly referred to as a hidden layer for historical reasons, since its outputs are not observed directly but fed into the output layer. While the input and output of the model are both of size 1 (they have one input and one output feature), the size of the output of the first linear module is usually larger than 1. Recalling our earlier explanation of the role of activations, this can lead different units to respond to different ranges of the input, which increases the capacity of our model. The last linear layer will take the output of activations and combine them linearly to produce the output value.

There is no standard way to depict neural networks. Figure 6.8 shows two ways that seem to be somewhat prototypical: the left side shows how our network might be depicted in basic introductions, whereas a style similar to that on the right is often used in the more advanced literature and research papers. It is common to make diagram blocks that roughly correspond to the neural network modules PyTorch offers (though sometimes things like the Tanh activation layer are not explicitly shown). Note that one somewhat subtle difference between the two is that the graph on the left has the inputs and (intermediate) results in the circles as the main elements. On the right, the computational steps are more prominent.

[Figure 6.8 Our simplest neural network in two views. Left: beginner's version (input, hidden, output nodes). Right: higher-level version (input (1), linear (13), Tanh, linear (1), output (1)).]
nn provides a simple way to concatenate modules through the nn.Sequential container:

# In[16]:
seq_model = nn.Sequential(
    nn.Linear(1, 13),    # We chose 13 arbitrarily: a number that is a different size from the other tensor shapes we have floating around.
    nn.Tanh(),
    nn.Linear(13, 1))    # This 13 must match the first size, however.
seq_model

# Out[16]:
Sequential(
  (0): Linear(in_features=1, out_features=13, bias=True)
  (1): Tanh()
  (2): Linear(in_features=13, out_features=1, bias=True)
)

The end result is a model that takes the inputs expected by the first module specified as an argument of nn.Sequential, passes intermediate outputs to subsequent modules, and produces the output returned by the last module. The model fans out from 1 input feature to 13 hidden features, passes them through a tanh activation, and linearly combines the resulting 13 numbers into 1 output feature.

6.3.2 Inspecting the parameters

Calling model.parameters() will collect weight and bias from both the first and second linear modules. It's instructive to inspect the parameters in this case by printing their shapes:

# In[17]:
[param.shape for param in seq_model.parameters()]

# Out[17]:
[torch.Size([13, 1]), torch.Size([13]), torch.Size([1, 13]), torch.Size([1])]

These are the tensors that the optimizer will get. Again, after we call backward() on the loss, all parameters are populated with their grad, and the optimizer then updates their values accordingly during the optimizer.step() call. Not that different from our previous linear model, eh? After all, they're both differentiable models that can be trained using gradient descent.

A few notes on parameters of nn.Modules. When inspecting parameters of a model made up of several submodules, it is handy to be able to identify parameters by name. There's a method for that, called named_parameters:

# In[18]:
for name, param in seq_model.named_parameters():
    print(name, param.shape)

# Out[18]:
0.weight torch.Size([13, 1])
0.bias torch.Size([13])
2.weight torch.Size([1, 13])
2.bias torch.Size([1])

The name of each module in Sequential is just the ordinal with which the module appears in the arguments. Interestingly, Sequential also accepts an OrderedDict,4 in which we can name each module passed to Sequential:

# In[19]:
from collections import OrderedDict

seq_model = nn.Sequential(OrderedDict([
    ('hidden_linear', nn.Linear(1, 8)),
    ('hidden_activation', nn.Tanh()),
    ('output_linear', nn.Linear(8, 1))
]))

seq_model

# Out[19]:
Sequential(
  (hidden_linear): Linear(in_features=1, out_features=8, bias=True)
  (hidden_activation): Tanh()
  (output_linear): Linear(in_features=8, out_features=1, bias=True)
)

This allows us to get more explanatory names for submodules:

# In[20]:
for name, param in seq_model.named_parameters():
    print(name, param.shape)

# Out[20]:
hidden_linear.weight torch.Size([8, 1])
hidden_linear.bias torch.Size([8])
output_linear.weight torch.Size([1, 8])
output_linear.bias torch.Size([1])

This is more descriptive; but it does not give us more flexibility in the flow of data through the network, which remains a purely sequential pass-through: the nn.Sequential is very aptly named. We will see how to take full control of the processing of input data by subclassing nn.Module ourselves in chapter 8.

We can also access a particular Parameter by using submodules as attributes:

# In[21]:
seq_model.output_linear.bias

# Out[21]:
Parameter containing:
tensor([-0.0173], requires_grad=True)

4 Not all versions of Python specify the iteration order for dict, so we're using OrderedDict here to ensure the ordering of the layers and emphasize that the order of the layers matters.
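To confirm what the named Sequential does under the hood, here is a sketch (not book code) that replays the forward pass by hand through the named submodules; the point is the equality at the end.

```python
import torch
import torch.nn as nn
from collections import OrderedDict

seq_model = nn.Sequential(OrderedDict([
    ('hidden_linear', nn.Linear(1, 8)),
    ('hidden_activation', nn.Tanh()),
    ('output_linear', nn.Linear(8, 1)),
]))

x = torch.ones(2, 1)  # a tiny batch of 2 samples, 1 feature each
# Replay the container by hand: linear, tanh, linear
h = torch.tanh(x @ seq_model.hidden_linear.weight.t()
               + seq_model.hidden_linear.bias)
y_manual = (h @ seq_model.output_linear.weight.t()
            + seq_model.output_linear.bias)
y_seq = seq_model(x)
print(torch.allclose(y_manual, y_seq))  # True
```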
This is useful for inspecting parameters or their gradients: for instance, to monitor gradients during training, as we did at the beginning of this chapter. Say we want to print out the gradients of weight of the linear portion of the hidden layer. We can run the training loop for the new neural network model and then look at the resulting gradients after the last epoch:

# In[22]:
optimizer = optim.SGD(seq_model.parameters(), lr=1e-3)    # We've dropped the learning rate a bit to help with stability.

training_loop(
    n_epochs = 5000,
    optimizer = optimizer,
    model = seq_model,
    loss_fn = nn.MSELoss(),
    t_u_train = t_un_train,
    t_u_val = t_un_val,
    t_c_train = t_c_train,
    t_c_val = t_c_val)

print('output', seq_model(t_un_val))
print('answer', t_c_val)
print('hidden', seq_model.hidden_linear.weight.grad)

# Out[22]:
Epoch 1, Training loss 182.9724, Validation loss 231.8708
Epoch 1000, Training loss 6.6642, Validation loss 3.7330
Epoch 2000, Training loss 5.1502, Validation loss 0.1406
Epoch 3000, Training loss 2.9653, Validation loss 1.0005
Epoch 4000, Training loss 2.2839, Validation loss 1.6580
Epoch 5000, Training loss 2.1141, Validation loss 2.0215
output tensor([[-1.9930],
        [20.8729]], grad_fn=<AddmmBackward>)
answer tensor([[-4.],
        [21.]])
hidden tensor([[ 0.0272],
        [ 0.0139],
        [ 0.1692],
        [ 0.1735],
        [-0.1697],
        [ 0.1455],
        [-0.0136],
        [-0.0554]])

6.3.3 Comparing to the linear model

We can also evaluate the model on all of the data and see how it differs from a line:

# In[23]:
from matplotlib import pyplot as plt

t_range = torch.arange(20., 90.).unsqueeze(1)

fig = plt.figure(dpi=600)
plt.xlabel("Fahrenheit")
plt.ylabel("Celsius")
plt.plot(t_u.numpy(), t_c.numpy(), 'o')
plt.plot(t_range.numpy(), seq_model(0.1 * t_range).detach().numpy(), 'c-')
plt.plot(t_u.numpy(), seq_model(0.1 * t_u).detach().numpy(), 'kx')

The result is shown in figure 6.9.
We can appreciate that the neural network has a tendency to overfit, as we discussed in chapter 5, since it tries to chase the measurements, including the noisy ones. Even our tiny neural network has too many parameters to fit the few measurements we have. It doesn't do a bad job, though, overall.

[Figure 6.9 The plot of our neural network model, with input data (circles) and model output (Xs). The continuous line shows behavior between samples. Axes: Fahrenheit (x) versus Celsius (y).]

6.4 Conclusion

We've covered a lot in chapters 5 and 6, although we have been dealing with a very simple problem. We dissected building differentiable models and training them using gradient descent, first using raw autograd and then relying on nn. By now you should have confidence in your understanding of what's going on behind the scenes. Hopefully this taste of PyTorch has given you an appetite for more!

6.5 Exercises

1 Experiment with the number of hidden neurons in our simple neural network model, as well as the learning rate.
  a What changes result in more linear output from the model?
  b Can you get the model to obviously overfit the data?
2 The third-hardest problem in physics is finding a proper wine to celebrate discoveries. Load the wine data from chapter 4, and create a new model with the appropriate number of input parameters.
  a How long does it take to train compared to the temperature data we have been using?
  b Can you explain what factors contribute to the training times?
  c Can you get the loss to decrease while training on this dataset?
  d How would you go about graphing this dataset?

6.6 Summary

- Neural networks can be automatically adapted to specialize themselves on the problem at hand.
- Neural networks allow easy access to the analytical derivatives of the loss with respect to any parameter in the model, which makes evolving the parameters very efficient.
  Thanks to its automated differentiation engine, PyTorch provides such derivatives effortlessly.
- Activation functions around linear transformations make neural networks capable of approximating highly nonlinear functions, at the same time keeping them simple enough to optimize.
- The nn module together with the tensor standard library provide all the building blocks for creating neural networks.
- To recognize overfitting, it's essential to maintain the training set of data points separate from the validation set. There's no one recipe to combat overfitting, but getting more data, or more variability in the data, and resorting to simpler models are good starts.
- Anyone doing data science should be plotting data all the time.

7 Telling birds from airplanes: Learning from images

This chapter covers
- Building a feed-forward neural network
- Loading data using Datasets and DataLoaders
- Understanding classification loss

The last chapter gave us the opportunity to dive into the inner mechanics of learning through gradient descent, and the facilities that PyTorch offers to build models and optimize them. We did so using a simple regression model of one input and one output, which allowed us to have everything in plain sight but admittedly was only borderline exciting.

In this chapter, we'll keep moving ahead with building our neural network foundations. This time, we'll turn our attention to images. Image recognition is arguably the task that made the world realize the potential of deep learning.

We will approach a simple image recognition problem step by step, building from a simple neural network like the one we defined in the last chapter. This time, instead of a tiny dataset of numbers, we'll use a more extensive dataset of tiny images. Let's download the dataset first and get to work preparing it for use.
7.1 A dataset of tiny images

There is nothing like an intuitive understanding of a subject, and there is nothing to achieve that like working on simple data. One of the most basic datasets for image recognition is the handwritten digit-recognition dataset known as MNIST. Here we will use another dataset that is similarly simple and a bit more fun. It's called CIFAR-10, and, like its sibling CIFAR-100, it has been a computer vision classic for a decade.

CIFAR-10 consists of 60,000 tiny 32 × 32 color (RGB) images, labeled with an integer corresponding to 1 of 10 classes: airplane (0), automobile (1), bird (2), cat (3), deer (4), dog (5), frog (6), horse (7), ship (8), and truck (9).1 Nowadays, CIFAR-10 is considered too simple for developing or validating new research, but it serves our learning purposes just fine. We will use the torchvision module to automatically download the dataset and load it as a collection of PyTorch tensors. Figure 7.1 gives us a taste of CIFAR-10.

1 The images were collected and labeled by Krizhevsky, Nair, and Hinton of the Canadian Institute For Advanced Research (CIFAR) and were drawn from a larger collection of unlabeled 32 × 32 color images: the "80 million tiny images dataset" from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology.
[Figure 7.1 Image samples from all CIFAR-10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck]

7.1.1 Downloading CIFAR-10

As we anticipated, let's import torchvision and use the datasets module to download the CIFAR-10 data:

# In[2]:
from torchvision import datasets
data_path = '../data-unversioned/p1ch7/'
cifar10 = datasets.CIFAR10(data_path, train=True, download=True)      # Instantiates a dataset for the training data; TorchVision downloads the data if it is not present.
cifar10_val = datasets.CIFAR10(data_path, train=False, download=True) # With train=False, this gets us a dataset for the validation data, again downloading as necessary.

The first argument we provide to the CIFAR10 function is the location from which the data will be downloaded; the second specifies whether we're interested in the training set or the validation set; and the third says whether we allow PyTorch to download the data if it is not found in the location specified in the first argument.

Just like CIFAR10, the datasets submodule gives us precanned access to the most popular computer vision datasets, such as MNIST, Fashion-MNIST, CIFAR-100, SVHN, Coco, and Omniglot. In each case, the dataset is returned as a subclass of torch.utils.data.Dataset. We can see that the method-resolution order of our cifar10 instance includes it as a base class:

# In[4]:
type(cifar10).__mro__

# Out[4]:
(torchvision.datasets.cifar.CIFAR10,
 torchvision.datasets.vision.VisionDataset,
 torch.utils.data.dataset.Dataset,
 object)

7.1.2 The Dataset class

It's a good time to discover what being a subclass of torch.utils.data.Dataset means in practice. Looking at figure 7.2, we see what PyTorch Dataset is all about. It is an object that is required to implement two methods: __len__ and __getitem__.
The former should return the number of items in the dataset; the latter should return the item, consisting of a sample and its corresponding label (an integer index).2

In practice, when a Python object is equipped with the __len__ method, we can pass it as an argument to the len Python built-in function:

# In[5]:
len(cifar10)

# Out[5]:
50000

Similarly, since the dataset is equipped with the __getitem__ method, we can use the standard subscript for indexing tuples and lists to access individual items. Here, we get a PIL (Python Imaging Library, the PIL package) image with our desired output: an integer with the value 1, corresponding to "automobile":

# In[6]:
img, label = cifar10[99]
img, label, class_names[label]

# Out[6]:
(<PIL.Image.Image image mode=RGB size=32x32 at 0x...>, 1, 'automobile')

So, the sample in the data.CIFAR10 dataset is an instance of an RGB PIL image. We can plot it right away:

# In[7]:
plt.imshow(img)
plt.show()

This produces the output shown in figure 7.3. It's a red car!3

2 For some advanced uses, PyTorch also provides IterableDataset. This can be used in cases like datasets in which random access to the data is prohibitively expensive or does not make sense: for example, because data is generated on the fly.
3 It doesn't translate well to print; you'll have to take our word for it, or check it out in the eBook or the Jupyter Notebook.

[Figure 7.2 Concept of a PyTorch Dataset object: it doesn't necessarily hold the data, but it provides uniform access to it through __len__ and __getitem__.]
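The Dataset contract in figure 7.2 can be sketched with a minimal, self-contained implementation; the data here is made up and merely stands in for CIFAR-10.

```python
import torch
from torch.utils.data import Dataset

class TinyDataset(Dataset):
    """A toy dataset implementing the two required methods."""
    def __init__(self):
        # Four fake 3 x 32 x 32 "images" with alternating labels
        self.samples = [torch.zeros(3, 32, 32) for _ in range(4)]
        self.labels = [0, 1, 0, 1]

    def __len__(self):
        # Lets len(ds) work
        return len(self.samples)

    def __getitem__(self, idx):
        # Lets ds[idx] work; returns a (sample, label) pair like CIFAR10
        return self.samples[idx], self.labels[idx]

ds = TinyDataset()
print(len(ds))       # 4, via __len__
img, label = ds[2]   # subscripting calls __getitem__
print(img.shape, label)
```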
7.1.3 Dataset transforms

That's all very nice, but we'll likely need a way to convert the PIL image to a PyTorch tensor before we can do anything with it. That's where torchvision.transforms comes in. This module defines a set of composable, function-like objects that can be passed as an argument to a torchvision dataset such as datasets.CIFAR10(…), and that perform transformations on the data after it is loaded but before it is returned by __getitem__. We can see the list of available objects as follows:

# In[8]:
from torchvision import transforms
dir(transforms)

# Out[8]:
['CenterCrop',
 'ColorJitter',
 ...
 'Normalize',
 'Pad',
 'RandomAffine',
 ...
 'RandomResizedCrop',
 'RandomRotation',
 'RandomSizedCrop',
 ...
 'TenCrop',
 'ToPILImage',
 'ToTensor',
 ...
]

[Figure 7.3 The 99th image from the CIFAR-10 dataset: an automobile]

Among those transforms, we can spot ToTensor, which turns NumPy arrays and PIL images to tensors. It also takes care to lay out the dimensions of the output tensor as C × H × W (channel, height, width; just as we covered in chapter 4).

Let's try out the ToTensor transform. Once instantiated, it can be called like a function with the PIL image as the argument, returning a tensor as output:

# In[9]:
from torchvision import transforms

to_tensor = transforms.ToTensor()
img_t = to_tensor(img)
img_t.shape

# Out[9]:
torch.Size([3, 32, 32])

The image has been turned into a 3 × 32 × 32 tensor and therefore a 3-channel (RGB) 32 × 32 image. Note that nothing has happened to label; it is still an integer.
As we anticipated, we can pass the transform directly as an argument to dataset.CIFAR10:

# In[10]:
tensor_cifar10 = datasets.CIFAR10(data_path, train=True, download=False,
                                  transform=transforms.ToTensor())

At this point, accessing an element of the dataset will return a tensor, rather than a PIL image:

# In[11]:
img_t, _ = tensor_cifar10[99]
type(img_t)

# Out[11]:
torch.Tensor

As expected, the shape has the channel as the first dimension, while the scalar type is float32:

# In[12]:
img_t.shape, img_t.dtype

# Out[12]:
(torch.Size([3, 32, 32]), torch.float32)

Whereas the values in the original PIL image ranged from 0 to 255 (8 bits per channel), the ToTensor transform turns the data into a 32-bit floating-point per channel, scaling the values down from 0.0 to 1.0. Let's verify that:

# In[13]:
img_t.min(), img_t.max()

# Out[13]:
(tensor(0.), tensor(1.))

And let's verify that we're getting the same image out:

# In[14]:
plt.imshow(img_t.permute(1, 2, 0))    # Changes the order of the axes from C × H × W to H × W × C
plt.show()

# Out[14]:
As we can see in figure 7.4, we get the same output as before. It checks. Note how we have to use permute to change the order of the axes from C × H × W to H × W × C to match what Matplotlib expects.

[Figure 7.4 We've seen this one already: the same automobile as in figure 7.3.]

7.1.4 Normalizing data

Transforms are really handy because we can chain them using transforms.Compose, and they can handle normalization and data augmentation transparently, directly in the data loader. For instance, it's good practice to normalize the dataset so that each channel has zero mean and unitary standard deviation. We mentioned this in chapter 4, but now, after going through chapter 5, we also have an intuition for why: by choosing activation functions that are linear around 0 plus or minus 1 (or 2), keeping the data in the same range means it's more likely that neurons have nonzero gradients and, hence, will learn sooner. Also, normalizing each channel so that it has the same distribution will ensure that channel information can be mixed and updated through gradient descent using the same learning rate. This is just like the situation in section 5.4.4 when we rescaled the weight to be of the same magnitude as the bias in our temperature-conversion model.

In order to make it so that each channel has zero mean and unitary standard deviation, we can compute the mean value and the standard deviation of each channel across the dataset and apply the following transform: v_n[c] = (v[c] - mean[c]) / stdev[c]. This is what transforms.Normalize does. The values of mean and stdev must be computed offline (they are not computed by the transform). Let's compute them for the CIFAR-10 training set.

Since the CIFAR-10 dataset is small, we'll be able to manipulate it entirely in memory.
Let's stack all the tensors returned by the dataset along an extra dimension:

# In[15]:
imgs = torch.stack([img_t for img_t, _ in tensor_cifar10], dim=3)
imgs.shape

# Out[15]:
torch.Size([3, 32, 32, 50000])

Now we can easily compute the mean per channel:

# In[16]:
imgs.view(3, -1).mean(dim=1)

# Out[16]:
tensor([0.4915, 0.4823, 0.4468])

Recall that view(3, -1) keeps the three channels and merges all the remaining dimensions into one, figuring out the appropriate size. Here our 3 × 32 × 32 image is transformed into a 3 × 1,024 vector, and then the mean is taken over the 1,024 elements of each channel.

Computing the standard deviation is similar:

# In[17]:
imgs.view(3, -1).std(dim=1)

# Out[17]:
tensor([0.2470, 0.2435, 0.2616])

With these numbers in our hands, we can initialize the Normalize transform

# In[18]:
transforms.Normalize((0.4915, 0.4823, 0.4468), (0.2470, 0.2435, 0.2616))

# Out[18]:
Normalize(mean=(0.4915, 0.4823, 0.4468), std=(0.247, 0.2435, 0.2616))

and concatenate it after the ToTensor transform:

# In[19]:
transformed_cifar10 = datasets.CIFAR10(
    data_path, train=True, download=False,
    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4915, 0.4823, 0.4468),
                             (0.2470, 0.2435, 0.2616))
    ]))

Note that, at this point, plotting an image drawn from the dataset won't provide us with a faithful representation of the actual image:

# In[21]:
img_t, _ = transformed_cifar10[99]
plt.imshow(img_t.permute(1, 2, 0))
plt.show()

The renormalized red car we get is shown in figure 7.5. This is because normalization has shifted the RGB levels outside the 0.0 to 1.0 range and changed the overall magnitudes of the channels. All of the data is still there; it's just that Matplotlib renders it as black. We'll keep this in mind for the future. Still, we have a fancy dataset loaded that contains tens of thousands of images!
Figure 7.5  Our random CIFAR-10 image after normalization

That's quite convenient, because we were going to need something exactly like it.

7.2  Distinguishing birds from airplanes

Jane, our friend at the bird-watching club, has set up a fleet of cameras in the woods south of the airport. The cameras are supposed to save a shot when something enters the frame and upload it to the club's real-time bird-watching blog. The problem is that a lot of planes coming and going from the airport end up triggering the camera, so Jane spends a lot of time deleting pictures of airplanes from the blog. What she needs is an automated system like that shown in figure 7.6. Instead of manually deleting, she needs a neural network—an AI if we're into fancy marketing speak—to throw away the airplanes right away.

No worries! We'll take care of that, no problem—we just got the perfect dataset for it (what a coincidence, right?). We'll pick out all the birds and airplanes from our CIFAR-10 dataset and build a neural network that can tell birds and airplanes apart.

7.2.1  Building the dataset

The first step is to get the data in the right shape. We could create a Dataset subclass that only includes birds and airplanes. However, the dataset is small, and we only need indexing and len to work on our dataset. It doesn't actually have to be a subclass of torch.utils.data.dataset.Dataset! Well, why not take a shortcut and just filter the data in cifar10 and remap the labels so they are contiguous? Here's how:

# In[5]:
label_map = {0: 0, 2: 1}
class_names = ['airplane', 'bird']
cifar2 = [(img, label_map[label])
          for img, label in cifar10
          if label in [0, 2]]
cifar2_val = [(img, label_map[label])
              for img, label in cifar10_val
              if label in [0, 2]]
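As a side note, when no label remapping is needed, torch.utils.data.Subset offers the same kind of filtered view; a sketch on a tiny stand-in dataset (the data and labels here are made up for illustration):

```python
import torch
from torch.utils.data import Subset, TensorDataset

# A stand-in dataset: 10 samples with assorted labels
data = torch.arange(10).float().unsqueeze(1)
labels = torch.tensor([0, 2, 1, 0, 2, 3, 0, 2, 4, 1])
full = TensorDataset(data, labels)

# Keep only the samples labeled 0 ("airplane") or 2 ("bird");
# unlike the list comprehension above, Subset does not remap
# the labels to be contiguous
keep = [i for i, label in enumerate(labels) if int(label) in (0, 2)]
subset = Subset(full, keep)

print(len(subset))        # 6
print(int(subset[1][1]))  # 2
```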
Figure 7.6  The problem at hand: we're going to help our friend tell birds from airplanes for her blog, by training a neural network to do the job.

The cifar2 object satisfies the basic requirements for a Dataset—that is, __len__ and __getitem__ are defined—so we're going to use that. We should be aware, however, that this is a clever shortcut and we might wish to implement a proper Dataset if we hit limitations with it.4

4 Here, we built the new dataset manually and also wanted to remap the classes. In some cases, it may be enough to take a subset of the indices of a given dataset. This can be accomplished using the torch.utils.data.Subset class. Similarly, there is ConcatDataset to join datasets (of compatible items) into a larger one. For iterable datasets, ChainDataset gives a larger, iterable dataset.

We have a dataset! Next, we need a model to feed our data to.

7.2.2  A fully connected model

We learned how to build a neural network in chapter 5. We know that it's a tensor of features in, a tensor of features out. After all, an image is just a set of numbers laid out in a spatial configuration. OK, we don't know how to handle the spatial configuration part just yet, but in theory if we just take the image pixels and straighten them into a long 1D vector, we could consider those numbers as input features, right? This is what figure 7.7 illustrates.

Let's try that. How many features per sample? Well, 32 × 32 × 3: that is, 3,072 input features per sample. Starting from the model we built in chapter 5, our new model would be an nn.Linear with 3,072 input features and some number of hidden features,
followed by an activation, and then another nn.Linear that tapers the network down to an appropriate output number of features (2, for this use case):

# In[6]:
import torch.nn as nn

n_out = 2

model = nn.Sequential(
    nn.Linear(3072, 512),   # Input features, hidden layer size
    nn.Tanh(),
    nn.Linear(512, n_out),  # Hidden layer size, output classes
)

Figure 7.7  Treating our image as a 1D vector of values and training a fully connected classifier on it

We somewhat arbitrarily pick 512 hidden features. A neural network needs at least one hidden layer (of activations, so two modules) with a nonlinearity in between in order to be able to learn arbitrary functions in the way we discussed in section 6.3—otherwise, it would just be a linear model. The hidden features represent (learned) relations between the inputs encoded through the weight matrix. As such, the model might learn to "compare" vector elements 176 and 208, but it does not a priori focus on them because it is structurally unaware that these are, indeed, (row 5, pixel 16) and (row 6, pixel 16), and thus adjacent.

So we have a model. Next we'll discuss what our model output should be.

7.2.3  Output of a classifier

In chapter 6, the network produced the predicted temperature (a number with a quantitative meaning) as output. We could do something similar here: make our network output a single scalar value (so n_out = 1), cast the labels to floats (0.0 for airplane and 1.0 for bird), and use those as a target for MSELoss (the average of squared differences in the batch). Doing so, we would cast the problem into a regression problem. However, looking more closely, we are now dealing with something a bit different in nature.5

We need to recognize that the output is categorical: it's either a bird or an airplane (or something else if we had all 10 of the original classes).
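Such categorical labels are commonly encoded as indicator vectors, one position per class; a minimal sketch using PyTorch's one_hot helper (our own illustration):

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([0, 1, 1, 0])  # 0 = airplane, 1 = bird
onehot = F.one_hot(labels, num_classes=2)

print(onehot.tolist())  # [[1, 0], [0, 1], [0, 1], [1, 0]]
```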
As we learned in chapter 4, when we have to represent a categorical variable, we should switch to a one-hot-encoding representation of that variable, such as [1, 0] for airplane or [0, 1] for bird (the order is arbitrary). This will still work if we have 10 classes, as in the full CIFAR-10 dataset; we'll just have a vector of length 10.6

5 Using distance on the "probability" vectors would already have been much better than using MSELoss with the class numbers—which, recalling our discussion of types of values in the sidebar "Continuous, ordinal, and categorical values" from chapter 4, does not make sense for categories and does not work at all in practice. Still, MSELoss is not very well suited to classification problems.

In the ideal case, the network would output torch.tensor([1.0, 0.0]) for an airplane and torch.tensor([0.0, 1.0]) for a bird. Practically speaking, since our classifier will not be perfect, we can expect the network to output something in between. The key realization in this case is that we can interpret our output as probabilities: the first entry is the probability of "airplane," and the second is the probability of "bird."

Casting the problem in terms of probabilities imposes a few extra constraints on the outputs of our network:

- Each element of the output must be in the [0.0, 1.0] range (a probability of an outcome cannot be less than 0 or greater than 1).
- The elements of the output must add up to 1.0 (we're certain that one of the two outcomes will occur).

It sounds like a tough constraint to enforce in a differentiable way on a vector of numbers. Yet there's a very smart trick that does exactly that, and it's differentiable: it's called softmax.
7.2.4  Representing the output as probabilities

Softmax is a function that takes a vector of values and produces another vector of the same dimension, where the values satisfy the constraints we just listed to represent probabilities. The expression for softmax is shown in figure 7.8. That is, we take the elements of the vector, compute the elementwise exponential, and divide each element by the sum of exponentials. In code, it's something like this:

# In[7]:
def softmax(x):
    return torch.exp(x) / torch.exp(x).sum()

Let's test it on an input vector:

# In[8]:
x = torch.tensor([1.0, 2.0, 3.0])
softmax(x)

# Out[8]:
tensor([0.0900, 0.2447, 0.6652])

6 For the special binary classification case, using two values here is redundant, as one is always 1 minus the other. And indeed PyTorch lets us output only a single probability, using the nn.Sigmoid activation at the end of the model to get a probability and the binary cross-entropy loss function nn.BCELoss. There is also an nn.BCEWithLogitsLoss merging these two steps.

As expected, it satisfies the constraints on probability:

# In[9]:
softmax(x).sum()

# Out[9]:
tensor(1.)

Softmax is a monotone function, in that lower values in the input will correspond to lower values in the output. However, it's not scale invariant, in that the ratio between values is not preserved. In fact, the ratio between the first and second elements of the input is 0.5, while the ratio between the same elements in the output is 0.3678. This is not a real issue, since the learning process will drive the parameters of the model in a way that values have appropriate ratios. The nn module makes softmax available as a module.
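Two properties of softmax are worth seeing in code: it is invariant to adding a constant to all inputs (which is what numerically stable implementations exploit), and, as just noted, it does not preserve ratios. A sketch:

```python
import torch

def softmax(x):
    return torch.exp(x) / torch.exp(x).sum()

x = torch.tensor([1.0, 2.0, 3.0])
out = softmax(x)

# Shift invariance: subtracting a constant (here, the max)
# leaves the result unchanged
print(torch.allclose(out, softmax(x - x.max())))  # True

# Not scale invariant: the input ratio 1/2 = 0.5 becomes
# exp(1)/exp(2) = e^-1, roughly 0.368, in the output
print(abs(float(out[0] / out[1]) - 0.3679) < 1e-3)  # True
```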
Figure 7.8  Handwritten softmax: each element is between 0 and 1, and the sum of the elements equals 1

Since, as usual, input tensors may have an additional batch 0th dimension, or have dimensions along which they encode probabilities and others in which they don't, nn.Softmax requires us to specify the dimension along which the softmax function is applied:

# In[10]:
softmax = nn.Softmax(dim=1)

x = torch.tensor([[1.0, 2.0, 3.0],
                  [1.0, 2.0, 3.0]])

softmax(x)

# Out[10]:
tensor([[0.0900, 0.2447, 0.6652],
        [0.0900, 0.2447, 0.6652]])

In this case, we have two input vectors in two rows (just like when we work with batches), so we initialize nn.Softmax to operate along dimension 1.

Excellent! We can now add a softmax at the end of our model, and our network will be equipped to produce probabilities:

# In[11]:
model = nn.Sequential(
    nn.Linear(3072, 512),
    nn.Tanh(),
    nn.Linear(512, 2),
    nn.Softmax(dim=1))

We can actually try running the model before even training it. Let's do it, just to see what comes out. We first build a batch of one image, our bird (figure 7.9):

# In[12]:
img, _ = cifar2[0]
plt.imshow(img.permute(1, 2, 0))
plt.show()

Figure 7.9  A random bird from the CIFAR-10 dataset (after normalization)

Oh, hello there. In order to call the model, we need to make the input have the right dimensions. We recall that our model expects 3,072 features in the input, and that nn works with data organized into batches along the zeroth dimension. So we need to turn our 3 × 32 × 32 image into a 1D tensor and then add an extra dimension in the zeroth position.
We learned how to do this in chapter 3:

# In[13]:
img_batch = img.view(-1).unsqueeze(0)

Now we're ready to invoke our model:

# In[14]:
out = model(img_batch)
out

# Out[14]:
tensor([[0.4784, 0.5216]], grad_fn=<SoftmaxBackward>)

So, we got probabilities! Well, we know we shouldn't get too excited: the weights and biases of our linear layers have not been trained at all. Their elements are initialized randomly by PyTorch between -1.0 and 1.0. Interestingly, we also see grad_fn for the output, which is the tip of the backward computation graph (it will be used as soon as we need to backpropagate).7

In addition, while we know which output probability is supposed to be which (recall our class_names), our network has no indication of that. Is the first entry "airplane" and the second "bird," or the other way around? The network can't even tell that at this point. It's the loss function that associates a meaning with these two numbers, after backpropagation. If the labels are provided as index 0 for "airplane" and index 1 for "bird," then that's the order the outputs will be induced to take. Thus, after training, we will be able to get the label as an index by computing the argmax of the output probabilities: that is, the index at which we get the maximum probability. Conveniently, when supplied with a dimension, torch.max returns the maximum element along that dimension as well as the index at which that value occurs. In our case, we need to take the max along the probability vector (not across batches); therefore, dimension 1:

# In[15]:
_, index = torch.max(out, dim=1)
index

# Out[15]:
tensor([1])

7 While it is, in principle, possible to say that here the model is uncertain (because it assigns 48% and 52% probabilities to the two classes), it will turn out that typical training results in highly overconfident models. Bayesian neural networks can provide some remedy, but they are beyond the scope of this book.
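When only the index is needed, argmax gives the same answer as the index half of torch.max; a sketch on a made-up batch of outputs:

```python
import torch

# A made-up batch of two probability vectors
out = torch.tensor([[0.4784, 0.5216],
                    [0.9000, 0.1000]])

values, index = torch.max(out, dim=1)
print(index.tolist())              # [1, 0]
print(out.argmax(dim=1).tolist())  # [1, 0]
```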
Our model says the image is a bird. Pure luck. But we have adapted our model output to the classification task at hand by getting it to output probabilities. We also have now run our model against an input image and verified that our plumbing works. Time to get training. As in the previous two chapters, we need a loss to minimize during training.

7.2.5  A loss for classifying

We just mentioned that the loss is what gives probabilities meaning. In chapters 5 and 6, we used mean square error (MSE) as our loss. We could still use MSE and make our output probabilities converge to [0.0, 1.0] and [1.0, 0.0]. However, thinking about it, we're not really interested in reproducing these values exactly. Looking back at the argmax operation we used to extract the index of the predicted class, what we're really interested in is that the first probability is higher than the second for airplanes and vice versa for birds. In other words, we want to penalize misclassifications rather than painstakingly penalize everything that doesn't look exactly like a 0.0 or 1.0.

What we need to maximize in this case is the probability associated with the correct class, out[class_index], where out is the output of softmax and class_index is a vector containing 0 for "airplane" and 1 for "bird" for each sample. This quantity—that is, the probability associated with the correct class—is referred to as the likelihood (of our model's parameters, given the data).8 In other words, we want a loss function that is very high when the likelihood is low: so low that the alternatives have a higher probability. Conversely, the loss should be low when the likelihood is higher than the alternatives, and we're not really fixated on driving the probability up to 1.

There's a loss function that behaves that way, and it's called negative log likelihood (NLL).
It has the expression NLL = -sum(log(out_i[c_i])), where the sum is taken over N samples and c_i is the correct class for sample i. Let's take a look at figure 7.10, which shows the NLL as a function of predicted probability.

8 For a succinct definition of the terminology, refer to David MacKay's Information Theory, Inference, and Learning Algorithms (Cambridge University Press, 2003), section 2.3.3.

Figure 7.10  The NLL loss as a function of the predicted probabilities

The figure shows that when low probabilities are assigned to the data, the NLL grows to infinity, whereas it decreases at a rather shallow rate when probabilities are greater than 0.5. Remember that the NLL takes probabilities as input; so, as the likelihood grows, the other probabilities will necessarily decrease.

Summing up, our loss for classification can be computed as follows. For each sample in the batch:

1 Run the forward pass, and obtain the output values from the last (linear) layer.
2 Compute their softmax, and obtain probabilities.
3 Take the predicted probability corresponding to the correct class (the likelihood of the parameters). Note that we know what the correct class is because it's a supervised problem—it's our ground truth.
4 Compute its logarithm, slap a minus sign in front of it, and add it to the loss.

So, how do we do this in PyTorch? PyTorch has an nn.NLLLoss class. However (gotcha ahead), as opposed to what you might expect, it does not take probabilities but rather takes a tensor of log probabilities as input. It then computes the NLL of our model given the batch of data. There's a good reason behind the input convention: taking the logarithm of a probability is tricky when the probability gets close to zero.
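Both the formula and the log-probability convention can be checked numerically; a sketch on made-up probabilities (note that nn.NLLLoss averages over the batch by default, rather than summing):

```python
import torch
import torch.nn as nn

# Made-up softmax outputs for a batch of three samples
probs = torch.tensor([[0.9, 0.1],
                      [0.4, 0.6],
                      [0.2, 0.8]])
targets = torch.tensor([0, 1, 1])

# NLL by hand: mean of -log(probability of the correct class)
manual = -torch.log(probs[torch.arange(3), targets]).mean()

# nn.NLLLoss expects LOG probabilities, not probabilities
auto = nn.NLLLoss()(torch.log(probs), targets)

print(torch.allclose(manual, auto))  # True
```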
The workaround is to use nn.LogSoftmax instead of nn.Softmax, which takes care to make the calculation numerically stable. We can now modify our model to use nn.LogSoftmax as the output module:

model = nn.Sequential(
    nn.Linear(3072, 512),
    nn.Tanh(),
    nn.Linear(512, 2),
    nn.LogSoftmax(dim=1))

Then we instantiate our NLL loss:

loss = nn.NLLLoss()

The loss takes the output of nn.LogSoftmax for a batch as the first argument and a tensor of class indices (zeros and ones, in our case) as the second argument. We can now test it with our birdie:

img, label = cifar2[0]
out = model(img.view(-1).unsqueeze(0))
loss(out, torch.tensor([label]))

tensor(0.6509, grad_fn=<NllLossBackward>)

Ending our investigation of losses, we can look at how using cross-entropy loss improves over MSE. In figure 7.11, we see that the cross-entropy loss has some slope when the prediction is off target (in the low-loss corner, the correct class is assigned a predicted probability of 99.97%), while the MSE we dismissed at the beginning saturates much earlier and—crucially—also for very wrong predictions. The underlying reason is that the slope of the MSE is too low to compensate for the flatness of the softmax function for wrong predictions. This is why the MSE for probabilities is not a good fit for classification work.

7.2.6  Training the classifier

All right!
We're ready to bring back the training loop we wrote in chapter 5 and see how it trains (the process is illustrated in figure 7.12):

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Linear(3072, 512),
    nn.Tanh(),
    nn.Linear(512, 2),
    nn.LogSoftmax(dim=1))

learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

loss_fn = nn.NLLLoss()

n_epochs = 100

for epoch in range(n_epochs):
    for img, label in cifar2:
        out = model(img.view(-1).unsqueeze(0))
        loss = loss_fn(out, torch.tensor([label]))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Prints the loss for the last image. In the next chapter, we will
    # improve our output to give an average over the entire epoch.
    print("Epoch: %d, Loss: %f" % (epoch, float(loss)))

Figure 7.11  The cross entropy (left) and MSE between predicted probabilities and the target probability vector (right) as functions of the predicted scores—that is, before the (log-)softmax

Looking more closely, we made a small change to the training loop. In chapter 5, we had just one loop: over the epochs (recall that an epoch ends when all samples in the training set have been evaluated). We figured that evaluating all 10,000 images in a single batch would be too much, so we decided to have an inner loop where we evaluate one sample at a time and backpropagate over that single sample. While in the first case the gradient is accumulated over all samples before being applied, in this case we apply changes to parameters based on a very partial estimation
of the gradient on a single sample. However, what is a good direction for reducing the loss based on one sample might not be a good direction for others. By shuffling samples at each epoch and estimating the gradient on one or (preferably, for stability) a few samples at a time, we are effectively introducing randomness in our gradient descent. Remember SGD? It stands for stochastic gradient descent, and this is what the S is about: working on small batches (aka minibatches) of shuffled data. It turns out that following gradients estimated over minibatches, which are poorer approximations of gradients estimated across the whole dataset, helps convergence and prevents the optimization process from getting stuck in local minima it encounters along the way.

Figure 7.12  Training loops: (A) averaging updates over the whole dataset; (B) updating the model at each sample; (C) averaging updates over minibatches

As depicted in figure 7.13, gradients from minibatches are randomly off the ideal trajectory, which is part of the reason why we want to use a reasonably small learning rate. Shuffling the dataset at each epoch helps ensure that the sequence of gradients estimated over minibatches is representative of the gradients computed across the full dataset.
Typically, minibatches are a constant size that we need to set prior to training, just like the learning rate. These are called hyperparameters, to distinguish them from the parameters of a model.

Figure 7.13  Gradient descent averaged over the whole dataset (light path) versus stochastic gradient descent, where the gradient is estimated on randomly picked minibatches

In our training code, we chose minibatches of size 1 by picking one item at a time from the dataset. The torch.utils.data module has a class that helps with shuffling and organizing the data in minibatches: DataLoader. The job of a data loader is to sample minibatches from a dataset, giving us the flexibility to choose from different sampling strategies. A very common strategy is uniform sampling after shuffling the data at each epoch. Figure 7.14 shows the data loader shuffling the indices it gets from the Dataset. Let's see how this is done.
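Before handing the job to DataLoader, the shuffle-then-slice idea can be written out by hand; a minimal sketch using torch.randperm (our own illustration, not the book's code):

```python
import torch

n_samples, batch_size = 10, 4
data = torch.arange(n_samples)

# One epoch: shuffle the indices, then walk them in minibatches
shuffled = torch.randperm(n_samples)
batches = [data[shuffled[i:i + batch_size]]
           for i in range(0, n_samples, batch_size)]

print([len(b) for b in batches])  # [4, 4, 2]
# Every sample appears exactly once per epoch
print(sorted(torch.cat(batches).tolist()))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```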
Figure 7.14  A data loader dispensing minibatches by using a dataset to sample individual data items

At a minimum, the DataLoader constructor takes a Dataset object as input, along with batch_size and a shuffle Boolean that indicates whether the data needs to be shuffled at the beginning of each epoch:

train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64,
                                           shuffle=True)

A DataLoader can be iterated over, so we can use it directly in the inner loop of our new training code:

import torch
import torch.nn as nn
import torch.optim as optim

train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64,
                                           shuffle=True)

model = nn.Sequential(
    nn.Linear(3072, 512),
    nn.Tanh(),
    nn.Linear(512, 2),
    nn.LogSoftmax(dim=1))

learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

loss_fn = nn.NLLLoss()

n_epochs = 100

for epoch in range(n_epochs):
    for imgs, labels in train_loader:
        batch_size = imgs.shape[0]
        outputs = model(imgs.view(batch_size, -1))
        loss = loss_fn(outputs, labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print("Epoch: %d, Loss: %f" % (epoch, float(loss)))

At each inner iteration, imgs is a tensor of size 64 × 3 × 32 × 32—that is, a minibatch of 64 (32 × 32) RGB images—while labels is a tensor of size 64 containing label indices. Let's run our training:

Epoch: 0, Loss: 0.523478
Epoch: 1, Loss: 0.391083
Epoch: 2, Loss: 0.407412
Epoch: 3, Loss: 0.364203
...
Epoch: 96, Loss: 0.019537
Epoch: 97, Loss: 0.008973
Epoch: 98, Loss: 0.002607
Epoch: 99, Loss: 0.026200

We see that the loss decreases somehow, but we have no idea whether it's low enough.
(Due to the shuffling, the loss printed at each epoch is now for a random batch—clearly something we want to improve in chapter 8.)

Since our goal here is to correctly assign classes to images, and preferably do that on an independent dataset, we can compute the accuracy of our model on the validation set in terms of the number of correct classifications over the total:

val_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64,
                                         shuffle=False)

correct = 0
total = 0

with torch.no_grad():
    for imgs, labels in val_loader:
        batch_size = imgs.shape[0]
        outputs = model(imgs.view(batch_size, -1))
        _, predicted = torch.max(outputs, dim=1)
        total += labels.shape[0]
        correct += int((predicted == labels).sum())

print("Accuracy: %f" % (correct / total))

Accuracy: 0.794000

Not a great performance, but quite a lot better than random. In our defense, our model was quite a shallow classifier; it's a miracle that it worked at all. It did because our dataset is really simple—a lot of the samples in the two classes likely have systematic differences (such as the color of the background) that help the model tell birds from airplanes, based on a few pixels.

We can certainly add some bling to our model by including more layers, which will increase the model's depth and capacity. One rather arbitrary possibility is

model = nn.Sequential(
    nn.Linear(3072, 1024),
    nn.Tanh(),
    nn.Linear(1024, 512),
    nn.Tanh(),
    nn.Linear(512, 128),
    nn.Tanh(),
    nn.Linear(128, 2),
    nn.LogSoftmax(dim=1))

Here we are trying to taper the number of features more gently toward the output, in the hope that intermediate layers will do a better job of squeezing information in increasingly shorter intermediate outputs. The combination of nn.LogSoftmax and nn.NLLLoss is equivalent to using nn.CrossEntropyLoss.
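That equivalence is easy to verify on random scores; a sketch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
scores = torch.randn(5, 2)              # raw model outputs (logits)
targets = torch.tensor([0, 1, 1, 0, 1])

# Route 1: LogSoftmax followed by NLLLoss
log_probs = nn.LogSoftmax(dim=1)(scores)
loss_a = nn.NLLLoss()(log_probs, targets)

# Route 2: CrossEntropyLoss directly on the raw scores
loss_b = nn.CrossEntropyLoss()(scores, targets)

print(torch.allclose(loss_a, loss_b))  # True
```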
This terminology is a particularity of PyTorch, as nn.NLLLoss computes, in fact, the cross entropy but with log-probability predictions as inputs, where nn.CrossEntropyLoss takes scores (sometimes called logits). Technically, nn.NLLLoss is the cross entropy between the Dirac distribution, putting all mass on the target, and the predicted distribution given by the log-probability inputs. To add to the confusion, in information theory, up to normalization by sample size, this cross entropy can be interpreted as a negative log likelihood of the predicted distribution under the target distribution as an outcome. So both losses are the negative log likelihood of the model parameters given the data when our model predicts the (softmax-applied) probabilities. In this book, we won't rely on these details, but don't let the PyTorch naming confuse you when you see the terms used in the literature.

It is quite common to drop the last nn.LogSoftmax layer from the network and use nn.CrossEntropyLoss as a loss. Let us try that:

model = nn.Sequential(
    nn.Linear(3072, 1024),
    nn.Tanh(),
    nn.Linear(1024, 512),
    nn.Tanh(),
    nn.Linear(512, 128),
    nn.Tanh(),
    nn.Linear(128, 2))

loss_fn = nn.CrossEntropyLoss()

Note that the numbers will be exactly the same as with nn.LogSoftmax and nn.NLLLoss. It's just more convenient to do it all in one pass, with the only gotcha being that the output of our model will not be interpretable as probabilities (or log probabilities). We'll need to explicitly pass the output through a softmax to obtain those.

Training this model and evaluating the accuracy on the validation set (0.802000) lets us appreciate that a larger model bought us an increase in accuracy, but not that much. The accuracy on the training set is practically perfect (0.998100). What is this telling us? That we are overfitting our model in both cases.
Our fully connected model is finding a way to discriminate birds and airplanes on the training set by memorizing the training set, but performance on the validation set is not all that great, even if we choose a larger model.

PyTorch offers a quick way to determine how many parameters a model has through the parameters() method of nn.Module (the same method we use to provide the parameters to the optimizer). To find out how many elements are in each tensor instance, we can call the numel method. Summing those gives us our total count. Depending on our use case, counting parameters might require us to check whether a parameter has requires_grad set to True, as well. We might want to differentiate the number of trainable parameters from the overall model size. Let's take a look at what we have right now:

# In[7]:
numel_list = [p.numel()
              for p in connected_model.parameters()
              if p.requires_grad == True]
sum(numel_list), numel_list

# Out[7]:
(3737474, [3145728, 1024, 524288, 512, 65536, 128, 256, 2])

Wow, 3.7 million parameters! Not a small network for such a small input image, is it? Even our first network was pretty large:

# In[9]:
numel_list = [p.numel() for p in first_model.parameters()]
sum(numel_list), numel_list

# Out[9]:
(1574402, [1572864, 512, 1024, 2])

The number of parameters in our first model is roughly half that in our latest model. Well, from the list of individual parameter sizes, we start having an idea what's responsible: the first module, which has 1.5 million parameters. In our full network, we had 1,024 output features, which led the first linear module to have 3 million parameters. This shouldn't be unexpected: we know that a linear layer computes y = weight * x + bias, and if x has length 3,072 (disregarding the batch dimension for simplicity) and y must have length 1,024, then the weight tensor needs to be of size 1,024 × 3,072 and the bias size must be 1,024. And 1,024 * 3,072 + 1,024 = 3,146,752, as we found earlier.
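The same arithmetic can be bundled into a small helper (our own, not from the book) and checked for the 3,072 → 1,024 layer:

```python
import torch.nn as nn

def count_params(module):
    # Total number of elements across all parameter tensors
    return sum(p.numel() for p in module.parameters())

linear = nn.Linear(3072, 1024)
# weight is 1024 x 3072 and bias is 1024:
# 1,024 * 3,072 + 1,024 = 3,146,752
print(count_params(linear))  # 3146752
```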
We can verify these quantities directly:

# In[10]:
linear = nn.Linear(3072, 1024)
linear.weight.shape, linear.bias.shape

# Out[10]:
(torch.Size([1024, 3072]), torch.Size([1024]))

What is this telling us? That our neural network won't scale very well with the number of pixels. What if we had a 1,024 × 1,024 RGB image? That's 3.1 million input values. Even abruptly going to 1,024 hidden features (which is not going to work for our classifier), we would have over 3 billion parameters. Using 32-bit floats, we're already at 12 GB of RAM, and we haven't even hit the second layer, much less computed and stored the gradients. That's just not going to fit on most present-day GPUs.

7.2.7 The limits of going fully connected

Let's reason about what using a linear module on a 1D view of our image entails—figure 7.15 shows what is going on. It's like taking every single input value—that is, every single component in our RGB image—and computing a linear combination of it with all the other values for every output feature. On one hand, we are allowing for the combination of any pixel with every other pixel in the image being potentially relevant for our task. On the other hand, we aren't utilizing the relative position of neighboring or far-away pixels, since we are treating the image as one big vector of numbers.

Figure 7.15 Using a fully connected module with an input image: every input pixel is combined with every other to produce each element in the output.
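The scaling argument above (over 3 billion parameters, roughly 12 GB) can be checked with a few lines of arithmetic; the layer sizes here are the hypothetical ones from the text:

```python
# First linear layer for a flattened 1,024 x 1,024 RGB image
n_in = 3 * 1024 * 1024                 # 3.1 million input values
n_hidden = 1024                        # hypothetical hidden features
n_params = n_in * n_hidden + n_hidden  # weight matrix plus bias vector

print(n_params)                        # over 3 billion parameters
print(n_params * 4 / 2**30)            # ~12 GiB at 4 bytes per 32-bit float
```

And this is just the first layer: optimizer state and gradients multiply the memory bill further.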
An airplane flying in the sky captured in a 32 × 32 image will be very roughly similar to a dark, cross-like shape on a blue background. A fully connected network as in figure 7.15 would need to learn that when pixel 0,1 is dark, pixel 1,1 is also dark, and so on, that's a good indication of an airplane. This is illustrated in the top half of figure 7.16. However, shift the same airplane by one pixel or more as in the bottom half of the figure, and the relationships between pixels will have to be relearned from scratch: this time, an airplane is likely when pixel 0,2 is dark, pixel 1,2 is dark, and so on. In more technical terms, a fully connected network is not translation invariant. This means a network that has been trained to recognize a Spitfire starting at position 4,4 will not be able to recognize the exact same Spitfire starting at position 8,8. We would then have to augment the dataset—that is, apply random translations to images during training—so the network would have a chance to see Spitfires all over the image, and we would need to do this for every image in the dataset (for the record, we could concatenate a transform from torchvision.transforms to do this transparently).

Figure 7.16 Translation invariance, or the lack thereof, with fully connected layers
However, this data augmentation strategy comes at a cost: the number of hidden features—that is, of parameters—must be large enough to store the information about all of these translated replicas.

So, at the end of this chapter, we have a dataset, a model, and a training loop, and our model learns. However, due to a mismatch between our problem and our network structure, we end up overfitting our training data, rather than learning the generalized features of what we want the model to detect.

We've created a model that allows for relating every pixel to every other pixel in the image, regardless of their spatial arrangement. We have a reasonable assumption that pixels that are closer together are in theory a lot more related, though. This means we are training a classifier that is not translation-invariant, so we're forced to use a lot of capacity for learning translated replicas if we want to hope to do well on the validation set.

There has to be a better way, right? Of course, most such questions in a book like this are rhetorical. The solution to our current set of problems is to change our model to use convolutional layers. We'll cover what that means in the next chapter.

7.3 Conclusion

In this chapter, we have solved a simple classification problem from dataset, to model, to minimizing an appropriate loss in a training loop. All of these things will be standard tools for your PyTorch toolbelt, and the skills needed to use them will be useful throughout your PyTorch tenure.

We've also found a severe shortcoming of our model: we have been treating 2D images as 1D data. Also, we do not have a natural way to incorporate the translation invariance of our problem. In the next chapter, you'll learn how to exploit the 2D nature of image data to get much better results.9 We could use what we have learned right away to process data without this translation invariance.
For example, using it on tabular data or the time-series data we met in chapter 4, we can probably do great things already. To some extent, it would also be possible to use it on text data that is appropriately represented.10

7.4 Exercises

1 Use torchvision to implement random cropping of the data.
  a How are the resulting images different from the uncropped originals?
  b What happens when you request the same image a second time?
  c What is the result of training using randomly cropped images?
2 Switch loss functions (perhaps MSE).
  a Does the training behavior change?
3 Is it possible to reduce the capacity of the network enough that it stops overfitting?
  a How does the model perform on the validation set when doing so?

9 The same caveat about translation invariance also applies to purely 1D data: an audio classifier should likely produce the same output even if the sound to be classified starts a tenth of a second earlier or later.
10 Bag-of-words models, which just average over word embeddings, can be processed with the network design from this chapter. More contemporary models take the positions of the words into account and need more advanced models.

7.5 Summary

- Computer vision is one of the most extensive applications of deep learning.
- Several datasets of annotated images are publicly available; many of them can be accessed via torchvision.
- Datasets and DataLoaders provide a simple yet effective abstraction for loading and sampling datasets.
- For a classification task, using the softmax function on the output of a network produces values that satisfy the requirements for being interpreted as probabilities. The ideal loss function for classification in this case is obtained by using the output of softmax as the input of a negative log likelihood function. The combination of softmax and such loss is called cross entropy in PyTorch.
- Nothing prevents us from treating images as vectors of pixel values, dealing with them using a fully connected network, just like any other numerical data. However, doing so makes it much harder to take advantage of the spatial relationships in the data.
- Simple models can be created using nn.Sequential.

Using convolutions to generalize

This chapter covers
- Understanding convolution
- Building a convolutional neural network
- Creating custom nn.Module subclasses
- The difference between the module and functional APIs
- Design choices for neural networks

In the previous chapter, we built a simple neural network that could fit (or overfit) the data, thanks to the many parameters available for optimization in the linear layers. We had issues with our model, however, in that it was better at memorizing the training set than it was at generalizing properties of birds and airplanes. Based on our model architecture, we've got a guess as to why that's the case. Due to the fully connected setup needed to detect the various possible translations of the bird or airplane in the image, we have both too many parameters (making it easier for the model to memorize the training set) and no position independence (making it harder to generalize). As we discussed in the last chapter, we could augment our training data by using a wide variety of recropped images to try to force generalization, but that won't address the issue of having too many parameters.

There is a better way! It consists of replacing the dense, fully connected affine transformation in our neural network unit with a different linear operation: convolution.

8.1 The case for convolutions

Let's get to the bottom of what convolutions are and how we can use them in our neural networks.
Yes, yes, we were in the middle of our quest to tell birds from airplanes, and our friend is still waiting for our solution, but this diversion is worth the extra time spent. We'll develop an intuition for this foundational concept in computer vision and then return to our problem equipped with superpowers.

In this section, we'll see how convolutions deliver locality and translation invariance. We'll do so by taking a close look at the formula defining convolutions and applying it using pen and paper—but don't worry, the gist will be in pictures, not formulas.

We said earlier that taking a 1D view of our input image and multiplying it by an n_output_features × n_input_features weight matrix, as is done in nn.Linear, means for each channel in the image, computing a weighted sum of all the pixels multiplied by a set of weights, one per output feature.

We also said that, if we want to recognize patterns corresponding to objects, like an airplane in the sky, we will likely need to look at how nearby pixels are arranged, and we will be less interested in how pixels that are far from each other appear in combination. Essentially, it doesn't matter if our image of a Spitfire has a tree or cloud or kite in the corner or not.

In order to translate this intuition into mathematical form, we could compute the weighted sum of a pixel with its immediate neighbors, rather than with all other pixels in the image. This would be equivalent to building weight matrices, one per output feature and output pixel location, in which all weights beyond a certain distance from a center pixel are zero. This will still be a weighted sum: that is, a linear operation.

8.1.1 What convolutions do

We identified one more desired property earlier: we would like these localized patterns to have an effect on the output regardless of their location in the image: that is, to be translation invariant.
To achieve this goal in a matrix applied to the image-as-a-vector we used in chapter 7 would require implementing a rather complicated pattern of weights (don't worry if it is too complicated; it'll get better shortly): most of the weight matrix would be zero (for entries corresponding to input pixels too far away from the output pixel to have an influence). For other weights, we would have to find a way to keep entries in sync that correspond to the same relative position of input and output pixels. This means we would need to initialize them to the same values and ensure that all these tied weights stayed the same while the network is updated during training. This way, we would ensure that weights operate in neighborhoods to respond to local patterns, and local patterns are identified no matter where they occur in the image.

Of course, this approach is more than impractical. Fortunately, there is a readily available, local, translation-invariant linear operation on the image: a convolution. We can come up with a more compact description of a convolution, but what we are going to describe is exactly what we just delineated—only taken from a different angle.

Convolution, or more precisely, discrete convolution1 (there's an analogous continuous version that we won't go into here), is defined for a 2D image as the scalar product of a weight matrix, the kernel, with every neighborhood in the input. Consider a 3 × 3 kernel (in deep learning, we typically use small kernels; we'll see why later on) as a 2D tensor

weight = torch.tensor([[w00, w01, w02],
                       [w10, w11, w12],
                       [w20, w21, w22]])

and a 1-channel, M × N image:

image = torch.tensor([[i00, i01, i02, i03, ..., i0N],
                      [i10, i11, i12, i13, ..., i1N],
                      [i20, i21, i22, i23, ..., i2N],
                      [i30, i31, i32, i33, ..., i3N],
                      ...
                      [iM0, iM1, iM2, iM3, ..., iMN]])

We can compute an element of the output image (without bias) as follows:

o11 = i11 * w00 + i12 * w01 + i13 * w02 +
      i21 * w10 + i22 * w11 + i23 * w12 +
      i31 * w20 + i32 * w21 + i33 * w22

Figure 8.1 shows this computation in action. That is, we "translate" the kernel on the i11 location of the input image, and we multiply each weight by the value of the input image at the corresponding location. Thus, the output image is created by translating the kernel on all input locations and performing the weighted sum. For a multichannel image, like our RGB image, the weight matrix would be a 3 × 3 × 3 matrix: one set of weights for every channel, contributing together to the output values.

Note that, just like the elements in the weight matrix of nn.Linear, the weights in the kernel are not known in advance, but they are initialized randomly and updated through backpropagation. Note also that the same kernel, and thus each weight in the kernel, is reused across the whole image. Thinking back to autograd, this means the use of each weight has a history spanning the entire image. Thus, the derivative of the loss with respect to a convolution weight includes contributions from the entire image.

1 There is a subtle difference between PyTorch's convolution and mathematics' convolution: one argument's sign is flipped. If we were in a pedantic mood, we could call PyTorch's convolutions discrete cross-correlations.
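We can check the pen-and-paper formula against PyTorch's own convolution on a small random image. The use of torch.nn.functional.conv2d here is our own addition for verification, not part of the chapter's running code:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
image = torch.randn(1, 1, 5, 5)   # B x C x H x W, one channel
weight = torch.randn(1, 1, 3, 3)  # out_ch x in_ch x 3 x 3 kernel

output = F.conv2d(image, weight)  # no padding, no bias: output is 1 x 1 x 3 x 3

# o11 from the formula: the kernel laid over the 3 x 3 patch whose top-left is i11
o11 = (image[0, 0, 1:4, 1:4] * weight[0, 0]).sum()
print(torch.isclose(output[0, 0, 1, 1], o11))  # the two values agree
```

Note the index bookkeeping: with no padding, output position (1, 1) is exactly the scalar product of the kernel with the patch starting at input position (1, 1).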
Summarizing, by switching to convolutions, we get
- Local operations on neighborhoods
- Translation invariance
- Models with a lot fewer parameters

The key insight underlying the third point is that, with a convolution layer, the number of parameters depends not on the number of pixels in the image, as was the case in our fully connected model, but rather on the size of the convolution kernel (3 × 3, 5 × 5, and so on) and on how many convolution filters (or output channels) we decide to use in our model.

8.2 Convolutions in action

Well, it looks like we've spent enough time down a rabbit hole! Let's see some PyTorch in action on our birds versus airplanes challenge. The torch.nn module provides convolutions for 1, 2, and 3 dimensions: nn.Conv1d for time series, nn.Conv2d for images, and nn.Conv3d for volumes or videos.

Figure 8.1 Convolution: locality and translation invariance (scalar product between the translated kernel and the image, with zeros outside the kernel; the same kernel weights are used across the image)

For our CIFAR-10 data, we'll resort to nn.Conv2d. At a minimum, the arguments we provide to nn.Conv2d are the number of input features (or channels, since we're dealing with multichannel images: that is, more than one value per pixel), the number of output features, and the size of the kernel. For instance, for our first convolutional module, we'll have 3 input features per pixel (the RGB channels) and an arbitrary number of channels in the output—say, 16. The more channels in the output image, the more the capacity of the network. We need the channels to be able to detect many different types of features.
Also, because we are randomly initializing them, some of the features we'll get, even after training, will turn out to be useless.2 Let's stick to a kernel size of 3 × 3. It is very common to have kernel sizes that are the same in all directions, so PyTorch has a shortcut for this: whenever kernel_size=3 is specified for a 2D convolution, it means 3 × 3 (provided as a tuple (3, 3) in Python). For a 3D convolution, it means 3 × 3 × 3. The CT scans we will see in part 2 of the book have a different voxel (volumetric pixel) resolution in one of the three axes. In such a case, it makes sense to consider kernels that have a different size for the exceptional dimension. But for now, we stick with having the same size of convolutions across all dimensions:

# In[11]:
conv = nn.Conv2d(3, 16, kernel_size=3)
conv

# Out[11]:
Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1))

What do we expect to be the shape of the weight tensor? The kernel is of size 3 × 3, so we want the weight to consist of 3 × 3 parts. For a single output pixel value, our kernel would consider, say, in_ch = 3 input channels, so the weight component for a single output pixel value (and by translation invariance for the entire output channel) is of shape in_ch × 3 × 3. Finally, we have as many of those as we have output channels, here out_ch = 16, so the complete weight tensor is out_ch × in_ch × 3 × 3, in our case 16 × 3 × 3 × 3. The bias will have size 16 (we haven't talked about bias for a while for simplicity, but just as in the linear module case, it's a constant value we add to each channel of the output image). Let's verify our assumptions:

# In[12]:
conv.weight.shape, conv.bias.shape

# Out[12]:
(torch.Size([16, 3, 3, 3]), torch.Size([16]))

We can see how convolutions are a convenient choice for learning from images. We have smaller models looking for local patterns whose weights are optimized across the entire image.
A 2D convolution pass produces a 2D image as output, whose pixels are a weighted sum over neighborhoods of the input image. In our case, both the kernel weights and the bias are initialized randomly, so the output image will not be particularly meaningful.

2 This is part of the lottery ticket hypothesis: that many kernels will be as useful as losing lottery tickets. See Jonathan Frankle and Michael Carbin, "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks," 2019, https://arxiv.org/abs/1803.03635.

Instead of the shortcut kernel_size=3, we could equivalently pass in the tuple that we see in the output: kernel_size=(3, 3).

As usual, we need to add the zeroth batch dimension with unsqueeze if we want to call the conv module with one input image, since nn.Conv2d expects a B × C × H × W shaped tensor as input:

# In[13]:
img, _ = cifar2[0]
output = conv(img.unsqueeze(0))
img.unsqueeze(0).shape, output.shape

# Out[13]:
(torch.Size([1, 3, 32, 32]), torch.Size([1, 16, 30, 30]))

We're curious, so we can display the output, shown in figure 8.2:

# In[15]:
plt.imshow(output[0, 0].detach(), cmap='gray')
plt.show()

Wait a minute. Let's take a look at the size of output: it's torch.Size([1, 16, 30, 30]). Huh; we lost a few pixels in the process. How did that happen?

8.2.1 Padding the boundary

The fact that our output image is smaller than the input is a side effect of deciding what to do at the boundary of the image. Applying a convolution kernel as a weighted sum of pixels in a 3 × 3 neighborhood requires that there are neighbors in all directions. If we are at i00, we only have pixels to the right of and below us. By default, PyTorch will slide the convolution kernel within the input picture, getting width - kernel_width + 1 horizontal and vertical positions.
For odd-sized kernels, this results in images that are one-half the convolution kernel's width (in our case, 3//2 = 1) smaller on each side. This explains why we're missing two pixels in each dimension.

Figure 8.2 Our bird after a random convolution treatment. (We cheated a little with the code to show you the input, too.)

However, PyTorch gives us the possibility of padding the image by creating ghost pixels around the border that have value zero as far as the convolution is concerned. Figure 8.3 shows padding in action.

In our case, specifying padding=1 when kernel_size=3 means i00 has an extra set of neighbors above it and to its left, so that an output of the convolution can be computed even in the corner of our original image.3 The net result is that the output has now the exact same size as the input:

# In[16]:
conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)
output = conv(img.unsqueeze(0))
img.unsqueeze(0).shape, output.shape

# Out[16]:
(torch.Size([1, 3, 32, 32]), torch.Size([1, 1, 32, 32]))

3 For even-sized kernels, we would need to pad by a different number on the left and right (and top and bottom). PyTorch doesn't offer to do this in the convolution itself, but the function torch.nn.functional.pad can take care of it. But it's best to stay with odd kernel sizes; even-sized kernels are just odd.
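The sizes we have been observing follow the standard output-size formula for convolutions. This small helper is our own bookkeeping aid, not from the book:

```python
def conv_out_size(in_size, kernel_size, padding=0, stride=1):
    """Spatial size of a 2D convolution output along one dimension."""
    return (in_size + 2 * padding - kernel_size) // stride + 1

print(conv_out_size(32, 3))             # 30: one pixel lost on each side
print(conv_out_size(32, 3, padding=1))  # 32: padding=1 preserves the size
```

For kernel_size=3 and stride=1, padding of kernel_size // 2 = 1 always keeps the output the same size as the input, which is why this combination is so common.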
Figure 8.3 Zero padding to preserve the image size in the output

Note that the sizes of weight and bias don't change, regardless of whether padding is used. There are two main reasons to pad convolutions. First, doing so helps us separate the matters of convolution and changing image sizes, so we have one less thing to remember. And second, when we have more elaborate structures such as skip connections (discussed in section 8.5.3) or the U-Nets we'll cover in part 2, we want the tensors before and after a few convolutions to be of compatible size so that we can add them or take differences.

8.2.2 Detecting features with convolutions

We said earlier that weight and bias are parameters that are learned through backpropagation, exactly as it happens for weight and bias in nn.Linear. However, we can play with convolution by setting weights by hand and see what happens.

Let's first zero out bias, just to remove any confounding factors, and then set weights to a constant value so that each pixel in the output gets the mean of its neighbors. For each 3 × 3 neighborhood:

# In[17]:
with torch.no_grad():
    conv.bias.zero_()

with torch.no_grad():
    conv.weight.fill_(1.0 / 9.0)

We could have gone with conv.weight.fill_(1.0)—that would result in each pixel in the output being the sum of the pixels in the neighborhood. Not a big difference, except that the values in the output image would have been nine times larger.
Anyway, let's see the effect on our CIFAR image:

# In[18]:
output = conv(img.unsqueeze(0))
plt.imshow(output[0, 0].detach(), cmap='gray')
plt.show()

As we could have predicted, the filter produces a blurred version of the image, as shown in figure 8.4. After all, every pixel of the output is the average of a neighborhood of the input, so pixels in the output are correlated and change more smoothly.

Next, let's try something different. The following kernel may look a bit mysterious at first:

# In[19]:
conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)

with torch.no_grad():
    conv.weight[:] = torch.tensor([[-1.0, 0.0, 1.0],
                                   [-1.0, 0.0, 1.0],
                                   [-1.0, 0.0, 1.0]])
    conv.bias.zero_()

Working out the weighted sum for an arbitrary pixel in position 2,2, as we did earlier for the generic convolution kernel, we get

o22 = i13 - i11 + i23 - i21 + i33 - i31

which performs the difference of all pixels on the right of i22 minus the pixels on the left of i22. If the kernel is applied on a vertical boundary between two adjacent regions of different intensity, o22 will have a high value. If the kernel is applied on a region of uniform intensity, o22 will be zero. It's an edge-detection kernel: the kernel highlights the vertical edge between two horizontally adjacent regions.

Applying the convolution kernel to our image, we see the result shown in figure 8.5. As expected, the convolution kernel enhances the vertical edges.
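The edge-detection behavior is easiest to see on a synthetic image rather than the CIFAR bird. The sketch below is our own check; it uses a single input channel (the in-text version has 3, one per RGB channel) so the arithmetic stays easy to follow:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv.weight[:] = torch.tensor([[-1.0, 0.0, 1.0],
                                   [-1.0, 0.0, 1.0],
                                   [-1.0, 0.0, 1.0]])

# A synthetic image: dark left half (0.0), bright right half (1.0)
img = torch.zeros(1, 1, 8, 8)
img[..., 4:] = 1.0

with torch.no_grad():
    out = conv(img)

# In an interior row, the response is zero over the flat regions and
# peaks at the two columns straddling the vertical edge
print(out[0, 0, 4])
```

Away from the image boundary, each output is three times the right-column value minus three times the left-column value, so the flat regions cancel to zero and only the edge columns light up.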
Figure 8.4 Our bird, this time blurred thanks to a constant convolution kernel

Figure 8.5 Vertical edges throughout our bird, courtesy of a handcrafted convolution kernel

We could build lots more elaborate filters, such as for detecting horizontal or diagonal edges, or cross-like or checkerboard patterns, where "detecting" means the output has a high magnitude. In fact, the job of a computer vision expert has historically been to come up with the most effective combination of filters so that certain features are highlighted in images and objects can be recognized.

With deep learning, we let kernels be estimated from data in whatever way the discrimination is most effective: for instance, in terms of minimizing the negative cross-entropy loss between the output and the ground truth that we introduced in section 7.2.5. From this angle, the job of a convolutional neural network is to estimate the kernel of a set of filter banks in successive layers that will transform a multichannel image into another multichannel image, where different channels correspond to different features (such as one channel for the average, another channel for vertical edges, and so on). Figure 8.6 shows how the training automatically learns the kernels.

8.2.3 Looking further with depth and pooling

This is all well and good, but conceptually there's an elephant in the room. We got all excited because by moving from fully connected layers to convolutions, we achieve locality and translation invariance. Then we recommended the use of small kernels, like 3 × 3, or 5 × 5: that's peak locality, all right. What about the big picture? How do we know that all structures in our images are 3 pixels or 5 pixels wide?
Well, we don't, because they aren't. And if they aren't, how are our networks going to be equipped to see those patterns with larger scope? This is something we'll really need if we want to solve our birds versus airplanes problem effectively, since although CIFAR-10 images are small, the objects still have a (wing-)span several pixels across.

Figure 8.6 The process of learning with convolutions by estimating the gradient at the kernel weights and updating them individually in order to optimize for the loss

One possibility could be to use large convolution kernels. Well, sure, at the limit we could get a 32 × 32 kernel for a 32 × 32 image, but we would converge to the old fully connected, affine transformation and lose all the nice properties of convolution. Another option, which is used in convolutional neural networks, is stacking one convolution after the other and at the same time downsampling the image between successive convolutions.

FROM LARGE TO SMALL: DOWNSAMPLING

Downsampling could in principle occur in different ways. Scaling an image by half is the equivalent of taking four neighboring pixels as input and producing one pixel as output. How we compute the value of the output based on the values of the input is up to us. We could

- Average the four pixels. This average pooling was a common approach early on but has fallen out of favor somewhat.
- Take the maximum of the four pixels. This approach, called max pooling, is currently the most commonly used approach, but it has a downside of discarding the other three-quarters of the data.
- Perform a strided convolution, where only every Nth pixel is calculated. A 3 × 4 convolution with stride 2 still incorporates input from all pixels from the previous layer.
The literature shows promise for this approach, but it has not yet supplanted max pooling.

We will be focusing on max pooling, illustrated in figure 8.7, going forward. The figure shows the most common setup of taking non-overlapping 2 × 2 tiles and taking the maximum over each of them as the new pixel at the reduced scale.

Figure 8.7 Max pooling in detail

Intuitively, the output images from a convolution layer, especially since they are followed by an activation just like any other linear layer, tend to have a high magnitude where certain features corresponding to the estimated kernel are detected (such as vertical lines). By keeping the highest value in the 2 × 2 neighborhood as the downsampled output, we ensure that the features that are found survive the downsampling, at the expense of the weaker responses.

Max pooling is provided by the nn.MaxPool2d module (as with convolution, there are versions for 1D and 3D data). It takes as input the size of the neighborhood over which to operate the pooling operation. If we wish to downsample our image by half, we'll want to use a size of 2. Let's verify that it works as expected directly on our input image:

# In[21]:
pool = nn.MaxPool2d(2)
output = pool(img.unsqueeze(0))
img.unsqueeze(0).shape, output.shape

# Out[21]:
(torch.Size([1, 3, 32, 32]), torch.Size([1, 3, 16, 16]))

COMBINING CONVOLUTIONS AND DOWNSAMPLING FOR GREAT GOOD

Let's now see how combining convolutions and downsampling can help us recognize larger structures. In figure 8.8, we start by applying a set of 3 × 3 kernels on our 8 × 8 image, obtaining a multichannel output image of the same size. Then we scale down the output image by half, obtaining a 4 × 4 image, and apply another set of 3 × 3 kernels to it.
This second set of kernels operates on a 3 × 3 neighborhood of something that has been scaled down by half, so it effectively maps back to 8 × 8 neighborhoods of the input. In addition, the second set of kernels takes the output of the first set of kernels (features like averages, edges, and so on) and extracts additional features on top of those.

So, on one hand, the first set of kernels operates on small neighborhoods on first-order, low-level features, while the second set of kernels effectively operates on wider neighborhoods, producing features that are compositions of the previous features. This is a very powerful mechanism that provides convolutional neural networks with the ability to see into very complex scenes—much more complex than our 32 × 32 images from the CIFAR-10 dataset.

Figure 8.8 More convolutions by hand, showing the effect of stacking convolutions and downsampling: a large cross is highlighted using two small, cross-shaped kernels and max pooling.

8.2.4 Putting it all together for our network

With these building blocks in our hands, we can now proceed to build our convolutional neural network for detecting birds and airplanes. Let's take our previous fully connected model as a starting point and introduce nn.Conv2d and nn.MaxPool2d as described previously:

# In[22]:
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.Tanh(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 8, kernel_size=3, padding=1),
    nn.Tanh(),
    nn.MaxPool2d(2),
    # ...
)

The first convolution takes us from 3 RGB channels to 16, thereby giving the network a chance to generate 16 independent features that operate to (hopefully) discriminate low-level features of birds and airplanes. Then we apply the Tanh activation function. The resulting 16-channel 32 × 32 image is pooled to a 16-channel 16 × 16 image by the first MaxPool2d. At this point, the downsampled image undergoes another convolution that generates an 8-channel 16 × 16 output. With any luck, this output will consist of higher-level features. Again, we apply a Tanh activation and then pool to an 8-channel 8 × 8 output.

Where does this end? After the input image has been reduced to a set of 8 × 8 features, we expect to be able to output some probabilities from the network that we can feed to our negative log likelihood. However, probabilities are a pair of numbers in a 1D vector (one for airplane, one for bird), but here we're still dealing with multichannel 2D features.

The receptive field of output pixels
When the second 3 × 3 convolution kernel produces 21 in its conv output in figure 8.8, this is based on the top-left 3 × 3 pixels of the first max pool output. They, in turn, correspond to the 6 × 6 pixels in the top-left corner in the first conv output, which in turn are computed by the first convolution from the top-left 7 × 7 pixels. So the pixel in the second convolution output is influenced by a 7 × 7 input square. The first convolution also uses an implicitly "padded" column and row to produce the output in the corner; otherwise, we would have an 8 × 8 square of input pixels informing a given pixel (away from the boundary) in the second convolution's output. In fancy language, we say that a given output neuron of the 3 × 3-conv, 2 × 2-max-pool, 3 × 3-conv construction has a receptive field of 8 × 8.
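The sidebar's receptive-field bookkeeping can also be checked empirically. The sketch below is my own, not from the book: with all-ones kernels, no bias, and a zero baseline input, bumping any input pixel inside the receptive field makes the chosen interior output pixel positive, so the influencing pixels can be mapped directly.

```python
import torch
import torch.nn as nn

# Empirically map the receptive field of one interior output pixel of
# the 3x3-conv -> 2x2-max-pool -> 3x3-conv stack on a 16 x 16 input.
conv1 = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
conv2 = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv1.weight.fill_(1.0)   # all-ones kernels: any influence stays positive
    conv2.weight.fill_(1.0)
pool = nn.MaxPool2d(2)

def out_pixel(x):
    # an interior pixel of the second convolution's 8 x 8 output
    return conv2(pool(conv1(x)))[0, 0, 4, 4].item()

influencing = []
for i in range(16):
    for j in range(16):
        x = torch.zeros(1, 1, 16, 16)   # zero baseline: output is 0 ...
        x[0, 0, i, j] = 1.0             # ... unless pixel (i, j) has influence
        if out_pixel(x) > 0:
            influencing.append((i, j))
# influencing spans an 8 x 8 square of input pixels, the receptive field
```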
Thinking back to the beginning of this chapter, we already know what we need to do: turn the 8-channel 8 × 8 image into a 1D vector and complete our network with a set of fully connected layers:

# In[23]:
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.Tanh(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 8, kernel_size=3, padding=1),
    nn.Tanh(),
    nn.MaxPool2d(2),
    # ...  Warning: Something important is missing here!
    nn.Linear(8 * 8 * 8, 32),
    nn.Tanh(),
    nn.Linear(32, 2))

This code gives us a neural network as shown in figure 8.9. Ignore the "something missing" comment for a minute. Let's first notice that the size of the linear layer is dependent on the expected size of the output of MaxPool2d: 8 × 8 × 8 = 512.

[Figure 8.9 Shape of a typical convolutional network, including the one we're building. An image is fed to a series of convolutions and max pooling modules and then straightened into a 1D vector and fed into fully connected modules.]

Let's count the number of parameters for this small model:

# In[24]:
numel_list = [p.numel() for p in model.parameters()]
sum(numel_list), numel_list

# Out[24]:
(18090, [432, 16, 1152, 8, 16384, 32, 64, 2])

That's very reasonable for a limited dataset of such small images. In order to increase the capacity of the model, we could increase the number of output channels for the convolution layers (that is, the number of features each convolution layer generates), which would lead the linear layer to increase its size as well.

We put the "Warning" note in the code for a reason. The model has zero chance of running without complaining:

# In[25]:
model(img.unsqueeze(0))

# Out[25]:
...
RuntimeError: size mismatch, m1: [64 x 8], m2: [512 x 32] at c:\...\THTensorMath.cpp:940

Admittedly, the error message is a bit obscure, but not too much so. We find references to linear in the traceback: looking back at the model, we see that the only module that has to have a 512 × 32 tensor is nn.Linear(512, 32), the first linear module after the last convolution block.

What's missing there is the reshaping step from an 8-channel 8 × 8 image to a 512-element, 1D vector (1D if we ignore the batch dimension, that is). This could be achieved by calling view on the output of the last nn.MaxPool2d, but unfortunately, we don't have any explicit visibility of the output of each module when we use nn.Sequential.4

8.3 Subclassing nn.Module
At some point in developing neural networks, we will find ourselves in a situation where we want to compute something that the premade modules do not cover. Here, it is something very simple like reshaping;5 but in section 8.5.3, we use the same construction to implement residual connections. So in this section, we learn how to make our own nn.Module subclasses that we can then use just like the prebuilt ones or nn.Sequential.

When we want to build models that do more complex things than just applying one layer after another, we need to leave nn.Sequential for something that gives us added flexibility. PyTorch allows us to use any computation in our model by subclassing nn.Module.

In order to subclass nn.Module, at a minimum we need to define a forward function that takes the inputs to the module and returns the output. This is where we define our module's computation. The name forward here is reminiscent of a distant past, when modules needed to define both the forward and backward passes, as we met in section 5.5.1. With PyTorch, if we use standard torch operations, autograd will take care of the backward pass automatically; and indeed, an nn.Module never comes with a backward.
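As a minimal illustration of the pattern (my own sketch; PyTorch's built-in nn.Flatten does the same job), here is a module that defines nothing but forward, with autograd supplying the backward pass automatically:

```python
import torch
import torch.nn as nn

# A minimal nn.Module subclass: only forward is defined. This one
# performs the reshaping step discussed above, keeping the batch
# dimension and flattening everything else.
class FlattenAllButBatch(nn.Module):
    def forward(self, x):
        return x.view(x.size(0), -1)

x = torch.randn(2, 8, 8, 8)
flat = FlattenAllButBatch()(x)   # shape: (2, 512)
```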
Typically, our computation will use other modules, premade like convolutions or customized. To include these submodules, we typically define them in the constructor __init__ and assign them to self for use in the forward function. They will, at the same time, hold their parameters throughout the lifetime of our module. Note that you need to call super().__init__() before you can do that (or PyTorch will remind you).

4 Not being able to do this kind of operation inside of nn.Sequential was an explicit design choice by the PyTorch authors and was left that way for a long time; see the linked comments from @soumith at https://github.com/pytorch/pytorch/issues/2486. Recently, PyTorch gained an nn.Flatten layer.
5 We could have used nn.Flatten starting from PyTorch 1.3.

[Figure 8.10 Our baseline convolutional network architecture]

8.3.1 Our network as an nn.Module
Let's write our network as a submodule.
To do so, we instantiate all the nn.Conv2d, nn.Linear, and so on that we previously passed to nn.Sequential in the constructor, and then use their instances one after another in forward:

# In[26]:
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.act1 = nn.Tanh()
        self.pool1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1)
        self.act2 = nn.Tanh()
        self.pool2 = nn.MaxPool2d(2)
        self.fc1 = nn.Linear(8 * 8 * 8, 32)
        self.act3 = nn.Tanh()
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = self.pool1(self.act1(self.conv1(x)))
        out = self.pool2(self.act2(self.conv2(out)))
        out = out.view(-1, 8 * 8 * 8)    # this reshape is what we were missing earlier
        out = self.act3(self.fc1(out))
        out = self.fc2(out)
        return out

The Net class is equivalent to the nn.Sequential model we built earlier in terms of submodules; but by writing the forward function explicitly, we can manipulate the output of self.pool2 directly and call view on it to turn it into a B × N vector. Note that we leave the batch dimension as -1 in the call to view, since in principle we don't know how many samples will be in the batch.

Here we use a subclass of nn.Module to contain our entire model. We could also use subclasses to define new building blocks for more complex networks. Picking up on the diagram style in chapter 6, our network looks like the one shown in figure 8.10. We are making some ad hoc choices about what information to present where.

Recall that the goal of classification networks typically is to compress information in the sense that we start with an image with a sizable number of pixels and compress it into (a vector of probabilities of) classes. Two things about our architecture deserve some commentary with respect to this goal.
First, our goal is reflected by the size of our intermediate values generally shrinking: this is done by reducing the number of channels in the convolutions, by reducing the number of pixels through pooling, and by having an output dimension lower than the input dimension in the linear layers. This is a common trait of classification networks. However, in many popular architectures like the ResNets we saw in chapter 2 and discuss more in section 8.5.3, the reduction is achieved by pooling in the spatial resolution, but the number of channels increases (still resulting in a reduction in size). It seems that our pattern of fast information reduction works well with networks of limited depth and small images; but for deeper networks, the decrease is typically slower.

Second, in one layer, there is not a reduction of output size with regard to input size: the initial convolution. If we consider a single output pixel as a vector of 32 elements (the channels), it is a linear transformation of 27 elements (as a convolution of 3 channels × 3 × 3 kernel size): only a moderate increase. In ResNet, the initial convolution generates 64 channels from 147 elements (3 channels × 7 × 7 kernel size).6 So the first layer is exceptional in that it greatly increases the overall dimension (as in channels times pixels) of the data flowing through it, but the mapping for each output pixel considered in isolation still has approximately as many outputs as inputs.7

8.3.2 How PyTorch keeps track of parameters and submodules
Interestingly, assigning an instance of nn.Module to an attribute in an nn.Module, as we did in the earlier constructor, automatically registers the module as a submodule.

NOTE The submodules must be top-level attributes, not buried inside list or dict instances! Otherwise the optimizer will not be able to locate the submodules (and, hence, their parameters).
For situations where your model requires a list or dict of submodules, PyTorch provides nn.ModuleList and nn.ModuleDict.

We can call arbitrary methods of an nn.Module subclass. For example, for a model where training is substantially different than its use, say, for prediction, it may make sense to have a predict method. Be aware that calling such methods will be similar to calling forward instead of the module itself; they will be ignorant of hooks, and the JIT does not see the module structure when using them, because we are missing the equivalent of the __call__ bits shown in section 6.2.1.

This allows Net to have access to the parameters of its submodules without further action by the user:

# In[27]:
model = Net()

numel_list = [p.numel() for p in model.parameters()]
sum(numel_list), numel_list

# Out[27]:
(18090, [432, 16, 1152, 8, 16384, 32, 64, 2])

What happens here is that the parameters() call delves into all submodules assigned as attributes in the constructor and recursively calls parameters() on them. No matter how nested the submodule, any nn.Module can access the list of all child parameters. By accessing their grad attribute, which has been populated by autograd, the optimizer will know how to change parameters to minimize the loss. We know that story from chapter 5.

6 The dimensions in the pixel-wise linear mapping defined by the first convolution were emphasized by Jeremy Howard in his fast.ai course (https://www.fast.ai).
7 Outside of and older than deep learning, projecting into high-dimensional space and then doing conceptually simpler (than linear) machine learning is commonly known as the kernel trick. The initial increase in the number of channels could be seen as a somewhat similar phenomenon, but striking a different balance between the cleverness of the embedding and the simplicity of the model working on the embedding.
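The registration rule from the NOTE above is easy to demonstrate with a sketch of my own: submodules stored in a plain Python list are invisible to parameters(), while nn.ModuleList registers them properly.

```python
import torch.nn as nn

# Submodules in a plain list are NOT registered; nn.ModuleList fixes it.
class Hidden(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = [nn.Linear(4, 4)]                  # invisible to parameters()

class Visible(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4)])   # properly registered

n_hidden = sum(p.numel() for p in Hidden().parameters())    # 0 parameters found
n_visible = sum(p.numel() for p in Visible().parameters())  # 4*4 weights + 4 biases = 20
```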
We now know how to implement our own modules, and we will need this a lot for part 2. Looking back at the implementation of the Net class, and thinking about the utility of registering submodules in the constructor so that we can access their parameters, it appears a bit of a waste that we are also registering submodules that have no parameters, like nn.Tanh and nn.MaxPool2d. Wouldn't it be easier to call these directly in the forward function, just as we called view?

8.3.3 The functional API
It sure would! And that's why PyTorch has functional counterparts for every nn module. By "functional" here we mean "having no internal state"; in other words, "whose output value is solely and fully determined by the value of the input arguments." Indeed, torch.nn.functional provides many functions that work like the modules we find in nn. But instead of working on the input arguments and stored parameters like the module counterparts, they take inputs and parameters as arguments to the function call. For instance, the functional counterpart of nn.Linear is nn.functional.linear, which is a function that has signature linear(input, weight, bias=None). The weight and bias parameters are arguments to the function.

Back to our model, it makes sense to keep using nn modules for nn.Linear and nn.Conv2d so that Net will be able to manage their Parameters during training.
However, we can safely switch to the functional counterparts of pooling and activation, since they have no parameters:

# In[28]:
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(8 * 8 * 8, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = out.view(-1, 8 * 8 * 8)
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)
        return out

This is a lot more concise than, and fully equivalent to, our previous definition of Net in section 8.3.1. Note that it would still make sense to instantiate modules that require several parameters for their initialization in the constructor.

TIP While general-purpose scientific functions like tanh still exist in torch.nn.functional in version 1.0, those entry points are deprecated in favor of functions in the top-level torch namespace. More niche functions like max_pool2d will remain in torch.nn.functional.

Thus, the functional way also sheds light on what the nn.Module API is all about: a Module is a container for state in the form of Parameters and submodules, combined with the instructions to do a forward.

Whether to use the functional or the modular API is a decision based on style and taste. When part of a network is so simple that we want to use nn.Sequential, we're in the modular realm. When we are writing our own forwards, it may be more natural to use the functional interface for things that do not need state in the form of parameters.

In chapter 15, we will briefly touch on quantization. Then stateless bits like activations suddenly become stateful because information about the quantization needs to be captured.
This means if we aim to quantize our model, it might be worthwhile to stick with the modular API if we go for non-JITed quantization. There is one style matter that will help you avoid surprises with (originally unforeseen) uses: if you need several applications of stateless modules (like nn.HardTanh or nn.ReLU), it is probably a good idea to have a separate instance for each. Reusing the same module appears to be clever and will give correct results with our standard Python usage here, but tools analyzing your model may trip over it.

So now we can make our own nn.Module if we need to, and we also have the functional API for cases when instantiating and then calling an nn.Module is overkill. This has been the last bit missing to understand how the code organization works in just about any neural network implemented in PyTorch.

Let's double-check that our model runs, and then we'll get to the training loop:

# In[29]:
model = Net()

model(img.unsqueeze(0))

# Out[29]:
tensor([[-0.0157, 0.1143]], grad_fn=<AddmmBackward>)

We got two numbers! Information flows correctly. We might not realize it right now, but in more complex models, getting the size of the first linear layer right is sometimes a source of frustration. We've heard stories of famous practitioners putting in arbitrary numbers and then relying on error messages from PyTorch to backtrack the correct sizes for their linear layers. Lame, eh? Nah, it's all legit!

8.4 Training our convnet
We're now at the point where we can assemble our complete training loop. We already developed the overall structure in chapter 5, and the training loop looks much like the one from chapter 6, but here we will revisit it to add some details like some tracking for accuracy. After we run our model, we will also have an appetite for a little more speed, so we will learn how to run our models fast on a GPU. But first let's look at the training loop.
Recall that the core of our convnet is two nested loops: an outer one over the epochs and an inner one over the DataLoader that produces batches from our Dataset. In each loop, we then have to

1 Feed the inputs through the model (the forward pass).
2 Compute the loss (also part of the forward pass).
3 Zero any old gradients.
4 Call loss.backward() to compute the gradients of the loss with respect to all parameters (the backward pass).
5 Have the optimizer take a step toward lower loss.

Also, we collect and print some information. So here is our training loop, looking almost as it does in the previous chapter, but it is good to remember what each thing is doing:

# In[30]:
import datetime                               # uses the datetime module included with Python

def training_loop(n_epochs, optimizer, model, loss_fn, train_loader):
    for epoch in range(1, n_epochs + 1):      # loop over the epochs, numbered from 1 to n_epochs rather than starting at 0
        loss_train = 0.0
        for imgs, labels in train_loader:     # loops over our dataset in the batches the data loader creates for us
            outputs = model(imgs)             # feeds a batch through our model ...
            loss = loss_fn(outputs, labels)   # ... and computes the loss we wish to minimize

            optimizer.zero_grad()             # after getting rid of the gradients from the last round ...
            loss.backward()                   # ... computes the gradients of all parameters we want the network to learn (the backward step)
            optimizer.step()                  # updates the model

            loss_train += loss.item()         # sums the losses over the epoch; .item() transforms the loss to a Python number, escaping the gradients

        if epoch == 1 or epoch % 10 == 0:
            print('{} Epoch {}, Training loss {}'.format(
                datetime.datetime.now(), epoch,
                loss_train / len(train_loader)))   # the average loss per batch: a much more intuitive measure than the sum

We use the Dataset from chapter 7; wrap it into a DataLoader; instantiate our network, an optimizer, and a loss function as before; and call our training loop. The substantial changes in our model from the last chapter are that now our model is a custom subclass of nn.Module and that we're using convolutions.
Let's run training for 100 epochs while printing the loss. Depending on your hardware, this may take 20 minutes or more to finish!

# In[31]:
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64,
                                           shuffle=True)   # batches up the examples of our cifar2 dataset; shuffling randomizes their order

model = Net()                                        # instantiates our network ...
optimizer = optim.SGD(model.parameters(), lr=1e-2)   # ... the stochastic gradient descent optimizer we have been working with ...
loss_fn = nn.CrossEntropyLoss()                      # ... and the cross entropy loss we met in 7.10

training_loop(                                       # calls the training loop we defined earlier
    n_epochs = 100,
    optimizer = optimizer,
    model = model,
    loss_fn = loss_fn,
    train_loader = train_loader,
)

# Out[31]:
2020-01-16 23:07:21.889707 Epoch 1, Training loss 0.5634813266954605
2020-01-16 23:07:37.560610 Epoch 10, Training loss 0.3277610331109375
2020-01-16 23:07:54.966180 Epoch 20, Training loss 0.3035225479086493
2020-01-16 23:08:12.361597 Epoch 30, Training loss 0.28249378549824855
2020-01-16 23:08:29.769820 Epoch 40, Training loss 0.2611226033253275
2020-01-16 23:08:47.185401 Epoch 50, Training loss 0.24105800626574048
2020-01-16 23:09:04.644522 Epoch 60, Training loss 0.21997178820477928
2020-01-16 23:09:22.079625 Epoch 70, Training loss 0.20370126601047578
2020-01-16 23:09:39.593780 Epoch 80, Training loss 0.18939699422401987
2020-01-16 23:09:57.111441 Epoch 90, Training loss 0.17283396527266046
2020-01-16 23:10:14.632351 Epoch 100, Training loss 0.1614033816868712

So now we can train our network. But again, our friend the bird watcher will likely not be impressed when we tell her that we trained to very low training loss.
8.4.1 Measuring accuracy
In order to have a measure that is more interpretable than the loss, we can take a look at our accuracies on the training and validation datasets. We use the same code as in chapter 7:

# In[32]:
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64,
                                           shuffle=False)
val_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64,
                                         shuffle=False)

def validate(model, train_loader, val_loader):
    for name, loader in [("train", train_loader), ("val", val_loader)]:
        correct = 0
        total = 0

        with torch.no_grad():                             # we do not want gradients here, as we will not update the parameters
            for imgs, labels in loader:
                outputs = model(imgs)
                _, predicted = torch.max(outputs, dim=1)  # gives us the index of the highest value as output
                total += labels.shape[0]                  # counts the number of examples, so total is increased by the batch size
                correct += int((predicted == labels).sum())  # comparing predictions and ground-truth labels gives a Boolean array; its sum counts the agreements in the batch

        print("Accuracy {}: {:.2f}".format(name, correct / total))

validate(model, train_loader, val_loader)

# Out[32]:
Accuracy train: 0.93
Accuracy val: 0.89

We cast to a Python int: for integer tensors, this is equivalent to using .item(), similar to what we did in the training loop.

This is quite a lot better than the fully connected model, which achieved only 79% accuracy. We about halved the number of errors on the validation set. Also, we used far fewer parameters. This is telling us that the model does a better job of generalizing its task of recognizing the subject of images from a new sample, through locality and translation invariance. We could now let it run for more epochs and see what performance we could squeeze out.

8.4.2 Saving and loading our model
Since we're satisfied with our model so far, it would be nice to actually save it, right? It's easy to do.
Let's save the model to a file:

# In[33]:
torch.save(model.state_dict(), data_path + 'birds_vs_airplanes.pt')

The birds_vs_airplanes.pt file now contains all the parameters of model: that is, weights and biases for the two convolution modules and the two linear modules. So, no structure, just the weights. This means when we deploy the model in production for our friend, we'll need to keep the model class handy, create an instance, and then load the parameters back into it. We will have to make sure we don't change the definition of Net between saving and later loading the model state:

# In[34]:
loaded_model = Net()
loaded_model.load_state_dict(torch.load(data_path
                                        + 'birds_vs_airplanes.pt'))

# Out[34]:

We have also included a pretrained model in our code repository, saved to ../data/p1ch7/birds_vs_airplanes.pt.

8.4.3 Training on the GPU
We have a net and can train it! But it would be good to make it a bit faster. It is no surprise by now that we do so by moving our training onto the GPU. Using the .to method we saw in chapter 3, we can move the tensors we get from the data loader to the GPU, after which our computation will automatically take place there. But we also need to move our parameters to the GPU. Happily, nn.Module implements a .to function that moves all of its parameters to the GPU (or casts the type when you pass a dtype argument).

There is a somewhat subtle difference between Module.to and Tensor.to. Module.to is in place: the module instance is modified. But Tensor.to is out of place (in some ways a computation, just like Tensor.tanh), returning a new tensor.
One implication is that it is good practice to create the Optimizer after moving the parameters to the appropriate device.

It is considered good style to move things to the GPU if one is available. A good pattern is to set a variable device depending on torch.cuda.is_available:

# In[35]:
device = (torch.device('cuda') if torch.cuda.is_available()
          else torch.device('cpu'))
print(f"Training on device {device}.")

Then we can amend the training loop by moving the tensors we get from the data loader to the GPU by using the Tensor.to method. Note that the code is exactly like our first version at the beginning of this section except for the two lines moving the inputs to the GPU:

# In[36]:
import datetime

def training_loop(n_epochs, optimizer, model, loss_fn, train_loader):
    for epoch in range(1, n_epochs + 1):
        loss_train = 0.0
        for imgs, labels in train_loader:
            imgs = imgs.to(device=device)      # these two lines, which move imgs and labels to the device
            labels = labels.to(device=device)  # we are training on, are the only difference from our previous version
            outputs = model(imgs)
            loss = loss_fn(outputs, labels)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            loss_train += loss.item()

        if epoch == 1 or epoch % 10 == 0:
            print('{} Epoch {}, Training loss {}'.format(
                datetime.datetime.now(), epoch,
                loss_train / len(train_loader)))

The same amendment must be made to the validate function.
We can then instantiate our model, move it to device, and run it as before:8

# In[37]:
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64,
                                           shuffle=True)

model = Net().to(device=device)   # moves our model (all parameters) to the GPU; if you forget to move either the model or the inputs, you will get errors about tensors not being on the same device, because PyTorch operators do not support mixing GPU and CPU inputs
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

training_loop(
    n_epochs = 100,
    optimizer = optimizer,
    model = model,
    loss_fn = loss_fn,
    train_loader = train_loader,
)

# Out[37]:
2020-01-16 23:10:35.563216 Epoch 1, Training loss 0.5717791349265227
2020-01-16 23:10:39.730262 Epoch 10, Training loss 0.3285350770137872
2020-01-16 23:10:45.906321 Epoch 20, Training loss 0.29493294959994637
2020-01-16 23:10:52.086905 Epoch 30, Training loss 0.26962305994550134
2020-01-16 23:10:56.551582 Epoch 40, Training loss 0.24709946277794564
2020-01-16 23:11:00.991432 Epoch 50, Training loss 0.22623272664892446
2020-01-16 23:11:05.421524 Epoch 60, Training loss 0.20996672821462534
2020-01-16 23:11:09.951312 Epoch 70, Training loss 0.1934866009719053
2020-01-16 23:11:14.499484 Epoch 80, Training loss 0.1799132404908253
2020-01-16 23:11:19.047609 Epoch 90, Training loss 0.16620008706761774
2020-01-16 23:11:23.590435 Epoch 100, Training loss 0.15667157247662544

8 There is a pin_memory option for the data loader that will cause the data loader to use memory pinned to the GPU, with the goal of speeding up transfers. Whether we gain something varies, though, so we will not pursue this here.

Even for our small network here, we do see a sizable increase in speed. The advantage of computing on GPUs is more visible for larger models.
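The in-place nature of Module.to versus the out-of-place nature of Tensor.to, noted earlier, can be checked without a GPU by using the dtype-casting form of .to. This is my own sketch, not a listing from the book:

```python
import torch
import torch.nn as nn

# Module.to modifies the module itself and returns the very same object;
# Tensor.to leaves the original tensor untouched and returns a new one.
lin = nn.Linear(2, 2)
ret = lin.to(torch.double)
module_in_place = ret is lin                 # same module object back
assert lin.weight.dtype == torch.double     # parameters were cast in place

t = torch.zeros(2)                           # float32 by default
t2 = t.to(torch.double)
tensor_out_of_place = (t2 is not t) and t.dtype == torch.float32
```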
There is a slight complication when load ing network weights: PyTorch will attempt to load the weight to the same device it was saved from—that is, weights on the GPU will be restored to the GPU. As we don’t know whether we want the same device, we have two options: we could move the network to the CPU before saving it, or move it back after restoring. It is a bit more concis e to instruct PyTorch to override the device information when loading weights. This is done by passing the map_location keyword argument to torch.load : # In[39]: loaded_model = Net().to(device=device)loaded_model.load_state_dict(torch.load(data_path + 'birds_vs_airplanes.pt', map_location=device)) # Out[39]: 8.5 Model design We built our model as a subclass of nn.Module , the de facto standard for all but the simplest models. Then we trained it succe ssfully and saw how to use the GPU to train our models. We’ve reached the point where we can build a feed-forward convolutional neural network and train it su ccessfully to classify images. The natural question is, what now? What if we are presented with a more complicated problem? Admittedly, our birds versus airplanes data set wasn’t that complicated: the images were very small, and the object under investigation was center ed and took up most of the viewport. If we moved to, say, ImageNet, we woul d find larger, more complex images, where the right answer would depend on multiple visual clues, often hierarchically orga- nized. For instance, when tr ying to predict whether a da rk brick shape is a remote control or a cell phone, the network could be looking for something like a screen. Plus images may not be our sole focus in the real world, where we have tabular data, sequences, and text. The promise of ne ural networks is sufficient flexibility to solve problems on all these kinds of data gi ven the proper archit ecture (that is, the interconnection of layers or modul es) and the proper loss function. 
PyTorch ships with a very comprehensive collection of modules and loss functions to implement state-of-the-art architectures ranging from feed-forward components to long short-term memory (LSTM) modules and transformer networks (two very popular architectures for sequential data). Several models are available through PyTorch Hub or as part of torchvision and other vertical community efforts.

We'll see a few more advanced architectures in part 2, where we'll walk through an end-to-end problem of analyzing CT scans, but in general, it is beyond the scope of this book to explore variations on neural network architectures. However, we can build on the knowledge we've accumulated thus far to understand how we can implement almost any architecture thanks to the expressivity of PyTorch. The purpose of this section is precisely to provide conceptual tools that will allow us to read the latest research paper and start implementing it in PyTorch, or, since authors often release PyTorch implementations of their papers, to read the implementations without choking on our coffee.

8.5.1 Adding memory capacity: Width
Given our feed-forward architecture, there are a couple of dimensions we'd likely want to explore before getting into further complications. The first dimension is the width of the network: the number of neurons per layer, or channels per convolution. We can make a model wider very easily in PyTorch.
We just specify a larger number of output channels in the first convolution and increase the subsequent layers accordingly, taking care to change the forward function to reflect the fact that we’ll now have a longer vector once we switch to fully connected layers:

# In[40]:
class NetWidth(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 16, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(16 * 8 * 8, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = out.view(-1, 16 * 8 * 8)
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)
        return out

If we want to avoid hardcoding numbers in the definition of the model, we can easily pass a parameter to init and parameterize the width, taking care to also parameterize the call to view in the forward function:

# In[42]:
class NetWidth(nn.Module):
    def __init__(self, n_chans1=32):
        super().__init__()
        self.n_chans1 = n_chans1
        self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3,
                               padding=1)
        self.fc1 = nn.Linear(8 * 8 * n_chans1 // 2, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = out.view(-1, 8 * 8 * self.n_chans1 // 2)
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)
        return out

The numbers specifying channels and features for each layer are directly related to the number of parameters in a model; all other things being equal, they increase the capacity of the model.
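A quick way to sanity-check the parameterized model is to push a dummy CIFAR-sized batch through it for a few widths and confirm the shapes line up. This sketch (not one of the book's listings) reproduces the parameterized NetWidth from above so it runs standalone:

```python
import torch
from torch import nn
from torch.nn import functional as F

# The parameterized NetWidth from above, reproduced for a standalone check
class NetWidth(nn.Module):
    def __init__(self, n_chans1=32):
        super().__init__()
        self.n_chans1 = n_chans1
        self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3,
                               padding=1)
        self.fc1 = nn.Linear(8 * 8 * n_chans1 // 2, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = out.view(-1, 8 * 8 * self.n_chans1 // 2)
        out = torch.tanh(self.fc1(out))
        return self.fc2(out)

# A dummy batch of two 3-channel 32x32 images works for several widths,
# because view() and fc1 are parameterized consistently
for width in (16, 32, 64):
    model = NetWidth(n_chans1=width)
    out = model(torch.randn(2, 3, 32, 32))
    assert out.shape == (2, 2)
```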
As we did previously, we can look at how many parameters our model has now:

# In[44]:
sum(p.numel() for p in model.parameters())

# Out[44]:
38386

The greater the capacity, the more variability in the inputs the model will be able to manage; but at the same time, the more likely overfitting will be, since the model can use a greater number of parameters to memorize unessential aspects of the input. We already went into ways to combat overfitting, the best being increasing the sample size or, in the absence of new data, augmenting existing data through artificial modifications of the same data. There are a few more tricks we can play at the model level (without acting on the data) to control overfitting. Let’s review the most common ones.

8.5.2 Helping our model to converge and generalize: Regularization
Training a model involves two critical steps: optimization, when we need the loss to decrease on the training set; and generalization, when the model has to work not only on the training set but also on data it has not seen before, like the validation set. The mathematical tools aimed at easing these two steps are sometimes subsumed under the label regularization.

KEEPING THE PARAMETERS IN CHECK: WEIGHT PENALTIES
The first way to stabilize generalization is to add a regularization term to the loss. This term is crafted so that the weights of the model tend to be small on their own, limiting how much training makes them grow. In other words, it is a penalty on larger weight values. This makes the loss have a smoother topography, and there’s relatively less to gain from fitting individual samples. The most popular regularization terms of this kind are L2 regularization, which is the sum of squares of all weights in the model, and L1 regularization, which is the sum of the absolute values of all weights in the model.9 Both of them are scaled by a (small) factor, which is a hyperparameter we set prior to training.

9 We’ll focus on L2 regularization here.
L1 regularization—popularized in the more general statistics literature by its use in Lasso—has the attractive property of resulting in sparse trained weights.

L2 regularization is also referred to as weight decay. The reason for this name is that, thinking about SGD and backpropagation, the negative gradient of the L2 regularization term with respect to a parameter w_i is - 2 * lambda * w_i, where lambda is the aforementioned hyperparameter, simply named weight decay in PyTorch. So, adding L2 regularization to the loss function is equivalent to decreasing each weight by an amount proportional to its current value during the optimization step (hence, the name weight decay). Note that weight decay applies to all parameters of the network, such as biases.

In PyTorch, we could implement regularization pretty easily by adding a term to the loss. After computing the loss, whatever the loss function is, we can iterate the parameters of the model, sum their respective square (for L2) or abs (for L1), and backpropagate:

# In[45]:
def training_loop_l2reg(n_epochs, optimizer, model, loss_fn,
                        train_loader):
    for epoch in range(1, n_epochs + 1):
        loss_train = 0.0
        for imgs, labels in train_loader:
            imgs = imgs.to(device=device)
            labels = labels.to(device=device)
            outputs = model(imgs)
            loss = loss_fn(outputs, labels)

            l2_lambda = 0.001
            l2_norm = sum(p.pow(2.0).sum()
                          for p in model.parameters())
            loss = loss + l2_lambda * l2_norm

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            loss_train += loss.item()
        if epoch == 1 or epoch % 10 == 0:
            print('{} Epoch {}, Training loss {}'.format(
                datetime.datetime.now(), epoch,
                loss_train / len(train_loader)))

However, the SGD optimizer in PyTorch already has a weight_decay parameter that corresponds to 2 * lambda, and it directly performs weight decay during the update as described previously.
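The claimed correspondence is easy to check numerically. The following sketch (not from the book's listings; it uses a small nn.Linear as a stand-in model) takes one SGD step with an explicit L2 term added to the loss and one with weight_decay = 2 * lambda, then confirms the resulting parameters match:

```python
import torch
from torch import nn, optim

torch.manual_seed(0)
x = torch.randn(8, 4)
y = torch.randn(8, 1)

def make_model():
    torch.manual_seed(1)      # identical initialization for both variants
    return nn.Linear(4, 1)

l2_lambda = 0.001

# Variant A: explicit L2 penalty added to the loss
model_a = make_model()
opt_a = optim.SGD(model_a.parameters(), lr=0.1)
loss = nn.functional.mse_loss(model_a(x), y)
loss = loss + l2_lambda * sum(p.pow(2.0).sum()
                              for p in model_a.parameters())
opt_a.zero_grad()
loss.backward()
opt_a.step()

# Variant B: weight_decay = 2 * lambda handled by the optimizer
model_b = make_model()
opt_b = optim.SGD(model_b.parameters(), lr=0.1,
                  weight_decay=2 * l2_lambda)
loss = nn.functional.mse_loss(model_b(x), y)
opt_b.zero_grad()
loss.backward()
opt_b.step()

# The two updates agree up to floating-point noise
assert torch.allclose(model_a.weight, model_b.weight, atol=1e-6)
```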
It is fully equivalent to adding the L2 norm of the weights to the loss, without the need for accumulating terms in the loss and involving autograd.

NOT RELYING TOO MUCH ON A SINGLE INPUT: DROPOUT
An effective strategy for combating overfitting was originally proposed in 2014 by Nitish Srivastava and coauthors from Geoff Hinton’s group in Toronto, in a paper aptly entitled “Dropout: A Simple Way to Prevent Neural Networks from Overfitting” (http://mng.bz/nPMa). Sounds like pretty much exactly what we’re looking for, right? The idea behind dropout is indeed simple: zero out a random fraction of outputs from neurons across the network, where the randomization happens at each training iteration.

This procedure effectively generates slightly different models with different neuron topologies at each iteration, giving neurons in the model less chance to coordinate in the memorization process that happens during overfitting. An alternative point of view is that dropout perturbs the features being generated by the model, exerting an effect that is close to augmentation, but this time throughout the network.

In PyTorch, we can implement dropout in a model by adding an nn.Dropout module between the nonlinear activation function and the linear or convolutional module of the subsequent layer. As an argument, we need to specify the probability with which inputs will be zeroed out.
In the case of convolutions, we’ll use the specialized nn.Dropout2d or nn.Dropout3d, which zero out entire channels of the input:

# In[47]:
class NetDropout(nn.Module):
    def __init__(self, n_chans1=32):
        super().__init__()
        self.n_chans1 = n_chans1
        self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
        self.conv1_dropout = nn.Dropout2d(p=0.4)
        self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3,
                               padding=1)
        self.conv2_dropout = nn.Dropout2d(p=0.4)
        self.fc1 = nn.Linear(8 * 8 * n_chans1 // 2, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = self.conv1_dropout(out)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = self.conv2_dropout(out)
        out = out.view(-1, 8 * 8 * self.n_chans1 // 2)
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)
        return out

Note that dropout is normally active during training, while during the evaluation of a trained model in production, dropout is bypassed or, equivalently, assigned a probability equal to zero. This is controlled through the train property of the Dropout module. Recall that PyTorch lets us switch between the two modalities by calling model.train() or model.eval() on any nn.Module subclass. The call will be automatically replicated on the submodules so that if Dropout is among them, it will behave accordingly in subsequent forward and backward passes.

KEEPING ACTIVATIONS IN CHECK: BATCH NORMALIZATION
Dropout was all the rage when, in 2015, another seminal paper was published by Sergey Ioffe and Christian Szegedy from Google, entitled “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift” (https://arxiv.org/abs/1502.03167).
The paper described a technique that had multiple beneficial effects on training: allowing us to increase the learning rate, making training less dependent on initialization, and acting as a regularizer, thus representing an alternative to dropout.

The main idea behind batch normalization is to rescale the inputs to the activations of the network so that minibatches have a certain desirable distribution. Recalling the mechanics of learning and the role of nonlinear activation functions, this helps avoid the inputs to activation functions being too far into the saturated portion of the function, thereby killing gradients and slowing training.

In practical terms, batch normalization shifts and scales an intermediate input using the mean and standard deviation collected at that intermediate location over the samples of the minibatch. The regularization effect is a result of the fact that an individual sample and its downstream activations are always seen by the model as shifted and scaled, depending on the statistics across the randomly extracted minibatch. This is in itself a form of principled augmentation. The authors of the paper suggest that using batch normalization eliminates or at least alleviates the need for dropout.

Batch normalization in PyTorch is provided through the nn.BatchNorm1d, nn.BatchNorm2d, and nn.BatchNorm3d modules, depending on the dimensionality of the input.
Since the aim of batch normalization is to rescale the inputs to the activations, the natural location is between the linear transformation (convolution, in this case) and the activation, as shown here:

# In[49]:
class NetBatchNorm(nn.Module):
    def __init__(self, n_chans1=32):
        super().__init__()
        self.n_chans1 = n_chans1
        self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
        self.conv1_batchnorm = nn.BatchNorm2d(num_features=n_chans1)
        self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3,
                               padding=1)
        self.conv2_batchnorm = nn.BatchNorm2d(num_features=n_chans1 // 2)
        self.fc1 = nn.Linear(8 * 8 * n_chans1 // 2, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = self.conv1_batchnorm(self.conv1(x))
        out = F.max_pool2d(torch.tanh(out), 2)
        out = self.conv2_batchnorm(self.conv2(out))
        out = F.max_pool2d(torch.tanh(out), 2)
        out = out.view(-1, 8 * 8 * self.n_chans1 // 2)
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)
        return out

Just as for dropout, batch normalization needs to behave differently during training and inference. In fact, at inference time, we want to avoid having the output for a specific input depend on the statistics of the other inputs we’re presenting to the model. As such, we need a way to still normalize, but this time fixing the normalization parameters once and for all.

As minibatches are processed, in addition to estimating the mean and standard deviation for the current minibatch, PyTorch also updates the running estimates for mean and standard deviation that are representative of the whole dataset, as an approximation. This way, when the user specifies model.eval() and the model contains a batch normalization module, the running estimates are frozen and used for normalization. To unfreeze the running estimates and return to using the minibatch statistics, we call model.train(), just as we did for dropout.
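A short sketch (not one of the book's listings) makes the train/eval distinction concrete for both batch norm and dropout:

```python
import torch
from torch import nn

torch.manual_seed(0)
x = torch.randn(8, 4)

# Batch norm: running estimates update in train mode, freeze in eval mode
bn = nn.BatchNorm1d(num_features=4)
bn.train()
_ = bn(x)                        # updates running_mean / running_var
frozen_mean = bn.running_mean.clone()
bn.eval()
_ = bn(x)                        # normalizes with the frozen estimates
assert torch.equal(bn.running_mean, frozen_mean)

# Dropout: active in train mode, bypassed in eval mode
drop = nn.Dropout(p=0.5)
drop.eval()
assert torch.equal(drop(x), x)   # identity at inference time
```

Calling model.train() or model.eval() on a containing module propagates the same mode switch to every submodule.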
8.5.3 Going deeper to learn more complex structures: Depth
Earlier, we talked about width as the first dimension to act on in order to make a model larger and, in a way, more capable. The second fundamental dimension is obviously depth. Since this is a deep learning book, depth is something we’re supposedly into. After all, deeper models are always better than shallow ones, aren’t they? Well, it depends. With depth, the complexity of the function the network is able to approximate generally increases. In regard to computer vision, a shallower network could identify a person’s shape in a photo, whereas a deeper network could identify the person, the face on their top half, and the mouth within the face. Depth allows a model to deal with hierarchical information when we need to understand the context in order to say something about some input.

There’s another way to think about depth: increasing depth is related to increasing the length of the sequence of operations that the network will be able to perform when processing input. This view—of a deep network that performs sequential operations to carry out a task—is likely fascinating to software developers who are used to thinking about algorithms as sequences of operations like “find the person’s boundaries, look for the head on top of the boundaries, look for the mouth within the head.”

SKIP CONNECTIONS
Depth comes with some additional challenges, which prevented deep learning models from reaching 20 or more layers until late 2015. Adding depth to a model generally makes training harder to converge. Let’s recall backpropagation and think about it in the context of a very deep network.
The derivatives of the loss function with respect to the parameters, especially those in early layers, need to be multiplied by a lot of other numbers originating from the chain of derivative operations between the loss and the parameter. Those numbers being multiplied could be small, generating ever-smaller numbers, or large, swallowing smaller numbers due to floating-point approximation. The bottom line is that a long chain of multiplications will tend to make the contribution of the parameter to the gradient vanish, leading to ineffective training of that layer, since that parameter and others like it won’t be properly updated.

In December 2015, Kaiming He and coauthors presented residual networks (ResNets), an architecture that uses a simple trick to allow very deep networks to be successfully trained (https://arxiv.org/abs/1512.03385). That work opened the door to networks ranging from tens of layers to 100 layers in depth, surpassing the then state of the art in computer vision benchmark problems. We encountered residual networks when we were playing with pretrained models in chapter 2. The trick we mentioned is the following: using a skip connection to short-circuit blocks of layers, as shown in figure 8.11. A skip connection is nothing but the addition of the input to the output of a block of layers. This is exactly how it is done in PyTorch. Let’s add one layer to our simple convolutional model, and let’s use ReLU as the activation for a change.
The vanilla module with an extra layer looks like this:

# In[51]:
class NetDepth(nn.Module):
    def __init__(self, n_chans1=32):
        super().__init__()
        self.n_chans1 = n_chans1
        self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3,
                               padding=1)
        self.conv3 = nn.Conv2d(n_chans1 // 2, n_chans1 // 2, kernel_size=3,
                               padding=1)
        self.fc1 = nn.Linear(4 * 4 * n_chans1 // 2, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.relu(self.conv1(x)), 2)
        out = F.max_pool2d(torch.relu(self.conv2(out)), 2)
        out = F.max_pool2d(torch.relu(self.conv3(out)), 2)
        out = out.view(-1, 4 * 4 * self.n_chans1 // 2)
        out = torch.relu(self.fc1(out))
        out = self.fc2(out)
        return out

Figure 8.11 The architecture of our network with three convolutional layers. The skip connection is what differentiates NetRes from NetDepth.

Adding a skip connection a la ResNet to this model amounts to adding the output of the first layer in the forward function to the input of the third layer:

# In[53]:
class NetRes(nn.Module):
    def __init__(self, n_chans1=32):
        super().__init__()
        self.n_chans1 = n_chans1
        self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3,
                               padding=1)
        self.conv3 = nn.Conv2d(n_chans1 // 2, n_chans1 // 2, kernel_size=3,
                               padding=1)
        self.fc1 = nn.Linear(4 * 4 * n_chans1 // 2, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.relu(self.conv1(x)), 2)
        out = F.max_pool2d(torch.relu(self.conv2(out)), 2)
        out1 = out
        out = F.max_pool2d(torch.relu(self.conv3(out)) + out1, 2)
        out = out.view(-1, 4 * 4 * self.n_chans1 // 2)
        out = torch.relu(self.fc1(out))
        out = self.fc2(out)
        return out

In other words, we’re using the output of the first activations as inputs to the last, in addition to the standard feed-forward path. This is also referred to as identity mapping. So, how does this alleviate the issues with vanishing gradients we were mentioning earlier?

Thinking about backpropagation, we can appreciate that a skip connection, or a sequence of skip connections in a deep network, creates a direct path from the deeper parameters to the loss. This makes their contribution to the gradient of the loss more direct, as partial derivatives of the loss with respect to those parameters have a chance not to be multiplied by a long chain of other operations. It has been observed that skip connections have a beneficial effect on convergence, especially in the initial phases of training. Also, the loss landscape of deep residual networks is a lot smoother than that of feed-forward networks of the same depth and width.

It is worth noting that skip connections were not new to the world when ResNets came along. Highway networks and U-Net made use of skip connections of one form or another. However, the way ResNets used skip connections enabled models of depths greater than 100 to be amenable to training.

Since the advent of ResNets, other architectures have taken skip connections to the next level. One in particular, DenseNet, proposed to connect each layer with several other layers downstream through skip connections, achieving state-of-the-art results with fewer parameters. By now, we know how to implement something like DenseNets: just arithmetically add earlier intermediate outputs to downstream intermediate outputs.

BUILDING VERY DEEP MODELS IN PYTORCH
We talked about exceeding 100 layers in a convolutional neural network.
How can we build that network in PyTorch without losing our minds in the process? The standard strategy is to define a building block, such as a (Conv2d, ReLU, Conv2d) + skip connection block, and then build the network dynamically in a for loop. Let’s see it done in practice. We will create the network depicted in figure 8.12.

Figure 8.12 Our deep architecture with residual connections. On the left, we define a simplistic residual block. We use it as a building block in our network, as shown on the right.

We first create a module subclass whose sole job is to provide the computation for one block—that is, one group of convolutions, activation, and skip connection:

# In[55]:
class ResBlock(nn.Module):
    def __init__(self, n_chans):
        super(ResBlock, self).__init__()
        self.conv = nn.Conv2d(n_chans, n_chans, kernel_size=3,
                              padding=1, bias=False)
        self.batch_norm = nn.BatchNorm2d(num_features=n_chans)
        torch.nn.init.kaiming_normal_(self.conv.weight,
                                      nonlinearity='relu')
        torch.nn.init.constant_(self.batch_norm.weight, 0.5)
        torch.nn.init.zeros_(self.batch_norm.bias)

    def forward(self, x):
        out = self.conv(x)
        out = self.batch_norm(out)
        out = torch.relu(out)
        return out + x

Since we’re planning to generate a deep model, we are including batch normalization in the block, since this will help prevent gradients from vanishing during training. We’d now like to generate a 100-block network. Does this mean we have to prepare for some serious cutting and pasting? Not at all; we already have the ingredients for imagining how this could look.
First, in init, we create nn.Sequential containing a list of ResBlock instances. nn.Sequential will ensure that the output of one block is used as input to the next. It will also ensure that all the parameters in the block are visible to Net. Then, in forward, we just call the sequential to traverse the 100 blocks and generate the output:

# In[56]:
class NetResDeep(nn.Module):
    def __init__(self, n_chans1=32, n_blocks=10):
        super().__init__()
        self.n_chans1 = n_chans1
        self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
        self.resblocks = nn.Sequential(
            *(n_blocks * [ResBlock(n_chans=n_chans1)]))
        self.fc1 = nn.Linear(8 * 8 * n_chans1, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.relu(self.conv1(x)), 2)
        out = self.resblocks(out)
        out = F.max_pool2d(out, 2)
        out = out.view(-1, 8 * 8 * self.n_chans1)
        out = torch.relu(self.fc1(out))
        out = self.fc2(out)
        return out

Two remarks about the ResBlock code are in order. First, the convolution is created with bias=False: the BatchNorm layer that follows would cancel the effect of a bias, so it is customarily left out. Second, the block uses custom initializations: kaiming_normal_ initializes the convolution weights with normal random elements whose standard deviation is computed as in the ResNet paper, while the batch norm is initialized to produce output distributions that initially have 0 mean and 0.5 variance.

In the implementation, we parameterize the actual number of layers, which is important for experimentation and reuse. Also, needless to say, backpropagation will work as expected. Unsurprisingly, the network is quite a bit slower to converge. It is also more fragile in convergence. This is why we used more-detailed initializations and trained our NetRes with a learning rate of 3e-3 instead of the 1e-2 we used for the other networks. We trained none of the networks to convergence, but we would not have gotten anywhere without these tweaks.
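One Python subtlety is worth verifying here: multiplying a list replicates the same object, so an expression like n_blocks * [ResBlock(...)] fills the nn.Sequential with one shared block rather than n_blocks independent ones. The following sketch (using a reduced stand-in block; the names are illustrative, not the book's) demonstrates the difference against a list comprehension:

```python
import torch
from torch import nn

# A reduced stand-in for ResBlock
class Block(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.conv = nn.Conv2d(n, n, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x)) + x

# List multiplication repeats the SAME module object,
# so every position in the Sequential shares one set of weights
shared = nn.Sequential(*(3 * [Block(8)]))
assert shared[0] is shared[1]

# A list comprehension constructs independent blocks
independent = nn.Sequential(*[Block(8) for _ in range(3)])
assert independent[0] is not independent[1]
```

Both variants run fine in the forward pass; which one you want depends on whether the blocks should share parameters.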
All this shouldn’t encourage us to seek depth on a dataset of 32 × 32 images, but it clearly demonstrates how depth can be achieved on more challenging datasets like ImageNet. It also provides the key elements for understanding existing implementations of models like ResNet, for instance, in torchvision.

INITIALIZATION
Let’s briefly comment on the earlier initialization. Initialization is one of the important tricks in training neural networks. Unfortunately, for historical reasons, PyTorch has default weight initializations that are not ideal. People are looking at fixing the situation; if progress is made, it can be tracked on GitHub (https://github.com/pytorch/pytorch/issues/18182). In the meantime, we need to fix the weight initialization ourselves. We found that our model did not converge and looked at what people commonly choose as initialization (a smaller variance in weights; and zero mean and unit variance outputs for batch norm), and then we halved the output variance in the batch norm when the network would not converge.

Weight initialization could fill an entire chapter on its own, but we think that would be excessive. In chapter 11, we’ll bump into initialization again and use what arguably could be PyTorch defaults without much explanation. Once you’ve progressed to the point where the details of weight initialization are of specific interest to you—probably not before finishing this book—you might revisit this topic.10

8.5.4 Comparing the designs from this section
We summarize the effect of each of our design modifications in isolation in figure 8.13. We should not overinterpret any of the specific numbers—our problem setup and experiments are simplistic, and repeating the experiment with different random seeds will probably generate variation at least as large as the differences in validation accuracy.
For this demonstration, we left all other things equal, from learning rate to number of epochs to train; in practice, we would try to get the best results by varying those. Also, we would likely want to combine some of the additional design elements. But a qualitative observation may be in order: as we saw in section 5.5.3, when discussing validation and overfitting, the weight decay and dropout regularizations, which have a more rigorous statistical estimation interpretation as regularization than batch norm, have a much narrower gap between the two accuracies. Batch norm, which serves more as a convergence helper, lets us train the network to nearly 100% training accuracy, so we interpret the first two as regularization.

10 The seminal paper on the topic is by X. Glorot and Y. Bengio: “Understanding the Difficulty of Training Deep Feedforward Neural Networks” (2010), which introduces PyTorch’s Xavier initializations (http://mng.bz/vxz7). The ResNet paper we mentioned expands on the topic, too, giving us the Kaiming initializations used earlier. More recently, H. Zhang et al. have tweaked initialization to the point that they do not need batch norm in their experiments with very deep residual networks (https://arxiv.org/abs/1901.09321).

8.5.5 It’s already outdated
The curse and blessing of a deep learning practitioner is that neural network architectures evolve at a very rapid pace. This is not to say that what we’ve seen in this chapter is necessarily old school, but a thorough illustration of the latest and greatest architectures is a matter for another book (and they would cease to be the latest and the greatest pretty quickly anyway). The take-home message is that we should make every effort to proficiently translate the math behind a paper into actual PyTorch code, or at least understand the code that others have written with the same intention.
In the last few chapters, you have hopefully gathered quite a few of the fundamental skills to translate ideas into implemented models in PyTorch.

Figure 8.13 The modified networks all perform similarly. (Training and validation accuracy for the baseline, width, l2 reg, dropout, batch_norm, depth, res, and res deep variants.)

8.6 Conclusion
After quite a lot of work, we now have a model that our fictional friend Jane can use to filter images for her blog. All we have to do is take an incoming image, crop and resize it to 32 × 32, and see what the model has to say about it. Admittedly, we have solved only part of the problem, but it was a journey in itself.

We have solved just part of the problem because there are a few interesting unknowns we would still have to face. One is picking out a bird or airplane from a larger image. Creating bounding boxes around objects in an image is something a model like ours can’t do.

Another hurdle concerns what happens when Fred the cat walks in front of the camera. Our model will not refrain from giving its opinion about how bird-like the cat is! It will happily output “airplane” or “bird,” perhaps with 0.99 probability. This issue of being very confident about samples that are far from the training distribution is called overgeneralization. It’s one of the main problems when we take a (presumably good) model to production in those cases where we can’t really trust the input (which, sadly, is the majority of real-world cases).

In this chapter, we have built reasonable, working models in PyTorch that can learn from images. We did it in a way that helped us build our intuition around convolutional networks. We also explored ways in which we can make our models wider and deeper, while controlling effects like overfitting. Although we have still only scratched the surface, we have taken another significant step ahead from the previous chapter.
We now have a solid basis for facing the challenges we’ll encounter when working on deep learning projects. Now that we’re familiar with PyTorch conventions and common features, we’re ready to tackle something bigger. We’re going to transition from a mode where each chapter or two presents a small problem, to spending multiple chapters breaking down a bigger, real-world problem. Part 2 uses automatic detection of lung cancer as an ongoing example; we will go from being familiar with the PyTorch API to being able to implement entire projects using PyTorch. We’ll start in the next chapter by explaining the problem from a high level, and then we’ll get into the details of the data we’ll be using.

8.7 Exercises
1 Change our model to use a 5 × 5 kernel with kernel_size=5 passed to the nn.Conv2d constructor.
  a What impact does this change have on the number of parameters in the model?
  b Does the change improve or degrade overfitting?
  c Read https://pytorch.org/docs/stable/nn.html#conv2d.
  d Can you describe what kernel_size=(1,3) will do?
  e How does the model behave with such a kernel?
2 Can you find an image that contains neither a bird nor an airplane, but that the model claims has one or the other with more than 95% confidence?
  a Can you manually edit a neutral image to make it more airplane-like?
  b Can you manually edit an airplane image to trick the model into reporting a bird?
  c Do these tasks get easier with a network with less capacity? More capacity?

8.8 Summary
- Convolution can be used as the linear operation of a feed-forward network dealing with images. Using convolution produces networks with fewer parameters, exploiting locality and featuring translation invariance.
- Stacking multiple convolutions with their activations one after the other, and using max pooling in between, has the effect of applying convolutions to increasingly smaller feature images, thereby effectively accounting for spatial relationships across larger portions of the input image as depth increases.
- Any nn.Module subclass can recursively collect and return its and its children’s parameters. This technique can be used to count them, feed them into the optimizer, or inspect their values.
- The functional API provides modules that do not depend on storing internal state. It is used for operations that do not hold parameters and, hence, are not trained.
- Once trained, parameters of a model can be saved to disk and loaded back in with one line of code each.

Part 2 Learning from images in the real world: Early detection of lung cancer

Part 2 is structured differently than part 1; it’s almost a book within a book. We’ll take a single use case and explore it in depth over the course of several chapters, starting with the basic building blocks we learned in part 1, and building out a more complete project than we’ve seen so far. Our first attempts are going to be incomplete and inaccurate, and we’ll explore how to diagnose those problems and then fix them. We’ll also identify various other improvements to our solution, implement them, and measure their impact. In order to train the models we’ll develop in part 2, you will need access to a GPU with at least 8 GB of RAM, as well as several hundred gigabytes of free disk space to store the training data.

Chapter 9 introduces the project, environment, and data we will consume, and the structure of the project we’ll implement.
Chapter 10 shows how we can turn our data into a PyTorch dataset, and chapters 11 and 12 introduce our classification model: the metrics we need to gauge how well the model is training, and implement solutions to problems preventing the model from training well. In chapter 13, we'll shift gears to the beginning of the end-to-end project by creating a segmentation model that produces a heatmap rather than a single classification. That heatmap will be used to generate locations to classify. Finally, in chapter 14, we'll combine our segmentation and classification models to perform a final diagnosis.

9 Using PyTorch to fight cancer

We have two main goals for this chapter. We'll start by covering the overall plan for part 2 of the book so that we have a solid idea of the larger scope the following individual chapters will be building toward. In chapter 10, we will begin to build out the data-parsing and data-manipulation routines that will produce data to be consumed in chapter 11 while training our first model. In order to do what's needed for those upcoming chapters well, we'll also use this chapter to cover some of the context in which our project will be operating: we'll go over data formats, data sources, and explore the constraints that our problem domain places on us.
Get used to performing these tasks, since you'll have to do them for any serious deep learning project!

This chapter covers
 Breaking a large problem into smaller, easier ones
 Exploring the constraints of an intricate deep learning problem, and deciding on a structure and approach
 Downloading the training data

9.1 Introduction to the use case

Our goal for this part of the book is to give you the tools to deal with situations where things aren't working, which is a far more common state of affairs than part 1 might have led you to believe. We can't predict every failure case or cover every debugging technique, but hopefully we'll give you enough to not feel stuck when you encounter a new roadblock. Similarly, we want to help you avoid situations with your own projects where you have no idea what you could do next when your projects are underperforming. Instead, we hope your ideas list will be so long that the challenge will be to prioritize!

In order to present these ideas and techniques, we need a context with some nuance and a fair bit of heft to it. We've chosen automatic detection of malignant tumors in the lungs using only a CT scan of a patient's chest as input. We'll be focusing on the technical challenges rather than the human impact, but make no mistake—even from just an engineering perspective, part 2 will require a more serious, structured approach than we needed in part 1 in order to have the project succeed.

NOTE CT scans are essentially 3D X-rays, represented as a 3D array of single-channel data. We'll cover them in more detail soon.

As you might have guessed, the title of this chapter is more eye-catching, implied hyperbole than anything approaching a serious statement of intent.
Let us be precise: our project in this part of the book will take three-dimensional CT scans of human torsos as input and produce as output the location of suspected malignant tumors, if any exist.

Detecting lung cancer early has a huge impact on survival rate, but is difficult to do manually, especially in any comprehensive, whole-population sense. Currently, the work of reviewing the data must be performed by highly trained specialists, requires painstaking attention to detail, and is dominated by cases where no cancer exists. Doing that job well is akin to being placed in front of 100 haystacks and being told, "Determine which of these, if any, contain a needle." Searching this way results in the potential for missed warning signs, particularly in the early stages when the hints are more subtle. The human brain just isn't built well for that kind of monotonous work.

And that, of course, is where deep learning comes in. Automating this process is going to give us experience working in an uncooperative environment where we have to do more work from scratch, and there are fewer easy answers to problems that we might run into. Together, we'll get there, though! Once you're finished reading part 2, we think you'll be ready to start working on a real-world, unsolved problem of your own choosing.

We chose this problem of lung tumor detection for a few reasons. The primary reason is that the problem itself is unsolved! This is important, because we want to make it clear that you can use PyTorch to tackle cutting-edge projects effectively. We hope that increases your confidence in PyTorch as a framework, as well as in yourself as a developer. Another nice aspect of this problem space is that while it's unsolved, a lot of teams have been paying attention to it recently and have seen promising results.
That means this challenge is probably right at the edge of our collective ability to solve; we won't be wasting our time on a problem that's actually decades away from reasonable solutions. That attention on the problem has also resulted in a lot of high-quality papers and open source projects, which are a great source of inspiration and ideas. This will be a huge help once we conclude part 2 of the book, if you are interested in continuing to improve on the solution we create. We'll provide some links to additional information in chapter 14.

This part of the book will remain focused on the problem of detecting lung tumors, but the skills we'll teach are general. Learning how to investigate, preprocess, and present your data for training is important no matter what project you're working on. While we'll be covering preprocessing in the specific context of lung tumors, the general idea is that this is what you should be prepared to do for your project to succeed. Similarly, setting up a training loop, getting the right performance metrics, and tying the project's models together into a final application are all general skills that we'll employ as we go through chapters 9 through 14.

NOTE While the end result of part 2 will work, the output will not be accurate enough to use clinically. We're focusing on using this as a motivating example for teaching PyTorch, not on employing every last trick to solve the problem.

9.2 Preparing for a large-scale project

This project will build off of the foundational skills learned in part 1. In particular, the content covering model construction from chapter 8 will be directly relevant. Repeated convolutional layers followed by a resolution-reducing downsampling layer will still make up the majority of our model. We will use 3D data as input to our model, however.
This is conceptually similar to the 2D image data used in the last few chapters of part 1, but we will not be able to rely on all of the 2D-specific tools available in the PyTorch ecosystem.

The main differences between the work we did with convolutional models in chapter 8 and what we'll do in part 2 are related to how much effort we put into things outside the model itself. In chapter 8, we used a provided, off-the-shelf dataset and did little data manipulation before feeding the data into a model for classification. Almost all of our time and attention were spent building the model itself, whereas now we're not even going to begin designing the first of our two model architectures until chapter 11. That is a direct consequence of having nonstandard data without prebuilt libraries ready to hand us training samples suitable to plug into a model. We'll have to learn about our data and implement quite a bit ourselves.

Even when that's done, this will not end up being a case where we convert the CT to a tensor, feed it into a neural network, and have the answer pop out the other side. As is common for real-world use cases such as this, a workable approach will be more complicated to account for confounding factors such as limited data availability, finite computational resources, and limitations on our ability to design effective models. Please keep that in mind as we build to a high-level explanation of our project architecture.

Speaking of finite computational resources, part 2 will require access to a GPU to achieve reasonable training speeds, preferably one with at least 8 GB of RAM. Trying to train the models we will build on CPU could take weeks!1 If you don't have a GPU handy, we provide pretrained models in chapter 14; the nodule analysis script there can probably be run overnight.
While we do n’t want to tie the book to proprietary ser- vices if we don’t have to, we should note that at the time of writing, Colaboratory (https://colab.research.google.com ) provides free GPU instances that might be of use. PyTorch even comes preinstalled! You wi ll also need to have at least 220 GB of free disk space to store the raw training data, cached data, and trained models. NOTE Many of the code examples presented in part 2 have complicating details omitted. Rather than clutter the examples with logging, error han- dling, and edge cases, the text of this book contains only code that expresses the core idea under discussion. Full working code samples can be found on the book’s website ( www.manning.com/books/deep -learning-with-pytorch ) and GitHub ( https://github.com/deep-learning-with-pytorch/dlwpt-code ). OK, we’ve established that this is a hard, multifaceted problem, bu t what are we going to do about it? Instead of looking at an en tire CT scan for signs of tumors or their potential malignancy, we’re going to solve a series of simpler problems that will com- bine to provide the end-to-end result we’re interested in. Like a factory assembly line, each step will take raw materials (data) an d/or output from previous steps, perform some processing, and hand off the result to the next station down the line. Not every problem needs to be solved this way, but br eaking off chunks of the problem to solve in isolation is often a great way to start. Even if it turns out to be the wrong approachfor a given project, it’s likely we’ll have learned enough while working on the individ- ual chunks that we’ll have a good idea how to restructure our approach into some- thing successful. Before we get into the details of how we’ll break down our problem, we need to learn some details about the medical domain . While the code listings will tell you what we’re doing, learning about ra diation oncology will explain why. 
Learning about the problem space is crucial, no matter what domain it is. Deep learning is powerful, but it's not magic, and trying to apply it blindly to nontrivial problems will likely fail. Instead, we have to combine insights into the space with intuition about neural network behavior. From there, disciplined experimentation and refinement should give us enough information to close in on a workable solution.

1 We presume—we haven't tried it, much less timed it.

9.3 What is a CT scan, exactly?

Before we get too far into the project, we need to take a moment to explain what a CT scan is. We will be using data from CT scans extensively as the main data format for our project, so having a working understanding of the data format's strengths, weaknesses, and fundamental nature will be crucial to utilizing it well. The key point we noted earlier is this: CT scans are essentially 3D X-rays, represented as a 3D array of single-channel data. As we might recall from chapter 4, this is like a stacked set of grayscale PNG images.

In addition to medical data, we can see similar voxel data in fluid simulations, 3D scene reconstructions from 2D images, light detection and ranging (LIDAR) data for self-driving cars, and many other problem spaces. Those spaces all have their individual quirks and subtleties, and while the APIs that we're going to cover here apply generally, we must also be aware of the nature of the data we're using with those APIs if we want to be effective.

Each voxel of a CT scan has a numeric value that roughly corresponds to the average mass density of the matter contained inside. Most visualizations of that data show high-density material like bones and metal implants as white, low-density air and lung tissue as black, and fat and tissue as various shades of gray. Again, this ends up looking somewhat similar to an X-ray, with some key differences.
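That "stacked set of grayscale images" framing maps directly onto array code. Here is a minimal sketch of the idea; the shape is made up for illustration, and real CT loading doesn't arrive until chapter 10:

```python
import numpy as np
import torch

# A fake CT volume laid out as (index, row, column):
# a stack of 2D, single-channel transverse slices.
ct_a = np.zeros((128, 512, 512), dtype=np.float32)

# Indexing along the first axis gives an ordinary 2D grayscale image.
slice_2d = ct_a[60]
print(slice_2d.shape)  # (512, 512)

# PyTorch's 3D layers expect (N, C, depth, height, width), so a full
# scan becomes a 5D tensor with one sample and one channel.
ct_t = torch.from_numpy(ct_a).unsqueeze(0).unsqueeze(0)
print(ct_t.shape)  # torch.Size([1, 1, 128, 512, 512])
```

The only real difference from the 2D images of part 1 is the extra depth axis; everything else about tensor handling carries over.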
The primary difference between CT scans and X-rays is that whereas an X-ray is a projection of 3D intensity (in this case, tissue and bone density) onto a 2D plane, a CT scan retains the third dimension of the data. This allows us to render the data in a variety of ways: for example, as a grayscale solid, which we can see in figure 9.1.

Voxel
A voxel is the 3D equivalent to the familiar two-dimensional pixel. It encloses a volume of space (hence, "volumetric pixel"), rather than an area, and is typically arranged in a 3D grid to represent a field of data. Each of those dimensions will have a measurable distance associated with it. Often, voxels are cubic, but for this chapter, we will be dealing with voxels that are rectangular prisms.

Figure 9.1 A CT scan of a human torso showing, from the top, skin, organs, spine, and patient support bed. Source: http://mng.bz/04r6; Mindways CT Software / CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/deed.en).

NOTE CT scans actually measure radiodensity, which is a function of both mass density and atomic number of the material under examination. For our purposes here, the distinction isn't relevant, since the model will consume and learn from the CT data no matter what the exact units of the input happen to be.

This 3D representation also allows us to "see inside" the subject by hiding tissue types we are not interested in. For example, we can render the data in 3D and restrict visibility to only bone and lung tissue, as in figure 9.2.

CT scans are much more difficult to acquire than X-rays, because doing so requires a machine like the one shown in figure 9.3 that typically costs upward of a million dollars new and requires trained staff to operate it. Most hospitals and some well-equipped clinics have a CT scanner, but they aren't nearly as ubiquitous as X-ray machines.
This, combined with patient privacy regulations, can make it somewhat difficult to get CT scans unless someone has already done the work of gathering and organizing a collection of them.

Figure 9.3 also shows an example bounding box for the area contained in the CT scan. The bed the patient is resting on moves back and forth, allowing the scanner to image multiple slices of the patient and hence fill the bounding box. The scanner's darker, central ring is where the actual imaging equipment is located.

A final difference between a CT scan and an X-ray is that the data is a digital-only format. CT stands for computed tomography (https://en.wikipedia.org/wiki/CT_scan#Process).

Figure 9.2 A CT scan showing ribs, spine, and lung structures

The raw output of the scanning process doesn't look particularly meaningful to the human eye and must be properly reinterpreted by a computer into something we can understand. The settings of the CT scanner when the scan is taken can have a large impact on the resulting data.

While this information might not seem particularly relevant, we have actually learned something that is: from figure 9.3, we can see that the way the CT scanner measures distance along the head-to-foot axis is different than the other two axes. The patient actually moves along that axis! This explains (or at least is a strong hint as to) why our voxels might not be cubic, and also ties into how we approach massaging our data in chapter 12. This is a good example of why we need to understand our problem space if we're going to make effective choices about how to solve our problem. When starting to work on your own projects, be sure you do the same investigation into the details of your data.
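Because the patient moves through the scanner, the voxel spacing along the head-to-foot axis typically differs from the in-plane spacing, and converting a voxel index to a physical offset is a per-axis multiply. A minimal sketch; the spacing numbers here are made-up but typical-looking values, not taken from any particular dataset:

```python
import numpy as np

# Voxel spacing in millimeters along (index, row, column); illustrative
# values only. Note the coarser spacing along the head-to-foot axis.
vx_size_mm = np.array([2.5, 0.7, 0.7])

# A voxel's physical offset from the array origin is its index
# scaled by the per-axis spacing.
voxel_irc = np.array([60, 256, 300])
offset_mm = voxel_irc * vx_size_mm
print(offset_mm)  # physical offset in millimeters, one value per axis
```

With these numbers, 60 slices along the first axis cover 150 mm of the patient, while 60 voxels in-plane cover only 42 mm; treating the voxels as cubes would visibly distort the anatomy.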
9.4 The project: An end-to-end detector for lung cancer

Now that we've got our heads wrapped around the basics of CT scans, let's discuss the structure of our project. Most of the bytes on disk will be devoted to storing the CT scans' 3D arrays containing density information, and our models will primarily consume various subslices of those 3D arrays. We're going to use five main steps to go from examining a whole-chest CT scan to giving the patient a lung cancer diagnosis.

Our full, end-to-end solution shown in figure 9.4 will load CT data files to produce a Ct instance that contains the full 3D scan, combine that with a module that performs segmentation (flagging voxels of interest), and then group the interesting voxels into small lumps in the search for candidate nodules. The nodule locations are combined back with the CT voxel data to produce nodule candidates, which can then be examined by our nodule classification model to determine whether they are actually nodules in the first place and, eventually, whether they're malignant. This latter task is particularly difficult because malignancy might not be apparent from CT imaging alone, but we'll see how far we get. Last, each of those individual, per-nodule classifications can then be combined into a whole-patient diagnosis.

Figure 9.3 A patient inside a CT scanner, with the CT scan's bounding box overlaid. Other than in stock photos, patients don't typically wear street clothes while in the machine.

In more detail, we will do the following:

1 Load our raw CT scan data into a form that we can use with PyTorch. Putting raw data into a form usable by PyTorch will be the first step in any project you face. The process is somewhat less complicated with 2D image data and simpler still with non-image data.
Figure 9.4 The end-to-end process of taking a full-chest CT scan and determining whether the patient has a malignant tumor. (Step 1, ch. 10: data loading; step 2, ch. 13: segmentation; step 3, ch. 14: grouping; step 4, ch. 11+12: classification; step 5, ch. 14: nodule analysis and diagnosis.)

Nodules
A mass of tissue made of proliferating cells in the lung is a tumor. A tumor can be benign or it can be malignant, in which case it is also referred to as cancer. A small tumor in the lung (just a few millimeters wide) is called a nodule. About 40% of lung nodules turn out to be malignant—small cancers. It is very important to catch those as early as possible, and this depends on medical imaging of the kind we are looking at here.

2 Identify the voxels of potential tumors in the lungs using PyTorch to implement a technique known as segmentation. This is roughly akin to producing a heatmap of areas that should be fed into our classifier in step 3. This will allow us to focus on potential tumors inside the lungs and ignore huge swaths of uninteresting anatomy (a person can't have lung cancer in the stomach, for example).
  Generally, being able to focus on a single, small task is best while learning. With experience, there are some situations where more complicated model structures can yield superlative results (for example, the GAN game we saw in chapter 2), but designing those from scratch requires extensive mastery of the basic building blocks first. Gotta walk before you run, and all that.
3 Group interesting voxels into lumps: that is, candidate nodules (see figure 9.5 for more information on nodules). Here, we will find the rough center of each hotspot on our heatmap. Each nodule can be located by the index, row, and column of its center point.
  We do this to present a simple, constrained problem to the final classifier. Grouping voxels will not involve PyTorch directly, which is why we've pulled this out into a separate step. Often, when working with multistep solutions, there will be non-deep-learning glue steps between the larger, deep-learning-powered portions of the project.
4 Classify candidate nodules as actual nodules or non-nodules using 3D convolution. This will be similar in concept to the 2D convolution we covered in chapter 8. The features that determine the nature of a tumor from a candidate structure are local to the tumor in question, so this approach should provide a good balance between limiting input data size and excluding relevant information. Making scope-limiting decisions like this can keep each individual task constrained, which can help limit the amount of things to examine when troubleshooting.
5 Diagnose the patient using the combined per-nodule classifications. Similar to the nodule classifier in the previous step, we will attempt to determine whether the nodule is benign or malignant based on imaging data alone. We will take a simple maximum of the per-tumor malignancy predictions, as only one tumor needs to be malignant for a patient to have cancer. Other projects might want to use different ways of aggregating the per-instance predictions into a final score. Here, we are asking, "Is there anything suspicious?" so maximum is a good fit for aggregation. If we were looking for quantitative information like "the ratio of type A tissue to type B tissue," we might take an appropriate mean instead.

Figure 9.4 only depicts the final path through the system once we've built and trained all of the requisite models. The actual work required to train the relevant models will be detailed as we get closer to implementing each step. The data we'll use for training provides human-annotated output for both steps 3 and 4.
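The shape of the data flow through those five steps can be sketched as a chain of plain Python functions. Every function below is a made-up stub, not the book's code; the real implementations fill chapters 10 through 14. The point is only the hand-offs between steps and the max-based aggregation at the end:

```python
# Placeholder stubs sketching the five-step data flow.
def load_ct(path):                    # step 1 (ch. 10): raw files -> 3D array
    return "fake-ct"

def segment(ct):                      # step 2 (ch. 13): per-voxel heatmap of interest
    return "fake-heatmap"

def group(heatmap):                   # step 3 (ch. 14): heatmap -> candidate centers
    return [(60, 256, 300), (90, 120, 400)]  # (index, row, column) tuples

def classify_nodule(ct, center_irc):  # step 4 (ch. 11-12): candidate -> malignancy prob
    return 0.1 if center_irc[0] < 80 else 0.9

def diagnose(ct_path):                # step 5 (ch. 14): whole-patient diagnosis
    ct = load_ct(ct_path)
    candidates = group(segment(ct))
    per_nodule = [classify_nodule(ct, irc) for irc in candidates]
    # One malignant nodule is enough for a cancer diagnosis, so take the max.
    return max(per_nodule, default=0.0)

print(diagnose("some_scan.mhd"))  # 0.9 with these stub values
```

Note that only steps 2 and 4 involve trained models; steps 1, 3, and 5 are the kind of non-deep-learning glue described above.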
This allows us to treat steps 2 and 3 (identifying voxels and grouping them into nodule candidates) as almost a separate project from step 4 (nodule candidate classification). Human experts have annotated the data with nodule locations, so we can work on either steps 2 and 3 or step 4 in whatever order we prefer.

We will first work on step 1 (data loading), and then jump to step 4 before we come back and implement steps 2 and 3, since step 4 (classification) requires an approach similar to what we used in chapter 8, using multiple convolutional and pooling layers to aggregate spatial information before feeding it into a linear classifier. Once we've got a handle on our classification model, we can start working on step 2 (segmentation). Since segmentation is the more complicated topic, we want to tackle it without having to learn both segmentation and the fundamentals of CT scans and malignant tumors at the same time. Instead, we'll explore the cancer-detection space while working on a more familiar classification problem.

This approach of starting in the middle of the problem and working our way out probably seems odd. Starting at step 1 and working our way forward would make more intuitive sense. Being able to carve up the problem and work on steps independently is useful, however, since it can encourage more modular solutions; in addition, it's easier to partition the workload between members of a small team. Also, actual clinical users would likely prefer a system that flags suspicious nodules for review rather than provides a single binary diagnosis. Adapting our modular solution to different use cases will probably be easier than if we'd done a monolithic, from-the-top system.

As we work our way through implementing each step, we'll be going into a fair bit of detail about lung tumors, as well as presenting a lot of fine-grained detail about CT scans.
While that might seem off-topic for a book that's focused on PyTorch, we're doing so specifically so that you begin to develop an intuition about the problem space. That's crucial to have, because the space of all possible solutions and approaches is too large to effectively code, train, and evaluate.

If we were working on a different project (say, the one you tackle after finishing this book), we'd still need to do an investigation to understand the data and problem space. Perhaps you're interested in satellite mapping, and your next project needs to consume pictures of our planet taken from orbit. You'd need to ask questions about the wavelengths being collected—do you get only normal RGB, or something more exotic? What about infrared or ultraviolet? In addition, there might be impacts on the images based on time of day, or if the imaged location isn't directly under the satellite, skewing the image. Will the image need correction?

On the shoulders of giants
We are standing on the shoulders of giants when deciding on this five-step approach. We'll discuss these giants and their work more in chapter 14. There isn't any particular reason why we should know in advance that this project structure will work well for this problem; instead, we're relying on others who have actually implemented similar things and reported success when doing so. Expect to have to experiment to find workable approaches when transitioning to a different domain, but always try to learn from earlier efforts in the space and from those who have worked in similar areas and have discovered things that might transfer well. Go out there, look for what others have done, and use that as a benchmark. At the same time, avoid getting code and running it blindly, because you need to fully understand the code you're running in order to use the results to make progress for yourself.
Even if your hypothetical third project's data type remains the same, it's probable that the domain you'll be working in will change things, possibly drastically. Processing camera output for self-driving cars still involves 2D images, but the complications and caveats are wildly different. For example, it's much less likely that a mapping satellite will need to worry about the sun shining into the camera, or getting mud on the lens!

We must be able to use our intuition to guide our investigation into potential optimizations and improvements. That's true of deep learning projects in general, and we'll practice using our intuition as we go through part 2. So, let's do that. Take a quick step back, and do a gut check. What does your intuition say about this approach? Does it seem overcomplicated to you?

9.4.1 Why can't we just throw data at a neural network until it works?

After reading the last section, we couldn't blame you for thinking, "This is nothing like chapter 8!" You might be wondering why we've got two separate model architectures or why the overall data flow is so complicated. Well, our approach is different from that in chapter 8 for a reason. It's a hard task to automate, and people haven't fully figured it out yet. That difficulty translates to complexity; once we as a society have solved this problem definitively, there will probably be an off-the-shelf library package we can grab to have it Just Work, but we're not there just yet.

Why so difficult, though? Well, for starters, the majority of a CT scan is fundamentally uninteresting with regard to answering the question, "Does this patient have a malignant tumor?" This makes intuitive sense, since the vast majority of the patient's body will consist of healthy cells. In the cases where there is a malignant tumor, up to 99.9999% of the voxels in the CT still won't be cancer.
That ratio is equivalent to a two-pixel blob of incorrectly tinted color somewhere on a high-definition television, or a single misspelled word out of a shelf of novels. Can you identify the white dot in the three views of figure 9.5 that has been flagged as a nodule?2 If you need a hint, the index, row, and column values can be used to help find the relevant blob of dense tissue. Do you think you could figure out the relevant properties of tumors given only images (and that means only the images—no index, row, and column information!) like these? What if you were given the entire 3D scan, not just three slices that intersect the interesting part of the scan?

NOTE Don't fret if you can't locate the tumor! We're trying to illustrate just how subtle this data can be—the fact that it is hard to identify visually is the entire point of this example.

2 The series_uid of this sample is 1.3.6.1.4.1.14519.5.2.1.6279.6001.126264578931778258890371755354, which can be useful if you'd like to look at it in detail later.

You might have seen elsewhere that end-to-end approaches for detection and classification of objects are very successful in general vision tasks. TorchVision includes end-to-end models like Fast R-CNN/Mask R-CNN, but these are typically trained on hundreds of thousands of images, and those datasets aren't constrained by the number of samples from rare classes. The project architecture we will use has the benefit of working well with a more modest amount of data. So while it's certainly theoretically possible to just throw an arbitrarily large amount of data at a neural network until it learns the specifics of the proverbial lost needle, as well as how to ignore the hay, it's going to be practically prohibitive to collect enough data and wait for a long enough time to train the network properly.
That won't be the best approach since the results are poor, and most readers won't have access to the compute resources to pull it off at all. To come up with the best solution, we could investigate proven model designs that can better integrate data in an end-to-end manner.3 These complicated designs are capable of producing high-quality results, but they're not the best because understanding the design decisions behind them requires having mastered fundamental concepts first. That makes these advanced models poor candidates to use while teaching those same fundamentals!

That's not to say that our multistep design is the best approach, either, but that's because "best" is only relative to the criteria we chose to evaluate approaches. There are many "best" approaches, just as there are many goals we could have in mind as we work on a project. Our self-contained, multistep approach has some disadvantages as well.

Recall the GAN game from chapter 2. There, we had two networks cooperating to produce convincing forgeries of old master artists. The artist would produce a candidate work, and the scholar would critique it, giving the artist feedback on how to improve.

3 For example, Retina U-Net (https://arxiv.org/pdf/1811.08661.pdf) and FishNet (http://mng.bz/K240).

Figure 9.5 A CT scan with approximately 1,000 structures that look like tumors to the untrained eye. Exactly one (at index 522, row 267, col 367) has been identified as a nodule when reviewed by a human specialist. The rest are normal anatomical structures like blood vessels, lesions, and other non-problematic lumps.
Put in technical terms, the structure of the model allowed gradients to backpropagate from the final classifier (fake or real) to the earliest parts of the project (the artist).

Our approach for solving the problem won't use end-to-end gradient backpropagation to directly optimize for our end goal. Instead, we'll optimize discrete chunks of the problem individually, since our segmentation model and classification model won't be trained in tandem with each other. That might limit the top-end effectiveness of our solution, but we feel that this will make for a much better learning experience.

We feel that being able to focus on a single step at a time allows us to zoom in and concentrate on the smaller number of new skills we're learning. Each of our two models will be focused on performing exactly one task. Similar to a human radiologist as they review slice after slice of CT, the job gets much easier to train for if the scope is well contained. We also want to provide tools that allow for rich manipulation of the data. Being able to zoom in and focus on the detail of a particular location will have a huge impact on overall productivity while training the model compared to having to look at the entire image at once. Our segmentation model is forced to consume the entire image, but we will structure things so that our classification model gets a zoomed-in view of the areas of interest.

Step 3 (grouping) will produce and step 4 (classification) will consume data similar to the image in figure 9.6, containing sequential transverse slices of a tumor. This image is a close-up view of a (potentially malignant, or at least indeterminate) tumor, and it is what we're going to train the step 4 model to identify, and the step 5 model to classify as either benign or malignant.
While this lump may seem nondescript to an untrained eye (or an untrained convolutional network), identifying the warning signs of malignancy in this sample is at least a far more constrained problem than having to consume the entire CT we saw earlier. Our code for the next chapter will provide routines to produce zoomed-in nodule images like figure 9.6.

We will perform the step 1 data-loading work in chapter 10, and chapters 11 and 12 will focus on solving the problem of classifying these nodules. After that, we'll back up to work on step 2 (using segmentation to find the candidate tumors) in chapter 13, and then we'll close out part 2 of the book in chapter 14 by implementing the end-to-end project with step 3 (grouping) and step 5 (nodule analysis and diagnosis).

NOTE Standard rendering of CTs places the superior at the top of the image (basically, the head goes up), but CTs order their slices such that the first slice is the inferior (toward the feet). So, Matplotlib renders the images upside down unless we take care to flip them. Since that flip doesn't really matter to our model, we won't complicate the code paths between our raw data and the model, but we will add a flip to our rendering code to get the images right-side up. For more information about CT coordinate systems, see section 10.4.

Figure 9.6 A close-up, multislice crop of the tumor from the CT scan in figure 9.5 (slices 5 through 21)

Let's repeat our high-level overview in figure 9.7.

9.4.2 What is a nodule?
As we've said, in order to understand our data well enough to use it effectively, we need to learn some specifics about cancer and radiation oncology. One last key thing we need to understand is what a nodule is. Simply put, a nodule is any of the myriad lumps and bumps that might appear inside someone's lungs. Some are problematic from a health-of-the-patient perspective; some are not. The precise definition4 limits the size of a nodule to 3 cm or less, with a larger lump being a lung mass; but we're going to use nodule interchangeably for all such anatomical structures, since it's a somewhat arbitrary cutoff and we're going to deal with lumps on both sides of 3 cm using the same code paths. A nodule, a small mass in the lung, can turn out to be benign or a malignant tumor (also referred to as cancer). From a radiological perspective, a nodule is really similar to other lumps that have a wide variety of causes: infection, inflammation, blood-supply issues, malformed blood vessels, and diseases other than tumors.

4 Eric J. Olson, "Lung nodules: Can they be cancerous?" Mayo Clinic, http://mng.bz/yyge.

Figure 9.7 The end-to-end process of taking a full-chest CT scan and determining whether the patient has a malignant tumor

The key part is this: the cancers that we are trying to detect will always be nodules, either suspended in the very non-dense tissue of the lung or attached to the lung wall. That means we can limit our classifier to only nodules, rather than have it examine all tissue.
Being able to restrict the scope of expected inputs will help our classifier learn the task at hand. This is another example of how the underlying deep learning techniques we'll use are universal, but they can't be applied blindly.5 We'll need to understand the field we're working in to make choices that will serve us well.

In figure 9.8, we can see a stereotypical example of a malignant nodule. The smallest nodules we'll be concerned with are only a few millimeters across, though the one in figure 9.8 is larger. As we discussed earlier in the chapter, this makes the smallest nodules approximately a million times smaller than the CT scan as a whole. More than half of the nodules detected in patients are not malignant.6

5 Not if we want decent results, at least.

6 According to the National Cancer Institute Dictionary of Cancer Terms: http://mng.bz/jgBP.

Figure 9.8 A CT scan with a malignant nodule displaying a visual discrepancy from other nodules

9.4.3 Our data source: The LUNA Grand Challenge

The CT scans we were just looking at come from the LUNA (LUng Nodule Analysis) Grand Challenge. The LUNA Grand Challenge is the combination of an open dataset with high-quality labels of patient CT scans (many with lung nodules) and a public ranking of classifiers against the data. There is something of a culture of publicly sharing medical datasets for research and analysis; open access to such data allows researchers to use, combine, and perform novel work on this data without having to enter into formal research agreements between institutions (obviously, some data is kept private as well).
The goal of the LUNA Grand Challenge is to encourage improvements in nodule detection by making it easy for teams to compete for high positions on the leaderboard. A project team can test the efficacy of their detection methods against standardized criteria (the dataset provided). To be included in the public ranking, a team must provide a scientific paper describing the project architecture, training methods, and so on. This makes for a great resource to provide further ideas and inspiration for project improvements.

NOTE Many CT scans "in the wild" are incredibly messy, in terms of idiosyncrasies between various scanners and processing programs. For example, some scanners indicate areas of the CT scan that are outside of the scanner's field of view by setting the density of those voxels to something negative. CT scans can also be acquired with a variety of settings on the CT scanner, which can change the resulting image in ways ranging from subtly to wildly different. Although the LUNA data is generally clean, be sure to check your assumptions if you incorporate other data sources.

We will be using the LUNA 2016 dataset. The LUNA site (https://luna16.grand-challenge.org/Description) describes two tracks for the challenge: the first track, "Nodule detection (NDET)," roughly corresponds to our step 2 (segmentation); and the second track, "False positive reduction (FPRED)," is similar to our step 4 (classification). When the site discusses "locations of possible nodules," it is talking about a process similar to what we'll cover in chapter 13.

9.4.4 Downloading the LUNA data

Before we go any further into the nuts and bolts of our project, we'll cover how to get the data we'll be using. It's about 60 GB of data compressed, so depending on your internet connection, it might take a while to download.
Once uncompressed, it takes up about 120 GB of space; and we'll need another 100 GB or so of cache space to store smaller chunks of data so that we can access it more quickly than reading in the whole CT.7

7 The cache space required is per chapter, but once you're done with a chapter, you can delete the cache to free up space.

Navigate to https://luna16.grand-challenge.org/download and either register using email or use the Google OAuth login. Once logged in, you should see two download links to Zenodo data, as well as a link to Academic Torrents. The data should be the same from either.

TIP The luna.grand-challenge.org domain does not have links to the data download page as of this writing. If you are having issues finding the download page, double-check the domain for luna16., not luna., and reenter the URL if needed.

The data we will be using comes in 10 subsets, aptly named subset0 through subset9. Unzip each of them so you have separate subdirectories like code/data-unversioned/part2/luna/subset0, and so on. On Linux, you'll need the 7z decompression utility (Ubuntu provides this via the p7zip-full package). Windows users can get an extractor from the 7-Zip website (www.7-zip.org). Some decompression utilities will not be able to open the archives; make sure you have the full version of the extractor if you get an error.

In addition, you need the candidates.csv and annotations.csv files. We've included these files on the book's website and in the GitHub repository for convenience, so they should already be present in code/data/part2/luna/*.csv. They can also be downloaded from the same location as the data subsets.

NOTE If you do not have easy access to ~220 GB of free disk space, it's possible to run the examples using only 1 or 2 of the 10 subsets of data.
The smaller training set will result in the model performing much more poorly, but that's better than not being able to run the examples at all. Once you have the candidates file and at least one subset downloaded, uncompressed, and put in the correct location, you should be able to start running the examples in this chapter. If you want to jump ahead, you can use the code/p2ch09_explore_data.ipynb Jupyter Notebook to get started. Otherwise, we'll return to the notebook in more depth later in the chapter. Hopefully your downloads will finish before you start reading the next chapter!

9.5 Conclusion

We've made major strides toward finishing our project! You might have the feeling that we haven't accomplished much; after all, we haven't implemented a single line of code yet. But keep in mind that you'll need to do research and preparation as we have here when you tackle projects on your own. In this chapter, we set out to do two things:

1 Understand the larger context around our lung cancer-detection project
2 Sketch out the direction and structure of our project for part 2

If you still feel that we haven't made real progress, please recognize that mindset as a trap: understanding the space your project is working in is crucial, and the design work we've done will pay off handsomely as we move forward. We'll see those dividends shortly, once we start implementing our data-loading routines in chapter 10. Since this chapter has been informational only, without any code, we'll skip the exercises for now.

9.6 Summary

■ Our approach to detecting cancerous nodules will have five rough steps: data loading, segmentation, grouping, classification, and nodule analysis and diagnosis.
■ Breaking down our project into smaller, semi-independent subprojects makes teaching each subproject easier. Other approaches might make more sense for future projects with different goals than the ones for this book.
■ A CT scan is a 3D array of intensity data with approximately 32 million voxels, which is around a million times larger than the nodules we want to recognize. Focusing the model on a crop of the CT scan relevant to the task at hand will make it easier to get reasonable results from training.
■ Understanding our data will make it easier to write processing routines for our data that don't distort or destroy important aspects of the data.
■ The array of CT scan data typically will not have cubic voxels; mapping location information in real-world units to array indexes requires conversion.
■ The intensity of a CT scan corresponds roughly to mass density but uses unique units.
■ Identifying the key concepts of a project and making sure they are well represented in our design can be crucial. Most aspects of our project will revolve around nodules, which are small masses in the lungs and can be spotted on a CT along with many other structures that have a similar appearance.
■ We are using the LUNA Grand Challenge data to train our model. The LUNA data contains CT scans, as well as human-annotated outputs for classification and grouping. Having high-quality data has a major impact on a project's success.

10 Combining data sources into a unified dataset

Now that we've discussed the high-level goals for part 2, as well as outlined how the data will flow through our system, let's get into specifics of what we're going to do in this chapter. It's time to implement basic data-loading and data-processing routines for our raw data. Basically, every significant project you work on will need something analogous to what we cover here.1 Figure 10.1 shows the high-level map of our project from chapter 9. We'll focus on step 1, data loading, for the rest of this chapter. Our goal is to be able to produce a training sample given our inputs of raw CT scan data and a list of annotations for those CTs.
This chapter covers
■ Loading and processing raw data files
■ Implementing a Python class to represent our data
■ Converting our data into a format usable by PyTorch
■ Visualizing the training and validation data

1 To the rare researcher who has all of their data well prepared for them in advance: lucky you! The rest of us will be busy writing code for loading and parsing.

This might sound simple, but quite a bit needs to happen before we can load, process, and extract the data we're interested in. Figure 10.2 shows what we'll need to do to turn our raw data into a training sample. Luckily, we got a head start on understanding our data in the last chapter, but we have more work to do on that front as well.

Figure 10.1 Our end-to-end lung cancer detection project, with a focus on this chapter's topic: step 1, data loading

Figure 10.2 The data transforms required to make a sample tuple.
These sample tuples will be used as input to our model training routine.

This is a crucial moment, when we begin to transmute the leaden raw data, if not into gold, then at least into the stuff that our neural network will spin into gold. We first discussed the mechanics of this transformation in chapter 4.

10.1 Raw CT data files

Our CT data comes in two files: a .mhd file containing metadata header information, and a .raw file containing the raw bytes that make up the 3D array. Each file's name starts with a unique identifier called the series UID (the name comes from the Digital Imaging and Communications in Medicine [DICOM] nomenclature) for the CT scan in question. For example, for series UID 1.2.3, there would be two files: 1.2.3.mhd and 1.2.3.raw.

Our Ct class will consume those two files and produce the 3D array, as well as the transformation matrix to convert from the patient coordinate system (which we will discuss in more detail in section 10.6) to the index, row, column coordinates needed by the array (these coordinates are shown as (I,R,C) in the figures and are denoted with _irc variable suffixes in the code). Don't sweat the details of all this right now; just remember that we've got some coordinate system conversion to do before we can apply these coordinates to our CT data. We'll explore the details as we need them.

We will also load the annotation data provided by LUNA, which will give us a list of nodule coordinates, each with a malignancy flag, along with the series UID of the relevant CT scan. By combining the nodule coordinate with coordinate system transformation information, we get the index, row, and column of the voxel at the center of our nodule. Using the (I,R,C) coordinates, we can crop a small 3D slice of our CT data to use as the input to our model.
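To give a feel for the conversion just described, here is a deliberately simplified sketch. The real transformation (covered in section 10.6) also applies a direction matrix from the .mhd metadata; this version assumes an axis-aligned scan, and the origin and spacing values in the example call are invented for illustration.

```python
# Simplified sketch: map millimeter (X, Y, Z) patient coordinates to
# (index, row, column) array coordinates. Assumes an axis-aligned scan
# (no direction matrix), which the real code does NOT assume.
def xyz2irc(coord_xyz, origin_xyz, spacing_xyz):
    # Shift by the scan origin, then scale by the voxel spacing, per axis.
    cri = [
        round((coord - origin) / spacing)
        for coord, origin, spacing in zip(coord_xyz, origin_xyz, spacing_xyz)
    ]
    # The array is indexed (I, R, C) = (Z, Y, X), so reverse the axis order.
    return tuple(reversed(cri))

# One of the candidate centers we'll meet in candidates.csv, converted
# against made-up origin/spacing metadata.
irc = xyz2irc((-56.08, -67.85, -311.92),
              origin_xyz=(-200.0, -200.0, -400.0),
              spacing_xyz=(0.7, 0.7, 2.5))
```

The axis reversal is the part that most often trips people up: the CSV files speak (X,Y,Z), while the voxel array is indexed (I,R,C).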
Along with this 3D sample array, we must construct the rest of our training sample tuple, which will have the sample array, nodule status flag, series UID, and the index of this sample in the CT list of nodule candidates. This sample tuple is exactly what PyTorch expects from our Dataset subclass and represents the last section of our bridge from our original raw data to the standard structure of PyTorch tensors.

Limiting or cropping our data so as not to drown our model in noise is important, as is making sure we're not so aggressive that our signal gets cropped out of our input. We want to make sure the range of our data is well behaved, especially after normalization. Clamping our data to remove outliers can be useful, especially if our data is prone to extreme outliers. We can also create handcrafted, algorithmic transformations of our input; this is known as feature engineering, and we discussed it briefly in chapter 1. We'll usually want to let the model do most of the heavy lifting; feature engineering has its uses, but we won't use it here in part 2.

10.2 Parsing LUNA's annotation data

The first thing we need to do is begin loading our data. When working on a new project, that's often a good place to start. Making sure we know how to work with the raw input is required no matter what, and knowing how our data will look after it loads can help inform the structure of our early experiments. We could try loading individual CT scans, but we think it makes sense to parse the CSV files that LUNA provides, which contain information about the points of interest in each CT scan. As we can see in figure 10.3, we expect to get some coordinate information, an indication of whether the coordinate is a nodule, and a unique identifier for the CT scan.
Since there are fewer types of information in the CSV files, and they're easier to parse, we're hoping they will give us some clues about what to look for once we start loading CTs.

Figure 10.3 The LUNA annotations in candidates.csv contain the CT series, the nodule candidate's position, and a flag indicating if the candidate is actually a nodule or not.

The candidates.csv file contains information about all lumps that potentially look like nodules, whether those lumps are malignant, benign tumors, or something else altogether. We'll use this as the basis for building a complete list of candidates that can then be split into our training and validation datasets. The following Bash shell session shows what the file contains (wc -l counts the number of lines in the file, head prints the first few lines, and the first line of the .csv file defines the column headers):

$ wc -l candidates.csv
551066 candidates.csv
$ head data/part2/luna/candidates.csv
seriesuid,coordX,coordY,coordZ,class
1.3...6860,-56.08,-67.85,-311.92,0
1.3...6860,53.21,-244.41,-245.17,0
1.3...6860,103.66,-121.8,-286.62,0
1.3...6860,-33.66,-72.75,-308.41,0
...
$ grep ',1$' candidates.csv | wc -l
1351

The grep counts the number of lines that end with 1, which flags an actual nodule.

NOTE The values in the seriesuid column have been elided to better fit the printed page.

So we have 551,000 lines, each with a seriesuid (which we'll call series_uid in the code), some (X,Y,Z) coordinates, and a class column that corresponds to the nodule status (it's a Boolean value: 0 for a candidate that is not an actual nodule, and 1 for a candidate that is a nodule, either malignant or benign). We have 1,351 candidates flagged as actual nodules.

The annotations.csv file contains information about some of the candidates that have been flagged as nodules. We are interested in the diameter_mm information in particular:

$ wc -l annotations.csv
1187 annotations.csv
$ head data/part2/luna/annotations.csv
seriesuid,coordX,coordY,coordZ,diameter_mm
1.3.6...6860,-128.6994211,-175.3192718,-298.3875064,5.651470635
1.3.6...6860,103.7836509,-211.9251487,-227.12125,4.224708481
1.3.6...5208,69.63901724,-140.9445859,876.3744957,5.786347814
1.3.6...0405,-24.0138242,192.1024053,-391.0812764,8.143261683
...

Note that both the line count and the final column differ from candidates.csv. We have size information for about 1,200 nodules. This is useful, since we can use it to make sure our training and validation data includes a representative spread of nodule sizes. Without this, it's possible that our validation set could end up with only extreme values, making it seem as though our model is underperforming.

10.2.1 Training and validation sets

For any standard supervised learning task (classification is the prototypical example), we'll split our data into training and validation sets. We want to make sure both sets are representative of the range of real-world input data we're expecting to see and handle normally.
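The "sort by size and take every Nth one" approach we'll use for the nodules can be sketched in a few lines. The diameters below are invented, and the stride of 5 is an arbitrary choice for the demonstration, not a value from the book's code.

```python
# Sketch: sort candidates by diameter, then take every Nth one for the
# validation set so both sets see the full range of sizes. The data and
# the stride are invented for illustration.
raw_diameters_mm = [3.2, 11.0, 4.2, 5.7, 8.1, 6.5, 4.8, 9.3, 5.1, 7.4]
diameters_mm = sorted(raw_diameters_mm, reverse=True)  # largest first

val_stride = 5
val_list = diameters_mm[::val_stride]      # every 5th item, spread across sizes
train_list = [d for i, d in enumerate(diameters_mm) if i % val_stride != 0]
```

Because the list is sorted before striding, the validation slice samples the whole size distribution instead of, say, only the largest nodules.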
If either set is meaningfully different from our real-world use cases, it's pretty likely that our model will behave differently than we expect; all of the training and statistics we collect won't be predictive once we transfer over to production use! We're not trying to make this an exact science, but you should keep an eye out in future projects for hints that you are training and testing on data that doesn't make sense for your operating environment.

Let's get back to our nodules. We're going to sort them by size and take every Nth one for our validation set. That should give us the representative spread we're looking for. Unfortunately, the location information provided in annotations.csv doesn't always precisely line up with the coordinates in candidates.csv:

$ grep 100225287222365663678666836860 annotations.csv
1.3.6...6860,-128.6994211,-175.3192718,-298.3875064,5.651470635
1.3.6...6860,103.7836509,-211.9251487,-227.12125,4.224708481
$ grep '100225287222365663678666836860.*,1$' candidates.csv
1.3.6...6860,104.16480444,-211.685591018,-227.011363746,1
1.3.6...6860,-128.94,-175.04,-297.87,1

If we truncate the corresponding coordinates from each file, we end up with (-128.70,-175.32,-298.39) versus (-128.94,-175.04,-297.87). Since the nodule in question has a diameter of 5 mm, both of these points are clearly meant to be the "center" of the nodule, but they don't line up exactly. It would be a perfectly valid response to decide that dealing with this data mismatch isn't worth it, and to ignore the file.
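The kind of fuzzy proximity test getCandidateInfoList will use can be shown standalone: treat a candidate and an annotation as the same nodule only if their centers differ by less than a quarter of the annotated diameter on every axis (a bounding-box check, not a true distance check). The coordinates below are the two "center" points from the grep session; the function name is ours.

```python
# Standalone sketch of the fuzzy center-matching check. The threshold is
# diameter/4: half the radius, applied independently per axis.
def centers_match(candidate_xyz, annotation_xyz, annotation_diameter_mm):
    for cand, ann in zip(candidate_xyz, annotation_xyz):
        if abs(cand - ann) > annotation_diameter_mm / 4:
            return False
    return True

# The two nearly-identical centers from the shell session above.
same_nodule = centers_match(
    (-128.94, -175.04, -297.87),
    (-128.6994211, -175.3192718, -298.3875064),
    annotation_diameter_mm=5.651470635,
)
```

For this pair, every per-axis difference is well under the 1.4 mm threshold, so the two rows are treated as the same nodule.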
We are going to do the legwork to make things line up, though, since real-world datasets are often imperfect this way, and this is a good example of the kind of work you will need to do to assemble data from disparate data sources.

10.2.2 Unifying our annotation and candidate data

Now that we know what our raw data files look like, let's build a getCandidateInfoList function that will stitch it all together. We'll use a named tuple that is defined at the top of the file to hold the information for each nodule.

Listing 10.1 dsets.py:7

from collections import namedtuple
# ... line 27
CandidateInfoTuple = namedtuple(
    'CandidateInfoTuple',
    'isNodule_bool, diameter_mm, series_uid, center_xyz',
)

These tuples are not our training samples, as they're missing the chunks of CT data we need. Instead, these represent a sanitized, cleaned, unified interface to the human-annotated data we're using. It's very important to isolate having to deal with messy data from model training. Otherwise, your training loop can get cluttered quickly, because you have to keep dealing with special cases and other distractions in the middle of code that should be focused on training.

TIP Clearly separate the code that's responsible for data sanitization from the rest of your project. Don't be afraid to rewrite your data once and save it to disk if needed.

Our list of candidate information will have the nodule status (what we're going to be training the model to classify), diameter (useful for getting a good spread in training, since large and small nodules will not have the same features), series (to locate the correct CT scan), and candidate center (to find the candidate in the larger CT).
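The field order in CandidateInfoTuple is not arbitrary: because isNodule_bool comes first and diameter_mm second, plain tuple comparison will later let a single reverse sort put actual nodules ahead of non-nodules, largest first. A self-contained sketch with invented values:

```python
# Why field order matters: namedtuples compare field by field, so sorting
# in reverse orders by isNodule_bool first (True before False), then by
# diameter_mm. The three sample candidates are invented.
from collections import namedtuple

CandidateInfoTuple = namedtuple(
    'CandidateInfoTuple',
    'isNodule_bool, diameter_mm, series_uid, center_xyz',
)

candidates = [
    CandidateInfoTuple(False, 0.0, '1.2.3', (10.0, 20.0, 30.0)),
    CandidateInfoTuple(True, 4.2, '1.2.4', (11.0, 21.0, 31.0)),
    CandidateInfoTuple(True, 11.0, '1.2.5', (12.0, 22.0, 32.0)),
]
candidates.sort(reverse=True)  # nodules first, largest diameter first
```

We'll see this exact sort again at the end of getCandidateInfoList.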
The function that will build a list of these CandidateInfoTuple instances starts by using an in-memory caching decorator, followed by getting the list of files present on disk.

Listing 10.2 dsets.py:32

@functools.lru_cache(1)
def getCandidateInfoList(requireOnDisk_bool=True):
    mhd_list = glob.glob('data-unversioned/part2/luna/subset*/*.mhd')
    presentOnDisk_set = {os.path.split(p)[-1][:-4] for p in mhd_list}

Here @functools.lru_cache(1) is standard library in-memory caching, and requireOnDisk_bool defaults to screening out series from data subsets that aren't in place yet.

Since parsing some of the data files can be slow, we'll cache the results of this function call in memory. This will come in handy later, because we'll be calling this function more often in future chapters. Speeding up our data pipeline by carefully applying in-memory or on-disk caching can result in some pretty impressive gains in training speed. Keep an eye out for these opportunities as you work on your projects.

Earlier we said that we'll support running our training program with less than the full set of training data, due to the long download times and high disk space requirements. The requireOnDisk_bool parameter is what makes good on that promise; we're detecting which LUNA series UIDs are actually present and ready to be loaded from disk, and we'll use that information to limit which entries we use from the CSV files we're about to parse. Being able to run a subset of our data through the training loop can be useful to verify that the code is working as intended. Often a model's training results are bad to useless when doing so, but exercising our logging, metrics, model checkpointing, and similar functionality is beneficial.

After we get our candidate information, we want to merge in the diameter information from annotations.csv. First we need to group our annotations by series_uid, as that's the first key we'll use to cross-reference each row from the two files.
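Before that, a quick illustration of the lru_cache pattern just mentioned. The slow_parse function and its call counter are invented stand-ins for an expensive CSV parse; the point is that the decorated function's body runs once, and repeat calls are served from the cache.

```python
# Minimal demonstration of functools.lru_cache(1): the first call does the
# work, subsequent identical calls return the memoized result without
# rerunning the body. slow_parse is a made-up stand-in.
import functools

call_count = 0

@functools.lru_cache(1)
def slow_parse():
    global call_count
    call_count += 1          # track how many times the body actually runs
    return ['row'] * 3       # stand-in for an expensive parse result

first = slow_parse()
second = slow_parse()        # served from the cache; body does not rerun
```

One consequence worth knowing: the cached result is the same object on every call, so mutating it in one place is visible everywhere.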
Listing 10.3 dsets.py:40, def getCandidateInfoList

diameter_dict = {}
with open('data/part2/luna/annotations.csv', "r") as f:
    for row in list(csv.reader(f))[1:]:
        series_uid = row[0]
        annotationCenter_xyz = tuple([float(x) for x in row[1:4]])
        annotationDiameter_mm = float(row[4])

        diameter_dict.setdefault(series_uid, []).append(
            (annotationCenter_xyz, annotationDiameter_mm)
        )

Now we'll build our full list of candidates using the information in the candidates.csv file.

Listing 10.4 dsets.py:51, def getCandidateInfoList

candidateInfo_list = []
with open('data/part2/luna/candidates.csv', "r") as f:
    for row in list(csv.reader(f))[1:]:
        series_uid = row[0]

        if series_uid not in presentOnDisk_set and requireOnDisk_bool:
            continue

        isNodule_bool = bool(int(row[4]))
        candidateCenter_xyz = tuple([float(x) for x in row[1:4]])

        candidateDiameter_mm = 0.0
        for annotation_tup in diameter_dict.get(series_uid, []):
            annotationCenter_xyz, annotationDiameter_mm = annotation_tup
            for i in range(3):
                delta_mm = abs(candidateCenter_xyz[i] - annotationCenter_xyz[i])
                if delta_mm > annotationDiameter_mm / 4:
                    break
            else:
                candidateDiameter_mm = annotationDiameter_mm
                break

        candidateInfo_list.append(CandidateInfoTuple(
            isNodule_bool,
            candidateDiameter_mm,
            series_uid,
            candidateCenter_xyz,
        ))

If a series_uid isn't present, it's in a subset we don't have on disk, so we should skip it. The delta_mm > annotationDiameter_mm / 4 test divides the diameter by 2 to get the radius, and divides the radius by 2 again to require that the two nodule center points not be too far apart relative to the size of the nodule. (This results in a bounding-box check, not a true distance check.)

For each of the candidate entries for a given series_uid, we loop through the annotations we collected earlier for the same series_uid and see if the two coordinates are close enough to consider them the same nodule. If they are, great! Now we have diameter information for that nodule. If we don't find a match, that's fine; we'll just treat the nodule as having a 0.0 diameter.
Since we're only using this information to get a good spread of nodule sizes in our training and validation sets, having incorrect diameter sizes for some nodules shouldn't be a problem, but we should remember that we're doing this, in case our assumption here is wrong.

That's a lot of somewhat fiddly code just to merge in our nodule diameter. Unfortunately, having to do this kind of manipulation and fuzzy matching can be fairly common, depending on your raw data. Once we get to this point, however, we just need to sort the data and return it.

Listing 10.5 dsets.py:80, def getCandidateInfoList

    # This means we have all of the actual nodule samples starting with
    # the largest first, followed by all of the non-nodule samples
    # (which don't have nodule size information).
    candidateInfo_list.sort(reverse=True)
    return candidateInfo_list

The ordering of the tuple members in candidateInfo_list is driven by this sort. We're using this sorting approach to help ensure that when we take a slice of the data, that slice gets a representative chunk of the actual nodules with a good spread of nodule diameters. We'll discuss this more in section 10.5.3.

10.3 Loading individual CT scans

Next up, we need to be able to take our CT data from a pile of bits on disk and turn it into a Python object from which we can extract 3D nodule density data. We can see this path from the .mhd and .raw files to Ct objects in figure 10.4. Our nodule annotation information acts like a map to the interesting parts of our raw data. Before we can follow that map to our data of interest, we need to get the data into an addressable form.
TIP Having a large amount of raw data, most of which is uninteresting, is a common situation; look for ways to limit your scope to only the relevant data when working on your own projects.

Figure 10.4 Loading a CT scan produces a voxel array and a transformation from patient coordinates to array indices.

The native file format for CT scans is DICOM (www.dicomstandard.org). The first version of the DICOM standard was authored in 1984, and as we might expect from anything computing-related that comes from that time period, it's a bit of a mess (for example, whole sections that are now retired were devoted to the data link layer protocol to use, since Ethernet hadn't won yet).

NOTE We've done the legwork of finding the right library to parse these raw data files, but for other formats you've never heard of, you'll have to find a parser yourself. We recommend taking the time to do so!
The Python ecosystem has parsers for just about every file format under the sun, and your time is almost certainly better spent working on the novel parts of your project than writing parsers for esoteric data formats.

Happily, LUNA has converted the data we're going to be using for this chapter into the MetaIO format, which is quite a bit easier to use (https://itk.org/Wiki/MetaIO/Documentation#Quick_Start). Don't worry if you've never heard of the format before! We can treat the format of the data files as a black box and use SimpleITK to load them into more familiar NumPy arrays.

Listing 10.6 dsets.py:9

import SimpleITK as sitk

# ... line 83
class Ct:
    def __init__(self, series_uid):
        # We don't care to track which subset a given series_uid is in,
        # so we wildcard the subset.
        mhd_path = glob.glob(
            'data-unversioned/part2/luna/subset*/{}.mhd'.format(series_uid)
        )[0]

        # sitk.ReadImage implicitly consumes the .raw file in addition
        # to the passed-in .mhd file.
        ct_mhd = sitk.ReadImage(mhd_path)
        # Recreates an np.array since we want to convert the value type to np.float32
        ct_a = np.array(sitk.GetArrayFromImage(ct_mhd), dtype=np.float32)

For real projects, you'll want to understand what types of information are contained in your raw data, but it's perfectly fine to rely on third-party code like SimpleITK to parse the bits on disk. Finding the right balance of knowing everything about your inputs versus blindly accepting whatever your data-loading library hands you will probably take some experience. Just remember that we're mostly concerned about data, not bits. It's the information that matters, not how it's represented.

Being able to uniquely identify a given sample of our data can be useful. For example, clearly communicating which sample is causing a problem or is getting poor classification results can drastically improve our ability to isolate and debug the issue. Depending on the nature of our samples, sometimes that unique identifier is an atom, like a number or a string, and sometimes it's more complicated, like a tuple.

We identify specific CT scans using the series instance UID (series_uid) assigned when the CT scan was created.
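Since a series_uid is, for our purposes, just an opaque string key, plain string checks are all we ever need on it. For example, here is a toy validator (not part of the book's code) for the spec's allowed character set, which we'll say more about in a moment:

```python
import re

# Officially, a DICOM UID may contain only the digits 0-9 and the period.
# Anonymized files in the wild sometimes violate this (e.g., hex digits).
uid_re = re.compile(r'^[0-9.]+$')

def is_spec_conformant_uid(series_uid):
    """Toy check: does this UID use only spec-allowed characters?"""
    return bool(uid_re.match(series_uid))

assert is_spec_conformant_uid('1.3.6.1.4.1.14519.5.2.1')
assert not is_spec_conformant_uid('1.3.6.deadbeef')  # out-of-spec hex replacement
```

Treating the UID as an opaque dictionary key, rather than parsing meaning out of it, keeps our code robust even against such out-of-spec values.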
DICOM makes heavy use of unique identifiers (UIDs) for individual DICOM files, groups of files, courses of treatment, and so on. These identifiers are similar in concept to UUIDs (https://docs.python.org/3.6/library/uuid.html), but they have a different creation process and are formatted differently. For our purposes, we can treat them as opaque ASCII strings that serve as unique keys to reference the various CT scans. Officially, only the characters 0 through 9 and the period (.) are valid characters in a DICOM UID, but some DICOM files in the wild have been anonymized with routines that replace the UIDs with hexadecimal (0-9 and a-f) or other technically out-of-spec values (these out-of-spec values typically aren't flagged or cleaned by DICOM parsers; as we said before, it's a bit of a mess).

The 10 subsets we discussed earlier have about 90 CT scans each (888 in total), with every CT scan represented as two files: one with a .mhd extension and one with a .raw extension. The data being split between multiple files is hidden behind the sitk routines, however, and is not something we need to be directly concerned with.

At this point, ct_a is a three-dimensional array. All three dimensions are spatial, and the single intensity channel is implicit. As we saw in chapter 4, in a PyTorch tensor, the channel information is represented as a fourth dimension with size 1.

10.3.1 Hounsfield Units

Recall that earlier, we said that we need to understand our data, not the bits that store it. Here, we have a perfect example of that in action.
Without understanding the nuances of our data's values and range, we'll end up feeding values into our model that will hinder its ability to learn what we want it to.

Continuing the __init__ method, we need to do a bit of cleanup on the ct_a values. CT scan voxels are expressed in Hounsfield units (HU; https://en.wikipedia.org/wiki/Hounsfield_scale), which are odd units; air is -1,000 HU (close enough to 0 g/cc [grams per cubic centimeter] for our purposes), water is 0 HU (1 g/cc), and bone is at least +1,000 HU (2-3 g/cc).

NOTE HU values are typically stored on disk as signed 12-bit integers (shoved into 16-bit integers), which fits well with the level of precision CT scanners can provide. While this is perhaps interesting, it's not particularly relevant to the project.

Some CT scanners use HU values that correspond to negative densities to indicate that those voxels are outside of the CT scanner's field of view. For our purposes, everything outside of the patient should be air, so we discard that field-of-view information by setting a lower bound of the values to -1,000 HU. Similarly, the exact densities of bones, metal implants, and so on are not relevant to our use case, so we cap density at roughly 2 g/cc (1,000 HU) even though that's not biologically accurate in most cases.

Listing 10.7 dsets.py:96, Ct.__init__

        ct_a.clip(-1000, 1000, ct_a)

Values above 0 HU don't scale perfectly with density, but the tumors we're interested in are typically around 1 g/cc (0 HU), so we're going to ignore that HU doesn't map perfectly to common units like g/cc. That's fine, since our model will be trained to consume HU directly.

We want to remove all of these outlier values from our data: they aren't directly relevant to our goal, and having those outliers can make the model's job harder.
This can happen in many ways, but a common example is when batch normalization is fed these outlier values and the statistics about how to best normalize the data are skewed. Always be on the lookout for ways to clean your data.

All of the values we've built are now assigned to self.

Listing 10.8 dsets.py:98, Ct.__init__

        self.series_uid = series_uid
        self.hu_a = ct_a

It's important to know that our data uses the range of -1,000 to +1,000, since in chapter 13 we end up adding channels of information to our samples. If we don't account for the disparity between HU and our additional data, those new channels can easily be overshadowed by the raw HU values. We won't add more channels of data for the classification step of our project, so we don't need to implement special handling right now.

10.4 Locating a nodule using the patient coordinate system

Deep learning models typically need fixed-size inputs,2 due to having a fixed number of input neurons. We need to be able to produce a fixed-size array containing the candidate so that we can use it as input to our classifier. We'd like to train our model using a crop of the CT scan that has a candidate nicely centered, since then our model doesn't have to learn how to notice nodules tucked away in the corner of the input. By reducing the variation in expected inputs, we make the model's job easier.

10.4.1 The patient coordinate system

Unfortunately, all of the candidate center data we loaded in section 10.2 is expressed in millimeters, not voxels! We can't just plug locations in millimeters into an array index and expect everything to work out the way we want. As we can see in figure 10.5, we need to transform our coordinates from the millimeter-based coordinate system (X,Y,Z) they're expressed in, to the voxel-address-based coordinate system (I,R,C) used to take array slices from our CT scan data. This is a classic example of how it's important to handle units consistently!
As we have mentioned previously, when dealing with CT scans, we refer to the array dimensions as index, row, and column, because a separate meaning exists for X, Y, and Z, as illustrated in figure 10.6. The patient coordinate system defines positive X to be patient-left (left), positive Y to be patient-behind (posterior), and positive Z to be toward-patient-head (superior). Left-posterior-superior is sometimes abbreviated LPS.

2 There are exceptions, but they're not relevant right now.

Figure 10.5 Using the transformation information to convert a nodule center coordinate in patient coordinates (X,Y,Z) to an array index (Index,Row,Column).

Figure 10.6 Our inappropriately clothed patient demonstrating the axes of the patient coordinate system

The patient coordinate system is measured in millimeters and has an arbitrarily positioned origin that does not correspond to the origin of the CT voxel array, as shown in figure 10.7. The patient coordinate system is often used to specify the locations of interesting anatomy in a way that is independent of any particular scan.
Figure 10.7 Array coordinates and patient coordinates have different origins and scaling.

The metadata that defines the relationship between the CT array and the patient coordinate system is stored in the header of DICOM files, and the meta-image format preserves that data in its header as well. This metadata allows us to construct the transformation from (X,Y,Z) to (I,R,C) that we saw in figure 10.5. The raw data contains many other fields of similar metadata, but since we don't have a use for them right now, those unneeded fields will be ignored.

10.4.2 CT scan shape and voxel sizes

One of the most common variations between CT scans is the size of the voxels; typically, they are not cubes. Instead, they can be 1.125 mm x 1.125 mm x 2.5 mm or similar. Usually the row and column dimensions have voxel sizes that are the same, and the index dimension has a larger value, but other ratios can exist.

When plotted using square pixels, the non-cubic voxels can end up looking somewhat distorted, similar to the distortion near the north and south poles when using a Mercator projection map. That's an imperfect analogy, since in this case the distortion is uniform and linear; the patient looks far more squat or barrel-chested in figure 10.8 than they would in reality. We will need to apply a scaling factor if we want the images to depict realistic proportions.

Knowing these kinds of details can help when trying to interpret our results visually. Without this information, it would be easy to assume that something was wrong with our data loading: we might think the data looked so squat because we were skipping half of the slices by accident, or something along those lines.
It can be easy to waste a lot of time debugging something that's been working all along, and being familiar with your data can help prevent that.

CTs are commonly 512 rows by 512 columns, with the index dimension ranging from around 100 total slices up to perhaps 250 slices (250 slices times 2.5 millimeters is typically enough to contain the anatomical region of interest). This results in a lower bound of approximately 2^25 voxels, or about 32 million data points. Each CT specifies the voxel size in millimeters as part of the file metadata; for example, we'll call ct_mhd.GetSpacing() in listing 10.10.

10.4.3 Converting between millimeters and voxel addresses

We will define some utility code to assist with the conversion between patient coordinates in millimeters (which we will denote in the code with an _xyz suffix on variables and the like) and (I,R,C) array coordinates (which we will denote in code with an _irc suffix).

You might wonder whether the SimpleITK library comes with utility functions to convert these. And indeed, an Image instance does feature two methods, TransformIndexToPhysicalPoint and TransformPhysicalPointToIndex, to do just that (except shuffling from CRI [column,row,index] to IRC). However, we want to be able to do this computation without keeping the Image object around, so we'll perform the math manually here.

Flipping the axes (and potentially a rotation or other transforms) is encoded in a 3 x 3 matrix returned as a tuple from ct_mhd.GetDirections(). To go from voxel indices to coordinates, we need to follow these four steps in order:

1 Flip the coordinates from IRC to CRI, to align with XYZ.
2 Scale the indices with the voxel sizes.
3 Matrix-multiply with the directions matrix, using @ in Python.
4 Add the offset for the origin.
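Assuming an identity direction matrix (a simplification; real scans may flip or rotate axes) and made-up spacing and origin values, the four steps, and their inverse, can be walked through numerically:

```python
import numpy as np

# Made-up metadata: 1.125 mm in-plane spacing, 2.5 mm between slices,
# identity direction matrix. These values are not from a real scan.
vxSize_xyz = np.array([1.125, 1.125, 2.5])
origin_xyz = np.array([-200.0, -195.0, -300.0])
direction_a = np.eye(3)

coord_irc = np.array([10, 40, 80])    # (index, row, col)

cri_a = coord_irc[::-1]                                        # Step 1: flip IRC to CRI
coords_xyz = direction_a @ (cri_a * vxSize_xyz) + origin_xyz   # Steps 2-4: scale, rotate, offset
print(coords_xyz)  # [-110. -150. -275.]

# Going back: invert each step, in the reverse order
cri_back = np.linalg.inv(direction_a) @ (coords_xyz - origin_xyz) / vxSize_xyz
assert (cri_back[::-1].round().astype(int) == coord_irc).all()  # Round-trips exactly
```

Listing 10.9 packages exactly this arithmetic, plus the flip back from CRI to IRC, into reusable functions.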
Figure 10.8 A CT scan with non-cubic voxels along the index-axis. Note how compressed the lungs are from top to bottom.

To go back from XYZ to IRC, we need to perform the inverse of each step in the reverse order. We keep the voxel sizes in named tuples, so we convert these into arrays.

Listing 10.9 util.py:16

IrcTuple = collections.namedtuple('IrcTuple', ['index', 'row', 'col'])
XyzTuple = collections.namedtuple('XyzTuple', ['x', 'y', 'z'])

def irc2xyz(coord_irc, origin_xyz, vxSize_xyz, direction_a):
    # Swaps the order while we convert to a NumPy array
    cri_a = np.array(coord_irc)[::-1]
    origin_a = np.array(origin_xyz)
    vxSize_a = np.array(vxSize_xyz)
    # The bottom three steps of our plan, all in one line
    coords_xyz = (direction_a @ (cri_a * vxSize_a)) + origin_a
    return XyzTuple(*coords_xyz)

def xyz2irc(coord_xyz, origin_xyz, vxSize_xyz, direction_a):
    origin_a = np.array(origin_xyz)
    vxSize_a = np.array(vxSize_xyz)
    coord_a = np.array(coord_xyz)
    # Inverse of the last three steps
    cri_a = ((coord_a - origin_a) @ np.linalg.inv(direction_a)) / vxSize_a
    # Sneaks in proper rounding before converting to integers
    cri_a = np.round(cri_a)
    # Shuffles and converts to integers
    return IrcTuple(int(cri_a[2]), int(cri_a[1]), int(cri_a[0]))

Phew. If that was a bit heavy, don't worry. Just remember that we need to convert and use the functions as a black box. The metadata we need to convert from patient coordinates (_xyz) to array coordinates (_irc) is contained in the MetaIO file alongside the CT data itself. We pull the voxel sizing and positioning metadata out of the .mhd file at the same time we get the ct_a.

Listing 10.10 dsets.py:72, class Ct

class Ct:
    def __init__(self, series_uid):
        mhd_path = glob.glob(
            'data-unversioned/part2/luna/subset*/{}.mhd'.format(series_uid))[0]

        ct_mhd = sitk.ReadImage(mhd_path)
        # ...
        # line 91
        self.origin_xyz = XyzTuple(*ct_mhd.GetOrigin())
        self.vxSize_xyz = XyzTuple(*ct_mhd.GetSpacing())
        # Converts the directions to an array, and reshapes the nine-element
        # array to its proper 3 x 3 matrix shape
        self.direction_a = np.array(ct_mhd.GetDirection()).reshape(3, 3)

These are the inputs we need to pass into our xyz2irc conversion function, in addition to the individual point to convert. With these attributes, our CT object implementation now has all the data needed to convert a candidate center from patient coordinates to array coordinates.

10.4.4 Extracting a nodule from a CT scan

As we mentioned in chapter 9, up to 99.9999% of the voxels in a CT scan of a patient with a lung nodule won't be part of the actual nodule (or cancer, for that matter). Again, that ratio is equivalent to a two-pixel blob of incorrectly tinted color somewhere on a high-definition television, or a single misspelled word out of a shelf of novels. Forcing our model to examine such huge swaths of data looking for the hints of the nodules we want it to focus on is going to work about as well as asking you to find a single misspelled word from a set of novels written in a language you don't know!3

Instead, as we can see in figure 10.9, we will extract an area around each candidate and let the model focus on one candidate at a time. This is akin to letting you read individual paragraphs in that foreign language: still not an easy task, but far less daunting!
Looking for ways to reduce the scope of the problem for our model can help, especially in the early stages of a project when we're trying to get our first working implementation up and running.

3 Have you found a misspelled word in this book yet? ;)

Figure 10.9 Cropping a candidate sample out of the larger CT voxel array using the candidate center's array coordinate information (Index,Row,Column)

The getRawCandidate function takes the center expressed in the patient coordinate system (X,Y,Z), just as it's specified in the LUNA CSV data, as well as a width in voxels. It returns a cubic chunk of CT, as well as the center of the candidate converted to array coordinates.

Listing 10.11 dsets.py:105, Ct.getRawCandidate

    def getRawCandidate(self, center_xyz, width_irc):
        center_irc = xyz2irc(
            center_xyz,
            self.origin_xyz,
            self.vxSize_xyz,
            self.direction_a,
        )

        slice_list = []
        for axis, center_val in enumerate(center_irc):
            start_ndx = int(round(center_val - width_irc[axis]/2))
            end_ndx = int(start_ndx + width_irc[axis])
            slice_list.append(slice(start_ndx, end_ndx))

        ct_chunk = self.hu_a[tuple(slice_list)]
        return ct_chunk, center_irc

The actual implementation will need to deal with situations where the combination of center and width puts the edges of the cropped areas outside of the array.
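The slice-building loop can be tried on a toy array to see both the normal case and the edge problem just mentioned. This is a hypothetical helper mirroring the indexing logic above, minus the coordinate conversion:

```python
import numpy as np

# Toy stand-in for self.hu_a: a small 3D array instead of a full CT scan
hu_a = np.arange(10 * 10 * 10, dtype=np.float32).reshape(10, 10, 10)

def get_raw_chunk(array, center_irc, width_irc):
    """Crop a width_irc-sized block centered (as nearly as possible) on center_irc."""
    slice_list = []
    for axis, center_val in enumerate(center_irc):
        start_ndx = int(round(center_val - width_irc[axis] / 2))
        end_ndx = int(start_ndx + width_irc[axis])
        slice_list.append(slice(start_ndx, end_ndx))
    return array[tuple(slice_list)]

chunk = get_raw_chunk(hu_a, (5, 5, 5), (4, 6, 6))
print(chunk.shape)  # (4, 6, 6) -- the requested size

# Near the array boundary, naive slicing silently yields a smaller chunk
# (and a negative start_ndx would be even worse, thanks to Python's
# negative-index semantics), which is what the full code has to handle.
edge_chunk = get_raw_chunk(hu_a, (9, 5, 5), (4, 6, 6))
print(edge_chunk.shape)  # (3, 6, 6) -- one slice short along the first axis
```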
But as noted earlier, we will skip complications that obscure the larger intent of the function. The full implementation can be found on the book's website (www.manning.com/books/deep-learning-with-pytorch?query=pytorch) and in the GitHub repository (https://github.com/deep-learning-with-pytorch/dlwpt-code).

10.5 A straightforward dataset implementation

We first saw PyTorch Dataset instances in chapter 7, but this will be the first time we've implemented one ourselves. By subclassing Dataset, we will take our arbitrary data and plug it into the rest of the PyTorch ecosystem. Each Ct instance represents hundreds of different samples that we can use to train our model or validate its effectiveness. Our LunaDataset class will normalize those samples, flattening each CT's nodules into a single collection from which samples can be retrieved without regard for which Ct instance the sample originates from. This flattening is often how we want to process data, although as we'll see in chapter 12, in some situations a simple flattening of the data isn't enough to train a model well.

In terms of implementation, we are going to start with the requirements imposed from subclassing Dataset and work backward. This is different from the datasets we've worked with earlier; there we were using classes provided by external libraries, whereas here we need to implement and instantiate the class ourselves. Once we have done so, we can use it similarly to those earlier examples.
Luckily, the implementation of our custom subclass will not be too difficult, as the PyTorch API only requires that any Dataset subclasses we want to implement must provide these two functions:

- An implementation of __len__ that must return a single, constant value after initialization (the value ends up being cached in some use cases)
- The __getitem__ method, which takes an index and returns a tuple with sample data to be used for training (or validation, as the case may be)

First, let's see what the function signatures and return values of those functions look like.

Listing 10.12 dsets.py:176, LunaDataset.__len__

    def __len__(self):
        return len(self.candidateInfo_list)

    def __getitem__(self, ndx):
        # ... line 200
        return (
            candidate_t,
            pos_t,
            candidateInfo_tup.series_uid,
            torch.tensor(center_irc),
        )

Our __len__ implementation is straightforward: we have a list of candidates, each candidate is a sample, and our dataset is as large as the number of samples we have. We don't have to make the implementation as simple as it is here; in later chapters, we'll see this change!4 The only rule is that if __len__ returns a value of N, then __getitem__ needs to return something valid for all inputs 0 to N - 1.

For __getitem__, we take ndx (typically an integer, given the rule about supporting inputs 0 to N - 1) and return the four-item sample tuple as depicted in figure 10.2. Building this tuple is a bit more complicated than getting the length of our dataset, however, so let's take a look.

The first part of this method implies that we need to construct self.candidateInfo_list as well as provide the getCtRawCandidate function.
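Before we fill in the real methods, the two-method contract can be illustrated with a toy class. This sketch is plain Python, so it runs even without PyTorch installed, but a torch.utils.data.Dataset subclass behaves the same way:

```python
class SquaresDataset:
    """Toy dataset obeying the Dataset contract: a constant __len__,
    and a __getitem__ valid for every index 0 to len - 1."""
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, ndx):
        if not 0 <= ndx < self.n:
            raise IndexError(ndx)
        return (ndx, ndx ** 2)  # A sample tuple, like LunaDataset's four-item tuple

ds = SquaresDataset(5)
assert len(ds) == 5
assert ds[3] == (3, 9)
# Raising IndexError past the end also lets Python's fallback iteration
# protocol terminate, as this loop shows:
assert [sample for sample in ds] == [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]
```

With the contract clear, we can turn to the real __getitem__.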
Listing 10.13 dsets.py:179, LunaDataset.__getitem__

    def __getitem__(self, ndx):
        candidateInfo_tup = self.candidateInfo_list[ndx]
        width_irc = (32, 48, 48)

        # This is our training sample. The return value candidate_a has shape
        # (32,48,48); the axes are depth, height, and width.
        candidate_a, center_irc = getCtRawCandidate(
            candidateInfo_tup.series_uid,
            candidateInfo_tup.center_xyz,
            width_irc,
        )

4 To something simpler, actually; but the point is, we have options.

We will get to those in a moment in sections 10.5.1 and 10.5.2. The next thing we need to do in the __getitem__ method is manipulate the data into the proper data types and required array dimensions that will be expected by downstream code.

Listing 10.14 dsets.py:189, LunaDataset.__getitem__

        candidate_t = torch.from_numpy(candidate_a)
        candidate_t = candidate_t.to(torch.float32)
        candidate_t = candidate_t.unsqueeze(0)    # unsqueeze(0) adds the 'Channel' dimension

Don't worry too much about exactly why we are manipulating dimensionality for now; the next chapter will contain the code that ends up consuming this output and imposing the constraints we're proactively meeting here. This will be something you should expect for every custom Dataset you implement. These conversions are a key part of transforming your Wild West data into nice, orderly tensors.

Finally, we need to build our classification tensor.

Listing 10.15 dsets.py:193, LunaDataset.__getitem__

        pos_t = torch.tensor([
                not candidateInfo_tup.isNodule_bool,
                candidateInfo_tup.isNodule_bool
            ],
            dtype=torch.long,
        )

This has two elements, one each for our possible candidate classes (nodule or non-nodule; or positive or negative, respectively). We could have a single output for the nodule status, but nn.CrossEntropyLoss expects one output value per class, so that's what we provide here. The exact details of the tensors you construct will change based on the type of project you're working on.
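The dtype and dimension massaging can be checked quickly. Here is the same manipulation sketched with NumPy so it runs anywhere; torch.from_numpy, .to(torch.float32), and .unsqueeze(0) behave analogously:

```python
import numpy as np

# Stand-in for a raw crop: depth, height, width, stored as int16 like HU data
candidate_a = np.zeros((32, 48, 48), dtype=np.int16)

candidate_f = candidate_a.astype(np.float32)   # Like candidate_t.to(torch.float32)
candidate_f = candidate_f[None]                # Like unsqueeze(0): adds the channel axis
print(candidate_f.shape)  # (1, 32, 48, 48)

# Two-element class vector: one entry per class, as nn.CrossEntropyLoss expects
isNodule_bool = True
pos = np.array([not isNodule_bool, isNodule_bool], dtype=np.int64)
print(pos)  # [0 1]
```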
Let's take a look at our final sample tuple (the larger candidate_t output isn't particularly readable, so we elide most of it in the listing).

Listing 10.16 p2ch10_explore_data.ipynb

# In[10]:
LunaDataset()[0]

# Out[10]:
(tensor([[[[-899., -903., -825.,  ..., -901., -898., -893.],
           ...,
           [ -92.,  -63.,    4.,  ...,   63.,   70.,   52.]]]]),    # candidate_t (elided)
 tensor([0, 1]),                                                    # cls_t
 '1.3.6...287966244644280690737019247886',                          # candidate_tup.series_uid (elided)
 tensor([ 91, 360, 341]))                                           # center_irc

Here we see the four items from our __getitem__ return statement.

10.5.1 Caching candidate arrays with the getCtRawCandidate function

In order to get decent performance out of LunaDataset, we'll need to invest in some on-disk caching. This will allow us to avoid having to read an entire CT scan from disk for every sample. Doing so would be prohibitively slow! Make sure you're paying attention to bottlenecks in your project and doing what you can to optimize them once they start slowing you down.

We're kind of jumping the gun, since we haven't yet demonstrated that we need caching here. Without caching, the LunaDataset is easily 50 times slower! We'll revisit this in the chapter's exercises.

The function itself is easy. It's a file-cache-backed (https://pypi.python.org/pypi/diskcache) wrapper around the Ct.getRawCandidate method we saw earlier.

Listing 10.17 dsets.py:139

@functools.lru_cache(1, typed=True)
def getCt(series_uid):
    return Ct(series_uid)

@raw_cache.memoize(typed=True)
def getCtRawCandidate(series_uid, center_xyz, width_irc):
    ct = getCt(series_uid)
    ct_chunk, center_irc = ct.getRawCandidate(center_xyz, width_irc)
    return ct_chunk, center_irc

We use a few different caching methods here.
First, we're caching the getCt return value in memory so that we can repeatedly ask for the same Ct instance without having to reload all of the data from disk. That's a huge speed increase in the case of repeated requests, but we're only keeping one CT in memory, so cache misses will be frequent if we're not careful about access order.

The getCtRawCandidate function that calls getCt also has its outputs cached, however; so after our cache is populated, getCt won't ever be called. These values are cached to disk using the Python library diskcache. We'll discuss why we have this specific caching setup in chapter 11. For now, it's enough to know that it's much, much faster to read in 2^15 float32 values from disk than it is to read in 2^25 int16 values, convert to float32, and then select a 2^15 subset. From the second pass through the data forward, I/O times for input should drop to insignificance.

NOTE If the definitions of these functions ever materially change, we will need to remove the cached values from disk. If we don't, the cache will continue to return them, even if now the function will not map the given inputs to the old output. The data is stored in the data-unversioned/cache directory.

10.5.2 Constructing our data set in LunaDataset.__init__

Just about every project will need to separate samples into a training set and a validation set. We are going to do that here by designating every tenth sample, specified by the val_stride parameter, as a member of the validation set. We will also accept an isValSet_bool parameter and use it to determine whether we should keep only the training data, the validation data, or everything.
Listing 10.18 dsets.py:149, class LunaDataset

class LunaDataset(Dataset):
    def __init__(self,
                 val_stride=0,
                 isValSet_bool=None,
                 series_uid=None,
            ):
        # Copies the return value so the cached copy won't be impacted
        # by altering self.candidateInfo_list
        self.candidateInfo_list = copy.copy(getCandidateInfoList())

        if series_uid:
            self.candidateInfo_list = [
                x for x in self.candidateInfo_list if x.series_uid == series_uid
            ]

If we pass in a truthy series_uid, then the instance will only have nodules from that series. This can be useful for visualization or debugging, by making it easier to look at, for instance, a single problematic CT scan.

10.5.3 A training/validation split

We allow for the Dataset to partition out 1/Nth of the data into a subset used for validating the model. How we will handle that subset is based on the value of the isValSet_bool argument.

Listing 10.19 dsets.py:162, LunaDataset.__init__

        if isValSet_bool:
            assert val_stride > 0, val_stride
            self.candidateInfo_list = self.candidateInfo_list[::val_stride]
            assert self.candidateInfo_list
        elif val_stride > 0:
            # Deletes the validation images (every val_stride-th item in
            # the list) from self.candidateInfo_list. We made a copy
            # earlier so that we don't alter the original list.
            del self.candidateInfo_list[::val_stride]
            assert self.candidateInfo_list

This means we can create two Dataset instances and be confident that there is strict segregation between our training data and our validation data. Of course, this depends on there being a consistent sorted order to self.candidateInfo_list, which we ensure by having there be a stable sorted order to the candidate info tuples, and by the getCandidateInfoList function sorting the list before returning it.
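The stride-based split is worth convincing yourself about: on a toy list, the two branches above produce disjoint subsets that together cover everything. A quick sketch with val_stride=10, as in the text:

```python
import copy

candidateInfo_list = list(range(25))   # Stand-in for the sorted candidate list
val_stride = 10

# isValSet_bool=True branch: keep every val_stride-th element
val_list = copy.copy(candidateInfo_list)[::val_stride]

# Training branch: delete every val_stride-th element from a copy
train_list = copy.copy(candidateInfo_list)
del train_list[::val_stride]

print(val_list)  # [0, 10, 20]
assert set(val_list) & set(train_list) == set()             # Strict segregation
assert sorted(val_list + train_list) == candidateInfo_list  # Nothing dropped
```

Both branches rely on the list arriving in the same stable order every time, which is exactly why getCandidateInfoList sorts before returning.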
The other caveat regarding separation of training and validation data is that, depending on the task at hand, we might need to ensure that data from a single patient is only present either in training or in testing but not both. Here this is not a problem; otherwise, we would have needed to split the list of patients and CT scans before going to the level of nodules.

Let's take a look at the data using p2ch10_explore_data.ipynb:

# In[2]:
from p2ch10.dsets import getCandidateInfoList, getCt, LunaDataset
candidateInfo_list = getCandidateInfoList(requireOnDisk_bool=False)
positiveInfo_list = [x for x in candidateInfo_list if x[0]]
diameter_list = [x[1] for x in positiveInfo_list]

# In[4]:
for i in range(0, len(diameter_list), 100):
    print('{:4} {:4.1f} mm'.format(i, diameter_list[i]))

# Out[4]:
   0 32.3 mm
 100 17.7 mm
 200 13.0 mm
 300 10.0 mm
 400  8.2 mm
 500  7.0 mm
 600  6.3 mm
 700  5.7 mm
 800  5.1 mm
 900  4.7 mm
1000  4.0 mm
1100  0.0 mm
1200  0.0 mm
1300  0.0 mm

We have a few very large candidates, starting at 32 mm, but they rapidly drop off to half that size. The bulk of the candidates are in the 4 to 10 mm range, and several hundred don't have size information at all. This looks as expected; you might recall that we had more actual nodules than we had diameter annotations. Quick sanity checks on your data can be very helpful; catching a problem or mistaken assumption early may save hours of effort!

The larger takeaway is that our training and validation splits should have a few properties in order to work well:

- Both sets should include examples of all variations of expected inputs.
- Neither set should have samples that aren't representative of expected inputs unless they have a specific purpose like training the model to be robust to outliers.
- The training set shouldn't offer unfair hints about the validation set that wouldn't be true for real-world data (for example, including the same sample in both sets; this is known as a leak in the training set).

10.5.4 Rendering the data
Again, either use p2ch10_explore_data.ipynb directly or start Jupyter Notebook and enter

# In[7]:
%matplotlib inline
from p2ch10.vis import findNoduleSamples, showNodule
noduleSample_list = findNoduleSamples()

(The %matplotlib inline magic line sets up the ability for images to be displayed inline via the notebook.)

TIP For more information about Jupyter's matplotlib inline magic (their term, not ours!), please see http://mng.bz/rrmD.

# In[8]:
series_uid = positiveSample_list[11][2]
showCandidate(series_uid)

This produces images akin to those showing CT and nodule slices earlier in this chapter. If you're interested, we invite you to edit the implementation of the rendering code in p2ch10/vis.py to match your needs and tastes. The rendering code makes heavy use of Matplotlib (https://matplotlib.org), which is too complex a library for us to attempt to cover here.

Remember that rendering your data is not just about getting nifty-looking pictures. The point is to get an intuitive sense of what your inputs look like. Being able to tell at a glance "This problematic sample is very noisy compared to the rest of my data" or "That's odd, this looks pretty normal" can be useful when investigating issues. Effective rendering also helps foster insights like "Perhaps if I modify things like so, I can solve the issue I'm having." That level of familiarity will be necessary as you start tackling harder and harder projects.

NOTE Due to the way each subset has been partitioned, combined with the sorting used when constructing LunaDataset.candidateInfo_list, the ordering of the entries in noduleSample_list is highly dependent on which subsets are present at the time the code is executed. Please remember this when trying to find a particular sample a second time, especially after decompressing more subsets.
10.6 Conclusion
In chapter 9, we got our heads wrapped around our data. In this chapter, we got PyTorch's head wrapped around our data! By transforming our DICOM-via-meta-image raw data into tensors, we've set the stage to start implementing a model and a training loop, which we'll see in the next chapter.

It's important not to underestimate the impact of the design decisions we've already made: the size of our inputs, the structure of our caching, and how we're partitioning our training and validation sets will all make a difference to the success or failure of our overall project. Don't hesitate to revisit these decisions later, especially once you're working on your own projects.

10.7 Exercises
1 Implement a program that iterates through a LunaDataset instance, and time how long it takes to do so. In the interest of time, it might make sense to have an option to limit the iterations to the first N=1000 samples.
  a How long does it take to run the first time?
  b How long does it take to run the second time?
  c What does clearing the cache do to the runtime?
  d What does using the last N=1000 samples do to the first/second runtime?
2 Change the LunaDataset implementation to randomize the sample list during __init__. Clear the cache, and run the modified version. What does that do to the runtime of the first and second runs?
3 Revert the randomization, and comment out the @functools.lru_cache(1, typed=True) decorator to getCt. Clear the cache, and run the modified version. How does the runtime change now?

10.8 Summary
- Often, the code required to parse and load raw data is nontrivial. For this project, we implement a Ct class that loads data from disk and provides access to cropped regions around points of interest.
- Caching can be useful if the parsing and loading routines are expensive. Keep in mind that some caching can be done in memory, and some is best performed on disk. Each can have its place in a data-loading pipeline.
- PyTorch Dataset subclasses are used to convert data from its native form into tensors suitable to pass in to the model. We can use this functionality to integrate our real-world data with PyTorch APIs.
- Subclasses of Dataset need to provide implementations for two methods: __len__ and __getitem__. Other helper methods are allowed but not required.
- Splitting our data into a sensible training set and a validation set requires that we make sure no sample is in both sets. We accomplish this here by using a consistent sort order and taking every tenth sample for our validation set.
- Data visualization is important; being able to investigate data visually can provide important clues about errors or problems. We are using Jupyter Notebooks and Matplotlib to render our data.

11 Training a classification model to detect suspected tumors

This chapter covers
- Using PyTorch DataLoaders to load data
- Implementing a model that performs classification on our CT data
- Setting up the basic skeleton for our application
- Logging and displaying metrics

In the previous chapters, we set the stage for our cancer-detection project. We covered medical details of lung cancer, took a look at the main data sources we will use for our project, and transformed our raw CT scans into a PyTorch Dataset instance. Now that we have a dataset, we can easily consume our training data. So let's do that!

11.1 A foundational model and training loop
We're going to do two main things in this chapter.
We'll start by building the nodule classification model and training loop that will be the foundation that the rest of part 2 uses to explore the larger project. To do that, we'll use the Ct and LunaDataset classes we implemented in chapter 10 to feed DataLoader instances. Those instances, in turn, will feed our classification model with data via training and validation loops. We'll finish the chapter by using the results from running that training loop to introduce one of the hardest challenges in this part of the book: how to get high-quality results from messy, limited data. In later chapters, we'll explore the specific ways in which our data is limited, as well as mitigate those limitations.

Let's recall our high-level roadmap from chapter 9, shown here in figure 11.1. Right now, we'll work on producing a model capable of performing step 4: classification. As a reminder, we will classify candidates as nodules or non-nodules (we'll build another classifier to attempt to tell malignant nodules from benign ones in chapter 14). That means we're going to assign a single, specific label to each sample that we present to the model. In this case, those labels are "nodule" and "non-nodule," since each sample represents a single candidate.

Figure 11.1 Our end-to-end project to detect lung cancer, with a focus on this chapter's topic: step 4, classification. (Pipeline: Step 1 (ch. 10): data loading; Step 2 (ch. 13): segmentation; Step 3 (ch. 14): grouping; Step 4 (ch. 11+12): classification; Step 5 (ch. 14): nodule analysis and diagnosis.)

Getting an early end-to-end version of a meaningful part of your project is a great milestone to reach. Having something that works well enough for the results to be evaluated analytically lets you move forward with future changes, confident that you are improving your results with each change—or at least that you're able to set aside any changes and experiments that don't work out! Expect to have to do a lot of experimentation when working on your own projects. Getting the best results will usually require considerable tinkering and tweaking.

But before we can get to the experimental phase, we must lay our foundation. Let's see what our part 2 training loop looks like in figure 11.2: it should seem generally familiar, given that we saw a similar set of core steps in chapter 5. Here we will also use a validation set to evaluate our training progress, as discussed in section 5.5.3.

The basic structure of what we're going to implement is as follows:
- Initialize our model and data loading.
- Loop over a semi-arbitrarily chosen number of epochs.
  – Loop over each batch of training data returned by LunaDataset.
  – The data-loader worker process loads the relevant batch of data in the background.
  – Pass the batch into our classification model to get results.
  – Calculate our loss based on the difference between our predicted results and our ground-truth data.
  – Record metrics about our model's performance into a temporary data structure.
  – Update the model weights via backpropagation of the error.
  – Loop over each batch of validation data (in a manner very similar to the training loop).
  – Load the relevant batch of validation data (again, in the background worker process).
  – Classify the batch, and compute the loss.
  – Record information about how well the model performed on the validation data.
- Print out progress and performance information for this epoch.

Figure 11.2 The training and validation script we will implement in this chapter. (Init model and data loaders; loop over epochs; training loop: load batch tuple, classify batch, calculate loss, record metrics, update weights; validation loop: load batch tuple, classify batch, calculate loss, record metrics; log metrics to console and TensorBoard. The model begins initialized with random weights and ends fully trained.)

As we go through the code for the chapter, keep an eye out for two main differences between the code we're producing here and what we used for a training loop in part 1. First, we'll put more structure around our program, since the project as a whole is quite a bit more complicated than what we did in earlier chapters. Without that extra structure, the code can get messy quickly. And for this project, we will have our main training application use a number of well-contained functions, and we will further separate code for things like our dataset into self-contained Python modules.

Make sure that for your own projects, you match the level of structure and design to the complexity level of your project. Too little structure, and it will become difficult to perform experiments cleanly, troubleshoot problems, or even describe what you're doing! Conversely, too much structure means you're wasting time writing infrastructure that you don't need and most likely slowing yourself down by having to conform to it after all that plumbing is in place.
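The epoch/batch outline above can be condensed into a skeleton like the following. This is an illustrative stand-in, not the book's training.py; train_one_epoch and the toy model_step are invented for the example, and weight updates are reduced to a comment:

```python
# Skeleton of the epoch/batch structure outlined above, with trivial
# stand-ins for the model, loss, and data loaders (illustrative only).
def train_one_epoch(model_step, train_batches, val_batches):
    train_metrics, val_metrics = [], []
    for batch in train_batches:          # training loop
        loss = model_step(batch)         # classify the batch + compute the loss
        train_metrics.append(loss)       # record metrics
        # (real code would also backpropagate and update weights here)
    for batch in val_batches:            # validation loop: no weight updates
        val_metrics.append(model_step(batch))
    return train_metrics, val_metrics

# Toy "model": the loss is just the batch mean, to exercise the skeleton.
step = lambda batch: sum(batch) / len(batch)
train_m, val_m = train_one_epoch(step, [[1, 2], [3, 5]], [[2, 4]])
print(train_m, val_m)                    # → [1.5, 4.0] [3.0]
```

The only structural difference between the two inner loops is the weight update, which is exactly why the validation loop can reuse most of the training loop's machinery.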
Plus it can be tempting to spend time on infrastructure as a procrastination tactic, rather than digging in to the hard work of making actual progress on your project. Don't fall into that trap!

The other big difference between this chapter's code and part 1 will be a focus on collecting a variety of metrics about how training is progressing. Being able to accurately determine the impact of changes on training is impossible without having good metrics logging. Without spoiling the next chapter, we'll also see how important it is to collect not just metrics, but the right metrics for the job. We'll lay the infrastructure for tracking those metrics in this chapter, and we'll exercise that infrastructure by collecting and displaying the loss and percent of samples correctly classified, both overall and per class. That's enough to get us started, but we'll cover a more realistic set of metrics in chapter 12.

11.2 The main entry point for our application
One of the big structural differences from earlier training work we've done in this book is that part 2 wraps our work in a fully fledged command-line application. It will parse command-line arguments, have a full-featured --help command, and be easy to run in a wide variety of environments. All this will allow us to easily invoke the training routines from both Jupyter and a Bash shell (any shell, really, but if you're using a non-Bash shell, you already knew that).

Our application's functionality will be implemented via a class so that we can instantiate the application and pass it around if we feel the need. This can make testing, debugging, or invocation from other Python programs easier. We can invoke the application without needing to spin up a second OS-level process (we won't do explicit unit testing in this book, but the structure we create can be helpful for real projects where that kind of testing is appropriate).
One way to take advantage of being able to invoke our training by either function call or OS-level process is to wrap the function invocations into a Jupyter Notebook so the code can easily be called from either the native CLI or the browser.

Listing 11.1 code/p2_run_everything.ipynb

# In[2]:
def run(app, *argv):
    argv = list(argv)
    argv.insert(0, '--num-workers=4')    # We assume you have a four-core, eight-thread CPU. Change the 4 if needed.
    log.info("Running: {}({!r}).main()".format(app, argv))

    app_cls = importstr(*app.rsplit('.', 1))    # This is a slightly cleaner call to __import__.
    app_cls(argv).main()

    log.info("Finished: {}.{!r}).main()".format(app, argv))

# In[6]:
run('p2ch11.training.LunaTrainingApp', '--epochs=1')

NOTE The training here assumes that you're on a workstation that has a four-core, eight-thread CPU, 16 GB of RAM, and a GPU with 8 GB of RAM. Reduce --batch-size if your GPU has less RAM, and --num-workers if you have fewer CPU cores, or less CPU RAM.

Let's get some semistandard boilerplate code out of the way. We'll start at the end of the file with a pretty standard if main stanza that instantiates the application object and invokes the main method.

Listing 11.2 training.py:386

if __name__ == '__main__':
    LunaTrainingApp().main()

From there, we can jump back to the top of the file and have a look at the application class and the two functions we just called, __init__ and main. We'll want to be able to accept command-line arguments, so we'll use the standard argparse library (https://docs.python.org/3/library/argparse.html) in the application's __init__ function. Note that we can pass in custom arguments to the initializer, should we wish to do so. The main method will be the primary entry point for the core logic of the application.
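The importstr helper used in listing 11.1 isn't shown there; a rough equivalent (our sketch built on the standard importlib module, not the book's actual helper) could look like this:

```python
import importlib

# Our sketch of an importstr-style helper: dynamically import a module by
# its dotted name and optionally fetch one attribute from it.
def importstr(module_str, from_=None):
    module = importlib.import_module(module_str)
    if from_ is None:
        return module
    return getattr(module, from_)

# Mirrors run()'s usage: 'p2ch11.training.LunaTrainingApp'.rsplit('.', 1)
# yields ('p2ch11.training', 'LunaTrainingApp'); stdlib names used here.
OrderedDict = importstr(*'collections.OrderedDict'.rsplit('.', 1))
print(OrderedDict.__name__)    # → OrderedDict
```

The rsplit('.', 1) split is what lets a single dotted string name both the module to import and the class to instantiate.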
Listing 11.3 training.py:31, class LunaTrainingApp

class LunaTrainingApp:
    def __init__(self, sys_argv=None):
        if sys_argv is None:    # If the caller doesn't provide arguments, we get them from the command line.
            sys_argv = sys.argv[1:]

        parser = argparse.ArgumentParser()
        parser.add_argument('--num-workers',
            help='Number of worker processes for background data loading',
            default=8,
            type=int,
        )
        # ... line 63
        self.cli_args = parser.parse_args(sys_argv)
        self.time_str = datetime.datetime.now().strftime('%Y-%m-%d_%H.%M.%S')    # We'll use the timestamp to help identify training runs.

    # ... line 137
    def main(self):
        log.info("Starting {}, {}".format(type(self).__name__, self.cli_args))

This structure is pretty general and could be reused for future projects. In particular, parsing arguments in __init__ allows us to configure the application separately from invoking it.

If you check the code for this chapter on the book's website or GitHub, you might notice some extra lines mentioning TensorBoard. Ignore those for now; we'll discuss them in detail later in the chapter, in section 11.9.

11.3 Pretraining setup and initialization
Before we can begin iterating over each batch in our epoch, some initialization work needs to happen. After all, we can't train a model if we haven't even instantiated one yet! We need to do two main things, as we can see in figure 11.3. The first, as we just mentioned, is to initialize our model and optimizer; and the second is to initialize our Dataset and DataLoader instances. LunaDataset will define the randomized set of samples that will make up our training epoch, and our DataLoader instance will perform the work of loading the data out of our dataset and providing it to our application.
11.3.1 Initializing the model and optimizer
For this section, we are treating the details of LunaModel as a black box. In section 11.4, we will detail the internal workings. You are welcome to explore changes to the implementation to better meet our goals for the model, although that's probably best done after finishing at least chapter 12. Let's see what our starting point looks like.

Figure 11.3 The training and validation script we will implement in this chapter, with a focus on the preloop variable initialization

Listing 11.4 training.py:31, class LunaTrainingApp

class LunaTrainingApp:
    def __init__(self, sys_argv=None):
        # ... line 70
        self.use_cuda = torch.cuda.is_available()
        self.device = torch.device("cuda" if self.use_cuda else "cpu")

        self.model = self.initModel()
        self.optimizer = self.initOptimizer()

    def initModel(self):
        model = LunaModel()
        if self.use_cuda:
            log.info("Using CUDA; {} devices.".format(torch.cuda.device_count()))
            if torch.cuda.device_count() > 1:    # Detects multiple GPUs
                model = nn.DataParallel(model)    # Wraps the model
            model = model.to(self.device)    # Sends model parameters to the GPU
        return model

    def initOptimizer(self):
        return SGD(self.model.parameters(), lr=0.001, momentum=0.99)

If the system used for training has more than one GPU, we will use the nn.DataParallel class to distribute the work between all of the GPUs in the system and then collect and resync parameter updates and so on. This is almost entirely transparent in terms of both the model implementation and the code that uses that model.

Assuming that self.use_cuda is true, the call self.model.to(device) moves the model parameters to the GPU, setting up the various convolutions and other calculations to use the GPU for the heavy numerical lifting. It's important to do so before constructing the optimizer, since, otherwise, the optimizer would be left looking at the CPU-based parameter objects rather than those copied to the GPU.

For our optimizer, we'll use basic stochastic gradient descent (SGD; https://pytorch.org/docs/stable/optim.html#torch.optim.SGD) with momentum. We first saw this optimizer in chapter 5. Recall from part 1 that many different optimizers are available in PyTorch; while we won't cover most of them in any detail, the official documentation (https://pytorch.org/docs/stable/optim.html#algorithms) does a good job of linking to the relevant papers.

DataParallel vs. DistributedDataParallel
In this book, we use DataParallel to handle utilizing multiple GPUs. We chose DataParallel because it's a simple drop-in wrapper around our existing models. It is not the best-performing solution for using multiple GPUs, however, and it is limited to working with the hardware available in a single machine.

PyTorch also provides DistributedDataParallel, which is the recommended wrapper class to use when you need to spread work between more than one GPU or machine. Since the proper setup and configuration are nontrivial, and we suspect that the vast majority of our readers won't see any benefit from the complexity, we won't cover DistributedDataParallel in this book.
If you wish to learn more, we suggest reading the official documentation: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html.

Using SGD is generally considered a safe place to start when it comes to picking an optimizer; there are some problems that might not work well with SGD, but they're relatively rare. Similarly, a learning rate of 0.001 and a momentum of 0.99 are pretty safe choices. Empirically, SGD with those values has worked reasonably well for a wide range of projects, and it's easy to try a learning rate of 0.01 or 0.0001 if things aren't working well right out of the box.

That's not to say any of those values is the best for our use case, but trying to find better ones is getting ahead of ourselves. Systematically trying different values for learning rate, momentum, network size, and other similar configuration settings is called a hyperparameter search. There are other, more glaring issues we need to address first in the coming chapters. Once we address those, we can begin to fine-tune these values. As we mentioned in the section "Testing other optimizers" in chapter 5, there are also other, more exotic optimizers we might choose; but other than perhaps swapping torch.optim.SGD for torch.optim.Adam, understanding the trade-offs involved is a topic too advanced for this book.

11.3.2 Care and feeding of data loaders
The LunaDataset class that we built in the last chapter acts as the bridge between whatever Wild West data we have and the somewhat more structured world of tensors that the PyTorch building blocks expect. For example, torch.nn.Conv3d (https://pytorch.org/docs/stable/nn.html#conv3d) expects five-dimensional input: (N, C, D, H, W): number of samples, channels per sample, depth, height, and width. Quite different from the native 3D our CT provides!
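To make that shape requirement concrete, here is a quick sketch using NumPy as a stand-in for torch (np.expand_dims plays the role of unsqueeze(0), and np.stack the role of the DataLoader's collation; the 32 × 48 × 48 crop size matches figure 11.5):

```python
import numpy as np

ct_chunk = np.zeros((32, 48, 48), dtype=np.float32)    # one cropped candidate: (D, H, W)
ct_t = np.expand_dims(ct_chunk, 0)                     # add the channel dim: (C=1, D, H, W)
print(ct_t.shape)                                      # → (1, 32, 48, 48)

batch = np.stack([ct_t] * 8)                           # collate 8 samples into one batch
print(batch.shape)                                     # → (8, 1, 32, 48, 48), i.e. (N, C, D, H, W)
```

The single-intensity CT keeps C at 1, and the leading N axis is what the batching machinery described next provides for free.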
You may recall the ct_t.unsqueeze(0) call in LunaDataset.__getitem__ from the last chapter; it provides the fourth dimension, a "channel" for our data. Recall from chapter 4 that an RGB image has three channels, one each for red, green, and blue. Astronomical data could have dozens, one each for various slices of the electromagnetic spectrum—gamma rays, X-rays, ultraviolet light, visible light, infrared, microwaves, and/or radio waves. Since CT scans are single-intensity, our channel dimension is only size 1.

Also recall from part 1 that training on single samples at a time is typically an inefficient use of computing resources, because most processing platforms are capable of more parallel calculations than are required by a model to process a single training or validation sample. The solution is to group sample tuples together into a batch tuple, as in figure 11.4, allowing multiple samples to be processed at the same time. The fifth dimension (N) differentiates multiple samples in the same batch.

Conveniently, we don't have to implement any of this batching: the PyTorch DataLoader class will handle all of the collation work for us. We've already built the bridge from the CT scans to PyTorch tensors with our LunaDataset class, so all that remains is to plug our dataset into a data loader.

Listing 11.5 training.py:89, LunaTrainingApp.initTrainDl

    def initTrainDl(self):
        train_ds = LunaDataset(    # Our custom dataset
            val_stride=10,
            isValSet_bool=False,
        )

        batch_size = self.cli_args.batch_size
        if self.use_cuda:
            batch_size *= torch.cuda.device_count()

        train_dl = DataLoader(    # An off-the-shelf class
            train_ds,
            batch_size=batch_size,    # Batching is done automatically.
            num_workers=self.cli_args.num_workers,
            pin_memory=self.use_cuda,    # Pinned memory transfers to GPU quickly.
        )
        return train_dl

    # ... line 137
    def main(self):
        train_dl = self.initTrainDl()
        val_dl = self.initValDl()    # The validation data loader is very similar to training.

Figure 11.4 Sample tuples being collated into a single batch tuple inside a data loader

In addition to batching individual samples, data loaders can also provide parallel loading of data by using separate processes and shared memory. All we need to do is specify num_workers=… when instantiating the data loader, and the rest is taken care of behind the scenes. Each worker process produces complete batches as in figure 11.4. This helps make sure hungry GPUs are well fed with data. Our validation_ds and validation_dl instances look similar, except for the obvious isValSet_bool=True.

When we iterate, like for batch_tup in self.train_dl:, we won't have to wait for each Ct to be loaded, samples to be taken and batched, and so on. Instead, we'll get the already loaded batch_tup immediately, and a worker process will be freed up in the background to begin loading another batch to use on a later iteration. Using the data-loading features of PyTorch can help speed up most projects, because we can overlap data loading and processing with GPU calculation.

11.4 Our first-pass neural network design
The possible design space for a convolutional neural network capable of detecting tumors is effectively infinite. Luckily, considerable effort has been spent over the past decade or so investigating effective models for image recognition.
While these have largely focused on 2D images, the general architecture ideas transfer well to 3D, so there are many tested designs that we can use as a starting point. This helps because although our first network architecture is unlikely to be our best option, right now we are only aiming for "good enough to get us going."

We will base the network design on what we used in chapter 8. We will have to update the model somewhat because our input data is 3D, and we will add some complicating details, but the overall structure shown in figure 11.5 should feel familiar. Similarly, the work we do for this project will be a good base for your future projects, although the further you get from classification or segmentation projects, the more you'll have to adapt this base to fit. Let's dissect this architecture, starting with the four repeated blocks that make up the bulk of the network.

11.4.1 The core convolutions
Classification models often have a structure that consists of a tail, a backbone (or body), and a head. The tail is the first few layers that process the input to the network. These early layers often have a different structure or organization than the rest of the network, as they must adapt the input to the form expected by the backbone. Here we use a simple batch normalization layer, though often the tail contains convolutional layers as well. Such convolutional layers are often used to aggressively downsample the size of the image; since our image size is already small, we don't need to do that here.

Next, the backbone of the network typically contains the bulk of the layers, which are usually arranged in series of blocks. Each block has the same (or at least a similar) set of layers, though often the size of the expected input and the number of filters changes from block to block.
We will use a block that consists of two 3 × 3 convolutions, each followed by an activation, with a max-pooling operation at the end of the block. We can see this in the expanded view of figure 11.5 labeled Block[block1]. Here's what the implementation of the block looks like in code.

Figure 11.5 The architecture of the LunaModel class consisting of a batch-normalization tail, a four-block backbone, and a head comprised of a linear layer followed by softmax. (Channels: 1 at the 32 × 48 × 48 input, then 8 at 16 × 24 × 24, 16 at 8 × 12 × 12, 32 at 4 × 6 × 6, and 64 at 2 × 3 × 3 through the backbone blocks.)

Listing 11.6 model.py:67, class LunaBlock

class LunaBlock(nn.Module):
    def __init__(self, in_channels, conv_channels):
        super().__init__()

        self.conv1 = nn.Conv3d(
            in_channels, conv_channels, kernel_size=3, padding=1, bias=True,
        )
        self.relu1 = nn.ReLU(inplace=True)    # These could be implemented as calls to the functional API instead.
        self.conv2 = nn.Conv3d(
            conv_channels, conv_channels, kernel_size=3, padding=1, bias=True,
        )
        self.relu2 = nn.ReLU(inplace=True)

        self.maxpool = nn.MaxPool3d(2, 2)

    def forward(self, input_batch):
        block_out = self.conv1(input_batch)
        block_out = self.relu1(block_out)
        block_out = self.conv2(block_out)
        block_out = self.relu2(block_out)

        return self.maxpool(block_out)

Finally, the head of the network takes the output from the backbone and converts it into the desired output form. For convolutional networks, this often involves flattening the intermediate output and passing it to a fully connected layer.
For some networks, it makes sense to also include a second fully connected layer, although that is usually more appropriate for classification problems in which the imaged objects have more structure (think about cars versus trucks having wheels, lights, grills, doors, and so on) and for projects with a large number of classes. Since we are only doing binary classification, and we don't seem to need the additional complexity, we have only a single flattening layer.

Using a structure like this can be a good first building block for a convolutional network. There are more complicated designs out there, but for many projects they're overkill in terms of both implementation complexity and computational demands. It's a good idea to start simple and add complexity only when there's a demonstrable need for it.

We can see the convolutions of our block represented in 2D in figure 11.6. Since this is a small portion of a larger image, we ignore padding here. (Note that the ReLU activation function is not shown, as applying it does not change the image sizes.)

Let's walk through the information flow between our input voxels and a single voxel of output. We want to have a strong sense of how our output will respond when the inputs change. It might be a good idea to review chapter 8, particularly sections 8.1 through 8.3, just to make sure you're 100% solid on the basic mechanics of convolutions.

We're using 3 × 3 × 3 convolutions in our block. A single 3 × 3 × 3 convolution has a receptive field of 3 × 3 × 3, which is almost tautological. Twenty-seven voxels are fed in, and one comes out.

It gets interesting when we use two 3 × 3 × 3 convolutions stacked back to back. Stacking convolutional layers allows the final output voxel (or pixel) to be influenced by an input further away than the size of the convolutional kernel suggests. If that output
voxel is fed into another 3 × 3 × 3 kernel as one of the edge voxels, then some of the inputs to the first layer will be outside of the 3 × 3 × 3 area of input to the second. The final output of those two stacked layers has an effective receptive field of 5 × 5 × 5. That means that, taken together, the stacked layers act similarly to a single convolutional layer with a larger size. Put another way, each 3 × 3 × 3 convolutional layer adds an additional one-voxel-per-edge border to the receptive field. We can see this if we trace the arrows in figure 11.6 backward; our 2 × 2 output has a receptive field of 4 × 4, which in turn has a receptive field of 6 × 6. Two stacked 3 × 3 × 3 layers use fewer parameters than a full 5 × 5 × 5 convolution would (and so are also faster to compute).

The output of our two stacked convolutions is fed into a 2 × 2 × 2 max pool, which means we're taking a 6 × 6 × 6 effective field, throwing away seven-eighths of the data, and going with the one 5 × 5 × 5 field that produced the largest value.2 Now, those "discarded" input voxels still have a chance to contribute, since the max pool that's one output voxel over has an overlapping input field, so it's possible they'll influence the final output that way.

Note that while we show the receptive field shrinking with each convolutional layer, we're using padded convolutions, which add a virtual one-pixel border around the image. Doing so keeps our input and output image sizes the same.

The nn.ReLU layers are the same as the ones we looked at in chapter 6. Outputs greater than 0.0 will be left unchanged, and outputs less than 0.0 will be clamped to zero.

This block will be repeated multiple times to form our model's backbone.

2 Remember that we're actually working in 3D, despite the 2D figure.
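The parameter savings are easy to verify with back-of-the-envelope arithmetic: a k × k × k convolution has k³ weights per (input channel, output channel) pair, plus one bias per output channel. The channel count below is chosen arbitrarily for illustration:

```python
def conv3d_param_count(kernel, in_channels, out_channels):
    # One kernel^3 block of weights per (input channel, output channel) pair,
    # plus one bias per output channel.
    return kernel ** 3 * in_channels * out_channels + out_channels

C = 8  # arbitrary channel count for illustration
stacked_3x3x3 = 2 * conv3d_param_count(3, C, C)  # two stacked 3x3x3 layers
single_5x5x5 = conv3d_param_count(5, C, C)       # one 5x5x5 layer
print(stacked_3x3x3, single_5x5x5)  # 3472 8008: same 5x5x5 receptive field
```

The stacked pair covers the same 5 × 5 × 5 effective receptive field with well under half the parameters, and the gap widens as the channel counts grow.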
Figure 11.6 The convolutional architecture of a LunaModel block, consisting of two 3 × 3 convolutions followed by a max pool. The final pixel has a receptive field of 6 × 6. [The figure traces a 6 × 6 input through a 3 × 3 convolution to a 4 × 4 output (whose padded input is the same size as its output), a second 3 × 3 convolution to a 2 × 2 output, and a 2 × 2 max pool to a 1 × 1 output.]

11.4.2 The full model

Let's take a look at the full model implementation. We'll skip the block definition, since we just saw that in listing 11.6.

Listing 11.7 model.py:13, class LunaModel

class LunaModel(nn.Module):
    def __init__(self, in_channels=1, conv_channels=8):
        super().__init__()

        self.tail_batchnorm = nn.BatchNorm3d(1)                       # Tail

        self.block1 = LunaBlock(in_channels, conv_channels)           # Backbone
        self.block2 = LunaBlock(conv_channels, conv_channels * 2)
        self.block3 = LunaBlock(conv_channels * 2, conv_channels * 4)
        self.block4 = LunaBlock(conv_channels * 4, conv_channels * 8)

        self.head_linear = nn.Linear(1152, 2)                         # Head
        self.head_softmax = nn.Softmax(dim=1)

Here, our tail is relatively simple. We are going to normalize our input using nn.BatchNorm3d, which, as we saw in chapter 8, will shift and scale our input so that it has a mean of 0 and a standard deviation of 1. Thus, the somewhat odd Hounsfield unit (HU) scale that our input is in won't really be visible to the rest of the network. This is a somewhat arbitrary choice; we know what our input units are, and we know the expected values of the relevant tissues, so we could probably implement a fixed normalization scheme pretty easily. It's not clear which approach would be better.3

Our backbone is four repeated blocks, with the block implementation pulled out into the separate nn.Module subclass we saw earlier in listing 11.6. Since each block ends with a 2 × 2 × 2 max-pool operation, after 4 blocks we will have decreased the resolution of the image 16 times in each dimension.
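That shape arithmetic, along with the 1152 input features expected by head_linear, can be sanity-checked in a few lines; the starting shape and channel counts below come from the listing and figure above:

```python
depth, height, width = 32, 48, 48   # spatial shape of one input chunk
conv_channels = 8
for _ in range(4):                  # each of the four blocks ends in a 2x2x2 max pool
    depth, height, width = depth // 2, height // 2, width // 2
out_channels = conv_channels * 8    # block4 outputs conv_channels * 8 = 64 channels
flat_features = out_channels * depth * height * width
print((out_channels, depth, height, width), flat_features)  # (64, 2, 3, 3) 1152
```

Halving each dimension four times is the "16 times in each dimension" reduction, and flattening the resulting 64-channel 2 × 3 × 3 volume gives exactly the 1152 features that nn.Linear(1152, 2) expects.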
Recall from chapter 10 that our data is returned in chunks that are 32 × 48 × 48, which will become 2 × 3 × 3 by the end of the backbone.

Finally, our head is just a fully connected layer followed by a call to nn.Softmax. Softmax is a useful function for single-label classification tasks and has a few nice properties: it bounds the output between 0 and 1, it's relatively insensitive to the absolute range of the inputs (only the relative values of the inputs matter), and it allows our model to express the degree of certainty it has in an answer.

The function itself is relatively simple. Every value from the input is used to exponentiate e, and the resulting series of values is then divided by the sum of all the results of exponentiation. Here's what it looks like implemented in a simple fashion as a nonoptimized softmax in pure Python:

>>> logits = [1, -2, 3]
>>> exp = [e ** x for x in logits]
>>> exp
[2.718, 0.135, 20.086]
>>> softmax = [x / sum(exp) for x in exp]
>>> softmax
[0.118, 0.006, 0.876]

Of course, we use the PyTorch version of nn.Softmax for our model, as it natively understands batches and tensors and will perform autograd quickly and as expected.

3 Which is why there's an exercise to experiment with both in the next chapter!

COMPLICATION: CONVERTING FROM CONVOLUTION TO LINEAR

Continuing on with our model definition, we come to a complication. We can't just feed the output of self.block4 into a fully connected layer, since that output is a per-sample 2 × 3 × 3 image with 64 channels, and fully connected layers expect a 1D vector as input (well, technically they expect a batch of 1D vectors, which is a 2D array, but the mismatch remains either way). Let's take a look at the forward method.
Listing 11.8 model.py:50, LunaModel.forward

def forward(self, input_batch):
    bn_output = self.tail_batchnorm(input_batch)

    block_out = self.block1(bn_output)
    block_out = self.block2(block_out)
    block_out = self.block3(block_out)
    block_out = self.block4(block_out)

    conv_flat = block_out.view(
        block_out.size(0),  # the batch size
        -1,
    )
    linear_output = self.head_linear(conv_flat)

    return linear_output, self.head_softmax(linear_output)

Note that before we pass data into a fully connected layer, we must flatten it using the view function. Since that operation is stateless (it has no parameters that govern its behavior), we can simply perform the operation in the forward function. This is somewhat similar to the functional interfaces we discussed in chapter 8. Almost every model that uses convolution and produces classifications, regressions, or other non-image outputs will have a similar component in the head of the network.

For the return value of the forward method, we return both the raw logits and the softmax-produced probabilities. We first hinted at logits in section 7.2.6: they are the numerical values produced by the network prior to being normalized into probabilities by the softmax layer. That might sound a bit complicated, but logits are really just the raw input to the softmax layer. They can have any real value as input, and the softmax will squash them to the range 0–1.

We'll use the logits when we calculate the nn.CrossEntropyLoss during training,4 and we'll use the probabilities when we want to actually classify the samples. This kind of slight difference between what's used for training and what's used in production is fairly common, especially when the difference between the two outputs is a simple, stateless function like softmax.

INITIALIZATION

Finally, let's talk about initializing our network's parameters.
In order to get well-behaved performance out of our model, the network's weights, biases, and other parameters need to exhibit certain properties. Let's imagine a degenerate case, where all of the network's weights are greater than 1 (and we do not have residual connections). In that case, repeated multiplication by those weights would result in layer outputs that became very large as data flowed through the layers of the network. Similarly, weights less than 1 would cause all layer outputs to become smaller and vanish. Similar considerations apply to the gradients in the backward pass.

Many normalization techniques can be used to keep layer outputs well behaved, but one of the simplest is to just make sure the network's weights are initialized such that intermediate values and gradients become neither unreasonably small nor unreasonably large. As we discussed in chapter 8, PyTorch does not help us as much as it should here, so we need to do some initialization ourselves. We can treat the following _init_weights function as boilerplate, as the exact details aren't particularly important.

Listing 11.9 model.py:30, LunaModel._init_weights

def _init_weights(self):
    for m in self.modules():
        if type(m) in {
            nn.Linear,
            nn.Conv3d,
        }:
            nn.init.kaiming_normal_(
                m.weight.data, a=0, mode='fan_out', nonlinearity='relu',
            )
            if m.bias is not None:
                fan_in, fan_out = \
                    nn.init._calculate_fan_in_and_fan_out(m.weight.data)
                bound = 1 / math.sqrt(fan_out)
                nn.init.normal_(m.bias, -bound, bound)

11.5 Training and validating the model

Now it's time to take the various pieces we've been working with and assemble them into something we can actually execute. This training loop should be familiar—we saw loops like figure 11.7 in chapter 5.

4 There are numerical stability benefits for doing so.
Propagating gradients accurately through an exponential calculated using 32-bit floating-point numbers can be problematic.

Figure 11.7 The training and validation script we will implement in this chapter, with a focus on the nested loops over each epoch and batches in the epoch. [The figure shows: init model (initialized with random weights) and init data loaders, then a loop over epochs containing a training loop (load batch tuple, classify batch, calculate loss, record metrics, update weights), a validation loop (load batch tuple, classify batch, calculate loss, record metrics), and logging of metrics to the console and TensorBoard, ultimately producing a fully trained model.]

The code is relatively compact (the doTraining function is only 12 statements; it's longer here due to line-length limitations).

Listing 11.10 training.py:137, LunaTrainingApp.main

def main(self):
    # ... line 143
    for epoch_ndx in range(1, self.cli_args.epochs + 1):
        trnMetrics_t = self.doTraining(epoch_ndx, train_dl)
        self.logMetrics(epoch_ndx, 'trn', trnMetrics_t)

# ... line 165
def doTraining(self, epoch_ndx, train_dl):
    self.model.train()
    trnMetrics_g = torch.zeros(  # initializes an empty metrics array
        METRICS_SIZE,
        len(train_dl.dataset),
        device=self.device,
    )

    batch_iter = enumerateWithEstimate(  # sets up our batch looping with a time estimate
        train_dl,
        "E{} Training".format(epoch_ndx),
        start_ndx=train_dl.num_workers,
    )
    for batch_ndx, batch_tup in batch_iter:
        self.optimizer.zero_grad()  # frees any leftover gradient tensors
        loss_var = self.computeBatchLoss(
            batch_ndx,
            batch_tup,
            train_dl.batch_size,
            trnMetrics_g,
        )

        loss_var.backward()
        self.optimizer.step()  # actually updates the model weights

    self.totalTrainingSamples_count += len(train_dl.dataset)

    return trnMetrics_g.to('cpu')

The main differences that we see from the training loops in earlier chapters are as follows:

- The trnMetrics_g tensor collects detailed per-class metrics during training. For larger projects like ours, this kind of insight can be very nice to have.
- We don't directly iterate over the train_dl data loader. We use enumerateWithEstimate to provide an estimated time of completion. This isn't crucial; it's just a stylistic choice.
- The actual loss computation is pushed into the computeBatchLoss method. Again, this isn't strictly necessary, but code reuse is typically a plus.

We'll discuss why we've wrapped enumerate with additional functionality in section 11.7.2; for now, assume it's the same as enumerate(train_dl).

The purpose of the trnMetrics_g tensor is to transport information about how the model is behaving on a per-sample basis from the computeBatchLoss function to the logMetrics function. Let's take a look at computeBatchLoss next. We'll cover logMetrics after we're done with the rest of the main training loop.

11.5.1 The computeBatchLoss function

The computeBatchLoss function is called by both the training and validation loops. As the name suggests, it computes the loss over a batch of samples. In addition, the function also computes and records per-sample information about the output the model is producing. This lets us compute things like the percentage of correct answers per class, which allows us to home in on areas where our model is having difficulty.

Of course, the function's core functionality is around feeding the batch into the model and computing the per-batch loss. We're using CrossEntropyLoss (https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss), just like in chapter 7.
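As a refresher before we look at the code, here is what cross entropy computes from the raw logits for a single sample, in nonoptimized pure Python (PyTorch's CrossEntropyLoss does the same thing in a more numerically stable way, over whole batches):

```python
import math

def cross_entropy(logits, target_index):
    """-log(softmax(logits)[target_index]) for a single sample."""
    exps = [math.exp(x) for x in logits]
    return -math.log(exps[target_index] / sum(exps))

# A confident, correct prediction yields a small loss;
# the same logits with the wrong target yield a large one.
low = cross_entropy([1.0, -2.0, 3.0], 2)
high = cross_entropy([1.0, -2.0, 3.0], 1)
print(round(low, 3), round(high, 3))
```

This is also why the model returns raw logits rather than probabilities for training: the loss is computed directly from the logits, while the softmax probabilities are kept for actual classification.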
Unpacking the batch tuple, moving the tensors to the GPU, and invoking the model should all feel familiar after that earlier training work. We'll discuss this method in detail in the next section.

Listing 11.11 training.py:225, .computeBatchLoss

def computeBatchLoss(self, batch_ndx, batch_tup, batch_size, metrics_g):
    input_t, label_t, _series_list, _center_list = batch_tup

    input_g = input_t.to(self.device, non_blocking=True)
    label_g = label_t.to(self.device, non_blocking=True)

    logits_g, probability_g = self.model(input_g)

    loss_func = nn.CrossEntropyLoss(reduction='none')  # reduction='none' gives the loss per sample
    loss_g = loss_func(
        logits_g,
        label_g[:,1],  # index of the one-hot-encoded class
    )
    # ... line 238
    return loss_g.mean()  # recombines the loss per sample into a single value

Here we are not using the default behavior, which gets a loss value averaged over the batch. Instead, we get a tensor of loss values, one per sample. This lets us track the individual losses, which means we can aggregate them as we wish (per class, for example). We'll see that in action in just a moment. For now, we'll return the mean of those per-sample losses, which is equivalent to the batch loss. In situations where you don't want to keep statistics per sample, using the loss averaged over the batch is perfectly fine. Whether that's the case is highly dependent on your project and goals.

Once that's done, we've fulfilled our obligations to the calling function in terms of what's required to do backpropagation and weight updates. Before we do that, however, we also want to record our per-sample stats for posterity (and later analysis). We'll use the metrics_g parameter passed in to accomplish this.

Listing 11.12 training.py:26

METRICS_LABEL_NDX=0  # these named array indexes are declared at module-level scope
METRICS_PRED_NDX=1
METRICS_LOSS_NDX=2
METRICS_SIZE = 3

# ... line 225
def computeBatchLoss(self, batch_ndx, batch_tup, batch_size, metrics_g):
    # ...
    # line 238
    start_ndx = batch_ndx * batch_size
    end_ndx = start_ndx + label_t.size(0)

    metrics_g[METRICS_LABEL_NDX, start_ndx:end_ndx] = \
        label_g[:,1].detach()  # we use detach since none of our metrics need to hold on to gradients
    metrics_g[METRICS_PRED_NDX, start_ndx:end_ndx] = \
        probability_g[:,1].detach()
    metrics_g[METRICS_LOSS_NDX, start_ndx:end_ndx] = \
        loss_g.detach()

    return loss_g.mean()  # again, this is the loss over the entire batch

By recording the label, prediction, and loss for each and every training (and later, validation) sample, we have a wealth of detailed information we can use to investigate the behavior of our model. For now, we're going to focus on compiling per-class statistics, but we could easily use this information to find the sample that is classified the most wrongly and start to investigate why. Again, for some projects, this kind of information will be less interesting, but it's good to remember that you have these kinds of options available.

11.5.2 The validation loop is similar

The validation loop in figure 11.8 looks very similar to training but is somewhat simplified. The key difference is that validation is read-only. Specifically, the loss value returned is not used, and the weights are not updated. Nothing about the model should have changed between the start and end of the function call. In addition, it's quite a bit faster due to the with torch.no_grad() context manager explicitly informing PyTorch that no gradients need to be computed.

Listing 11.13 training.py:137, LunaTrainingApp.main

def main(self):
    for epoch_ndx in range(1, self.cli_args.epochs + 1):
        # ...
        # line 157
        valMetrics_t = self.doValidation(epoch_ndx, val_dl)
        self.logMetrics(epoch_ndx, 'val', valMetrics_t)

# ... line 203
def doValidation(self, epoch_ndx, val_dl):
    with torch.no_grad():
        self.model.eval()  # turns off training-time behavior
        valMetrics_g = torch.zeros(
            METRICS_SIZE,
            len(val_dl.dataset),
            device=self.device,
        )

        batch_iter = enumerateWithEstimate(
            val_dl,
            "E{} Validation ".format(epoch_ndx),
            start_ndx=val_dl.num_workers,
        )
        for batch_ndx, batch_tup in batch_iter:
            self.computeBatchLoss(
                batch_ndx, batch_tup, val_dl.batch_size, valMetrics_g)

    return valMetrics_g.to('cpu')

Figure 11.8 The training and validation script we will implement in this chapter, with a focus on the per-epoch validation loop.

Without needing to update network weights (recall that doing so would violate the entire premise of the validation set—something we never want to do!), we don't need to use the loss returned from computeBatchLoss, nor do we need to reference the optimizer. All that's left inside the loop is the call to computeBatchLoss.
Note that we are still collecting metrics in valMetrics_g as a side effect of the call, even though we aren't using the overall per-batch loss returned by computeBatchLoss for anything.

11.6 Outputting performance metrics

The last thing we do per epoch is log our performance metrics for this epoch. As shown in figure 11.9, once we've logged metrics, we return to the training loop for the next epoch of training. Logging results and progress as we go is important, since if training goes off the rails ("does not converge" in the parlance of deep learning), we want to notice this is happening and stop spending time training a model that's not working out. In less catastrophic cases, it's good to be able to keep an eye on how your model behaves.

Earlier, we were collecting results in trnMetrics_g and valMetrics_g for logging progress per epoch. Each of these two tensors now contains everything we need to compute our percent correct and average loss per class for our training and validation runs. Doing this per epoch is a common choice, though somewhat arbitrary. In future chapters, we'll see how to manipulate the size of our epochs such that we get feedback about training progress at a reasonable rate.

11.6.1 The logMetrics function

Let's talk about the high-level structure of the logMetrics function. The signature looks like this.

Listing 11.14 training.py:251, LunaTrainingApp.logMetrics

def logMetrics(
        self,
        epoch_ndx,
        mode_str,
        metrics_t,
        classificationThreshold=0.5,
):

We use epoch_ndx purely for display while logging our results. The mode_str argument tells us whether the metrics are for training or validation. We consume either trnMetrics_t or valMetrics_t, which is passed in as the metrics_t parameter.
Recall that both of those inputs are tensors of floating-point values that we filled with data during computeBatchLoss and then transferred back to the CPU right before we returned them from doTraining and doValidation. Both tensors have three rows and as many columns as we have samples (training samples or validation samples, depending). As a reminder, those three rows correspond to the following constants.

Figure 11.9 The training and validation script we will implement in this chapter, with a focus on the metrics logging at the end of each epoch.

Listing 11.15 training.py:26

METRICS_LABEL_NDX=0  # these are declared at module-level scope
METRICS_PRED_NDX=1
METRICS_LOSS_NDX=2
METRICS_SIZE = 3

CONSTRUCTING MASKS

Next, we're going to construct masks that will let us limit our metrics to only the nodule or non-nodule (aka positive or negative) samples. We will also count the total samples per class, as well as the number of samples we classified correctly.
Listing 11.16 training.py:264, LunaTrainingApp.logMetrics

negLabel_mask = metrics_t[METRICS_LABEL_NDX] <= classificationThreshold
negPred_mask = metrics_t[METRICS_PRED_NDX] <= classificationThreshold

posLabel_mask = ~negLabel_mask
posPred_mask = ~negPred_mask

While we don't assert it here, we know that all of the values stored in metrics_t[METRICS_LABEL_NDX] belong to the set {0.0, 1.0}, since we know that our nodule status labels are simply True or False. By comparing to classificationThreshold, which defaults to 0.5, we get an array of binary values where a True value corresponds to a non-nodule (aka negative) label for the sample in question.

We do a similar comparison to create the negPred_mask, but we must remember that the METRICS_PRED_NDX values are the positive predictions produced by our model and can be any floating-point value between 0.0 and 1.0, inclusive. That doesn't change our comparison, but it does mean the actual value can be close to 0.5. The positive masks are simply the inverse of the negative masks.

NOTE While other projects can utilize similar approaches, it's important to realize that we're taking some shortcuts that are allowed because this is a binary classification problem. If your next project has more than two classes or has samples that belong to multiple classes at the same time, you'll have to use more complicated logic to build similar masks.

Tensor masking and Boolean indexing

Masked tensors are a common usage pattern that might be opaque if you have not encountered them before. You may be familiar with the NumPy concept called masked arrays; tensor and array masks behave the same way.

If you aren't familiar with masked arrays, an excellent page in the NumPy documentation (http://mng.bz/XPra) describes the behavior well. PyTorch purposely uses the same syntax and semantics as NumPy.
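The same masking logic can be illustrated with plain Python lists standing in for the rows of metrics_t (the sample values here are made up for illustration):

```python
labels = [0.0, 0.0, 1.0, 1.0]   # ground truth: two non-nodules, two nodules
preds = [0.1, 0.7, 0.9, 0.4]    # model's positive-class probabilities
threshold = 0.5

neg_label_mask = [label <= threshold for label in labels]
neg_pred_mask = [pred <= threshold for pred in preds]
pos_label_mask = [not m for m in neg_label_mask]
pos_pred_mask = [not m for m in neg_pred_mask]

# A sample is counted as correct when its label mask and prediction mask agree.
neg_correct = sum(l and p for l, p in zip(neg_label_mask, neg_pred_mask))
pos_correct = sum(l and p for l, p in zip(pos_label_mask, pos_pred_mask))
print(neg_correct, pos_correct)  # 1 1: one false positive, one false negative
```

With tensors, the list comprehensions become single vectorized comparisons and the `not` becomes `~`, but the bookkeeping is the same.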
Next, we use those masks to compute some per-label statistics and store them in a dictionary, metrics_dict.

Listing 11.17 training.py:270, LunaTrainingApp.logMetrics

neg_count = int(negLabel_mask.sum())  # converts to a normal Python integer
pos_count = int(posLabel_mask.sum())

neg_correct = int((negLabel_mask & negPred_mask).sum())
pos_correct = int((posLabel_mask & posPred_mask).sum())

metrics_dict = {}
metrics_dict['loss/all'] = \
    metrics_t[METRICS_LOSS_NDX].mean()
metrics_dict['loss/neg'] = \
    metrics_t[METRICS_LOSS_NDX, negLabel_mask].mean()
metrics_dict['loss/pos'] = \
    metrics_t[METRICS_LOSS_NDX, posLabel_mask].mean()

metrics_dict['correct/all'] = (pos_correct + neg_correct) \
    / np.float32(metrics_t.shape[1]) * 100  # avoids integer division by converting to np.float32
metrics_dict['correct/neg'] = neg_correct / np.float32(neg_count) * 100
metrics_dict['correct/pos'] = pos_correct / np.float32(pos_count) * 100

First we compute the average loss over the entire epoch. Since the loss is the single metric that is being minimized during training, we always want to be able to keep track of it. Then we limit the loss averaging to only those samples with a negative label, using the negLabel_mask we just made. We do the same with the positive loss. Computing a per-class loss like this can be useful if one class is persistently harder to classify than another, since that knowledge can help drive investigation and improvements.

We'll close out the calculations by determining the fraction of samples we classified correctly, as well as the fraction correct from each label. Since we will display these numbers as percentages in a moment, we also multiply the values by 100. Similar to the loss, we can use these numbers to help guide our efforts when making improvements. After the calculations, we then log our results with three calls to log.info.
Listing 11.18 training.py:289, LunaTrainingApp.logMetrics

log.info(
    ("E{} {:8} {loss/all:.4f} loss, "
         + "{correct/all:-5.1f}% correct, "
    ).format(
        epoch_ndx,
        mode_str,
        **metrics_dict,
    )
)
log.info(
    ("E{} {:8} {loss/neg:.4f} loss, "
         + "{correct/neg:-5.1f}% correct ({neg_correct:} of {neg_count:})"
    ).format(
        epoch_ndx,
        mode_str + '_neg',
        neg_correct=neg_correct,
        neg_count=neg_count,
        **metrics_dict,
    )
)
log.info(
    # ... line 319; the 'pos' logging is similar to the 'neg' logging above
)

The first log has values computed from all of our samples and is tagged /all, while the negative (non-nodule) and positive (nodule) values are tagged /neg and /pos, respectively. We don't show the third logging statement for positive values here; it's identical to the second except for swapping neg for pos in all cases.

11.7 Running the training script

Now that we've completed the core of the training.py script, we'll actually start running it. This will initialize and train our model and print statistics about how well the training is going. The idea is to get this kicked off to run in the background while we're covering the model implementation in detail. Hopefully we'll have results to look at once we're done.

We're running this script from the main code directory; it should have subdirectories called p2ch11, util, and so on. The Python environment used should have all the libraries listed in requirements.txt installed. Once those libraries are ready, we can run:

$ python -m p2ch11.training
Starting LunaTrainingApp, Namespace(batch_size=256, channels=8, epochs=20,
    layers=3, num_workers=8) : 495958 training samples, 55107 validation samples
Epoch 1 of 20, 1938/216 batches of size 256
E1 Training ----/1938, starting
E1 Training 16/1938, done at 2018-02-28 20:52:54, 0:02:57
...

(This is the command line for Linux/Bash; Windows users will probably need to invoke Python differently, depending on the install method used.)
As a reminder, we also provide a Jupyter Notebook that contains invocations of the training application.

Listing 11.19 code/p2_run_everything.ipynb

# In[5]:
run('p2ch11.prepcache.LunaPrepCacheApp')

# In[6]:
run('p2ch11.training.LunaTrainingApp', '--epochs=1')

If the first epoch seems to be taking a very long time (more than 10 or 20 minutes), it might be related to needing to prepare the cached data required by LunaDataset. See section 10.5.1 for details about the caching. The exercises for chapter 10 included writing a script to pre-stuff the cache in an efficient manner. We also provide the prepcache.py file to do the same thing; it can be invoked with python -m p2ch11.prepcache. Since we repeat our dsets.py files per chapter, the caching will need to be repeated for every chapter. This is somewhat space and time inefficient, but it means we can keep the code for each chapter much more well contained. For your future projects, we recommend reusing your cache more heavily.

Once training is underway, we want to make sure we're using the computing resources at hand the way we expect. An easy way to tell if the bottleneck is data loading or computation is to wait a few moments after the script starts to train (look for output like E1 Training 16/7750, done at…) and then check both top and nvidia-smi:

- If the eight Python worker processes are consuming >80% CPU, then the cache probably needs to be prepared (we know this here because the authors have made sure there aren't CPU bottlenecks in this project's implementation; this won't be generally true).
- If nvidia-smi reports that GPU-Util is >80%, then you're saturating your GPU.
We'll discuss some strategies for efficient waiting in section 11.7.2.

The intent is that the GPU is saturated; we want to use as much of that computing power as we can to complete epochs quickly. A single NVIDIA GTX 1080 Ti should complete an epoch in under 15 minutes. Since our model is relatively simple, it doesn't take much CPU preprocessing for the CPU to become the bottleneck. When working with models with greater depth (or more needed calculations in general), processing each batch will take longer, which will increase the amount of CPU processing we can do before the GPU runs out of work.

11.7.1 Needed data for training

If the number of samples is less than 495,958 for training or 55,107 for validation, it might make sense to do some sanity checking to be sure the full data is present and accounted for. For your future projects, make sure your dataset returns the number of samples that you expect.

First, let's take a look at the basic directory structure of our data-unversioned/part2/luna directory:

    $ ls -1p data-unversioned/part2/luna/
    subset0/
    subset1/
    ...
    subset9/

Next, let's make sure we have one .mhd file and one .raw file for each series UID:

    $ ls -1p data-unversioned/part2/luna/subset0/
    1.3.6.1.4.1.14519.5.2.1.6279.6001.105756658031515062000744821260.mhd
    1.3.6.1.4.1.14519.5.2.1.6279.6001.105756658031515062000744821260.raw
    1.3.6.1.4.1.14519.5.2.1.6279.6001.108197895896446896160048741492.mhd
    1.3.6.1.4.1.14519.5.2.1.6279.6001.108197895896446896160048741492.raw
    ...

and that we have the overall correct number of files:

    $ ls -1 data-unversioned/part2/luna/subset?/* | wc -l
    1776
    $ ls -1 data-unversioned/part2/luna/subset0/* | wc -l
    178
    ...
    $ ls -1 data-unversioned/part2/luna/subset9/* | wc -l
    176

If all of these seem right but things still aren't working, ask on Manning LiveBook (https://livebook.manning.com/book/deep-learning-with-pytorch/chapter-11) and hopefully someone can help get things sorted out.

11.7.2 Interlude: The enumerateWithEstimate function

Working with deep learning involves a lot of waiting. We're talking about real-world, sitting around, glancing at the clock on the wall, a watched pot never boils (but you could fry an egg on the GPU), straight-up boredom.

The only thing worse than sitting and staring at a blinking cursor that hasn't moved for over an hour is flooding your screen with this:

    2020-01-01 10:00:00,056 INFO training batch 1234
    2020-01-01 10:00:00,067 INFO training batch 1235
    2020-01-01 10:00:00,077 INFO training batch 1236
    2020-01-01 10:00:00,087 INFO training batch 1237
    ...etc...

At least the quietly blinking cursor doesn't blow out your scrollback buffer!

Fundamentally, while doing all this waiting, we want to answer the question "Do I have time to go refill my water glass?" along with follow-up questions about having time to

- Brew a cup of coffee
- Grab dinner
- Grab dinner in Paris (if getting dinner in France doesn't involve an airport, feel free to substitute "Paris, Texas" to make the joke work; https://en.wikipedia.org/wiki/Paris_(disambiguation))

To answer these pressing questions, we're going to use our enumerateWithEstimate function. Usage looks like the following:

    >>> for i, _ in enumerateWithEstimate(list(range(234)), "sleeping"):
    ...     time.sleep(random.random())
    ...
    11:12:41,892 WARNING sleeping ----/234, starting
    11:12:44,542 WARNING sleeping 4/234, done at 2020-01-01 11:15:16, 0:02:35
    11:12:46,599 WARNING sleeping 8/234, done at 2020-01-01 11:14:59, 0:02:17
    11:12:49,534 WARNING sleeping 16/234, done at 2020-01-01 11:14:33, 0:01:51
    11:12:58,219 WARNING sleeping 32/234, done at 2020-01-01 11:14:41, 0:01:59
    11:13:15,216 WARNING sleeping 64/234, done at 2020-01-01 11:14:43, 0:02:01
    11:13:44,233 WARNING sleeping 128/234, done at 2020-01-01 11:14:35, 0:01:53
    11:14:40,083 WARNING sleeping ----/234, done at 2020-01-01 11:14:40
    >>>

That's 8 lines of output for over 200 iterations lasting about 2 minutes. Even given the wide variance of random.random(), the function had a pretty decent estimate after 16 iterations (in less than 10 seconds). For loop bodies with more constant timing, the estimates stabilize even more quickly.

In terms of behavior, enumerateWithEstimate is almost identical to the standard enumerate (the differences are things like the fact that our function returns a generator, whereas enumerate returns a specialized object).

Listing 11.20 util.py:143, def enumerateWithEstimate

    def enumerateWithEstimate(
            iter,
            desc_str,
            start_ndx=0,
            print_ndx=4,
            backoff=None,
            iter_len=None,
    ):
        for (current_ndx, item) in enumerate(iter):
            yield (current_ndx, item)
            # ...

However, the side effects (logging, specifically) are what make the function interesting. Rather than get lost in the weeds trying to cover every detail of the implementation, if you're interested, you can consult the function docstring (https://github.com/deep-learning-with-pytorch/dlwpt-code/blob/master/util/util.py#L143) to get information about the function parameters and desk-check the implementation.

Deep learning projects can be very time intensive.
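To see how such an estimate can be produced, here is a simplified sketch in the same spirit: linear extrapolation of the elapsed time, with the reporting index doubled after each report. This is our own illustration, not the actual util.py implementation:

```python
import datetime
import time

def enumerate_with_estimate_sketch(iterable, desc_str, print_ndx=4,
                                   backoff=2, report=print):
    # Simplified sketch: yield like enumerate(), but report a projected
    # finish time at indices 4, 8, 16, ... so the log volume grows only
    # logarithmically with the length of the loop.
    iter_len = len(iterable)
    start_ts = time.monotonic()
    for current_ndx, item in enumerate(iterable):
        yield current_ndx, item
        if current_ndx == print_ndx:
            elapsed = time.monotonic() - start_ts
            per_item = elapsed / (current_ndx + 1)
            remaining = per_item * (iter_len - current_ndx - 1)
            done_at = datetime.datetime.now() + datetime.timedelta(seconds=remaining)
            report("{} {}/{}, done at {:%Y-%m-%d %H:%M:%S}".format(
                desc_str, current_ndx, iter_len, done_at))
            print_ndx *= backoff

messages = []
list(enumerate_with_estimate_sketch(list(range(234)), "sleeping",
                                    report=messages.append))
```

With 234 items this reports at indices 4, 8, 16, 32, 64, and 128, matching the shape of the log output shown above.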
Knowing when something is expected to finish means you can use your time until then wisely, and it can also clue you in that something isn't working properly (or that an approach is unworkable) if the expected time to completion is much larger than expected.

11.8 Evaluating the model: Getting 99.7% correct means we're done, right?

Let's take a look at some (abridged) output from our training script. As a reminder, we've run this with the command line python -m p2ch11.training:

    E1 Training ----/969, starting
    ...
    E1 LunaTrainingApp
    E1 trn 2.4576 loss, 99.7% correct
    ...
    E1 val 0.0172 loss, 99.8% correct
    ...

After one epoch of training, both the training and validation sets show at least 99.7% correct results. That's an A+! Time for a round of high-fives, or at least a satisfied nod and smile. We just solved cancer! ... Right?

Well, no. Let's take a closer (less-abridged) look at that epoch 1 output:

    E1 LunaTrainingApp
    E1 trn 2.4576 loss, 99.7% correct,
    E1 trn_neg 0.1936 loss, 99.9% correct (494289 of 494743)
    E1 trn_pos 924.34 loss, 0.2% correct (3 of 1215)
    ...
    E1 val 0.0172 loss, 99.8% correct,
    E1 val_neg 0.0025 loss, 100.0% correct (494743 of 494743)
    E1 val_pos 5.9768 loss, 0.0% correct (0 of 1215)

On the validation set, we're getting non-nodules 100% correct, but the actual nodules are 100% wrong. The network is just classifying everything as not-a-nodule! The value 99.7% just means only approximately 0.3% of the samples are nodules.
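You can verify the arithmetic directly. Using the class counts printed in the epoch 1 validation output above, an "everything is negative" classifier still scores:

```python
# Class counts taken from the epoch 1 validation output above.
num_pos = 1215     # actual nodules
num_neg = 494743   # non-nodules

labels = [1] * num_pos + [0] * num_neg
preds = [0] * len(labels)  # answer "not a nodule" for every sample

accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)
pos_correct = sum(p == l for p, l in zip(preds, labels) if l == 1)
```

Here accuracy comes out around 0.9975 while pos_correct is zero: the headline percentage says nothing about performance on the rare positive class.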
After 10 epochs, the situation is only marginally better:

    E10 LunaTrainingApp
    E10 trn 0.0024 loss, 99.8% correct
    E10 trn_neg 0.0000 loss, 100.0% correct
    E10 trn_pos 0.9915 loss, 0.0% correct
    E10 val 0.0025 loss, 99.7% correct
    E10 val_neg 0.0000 loss, 100.0% correct
    E10 val_pos 0.9929 loss, 0.0% correct

The classification output remains the same: none of the nodule (aka positive) samples are correctly identified. It's interesting that we're starting to see some decrease in the val_pos loss, however, while not seeing a corresponding increase in the val_neg loss. This implies that the network is learning something. Unfortunately, it's learning very, very slowly.

Even worse, this particular failure mode is the most dangerous in the real world! We want to avoid the situation where we classify a tumor as an innocuous structure, because that would not facilitate a patient getting the evaluation and eventual treatment they might need. It's important to understand the consequences of misclassification for all your projects, as that can have a large impact on how you design, train, and evaluate your model. We'll discuss this more in the next chapter.

Before we get to that, however, we need to upgrade our tooling to make the results easier to understand. We're sure you love to squint at columns of numbers as much as anyone, but pictures are worth a thousand words. Let's graph some of these metrics.

11.9 Graphing training metrics with TensorBoard

We're going to use a tool called TensorBoard as a quick and easy way to get our training metrics out of our training loop and into some pretty graphs. This will allow us to follow the trends of those metrics, rather than only look at the instantaneous values per epoch. It gets much, much easier to know whether a value is an outlier or just the latest in a trend when you're looking at a visual representation.
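One reason a visual tool helps with trends is smoothing. TensorBoard's smoothing slider behaves roughly like an exponential moving average; the exact formula below is our assumption for illustration, not TensorBoard's documented internals:

```python
def smooth(values, weight=0.6):
    # Each output point blends the previous smoothed value with the new
    # raw value; a higher weight gives a smoother (but more lagged) line.
    smoothed = []
    last = values[0]
    for v in values:
        last = last * weight + (1.0 - weight) * v
        smoothed.append(last)
    return smoothed

noisy = [0.0, 10.0, 10.0, 10.0]
trend = smooth(noisy, weight=0.5)
```

A single outlier barely moves the smoothed line, while a sustained change pulls it steadily toward the new level, which is exactly what makes trends easier to spot.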
"Hey, wait," you might be thinking, "isn't TensorBoard part of the TensorFlow project? What's it doing here in my PyTorch book?" Well, yes, it is part of another deep learning framework, but our philosophy is "use what works." There's no reason to restrict ourselves by not using a tool just because it's bundled with another project we're not using. Both the PyTorch and TensorBoard devs agree, because they collaborated to add official support for TensorBoard into PyTorch. TensorBoard is great, and it's got some easy-to-use PyTorch APIs that let us hook data from just about anywhere into it for quick and easy display. If you stick with deep learning, you'll probably be seeing (and using) a lot of TensorBoard.

In fact, if you've been running the chapter examples, you should already have some data on disk ready and waiting to be displayed. Let's see how to run TensorBoard, and look at what it can show us.

11.9.1 Running TensorBoard

By default, our training script will write metrics data to the runs/ subdirectory. If you list the directory content, you might see something like this during your Bash shell session:

    $ ls -lA runs/p2ch11/
    total 24
    drwxrwxr-x 2 elis elis 4096 Sep 15 13:22 2020-01-01_12.55.27-trn-dlwpt/
    drwxrwxr-x 2 elis elis 4096 Sep 15 13:22 2020-01-01_12.55.27-val-dlwpt/
    drwxrwxr-x 2 elis elis 4096 Sep 15 15:14 2020-01-01_13.31.23-trn-dwlpt/
    drwxrwxr-x 2 elis elis 4096 Sep 15 15:14 2020-01-01_13.31.23-val-dwlpt/

The first pair of directories is the single-epoch run from earlier; the second pair is the more recent 10-epoch training run.

To get the tensorboard program, install the tensorflow (https://pypi.org/project/tensorflow) Python package. Since we're not actually going to use TensorFlow proper, it's fine if you install the default CPU-only package. If you have another version of TensorBoard installed already, using that is fine too.
Either make sure the appropriate directory is on your path, or invoke it with ../path/to/tensorboard --logdir runs/. It doesn't really matter where you invoke it from, as long as you use the --logdir argument to point it at where your data is stored. It's a good idea to segregate your data into separate folders, as TensorBoard can get a bit unwieldy once you get over 10 or 20 experiments. You'll have to decide the best way to do that for each project as you go. Don't be afraid to move data around after the fact if you need to.

Let's start TensorBoard now:

    $ tensorboard --logdir runs/
    2020-01-01 12:13:16.163044: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    TensorBoard 1.14.0 at http://localhost:6006/ (Press CTRL+C to quit)

(These startup messages might be different or not present for you; that's fine.)

Once that's done, you should be able to point your browser at http://localhost:6006 and see the main dashboard. (If you're running training on a different computer from your browser, you'll need to replace localhost with the appropriate hostname or IP address.) Figure 11.10 shows us what that looks like.

Figure 11.10 The main TensorBoard UI, showing a paired set of training and validation runs

Along the top of the browser window, you should see the orange header. The right side of the header has the typical widgets for settings, a link to the GitHub repository, and the like. We can ignore those for now. The left side of the header has items for the data types we've provided. You should have at least the following:

- Scalars (the default tab)
- Histograms
- Precision-Recall Curves (shown as PR Curves)

You might see Distributions as well as the second UI tab (to the right of Scalars in figure 11.10). We won't use or discuss those here.
Make sure you've selected Scalars by clicking it. On the left is a set of controls for display options, as well as a list of runs that are present. The smoothing option can be useful if you have particularly noisy data; it will calm things down so that you can pick out the overall trend. The original non-smoothed data will still be visible in the background as a faded line in the same color. Figure 11.11 shows this, although it might be difficult to discern when printed in black and white.

Figure 11.11 The TensorBoard sidebar with Smoothing set to 0.6 and two runs selected for display (raw data plotted behind smoothed trend lines)

Depending on how many times you've run the training script, you might have multiple runs to select from. With too many runs being rendered, the graphs can get overly noisy, so don't hesitate to deselect runs that aren't of interest at the moment. If you want to permanently remove a run, the data can be deleted from disk while TensorBoard is running. You can do this to get rid of experiments that crashed, had bugs, didn't converge, or are so old they're no longer interesting. The number of runs can grow pretty quickly, so it can be helpful to prune it often and to rename runs or move runs that are particularly interesting to a more permanent directory so they don't get deleted by accident. To remove both the train and validation runs, execute the following (after changing the chapter, date, and time to match the run you want to remove):

    $ rm -rf runs/p2ch11/2020-01-01_12.02.15_*

Keep in mind that removing runs will cause the runs that are later in the list to move up, which will result in them being assigned new colors.

OK, let's get to the point of TensorBoard: the pretty graphs! The main part of the screen should be filled with data from gathering training and validation metrics, as shown in figure 11.12.
That's much easier to parse and absorb than E1 trn_pos 924.34 loss, 0.2% correct (3 of 1215)!

Figure 11.12 The main TensorBoard data display area showing us that our results on actual nodules are downright awful

Although we're going to save discussion of what these graphs are telling us for section 11.10, now would be a good time to make sure it's clear what these numbers correspond to from our training program. Take a moment to cross-reference the numbers you get by mousing over the lines with the numbers spit out by training.py during the same training run. You should see a direct correspondence between the Value column of the tooltip and the values printed during training. Once you're comfortable and confident that you understand exactly what TensorBoard is showing you, let's move on and discuss how to get these numbers to appear in the first place.

11.9.2 Adding TensorBoard support to the metrics logging function

We are going to use the torch.utils.tensorboard module to write data in a format that TensorBoard will consume. This will allow us to write metrics for this and any other project quickly and easily. TensorBoard supports a mix of NumPy arrays and PyTorch tensors, but since we don't have any reason to put our data into NumPy arrays, we'll use PyTorch tensors exclusively.

The first thing we need to do is create our SummaryWriter objects (which we imported from torch.utils.tensorboard). The only parameter we're going to pass in is log_dir, which we will initialize to something like runs/p2ch11/2020-01-01_12.55.27-trn-dlwpt. We can add a comment argument to our training script to change dlwpt to something more informative; use python -m p2ch11.training --help for more information.

We create two writers, one each for the training and validation runs. Those writers will be reused for every epoch.
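The directory names themselves are just string concatenation; a sketch of how a name like runs/p2ch11/2020-01-01_12.55.27-trn-dlwpt could be assembled (illustrative only; the variable names mirror the text, but this is not the book's exact code):

```python
import datetime
import os

# Timestamp formatted to match the run names seen in the runs/ listing.
time_str = datetime.datetime.now().strftime('%Y-%m-%d_%H.%M.%S')
log_dir = os.path.join('runs', 'p2ch11', time_str)

# One directory per mode; the trailing piece would come from --comment.
trn_dir = log_dir + '-trn_cls-' + 'dlwpt'
val_dir = log_dir + '-val_cls-' + 'dlwpt'
```

Because the timestamp is shared, the training and validation directories for one invocation sort next to each other in the runs/ listing.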
When the SummaryWriter class gets initialized, it also creates the log_dir directories as a side effect. These directories show up in TensorBoard and can clutter the UI with empty runs if the training script crashes before any data gets written, which can be common when you're experimenting with something. To avoid writing too many empty junk runs, we wait to instantiate the SummaryWriter objects until we're ready to write data for the first time. This function is called from logMetrics().

Listing 11.21 training.py:127, .initTensorboardWriters

    def initTensorboardWriters(self):
        if self.trn_writer is None:
            log_dir = os.path.join('runs', self.cli_args.tb_prefix, self.time_str)

            self.trn_writer = SummaryWriter(
                log_dir=log_dir + '-trn_cls-' + self.cli_args.comment)
            self.val_writer = SummaryWriter(
                log_dir=log_dir + '-val_cls-' + self.cli_args.comment)

If you recall, the first epoch is kind of a mess, with the early output in the training loop being essentially random. When we save the metrics from that first batch, those random results end up skewing things a bit. Recall from figure 11.11 that TensorBoard has smoothing to remove noise from the trend lines, which helps somewhat. Another approach could be to skip metrics entirely for the first epoch's training data, although our model trains quickly enough that it's still useful to see the first epoch's results. Feel free to change this behavior as you see fit; the rest of part 2 will continue with this pattern of including the first, noisy training epoch.

TIP If you end up doing a lot of experiments that result in exceptions or killing the training script relatively quickly, you might be left with a number of junk runs cluttering up your runs/ directory. Don't be afraid to clean those out!

WRITING SCALARS TO TENSORBOARD

Writing scalars is straightforward.
We can take the metrics_dict we've already constructed and pass each key/value pair to the writer.add_scalar method. The torch.utils.tensorboard.SummaryWriter class has the add_scalar method (http://mng.bz/RAqj) with the following signature.

Listing 11.22 PyTorch torch/utils/tensorboard/writer.py:267

    def add_scalar(self, tag, scalar_value, global_step=None, walltime=None):
        # ...

The tag parameter tells TensorBoard which graph we're adding values to, and the scalar_value parameter is our data point's Y-axis value. The global_step parameter acts as the X-axis value.

Recall that we updated the totalTrainingSamples_count variable inside the doTraining function. We'll use totalTrainingSamples_count as the X-axis of our TensorBoard plots by passing it in as the global_step parameter. Here's what that looks like in our code.

Listing 11.23 training.py:323, LunaTrainingApp.logMetrics

    for key, value in metrics_dict.items():
        writer.add_scalar(key, value, self.totalTrainingSamples_count)

Note that the slashes in our key names (such as 'loss/all') result in TensorBoard grouping the charts by the substring before the '/'.

The documentation suggests that we should be passing in the epoch number as the global_step parameter, but that results in some complications. By using the number of training samples presented to the network, we can do things like change the number of samples per epoch and still be able to compare those future graphs to the ones we're creating now. Saying that a model trains in half the number of epochs is meaningless if each epoch takes four times as long! Keep in mind that this might not be standard practice, however; expect to see a variety of values used for the global step.

11.10 Why isn't the model learning to detect nodules?

Our model is clearly learning something: the loss trend lines are consistent as epochs increase, and the results are repeatable.
There is a disconnect, however, between what the model is learning and what we want it to learn. What's going on? Let's use a quick metaphor to illustrate the problem.

Imagine that a professor gives students a final exam consisting of 100 True/False questions. The students have access to previous versions of this professor's tests going back 30 years, and every time there are only one or two questions with a True answer. The other 98 or 99 are False, every time.

Assuming that the grades aren't on a curve and instead have a typical scale of 90% correct or better being an A, and so on, it is trivial to get an A+: just mark every question as False! Let's imagine that this year, there is only one True answer. A student like the one on the left in figure 11.13 who mindlessly marked every answer as False would get a 99% on the final but wouldn't really demonstrate that they had learned anything (beyond how to cram from old tests, of course). That's basically what our model is doing right now.

Contrast that with a student like the one on the right who also got 99% of the questions correct, but did so by answering two questions with True. Intuition tells us that the student on the right in figure 11.13 probably has a much better grasp of the material than the all-False student. Finding the one True question while only getting one answer wrong is pretty difficult! Unfortunately, neither our students' grades nor our model's grading scheme reflect this gut feeling.

We have a similar situation, where 99.7% of the answers to "Is this candidate a nodule?" are "Nope." Our model is taking the easy way out and answering False on every question.

Still, if we look back at our model's numbers more closely, the loss on the training and validation sets is decreasing! The fact that we're getting any traction at all on the cancer-detection problem should give us hope. It will be the work of the next chapter to realize this potential.
We'll start chapter 12 by introducing some new, relevant terminology, and then we'll come up with a better grading scheme that doesn't lend itself to being gamed quite as easily as what we've done so far.

Figure 11.13 A professor giving two students the same grade, despite different levels of knowledge. Question 9 is the only question with an answer of True.

11.11 Conclusion

We've come a long way this chapter: we now have a model and a training loop, and are able to consume the data we produced in the last chapter. Our metrics are being logged to the console as well as graphed visually.

While our results aren't usable yet, we're actually closer than it might seem. In chapter 12, we will improve the metrics we're using to track our progress, and use them to inform the changes we need to make to get our model producing reasonable results.

11.12 Exercises

1 Implement a program that iterates through a LunaDataset instance by wrapping it in a DataLoader instance, while timing how long it takes to do so. Compare these times to the times from the exercises in chapter 10. Be aware of the state of the cache when running the script.
  a What impact does setting num_workers=... to 0, 1, and 2 have?
  b What are the highest values your machine will support for a given combination of batch_size=... and num_workers=... without running out of memory?
2 Reverse the sort order of noduleInfo_list. How does that change the behavior of the model after one epoch of training?
3 Change logMetrics to alter the naming scheme of the runs and keys that are used in TensorBoard.
  a Experiment with different forward-slash placement for keys passed in to writer.add_scalar.
  b Have both training and validation runs use the same writer, and add the trn or val string to the name of the key.
  c Customize the naming of the log directory and keys to suit your taste.

11.13 Summary

- Data loaders can be used to load data from arbitrary datasets in multiple processes. This allows otherwise-idle CPU resources to be devoted to preparing data to feed to the GPU.
- Data loaders load multiple samples from a dataset and collate them into a batch. PyTorch models expect to process batches of data, not individual samples.
- Data loaders can be used to manipulate arbitrary datasets by changing the relative frequency of individual samples. This allows for "after-market" tweaks to a dataset, though it might make more sense to change the dataset implementation directly.
- We will use PyTorch's torch.optim.SGD (stochastic gradient descent) optimizer with a learning rate of 0.001 and a momentum of 0.99 for the majority of part 2. These values are also reasonable defaults for many deep learning projects.
- Our initial model for classification will be very similar to the model we used in chapter 8. This lets us get started with a model that we have reason to believe will be effective. We can revisit the model design if we think it's the thing preventing our project from performing better.
- The choice of metrics that we monitor during training is important. It is easy to accidentally pick metrics that are misleading about how the model is performing. Using the overall percentage of samples classified correctly is not useful for our data. Chapter 12 will detail how to evaluate and choose better metrics.
- TensorBoard can be used to display a wide range of metrics visually.
This makes it much easier to consume certain forms of information (particularly trend data) as it changes per epoch of training.

12 Improving training with metrics and augmentation

This chapter covers
- Defining and computing precision, recall, and true/false positives/negatives
- Using the F1 score versus other quality metrics
- Balancing and augmenting data to reduce overfitting
- Using TensorBoard to graph quality metrics

The close of the last chapter left us in a predicament. While we were able to get the mechanics of our deep learning project in place, none of the results were actually useful; the network simply classified everything as non-nodule! To make matters worse, the results seemed great on the surface, since we were looking at the overall percent of the training and validation sets that were classified correctly. With our data heavily skewed toward negative samples, blindly calling everything negative is a quick and easy way for our model to score well. Too bad doing so makes the model basically useless!

That means we're still focused on the same part of figure 12.1 as we were in chapter 11. But now we're working on getting our classification model working well instead of at all. This chapter is all about how to measure, quantify, express, and then improve on how well our model is doing its job.

12.1 High-level plan for improvement

While a bit abstract, figure 12.2 shows us how we are going to approach that broad set of topics. Let's walk through this somewhat abstract map of the chapter in detail. We will be dealing with the issues we're facing, like excessive focus on a single, narrow metric and the resulting behavior being useless in the general sense.
Figure 12.1 Our end-to-end lung cancer detection project, with a focus on this chapter's topic: step 4, classification

In order to make some of this chapter's concepts a bit more concrete, we'll first employ a metaphor that puts our troubles in more tangible terms: in figure 12.2, (1) Guard Dogs and (2) Birds and Burglars. After that, we will develop a graphical language to represent some of the core concepts needed to formally discuss the issues with the implementation from the last chapter: (3) Ratios: Recall and Precision. Once we have those concepts solidified, we'll touch on some math using those concepts that will encapsulate a more robust way of grading our model's performance and condensing it into a single number: (4) New Metric: F1 Score. We will implement the formula for those new metrics and look at how the resulting values change epoch by epoch during training.
Finally, we'll make some much-needed changes to our LunaDataset implementation with an aim at improving our training results: (5) Balancing and (6) Augmentation. Then we will see if those experimental changes have the expected impact on our performance metrics.

By the time we're through with this chapter, our trained model will be performing much better: (7) Workin' Great! While it won't be ready to drop into clinical use just yet, it will be capable of producing results that are clearly better than random. This will mean we have a workable implementation of step 4, nodule candidate classification; and once we're finished, we can begin to think about how to incorporate steps 2 (segmentation) and 3 (grouping) into the project.

Figure 12.2 The metaphors we'll use to modify the metrics measuring our model to make it magnificent

12.2 Good dogs vs. bad guys: False positives and false negatives

Instead of models and tumors, we're going to consider the two guard dogs in figure 12.3, both fresh out of obedience school. They both want to alert us to burglars, a rare but serious situation that requires prompt attention. Unfortunately, while both dogs are good dogs, neither is a good guard dog. Our terrier (Roxie) barks at just about everything, while our old hound dog (Preston) barks almost exclusively at burglars, but only if he happens to be awake when they arrive.

Roxie will alert us to a burglar just about every time. She will also alert us to fire engines, thunderstorms, helicopters, birds, the mail carrier, squirrels, passersby, and so on.
If we follow up on every bark, we'll almost never get robbed (only the sneakiest of sneak-thieves can slip past). Perfect! . . . Except that being that diligent means we aren't really saving any work by having a guard dog. Instead, we'll be up every couple of hours, flashlight in hand, due to Roxie having smelled a cat, or heard an owl, or seen a late bus wander by. Roxie has a problematic number of false positives.

A false positive is an event that is classified as of interest or as a member of the desired class (positive as in "Yes, that's the type of thing I'm interested in knowing about") but that in truth is not really of interest. For the nodule-detection problem, it's when an actually uninteresting candidate is flagged as a nodule and, hence, in need of a radiologist's attention. For Roxie, these would be fire engines, thunderstorms, and so on. We will use an image of a cat as the canonical false positive in the next section and the figures that follow throughout the rest of the chapter.

Contrast false positives with true positives: items of interest that are classified correctly. These will be represented in the figures by a human burglar.

Meanwhile, if Preston barks, call the police, since that means someone has almost certainly broken in, the house is on fire, or Godzilla is attacking. Preston is a deep sleeper, however, and the sound of an in-progress home invasion isn't likely to rouse him, so we'll still get robbed just about every time someone tries. Again, while it's better than nothing, we're not really ending up with the peace of mind that motivated us to get a dog in the first place. Preston has a problematic number of false negatives.
Figure 12.3 The set of topics for this chapter, with a focus on the framing metaphor

A false negative is an event that is classified as not of interest or not a member of the desired class (negative as in "No, that's not the type of thing I'm interested in knowing about") but that in truth is actually of interest. For the nodule-detection problem, it's when a nodule (that is, a potential cancer) goes undetected. For Preston, these would be the robberies that he sleeps through. We'll get a bit creative here and use a picture of a rodent burglar for false negatives. They're sneaky!

Contrast false negatives with true negatives: uninteresting items that are correctly identified as such. We'll go with a picture of a bird for these.

Just to complete the metaphor, chapter 11's model is basically a cat that refuses to meow at anything that isn't a can of tuna (while stoically ignoring Roxie). Our focus at the end of the last chapter was on the percent correct for the overall training and validation sets. Clearly, that wasn't a great way to grade ourselves, and as we can see from each of our dogs' myopic focus on a single metric—like the number of true positives or true negatives—we need a metric with a broader focus to capture our overall performance.

12.3 Graphing the positives and negatives

Let's start developing the visual language we'll use to describe true/false positives/negatives. Please bear with us if our explanation gets repetitive; we want to make sure you develop a solid mental model for the ratios we're going to discuss. Consider figure 12.4, which shows events that might be of interest to one of our guard dogs.
Figure 12.4 Cats, birds, rodents, and robbers make up our four classification quadrants. They are separated by a human label and the dog classification threshold.

We'll use two thresholds in figure 12.4. The first is the human-decided dividing line that separates burglars from harmless animals. In concrete terms, this is the label that is given for each training or validation sample. The second is the dog-determined classification threshold that determines whether the dog will bark at something. For a deep learning model, this is the predicted value that the model produces when considering a sample.

The combination of these two thresholds divides our events into quadrants: true/false positives/negatives. We will shade the events of concern with a darker background (what with those bad guys sneaking around in the dark all the time).

Of course, reality is far more complicated. There is no Platonic ideal of a burglar, and no single point relative to the classification threshold at which all burglars will be located. Instead, figure 12.5 shows us that some burglars will be particularly sneaky, and some birds will be particularly annoying. We will also go ahead and enclose our instances in a graph. Our X-axis will remain the bark-worthiness of each event, as determined by one of our guard dogs. We're going to have the Y-axis represent some vague set of qualities that we as humans are able to perceive, but our dogs cannot.

Since our model produces a binary classification, we can think of the prediction threshold as comparing a single-numerical-value output to our classification threshold value.
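Translated out of dog terms, the four quadrants are just four counts over (label, score) pairs. Here's a minimal, self-contained sketch; the function name and the 0.5 default threshold are illustrative, not the book's logMetrics code:

```python
def confusion_counts(labels, scores, threshold=0.5):
    """Split samples into the four quadrants of figure 12.4.

    labels: ground truth (1 = actual burglar/nodule)
    scores: per-sample model outputs in [0, 1] (bark-worthiness)
    threshold: the vertical classification line
    """
    tp = fp = fn = tn = 0
    for label, score in zip(labels, scores):
        pred_pos = score >= threshold
        if label and pred_pos:
            tp += 1          # burglar barked at
        elif not label and pred_pos:
            fp += 1          # cat barked at
        elif label:
            fn += 1          # burglar slept through
        else:
            tn += 1          # bird correctly ignored
    return tp, fp, fn, tn
```

Moving the threshold shifts samples between the barked-at columns (tp, fp) and the ignored columns (fn, tn), which is exactly the trade-off the rest of this section explores.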
This is why we require that the classification threshold line be perfectly vertical in figure 12.5.

Figure 12.5 Each type of event will have many possible instances that our guard dogs will need to evaluate.

Each possible burglar is different, so our guard dogs will need to evaluate many different situations, and that means more opportunities to make mistakes. We can see the clear diagonal line that separates the birds from the burglars, but Preston and Roxie can only perceive the X-axis here: they have a muddled, overlapped set of events in the middle of our graph. They must pick a vertical bark-worthiness threshold, which means it's impossible for either one of them to do so perfectly. Sometimes the person hauling your appliances to their van is the repair person you hired to fix your washing machine, and sometimes burglars show up in a van that says "Washing Machine Repair" on the side. Expecting a dog to pick up on those nuances is bound to fail.

The actual input data we're going to use has high dimensionality—we need to consider a ton of CT voxel values, along with more abstract things like candidate size, overall location in the lungs, and so on. The job of our model is to map each of these events and respective properties into this rectangle in such a way that we can separate those positive and negative events cleanly using a single vertical line (our classification threshold). This is done by the nn.Linear layers at the end of our model. The position of the vertical line corresponds exactly to the classificationThreshold_float we saw in section 11.6.1. There, we chose the hardcoded value 0.5 as our threshold.
Note that in reality, the data presented is not two-dimensional; it goes from very-high-dimensional after the second-to-last layer, to one-dimensional (here, our X-axis) at the output—just a single scalar per sample (which is then bisected by the classification threshold). Here, we use the second dimension (the Y-axis) to represent per-sample features that our model cannot see or use: things like age or gender of the patient, location of the nodule candidate in the lung, or even local aspects of the candidate that the model hasn't utilized. It also gives us a convenient way to represent confusion between non-nodule and nodule samples.

The quadrant areas in figure 12.5 and the count of samples contained in each will be the values we use to discuss model performance, since we can use the ratios between these values to construct increasingly complex metrics that we can use to objectively measure how well we are doing. As they say, "the proof is in the proportions."¹ Next, we'll use ratios between these event subsets to start defining better metrics.

12.3.1 Recall is Roxie's strength

Recall is basically "Make sure you never miss any interesting events!" Formally, recall is the ratio of the true positives to the union of true positives and false negatives. We can see this depicted in figure 12.6.

NOTE In some contexts, recall is referred to as sensitivity.

To improve recall, minimize false negatives. In guard dog terms, that means if you're unsure, bark at it, just in case. Don't let any rodent thieves sneak by on your watch! Roxie accomplishes having an incredibly high recall by pushing her classification threshold all the way to the left, such that it encompasses nearly all of the positive events in figure 12.7. Note how doing so means her recall value is near 1.0, which means 99% of robbers are barked at. Since that's how Roxie defines success, in her mind, she's doing a great job. Never mind the huge expanse of false positives!
¹ No one actually says this.

Figure 12.6 Recall is the ratio of the true positives to the union of true positives and false negatives. High recall minimizes false negatives.

Figure 12.7 Roxie's choice of threshold prioritizes minimizing false negatives. Every last rat is barked at . . . and cats, and most birds.

12.3.2 Precision is Preston's forte

Precision is basically "Never bark unless you're sure." To improve precision, minimize false positives. Preston won't bark at something unless he's certain it's a burglar. More formally, precision is the ratio of the true positives to the union of true positives and false positives, as shown in figure 12.8.

Preston accomplishes having an incredibly high precision by pushing his classification threshold all the way to the right, such that it excludes as many uninteresting, negative events as he can manage (see figure 12.9). This is the opposite of Roxie's approach and means Preston has a precision of nearly 1.0: 99% of the things he barks at are robbers. This also matches his definition of being a good guard dog, even though a large number of events pass undetected.

While neither precision nor recall can be the single metric used to grade our model, they are both useful numbers to have on hand during training. Let's calculate and display these as part of our training program, and then we'll discuss other metrics we can employ.

Figure 12.8 Precision is the ratio of the true positives to the union of true positives and false positives. High precision minimizes false positives.
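Both ratios, and the two dogs' opposite strategies, can be checked with a few lines of toy code. The scores and labels below are invented for illustration; only the two formulas come from the definitions above:

```python
def recall(tp, fn):
    # "Never miss an event": true positives over all actual positives
    return tp / (tp + fn)

def precision(tp, fp):
    # "Never bark unless sure": true positives over all barks
    return tp / (tp + fp)

def counts_at(threshold, scores, labels):
    """Count true positives, false positives, and false negatives
    for a given bark-worthiness threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    return tp, fp, fn

scores = [0.05, 0.10, 0.30, 0.60, 0.80, 0.95]  # bark-worthiness per event
labels = [0,    0,    1,    0,    1,    1]     # 1 = actual burglar

# Roxie: threshold pushed far left -> perfect recall, mediocre precision.
tp, fp, fn = counts_at(0.01, scores, labels)
assert recall(tp, fn) == 1.0 and precision(tp, fp) == 0.5

# Preston: threshold pushed far right -> perfect precision, poor recall.
tp, fp, fn = counts_at(0.90, scores, labels)
assert precision(tp, fp) == 1.0 and recall(tp, fn) == 1 / 3
```

The same six events score perfectly on one metric or the other depending solely on where the threshold sits, which is exactly why neither metric alone can grade the model.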
12.3.3 Implementing precision and recall in logMetrics

Both precision and recall are valuable metrics to be able to track during training, since they provide important insight into how the model is behaving. If either of them drops to zero (as we saw in chapter 11!), it's likely that our model has started to behave in a degenerate manner. We can use the exact details of the behavior to guide where to investigate and experiment with getting training back on track. We'd like to update the logMetrics function to add precision and recall to the output we see for each epoch, to complement the loss and correctness metrics we already have.

We've been defining precision and recall in terms of "true positives" and the like thus far, so we will continue to do so in the code. It turns out that we are already computing some of the values we need, though we had named them differently.

Listing 12.1 training.py:315, LunaTrainingApp.logMetrics

neg_count = int(negLabel_mask.sum())
pos_count = int(posLabel_mask.sum())

trueNeg_count = neg_correct = int((negLabel_mask & negPred_mask).sum())
truePos_count = pos_correct = int((posLabel_mask & posPred_mask).sum())

falsePos_count = neg_count - neg_correct
falseNeg_count = pos_count - pos_correct

Figure 12.9 Preston's choice of threshold prioritizes minimizing false positives. Cats get left alone; only burglars are barked at!

Here, we can see that neg_correct is the same thing as trueNeg_count! That actually makes sense, since non-nodule is our "negative" value (as in "a negative diagnosis"), and if the classifier gets the prediction correct, then that's a true negative. Similarly, correctly labeled nodule samples are true positives.
We do need to add the variables for our false positive and false negative values. That's straightforward, since we can take the total number of benign labels and subtract the count of the correct ones. What's left is the count of non-nodule samples misclassified as positive. Hence, they are false positives. Again, the false negative calculation is of the same form, but uses nodule counts.

With those values, we can compute precision and recall and store them in metrics_dict.

Listing 12.2 training.py:333, LunaTrainingApp.logMetrics

precision = metrics_dict['pr/precision'] = \
    truePos_count / np.float32(truePos_count + falsePos_count)
recall = metrics_dict['pr/recall'] = \
    truePos_count / np.float32(truePos_count + falseNeg_count)

Note the double assignment: while having separate precision and recall variables isn't strictly necessary, they improve the readability of the next section. We also extend the logging statement in logMetrics to include the new values, but we skip the implementation for now (we'll revisit logging later in the chapter).

12.3.4 Our ultimate performance metric: The F1 score

While useful, neither precision nor recall entirely captures what we need in order to be able to evaluate a model. As we've seen with Roxie and Preston, it's possible to game either one individually by manipulating our classification threshold, resulting in a model that scores well on one or the other but does so at the expense of any real-world utility. We need something that combines both of those values in a way that prevents such gamesmanship. As we can see in figure 12.10, it's time to introduce our ultimate metric.

The generally accepted way of combining precision and recall is by using the F1 score (https://en.wikipedia.org/wiki/F1_score). As with other metrics, the F1 score ranges between 0 (a classifier with no real-world predictive power) and 1 (a classifier that has perfect predictions). We will update logMetrics to include this as well.

Listing 12.3 training.py:338, LunaTrainingApp.logMetrics

metrics_dict['pr/f1_score'] = \
    2 * (precision * recall) / (precision + recall)

At first glance, this might seem more complicated than we need, and it might not be immediately obvious how the F1 score behaves when trading off precision for recall or vice versa. This formula has a lot of nice properties, however, and it compares favorably to several other, simpler alternatives that we might consider.

One immediate possibility for a scoring function is to average the values for precision and recall together. Unfortunately, this gives both avg(p=1.0, r=0.0) and avg(p=0.5, r=0.5) the same score of 0.5, and as we discussed earlier, a classifier with either precision or recall of zero is usually worthless. Giving something useless the same nonzero score as something useful disqualifies averaging as a meaningful metric immediately. Still, let's visually compare averaging and F1 in figure 12.11.

A few things stand out. First, we can see a lack of a curve or elbow in the contour lines for averaging. That's what lets our precision or recall skew to one side or the other! There will never be a situation where it doesn't make sense to maximize the score by having 100% recall (the Roxie approach) and then eliminate whichever false positives are easy to eliminate. That puts a floor on the addition score of 0.5 right out of the gate! Having a quality metric that is trivial to score at least 50% on doesn't feel right.

NOTE What we are actually doing here is taking the arithmetic mean (https://en.wikipedia.org/wiki/Arithmetic_mean) of the precision and recall, both of which are rates rather than countable scalar values. Taking the arithmetic mean of rates doesn't typically give meaningful results.
The F1 score is another name for the harmonic mean (https://en.wikipedia.org/wiki/Harmonic_mean) of the two rates, which is a more appropriate way of combining those kinds of values.

Figure 12.10 The set of topics for this chapter, with a focus on the final F1 score metric

Contrast that with the F1 score: when recall is high but precision is low, trading off a lot of recall for even a little precision will move the score closer to that balanced sweet spot. There's a nice, deep elbow that is easy to slide into. That encouragement to have balanced precision and recall is what we want from our grading metric.

Let's say we still want a simpler metric, but one that doesn't reward skew at all. In order to correct for the weakness of addition, we might take the minimum of precision and recall (figure 12.12).

Figure 12.11 Computing the final score with avg(p, r). Lighter values are closer to 1.0.

Figure 12.12 Computing the final score with min(p, r)

This is nice, because if either value is 0, the score is also 0, and the only way to get a score of 1.0 is to have both values be 1.0.
However, it still leaves something to be desired, since making a model change that increased the recall from 0.7 to 0.9 while leaving precision constant at 0.5 wouldn't improve the score at all, nor would dropping recall down to 0.6! Although this metric is certainly penalizing having an imbalance between precision and recall, it isn't capturing a lot of nuance about the two values. As we have seen, it's easy to trade one off for the other simply by moving the classification threshold. We'd like our metric to reflect those trades.

We'll have to accept at least a bit more complexity to better meet our goals. We could multiply the two values together, as in figure 12.13. This approach keeps the nice property that if either value is 0, the score is 0, and a score of 1.0 means both inputs are perfect. It also favors a balanced trade-off between precision and recall at low values, though when it gets closer to perfect results, it becomes more linear. That's not great, since we really need to push both up to have a meaningful improvement at that point.

NOTE Here we're taking the geometric mean (https://en.wikipedia.org/wiki/Geometric_mean) of two rates, which also doesn't produce meaningful results.

There's also the issue of having almost the entire quadrant from (0, 0) to (0.5, 0.5) be very close to zero. As we'll see, having a metric that's sensitive to changes in that region is important, especially in the early stages of our model design.
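These trade-offs are easy to verify numerically. Here is a quick sketch of the four candidate scoring functions discussed in this section (averaging, minimum, multiplication, and F1), with illustrative precision/recall pairs:

```python
def avg_score(p, r):
    return (p + r) / 2

def min_score(p, r):
    return min(p, r)

def mult_score(p, r):
    return p * r

def f1_score(p, r):
    # Harmonic mean of precision and recall; 0.0 when either rate is 0
    return 2 * p * r / (p + r) if (p + r) else 0.0

skewed = (0.5, 1.0)      # Roxie-style: perfect recall, mediocre precision
balanced = (0.75, 0.75)  # same arithmetic mean, but balanced

# Averaging cannot tell the two apart, and hands a degenerate
# classifier (p=1.0, r=0.0) a free score of 0.5:
assert avg_score(*skewed) == avg_score(*balanced) == 0.75
assert avg_score(1.0, 0.0) == 0.5

# F1 rewards balance and zeroes out degenerate classifiers:
assert f1_score(*balanced) > f1_score(*skewed)
assert f1_score(1.0, 0.0) == 0.0

# min() also punishes skew, but is blind to improvements in the
# larger of the two values:
assert min_score(0.5, 0.7) == min_score(0.5, 0.9)
```

The asserts encode exactly the complaints from the text: averaging rewards skew, min ignores improvements on one side, and F1 threads the needle.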
Figure 12.13 Computing the final score with mult(p, r)

While using multiplication as our scoring function is feasible (it doesn't have any immediate disqualifications the way the previous scoring functions did), we will be using the F1 score to evaluate our classification model's performance going forward.

UPDATING THE LOGGING OUTPUT TO INCLUDE PRECISION, RECALL, AND F1 SCORE

Now that we have our new metrics, adding them to our logging output is pretty straightforward. We'll include precision, recall, and F1 in our main logging statement for each of our training and validation sets.

Listing 12.4 training.py:341, LunaTrainingApp.logMetrics

log.info(
    ("E{} {:8} {loss/all:.4f} loss, "
        + "{correct/all:-5.1f}% correct, "
        + "{pr/precision:.4f} precision, "
        + "{pr/recall:.4f} recall, "
        + "{pr/f1_score:.4f} f1 score"  # Format string updated
    ).format(
        epoch_ndx,
        mode_str,
        **metrics_dict,
    )
)

In addition, we'll include exact values for the count of correctly identified and the total number of samples for each of the negative and positive samples.

Listing 12.5 training.py:353, LunaTrainingApp.logMetrics

log.info(
    ("E{} {:8} {loss/neg:.4f} loss, "
        + "{correct/neg:-5.1f}% correct ({neg_correct:} of {neg_count:})"
    ).format(
        epoch_ndx,
        mode_str + '_neg',
        neg_correct=neg_correct,
        neg_count=neg_count,
        **metrics_dict,
    )
)

The new version of the positive logging statement looks much the same.

12.3.5 How does our model perform with our new metrics?

Now that we've implemented our shiny new metrics, let's take them for a spin; we'll discuss the results after we show the results of the Bash shell session. You might want to read ahead while your system does its number crunching; this could take perhaps half an hour, depending on your system.² Exactly how long it takes will depend on your system's CPU, GPU, and disk speeds; our system with an SSD and GTX 1080 Ti took about 20 minutes per full epoch:

² If it's taking longer than that, make sure you've run the prepcache script.

$ ../.venv/bin/python -m p2ch12.training
Starting LunaTrainingApp...
...
E1 LunaTrainingApp
.../p2ch12/training.py:274: RuntimeWarning: invalid value encountered in double_scalars
  metrics_dict['pr/f1_score'] = 2 * (precision * recall) / (precision + recall)
E1 trn      0.0025 loss,  99.8% correct, 0.0000 prc, 0.0000 rcl, nan f1
E1 trn_ben  0.0000 loss, 100.0% correct (494735 of 494743)
E1 trn_mal  1.0000 loss,   0.0% correct (0 of 1215)
.../p2ch12/training.py:269: RuntimeWarning: invalid value encountered in long_scalars
  precision = metrics_dict['pr/precision'] = truePos_count / (truePos_count + falsePos_count)
E1 val      0.0025 loss,  99.8% correct, nan prc, 0.0000 rcl, nan f1
E1 val_ben  0.0000 loss, 100.0% correct (54971 of 54971)
E1 val_mal  1.0000 loss,   0.0% correct (0 of 136)

Bummer. We've got some warnings, and given that some of the values we computed were nan, there's probably a division by zero happening somewhere. Let's see what we can figure out.

First, since none of the positive samples in the training set are getting classified as positive, that means both precision and recall are zero, which results in our F1 score calculation dividing by zero. Second, for our validation set, truePos_count and falsePos_count are both zero due to nothing being flagged as positive.
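Those warnings boil down to dividing zero by zero. Assuming NumPy's default floating-point error handling, the failure mode is easy to reproduce in isolation: the division emits a RuntimeWarning and yields nan instead of raising, which is why training kept running with unusable metrics rather than crashing:

```python
import warnings

import numpy as np

truePos_count = 0
falsePos_count = 0

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # 0 / 0.0 in NumPy produces nan plus a RuntimeWarning rather than
    # raising ZeroDivisionError as plain Python integers would.
    precision = truePos_count / np.float32(truePos_count + falsePos_count)

assert np.isnan(precision)
assert any(issubclass(w.category, RuntimeWarning) for w in caught)
```

This silent-nan behavior is convenient here, since a nan metric for one epoch is far less disruptive than aborting a long training run.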
(The exact count and line numbers of these RuntimeWarning lines might be different from run to run.)

It follows that the denominator of our precision calculation is also zero; that makes sense, as that's where we're seeing another RuntimeWarning.

A handful of negative training samples are classified as positive (494,735 of 494,743 are classified as negative, so that leaves 8 samples misclassified). While that might seem odd at first, recall that we are collecting our training results throughout the epoch, rather than using the model's end-of-epoch state as we do for the validation results. That means the first batch is literally producing random results. A few of the samples from that first batch being flagged as positive isn't surprising.

NOTE Due to both the random initialization of the network weights and the random ordering of the training samples, individual runs will likely exhibit slightly different behavior. Having exactly reproducible behavior can be desirable but is out of scope for what we're trying to do in part 2 of this book.

Well, that was somewhat painful. Switching to our new metrics resulted in going from A+ to "Zero, if you're lucky"—and if we're not lucky, the score is so bad that it's not even a number. Ouch. That said, in the long run, this is good for us. We've known that our model's performance was garbage since chapter 11. If our metrics told us anything but that, it would point to a fundamental flaw in the metrics!

12.4 What does an ideal dataset look like?

Before we start crying into our cups over the current sorry state of affairs, let's instead think about what we actually want our model to do. Figure 12.14 says that first we need to balance our data so that our model can train properly. Let's build up the logical steps needed to get us there. Recall figure 12.5 earlier, and the following discussion of classification thresholds.
Getting better results by moving the threshold has limited effectiveness—there's just too much overlap between the positive and negative classes to work with.³ Instead, we want to see an image like figure 12.15. Here, our label threshold is nearly vertical. That's what we want, because it means the label threshold and our classification threshold can line up reasonably well. Similarly, most of the samples are concentrated at either end of the diagram. Both of these things require that our data be easily separable and that our model have the capacity to perform that separation. Our model currently has enough capacity, so that's not the issue. Instead, let's take a look at our data.

Recall that our data is wildly imbalanced. There's a 400:1 ratio of negative samples to positive ones. That's crushingly imbalanced! Figure 12.16 shows what that looks like. No wonder our "actually nodule" samples are getting lost in the crowd!

³ Keep in mind that these images are just a representation of the classification space and do not represent ground truth.

Figure 12.14 The set of topics for this chapter, with a focus on balancing our positive and negative samples

Figure 12.15 A well-trained model can cleanly separate data, making it easy to pick a classification threshold with few trade-offs.
Figure 12.16 An imbalanced dataset that roughly approximates the imbalance in our LUNA classification data

Now, let's be perfectly clear: when we're done, our model will be able to handle this kind of data imbalance just fine. We could probably even train the model all the way there without changing the balancing, assuming we were willing to wait for a gajillion epochs first.⁴ But we're busy people with things to do, so rather than cook our GPU until the heat death of the universe, let's try to make our training data look more ideal by changing the class balance we are training with.

12.4.1 Making the data look less like the actual and more like the "ideal"

The best thing to do would be to have relatively more positive samples. During the initial epoch of training, when we're going from randomized chaos to something more organized, having so few training samples be positive means they get drowned out.

The method by which this happens is somewhat subtle, however. Recall that since our network weights are initially randomized, the per-sample output of the network is also randomized (but clamped to the range [0-1]).

NOTE Our loss function is nn.CrossEntropyLoss, which technically operates on the raw logits rather than the class probabilities. For our discussion, we'll ignore that distinction and assume the loss and the label-prediction deltas are the same thing.

The predictions numerically close to the correct label do not result in much change to the weights of the network, while predictions that are significantly different from the correct answer are responsible for a much greater change to the weights.
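That last claim can be made concrete from the loss gradient. As a sketch, consider binary cross-entropy on a single logit z (an assumption for simplicity here; the book's nn.CrossEntropyLoss over two classes behaves analogously). The gradient of the loss with respect to z works out to sigmoid(z) - y, so confidently wrong predictions pull on the weights far harder than nearly correct ones:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_grad(z, y):
    # d(loss)/d(logit) for binary cross-entropy with label y in {0, 1}
    return sigmoid(z) - y

# A confidently correct prediction barely nudges the weights...
nearly_right = abs(bce_grad(4.0, 1))   # model says ~0.98, label is 1

# ...while a confidently wrong one yanks them hard.
badly_wrong = abs(bce_grad(-4.0, 1))   # model says ~0.02, label is 1

assert badly_wrong > 10 * nearly_right
```

With random initial weights, roughly half of each class lands on the wrong side of 0.5, which sets up the four groups described next.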
Since the output is random when the model is initialized with random weights, we can assume that of our ~500k training samples (495,958, to be exact), we'll have the following approximate groups:

1. 250,000 negative samples will be predicted to be negative (0.0 to 0.5) and result in at most a small change to the network weights toward predicting negative.
2. 250,000 negative samples will be predicted to be positive (0.5 to 1.0) and result in a large swing toward the network weights predicting negative.
3. 500 positive samples will be predicted to be negative and result in a swing toward the network weights predicting positive.
4. 500 positive samples will be predicted to be positive and result in almost no change to the network weights.

NOTE Keep in mind that the actual predictions are real numbers between 0.0 and 1.0 inclusive, so these groups won't have strict delineations.

Here's the kicker, though: groups 1 and 4 can be any size, and they will continue to have close to zero impact on training. The only thing that matters is that groups 2 and 3 can counteract each other's pull enough to prevent the network from collapsing to a degenerate "only output one thing" state. Since group 2 is 500 times larger than group 3 and we're using a batch size of 32, roughly 500/32 = 15 batches will go by before seeing a single positive sample. That implies that 14 out of 15 training batches will be 100% negative and will only pull all model weights toward predicting negative. That lopsided pull is what produces the degenerate behavior we've been seeing.

Instead, we'd like to have just as many positive samples as negative ones. For the first part of training, then, half of both labels will be classified incorrectly, meaning that groups 2 and 3 should be roughly equal in size.

⁴ It's not clear if this is actually true, but it's plausible, and the loss was getting better . . .
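That back-of-the-envelope batch arithmetic can be sanity-checked with a short simulation. The sample counts mirror the dataset sizes quoted above; the seed and the list-based batching are purely illustrative:

```python
import random

random.seed(1234)

# The LUNA-like ~400:1 imbalance: 1,215 positives among 495,958 samples
labels = [1] * 1_215 + [0] * 494_743
random.shuffle(labels)

batch_size = 32
batches = [labels[i:i + batch_size]
           for i in range(0, len(labels), batch_size)]

# Count batches that contain no positive samples at all; each such
# batch pulls every model weight exclusively toward "predict negative."
all_negative = sum(1 for batch in batches if not any(batch))
fraction = all_negative / len(batches)
assert fraction > 0.9
```

Roughly nine out of ten shuffled batches carry zero positive samples, which matches the "14 out of 15 batches are 100% negative" estimate to within the hand-waving involved.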
We also want to make sure we present batches with a mix of negative and positive samples. Balance would result in the tug-of-war evening out, and the mixture of classes per batch will give the model a decent chance of learning to discriminate between the two classes. Since our LUNA data has only a small, fixed number of positive samples, we'll have to settle for taking the positive samples that we have and presenting them repeatedly during training.

Recall our professor from chapter 11 who had a final exam with 99 false answers and 1 true answer. The next semester, after being told "You should have a more even balance of true and false answers," the professor decided to add a midterm with 99 true answers and 1 false one. "Problem solved!" Clearly, the correct approach is to intermix true and false answers in a way that doesn't allow the students to exploit the larger structure of the tests to answer things correctly. Whereas a student would pick up on a pattern like "odd questions are true, even questions are false," the batching system used by PyTorch doesn't allow the model to "notice" or utilize that kind of pattern. Our training dataset will need to be updated to alternate between positive and negative samples, as in figure 12.17.

The unbalanced data is the proverbial needle in the haystack we mentioned at the start of chapter 9. If you had to perform this classification work by hand, you'd probably start to empathize with Preston.

Discrimination
Here, we define discrimination as "the ability to separate two classes from each other." Building and training a model that can tell "actually nodule" candidates from normal anatomical structures is the entire point of what we're doing in part 2.
Some other definitions of discrimination are more problematic. While out of scope for the discussion of our work here, there is a larger issue with models trained from real-world data.
If that real-world dataset is collected from sources that have a real-world discriminatory bias (for example, racial bias in arrest and conviction rates, or anything collected from social media), and that bias is not corrected for during dataset preparation or training, then the resulting model will continue to exhibit the same biases present in the training data. Just as in humans, racism is learned.

This means almost any model trained from internet-at-large data sources will be compromised in some fashion, unless extreme care is taken to scrub those biases from the model. Note that like our goal in part 2, this is considered an unsolved problem.

We will not be doing any balancing for validation, however. Our model needs to function well in the real world, and the real world is imbalanced (after all, that's where we got the raw data!).

How should we accomplish this balancing? Let's discuss our choices.

SAMPLERS CAN RESHAPE DATASETS
One of the optional arguments to DataLoader is sampler=... . This allows the data loader to override the iteration order native to the dataset passed in and instead shape, limit, or reemphasize the underlying data as desired. This can be incredibly useful when working with a dataset that isn't under your control. Taking a public dataset and reshaping it to meet your needs is far less work than reimplementing that dataset from scratch.

Figure 12.17 Batch after batch of imbalanced data will have nothing but negative events long before the first positive event, while balanced data can alternate every other sample.

The downside is that many of the mutations we could accomplish with samplers require that we break encapsulation of the underlying dataset. For example, let's assume we have a dataset like CIFAR-10 (www.cs.toronto.edu/~kriz/cifar.html) that consists of 10 equally weighted classes, and we want to instead have 1 class (say, "airplane") now make up 50% of all of the training images. We could decide to use WeightedRandomSampler (http://mng.bz/8plK) and weight each of the "airplane" sample indexes higher, but constructing the weights argument requires that we know in advance which indexes are airplanes.

As we discussed, the Dataset API only specifies that subclasses provide __len__ and __getitem__, but there is nothing direct we can use to ask "Which samples are airplanes?" We'd either have to load up every sample beforehand to inquire about the class of that sample, or we'd have to break encapsulation and hope the information we need is easily obtained from looking at the internal implementation of the Dataset subclass.

Since neither of those options is particularly ideal in cases where we have control over the dataset directly, the code for part 2 implements any needed data shaping inside the Dataset subclasses instead of relying on an external sampler.

IMPLEMENTING CLASS BALANCING IN THE DATASET
We are going to directly change our LunaDataset to present a balanced, one-to-one ratio of positive and negative samples for training. We will keep separate lists of negative training samples and positive training samples, and alternate returning samples from each of those two lists. This will prevent the degenerate behavior of the model scoring well by simply answering "false" to every sample presented. In addition, the positive and negative classes will be intermixed so that the weight updates are forced to discriminate between the classes.
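For contrast, here is roughly what the sampler-based route we set aside looks like. This is a toy sketch, not the book's code: the 90/10 labels tensor is hypothetical, and having it available up front is exactly the encapsulation break described above.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Hypothetical toy dataset: 90 samples of class 0 and 10 of class 1.
# With a real Dataset subclass we'd have to load every sample (or peek at
# its internals) to build this labels tensor in advance.
labels = torch.tensor([0] * 90 + [1] * 10)
data = torch.arange(100, dtype=torch.float32).unsqueeze(1)
ds = TensorDataset(data, labels)

# One weight per *sample*, inversely proportional to its class frequency,
# so each class is drawn with roughly equal probability overall.
class_counts = torch.bincount(labels)
sample_weights = 1.0 / class_counts[labels].to(torch.float32)

sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(ds),
                                replacement=True)
loader = DataLoader(ds, batch_size=20, sampler=sampler)
```

Iterating over loader now yields batches in which the minority class appears about half the time, even though it makes up only 10% of the underlying dataset.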
Let's add a ratio_int to LunaDataset that will control the label for the Nth sample, as well as keep track of our samples separated by label.

Listing 12.6 dsets.py:217, class LunaDataset

    class LunaDataset(Dataset):
        def __init__(self,
                     val_stride=0,
                     isValSet_bool=None,
                     ratio_int=0,
                ):
            self.ratio_int = ratio_int
            # ... line 228
            self.negative_list = [
                nt for nt in self.candidateInfo_list
                    if not nt.isNodule_bool
            ]
            self.pos_list = [
                nt for nt in self.candidateInfo_list
                    if nt.isNodule_bool
            ]
            # ... line 265

        def shuffleSamples(self):        # We will call this at the top of each epoch to
            if self.ratio_int:           # randomize the order of samples being presented.
                random.shuffle(self.negative_list)
                random.shuffle(self.pos_list)

With this, we now have dedicated lists for each label. Using these lists, it becomes much easier to return the label we want for a given index into the dataset. In order to make sure we're getting the indexing right, we should sketch out the ordering we want. Let's assume a ratio_int of 2, meaning a 2:1 ratio of negative to positive samples. That would mean every third index should be positive:

    DS Index    0   1   2   3   4   5   6   7   8   9  ...
    Label       +   -   -   +   -   -   +   -   -   +
    Pos Index   0           1           2           3
    Neg Index       0   1       2   3       4   5

The relationship between the dataset index and the positive index is simple: divide the dataset index by 3 and then round down. The negative index is slightly more complicated, in that we have to subtract 1 from the dataset index and then subtract the most recent positive index as well.

Implemented in our LunaDataset class, that looks like the following.
Listing 12.7 dsets.py:286, LunaDataset.__getitem__

    def __getitem__(self, ndx):
        if self.ratio_int:                        # A ratio_int of zero means use the native balance.
            pos_ndx = ndx // (self.ratio_int + 1)

            if ndx % (self.ratio_int + 1):        # A nonzero remainder means this should be a negative sample.
                neg_ndx = ndx - 1 - pos_ndx
                neg_ndx %= len(self.negative_list)    # Overflow results in wraparound.
                candidateInfo_tup = self.negative_list[neg_ndx]
            else:
                pos_ndx %= len(self.pos_list)         # Overflow results in wraparound.
                candidateInfo_tup = self.pos_list[pos_ndx]
        else:
            candidateInfo_tup = self.candidateInfo_list[ndx]   # Returns the Nth sample if not balancing classes.

That can get a little hairy, but if you desk-check it, it will make sense. Keep in mind that with a low ratio, we'll run out of positive samples before exhausting the dataset. We take care of that by taking the modulus of pos_ndx before indexing into self.pos_list. While the same kind of index overflow should never happen with neg_ndx due to the large number of negative samples, we do the modulus anyway, just in case we later decide to make a change that might cause it to overflow.

We'll also make a change to our dataset's length. Although this isn't strictly necessary, it's nice to speed up individual epochs. We're going to hardcode our __len__ to be 200,000.

Listing 12.8 dsets.py:280, LunaDataset.__len__

    def __len__(self):
        if self.ratio_int:
            return 200000
        else:
            return len(self.candidateInfo_list)

We're no longer tied to a specific number of samples, and presenting "a full epoch" doesn't really make sense when we would have to repeat positive samples many, many times to present a balanced training set. By picking 200,000 samples, we reduce the time between starting a training run and seeing results (faster feedback is always nice!), and we give ourselves a nice, clean number of samples per epoch. Feel free to adjust the length of an epoch to meet your needs.

For completeness, we also add a command-line parameter.
Listing 12.9 training.py:31, class LunaTrainingApp

    class LunaTrainingApp:
        def __init__(self, sys_argv=None):
            # ... line 52
            parser.add_argument('--balanced',
                help="Balance the training data to half positive, half negative.",
                action='store_true',
                default=False,
            )

Then we pass that parameter into the LunaDataset constructor.

Listing 12.10 training.py:137, LunaTrainingApp.initTrainDl

    def initTrainDl(self):
        train_ds = LunaDataset(
            val_stride=10,
            isValSet_bool=False,
            ratio_int=int(self.cli_args.balanced),   # Here we rely on Python's True being convertible to a 1.
        )

We're all set. Let's run it!

12.4.2 Contrasting training with a balanced LunaDataset to previous runs
As a reminder, our unbalanced training run had results like these:

    $ python -m p2ch12.training
    ...
    E1 LunaTrainingApp
    E1 trn 0.0185 loss, 99.7% correct, 0.0000 precision, 0.0000 recall, nan f1 score
    E1 trn_neg 0.0026 loss, 100.0% correct (494717 of 494743)
    E1 trn_pos 6.5267 loss, 0.0% correct (0 of 1215)
    ...
    E1 val 0.0173 loss, 99.8% correct, nan precision, 0.0000 recall, nan f1 score
    E1 val_neg 0.0026 loss, 100.0% correct (54971 of 54971)
    E1 val_pos 5.9577 loss, 0.0% correct (0 of 136)

But when we run with --balanced, we see the following:

    $ python -m p2ch12.training --balanced
    ...
    E1 LunaTrainingApp
    E1 trn 0.1734 loss, 92.8% correct, 0.9363 precision, 0.9194 recall, 0.9277 f1 score
    E1 trn_neg 0.1770 loss, 93.7% correct (93741 of 100000)
    E1 trn_pos 0.1698 loss, 91.9% correct (91939 of 100000)
    ...
    E1 val 0.0564 loss, 98.4% correct, 0.1102 precision, 0.7941 recall, 0.1935 f1 score
    E1 val_neg 0.0542 loss, 98.4% correct (54099 of 54971)
    E1 val_pos 0.9549 loss, 79.4% correct (108 of 136)

This seems much better! We've given up about 5% correct answers on the negative samples to gain 86% correct positive answers.
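We can sanity-check the reported E1 validation metrics by recomputing them from the raw counts in the log above (108 of 136 positives and 54,099 of 54,971 negatives classified correctly); a quick sketch, not part of the book's code:

```python
# Recomputing precision, recall, and F1 from the E1 validation counts above.
tp = 108                 # positives correctly flagged (of 136)
fn = 136 - tp            # missed positives
tn = 54_099              # negatives correctly passed (of 54,971)
fp = 54_971 - tn         # negatives incorrectly flagged as positive

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 4), round(recall, 4), round(f1, 4))
# prints 0.1102 0.7941 0.1935
```

The recomputed values match the logged 0.1102 precision, 0.7941 recall, and 0.1935 F1 score, which is a nice confirmation that we understand what the training app is reporting.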
We're back into a solid B range again!⁵ As in chapter 11, however, this result is deceptive. Since there are 400 times as many negative samples as positive ones, even getting just 1% wrong means we'd be incorrectly classifying negative samples as positive four times more often than there are actually positive samples in total!

⁵ And remember that this is after only the 200,000 training samples presented, not the 500,000+ of the unbalanced dataset, so we got there in less than half the time.

Still, this is clearly better than the outright wrong behavior from chapter 11 and much better than a random coin flip. In fact, we've even crossed over into being (almost) legitimately useful in real-world scenarios. Recall our overworked radiologist poring over each and every speck of a CT: well, now we've got something that can do a reasonable job of screening out 95% of the false positives. That's a huge help, since it translates into about a tenfold increase in productivity for the machine-assisted human.

Of course, there's still that pesky issue of the 14% of positive samples that were missed, which we should probably deal with. Perhaps some additional epochs of training would help. Let's see (and again, expect to spend at least 10 minutes per epoch):

    $ python -m p2ch12.training --balanced --epochs 20
    ...
    E2 LunaTrainingApp
    E2 trn 0.0432 loss, 98.7% correct, 0.9866 precision, 0.9879 recall, 0.9873 f1 score
    E2 trn_ben 0.0545 loss, 98.7% correct (98663 of 100000)
    E2 trn_mal 0.0318 loss, 98.8% correct (98790 of 100000)
    E2 val 0.0603 loss, 98.5% correct, 0.1271 precision, 0.8456 recall, 0.2209 f1 score
    E2 val_ben 0.0584 loss, 98.6% correct (54181 of 54971)
    E2 val_mal 0.8471 loss, 84.6% correct (115 of 136)
    ...
    E5 trn 0.0578 loss, 98.3% correct, 0.9839 precision, 0.9823 recall, 0.9831 f1 score
    E5 trn_ben 0.0665 loss, 98.4% correct (98388 of 100000)
    E5 trn_mal 0.0490 loss, 98.2% correct (98227 of 100000)
    E5 val 0.0361 loss, 99.2% correct, 0.2129 precision, 0.8235 recall, 0.3384 f1 score
    E5 val_ben 0.0336 loss, 99.2% correct (54557 of 54971)
    E5 val_mal 1.0515 loss, 82.4% correct (112 of 136)
    ...
    E10 trn 0.0212 loss, 99.5% correct, 0.9942 precision, 0.9953 recall, 0.9948 f1 score
    E10 trn_ben 0.0281 loss, 99.4% correct (99421 of 100000)
    E10 trn_mal 0.0142 loss, 99.5% correct (99530 of 100000)
    E10 val 0.0457 loss, 99.3% correct, 0.2171 precision, 0.7647 recall, 0.3382 f1 score
    E10 val_ben 0.0407 loss, 99.3% correct (54596 of 54971)
    E10 val_mal 2.0594 loss, 76.5% correct (104 of 136)
    ...
    E20 trn 0.0132 loss, 99.7% correct, 0.9964 precision, 0.9974 recall, 0.9969 f1 score
    E20 trn_ben 0.0186 loss, 99.6% correct (99642 of 100000)
    E20 trn_mal 0.0079 loss, 99.7% correct (99736 of 100000)
    E20 val 0.0200 loss, 99.7% correct, 0.4780 precision, 0.7206 recall, 0.5748 f1 score
    E20 val_ben 0.0133 loss, 99.8% correct (54864 of 54971)
    E20 val_mal 2.7101 loss, 72.1% correct (98 of 136)

Ugh. That's a lot of text to scroll past to get to the numbers we're interested in. Let's power through and focus on the val_mal XX.X% correct numbers (or skip ahead to the TensorBoard graph in the next section). After epoch 2, we were at 87.5%; on epoch 5, we peaked with 92.6%; and then by epoch 20 we dropped down to 86.8%, below our second epoch!

NOTE As mentioned earlier, expect each run to have unique behavior due to random initialization of network weights and random selection and ordering of training samples per epoch.

The training set numbers don't seem to be having the same problem. Negative training samples are classified correctly 98.8% of the time, and positive samples are 99.1% correct. What's going on?
12.4.3 Recognizing the symptoms of overfitting
What we are seeing are clear signs of overfitting. Let's take a look at the graph of our loss on positive samples, in figure 12.18.

Here, we can see that the training loss for our positive samples is nearly zero: each positive training sample gets a nearly perfect prediction. Our validation loss for positive samples is increasing, though, and that means our real-world performance is likely getting worse. At this point, it's often best to stop the training script, since the model is no longer improving.

TIP Generally, if your model's performance is improving on your training set while getting worse on your validation set, the model has started overfitting.

We must take care to examine the right metrics, however, since this trend is only happening on our positive loss. If we take a look at our overall loss, everything seems fine! That's because our validation set is not balanced, so the overall loss is dominated by our negative samples. As shown in figure 12.19, we are not seeing the same divergent behavior for our negative samples. Instead, our negative loss looks great! That's because we have 400 times more negative samples, so it's much, much harder for the model to remember individual details. Our positive training set has only 1,215 samples, though. While we repeat those samples multiple times, that doesn't make them harder to memorize. The model is shifting from generalized principles to essentially memorizing quirks of those 1,215 samples and claiming that anything that's not one of those few samples is negative. This includes both negative training samples and everything in our validation set (both positive and negative).

Clearly, some generalization is still going on, since we are classifying about 70% of the positive validation set correctly.
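The TIP above lends itself to a mechanical check. Here is a minimal sketch (not from the book's code) that flags the divergence pattern just described, given per-epoch loss histories:

```python
# Minimal sketch of the overfitting signature from the TIP above: training
# loss keeps falling while validation loss keeps rising over recent epochs.
def is_overfitting(trn_losses, val_losses, window=3):
    if len(trn_losses) < window + 1 or len(val_losses) < window + 1:
        return False  # not enough history to judge a trend
    trn = trn_losses[-(window + 1):]
    val = val_losses[-(window + 1):]
    trn_improving = all(later < earlier for earlier, later in zip(trn, trn[1:]))
    val_worsening = all(later > earlier for earlier, later in zip(val, val[1:]))
    return trn_improving and val_worsening

# Shaped like our positive-sample curves: training collapses toward zero
# while validation climbs.
print(is_overfitting([0.9, 0.5, 0.2, 0.1, 0.05],
                     [0.9, 0.8, 0.9, 1.1, 1.3]))   # prints True
```

Note that in our setting we would feed it the positive-sample loss series specifically; as just discussed, the overall loss is dominated by negatives and would mask the divergence.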
We just need to change how we're training the model so that our training set and validation set both trend in the right direction.

Figure 12.18 Our positive loss showing clear signs of overfitting, as the training loss and validation loss are trending in different directions (validation loss goes up while training loss goes down to zero; tag: loss/pos)

12.5 Revisiting the problem of overfitting
We touched on the concept of overfitting in chapter 5, and now it's time to take a closer look at how to address this common situation. Our goal with training a model is to teach it to recognize the general properties of the classes we are interested in, as expressed in our dataset. Those general properties are present in some or all samples of the class and can be generalized and used to predict samples that haven't been trained on. When the model starts to learn specific properties of the training set, overfitting occurs, and the model starts to lose the ability to generalize. In case that's a bit too abstract, let's use another analogy.

12.5.1 An overfit face-to-age prediction model
Let's pretend we have a model that takes an image of a human face as input and outputs a predicted age in years. A good model would pick up on age signifiers like wrinkles, gray hair, hairstyle, clothing choices, and similar, and use those to build a general model of what different ages look like. When presented with a new picture, it would consider things like "conservative haircut" and "reading glasses" and "wrinkles" to conclude "around 65 years old."

An overfit model, by contrast, instead remembers specific people by remembering identifying details. "That haircut and those glasses mean it's Frank. He's 62.8 years old"; "Oh, that scar means it's Harry. He's 39.3"; and so on. When shown a new person, the model won't recognize the person and will have absolutely no idea what age to predict.
Even worse, if shown a picture of Frank Jr. (the spittin' image of his dad, at least when he's wearing his glasses!), the model will say, "I think that's Frank. He's 62.8 years old." Never mind that Junior is 25 years younger!

Figure 12.19 Our negative loss showing no signs of overfitting (both losses are trending down; tag: loss/neg)

Overfitting is usually due to having too few training samples when compared to the ability of the model to just memorize the answers. The median human can memorize the birthdays of their immediate family but would have to resort to generalizations when predicting the ages of any group larger than a small village.

Our face-to-age model has the capacity to simply memorize the photos of anyone who doesn't look exactly their age. As we discussed in part 1, model capacity is a somewhat abstract concept, but is roughly a function of the number of parameters of the model times how efficiently those parameters are used. When a model has a high capacity relative to the amount of data needed to memorize the hard samples from the training set, it's likely that the model will begin to overfit on those more difficult training samples.

12.6 Preventing overfitting with data augmentation
It's time to take our model training from good to great. We need to cover one last step in figure 12.20.

Figure 12.20 The set of topics for this chapter, with a focus on data augmentation (1. Guard dogs; 2. Birds and burglars; 3. Ratios: recall and precision; 4. New metric: F1 score; 5. Balancing; 6. Augmentation; 7. Workin' great!)

We augment a dataset by applying synthetic alterations to individual samples, resulting in a new dataset with an effective size that is larger than the original. The typical goal is for the alterations to result in a synthetic sample that remains representative of the same general class as the source sample, but that cannot be trivially memorized alongside the original. When done properly, this augmentation can increase the training set size beyond what the model is capable of memorizing, resulting in the model being forced to increasingly rely on generalization, which is exactly what we want. Doing so is especially useful when dealing with limited data, as we saw in section 12.4.1.

Of course, not all augmentations are equally useful. Going back to our example of a face-to-age prediction model, we could trivially change the red channel of the four corner pixels of each image to a random value 0-255, which would result in a dataset 4 billion times larger than the original. Of course, this wouldn't be particularly useful, since the model can pretty trivially learn to ignore the red dots in the image corners, and the rest of the image remains as easy to memorize as the single, unaugmented original image.

Contrast that approach with flipping the image left to right. Doing so would only result in a dataset twice as large as the original, but each image would be quite a bit more useful for training purposes. The general properties of aging are not correlated left to right, so a mirrored image remains representative. Similarly, it's rare for facial pictures to be perfectly symmetrical, so a mirrored version is unlikely to be trivially memorized alongside the original.

12.6.1 Specific data augmentation techniques
We are going to implement five specific types of data augmentation. Our implementation will allow us to experiment with any or all of them, individually or in aggregate.
The five techniques are as follows:

- Mirroring the image up-down, left-right, and/or front-back
- Shifting the image around by a few voxels
- Scaling the image up or down
- Rotating the image around the head-foot axis
- Adding noise to the image

For each technique, we want to make sure our approach maintains the training sample's representative nature, while being different enough that the sample is useful to train with.

We'll define a function getCtAugmentedCandidate that is responsible for taking our standard chunk-of-CT-with-candidate-inside and modifying it. Our main approach will define an affine transformation matrix (http://mng.bz/Edxq) and use it with the PyTorch affine_grid (https://pytorch.org/docs/stable/nn.html#affine-grid) and grid_sample (https://pytorch.org/docs/stable/nn.html#torch.nn.functional.grid_sample) functions to resample our candidate.

Listing 12.11 dsets.py:149, def getCtAugmentedCandidate

    def getCtAugmentedCandidate(
            augmentation_dict,
            series_uid, center_xyz, width_irc,
            use_cache=True):
        if use_cache:
            ct_chunk, center_irc = \
                getCtRawCandidate(series_uid, center_xyz, width_irc)
        else:
            ct = getCt(series_uid)
            ct_chunk, center_irc = ct.getRawCandidate(center_xyz, width_irc)

        ct_t = torch.tensor(ct_chunk).unsqueeze(0).unsqueeze(0).to(torch.float32)

We first obtain ct_chunk, either from the cache or directly by loading the CT (something that will come in handy once we are creating our own candidate centers), and then convert it to a tensor. Next is the affine grid and sampling code.

Listing 12.12 dsets.py:162, def getCtAugmentedCandidate

    transform_t = torch.eye(4)
    # ...                        # Modifications to transform_t will go here.
    # ... line 195
    affine_t = F.affine_grid(
        transform_t[:3].unsqueeze(0).to(torch.float32),
        ct_t.size(),
        align_corners=False,
    )
    augmented_chunk = F.grid_sample(
        ct_t,
        affine_t,
        padding_mode='border',
        align_corners=False,
    ).to('cpu')
    # ... line 214
    return augmented_chunk[0], center_irc

Without anything additional, this function won't do much. Let's see what it takes to add in some actual transforms.

NOTE It's important to structure your data pipeline such that your caching steps happen before augmentation! Doing otherwise will result in your data being augmented once and then persisted in that state, which defeats the purpose.

MIRRORING
When mirroring a sample, we keep the pixel values exactly the same and only change the orientation of the image. Since there's no strong correlation between tumor growth and left-right or front-back, we should be able to flip those without changing the representative nature of the sample. The index-axis (referred to as Z in patient coordinates) corresponds to the direction of gravity in an upright human, however, so there's a possibility of a difference in the top and bottom of a tumor. We are going to assume it's fine, since quick visual investigation doesn't show any gross bias. Were we working toward a clinically relevant project, we'd need to confirm that assumption with an expert.

Listing 12.13 dsets.py:165, def getCtAugmentedCandidate

    for i in range(3):
        if 'flip' in augmentation_dict:
            if random.random() > 0.5:
                transform_t[i,i] *= -1

The grid_sample function maps the range [-1, 1] to the extents of both the old and new tensors (the rescaling happens implicitly if the sizes are different). This range mapping means that to mirror the data, all we need to do is multiply the relevant element of the transformation matrix by -1.

SHIFTING BY A RANDOM OFFSET
Shifting the nodule candidate around shouldn't make a huge difference, since convolutions are translation invariant, though this will make our model more robust to imperfectly centered nodules.
What will make a more significant difference is that the offset might not be an integer number of voxels; instead, the data will be resampled using trilinear interpolation, which can introduce some slight blurring. Voxels at the edge of the sample will be repeated, which can be seen as a smeared, streaky section along the border.

Listing 12.14 dsets.py:165, def getCtAugmentedCandidate

    for i in range(3):
        # ... line 170
        if 'offset' in augmentation_dict:
            offset_float = augmentation_dict['offset']
            random_float = (random.random() * 2 - 1)
            transform_t[i,3] = offset_float * random_float

Note that our 'offset' parameter is the maximum offset expressed in the same scale as the [-1, 1] range the grid_sample function expects.

SCALING
Scaling the image slightly is very similar to mirroring and shifting. Doing so can also result in the same repeated edge voxels we just mentioned when discussing shifting the sample.

Listing 12.15 dsets.py:165, def getCtAugmentedCandidate

    for i in range(3):
        # ... line 175
        if 'scale' in augmentation_dict:
            scale_float = augmentation_dict['scale']
            random_float = (random.random() * 2 - 1)
            transform_t[i,i] *= 1.0 + scale_float * random_float

Since random_float is in the range [-1, 1], it doesn't actually matter whether we add scale_float * random_float to 1.0 or subtract it.

ROTATING
Rotation is the first augmentation technique we're going to use where we have to carefully consider our data to ensure that we don't break our sample with a conversion that causes it to no longer be representative. Recall that our CT slices have uniform spacing along the rows and columns (the X- and Y-axes), but in the index (or Z) direction, the voxels are non-cubic. That means we can't treat those axes as interchangeable.
One option is to resample our data so that our resolution along the index-axis is the same as along the other two, but that's not a true solution because the data along that axis would be very blurry and smeared. Even if we interpolate more voxels, the fidelity of the data would remain poor. Instead, we'll treat that axis as special and confine our rotations to the X-Y plane.

Listing 12.16 dsets.py:181, def getCtAugmentedCandidate

    if 'rotate' in augmentation_dict:
        angle_rad = random.random() * math.pi * 2
        s = math.sin(angle_rad)
        c = math.cos(angle_rad)

        rotation_t = torch.tensor([
            [c, -s, 0, 0],
            [s, c, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1],
        ])

        transform_t @= rotation_t

NOISE
Our final augmentation technique is different from the others in that it is actively destructive to our sample in a way that flipping or rotating the sample is not. If we add too much noise to the sample, it will swamp the real data and make it effectively impossible to classify. While shifting and scaling the sample would do something similar if we used extreme input values, we've chosen values that will only impact the edge of the sample. Noise will have an impact on the entire image.

Listing 12.17 dsets.py:208, def getCtAugmentedCandidate

    if 'noise' in augmentation_dict:
        noise_t = torch.randn_like(augmented_chunk)
        noise_t *= augmentation_dict['noise']

        augmented_chunk += noise_t

The other augmentation types have increased the effective size of our dataset. Noise makes our model's job harder. We'll revisit this once we see some training results.

EXAMINING AUGMENTED CANDIDATES
We can see the result of our efforts in figure 12.21. The upper-left image shows an unaugmented positive candidate, and the next five show the effect of each augmentation type in isolation. Finally, the bottom row shows the combined result three times.
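The matrix trick used in the listings above can be checked on a toy tensor before trusting it on CT chunks. Here is a minimal 2D sketch (a 2x2 image, not part of the book's code) showing that negating a diagonal element of the affine matrix mirrors the sampled data, thanks to grid_sample's [-1, 1] coordinate mapping:

```python
import torch
import torch.nn.functional as F

# Toy 2x2 "image" with shape (N=1, C=1, H=2, W=2).
img = torch.tensor([[[[1., 2.],
                      [3., 4.]]]])

# 2D affine matrix: negate the X (width) axis, leave Y (height) alone.
# This is the 2D analog of transform_t[i,i] *= -1 in the mirroring listing.
flip_t = torch.tensor([[[-1., 0., 0.],
                        [ 0., 1., 0.]]])

grid = F.affine_grid(flip_t, img.size(), align_corners=False)
flipped = F.grid_sample(img, grid, align_corners=False)
# flipped is the left-right mirror of img: [[2, 1], [4, 3]]
```

Because the sampling locations land exactly on pixel centers here, the bilinear interpolation is exact and no blurring occurs; with fractional offsets (as in the shifting augmentation) the same machinery interpolates between voxels instead.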
Since each __getitem__ call to the augmenting dataset reapplies the augmentations randomly, each image on the bottom row looks different. This also means it's nearly impossible to generate an image exactly like this again! It's also important to remember that sometimes the 'flip' augmentation will result in no flip. Returning always-flipped images is just as limiting as not flipping in the first place. Now let's see if any of this makes a difference.

Figure 12.21 Various augmentation types performed on a positive nodule sample (panels: none, flip, offset, scale, rotate, noise, and three "all" combinations)

12.6.2 Seeing the improvement from data augmentation
We are going to train additional models, one per augmentation type discussed in the last section, with an additional model training run that combines all of the augmentation types. Once they're finished, we'll take a look at our numbers in TensorBoard.

In order to be able to turn our new augmentation types on and off, we need to expose the construction of augmentation_dict to our command-line interface. Arguments to our program will be added by parser.add_argument calls (not shown, but similar to the ones our program already has), which will then be fed into code that actually constructs augmentation_dict.
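The add_argument calls described as "not shown" above might look roughly like the following. This is a hedged reconstruction that mirrors the pattern of the --balanced flag shown earlier; the exact flag names and help strings in the book's repository may differ:

```python
import argparse

# Hypothetical sketch of the "not shown" command-line flags. argparse turns
# '--augment-flip' into the attribute augment_flip, which is what the
# augmentation_dict construction code reads from self.cli_args.
parser = argparse.ArgumentParser()
parser.add_argument('--augmented',
    help="Augment the training data.",
    action='store_true', default=False)
for name in ('flip', 'offset', 'scale', 'rotate', 'noise'):
    parser.add_argument('--augment-' + name,
        help="Augment the training data with {} transforms.".format(name),
        action='store_true', default=False)

cli_args = parser.parse_args(['--augment-flip', '--augment-noise'])
```

With store_true flags like these, each augmentation can be toggled independently, while --augmented turns them all on at once in the dict-construction code that follows.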
self.augmentation_dict = {} if self.cli_args.augmented or self.cli_args.augment_flip: self.augmentation_dict['flip'] = True if self.cli_args.augmented or self.cli_args.augment_offset: self.augmentation_dict['offset'] = 0.1 if self.cli_args.augmented or self.cli_args.augment_scale: self.augmentation_dict['scale'] = 0.2 if self.cli_args.augmented or self.cli_args.augment_rotate: self.augmentation_dict['rotate'] = True if self.cli_args.augmented or self.cli_args.augment_noise: self.augmentation_dict['noise'] = 25.0 Now that we have those command-line argu ments ready, you can either run the fol- lowing commands or revisit p2_run_every thing.ipynb and run cells 8 through 16. Either way you run it, expect these to take a significant time to finish: $ .venv/bin/python -m p2ch12.prepcache $ .venv/bin/python -m p2ch12.training --epochs 20 \ --balanced sanity-bal $ .venv/bin/python -m p2ch12.training --epochs 10 \ --balanced --augment-flip sanity-bal-flip $ .venv/bin/python -m p2ch12.training --epochs 10 \ --balanced --augment-shift sanity-bal-shift $ .venv/bin/python -m p2ch12.training --epochs 10 \ --balanced --augment-scale sanity-bal-scale $ .venv/bin/python -m p2ch12.training --epochs 10 \ --balanced --augment-rotate sanity-bal-rotate $ .venv/bin/python -m p2ch12.training --epochs 10 \Listing 12.18 training.py:105, LunaTrainingApp.__init__ These values were empirically chosen to have a reasonable impact, but better values probably exist. You only need to prep the cache once per chapter. You might have this run from earlier in the chapter; in that case there’s no need to rerun it!" Deep-Learning-with-PyTorch.pdf,"353 Preventing overfitting with data augmentation --balanced --augment-noise sanity-bal-noise $ .venv/bin/python -m p2ch12.training --epochs 20 \ --balanced --augmented sanity-bal-aug While that’s running, we can start TensorBoard. Let’s direct it to only show these runs by changing the logdir parameter like so: ../path/to/tensorboard --logdir runs/p2ch12 . 
Depending on the hardware you have at your disposal, the training might take a long time. Feel free to skip the flip, shift, and scale training jobs and reduce the first and last runs to 11 epochs if you need to move things along more quickly. We chose 20 epochs because that helps those runs stand out from the others, but 11 should work as well.

If you let everything run to completion, your TensorBoard should have data like that shown in figure 12.22. We're going to deselect everything except the validation data, to reduce clutter. When you're looking at your data live, you can also change the smoothing value, which can help clarify the trend lines. Take a quick look at the figure, and then we'll go over it in some detail.

[Figure: validation-set charts tagged correct/all, correct/neg, correct/pos, loss/all, loss/neg, loss/pos, pr/f1_score, pr/precision, and pr/recall, annotated: "Fully augmented is worse than unaugmented... except for how unaugmented is overfitting on positive samples"; "Noise is worse than unaugmented."]
Figure 12.22 Percent correctly classified, loss, F1 score, precision, and recall for the validation set from networks trained with a variety of augmentation schemes

The first thing to notice in the upper-left graph ("tag: correct/all") is that the individual augmentation types are something of a jumble. Our unaugmented and fully augmented runs are on opposite sides of that jumble. That means when combined, our augmentation is more than the sum of its parts. Also of interest is that our fully augmented run gets many more wrong answers.
While that's bad generally, if we look at the right column of images (which focus on the positive candidate samples we actually care about, the ones that are really nodules), we see that our fully augmented model is much better at finding the positive candidate samples. The recall for the fully augmented model is great! It's also much better at not overfitting. As we saw earlier, our unaugmented model gets worse over time.

One interesting thing to note is that the noise-augmented model is worse at identifying nodules than the unaugmented model. This makes sense if we remember that we said noise makes the model's job harder.

Another interesting thing to see in the live data (it's somewhat lost in the jumble here) is that the rotation-augmented model is nearly as good as the fully augmented model when it comes to recall, and it has much better precision. Since our F1 score is precision limited (due to the higher number of negative samples), the rotation-augmented model also has a better F1 score.

We'll stick with the fully augmented model going forward, since our use case requires high recall. The F1 score will still be used to determine which epoch to save as the best. In a real-world project, we might want to devote extra time to investigating whether a different combination of augmentation types and parameter values could yield better results.

12.7 Conclusion

We spent a lot of time and energy in this chapter reformulating how we think about our model's performance. It's easy to be misled by poor methods of evaluation, and it's crucial to have a strong intuitive understanding of the factors that feed into evaluating a model well. Once those fundamentals are internalized, it's much easier to spot when we're being led astray.

We've also learned about how to deal with data sources that aren't sufficiently populated. Being able to synthesize representative training samples is incredibly useful.
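As a quick numeric illustration of why a flood of negative samples makes the F1 score precision limited (our own example numbers, not results from the book):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute the three metrics from raw true/false positive/negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# With many negative samples, even a modest false-positive rate produces
# far more false positives than there are true positives, dragging
# precision (and hence F1) down while recall stays high.
p, r, f1 = precision_recall_f1(tp=90, fp=500, fn=10)
```

Here recall is 0.9, but precision is only about 0.15, and F1 ends up near 0.26: improving precision is the main lever left for improving F1.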
Situations where we have too much training data are rare indeed!

Now that we have a classifier that is performing reasonably, we'll turn our attention to automatically finding candidate nodules to classify. Chapter 13 will start there; then, in chapter 14, we will feed those candidates back into the classifier we developed here and venture into building one more classifier to tell malignant nodules from benign ones.

12.8 Exercises

1 The F1 score can be generalized to support values other than 1.
  a Read https://en.wikipedia.org/wiki/F1_score, and implement F2 and F0.5 scores.
  b Determine which of F1, F2, and F0.5 makes the most sense for this project. Track that value, and compare and contrast it with the F1 score.6
2 Implement a WeightedRandomSampler approach to balancing the positive and negative training samples for LunaDataset with ratio_int set to 0.
  a How did you get the required information about the class of each sample?
  b Which approach was easier? Which resulted in more readable code?
3 Experiment with different class-balancing schemes.
  a What ratio results in the best score after two epochs? After 20?
  b What if the ratio is a function of epoch_ndx?
4 Experiment with different data augmentation approaches.
  a Can any of the existing approaches be made more aggressive (noise, offset, and so on)?
  b Does the inclusion of noise augmentation help or hinder your training results?
    – Are there other values that change this result?
  c Research data augmentation that other projects have used. Are any applicable here?
    – Implement "mixup" augmentation for positive nodule candidates. Does it help?
5 Change the initial normalization from nn.BatchNorm to something custom, and retrain the model.
  a Can you get better results using fixed normalization?
  b What normalization offset and scale make sense?
  c Do nonlinear normalizations like square roots help?
6 What other kinds of data can TensorBoard display besides those we've covered here?
  a Can you have it display information about the weights of your network?
  b What about intermediate results from running your model on a particular sample?
    – Does having the backbone of the model wrapped in an instance of nn.Sequential help or hinder this effort?

Footnote 6: Yep, that's a hint it's not the F1 score!

12.9 Summary

- A binary label and a binary classification threshold combine to partition the dataset into four quadrants: true positives, true negatives, false negatives, and false positives. These four quantities provide the basis for our improved performance metrics.
- Recall is the ability of a model to maximize true positives. Selecting every single item guarantees perfect recall—because all the correct answers are included—but also exhibits poor precision.
- Precision is the ability of a model to minimize false positives. Selecting nothing guarantees perfect precision—because no incorrect answers are included—but also exhibits poor recall.
- The F1 score combines precision and recall into a single metric that describes model performance. We use the F1 score to determine what impact changes to training or the model have on our performance.
- Balancing the training set to have an equal number of positive and negative samples during training can result in the model performing better (defined as having a positive, increasing F1 score).
- Data augmentation takes existing organic data samples and modifies them such that the resulting augmented sample is non-trivially different from the original, but remains representative of samples of the same class. This allows additional training without overfitting in situations where data is limited.
- Common data augmentation strategies include changes in orientation, mirroring, rescaling, shifting by an offset, and adding noise.
Depending on the project, other more specific strategies may also be relevant.

13 Using segmentation to find suspected nodules

This chapter covers
- Segmenting data with a pixel-to-pixel model
- Performing segmentation with U-Net
- Understanding mask prediction using Dice loss
- Evaluating a segmentation model's performance

In the last four chapters, we have accomplished a lot. We've learned about CT scans and lung tumors, datasets and data loaders, and metrics and monitoring. We have also applied many of the things we learned in part 1, and we have a working classifier. We are still operating in a somewhat artificial environment, however, since we require hand-annotated nodule candidate information to load into our classifier. We don't have a good way to create that input automatically. Just feeding the entire CT into our model—that is, plugging in overlapping 32 × 32 × 32 patches of data—would result in 31 × 31 × 7 = 6,727 patches per CT, or about 10 times the number of annotated samples we have. We'd need to overlap the edges; our classifier expects the nodule candidate to be centered, and even then the inconsistent positioning would probably present issues.

As we explained in chapter 9, our project uses multiple steps to solve the problem of locating possible nodules, identifying them, with an indication of their possible malignancy. This is a common approach among practitioners, while in deep learning research there is a tendency to demonstrate the ability of individual models to solve complex problems in an end-to-end fashion. The multistage project design we use in this book gives us a good excuse to introduce new concepts step by step.

13.1 Adding a second model to our project

In the previous two chapters, we worked on step 4 of our plan shown in figure 13.1: classification.
In this chapter, we'll go back not just one but two steps. We need to find a way to tell our classifier where to look. To do this, we are going to take raw CT scans and find everything that might be a nodule.1 This is the highlighted step 2 in the figure. To find these possible nodules, we have to flag voxels that look like they might be part of a nodule, a process known as segmentation. Then, in chapter 14, we will deal with step 3 and provide the bridge by transforming the segmentation masks from this image into location annotations.

By the time we're finished with this chapter, we'll have created a new model with an architecture that can perform per-pixel labeling, or segmentation. The code that

Footnote 1: We expect to mark quite a few things that are not nodules; thus, we use the classification step to reduce the number of these.

[Figure: the five-step project flow: Step 1 (ch. 10): Data Loading (.MHD/.RAW CT data); Step 2 (ch. 13): Segmentation (segmentation model); Step 3 (ch. 14): Grouping (candidate locations, [(I,R,C), (I,R,C), ...]); Step 4 (ch. 11+12): Classification (classification model, candidate samples, p=0.1 ... p=0.9, [NEG, POS, NEG, ...]); Step 5 (ch. 14): Nodule analysis and Diagnosis (MAL/BEN).]
Figure 13.1 Our end-to-end lung cancer detection project, with a focus on this chapter's topic: step 2, segmentation

will accomplish this will be very similar to the code from the last chapter, especially if we focus on the larger structure. All of the changes we're going to make will be smaller and targeted. As we see in figure 13.2, we need to make updates to our model (step 2A in the figure), dataset (2B), and training loop (2C) to account for the new model's inputs, outputs, and other requirements. (Don't worry if you don't recognize each component in each of these steps in step 2 on the right side of the diagram. We'll go through the details when we get to each step.) Finally, we'll examine the results we get when running our new model (step 3 in the figure).

Breaking down figure 13.2 into steps, our plan for this chapter is as follows:

1 Segmentation. First we will learn how segmentation works with a U-Net model, including what the new model components are and what happens to them as we go through the segmentation process. This is step 1 in figure 13.2.
2 Update. To implement segmentation, we need to change our existing code base in three main places, shown in the substeps on the right side of figure 13.2. The code will be structurally very similar to what we developed for classification, but will differ in detail:
  a Update the model (step 2A). We will integrate a preexisting U-Net into our segmentation model. Our model in chapter 12 output a simple true/false classification; our model in this chapter will instead output an entire image.

[Figure: chapter outline: 1. Segmentation (U-Net); 2. Update: 2a. Model, 2b. Dataset, 2c. Training; 3. Results.]
Figure 13.2 The new model architecture for segmentation, along with the model, dataset, and training loop updates we will implement

  b Change the dataset (step 2B). We need to change our dataset to not only deliver bits of the CT but also provide masks for the nodules. The classification dataset consisted of 3D crops around nodule candidates, but we'll need to collect both full CT slices and 2D crops for segmentation training and validation.
  c Adapt the training loop (step 2C). We need to adapt the training loop so we bring in a new loss to optimize. Because we want to display images of our segmentation results in TensorBoard, we'll also do things like saving our model weights to disk.
3 Results. Finally, we'll see the fruits of our efforts when we look at the quantitative segmentation results.

13.2 Various types of segmentation

To get started, we need to talk about different flavors of segmentation. For this project, we will be using semantic segmentation, which is the act of classifying individual pixels in an image using labels just like those we've seen for our classification tasks, for example, "bear," "cat," "dog," and so on. If done properly, this will result in distinct chunks or regions that signify things like "all of these pixels are part of a cat." This takes the form of a label mask or heatmap that identifies areas of interest. We will have a simple binary label: true values will correspond to nodule candidates, and false values mean uninteresting healthy tissue. This partially meets our need to find nodule candidates that we will later feed into our classification network.

Before we get into the details, we should briefly discuss other approaches we could take to finding our nodule candidates. For example, instance segmentation labels individual objects of interest with distinct labels.
So whereas semantic segmentation would label a picture of two people shaking hands with two labels ("person" and "background"), instance segmentation would have three labels ("person1," "person2," and "background") with a boundary somewhere around the clasped hands. While this could be useful for us to distinguish "nodule1" from "nodule2," we will instead use grouping to identify individual nodules. That approach will work well for us since nodules are unlikely to touch or overlap.

Another approach to these kinds of tasks is object detection, which locates an item of interest in an image and puts a bounding box around the item. While both instance segmentation and object detection could be great for our uses, their implementations are somewhat complex, and we don't feel they are the best things for you to learn next. Also, training object-detection models typically requires much more computational resources than our approach requires. If you're feeling up to the challenge, the YOLOv3 paper is a more entertaining read than most deep learning research papers.2 For us, though, semantic segmentation it is.

Footnote 2: Joseph Redmon and Ali Farhadi, "YOLOv3: An Incremental Improvement," https://pjreddie.com/media/files/papers/YOLOv3.pdf. Perhaps check it out once you've finished the book.

NOTE As we go through the code examples in this chapter, we're going to rely on you checking the code from GitHub for much of the larger context. We'll be omitting code that's uninteresting or similar to what's come before in earlier chapters, so that we can focus on the crux of the issue at hand.
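Concretely, the binary per-pixel label described in this section is just a boolean mask the same shape as the image. A tiny illustration (the sizes and flagged region are made up):

```python
import torch

# A per-pixel binary label for one 512 x 512 CT slice: True marks
# pixels belonging to a nodule candidate; False is healthy tissue.
ct_slice = torch.randn(512, 512)
mask = torch.zeros(512, 512, dtype=torch.bool)
mask[100:105, 200:208] = True        # a made-up flagged region (5 x 8 pixels)

# Boolean indexing pulls out just the flagged pixel values.
candidate_values = ct_slice[mask]
```

The model we build in this chapter will predict such a mask (as per-pixel probabilities) rather than a single true/false value per crop.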
13.3 Semantic segmentation: Per-pixel classification

Often, segmentation is used to answer questions of the form "Where is a cat in this picture?" Obviously, most pictures of a cat, like figure 13.3, have a lot of non-cat in them; there's the table or wall in the background, the keyboard the cat is sitting on, that kind of thing. Being able to say "This pixel is part of the cat, and this other pixel is part of the wall" requires fundamentally different model output and a different internal structure from the classification models we've worked with thus far. Classification can tell us whether a cat is present, while segmentation will tell us where we can find it. If your project requires differentiating between a near cat and a far cat, or a cat on the left versus a cat on the right, then segmentation is probably the right approach.

The image-consuming classification models that we've implemented so far can be thought of as funnels or magnifying glasses that take a large bunch of pixels and focus them down into a single "point" (or, more accurately, a single set of class predictions), as shown in figure 13.4. Classification models provide answers of the form "Yes, this huge pile of pixels has a cat in it, somewhere," or "No, no cats here." This is great when you don't care where the cat is, just that there is (or isn't) one in the image.

Repeated layers of convolution and downsampling mean the model starts by consuming raw pixels to produce specific, detailed detectors for things like texture and color, and then builds up higher-level conceptual feature detectors for parts like eyes

[Figure: classification vs. segmentation: classification outputs "CAT: Yes," while segmentation outputs "Cat: here" as a highlighted region.]
Figure 13.3 Classification results in one or more binary flags, while segmentation produces a mask or heatmap.
and ears and mouth and nose3 that finally result in "cat" versus "dog." Due to the increasing receptive field of the convolutions after each downsampling layer, those higher-level detectors can use information from an increasingly large area of the input image.

Unfortunately, since segmentation needs to produce an image-like output, ending up at a single classification-like list of binary-ish flags won't work. As we recall from section 11.4, downsampling is key to increasing the receptive fields of the convolutional layers, and is what helps reduce the array of pixels that make up an image to a single list of classes. Notice figure 13.5, which repeats figure 11.6.

In the figure, our inputs flow from the left to right in the top row and are continued in the bottom row. In order to work out the receptive field—the area influencing the single pixel at bottom right—we can go backward. The max-pool operation has 2 × 2 inputs producing each final output pixel. The 3 × 3 conv in the middle of the bottom row looks at one adjacent pixel (including diagonally) in each direction, so the total receptive field of the convolutions that result in the 2 × 2 output is 4 × 4 (with the right "x" characters). The 3 × 3 convolution in the top row then adds an additional pixel of context in each direction, so the receptive field of the single output pixel at bottom right is a 6 × 6 field in the input at top left. With the downsampling from the max pool, the receptive field of the next block of convolutions will have double the width, and each additional downsampling will double it again, while shrinking the size of the output.

We'll need a different model architecture if we want our output to be the same size as our input. One simple model to use for segmentation would have repeated convolutional layers without any downsampling.
Given appropriate padding, that would result in output the same size as the input (good), but a very limited receptive field (bad) due to the limited reach based on how much overlap multiple layers of small convolutions will have. The classification model uses each downsampling layer to double the effective reach of the following convolutions; and without that increase in effective field size, each segmented pixel will only be able to consider a very local neighborhood.

Footnote 3: ... "head, shoulders, knees, and toes, knees and toes," as my (Eli's) toddlers would sing.

[Figure: the magnifying glass model: pixels, then textures, then shapes, then classes, ending in a class list (apple: no, bear: no, cat: yes, dog: no, egg: no, flag: no, ... zebra: no).]
Figure 13.4 The magnifying glass model structure for classification

NOTE Assuming 3 × 3 convolutions, the receptive field size for a simple model of stacked convolutions is 2 * L + 1, with L being the number of convolutional layers. Four layers of 3 × 3 convolutions will have a receptive field of 9 × 9 per output pixel. By inserting a 2 × 2 max pool between the second and third convolutions, and another at the end, we increase the receptive field to ...

NOTE See if you can figure out the math yourself; when you're done, check back here.

... 16 × 16. The final series of conv-conv-pool has a receptive field of 6 × 6, but that happens after the first max pool, which makes the final effective receptive field 12 × 12 in the original input resolution. The first two conv layers add a total border of 2 pixels around the 12 × 12, for a total of 16 × 16.

So the question remains: how can we improve the receptive field of an output pixel while maintaining a 1:1 ratio of input pixels to output pixels?
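The receptive-field arithmetic in the notes above can be checked mechanically with the standard recurrence (a helper we wrote for illustration; it is not code from the book):

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs, applied in order.
    Returns the receptive field of one output pixel, in input pixels."""
    rf, jump = 1, 1
    for kernel_size, stride in layers:
        rf += (kernel_size - 1) * jump  # each layer widens the field,
        jump *= stride                  # scaled by accumulated stride
    return rf

conv = (3, 1)  # 3 x 3 convolution, stride 1
pool = (2, 2)  # 2 x 2 max pool, stride 2

# conv-conv-pool-conv-conv-pool, as described in the note
rf = receptive_field([conv, conv, pool, conv, conv, pool])  # 16
```

Four stacked 3 × 3 convolutions alone give 2 * 4 + 1 = 9, matching the note, and adding the two max pools as described brings the field to 16 × 16.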
[Figure: a LunaModel block: a 6 × 6 input passes through two 3 × 3 convolutions and a 2 × 2 max pool down to a 1 × 1 output; the 2 × 2 pre-pool output has a 4 × 4 receptive field in its input.]
Figure 13.5 The convolutional architecture of a LunaModel block, consisting of two 3 × 3 convolutions followed by a max pool. The final pixel has a 6 × 6 receptive field.

One common answer is to use a technique called upsampling, which takes an image of a given resolution and produces an image of a higher resolution. Upsampling at its simplest just means replacing each pixel with an N × N block of pixels, each with the same value as the original input pixel. The possibilities only get more complex from there, with options like linear interpolation and learned deconvolution.

13.3.1 The U-Net architecture

Before we end up diving down a rabbit hole of possible upsampling algorithms, let's get back to our goal for the chapter. Per figure 13.6, step 1 is to get familiar with a foundational segmentation algorithm called U-Net.

The U-Net architecture is a design for a neural network that can produce pixel-wise output and that was invented for segmentation. As you can see from the highlight in figure 13.6, a diagram of the U-Net architecture looks a bit like the letter U, which explains the origins of the name. We also immediately see that it is quite a bit more complicated than the mostly sequential structure of the classifiers we are familiar with.

We'll see a more detailed version of the U-Net architecture shortly, in figure 13.7, and learn exactly what each of those components is doing. Once we understand the model architecture, we can work on training one to solve our segmentation task. The U-Net architecture shown in figure 13.7 was an early breakthrough for image segmentation. Let's take a look and then walk through the architecture.
In this diagram, the boxes represent intermediate results and the arrows represent operations between them. The U-shape of the architecture comes from the multiple resolutions at which the network operates. In the top row is the full resolution (512 × 512 for us), the row below has half that, and so on. The data flows from top left to bottom center through a series of convolutions and downscaling, as we saw in the classifiers and looked at in detail in chapter 8. Then we go up again, using upscaling convolutions to get back to the full resolution. Unlike the original U-Net, we will be padding things so we don't lose pixels off the edges, so our resolution is the same on the left and on the right.

[Figure: chapter outline with step 1, Segmentation (U-Net), highlighted.]
Figure 13.6 The new model architecture for segmentation that we will be working with

Earlier network designs already had this U-shape, which people attempted to use to address the limited receptive field size of fully convolutional networks. To address this limited field size, they used a design that copied, inverted, and appended the focusing portions of an image-classification network to create a symmetrical model that goes from fine detail to wide receptive field and back to fine detail.

Those earlier network designs had problems converging, however, most likely due to the loss of spatial information during downsampling. Once information reaches a large number of very downscaled images, the exact location of object boundaries gets

[Figure: the U-Net architecture, annotated: skip connections; a "classification magnifying glass" path that could be fed into a linear layer; the upsampling path.]
Figure 13.7 From the U-Net paper, with annotations.
Source: The base of this figure is courtesy Olaf Ronneberger et al., from the paper "U-Net: Convolutional Networks for Biomedical Image Segmentation," which can be found at https://arxiv.org/abs/1505.04597 and https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.

harder to encode and therefore reconstruct. To address this, the U-Net authors added the skip connections we see at the center of the figure. We first touched on skip connections in chapter 8, although they are employed differently here than in the ResNet architecture. In U-Net, skip connections short-circuit inputs along the downsampling path into the corresponding layers in the upsampling path. These layers receive as input both the upsampled results of the wide receptive field layers from lower in the U as well as the output of the earlier fine detail layers via the "copy and crop" bridge connections. This is the key innovation behind U-Net (which, interestingly, predated ResNet).

All of this means those final detail layers are operating with the best of both worlds. They've got both information about the larger context surrounding the immediate area and fine detail data from the first set of full-resolution layers.

The "conv 1x1" layer at far right, in the head of the network, changes the number of channels from 64 to 2 (the original paper had 2 output channels; we have 1 in our case). This is somewhat akin to the fully connected layer we used in our classification network, but per-pixel, channel-wise: it's a way to convert from the number of filters used in the last upsampling step to the number of output classes needed.

13.4 Updating the model for segmentation

It's time to move through step 2A in figure 13.8. We've had enough theory about segmentation and history about U-Net; now we want to update our code, starting with the model.
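The per-pixel channel conversion performed by that final "conv 1x1" layer is easy to see in isolation (a small sketch using our single-output-channel variant; the batch and spatial sizes are made up):

```python
import torch
import torch.nn as nn

# A 1 x 1 convolution acts like a per-pixel fully connected layer:
# it maps 64 feature channels down to 1 output channel at every
# spatial location independently.
head = nn.Conv2d(64, 1, kernel_size=1)

features = torch.randn(2, 64, 32, 32)   # N, C, H, W
per_pixel_scores = head(features)       # spatial size is unchanged
```

Because the kernel is 1 × 1, the layer has only 64 weights plus 1 bias, and the output keeps the full spatial resolution while collapsing the channel dimension to the number of output classes.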
Instead of just outputting a binary classification that gives us a single output of true or false, we integrate a U-Net to get to a model that's capable of outputting a

[Figure: chapter outline with step 2a, Model, highlighted.]
Figure 13.8 The outline of this chapter, with a focus on the changes needed for our segmentation model

probability for every pixel: that is, performing segmentation. Rather than implementing a custom U-Net segmentation model from scratch, we're going to appropriate an existing implementation from an open source repository on GitHub.

The U-Net implementation at https://github.com/jvanvugt/pytorch-unet seems to meet our needs well.4 It's MIT licensed (copyright 2018 Joris), it's contained in a single file, and it has a number of parameter options for us to tweak. The file is included in our code repository at util/unet.py, along with a link to the original repository and the full text of the license used.

NOTE While it's less of an issue for personal projects, it's important to be aware of the license terms attached to open source software you use for a project. The MIT license is one of the most permissive open source licenses, and it still places requirements on users of MIT licensed code! Also be aware that authors retain copyright even if they publish their work in a public forum (yes, even on GitHub), and if they do not include a license, that does not mean the work is in the public domain. Quite the opposite! It means you don't have any license to use the code, any more than you'd have the right to wholesale copy a book you borrowed from the library.
We suggest taking some time to inspect the code and, based on the knowledge you have built up until this point, identify the building blocks of the architecture as they are reflected in the code. Can you spot skip connections? A particularly worthy exercise for you is to draw a diagram that shows how the model is laid out, just by looking at the code.

Now that we have found a U-Net implementation that fits the bill, we need to adapt it so that it works well for our needs. In general, it's a good idea to keep an eye out for situations where we can use something off the shelf. It's important to have a sense of what models exist, how they're implemented and trained, and whether any parts can be scavenged and applied to the project we're working on at any given moment. While that broader knowledge is something that comes with time and experience, it's a good idea to start building that toolbox now.

13.4.1 Adapting an off-the-shelf model to our project

We will now make some changes to the classic U-Net, justifying them along the way. A useful exercise for you will be to compare results between the vanilla model and the one after the tweaks, preferably removing one at a time to see the effect of each change (this is also called an ablation study in research circles).

First, we're going to pass the input through batch normalization. This way, we won't have to normalize the data ourselves in the dataset; and, more importantly, we will get normalization statistics (read mean and standard deviation) estimated over individual batches. This means when a batch is dull for some reason—that is, when there is nothing to see in all the CT crops fed into the network—it will be scaled more

Footnote 4: The implementation included here differs from the official paper by using average pooling instead of max pooling to downsample. The most recent version on GitHub has changed to use max pool.
strongly. The fact that samples in batches are picked randomly at every epoch will minimize the chances of a dull sample ending up in an all-dull batch, and hence those dull samples getting overemphasized.

Second, since the output values are unconstrained, we are going to pass the output through an nn.Sigmoid layer to restrict the output to the range [0, 1]. Third, we will reduce the total depth and number of filters we allow our model to use. While this is jumping ahead of ourselves a bit, the capacity of the model using the standard parameters far outstrips our dataset size. This means we're unlikely to find a pretrained model that matches our exact needs. Finally, although this is not a modification, it's important to note that our output is a single channel, with each pixel of output representing the model's estimate of the probability that the pixel in question is part of a nodule.

This wrapping of U-Net can be done rather simply by implementing a model with three attributes: one each for the two features we want to add, and one for the U-Net itself, which we can treat just like any prebuilt module here. We will also pass any keyword arguments we receive into the U-Net constructor.

class UNetWrapper(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        self.input_batchnorm = nn.BatchNorm2d(kwargs['in_channels'])
        self.unet = UNet(**kwargs)
        self.final = nn.Sigmoid()
        self._init_weights()

The forward method is a similarly straightforward sequence. We could use an instance of nn.Sequential as we saw in chapter 8, but we'll be explicit here for both clarity of code and clarity of stack traces.⁵

def forward(self, input_batch):
    bn_output = self.input_batchnorm(input_batch)
    un_output = self.unet(bn_output)
    fn_output = self.final(un_output)
    return fn_output

Note that we're using nn.BatchNorm2d here.
This is because U-Net is fundamentally a two-dimensional segmentation model.

(A few annotations for listing 13.1, model.py:17, class UNetWrapper, and listing 13.2, model.py:50, UNetWrapper.forward: kwargs is a dictionary containing all keyword arguments passed to the constructor; BatchNorm2d wants us to specify the number of input channels, which we take from that keyword argument; the U-Net is a small thing to include here, but it's really doing all the work; and just as for the classifier in chapter 11, we use our custom weight initialization, with the function copied over, so we will not show the code again.)

⁵ In the unlikely event our code throws any exceptions, which it clearly won't, will it?

We could adapt the implementation to use 3D convolutions, in order to use information across slices. The memory usage of a straightforward implementation would be considerably greater: that is, we would have to chop up the CT scan. Also, the fact that pixel spacing in the Z direction is much larger than in-plane makes a nodule less likely to be present across many slices. These considerations make a fully 3D approach less attractive for our purposes. Instead, we'll adapt our 3D data to be segmented a slice at a time, providing adjacent slices for context (for example, detecting that a bright lump is indeed a blood vessel gets much easier alongside neighboring slices). Since we're sticking with presenting the data in 2D, we'll use channels to represent the adjacent slices. Our treatment of the third dimension is similar to how we applied a fully connected model to images in chapter 7: the model will have to relearn the adjacency relationships we're throwing away along the axial direction, but that's not difficult for the model to accomplish, especially given the limited number of slices provided for context, owing to the small size of the target structures.
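The batchnorm-plus-sigmoid wrapping described earlier doesn't depend on the U-Net internals at all; here is a minimal, self-contained sketch of the same pattern, where a hypothetical TinyNet stands in for the real UNet class from util/unet.py:

```python
import torch
from torch import nn

class TinyNet(nn.Module):
    """Stand-in for the real UNet; same in_channels/n_classes calling convention."""
    def __init__(self, in_channels=7, n_classes=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, n_classes, kernel_size=3, padding=1)
    def forward(self, x):
        return self.conv(x)

class UNetWrapperSketch(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        # Batch normalization on the input means the dataset can skip normalization.
        self.input_batchnorm = nn.BatchNorm2d(kwargs['in_channels'])
        self.unet = TinyNet(**kwargs)   # the book's code passes kwargs to UNet(**kwargs)
        self.final = nn.Sigmoid()       # squashes unconstrained outputs into [0, 1]

    def forward(self, input_batch):
        bn_output = self.input_batchnorm(input_batch)
        un_output = self.unet(bn_output)
        return self.final(un_output)

wrapper = UNetWrapperSketch(in_channels=7, n_classes=1)
out = wrapper(torch.randn(2, 7, 64, 64))
print(out.shape)   # torch.Size([2, 1, 64, 64])
```

Because of the final sigmoid, every value of `out` lies in [0, 1] and can be read as a per-pixel nodule probability, regardless of what the inner network produced.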
13.5 Updating the dataset for segmentation

Our source data for this chapter remains unchanged: we're consuming CT scans and annotation data about them. But our model expects input, and will produce output, of a different form than previously. As we hint at in step 2B of figure 13.9, our previous dataset produced 3D data, but we now need to produce 2D data.

Figure 13.9 The outline of this chapter, with a focus on the changes needed for our segmentation dataset

The original U-Net implementation did not use padded convolutions, which means that while the output segmentation map was smaller than the input, every pixel of that output had a fully populated receptive field. None of the input pixels that fed into the determination of that output pixel were padded, fabricated, or otherwise incomplete. Thus the output of the original U-Net will tile perfectly, so it can be used with images of any size (except at the edges of the input image, where some context will be missing by definition).

There are two problems with taking the same pixel-perfect approach for our problem. The first is related to the interaction between convolution and downsampling, and the second is related to our data being three-dimensional.

13.5.1 U-Net has very specific input size requirements

The first issue is that the sizes of the input and output patches for U-Net are very specific.
In order to have the two-pixel loss per convolution line up evenly before and after downsampling (especially when considering the further convolutional shrinkage at that lower resolution), only certain input sizes will work. The U-Net paper used 572 × 572 image patches, which resulted in 388 × 388 output maps. The input images are bigger than our 512 × 512 CT slices, and the output is quite a bit smaller! That would mean any nodules near the edge of the CT scan slice wouldn't be segmented at all. Although this setup works well when dealing with very large images, it's not ideal for our use case.

We will address this issue by setting the padding flag of the U-Net constructor to True. This will mean we can use input images of any size, and we will get output of the same size. We may lose some fidelity near the edges of the image, since the receptive field of pixels located there will include regions that have been artificially padded, but that's a compromise we decide to live with.

13.5.2 U-Net trade-offs for 3D vs. 2D data

The second issue is that our 3D data doesn't line up exactly with U-Net's 2D expected input. Simply taking our 512 × 512 × 128 image and feeding it into a converted-to-3D U-Net class won't work, because we'll exhaust our GPU memory. Each image is 2⁹ by 2⁹ by 2⁷, with 2² bytes per voxel. The first layer of U-Net is 64 channels, or 2⁶. That's an exponent of 9 + 9 + 7 + 2 + 6 = 33, or 8 GB just for the first convolutional layer. There are two convolutional layers (16 GB); and then each downsampling halves the resolution but doubles the channels, which is another 2 GB for each layer after the first downsample (remember, halving the resolution results in one-eighth the data, since we're working with 3D data). So we've hit 20 GB before we even get to the second downsample, much less anything on the upsample side of the model or anything dealing with autograd.
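That back-of-the-envelope arithmetic is easy to check directly:

```python
# Memory for the activations of the first 3D convolutional layer:
# a 512 x 512 x 128 volume, float32 (4 bytes per voxel), with 64 output channels.
voxels = 2**9 * 2**9 * 2**7                 # 512 * 512 * 128 spatial positions
bytes_first_layer = voxels * 2**2 * 2**6    # 4 bytes/value, 64 channels
print(bytes_first_layer // 2**30, "GB")     # 8 GB, i.e. 2**33 bytes

# Two such layers (16 GB), then each post-downsample layer holds
# 8 GB * (1/8 resolution) * (2x channels) = 2 GB; two of those gives 20 GB.
total_gb = 2 * 8 + 2 * 2
print(total_gb, "GB before the second downsample")   # 20 GB
```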
NOTE There are a number of clever and innovative ways to get around these problems, and we in no way suggest that this is the only approach that will ever work.⁶ We do feel that this approach is one of the simplest that gets the job done to the level we need for our project in this book. We'd rather keep things simple so that we can focus on the fundamental concepts; the clever stuff can come later, once you've mastered the basics.

⁶ For example, Stanislav Nikolov et al., "Deep Learning to Achieve Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy," https://arxiv.org/pdf/1809.04430.pdf.

As anticipated, instead of trying to do things in 3D, we're going to treat each slice as a 2D segmentation problem and cheat our way around the issue of context in the third dimension by providing neighboring slices as separate channels. Instead of the traditional "red," "green," and "blue" channels that we're familiar with from photographic images, our main channels will be "two slices above," "one slice above," "the slice we're actually segmenting," "one slice below," and so on.

This approach isn't without trade-offs, however. We lose the direct spatial relationship between slices when they are represented as channels, as all channels will be linearly combined by the convolution kernels with no notion of them being one or two slices away, above or below. We also lose the wider receptive field in the depth dimension that would come from a true 3D segmentation with downsampling. Since CT slices are often thicker than the resolution in rows and columns, we do get a somewhat wider view than it seems at first, and this should be enough, considering that nodules typically span a limited number of slices.

Another aspect to consider, relevant for both the current and fully 3D approaches, is that we are now ignoring the exact slice thickness.
This is something our model will eventually have to learn to be robust against, by being presented with data with different slice spacings.

In general, there isn't an easy flowchart or rule of thumb that can give canned answers to questions about which trade-offs to make, or whether a given set of compromises gives up too much. Careful experimentation is key, and systematically testing hypothesis after hypothesis can help narrow down which changes and approaches are working well for the problem at hand. Although it's tempting to make a flurry of changes while waiting for the last set of results to compute, resist that impulse. That's important enough to repeat: do not test multiple modifications at the same time. There is far too high a chance that the changes will interact poorly with each other, and you'll be left without solid evidence that any one of them is worth investigating further. With that said, let's start building out our segmentation dataset.

13.5.3 Building the ground truth data

The first thing we need to address is that we have a mismatch between our human-labeled training data and the actual output we want to get from our model. We have annotated points, but we want a per-voxel mask that indicates whether any given voxel is part of a nodule. We'll have to build that mask ourselves from the data we have and then do some manual checking to make sure the routine that builds the mask is performing well.

Validating these manually constructed heuristics at scale can be difficult. We aren't going to attempt to do anything comprehensive when it comes to making sure each and every nodule is properly handled by our heuristics. If we had more resources, approaches like "collaborate with (or pay) someone to create and/or verify everything by hand" might be an option, but since this isn't a well-funded endeavor, we'll rely on checking a handful of samples and using a very simple "does the output look reasonable?" approach.
To that end, we'll design our approaches and our APIs to make it easy to investigate the intermediate steps that our algorithms are going through. While this might result in slightly clunky function calls returning huge tuples of intermediate values, being able to easily grab results and plot them in a notebook makes the clunk worth it.

BOUNDING BOXES

We are going to begin by converting the nodule locations that we have into bounding boxes that cover the entire nodule (note that we'll only do this for actual nodules). If we assume that the nodule locations are roughly centered in the mass, we can trace outward from that point in all three dimensions until we hit low-density voxels, indicating that we've reached normal lung tissue (which is mostly filled with air). Let's follow this algorithm in figure 13.10.

We start the origin of our search (O in the figure) at the voxel at the annotated center of our nodule. We then examine the density of the voxels adjacent to our origin on the column axis, marked with a question mark (?). Since both of the examined voxels contain dense tissue, shown here in lighter colors, we continue our search. After incrementing our column search distance to 2, we find that the left voxel has a density below our threshold, and so we stop our search at 2.

Next, we perform the same search in the row direction. Again, we start at the origin, and this time we search up and down. After our search distance becomes 3, we encounter a low-density voxel in both the upper and lower search locations. We only need one to stop our search!
Figure 13.10 An algorithm for finding a bounding box around a lung nodule (the column search stops at col_radius=2 and the row search at row_radius=3, giving final bounding-box slices (-2, +2) and (-3, +3))

We'll skip showing the search in the third dimension. Our final bounding box is five voxels wide and seven voxels tall. Here's what that looks like in code, for the index direction.

Listing 13.3 dsets.py:131, Ct.buildAnnotationMask

center_irc = xyz2irc(                # candidateInfo_tup here is the same as we've seen previously: as returned by getCandidateInfoList
    candidateInfo_tup.center_xyz,
    self.origin_xyz,
    self.vxSize_xyz,
    self.direction_a,
)
ci = int(center_irc.index)           # gets the center voxel indices, our starting point
cr = int(center_irc.row)
cc = int(center_irc.col)

index_radius = 2
try:                                 # the search described previously
    while self.hu_a[ci + index_radius, cr, cc] > threshold_hu and \
            self.hu_a[ci - index_radius, cr, cc] > threshold_hu:
        index_radius += 1
except IndexError:                   # the safety net for indexing beyond the size of the tensor
    index_radius -= 1

We first grab the center data and then do the search in a while loop. As a slight complication, our search might fall off the boundary of our tensor. We are not terribly concerned about that case and are lazy, so we just catch the index exception.⁷ Note that we stop incrementing the very approximate radius values after the density drops below the threshold, so our bounding box should contain a one-voxel border of low-density tissue (at least on one side; since nodules can be adjacent to regions like the lung wall, we have to stop searching in both directions when we hit air on either side). Since we check both center_index + index_radius and center_index - index_radius against that threshold, that one-voxel boundary will only exist on the edge closest to our nodule location. This is why we need those locations to be relatively centered. Since some nodules are adjacent to the boundary between the lung and denser tissue like muscle or bone, we can't trace each direction independently, as some edges would end up incredibly far away from the actual nodule.

⁷ The bug here is that the wraparound at 0 will go undetected. It does not matter much to us. As an exercise, implement proper bounds checking.
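The radius-expansion search can be exercised on a toy volume; this standalone sketch (a NumPy array standing in for the CT's hu_a, with made-up densities) mirrors the while/try structure above:

```python
import numpy as np

def grow_radius(hu_a, ci, cr, cc, threshold_hu=-700):
    """Expand a radius along the index axis until density drops below the
    threshold on either side (or we fall off the end of the array)."""
    index_radius = 2
    try:
        while hu_a[ci + index_radius, cr, cc] > threshold_hu and \
                hu_a[ci - index_radius, cr, cc] > threshold_hu:
            index_radius += 1
    except IndexError:
        index_radius -= 1   # the safety net: back off when we indexed past the end
    return index_radius

# Toy CT: air everywhere (-1000 HU) except a dense 7-voxel run along the index axis.
hu_a = np.full((16, 8, 8), -1000.0)
hu_a[5:12, 4, 4] = 40.0       # soft tissue, centered on index 8

print(grow_radius(hu_a, ci=8, cr=4, cc=4))   # 4: dense through +/-3, air at +/-4
```

Note how the returned radius includes the first low-density voxel, which is exactly the one-voxel border discussed above.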
We then repeat the same radius-expansion process with row_radius and col_radius (that code is omitted for brevity). Once that's done, we can set a box in our bounding-box mask array to True (we'll see the definition of boundingBox_a in just a moment; it's not surprising).

OK, let's wrap all this up in a function. We loop over all nodules. For each nodule, we perform the search shown earlier (which we elide from listing 13.4). Then, in a Boolean tensor boundingBox_a, we mark the bounding box we found. After the loop, we do a bit of cleanup by taking the intersection between the bounding-box mask and the tissue that's denser than our threshold of -700 HU (or 0.3 g/cc). That's going to clip off the corners of our boxes (at least, the ones not embedded in the lung wall) and make them conform to the contours of the nodule a bit better.

Listing 13.4 dsets.py:127, Ct.buildAnnotationMask

def buildAnnotationMask(self, positiveInfo_list, threshold_hu = -700):
    boundingBox_a = np.zeros_like(self.hu_a, dtype=np.bool)   # starts with an all-False tensor of the same size as the CT

    for candidateInfo_tup in positiveInfo_list:   # loops over the nodules; as a reminder that we're only looking at nodules, the variable is called positiveInfo_list
        # ... line 169
        boundingBox_a[
             ci - index_radius: ci + index_radius + 1,
             cr - row_radius: cr + row_radius + 1,
             cc - col_radius: cc + col_radius + 1] = True   # after we get the nodule radii (the search itself is left out), we mark the bounding box

    mask_a = boundingBox_a & (self.hu_a > threshold_hu)   # restricts the mask to voxels above our density threshold

    return mask_a

Let's take a look at figure 13.11 to see what these masks look like in practice. Additional images in full color can be found in the p2ch13_explore_data.ipynb notebook.

Figure 13.11 Three nodules from ct.positive_mask, highlighted in white
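The intersection step is easy to see on a toy array; this standalone NumPy sketch (made-up densities, not the book's code) shows the oversized box being trimmed back to the dense tissue:

```python
import numpy as np

hu_a = np.full((10, 10, 10), -1000.0)   # air everywhere
hu_a[4:7, 4:7, 4:7] = 100.0             # a small 3x3x3 dense blob

boundingBox_a = np.zeros_like(hu_a, dtype=bool)
boundingBox_a[3:8, 3:8, 3:8] = True     # a deliberately oversized 5x5x5 box

# Intersecting with the density threshold clips the box to the blob's contour.
mask_a = boundingBox_a & (hu_a > -700)

print(boundingBox_a.sum(), mask_a.sum())   # 125 27
```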
The bottom-right nodule mask demonstrates a limitation of our rectangular bounding-box approach by including a portion of the lung wall. It's certainly something we could fix, but since we're not yet convinced that's the best use of our time and attention, we'll let it remain as is for now.⁸ Next, we'll go about adding this mask to our CT class.

⁸ Fixing this issue would not do a great deal to teach you about PyTorch.

CALLING MASK CREATION DURING CT INITIALIZATION

Now that we can take a list of nodule information tuples and turn them into a CT-shaped binary "Is this a nodule?" mask, let's embed those masks into our CT object. First, we'll filter our candidates into a list containing only nodules, and then we'll use that list to build the annotation mask. Finally, we'll collect the set of unique array indexes that have at least one voxel of the nodule mask. We'll use this to shape the data we use for validation.

Listing 13.5 dsets.py:99, Ct.__init__

def __init__(self, series_uid):
    # ... line 116
    candidateInfo_list = getCandidateInfoDict()[self.series_uid]

    self.positiveInfo_list = [
        candidate_tup
        for candidate_tup in candidateInfo_list
        if candidate_tup.isNodule_bool      # filters for nodules
    ]
    self.positive_mask = self.buildAnnotationMask(self.positiveInfo_list)
    self.positive_indexes = (self.positive_mask.sum(axis=(1,2))   # gives us a 1D vector (over the slices) with the number of flagged voxels in each slice
                             .nonzero()[0].tolist())              # takes the indices of the mask slices with a nonzero count, which we make into a list

Keen eyes might have noticed the getCandidateInfoDict function.
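The positive_indexes computation is compact enough to verify on a toy mask (a standalone NumPy sketch):

```python
import numpy as np

# A fake 4-slice mask: only slices 1 and 3 contain any flagged voxels.
positive_mask = np.zeros((4, 8, 8), dtype=bool)
positive_mask[1, 2:4, 2:4] = True
positive_mask[3, 5, 5] = True

# Per-slice voxel counts, then the indices of the slices with a nonzero count.
positive_indexes = positive_mask.sum(axis=(1, 2)).nonzero()[0].tolist()
print(positive_indexes)   # [1, 3]
```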
The definition isn’t surprising; it’s just a reformulation of the same information as in the getCandidate- InfoList function, but pregrouped by series_uid . @functools.lru_cache(1) def getCandidateInfoDict(requireOnDisk_bool=True): candidateInfo_list = getCandidateInfoList(requireOnDisk_bool)candidateInfo_dict = {} for candidateInfo_tup in candidateInfo_list: candidateInfo_dict.setdefault(candidateInfo_tup.series_uid, []).append(candidateInfo_tup) return candidateInfo_dict 8Fixing this issue would not do a gr eat deal to teach you about PyTorch.Listing 13.5 dsets.py:99, Ct.__init__ Listing 13.6 dsets.py:87Filters for nodulesGives us a 1D vector (over the slices) with the number of voxels flagged in the mask in each slice Takes indices of the mask slices that have a nonzero count, which we make into a list This can be useful to keep Ct init from being a performance bottleneck.Takes the list of candidates for the series UID from the dict, defaulting to a fresh, empty list if we cannot find it. Then appends the present candidateInfo_tup to it." Deep-Learning-with-PyTorch.pdf,"376 CHAPTER 13 Using segmentation to find suspected nodules CACHING CHUNKS OF THE MASK IN ADDITION TO THE CT In earlier chapters, we cached chunks of CT centered around nodule candidates, since we didn’t want to have to read and parse all of a CT’s data every time we wanteda small chunk of the CT. We’ll want to do the same thing with our new positive _mask , so we need to also return it from our Ct.getRawCandidate function. This works out to an additional line of code and an edit to the return statement. def getRawCandidate(self, center_xyz, width_irc): center_irc = xyz2irc(center_xyz, self.origin_xyz, self.vxSize_xyz, self.direction_a) slice_list = [] # ... 
    # line 203
    ct_chunk = self.hu_a[tuple(slice_list)]
    pos_chunk = self.positive_mask[tuple(slice_list)]   # newly added

    return ct_chunk, pos_chunk, center_irc   # new value returned here

This will, in turn, be cached to disk by the getCtRawCandidate function, which opens the CT, gets the specified raw candidate including the nodule mask, and clips the CT values before returning the CT chunk, mask, and center information.

Listing 13.8 dsets.py:212

@raw_cache.memoize(typed=True)
def getCtRawCandidate(series_uid, center_xyz, width_irc):
    ct = getCt(series_uid)
    ct_chunk, pos_chunk, center_irc = ct.getRawCandidate(center_xyz,
                                                         width_irc)
    ct_chunk.clip(-1000, 1000, ct_chunk)
    return ct_chunk, pos_chunk, center_irc

The prepcache script precomputes and saves all these values for us, helping keep training quick.

CLEANING UP OUR ANNOTATION DATA

Another thing we're going to take care of in this chapter is doing some better screening of our annotation data. It turns out that several of the candidates listed in candidates.csv are present multiple times. To make it even more interesting, those entries are not exact duplicates of one another. Instead, it seems that the original human annotations weren't sufficiently cleaned before being entered in the file. They might be annotations of the same nodule on different slices, which might even have been beneficial for our classifier.

We'll do a bit of a hand wave here and provide a cleaned-up annotation.csv file. In order to fully walk through the provenance of this cleaned file, you'll need to know that the LUNA dataset is derived from another dataset called the Lung Image Database Consortium image collection (LIDC-IDRI)⁹ and includes detailed annotation information from multiple radiologists.
We’ve already done the legwork to get the original LIDC annotations, pull out the no dules, dedupe them, and save them to the file /data/part2/luna/annotations_with_malignancy.csv. With that file, we can update our getCandidateInfoList function to pull our nod- ules from our new annotations file. First, we loop over the new annotations for the actual nodules. Using the CSV reader,10 we need to convert th e data to the appropri- ate types before we stick them into our CandidateInfoTuple data structure. candidateInfo_list = [] with open('data/part2/luna/annotations_with_malignancy.csv', ""r"") as f: for row in list(csv.reader(f))[1:]: series_uid = row[0] annotationCenter_xyz = tuple([float(x) for x in row[1:4]])annotationDiameter_mm = float(row[4]) isMal_bool = {'False': False, 'True': True}[row[5]] candidateInfo_list.append( CandidateInfoTuple( True, True,isMal_bool, annotationDiameter_mm, series_uid,annotationCenter_xyz, ) ) Similarly, we loop over candid ates from candidates.csv as before, but this time we only use the non-nodules. As these are not nodu les, the nodule-specific information will just be filled with False and 0. with open('data/part2/luna/candidates.csv', ""r"") as f: for row in list(csv.reader(f))[1:]: series_uid = row[0] # ... line 72if not isNodule_bool: candidateInfo_list.append( CandidateInfoTuple( 9Samuel G. Armato 3rd et al., 2011, “The Lung Im age Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Refere nce Database of Lung Nodules on CT Scans,” Medical Physics 38, no. 2 (2011): 915-31, https://pubmed.ncbi.n lm.nih.gov/21452728/ . See also Bruce Vendt, LIDC-IDRI, Cancer Imaging Archive, http://mng.bz/mBO4 . 10If you do this a lot, the pandas library that just released 1.0 in 2020 is a great tool to make this faster. 
We stick with the CSV reader included in the standard Python distribution here.

                    False,   # isNodule_bool
                    False,   # hasAnnotation_bool
                    False,   # isMal_bool
                    0.0,
                    series_uid,
                    candidateCenter_xyz,
                )
            )

Other than the addition of the hasAnnotation_bool and isMal_bool flags (which we won't use in this chapter), the new annotations will slot in and be usable just like the old ones.

NOTE You might be wondering why we haven't discussed the LIDC before now. As it turns out, the LIDC has a large amount of tooling that's already been constructed around the underlying dataset, which is specific to the LIDC. You could even get ready-made masks from PyLIDC. That tooling presents a somewhat unrealistic picture of what sort of support a given dataset might have, since the LIDC is anomalously well supported. What we've done with the LUNA data is much more typical and provides for better learning, since we're spending our time manipulating the raw data rather than learning an API that someone else cooked up.

13.5.4 Implementing Luna2dSegmentationDataset

Compared to previous chapters, we are going to take a different approach to the training and validation split in this chapter. We will have two classes: one acting as a general base class suitable for validation data, and one subclassing the base for the training set, with randomization and a cropped sample.
While this approach is somewhat more complicated in some ways (the classes aren't perfectly encapsulated, for example), it actually simplifies the logic of selecting randomized training samples and the like. It also becomes extremely clear which code paths impact both training and validation, and which are isolated to training only. Without this, we found that some of the logic can become nested or intertwined in ways that make it hard to follow. This is important because our training data will look significantly different from our validation data!

NOTE Other class arrangements are also viable; we considered having two entirely separate Dataset subclasses, for example. Standard software engineering design principles apply, so try to keep your structure relatively simple, and try to not copy and paste code, but don't invent complicated frameworks to prevent having to duplicate three lines of code.

The data that we produce will be two-dimensional CT slices with multiple channels. The extra channels will hold adjacent slices of CT. Recall figure 4.2, shown here as figure 13.12; we can see that each slice of a CT scan can be thought of as a 2D grayscale image.

How we combine those slices is up to us. For the input to our classification model, we treated those slices as a 3D array of data and used 3D convolutions to process each sample. For our segmentation model, we are going to instead treat each slice as a single channel, and produce a multichannel 2D image. Doing so will mean that we are treating each slice of the CT scan as if it was a color channel of an RGB image, like we saw in figure 4.1, repeated here as figure 13.13. Each input slice of the CT will get stacked together and consumed just like any other 2D image.
The channels of our stacked CT image won't correspond to colors, but nothing about 2D convolutions requires the input channels to be colors, so it works out fine.

Figure 13.12 Each slice of a CT scan represents a different position in space.

Figure 13.13 Each channel of a photographic image represents a different color (red, green, blue).

For validation, we'll need to produce one sample per slice of CT that has an entry in the positive mask, for each validation CT we have. Since different CT scans can have different slice counts,¹¹ we're going to introduce a new function that caches the size of each CT scan and its positive mask to disk. We need this to be able to quickly construct the full size of a validation set without having to load each CT at Dataset initialization. We'll continue to use the same caching decorator as before. Populating this data will also take place during the prepcache.py script, which we must run once before we start any model training.

¹¹ Most CT scanners produce 512 × 512 slices, and we're not going to worry about the ones that do something different.

Listing 13.11 dsets.py:220

@raw_cache.memoize(typed=True)
def getCtSampleSize(series_uid):
    ct = Ct(series_uid)
    return int(ct.hu_a.shape[0]), ct.positive_indexes

The majority of the Luna2dSegmentationDataset.__init__ method is similar to what we've seen before. We have a new contextSlices_count parameter, as well as an augmentation_dict similar to what we introduced in chapter 12.

The handling of the flag indicating whether this is meant to be a training or validation set needs to change somewhat. Since we're no longer training on individual nodules, we will have to partition the list of series, taken as a whole, into training and validation sets.
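The claim that 2D convolutions don't care whether channels are colors or neighboring slices is a one-liner to check (a standalone sketch; the channel count of 7 matches a hypothetical contextSlices_count of 3):

```python
import torch
from torch import nn

# Seven "channels": the slice being segmented plus three context slices on each side.
stacked_slices = torch.randn(1, 7, 64, 64)

# A plain 2D convolution consumes them exactly as it would RGB channels.
conv = nn.Conv2d(in_channels=7, out_channels=8, kernel_size=3, padding=1)
out = conv(stacked_slices)
print(out.shape)   # torch.Size([1, 8, 64, 64])
```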
This means an entire CT scan, along with all the nodule candidates it contains, will be in either the training set or the validation set.

Listing 13.12 dsets.py:242, .__init__

if isValSet_bool:
    assert val_stride > 0, val_stride
    self.series_list = self.series_list[::val_stride]   # starting with a series list containing all our series, we keep only every val_stride-th element, starting with 0
    assert self.series_list
elif val_stride > 0:
    del self.series_list[::val_stride]   # if we are training, we delete every val_stride-th element instead
    assert self.series_list

Speaking of validation, we're going to have two different modes we can validate our training with. First, when fullCt_bool is True, we will use every slice in the CT for our dataset. This will be useful when we're evaluating end-to-end performance, since we need to pretend that we're starting off with no prior information about the CT. We'll use the second mode for validation during training, which is when we're limiting ourselves to only the CT slices that have a positive mask present.

As we now only want certain CT series to be considered, we loop over the series UIDs we want and get the total number of slices and the list of interesting ones.

Listing 13.13 dsets.py:250, .__init__

self.sample_list = []
for series_uid in self.series_list:
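The stride-based split is worth seeing on a toy series list (a standalone sketch with made-up UIDs): the two branches produce disjoint sets that together cover every series.

```python
series_list = [f"uid-{i}" for i in range(10)]
val_stride = 5

# Validation keeps every val_stride-th series, starting at index 0 …
val_list = series_list[::val_stride]

# … while training deletes exactly those elements, keeping the rest.
train_list = list(series_list)
del train_list[::val_stride]

print(val_list)                                  # ['uid-0', 'uid-5']
print(len(train_list), set(val_list) & set(train_list))   # 8 set()
```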
Once we have the set of series_uid values we'll be using, we can filter our candidateInfo_list to contain only nodule candidates with a series_uid that is included in that set of series. Additionally, we'll create another list that has only the positive candidates, so that during training, we can use those as our training samples.

Listing 13.14 dsets.py:261, .__init__

    self.candidateInfo_list = getCandidateInfoList()

    series_set = set(self.series_list)
    self.candidateInfo_list = [cit for cit in self.candidateInfo_list
                               if cit.series_uid in series_set]

    self.pos_list = [nt for nt in self.candidateInfo_list
                     if nt.isNodule_bool]

We make series_set a set for faster lookup and use it to filter out the candidates from series not in our set. For the data balancing yet to come, we want pos_list to hold the actual nodules.

Our __getitem__ implementation will also be a bit fancier, delegating a lot of the logic to a function that makes it easier to retrieve a specific sample. At the core of it, we'd like to retrieve our data in three different forms. First, we have the full slice of the CT, as specified by a series_uid and ct_ndx. Second, we have a cropped area around a nodule, which we'll use for training data (we'll explain in a bit why we're not using full slices). Finally, the DataLoader is going to ask for samples via an integer ndx, and the dataset will need to return the appropriate type based on whether it's training or validation. The base class or subclass __getitem__ functions will convert from the integer ndx to either the full slice or training crop, as appropriate.

As mentioned, our validation set's __getitem__ just calls another function to do the real work. Before that, it wraps the index around into the sample list in order to decouple the epoch size (given by the length of the dataset) from the actual number of samples.
    def __getitem__(self, ndx):
        series_uid, slice_ndx = self.sample_list[ndx % len(self.sample_list)]
        return self.getitem_fullSlice(series_uid, slice_ndx)

That was easy, but we still need to implement the interesting functionality from the getitem_fullSlice method.

    def getitem_fullSlice(self, series_uid, slice_ndx):
        ct = getCt(series_uid)
        ct_t = torch.zeros((self.contextSlices_count * 2 + 1, 512, 512))

        start_ndx = slice_ndx - self.contextSlices_count
        end_ndx = slice_ndx + self.contextSlices_count + 1
        for i, context_ndx in enumerate(range(start_ndx, end_ndx)):
            context_ndx = max(context_ndx, 0)
            context_ndx = min(context_ndx, ct.hu_a.shape[0] - 1)
            ct_t[i] = torch.from_numpy(ct.hu_a[context_ndx].astype(np.float32))
        ct_t.clamp_(-1000, 1000)

        pos_t = torch.from_numpy(ct.positive_mask[slice_ndx]).unsqueeze(0)

        return ct_t, pos_t, ct.series_uid, slice_ndx

Splitting the functions like this means we can always ask a dataset for a specific slice (or cropped training chunk, which we'll see in the next section) indexed by series UID and position. Only for the integer indexing do we go through __getitem__, which then gets a sample from the (shuffled) list.

Aside from ct_t and pos_t, the rest of the tuple we return is all information that we include for debugging and display. We don't need any of it for training.

13.5.5 Designing our training and validation data

Before we get into the implementation for our training dataset, we need to explain why our training data will look different from our validation data. Instead of the full CT slices, we're going to train on 64 × 64 crops around our positive candidates (the actually-a-nodule candidates). These 64 × 64 patches will be taken randomly from a 96 × 96 crop centered on the nodule. We will also include three slices of context in both directions as additional "channels" to our 2D segmentation.
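The boundary handling in getitem_fullSlice, where out-of-volume context indices are clamped so that the first or last slice gets duplicated, can be sketched on its own. The helper name context_indices and the ten-slice CT below are made up for illustration:

```python
def context_indices(slice_ndx, slice_count, context_slices=3):
    # Indices of the 2 * context_slices + 1 input channels for one sample,
    # clamped into [0, slice_count - 1] so that neighbors falling outside
    # the volume reuse the first or last slice (as in getitem_fullSlice).
    lo = slice_ndx - context_slices
    hi = slice_ndx + context_slices + 1
    return [min(max(ndx, 0), slice_count - 1) for ndx in range(lo, hi)]

# A hypothetical CT with 10 slices:
assert context_indices(0, 10) == [0, 0, 0, 0, 1, 2, 3]  # top: slice 0 repeated
assert context_indices(5, 10) == [2, 3, 4, 5, 6, 7, 8]  # interior: plain window
assert context_indices(9, 10) == [6, 7, 8, 9, 9, 9, 9]  # bottom: slice 9 repeated
```

Clamping rather than zero-padding keeps every input channel filled with plausible tissue values, at the cost of duplicated information at the ends of the scan.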
We’re doing this to make training more stable, and to converge more quickly. The only reason we know to do this is because we tried to train on whole CT slices, but wefound the results unsatisfactory. After some experimentation, we found that the 64 × 64 semirandom crop approach worked well, so we decided to use that for the book.Listing 13.15 dsets.py:281, .__getitem__ Listing 13.16 dsets.py:285, .getitem_fullSliceThe modulo operation does the wrapping. Preallocates the output When we reach beyond the bounds of the ct_a, we duplicate the first or last slice." Deep-Learning-with-PyTorch.pdf,"383 Updating the dataset for segmentation When you work on your own projects, you’ll need to do that kind of experimentation for yourself! We believe the whole-slice training was un stable essentially due to a class-balancing issue. Since each nodule is so small compared to the whole CT slice, we were right back in a needle-in-a-haystack situation si milar to the one we got out of in the last chapter, where our positive sa mples were swamped by the negatives. In this case, we’re talking about pixels rather than nodules, but the concept is the same. By training oncrops, we’re keeping the numb er of positive pixels the same and reducing the nega- tive pixel count by several orders of magnitude. Because our segmentation model is pixel-to-pixel and takes images of arbitrary size, we can get away with training and va lidating on samples with different dimen- sions. Validation uses the same convolutions with the same weights, just applied to a larger set of pixels (and so with fewer border pixels to fill in with edge data). One caveat to this approach is that sinc e our validation set contains orders of mag- nitude more negative pixels, our model will have a huge false positive rate during vali- dation. There are many more opportunit ies for our segmenta tion model to get tricked! It doesn’t help that we’re going to be pushing for high recall as well. 
We’ll dis- cuss that more in section 13.6.3. 13.5.6 Implementing Training Luna2dSegmentationDataset With that out of the way, let’s get back to the code. Here’s the training set’s __getitem__ . It looks just like the one for the validatio n set, except that we now sample from pos_list and call getItem_trainingCrop with the candidate info tupl e, since we need the series and the exact center location, not just the slice. def __getitem__(self, ndx): candidateInfo_tup = self.pos_list[ndx % len(self.pos_list)] return self.getitem_trainingCrop(candidateInfo_tup) To implement getItem_trainingCrop , we will use a getCtRawCandidate function similar to the one we used during classifica tion training. Here, we ’re passing in a dif- ferent size crop, but the function is un changed except for now returning an addi- tional array with a crop of the ct.positive_mask as well. We limit our pos_a to the center slice that we’re actually segmenting, and then con- struct our 64 × 64 random crops of the 96 × 96 we were given by getCtRawCandidate . Once we have those, we return a tuple with the same items as our validation dataset. 
Listing 13.18 dsets.py:324, .getitem_trainingCrop (the __getitem__ above is listing 13.17, dsets.py:320)

    def getitem_trainingCrop(self, candidateInfo_tup):
        ct_a, pos_a, center_irc = getCtRawCandidate(
            candidateInfo_tup.series_uid,
            candidateInfo_tup.center_xyz,
            (7, 96, 96),
        )
        pos_a = pos_a[3:4]

        row_offset = random.randrange(0, 32)
        col_offset = random.randrange(0, 32)
        ct_t = torch.from_numpy(ct_a[:, row_offset:row_offset+64,
                                     col_offset:col_offset+64]).to(torch.float32)
        pos_t = torch.from_numpy(pos_a[:, row_offset:row_offset+64,
                                       col_offset:col_offset+64]).to(torch.long)

        slice_ndx = center_irc.index

        return ct_t, pos_t, candidateInfo_tup.series_uid, slice_ndx

The getCtRawCandidate call gets the candidate with a bit of extra surrounding tissue.

You might have noticed that data augmentation is missing from our dataset implementation. We're going to handle that a little differently this time around: we'll augment our data on the GPU.

13.5.7 Augmenting on the GPU

One of the key concerns when it comes to training a deep learning model is avoiding bottlenecks in your training pipeline. Well, that's not quite true—there will always be a bottleneck.12 The trick is to make sure the bottleneck is at the resource that's the most expensive or difficult to upgrade, and that your usage of that resource isn't wasteful. Some common places to see bottlenecks are as follows:

- In the data-loading pipeline, either in raw I/O or in decompressing data once it's in RAM. We addressed this with our diskcache library usage.
- In CPU preprocessing of the loaded data. This is often data normalization or augmentation.
- In the training loop on the GPU. This is typically where we want our bottleneck to be, since total deep learning system costs for GPUs are usually higher than for storage or CPU.
- Less commonly, the bottleneck can sometimes be the memory bandwidth between CPU and GPU.
This implies that the GPU isn’t doing much work compared tothe data size that’s being sent in. Since GPUs can be 50 times faster than CPUs when working on tasks that fit GPUs well, it often makes sense to move those ta sks to the GPU from the CPU in cases where CPU usage is becoming high. This is especial ly true if the data gets expanded during this processing; by moving the smaller inpu t to the GPU first, the expanded data is kept local to the GPU, and le ss memory bandwidth is used. In our case, we’re going to move data augmentation to the GPU. This will keep our CPU usage light, and the GPU will easily be able to accommodate the additional workload. Far better to have the GPU busy with a small bit of extra work than idle waiting for the CPU to struggle through the augmentation process. 12Otherwise, your model would train instantly!Taking a one-element slice keeps the third dimension, which will be the (single) output channel.With two random numbers between 0 and 31, we crop both CT and mask." Deep-Learning-with-PyTorch.pdf,"385 Updating the dataset for segmentation We’ll accomplish this by using a second mo del, similar to all the other subclasses of nn.Module we’ve seen so far in this book. The main difference is that we’re not inter- ested in backpropagating gradie nts through the model, and the forward method will be doing decidedly different things. There will be some slight modifications to the actual augmentation routines since we’re wo rking with 2D data for this chapter, but otherwise, the augmentation will be very similar to what we sa w in chapter 12. The model will consume tensors and produce differe nt tensors, just like the other models we’ve implemented. Our model’s __init__ takes the same data augmentation arguments— flip , offset , and so on—that we used in the la st chapter, and assigns them to self . 
Listing 13.19 model.py:56, class SegmentationAugmentation

    class SegmentationAugmentation(nn.Module):
        def __init__(
                self, flip=None, offset=None, scale=None, rotate=None, noise=None
        ):
            super().__init__()

            self.flip = flip
            self.offset = offset
            # ... line 64

Our augmentation forward method takes the input and the label, and calls out to build the transform_t tensor that will then drive our affine_grid and grid_sample calls. Those calls should feel very familiar from chapter 12.

Listing 13.20 model.py:68, SegmentationAugmentation.forward

    def forward(self, input_g, label_g):
        transform_t = self._build2dTransformMatrix()
        transform_t = transform_t.expand(input_g.shape[0], -1, -1)
        transform_t = transform_t.to(input_g.device, torch.float32)
        affine_t = F.affine_grid(transform_t[:,:2],
                input_g.size(), align_corners=False)

        augmented_input_g = F.grid_sample(input_g,
                affine_t, padding_mode='border',
                align_corners=False)
        augmented_label_g = F.grid_sample(label_g.to(torch.float32),
                affine_t, padding_mode='border',
                align_corners=False)

        if self.noise:
            noise_t = torch.randn_like(augmented_input_g)
            noise_t *= self.noise

            augmented_input_g += noise_t

        return augmented_input_g, augmented_label_g > 0.5

Note that we're augmenting 2D data here. The first dimension of the transformation is the batch, but we only want the first two rows of the 3 × 3 matrices per batch item. We need the same transformation applied to CT and mask, so we use the same grid. Because grid_sample only works with floats, we convert the label before sampling. Just before returning, we convert the mask back to Booleans by comparing to 0.5, since the interpolation that grid_sample does results in fractional values.

Now that we know what we need to do with transform_t to get our data out, let's take a look at the _build2dTransformMatrix function that actually creates the transformation matrix we use.
Listing 13.21 model.py:90, ._build2dTransformMatrix

    def _build2dTransformMatrix(self):
        transform_t = torch.eye(3)

        for i in range(2):
            if self.flip:
                if random.random() > 0.5:
                    transform_t[i,i] *= -1
        # ... line 108
        if self.rotate:
            angle_rad = random.random() * math.pi * 2
            s = math.sin(angle_rad)
            c = math.cos(angle_rad)

            rotation_t = torch.tensor([
                [c, -s, 0],
                [s, c, 0],
                [0, 0, 1]])

            transform_t @= rotation_t

        return transform_t

We create a 3 × 3 matrix, but we will drop the last row later; again, we're augmenting 2D data here. We take a random angle in radians, in the range 0..2π, build the rotation matrix for the 2D rotation by that angle in the first two dimensions, and apply the rotation to the transformation matrix using the Python matrix multiplication operator.

Other than the slight differences to deal with 2D data, our GPU augmentation code looks very similar to our CPU augmentation code. That's great, because it means we're able to write code that doesn't have to care very much about where it runs. The primary difference isn't in the core implementation: it's how we wrapped that implementation into an nn.Module subclass. While we've been thinking about models as exclusively a deep learning tool, this shows us that with PyTorch, tensors can be used quite a bit more generally. Keep this in mind when you start your next project—the range of things you can accomplish with a GPU-accelerated tensor is pretty large!

13.6 Updating the training script for segmentation

We have a model. We have data. We need to use them, and you won't be surprised when step 2C of figure 13.14 suggests we should train our new model with the new data.

Figure 13.14 The outline of this chapter, with a focus on the changes needed for our training loop (1. Segmentation UNet; 2. Update: 2a. Model, 2b. Dataset, 2c. Training; 3. Results)

To be more precise about the process of training our model, we will update three things affecting the outcome from the training code we got in chapter 12:

- We need to instantiate the new model (unsurprisingly).
- We will introduce a new loss: the Dice loss.
- We will also look at an optimizer other than the venerable SGD we've used so far. We'll stick with a popular one and use Adam.

But we will also step up our bookkeeping, by

- Logging images for visual inspection of the segmentation to TensorBoard
- Performing more metrics logging in TensorBoard
- Saving our best model based on the validation

Overall, the training script p2ch13/training.py is even more similar to what we used for classification training in chapter 12 than the adapted code we've seen so far. Any significant changes will be covered here in the text, but be aware that some of the minor tweaks are skipped. For the full story, check the source.

13.6.1 Initializing our segmentation and augmentation models

Our initModel method is very unsurprising. We are using the UNetWrapper class and giving it our configuration parameters, which we will look at in detail shortly. Also, we now have a second model for augmentation. Just like before, we can move the model to the GPU if desired and possibly set up multi-GPU training using DataParallel. We skip these administrative tasks here.

Listing 13.22 training.py:133, .initModel

    def initModel(self):
        segmentation_model = UNetWrapper(
            in_channels=7,
            n_classes=1,
            depth=3,
            wf=4,
            padding=True,
            batch_norm=True,
            up_mode='upconv',
        )
        augmentation_model = SegmentationAugmentation(**self.augmentation_dict)
        # ... line 154
        return segmentation_model, augmentation_model

For input into UNet, we've got seven input channels: 3 + 3 context slices, and 1 slice that is the focus for what we're actually segmenting. We have one output class indicating whether this voxel is part of a nodule. The depth parameter controls how deep the U goes; each downsampling operation adds 1 to the depth. Using wf=4 means the first layer will have 2**wf == 16 filters, which doubles with each downsampling. We want the convolutions to be padded so that we get an output image the same size as our input. We also want batch normalization inside the network after each activation function, and our upsampling function should be an upconvolution layer, as implemented by nn.ConvTranspose2d (see util/unet.py, line 123).

13.6.2 Using the Adam optimizer

The Adam optimizer (https://arxiv.org/abs/1412.6980) is an alternative to using SGD when training our models. Adam maintains a separate learning rate for each parameter and automatically updates that learning rate as training progresses. Due to these automatic updates, we typically won't need to specify a non-default learning rate when using Adam, since it will quickly determine a reasonable learning rate by itself.

Here's how we instantiate Adam in code.

    def initOptimizer(self):
        return Adam(self.segmentation_model.parameters())

It's generally accepted that Adam is a reasonable optimizer to start most projects with.13 There is often a configuration of stochastic gradient descent with Nesterov momentum that will outperform Adam, but finding the correct hyperparameters to use when initializing SGD for a given project can be difficult and time consuming. There have been a large number of variations on Adam—AdaMax, RAdam, Ranger, and so on—that each have strengths and weaknesses. Delving into the details of those is outside the scope of this book, but we think that it's important to know that those alternatives exist. We'll use Adam in this chapter.
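To illustrate the kind of per-parameter bookkeeping Adam does, here is a simplified scalar sketch of the update rule from the Adam paper in plain Python. This is not PyTorch's implementation; adam_step and the toy minimization are invented for illustration.

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update for a single scalar parameter: exponentially decaying
    # moving averages of the gradient (m) and squared gradient (v),
    # bias-corrected by the 1-based step count t. The effective step size
    # stays near lr regardless of the gradient's raw scale.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return param - lr * m_hat / (math.sqrt(v_hat) + eps), m, v

# Minimize f(x) = x**2 (gradient 2x) starting from x = 1.0.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 5001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
```

Because the step is normalized by the running gradient magnitude, the same default lr works across wildly different gradient scales, which is exactly why we don't bother tuning a learning rate here.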
Listing 13.23 training.py:156, .initOptimizer

(Footnote 13: See http://cs231n.github.io/neural-networks-3.)

13.6.3 Dice loss

The Sørensen-Dice coefficient (https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient), also known as the Dice loss, is a common loss metric for segmentation tasks. One advantage of using Dice loss over a per-pixel cross-entropy loss is that Dice handles the case where only a small portion of the overall image is flagged as positive. As we recall from section 11.10 in chapter 11, unbalanced training data can be problematic when using cross-entropy loss. That's exactly the situation we have here—most of a CT scan isn't a nodule. Luckily, with Dice, that won't pose as much of a problem.

The Sørensen-Dice coefficient is based on the ratio of correctly segmented pixels to the sum of the predicted and actual pixels. Those ratios are laid out in figure 13.15. On the left, we see an illustration of the Dice score. It is twice the joint area (true positives, striped) divided by the sum of the entire predicted area and the entire ground-truth marked area (the overlap being counted twice). On the right are two prototypical examples of high agreement/high Dice score and low agreement/low Dice score.

That might sound familiar; it's the same ratio that we saw in chapter 12. We're basically going to be using a per-pixel F1 score!

NOTE This is a per-pixel F1 score where the "population" is one image's pixels. Since the population is entirely contained within one training sample, we can use it for training directly. In the classification case, the F1 score is not calculable over a single minibatch, and, hence, we cannot use it for training directly.

Figure 13.15 The ratios that make up the Dice score: 2 × correct (overlap) divided by predicted + actual, with example scores of 0.9 for high agreement and 0.1 for low agreement

Since our label_g is effectively a Boolean mask, we can multiply it with our predictions to get our true positives. Note that we aren't treating prediction_devtensor as a Boolean here. A loss defined with it wouldn't be differentiable. Instead, we're replacing the number of true positives with the sum of the predicted values for the pixels where the ground truth is 1. This converges to the same thing as the predicted values approach 1, but sometimes the predicted values will be uncertain predictions in the 0.4 to 0.6 range. Those undecided values will contribute roughly the same amount to our gradient updates, no matter which side of 0.5 they happen to fall on. A Dice coefficient utilizing continuous predictions is sometimes referred to as soft Dice.

There's one tiny complication. Since we want a loss to minimize, we're going to take our ratio and subtract it from 1. Doing so will invert the slope of our loss function so that in the high-overlap case, our loss is low; and in the low-overlap case, it's high. Here's what that looks like in code.

    def diceLoss(self, prediction_g, label_g, epsilon=1):
        diceLabel_g = label_g.sum(dim=[1,2,3])
        dicePrediction_g = prediction_g.sum(dim=[1,2,3])
        diceCorrect_g = (prediction_g * label_g).sum(dim=[1,2,3])

        diceRatio_g = (2 * diceCorrect_g + epsilon) \
            / (dicePrediction_g + diceLabel_g + epsilon)

        return 1 - diceRatio_g

We're going to update our computeBatchLoss function to call self.diceLoss. Twice. We'll compute the normal Dice loss for the training sample, as well as for only the pixels included in label_g.
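The soft Dice ratio described above is easy to sanity-check on tiny inputs. This is a plain-Python sketch of the same arithmetic as diceLoss, using flat pixel lists instead of batched tensors; dice_loss is a made-up name for illustration:

```python
def dice_loss(prediction, label, epsilon=1):
    # Soft Dice on flat lists of per-pixel values: predictions are floats
    # in [0, 1], labels are 0/1. Loss = 1 - 2*correct / (predicted + actual),
    # with epsilon guarding the case of neither predictions nor labels.
    correct = sum(p * l for p, l in zip(prediction, label))
    ratio = (2 * correct + epsilon) / (sum(prediction) + sum(label) + epsilon)
    return 1 - ratio

label = [1, 1, 0, 0]
assert dice_loss(label, label) == 0.0                    # perfect overlap: loss 0
assert abs(dice_loss([0, 0, 1, 1], label) - 0.8) < 1e-9  # disjoint masks: high loss
assert dice_loss([0, 0, 0, 0], [0, 0, 0, 0]) == 0.0      # epsilon handles empty masks
```

Note that with epsilon=1 and only four pixels, the completely wrong prediction scores 0.8 rather than exactly 1; the epsilon smoothing matters less as the pixel count grows.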
By multiplying our predictions (which, remember, are floating-point values) times the label (which are effectively Booleans), we'll get pseudo-predictions that got every negative pixel "exactly right" (since all the values for those pixels are multiplied by the false-is-zero values from label_g). The only pixels that will generate loss are the false negative pixels (everything that should have been predicted true, but wasn't). This will be helpful, since recall is incredibly important for our overall project; after all, we can't classify tumors properly if we don't detect them in the first place!

(In the diceLoss listing above, listing 13.24 at training.py:315, the sums run over everything except the batch dimension, giving the positively labeled, the softly positively detected, and the softly correct positives per batch item. To avoid problems when we accidentally have neither predictions nor labels, we add epsilon to both numerator and denominator, and taking 1 minus the Dice ratio makes lower loss better.)

Listing 13.25 training.py:282, .computeBatchLoss

    def computeBatchLoss(self, batch_ndx, batch_tup, batch_size,
                         metrics_g, classificationThreshold=0.5):
        input_t, label_t, series_list, _slice_ndx_list = batch_tup

        input_g = input_t.to(self.device, non_blocking=True)
        label_g = label_t.to(self.device, non_blocking=True)

        if self.segmentation_model.training and self.augmentation_dict:
            input_g, label_g = self.augmentation_model(input_g, label_g)

        prediction_g = self.segmentation_model(input_g)

        diceLoss_g = self.diceLoss(prediction_g, label_g)
        fnLoss_g = self.diceLoss(prediction_g * label_g, label_g)
        # ... line 313
        return diceLoss_g.mean() + fnLoss_g.mean() * 8

The first two .to calls transfer the tensors to the GPU. Let's talk a bit about what we're doing with our return statement of diceLoss_g.mean() + fnLoss_g.mean() * 8.

LOSS WEIGHTING

In chapter 12, we discussed shaping our dataset so that our classes were not wildly imbalanced.
That helped training converge, since the positive and negative samples present in each batch were able to counteract the general pull of the other, and the model had to learn to discriminate between them to improve. We're approximating that same balance here by cropping down our training samples to include fewer non-positive pixels; but it's incredibly important to have high recall, and we need to make sure that as we train, we're providing a loss that reflects that fact.

We are going to have a weighted loss that favors one class over the other. What we're saying by multiplying fnLoss_g by 8 is that getting the entire population of our positive pixels right is eight times more important than getting the entire population of negative pixels right (nine, if you count the one in diceLoss_g). Since the area covered by the positive mask is much, much smaller than the whole 64 × 64 crop, that also means each individual positive pixel wields that much more influence when it comes to backpropagation.

We're willing to trade away many correctly predicted negative pixels in the general Dice loss to gain one correct pixel in the false negative loss. Since the general Dice loss is a strict superset of the false negative loss, the only correct pixels available to make that trade are ones that start as true negatives (all of the true positive pixels are already included in the false negative loss, so there's no trade to be made). Since we're willing to sacrifice huge swaths of true negative pixels in the pursuit of having better recall, we should expect a large number of false positives in general.14 We're doing this because recall is very, very important to our use case, and we'd much rather have some false positives than even a single false negative.

We should note that this approach only works when using the Adam optimizer. When using SGD, the push to overpredict would lead to every pixel coming back as positive.
Adam’s ability to fine-tune the le arning rate means stressing the false nega- tive loss doesn’t become overpowering. 14Roxie would be proud!Augments as needed if we are training. In validation, we would skip this. Runs the segmentation model … … and applies our fine Dice loss Oops. What is this?" Deep-Learning-with-PyTorch.pdf,"392 CHAPTER 13 Using segmentation to find suspected nodules COLLECTING METRICS Since we’re going to purposefully skew our nu mbers for better recall , let’s see just how tilted things will be. In our classification computeBatchLoss , we compute various per- sample values that we used for metrics and the like. We also compute similar values for the overall segmentation results. These tr ue positive and other metrics were previ- ously computed in logMetrics , but due to the size of the result data (recall that each single CT slice from the validation set is a quarter-million pixels!), we need to com- pute these summary stats live in the computeBatchLoss function. start_ndx = batch_ndx * batch_size end_ndx = start_ndx + input_t.size(0) with torch.no_grad(): predictionBool_g = (prediction_g[:, 0:1] > classificationThreshold).to(torch.float32) tp = ( predictionBool_g * label_g).sum(dim=[1,2,3]) fn = ((1 - predictionBool_g) * label_g).sum(dim=[1,2,3]) fp = ( predictionBool_g * (~label_g)).sum(dim=[1,2,3]) metrics_g[METRICS_LOSS_NDX, start_ndx:end_ndx] = diceLoss_g metrics_g[METRICS_TP_NDX, start_ndx:end_ndx] = tp metrics_g[METRICS_FN_NDX, start_ndx:end_ndx] = fnmetrics_g[METRICS_FP_NDX, start_ndx:end_ndx] = fp As we discussed at the beginni ng of this section, we can compute our true positives and so on by multiplying our prediction (or it s negation) and our label (or its negation) together. Since we’re not as worried about th e exact values of our predictions here (it doesn’t really matter if we flag a pixel as 0. 
6 or 0.9—as long as it’s over the threshold, we’ll call it part of a nodule ca ndidate), we are going to create predictionBool_g by comparing it to our threshold of 0.5. 13.6.4 Getting images into TensorBoard One of the nice things about working on segmentation tasks is that the output is easily represented visually. Being able to eyeball ou r results can be a huge help for determin- ing whether a model is progressi ng well (but perhaps needs mo re training), or if it has gone off the rails (so we need to stop wast ing our time with furt her training). There are many ways we could package up our resu lts as images, and many ways we could dis- play them. TensorBoard has great support fo r this kind of data, and we already have TensorBoard SummaryWriter instances integrated with our training runs, so we’re going to use TensorBoard. Let’s see what it takes to get everything hooked up. We’ll add a logImages function to our main application class and call it with both our training and validation data loaders. While we are at it, we will make anotherListing 13.26 training.py:297, .computeBatchLoss We threshold the prediction to get “hard” Dice but convert to float for the later multiplication.Computing true positives, false positives, and false negatives is similar to what we did when computing the Dice loss. We store our metrics to a large tensor for future reference. This is per batch item rather than averaged over the batch." Deep-Learning-with-PyTorch.pdf,"393 Updating the training script for segmentation change to our training loop : we’re only going to perform validation and image log- ging on the first and then every fifth epoc h. We do this by checking the epoch num- ber against a new constant, validation_cadence . 
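The thresholded bookkeeping in listing 13.26 can be sketched with plain Python on a handful of pixels. pixel_counts is a hypothetical helper written for this illustration, not the project's code:

```python
def pixel_counts(prediction, label, threshold=0.5):
    # Per-pixel confusion counts, mirroring the thresholded tp/fn/fp
    # arithmetic above: prediction holds floats, label holds booleans.
    pred_bool = [p > threshold for p in prediction]
    tp = sum(p and l for p, l in zip(pred_bool, label))
    fn = sum((not p) and l for p, l in zip(pred_bool, label))
    fp = sum(p and (not l) for p, l in zip(pred_bool, label))
    return tp, fn, fp

prediction = [0.9, 0.2, 0.7, 0.1]
label = [True, True, False, False]
# pixel 0 is a true positive, pixel 1 a false negative,
# pixel 2 a false positive, and pixel 3 a true negative.
assert pixel_counts(prediction, label) == (1, 1, 1)
```

Exactly as in the listing, the 0.2 prediction on a labeled pixel is counted as a miss no matter how close to the threshold it was: only which side of 0.5 it falls on matters for the metrics.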
When training, we’re trying to balance a few things: Getting a rough idea of how our model is training without having to wait very long Spending the bulk of our GPU cycles training, rather than validating Making sure we are st ill performing well on the validation set The first point means we need to have relati vely short epochs so that we get to call logMetrics more often. The second, however, me ans we want to train for a relatively long time before calling doValidation . The third means we need to call doValidation regularly, rather than once at the end of tr aining or something un workable like that. By only doing validation on the first and then every fifth epoch, we can meet all of those goals. We get an early signal of training progress, spend th e bulk of our time training, and have periodic check-ins with the validation set as we go along. def main(self): # ... line 217 self.validation_cadence = 5for epoch_ndx in range(1, self.cli_args.epochs + 1): # ... line 228 trnMetrics_t = self.doTraining(epoch_ndx, train_dl)self.logMetrics(epoch_ndx, 'trn', trnMetrics_t) if epoch_ndx == 1 or epoch_ndx % self.validation_cadence == 0: # ... line 239 self.logImages(epoch_ndx, 'trn', train_dl) self.logImages(epoch_ndx, 'val', val_dl) There isn’t a single right way to structure our image logging. We are going to grab a handful of CTs from both the training and validation sets. For each CT, we will select 6 evenly spaced slices, end to end, and sh ow both the ground truth and our model’s output. We chose 6 slices only because Tens orBoard will show 12 images at a time, and we can arrange the browser window to have a row of label images over the model out- put. Arranging things this way makes it easy to visually compare the two, as we can see in figure 13.16. Also note the small slider-dot on the prediction images. That slider will allow us to view previous versions of the images with the same label (such as val/0_prediction_3, but at an earlier epoch). 
Being able to see how our segmentation output changes over time can be useful when we're trying to debug something or make tweaks to achieve a specific result. As training progresses, TensorBoard will limit the number of images viewable from the slider to 10, probably to avoid overwhelming the browser with a huge number of images.

Figure 13.16 Top row: label data for training. Bottom row: output from the segmentation. (Panel labels: positive label, positive prediction, false positives, no label, epoch slider.)

(In listing 13.27, training.py:210, SegmentationTrainingApp.main: the outermost loop runs over the epochs; we train for one epoch and log the scalar metrics from training; then, only on every validation-cadence-th epoch, we validate the model and log images.)

The code that produces this output starts by getting 12 series from the pertinent data loader and 6 images from each series.

Listing 13.28 training.py:326, .logImages

    def logImages(self, epoch_ndx, mode_str, dl):
        self.segmentation_model.eval()

        images = sorted(dl.dataset.series_list)[:12]
        for series_ndx, series_uid in enumerate(images):
            ct = getCt(series_uid)

            for slice_ndx in range(6):
                ct_ndx = slice_ndx * (ct.hu_a.shape[0] - 1) // 5
                sample_tup = dl.dataset.getitem_fullSlice(series_uid, ct_ndx)

                ct_t, label_t, series_uid, ct_ndx = sample_tup

We set the model to eval and take (the same) 12 CTs by bypassing the data loader and using the dataset directly; the series list might be shuffled, so we sort.

After that, we feed ct_t into the model. This looks very much like what we see in computeBatchLoss; see p2ch13/training.py for details if desired. Once we have prediction_a, we need to build an image_a that will hold RGB values to display. We're using np.float32 values, which need to be in a range from 0 to 1.
395 Updating the training script for segmentation

Our approach will cheat a little by adding together various images and masks to get data in the range 0 to 2, and then multiplying the entire array by 0.5 to get it back into the right range.

Listing 13.29 training.py:346, .logImages

    ct_t[:-1,:,:] /= 2000
    ct_t[:-1,:,:] += 0.5

    ctSlice_a = ct_t[dl.dataset.contextSlices_count].numpy()
    image_a = np.zeros((512, 512, 3), dtype=np.float32)
    image_a[:,:,:] = ctSlice_a.reshape((512,512,1))           # CT intensity is assigned to all RGB channels to provide a grayscale base image.
    image_a[:,:,0] += prediction_a & (1 - label_a)            # False positives are flagged as red and overlaid on the image.
    image_a[:,:,0] += (1 - prediction_a) & label_a
    image_a[:,:,1] += ((1 - prediction_a) & label_a) * 0.5    # False negatives are orange.
    image_a[:,:,1] += prediction_a & label_a                  # True positives are green.

    image_a *= 0.5
    image_a.clip(0, 1, image_a)

Our goal is to have a grayscale CT at half intensity, overlaid with predicted-nodule (or, more correctly, nodule-candidate) pixels in various colors. We're going to use red for all pixels that are incorrect (false positives and false negatives). This will mostly be false positives, which we don't care about too much (since we're focused on recall). 1 - label_a inverts the label, and that multiplied by prediction_a gives us only the predicted pixels that aren't in a candidate nodule. False negatives get a half-strength mask added to green, which means they will show up as orange (1.0 red and 0.5 green renders as orange in RGB). Every correctly predicted pixel inside a nodule is set to green; since we got those pixels right, no red will be added, and so they will render as pure green.

After that, we renormalize our data to the 0…1 range and clamp it (in case we start displaying augmented data here, which would cause speckles when the noise was outside our expected CT range). All that remains is to save the data to TensorBoard.
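The color-coding arithmetic can be checked on a tiny example. The following sketch (our own toy 2 × 2 masks, not the book's code; the grayscale base and final halving are omitted for clarity) applies the same mask arithmetic and confirms that false positives come out red, false negatives orange, and true positives green:

```python
import numpy as np

# Toy 2x2 binary masks: one TP, one FP, one FN, and one TN pixel.
prediction_a = np.array([[1, 1], [0, 0]], dtype=np.uint8)
label_a      = np.array([[1, 0], [1, 0]], dtype=np.uint8)

image_a = np.zeros((2, 2, 3), dtype=np.float32)
image_a[:, :, 0] += prediction_a & (1 - label_a)          # false positives -> red
image_a[:, :, 0] += (1 - prediction_a) & label_a          # false negatives -> red ...
image_a[:, :, 1] += ((1 - prediction_a) & label_a) * 0.5  # ... plus half green = orange
image_a[:, :, 1] += prediction_a & label_a                # true positives -> green

print(image_a[0, 0])  # TP: [0. 1. 0.], pure green
print(image_a[0, 1])  # FP: [1. 0. 0.], pure red
print(image_a[1, 0])  # FN: [1. 0.5 0.], orange
```

Note that a true negative pixel stays all zeros: it picks up nothing beyond the grayscale base in the real code.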
Listing 13.30 training.py:361, .logImages

    writer = getattr(self, mode_str + '_writer')
    writer.add_image(
        f'{mode_str}/{series_ndx}_prediction_{slice_ndx}',
        image_a,
        self.totalTrainingSamples_count,
        dataformats='HWC',
    )

This looks very similar to the writer.add_scalar calls we've seen before. The dataformats='HWC' argument tells TensorBoard that the order of axes in our image has our RGB channels as the third axis. Recall that our network layers often specify outputs that are B × C × H × W, and we could put that data directly into TensorBoard as well if we specified 'CHW'.

We also want to save the ground truth that we're using to train, which will form the top row of our TensorBoard CT slices we saw earlier in figure 13.16. The code for that is similar enough to what we just saw that we'll skip it. Again, check p2ch13/training.py if you want the details.

13.6.5 Updating our metrics logging

To give us an idea how we are doing, we compute per-epoch metrics: in particular, true positives, false negatives, and false positives. This is what the following listing does. Nothing here will be particularly surprising.

Listing 13.31 training.py:400, .logMetrics

    sum_a = metrics_a.sum(axis=1)
    allLabel_count = sum_a[METRICS_TP_NDX] + sum_a[METRICS_FN_NDX]

    metrics_dict['percent_all/tp'] = \
        sum_a[METRICS_TP_NDX] / (allLabel_count or 1) * 100
    metrics_dict['percent_all/fn'] = \
        sum_a[METRICS_FN_NDX] / (allLabel_count or 1) * 100
    metrics_dict['percent_all/fp'] = \
        sum_a[METRICS_FP_NDX] / (allLabel_count or 1) * 100   # Can be larger than 100% since we're comparing to the total number of pixels labeled as candidate nodules, which is a tiny fraction of each image

We are going to start scoring our models as a way to determine whether a particular training run is the best we've seen so far.
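Because every percentage is divided by the number of labeled pixels (TP + FN) rather than by all pixels, percent_all/fp has no upper bound. A standalone sketch of the same computation, with made-up pixel counts (chosen to echo the epoch-1 training numbers) and plain ints instead of the metrics tensor:

```python
def percent_metrics(tp, fn, fp):
    """Per-epoch percentages as in logMetrics. The denominator is the
    labeled-pixel count, guarded against zero via the `(x or 1)` idiom."""
    all_label_count = tp + fn
    denom = all_label_count or 1
    return {
        'percent_all/tp': tp / denom * 100,
        'percent_all/fn': fn / denom * 100,
        'percent_all/fp': fp / denom * 100,  # can exceed 100%
    }

m = percent_metrics(tp=938, fn=62, fp=3184)
print(round(m['percent_all/tp'], 1))  # → 93.8
print(round(m['percent_all/fp'], 1))  # → 318.4, more FP pixels than labeled pixels
```

The `(allLabel_count or 1)` guard matters for CT slices with no labeled pixels at all: without it, the division would raise a ZeroDivisionError (or produce NaNs for tensors).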
In chapter 12, we said we'd be using the F1 score for our model ranking, but our goals are different here. We need to make sure our recall is as high as possible, since we can't classify a potential nodule if we don't find it in the first place!

We will use our recall to determine the "best" model. As long as the F1 score is reasonable for that epoch,¹⁵ we just want to get recall as high as possible. Screening out any false positives will be the responsibility of the classification model.

Listing 13.32 training.py:393, .logMetrics

    def logMetrics(self, epoch_ndx, mode_str, metrics_t):
        # ... line 453
        score = metrics_dict['pr/recall']

        return score

¹⁵ And yes, "reasonable" is a bit of a dodge. "Nonzero" is a good starting place, if you'd like something more specific.

When we add similar code to our classification training loop in the next chapter, we'll use the F1 score.

Back in the main training loop, we'll keep track of the best_score we've seen so far in this training run. When we save our model, we'll include a flag that indicates whether this is the best score we've seen so far. Recall from section 13.6.4 that we're only calling the doValidation function for the first and then every fifth epoch. That means we're only going to check for a best score on those epochs. That shouldn't be a problem, but it's something to keep in mind if you need to debug something happening on epoch 7. We do this checking just before we save the images.

Listing 13.33 training.py:210, SegmentationTrainingApp.main

    def main(self):
        best_score = 0.0
        for epoch_ndx in range(1, self.cli_args.epochs + 1):   # The epoch loop we already saw
            # if validation is wanted
            # ... line 233
            valMetrics_t = self.doValidation(epoch_ndx, val_dl)
            score = self.logMetrics(epoch_ndx, 'val', valMetrics_t)   # Computes the score. As we saw earlier, we take the recall.
            best_score = max(score, best_score)

            self.saveModel('seg', epoch_ndx, score == best_score)     # Now we only need to write saveModel. The third parameter is whether we want to save it as the best model, too.

Let's take a look at how we persist our model to disk.

13.6.6 Saving our model

PyTorch makes it pretty easy to save our model to disk. Under the hood, torch.save uses the standard Python pickle library, which means we could pass our model instance in directly, and it would save properly. That's not considered the ideal way to persist our model, however, since we lose some flexibility.

Instead, we will save only the parameters of our model. Doing this allows us to load those parameters into any model that expects parameters of the same shape, even if the class doesn't match the model those parameters were saved under. The save-parameters-only approach allows us to reuse and remix our models in more ways than saving the entire model.

We can get at our model's parameters using the model.state_dict() function.

Listing 13.34 training.py:480, .saveModel

    def saveModel(self, type_str, epoch_ndx, isBest=False):
        # ... line 496
        model = self.segmentation_model
        if isinstance(model, torch.nn.DataParallel):
            model = model.module   # Gets rid of the DataParallel wrapper, if it exists
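The best-model bookkeeping is just a running maximum plus a flag. A dependency-free sketch of the same logic (with hypothetical recall scores, not real training output):

```python
def track_best(scores):
    """Mimic main()'s bookkeeping: return the best score seen and the
    per-epoch isBest flags that would be passed to saveModel."""
    best_score = 0.0
    flags = []
    for score in scores:
        best_score = max(score, best_score)
        flags.append(score == best_score)  # isBest: this epoch ties or beats the best so far
    return best_score, flags

best, flags = track_best([0.81, 0.84, 0.84, 0.78])
print(best)   # → 0.84
print(flags)  # → [True, True, True, False]
```

Note that a tie counts as best again (score == best_score), so an epoch that merely matches the previous maximum still overwrites the saved best model.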
        state = {
            'sys_argv': sys.argv,
            'time': str(datetime.datetime.now()),
            'model_state': model.state_dict(),   # The important part
            'model_name': type(model).__name__,
            'optimizer_state': self.optimizer.state_dict(),   # Preserves momentum, and so on
            'optimizer_name': type(self.optimizer).__name__,
            'epoch': epoch_ndx,
            'totalTrainingSamples_count': self.totalTrainingSamples_count,
        }

        torch.save(state, file_path)

We set file_path to something like data-unversioned/part2/models/p2ch13/seg_2019-07-10_02.17.22_ch12.50000.state. The .50000. part is the number of training samples we've presented to the model so far, while the other parts of the path are obvious.

TIP By saving the optimizer state as well, we could resume training seamlessly. While we don't provide an implementation of this, it could be useful if your access to computing resources is likely to be interrupted. Details on loading a model and optimizer to restart training can be found in the official documentation (https://pytorch.org/tutorials/beginner/saving_loading_models.html).

If the current model has the best score we've seen so far, we save a second copy of state with a .best.state filename. This might get overwritten later by another, higher-score version of the model. By focusing only on this best file, we can divorce customers of our trained model from the details of how each epoch of training went (assuming, of course, that our score metric is of high quality).

Listing 13.35 training.py:514, .saveModel

        if isBest:
            best_path = os.path.join(
                'data-unversioned', 'part2', 'models',
                self.cli_args.tb_prefix,
                f'{type_str}_{self.time_str}_{self.cli_args.comment}.best.state')
            shutil.copyfile(file_path, best_path)

            log.info("Saved model params to {}".format(best_path))

        with open(file_path, 'rb') as f:
            log.info("SHA1: " + hashlib.sha1(f.read()).hexdigest())

We also output the SHA1 of the model we just saved. Similar to sys.argv and the timestamp we put into the state dictionary, this can help us debug exactly what model we're working with if things become confused later (for example, if a file gets renamed incorrectly).

We will update our classification training script in the next chapter with a similar routine for saving the classification model. In order to diagnose a CT, we'll need to have both models.

13.7 Results

Now that we've made all of our code changes, we've hit the last section in step 3 of figure 13.17. It's time to run python -m p2ch13.training --epochs 20 --augmented final_seg. Let's see what our results look like!

[Figure 13.17 The outline of this chapter, with a focus on the results we see from training: 1. segmentation (U-Net); 2. update: 2a. model, 2b. dataset, 2c. training; 3. results.]

Here is what our training metrics look like if we limit ourselves to the epochs we have validation metrics for (we'll be looking at those metrics next, so this will keep it an apples-to-apples comparison):

E1 trn      0.5235 loss, 0.2276 precision, 0.9381 recall, 0.3663 f1 score
E1 trn_all  0.5235 loss,  93.8% tp,   6.2% fn,  318.4% fp
...
E5 trn      0.2537 loss, 0.5652 precision, 0.9377 recall, 0.7053 f1 score
E5 trn_all  0.2537 loss,  93.8% tp,   6.2% fn,   72.1% fp
...
E10 trn     0.2335 loss, 0.6011 precision, 0.9459 recall, 0.7351 f1 score
E10 trn_all 0.2335 loss,  94.6% tp,   5.4% fn,   62.8% fp

(In these rows, we are particularly interested in the F1 score: it is trending up. Good! TPs are trending up, too. Great! And FNs and FPs are trending down.)
...
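Fingerprinting the saved file is plain hashlib. A small standalone sketch of the same idea, using a throwaway file name of our own rather than a real model state:

```python
import hashlib

def file_sha1(path):
    """Return the SHA1 hex digest of a file, as logged by saveModel."""
    with open(path, 'rb') as f:
        return hashlib.sha1(f.read()).hexdigest()

# Two byte-identical files hash identically, so the digest identifies
# the exact saved state regardless of what the file is later renamed to.
with open('demo.state', 'wb') as f:
    f.write(b'fake model state')
print(file_sha1('demo.state'))
```

For very large checkpoint files, reading in chunks and calling hashlib.sha1().update() repeatedly would avoid holding the whole file in memory; the one-shot read above mirrors the book's simpler approach.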
E15 trn     0.2226 loss, 0.6234 precision, 0.9536 recall, 0.7540 f1 score
E15 trn_all 0.2226 loss,  95.4% tp,   4.6% fn,   57.6% fp
...
E20 trn     0.2149 loss, 0.6368 precision, 0.9584 recall, 0.7652 f1 score
E20 trn_all 0.2149 loss,  95.8% tp,   4.2% fn,   54.7% fp

Overall, it looks pretty good. True positives and the F1 score are trending up, false positives and negatives are trending down. That's what we want to see! The validation metrics will tell us whether these results are legitimate. Keep in mind that since we're training on 64 × 64 crops, but validating on whole 512 × 512 CT slices, we are almost certainly going to have drastically different TP:FN:FP ratios. Let's see:

E1 val      0.9441 loss, 0.0219 precision, 0.8131 recall, 0.0426 f1 score
E1 val_all  0.9441 loss,  81.3% tp,  18.7% fn, 3637.5% fp
E5 val      0.9009 loss, 0.0332 precision, 0.8397 recall, 0.0639 f1 score
E5 val_all  0.9009 loss,  84.0% tp,  16.0% fn, 2443.0% fp
E10 val     0.9518 loss, 0.0184 precision, 0.8423 recall, 0.0360 f1 score
E10 val_all 0.9518 loss,  84.2% tp,  15.8% fn, 4495.0% fp

(The highest TP rate: great. Note that the TP rate is the same as recall. But FPs are 4,495%, which sounds like a lot.)

E15 val     0.8100 loss, 0.0610 precision, 0.7792 recall, 0.1132 f1 score
E15 val_all 0.8100 loss,  77.9% tp,  22.1% fn, 1198.7% fp
E20 val     0.8602 loss, 0.0427 precision, 0.7691 recall, 0.0809 f1 score
E20 val_all 0.8602 loss,  76.9% tp,  23.1% fn, 1723.9% fp

Ouch: false positive rates over 4,000%? Yes, actually, that's expected. Our validation slice area is 2¹⁸ pixels (512 is 2⁹), while our training crop is only 2¹². That means we're validating on a slice surface that's 2⁶ = 64 times bigger! Having a false positive count that's also 64 times bigger makes sense. Remember that our true positive rate won't have changed meaningfully, since it would all have been included in the 64 × 64 sample we trained on in the first place. This situation also results in very low precision, and, hence, a low F1 score. That's a natural result of how we've structured the training and validation, so it's not a cause for alarm.
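The 64× factor is just the ratio of pixel areas between a full validation slice and a training crop; a quick arithmetic check:

```python
# Validation operates on full 512x512 slices; training on 64x64 crops.
slice_area = 512 * 512   # 2**18 pixels
crop_area = 64 * 64      # 2**12 pixels

print(slice_area // crop_area)  # → 64, i.e. 2**6 times more area per sample
```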
What's problematic, however, is our recall (and, hence, our true positive rate). Our recall plateaus between epochs 5 and 10 and then starts to drop. It's pretty obvious that we begin overfitting very quickly, and we can see further evidence of that in figure 13.18: while the training recall keeps trending upward, the validation recall decreases after 3 million samples. This is how we identified overfitting in chapter 5, in particular figure 5.14.

NOTE Always keep in mind that TensorBoard will smooth your data lines by default. The lighter ghost line behind the solid color shows the raw values.

The U-Net architecture has a lot of capacity, and even with our reduced filter and depth counts, it's able to memorize our training set pretty quickly. One upside is that we don't end up needing to train the model for very long!

Recall is our top priority for segmentation, since we'll let issues with precision be handled downstream by the classification models. Reducing those false positives is the entire reason we have those classification models! This skewed situation does mean it is more difficult than we'd like to evaluate our model. We could instead use the F2 score, which weights recall more heavily (or F5, or F10…), but we'd have to pick an N high enough to almost completely discount precision. We'll skip the intermediates and just score our model by recall, and use our human judgment to make sure a given training run isn't being pathological about it. Since we're training on the Dice loss, rather than directly on recall, it should work out.
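The F2/F5/F10 idea is the general F-beta score, which weights recall β² times as heavily as precision. A standalone sketch (our own helper, not from the book's code), fed with the epoch-10 validation numbers above:

```python
def f_beta(precision, recall, beta):
    """General F-beta score: recall counts beta**2 times as much as precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Epoch-10 validation: precision 0.0184, recall 0.8423.
print(round(f_beta(0.0184, 0.8423, beta=1), 4))   # F1, dragged down by the tiny precision
print(round(f_beta(0.0184, 0.8423, beta=10), 4))  # F10 sits much closer to raw recall
```

With beta=1 this reduces to the familiar F1 (it reproduces the 0.0360 from the log above); as beta grows, the score approaches recall itself, which is why a large enough N "almost completely discounts precision."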
This is one of the situations where we are cheating a little, because we (the authors) have already done the training and evaluation for chapter 14, and we know how all of this is going to turn out. There isn't any good way to look at this situation and know that the results we're seeing will work. Educated guesses are helpful, but they are no substitute for actually running experiments until something clicks.

As it stands, our results are good enough to use going forward, even if our metrics have some pretty extreme values. We're one step closer to finishing our end-to-end project!

[Figure 13.18 The validation set recall, showing signs of overfitting when recall goes down after epoch 10 (3 million samples). The training-dataset curve keeps rising while the validation-dataset curve plateaus and then drops.]

13.8 Conclusion

In this chapter, we've discussed a new way of structuring models for pixel-to-pixel segmentation; introduced U-Net, an off-the-shelf, proven model architecture for those kinds of tasks; and adapted an implementation for our own use. We've also changed our dataset to provide data for our new model's training needs, including small crops for training and a limited set of slices for validation. Our training loop now has the ability to save images to TensorBoard, and we have moved augmentation from the dataset into a separate model that can operate on the GPU. Finally, we looked at our training results and discussed how even though the false positive rate (in particular) looks different from what we might hope, our results will be acceptable given our requirements for them from the larger project. In chapter 14, we will pull together the various models we've written into a cohesive, end-to-end whole.
13.9 Exercises

1 Implement the model-wrapper approach to augmentation (like what we used for segmentation training) for the classification model.
  a What compromises did you have to make?
  b What impact did the change have on training speed?
2 Change the segmentation Dataset implementation to have a three-way split for training, validation, and test sets.
  a What fraction of the data did you use for the test set?
  b Do performance on the test set and the validation set seem consistent with each other?
  c How badly does training suffer with the smaller training set?
3 Make the model try to segment malignant versus benign in addition to is-nodule status.
  a How does your metrics reporting need to change? Your image generation?
  b What kind of results do you see? Is the segmentation good enough to skip the classification step?
4 Can you train the model on a combination of 64 × 64 crops and whole-CT slices?¹⁶
5 Can you find additional sources of data to use beyond just the LUNA (or LIDC) data?

¹⁶ Hint: Each sample tuple to be batched together must have the same shape for each corresponding tensor, but the next batch could have different samples with different shapes.

13.10 Summary

- Segmentation flags individual pixels or voxels for membership in a class. This is in contrast to classification, which operates at the level of the entire image.
- U-Net was a breakthrough model architecture for segmentation tasks.
- Using segmentation followed by classification, we can implement detection with relatively modest data and computation requirements.
- Naive approaches to 3D segmentation can quickly use too much RAM for current-generation GPUs. Carefully limiting the scope of what is presented to the model can help limit RAM usage.
- It is possible to train a segmentation model on image crops while validating on whole-image slices. This flexibility can be important for class balancing.
- Loss weighting is an emphasis on the loss computed from certain classes or subsets of the training data, to encourage the model to focus on the desired results. It can complement class balancing and is a useful tool when trying to tweak model training performance.
- TensorBoard can display 2D images generated during training and will save a history of how those models changed over the training run. This can be used to visually track changes to model output as training progresses.
- Model parameters can be saved to disk and loaded back to reconstitute a model that was saved earlier. The exact model implementation can change as long as there is a 1:1 mapping between old and new parameters.

14 End-to-end nodule analysis, and where to go next

This chapter covers
- Connecting segmentation and classification models
- Fine-tuning a network for a new task
- Adding histograms and other metric types to TensorBoard
- Getting from overfitting to generalizing

Over the past several chapters, we have built a decent number of systems that are important components of our project. We started loading our data, built and improved classifiers for nodule candidates, trained segmentation models to find those candidates, handled the support infrastructure needed to train and evaluate those models, and started saving the results of our training to disk. Now it's time to unify the components we have into a cohesive whole, so that we may realize the full goal of our project: it's time to automatically detect cancer.

14.1 Towards the finish line

We can get a hint of the work remaining by looking at figure 14.1. In step 3 (grouping) we see that we still need to build the bridge between the segmentation model from chapter 13 and the classifier from chapter 12 that will tell us whether what the segmentation network found is, indeed, a nodule.
On the right is step 5 (nodule analysis and diagnosis), the last step to the overall goal: seeing whether a nodule is cancer. This is another classification task; but to learn something in the process, we'll take a fresh angle at how to approach it by building on the nodule classifier we already have. Of course, these brief descriptions and their simplified depiction in figure 14.1 leave out a lot of detail. Let's zoom in a little with figure 14.2 and see what we've got left to accomplish.

[Figure 14.1 Our end-to-end lung cancer detection project, with a focus on this chapter's topics: steps 3 and 5, grouping and nodule analysis. Step 1 (ch. 10): data loading of .MHD/.RAW CT data; step 2 (ch. 13): segmentation model producing candidate locations (I,R,C); step 3 (ch. 14): grouping into candidate samples; step 4 (ch. 11+12): classification model (NEG/POS); step 5 (ch. 14): nodule analysis and diagnosis (MAL/BEN).]

As you can see, three important tasks remain. Each item in the following list corresponds to a major line item from figure 14.2:

1 Generate nodule candidates. This is step 3 in the overall project. Three tasks go into this step:
  a Segmentation: The segmentation model from chapter 13 will predict if a given pixel is of interest: if we suspect it is part of a nodule. This will be done per 2D slice, and every 2D result will be stacked to form a 3D array of voxels containing nodule candidate predictions.
  b Grouping: We will group the voxels into nodule candidates by applying a threshold to the predictions, and then grouping connected regions of flagged voxels.
  c Constructing sample tuples: Each identified nodule candidate will be used to construct a sample tuple for classification. In particular, we need to produce the coordinates (index, row, column) of that nodule's center.

  Once this is achieved, we will have an application that takes a raw CT scan from a patient and produces a list of detected nodule candidates. Producing such a list is the task in the LUNA challenge. If this project were to be used clinically (and we reemphasize that our project should not be!), this nodule list would be suitable for closer inspection by a doctor.

2 Classify nodules and malignancy. We'll take the nodule candidates we just produced and pass them to the candidate classification step we implemented in chapter 12, and then perform malignancy detection on the candidates flagged as nodules:
  a Nodule classification: Each nodule candidate from segmentation and grouping will be classified as either nodule or non-nodule. Doing so will allow us to screen out the many normal anatomical structures flagged by our segmentation process.
  b ROC/AUC metrics: Before we can start our last classification step, we'll define some new metrics for examining the performance of classification models, as well as establish a baseline metric against which to compare our malignancy classifiers.
  c Fine-tuning the malignancy model: Once our new metrics are in place, we will define a model specifically for classifying benign and malignant nodules, train it, and see how it performs. We will do the training by fine-tuning: a process that cuts out some of the weights of an existing model and replaces them with fresh values that we then adapt to our new task.

  At that point we will be within arm's reach of our ultimate goal: to classify nodules into benign and malignant classes and then derive a diagnosis from the CT. Again, diagnosing lung cancer in the real world involves much more than staring at a CT scan, so our performing this diagnosis is more an experiment to see how far we can get using deep learning and imaging data alone.

3 End-to-end detection. Finally, we will put all of this together to get to the finish line, combining the components into an end-to-end solution that can look at a CT and answer the question "Are there malignant nodules present in the lungs?"
  a IRC: We will segment our CT to get nodule candidate samples to classify.
  b Determine the nodules: We will perform nodule classification on each candidate to determine whether it should be fed into the malignancy classifier.
  c Determine malignancy: We will perform malignancy classification on the nodules that pass through the nodule classifier to determine whether the patient has cancer.

[Figure 14.2 A detailed look at the work remaining for our end-to-end project: 1. nodule candidate generation (1a. segmentation, 1b. grouping, 1c. sample tuples); 2. nodule and malignancy classification (2a. nodule classification, 2b. ROC/AUC metrics, 2c. fine-tuning malignancy model); 3. end-to-end detection (3a. (…, IRC), 3b. is nodule?, 3c. is malignant?).]

We've got a lot to do. To the finish line!
NOTE As in the previous chapter, we will discuss the key concepts in detail in the text and leave out the code for repetitive, tedious, or obvious parts. Full details can be found in the book's code repository.

14.2 Independence of the validation set

We are in danger of making a subtle but critical mistake, which we need to discuss and avoid: we have a potential leak from the training set to the validation set! For each of the segmentation and classification models, we took care of splitting the data into a training set and an independent validation set by taking every tenth example for validation and the remainder for training.

However, the split for the classification model was done on the list of nodules, and the split for the segmentation model was done on the list of CT scans. This means we likely have nodules from the segmentation validation set in the training set of the classification model and vice versa. We must avoid that! If left unfixed, this situation could lead to performance figures that would be artificially higher compared to what we would obtain on an independent dataset. This is called a leak, and it would invalidate our validation.

To rectify this potential data leak, we need to rework the classification dataset to also work at the CT scan level, just as we did for the segmentation task in chapter 13. Then we need to retrain the classification model with this new dataset. On the bright side, we didn't save our classification model earlier, so we would have to retrain anyway.

Your takeaway from this should be to keep an eye on the end-to-end process when defining the validation set. Probably the easiest way to do this (and the way it is done for most important datasets) is to make the validation split as explicit as possible (for example, by having two separate directories for training and validation) and then stick to this split for your entire project. When you need to redo the split (for example, when you need to add stratification of the dataset split by some criterion), you need to retrain all of your models with the newly split dataset.

So what we did for you was to take LunaDataset from chapters 10–12 and copy over getting the candidate list and splitting it into test and validation datasets from Luna2dSegmentationDataset in chapter 13. As this is very mechanical, and there is not much to learn from the details (you are a dataset pro by now), we won't show the code in detail.

We'll retrain our classification model by rerunning the training for the classifier:¹

$ python3 -m p2ch14.training --num-workers=4 --epochs 100 nodule-nonnodule

After 100 epochs, we achieve about 95% accuracy for positive samples and 99% for negative ones. As the validation loss isn't seen to be trending upward again, we could train the model longer to see if things continued to improve. After 90 epochs, we reach the maximal F1 score and have 99.2% validation accuracy, albeit only 92.8% on the actual nodules. We'll take this model, even though we might also try to trade a bit of overall accuracy for better accuracy on the malignant nodules (in between, the model got 95.4% accuracy on actual nodules for 98.9% total accuracy). This will be good enough for us, and we are ready to bridge the models.

14.3 Bridging CT segmentation and nodule candidate classification

Now that we have a segmentation model saved from chapter 13 and a classification model we just trained in the previous section, figure 14.3, steps 1a, 1b, and 1c show that we're ready to work on writing the code that will convert our segmentation output into sample tuples.
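The leak-free split works at the CT level: a candidate lands in the validation set exactly when its CT does. A minimal sketch of that idea, with hypothetical series UIDs and a made-up candidate list rather than the book's dataset code:

```python
def split_by_series(candidates, val_stride=10):
    """Split (series_uid, center) tuples so that every candidate from a given
    CT lands entirely in either training or validation, never both."""
    series_list = sorted({series_uid for series_uid, _ in candidates})
    val_series = set(series_list[::val_stride])  # every tenth CT goes to validation
    train = [c for c in candidates if c[0] not in val_series]
    val = [c for c in candidates if c[0] in val_series]
    return train, val

# 20 fake CTs with 3 candidates each; with a stride of 10, two CTs are held out.
candidates = [('ct_%02d' % i, (0, 0, 0)) for i in range(20) for _ in range(3)]
train, val = split_by_series(candidates)
print(len(train), len(val))  # → 54 6
```

Splitting the nodule list directly, by contrast, could scatter a single CT's nodules across both sets, which is exactly the leak described above.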
We are doing the grouping: finding the dashed outline around the highlight of step 1b in figure 14.3. Our input is the segmentation: the voxels flagged by the segmentation model in 1a. We want to find 1c, the coordinates of the center of mass of each "lump" of flagged voxels: the index, row, and column of the 1b plus mark is what we need to provide in the list of sample tuples as output.

¹ You can also use the p2_run_everything notebook.

[Figure 14.3 Our plan for this chapter, with a focus on grouping segmented voxels into nodule candidates: 1a. segmentation, 1b. grouping, 1c. sample tuples.]

Running the models will naturally look very similar to how we handled them during training and validation (validation in particular). The difference here is the loop over the CTs. For each CT, we segment every slice and then take all the segmented output as the input to grouping. The output from grouping will be fed into a nodule classifier, and the nodules that survive that classification will be fed into a malignancy classifier. This is accomplished by the following outer loop over the CTs, which for each CT segments, groups, classifies candidates, and provides the classifications for further processing.

Listing 14.1 nodule_analysis.py:324, NoduleAnalysisApp.main

    for _, series_uid in series_iter:   # Loops over the series UIDs
        ct = getCt(series_uid)          # Gets the CT (step 1 in the big picture)

        mask_a = self.segmentCt(ct, series_uid)   # Runs our segmentation model on it (step 2)

        candidateInfo_list = self.groupSegmentationOutput(
            series_uid, ct, mask_a)               # Groups the flagged voxels in the output (step 3)

        classifications_list = self.classifyCandidates(
            ct, candidateInfo_list)               # Runs our nodule classifier on them (step 4)

We'll break down the segmentCt, groupSegmentationOutput, and classifyCandidates methods in the following sections.

14.3.1 Segmentation

First up, we are going to perform segmentation on every slice of the entire CT scan. As we need to feed a given patient's CT slice by slice, we build a Dataset that loads a CT with a single series_uid and returns each slice, one per __getitem__ call.

NOTE The segmentation step in particular can take quite a while when executed on the CPU. Even though we gloss over it here, the code will use the GPU if available.

Other than the more expansive input, the main difference is what we do with the output. Recall that the output is an array of per-pixel probabilities (that is, in the range 0…1) that the given pixel is part of a nodule. While iterating over the slices, we collect the slice-wise predictions in a mask array that has the same shape as our CT input. Afterward, we threshold the predictions to get a binary array. We will use a threshold of 0.5, but if we wanted to, we could experiment with thresholding to trade getting more true positives for an increase in false positives.

We also include a small cleanup step using the erosion operation from scipy.ndimage.morphology.
It deletes one layer of boundary voxels and only keeps the inner ones—those for which all the neighboring voxels along the axis directions are also flagged. This makes the flagged area smaller and causes very small components (smaller than 3 × 3 × 3 voxels) to vanish. Put together with the loop over the data loader, which we instruct to feed us all slices from a single CT, we have the following.

Listing 14.2 nodule_analysis.py:384, .segmentCt

def segmentCt(self, ct, series_uid):
    with torch.no_grad():                            # We do not need gradients here, so we don't build the graph.
        output_a = np.zeros_like(ct.hu_a, dtype=np.float32)   # This array will hold our output: a float array of probability annotations.
        seg_dl = self.initSegmentationDl(series_uid) # We get a data loader that lets us loop over our CT in batches.
        for input_t, _, _, slice_ndx_list in seg_dl:
            input_g = input_t.to(self.device)        # After moving the input to the GPU ...
            prediction_g = self.seg_model(input_g)   # ... we run the segmentation model ...
            for i, slice_ndx in enumerate(slice_ndx_list):
                output_a[slice_ndx] = prediction_g[i].cpu().numpy()  # ... and copy each element to the output array.
        mask_a = output_a > 0.5                      # Thresholds the probability outputs to get a binary output ...
        mask_a = morphology.binary_erosion(mask_a, iterations=1)   # ... and then applies binary erosion as cleanup.
    return mask_a

This was easy enough, but now we need to invent the grouping.

14.3.2 Grouping voxels into nodule candidates

We are going to use a simple connected-components algorithm for grouping our suspected nodule voxels into chunks to feed into classification. This grouping approach labels connected components, which we will accomplish using scipy.ndimage.measurements.label. The label function will take all nonzero pixels that share an edge with another nonzero pixel and mark them as belonging to the same group.
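To see the erosion cleanup in isolation, here is a small sketch (a toy mask, not the book's CT data) showing that a one-iteration binary erosion with SciPy's default structuring element strips the boundary layer of a blob and makes tiny specks vanish entirely:

```python
import numpy as np
from scipy import ndimage

mask = np.zeros((5, 5, 5), dtype=bool)
mask[1:4, 1:4, 1:4] = True      # a 3 x 3 x 3 blob: 27 flagged voxels
mask[0, 0, 0] = True            # a single-voxel speck

eroded = ndimage.binary_erosion(mask, iterations=1)

# Only the blob's center voxel has all of its axis-direction neighbors
# flagged, so it alone survives; the speck disappears entirely.
print(int(eroded.sum()))        # 1
print(bool(eroded[2, 2, 2]))    # True
```

This is why components smaller than 3 × 3 × 3 voxels vanish: none of their voxels has a full set of flagged neighbors.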
Since our output from the segmentation model has mostly blobs of highly adjacent pixels, this approach matches our data well.

Listing 14.3 nodule_analysis.py:401

def groupSegmentationOutput(self, series_uid, ct, clean_a):
    candidateLabel_a, candidate_count = measurements.label(clean_a)  # Assigns each voxel the label of the group it belongs to
    centerIrc_list = measurements.center_of_mass(   # Gets the center of mass for each group as index, row, column coordinates
        ct.hu_a.clip(-1000, 1000) + 1001,
        labels=candidateLabel_a,
        index=np.arange(1, candidate_count + 1),
    )

The output array candidateLabel_a is the same shape as clean_a, which we used for input, but it has 0 where the background voxels are, and increasing integer labels 1, 2, …, with one number for each of the connected blobs of voxels making up a nodule candidate. Note that the labels here are not the same as labels in a classification sense! These are just saying "This blob of voxels is blob 1, this blob over here is blob 2, and so on."

SciPy also sports a function to get the centers of mass of the nodule candidates: scipy.ndimage.measurements.center_of_mass. It takes an array with per-voxel densities, the integer labels from the label function we just called, and a list of which of those labels need to have a center calculated. To match the function's expectation that the mass is non-negative, we offset the (clipped) ct.hu_a by 1,001. Note that this leads to all flagged voxels carrying some weight, since we clamped the lowest air value to –1,000 HU in the native CT units.
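As a toy illustration of the grouping machinery (using scipy.ndimage directly, with a made-up 2D mask and uniform "mass" instead of CT densities): two separated blobs get distinct integer labels, and center_of_mass returns one coordinate tuple per requested label.

```python
import numpy as np
from scipy import ndimage

clean = np.zeros((8, 8), dtype=bool)
clean[1:3, 1:3] = True          # blob 1: a 2 x 2 square
clean[5:8, 5:8] = True          # blob 2: a 3 x 3 square

label_a, count = ndimage.label(clean)   # label_a holds 0 for background, 1 and 2 for the blobs
print(count)                            # 2

centers = ndimage.center_of_mass(
    np.ones_like(clean, dtype=float),   # uniform per-pixel mass for the toy case
    labels=label_a,
    index=np.arange(1, count + 1),      # one center per blob label
)
print(centers)                          # [(1.5, 1.5), (6.0, 6.0)]
```

With a real CT, the first argument would be the offset density array, and the tuples would be (index, row, column) triples in three dimensions.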
Listing 14.4 nodule_analysis.py:409

    candidateInfo_list = []
    for i, center_irc in enumerate(centerIrc_list):
        center_xyz = irc2xyz(                  # Converts the voxel coordinates to real patient coordinates
            center_irc,
            ct.origin_xyz,
            ct.vxSize_xyz,
            ct.direction_a,
        )
        candidateInfo_tup = CandidateInfoTuple(   # Builds our candidate info tuple and appends it to the list of detections
            False, False, False, 0.0, series_uid, center_xyz)
        candidateInfo_list.append(candidateInfo_tup)
    return candidateInfo_list

As output, we get a list of coordinate triples (one index, row, and column value each), with the same length as our candidate_count. We can use this data to populate a list of candidateInfo_tup instances; we have grown attached to this little data structure, so we stick our results into the same kind of list we've been using since chapter 10.

As we don't really have suitable data for the first four values (isNodule_bool, hasAnnotation_bool, isMal_bool, and diameter_mm), we insert placeholder values of a suitable type. We then convert our coordinates from voxels to physical coordinates in a loop, creating the list. It might seem a bit silly to move our coordinates away from our array-based index, row, and column, but all of the code that consumes candidateInfo_tup instances expects center_xyz, not center_irc. We'd get wildly wrong results if we tried to swap one for the other!

Yay—we've conquered step 3, getting nodule locations from the voxel-wise detections! We can now crop out the suspected nodules and feed them to our classifier to weed out some more false positives.

14.3.3 Did we find a nodule?
Classification to reduce false positives

As we started part 2 of this book, we described the job of a radiologist looking through CT scans for signs of cancer thus:

    Currently, the work of reviewing the data must be performed by highly trained specialists, requires painstaking attention to detail, and it is dominated by cases where no cancer exists.

Doing that job well is akin to being placed in front of 100 haystacks and being told, "Determine which of these, if any, contain a needle." We've spent time and energy discussing the proverbial needles; let's discuss the hay for a moment by looking at figure 14.4. Our job, so to speak, is to fork away as much hay as we can from in front of our glassy-eyed radiologist, so that they can refocus their highly trained attention where it can do the most good.

Let's look at how much we are discarding at each step while we perform our end-to-end diagnosis. The arrows in figure 14.4 show the data as it flows from the raw CT voxels through our project to our final malignancy determination. Each arrow that ends with an X indicates a swath of data discarded by the previous step; the arrow pointing to the next step represents the data that survived the culling. Note that the numbers here are very approximate.

Let's go through the steps in figure 14.4 in more detail:

1 Segmentation—Segmentation starts with the entire CT: hundreds of slices, or about 33 million (2^25) voxels (give or take quite a lot). About 2^20 voxels are flagged as being of interest; this is orders of magnitude smaller than the total input, which means we're throwing out 97% of the voxels (that's the 2^25 on the left leading to the X).
2 Grouping—While grouping doesn't remove anything explicitly, it does reduce the number of items we're considering, since we consolidate voxels into nodule candidates. The grouping produces about 1,000 candidates (2^10) from 1 million voxels. A nodule of 16 × 16 × 2 voxels would have a total of 512 (2^9) voxels.[2]

3 Nodule classification—This process throws away the majority of the remaining ~2^10 items. From our thousands of nodule candidates, we're left with tens of nodules: about 2^5.

4 Malignancy classification—Finally, the malignancy classifier takes tens of nodules (2^5) and finds the one or two (2^1) that are cancer.

[2] The size of any given nodule is highly variable, obviously.

Figure 14.4 The steps of our end-to-end detection project, and the rough order of magnitude of data removed at each step: 1. Segmentation (~2^25 voxels → ~2^20); 2. Grouping (~2^20 → ~2^10 candidates); 3. Nodule classification (~2^10 → ~2^5); 4. Malignancy classification (~2^5 → ~2^1).

Each step along the way allows us to discard a huge amount of data that our model is confident is irrelevant to our cancer-detection goal. We went from millions of data points to a handful of tumors.

Now that we have identified regions in the image that our segmentation model considers probable candidates, we need to crop these candidates from the CT and feed them into the classification module. Happily, we have candidateInfo_list from the previous section, so all we need to do is make a Dataset from it, put it into a DataLoader, and iterate over it. Column 1 of the probability predictions is the predicted probability that this is a nodule and is what we want to keep. Just as before, we collect the output from the entire loop.
Fully automated vs. assistive systems

There is a difference between a fully automated system and one that is designed to augment a human's abilities. For our automated system, once a piece of data is flagged as irrelevant, it is gone forever. When presenting data for a human to consume, however, we should allow them to peel back some of the layers and look at the near misses, as well as annotate our findings with a degree of confidence. Were we designing a system for clinical use, we'd need to carefully consider our exact intended use and make sure our system design supported those use cases well. Since our project is fully automated, we can move forward without having to consider how best to surface the near misses and the unsure answers.

Listing 14.5 nodule_analysis.py:357, .classifyCandidates

def classifyCandidates(self, ct, candidateInfo_list):
    cls_dl = self.initClassificationDl(candidateInfo_list)   # Again, we get a data loader to loop over, this time based on our candidate list.
    classifications_list = []
    for batch_ndx, batch_tup in enumerate(cls_dl):
        input_t, _, _, series_list, center_list = batch_tup

        input_g = input_t.to(self.device)                    # Sends the inputs to the device
        with torch.no_grad():
            _, probability_nodule_g = self.cls_model(input_g)     # Runs the inputs through the nodule vs. non-nodule network
            if self.malignancy_model is not None:                 # If we have a malignancy model, we run that, too.
                _, probability_mal_g = self.malignancy_model(input_g)
            else:
                probability_mal_g = torch.zeros_like(probability_nodule_g)

        zip_iter = zip(center_list,
                       probability_nodule_g[:,1].tolist(),
                       probability_mal_g[:,1].tolist())
        for center_irc, prob_nodule, prob_mal in zip_iter:
            center_xyz = irc2xyz(center_irc,
                direction_a=ct.direction_a,
                origin_xyz=ct.origin_xyz,
                vxSize_xyz=ct.vxSize_xyz,
            )
            # Does our bookkeeping, constructing a list of our results
            cls_tup = (prob_nodule, prob_mal, center_xyz, center_irc)
            classifications_list.append(cls_tup)
    return classifications_list

This is great! We can now threshold the output probabilities to get a list of things our model thinks are actual nodules. In a practical setting, we would probably want to output them for a radiologist to inspect. Again, we might want to adjust the threshold to err a bit on the safe side: that is, if our threshold was 0.3 instead of 0.5, we would present a few more candidates that turn out not to be nodules, while reducing the risk of missing actual nodules.

Listing 14.6 nodule_analysis.py:333, NoduleAnalysisApp.main

if not self.cli_args.run_validation:              # If we don't pass run_validation, we print individual information ...
    print(f"found nodule candidates in {series_uid}:")
    for prob, prob_mal, center_xyz, center_irc in classifications_list:
        if prob > 0.5:                            # ... for all candidates found by the segmentation where the classifier assigned a nodule probability of 50% or more.
            s = f"nodule prob {prob:.3f}, "
            if self.malignancy_model:
                s += f"malignancy prob {prob_mal:.3f}, "
            s += f"center xyz {center_xyz}"
            print(s)

if series_uid in candidateInfo_dict:              # If we have the ground truth data, we compute and print the confusion matrix and also add the current results to the total.
    one_confusion = match_and_score(
        classifications_list, candidateInfo_dict[series_uid]
    )
    all_confusion += one_confusion
    print_confusion(
        series_uid, one_confusion, self.malignancy_model is not None
    )

print_confusion(
    "Total", all_confusion, self.malignancy_model is not None
)

Let's run this for a given CT from the validation set:[3]

$ python3.6 -m p2ch14.nodule_analysis 1.3.6.1.4.1.14519.5.2.1.6279.6001.592821488053137951302246128864
...

[3] We chose this series specifically because it has a nice mix of results.
found nodule candidates in 1.3.6.1.4.1.14519.5.2.1.6279.6001.592821488053137951302246128864:
nodule prob 0.533, malignancy prob 0.030, center xyz XyzTuple(x=-128.857421875, y=-80.349609375, z=-31.300007820129395)
nodule prob 0.754, malignancy prob 0.446, center xyz XyzTuple(x=-116.396484375, y=-168.142578125, z=-238.30000233650208)
...
nodule prob 0.974, malignancy prob 0.427, center xyz XyzTuple(x=121.494140625, y=-45.798828125, z=-211.3000030517578)
nodule prob 0.700, malignancy prob 0.310, center xyz XyzTuple(x=123.759765625, y=-44.666015625, z=-211.3000030517578)
...

The first candidate shown is assigned a 53% probability of being a nodule, so it barely makes the probability threshold of 50%; the malignancy classification assigns it a very low (3%) probability. The third is detected as a nodule with very high confidence and assigned a 42% probability of malignancy.

The script found 16 nodule candidates in total. Since we're using our validation set, we have a full set of annotations and malignancy information for each CT, which we can use to create a confusion matrix with our results. The rows are the truth (as defined by the annotations), and the columns show how our project handled each case: Complete Miss means the segmentation didn't find the nodule at all, Filtered Out is the classifier's work, and Pred. Nodule covers the candidates it marked as nodules.

1.3.6.1.4.1.14519.5.2.1.6279.6001.592821488053137951302246128864
             | Complete Miss | Filtered Out | Pred. Nodule
 Non-Nodules |               |         1088 |           15
      Benign |             1 |            0 |            0
   Malignant |             0 |            0 |            1

The Complete Miss column is when our segmenter did not flag a nodule at all. Since the segmenter was not trying to flag non-nodules, we leave that cell blank. Our segmenter was trained to have high recall, so it flags a large number of non-nodules, but our nodule classifier is well equipped to screen those out.
So we found the 1 malignant nodule in this scan, but missed a 17th benign one. In addition, 15 false positive non-nodules made it through the nodule classifier. The filtering by the classifier brought the false positives down from over 1,000! As we saw earlier, 1,088 is about O(2^10), so that lines up with what we expect. Similarly, 15 is about O(2^4), which isn't far from the O(2^5) we ballparked. Cool!

But what's the larger picture?

14.4 Quantitative validation

Now that we have anecdotal evidence that the thing we built might be working on one case, let's take a look at the performance of our model on the entire validation set. Doing so is simple: we run our validation set through the previous prediction and check how many nodules we get, how many we miss, and how many candidates are erroneously identified as nodules.

We run the following, which should take half an hour to an hour when run on the GPU. After coffee (or a full-blown nap), here is what we get:

$ python3 -m p2ch14.nodule_analysis --run-validation
...
Total
             | Complete Miss | Filtered Out | Pred. Nodule
 Non-Nodules |               |       164893 |         2156
      Benign |            12 |            3 |           87
   Malignant |             1 |            6 |           45

We detected 132 of the 154 nodules, or 85%. Of the 22 we missed, 13 were not considered candidates by the segmentation, so this would be the obvious starting point for improvements. About 95% of the detected nodules are false positives.
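A quick sanity check of those headline numbers, computed directly from the Total confusion matrix above (a sketch; the counts are the ones printed by the run):

```python
# Counts from the "Total" confusion matrix: {complete miss, filtered out, predicted nodule}.
benign    = {"miss": 12, "filtered": 3, "predicted": 87}
malignant = {"miss": 1, "filtered": 6, "predicted": 45}
non_nodule_fp = 2156     # non-nodules that survived the nodule classifier

detected     = benign["predicted"] + malignant["predicted"]    # 132
total_nodule = sum(benign.values()) + sum(malignant.values())  # 154
print(f"detected {detected}/{total_nodule} = {detected / total_nodule:.1%}")

# Of everything the pipeline flags as a nodule, what fraction is a false positive?
flagged = detected + non_nodule_fp
print(f"false positive fraction: {non_nodule_fp / flagged:.1%}")
```

This reproduces the 85% detection rate and the roughly 95% false-positive fraction (94.2%, to one decimal place) quoted above.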
This is of course not great; on the other hand, it's a lot less critical—having to look at 20 nodule candidates to find one nodule will be much easier than looking at the entire CT. We will go into this in more detail in section 14.7.2, but we want to stress that rather than treating these mistakes as a black box, it's a good idea to investigate the misclassified cases and see if they have commonalities. Are there characteristics that differentiate them from the samples that were correctly classified? Can we find anything that could be used to improve our performance?

For now, we're going to accept our numbers as is: not bad, but not perfect. The exact numbers may differ when you run your self-trained model. Toward the end of this chapter, we will provide some pointers to papers and techniques that can help improve these numbers. With inspiration and some experimentation, we are confident that you can achieve better scores than we show here.

14.5 Predicting malignancy

Now that we have implemented the nodule-detection task of the LUNA challenge and can produce our own nodule predictions, we ask ourselves the logical next question: can we distinguish malignant nodules from benign ones? We should say that even with a good system, diagnosing malignancy would probably take a more holistic view of the patient, additional non-CT context, and eventually a biopsy, rather than just looking at single nodules in isolation on a CT scan. As such, this seems to be a task that is likely to be performed by a doctor for some time to come.

14.5.1 Getting malignancy information

The LUNA challenge focuses on nodule detection and does not come with malignancy information. The LIDC-IDRI dataset (http://mng.bz/4A4R) has a superset of the CT scans used for the LUNA dataset and includes additional information about the degree of malignancy of the identified tumors.
Conveniently, there is a PyLIDC library that can be installed easily, as follows:

$ pip3 install pylidc

The pylidc library gives us ready access to the additional malignancy information we want. Just like matching the annotations with the candidates by location as we did in chapter 10, we need to associate the annotation information from LIDC with the coordinates of the LUNA candidates.

In the LIDC annotations, the malignancy information is encoded per nodule and diagnosing radiologist (up to four looked at the same nodule) using an ordinal five-value scale from 1 (highly unlikely) through moderately unlikely, indeterminate, and moderately suspicious, and ending with 5 (highly suspicious).[4] These annotations are based on the image alone and subject to assumptions about the patient. To convert the list of numbers to a single Boolean yes/no, we will consider nodules to be malignant when at least two radiologists rated that nodule as "moderately suspicious" or greater. Note that this criterion is somewhat arbitrary; indeed, the literature has many different ways of dealing with this data, including predicting the five steps, using averages, or removing nodules from the dataset where the rating radiologists were uncertain or disagreed.

[4] See the PyLIDC documentation for full details: http://mng.bz/Qyv6.

The technical aspects of combining the data are the same as in chapter 10, so we skip showing the code here (it is in the code repository for this chapter) and will use the extended CSV file. We will use the dataset in a way very similar to what we did for the nodule classifier, except that we now only need to process actual nodules and use whether a given nodule is malignant or not as the label to predict. This is structurally very similar to the balancing we used in chapter 12, but instead of sampling from pos_list and neg_list, we sample from mal_list and ben_list.
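The "at least two radiologists rated it moderately suspicious or greater" rule is easy to express directly. Here is a sketch (our own helper, not part of the PyLIDC API) operating on the 1–5 ordinal scale described above:

```python
def is_malignant(ratings, threshold=4, min_raters=2):
    """Ratings are per-radiologist malignancy scores on the LIDC 1-5 scale.
    A nodule counts as malignant if at least min_raters radiologists rated
    it threshold ("moderately suspicious") or higher."""
    return sum(r >= threshold for r in ratings) >= min_raters

print(is_malignant([5, 4, 2]))      # True: two raters at 4 or above
print(is_malignant([4, 2, 2, 1]))   # False: only one rater at 4 or above
print(is_malignant([3, 3, 3, 3]))   # False: "indeterminate" across the board
```

The threshold and rater-count parameters make it easy to experiment with the alternative criteria from the literature mentioned above.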
Just as we did for the nodule classifier, we want to keep the training data balanced. We put this into the MalignantLunaDataset class, which subclasses the LunaDataset but is otherwise very similar.

For convenience, we create a dataset command-line argument in training.py and dynamically use the dataset class specified on the command line. We do this by using Python's getattr function. For example, if self.cli_args.dataset is the string MalignantLunaDataset, it will get p2ch14.dsets.MalignantLunaDataset and assign this type to ds_cls, as we can see here.

Listing 14.7 training.py:154, .initTrainDl

ds_cls = getattr(p2ch14.dsets, self.cli_args.dataset)   # Dynamic class-name lookup

train_ds = ds_cls(
    val_stride=10,
    isValSet_bool=False,
    ratio_int=1,    # Recall that this is the one-to-one balancing of the training data, here between benign and malignant.
)

14.5.2 An area under the curve baseline: Classifying by diameter

It is always good to have a baseline to see what performance is better than nothing. We could go for better than random, but here we can use the diameter as a predictor for malignancy—larger nodules are more likely to be malignant. Step 2b of figure 14.5 hints at a new metric we can use to compare classifiers.

We could use the nodule diameter as the sole input to a hypothetical classifier predicting whether a nodule is malignant. It wouldn't be a very good classifier, but it turns out that saying "Everything bigger than this threshold X is malignant" is a better predictor of malignancy than we might expect. Of course, picking the right threshold is key—there's a sweet spot that gets all the huge tumors and none of the tiny specks, and roughly splits the uncertain area that's a jumble of larger benign nodules and smaller malignant ones.
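Python's getattr does the dynamic lookup: given a module (or any object) and an attribute name as a string, it returns that attribute. A minimal sketch of the pattern, using a standard-library module in place of p2ch14.dsets:

```python
import collections

# The class name would normally come from a command-line argument,
# e.g. self.cli_args.dataset; here it is a hardcoded stand-in.
class_name = "OrderedDict"
ds_cls = getattr(collections, class_name)   # Dynamic class-name lookup

d = ds_cls(a=1, b=2)                        # Instantiate it like any other class
print(type(d).__name__)                     # OrderedDict
```

Because the lookup happens at runtime, adding a new Dataset subclass requires no changes to training.py: the new class name simply becomes a valid value for the dataset argument.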
As we might recall from chapter 12, our true positive, false positive, true negative, and false negative counts change based on what threshold value we choose. As we decrease the threshold over which we predict that a nodule is malignant, we will increase the number of true positives, but also the number of false positives. The false positive rate (FPR) is FP / (FP + TN), while the true positive rate (TPR) is TP / (TP + FN), which you might also remember from chapter 12 as the recall.

Let's set a range for our threshold. The lower bound will be the value at which all of our samples are classified as positive, and the upper bound will be the opposite, where all samples are classified as negative. At one extreme, our FPR and TPR will both be zero, since there won't be any positives; and at the other, both will be one, since TNs and FNs won't exist (everything is positive!).
Figure 14.5 The end-to-end project we are implementing in this chapter, with a focus on the ROC graph (step 2b, ROC/AUC metrics, within the nodule and malignancy classification stage).

For our nodule data, that's from 3.25 mm (the smallest nodule) to 22.78 mm (the largest). If we pick a threshold value somewhere between those two values, we can then compute FPR(threshold) and TPR(threshold). If we set the FPR value to X and TPR to Y, we can plot a point that represents that threshold; and if we instead plot the FPR versus TPR for every possible threshold, we get a diagram called the receiver operating characteristic (ROC), shown in figure 14.6. The shaded area is the area under the (ROC) curve, or AUC. It is between 0 and 1, and higher is better.[5]

[5] Note that random predictions on a balanced dataset would result in an AUC of 0.5, so that gives us a floor for how good our classifier must be.

No one true way to measure false positives: Precision vs. false positive rate

The FPR here and the precision from chapter 12 are rates (between 0 and 1) that measure things that are not quite opposites. As we discussed, precision is TP / (TP + FP) and measures how many of the samples predicted to be positive will actually be positive. The FPR is FP / (FP + TN) and measures how many of the actually negative samples are predicted to be positive. For heavily imbalanced datasets (like the nodule versus non-nodule classification), our model might achieve a very good FPR (which is closely related to the cross-entropy criterion as a loss) while the precision—and thus the F1 score—is still very poor. A low FPR means we're weeding out a lot of what we're not interested in, but if we are looking for that proverbial needle, we still have mostly hay.
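To make the sidebar's point concrete, here is a tiny sketch with made-up counts for a heavily imbalanced problem: the FPR looks excellent while precision remains poor.

```python
# Hypothetical counts: 10,000 actual negatives, 10 actual positives.
tp, fn = 8, 2          # we catch most of the rare positives ...
fp, tn = 40, 9960      # ... while mislabeling only 0.4% of the negatives

fpr = fp / (fp + tn)          # fraction of actual negatives flagged positive
tpr = tp / (tp + fn)          # recall
precision = tp / (tp + fp)    # fraction of flagged items that are truly positive

print(f"FPR: {fpr:.3f}")              # 0.004 -- looks great
print(f"TPR/recall: {tpr:.3f}")       # 0.800
print(f"precision: {precision:.3f}")  # 0.167 -- mostly hay among the flagged items
```

Even with a near-zero FPR, five out of six flagged samples here are false positives, which is exactly the needle-in-a-haystack effect the sidebar describes.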
Figure 14.6 Receiver operating characteristic (ROC) curve for our baseline (diameter baseline, AUC = 0.901), with the 5.42 mm and 10.55 mm thresholds marked. The false positive rate is on the X-axis and the true positive rate on the Y-axis.

Here, we also call out two specific threshold values: diameters of 5.42 mm and 10.55 mm. We chose those two values because they give us somewhat reasonable endpoints for the range of thresholds we might consider, were we to need to pick a single threshold. Anything smaller than 5.42 mm, and we'd only be dropping our TPR. Larger than 10.55 mm, and we'd just be flagging malignant nodules as benign for no gain. The best threshold for this classifier will probably be in the middle somewhere.

How do we actually compute the values shown here? We first grab the candidate info list, filter out the annotated nodules, and get the malignancy label and diameter. For convenience, we also get the number of benign and malignant nodules.

Listing 14.8 p2ch14_malben_baseline.ipynb

# In[2]:
ds = p2ch14.dsets.MalignantLunaDataset(       # Takes the regular dataset and in particular the list of benign and malignant nodules
    val_stride=10, isValSet_bool=True)
nodules = ds.ben_list + ds.mal_list
is_mal = torch.tensor([n.isMal_bool for n in nodules])   # Gets lists of malignancy status and diameter
diam = torch.tensor([n.diameter_mm for n in nodules])
num_mal = is_mal.sum()                        # For normalization of the TPR and FPR, we take the number of malignant and benign nodules.
num_ben = len(is_mal) - num_mal

To compute the ROC curve, we need an array of the possible thresholds. We get this from torch.linspace, which takes the two boundary elements. We wish to start at zero predicted positives, so we go from maximal threshold to minimal. This is the 3.25 to 22.78 we already mentioned:

# In[3]:
threshold = torch.linspace(diam.max(), diam.min())

We then build a two-dimensional tensor in which the rows are per threshold, the columns are per-sample information, and the value is whether this sample is predicted as positive. This Boolean tensor is then filtered by whether the label of the sample is malignant or benign. We sum the rows to count the number of True entries.
Dividing by the number of malignant or benign nodules gives us the TPR and FPR—the two coordinates for the ROC curve:

# In[4]:
predictions = (diam[None] >= threshold[:, None])   # Indexing by None adds a dimension of size 1, just like .unsqueeze(ndx). This gets us a 2D tensor of whether a given nodule (in a column) is classified as malignant for a given diameter (in the row).
tp_diam = (predictions & is_mal[None]).sum(1).float() / num_mal    # With the predictions matrix, we can compute the TPRs and FPRs for each diameter by summing over the columns.
fp_diam = (predictions & ~is_mal[None]).sum(1).float() / num_ben

To compute the area under this curve, we use numeric integration by the trapezoidal rule (https://en.wikipedia.org/wiki/Trapezoidal_rule), where we multiply the average TPRs (on the Y-axis) between two points by the difference of the two FPRs (on the X-axis)—the area of trapezoids between two points of the graph. Then we sum the area of the trapezoids:

# In[5]:
fp_diam_diff = fp_diam[1:] - fp_diam[:-1]
tp_diam_avg = (tp_diam[1:] + tp_diam[:-1]) / 2
auc_diam = (fp_diam_diff * tp_diam_avg).sum()

Now, if we run pyplot.plot(fp_diam, tp_diam, label=f"diameter baseline, AUC={auc_diam:.3f}") (along with the appropriate figure setup we see in cell 8), we get the plot we saw in figure 14.6.

14.5.3 Reusing preexisting weights: Fine-tuning

One way to quickly get results (and often also get by with much less data) is to start not from random initializations but from a network trained on some task with related data. This is called transfer learning or, when training only the last few layers, fine-tuning.
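Here is the same threshold sweep and trapezoid computation on a four-nodule toy example (plain NumPy instead of torch, with made-up diameters and labels), so the broadcasting trick is easy to follow end to end:

```python
import numpy as np

diam = np.array([1.0, 3.0, 2.0, 4.0])          # made-up nodule diameters (mm)
is_mal = np.array([False, False, True, True])  # made-up malignancy labels
num_mal = is_mal.sum()
num_ben = len(is_mal) - num_mal

# Thresholds from largest to smallest diameter: we start at zero predicted positives.
threshold = np.linspace(diam.max(), diam.min(), 4)   # [4, 3, 2, 1]

# Rows: thresholds; columns: nodules. True where the nodule is called malignant.
predictions = diam[None] >= threshold[:, None]

tpr = (predictions & is_mal[None]).sum(1) / num_mal
fpr = (predictions & ~is_mal[None]).sum(1) / num_ben

# Trapezoidal rule for the area under the (fpr, tpr) curve.
auc = ((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2).sum()
print(auc)   # 0.75
```

The result matches the rank-based view of AUC: of the four benign/malignant diameter pairs, three are ordered correctly, giving 3/4 = 0.75.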
Looking at the highlighted part in figure 14.7, we see that in step 2c, we're going to cut out the last bit of the model and replace it with something new.

Figure 14.7 The end-to-end project we're implementing in this chapter, with a focus on fine-tuning (step 2c, fine-tuning the malignancy model).

Recall from chapter 8 that we could interpret the intermediate values as features extracted from the image—features could be edges or corners that the model detects or indications of any pattern. Before deep learning, it was very common to use handcrafted features similar to what we briefly experimented with when starting with convolutions. Deep learning has the network derive features useful for the task at hand, such as discrimination between classes, from the data. Now, fine-tuning has us mix the ancient ways (almost a decade ago!)
of using preexisting features and the new way of using learned features. We treat some (often large) part of the network as a fixed feature extractor and only train a relatively small part on top of it.

This generally works very well. Pretrained networks trained on ImageNet as we saw in chapter 2 are very useful as feature extractors for many tasks dealing with natural images—sometimes they also work amazingly for completely different inputs, from paintings or imitations thereof in style transfer to audio spectrograms. There are cases when this strategy works less well. For example, one of the common data augmentation strategies in training models on ImageNet is randomly flipping the images—a dog looking right is the same class as one looking left. As a result, the features between flipped images are very similar. But if we now try to use the pretrained model for a task where left or right matters, we will likely encounter accuracy problems. If we want to identify traffic signs, turn left here is quite different than turn right here; but a network building on ImageNet-based features will probably make lots of wrong assignments between the two classes.[6]

In our case, we have a network that has been trained on similar data: the nodule classification network. Let's try using that.

For the sake of exposition, we will stay very basic in our fine-tuning approach. In the model architecture in figure 14.8, the two bits of particular interest are highlighted: the last convolutional block and the head_linear module. The simplest fine-tuning is to cut out the head_linear part—in truth, we are just keeping the random initialization. After we try that, we will also explore a variant where we retrain both head_linear and the last convolutional block. We need to do the following:

- Load the weights of the model we wish to start with, except for the last linear layer, where we want to keep the initialization.
- Disable gradients for the parameters we do not want to train (everything except parameters with names starting with head).

When we do fine-tuning training on more than head_linear, we still only reset head_linear to random, because we believe the previous feature-extraction layers might not be ideal for our problem, but we expect them to be a reasonable starting point. This is easy: we add some loading code into our model setup.

6 You can try it yourself with the venerable German Traffic Sign Recognition Benchmark dataset at http://mng.bz/XPZ9.

Figure 14.8 The model architecture from chapter 11 (tail, backbone, and head), with the depth-1 (--finetune-depth=1) and depth-2 (--finetune-depth=2) weights highlighted

Listing 14.9 training.py:124, .initModel

    d = torch.load(self.cli_args.finetune, map_location='cpu')
    model_blocks = [
        n for n, subm in model.named_children()
        if len(list(subm.parameters())) > 0  # filters out top-level modules that have parameters (as opposed to the final activation)
    ]
    finetune_blocks = model_blocks[-self.cli_args.finetune_depth:]  # takes the last finetune_depth blocks; the default (if fine-tuning) is 1
    model.load_state_dict(
        {
            k: v for k, v in d['model_state'].items()
            if k.split('.')[0] not in model_blocks[-1]
        },
        strict=False,
    )
    for n, p in model.named_parameters():
        if n.split('.')[0] not in finetune_blocks:
            p.requires_grad_(False)

Passing strict=False lets us load only some weights of the module (with the filtered ones missing). The dictionary comprehension filters out the last block (the final linear part) and does not load it; starting from a fully initialized model would have us begin with (almost) all nodules labeled as malignant, because that output means "nodule" in the classifier we start from. The final loop then disables gradients for everything outside finetune_blocks, since we do not want to train those parameters.

We're set! We can train only the head by running this:

    python3 -m p2ch14.training \
        --malignant \
        --dataset MalignantLunaDataset \
        --finetune data/part2/models/cls_2020-02-06_14.16.55_final-nodule-nonnodule.best.state \
        --epochs 40 \
        malben-finetune

Let's run our model on the validation set and get the ROC curve, shown in figure 14.9. It's a lot better than random, but given that we're not outperforming the baseline, we need to see what is holding us back.

Figure 14.9 ROC curve for our fine-tuned model with a retrained final linear layer (diameter baseline, AUC = 0.901; finetuned-1 model, AUC = 0.888). Not too bad, but not quite as good as the baseline.

Figure 14.10 shows the TensorBoard graphs for our training. Looking at the validation loss, we see that while the AUC slowly increases and the loss decreases, even the training loss seems to plateau at a somewhat high level (say, 0.3) instead of trending toward zero. We could run a longer training to check whether it is just very slow; but comparing this to the loss progression discussed in chapter 5—in particular, figure 5.14—we can see that our loss value has not flatlined as badly as case A in the figure, but our problem with losses stagnating is qualitatively similar.
Back then, case A indicated that we did not have enough capacity, so we should consider the following three possible causes:

- Features (the output of the last convolution) obtained by training the network on nodule versus non-nodule classification are not useful for malignancy detection.
- The capacity of the head—the only part we are training—is not large enough.
- The network might have too little capacity overall.

If training only the fully connected part in fine-tuning is not enough, the next thing to try is to include the last convolutional block in the fine-tuning training. Happily, we introduced a parameter for that, so we can include the block4 part in our training (--finetune-depth is a new CLI parameter):

    python3 -m p2ch14.training \
        --malignant \
        --dataset MalignantLunaDataset \
        --finetune data/part2/models/cls_2020-02-06_14.16.55_final-nodule-nonnodule.best.state \
        --finetune-depth 2 \
        --epochs 10 \
        malben-finetune-twolayer

Once done, we can check our new best model against the baseline. Figure 14.11 looks more reasonable! We flag about 75% of the malignant nodules with almost no false positives. This is clearly better than the 65% the diameter baseline can give us. Trying to push beyond 75%, our model's performance falls back to the baseline. When we go back to the classification problem, we will want to pick a point on the ROC curve to balance true positives versus false positives. We are roughly on par with the baseline, and we will be content with that. In section 14.7, we hint at the many things you can explore to improve these results that didn't fit in this book.

Figure 14.10 AUC (left) and loss (right) for the fine-tuning of the last linear layer (depth 1 training and validation curves)
Looking at the loss curves in figure 14.12, we see that our model is now overfitting very early; thus the next step would be to look into further regularization methods. We will leave that for you.

There are more refined methods of fine-tuning. Some advocate gradually unfreezing the layers, starting from the top. Others propose to train the later layers with the usual learning rate and use a smaller rate for the lower layers. PyTorch natively supports using different optimization parameters like learning rates, weight decay, and momentum for different parameters by separating them into several parameter groups, which are just that: lists of parameters with separate hyperparameters (https://pytorch.org/docs/stable/optim.html#per-parameter-options).

Figure 14.11 ROC curve for our modified model (diameter baseline, AUC = 0.901; finetuned-1lr model, AUC = 0.888; finetuned-2lr model, AUC = 0.911). Now we're getting really close to the baseline.
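The per-parameter options just mentioned can be sketched in a few lines. This is a minimal illustration, not the book's code: the toy model and its submodule names (block4, head_linear, echoing the chapter's architecture) are assumptions for demonstration only.

```python
import torch
from torch import nn

# Toy stand-in for the classifier; the names block4 and head_linear
# mirror the chapter's model but the structure here is illustrative.
model = nn.Sequential()
model.add_module('block4', nn.Linear(16, 16))
model.add_module('head_linear', nn.Linear(16, 2))

# Two parameter groups: a small learning rate for the pretrained block,
# a larger one for the freshly initialized head.
optimizer = torch.optim.SGD(
    [
        {'params': model.block4.parameters(), 'lr': 1e-4},
        {'params': model.head_linear.parameters(), 'lr': 1e-2},
    ],
    lr=1e-3,       # default for any group that does not override it
    momentum=0.9,  # shared across both groups
)
```

Each dict becomes one entry in optimizer.param_groups, so the head can move quickly while the pretrained features change only slowly.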
Figure 14.12 AUC (left) and loss (right) for the fine-tuning of the last convolutional block and the fully connected layer (depth 1 and depth 2 training and validation curves)

14.5.4 More output in TensorBoard

While we are retraining the model, it might be worth looking at a few more outputs we could add to TensorBoard to see how we are doing. For histograms, TensorBoard has a premade recording function. For ROC curves, it does not, so we have an opportunity to meet the Matplotlib interface.

HISTOGRAMS

We can take the predicted probabilities for malignancy and make a histogram of them. Actually, we make two: one for (according to the ground truth) benign nodules and one for malignant nodules. These histograms give us a fine-grained view into the outputs of the model and let us see if there are large clusters of output probabilities that are completely wrong.

NOTE In general, shaping the data you display is an important part of getting quality information from the data. If you have many extremely confident correct classifications, you might want to exclude the leftmost bin.
Getting the right things onscreen will typically require some iteration of careful thought and experimentation. Don't hesitate to tweak what you're showing, but also take care to remember if you change the definition of a particular metric without changing the name. It can be easy to compare apples to oranges unless you're disciplined about naming schemes or removing now-invalid runs of data.

We first create some space in the tensor metrics_t holding our data. Recall that we defined the indices somewhere near the top.

Listing 14.10 training.py:31

    METRICS_LABEL_NDX = 0
    METRICS_PRED_NDX = 1
    METRICS_PRED_P_NDX = 2  # our new index, carrying the prediction probabilities (rather than prethresholded predictions)
    METRICS_LOSS_NDX = 3
    METRICS_SIZE = 4

Once that's done, we can call writer.add_histogram with a label, the data, and the global_step counter set to our number of training samples presented; this is similar to the scalar call earlier. We also pass in bins set to a fixed scale.

Listing 14.11 training.py:496, .logMetrics

    bins = np.linspace(0, 1)
    writer.add_histogram(
        'label_neg',
        metrics_t[METRICS_PRED_P_NDX, negLabel_mask],
        self.totalTrainingSamples_count,
        bins=bins,
    )
    writer.add_histogram(
        'label_pos',
        metrics_t[METRICS_PRED_P_NDX, posLabel_mask],
        self.totalTrainingSamples_count,
        bins=bins,
    )

Now we can take a look at our prediction distribution for benign samples and how it evolves over each epoch. We want to examine two main features of the histograms in figure 14.13. As we would expect if our network is learning anything, in the top row of benign samples and non-nodules, there is a mountain on the left where the network is very confident that what it sees is not malignant. Similarly, there is a mountain on the right for the malignant samples. But looking closer, we see the capacity problem of fine-tuning only one layer.
Focusing on the top-left series of histograms, we see that the mass to the left is somewhat spread out and does not seem to reduce much. There is even a small peak around 1.0, and quite a bit of probability mass is spread out across the entire range. This reflects the loss that didn't want to decrease below 0.3.

Figure 14.13 TensorBoard histogram display for fine-tuning the head only

Given this observation on the training loss, we would not have to look further, but let's pretend for a moment that we do. In the validation results on the right side, it appears that the probability mass away from the "correct" side is larger for the non-malignant samples in the top-right diagram than for the malignant ones in the bottom-right diagram. So the network gets non-malignant samples wrong more often than malignant ones. This might have us look into rebalancing the data to show more non-malignant samples. But again, this is when we pretend there was nothing wrong with the training on the left side. We typically want to fix training first!

For comparison, let's take a look at the same graph for our depth 2 fine-tuning (figure 14.14). On the training side (the left two diagrams), we have very sharp peaks at the correct answer and not much else. This reflects that training works well. On the validation side, we now see that the most pronounced artifact is the little peak at 0 predicted probability for malignancy in the bottom-right histogram. So our systematic problem is that we're misclassifying malignant samples as non-malignant. (This is the reverse of what we had earlier!) This is the overfitting we saw with two-layer fine-tuning. It probably would be good to pull up a few images of that type to see what's happening.

Figure 14.14 TensorBoard histogram display for fine-tuning with depth 2
ROC AND OTHER CURVES IN TENSORBOARD

As mentioned earlier, TensorBoard does not natively support drawing ROC curves. We can, however, use its ability to export any graph from Matplotlib. The data preparation looks just like in section 14.5.2: we use the data that we also plotted in the histogram to compute the TPR and FPR—tpr and fpr, respectively. We again plot our data, but this time we keep track of pyplot.figure and pass it to the SummaryWriter method add_figure.

Listing 14.12 training.py:482, .logMetrics

    fig = pyplot.figure()  # sets up a new Matplotlib figure; usually implicit, but here we need the handle
    pyplot.plot(fpr, tpr)  # uses arbitrary pyplot functions
    writer.add_figure('roc', fig, self.totalTrainingSamples_count)  # adds our figure to TensorBoard

Because this is given to TensorBoard as an image, it appears under that heading. We didn't draw the comparison curve or anything else, so as not to distract you from the actual function call, but we could use any Matplotlib facilities here. In figure 14.15, we see again that the depth-2 fine-tuning (left) overfits, while the head-only fine-tuning (right) does not.

Figure 14.15 Training ROC curves in TensorBoard. A slider lets us go through the iterations.

14.6 What we see when we diagnose

Following along with steps 3a, 3b, and 3c in figure 14.16, we now need to run the full pipeline from the step 3a segmentation on the left to the step 3c malignancy model on the right. The good news is that almost all of our code is in place already! We just need to stitch it together: the moment has come to actually write and run our end-to-end diagnosis script. We saw our first hints at handling the malignancy model back in the code in section 14.3.3.
If we pass an argument --malignancy-path to the nodule_analysis call, it runs the malignancy model found at this path and outputs the information. This works for both a single scan and the --run-validation variant. Be warned that the script will probably take a while to finish; even just the 89 CTs in the validation set took about 25 minutes.7

7 Most of the delay is from SciPy's processing of the connected components. At the time of writing, we are not aware of an accelerated implementation.

Figure 14.16 The end-to-end project we are implementing in this chapter, with a focus on end-to-end detection: 1a segmentation, 1b grouping, 1c sample tuples; 2a nodule classification, 2b ROC/AUC metrics, 2c fine-tuning the malignancy model; 3a candidate locations (…, IRC), 3b "Is nodule?", 3c "Is malignant?"

Let's see what we get:

                | Complete Miss | Filtered Out | Pred. Benign | Pred. Malignant
    Non-Nodules |               |       164893 |         1593 |             563
    Benign      |            12 |            3 |           70 |              17
    Malignant   |             1 |            6 |            9 |              36

Not too bad! We detect about 85% of the nodules and correctly flag about 70% of the malignant ones, end to end.8 While we have a lot of false positives, it would seem that having 16 of them per true nodule reduces what needs to be looked at (well, if it were not for the 30% false negatives). As we already warned in chapter 9, this isn't at the level where you could collect millions in funding for your medical AI startup,9 but it's a pretty reasonable starting point.
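The headline percentages can be double-checked with a little arithmetic over the benign and malignant rows of the table (the variable names below are ours):

```python
# Row entries from the diagnosis table, in order:
# complete miss, filtered out, predicted benign, predicted malignant.
benign = [12, 3, 70, 17]
malignant = [1, 6, 9, 36]

total_nodules = sum(benign) + sum(malignant)  # 154 nodules in the validation set
missed = benign[0] + benign[1] + malignant[0] + malignant[1]  # lost before classification

detection_rate = (total_nodules - missed) / total_nodules  # fraction of nodules detected
flag_rate = malignant[3] / sum(malignant)  # fraction of malignant nodules flagged

print(f"nodules detected: {detection_rate:.1%}")         # about 85.7%
print(f"malignant nodules flagged: {flag_rate:.1%}")     # about 69.2%
```

That matches the "about 85%" and "about 70%" figures quoted above.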
In general, we should be pretty happy that we're getting results that are clearly meaningful; and of course, our real goal has been to study deep learning along the way. We might next choose to look at the nodules that are actually misclassified. Keep in mind that for the task at hand, even the radiologists who annotated the dataset differed in opinion. We might stratify our validation set by how clearly they identified a nodule as malignant.

14.6.1 Training, validation, and test sets

There is one caveat that we must mention. While we didn't explicitly train our model on the validation set (although we ran this risk at the beginning of the chapter), we did choose the epoch of training to use based on the model's performance on the validation set. That's a bit of a data leak, too. In fact, we should expect our real-world performance to be slightly worse than this, as it's unlikely that whatever model performs best on our validation set will perform equally well on every other unseen set of data (on average, at least). For this reason, practitioners often split data into three sets:

- A training set, exactly as we've done here
- A validation set, used to determine which epoch of evolution of the model to consider "best"
- A test set, used to actually predict performance for the model (as chosen by the validation set) on unseen, real-world data

Adding a third set would have led us to pull another nontrivial chunk of our training data, which would have been somewhat painful, given how badly we had to fight overfitting already. It would also have complicated the presentation, so we purposely left it out. Were this a project with the resources to get more data and an imperative to build the best possible system to use in the wild, we'd have to make a different decision here and actively seek more data to use as an independent test set.
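A three-way split along these lines can be sketched by extending the stride-based approach this book uses for validation; the function and its defaults here are illustrative, not the book's code:

```python
def split_three_ways(candidates, val_stride=10, test_stride=10):
    """Partition candidates into train/validation/test sets by striding.

    Every test_stride-th item becomes test data; every val_stride-th
    item of the remainder becomes validation data; the rest is training.
    """
    test = candidates[::test_stride]
    remainder = [c for i, c in enumerate(candidates) if i % test_stride != 0]
    val = remainder[::val_stride]
    train = [c for i, c in enumerate(remainder) if i % val_stride != 0]
    return train, val, test

# With 154 candidates this yields disjoint sets of 124/14/16 items.
train, val, test = split_three_ways(list(range(154)))
```

The point is simply that the test set must be carved out before any epoch selection happens, so it never influences which model we call "best".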
8 Recall that our earlier "75% with almost no false positives" ROC number was looking at malignancy classification in isolation. Here, we are filtering out seven malignant nodules before we even get to the malignancy classifier.
9 If it were, we'd have done that instead of writing this book!

The general message is that there are subtle ways for bias to creep into our models. We should use extra care to control information leakage at every step of the way and verify its absence using independent data as much as possible. The price to pay for taking shortcuts is failing egregiously at a later stage, at the worst possible time: when we're closer to production.

14.7 What next? Additional sources of inspiration (and data)

Further improvements will be difficult to measure at this point. Our classification validation set contains 154 nodules, and our nodule classification model is typically getting at least 150 of them right, with most of the variance coming from epoch-by-epoch training changes. Even if we were to make a significant improvement to our model, we don't have enough fidelity in our validation set to tell for certain whether that change is an improvement! This is also very pronounced in the benign versus malignant classification, where the validation loss zigzags a lot. If we reduced our validation stride from 10 to 5, the size of our validation set would double, at the cost of one-ninth of our training data. That might be worth it if we wanted to try other improvements. Of course, we would also need to address the question of a test set, which would take away from our already limited training data.

We would also want to take a good look at the cases where the network does not perform as well as we'd like, to see if we can identify any pattern. But beyond that, let's talk briefly about some general ways we could improve our project.
In a way, this section is like section 8.5 in chapter 8. We will endeavor to fill you with ideas to try; don't worry if you don't understand each in detail.10

14.7.1 Preventing overfitting: Better regularization

Reflecting on what we did throughout part 2, in each of the three problems—the classifiers in chapter 11 and section 14.5, as well as the segmentation in chapter 13—we had overfitting models. Overfitting in the first case was catastrophic; we dealt with it by balancing the data and augmentation in chapter 12. This balancing of the data to prevent overfitting has also been the main motivation to train the U-Net on crops around nodules and candidates rather than full slices. For the remaining overfitting, we bailed out, stopping training early when the overfitting started to affect our validation results. This means preventing or reducing overfitting would be a great way to improve our results.

This pattern—get a model that overfits, and then work to reduce that overfitting—can really be seen as a recipe.11 So this two-step process should be used when we want to improve on the state we have achieved now.

10 At least one of the authors would love to write an entire book on the topics touched on in this section.
11 See also Andrej Karpathy's blog post "A Recipe for Training Neural Networks" at https://karpathy.github.io/2019/04/25/recipe for a more elaborate recipe.

CLASSIC REGULARIZATION AND AUGMENTATION

You might have noticed that we did not even use all the regularization techniques from chapter 8. For example, dropout would be an easy thing to try. While we have some augmentation in place, we could go further.
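As a sketch of how little code dropout costs, here is a convolutional block in the spirit of the chapter's model with nn.Dropout3d added; the structure, names, and placement are assumptions for illustration, not the book's actual block:

```python
import torch
from torch import nn

class BlockWithDropout(nn.Module):
    """Illustrative conv block with 3D dropout between the convolutions
    and the pooling; p and the placement are choices to experiment with."""
    def __init__(self, in_channels, conv_channels, p=0.1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_channels, conv_channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(conv_channels, conv_channels, kernel_size=3, padding=1)
        self.dropout = nn.Dropout3d(p)  # zeroes entire channels during training
        self.maxpool = nn.MaxPool3d(2, 2)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        x = self.dropout(x)  # active in train(), a no-op in eval()
        return self.maxpool(x)

block = BlockWithDropout(1, 8)
out = block(torch.randn(2, 1, 8, 12, 12))
```

Because Dropout3d drops whole feature maps, it plays better with convolutions than element-wise dropout, which tends to be undone by spatial correlation.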
One relatively powerful augmentation method we did not attempt to employ is elastic deformations, where we put "digital crumples" into the inputs.12 This makes for much more variability than rotation and flipping alone and would seem to be applicable to our tasks as well.

MORE ABSTRACT AUGMENTATION

So far, our augmentation has been geometrically inspired—we transformed our input to more or less look like something plausible we might see. It turns out that we need not limit ourselves to that type of augmentation.

Recall from chapter 8 that, mathematically, the cross-entropy loss we have been using is a measure of the discrepancy between two probability distributions—that of the predictions and the distribution that puts all probability mass on the label and can be represented by the one-hot vector for the label. If overconfidence is a problem for our network, one simple thing we could try is not using the one-hot distribution but rather putting a small probability mass on the "wrong" classes.13 This is called label smoothing.

We can also mess with inputs and labels at the same time. A very general and easy-to-apply augmentation technique for doing this has been proposed under the name of mixup:14 the authors propose to randomly interpolate both inputs and labels. Interestingly, with a linearity assumption for the loss (which is satisfied by binary cross entropy), this is equivalent to just manipulating the inputs with a weight drawn from an appropriately adapted distribution.15 Clearly, we don't expect blended inputs to occur when working on real data, but it seems that this mixing encourages stability of the predictions and is very effective.

BEYOND A SINGLE BEST MODEL: ENSEMBLING

One perspective we could have on the problem of overfitting is that our model is capable of working the way we want if we knew the right parameters, but we don't actually know them.
If we followed this intuition,16 we might try to come up with several sets of parameters (that is, several models), hoping that the weaknesses of each might compensate for the others. This technique of evaluating several models and combining the output is called ensembling. Simply put, we train several models and then, in order to predict, run all of them and average the predictions. When each individual model overfits (or we have taken a snapshot of the model just before we started to see the overfitting), it seems plausible that the models might start to make bad predictions on different inputs, rather than always overfitting the same sample first.

12 You can find a recipe (albeit aimed at TensorFlow) at http://mng.bz/Md5Q.
13 You can use the nn.KLDivLoss loss for this.
14 Hongyi Zhang et al., "mixup: Beyond Empirical Risk Minimization," https://arxiv.org/abs/1710.09412.
15 See Ferenc Huszár's post at http://mng.bz/aRJj/; he also provides PyTorch code.
16 We might expand that to be outright Bayesian, but we'll just go with this bit of intuition.

In ensembling, we typically use completely separate training runs or even varying model structures. But if we were to make it particularly simple, we could take several snapshots of the model from a single training run—preferably shortly before the end or before we start to observe overfitting. We might try to build an ensemble of these snapshots, but as they will still be somewhat close to each other, we could instead average them. This is the core idea of stochastic weight averaging.17 We need to exercise some care when doing so: for example, when our models use batch normalization, we might want to adjust the statistics, but we can likely get a small accuracy boost even without that.
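The "run all of them and average the predictions" recipe can be sketched in a few lines; the function name and the toy stand-in models are ours, assuming each model maps a batch to per-class probabilities:

```python
import torch

def ensemble_predict(models, x):
    """Average the predicted probabilities of several models.

    Each model is any callable mapping a batch to class probabilities;
    averaging the outputs is the simplest form of ensembling.
    """
    with torch.no_grad():
        stacked = torch.stack([m(x) for m in models])  # (n_models, batch, n_classes)
    return stacked.mean(dim=0)

# Two toy "models" standing in for separately trained snapshots:
models = [
    lambda x: torch.tensor([[0.2, 0.8]]),
    lambda x: torch.tensor([[0.4, 0.6]]),
]
avg = ensemble_predict(models, None)  # averages to [[0.3, 0.7]]
```

Stochastic weight averaging differs in that it averages the parameters of nearby snapshots into a single model rather than averaging their outputs.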
GENERALIZING WHAT WE ASK THE NETWORK TO LEARN

We could also look at multitask learning, where we require a model to learn additional outputs beyond the ones we will then evaluate,18 which has a proven track record of improving results. We could try to train on nodule versus non-nodule and benign versus malignant at the same time. Actually, the data source for the malignancy data provides additional labeling we could use as additional tasks; see the next section. This idea is closely related to the transfer-learning concept we looked at earlier, but here we would typically train both tasks in parallel rather than first doing one and then trying to move to the next.

If we do not have additional tasks but rather have a stash of additional unlabeled data, we can look into semi-supervised learning. An approach that was recently proposed and looks very effective is unsupervised data augmentation.19 Here, we train our model as usual on the labeled data. On the unlabeled data, we make a prediction on an unaugmented sample. We then take that prediction as the target for this sample and train the model to predict that target on the augmented sample as well. In other words, we don't know if the prediction is correct, but we ask the network to produce consistent outputs whether we augment or not.

When we run out of tasks of genuine interest but do not have additional data, we may look at making things up. Making up data is somewhat difficult (although people sometimes use GANs similar to the ones we briefly saw in chapter 2, with some success), so we instead make up tasks. This is when we enter the realm of self-supervised learning; the tasks are often called pretext tasks. A very popular crop of pretext tasks applies some sort of corruption to some of the inputs.

17 Pavel Izmailov and Andrew Gordon Wilson present an introduction with PyTorch code at http://mng.bz/gywe.
18 See Sebastian Ruder, "An Overview of Multi-Task Learning in Deep Neural Networks," https://arxiv.org/abs/1706.05098; but this is also a key idea in many areas.
19 Q. Xie et al., "Unsupervised Data Augmentation for Consistency Training," https://arxiv.org/abs/1904.12848.
Then we can train a network to reconstruct the original (for example, using a U-Net-like architecture) or train a classifier to detect real from corrupted data while sharing large parts of the model (such as the convolutional layers). This is still dependent on us coming up with a way to corrupt our inputs. If we don't have such a method in mind and aren't getting the results we want, there are other ways to do self-supervised learning. A very generic task would be to ask whether the features the model learns are good enough to let the model discriminate between different samples of our dataset. This is called contrastive learning.

To make things more concrete, consider the following: we take the extracted features from the current image and a largish number K of other images. This is our key set of features. Now we set up a classification pretext task as follows: given the features of the current image, the query, to which of the K + 1 key features does it belong? This might seem trivial at first, but even if there is perfect agreement between the query features and the key features for the correct class, training on this task encourages the features of the query to be maximally dissimilar from those of the K other images (in terms of being assigned low probability in the classifier output). Of course, there are many details to fill in; we recommend (somewhat arbitrarily) looking at momentum contrast.20

14.7.2 Refined training data

We could improve our training data in a few ways. We mentioned earlier that the malignancy classification is actually based on a more nuanced categorization by several radiologists. An easy way to use the data we discarded by collapsing it into the dichotomy "malignant or not?" would be to use the five classes directly. The radiologists' assessments could then be used as a smoothed label: we could one-hot-encode each one and then average over the assessments of a given nodule. So if four radiologists look at a nodule and two call it "indeterminate," one calls that same nodule "moderately suspicious," and the fourth labels it "highly suspicious," we would train on the cross entropy between the model output and the target probability distribution given by the vector (0, 0, 0.5, 0.25, 0.25). This would be similar to the label smoothing we mentioned earlier, but in a smarter, problem-specific way. We would, however, have to find a new way of evaluating these models, as we lose the simple accuracy, ROC, and AUC notions we have in binary classification.

Another way to use multiple assessments would be to train a number of models instead of one, each trained on the annotations given by an individual radiologist. At inference, we would then ensemble the models by, for example, averaging their output probabilities.

In the direction of the multiple tasks mentioned earlier, we could again go back to the PyLIDC-provided annotation data, where other classifications are provided for each annotation (subtlety, internal structure, calcification, sphericity, margin definedness, lobulation, spiculation, and texture; https://pylidc.github.io/annotation.html). We might have to learn a lot more about nodules first, though.

In the segmentation, we could try to see whether the masks provided by PyLIDC work better than those we generated ourselves. Since the LIDC data has annotations from multiple radiologists, it would be possible to group nodules into "high agreement" and "low agreement" groups.
It might be interesting to see if that corresponds to "easy" and "hard" to classify nodules, in terms of seeing whether our classifier gets almost all the easy ones right and only has trouble on the ones that were more ambiguous to the human experts. Or we could approach the problem from the other side, by defining how difficult nodules are to detect in terms of our model performance: "easy" (correctly classified after an epoch or two of training), "medium" (eventually gotten right), and "hard" (persistently misclassified) buckets.

20 K. He et al., "Momentum Contrast for Unsupervised Visual Representation Learning," https://arxiv.org/abs/1911.05722.

Beyond readily available data, one thing that would probably make sense is to further partition the nodules by malignancy type. Getting a professional to examine our training data in more detail and flag each nodule with a cancer type, and then forcing the model to report that type, could result in more efficient training. The cost to contract out that work is prohibitive for hobby projects, but paying might make sense in commercial contexts. Especially difficult cases could also be subject to a limited repeat review by human experts to check for errors. Again, that would require a budget but is certainly within reason for serious endeavors.

14.7.3 Competition results and research papers

Our goal in part 2 was to present a self-contained path from problem to solution, and we did that. But the particular problem of finding and classifying lung nodules has been worked on before; so if you want to dig deeper, you can also see what other people have done.

DATA SCIENCE BOWL 2017

While we have limited the scope of part 2 to the CT scans in the LUNA dataset, there is also a wealth of information available from Data Science Bowl 2017 (www.kaggle.com/c/data-science-bowl-2017), hosted by Kaggle (www.kaggle.com).
The data itself is no longer available, but there are many accounts of people describing what worked for them and what did not. For example, some of the Data Science Bowl (DSB) finalists reported that the detailed malignancy-level (1–5) information from LIDC was useful during training. Two highlights you could look at are these:21

- Second-place solution write-up by Daniel Hammack and Julian de Wit: http://mng.bz/Md48
- Ninth-place solution write-up by Team Deep Breath: http://mng.bz/aRAX

NOTE Many of the newer techniques we hinted at previously were not yet available to the DSB participants. The three years between the 2017 DSB and this book going to print are an eternity in deep learning!

One idea for a more legitimate test set would be to use the DSB dataset instead of reusing our validation set. Unfortunately, the DSB stopped sharing the raw data, so unless you happen to have access to an old copy, you would need another data source.

21 Thanks to the Internet Archive for saving them from redesigns.

LUNA PAPERS

The LUNA Grand Challenge has collected several results (https://luna16.grand-challenge.org/Results) that show quite a bit of promise. While not all of the papers provided include enough detail to reproduce the results, many do contain enough information to improve our project. You could review some of the papers and attempt to replicate approaches that seem interesting.

14.8 Conclusion

This chapter concludes part 2 and delivers on the promise we made back in chapter 9: we now have a working, end-to-end system that attempts to diagnose lung cancer from CT scans. Looking back at where we started, we've come a long way and, hopefully, learned a lot. We trained a model to do something interesting and difficult using publicly available data.
The key question is, "Will this be good for anything in the real world?" with the follow-up question, "Is this ready for production?" The definition of production critically depends on the intended use, so if we're wondering whether our algorithm can replace an expert radiologist, this is definitely not the case. We'd argue that this can represent version 0.1 of a tool that could in the future support a radiologist during clinical routine: for instance, by providing a second opinion about something that could have gone unnoticed.

Such a tool would require clearance by regulatory bodies of competence (like the Food and Drug Administration in the United States) in order for it to be employed outside of research contexts. Something we would certainly be missing is an extensive, curated dataset to further train and, even more importantly, validate our work. Individual cases would need to be evaluated by multiple experts in the context of a research protocol; and a proper representation of a wide spectrum of situations, from common presentations to corner cases, would be mandatory.

All these cases, from pure research use to clinical validation to clinical use, would require us to execute our model in an environment amenable to being scaled up. Needless to say, this comes with its own set of challenges, both technical and in terms of process. We'll discuss some of the technical challenges in chapter 15.

14.8.1 Behind the curtain

As we close out the modeling in part 2, we want to pull back the curtain a bit and give you a glimpse at the unvarnished truth of working on deep learning projects. Fundamentally, this book has presented a skewed take on things: a curated set of obstacles and opportunities; a well-tended garden path through the larger wilds of deep learning. We think this semi-organic series of challenges (especially in part 2) makes for a better book, and we hope a better learning experience.
It does not, however, make for a more realistic experience. In all likelihood, the vast majority of your experiments will not work out. Not every idea will be a discovery, and not every change will be a breakthrough. Deep learning is fiddly. Deep learning is fickle. And remember that deep learning is literally pushing at the forefront of human knowledge; it's a frontier that we are exploring and mapping further every day, right now. It's an exciting time to be in the field, but as with most fieldwork, you're going to get some mud on your boots.

In the spirit of transparency, here are some things that we tried, that we tripped over, that didn't work, or that at least didn't work well enough to bother keeping:

- Using HardTanh instead of Softmax for the classification network (it was simpler to explain, but it didn't actually work well).
- Trying to fix the issues caused by HardTanh by making the classification network more complicated (skip connections, and so on).
- Poor weight initialization causing training to be unstable, particularly for segmentation.
- Training on full CT slices for segmentation.
- Loss weighting for segmentation with SGD. It didn't work, and Adam was needed for it to be useful.
- True 3D segmentation of CT scans. It didn't work for us, but then DeepMind went and did it anyway.22 This was before we moved to cropping to nodules, and we ran out of memory, so you might try again based on the current setup.
- Misunderstanding the meaning of the class column from the LUNA data, which caused some rewrites partway through authoring the book.
- Accidentally leaving in an "I want results quickly" hack that threw away 80% of the candidate nodules found by the segmentation module, causing the results to look atrocious until we figured out what was going on (that cost an entire weekend!).
- A host of different optimizers, loss functions, and model architectures.
- Balancing the training data in various ways.

There are certainly more that we've forgotten. A lot of things went wrong before they went right! Please learn from our mistakes.

We might also add that for many things in this text, we just picked an approach; we emphatically do not imply that other approaches are inferior (many of them are probably better!). Additionally, coding style and project design typically differ a lot between people. In machine learning, it is very common for people to do a lot of programming in Jupyter Notebooks. Notebooks are a great tool to try things quickly, but they come with their own caveats: for example, around how to keep track of what you did. Finally, instead of using the caching mechanism we used with prepcache, we could have had a separate preprocessing step that wrote out the data as serialized tensors. Each of these approaches seems to be a matter of taste; even among the three authors, any one of us would do things slightly differently.23 It is always good to try things and find which one works best for you while remaining flexible when cooperating with your peers.

22 Stanislav Nikolov et al., "Deep Learning to Achieve Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy," https://arxiv.org/pdf/1809.04430.pdf
23 Oh, the discussions we've had!

14.9 Exercises

1 Implement a test set for classification, or reuse the test set from chapter 13's exercises. Use the validation set to pick the best epochs while training, but use the test set to evaluate the end-to-end project. How well does performance on the validation set line up with performance on the test set?
2 Can you train a single model that is able to do three-way classification, distinguishing among non-nodules, benign nodules, and malignant nodules in one pass?
  a What class-balancing split works best for training?
  b How does this single-pass model perform, compared to the two-pass approach we are using in the book?
3 We trained our classifier on annotations, but expect it to perform on the output of our segmentation. Use the segmentation model to build a list of non-nodules to use during training instead of the non-nodules provided.
  a Does the classification model performance improve when trained on this new set?
  b Can you characterize what kinds of nodule candidates see the biggest changes with the newly trained model?
4 The padded convolutions we use result in less than full context near the edges of the image. Compute the loss for segmented pixels near the edges of the CT scan slice, versus those in the interior. Is there a measurable difference between the two?
5 Try running the classifier on the entire CT by using overlapping 32 × 48 × 48 patches. How does this compare to the segmentation approach?

14.10 Summary

- An unambiguous split between training and validation (and test) sets is crucial. Here, splitting by patient is much less prone to getting things wrong. This is even more true when you have several models in your pipeline.
- Getting from pixel-wise marks to nodules can be achieved using very traditional image processing. We don't want to look down on the classics, but value these tools and use them where appropriate.
- Our diagnosis script performs both segmentation and classification. This allows us to diagnose a CT that we have not seen before, though our current Dataset implementation is not configured to accept series_uids from sources other than LUNA.
- Fine-tuning is a great way to fit a model while using a minimum of training data. Make sure the pretrained model has features relevant to your task, and make sure that you retrain a portion of the network with enough capacity.
- TensorBoard allows us to write out many different types of diagrams that help us determine what's going on. But this is not a replacement for looking at data on which our model works particularly badly.
- Successful training seems to involve a network that overfits at some stage, which we then regularize. We might as well take that as a recipe; and we should probably learn more about regularization.
- Training neural networks is about trying things, seeing what goes wrong, and improving on it. There usually isn't a magic bullet.
- Kaggle is an excellent source of project ideas for deep learning. Many new datasets have cash prizes for the top performers, and older contests have examples that can be used as starting points for further experimentation.

Part 3 Deployment

In part 3, we'll look at how to get our models to the point where they can be used. We saw how to build models in the previous parts: part 1 introduced the building and training of models, and part 2 thoroughly covered an example from start to finish, so the hard work is done. But no model is useful until you can actually use it. So, now we need to put the models out there and apply them to the tasks they are designed to solve.

This part is closer to part 1 in spirit, because it introduces a lot of PyTorch components. As before, we'll focus on applications and tasks we wish to solve rather than just looking at PyTorch for its own sake. In part 3's single chapter, we'll take a tour of the PyTorch deployment landscape as of early 2020. We'll get to know and use the PyTorch just-in-time compiler (JIT) to export models for use in third-party applications, and the C++ API for mobile support.
15 Deploying to production

This chapter covers
- Options for deploying PyTorch models
- Working with the PyTorch JIT
- Deploying a model server and exporting models
- Running exported and natively implemented models from C++
- Running models on mobile

In part 1 of this book, we learned a lot about models; and part 2 left us with a detailed path for creating good models for a particular problem. Now that we have these great models, we need to take them where they can be useful. Maintaining infrastructure for executing inference of deep learning models at scale can be impactful from an architectural as well as cost standpoint. While PyTorch started off as a framework focused on research, beginning with the 1.0 release, a set of production-oriented features were added that today make PyTorch an ideal end-to-end platform from research to large-scale production.

What deploying to production means will vary with the use case:

- Perhaps the most natural deployment for the models we developed in part 2 would be to set up a network service providing access to our models. We'll do this in two versions using lightweight Python web frameworks: Flask (http://flask.pocoo.org) and Sanic (https://sanicframework.org). The first is arguably one of the most popular of these frameworks, and the latter is similar in spirit but takes advantage of Python's new async/await support for asynchronous operations for efficiency.
- We can export our model to a well-standardized format that allows us to ship it using optimized model processors, specialized hardware, or cloud services. For PyTorch models, the Open Neural Network Exchange (ONNX) format fills this role.
- We may wish to integrate our models into larger applications. For this it would be handy if we were not limited to Python.
Thus we will explore using PyTorch models from C++, with the idea that this also is a stepping-stone to any language.

Finally, for some things like the image zebraification we saw in chapter 2, it may be nice to run our model on mobile devices. While it is unlikely that you will have a CT module for your mobile, other medical applications like do-it-yourself skin screenings may be more natural, and the user might prefer running on the device versus having their skin sent to a cloud service. Luckily for us, PyTorch has gained mobile support recently, and we will explore that.

As we learn how to implement these use cases, we will use the classifier from chapter 14 as our first example for serving, and then switch to the zebraification model for the other bits of deployment.

15.1 Serving PyTorch models

We'll begin with what it takes to put our model on a server. Staying true to our hands-on approach, we'll start with the simplest possible server. Once we have something basic that works, we'll take a look at its shortfalls and take a stab at resolving them. Finally, we'll look at what is, at the time of writing, the future. Let's get something that listens on the network.1

15.1.1 Our model behind a Flask server

Flask is one of the most widely used Python modules. It can be installed using pip:2

pip install Flask

1 To play it safe, do not do this on an untrusted network.
2 Or pip3 for Python 3. You also might want to run it from a Python virtual environment.

The API can be created by decorating functions.

Listing 15.1 flask_hello_world.py:1

from flask import Flask

app = Flask(__name__)

@app.route("/hello")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)

When started, the application will run at port 8000 and expose one route, /hello, that returns the "Hello World" string. At this point, we can augment our Flask server by loading a previously saved model and exposing it through a POST route.
We will use the nodule classifier from chapter 14 as an example. We'll use Flask's (somewhat curiously imported) request to get our data. More precisely, request.files contains a dictionary of file objects indexed by field names. We'll use JSON to parse the input, and we'll return a JSON string using Flask's jsonify helper.

Instead of /hello, we will now expose a /predict route that takes a binary blob (the pixel content of the series) and the related metadata (a JSON object containing a dictionary with shape as a key) as input files provided with a POST request, and returns a JSON response with the predicted diagnosis. More precisely, our server takes one sample (rather than a batch) and returns the probability that it is malignant.

In order to get to the data, we first need to decode the JSON to binary, which we can then decode into a one-dimensional array with numpy.frombuffer. We'll convert this to a tensor with torch.from_numpy and view its actual shape.

The actual handling of the model is just like in chapter 14: we'll instantiate LunaModel from chapter 14, load the weights we got from our training, and put the model in eval mode. As we are not training anything, we'll tell PyTorch that we will not want gradients when running the model by running in a with torch.no_grad() block.
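As an aside, the meta/blob wire format just described can be exercised without the server. Here is a minimal round-trip sketch using only the standard library (the helper names are ours; the server itself uses numpy.frombuffer and torch.from_numpy for the decoding step):

```python
import json
import struct

def encode_payload(values, shape):
    """Pack a flat list of floats plus shape metadata, mirroring the
    meta (JSON with a 'shape' key) and blob (raw float32 bytes) files
    the /predict route expects."""
    meta = json.dumps({"shape": list(shape)}).encode()
    blob = struct.pack(f"<{len(values)}f", *values)
    return meta, blob

def decode_payload(meta, blob):
    """Reverse of encode_payload: recover the flat values and the shape."""
    shape = json.loads(meta)["shape"]
    n = len(blob) // 4  # float32 is 4 bytes
    flat = list(struct.unpack(f"<{n}f", blob))
    return flat, shape

meta, blob = encode_payload([0.0, 1.0, 2.0, 3.0], (2, 2))
flat, shape = decode_payload(meta, blob)
print(flat, shape)  # → [0.0, 1.0, 2.0, 3.0] [2, 2]
```

The server-side view(*meta['shape']) call then restores the tensor's dimensionality from the same shape metadata.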
Listing 15.2 flask_server.py:1

import numpy as np
import sys
import os
import torch
from flask import Flask, request, jsonify
import json

from p2ch13.model_cls import LunaModel

app = Flask(__name__)

# Sets up our model, loads the weights, and moves to evaluation mode
model = LunaModel()
model.load_state_dict(torch.load(sys.argv[1],
                                 map_location='cpu')['model_state'])
model.eval()

def run_inference(in_tensor):
    with torch.no_grad():  # no autograd for us
        # LunaModel takes a batch and outputs a tuple (scores, probs)
        out_tensor = model(in_tensor.unsqueeze(0))[1].squeeze(0)
    probs = out_tensor.tolist()
    out = {'prob_malignant': probs[1]}
    return out

# We expect a form submission (HTTP POST) at the /predict endpoint
@app.route("/predict", methods=["POST"])
def predict():
    meta = json.load(request.files['meta'])  # our request will have one file called meta
    blob = request.files['blob'].read()
    in_tensor = torch.from_numpy(np.frombuffer(  # converts our data from binary blob to torch
        blob, dtype=np.float32))
    in_tensor = in_tensor.view(*meta['shape'])
    out = run_inference(in_tensor)
    return jsonify(out)  # encodes our response content as JSON

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
    print(sys.argv[1])

Run the server as follows:

python3 -m p3ch15.flask_server \
    data/part2/models/cls_2019-10-19_15.48.24_final_cls.best.state

We prepared a trivial client at cls_client.py that sends a single example. From the code directory, you can run it as

python3 p3ch15/cls_client.py

It should tell you that the nodule is very unlikely to be malignant. Clearly, our server takes inputs, runs them through our model, and returns the outputs. So are we done? Not quite. Let's look at what could be better in the next section.

15.1.2 What we want from deployment

Let's collect some things we desire for serving models.3 First, we want to support modern protocols and their features. Old-school HTTP is deeply serial, which means when a client wants to send several requests in the same connection, the next requests will only be sent after the previous request has been answered.
This is not very efficient if you want to send a batch of things. We will partially deliver here: our upgrade to Sanic certainly moves us to a framework that has the ambition to be very efficient.

When using GPUs, it is often much more efficient to batch requests than to process them one by one or fire them in parallel. So next, we have the task of collecting requests from several connections, assembling them into a batch to run on the GPU, and then getting the results back to the respective requesters. This sounds elaborate and (again, when we write this) seems not to be done very often in simple tutorials. That is reason enough for us to do it properly here! Note, though, that until latency induced by the duration of a model run is an issue (in that waiting for our own run is OK, but waiting for the batch that's running when the request arrives to finish, and then waiting for our run to give results, is prohibitive), there is little reason to run multiple batches on one GPU at a given time. Increasing the maximum batch size will generally be more efficient.

We want to serve several things in parallel. Even with asynchronous serving, we need our model to run efficiently on a second thread; this means we want to escape the (in)famous Python global interpreter lock (GIL) with our model.

We also want to do as little copying as possible. Both from a memory-consumption and a time perspective, copying things over and over is bad.

3 One of the earliest public talks discussing the inadequacy of Flask serving for PyTorch models is Christian Perone's "PyTorch under the Hood," http://mng.bz/xWdW.
Many HTTP things are encoded in Base64 (a format restricted to 6 bits per byte to encode binary in more or less alphanumeric strings), and, say, for images, decoding that to binary and then again to a tensor and then to the batch is clearly relatively expensive. We will partially deliver on this: we'll use streaming PUT requests to not allocate Base64 strings and to avoid growing strings by successively appending to them (which is terrible for performance for strings as much as tensors). We say we do not deliver completely because we are not truly minimizing the copying, though.

The last desirable thing for serving is safety. Ideally, we would have safe decoding. We want to guard against both overflows and resource exhaustion. Once we have a fixed-size input tensor, we should be mostly good, as it is hard to crash PyTorch starting from fixed-size inputs. The stretch to get there, decoding images and the like, is likely more of a headache, and we make no guarantees. Internet security is a large enough field that we will not cover it at all. We should note that neural networks are known to be susceptible to manipulation of the inputs to generate desired but wrong or unforeseen outputs (known as adversarial examples), but this isn't extremely pertinent to our application, so we'll skip it here.

Enough talk. Let's improve on our server.

15.1.3 Request batching

Our second example server will use the Sanic framework (installed via the Python package of the same name). This will give us the ability to serve many requests in parallel using asynchronous processing, so we'll tick that off our list. While we are at it, we will also implement request batching.

Asynchronous programming can sound scary, and it usually comes with lots of terminology.
But what we are doing here is just allowing functions to non-blockingly wait for results of computations or events.4

4 Fancy people call these asynchronous function generators or sometimes, more loosely, coroutines: https://en.wikipedia.org/wiki/Coroutine.

In order to do request batching, we have to decouple the request handling from running the model. Figure 15.1 shows the flow of the data.

At the top of figure 15.1 are the clients, making requests. One by one, these go through the top half of the request processor. They cause work items to be enqueued with the request information. When a full batch has been queued or the oldest request has waited for a specified maximum time, a model runner takes a batch from the queue, processes it, and attaches the result to the work items. These are then processed one by one by the bottom half of the request processor.

Figure 15.1 Dataflow with request batching (clients send requests to the top half of the request processor, which enqueues work items; needs_processing fires based on queue size or a timer; the model runner takes a work batch from the queue and attaches results to the work items, which the bottom half of the request processor returns)

IMPLEMENTATION

We implement this by writing two functions. The model runner function starts at the beginning and runs forever. Whenever we need to run the model, it assembles a batch of inputs, runs the model in a second thread (so other things can happen), and returns the result. The request processor then decodes the request, enqueues inputs, waits for the processing to be completed, and returns the output with the results.

In order to appreciate what asynchronous means here, think of the model runner as a wastepaper basket. All the figures we scribble for this chapter can be quickly disposed of to the right of the desk.
But every once in a while, either because the basket is full or when it is time to clean up in the evening, we need to take all the collected paper out to the trash can. Similarly, we enqueue new requests, trigger processing if needed, and wait for the results before sending them out as the answer to the request. Figure 15.2 shows our two functions in the blocks we execute uninterrupted before handing back to the event loop.

A slight complication relative to this picture is that we have two occasions when we need to process events: if we have accumulated a full batch, we start right away; and when the oldest request reaches the maximum wait time, we also want to run. We solve this by setting a timer for the latter.5

5 An alternative might be to forgo the timer and just run whenever the queue is not empty. This would potentially run smaller "first" batches, but the overall performance impact might not be so large for most applications.

Figure 15.2 Our asynchronous server consists of three blocks: request processor, model runner, and model execution. These blocks are a bit like functions, but the first two will yield to the event loop in between. (The model runner loops forever: it waits for work, takes a batch from the queue, schedules the next processor run if more work is left, launches the model in another thread, then extracts the result and signals readiness. The request processor, called for each request, decodes the input, adds a work item to the queue, schedules a processing run, waits for the work item to be ready, and returns the encoded result. Model execution runs in another thread so as not to block the event loop; once in the JIT, there is no GIL.)
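The signal-and-wait pattern that both halves of figure 15.2 rely on can be seen in miniature in this stdlib-only asyncio sketch (ours, not the book's code): one coroutine parks on an asyncio.Event, and another sets the event to wake it, just as the request processor signals the model runner:

```python
import asyncio

async def worker(event, results):
    await event.wait()          # non-blockingly wait for the signal
    results.append("processed")

async def main():
    event = asyncio.Event()
    results = []
    task = asyncio.ensure_future(worker(event, results))
    await asyncio.sleep(0)      # yield so the worker can start and block
    assert results == []        # the worker is parked on event.wait()
    event.set()                 # signal readiness; the worker resumes
    await task
    return results

print(asyncio.run(main()))      # → ['processed']
```

While the worker waits, the event loop is free to handle other requests; that is the whole point of the design.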
All our interesting code is in a ModelRunner class, as shown in the following listing.

Listing 15.3 request_batching_server.py:32, ModelRunner

class ModelRunner:
    def __init__(self, model_name):
        self.model_name = model_name
        self.queue = []                       # the queue
        self.queue_lock = None                # this will become our lock
        # Loads and instantiates the model. This is the (only) thing we will
        # need to change for switching to the JIT. For now, we import the
        # CycleGAN (with the slight modification of standardizing to 0..1
        # input and output) from p3ch15/cyclegan.py.
        self.model = get_pretrained_model(self.model_name,
                                          map_location=device)
        self.needs_processing = None          # our signal to run the model
        self.needs_processing_timer = None    # finally, the timer

ModelRunner first loads our model and takes care of some administrative things. In addition to the model, we also need a few other ingredients. We enter our requests into a queue. This is just a Python list in which we add work items at the back and remove them at the front.

When we modify the queue, we want to prevent other tasks from changing the queue out from under us. To this effect, we introduce a queue_lock that will be an asyncio.Lock provided by the asyncio module. As all asyncio objects we use here need to know the event loop, which is only available after we initialize the application, we temporarily set it to None in the instantiation. While locking like this may not be strictly necessary, because our methods do not hand back to the event loop while holding the lock, and operations on the queue are atomic thanks to the GIL, it does explicitly encode our underlying assumption. If we had multiple workers, we would need to look at locking. One caveat: Python's async locks are not threadsafe. (Sigh.)

ModelRunner waits when it has nothing to do. We need to signal it from RequestProcessor that it should stop slacking off and get to work. This is done via an asyncio.Event called needs_processing. ModelRunner uses the wait() method to wait for the needs_processing event. The RequestProcessor then uses set() to signal, and ModelRunner wakes up and clear()s the event.

Finally, we need a timer to guarantee a maximal wait time. This timer is created when we need it by using app.loop.call_at.
It sets the needs_processing event; we just reserve a slot now. So actually, sometimes the event will be set directly because a batch is complete, or when the timer goes off. When we process a batch before the timer goes off, we will clear it so we don't do too much work.

FROM REQUEST TO QUEUE

Next we need to be able to enqueue requests, the core of the first part of RequestProcessor in figure 15.2 (without the decoding and reencoding). We do this in our first async method, process_input.

Listing 15.4 request_batching_server.py:54

async def process_input(self, input):
    our_task = {"done_event": asyncio.Event(loop=app.loop),  # sets up the task data
                "input": input,
                "time": app.loop.time()}
    async with self.queue_lock:  # with the lock, we add our task and ...
        if len(self.queue) >= MAX_QUEUE_SIZE:
            raise HandlingError("I'm too busy", code=503)
        self.queue.append(our_task)
        # ... schedule processing. Processing will set needs_processing
        # if we have a full batch. If we don't and no timer is set, it
        # will set one to when the max wait time is up.
        self.schedule_processing_if_needed()
    # Waits (and hands back to the loop using await) for the processing to finish
    await our_task["done_event"].wait()
    return our_task["output"]

We set up a little Python dictionary to hold our task's information: the input, of course; the time it was queued; and a done_event to be set when the task has been processed. The processing adds an output.

Holding the queue lock (conveniently done in an async with block), we add our task to the queue and schedule processing if needed. As a precaution, we error out if the queue has become too large. Then all we have to do is wait for our task to be processed, and return it.

NOTE It is important to use the loop time (typically a monotonic clock), which may be different from time.time().
Otherwise, we might end up with events scheduled for processing before they have been queued, or no processing at all. This is all we need for the request processing (except decoding and encoding).

RUNNING BATCHES FROM THE QUEUE

Next, let's look at the model_runner function on the right side of figure 15.2, which does the model invocation.

Listing 15.5 request_batching_server.py:71, .run_model

async def model_runner(self):
    self.queue_lock = asyncio.Lock(loop=app.loop)
    self.needs_processing = asyncio.Event(loop=app.loop)
    while True:
        await self.needs_processing.wait()  # waits until there is something to do
        self.needs_processing.clear()
        if self.needs_processing_timer is not None:  # cancels the timer if it is set
            self.needs_processing_timer.cancel()
            self.needs_processing_timer = None
        async with self.queue_lock:
            # ... line 87
            # Grabs a batch and schedules the running of the next batch, if needed
            to_process = self.queue[:MAX_BATCH_SIZE]
            del self.queue[:len(to_process)]
            self.schedule_processing_if_needed()
        batch = torch.stack([t["input"] for t in to_process], dim=0)
        # we could delete inputs here...
        # Runs the model in a separate thread, moving data to the device and
        # then handing over to the model. We continue processing after it is done.
        result = await app.loop.run_in_executor(
            None, functools.partial(self.run_model, batch)
        )
        # Adds the results to the work items and sets the ready event
        for t, r in zip(to_process, result):
            t["output"] = r
            t["done_event"].set()
        del to_process

As indicated in figure 15.2, model_runner does some setup and then infinitely loops (but yields to the event loop in between). It is invoked when the app is instantiated, so it can set up queue_lock and the needs_processing event we discussed earlier.
Then it goes into the loop, await-ing the needs_processing event. When an event comes, first we check whether a timer is set and, if so, clear it, because we'll be processing things now. Then model_runner grabs a batch from the queue and, if needed, schedules the processing of the next batch. It assembles the batch from the individual tasks and launches a new thread that evaluates the model using asyncio's app.loop.run_in_executor. Finally, it adds the outputs to the tasks and sets done_event. And that's basically it.

The web framework, roughly looking like Flask with async and await sprinkled in, needs a little wrapper. And we need to start the model_runner function on the event loop. As mentioned earlier, locking the queue really is not necessary if we do not have multiple runners taking from the queue and potentially interrupting each other, but knowing our code will be adapted to other projects, we stay on the safe side of losing requests.

We start our server with

python3 -m p3ch15.request_batching_server data/p1ch2/horse2zebra_0.4.0.pth

Now we can test by uploading the image data/p1ch2/horse.jpg and saving the result:

curl -T data/p1ch2/horse.jpg http://localhost:8000/image --output /tmp/res.jpg

Note that this server does get a few things right: it batches requests for the GPU and runs asynchronously. But we still use the Python mode, so the GIL hampers running our model in parallel to the request serving in the main thread. It will not be safe for potentially hostile environments like the internet. In particular, the decoding of request data seems neither optimal in speed nor completely safe.

In general, it would be nicer if we could have decoding where we pass the request stream to a function along with a preallocated memory chunk, and the function decodes the image from the stream for us. But we do not know of a library that does things this way.
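To recap, the batch-or-timeout decision that drives needs_processing boils down to a simple predicate. This is a toy, synchronous restatement of the rule (the constants and names are ours; the server itself uses asyncio events and app.loop.call_at for the timer):

```python
MAX_BATCH_SIZE = 4
MAX_WAIT = 0.5  # seconds

def should_run(queue, now):
    """Run when a full batch is queued, or when the oldest request has
    waited at least MAX_WAIT. Each queue entry records its arrival time,
    like the 'time' key in the server's work items."""
    if len(queue) >= MAX_BATCH_SIZE:
        return True
    return bool(queue) and (now - queue[0]["time"]) >= MAX_WAIT

queue = [{"time": 0.0}, {"time": 0.1}]
print(should_run(queue, 0.2))  # → False: not full, oldest waited only 0.2 s
print(should_run(queue, 0.6))  # → True: oldest waited 0.6 s >= MAX_WAIT
```

Everything else in the server is plumbing to evaluate this predicate without blocking the event loop.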
15.2 Exporting models
So far, we have used PyTorch from the Python interpreter. But this is not always desirable: the GIL is still potentially blocking our improved web server, or we might want to run on embedded systems where Python is too expensive or unavailable. This is when we export our model. There are several ways in which we can play this. We might go away from PyTorch entirely and move to more specialized frameworks. Or we might stay within the PyTorch ecosystem and use the JIT, a just-in-time compiler for a PyTorch-centric subset of Python. Even when we then run the JITed model in Python, we might be after two of its advantages: sometimes the JIT enables nifty optimizations, or (as in the case of our web server) we just want to escape the GIL, which JITed models do. Finally (but we take some time to get there), we might run our model under libtorch, the C++ library PyTorch offers, or with the derived Torch Mobile.

15.2.1 Interoperability beyond PyTorch with ONNX
Sometimes we want to leave the PyTorch ecosystem with our model in hand, for example, to run on embedded hardware with a specialized model deployment pipeline. For this purpose, the Open Neural Network Exchange provides an interoperational format for neural networks and machine learning models (https://onnx.ai). Once exported, the model can be executed using any ONNX-compatible runtime, such as ONNX Runtime,6 provided that the operations in use in our model are supported by the ONNX standard and the target runtime. It is, for example, quite a bit faster on the Raspberry Pi than running PyTorch directly.
Beyond traditional hardware, a lot of specialized AI accelerator hardware supports ONNX (https://onnx.ai/supported-tools.html#deployModel).

In a way, a deep learning model is a program with a very specific instruction set, made of granular operations like matrix multiplication, convolution, relu, tanh, and so on. As such, if we can serialize the computation, we can reexecute it in another runtime that understands its low-level operations. ONNX is a standardization of a format describing those operations and their parameters.

Most of the modern deep learning frameworks support serialization of their computations to ONNX, and some of them can load an ONNX file and execute it (although this is not the case for PyTorch). Some low-footprint ("edge") devices accept ONNX files as input and generate low-level instructions for the specific device. And some cloud computing providers now make it possible to upload an ONNX file and see it exposed through a REST endpoint.

In order to export a model to ONNX, we need to run a model with a dummy input: the values of the input tensors don't really matter; what matters is that they are the correct shape and type. By invoking the torch.onnx.export function, PyTorch will trace the computations performed by the model and serialize them into an ONNX file with the provided name:

    torch.onnx.export(seg_model, dummy_input, "seg_model.onnx")

The resulting ONNX file can now be run in a runtime, compiled to an edge device, or uploaded to a cloud service. It can be used from Python after installing onnxruntime or onnxruntime-gpu and getting a batch as a NumPy array.

6 The code lives at https://github.com/microsoft/onnxruntime, but be sure to read the privacy statement! Currently, building ONNX Runtime yourself will get you a package that does not send things to the mothership.
Listing 15.6 onnx_example.py

    import onnxruntime

    sess = onnxruntime.InferenceSession("seg_model.onnx")
    input_name = sess.get_inputs()[0].name
    pred_onnx, = sess.run(None, {input_name: batch})

The ONNX runtime API uses sessions to define models and then calls the run method with a set of named inputs. This is a somewhat typical setup when dealing with computations defined as static graphs.

Not all TorchScript operators can be represented as standardized ONNX operators. If we export operations foreign to ONNX, we will get errors about unknown aten operators when we try to use the runtime.

15.2.2 PyTorch's own export: Tracing
When interoperability is not the key, but we need to escape the Python GIL or otherwise export our network, we can use PyTorch's own representation, called the TorchScript graph. We will see what that is and how the JIT that generates it works in the next section. But let's give it a spin right here and now.

The simplest way to make a TorchScript model is to trace it. This looks exactly like ONNX exporting, which isn't surprising, because that is what the ONNX export uses under the hood, too. Here we just feed dummy inputs into the model using the torch.jit.trace function. We import UNetWrapper from chapter 13, load the trained parameters, and put the model into evaluation mode.

Before we trace the model, there is one additional caveat: none of the parameters should require gradients, because using the torch.no_grad() context manager is strictly a runtime switch. Even if we trace the model within no_grad but then run it outside, PyTorch will record gradients. If we take a peek ahead at figure 15.4, we see why: after the model has been traced, we ask PyTorch to execute it. But the traced model will have parameters requiring gradients when executing the recorded operations, and they will make everything require gradients. To escape that, we would have to run the traced model in a torch.no_grad context.
To spare us this (from experience, it is easy to forget and then be surprised by the lack of performance), we loop through the model parameters and set all of them to not require gradients. But then all we need to do is call torch.jit.trace.7

Listing 15.7 trace_example.py

    import torch
    from p2ch13.model_seg import UNetWrapper

    seg_dict = torch.load(
        'data-unversioned/part2/models/p2ch13/seg_2019-10-20_15.57.21_none.best.state',
        map_location='cpu')
    seg_model = UNetWrapper(in_channels=8, n_classes=1, depth=4, wf=3,
                            padding=True, batch_norm=True, up_mode='upconv')
    seg_model.load_state_dict(seg_dict['model_state'])
    seg_model.eval()
    # Sets the parameters to not require gradients
    for p in seg_model.parameters():
        p.requires_grad_(False)

    dummy_input = torch.randn(1, 8, 512, 512)
    # The tracing
    traced_seg_model = torch.jit.trace(seg_model, dummy_input)

The tracing gives us a warning:

    TracerWarning: Converting a tensor to a Python index might cause the trace
    to be incorrect. We can't record the data flow of Python values, so this
    value will be treated as a constant in the future. This means the trace
    might not generalize to other inputs!
      return layer[:, :, diff_y:(diff_y + target_size[0]),
                   diff_x:(diff_x + target_size[1])]

This stems from the cropping we do in U-Net, but as long as we only ever plan to feed images of size 512 × 512 into the model, we will be OK. In the next section, we'll take a closer look at what causes the warning and how to get around the limitation it highlights if we need to. It will also be important when we want to convert models that are more complex than convolutional networks and U-Nets to TorchScript.
We can save the traced model

    torch.jit.save(traced_seg_model, 'traced_seg_model.pt')

and load it back without needing anything but the saved file, and then we can call it:

    loaded_model = torch.jit.load('traced_seg_model.pt')
    prediction = loaded_model(batch)

The PyTorch JIT will keep the model's state from when we saved it: that we had put it into evaluation mode and that our parameters do not require gradients. If we had not taken care of it beforehand, we would need to use with torch.no_grad(): in the execution.

7 Strictly speaking, this traces the model as a function. Recently, PyTorch gained the ability to preserve more of the module structure using torch.jit.trace_module, but for us, the plain tracing is sufficient.

TIP You can run the JITed and exported PyTorch model without keeping the source. However, we always want to establish a workflow where we automatically go from source model to installed JITed model for deployment. If we do not, we will find ourselves in a situation where we would like to tweak something with the model but have lost the ability to modify and regenerate. Always keep the source, Luke!

15.2.3 Our server with a traced model
Now is a good time to iterate our web server to what is, in this case, our final version. We can export the traced CycleGAN model as follows:

    python3 p3ch15/cyclegan.py data/p1ch2/horse2zebra_0.4.0.pth data/p3ch15/traced_zebra_model.pt

Now we just need to replace the call to get_pretrained_model with torch.jit.load in our server (and drop the now-unnecessary import of get_pretrained_model). This also means our model runs independent of the GIL, which is what we wanted our server to achieve here. For your convenience, we have put the small modifications in request_batching_jit_server.py.
We can run it with the traced model file path as a command-line argument. Now that we have had a taste of what the JIT can do for us, let's dive into the details!

15.3 Interacting with the PyTorch JIT
Debuting in PyTorch 1.0, the PyTorch JIT is at the center of quite a few recent innovations around PyTorch, not least of which is providing a rich set of deployment options.

15.3.1 What to expect from moving beyond classic Python/PyTorch
Quite often, Python is said to lack speed. While there is some truth to this, the tensor operations we use in PyTorch usually are in themselves large enough that the Python slowness between them is not a large issue. For small devices like smartphones, the memory overhead that Python brings might be more important. So keep in mind that frequently, the speedup gained by taking Python out of the computation is 10% or less.

Another immediate speedup from not running the model in Python only appears in multithreaded environments, but then it can be significant: because the intermediates are not Python objects, the computation is not affected by the menace of all Python parallelization, the GIL. This is what we had in mind earlier and realized when we used a traced model in our server.

Moving from the classic PyTorch way of executing one operation before looking at the next does give PyTorch a holistic view of the calculation: that is, it can consider the calculation in its entirety. This opens the door to crucial optimizations and higher-level transformations. Some of those apply mostly to inference, while others can also provide a significant speedup in training.

Let's use a quick example to give you a taste of why looking at several operations at once can be beneficial. When PyTorch runs a sequence of operations on the GPU, it calls a subprogram (kernel, in CUDA parlance) for each of them.
Every kernel reads the input from GPU memory, computes the result, and then stores the result. Thus most of the time is typically spent not computing things, but reading from and writing to memory. This can be improved on by reading only once, computing several operations, and then writing at the very end. This is precisely what the PyTorch JIT fuser does. To give you an idea of how this works, figure 15.3 shows the pointwise computation taking place in a long short-term memory (LSTM; https://en.wikipedia.org/wiki/Long_short-term_memory) cell, a popular building block for recurrent networks.

The details of figure 15.3 are not important to us here, but there are 5 inputs at the top, 2 outputs at the bottom, and 7 intermediate results represented as rounded boxes. By computing all of this in one go in a single CUDA function and keeping the intermediates in registers, the JIT reduces the number of memory reads from 12 to 5 and the number of writes from 9 to 2. These are the large gains the JIT gets us; it can reduce the time to train an LSTM network by a factor of four. This seemingly simple trick allows PyTorch to significantly narrow the gap between the speed of LSTM and generalized LSTM cells flexibly defined in PyTorch and the rigid but highly optimized LSTM implementation provided by libraries like cuDNN.

[Figure 15.3 LSTM cell pointwise operations. From five inputs at the top, this block computes two outputs at the bottom. The boxes in between are intermediate results that vanilla PyTorch will store in memory but the JIT fuser will just keep in registers.]
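The read/write arithmetic can be made concrete with a small, hypothetical accounting sketch (plain Python, not PyTorch code, and not the book's own tooling): we describe a pointwise graph as a list of ops and compare the memory traffic of running it kernel-by-kernel with that of a single fused kernel, which reads only the external inputs and writes only the final outputs. The exact unfused tally depends on counting conventions (for example, whether a value is re-read), so this toy model's unfused numbers differ slightly from the figure's, but the fused side matches: 5 reads and 2 writes.

```python
# Hypothetical memory-traffic accounting for a pointwise op graph.
# Each op is (output_name, input_names); values not produced by any op
# count as external inputs.
def traffic(ops, outputs):
    produced = {out for out, _ in ops}
    external = {i for _, ins in ops for i in ins if i not in produced}
    # Unfused: every kernel reads each operand from memory, writes its result.
    unfused_reads = sum(len(ins) for _, ins in ops)
    unfused_writes = len(ops)
    # Fused: one kernel reads each external input once, keeps intermediates
    # in registers, and writes only the declared outputs.
    fused_reads = len(external)
    fused_writes = len(outputs)
    return (unfused_reads, unfused_writes), (fused_reads, fused_writes)

# A rough stand-in for the LSTM cell's pointwise block (5 inputs, 2 outputs).
lstm_pointwise = [
    ("ingate",     ["ingate_pre"]),        # sigmoid
    ("forgetgate", ["forgetgate_pre"]),    # sigmoid
    ("cellgate",   ["cellgate_pre"]),      # tanh
    ("outgate",    ["outgate_pre"]),       # sigmoid
    ("fc",         ["forgetgate", "cx"]),  # forgetgate * cx
    ("ic",         ["ingate", "cellgate"]),
    ("cx_new",     ["fc", "ic"]),
    ("tanh_cx",    ["cx_new"]),
    ("hx",         ["outgate", "tanh_cx"]),
]
print(traffic(lstm_pointwise, outputs=["cx_new", "hx"]))
```

Whatever the precise unfused count, the fused kernel's traffic is bounded by inputs plus outputs, which is where the factor-of-four training speedup for LSTMs comes from.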
In summary, the speedup from using the JIT to escape Python is more modest than we might naively expect when we have been told that Python is awfully slow, but avoiding the GIL is a significant win for multithreaded applications. The large speedups in JITed models come from special optimizations that the JIT enables but that are more elaborate than just avoiding Python overhead.

15.3.2 The dual nature of PyTorch as interface and backend
To understand how moving beyond Python works, it is beneficial to mentally separate PyTorch into several parts. We saw a first glimpse of this in section 1.4. Our PyTorch torch.nn modules (which we first saw in chapter 6 and which have been our main tool for modeling ever since) hold the parameters of our network and are implemented using the functional interface: functions taking and returning tensors. These are implemented as a C++ extension, handed over to the C++-level autograd-enabled layer. (This then hands the actual computation to an internal library called ATen, performing the computation or relying on backends to do so, but this is not important.)

Given that the C++ functions are already there, the PyTorch developers made them into an official API. This is the nucleus of LibTorch, which allows us to write C++ tensor operations that look almost like their Python counterparts. As the torch.nn modules are Python-only by nature, the C++ API mirrors them in a namespace torch::nn that is designed to look a lot like the Python part but is independent.

This would allow us to redo in C++ what we did in Python. But that is not what we want: we want to export the model. Happily, there is another interface to the same functions provided by PyTorch: the PyTorch JIT. The PyTorch JIT provides a "symbolic" representation of the computation. This representation is the TorchScript intermediate representation (TorchScript IR, or sometimes just TorchScript).
We mentioned TorchScript in section 15.2.2 when discussing delayed computation. In the following sections, we will see how to get this representation of our Python models and how it can be saved, loaded, and executed. Similar to what we discussed for the regular PyTorch API, the PyTorch JIT functions to load, inspect, and execute TorchScript modules can also be accessed both from Python and from C++.

In summary, we have four ways of calling PyTorch functions, illustrated in figure 15.4: from both C++ and Python, we can either call functions directly or have the JIT as an intermediary. All of these eventually call the C++ LibTorch functions and from there ATen and the computational backend.

15.3.3 TorchScript
TorchScript is at the center of the deployment options envisioned by PyTorch. As such, it is worth taking a close look at how it works.

There are two straightforward ways to create a TorchScript model: tracing and scripting. We will look at each of them in the following sections. At a very high level, the two work as follows:

In tracing, which we used in section 15.2.2, we execute our usual PyTorch model using sample (random) inputs. The PyTorch JIT has hooks (in the C++ autograd interface) for every function that allow it to record the computation. In a way, it is like saying "Watch how I compute the outputs; now you can do the same." Given that the JIT only comes into play when PyTorch functions (and also nn.Modules) are called, you can run any Python code while tracing, but the JIT will only notice those bits (and notably be ignorant of control flow). When we use tensor shapes (usually a tuple of integers), the JIT tries to follow what's going on but may have to give up. This is what gave us the warning when tracing the U-Net.

In scripting, the PyTorch JIT looks at the actual Python code of our computation and compiles it into the TorchScript IR.
This means that, while we can be sure that every aspect of our program is captured by the JIT, we are restricted to those parts understood by the compiler. This is like saying "I am telling you how to do it; now you do the same." Sounds like programming, really.

[Figure 15.4 Many ways of calling into PyTorch]

We are not here for theory, so let's try tracing and scripting with a very simple function that adds inefficiently over the first dimension:

    # In[2]:
    def myfn(x):
        y = x[0]
        for i in range(1, x.size(0)):
            y = y + x[i]
        return y

We can trace it:

    # In[3]:
    inp = torch.randn(5, 5)
    traced_fn = torch.jit.trace(myfn, inp)
    print(traced_fn.code)

    # Out[3]:
    def myfn(x: Tensor) -> Tensor:
        y = torch.select(x, 0, 0)                          # indexing in the first line of our function
        y0 = torch.add(y, torch.select(x, 0, 1), alpha=1)  # our loop, but completely unrolled and
        y1 = torch.add(y0, torch.select(x, 0, 2), alpha=1) # fixed to 1...4 regardless of the size of x
        y2 = torch.add(y1, torch.select(x, 0, 3), alpha=1)
        _0 = torch.add(y2, torch.select(x, 0, 4), alpha=1)
        return _0

    TracerWarning: Converting a tensor to a Python index might cause the trace
    to be incorrect. We can't record the data flow of Python values, so this
    value will be treated as a constant in the future. This means the trace
    might not generalize to other inputs!

We see the big warning, and indeed, the code has fixed indexing and additions for five rows, and it would not deal as intended with four or six rows.
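The unrolling we just saw can be mimicked in plain Python to see why it happens. The following toy tracer (our own sketch, not the PyTorch JIT) records operations performed on a proxy value while ordinary Python control flow simply runs and vanishes, so the recorded trace is baked to one loop length:

```python
# A toy tracer: operations on Rec values are recorded into a trace list,
# while plain Python control flow executes and leaves no record.
class Rec:
    def __init__(self, trace, name):
        self.trace, self.name = trace, name

    def __getitem__(self, i):
        out = Rec(self.trace, f"{self.name}[{i}]")
        self.trace.append(("select", self.name, i, out.name))
        return out

    def __add__(self, other):
        out = Rec(self.trace, f"t{len(self.trace)}")
        self.trace.append(("add", self.name, other.name, out.name))
        return out

def myfn(x, n):              # n plays the role of x.size(0) during tracing
    y = x[0]
    for i in range(1, n):    # plain Python loop: the tracer never sees it
        y = y + x[i]
    return y

trace = []
myfn(Rec(trace, "x"), n=5)
for op in trace:             # the loop is gone; only unrolled ops remain
    print(op)
```

Tracing the same function with n=3 yields a different, shorter trace, which is exactly why a trace recorded at one input size does not generalize to another.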
This is where scripting helps:

    # In[4]:
    scripted_fn = torch.jit.script(myfn)
    print(scripted_fn.code)

    # Out[4]:
    def myfn(x: Tensor) -> Tensor:
        y = torch.select(x, 0, 0)
        _0 = torch.__range_length(1, torch.size(x, 0), 1)  # PyTorch constructs the range
        y0 = y                                             # length from the tensor size
        for _1 in range(_0):                  # our for loop, even if we have to take the
            i = torch.__derive_index(_1, 1, 1)  # funny-looking next line to get our index i
            y0 = torch.add(y0, torch.select(x, 0, i), alpha=1)  # our loop body, just a tad more verbose
        return y0

We can also print the scripted graph, which is closer to the internal representation of TorchScript:

    # In[5]:
    print(scripted_fn.graph)

    # Out[5]:
    graph(%x.1 : Tensor):
      %10 : bool = prim::Constant[value=1]()
      %2 : int = prim::Constant[value=0]()
      %5 : int = prim::Constant[value=1]()
      %y.1 : Tensor = aten::select(%x.1, %2, %2)       # the first assignment of y
      %7 : int = aten::size(%x.1, %2)                  # constructing the range is
      %9 : int = aten::__range_length(%5, %7, %5)      # recognizable after we see the code
      %y : Tensor = prim::Loop(%9, %10, %y.1)          # our for loop returns the value (y) it calculates
        block0(%11 : int, %y.6 : Tensor):
          %i.1 : int = aten::__derive_index(%11, %5, %5)
          %18 : Tensor = aten::select(%x.1, %2, %i.1)  # body of the for loop: selects a slice
          %y.3 : Tensor = aten::add(%y.6, %18, %5)     # and adds to y
          -> (%10, %y.3)
      return (%y)

This seems a lot more verbose than we need. In practice, you would most often use torch.jit.script in the form of a decorator:

    @torch.jit.script
    def myfn(x):
        ...

You could also do this with a custom trace decorator taking care of the inputs, but this has not caught on.

Although TorchScript (the language) looks like a subset of Python, there are fundamental differences. If we look very closely, we see that PyTorch has added type specifications to the code. This hints at an important difference: TorchScript is statically typed; every value (variable) in the program has one and only one type.
Also, the types are limited to those for which the TorchScript IR has a representation. Within the program, the JIT will usually infer the type automatically, but we need to annotate any non-tensor arguments of scripted functions with their types. This is in stark contrast to Python, where we can assign anything to any variable.

So far, we've traced functions to get scripted functions. But we graduated from just using functions in chapter 5 to using modules a long time ago. Sure enough, we can also trace or script models. These will then behave roughly like the modules we know and love. For both tracing and scripting, we pass an instance of Module to torch.jit.trace (with sample inputs) or torch.jit.script (without sample inputs), respectively. This will give us the forward method we are used to. If we want to expose other methods (this only works in scripting) to be called from the outside, we decorate them with @torch.jit.export in the class definition.

When we said that the JITed modules work like they did in Python, this includes the fact that we can use them for training, too. On the flip side, this means we need to set them up for inference (for example, using the torch.no_grad() context) just like our traditional models, to make them do the right thing.

With algorithmically relatively simple models (like the CycleGAN, classification models, and U-Net-based segmentation) we can just trace the model as we did earlier. For more complex models, a nifty property is that we can use scripted or traced functions from other scripted or traced code, and that we can use scripted or traced submodules when constructing and tracing or scripting a module.
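As a small illustration (a toy module of our own, not from the book's code), the following sketch scripts an nn.Module whose extra method is exposed with @torch.jit.export; note the type annotation on the non-tensor factor argument, which scripting requires:

```python
import torch
from torch import nn

class Scaler(nn.Module):
    """Toy module: forward scales by a parameter; an extra exported method
    scales by a plain Python float."""
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.full((3,), 2.0))

    def forward(self, x):
        return x * self.weight

    @torch.jit.export
    def scale_by(self, x, factor: float):
        # non-tensor argument: needs the type annotation under scripting
        return x * factor

scripted = torch.jit.script(Scaler())          # no sample inputs needed
out = scripted(torch.ones(2, 3))               # the usual forward
extra = scripted.scale_by(torch.ones(3), 3.0)  # the exported extra method
print(out.shape, float(extra[0]))
```

Had we traced the module instead, only forward would survive; the exported method is exactly what scripting buys us here.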
We can also trace functions that call nn.Modules, but then we need to set all parameters to not require gradients, as the parameters will be constants for the traced model.

As we have seen tracing already, let's look at a practical example of scripting in more detail.

15.3.4 Scripting the gaps of traceability
In more complex models, such as those from the Fast R-CNN family for detection or recurrent networks used in natural language processing, the bits with control flow like for loops need to be scripted. Similarly, if we need that flexibility, we will find the code bits the tracer warned about.

Listing 15.8 From utils/unet.py

    class UNetUpBlock(nn.Module):
        ...
        def center_crop(self, layer, target_size):
            _, _, layer_height, layer_width = layer.size()
            diff_y = (layer_height - target_size[0]) // 2
            diff_x = (layer_width - target_size[1]) // 2
            # The tracer warns here.
            return layer[:, :, diff_y:(diff_y + target_size[0]),
                         diff_x:(diff_x + target_size[1])]

        def forward(self, x, bridge):
            ...
            crop1 = self.center_crop(bridge, up.shape[2:])
            ...

What happens is that the JIT magically replaces the shape tuple up.shape with a 1D integer tensor with the same information. Now the slicing [2:] and the calculation of diff_x and diff_y are all traceable tensor operations. However, that does not save us, because the slicing then wants Python ints; and there, the reach of the JIT ends, giving us the warning.

But we can solve this issue in a straightforward way: we script center_crop. We slightly change the cut between caller and callee by passing up to the scripted center_crop and extracting the sizes there. Other than that, all we need is to add the @torch.jit.script decorator. The result is the following code, which makes the U-Net model traceable without warnings.
Listing 15.9 Rewritten excerpt from utils/unet.py

    @torch.jit.script
    def center_crop(layer, target):
        # Changes the signature, taking target instead of target_size;
        # gets the sizes within the scripted part
        _, _, layer_height, layer_width = layer.size()
        _, _, target_height, target_width = target.size()
        diff_y = (layer_height - target_height) // 2
        diff_x = (layer_width - target_width) // 2
        # The indexing uses the size values we got.
        return layer[:, :, diff_y:(diff_y + target_height),
                     diff_x:(diff_x + target_width)]

    class UNetUpBlock(nn.Module):
        ...
        def forward(self, x, bridge):
            ...
            # We adapt our call to pass up rather than the size.
            crop1 = center_crop(bridge, up)
            ...

Another option we could choose (but that we will not use here) would be to move unscriptable things into custom operators implemented in C++. The TorchVision library does that for some specialty operations in Mask R-CNN models.

15.4 LibTorch: PyTorch in C++
We have seen various ways to export our models, but so far, we have used Python. We'll now look at how we can forgo Python and work with C++ directly.

Let's go back to the horse-to-zebra CycleGAN example. We will now take the JITed model from section 15.2.3 and run it from a C++ program.

15.4.1 Running JITed models from C++
The hardest part about deploying PyTorch vision models in C++ is choosing an image library to load the data.8 Here, we go with the very lightweight library CImg (http://cimg.eu). If you are very familiar with OpenCV, you can adapt the code to use that instead; we just felt that CImg is easiest for our exposition.

Running a JITed model is very simple. We'll first show the image handling; it is not really what we are after, so we will do this very quickly.9

Listing 15.10 cyclegan_jit.cpp

    #include "torch/script.h"    // the PyTorch script header
    #define cimg_use_jpeg        // CImg with native JPEG support
    #include "CImg.h"
    using namespace cimg_library;

    int main(int argc, char **argv) {
      // Loads and decodes the image into a float array
      CImg<float> image(argv[2]);

8 But TorchVision may develop a convenience function for loading images.
9 The code works with PyTorch 1.4 and, hopefully, above.
(In PyTorch versions before 1.3, you needed data in place of data_ptr.)

      // Resizes to a smaller size
      image = image.resize(227, 227);

      // ...here we need to produce an output tensor from input

      // The method data_ptr() gives us a pointer to the tensor storage. With
      // it and the shape information, we can construct the output image.
      CImg<float> out_img(output.data_ptr<float>(), output.size(2),
                          output.size(3), 1, output.size(1));
      out_img.save(argv[3]);    // Saves the image
      return 0;
    }

For the PyTorch side, we include the C++ header torch/script.h. Then we need to set up and include the CImg library. In the main function, we load an image from a file given on the command line and resize it (in CImg). So we now have a 227 × 227 image in the CImg variable image. At the end of the program, we'll create an out_img of the same type from our (1, 3, 227, 227)-shaped tensor and save it. Don't worry about these bits. They are not the PyTorch C++ we want to learn, so we can just take them as is.

The actual computation is straightforward, too. We need to make an input tensor from the image, load our model, and run the input tensor through it.

Listing 15.11 cyclegan_jit.cpp

    // Puts the image data into a tensor
    auto input_ = torch::tensor(
        torch::ArrayRef<float>(image.data(), image.size()));
    // Reshapes and rescales to move from CImg conventions to PyTorch's
    auto input = input_.reshape({1, 3, image.height(),
                                 image.width()}).div_(255);
    // Loads the JITed model or function from a file
    auto module = torch::jit::load(argv[1]);
    // Packs the input into a (one-element) vector of IValues
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(input);
    // Calls the module and extracts the result tensor. For efficiency, the
    // ownership is moved, so if we held on to the IValue, it would be
    // empty afterward.
    auto output_ = module.forward(inputs).toTensor();
    // Makes sure our result is contiguous
    auto output = output_.contiguous().mul_(255);

Recall from chapter 3 that PyTorch keeps the values of a tensor in a large chunk of memory in a particular order. So does CImg, and we can get a pointer to this memory chunk (as a float array) using image.data() and the number of elements using image.size().
With these two, we can create a somewhat smarter reference: a torch::ArrayRef (which is just shorthand for pointer plus size; PyTorch uses those at the C++ level for data but also for returning sizes without copying). Then we can just pass that into the torch::tensor constructor, just as we would with a list.

TIP Sometimes you might want to use the similar-working torch::from_blob instead of torch::tensor. The difference is that tensor will copy the data. If you do not want copying, you can use from_blob, but then you need to take care that the underpinning memory is available during the lifetime of the tensor.

Our tensor is only 1D, so we need to reshape it. Conveniently, CImg uses the same ordering as PyTorch (channel, rows, columns). If not, we would need to adapt the reshaping and permute the axes as we did in chapter 4. As CImg uses a range of 0…255 and we made our model to use 0…1, we divide here and multiply later. This could, of course, be absorbed into the model, but we wanted to reuse our traced model.

Loading the traced model is very straightforward using torch::jit::load. Next, we have to deal with an abstraction PyTorch introduces to bridge between Python and C++: we need to wrap our input in an IValue (or several IValues), the generic data type for any value.
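The copy-versus-wrap distinction in the TIP above can be mimicked with nothing but the Python standard library; this is only an analogy, not LibTorch code: bytes(buf) copies the underlying storage (like torch::tensor), while memoryview(buf) wraps it (like torch::from_blob), so later writes to the buffer show through the view but not through the copy.

```python
# Analogy for torch::tensor (copies) vs. torch::from_blob (wraps): a plain
# Python buffer, a copying construction, and a non-copying view of it.
buf = bytearray([10, 20, 30])

copied = bytes(buf)     # like torch::tensor: owns its own copy of the data
view = memoryview(buf)  # like torch::from_blob: refers to buf's memory

buf[0] = 99             # mutate the underpinning memory

print(copied[0])        # still 10: the copy is unaffected
print(view[0])          # 99: the view sees the change
```

The lifetime caveat carries over, too: the view is only valid while buf is alive, just as a from_blob tensor is only valid while the wrapped memory is.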
A function in the JIT is passed a vector of IValues, so we declare that and then push_back our input tensor. This will automatically wrap our tensor into an IValue. We feed this vector of IValues to forward and get a single one back. We can then unpack the tensor in the resulting IValue with .toTensor.

Here we see a bit about IValues: they have a type (here, Tensor), but they could also be holding int64_ts or doubles or a list of tensors. For example, if we had multiple outputs, we would get an IValue holding a list of tensors, which ultimately stems from the Python calling conventions. When we unpack a tensor from an IValue using .toTensor, the IValue transfers ownership (becomes invalid). But let's not worry about it; we got a tensor back.

Because sometimes the model may return non-contiguous data (with gaps in the storage, as seen in chapter 3), but CImg reasonably requires us to provide it with a contiguous block, we call contiguous. It is important that we assign this contiguous tensor to a variable that is in scope until we are done working with the underlying memory. Just like in Python, PyTorch will free memory if it sees that no tensors are using it anymore.

So let's compile this! On Debian or Ubuntu, you need to install cimg-dev, libjpeg-dev, and libx11-dev to use CImg.

A common pitfall to avoid: pre- and postprocessing
When switching from one library to another, it is easy to forget to check that the conversion steps are compatible. They are non-obvious unless we look up the memory layout and scaling convention of PyTorch and the image processing library we use. If we forget, we will be disappointed by not getting the results we anticipate. Here, the model would go wild because it gets extremely large inputs. However, in the end, the output convention of our model is to give RGB values in the 0..1 range. If we used this directly with CImg, the result would look all black.
Other frameworks have other conventions: for example, OpenCV likes to store images as BGR instead of RGB, requiring us to flip the channel dimension. We always want to make sure the input we feed to the model in deployment is the same as what we fed into it in Python.

You can download a C++ library of PyTorch from the PyTorch page. But given that we already have PyTorch installed (we hope you have not been slacking off about trying out the things you read), we might as well use that; it comes with all we need for C++. We need to know where our PyTorch installation lives, so open Python and check torch.__file__, which may say /usr/local/lib/python3.7/dist-packages/torch/__init__.py. This means the CMake files we need are in /usr/local/lib/python3.7/dist-packages/torch/share/cmake/.

While using CMake seems like overkill for a single-source-file project, linking to PyTorch is a bit complex, so we just use the following as a boilerplate CMake file. (The code directory has a slightly longer version to work around Windows issues.)

Listing 15.12 CMakeLists.txt

cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(cyclegan-jit)        # Project name. Replace it with your own here and on the other lines.
find_package(Torch REQUIRED) # We need Torch.
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
add_executable(cyclegan-jit cyclegan_jit.cpp)        # Compile an executable named cyclegan-jit from the cyclegan_jit.cpp source file.
target_link_libraries(cyclegan-jit pthread jpeg X11) # The bits required for CImg. CImg itself is all-include, so it does not appear here.
target_link_libraries(cyclegan-jit "${TORCH_LIBRARIES}")
set_property(TARGET cyclegan-jit PROPERTY CXX_STANDARD 14)

It is best to make a build directory as a subdirectory of where the source code resides and then, in it, run CMake as

CMAKE_PREFIX_PATH=/usr/local/lib/python3.7/dist-packages/torch/share/cmake/ cmake ..

and finally make. (You might have to replace the path with where your PyTorch or LibTorch installation is located. Note that the C++ library can be more picky than the Python one in terms of compatibility: if you are using a CUDA-enabled library, you need to have the matching CUDA headers installed. If you get cryptic error messages about "Caffe2 using CUDA," you need to install a CPU-only version of the library, but CMake found a CUDA-enabled one.) This will build the cyclegan-jit program, which we can then run as follows:

./cyclegan-jit ../traced_zebra_model.pt ../../data/p1ch2/horse.jpg /tmp/z.jpg

We just ran our PyTorch model without Python. Awesome! If you want to ship your application, you likely want to copy the libraries from /usr/local/lib/python3.7/dist-packages/torch/lib to where your executable is, so that they will always be found.
15.4.2 C++ from the start: The C++ API

The C++ modular API is intended to feel a lot like the Python one. To get a taste, we will translate the CycleGAN generator into a model natively defined in C++, but without the JIT. We do, however, need the pretrained weights, so we'll save a traced version of the model (and here it is important to trace not a function but the model).

We'll start with some administrative details: includes and namespaces.

Listing 15.13 cyclegan_cpp_api.cpp

#include <torch/torch.h>   // The one-stop torch/torch.h header, plus CImg
#define cimg_use_jpeg
#include <CImg.h>

using torch::Tensor;       // Spelling out torch::Tensor can be tedious, so we import the name into the main namespace.

When we look at the source code in the file, we find that ConvTransposed2d is ad hoc defined, when ideally it should be taken from the standard library. The issue here is that the C++ modular API is still under development; with PyTorch 1.4, the premade ConvTranspose2d module cannot be used in Sequential because it takes an optional second argument. (Even so, this is a great improvement over PyTorch 1.3, where we needed to implement custom modules for ReLU, InstanceNorm2d, and others.) Usually we could just leave Sequential, as we did for Python, but we want our model to have the same structure as the Python CycleGAN generator from chapter 2.
Next, let's look at the residual block.

Listing 15.14 Residual block in cyclegan_cpp_api.cpp

struct ResNetBlock : torch::nn::Module {
  torch::nn::Sequential conv_block;

  ResNetBlock(int64_t dim)
      : conv_block(                  // Initializes Sequential, including its submodules
            torch::nn::ReflectionPad2d(1),
            torch::nn::Conv2d(torch::nn::Conv2dOptions(dim, dim, 3)),
            torch::nn::InstanceNorm2d(
                torch::nn::InstanceNorm2dOptions(dim)),
            torch::nn::ReLU(/*inplace=*/true),
            torch::nn::ReflectionPad2d(1),
            torch::nn::Conv2d(torch::nn::Conv2dOptions(dim, dim, 3)),
            torch::nn::InstanceNorm2d(
                torch::nn::InstanceNorm2dOptions(dim))) {
    register_module("conv_block", conv_block);  // Always remember to register the modules you assign, or bad things will happen!
  }

  Tensor forward(const Tensor &inp) {           // As might be expected, our forward function is pretty simple.
    return inp + conv_block->forward(inp);
  }
};

Just as we would in Python, we register a subclass of torch::nn::Module. Our residual block has a sequential conv_block submodule. And just as we did in Python, we need to initialize our submodules, notably Sequential. We do so using the C++ initialization statement. This is similar to how we construct submodules in Python in the __init__ constructor. Unlike Python, C++ does not have the introspection and hooking capabilities that enable redirection of __setattr__ to combine assignment to a member with registration.

Since the lack of keyword arguments makes parameter specification awkward with default arguments, modules (like tensor factory functions) typically take an options argument.
Optional keyword arguments in Python correspond to methods of the options object that we can chain. For example, the Python module nn.Conv2d(in_channels, out_channels, kernel_size, stride=2, padding=1) that we need to convert translates to torch::nn::Conv2d(torch::nn::Conv2dOptions(in_channels, out_channels, kernel_size).stride(2).padding(1)). This is a bit more tedious, but you're reading this because you love C++ and aren't deterred by the hoops it makes you jump through.

We should always take care that registration and assignment to members are in sync, or things will not work as expected: for example, loading and updating parameters during training will happen to the registered module, but the actual module being called is a member. This synchronization was done behind the scenes by the Python nn.Module class, but it is not automatic in C++; failing to keep the two in sync will cause us many headaches.

In contrast to what we did (and should do!) in Python, we need to call m->forward(…) for our modules. Some modules can also be called directly, but for Sequential, this is not currently the case.

A final comment on calling conventions is in order: depending on whether you modify tensors provided to functions, tensor arguments should be passed as const Tensor& for tensors that are left unchanged, or as Tensor if they are changed. (This is a bit blurry, because you can create a new tensor sharing memory with an input and modify it in place, but it's best to avoid that if possible.) Tensors should be returned as Tensor. Wrong argument types like non-const references (Tensor&) will lead to unparsable compiler errors.

In the main generator class, we'll follow a typical pattern in the C++ API more closely by naming our class ResNetGeneratorImpl and promoting it to a torch module ResNetGenerator using the TORCH_MODULE macro. The background is that we want to mostly handle modules as references or shared pointers. The wrapped class accomplishes this.
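The chaining works because each setter returns the options object itself. A small Python mirror of the pattern (our own illustrative class, not the real Conv2dOptions) makes this explicit:

```python
class Conv2dOptions:
    """Illustrative Python mirror of the method-chaining options pattern
    used by torch::nn::Conv2dOptions (not the real class)."""

    def __init__(self, in_channels, out_channels, kernel_size):
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.stride_ = 1     # defaults, like the C++ options object
        self.padding_ = 0

    def stride(self, value):
        self.stride_ = value
        return self          # returning self is what enables chaining

    def padding(self, value):
        self.padding_ = value
        return self


# The analogue of Conv2dOptions(in_channels, out_channels, 3).stride(2).padding(1):
opts = Conv2dOptions(3, 64, 3).stride(2).padding(1)
assert (opts.stride_, opts.padding_) == (2, 1)
```

The same trick is common in Java builders; it trades keyword arguments for one method call per option.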
Listing 15.15 ResNetGenerator in cyclegan_cpp_api.cpp

struct ResNetGeneratorImpl : torch::nn::Module {
  torch::nn::Sequential model;

  ResNetGeneratorImpl(int64_t input_nc = 3, int64_t output_nc = 3,
                      int64_t ngf = 64, int64_t n_blocks = 9) {
    TORCH_CHECK(n_blocks >= 0);
    model->push_back(torch::nn::ReflectionPad2d(3));  // Adding modules to the Sequential container in the constructor allows us to add a variable number of modules in a for loop.
    ...
    model->push_back(torch::nn::Conv2d(               // An example of Options in action
        torch::nn::Conv2dOptions(ngf * mult, ngf * mult * 2, 3)
            .stride(2)
            .padding(1)));
    ...
    register_module("model", model);
  }

  Tensor forward(const Tensor &inp) { return model->forward(inp); }
};
TORCH_MODULE(ResNetGenerator);  // Creates a wrapper ResNetGenerator around our ResNetGeneratorImpl class. As archaic as it seems, the matching names are important here.

That's it—we've defined the perfect C++ analogue of the Python ResNetGenerator model. Now we only need a main function to load the parameters and run our model. Loading the image with CImg and converting from image to tensor and from tensor back to image are the same as in the previous section. To include some variation, we'll display the image instead of writing it to disk.

Listing 15.16 cyclegan_cpp_api.cpp main

ResNetGenerator model;                    // Instantiates our model
...                                       // (We spare you some tedious things.)
torch::load(model, argv[1]);              // Loads the parameters
...
cimg_library::CImg<float> image(argv[2]);
image.resize(400, 400);
auto input_ =
    torch::tensor(torch::ArrayRef<float>(image.data(), image.size()));
auto input = input_.reshape({1, 3, image.height(), image.width()});
torch::NoGradGuard no_grad;               // Declaring a guard variable is the equivalent of the torch.no_grad() context.
model->eval();                            // As in Python, eval mode is turned on (for our model, it would not be strictly relevant).
auto output = model->forward(input);      // Again, we call forward rather than the model.
...
cimg_library::CImg<float> out_img(output.data_ptr<float>(),
                                  output.size(3), output.size(2),
                                  1, output.size(1));
cimg_library::CImgDisplay disp(out_img, "See a C++ API zebra!");
while (!disp.is_closed()) {               // Displaying the image, we need to wait for a key rather than immediately exiting our program.
  disp.wait();
}

The interesting changes are in how we create and run the model. Just as expected, we instantiate the model by declaring a variable of the model type.
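The reason for pushing modules onto Sequential inside the constructor, rather than listing them all in an initializer, is that a loop can add a variable number of blocks (n_blocks of them). A toy Python sketch of the same construction pattern (with trivial functions standing in for modules; nothing here is LibTorch code):

```python
def build_pipeline(n_blocks):
    """Build a sequential pipeline with a variable number of stages,
    mirroring the push_back-in-a-loop pattern of ResNetGeneratorImpl."""
    assert n_blocks >= 0            # mirrors the TORCH_CHECK guard
    stages = [lambda x: x + 1.0]    # stand-in for the fixed leading layers
    for _ in range(n_blocks):       # variable number of (toy) blocks
        stages.append(lambda x: x * 2.0)

    def forward(x):
        for stage in stages:        # apply the stages in order, like Sequential
            x = stage(x)
        return x

    return forward


model = build_pipeline(3)
assert model(1.0) == 16.0           # (1 + 1) * 2 * 2 * 2
```

This is exactly what makes Sequential convenient for architectures whose depth is a constructor argument.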
We load the model using torch::load (here it is important that we wrapped the model). While this looks very familiar to PyTorch practitioners, note that it works on JIT-saved files rather than Python-serialized state dictionaries.

When running the model, we need the equivalent of with torch.no_grad():. This is provided by instantiating a variable of type NoGradGuard and keeping it in scope for as long as we do not want gradients; you can put it in a { … } block if you need to limit how long gradients are turned off. Just like in Python, we set the model into evaluation mode by calling model->eval(). This time around, we call model->forward with our input tensor and get a tensor as a result—no JIT is involved, so we do not need IValue packing and unpacking.

Phew. Writing this in C++ was a lot of work for the Python fans that we are. We are glad that we only promised to do inference here; but of course, LibTorch also offers optimizers, data loaders, and much more. The main reason to use the API is, of course, when you want to create models and neither the JIT nor Python is a good fit. For your convenience, CMakeLists.txt also contains the instructions for building cyclegan-cpp-api, so building is just like in the previous section.
We can run the program as

./cyclegan_cpp_api ../traced_zebra_model.pt ../../data/p1ch2/horse.jpg

But we knew what the model would be doing, didn't we?

15.5 Going mobile

As the last variant of deploying a model, we will consider deployment to mobile devices. When we want to bring our models to mobile, we are typically looking at Android and/or iOS. Here, we'll focus on Android.

The C++ parts of PyTorch—LibTorch—can be compiled for Android, and we could access that from an app written in Java using the Android Java Native Interface (JNI). But we really only need a handful of functions from PyTorch—loading a JITed model, making inputs into tensors and IValues, running them through the model, and getting results back. To save us the trouble of using the JNI, the PyTorch developers wrapped these functions into a small library called PyTorch Mobile.

The stock way of developing apps in Android is to use the Android Studio IDE, and we will be using it, too. But this means there are a few dozen files of administrativa, which also happen to change from one Android version to the next. As such, we focus on the bits that turn one of the Android Studio templates (Java App with Empty Activity) into an app that takes a picture, runs it through our zebra-CycleGAN, and displays the result. Sticking with the theme of the book, we will be efficient with the Android bits (and they can be painful compared with writing PyTorch code) in the example app.

To infuse life into the template, we need to do three things. First, we need to define a UI. To keep things as simple as we can, we have two elements: a TextView named headline that we can click to take and transform a picture, and an ImageView to show our picture, which we call image_view.
We will leave the picture-taking to the camera app (which you would likely avoid doing in an app for a smoother user experience), because dealing with the camera directly would blur our focus on deploying PyTorch models. (We are very proud of the topical metaphor.)

Then, we need to include PyTorch as a dependency. This is done by editing our app's build.gradle file and adding pytorch_android and pytorch_android_torchvision.

Listing 15.17 Additions to build.gradle

dependencies {
    ...
    implementation 'org.pytorch:pytorch_android:1.4.0'
    implementation 'org.pytorch:pytorch_android_torchvision:1.4.0'
}

The dependencies section is very likely already there; if not, add it at the bottom. The pytorch_android library gets the core things mentioned in the text. The helper library pytorch_android_torchvision—perhaps a bit immodestly named when compared to its larger TorchVision sibling—contains a few utilities to convert bitmap objects to tensors (but, at the time of writing, not much more). We also need to add our traced model as an asset.

Finally, we can get to the meat of our shiny app: the Java class derived from Activity that contains our main code. We'll just discuss an excerpt here. It starts with imports and model setup.

Listing 15.18 MainActivity.java, part 1

...
import org.pytorch.IValue;
import org.pytorch.Module;
import org.pytorch.Tensor;
import org.pytorch.torchvision.TensorImageUtils;
...
public class MainActivity extends AppCompatActivity {
    private org.pytorch.Module model;   // Holds our JITed model

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        ...
        try {
            model = Module.load(assetFilePath(this,
                "traced_zebra_model.pt"));       // Loads the module from a file
        } catch (IOException e) {                // In Java, we have to catch the exceptions.
            Log.e("Zebraify", "Error reading assets", e);
            finish();
        }
        ...
    }
    ...
}

We need some imports from the org.pytorch namespace. In the typical style that is a hallmark of Java, we import IValue, Module, and Tensor, which do what we might expect, as well as the class org.pytorch.torchvision.TensorImageUtils, which holds utility functions to convert between tensors and images. First, of course, we need to declare a variable holding our model. Then, when our app is started—in onCreate of our activity—we'll load the module using the Module.load
method from the location given as an argument. There is a slight complication, though: apps' data is provided by the supplier as assets that are not easily accessible from the filesystem. For this reason, a utility method called assetFilePath (taken from the PyTorch Android examples) copies the asset to a location in the filesystem. Finally, in Java, we need to catch the exceptions that our code throws, unless we want to (and are able to) declare the method we are coding as throwing them in turn.

When we get an image from the camera app using Android's Intent mechanism, we need to run it through our model and display it. This happens in the onActivityResult event handler.
Listing 15.19 MainActivity.java, part 2

@Override
protected void onActivityResult(int requestCode, int resultCode,
                                Intent data) {       // Executed when the camera app takes a picture
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        Bitmap bitmap = (Bitmap) data.getExtras().get("data");
        final float[] means = {0.0f, 0.0f, 0.0f};    // Normalization parameters; the bitmap data is already 0..1, so a 0 shift and a scaling divisor of 1 leave it unchanged.
        final float[] stds = {1.0f, 1.0f, 1.0f};
        final Tensor inputTensor =
            TensorImageUtils.bitmapToFloat32Tensor(  // Gets a tensor from a bitmap, combining steps like TorchVision's ToTensor and Normalize
                bitmap, means, stds);
        final Tensor outputTensor = model.forward(   // Almost like what we did in C++
            IValue.from(inputTensor)).toTensor();
        Bitmap output_bitmap = tensorToBitmap(outputTensor, means, stds,
            Bitmap.Config.RGB_565);                  // tensorToBitmap is our own invention.
        image_view.setImageBitmap(output_bitmap);
    }
}

Converting the bitmap we get from Android to a tensor is handled by the TensorImageUtils.bitmapToFloat32Tensor function (a static method), which takes two float arrays, means and stds, in addition to the bitmap. Here we specify the mean and standard deviation of our input data(set), which will then be mapped to zero mean and unit standard deviation, just like TorchVision's Normalize transform. Android already gives us the images in the 0..1 range that we need to feed into our model, so we specify a mean of 0 and a standard deviation of 1 to prevent the normalization from changing our image.

Around the actual call to model.forward, we then do the same IValue wrapping and unwrapping dance that we did when using the JIT in C++, except that our forward takes a single IValue rather than a vector of them. Finally, we need to get back to a bitmap. Here PyTorch will not help us, so we need to define our own tensorToBitmap (and submit the pull request to PyTorch).
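The arithmetic behind the means/stds choice is simple enough to check: with a mean of 0 and a standard deviation of 1, the (x - mean) / std mapping is the identity. A quick Python sketch of the per-channel computation (a toy helper of our own, not the Android API):

```python
def normalize(pixel, means, stds):
    """Per-channel (x - mean) / std, the mapping applied by
    TorchVision's Normalize and by bitmapToFloat32Tensor."""
    return tuple((c - m) / s for c, m, s in zip(pixel, means, stds))


pixel = (0.2, 0.5, 0.8)            # channel values already in 0..1
identity = normalize(pixel, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
assert identity == pixel           # mean 0, std 1: the image is unchanged

# With nonzero mean and non-unit std, values are shifted and scaled:
shifted = normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5), (0.25, 0.25, 0.25))
assert shifted == (0.0, 0.0, 0.0)
```

Passing a model's actual training statistics here is how you would reproduce a Normalize transform from the Python pipeline.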
We spare you the details here, as they are tedious and full of copying (from the tensor to a float[] array to an int[] array containing ARGB values to the bitmap), but it is as it is. It is designed to be the inverse of bitmapToFloat32Tensor.

And that's all we need to do to get PyTorch into Android. Using the minimal additions to the code we left out here to request a picture, we have a Zebraify Android app that looks like what we see in figure 15.5. Well done! (At the time of writing, PyTorch Mobile is still relatively young, and you may hit rough edges. On PyTorch 1.3, the colors were off on an actual 32-bit ARM phone while working in the emulator; the reason is likely a bug in one of the computational backend functions that are only used on ARM. With PyTorch 1.4 and a newer phone (64-bit ARM), it seemed to work better.)

We should note that we end up with a full version of PyTorch with all ops on Android. This will, in general, also include operations you will not need for a given task, leading to the question of whether we could save some space by leaving them out. It turns out that starting with PyTorch 1.4, you can build a customized version of the PyTorch library that includes only the operations you need (see https://pytorch.org/mobile/android/#custom-build).

15.5.1 Improving efficiency: Model design and quantization

If we want to explore mobile in more detail, our next step is to try to make our models faster. When we wish to reduce the memory and compute footprint of our models, the first thing to look at is streamlining the model itself: that is, computing the same or very similar mappings from inputs to outputs with fewer parameters and operations. This is often called distillation.
The details of distillation vary—sometimes we try to shrink the model by eliminating small or irrelevant weights (examples include the Lottery Ticket Hypothesis and WaveRNN); in other examples, we combine several layers of a net into one (DistilBERT) or even train a fully different, simpler model to reproduce the larger model's outputs (OpenNMT's original CTranslate). We mention this because these modifications are likely to be the first step in getting models to run faster.

Another approach is to reduce the footprint of each parameter and operation: instead of expending the usual 32 bits per parameter in the form of a float, we convert our model to work with integers (a typical choice is 8-bit). This is quantization. (In contrast to quantization, (partially) moving to 16-bit floating point for training is usually called reduced-precision or, if some bits stay 32-bit, mixed-precision training.)

Figure 15.5 Our CycleGAN zebra app

PyTorch does offer quantized tensors for this purpose. They are exposed as a set of scalar types similar to torch.float, torch.double, and torch.long (compare section 3.5). The most common quantized tensor scalar types are torch.quint8 and torch.qint8, representing numbers as unsigned and signed 8-bit integers, respectively. PyTorch uses a separate scalar type here in order to use the dispatch mechanism we briefly looked at in section 3.11.
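To get a feel for what an 8-bit representation does numerically, here is a toy Python sketch of symmetric fixed-point quantization (a deliberate simplification of what PyTorch's quantized tensors do; the helper names are ours): floats are mapped to signed integers via a per-tensor scale, and dequantizing recovers approximations of the original values.

```python
def quantize(xs, num_bits=8):
    """Toy symmetric fixed-point quantization: map floats to signed
    integers using a single per-tensor scale. (No handling of the
    all-zero edge case in this sketch.)"""
    qmax = 2 ** (num_bits - 1) - 1           # 127 for 8 bits
    scale = max(abs(x) for x in xs) / qmax   # the largest value maps to 127
    return [round(x / scale) for x in xs], scale


def dequantize(qs, scale):
    return [q * scale for q in qs]


weights = [0.5, -1.27, 0.02, 1.27]
qs, scale = quantize(weights)
assert qs == [50, -127, 2, 127]              # scale is 1.27 / 127, about 0.01
restored = dequantize(qs, scale)
assert all(abs(w - r) < 1e-6 for w, r in zip(weights, restored))
```

Real quantization schemes add a zero point for asymmetric ranges and per-channel scales, but the essential trade is the same: 8-bit storage and integer arithmetic in exchange for rounding error.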
It might seem surprising that using 8-bit integers instead of 32-bit floating point works at all; typically there is a slight degradation in results, but not much. Two things seem to contribute. First, if we consider rounding errors as essentially random, and convolutions and linear layers as weighted averages, we may expect rounding errors to typically cancel. (Fancy people would refer to the Central Limit Theorem here. And indeed, we must take care that the independence, in the statistical sense, of rounding errors is preserved. For example, we usually want zero, a prominent output of ReLU, to be exactly representable; otherwise, all the zeros would be changed by the exact same quantity in rounding, leading to errors adding up rather than canceling.) This allows reducing the relative precision from more than 20 bits in 32-bit floating point to the 7 bits that signed 8-bit integers offer. The other thing quantization does (in contrast to training with 16-bit floating point) is move from floating point to fixed precision (per tensor or channel). This means the largest values are resolved to 7-bit precision, and values that are one-eighth of the largest values to only 7 – 3 = 4 bits. But if things like L1 regularization (briefly mentioned in chapter 8) work, we might hope that similar effects allow us to afford less precision for the smaller values in our weights when quantizing. In many cases, they do.

Quantization debuted with PyTorch 1.3 and is still a bit rough in terms of supported operations in PyTorch 1.4. It is rapidly maturing, though, and we recommend checking it out if you are serious about computationally efficient deployment.

15.6 Emerging technology: Enterprise serving of PyTorch models

We may ask ourselves whether all the deployment aspects discussed so far should involve as much coding as they do. Sure, it is common enough for someone to code all that. As of early 2020, while we are busy with the finishing touches to the book, we have great expectations for the near future; but at the same time, we feel that the deployment landscape will significantly change by the summer.

Currently, RedisAI (https://github.com/RedisAI/redisai-py), which one of the authors is involved with, is waiting to apply Redis goodness to our models.
PyTorch has just experimentally released TorchServe (after this book is finalized; see https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/#torchserve-experimental). Similarly, MLflow (https://mlflow.org) is building out more and more support, and Cortex (https://cortex.dev) wants us to use it to deploy models. For the more specific task of information retrieval, there is also EuclidesDB (https://euclidesdb.readthedocs.io/en/latest) to do AI-based feature databases.

Exciting times, but unfortunately, they do not sync with our writing schedule. We hope to have more to tell in the second edition (or a second book)!

15.7 Conclusion

This concludes our short tour of how to get our models out to where we want to apply them. While the ready-made Torch serving is not quite there yet as we write this, when it arrives you will likely want to export your models through the JIT—so you'll be glad we went through it here. In the meantime, you now know how to deploy your model to a network service, in a C++ application, or on mobile. We look forward to seeing what you will build!

Hopefully we've also delivered on the promise of this book: a working knowledge of deep learning basics, and a level of comfort with the PyTorch library. We hope you've enjoyed reading as much as we've enjoyed writing. (More, actually; writing books is hard!)

15.8 Exercises

As we close out Deep Learning with PyTorch, we have one final exercise for you:

1 Pick a project that sounds exciting to you. Kaggle is a great place to start looking. Dive in.
You have acquired the skills and learned the tools you need to succeed. We can't wait to hear what you do next; drop us a line on the book's forum and let us know!

15.9 Summary

- We can serve PyTorch models by wrapping them in a Python web server framework such as Flask.
- By using JITed models, we can avoid the GIL even when calling them from Python, which is a good idea for serving.
- Request batching and asynchronous processing help use resources efficiently, in particular when inference is on the GPU.
- To export models beyond PyTorch, ONNX is a great format. ONNX Runtime provides a backend for many purposes, including the Raspberry Pi.
- The JIT allows you to export and run arbitrary PyTorch code in C++ or on mobile with little effort. Tracing is the easiest way to get JITed models; you might need to use scripting for some particularly dynamic parts.
- There is also good support for C++ (and an increasing number of other languages) for running models both JITed and natively.
- PyTorch Mobile lets us easily integrate JITed models into Android or iOS apps.
- For mobile deployments, we want to streamline the model architecture and quantize models if possible.
- A few deployment frameworks are emerging, but a standard isn't quite visible yet.
10 –13 reasons for using 7 –9 def __len__ method 272dense tensors 65 DenseNet 226 deployment enterprise serving of PyTorch models 476 exporting models 455 –458 ONNX 455 –456 tracing 456 –458 interacting with PyTorch JIT 458 –465 dual nature of PyTorch as interface and backend 460 expectations 458 –460 scripting gaps of traceability 464 –465 TorchScript 461 –464 LibTorch 465 –472 C++ API 468 –472 running JITed models from C++ 465 –468 mobile 472 –476 serving PyTorch models 446–454 Flask server 446 –448 goals of deployment 448–449 request batching 449 –454 depth of network 223 – 228 building very deep models 226 –228 initialization 228skip connections 223 –226 device argument 64device attribute 63 –64 device variable 215diameter_mm 258Dice loss 389 –392 collecting metrics 392 loss weighting 391 diceLoss_g 391DICOM (Digital Imaging and Communications in Medicine) 76 , 256, 267 DICOM UID 264 Dirac distribution 187discrete convolution 195discrete cross-correlations 195 discrimination 337discriminator network 28diskcache library 274 , 384 dispatching mechanism 65DistilBERT 475distillation 475 DistributedDataParallel 286 doTraining function 296, 301, 314 doValidation function 301, 393, 397 downsampling 203 –204 dropout 25 , 220–222 Dropout module 221DSB (Data Science Bowl) 438 dsets.py:32 260 dtype argument 64 managing 51 –52 precision levels 51specifying numeric types with 50 –51 dtype torch.float 65dull batches 367 E edge detection kernel 201einsum function 48embedding text as blueprint 100 –101 data representation with tensors 98 –100 embeddings 96 , 99 end-to-end analysis 405 –407 bridging CT segmentation and nodule candidate classification 408 –416 classification to reduce false positives 412 –416 grouping voxels into nodule candidates411–412 segmentation 410 –411 diagnosis script 432 –434 independence of validation set 407 –408 predicting malignancy 417 –431 classifying by diameter 419 –422 getting malignancy information 417 –418 reusing preexisting weights 422 –427 
TensorBoard 428 –431" Deep-Learning-with-PyTorch.pdf,"INDEX 483 end-to-end analysis (continued) quantitative validation 416–417 training, validation, and test sets 433 –434 English Corpora 94ensembling 435 –436 enterprise serving 476 enumerateWithEstimate function 297 , 306–307 epoch_ndx 301epochs 116 , 212–213 error function 144 –145 eval mode 25 F F1 score overview 328 –332 updating logging output to include 332 face-to-age prediction model 345 –346 false negatives 324 , 395 false positives 321 –322, 326, 329, 395, 401 falsePos_count 333falsifying images, pretrained networks for 27 –33 CycleGAN 29 –30 GAN game 28generating images 30 –33 Fashion-MNIST 166Fast R-CNN 246feature engineering 256feature extractor 423fine-tuning 101 , 422 FishNet 246Flask 446 –448 flip augmentation 352float array 466float method 52 float32 type 169 float32 values 274floating-point numbers 40 –42, 50, 77 fnLoss_g 391for batch_tup in self.train_dl 289 for loop 226 , 464 forward function 152 , 207, 218, 225 forward method 294 , 368, 385 forward pass 22FPR (false positive rate) 419 –421FPRED (False positive reduction) 251 from_blob 466 fullCt_bool 380 fully automated system 414 function docstring 307 G GAN (generative adversarial network) game 17 , 28 generalized classes 345 generalized tensors 65 –66 generator network 28 geometric mean 331 get_pretrained_model 458getattr function 418 getCandidateInfoDict function 375 getCandidateInfoList function 259 , 275, 373, 375, 377 getCt value 274 getCtRawCandidate function 274 , 376, 383 getCtRawNodule function 272__getitem__ method 272 –274, 351, 381 _getitem__ method 272 getitem method 166 getItem_fullSlice method 382getItem_trainingCrop 383 getRawNodule function 271 ghost pixels 199 GIL (global interpreter lock) 449 global_step parameter 314 , 428 Google Colab 63Google OAuth 252 GPUs (graphical processing units) 41 moving tensors to 62 –64 training networks on 215 –217 grad attribute 124 –125 grad_fn 179gradient descent algorithm 113 –122 
applying derivatives to model 115 computing derivatives 115 data visualization 122 decreasing loss 113 –114 defining gradient function 116Iterating to fit model 116 –119 overtraining 118 –119 training loop 116 –117 normalizing inputs 119 –121 grid_sample function 347, 349, 385 grouping 247 , 406, 408, 413 groupSegmentationOutput method 410 H h5py library, serializing tensors to HDF5 with 67 –68 HardTanh 440Hardtanh function 148hardware for deep learning 13–15 harmonic mean 329hasAnnotation_bool flag 378HDF5, serializing tensors to 67–68 head_linear module 423--help command 282hidden layer 158histograms 311 , 428–431 HU (Hounsfield units) 264–265, 293 hyperparameter search 287hyperparameter tuning 118hyperparameters 184 I identity mapping 225IDRI (Image Database Resource Initiative) 377 ILSVRC (ImageNet Large Scale Visual Recognition Challenge) 18 , 20 image data representation 71 –75 3D images data representation 75 –76 loading 76 adding color channels 72changing layout 73 –74 loading image files 72 –73 normalizing data 74 –75 Image object 268image recognition CIFAR-10 dataset 165 –172 data transforms 168 –170 Dataset class 166 –167 downloading 166normalizing data 170 –172" Deep-Learning-with-PyTorch.pdf,"INDEX 484 image recognition (continued) example network 172 –191 building dataset 173 –174 fully connected model 174 –175 limits of 189 –191 loss for classifying 180 –182 output of classifier 175 –176 representing output as probabilities 176 –180 training the classifier 182–189 image_a 394image.size() method 466 imageio module 72 –73, 76 ImageNet 17 , 423 ImageView 472 img array 73 img_t tensor 47 in-memory caching 260in-place operations 55 indexing ops 53 indexing tensors into storages 54 –55 list indexing in Python vs. 
42 range indexing 46 inference 25 –27 __init__ constructor 470 init constructor 156 __init__ method 264 , 283, 385 init parameter 218 _init_weights function 295 input object 22 input voxels 292instance segmentation 360 ÌnstanceNorm2d 469 interval scale 80 _irc suffix 268 _irc variable 256 isMal_bool flag 378 isValSet_bool parameter 275IterableDataset 166 IValue 466 –467, 474 J Java App 472 JAX 9 JIT (just in time) 455 –456, 458–459 JITed model 473 JNI (Java Native Interface) 472joining ops 53 Jupyter notebooks 14K Kepler’s laws 105 kernel trick 209kernels 195 , 459 kwarg 368 L L2 regularization 219label function 411label smoothing 435label_g 390labeling images, pretrained networks for 33 –35 LAPACK operations 53 last_points tensor 68layers 23leaks 408LeakyReLU function 148__len__ method 272 , 340 len method 166LibTorch 465 –472 C++ API 468 –472 running JITed models from C++ 465 –468 LIDAR (light detection and ranging) 239 LIDC (Lung Image Database Consortium) 377 –378 LIDC-IDRI (Lung Image Data- base Consortium image collection) 377 , 417 linear model 153 –157 batching inputs 154comparing to 161 –162 optimizing batches 155 –157 replacing 158 –159 list indexing 42lists 50load_state_dict method 31localhost 310log_dir 313log.info method 303--logdir argument 310logdir parameter 353logits 187 , 294 logMetrics function 297 , 313, 393 implementing precision and recall in 327 –328 overview 301 –304 loss function 109 –112 loss tensor 124loss.backward() method 124, 126, 212lottery ticket hypothesis 197 LPS (left-posterior-superior) 266 LSTM (long short-term memory) 217 , 459–460 LUNA (LUng Nodule Analysis) 251 , 256, 263, 337, 378, 417, 438 LUNA Grand Challenge data source contrasting training with balanced LUNA Dataset to previous runs 341 –343 downloading 251 –252 LUNA papers 439overview 251 parsing annotation data 256–262 training and validation sets 258 –259 unifying annotation and candidate data259–262 Luna2dSegmentationDataset 378–382 Luna2dSegmentationDataset 
.__init__ method 380 LunaDataset class 271 , 274, 280, 284, 287–288, 305, 320, 339–340 LunaDataset.__init__, constructing dataset in 275 LunaDataset.candidateInfo_list 277 LunaModel 285 , 447 M machine learning autograd component 123–138 computing gradient automatically 123 –127 evaluating training loss 132 –133 generalizing to validation set 133 –134 optimizers 127 –131 splitting datasets 134 –136 switching off 137 –138 gradient descent algorithm 113 –122 applying derivatives to model 115" Deep-Learning-with-PyTorch.pdf,"INDEX 485 machine learning: gradient descent (continued) computing derivatives 115data visualization 122decreasing loss 113 –114 defining gradient function 116 Iterating to fit model 116–119 normalizing inputs 119–121 loss function 109 –112 modeling 104 –106 parameter estimation 106–109 choosing linear model 108 –109 data gathering 107 –108 data visualization 108 example problem 107 switching to PyTorch 110 –112 main method 283 , 471 malignancy classification 407malignancy model 407--malignancy-path argument 432MalignancyLunaDataset class 418 malignant classification 413map_location keyword argument 217 Mask R-CNN 246Mask R-CNN models 465masked arrays 302masks caching chunks of mask in addition to CT 376 calling mask creation during CT initialization 375 constructing 302 –304 math ops 53 Matplotlib 172 , 247, 431 max function 26max pooling 203mean square loss 111memory bandwidth 384Mercator projection map 267metadata, tensor 55 –62 contiguous tensors 60 –62 transposing in higher dimensions 60 transposing without copying 58 –59 views of another tensor’s storage 56 –58 MetaIO format 263metrics graphing positives and negatives 322 –333 F1 score 328 –332 performance 332 –333 precision 326 –328, 332 recall 324 , 327–328, 332 ideal dataset 334 –344 class balancing 339 –341 contrasting training with balanced LUNA Dataset to previous runs 341 –343 making data look less like the actual and more like the ideal 336 –341 samplers 338 –339 symptoms of 
overfitting 343 –344 metrics_dict 303 , 314 METRICS_PRED_NDX values 302 metrics_t parameter 298 , 301 metrics_t tensor 428 millimeter-based coordinate system 265 minibatches 129 , 184–185 mirroring 348 –349 MIT license 367mixed-precision training 475mixup 435MLflow 476 MNIST dataset 165 –166 mobile deployment 472 –476 mode_str argument 301model design 217 –229 comparing designs 228 –229 depth of network 223 –228 building very deep models 226 –228 initialization 228skip connections 223 –226 outdated 229regularization 219 –223 batch normalization 222–223 dropout 220 –222 weight penalties 219 –220 width of network 218 –219 model function 131 , 142 Model Runner function 450–451 model_runner function 453–454 model.backward() method 159Model.load method 474 model.parameters() method 159 model.state_dict() function 397model.train() method 223ModelRunner class 452models module 22modules 151MS COCO dataset 35 MSE (Mean Square Error) 157, 180, 182 MSELoss 175 multichannel images 197 multidimensional arrays, tensors as 42 multitask learning 436 mutating ops 53 N N dimension 89named tensors 46 , 48–49 named_parameters method 159names argument 48NDET (Nodule detection) 251ndx integer 272 needs_processing event 452, 454 needs_processing. 
ModelRunner 452 neg_list 418neg_ndx 340negLabel_mask 303 negPred_mask 302 netG model 30neural networks __call__ method 152 –153 activation functions 145 –149 capping output range 146choosing 148 –149 compressing output range 146 –147 composing multilayer networks 144 error function 144 –145 first-pass, for cancer detector 289 –295 converting from convolu- tion to linear 294 –295 core convolutions 290 –292 full model 293 –295 initialization 295 inspecting parameters 159–161" Deep-Learning-with-PyTorch.pdf,"INDEX 486 neural networks (continued) linear model 153 –157 batching inputs 154 comparing to 161 –162 optimizing batches 155–157 replacing 158 –159 nn module 151 –157 what learning means for 149–151 NeuralTalk2 model 33 –35 neurons 143NLL (negative log likelihood) 180 –181 NLP (natural language processing) 93 nn module 151 –157, 207–212 nn.BatchNorm1D module 222nn.BatchNorm2D module 222 nn.BatchNorm3D module 222 nn.BCELoss function 176nn.BCELossWithLogits 176nn.Conv2d 196 , 205 nn.ConvTranspose2d 388nn.CrossEntropyLoss 187 , 273, 295, 336 nn.DataParallel class 286nn.Dropout module 221nn.Flatten layer 207nn.functional.linear function 210 nn.HardTanh module 211 nn.KLDivLoss 435nn.Linear 152 –153, 155, 174, 194 nn.LogSoftmax 181 , 187 nn.MaxPool2d module 204–205, 210 nn.Module class 151 –152, 154, 159, 207, 209, 293, 385–386, 470 nn.ModuleDict 152 , 209 nn.ModuleList 152 , 209 nn.NLLLoss class 181 , 187 nn.ReLU layers 292 nn.ReLU module 211nn.Sequential 368nn.Sequential model 159, 207–208 nn.Sigmoid activation 176nn.Sigmoid layer 368nn.Softmax 177 –178, 181, 293–294 nn.Tanh module 210nodule classification 406nodule_t output 273 noduleInfo_list 262NoduleInfoTuple 260nodules 249 –250 finding through segmentation semantic segmentation 361 –366 types of 360 –361 updating dataset for 369–386 updating model for 366–369 updating training script for 386 –399 locating 265 –271 converting between millimeters and voxel addresses 268 –270 CT scan shape and voxel sizes 267 –268 
extracting nodules from CT scans 270 –271 patient coordinate system 265 –267 noduleSample_list 277NoGradGuard 471noise 350noise-augmented model 354nominal scale 80non-nodule values 304Normalize transform 474--num-workers 283NumPy arrays 41 , 78 NumPy, tensors and 64 –65 numpy.frombuffer 447nvidia-smi 305 O object detection 360object recognition, pretrained networks for 17 –27 AlexNet 20 –22 obtaining 19 –20 ResNet 22 –27 offset argument 385 offset parameter 349Omniglot 166onActivityResult 474one-dimensional tensors 111one-hot encoding 91 –92 tabular data 81 –83 text data characters 94 –95 whole words 96 –98ONNX (Open Neural Network Exchange) 446 , 455–456 ONNXRuntime 455 onnxruntime-gpu 456 OpenCL 63 OpenCV 465 , 467 OpenNMT’s original CTranslate 475 optim module 129 optim submodule 127 optim.SGD 156 optimizer.step() method 156, 159 optimizers 127 –131 gradient descent optimizers 128 –130 testing optimizers 130 –131 options argument 470 ordered tabular data 71 OrderedDict 160 ordinal values 80 org.pytorch namespace 473 org.pytorch.torchvision.Tensor- ImageUtils class 473 OS-level process 283 Other operations 53 out tensor 26 overfitting 132 , 134, 136, 345–346, 434–437 abstract augmentation 435 classic regularization and augmentation 435 ensembling 435 –436 face-to-age prediction model 345 –346 generalizing what we ask the network to learn 436 –437 preventing with data augmentation 346 –354 improvement from 352 –354 mirroring 348 –349 noise 350 rotating 350 scaling 349 shifting by a random offset 349 symptoms of 343 –344 P p2_run_everything notebook 408 p7zip-full package 252" Deep-Learning-with-PyTorch.pdf,"INDEX 487 padded convolutions 292 padding 362 padding flag 370 pandas library 41 , 78, 377 parallelism 53parameter estimation 106 –109 choosing linear model 108–109 data gathering 107 –108 data visualization 108example problem 107 parameter groups 427parameters 120 , 145, 160, 188, 196, 225, 397 parameters() method 156, 188, 210 params tensor 124 , 126, 129 
parser.add_argument 352 patient coordinate system 266–267 converting between millimeters and voxel addresses 268 –270 CT scan shape and voxel sizes 267 –268 extracting nodules from CT scans 270 –271 overview 265 –267 penalization terms 134permute method 73 , 170 pickle library 397 pin_memory option 216 points tensor 46 , 57, 64 points_gpu tensor 64pointwise ops 53pooling 203 –204 pos_list 383 , 418 pos_ndx 340 pos_t 382positive loss 344positive_mask 376POST route 447PR (Precision-Recall) Curves 311 precision 326 implementing in logMetrics 327 –328 updating logging output to include 332 predict method 209Predicted Nodules 395 , 416 prediction images 393prediction_a 394 prediction_devtensor 390 prepcache script 376 , 440preprocess function 23 pretext tasks 436 pretrained keyword argument 36 pretrained networks 423 describing content of images 33 –35 fabricating false images from real images 27 –33 CycleGAN 29 –30 GAN game 28 generating images 30 –33 recognizing subject of images 17 –27 AlexNet 20 –22 inference 25 –27 obtaining 19 –20 ResNet 22 –27 Torch Hub 35 –37 principled augmentation 222Project Gutenberg 94PyLIDC library 417 –418 pyplot.figure 431Python, list indexing in 42PyTorch 6 functional API 210 –212 how supports deep learning projects 10 –13 keeping track of parameters and submodules 209 –210 reasons for using 7 –9 PyTorch JIT 458 –465 dual nature of PyTorch as interface and backend 460 expectations 458 –460 scripting gaps of traceability 464 –465 TorchScript 461 –464 PyTorch models enterprise serving of 476exporting 455 –458 ONNX 455 –456 tracing 456 –458 serving 446 –454 Flask server 446 –448 goals of deployment 448–449 request batching 449 –454 PyTorch Serving 476pytorch_android library 473 pytorch_android_torchvision 473Q quantization 475 –476 quantized tensors 65queue_lock 452 R random sampling 53 random_float function 349 random.random() function 307randperm function 134range indexing 46ratio_int 339 –340 recall 324 implementing in logMetrics 327 
–328 updating logging output to include 332 recurrent neural network 34RedisAI 476reduced training 475 reduction ops 53 refine_names method 48regression problems 107regularization 219 –223 augmentation and 435batch normalization 222 –223 dropout 220 –222 weight penalties 219 –220 ReLU (rectified linear unit) 147 , 224 rename method 48request batching 449 –454 from request to queue 452–453 implementation 451 –452 running batches from queue 453 –454 RequestProcessor 450 , 452 requireOnDisk_bool parameter 260 requires_grad attribute 138requires_grad–True argument 124 residual networks 224ResNet 19 , 225 creating network instance 22details about structure of 22–25 inference 25 –27 resnet variable 23resnet101 function 22resnet18 function 36ResNetGenerator class 30" Deep-Learning-with-PyTorch.pdf,"INDEX 488 ResNetGenerator module 470–471 ResNetGeneratorImpl class 471 ResNets 224 –226, 366 REST endpoint 455Retina U-Net 246return statement 376RGB (red, green, blue) 24 , 47, 72, 165, 172, 189, 195, 197, 205, 244, 395 RNNs (recurrent neural networks) 93 ROC (receiver operating characteristic) 420 –421, 428, 433 ROC curves 431ROC/AUC metrics 407ROCm 63rotating 350row_radius 373 --run-validation variant 432 RuntimeError 49RuntimeWarning lines 333 S samplers 338 –339 Sanic framework 449scalar values 314 , 329 scalars 311scale invariant 177scaling 349 scatter_ method 82 Scikit-learn 41SciPy 41scipy.ndimage.measurements .center_of_mass 411 scipy.ndimage.measurements .label 411 scipy.ndimage.morphology 410 scripting 461 segmentation 241 , 243, 358, 405, 408, 413 bridging CT segmentation and nodule candidate classification 408 –416 classification to reduce false positives 412 –416 grouping voxels into nodule candidates 411–412 segmentation 410 –411 semantic segmentation 361–366 types of 360 –361updating dataset for 369 –386 augmenting on GPU 384–386 designing training and vali- dation data 382 –383 ground truth data 371 –378 input size requirements 370 Luna2dSegmentation- Dataset 
378 –382 TrainingLuna2dSegmentati onDataset 383 –384 U-Net trade-offs for 3D vs. 2D data 370 –371 updating model for 366 –369 updating training script for 386 –399 Adam optimizer 388Dice loss 389 –392 getting images into TensorBoard 392 –396 initializing segmentation and augmentation models 387 –388 saving model 397 –399 updating metrics logging 396 – 397 segmentCt method 410self-supervised learning 436self.block4 294self.candidateInfo_list 272 , 275 self.cli_args.dataset 418self.diceLoss 390self.model.to(device) 286 self.pos_list 340 self.use_cuda 286semantic segmentation 360 –366 semi-supervised learning 436sensitivity 324SentencePiece libraries 97Sequential 160serialization 53serializing tensors 66 –68 series instance UID 263series_uid 245 , 256, 260–261, 275, 375, 381, 410 seriesuid column 258set_grad_enabled 138set() method 452SGD (stochastic gradient descent) 129 –130, 135, 156, 184, 220, 286 shifting by a random offset 349show method 24Sigmoid function 148SimpleITK 263 , 268 singleton dimension 83sitk routines 264Size class 56skip connections 223 –226 slicing ops 53soft Dice 390softmax 176 –177, 181, 293 Softplus function 147software requirements for deep learning 13 –15 sort function 26spectral ops 53Spitfire 190step. 
zero_grad method 128stochastic weight averaging 436storages 53 –55 in-place operations 55indexing into 54 –55 strided convolution 203strided tensors 65submodules 207subword-nmt 97SummaryWriter class 313, 392, 431 SVHN 166sys.argv 398 T t_c values 108 –109 t_p value 109 –110 t_u values 108tabular data representation 77–87 categorization 83 –84 loading a data tensor 78 –80 one-hot encoding 81 –83 real-world dataset 77 –78 representing scores 81thresholds 84 –87 Tanh function 143 , 147, 149, 158 target tensor 81 , 84 temperature variable 92tensor masking 302Tensor.to method 215TensorBoard 284 , 309–314, 343, 428–431 adding support to metrics logging function 313 –314 getting images into 392 –396 histograms 428 –431 ROC and other curves 431running 309 –313 writing scalars to 314" Deep-Learning-with-PyTorch.pdf,"INDEX 489 tensorboard program 309 TensorFlow 9 tensorflow package 309 TensorImageUtils.bitmapTo- Float32Tensor function 474 tensors API 52 –53 as multidimensional arrays 42 constructing 43data representation images 71 –75 tabular data 77 –87 text 93 –101 time series 87 –93 element types dtype argument 50 –52 standard 50 essence of 43 –46 floating-point numbers 40 –42 generalized 65 –66 indexing list indexing in Python vs. 
42 range indexing 46 metadata 55 –62 contiguous tensors 60 –62 transposing in higher dimensions 60 transposing without copying 58 –59 views of another tensor’s storage 56 –58 moving to GPU 62 –64 named 46 –49 NumPy interoperability 64–65 serializing 66 –68 storages 53 –55 in-place operations 55indexing into 54 –55 tensorToBitmap 474 test set 433text data representation 93 –101 converting text to numbers 94 one-hot encoding characters 94 –95 whole words 96 –98 text embeddings 98 –100 text embeddings as blueprint 100 –101 time series data representation 87 –93 adding time dimensions 88–89shaping data by time period 89 –90 training 90 –93 time.time() method 453to method 51 , 64 top-level attributes 152 , 209 Torch Hub 35 –37 torch module 43 , 52, 127 TORCH_MODULE macro 470 torch.bool type 85torch.cuda.is_available 215torch.from_numpy function 68, 447 torch.jit.script 463 @torch.jit.script decorator 464torch.jit.trace function 456–457, 463 torch.linspace 421 torch.max module 179 torch.nn module 151 , 196 torch.nn.functional 210 –211 torch.nn.functional.pad function 199 torch.nn.functional.softmax 26 torch.nn.Hardtanh function 146 torch.nn.Sigmoid 146torch.no_grad() method 138, 456, 464, 471 torch.onnx.export function 455torch.optim.Adam 287torch.optim.SGD 287torch.save 397 torch.sort 89 torch.Tensor class 19torch.util.data 11torch.utils.data module 185torch.utils.data.Dataset 166torch.utils.data.dataset.Dataset 173 torch.utils.data.Subset class 174torch.utils.tensorboard module 313 torch.utils.tensorboard .SummaryWriter class 314 TorchScript 12 , 460–464 TorchVision library 465torchvision module 165 , 228 TorchVision project 19 torchvision.models 20torchvision.resnet101 function 31 torchvision.transforms 168 , 191 totalTrainingSamples_count variable 314ToTensor 169 , 171 TPR (true positive rate) 419–421 TPUs (tensor processing units) 63 , 65 tracing 456 –458 scripting gaps of traceability 464 –465 server with traced model 458 train property 221train_dl data loader 297 
train_loss.backward() method 138 training and validation sets parsing annotation data 258–259 segregation between 275 –276 training set 433training_loss.backward() method 156 TrainingLuna2dSegmentation- Dataset 383 –384 transfer learning 422transform_t 386TransformIndexToPhysical- Point method 268 TransformPhysicalPointTo- Index method 268 transforms.Compose 170 transforms.Normalize 171translation-invariant 190 , 194 transpose function 52trnMetrics_g tensor 297 , 300 trnMetrics_t 301true negatives 322true positives 321 , 324, 327, 389, 392 trueNeg_count 328truePos_count 333tuples 259 , 406 two-layer networks 149 U U-Net architecture 364–367, 388 input size requirements 370trade-offs for 3D vs. 2D data 370 –371 UIDs (unique identifiers) 263un-augmented model 354unboxed numeric values 43UNetWrapper class 368 , 387 up.shape 464upsampling 364" Deep-Learning-with-PyTorch.pdf,"INDEX 490 V val_loss tensor 138 val_neg loss 308val_pos loss 308val_stride parameter 275validate function 216validation 383 , 409 validation loop 299 –300 validation set 132 , 433 validation_cadence 393validation_dl 289validation_ds 289valMetrics_g 300valMetrics_t 301vanilla gradient descent 127vanilla model 367view function 294volread function 76volumetric data data representation using tensors 75 –76 loading 76 volumetric pixel 239voxel-address-based coordinate system 265 voxels 239 converting between millimeters and voxel addresses 268 –270 grouping voxels into nodule candidates 411 –412 voxel sizes 267 –268 W wait() method 452 weight decay 220 weight matrix 195weight parameter 200 weight penalties 219 –220 weight tensor 197 weighted loss 391 WeightedRandomSampler 339weights 106 weights argument 339 whole-slice training 383 width of network 218 –219 Wine Quality dataset 77with statement 126 with torch.no_grad() method 299 , 447, 457, 471 word2index_dict 96WordNet 17writer.add_histogram 428writer.add_scalar method 314, 396 X Xavier initializations 228 _xyz suffix 268xyz2irc function 269 Y YOLOv3 
paper 360 Z zero_grad method 128, zeros function 50, 55, 125" Deep-Learning-with-PyTorch.pdf,"[Figure: an input representation (pixel values) is transformed through intermediate representations into an output representation (probability of classes: “sun”, “seaside”, “scenery”); similar inputs should lead to close representations, especially at deeper levels]" Deep-Learning-with-PyTorch.pdf,"Stevens ● Antiga ● Viehmann ISBN: 978-1-61729-526-3 Although many deep learning tools use Python, the PyTorch library is truly Pythonic. Instantly familiar to anyone who knows PyData tools like NumPy and scikit-learn, PyTorch simplifies deep learning without sacrificing advanced features. It’s excellent for building quick models, and it scales smoothly from laptop to enterprise. Because companies like Apple, Facebook, and JPMorgan Chase rely on PyTorch, it’s a great skill to have as you expand your career options. Deep Learning with PyTorch teaches you to create neural networks and deep learning systems with PyTorch. This practical book quickly gets you to work building a real-world example from scratch: a tumor image classifier. Along the way, it covers best practices for the entire DL pipeline, including the PyTorch Tensor API, loading data in Python, monitoring training, and visualizing results. What’s Inside ● Training deep neural networks ● Implementing modules and loss functions ● Utilizing pretrained models from PyTorch Hub ● Exploring code samples in Jupyter Notebooks For Python programmers with an interest in machine learning. Eli Stevens had roles from software engineer to CTO, and is currently working on machine learning in the self-driving-car industry. Luca Antiga is cofounder of an AI engineering company and an AI tech startup, as well as a former PyTorch contributor. Thomas Viehmann is a PyTorch core developer and machine learning trainer and consultant.
To download their free eBook in PDF, ePub, and Kindle formats, owners of this book should visit www.manning.com/books/deep-learning-with-pytorch $49.99 / Can $65.99 [INCLUDING eBOOK] Deep Learning with PyTorch PYTHON/DATA SCIENCE MANNING “With this publication, we finally have a definitive treatise on PyTorch. It covers the basics and abstractions in great detail.” —From the Foreword by Soumith Chintala, Cocreator of PyTorch “Deep learning divided into digestible chunks with code samples that build up logically.” —Mathieu Zhang, NVIDIA “Timely, practical, and thorough. Don’t put it on your bookshelf, but next to your laptop.” —Philippe Van Bergen, P² Consulting “Deep Learning with PyTorch offers a very pragmatic overview of deep learning . . . It is a didactical resource.” —Orlando Alejo Méndez Morales, Experian See first page" Deep Learning in Medical Image Analysis.pdf,"Deep Learning in Medical Image Analysis Dr. Hichem Felouat hichemfel@gmail.com https://www.researchgate.net/profile/Hichem_Felouat https://www.linkedin.com/in/hichemfelouat " Deep Learning in Medical Image Analysis.pdf,"2 Medical Images •Medical imaging is the technique and process of creating visual representations of the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues. •Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat diseases. " Deep Learning in Medical Image Analysis.pdf,"3 Medical Image Modalities •X-ray radiography - US: Ultrasound - MR/MRI/DMRI: Magnetic Resonance Imaging - PET: Positron Emission Tomography - MG: Mammography - CT: Computed Tomography - RGB: Optical Images.
" Deep Learning in Medical Image Analysis.pdf,"Medical Image Visualization •Visualization is the process of exploring, transforming, and viewing data as images to gain understanding and insight into the data, which requires fast interactive speed and high image quality. Anatomist: https://brainvisa.info/web/" Deep Learning in Medical Image Analysis.pdf,"Medical Image Visualization: Plotly [1], Nilearn [2], NiBabel [3] 1) https://plotly.com/python/visualizing-mri-volume-slices/ 2) https://nilearn.github.io/stable/index.html 3) https://nipy.org/nibabel/coordinate_systems.html" Deep Learning in Medical Image Analysis.pdf,"Medical Image Data I/O https://nipy.org/nibabel/coordinate_systems.html" Deep Learning in Medical Image Analysis.pdf,"Deep Learning (DL) •DL is a subfield of ML, developed by several researchers." Deep Learning in Medical Image Analysis.pdf,"Deep Learning DL •Several DL models have been proposed: •Convolutional neural networks (CNNs) •Autoencoders (AEs) •Recurrent neural networks (RNNs) •Generative adversarial networks (GANs) •Faster RCNN and Mask RCNN •U-Net •Vision Transformer (ViT) •Graph Neural Networks (GNNs)" Deep Learning in Medical Image Analysis.pdf,"Deep Learning DL •In the DL area, there are many different tasks: image classification, regression, object localization, object detection, instance segmentation, image captioning, etc.
" Deep Learning in Medical Image Analysis.pdf,"Image Classification for Medical Image Analysis •The convolutional neural network (CNN) is the dominant classification framework for image analysis." Deep Learning in Medical Image Analysis.pdf,"Image Classification for Medical Image Analysis - The eyes of CNN •CNNs are designed for working with two-dimensional image data, but they can also be used with one-dimensional and three-dimensional data." Deep Learning in Medical Image Analysis.pdf,"A simple 2D CNN" Deep Learning in Medical Image Analysis.pdf,"A simple 3D CNN" Deep Learning in Medical Image Analysis.pdf,"Image Regression for Medical Image Analysis •Brain age prediction using deep learning: https://www.nature.com/articles/s41467-019-13163-9" Deep Learning in Medical Image Analysis.pdf,"Medical Image Captioning Using DL •Medical Image Captioning Using Optimized Deep Learning Model: https://www.hindawi.com/journals/cin/2022/9638438/" Deep Learning in Medical Image Analysis.pdf,"Transfer Learning TL •Transfer learning is a machine learning method where a model developed for one task is reused as the starting point for a model on a second task. •The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, it will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset."
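The simple 2D CNNs shown above are built from two basic operations: convolution (feature extraction) and pooling (downsampling). As an aside, both can be illustrated in plain NumPy; the toy 6×6 image and edge-detecting kernel below are illustrative, not taken from the slides:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size blocks."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 "image": dark left half, bright right half
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edge = np.array([[-1., 0., 1.],
                 [-1., 0., 1.],
                 [-1., 0., 1.]])   # responds to dark-to-bright vertical edges
fmap = conv2d(img, edge)           # strong response at the edge columns
pooled = max_pool(fmap)            # 2x2 downsampled feature map
```

A real CNN learns the kernel weights by backpropagation instead of hand-crafting them, and stacks many such layers, but the arithmetic per layer is exactly this.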
Deep Learning in Medical Image Analysis.pdf,"Transfer Learning TL •It is generally not a good idea to train a very large DNN from scratch: instead, you should always try to find an existing neural network that accomplishes a task similar to the one you are trying to tackle, then reuse the lower layers of this network. •This will not only speed up training considerably but also require significantly less training data. •The output layer of the original model should usually be replaced, because it is most likely not useful for the new task, and it may not even have the right number of outputs." Deep Learning in Medical Image Analysis.pdf,"Transfer Learning TL" Deep Learning in Medical Image Analysis.pdf,"Available models in Keras: models for image classification with weights trained on ImageNet: https://keras.io/applications/ Transfer Learning TL" Deep Learning in Medical Image Analysis.pdf,"Transfer Learning TL
base_model = keras.applications.xception.Xception(weights=""imagenet"", include_top=False)
avg = keras.layers.GlobalAveragePooling2D()(base_model.output)
output = keras.layers.Dense(n_classes, activation=""softmax"")(avg)
model = keras.Model(inputs=base_model.input, outputs=output)
for layer in model.layers[:-2]:   # freeze everything except the two new layers
    layer.trainable = False
optimizer = keras.optimizers.SGD(lr=0.2, momentum=0.9, decay=0.01)
model.compile(loss=""sparse_categorical_crossentropy"", optimizer=optimizer, metrics=[""accuracy""])
history = model.fit(train_set, epochs=5, validation_data=valid_set)
for layer in model.layers[-5:]:   # then unfreeze the top layers and fine-tune
    layer.trainable = True
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=0.001)
model.compile(...)
history = model.fit(...)  # the two new layers" Deep Learning in Medical Image Analysis.pdf,"Object Detection •Localizing an object in a picture means predicting a bounding box around the object, and can be expressed as a regression task." Deep Learning in Medical Image Analysis.pdf,"Object Detection •Problem: the dataset does not have bounding boxes around the objects; how can we train our model? •We need to add them ourselves. This is often one of the hardest and most costly parts of a machine learning project: getting the labels. •It is a good idea to spend time looking for the right tools." Deep Learning in Medical Image Analysis.pdf,"Object Detection •An image labeling or annotation tool is used to label the images for bounding-box object detection and segmentation. Open-source image labeling tools: •VGG Image Annotator •LabelImg •OpenLabeler •ImgLab. Commercial tools: •LabelBox •Supervisely. Crowdsourcing platforms: •Amazon Mechanical Turk" Deep Learning in Medical Image Analysis.pdf,"Object Detection - VGG •VGG Image Annotator: http://www.robots.ox.ac.uk/~vgg/software/via/" Deep Learning in Medical Image Analysis.pdf,"Object Detection - labelImg" Deep Learning in Medical Image Analysis.pdf,"Object Detection •The MSE often works fairly well as a cost function to train the model, but it is not a great metric to evaluate how well the model can predict bounding boxes. •The most common metric for this is the Intersection over Union (IoU).
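IoU is simple to compute from box corner coordinates; a minimal sketch follows (the (x1, y1, x2, y2) corner format is an assumption — detection frameworks also use center/width/height formats):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.143
```

IoU is 1.0 for a perfect prediction and 0.0 for disjoint boxes, which is why it is a better evaluation metric for localization than MSE on the raw coordinates.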
" Deep Learning in Medical Image Analysis.pdf,"Object Detection •mean Average Precision (mAP): in order to calculate mAP, we draw a series of precision-recall curves with the IoU threshold set at varying levels of difficulty. In COCO evaluation, the IoU threshold ranges from 0.5 to 0.95 with a step size of 0.05, represented as AP@[.5:.05:.95]. •Calculate the final AP by averaging the AP over the different classes." Deep Learning in Medical Image Analysis.pdf,"Object Detection •Raw images must all have the same size; labeling each image produces (C, X, Y, W, H) for the model to predict. •Each item should be a tuple of the form: (images, (class_labels, bounding_boxes))" Deep Learning in Medical Image Analysis.pdf,"Object Detection •In general, object detectors have three (3) main components: 1) The backbone that extracts features from the given image. 2) The feature network that takes multiple levels of features from the backbone as input and outputs a list of fused features that represent salient characteristics of the image. 3) The final class/box network that uses the fused features to predict the class and location of each object." Deep Learning in Medical Image Analysis.pdf,"Object Detection - Faster RCNN •Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks: https://arxiv.org/abs/1506.01497 [Diagram: Backbone, Feature Network (Region Proposal Network), Class/Box Network]" Deep Learning in Medical Image Analysis.pdf,"Instance Segmentation - labelme •Instance segmentation aims to predict the object class-label and the pixel-specific object instance-mask."
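The AP@[.5:.05:.95] averaging described above can be sketched as follows. The per-class AP values are assumed to be already computed from the precision-recall curves at each IoU threshold; the class names and numbers here are made up for illustration:

```python
# The ten IoU thresholds 0.50, 0.55, ..., 0.95 used in COCO evaluation
thresholds = [round(0.5 + 0.05 * i, 2) for i in range(10)]

def coco_map(ap):
    """ap maps class name -> {iou_threshold: AP at that threshold}.
    Returns AP averaged over both classes and IoU thresholds."""
    per_threshold = [
        sum(class_aps[t] for class_aps in ap.values()) / len(ap)
        for t in thresholds
    ]
    return sum(per_threshold) / len(per_threshold)

# Made-up AP values: one easier class, one harder class,
# constant across thresholds to keep the arithmetic obvious
ap = {"tumor": {t: 0.6 for t in thresholds},
      "cyst":  {t: 0.4 for t in thresholds}}
print(coco_map(ap))  # (0.6 + 0.4) / 2 averaged over 10 thresholds -> 0.5
```

In real evaluations the AP of each class drops as the IoU threshold tightens, so AP@[.5:.05:.95] is strictly harder than AP at a single 0.5 threshold.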
Deep Learning in Medical Image Analysis.pdf,"Instance Segmentation - Mask R-CNN [Diagram: Backbone, Feature Network (Region Proposal Network), Class/Box/Mask Network]" Deep Learning in Medical Image Analysis.pdf,"Instance Segmentation - Mask R-CNN •https://github.com/hichemfelouat/my-codes-of-machine-learning/blob/master/Mask_RCNN_TF2OD_Custom_dataset.ipynb" Deep Learning in Medical Image Analysis.pdf,"YOLO •You Only Look Once (YOLO) is an algorithm that uses convolutional neural networks for object detection. •It is one of the fastest object detection algorithms available. •It is a very good choice when we need real-time detection without losing too much accuracy. •YOLOv5: https://github.com/ultralytics/yolov5 •How to Train YOLOv5 On a Custom Dataset: https://blog.roboflow.com/how-to-train-yolov5-on-a-custom-dataset/" Deep Learning in Medical Image Analysis.pdf,"•Detectron2 was built by Facebook AI Research (FAIR) to support rapid implementation and evaluation of novel computer vision research. •Detectron2 is now implemented in PyTorch. •Detectron2 is flexible and extensible, and able to provide fast training on single or multiple GPU servers. •Detectron2 can be used as a library to support different projects built on top of it.
Detectron2 •Detectron2: A PyTorch-based modular object detection library: https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-/ •Detectron2: https://github.com/facebookresearch/detectron2 •detectron2’s documentation: https://detectron2.readthedocs.io/" Deep Learning in Medical Image Analysis.pdf,"Detectron2 •Detectron2 Model Zoo: https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md" Deep Learning in Medical Image Analysis.pdf,"TensorFlow 2 Object Detection API •The TensorFlow Object Detection API is an open-source framework built on top of TensorFlow that makes it easy to construct, train, and deploy object detection models. •The TensorFlow Object Detection API allows you to train a collection of state-of-the-art object detection models under a unified framework. •TensorFlow Object Detection API: https://github.com/tensorflow/models/tree/master/research/object_detection" Deep Learning in Medical Image Analysis.pdf,"TensorFlow 2 Object Detection API •Model Zoo: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md" Deep Learning in Medical Image Analysis.pdf,"3D-Unet" Deep Learning in Medical Image Analysis.pdf,"Autoencoders in Medical Imaging •Architecture of the denoising autoencoder model, with an example low-SNR, single-repetition dM raw image (left), and the corresponding high-SNR dM mean image.
Combined Denoising and Suppression of Transient Artifacts in Arterial Spin Labeling MRI Using Deep Learning: https://onlinelibrary.wiley.com/doi/10.1002/jmri.27255" Deep Learning in Medical Image Analysis.pdf,"Autoencoders in Medical Imaging" Deep Learning in Medical Image Analysis.pdf,"Generative Adversarial Networks (GANs) in Medical Imaging •GANs for medical image analysis: https://doi.org/10.1016/j.artmed.2020.101938" Deep Learning in Medical Image Analysis.pdf,"Transformers in Medical Imaging •Vision Transformer (ViT) for Image Classification (cifar10 dataset): https://github.com/hichemfelouat/my-codes-of-machine-learning/blob/master/Vision_Transformer_(ViT)_for_Image_Classification_(cifar10_dataset).ipynb" Deep Learning in Medical Image Analysis.pdf,"Transformers in Medical Imaging •Transformers in Medical Imaging: A Survey: https://arxiv.org/abs/2201.09873v1" Deep Learning in Medical Image Analysis.pdf,"Graph Neural Network in Medical Image Analysis •BrainGNN: Interpretable Brain Graph Neural Network for fMRI Analysis: https://doi.org/10.1016/j.media.2021.102233 •Graph-Based Deep Learning for Medical Diagnosis and Analysis: Past, Present and Future: https://arxiv.org/abs/2105.13137" Deep Learning in Medical Image Analysis.pdf,"Self-Supervised and Semi-Supervised Learning •In the self-supervised learning technique, the model depends on the underlying structure of the data to predict outcomes; it involves no labelled data. In semi-supervised learning, however, we still provide a small amount of labelled data.
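One common way to exploit the unlabelled data described above is pseudo-labeling: train a model on the small labelled set, then adopt its confident predictions on unlabelled samples as extra training labels. A toy sketch follows; the 1-D "feature" values, the nearest-neighbour stand-in for a trained model, and the distance-based confidence rule are all illustrative assumptions, not a method from the slides:

```python
def pseudo_label(labelled, unlabelled, confidence_radius=1.0):
    """Simple pseudo-labeling: adopt a prediction for an unlabelled sample
    only when the 'model' is confident. Here the model is nearest-neighbour
    on a 1-D feature, and confidence means being within confidence_radius
    of some labelled point; a real system would threshold softmax scores."""
    new_labels = []
    for x in unlabelled:
        nearest = min(labelled, key=lambda p: abs(p[0] - x))
        if abs(nearest[0] - x) <= confidence_radius:
            new_labels.append((x, nearest[1]))
    return new_labels

# Two labelled points and three unlabelled feature values
labelled = [(0.0, "healthy"), (10.0, "tumor")]
print(pseudo_label(labelled, [0.5, 9.8, 5.0]))
# [(0.5, 'healthy'), (9.8, 'tumor')] — the ambiguous 5.0 stays unlabelled
```

The ambiguous sample is deliberately left out: adopting low-confidence pseudo-labels tends to reinforce the model's own mistakes.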
" Deep Learning in Medical Image Analysis.pdf,"Self-Supervised and Semi-Supervised Learning •Uncertainty Guided Semi-supervised Segmentation of Retinal Layers in OCT Images: https://link.springer.com/chapter/10.1007/978-3-030-32239-7_32 •Semi-Supervised Learning in Computer Vision: https://amitness.com/2020/07/semi-supervised-learning/ •A Survey of Self-Supervised and Few-Shot Object Detection: https://arxiv.org/abs/2110.14711" Deep Learning in Medical Image Analysis.pdf,"Open Set Learning OSL •Traditional supervised learning aims to train a classifier in the closed-set world, where training and test samples share the same label space. Open set learning (OSL) is a more challenging and realistic setting, in which some test samples come from classes that are unseen during training. •Fig (c) describes open set recognition, where the decision boundaries limit the scope of the known known classes (KKCs) 1, 2, 3, 4, reserving space for the unknown unknown classes (UUCs) ?5, ?6. Via these decision boundaries, samples from UUCs are labeled as ""unknown"" or rejected, rather than misclassified as KKCs. •Recent Advances in Open Set Recognition: A Survey: https://arxiv.org/abs/1811.08581" Deep Learning in Medical Image Analysis.pdf,"Thanks For Your Attention Hichem Felouat ... "