page_no (int64, range 1–474) | page_content (string, lengths 160–3.83k) |
|---|---|
201 | page_content='183 Batch normalization\nfeatures extracted from the previous layers. L3 is trying to map these inputs to yˆ to\nmake it as close as possible to the label y. While the third layer is doing that, the net-\nwork is adapting the values of the parameters from previous layers. As the parameters\n(w, b) are cha... |
202 | page_content="184 CHAPTER 4Structuring DL projects and hyperparameter tuning\n4.9.4 Batch normalization implementation in Keras\nIt is important to know how batch normalization works so you can get a better under-\nstanding of what your code is doing. But when using BN in your network, you don’t\nhave to implement all... |
203 | page_content='185 Project: Achieve high accuracy on image classification\n4.9.5 Batch normalization recap\nThe intuition that I hope you’ll take away from this discussion is that BN applies the nor-\nmalization process not just to the input layer, but also to the values in the hidden layers\nin a neural network. This w... |
204 | page_content="186 CHAPTER 4Structuring DL projects and hyperparameter tuning\nfrom keras.models import Sequential\nfrom keras.utils import np_utils\nfrom keras.layers import Dense, Activation, Flatten, Dropout, BatchNormalization,\n Conv2D, MaxPooling2D\nfrom keras.callbacks import ModelCheckpoint\nfrom keras impor... |
205 | page_content='187 Project: Achieve high accuracy on image classification\nOne-hot encode the labels\nTo one-hot encode the labels in the train, valid, and test datasets, we use the to_\ncategorical function in Keras:\nnum_classes = 10\ny_train = np_utils.to_categorical(y_train,num_classes)\ny_valid = np_utils.to_categ... |
206 | page_content="188 CHAPTER 4Structuring DL projects and hyperparameter tuning\nbase_hidden_units = 32 \nweight_decay = 1e-4 \nmodel = Sequential() \n# CONV1\nmodel.add(Conv2D(base_hidden_units, kernel_size= 3, padding= 'same', \n kernel_regularizer=regularizers.l2(weight_decay), \ninput_shape=x_train... |
207 | page_content="189 Project: Achieve high accuracy on image classification\n# FC7\nmodel.add(Flatten()) \nmodel.add(Dense(10, activation= 'softmax' )) \nmodel.summary() \nThe model summary is shown in figure 4.31.Flattens the feature map into a 1D \nfeatures vector (explained in chapter 3)\n10 hidden units becau... |
208 | page_content="190 CHAPTER 4 Structuring DL projects and hyperparameter tuning\nSTEP 4: TRAIN THE MODEL \nBefore we jump into the training code, let’s discuss the strategy behind some of the\nhyperparameter settings: \n• batch_size —This is the mini-batch hyperparameter that we covered in this\nchapter. The higher... |
209 | page_content="191 Project: Achieve high accuracy on image classification\nWhen you run this code, you will see the verbose output of the network training for\neach epoch. Keep your eyes on the loss and val_loss values to analyze the network\nand diagnose bottlenecks. Figure 4.32 shows the verbose output of epochs 121... |
210 | page_content='192 CHAPTER 4 Structuring DL projects and hyperparameter tuning\nFurther improvements\nAccuracy of 90% is pretty good, but you can still improve further. Here are some ideas\nyou can experiment with:\n• More training epochs —Notice that the network was improving until epoch 123.\nYou can increase the ... |
211 | page_content='Part 2\nImage classification\nand detection\nR apid advances in AI research are enabling new applications to be built\nevery day and across different industries that weren’t possible just a few years\nago. By learning these tools, you will be empowered to invent new products and\napplications yourself. Ev... |
212 | page_content='195 Advanced CNN\narchitectures\nWelcome to part 2 of this book. Part 1 presented the foundation of neural network\narchitectures and covered multilayer perceptrons (MLPs) and convolutional neural\nnetworks (CNNs). We wrapped up part 1 with strategies to structure your deep neu-\nral network projects and ... |
213 | page_content='196 CHAPTER 5 Advanced CNN architectures\nevolved since then to deeper CNNs like AlexNet and VGGNet, and beyond to more\nadvanced and super-deep networks like Inception and ResNet, developed in 2014 and\n2015, respectively. \n For each CNN architecture, you will learn the following:\n• Novel features ... |
214 | page_content='197 CNN design patterns\nin part 1 of this book fully equips you to start reading research papers written by pio-\nneers in the AI field. Reading and implementing research papers is by far one of the\nmost valuable skills that you will build from reading this book.\nTIP Personally, I feel the task of goin... |
215 | page_content='198 CHAPTER 5 Advanced CNN architectures\n• Pattern 2: Image depth increases, and dimensions decrease —The input data at each\nlayer is an image. With each layer, we apply a new convolutional layer over a\nnew image. This pushes us to think of an image in a more generic way. First,\nyou see that each ... |
216 | page_content='199 LeNet-5\ncontains millions of images; DL and CV researchers use the ImageNet dataset to com-\npare algorithms. More on that later. \nNOTE The snippets in this chapter are not meant to be runnable. The goal is\nto show you how to implement the specifications that are defined in a research\npaper. Visit... |
217 | page_content='200 CHAPTER 5 Advanced CNN architectures\n5.2.2 LeNet-5 implementation in Keras\nTo implement LeNet-5 in Keras, read the original paper and follow the architecture\ninformation from pages 6–8. Here are the main takeaways for building the LeNet-5\nnetwork:\n• Number of filters in each convolutional lay... |
218 | page_content="201 LeNet-5\nNow let’s put that in code to build the LeNet-5 architecture:\nfrom keras.models import Sequential \nfrom keras.layers import Conv2D, AveragePooling2D, Flatten, Dense \n \nmodel = Sequential() \n \n# C1 Convolutional Layer\nmodel.add(Conv2D( filters = 6... |
219 | page_content="202 CHAPTER 5Advanced CNN architectures\n5.2.3 Setting up the learning hyperparameters\nLeCun and his team used scheduled decay learning where the value of the learning\nrate was decreased using the following schedule: 0.0005 for the first two epochs,\n0.0002 for the next three epochs, 0.00005 for the ne... |
220 | page_content="203 AlexNet\n verbose=1,\n save_best_only=True)\ncallbacks = [checkpoint, lr_reducer]\nmodel.compile(loss= 'categorical_crossentropy' , optimizer='sgd',\n metrics=[ 'accuracy' ])\nNow start the network training for 20 epochs, as mentione... |
221 | page_content='204 CHAPTER 5 Advanced CNN architectures\n650,000 neurons, which gives it a larger learning capacity to understand more complex\nfeatures. This allowed AlexNet to achieve remarkable performance in the ILSVRC\nimage classification competition in 2012.\n ImageNet and ILSVRC\nImageNet (http://image-net.org... |
222 | page_content='205 AlexNet\n5.3.1 AlexNet architecture\nYou saw a version of the AlexNet architecture in the project at the end of chapter 3.\nThe architecture is pretty straightforward. It consists of:\n• Convolutional layers with the following kernel sizes: 11 × 11, 5 × 5, and 3 × 3\n• Max pooling layers for i... |
223 | page_content='206 CHAPTER 5Advanced CNN architectures\nDROPOUT LAYER\nAs explained in chapter 3, dropout layers are used to prevent the neural network from\noverfitting. The neurons that are “dropped out” do not contribute to the forward pass\nand do not participate in backpropagation. This means every time an input ... |
224 | page_content='207 AlexNet\nWEIGHT REGULARIZATION \nKrizhevsky et al. used a weight decay of 0.0005. Weight decay is another term for the\nL2 regularization technique explained in chapter 4. This approach reduces the over-\nfitting of the DL neural network model on training data to allow the network to gener-\nalize be... |
225 | page_content='208 CHAPTER 5 Advanced CNN architectures\n• POOL with a filter size of 3 × 3—This reduces the dimensions from 55 × 55 to\n27 × 27:\n(55 − 3)/2 + 1 = 27\nThe pooling layer doesn’t change the depth of the volume. The output dimen-\nsions are 27 × 27 × 96. \nSimilarly, we can calculate the output dimensions of th... |
226 | page_content="209 AlexNet\nNOTE You might be wondering how Krizhevsky and his team decided to\nimplement this configuration. Setting up the right values of network hyper-\nparameters like kernel size, depths, stride, pooling size, etc., is tedious and\nrequires a lot of trial and error. The idea remains the same: we wa... |
227 | page_content="210 CHAPTER 5Advanced CNN architectures\nmodel.add(Activation('relu'))\nmodel.add(BatchNormalization())\nmodel.add(MaxPool2D(pool_size=(3,3), strides=(2,2), padding= 'valid'))\nmodel.add(Flatten()) \n# layer 6 (Dense layer + dropout) \nmodel.add(Dense( units = 4096, activation = 'relu'))\nmodel.ad... |
228 | page_content='211 AlexNet\nmodel.fit(X_train, y_train, batch_size= 128, epochs= 90, \n validation_data=(X_test, y_test), verbose=2, callbacks=[reduce_lr])\n5.3.5 AlexNet performance\nAlexNet significantly outperformed all the prior competitors in the 2012 ILSVRC\nchallenges. It achieved a winning top-5 test er... |
229 | page_content='212 CHAPTER 5Advanced CNN architectures\n5.4 VGGNet\nVGGNet was developed in 2014 by the Visual Geometry Group at Oxford University\n(hence the name VGG).3 The building components are exactly the same as those in\nLeNet and AlexNet, except that VGGNet is an even deeper network with more convo-\nlutional,... |
230 | page_content='213 VGGNet\nThis unified configuration of the convolutional and pooling components simplifies the\nneural network architecture, which makes it very easy to understand and implement. \n The VGGNet architecture is developed by stacking 3 × 3 convolutional layers with\n2 × 2 pooling layers inserted after sev... |
231 | page_content='214 CHAPTER 5Advanced CNN architectures\nVGGNet, has more than 144 million parameters. VGG16 is more commonly used\nbecause it performs almost as well as VGG19 but with fewer parameters.\nVGG16 IN KERAS\nConfigurations D (VGG16) and E (VGG19) are the most commonly used configura-\ntions because they are ... |
232 | page_content="215 VGGNet\nlayer to the third, fourth, and fifth blocks as you can see in figure 5.9. This chapter’s\ndownloaded code includes a full implementation of both VGG16 and VGG19. \n Note that Simonyan and Zisserman used the following regularization techniques\nto avoid overfitting:\n• L2 regularization wi... |
233 | page_content="216 CHAPTER 5Advanced CNN architectures\n# block #5\nmodel.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), \nactivation='relu', \n padding='same'))\nmodel.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), \nactivation='relu', \n padding='same'))\nmodel.ad... |
234 | page_content='217 Inception and GoogLeNet\n5.5 Inception and GoogLeNet \nThe Inception network came to the world in 2014 when a group of researchers at\nGoogle published their paper, “Going Deeper with Convolutions.”4 The main hall-\nmark of this architecture is building a deeper neural network while improving the\nuti... |
235 | page_content='218 CHAPTER 5Advanced CNN architectures\nFrom the diagram, you can observe the following:\n\uf0a1In classical architectures like LeNet, AlexNet, and VGGNet, we stack convolu-\ntional and pooling layers on top of each other to build the feature extractors. At\nthe end, we add the dense fully connected lay... |
236 | page_content='219 Inception and GoogLeNet\n1 Suppose we have an input dimensional volume from the previous layer of size\n32 × 32 × 200.\n2 We feed this input to four convolutions simultaneously: \n– 1 × 1 convolutional layer with depth = 64 and padding = same. The output of\nthis kernel = 32 × 32 × 64.\n– 3 × 3 convo... |
237 | page_content='220 CHAPTER 5Advanced CNN architectures\n5.5.3 Inception module with dimensionality reduction\nThe naive representation of the inception module that we just saw has a big computa-\ntional cost problem that comes with processing larger filters like the 5 × 5 convolutional\nlayer. To get a better sense of ... |
238 | page_content='221 Inception and GoogLeNet\nwithout applying the dimensionality reduction layer. But here, instead of processing\nthe 5 × 5 convolutional layer on the entire 200 channels of the input volume, we take\nthis huge volume and shrink its representation to a much smaller intermediate vol-\nume that has only 16... |
239 | page_content='222 CHAPTER 5Advanced CNN architectures\nIMPACT OF DIMENSIONALITY REDUCTION ON NETWORK PERFORMANCE\nYou might be wondering whether shrinking the representation size so dramatically hurts\nthe performance of the neural network. Szegedy et al. ran experiments and found that as\nlong as you implement th... |
240 | page_content='223 Inception and GoogLeNet\n We then run into the problem of computational cost that comes with using large\nfilters. Here, we use a 1 × 1 convolutional layer called the reduce layer that reduces\nthe computational cost significantly. We add reduce layers before the 3 × 3 and 5 × 5\nconvolutional layers ... |
241 | page_content='224 CHAPTER 5Advanced CNN architectures\ninception module and called it GoogLeNet. They used this network in their submission\nfor the ILSVRC 2014 competition. The GoogLeNet architecture is shown in figure 5.15.\nAs you can see, GoogLeNet uses a stack of a total of nine inception modules and a max\npooli... |
242 | page_content="225 Inception and GoogLeNet\n5.5.5 GoogLeNet in Keras\nNow, let’s implement the GoogLeNet architecture in Keras (figure 5.16). Notice that\nthe inception module takes the features from the previous module as input, passes them\nthrough four routes, concatenates the depth of the output of all four routes, ... |
243 | page_content="226 CHAPTER 5 Advanced CNN architectures\n kernel_initializer=kernel_init, \n bias_initializer=bias_init)(pre_conv_3x3)\n # 5 × 5 route = 1 × 1 CONV + 5 × 5 CONV \npre_conv_5x5 = Conv2D(filters_5x5_reduce, kernel_size=(1, 1), padding='same',\n acti... |
244 | page_content="227 Inception and GoogLeNet\nimplemented by Szegedy et al. in the original paper. (Note that “#3 × 3 reduce” and\n“#5 × 5 reduce” in the figure represent the 1 × 1 filters in the reduction layers that are\nused before the 3 × 3 and 5 × 5 convolutional layers.)\n Now, let’s go through the implementations o... |
245 | page_content="228 CHAPTER 5 Advanced CNN architectures\nx = Conv2D(64, (1, 1), padding='same', strides=(1, 1), activation='relu')(x)\nx = Conv2D(192, (3, 3), padding='same', strides=(1, 1), activation='relu')(x)\nx = BatchNormalization()(x)\nx = MaxPool2D((3, 3), padding='same', strides=(2, 2))(x)\nPART B: BUILDIN... |
246 | page_content="229 Inception and GoogLeNet\nNow, let’s create modules 5a and 5b:\nx = inception_module(x, filters_1x1=256, filters_3x3_reduce=160, filters_3x3=320, \n filters_5x5_reduce=32, filters_5x5=128, \nfilters_pool_proj=128, \n name= 'inception_5a' )\nx = inception_module(x... |
247 | page_content='230 CHAPTER 5Advanced CNN architectures\n5.6 ResNet\nThe Residual Neural Network (ResNet) was developed in 2015 by a group from the\nMicrosoft Research team.5 They introduced a novel residual module architecture with\nskip connections . The network also features heavy batch normalization for the hidden\n... |
248 | page_content='231 ResNet\nTo solve the vanishing gradient problem, He et al. created a shortcut that allows the\ngradient to be directly backpropagated to earlier layers. These shortcuts are called skip\nconnections : they are used to flow information from earlier layers in the network to\nlater layers, creating an alt... |
249 | page_content="232 CHAPTER 5Advanced CNN architectures\nThe code implementation of the skip connection is straightforward:\nX_shortcut = X \nX = Conv2D(filters = F1, kernel_size = (3, 3), strides = (1,1))(X) \nX = Activation( 'relu')(X) \nX = Conv2D(filters = F1, kernel_s... |
250 | page_content='233 ResNet\n• Classifiers —The classification part is still the same as we learned for other net-\nworks: fully connected layers followed by a softmax. \nNow that you know what a skip connection is and you are familiar with the high-level\narchitecture of ResNet, let’s unpack residual blocks to unders... |
251 | page_content='234 CHAPTER 5 Advanced CNN architectures\nlayer (1 × 1 convolutional layer + batch normalization) to the shortcut path, as shown\nin figure 5.23. This is called the reduce shortcut.\nBefore we jump into the code implementation, let’s recap the discussion of residual\nblocks:\n• Residual blocks contai... |
252 | page_content="235 ResNet\n• reduce —Boolean: True identifies the reduction layer \n• s—Integer (strides)\nThe function returns X: the output of the residual block, which is a tensor of shape\n(height, width, channel).\n The function is as follows:\ndef bottleneck_residual_block(X, kernel_size, filters, reduce... |
253 | page_content='236 CHAPTER 5Advanced CNN architectures\ncan use the same approach to develop ResNet with 18, 34, 101, and 152 layers by fol-\nlowing the architecture in figure 5.24 from the original paper.\nWe know from the previous section that each residual module contains 3 × 3 convolu-\ntional layers, and we now ca... |
254 | page_content="237 ResNet\nfrom the previous layer. Then we use the regular shortcut for the remaining\nlayers of that stage. Recall from our implementation of the bottleneck_\nresidual_block function that we will set the argument reduce to True to\napply the reduce shortcut.\nNow let’s follow the 50-layer architectu... |
255 | page_content="238 CHAPTER 5Advanced CNN architectures\n5.6.4 Learning hyperparameters\nHe et al. followed a training procedure similar to that of AlexNet: the training is car-\nried out using mini-batch GD with momentum of 0.9. The team set the learning rate\nto start with a value of 0.1 and then decreased it by a fac... |
256 | page_content='239 Summary\nSummary\n• Classical CNN architectures have the same classical architecture of stacking\nconvolutional and pooling layers on top of each other with different configura-\ntions for their layers.\n• LeNet consists of five weight layers: three convolutional and two fully connected\nlayer... |
257 | page_content='240 Transfer learning\nTransfer learning is one of the most important techniques of deep learning. When\nbuilding a vision system to solve a specific problem, you usually need to collect and\nlabel a huge amount of data to train your network. You can build convnets, as you\nlearned in chapter 3, and start ... |
258 | page_content='241 What problems does transfer learning solve?\n DL researchers and practitioners have posted many research papers and open\nsource projects of trained algorithms that they have worked on for weeks and months\nand trained on GPUs to get state-of-the-art results on an array of problems. Often,\nthe fact t... |
259 | page_content='242 CHAPTER 6Transfer learning\nmodel weights from pretrained models that were developed for standard CV bench-\nmark datasets, such as the ImageNet image-recognition tasks. Top-performing models\ncan be downloaded and used directly, or integrated into a new model for your own\nCV problems.\n The questio... |
260 | page_content='243 What is transfer learning?\nand domains where labeled data is plentiful. Transferring features extracted from\nanother network that has seen millions of images will make our model less prone to\noverfit and help it generalize better when faced with novel scenarios. You will be\nable to fully grasp thi... |
261 | page_content='244 CHAPTER 6Transfer learning\nIn transfer learning, we first train a base network on a base dataset and task, and\nthen we repurpose the learned features, or transfer them to a second target network to\nbe trained on a target dataset and task. This process will tend to work if the features\nare general... |
262 | page_content='245 What is transfer learning?\nTo fully understand how to use transfer learning, let’s implement this example in\nKeras. (Luckily, Keras has a set of pretrained networks that are ready for us to down-\nload and use: the complete list of models is at https://keras.io... |
263 | page_content='246 CHAPTER 6Transfer learning\n1Download the open source code of the VGG16 network and its weights to cre-\nate our base model, and remove the classification layers from the VGG network\n(FC_4096 > FC_4096 > Softmax_1000 ): \nfrom keras.applications.vgg16 import VGG16 \nbase_model = VGG16(weights ... |
264 | page_content="247 What is transfer learning?\nblock4_pool (MaxPooling2D) (None, 14, 14, 512) 0 \n_________________________________________________________________\nblock5_conv1 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5... |
265 | page_content="248 CHAPTER 6Transfer learning\nx = Flatten()(last_output) \nx = Dense(2, activation= 'softmax' , name='softmax' )(x) \n5Build a new_model that takes the input of the base model as its input and the\noutput of the last softmax layer as an output. The new model is composed of all\nthe feature extra... |
266 | page_content='249 What is transfer learning?\nblock3_conv1 (Conv2D) (None, 56, 56, 256) 295168 \n_________________________________________________________________\nblock3_conv2 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3... |
267 | page_content='250 CHAPTER 6Transfer learning\n6.3 How transfer learning works\nSo far, we learned what the transfer learning technique is and the main problems it\nsolves. We also saw an example of how to take a pretrained network that was trained\non ImageNet and transfer its learnings to our specific task. Now, let’... |
268 | page_content='251 How transfer learning works\nexist in the training set. They are called feature maps because they map where a certain\nkind of feature is found in the image. CNNs look for features such as straight lines,\nedges, and even objects. Whenever they spot these features, they report them to the\nfeature map... |
269 | page_content='252 CHAPTER 6Transfer learning\n6.3.1 How do neural networks learn features?\nA neural network learns the features in a dataset step by step in increasing levels of\ncomplexity, one layer after another. These are called feature maps . The deeper you go\nthrough the network layers, the more image-specific... |
270 | page_content='253 How transfer learning works\nmore specific to our task (human faces): mid-level features contain combinations of\nshapes that form objects in the human face like eyes and noses. As we go deeper\nthrough the network, we notice that features eventually transition from general to\nspecific and, by the la... |
271 | page_content='254 CHAPTER 6Transfer learning\n6.3.2 Transferability of features extracted at later layers\nThe transferability of features that are extracted at later layers depends on the similar-\nity of the original and new datasets. The idea is that all images must have shapes and\nedges, so the early layers are u... |
272 | page_content='255 Transfer learning approaches\nThe steps are as follows:\n1Import the necessary libraries:\nfrom keras.preprocessing.image import load_img\nfrom keras.preprocessing.image import img_to_array\nfrom keras.applications.vgg16 import preprocess_input\nfrom keras.applications.vgg16 import decode_predictions\... |
273 | page_content="256 CHAPTER 6Transfer learning\n4Now our input image is ready for us to run predictions:\nyhat = model.predict(image) \nlabel = decode_predictions(yhat) \nlabel = label[0][0] \nprint('%s (%.2f%%)' % (label[1], label[2]*100)) \nWhen you run this code, you will get the following output:\n>> Ge... |
274 | page_content='257 Transfer learning approaches\nFreeze the weights in the\nfeature extraction layers.\nRemove the\nclassifier.\nAdd a softmax layer\nwith 2 units.\n[Figure residue: VGG16 stack of 3 × 3 CONV layers (64, 128, 256, 512) with Pool/2 layers between blocks]... |
275 | page_content='258 CHAPTER 6Transfer learning\nclasses. The classifier part has been trained to overfit the training data to classify\nthem into 1,000 classes. But in our new problem, let’s say cats versus dogs, we have\nonly two classes. So, it is a lot more effective to train a new classifier from scratch to\noverfit... |
276 | page_content='259 Transfer learning approaches\nAs we discussed earlier, feature maps that are extracted early in the network are\ngeneric. The feature maps get progressively more specific as we go deeper in the net-\nwork. This means feature maps 4 in figure 6.10 are very specific to the source domain.\nBased on the s... |
277 | page_content='260 CHAPTER 6Transfer learning\n6.5 Choosing the appropriate level of transfer learning\nRecall that early convolutional layers extract generic features and become more spe-\ncific to the training data the deeper we go through the network. With that said, we can\nchoose the level of detail for feature ex... |
278 | page_content='261 Choosing the appropriate level of transfer learning\nthis case, the more fine-tuning we do, the more the network is prone to overfit the\nnew data.\n For example, suppose all the images in our new dataset contain dogs in a specific\nweather environment—snow, for example. If we fine-tuned on this datas... |
279 | page_content='262 CHAPTER 6Transfer learning\n6.5.5 Recap of the transfer learning scenarios\nWe’ve explored the two main factors that help us define which transfer learning\napproach to use (size of our data and similarity between the source and target data-\nsets). These two factors give us the four major scenarios ... |
280 | page_content='263 Open source datasets\n In this section, we will review some of the popular open source datasets to help\nguide you in your search to find the most suitable dataset for your problem. Keep in\nmind that the ones listed in this chapter are the most popular datasets used in the CV\nresearch community at t... |
281 | page_content='264 CHAPTER 6Transfer learning\n6.6.2 Fashion-MNIST\nFashion-MNIST was created with the intention of replacing the original MNIST data-\nset, which has become too simple for modern convolutional networks. The data is\nstored in the same format as MNIST, but instead of handwritten digits, it contains\n60,... |
282 | page_content='265 Open source datasets\nperfectly centered objects, whereas CIFAR images are color (three channels) with dra-\nmatic variation in how the objects appear. The CIFAR-10 dataset consists of 32×32\ncolor images in 10 classes, with 6,000 images per class. There are 50,000 training\nimages and 10,000 test ima... |
283 | page_content='266 CHAPTER 6Transfer learning\ntool. At the time of this writing, there are over 14 million images in the ImageNet\nproject. To organize such a massive amount of data, the creators of ImageNet followed\nthe WordNet hierarchy: each meaningful word/phrase in WordNet is called a synonym\nset (synset for s... |
284 | page_content='267 Open source datasets\ncontains 328,000 images. More than 200,000 of them are labeled, and they include 1.5\nmillion object instances and 80 object categories that would be easily recognizable by\na 4-year-old. The original research paper by the creators of the dataset describes the\nmotivation for and... |
285 | page_content='268 CHAPTER 6Transfer learning\n6.7 Project 1: A pretrained network as a feature extractor \nIn this project, we use a very small amount of data to train a classifier that detects\nimages of dogs and cats. This is a pretty simple project, but the goal of the exercise is\nto see how to implement transfer ... |
286 | page_content='269 Project 1: A pretrained network as a feature extractor\n The process to use a pretrained model as a feature extractor is well established:\n1Import the necessary libraries.\n2Preprocess the data to make it ready for the neural network.\n3Load pretrained weights from the VGG16 network trained on a larg... |
287 | page_content='270 CHAPTER 6Transfer learning\nI have the data structured in the book’s code so it’s ready for you to use flow_\nfrom_directory() . Now, load the data into train_path , valid_path , and test\n_path variables, and then generate the train, valid, and test batches:\ntrain_path = \'data/train\'\nvalid_pat... |
288 | page_content="271 Project 1: A pretrained network as a feature extractor\nstep and use that as a feature extractor, and then add a classifier on top of it in\nthe next step: \nfor layer in base_model.layers: \n layer.trainable = False\n5Add the new classifier, and build the new model. We add a few layers on top o... |
289 | page_content="272 CHAPTER 6Transfer learning\nblock4_conv1 (Conv2D) (None, 28, 28, 512) 1180160 \n_________________________________________________________________\nblock4_conv2 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nblock... |
290 | page_content="273 Project 1: A pretrained network as a feature extractor\n - 25s - loss: 0.0941 - acc: 0.9505 - val_loss: 0.2019 - val_acc: 0.9070\nEpoch 7/20\n - 28s - loss: 0.0269 - acc: 1.0000 - val_loss: 0.1707 - val_acc: 0.9000\nEpoch 8/20\n - 26s - loss: 0.0349 - acc: 0.9917 - val_loss: 0.2489 - val_acc: 0.8140\n... |
291 | page_content="274 CHAPTER 6Transfer learning\ndef path_to_tensor(img_path):\n img = image.load_img(img_path, target_size=(224, 224)) \n x = image.img_to_array(img) \n return np.expand_dims(x, axis=0) \ndef paths_to_tensor(img_paths):\n list_of_tensors = [path_to_tensor(img_path) for img_path i... |
292 | page_content="275 Project 2: Fine-tuning\nFollowing are the details of our dataset:\n• Number of classes = 10 (digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9)\n• Image size = 100 × 100\n• Color space = RGB\n• 1,712 images in the training set\n• 300 images in the validation set\n• 50 images in the test ... |
293 | page_content='276 CHAPTER 6Transfer learning\nfrom keras.layers import Dense, Flatten, Dropout, BatchNormalization\nfrom keras.models import Model\nfrom sklearn.metrics import confusion_matrix\nimport itertools\nimport matplotlib.pyplot as plt\n%matplotlib inline\n2Preprocess the data to make it ready for the neural n... |
294 | page_content='277 Project 2: Fine-tuning\nlanguage case, the new domain is very different from our domain, so we will\nstart with fine-tuning only the last five layers; if we don’t get satisfying results,\nwe can fine-tune more. It turns out that after we trained the new model, we\ngot 98% accuracy, so this was a good ... |
295 | page_content="278 CHAPTER 6Transfer learning\nTotal params: 14,714,688\nTrainable params: 7,079,424\nNon-trainable params: 7,635,264\n_________________________________________________________________\n5Add the new classifier layers, and build the new model:\nlast_output = base_model.output \nx = Dense(10, activat... |
296 | page_content="279 Project 2: Fine-tuning\nblock5_conv3 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_pool (MaxPooling2D) (None, 7, 7, 512) 0 \n_________________________________________________________________\nglobal_ave... |
297 | page_content="280 CHAPTER 6Transfer learning\nEpoch 10/150\n18/18 [==============================] - 39s 2s/step - loss: 0.2383 - acc: \n0.9276 - val_loss: 0.2844 - val_acc: 0.9000\nEpoch 11/150\n18/18 [==============================] - 41s 2s/step - loss: 0.1163 - acc: \n0.9778 - val_loss: 0.0775 - val_acc: 1.0000\nE... |
298 | page_content="281 Project 2: Fine-tuning\nunderstanding of how the model performed on the test dataset. See chapter 4\nfor details on the different model evaluation metrics. Now, let’s build the confu-\nsion matrix for our model (see figure 6.20):\nfrom sklearn.metrics import confusion_matrix\nimport numpy as np\ncm_l... |
299 | page_content='282 CHAPTER 6Transfer learning\ngo through the rest of the numbers on the Predicted Label axis. You will notice\nthat the model successfully made the correct predictions for all the test images\nexcept the image with true label = 8. In that case, the model mistakenly classi-\nfied an image of number 8 as... |
300 | page_content='283 Object detection with\nR-CNN, SSD, and YOLO\nIn the previous chapters, we explained how we can use deep neural networks for\nimage classification tasks. In image classification, we assume that there is only one\nmain target object in the image, and the model’s sole focus is to identify the target\ncate... |
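The rows above for pages 183–186 describe batch normalization conceptually, but their code excerpts are truncated. As a self-contained illustration of the normalize-then-scale-and-shift arithmetic those pages describe — a minimal NumPy sketch with hypothetical names, not the book's Keras code — the BN forward pass over a mini-batch can be written as:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch normalization forward pass (illustrative sketch).

    x: (batch, features) activations; gamma/beta: learnable (features,) params.
    """
    mu = x.mean(axis=0)                     # per-feature mini-batch mean
    var = x.var(axis=0)                     # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # ~zero mean, ~unit variance
    return gamma * x_hat + beta             # scale and shift restore capacity

# Toy mini-batch: 4 samples, 3 features
x = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [3., 6., 9.],
              [4., 8., 12.]])
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
# Each feature column of `out` now has roughly zero mean and unit variance
print(out.mean(axis=0), out.var(axis=0))
```

In practice you would use Keras's built-in BatchNormalization layer, as the excerpts note; the sketch only makes the per-feature statistics explicit.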