# -*- coding: utf-8 -*-
"""Copy of Coding Challenge for Fatima Fellowship

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/117STYJL65mfNEBthtUrJ7rfRavlj0415

# Fatima Fellowship Quick Coding Challenge (Pick 1)

Thank you for applying to the Fatima Fellowship. To help us select the Fellows and assess your ability to do machine learning research, we are asking that you complete a short coding challenge. Please pick **1 of these 5** coding challenges, whichever is most aligned with your interests.

**Due date: 1 week**

**How to submit**: Please make a copy of this colab notebook, add your code and results, and submit your colab notebook to the submission link below. If you have never used a colab notebook, [check out this video](https://www.youtube.com/watch?v=i-HnvsehuSw).

**Submission link**: https://airtable.com/shrXy3QKSsO2yALd3

# 1. Deep Learning for Vision

**Upside down detector**: Train a model to detect if images are upside down

* Pick a dataset of natural images (we suggest looking at datasets on the [Hugging Face Hub](https://huggingface.co/datasets?task_categories=task_categories:image-classification&sort=downloads))
* Synthetically turn some of the images upside down. Create a training and test set.
* Build a neural network (using TensorFlow, PyTorch, or any framework you like)
* Train it to classify image orientation until a reasonable accuracy is reached
* [Upload the model to the Hugging Face Hub](https://huggingface.co/docs/hub/adding-a-model), and add a link to your model below.
* Look at some of the images that were classified incorrectly. Please explain what you might do to improve your model's performance on these images in the future (you do not need to implement these suggestions)

**Submission instructions**: Please write your code below and include some examples of images that were classified incorrectly.
"""

### WRITE YOUR CODE TO TRAIN THE MODEL HERE

"""**Importing Important Libraries**"""

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

import torch
import torch.nn as nn
from torch.autograd import Variable  # legacy wrapper; a no-op in modern PyTorch

import torchvision
from torchvision import datasets, transforms
from torch.utils.data import Dataset, DataLoader
from sklearn.metrics import confusion_matrix

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

import requests
from PIL import Image
transform_train = transforms.Compose([
    transforms.Resize((28, 28)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.RandomAffine(0, shear=10, scale=(0.8, 1.2)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])

batch_size = 100  # both loaders below use batches of 100
learning_rate = 0.01

train_set = datasets.MNIST(root='./data', train=True, download=True, transform=transform_train)
test_set = datasets.MNIST(root='./data', train=False, download=True, transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_set, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(dataset=test_set, batch_size=batch_size, shuffle=True)
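The challenge statement above asks for orientation labels rather than digit labels. A minimal sketch of how one might derive an upside-down training set from any image dataset (the wrapper class and its flipping rule are illustrative, not part of the original code):

```python
import torch
from torch.utils.data import Dataset

class UpsideDownDataset(Dataset):
    """Wraps an image dataset and flips half the images vertically.

    Label 0 = upright, 1 = upside down (the base dataset's labels are discarded).
    """
    def __init__(self, base):
        self.base = base

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, _ = self.base[idx]              # img: tensor of shape (C, H, W)
        flipped = idx % 2 == 1               # deterministically flip every other image
        if flipped:
            img = torch.flip(img, dims=[1])  # flip along the height axis
        return img, int(flipped)
```

Wrapping `train_set` and `test_set` in this class would turn the digit-classification pipeline below into the orientation task the challenge describes.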
def output_label(label):
    output_mapping = {
        0: "zero",
        1: "one",
        2: "two",
        3: "three",
        4: "four",
        5: "five",
        6: "six",
        7: "seven",
        8: "eight",
        9: "nine",
    }
    value = label.item() if isinstance(label, torch.Tensor) else label
    return output_mapping[value]

"""**Playing with the data and displaying some images using matplotlib's imshow() method.**"""
a = next(iter(train_loader))
a[0].size()

# Commented out IPython magic to ensure Python compatibility.
import matplotlib.pyplot as plt
# %matplotlib inline

# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch
images = images.numpy()

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 10, idx + 1, xticks=[], yticks=[])  # 2 rows x 10 columns
    ax.imshow(np.squeeze(images[idx]), cmap='gray')
    # print out the correct label for each image
    # .item() gets the value contained in a Tensor
    ax.set_title(str(labels[idx].item()))
for i, label in enumerate(labels):
    print(output_label(label), end=", ")

img = np.squeeze(images[1])

fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
height, width = img.shape  # rows (y) first, then columns (x)
thresh = img.max() / 2.5
for x in range(height):
    for y in range(width):
        val = round(float(img[x][y]), 2) if img[x][y] != 0 else 0
        ax.annotate(str(val), xy=(y, x),
                    horizontalalignment='center',
                    verticalalignment='center',
                    color='white' if img[x][y] < thresh else 'black')
"""# **Building a CNN**

- Make a model class (mnistCNN in our case).
- It inherits from nn.Module, the base class for all neural networks in PyTorch.
- Our network has the following layers:
  - Two Sequential blocks, each consisting of:
    - A convolution layer with a 3 x 3 kernel; padding = 1 (zero padding) in the first block and padding = 0 in the second. Stride of 1 in both.
    - A batch normalization layer.
    - Activation function: ReLU.
    - A max pooling layer with a 2 x 2 kernel and stride 2.
  - Flatten out the output for the dense layers (a.k.a. fully connected layers).
  - 3 fully connected layers with different in/out features.
  - 1 dropout layer with drop probability p = 0.25.
- All the functionality is given in the forward method, which defines the forward pass of the CNN.
- Our input image changes shape as follows:
  - First convolution layer: input 28 x 28 x 1, output 28 x 28 x 32
  - First max pooling layer: input 28 x 28 x 32, output 14 x 14 x 32
  - Second convolution layer: input 14 x 14 x 32, output 12 x 12 x 64
  - Second max pooling layer: input 12 x 12 x 64, output 6 x 6 x 64
  - The final fully connected layer has 10 output features.
"""
class mnistCNN(nn.Module):

    def __init__(self):
        super(mnistCNN, self).__init__()

        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )

        self.layer2 = nn.Sequential(
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )

        self.fc1 = nn.Linear(in_features=64 * 6 * 6, out_features=600)
        self.drop = nn.Dropout(0.25)  # Dropout, not Dropout2d: the input here is already flattened
        self.fc2 = nn.Linear(in_features=600, out_features=120)
        self.fc3 = nn.Linear(in_features=120, out_features=10)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        out = self.drop(out)
        out = self.fc2(out)
        out = self.fc3(out)

        return out
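The shape walkthrough above can be sanity-checked by feeding a dummy batch through copies of the two conv blocks (a standalone sketch; the layer definitions mirror the ones in the class):

```python
import torch
import torch.nn as nn

# mirror of the two conv blocks, to verify the stated shapes
layer1 = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32),
                       nn.ReLU(), nn.MaxPool2d(2, 2))
layer2 = nn.Sequential(nn.Conv2d(32, 64, 3), nn.BatchNorm2d(64),
                       nn.ReLU(), nn.MaxPool2d(2))

x = torch.zeros(4, 1, 28, 28)  # a dummy batch of four 28x28 grayscale images
h1 = layer1(x)
h2 = layer2(h1)
print(h1.shape)  # torch.Size([4, 32, 14, 14])
print(h2.shape)  # torch.Size([4, 64, 6, 6]) -> 64*6*6 features after flattening
```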
"""# Making a model of our CNN class

- Creating an object (model in the code).
- Transferring it onto the GPU, if available.
- Defining a loss function; we're using CrossEntropyLoss() here.
- Using the Adam algorithm for optimization.
"""

model = mnistCNN()
model.to(device)

error = nn.CrossEntropyLoss()

learning_rate = 0.001
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
print(model)
"""**Training the network and testing it on the test dataset**"""

num_epochs = 5
count = 0
# Lists for visualization of loss and accuracy
loss_list = []
iteration_list = []
accuracy_list = []

# Lists for computing classwise accuracy
predictions_list = []
labels_list = []

for epoch in range(num_epochs):
    for images, labels in train_loader:
        # Transferring images and labels to GPU if available
        images, labels = images.to(device), labels.to(device)

        train = images.view(images.size(0), 1, 28, 28)  # avoid hardcoding the batch size

        # Forward pass
        outputs = model(train)
        loss = error(outputs, labels)

        # Zeroing the gradients so there is no mixing of gradients among the batches
        optimizer.zero_grad()

        # Propagating the error backward
        loss.backward()

        # Optimizing the parameters
        optimizer.step()

        count += 1

        # Testing the model

        if not (count % 50):  # same as "if count % 50 == 0"
            total = 0
            correct = 0

            for images, labels in test_loader:
                images, labels = images.to(device), labels.to(device)
                labels_list.append(labels)

                test = images.view(images.size(0), 1, 28, 28)

                outputs = model(test)

                predictions = torch.max(outputs, 1)[1].to(device)
                predictions_list.append(predictions)
                correct += (predictions == labels).sum()

                total += len(labels)

            accuracy = correct.item() * 100 / total
            loss_list.append(loss.item())  # .item() so the lists hold plain floats for plotting
            iteration_list.append(count)
            accuracy_list.append(accuracy)

            if not (count % 500):
                print("Iteration: {}, Loss: {}, Accuracy: {}%".format(count, loss.item(), accuracy))
"""**Visualizing the loss and accuracy across iterations**"""

plt.plot(iteration_list, loss_list)
plt.xlabel("No. of Iterations")
plt.ylabel("Loss")
plt.title("Iterations vs Loss")
plt.show()

plt.plot(iteration_list, accuracy_list)
plt.xlabel("No. of Iterations")
plt.ylabel("Accuracy")
plt.title("Iterations vs Accuracy")
plt.show()

"""**Looking at the accuracy for each class of the MNIST dataset**"""
class_correct = [0. for _ in range(10)]
total_correct = [0. for _ in range(10)]

with torch.no_grad():
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        outputs = model(images)
        predicted = torch.max(outputs, 1)[1]
        c = (predicted == labels).squeeze()

        for i in range(len(labels)):  # iterate over the whole batch, whatever its size
            label = labels[i]
            class_correct[label] += c[i].item()
            total_correct[label] += 1

for i in range(10):
    print("Accuracy of {}: {:.2f}%".format(output_label(i), class_correct[i] * 100 / total_correct[i]))

"""**Printing the Confusion Matrix**"""
from itertools import chain

predictions_l = [predictions_list[i].tolist() for i in range(len(predictions_list))]
labels_l = [labels_list[i].tolist() for i in range(len(labels_list))]
predictions_l = list(chain.from_iterable(predictions_l))
labels_l = list(chain.from_iterable(labels_l))

import sklearn.metrics as metrics

print(confusion_matrix(labels_l, predictions_l))
print("Classification report for CNN :\n%s\n"
      % (metrics.classification_report(labels_l, predictions_l)))
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)

# get sample outputs (move the batch to the same device as the model)
output = model(images.to(device))
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
preds = preds.cpu()
# prep images for display
images = images.numpy()

# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 10, idx + 1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(images[idx]), cmap='gray')
    ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())),
                 color=("green" if preds[idx] == labels[idx] else "red"))
"""Our model is performing excellently, mainly because of the data augmentation applied to the training data.

**Write up**:
* Link to the model on Hugging Face Hub:
* Include some examples of misclassified images. Please explain what you might do to improve your model's performance on these images in the future (you do not need to implement these suggestions)

# 2. Deep Learning for NLP

**Fake news classifier**: Train a text classification model to detect fake news articles!

* Download the dataset here: https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset
* Develop an NLP model for classification that uses a pretrained language model
* Finetune your model on the dataset, and generate an AUC curve of your model on the test set of your choice.
* [Upload the model to the Hugging Face Hub](https://huggingface.co/docs/hub/adding-a-model), and add a link to your model below.
* *Answer the following question*: Look at some of the news articles that were classified incorrectly. Please explain what you might do to improve your model's performance on these news articles in the future (you do not need to implement these suggestions)
"""
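The AUC-curve step in the bullets above can be sketched with scikit-learn. `y_true` and `scores` are dummy stand-ins for the test labels and the finetuned model's probabilities for the "fake" class:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# dummy stand-ins for the test labels and model scores
y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, thresholds = roc_curve(y_true, scores)  # points of the ROC curve
auc = roc_auc_score(y_true, scores)
print(round(auc, 2))  # 0.75 for these dummy values
```

Plotting `fpr` against `tpr` (e.g. with `plt.plot(fpr, tpr)`) gives the AUC curve the challenge asks for.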

### WRITE YOUR CODE TO TRAIN THE MODEL HERE

"""**Write up**:
* Link to the model on Hugging Face Hub:
* Include some examples of misclassified news articles. Please explain what you might do to improve your model's performance on these news articles in the future (you do not need to implement these suggestions)

# 3. Deep RL / Robotics

**RL for Classical Control:** Using any of the [classical control](https://github.com/openai/gym/blob/master/docs/environments.md#classic-control) environments from OpenAI's `gym`, implement a deep NN that learns an optimal policy which maximizes the reward of the environment.

* Describe the NN you implemented and the behavior you observe from the agent as the model converges (or diverges).
* Plot the reward as a function of steps (or epochs). Compare your results to a random agent.
* Discuss whether you think your model has learned the optimal policy and potential methods for improving it and/or where it might fail.
* (Optional) [Upload the model to the Hugging Face Hub](https://huggingface.co/docs/hub/adding-a-model), and add a link to your model below.

You may use any frameworks you like, but you must implement your NN on your own (no pre-defined/trained models like [`stable_baselines`](https://stable-baselines.readthedocs.io/en/master/)).

You may use any simulator other than `gym`, _however_:
* The environment has to be similar to the classical control environments (or more complex, like [`robosuite`](https://github.com/ARISE-Initiative/robosuite)).
* You cannot choose a game/Atari/text-based environment. The purpose of this challenge is to demonstrate an understanding of basic kinematic/dynamic systems.
"""
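A minimal sketch of the policy-gradient (REINFORCE) idea such an agent could build on, shown on a trivial one-step environment instead of a gym one so it runs anywhere; the environment, network, and hyperparameters are all illustrative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# One-step "environment": action 1 yields reward 1, action 0 yields reward 0.
def step(action):
    return float(action == 1)

policy = nn.Sequential(nn.Linear(1, 2))  # maps a dummy state to 2 action logits
optimizer = torch.optim.Adam(policy.parameters(), lr=0.1)
state = torch.ones(1, 1)

for _ in range(200):
    logits = policy(state)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = step(action.item())
    # REINFORCE: push up the log-prob of actions in proportion to their reward
    loss = -dist.log_prob(action) * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

probs = torch.softmax(policy(state), dim=-1)
print(probs)  # probability of action 1 should now dominate
```

A real classical-control agent replaces the one-step environment with episode rollouts and scales each log-prob by the (discounted) return of its episode.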

### WRITE YOUR CODE TO TRAIN THE MODEL HERE

"""**Write up**:
* (Optional) link to the model on Hugging Face Hub:
* Discuss whether you think your model has learned the optimal policy and potential methods for improving it and/or where it might fail.

# 4. Theory / Linear Algebra

**Implement Contrastive PCA**: Read [this paper](https://www.nature.com/articles/s41467-018-04608-8) and implement contrastive PCA in Python.

* First, please discuss what kind of dataset it would make sense to use this method on
* Implement the method in Python (do not use previous implementations of the method if they already exist)
* Then create a synthetic dataset and apply the method to the synthetic data. Compare with standard PCA.

**Write up**: Discuss what kind of dataset it would make sense to use Contrastive PCA on
"""
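Contrastive PCA takes the top eigenvectors of C_target - alpha * C_background instead of C_target alone; a minimal sketch of that core step (the synthetic data shapes and the choice alpha = 1.0 are illustrative):

```python
import numpy as np

def contrastive_pca(target, background, n_components=2, alpha=1.0):
    """Top eigenvectors of C_target - alpha * C_background.

    target, background: (n_samples, n_features) arrays.
    Returns an (n_features, n_components) projection matrix.
    """
    target = target - target.mean(axis=0)
    background = background - background.mean(axis=0)
    c_target = target.T @ target / (len(target) - 1)
    c_background = background.T @ background / (len(background) - 1)
    # eigh: symmetric eigendecomposition, eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(c_target - alpha * c_background)
    return eigvecs[:, ::-1][:, :n_components]  # largest eigenvalues first

# synthetic check: axis 0 carries large nuisance variance shared by both
# datasets, axis 2 carries variance unique to the target data
rng = np.random.default_rng(0)
background = rng.normal(scale=[3.0, 1.0, 0.1], size=(500, 3))
target = rng.normal(scale=[3.0, 1.0, 2.0], size=(500, 3))
v = contrastive_pca(target, background, n_components=1)
print(np.abs(v[:, 0]).argmax())  # cPCA picks axis 2, which plain PCA would not rank first
```

Plain PCA on `target` alone would rank axis 0 (variance 9) first; subtracting the background covariance cancels the shared nuisance direction and surfaces the target-specific one.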

### WRITE YOUR CODE HERE

"""# 5. Systems

**Inference on the edge**: Measure the inference times in various computationally-constrained settings

* Pick a few different speech detection models (we suggest looking at models on the [Hugging Face Hub](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads))
* Simulate different memory constraints and CPU allocations that are realistic for edge devices that might run such models, such as smart speakers or microcontrollers, and measure the average inference time of the models under these conditions
* How does the inference time vary with (1) choice of model, (2) available system memory, (3) available CPU, and (4) size of input?

Are there any surprising discoveries? (Note that this coding challenge is fairly open-ended, so we will be considering the amount of effort invested in discovering something interesting here.)
"""
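One way to approach the CPU-allocation axis is a timing harness like the sketch below. The matrix multiply is a stand-in for an actual speech model, and `torch.set_num_threads` only limits intra-op parallelism rather than truly emulating a slower edge core:

```python
import time
import torch

def average_inference_time(fn, n_runs=10):
    """Average wall-clock seconds over n_runs calls, after one warm-up call."""
    fn()  # warm-up, so one-time setup cost is excluded
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - start) / n_runs

x = torch.randn(256, 256)
w = torch.randn(256, 256)

for threads in (1, 2, 4):
    torch.set_num_threads(threads)
    t = average_inference_time(lambda: x @ w)
    print(f"{threads} thread(s): {t * 1e6:.1f} us")
```

Memory constraints could similarly be simulated at the OS level (e.g. cgroups or a container with a memory limit) while the harness above records the timings.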

### WRITE YOUR CODE HERE

"""**Write up**: What surprising discoveries do you see?"""