# **U-Net (2D)**
---
<font size = 4>U-Net is an encoder-decoder network architecture originally used for image segmentation, first published by [Ronneberger *et al.*](https://arxiv.org/abs/1505.04597). The first half of the U-Net architecture is a downsampling convolutional neural network which acts as a feature extractor from input images. The other half upsamples these results and restores an image by combining results from downsampling with the upsampled images.
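<font size = 4>As a concrete illustration of this encoder-decoder layout (using the notebook's default settings of a 256x256 input and 4 pooling steps; the helper name below is ours, for illustration), each 2x2 max-pooling step halves the spatial resolution on the way down, and each upsampling step doubles it on the way back up:

```python
def bottleneck_size(input_size, pooling_steps):
    """Spatial size at the bottom of the 'U' after repeated 2x2 max-pooling."""
    return input_size // (2 ** pooling_steps)

# With the notebook defaults (256x256 input, 4 pooling steps), feature maps
# shrink to 16x16 at the bottleneck before the decoder upsamples them back.
print(bottleneck_size(256, 4))
```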
<font size = 4> **This particular notebook enables image segmentation of 2D datasets. If you are interested in 3D datasets, you should use the 3D U-Net notebook instead.**
---
<font size = 4>*Disclaimer*:
<font size = 4>This notebook is part of the Zero-Cost Deep-Learning to Enhance Microscopy project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki), jointly developed by the [Jacquemet](https://cellmig.org/) and [Henriques](https://henriqueslab.github.io/) laboratories. The BioImage Model Zoo export was jointly developed by [Estibaliz Gómez de Mariscal](https://github.com/esgomezm) (deepImageJ team).
<font size = 4>This notebook is largely based on the papers:
<font size = 4>**U-Net: Convolutional Networks for Biomedical Image Segmentation** by Ronneberger *et al.* published on arXiv in 2015 (https://arxiv.org/abs/1505.04597)
<font size = 4>and
<font size = 4>**U-Net: deep learning for cell counting, detection, and morphometry** by Thorsten Falk *et al.* in Nature Methods 2019
(https://www.nature.com/articles/s41592-018-0261-2)
<font size = 4>and on the source code found at https://github.com/zhixuhao/unet by *Zhixuhao*.
<font size = 4>The guidelines to use the trained network in ImageJ with deepImageJ are given in the following paper:
<font size = 4>**DeepImageJ: a user-friendly environment to run deep learning models in ImageJ**, bioRxiv (2019) by *Estibaliz Gómez-de-Mariscal, Carlos García-López-de-Haro, Wei Ouyang, Laurène Donati, Emma Lundberg, Michael Unser, Arrate Muñoz-Barrutia and Daniel Sage* (https://doi.org/10.1101/799270)
<font size = 4>**Please also cite this original paper when using or developing this notebook.**
# **How to use this notebook?**
---
<font size = 4>Videos describing how to use our notebooks are available on YouTube:
- [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook
- [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook
---
###**Structure of a notebook**
<font size = 4>The notebook contains two types of cell:
<font size = 4>**Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.
<font size = 4>**Code cells** contain code that can be modified by selecting the cell. To execute the cell, hover over the `[ ]` mark on the left side of the cell (a play button appears) and click it. The play button animation stops once execution is done. You can create a new code cell by clicking `+ Code`.
---
###**Table of contents, Code snippets** and **Files**
<font size = 4>On the top left side of the notebook you find three tabs which contain from top to bottom:
<font size = 4>*Table of contents* = contains the structure of the notebook. Click a section title to move quickly between sections.
<font size = 4>*Code snippets* = contains examples of how to code certain tasks. You can ignore this when using this notebook.
<font size = 4>*Files* = contains all available files. After mounting your Google Drive (see section 1.) you will find your files and folders here.
<font size = 4>**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive button; your Google Drive is connected in section 1.2.
<font size = 4>**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!
---
###**Making changes to the notebook**
<font size = 4>**You can make a copy** of the notebook and save it to your Google Drive. To do this, click File -> Save a copy in Drive.
<font size = 4>To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).
You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment.
# **0. Before getting started**
---
<font size = 4>Before you run the notebook, please ensure that you are logged into your Google account and have the training and/or data to process in your Google Drive.
<font size = 4>For U-Net to train, **it needs access to a paired training dataset of images and their corresponding masks**. Information on how to generate a training dataset is available on our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki
<font size = 4>**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model (Quality control dataset)**. The quality control assessment can be done directly in this notebook.
<font size = 4>Additionally, the corresponding Training_source and Training_target files need to have **the same name**.
<font size = 4>Here's a common data structure that can work:
* Experiment A
- **Training dataset**
- Training_source
- img_1.tif, img_2.tif, ...
- Training_target
- img_1.tif, img_2.tif, ...
- **Quality control dataset**
- Training_source
- img_1.tif, img_2.tif
- Training_target
- img_1.tif, img_2.tif
- **Data to be predicted**
- **Results**
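<font size = 4>Since source and target images are matched purely by file name, it is worth checking a dataset for unpaired files before training. A minimal sketch (the `check_pairing` helper and the folder paths are ours, for illustration):

```python
import os

def check_pairing(source_dir, target_dir):
    """Return file names present in one folder but not in the other."""
    source_files = set(os.listdir(source_dir))
    target_files = set(os.listdir(target_dir))
    return source_files ^ target_files  # symmetric difference = unpaired names

# Example use (adjust the paths to your own Google Drive folders):
# unpaired = check_pairing('Training_source', 'Training_target')
# if unpaired:
#     print('Files without a matching pair:', sorted(unpaired))
```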
---
<font size = 4>**Important note**
<font size = 4>- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.
<font size = 4>- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.
<font size = 4>- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.
---
# **1. Install U-Net dependencies**
---
## **1.1. Install key dependencies**
---
<font size = 4>
```
#@markdown ##Play to install U-Net dependencies
!pip install pydeepimagej
!pip install data
!pip install fpdf
!pip install h5py==2.10
#Force session restart
exit(0)
```
## **1.2. Restart your runtime**
---
<font size = 4>
**<font size = 4> Ignore the following error message. Your runtime has automatically restarted. This is normal.**
<img width="40%" alt ="" src="https://github.com/HenriquesLab/ZeroCostDL4Mic/raw/master/Wiki_files/session_crash.png"><figcaption> </figcaption>
## **1.3. Load key dependencies**
---
<font size = 4>
```
Notebook_version = '1.13'
Network = 'U-Net (2D) BioimageIO'
from builtins import any as b_any
def get_requirements_path():
    # Store requirements file in 'contents' directory
    current_dir = os.getcwd()
    dir_count = current_dir.count('/') - 1
    path = '../' * (dir_count) + 'requirements.txt'
    return path

def filter_files(file_list, filter_list):
    filtered_list = []
    for fname in file_list:
        if b_any(fname.split('==')[0] in s for s in filter_list):
            filtered_list.append(fname)
    return filtered_list

def build_requirements_file(before, after):
    path = get_requirements_path()

    # Exporting requirements.txt for local run
    !pip freeze > $path

    # Get minimum requirements file
    df = pd.read_csv(path, delimiter = "\n")
    mod_list = [m.split('.')[0] for m in after if not m in before]
    req_list_temp = df.values.tolist()
    req_list = [x[0] for x in req_list_temp]

    # Replace with package name and handle cases where import name is different to module name
    mod_name_list = [['sklearn', 'scikit-learn'], ['skimage', 'scikit-image']]
    mod_replace_list = [[x[1] for x in mod_name_list] if s in [x[0] for x in mod_name_list] else s for s in mod_list]
    filtered_list = filter_files(req_list, mod_replace_list)

    file = open(path, 'w')
    for item in filtered_list:
        file.writelines(item + '\n')
    file.close()
import sys
before = [str(m) for m in sys.modules]
#@markdown ##Load key U-Net dependencies
#As this notebook depends mostly on keras, which runs on a tensorflow backend (pre-installed in colab),
#only the data library needs to be additionally installed.
%tensorflow_version 1.x
import tensorflow as tf
# print(tensorflow.__version__)
# print("Tensorflow enabled.")
# Keras imports
from keras import models
from keras.models import Model, load_model
from keras.layers import Input, Conv2D, MaxPooling2D, Dropout, concatenate, UpSampling2D
from keras.optimizers import Adam
# from keras.callbacks import ModelCheckpoint, LearningRateScheduler, CSVLogger # we currently don't use any other callbacks from ModelCheckpoints
from keras.callbacks import ModelCheckpoint
from keras.callbacks import ReduceLROnPlateau
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
from keras import backend as keras
# General import
from __future__ import print_function
import numpy as np
import pandas as pd
import os
import glob
from skimage import img_as_ubyte, io, transform
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.pyplot import imread
from pathlib import Path
import shutil
import random
import time
import csv
import sys
from math import ceil
from fpdf import FPDF, HTMLMixin
from pip._internal.operations.freeze import freeze
import subprocess
# Imports for QC
from PIL import Image
from scipy import signal
from scipy import ndimage
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
# For sliders and dropdown menu and progress bar
from ipywidgets import interact
import ipywidgets as widgets
# from tqdm import tqdm
from tqdm.notebook import tqdm
from sklearn.feature_extraction import image
from skimage import img_as_ubyte, io, transform
from skimage.util.shape import view_as_windows
from datetime import datetime
# Suppressing some warnings
import warnings
warnings.filterwarnings('ignore')
def create_patches(Training_source, Training_target, patch_width, patch_height, min_fraction):
    """
    Function creates patches from the Training_source and Training_target images.
    The steps parameter indicates the offset between patches and, if integer, is the same in x and y.
    Saves all created patches in two new directories in the /content folder.
    min_fraction is the minimum fraction of pixels that need to be foreground for a patch to be considered valid.
    Returns: - Two paths to where the patches are now saved
    """
    DEBUG = False

    Patch_source = os.path.join('/content', 'img_patches')
    Patch_target = os.path.join('/content', 'mask_patches')
    Patch_rejected = os.path.join('/content', 'rejected')

    # Here we save the patches, in the /content directory as they will not usually be needed after training
    if os.path.exists(Patch_source):
        shutil.rmtree(Patch_source)
    if os.path.exists(Patch_target):
        shutil.rmtree(Patch_target)
    if os.path.exists(Patch_rejected):
        shutil.rmtree(Patch_rejected)

    os.mkdir(Patch_source)
    os.mkdir(Patch_target)
    os.mkdir(Patch_rejected)  # This directory will contain the images that have too little signal.

    patch_num = 0

    for file in tqdm(os.listdir(Training_source)):
        img = io.imread(os.path.join(Training_source, file))
        mask = io.imread(os.path.join(Training_target, file), as_gray=True)

        if DEBUG:
            print(file)
            print(img.dtype)

        # Using view_as_windows with step size equal to the patch size to ensure there is no overlap
        patches_img = view_as_windows(img, (patch_width, patch_height), (patch_width, patch_height))
        patches_mask = view_as_windows(mask, (patch_width, patch_height), (patch_width, patch_height))

        patches_img = patches_img.reshape(patches_img.shape[0]*patches_img.shape[1], patch_width, patch_height)
        patches_mask = patches_mask.reshape(patches_mask.shape[0]*patches_mask.shape[1], patch_width, patch_height)

        if DEBUG:
            print(patches_img.shape)
            print(patches_img.dtype)

        for i in range(patches_img.shape[0]):
            img_save_path = os.path.join(Patch_source, 'patch_'+str(patch_num)+'.tif')
            mask_save_path = os.path.join(Patch_target, 'patch_'+str(patch_num)+'.tif')
            patch_num += 1

            # If the mask contains at least min_fraction of its pixels as foreground, save the patch
            pixel_threshold_array = sorted(patches_mask[i].flatten())
            if pixel_threshold_array[int(round(len(pixel_threshold_array)*(1-min_fraction)))] > 0:
                io.imsave(img_save_path, img_as_ubyte(normalizeMinMax(patches_img[i])))
                io.imsave(mask_save_path, convert2Mask(normalizeMinMax(patches_mask[i]), 0))
            else:
                io.imsave(Patch_rejected+'/patch_'+str(patch_num)+'_image.tif', img_as_ubyte(normalizeMinMax(patches_img[i])))
                io.imsave(Patch_rejected+'/patch_'+str(patch_num)+'_mask.tif', convert2Mask(normalizeMinMax(patches_mask[i]), 0))

    return Patch_source, Patch_target
def estimatePatchSize(data_path, max_width = 512, max_height = 512):
    files = os.listdir(data_path)

    # Get the size of the first image found in the folder and initialise the variables to that
    n = 0
    while os.path.isdir(os.path.join(data_path, files[n])):
        n += 1
    (height_min, width_min) = Image.open(os.path.join(data_path, files[n])).size

    # Screen the size of all dataset to find the minimum image size
    for file in files:
        if not os.path.isdir(os.path.join(data_path, file)):
            (height, width) = Image.open(os.path.join(data_path, file)).size
            if width < width_min:
                width_min = width
            if height < height_min:
                height_min = height

    # Find the power of patches that will fit within the smallest dataset
    width_min, height_min = (fittingPowerOfTwo(width_min), fittingPowerOfTwo(height_min))

    # Clip values at maximum permissible values
    if width_min > max_width:
        width_min = max_width
    if height_min > max_height:
        height_min = max_height

    return (width_min, height_min)

def fittingPowerOfTwo(number):
    n = 0
    while 2**n <= number:
        n += 1
    return 2**(n-1)
def getClassWeights(Training_target_path):
    Mask_dir_list = os.listdir(Training_target_path)
    number_of_dataset = len(Mask_dir_list)

    class_count = np.zeros(2, dtype=int)
    for i in tqdm(range(number_of_dataset)):
        mask = io.imread(os.path.join(Training_target_path, Mask_dir_list[i]))
        mask = normalizeMinMax(mask)
        class_count[0] += mask.shape[0]*mask.shape[1] - mask.sum()
        class_count[1] += mask.sum()

    n_samples = class_count.sum()
    n_classes = 2

    class_weights = n_samples / (n_classes * class_count)
    return class_weights

def weighted_binary_crossentropy(class_weights):
    def _weighted_binary_crossentropy(y_true, y_pred):
        binary_crossentropy = keras.binary_crossentropy(y_true, y_pred)
        weight_vector = y_true * class_weights[1] + (1. - y_true) * class_weights[0]
        weighted_binary_crossentropy = weight_vector * binary_crossentropy
        return keras.mean(weighted_binary_crossentropy)

    return _weighted_binary_crossentropy
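# Worked example of the balanced class-weight formula in getClassWeights above,
# with illustrative counts: 100 pixels in total, 90 background / 10 foreground,
# gives class_weights = 100 / (2 * [90, 10]) = [0.56, 5.0]; pixels of the rarer
# foreground class therefore contribute ~9x more to the weighted loss above.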
def save_augment(datagen, orig_img, dir_augmented_data="/content/augment"):
    """
    Saves a subset of the augmented data for visualisation, by default in /content.
    This is adapted from: https://fairyonice.github.io/Learn-about-ImageDataGenerator.html
    """
    try:
        os.mkdir(dir_augmented_data)
    except:
        ## if the preview folder exists, then remove
        ## the contents (pictures) in the folder
        for item in os.listdir(dir_augmented_data):
            os.remove(dir_augmented_data + "/" + item)

    ## convert the original image to array
    x = img_to_array(orig_img)
    ## reshape (Sample, Nrow, Ncol, 3) 3 = R, G or B
    x = x.reshape((1,) + x.shape)

    ## -------------------------- ##
    ## randomly generate pictures
    ## -------------------------- ##
    i = 0
    # We will just save 5 images; this can be changed,
    # but note the visualisation in 3. currently uses 5.
    Nplot = 5
    for batch in datagen.flow(x, batch_size=1,
                              save_to_dir=dir_augmented_data,
                              save_format='tif',
                              seed=42):
        i += 1
        if i > Nplot - 1:
            break
# Generators
def buildDoubleGenerator(image_datagen, mask_datagen, image_folder_path, mask_folder_path, subset, batch_size, target_size):
    '''
    Can generate image and mask at the same time, using the same seed for
    image_datagen and mask_datagen to ensure the transformation for image and mask is the same.

    datagen: ImageDataGenerator
    subset: can take either 'training' or 'validation'
    '''
    seed = 1
    image_generator = image_datagen.flow_from_directory(
        os.path.dirname(image_folder_path),
        classes = [os.path.basename(image_folder_path)],
        class_mode = None,
        color_mode = "grayscale",
        target_size = target_size,
        batch_size = batch_size,
        subset = subset,
        interpolation = "bicubic",
        seed = seed)

    mask_generator = mask_datagen.flow_from_directory(
        os.path.dirname(mask_folder_path),
        classes = [os.path.basename(mask_folder_path)],
        class_mode = None,
        color_mode = "grayscale",
        target_size = target_size,
        batch_size = batch_size,
        subset = subset,
        interpolation = "nearest",
        seed = seed)

    this_generator = zip(image_generator, mask_generator)
    for (img, mask) in this_generator:
        # img, mask = adjustData(img, mask)
        yield (img, mask)

def prepareGenerators(image_folder_path, mask_folder_path, datagen_parameters, batch_size = 4, target_size = (512, 512)):
    image_datagen = ImageDataGenerator(**datagen_parameters, preprocessing_function = normalizePercentile)
    mask_datagen = ImageDataGenerator(**datagen_parameters, preprocessing_function = normalizeMinMax)

    train_datagen = buildDoubleGenerator(image_datagen, mask_datagen, image_folder_path, mask_folder_path, 'training', batch_size, target_size)
    validation_datagen = buildDoubleGenerator(image_datagen, mask_datagen, image_folder_path, mask_folder_path, 'validation', batch_size, target_size)

    return (train_datagen, validation_datagen)
# Normalization functions from Martin Weigert
def normalizePercentile(x, pmin=1, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):
    """Percentile-based image normalization. This function is adapted from Martin Weigert."""
    mi = np.percentile(x, pmin, axis=axis, keepdims=True)
    ma = np.percentile(x, pmax, axis=axis, keepdims=True)
    return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)

def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):
    """This function is adapted from Martin Weigert"""
    if dtype is not None:
        x = x.astype(dtype, copy=False)
        mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype, copy=False)
        ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype, copy=False)
        eps = dtype(eps)

    try:
        import numexpr
        x = numexpr.evaluate("(x - mi) / ( ma - mi + eps )")
    except ImportError:
        x = (x - mi) / (ma - mi + eps)

    if clip:
        x = np.clip(x, 0, 1)

    return x

# Simple normalization to min/max for the mask
def normalizeMinMax(x, dtype=np.float32):
    x = x.astype(dtype, copy=False)
    x = (x - np.amin(x)) / (np.amax(x) - np.amin(x))
    return x
# This code outlines the architecture of U-Net. The choice of pooling steps decides the depth of the network.
def unet(pretrained_weights = None, input_size = (256,256,1), pooling_steps = 4, learning_rate = 1e-4, verbose=True, class_weights=np.ones(2)):
    inputs = Input(input_size)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
    # Downsampling steps
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)

    if pooling_steps > 1:
        pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
        conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
        conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)

        if pooling_steps > 2:
            pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
            conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
            conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
            drop4 = Dropout(0.5)(conv4)

            if pooling_steps > 3:
                pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
                conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
                conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
                drop5 = Dropout(0.5)(conv5)

                # Upsampling steps
                up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
                merge6 = concatenate([drop4, up6], axis = 3)
                conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
                conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)

    if pooling_steps > 2:
        up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop4))
        if pooling_steps > 3:
            up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
        merge7 = concatenate([conv3, up7], axis = 3)
        conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
        conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)

    if pooling_steps > 1:
        up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv3))
        if pooling_steps > 2:
            up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
        merge8 = concatenate([conv2, up8], axis = 3)
        conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
        conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)

    if pooling_steps == 1:
        up9 = Conv2D(64, 2, padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv2))
    else:
        up9 = Conv2D(64, 2, padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))  # activation = 'relu'

    merge9 = concatenate([conv1, up9], axis = 3)
    conv9 = Conv2D(64, 3, padding = 'same', kernel_initializer = 'he_normal')(merge9)  # activation = 'relu'
    conv9 = Conv2D(64, 3, padding = 'same', kernel_initializer = 'he_normal')(conv9)  # activation = 'relu'
    conv9 = Conv2D(2, 3, padding = 'same', kernel_initializer = 'he_normal')(conv9)  # activation = 'relu'
    conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9)

    model = Model(inputs = inputs, outputs = conv10)

    # model.compile(optimizer = Adam(lr = learning_rate), loss = 'binary_crossentropy', metrics = ['acc'])
    model.compile(optimizer = Adam(lr = learning_rate), loss = weighted_binary_crossentropy(class_weights))

    if verbose:
        model.summary()

    if pretrained_weights:
        model.load_weights(pretrained_weights)

    return model
def predict_as_tiles(Image_path, model):
    # Read the data in and normalize
    Image_raw = io.imread(Image_path, as_gray = True)
    Image_raw = normalizePercentile(Image_raw)

    # Get the patch size from the input layer of the model
    patch_size = model.layers[0].output_shape[1:3]

    # Pad the image with zeros if any of its dimensions is smaller than the patch size
    if Image_raw.shape[0] < patch_size[0] or Image_raw.shape[1] < patch_size[1]:
        Image = np.zeros((max(Image_raw.shape[0], patch_size[0]), max(Image_raw.shape[1], patch_size[1])))
        Image[0:Image_raw.shape[0], 0:Image_raw.shape[1]] = Image_raw
    else:
        Image = Image_raw

    # Calculate the number of patches in each dimension
    n_patch_in_width = ceil(Image.shape[0]/patch_size[0])
    n_patch_in_height = ceil(Image.shape[1]/patch_size[1])

    prediction = np.zeros(Image.shape)

    for x in range(n_patch_in_width):
        for y in range(n_patch_in_height):
            xi = patch_size[0]*x
            yi = patch_size[1]*y

            # If the patch exceeds the edge of the image shift it back
            if xi+patch_size[0] >= Image.shape[0]:
                xi = Image.shape[0]-patch_size[0]
            if yi+patch_size[1] >= Image.shape[1]:
                yi = Image.shape[1]-patch_size[1]

            # Extract and reshape the patch
            patch = Image[xi:xi+patch_size[0], yi:yi+patch_size[1]]
            patch = np.reshape(patch, patch.shape+(1,))
            patch = np.reshape(patch, (1,)+patch.shape)

            # Get the prediction from the patch and paste it in the prediction in the right place
            predicted_patch = model.predict(patch, batch_size = 1)
            prediction[xi:xi+patch_size[0], yi:yi+patch_size[1]] = np.squeeze(predicted_patch)

    return prediction[0:Image_raw.shape[0], 0:Image_raw.shape[1]]

def saveResult(save_path, nparray, source_dir_list, prefix='', threshold=None):
    for (filename, image) in zip(source_dir_list, nparray):
        io.imsave(os.path.join(save_path, prefix+os.path.splitext(filename)[0]+'.tif'), img_as_ubyte(image))  # saving as unsigned 8-bit image

        # For masks, threshold the images and return 8 bit image
        if threshold is not None:
            mask = convert2Mask(image, threshold)
            io.imsave(os.path.join(save_path, prefix+'mask_'+os.path.splitext(filename)[0]+'.tif'), mask)

def convert2Mask(image, threshold):
    mask = img_as_ubyte(image, force_copy=True)
    mask[mask > threshold] = 255
    mask[mask <= threshold] = 0
    return mask
def getIoUvsThreshold(prediction_filepath, ground_truth_filepath):
    prediction = io.imread(prediction_filepath)
    ground_truth_image = img_as_ubyte(io.imread(ground_truth_filepath, as_gray=True), force_copy=True)

    threshold_list = []
    IoU_scores_list = []

    for threshold in range(0, 256):
        # Convert to 8-bit for calculating the IoU
        mask = img_as_ubyte(prediction, force_copy=True)
        mask[mask > threshold] = 255
        mask[mask <= threshold] = 0

        # Intersection over Union metric
        intersection = np.logical_and(ground_truth_image, np.squeeze(mask))
        union = np.logical_or(ground_truth_image, np.squeeze(mask))
        iou_score = np.sum(intersection) / np.sum(union)

        threshold_list.append(threshold)
        IoU_scores_list.append(iou_score)

    return (threshold_list, IoU_scores_list)
# -------------- Other definitions -----------
W = '\033[0m'   # white (normal)
R = '\033[31m'  # red
prediction_prefix = 'Predicted_'

print('-------------------')
print('U-Net and dependencies installed.')

# Colors for the warning messages
class bcolors:
    WARNING = '\033[31m'

# Check if this is the latest version of the notebook
All_notebook_versions = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_Notebook_versions.csv", dtype=str)
print('Notebook version: '+Notebook_version)
Latest_Notebook_version = All_notebook_versions[All_notebook_versions["Notebook"] == Network]['Version'].iloc[0]
print('Latest notebook version: '+Latest_Notebook_version)
if Notebook_version == Latest_Notebook_version:
    print("This notebook is up-to-date.")
else:
    print(bcolors.WARNING + "A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
def pdf_export(trained = False, augmentation = False, pretrained_model = False):
    class MyFPDF(FPDF, HTMLMixin):
        pass

    pdf = MyFPDF()
    pdf.add_page()
    pdf.set_right_margin(-1)
    pdf.set_font("Arial", size = 11, style='B')

    day = datetime.now()
    datetime_str = str(day)[0:10]

    Header = 'Training report for '+Network+' model ('+model_name+')\nDate: '+datetime_str
    pdf.multi_cell(180, 5, txt = Header, align = 'L')

    # add another cell
    if trained:
        training_time = "Training time: "+str(hour)+" hour(s) "+str(mins)+" min(s) "+str(round(sec))+" sec(s)"
        pdf.cell(190, 5, txt = training_time, ln = 1, align='L')
    pdf.ln(1)

    Header_2 = 'Information for your materials and method:'
    pdf.cell(190, 5, txt=Header_2, ln=1, align='L')

    all_packages = ''
    for requirement in freeze(local_only=True):
        all_packages = all_packages+requirement+', '

    # Main packages
    main_packages = ''
    version_numbers = []
    for name in ['tensorflow', 'numpy', 'Keras']:
        find_name = all_packages.find(name)
        main_packages = main_packages+all_packages[find_name:all_packages.find(',', find_name)]+', '
        # Version numbers only here:
        version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',', find_name)])

    cuda_version = subprocess.run('nvcc --version', stdout=subprocess.PIPE, shell=True)
    cuda_version = cuda_version.stdout.decode('utf-8')
    cuda_version = cuda_version[cuda_version.find(', V')+3:-1]
    gpu_name = subprocess.run('nvidia-smi', stdout=subprocess.PIPE, shell=True)
    gpu_name = gpu_name.stdout.decode('utf-8')
    gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]

    loss = str(model.loss)[str(model.loss).find('function')+len('function'):str(model.loss).find('.<')]
    shape = io.imread(Training_source+'/'+os.listdir(Training_source)[1]).shape
    dataset_size = len(os.listdir(Training_source))

    text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(number_of_training_dataset)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_width)+','+str(patch_height)+')) with a batch size of '+str(batch_size)+' and a'+loss+' loss function,'+' using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+' GPU.'

    if pretrained_model:
        text = 'The '+Network+' model was trained for '+str(number_of_epochs)+' epochs on '+str(number_of_training_dataset)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_width)+','+str(patch_height)+')) with a batch size of '+str(batch_size)+' and a'+loss+' loss function,'+' using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). The model was re-trained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+' GPU.'
    pdf.set_font('')
    pdf.set_font_size(10.)
    pdf.multi_cell(180, 5, txt = text, align='L')
    pdf.set_font('')
    pdf.set_font('Arial', size = 10, style = 'B')
    pdf.ln(1)
    pdf.cell(28, 5, txt='Augmentation: ', ln=1)
    pdf.set_font('')
    if augmentation:
        aug_text = 'The dataset was augmented by'
        if rotation_range != 0:
            aug_text = aug_text+'\n- rotation'
        if horizontal_flip == True or vertical_flip == True:
            aug_text = aug_text+'\n- flipping'
        if zoom_range != 0:
            aug_text = aug_text+'\n- random zoom magnification'
        if horizontal_shift != 0 or vertical_shift != 0:
            aug_text = aug_text+'\n- shifting'
        if shear_range != 0:
            aug_text = aug_text+'\n- image shearing'
    else:
        aug_text = 'No augmentation was used for training.'
    pdf.multi_cell(190, 5, txt=aug_text, align='L')

    pdf.set_font('Arial', size = 11, style = 'B')
    pdf.ln(1)
    pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)
    pdf.set_font('')
    pdf.set_font_size(10.)
    if Use_Default_Advanced_Parameters:
        pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')
    pdf.cell(200, 5, txt='The following parameters were used for training:')
    pdf.ln(1)

    html = """
    <table width=40% style="margin-left:0px;">
      <tr>
        <th width = 50% align="left">Parameter</th>
        <th width = 50% align="left">Value</th>
      </tr>
      <tr>
        <td width = 50%>number_of_epochs</td>
        <td width = 50%>{0}</td>
      </tr>
      <tr>
        <td width = 50%>patch_size</td>
        <td width = 50%>{1}</td>
      </tr>
      <tr>
        <td width = 50%>batch_size</td>
        <td width = 50%>{2}</td>
      </tr>
      <tr>
        <td width = 50%>number_of_steps</td>
        <td width = 50%>{3}</td>
      </tr>
      <tr>
        <td width = 50%>percentage_validation</td>
        <td width = 50%>{4}</td>
      </tr>
      <tr>
        <td width = 50%>initial_learning_rate</td>
        <td width = 50%>{5}</td>
      </tr>
      <tr>
        <td width = 50%>pooling_steps</td>
        <td width = 50%>{6}</td>
      </tr>
      <tr>
        <td width = 50%>min_fraction</td>
        <td width = 50%>{7}</td>
      </tr>
    </table>
    """.format(number_of_epochs, str(patch_width)+'x'+str(patch_height), batch_size, number_of_steps, percentage_validation, initial_learning_rate, pooling_steps, min_fraction)
    pdf.write_html(html)
#pdf.multi_cell(190, 5, txt = text_2, align='L')
pdf.set_font("Arial", size = 11, style='B')
pdf.ln(1)
pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(29, 5, txt= 'Training_source:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_source, align = 'L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(28, 5, txt= 'Training_target:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_target, align = 'L')
#pdf.cell(190, 5, txt=aug_text, align='L', ln=1)
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(21, 5, txt= 'Model Path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')
pdf.ln(1)
pdf.cell(60, 5, txt = 'Example Training pair', ln=1)
pdf.ln(1)
exp_size = io.imread('/content/TrainingDataExample_Unet2D.png').shape
pdf.image('/content/TrainingDataExample_Unet2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- Unet: Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
# if Use_Data_augmentation:
# ref_3 = '- Augmentor: Bloice, Marcus D., Christof Stocker, and Andreas Holzinger. "Augmentor: an image augmentation library for machine learning." arXiv preprint arXiv:1708.04680 (2017).'
# pdf.multi_cell(190, 5, txt = ref_3, align='L')
pdf.ln(3)
reminder = 'Important:\nRemember to perform the quality control step on all newly trained models\nPlease consider depositing your training dataset on Zenodo'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(model_path+'/'+model_name+'/'+model_name+'_training_report.pdf')
print('------------------------------')
print('PDF report exported in '+model_path+'/'+model_name+'/')
def qc_pdf_export():
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'Unet 2D'
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Quality Control report for '+Network+' model ('+QC_model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(2)
pdf.cell(190, 5, txt = 'Loss curves', ln=1, align='L')
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'/Quality Control/QC_example_data.png').shape
if os.path.exists(full_QC_model_path+'/Quality Control/lossCurvePlots.png'):
pdf.image(full_QC_model_path+'/Quality Control/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/12), h = round(exp_size[0]/3))
else:
pdf.set_font('')
pdf.set_font('Arial', size=10)
pdf.multi_cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.',align='L')
pdf.ln(2)
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(2)
pdf.cell(190, 5, txt = 'Threshold Optimisation', ln=1, align='L')
#pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'/Quality Control/'+QC_model_name+'_IoUvsThresholdPlot.png').shape
pdf.image(full_QC_model_path+'/Quality Control/'+QC_model_name+'_IoUvsThresholdPlot.png', x = 11, y = None, w = round(exp_size[1]/6), h = round(exp_size[0]/7))
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(3)
pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'/Quality Control/QC_example_data.png').shape
pdf.image(full_QC_model_path+'/Quality Control/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
pdf.ln(1)
html = """
<body>
<font size="10" face="Courier New" >
<table width=60% style="margin-left:0px;">"""
with open(full_QC_model_path+'/Quality Control/QC_metrics_'+QC_model_name+'.csv', 'r') as csvfile:
metrics = csv.reader(csvfile)
header = next(metrics)
image = header[0]
IoU = header[1]
IoU_OptThresh = header[2]
header = """
<tr>
<th width = 33% align="center">{0}</th>
<th width = 33% align="center">{1}</th>
<th width = 33% align="center">{2}</th>
</tr>""".format(image,IoU,IoU_OptThresh)
html = html+header
i=0
for row in metrics:
i+=1
image = row[0]
IoU = row[1]
IoU_OptThresh = row[2]
cells = """
<tr>
<td width = 33% align="center">{0}</td>
<td width = 33% align="center">{1}</td>
<td width = 33% align="center">{2}</td>
</tr>""".format(image,str(round(float(IoU),3)),str(round(float(IoU_OptThresh),3)))
html = html+cells
html = html+"""</body></table>"""
pdf.write_html(html)
pdf.ln(1)
pdf.set_font('')
pdf.set_font_size(10.)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- Unet: Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
pdf.ln(3)
reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(full_QC_model_path+'/Quality Control/'+QC_model_name+'_QC_report.pdf')
print('------------------------------')
print('QC PDF report exported as '+full_QC_model_path+'/Quality Control/'+QC_model_name+'_QC_report.pdf')
# Build requirements file for local run
after = [str(m) for m in sys.modules]
build_requirements_file(before, after)
```
# **2. Complete the Colab session**
---
## **2.1. Check for GPU access**
---
By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:
<font size = 4>Go to **Runtime -> Change the Runtime type**
<font size = 4>**Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)*
<font size = 4>**Accelerator: GPU** *(Graphics processing unit)*
```
#@markdown ##Run this cell to check if you have GPU access
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime?')
print('If the runtime setting is correct then Google did not allocate a GPU for your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
!nvidia-smi
# from tensorflow.python.client import device_lib
# device_lib.list_local_devices()
# print the tensorflow version
print('Tensorflow version is ' + str(tf.__version__))
```
## **2.2. Mount your Google Drive**
---
<font size = 4> To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook.
<font size = 4> Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the authorization code, paste it into the cell and press Enter. This will give Colab access to the data on the drive.
<font size = 4> Once this is done, your data are available in the **Files** tab on the top left of the notebook.
```
#@markdown ##Play the cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in to your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on the "Files" tab. Refresh it. Your Google Drive folder should now be available there as "drive".
# mount user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
```
**<font size = 4> If you cannot see your files, reactivate your session by connecting to your hosted runtime.**
<img width="40%" alt="Connect to a hosted runtime." src="https://github.com/HenriquesLab/ZeroCostDL4Mic/raw/master/Wiki_files/connect_to_hosted.png"><figcaption> Connect to a hosted runtime. </figcaption>
# **3. Select your parameters and paths**
---
## **3.1. Setting main training parameters**
---
<font size = 4>
<font size = 5> **Paths for training data and models**
<font size = 4>**`Training_source`, `Training_target`:** These are the folders containing your source (e.g. EM images) and target files (segmentation masks). Enter the path to the source and target images for training. **These should be located in the same parent folder.**
<font size = 4>**`model_name`:** Use only letters, numbers and underscores in the name (my_model style, not my-model). If you want to use a previously trained model, enter the name of the pretrained model (which should be contained in the trained_model folder after training).
<font size = 4>**`model_path`**: Enter the path of the folder where you want to save your model.
<font size = 4>**`visual_validation_after_training`**: If you select this option, a random image pair will be set aside from your training set and will be used to display a predicted image of the trained network next to the input and the ground-truth. This can aid in visually assessing the performance of your network after training. **Note: Your training set size will decrease by 1 if you select this option.**
<font size = 5> **Select training parameters**
<font size = 4>**`number_of_epochs`**: Choose more epochs for larger training sets. Observing how much the loss reduces between epochs during training may help determine the optimal value. **Default: 200**
<font size = 5>**Advanced parameters - experienced users only**
<font size = 4>**`batch_size`**: This parameter describes the number of images that are loaded into the network per step. Smaller batch sizes may improve training performance slightly but may increase training time. If the notebook crashes while loading the dataset, the batch size may be too large; decrease it in this case. **Default: 4**
<font size = 4>**`number_of_steps`**: This number should be equivalent to the number of samples in the training set divided by the batch size, to ensure the training iterates through the entire training set. The default value is calculated to ensure this. This behaviour can also be obtained by setting it to 0. Other values can be used for testing.
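As an illustration of how the default is derived (a minimal sketch with made-up example numbers; the notebook computes this automatically from your dataset):

```python
from math import ceil

def default_number_of_steps(n_patches, batch_size, percentage_validation):
    # Steps per epoch = (training patches remaining after the validation
    # split) divided by the batch size, rounded up
    n_training = (100 - percentage_validation) / 100 * n_patches
    return ceil(n_training / batch_size)

# e.g. 450 valid patches, batch size 4, 10% validation -> ceil(405/4) = 102
print(default_number_of_steps(450, 4, 10))
```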
<font size = 4> **`pooling_steps`**: Choosing a different number of pooling layers can affect the performance of the network. Each additional pooling step also adds two convolutional layers. The network can learn more complex information but is also more likely to overfit. Achieving the best performance may require testing different values here. **Default: 2**
<font size = 4>**`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 10**
<font size = 4>**`initial_learning_rate`:** Input the initial value to be used as learning rate. **Default value: 0.0003**
<font size = 4>**`patch_width` and `patch_height`:** The notebook crops the data into patches of fixed size prior to training. The dimensions of the patches can be defined here. When `Use_Default_Advanced_Parameters` is selected, the largest 2^n x 2^n patch that fits in the smallest dataset image is chosen. Patches larger than 512x512 should **NOT** be selected, for network stability.
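For illustration, the largest power-of-two patch fitting the smallest image could be found along these lines (a minimal sketch; the notebook's own `estimatePatchSize` inspects the dataset directly, and `smallest_dim` here is a hypothetical input):

```python
def largest_power_of_two_patch(smallest_dim, maximum=512):
    # Double the patch size until it would exceed the smallest image
    # dimension or the 512-pixel stability cap
    size = 1
    while size * 2 <= min(smallest_dim, maximum):
        size *= 2
    return size

print(largest_power_of_two_patch(600))  # capped at 512
print(largest_power_of_two_patch(300))  # largest fitting power of two: 256
```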
<font size = 4>**`min_fraction`:** Minimum fraction of foreground pixels required for a selected patch to be considered valid. It should be between 0 and 1. **Default value: 0.02** (2%)
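A patch-filtering check along these lines (a sketch, not the notebook's actual `create_patches` implementation) shows how `min_fraction` is used:

```python
import numpy as np

def patch_is_valid(mask_patch, min_fraction=0.02):
    # Keep a patch only if at least min_fraction of its pixels are foreground
    foreground = np.count_nonzero(mask_patch)
    return foreground / mask_patch.size >= min_fraction

mask = np.zeros((512, 512), dtype=np.uint8)
mask[:64, :64] = 1                 # 4096 / 262144, about 1.6% foreground
print(patch_is_valid(mask))        # below the 2% default -> False
```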
```
# ------------- Initial user input ------------
#@markdown ###Path to training images:
Training_source = '' #@param {type:"string"}
Training_target = '' #@param {type:"string"}
model_name = '' #@param {type:"string"}
model_path = '' #@param {type:"string"}
#@markdown ###Training parameters:
#@markdown Number of epochs
number_of_epochs = 200#@param {type:"number"}
#@markdown ###Advanced parameters:
Use_Default_Advanced_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please input:
batch_size = 4#@param {type:"integer"}
number_of_steps = 0#@param {type:"number"}
pooling_steps = 2 #@param [1,2,3,4]{type:"raw"}
percentage_validation = 10#@param{type:"number"}
initial_learning_rate = 0.0003 #@param {type:"number"}
patch_width = 512#@param{type:"number"}
patch_height = 512#@param{type:"number"}
min_fraction = 0.02#@param{type:"number"}
# ------------- Initialising folder, variables and failsafes ------------
# Create the folders where to save the model and the QC
full_model_path = os.path.join(model_path, model_name)
if os.path.exists(full_model_path):
print(R+'!! WARNING: Folder already exists and will be overwritten !!'+W)
if (Use_Default_Advanced_Parameters):
print("Default advanced parameters enabled")
batch_size = 4
pooling_steps = 2
percentage_validation = 10
initial_learning_rate = 0.0003
patch_width, patch_height = estimatePatchSize(Training_source)
min_fraction = 0.02
#The create_patches function will create the two folders below
# Patch_source = '/content/img_patches'
# Patch_target = '/content/mask_patches'
print('Training on patches of size (x,y): ('+str(patch_width)+','+str(patch_height)+')')
#Create patches
print('Creating patches...')
Patch_source, Patch_target = create_patches(Training_source, Training_target, patch_width, patch_height, min_fraction)
number_of_training_dataset = len(os.listdir(Patch_source))
print('Total number of valid patches: '+str(number_of_training_dataset))
if Use_Default_Advanced_Parameters or number_of_steps == 0:
number_of_steps = ceil((100-percentage_validation)/100*number_of_training_dataset/batch_size)
print('Number of steps: '+str(number_of_steps))
# Calculate the number of steps to use for validation
validation_steps = max(1, ceil(percentage_validation/100*number_of_training_dataset/batch_size))
# Here we disable the pre-trained model by default (in case the next cell is not run)
Use_pretrained_model = False
# Here we disable data augmentation by default (in case the cell is not run)
Use_Data_augmentation = False
# Build the default dict for the ImageDataGenerator
data_gen_args = dict(width_shift_range = 0.,
height_shift_range = 0.,
rotation_range = 0., #90
zoom_range = 0.,
shear_range = 0.,
horizontal_flip = False,
vertical_flip = False,
validation_split = percentage_validation/100,
fill_mode = 'reflect')
# ------------- Display ------------
#if not os.path.exists('/content/img_patches/'):
random_choice = random.choice(os.listdir(Patch_source))
x = io.imread(os.path.join(Patch_source, random_choice))
#os.chdir(Training_target)
y = io.imread(os.path.join(Patch_target, random_choice), as_gray=True)
f=plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.imshow(x, interpolation='nearest',cmap='gray')
plt.title('Training image patch')
plt.axis('off');
plt.subplot(1,2,2)
plt.imshow(y, interpolation='nearest',cmap='gray')
plt.title('Training mask patch')
plt.axis('off');
plt.savefig('/content/TrainingDataExample_Unet2D.png',bbox_inches='tight',pad_inches=0)
```
##**3.2. Data augmentation**
---
<font size = 4> Data augmentation can improve training by artificially increasing the variability of the dataset. This can be useful if the available dataset is small, since a network could otherwise quickly memorise every example in the dataset (overfitting). Augmentation is not necessary for training, and if the dataset is large the values can be set to 0.
<font size = 4> The augmentation options below are to be used as follows:
* <font size = 4> **shift**: a translation of the image by a fraction of the image size (width or height), **default: 10%**
* **zoom_range**: Increasing or decreasing the field of view. E.g. 10% will result in a zoom range of (0.9 to 1.1), with pixels added or interpolated, depending on the transformation, **default: 10%**
* **shear_range**: Shear angle in counter-clockwise direction, **default: 10%**
* **flip**: creating a mirror image along specified axis (horizontal or vertical), **default: True**
* **rotation_range**: range of allowed rotation angles in degrees (from 0 to *value*), **default: 180**
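The sliders above are percentages, which are converted into the fractional arguments that Keras' `ImageDataGenerator` expects (rotation stays in degrees). A minimal sketch of that conversion:

```python
def to_datagen_args(horizontal_shift, vertical_shift, zoom_range, shear_range,
                    horizontal_flip, vertical_flip, rotation_range):
    # Percent sliders become fractions of the image size; flips stay boolean
    return dict(width_shift_range=horizontal_shift / 100.,
                height_shift_range=vertical_shift / 100.,
                zoom_range=zoom_range / 100.,
                shear_range=shear_range / 100.,
                horizontal_flip=horizontal_flip,
                vertical_flip=vertical_flip,
                rotation_range=rotation_range)

args = to_datagen_args(10, 10, 10, 10, True, True, 180)
print(args['width_shift_range'], args['rotation_range'])  # 0.1 180
```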
```
#@markdown ##**Augmentation options**
Use_Data_augmentation = True #@param {type:"boolean"}
Use_Default_Augmentation_Parameters = True #@param {type:"boolean"}
if Use_Data_augmentation:
if Use_Default_Augmentation_Parameters:
horizontal_shift = 10
vertical_shift = 20
zoom_range = 10
shear_range = 10
horizontal_flip = True
vertical_flip = True
rotation_range = 180
#@markdown ###If you are not using the default settings, please provide the values below:
#@markdown ###**Image shift, zoom, shear and flip (%)**
else:
horizontal_shift = 10 #@param {type:"slider", min:0, max:100, step:1}
vertical_shift = 10 #@param {type:"slider", min:0, max:100, step:1}
zoom_range = 10 #@param {type:"slider", min:0, max:100, step:1}
shear_range = 10 #@param {type:"slider", min:0, max:100, step:1}
horizontal_flip = True #@param {type:"boolean"}
vertical_flip = True #@param {type:"boolean"}
#@markdown ###**Rotate image within angle range (degrees):**
rotation_range = 180 #@param {type:"slider", min:0, max:180, step:1}
# The default values for each parameter are given in the branch above.
else:
horizontal_shift = 0
vertical_shift = 0
zoom_range = 0
shear_range = 0
horizontal_flip = False
vertical_flip = False
rotation_range = 0
# Build the dict for the ImageDataGenerator
data_gen_args = dict(width_shift_range = horizontal_shift/100.,
height_shift_range = vertical_shift/100.,
rotation_range = rotation_range, #90
zoom_range = zoom_range/100.,
shear_range = shear_range/100.,
horizontal_flip = horizontal_flip,
vertical_flip = vertical_flip,
validation_split = percentage_validation/100,
fill_mode = 'reflect')
# ------------- Display ------------
dir_augmented_data_imgs="/content/augment_img"
dir_augmented_data_masks="/content/augment_mask"
random_choice = random.choice(os.listdir(Patch_source))
orig_img = load_img(os.path.join(Patch_source,random_choice))
orig_mask = load_img(os.path.join(Patch_target,random_choice))
augment_view = ImageDataGenerator(**data_gen_args)
if Use_Data_augmentation:
print("Parameters enabled")
print("Here is what a subset of your augmentations looks like:")
save_augment(augment_view, orig_img, dir_augmented_data=dir_augmented_data_imgs)
save_augment(augment_view, orig_mask, dir_augmented_data=dir_augmented_data_masks)
fig = plt.figure(figsize=(15, 7))
fig.subplots_adjust(hspace=0.0,wspace=0.1,left=0,right=1.1,bottom=0, top=0.8)
ax = fig.add_subplot(2, 6, 1,xticks=[],yticks=[])
new_img=img_as_ubyte(normalizeMinMax(img_to_array(orig_img)))
ax.imshow(new_img)
ax.set_title('Original Image')
i = 2
for imgnm in os.listdir(dir_augmented_data_imgs):
ax = fig.add_subplot(2, 6, i,xticks=[],yticks=[])
img = load_img(dir_augmented_data_imgs + "/" + imgnm)
ax.imshow(img)
i += 1
ax = fig.add_subplot(2, 6, 7,xticks=[],yticks=[])
new_mask=img_as_ubyte(normalizeMinMax(img_to_array(orig_mask)))
ax.imshow(new_mask)
ax.set_title('Original Mask')
j=2
for imgnm in os.listdir(dir_augmented_data_masks):
ax = fig.add_subplot(2, 6, j+6,xticks=[],yticks=[])
mask = load_img(dir_augmented_data_masks + "/" + imgnm)
ax.imshow(mask)
j += 1
plt.show()
else:
print("No augmentation will be used")
```
## **3.3. Using weights from a pre-trained model as initial weights**
---
<font size = 4> Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a U-Net model**.
<font size = 4> This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**.
<font size = 4> In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used.
```
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = True #@param {type:"boolean"}
pretrained_model_choice = "Model_from_file" #@param ["Model_from_file"]
Weights_choice = "last" #@param ["last", "best"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the chosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".hdf5")
# --------------------- Download a model provided in the XXX ------------------------
if pretrained_model_choice == "Model_name":
pretrained_model_name = "Model_name"
pretrained_model_path = "/content/"+pretrained_model_name
print("Downloading the UNET_Model_from_")
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".hdf5")
# --------------------- Add additional pre-trained models here ------------------------
# --------------------- Check the model exist ------------------------
# If the chosen model path does not contain a pretrained model, Use_pretrained_model is disabled,
if not os.path.exists(h5_file_path):
print(R+'WARNING: pretrained model does not exist')
Use_pretrained_model = False
# If the model path contains a pretrained model, we load the learning rate,
if os.path.exists(h5_file_path):
#Here we check if the learning rate can be loaded from the quality control folder
if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = pd.read_csv(csvfile, sep=',')
#print(csvRead)
if "learning rate" in csvRead.columns: #Here we check that the learning rate column exists (compatibility with models trained in ZeroCostDL4Mic below v1.4)
print("pretrained network learning rate found")
#find the last learning rate
lastLearningRate = csvRead["learning rate"].iloc[-1]
#Find the learning rate corresponding to the lowest validation loss
min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]
#print(min_val_loss)
bestLearningRate = min_val_loss['learning rate'].iloc[-1]
if Weights_choice == "last":
print('Last learning rate: '+str(lastLearningRate))
if Weights_choice == "best":
print('Learning rate of best validation loss: '+str(bestLearningRate))
if not "learning rate" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead' + W)
#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used
if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead'+ W)
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
# Display info about the pretrained model to be loaded (or not)
if Use_pretrained_model:
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
else:
print(R+'No pretrained network will be used.')
```
# **4. Train the network**
---
####**Troubleshooting:** If you receive a time-out or resource-exhausted error, try reducing the batch size of your training set. This reduces the amount of data loaded into the model at any one time.
## **4.1. Prepare the training data and model for training**
---
<font size = 4>Here, we use the information from 3. to build the model and convert the training data into a suitable format for training.
```
#@markdown ##Play this cell to prepare the model for training
# ------------------ Set the generators, model and logger ------------------
# This will take the image size and set that as a patch size (arguable...)
# Read image size (without actually reading the data)
(train_datagen, validation_datagen) = prepareGenerators(Patch_source, Patch_target, data_gen_args, batch_size, target_size = (patch_width, patch_height))
# This modelcheckpoint will only save the best model from the validation loss point of view
model_checkpoint = ModelCheckpoint(os.path.join(full_model_path, 'weights_best.hdf5'), monitor='val_loss',verbose=1, save_best_only=True)
print('Getting class weights...')
class_weights = getClassWeights(Training_target)
# --------------------- Using pretrained model ------------------------
#Here we ensure that the learning rate is set correctly when using pre-trained models
if Use_pretrained_model:
if Weights_choice == "last":
initial_learning_rate = lastLearningRate
if Weights_choice == "best":
initial_learning_rate = bestLearningRate
else:
h5_file_path = None
# --------------------- ---------------------- ------------------------
# --------------------- Reduce learning rate on plateau ------------------------
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, verbose=1, mode='auto',
patience=10, min_lr=0)
# --------------------- ---------------------- ------------------------
# Define the model
model = unet(pretrained_weights = h5_file_path,
input_size = (patch_width,patch_height,1),
pooling_steps = pooling_steps,
learning_rate = initial_learning_rate,
class_weights = class_weights)
config_model= model.optimizer.get_config()
print(config_model)
# ------------------ Failsafes ------------------
if os.path.exists(full_model_path):
print(R+'!! WARNING: Model folder already existed and has been removed !!'+W)
shutil.rmtree(full_model_path)
os.makedirs(full_model_path)
os.makedirs(os.path.join(full_model_path,'Quality Control'))
# ------------------ Display ------------------
print('---------------------------- Main training parameters ----------------------------')
print('Number of epochs: '+str(number_of_epochs))
print('Batch size: '+str(batch_size))
print('Number of training dataset: '+str(number_of_training_dataset))
print('Number of training steps: '+str(number_of_steps))
print('Number of validation steps: '+str(validation_steps))
print('---------------------------- ------------------------ ----------------------------')
pdf_export(augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)
```
## **4.2. Start Training**
---
<font size = 4>When playing the cell below you should see updates after each epoch (round). Network training can take some time.
<font size = 4>* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for data mining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or number of patches. Another way to circumvent this is to save the parameters of the model after training and start training again from that point.
<font size = 4>Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder from Google Drive as all data can be erased at the next training if using the same folder.
```
#@markdown ##Start training
start = time.time()
# history = model.fit_generator(train_datagen, steps_per_epoch = number_of_steps, epochs=epochs, callbacks=[model_checkpoint,csv_log], validation_data = validation_datagen, validation_steps = validation_steps, shuffle=True, verbose=1)
history = model.fit_generator(train_datagen, steps_per_epoch = number_of_steps, epochs = number_of_epochs, callbacks=[model_checkpoint, reduce_lr], validation_data = validation_datagen, validation_steps = validation_steps, shuffle=True, verbose=1)
# Save the last model
model.save(os.path.join(full_model_path, 'weights_last.hdf5'))
# convert the history.history dict to a pandas DataFrame:
lossData = pd.DataFrame(history.history)
# The training_evaluation.csv is saved (overwriting the file if needed).
lossDataCSVpath = os.path.join(full_model_path,'Quality Control/training_evaluation.csv')
with open(lossDataCSVpath, 'w') as f:
writer = csv.writer(f)
writer.writerow(['loss','val_loss', 'learning rate'])
for i in range(len(history.history['loss'])):
writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]])
# Displaying the time elapsed for training
print("------------------------------------------")
dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:", hour, "hour(s)", mins,"min(s)",round(sec),"sec(s)")
print("------------------------------------------")
#Create a pdf document with training summary
pdf_export(trained = True, augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)
```
# **5. Evaluate your model**
---
<font size = 4>This section allows the user to perform important quality checks on the validity and generalisability of the trained model.
<font size = 4>**We highly recommend performing quality control on all newly trained models.**
```
#@markdown ###Do you want to assess the model you just trained?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
QC_model_folder = "" #@param {type:"string"}
#Here we define the loaded model name and path
QC_model_name = os.path.basename(QC_model_folder)
QC_model_path = os.path.dirname(QC_model_folder)
if (Use_the_current_trained_model):
print("Using current trained network")
QC_model_name = model_name
QC_model_path = model_path
full_QC_model_path = os.path.join(QC_model_path, QC_model_name)
if os.path.exists(os.path.join(full_QC_model_path, 'weights_best.hdf5')):
print("The "+QC_model_name+" network will be evaluated")
else:
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
```
## **5.1. Inspection of the loss function**
---
<font size = 4>First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.*
<font size = 4>**Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.
<font size = 4>**Validation loss** describes the same error value, computed between the model's prediction on a validation image and its target.
<font size = 4>During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.
<font size = 4>Decreasing **Training loss** and **Validation loss** indicates that training is still in progress, and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side simply because of the y-axis scaling. The network has reached convergence once the curves flatten out; after this point no further training is required. If the **Validation loss** suddenly increases again while the **Training loss** simultaneously goes towards zero, the network is overfitting to the training data. In other words, the network is memorising the exact patterns of the training data and no longer generalises well to unseen data. In this case the training dataset should be enlarged.
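<font size = 4>The turning point described above can also be located programmatically from the loss curve. As a minimal, hypothetical sketch (not part of this notebook's pipeline; the `patience` parameter is an assumption), the epoch at which the validation loss stops improving can be found like this:

```python
import numpy as np

def overfitting_onset(val_loss, patience=3):
    """Return the last epoch at which val_loss improved, once it has
    failed to improve for `patience` consecutive epochs; None otherwise."""
    best = np.inf
    stale = 0
    for epoch, v in enumerate(val_loss):
        if v < best:
            best, stale = v, 0
        else:
            stale += 1
            if stale >= patience:
                return epoch - patience
    return None

# Synthetic curves: training keeps decreasing while validation turns around.
train = [1.0, 0.8, 0.6, 0.5, 0.4, 0.3, 0.25, 0.2, 0.15, 0.1]
val = [1.1, 0.9, 0.7, 0.6, 0.55, 0.5, 0.55, 0.6, 0.65, 0.7]
print(overfitting_onset(val))  # the minimum of the validation loss is at epoch 5
```

<font size = 4>Training past that epoch mainly improves memorisation of the training set, which is one reason the notebook evaluates `weights_best.hdf5` rather than only the final weights.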
```
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
epochNumber = []
lossDataFromCSV = []
vallossDataFromCSV = []
with open(os.path.join(full_QC_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = csv.reader(csvfile, delimiter=',')
next(csvRead)
for row in csvRead:
lossDataFromCSV.append(float(row[0]))
vallossDataFromCSV.append(float(row[1]))
epochNumber = range(len(lossDataFromCSV))
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.plot(epochNumber,lossDataFromCSV, label='Training loss')
plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (linear scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.subplot(2,1,2)
plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')
plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (log scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.savefig(os.path.join(full_QC_model_path, 'Quality Control', 'lossCurvePlots.png'),bbox_inches='tight',pad_inches=0)
plt.show()
```
## **5.2. Error mapping and quality metrics estimation**
---
<font size = 4>This section will calculate the Intersection over Union score for all the images provided in the Source_QC_folder and Target_QC_folder. The result for one of the images will also be displayed.
<font size = 4>The **Intersection over Union** metric is a method that can be used to quantify the percent overlap between the target mask and your prediction output. **Therefore, the closer to 1, the better the performance.** This metric can be used to assess the quality of your model to accurately predict nuclei.
<font size = 4>The Input, Ground Truth, Prediction and IoU maps are shown below for the last example in the QC set.
<font size = 4> The results for all QC examples can be found in the "*Quality Control*" folder which is located inside your "model_folder".
### **Thresholds for image masks**
<font size = 4> Since the output from U-Net is not a binary mask, the output images are converted to binary masks by thresholding. This section tests different thresholds (from 0 to 255) to find the one yielding the best IoU score when compared with the ground truth. The best threshold for each image and the average of these thresholds are displayed below. **These values can serve as a guideline when creating masks for unseen data in section 6.**
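<font size = 4>As an illustrative sketch of the procedure (plain NumPy, assuming 8-bit grayscale predictions; this is not the notebook's own `getIoUvsThreshold` implementation):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union of two boolean masks (0 if both are empty)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection / union) if union > 0 else 0.0

def best_iou_threshold(prediction, ground_truth, thresholds=range(256)):
    """Sweep thresholds over an 8-bit prediction; return (best_threshold, best_iou)."""
    gt = ground_truth > 0
    scores = [iou(prediction > t, gt) for t in thresholds]
    best = int(np.argmax(scores))
    return list(thresholds)[best], scores[best]

# Toy example: the prediction matches the ground truth once thresholded above the noise.
pred = np.array([[200, 180, 10], [190, 20, 5], [15, 0, 0]], dtype=np.uint8)
gt = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=np.uint8)
print(best_iou_threshold(pred, gt))  # a perfect IoU of 1.0 is reached at threshold 20
```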
```
# ------------- User input ------------
#@markdown ##Choose the folders that contain your Quality Control dataset
Source_QC_folder = "" #@param{type:"string"}
Target_QC_folder = "" #@param{type:"string"}
# ------------- Initialise folders ------------
# Create a quality control/Prediction Folder
prediction_QC_folder = os.path.join(full_QC_model_path, 'Quality Control', 'Prediction')
if os.path.exists(prediction_QC_folder):
shutil.rmtree(prediction_QC_folder)
os.makedirs(prediction_QC_folder)
# ------------- Prepare the model and run predictions ------------
# Load the model
unet = load_model(os.path.join(full_QC_model_path, 'weights_best.hdf5'), custom_objects={'_weighted_binary_crossentropy': weighted_binary_crossentropy(np.ones(2))})
Input_size = unet.layers[0].output_shape[1:3]
print('Model input size: '+str(Input_size[0])+'x'+str(Input_size[1]))
# Create a list of sources
source_dir_list = os.listdir(Source_QC_folder)
number_of_dataset = len(source_dir_list)
print('Number of datasets found in the folder: '+str(number_of_dataset))
predictions = []
for i in tqdm(range(number_of_dataset)):
predictions.append(predict_as_tiles(os.path.join(Source_QC_folder, source_dir_list[i]), unet))
# Save the results in the folder along with the masks according to the set threshold
saveResult(prediction_QC_folder, predictions, source_dir_list, prefix=prediction_prefix, threshold=None)
#-----------------------------Calculate Metrics----------------------------------------#
f = plt.figure(figsize=(5,5))
with open(os.path.join(full_QC_model_path,'Quality Control', 'QC_metrics_'+QC_model_name+'.csv'), "w", newline='') as file:
writer = csv.writer(file)
writer.writerow(["File name","IoU", "IoU-optimised threshold"])
# Initialise the lists
filename_list = []
best_threshold_list = []
best_IoU_score_list = []
for filename in os.listdir(Source_QC_folder):
if not os.path.isdir(os.path.join(Source_QC_folder, filename)):
print('Running QC on: '+filename)
test_input = io.imread(os.path.join(Source_QC_folder, filename), as_gray=True)
test_ground_truth_image = io.imread(os.path.join(Target_QC_folder, filename), as_gray=True)
(threshold_list, iou_scores_per_threshold) = getIoUvsThreshold(os.path.join(prediction_QC_folder, prediction_prefix+filename), os.path.join(Target_QC_folder, filename))
plt.plot(threshold_list,iou_scores_per_threshold, label=filename)
# Here we find which threshold yielded the highest IoU score for image n.
best_IoU_score = max(iou_scores_per_threshold)
best_threshold = threshold_list[iou_scores_per_threshold.index(best_IoU_score)]
# Write the results in the CSV file
writer.writerow([filename, str(best_IoU_score), str(best_threshold)])
# Here we append the best threshold and score to the lists
filename_list.append(filename)
best_IoU_score_list.append(best_IoU_score)
best_threshold_list.append(best_threshold)
# Display the IoU vs. threshold plot
plt.title('IoU vs. Threshold')
plt.ylabel('IoU')
plt.xlabel('Threshold value')
plt.legend()
plt.savefig(full_QC_model_path+'/Quality Control/'+QC_model_name+'_IoUvsThresholdPlot.png',bbox_inches='tight',pad_inches=0)
plt.show()
# Table with metrics as dataframe output
pdResults = pd.DataFrame(index = filename_list)
pdResults["IoU"] = best_IoU_score_list
pdResults["IoU-optimised threshold"] = best_threshold_list
average_best_threshold = sum(best_threshold_list)/len(best_threshold_list)
# ------------- For display ------------
print('--------------------------------------------------------------')
@interact
def show_QC_results(file=os.listdir(Source_QC_folder)):
plt.figure(figsize=(25,5))
#Input
plt.subplot(1,4,1)
plt.axis('off')
plt.imshow(plt.imread(os.path.join(Source_QC_folder, file)), aspect='equal', cmap='gray', interpolation='nearest')
plt.title('Input')
#Ground-truth
plt.subplot(1,4,2)
plt.axis('off')
test_ground_truth_image = io.imread(os.path.join(Target_QC_folder, file),as_gray=True)
plt.imshow(test_ground_truth_image, aspect='equal', cmap='Greens')
plt.title('Ground Truth')
#Prediction
plt.subplot(1,4,3)
plt.axis('off')
test_prediction = plt.imread(os.path.join(prediction_QC_folder, prediction_prefix+file))
test_prediction_mask = np.empty_like(test_prediction)
test_prediction_mask[test_prediction > average_best_threshold] = 255
test_prediction_mask[test_prediction <= average_best_threshold] = 0
plt.imshow(test_prediction_mask, aspect='equal', cmap='Purples')
plt.title('Prediction')
#Overlay
plt.subplot(1,4,4)
plt.axis('off')
plt.imshow(test_ground_truth_image, cmap='Greens')
plt.imshow(test_prediction_mask, alpha=0.5, cmap='Purples')
metrics_title = 'Overlay (IoU: ' + str(round(pdResults.loc[file]["IoU"],3)) + ' T: ' + str(round(pdResults.loc[file]["IoU-optimised threshold"])) + ')'
plt.title(metrics_title)
plt.savefig(full_QC_model_path+'/Quality Control/QC_example_data.png',bbox_inches='tight',pad_inches=0)
print('--------------------------------------------------------------')
print('Best average threshold is: '+str(round(average_best_threshold)))
print('--------------------------------------------------------------')
pdResults.head()
qc_pdf_export()
```
## **5.3. Export your model into the BioImage Model Zoo format**
---
<font size = 4>This section exports the model into the BioImage Model Zoo format so it can be used directly with DeepImageJ. The new files will be stored in the model folder specified at the beginning of Section 5.
<font size = 4>Once the cell is executed, you will find a new zip file named after the `Trained_model_name` you specify, with the suffix `.bioimage.io.model`.
<font size = 4>To use it with deepImageJ, download it and unzip it in the ImageJ/models/ or Fiji/models/ folder of your local machine.
<font size = 4>In ImageJ, open the example image given within the downloaded zip file. Go to Plugins > DeepImageJ > DeepImageJ Run. Choose this model from the list and click OK.
<font size = 4> More information at https://deepimagej.github.io/deepimagej/
```
# ------------- User input ------------
# information about the model
#@markdown ##Introduce the metadata of the model architecture:
Trained_model_name = "" #@param {type:"string"}
Trained_model_authors = "[Author 1, Author 2, Author 3]" #@param {type:"string"}
Trained_model_authors_affiliation = "[Affiliation 1, Affiliation 2, Affiliation 3]" #@param {type:"string"}
Trained_model_description = "" #@param {type:"string"}
Trained_model_license = 'MIT'#@param {type:"string"}
Trained_model_references = ["Falk et al. Nature Methods 2019", "Ronneberger et al. arXiv in 2015", "Lucas von Chamier et al. biorXiv 2020"]
Trained_model_DOI = ["https://doi.org/10.1038/s41592-018-0261-2","https://doi.org/10.1007/978-3-319-24574-4_28", "https://doi.org/10.1101/2020.03.20.000133"]
#@markdown ##Choose a threshold for DeepImageJ's postprocessing macro:
Use_The_Best_Average_Threshold = True #@param {type:"boolean"}
#@markdown ###If not, please input:
threshold = 85 #@param {type:"number"}
#@markdown ##Introduce the pixel size (in microns) of the image provided as an example of the model processing:
# information about the example image
PixelSize = 0.0004 #@param {type:"number"}
#@markdown ##Do you want to use the default example image?
default_example_image = True #@param {type:"boolean"}
#@markdown ###If not, please input:
fileID = "" #@param {type:"string"}
if Use_The_Best_Average_Threshold:
threshold = average_best_threshold
from skimage import io
if default_example_image:
source_dir_list = os.listdir(Source_QC_folder)
fileID = os.path.join(Source_QC_folder, source_dir_list[0])
# Read the input image
test_img = io.imread(fileID)
# Load the model
unet = load_model(os.path.join(full_QC_model_path, 'weights_best.hdf5'), custom_objects={'_weighted_binary_crossentropy': weighted_binary_crossentropy(np.ones(2))})
test_prediction = predict_as_tiles(fileID, unet)
# test_prediction = io.imread(os.path.join(prediction_QC_folder, prediction_prefix+fileID))
# # Binarize it with the threshold chosen
test_prediction_mask = convert2Mask(test_prediction, threshold)
## Run this cell to export the model to the BioImage Model Zoo format.
####
from pydeepimagej.yaml import BioImageModelZooConfig
import urllib
# Check minimum size: it is [8,8] for the 2D XY plane
pooling_steps = 0
for keras_layer in unet.layers:
if keras_layer.name.startswith('max') or "pool" in keras_layer.name:
pooling_steps += 1
MinimumSize = [2**(pooling_steps), 2**(pooling_steps)]
# Create a model without compilation so it can be used in any other environment.
input = unet.input
single_output = unet.output
unet = Model(input, single_output)
dij_config = BioImageModelZooConfig(unet, MinimumSize)
# Model developer details
dij_config.Authors.Names = Trained_model_authors[1:-1].split(',')
dij_config.Authors.Affiliations = Trained_model_authors_affiliation[1:-1].split(',')
dij_config.Description = Trained_model_description
dij_config.Name = Trained_model_name
dij_config.References = Trained_model_references
dij_config.DOI = Trained_model_DOI
dij_config.License = Trained_model_license
# Additional information about the model
dij_config.GitHub = 'https://github.com/HenriquesLab/ZeroCostDL4Mic'
dij_config.Date = datetime.now()
dij_config.Documentation = './README.md'
dij_config.Tags = ['zerocostdl4mic', 'deepimagej', 'segmentation', 'tem', 'unet']
dij_config.PackagedBy = [{'name': 'pydeepimagej',
'affiliation': 'PIP python package'}]
dij_config.Framework = 'tensorflow'
if bool(dij_config.FixedPatch):
# Crop the XY plane of the test image to the fixed model input size
x_size = test_img.shape[1]
x_size = int(x_size/2)
x_dim = int(dij_config.ModelInput[1]/2)
test_img = test_img[x_size-x_dim:x_size+x_dim,
x_size-x_dim:x_size+x_dim]
test_prediction_mask = test_prediction_mask[x_size-x_dim:x_size+x_dim,
x_size-x_dim:x_size+x_dim]
test_img = np.expand_dims(test_img, axis = [0,-1])
test_prediction_mask = np.expand_dims(test_prediction_mask, axis = [0,-1])
# Add the information about the test image. PixelSize is given in microns (as entered above).
dij_config.add_test_info(test_img, test_prediction_mask, [PixelSize, PixelSize])
dij_config.create_covers([test_img, test_prediction_mask])
dij_config.Covers = ['./input.png', './output.png']
## Prepare preprocessing file
min_percentile = 0
max_percentile = 99.85
path_preprocessing = "per_sample_scale_range.ijm"
urllib.request.urlretrieve('https://raw.githubusercontent.com/deepimagej/imagej-macros/master/bioimage.io/per_sample_scale_range.ijm', path_preprocessing )
# Set the normalisation percentiles in the macro
ijmacro = open(path_preprocessing, "r")
list_of_lines = ijmacro.readlines()
ijmacro.close()
# The lines at indices 24 and 25 hold the percentile values
list_of_lines[24] = "min_percentile = {};\n".format(min_percentile)
list_of_lines[25] = "max_percentile = {};\n".format(max_percentile)
ijmacro = open(path_preprocessing, "w")
ijmacro.writelines(list_of_lines)
ijmacro.close()
# Include info about the macros
dij_config.Preprocessing = [path_preprocessing]
dij_config.Preprocessing_files = [path_preprocessing]
# Preprocessing following BioImage Model Zoo specifications
dij_config.add_bioimageio_spec('pre-processing', 'scale_range',
mode='per_sample', axes='xyzc',
min_percentile = min_percentile,
max_percentile = max_percentile)
## Prepare postprocessing file
path_postprocessing = "8bitBinarize.ijm"
urllib.request.urlretrieve("https://raw.githubusercontent.com/deepimagej/imagej-macros/master/8bitBinarize.ijm", path_postprocessing )
# Modify the threshold in the macro to the chosen threshold
ijmacro = open(path_postprocessing, "r")
list_of_lines = ijmacro.readlines()
ijmacro.close()
# The line at index 21 holds the optimal threshold
list_of_lines[21] = "optimalThreshold = {};\n".format(threshold)
ijmacro = open(path_postprocessing, "w")
ijmacro.writelines(list_of_lines)
ijmacro.close()
# Include info about the macros
dij_config.Postprocessing = [path_postprocessing]
dij_config.Postprocessing_files = [path_postprocessing]
# Preprocessing following BioImage Model Zoo specifications
dij_config.add_bioimageio_spec('post-processing', 'scale_range',
mode='per_sample', axes='xyzc',
min_percentile=0, max_percentile=100)
dij_config.add_bioimageio_spec('post-processing', 'scale_linear',
gain=255, offset=0, axes='xy')
dij_config.add_bioimageio_spec('post-processing', 'binarize',
threshold=threshold)
# Store the model weights
# ---------------------------------------
# used_bioimageio_model_for_training_URL = "/Some/URL/bioimage.io/"
# dij_config.Parent = used_bioimageio_model_for_training_URL
# Add weights information
dij_config.add_weights_formats(unet, 'TensorFlow',
parent="keras_hdf5",
tf_version=tf.__version__)
dij_config.add_weights_formats(unet, 'KerasHDF5',
tf_version=tf.__version__)
## EXPORT THE MODEL
deepimagej_model_path = os.path.join(full_QC_model_path, Trained_model_name+'.bioimage.io.model')
dij_config.export_model(deepimagej_model_path)
# Create a markdown readme with information
readme_path = os.path.join(deepimagej_model_path, 'README.md')
f= open(readme_path,"w+")
f.write("Visit https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
f.close()
# Zip the bundled model to download
shutil.make_archive(deepimagej_model_path, 'zip', deepimagej_model_path)
```
# **6. Using the trained model**
---
<font size = 4>In this section, unseen data is processed using the model trained in section 4. First, your unseen images are uploaded and prepared for prediction. The trained model then processes them, and the results are saved to your Google Drive.
## **6.1 Generate prediction(s) from unseen dataset**
---
<font size = 4>The current trained model (from section 4.1) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** folder.
<font size = 4>**`Data_folder`:** This folder should contain the images that you want to use your trained network on for processing.
<font size = 4>**`Result_folder`:** This folder will contain the predicted output images.
<font size = 4> Once the predictions are complete, the cell will display an example prediction beside the input image and the calculated mask for visual inspection.
<font size = 4> **Troubleshooting:** If there is a low contrast image warning when saving the images, this may be due to overfitting of the model to the data. It may result in images containing only a single colour. Train the network again with different network hyperparameters.
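<font size = 4>`predict_as_tiles`, used in the cell below, splits each image into model-sized patches, runs the network on each patch, and stitches the results back together so that arbitrarily large images can be processed. A simplified, hypothetical sketch of that idea (zero-padding and non-overlapping tiles; the notebook's real helper may handle borders differently):

```python
import numpy as np

def predict_tiled(image, model_fn, tile=4):
    """Apply `model_fn` to non-overlapping tile x tile patches and stitch the output.
    The image is zero-padded so both sides become multiples of `tile`."""
    h, w = image.shape
    pad_h = (tile - h % tile) % tile
    pad_w = (tile - w % tile) % tile
    padded = np.pad(image, ((0, pad_h), (0, pad_w)))
    out = np.zeros_like(padded, dtype=float)
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            out[y:y + tile, x:x + tile] = model_fn(padded[y:y + tile, x:x + tile])
    return out[:h, :w]  # crop the padding away

# With an identity "model", tiling must reproduce the input exactly.
img = np.arange(30, dtype=float).reshape(5, 6)
print(np.allclose(predict_tiled(img, lambda patch: patch), img))  # True
```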
```
# ------------- Initial user input ------------
#@markdown ###Provide the path to your dataset and to the folder where the prediction will be saved (Result folder), then play the cell to predict output on your unseen images.
Data_folder = '' #@param {type:"string"}
Results_folder = '' #@param {type:"string"}
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
# ------------- Failsafes ------------
if (Use_the_current_trained_model):
print("Using current trained network")
Prediction_model_name = model_name
Prediction_model_path = model_path
full_Prediction_model_path = os.path.join(Prediction_model_path, Prediction_model_name)
if os.path.exists(full_Prediction_model_path):
print("The "+Prediction_model_name+" network will be used.")
else:
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
# ------------- Prepare the model and run predictions ------------
# Load the model and prepare generator
unet = load_model(os.path.join(Prediction_model_path, Prediction_model_name, 'weights_best.hdf5'), custom_objects={'_weighted_binary_crossentropy': weighted_binary_crossentropy(np.ones(2))})
Input_size = unet.layers[0].output_shape[1:3]
print('Model input size: '+str(Input_size[0])+'x'+str(Input_size[1]))
# Create a list of sources
source_dir_list = os.listdir(Data_folder)
number_of_dataset = len(source_dir_list)
print('Number of datasets found in the folder: '+str(number_of_dataset))
predictions = []
for i in tqdm(range(number_of_dataset)):
predictions.append(predict_as_tiles(os.path.join(Data_folder, source_dir_list[i]), unet))
# predictions.append(prediction(os.path.join(Data_folder, source_dir_list[i]), os.path.join(Prediction_model_path, Prediction_model_name)))
# Save the results in the folder along with the masks according to the set threshold
saveResult(Results_folder, predictions, source_dir_list, prefix=prediction_prefix, threshold=None)
# ------------- For display ------------
print('--------------------------------------------------------------')
def show_prediction_mask(file=os.listdir(Data_folder), threshold=(0,255,1)):
plt.figure(figsize=(18,6))
# Wide-field
plt.subplot(1,3,1)
plt.axis('off')
img_Source = plt.imread(os.path.join(Data_folder, file))
plt.imshow(img_Source, cmap='gray')
plt.title('Source image',fontsize=15)
# Prediction
plt.subplot(1,3,2)
plt.axis('off')
img_Prediction = plt.imread(os.path.join(Results_folder, prediction_prefix+file))
plt.imshow(img_Prediction, cmap='gray')
plt.title('Prediction',fontsize=15)
# Thresholded mask
plt.subplot(1,3,3)
plt.axis('off')
img_Mask = convert2Mask(img_Prediction, threshold)
plt.imshow(img_Mask, cmap='gray')
plt.title('Mask (Threshold: '+str(round(threshold))+')',fontsize=15)
interact(show_prediction_mask, continuous_update=False);
```
## **6.2. Export results as masks**
---
```
# @markdown #Play this cell to save results as masks with the chosen threshold
threshold = 120#@param {type:"number"}
saveResult(Results_folder, predictions, source_dir_list, prefix=prediction_prefix, threshold=threshold)
print('-------------------')
print('Masks were saved in: '+Results_folder)
```
## **6.3. Download your predictions**
---
<font size = 4>**Store your data** and ALL its results elsewhere by downloading them from Google Drive, and then clean the original folder tree (datasets, results, trained model, etc.) if you plan to train or use new networks. Please note that otherwise the notebook will **OVERWRITE** any files that have the same name.
# **7. Version log**
---
<font size = 4>**v1.13**:
* This version now includes an automatic runtime restart that sets the h5py library to v2.10.
* Sections 1 and 2 are now swapped for a better export of *requirements.txt*.
* This version also now includes built-in version check and the version log that you're reading now.
---
#**Thank you for using 2D U-Net!**
Some random sanity checks and scratchpads worth keeping around.
```
import jax.numpy as jnp
from scipy import signal
import numpy as np
import time
from jaxdsp import processor_graph
from jaxdsp.processors import fir_filter, iir_filter, clip, delay_line, biquad_lowpass, lowpass_feedback_comb_filter as lbcf, allpass_filter, freeverb, sine_wave
```
## Reverb forward pass
```
from IPython.display import Audio
from scipy.io.wavfile import read as readwav
sample_rate, X = readwav('./audio/speech-male.wav')
tail_length = 24 * sample_rate # let it ring out
X = np.concatenate([X, np.zeros(tail_length)])
processor = freeverb
presets = processor.PRESETS
audio_for_preset = {preset_name: processor.tick_buffer((params, processor.init_state()), X)[1] for preset_name, params in presets.items()}
# Avoid a harsh clip at the end of the sample.
def apply_release(X, release_samples=int(0.2*sample_rate)):
return X * np.concatenate([np.ones(X.shape[-1] - release_samples), np.linspace(1.0, 0.0, release_samples)])
output_for_preset = {preset_name: Audio(apply_release(audio.T), rate=sample_rate) for preset_name, audio in audio_for_preset.items()}
output_for_preset['flat_space']
output_for_preset['expanding_space']
import matplotlib.pyplot as plt
carry = ({"frequency_hz": 0.1}, {"sample_rate": 10.0, "phase_radians": 0.0})
X = jnp.arange(20)
carry, Y = sine_wave.tick_buffer(carry, X)
carry_2, Y_2 = sine_wave.tick_buffer(carry, X)
carry_3, Y_3 = sine_wave.tick_buffer(({"frequency_hz": 0.32}, carry_2[1]), X)
carry_4, Y_4 = sine_wave.tick_buffer(({"frequency_hz": 0.85}, carry_3[1]), X)
plt.figure(figsize=(14, 4))
plt.title("`sine_wave` phase correction over multiple frames with varying frequency", size=16)
plt.plot(jnp.concatenate([Y, Y_2, Y_3, Y_4]))
plt.vlines(np.arange(5) * X.size, ymin=-1, ymax=1, color='r', label='Frame boundaries')
_ = plt.legend()
min_position = 0.0
max_position = 1.0
min_value = 30.0
max_value = 16_000.0
scale = (np.log(max_value) - np.log(min_value)) / (max_position - min_position)
position = np.linspace(min_position, max_position, 1000)
scaled = np.exp(np.log(min_value) + scale * (position - min_position))
inverse_scaled = (np.log(scaled) - np.log(min_value)) / scale + min_position
fig, [plot_1, plot_2] = plt.subplots(2, 1, figsize=(12, 6))
fig.suptitle('Exponential slider scaling', size=16)
plot_1.set_title('Exponential', size=14)
plot_1.plot(position, scaled, linewidth=2, label='exponentially scaled')
plot_1.set_ylabel('Scaled')
plot_1.hlines([min_value, max_value], xmin=min_position, xmax=max_position, color='r', linestyle='--', label='min/max values')
plot_1.legend()
plot_2.set_title('Linear & inverse (should match)', size=14)
plot_2.plot(position, position, label='linear')
plot_2.plot(position, inverse_scaled, linestyle='--', linewidth=3, label='inverse scaled')
plot_2.hlines([min_position, max_position], xmin=min_position, xmax=max_position, color='r', linestyle='--', label='min/max positions')
plot_2.set_xlabel('Position')
plot_2.set_ylabel('Scaled')
_ = plot_2.legend()
fig.tight_layout()
```
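A compact round-trip check of the exponential slider mapping above, reduced to a pair of functions (same formulas and the same example bounds, with positions assumed to lie in [0, 1]):

```python
import numpy as np

min_value, max_value = 30.0, 16_000.0
scale = np.log(max_value) - np.log(min_value)  # max_position - min_position == 1

def position_to_value(position):
    """Exponentially map a normalised slider position onto [min_value, max_value]."""
    return np.exp(np.log(min_value) + scale * position)

def value_to_position(value):
    """Inverse mapping: recover the slider position from a value."""
    return (np.log(value) - np.log(min_value)) / scale

positions = np.linspace(0.0, 1.0, 11)
print(np.allclose(value_to_position(position_to_value(positions)), positions))  # True
```

The endpoints map to `min_value` and `max_value`, matching the dashed min/max lines drawn in the plots above.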
## TODO
* demonstrate single-sample tick
* plot processing time normalized to real_time = 1.0
* compare to C++ performance
* charts for impulse response, magnitude spectrogram and phase, updating in real-time
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from cosmodc2.sdss_colors import load_umachine_processed_sdss_catalog
sdss = load_umachine_processed_sdss_catalog()
print(sdss.keys())
import os
from astropy.table import Table
# MDPL2-based mock
dirname = "/Users/aphearin/work/random/0331"
basename = "cutmock_1e9.hdf5"
fname = os.path.join(dirname, basename)
mock = Table.read(fname, path='data')
mock.Lbox = 500.
# Bpl-based mock
# dirname = "/Users/aphearin/work/random/0331"
# basename = "testing_bpl_based_v4.hdf5"
# fname = os.path.join(dirname, basename)
# mock = Table.read(fname, path='data')
# mock.Lbox = 250.
print(mock.keys())
```
## Map $M_{\rm r}$ onto every galaxy based on its $M_{\ast}$
```
from cosmodc2.sdss_colors import mock_magr
mock['restframe_extincted_sdss_abs_magr'] = mock_magr(
mock['upid'], mock['obs_sm'], mock['sfr_percentile'],
mock['host_halo_mvir'], sdss['sm'], sdss['sfr_percentile_fixed_sm'],
sdss['restframe_extincted_sdss_abs_magr'], sdss['z'])
```
## Map ${\rm g-r}$ and ${\rm r-i}$ onto every galaxy
```
from cosmodc2.sdss_colors import gr_ri_monte_carlo
magr = mock['restframe_extincted_sdss_abs_magr']
percentile = mock['sfr_percentile']
redshift = np.zeros_like(magr)
gr, ri, is_red_ri, is_red_gr = gr_ri_monte_carlo(
magr, percentile, redshift, local_random_scale=0.1)
mock['gr'] = gr
mock['ri'] = ri
mock['is_red_ri'] = is_red_ri
mock['is_red_gr'] = is_red_gr
from cosmodc2.sdss_colors import remap_cluster_bcg_gr_ri_color
gr_remapped, ri_remapped = remap_cluster_bcg_gr_ri_color(mock['upid'], mock['host_halo_mvir'],
np.copy(mock['gr']), np.copy(mock['ri']))
mock['_gr_no_remap'] = np.copy(mock['gr'])
mock['_ri_no_remap'] = np.copy(mock['ri'])
mock['gr'] = gr_remapped
mock['ri'] = ri_remapped
```
## Inspect cluster halo colors
```
magr_mask = mock['restframe_extincted_sdss_abs_magr'] < -19
cenmask = mock['upid'] == -1
cluster_halo_mask = mock['host_halo_mvir'] > 10**14
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
nbins = 80
__=ax1.hist(mock['_gr_no_remap'][cenmask & cluster_halo_mask & magr_mask],
bins=nbins, density=True, alpha=0.8,
label=r'${\rm v3\ centrals}$')
__=ax1.hist(mock['gr'][cenmask & cluster_halo_mask & magr_mask],
bins=nbins, density=True, alpha=0.8,
label=r'${\rm v4\ centrals}$')
legend = ax1.legend()
xlim1 = ax1.set_xlim(0.25, 1.2)
xlabel1 = ax1.set_xlabel(r'${\rm g-r}$')
ylabel1 = ax1.set_ylabel(r'${\rm PDF}$')
_title = r'$M_{\rm r} < -19;\ M_{\rm halo}>10^{14}M_{\odot}; z=0$'
title1 = ax1.set_title(_title)
__=ax2.scatter(mock['_gr_no_remap'][cenmask & cluster_halo_mask & magr_mask],
mock['_ri_no_remap'][cenmask & cluster_halo_mask & magr_mask], s=0.25,
label=r'${\rm v3\ centrals}$')
__=ax2.scatter(mock['gr'][cenmask & cluster_halo_mask & magr_mask],
mock['ri'][cenmask & cluster_halo_mask & magr_mask], s=0.25,
label=r'${\rm v4\ centrals}$')
xlim2 = ax2.set_xlim(0., 1.25)
ylim2 = ax2.set_ylim(0., 0.6)
xlabel2 = ax2.set_xlabel(r'${\rm g-r}$')
ylabel2 = ax2.set_ylabel(r'${\rm r-i}$')
_title = r'$M_{\rm r} < -19;\ M_{\rm halo}>10^{14}M_{\odot}; z=0$'
title2 = ax2.set_title(_title)
figname = 'central_colors_v3_vs_v4.pdf'
fig.savefig(figname, bbox_extra_artists=[xlabel1, ylabel1], bbox_inches='tight')
```
## Compare color-dependent clustering to Zehavi+11
```
from cosmodc2.sdss_colors import zehavi11_clustering
from cosmodc2.sdss_colors.sdss_measurements import rp as rp_zehavi
wp_blue18p5, wperr_blue18p5 = zehavi11_clustering(-18.5, subsample='blue')
wp_red18p5, wperr_red18p5 = zehavi11_clustering(-18.5, subsample='red')
wp_blue19p0, wperr_blue19p0 = zehavi11_clustering(-19, subsample='blue')
wp_red19p0, wperr_red19p0 = zehavi11_clustering(-19, subsample='red')
wp_blue19p5, wperr_blue19p5 = zehavi11_clustering(-19.5, subsample='blue')
wp_red19p5, wperr_red19p5 = zehavi11_clustering(-19.5, subsample='red')
wp_blue20p0, wperr_blue20p0 = zehavi11_clustering(-20, subsample='blue')
wp_red20p0, wperr_red20p0 = zehavi11_clustering(-20, subsample='red')
wp_blue20p5, wperr_blue20p5 = zehavi11_clustering(-20.5, subsample='blue')
wp_red20p5, wperr_red20p5 = zehavi11_clustering(-20.5, subsample='red')
wp_blue21p5, wperr_blue21p5 = zehavi11_clustering(-21.5, subsample='blue')
wp_red21p5, wperr_red21p5 = zehavi11_clustering(-21.5, subsample='red')
from cosmodc2.mock_diagnostics import zehavi_wp
protoDC2_littleh = 0.7
period = mock.Lbox
x, y, z, vz = mock['x'], mock['y'], mock['z'], mock['vz']
magr = mock['restframe_extincted_sdss_abs_magr']
rp_mids, wp_v4_18p5_blue = zehavi_wp(
    x, y, z, vz, period, magr, -18.5, protoDC2_littleh,
    subsample='blue', gr_colors=mock['gr'])
rp_mids, wp_v4_18p5_red = zehavi_wp(
    x, y, z, vz, period, magr, -18.5, protoDC2_littleh,
    subsample='red', gr_colors=mock['gr'])
rp_mids, wp_v4_19p0_blue = zehavi_wp(
    x, y, z, vz, period, magr, -19.0, protoDC2_littleh,
    subsample='blue', gr_colors=mock['gr'])
rp_mids, wp_v4_19p0_red = zehavi_wp(
    x, y, z, vz, period, magr, -19.0, protoDC2_littleh,
    subsample='red', gr_colors=mock['gr'])
rp_mids, wp_v4_19p5_blue = zehavi_wp(
    x, y, z, vz, period, magr, -19.5, protoDC2_littleh,
    subsample='blue', gr_colors=mock['gr'])
rp_mids, wp_v4_19p5_red = zehavi_wp(
    x, y, z, vz, period, magr, -19.5, protoDC2_littleh,
    subsample='red', gr_colors=mock['gr'])
rp_mids, wp_v4_20p5_blue = zehavi_wp(
    x, y, z, vz, period, magr, -20.5, protoDC2_littleh,
    subsample='blue', gr_colors=mock['gr'])
rp_mids, wp_v4_20p5_red = zehavi_wp(
    x, y, z, vz, period, magr, -20.5, protoDC2_littleh,
    subsample='red', gr_colors=mock['gr'])
rp_mids, wp_v4_21p5_blue = zehavi_wp(
    x, y, z, vz, period, magr, -21.5, protoDC2_littleh,
    subsample='blue', gr_colors=mock['gr'])
rp_mids, wp_v4_21p5_red = zehavi_wp(
    x, y, z, vz, period, magr, -21.5, protoDC2_littleh,
    subsample='red', gr_colors=mock['gr'])
fig, _axes = plt.subplots(2, 2, figsize=(10, 8))
((ax1, ax2), (ax3, ax4)) = _axes
axes = ax1, ax2, ax3, ax4
for ax in axes:
    __=ax.loglog()
__=ax1.errorbar(rp_zehavi, wp_blue19p0, np.sqrt(wperr_blue19p0), fmt='.', color='blue')
__=ax1.errorbar(rp_zehavi, wp_red19p0, np.sqrt(wperr_red19p0), fmt='.', color='red')
__=ax1.plot(rp_mids, wp_v4_19p0_blue, color='blue')
__=ax1.plot(rp_mids, wp_v4_19p0_red, color='red')
__=ax2.errorbar(rp_zehavi, wp_blue19p5, np.sqrt(wperr_blue19p5), fmt='.', color='blue')
__=ax2.errorbar(rp_zehavi, wp_red19p5, np.sqrt(wperr_red19p5), fmt='.', color='red')
__=ax2.plot(rp_mids, wp_v4_19p5_blue, color='blue')
__=ax2.plot(rp_mids, wp_v4_19p5_red, color='red')
__=ax3.errorbar(rp_zehavi, wp_blue20p5, np.sqrt(wperr_blue20p5), fmt='.', color='blue')
__=ax3.errorbar(rp_zehavi, wp_red20p5, np.sqrt(wperr_red20p5), fmt='.', color='red')
__=ax3.plot(rp_mids, wp_v4_20p5_blue, color='blue')
__=ax3.plot(rp_mids, wp_v4_20p5_red, color='red')
__=ax4.errorbar(rp_zehavi, wp_blue21p5, np.sqrt(wperr_blue21p5), fmt='.', color='blue')
__=ax4.errorbar(rp_zehavi, wp_red21p5, np.sqrt(wperr_red21p5), fmt='.', color='red')
__=ax4.plot(rp_mids, wp_v4_21p5_blue, color='blue')
__=ax4.plot(rp_mids, wp_v4_21p5_red, color='red')
```
```
!pip install git+https://github.com/AlpacaDB/backlight
import os
import numpy as np
import pandas as pd
import backlight
```
# Generate example dummy data
```
np.random.seed(0)
# market data
idx = pd.date_range("2018-04-01 00:00:00", "2018-06-30 23:59:59", freq="10S")
if not os.path.exists("example_market.csv"):
    ask = np.cumsum(np.random.rand(len(idx)) - 0.5) + 100.0
    bid = ask - 0.02
    pd.DataFrame(
        index=idx,
        data=np.array([ask, bid]).T,
        columns=["ask", "bid"]
    ).abs().to_csv("example_market.csv")
# signal data (shares the same index as the market data, so idx is defined above
# unconditionally; otherwise this cell would fail when only the market file exists)
if not os.path.exists("example_signal.csv"):
    sig = np.random.rand(3, len(idx)).T
    n = np.sum(sig, axis=1)
    pd.DataFrame(
        index=idx,
        data=sig / n.reshape(-1, 1),
        columns=["down", "neutral", "up"]
    ).to_csv("example_signal.csv")
```
# Configuration
```
model_id = ""
start_dt = "2018-06-03 00:00:00+0000"
end_dt = "2018-06-06 00:00:00+0000"
sig_url = "file:///{}/example_signal.csv".format(os.getcwd())
mkt_url = "file:///{}/example_market.csv".format(os.getcwd())
```
# Loading the signal
```
from backlight import signal
SYMBOL = "USDJPY"
sig = signal.load_signal(SYMBOL, sig_url, start_dt, end_dt)
sig.tail()
sig.symbol
```
# Loading the market data
```
from backlight import datasource
mkt = datasource.load_marketdata(
    sig.symbol,
    sig.start_dt,
    sig.end_dt,
    mkt_url,
)
mkt.tail()
```
# Generating labels
## Fixed Neutral
```
from backlight import labelizer
from backlight.labelizer.ternary.fixed_neutral import FixedNeutralLabelizer
lbl_fix = labelizer.create_labels(
    mkt,
    FixedNeutralLabelizer(
        lookahead="3Min",
        neutral_atol=0.075,
        neutral_rtol=0.00,
    ))
lbl_fix.head()
lbl_fix.label_type
# (-1.0, 0.0, 1.0) : (Down, Neutral, Up)
lbl_fix.groupby("label").label.count() / len(lbl_fix)
```
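The fixed-neutral idea can be illustrated with a small standalone sketch. This is only a conceptual approximation of what a fixed-neutral ternary labeler does (look `lookahead` into the future and call any move within an absolute tolerance neutral); the function name and alignment logic below are our own, not backlight's implementation.

```python
import pandas as pd

# Conceptual sketch only -- NOT backlight's implementation. A fixed-neutral ternary
# labeler compares the price `lookahead` into the future with the current price and
# labels +1 (up), -1 (down), or 0 (neutral) when the move is within +/- neutral_atol.
def fixed_neutral_labels(price: pd.Series, lookahead: str, neutral_atol: float) -> pd.Series:
    # Future price aligned to the current timestamps; missing lookups become NaN.
    future = price.reindex(price.index + pd.Timedelta(lookahead)).to_numpy()
    diff = future - price.to_numpy()
    labels = pd.Series(0.0, index=price.index)
    labels[diff > neutral_atol] = 1.0
    labels[diff < -neutral_atol] = -1.0  # NaN diffs fail both tests and stay neutral
    return labels
```

On a regular time grid this mirrors the (-1.0, 0.0, 1.0) label convention shown by `lbl_fix.label_type` below.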
## Dynamic Neutral
```
from backlight.labelizer.ternary.dynamic_neutral import DynamicNeutralLabelizer
lbl_dyn = labelizer.create_labels(
    mkt,
    DynamicNeutralLabelizer(
        lookahead="3Min",
        neutral_ratio=0.38,
        neutral_window="1H",
        neutral_hard_limit=0.0,
    ))
lbl_dyn.head()
lbl_dyn.label_type
# (-1.0, 0.0, 1.0) : (Down, Neutral, Up)
lbl_dyn.groupby("label").label.count() / len(lbl_dyn)
```
# Simulate trading strategy
```
from backlight import strategies
trades = strategies.simple_entry_and_exit(mkt, sig, max_holding_time=pd.Timedelta('30min'))
```
# Simulate and evaluate positions
```
from backlight import positions
positions = positions.calc_positions(trades, mkt)  # note: rebinds the module name 'positions' to the resulting frame
positions.head()
```
# Calculate Metrics - based on the raw signals
```
from backlight import metrics
m = metrics.calc_metrics(sig, lbl_fix)
m
from backlight import metrics
m = metrics.calc_metrics(sig, lbl_dyn)
m
```
# Calculate Performance
```
from backlight import metrics
m = metrics.calc_position_performance(positions)
m
from backlight import metrics
m = metrics.calc_trade_performance(trades, mkt)
m
```
# Plot positions
```
from backlight import plot
plot.plot_pl(positions)
from backlight import plot
plot.plot_cumulative_pl(positions)
```
# Load data
```
import itertools
from tqdm import tqdm
from collections import Counter
import scipy.stats as stats
import pandas as pd
import numpy as np
import pickle
from statsmodels.stats.multitest import multipletests
# %matplotlib notebook
pd.set_option('display.max_columns', None)
import warnings
warnings.filterwarnings('ignore')
# load the dictionaries for drugs, AE
# drug_dic = pickle.load(open('../Data/curated/drug_dic.pk', 'rb'))
# In this MedDRA_dic, the key is the PT_name string and the value is a list:
# [PT, PT_name, HLT,HLT_name,HLGT,HLGT_name,SOC,SOC_name,SOC_abbr]
meddra_se_disease_dic = pickle.load(open('../Data/curated/AE_dic.pk', 'rb'))
MedDRA_dic_all = pickle.load(open('../Data/curated/AE_mapping.pk', 'rb'))
def format_tex(float_number):
    """Format a number as a LaTeX power of ten, e.g. 0.0012 -> $< 1.2\\times10^{-3}$."""
    if float_number == 0:
        # guard first: np.log10(0) is undefined (-inf, with a runtime warning)
        return r"$< 0 \times10^{0}$"
    exponent = np.floor(np.log10(float_number))
    mantissa = float_number / 10**exponent
    mantissa_format = str(mantissa)[0:3]
    # raw string so \times stays literal (a plain string would turn \t into a tab)
    return r"$< {0}\times10^{{{1}}}$".format(mantissa_format, str(int(exponent)))
def weird_division(n, d):
    """Division that returns 0 on a zero denominator instead of raising."""
    return n / d if d else 0

def CI(ROR, A, B, C, D):
    """95% confidence interval of a reporting odds ratio, computed on the log scale."""
    ror = np.log(ROR)
    sq = 1.96*np.sqrt(weird_division(1, A) + weird_division(1, B) + weird_division(1, C) + weird_division(1, D))
    CI_up = np.exp(ror + sq)
    CI_down = np.exp(ror - sq)
    return CI_up, CI_down
```
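As a sanity check on the `CI` helper above, the same log-scale interval can be computed by hand for a toy 2×2 contingency table (the counts below are illustrative only, not FAERS data):

```python
import numpy as np

# Toy 2x2 table: A/B = reports with/without the SE in 2019, C/D the same in 2020.
# Numbers are illustrative only.
A, B, C, D = 40, 960, 20, 980
ROR = (A / B) / (C / D)  # reporting odds ratio

# 95% CI on the log scale, matching the CI() helper above:
# exp(log(ROR) +/- 1.96 * sqrt(1/A + 1/B + 1/C + 1/D))
half_width = 1.96 * np.sqrt(1 / A + 1 / B + 1 / C + 1 / D)
ci_up = np.exp(np.log(ROR) + half_width)
ci_down = np.exp(np.log(ROR) - half_width)
```

The point estimate always lies inside the interval, and the interval is asymmetric around it because the 1.96 margin is applied on the log scale.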
# All patients
```
SE_uncondition = pickle.load(open('../Data/pandemic/SE_uncondition.pk', 'rb'))
# Remove some SE that make less sense or not related to medication
# """15765: device use error, 2688:device physical property issue; 2232:product contamination;4293: compulsions;
# 10484: infusion; 6275:body height below normal; 2325:large for dates baby. 10870:product distribution issue.
# 8657:poverty of thought content. 2222:poor personal hygiene. 1039:family stress. 1215:nosocomial infection.
# 4347:syringe issue. 4374:confabulation, 647:device occlusion, 6141:product outer packaging issue
# 3716:product contamination with body fluid, 2048:fear of disease. 10249:drain placement.1848:treatment failure
# 2659:device leakage. 8613:device alarm issue. 9141:product label confusion. 8593:device connection issue
# 3820:application site discharge. 6799:post procedural discharge: 1728:poisoning deliberate
# 2576:social problem. 132:device malfunction 2591:needle issue. 4216: exomphalos. 1669:fear of falling
# 713:medical device change. 4425:intercepted medication error. 1809:exposure via partner. 3087 :liquid product physical issue
# 4005:medical device implantation. 5616:application site discomfort. 865:device failure. 4908:device ineffective
# 14262:reproductive complication associated with device. 4624: device colour issue. 1051 educational problem
# 1281:device difficult to use. 4551:pregnancy of partner. 9615:prescribed underdose. 11243:product physical consistency issue
# 1598:product odour abnormal. 437:accident at work. 1451:product packaging quantity issue,1679:incorrect drug administration rate
# 627:hospice care. 238:unevaluable event. 3067: imprisonment. 8012:stress at work 6177:medical device pain
# 277:mass, 1046:thrombosis in device, 905:product size issue. 2071:product label issue. 218:off label use
# 2744:product colour issue. 1224: laboratory test abnormal 2139:product packaging issue. 5240:product contamination physical
# 11139: expired device used. 12123:lack of injection site rotation. 639:device issue. 1095:injury associated with device
# 1747:therapeutic product ineffective, 7592: product dropper issue. 158:incorrect dose administered. 1041:economic problem
# 341:device related infection. 655: product physical issue. 4257:device related sepsis,968:treatment noncompliance
# 353:road traffic accident. 991:medication error. 335:drug ineffective, 14157:device physical property issue,
# 14881:device power source issue, 11494:off label use, 13223: device malfunction, 15755:unintentional medical device removal
# 13242:drug dependence,
# remove *hallucination, visual* with code: 4652, malaise: 4515, condition aggravated: 4846
# 15201:toxicity to various agents,
# 'eating disorder', 'incoherent', 'out of specification test results', 'antibody test negative', 'gene mutation identification test positive', 'gun shot wound',
# 'bed sharing','antibody test positive', 'large for dates baby,viral load', 'small for dates baby', 'x-ray',
# 'scan', 'blood test','female condom', 'sleep study','boredom',' toxicity to various agents',
# 'transplant failure', 'pregnancy after post coital contraception','drug intolerance', 'drug withdrawal syndrome'
# [732,7823,15255,12614,13146,1347,12821,7349,1117,4512,3818,315]
# 521:infection
# """
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
drop_SE_meddra = ['10072878', '10047290', '10039083', '10037784', '10049521', '10057687', '10067769','10012438', '10023830', '10060991', '10036099', '10007225', '10031161', '10046820', '10000217', '10005654', '10074248', '10049058', '10044287', '10061533', '10039449',
'10061637', '10045104', '10016577', '10027990', '10039179', '10053547', '10021124', '10014408', '10048852', '10060960', '10015670', '10022961', '10067724', '10047853',
'10038457', '10020880', '10053937', '10008967', '10052894', '10026956', '10008267', '10006580', '10028093', '10012318', '10058890', '10010938', '10070933', '10071504',
'10005136', '10053249', '10016235', '10036485', '10003598', '10061416', '10020524', '10037382', '10048222', '10061545', '10058682', '10019299', '10028862', '10026961',
'10042598', '10018404', '10029888', '10012557', '10003476', '10030527', '10068065','10017792', '10064913', '10069880', '10069865', '10053762', '10063829', '10072931',
'10066901', '10013700', '10012587', '10069217', '10069218', '10016173', '10010144', '10022459', '10038001', '10064685', '10070691', '10058049', '10071406', '10071408',
'10056871', '10012578', '10069841', '10003569', '10041738', '10008686', '10010007', '10013663', '10070863', '10014062', '10021630', '10071067', '10061426', '10063478',
'10041092', '10048064', '10061498', '10061726', '10016387', '10053319', '10048909', '10021789', '10019075', '10025482', '10010264', '10073318', '10073513', '10052971',
'10064687', '10053686', '10069327', '10063130', '10073317', '10071404', '10071415', '10036790', '10069868', '10064685', '10071134', '10064684', '10059108', '10016251',
'10056871', '10069249', '10012578', '10012575', '10062546', '10069803', '10069889', '10069853', '10069326', '10069331', '10010151', '10069299', '10069330', '10069250',
'10069226', '10071408', '10060769', '10071412', '10049481', '10062255', '10064538', '10069289', '10069405', '10053683', '10069173', '10069227', '10049711', '10069229',
'10069222', '10071403', '10069217', '10012587', '10027754', '10069221', '10041291','10069846', '10070691', '10069325', '10070592', '10070305', '10069224', '10072867',
'10051808', '10068945', '10070692', '10071407', '10069878', '10063560', '10069223','10049812', '10065769', '10071287', '10061483', '10071406', '10069228', '10069175',
'10069297', '10069802', '10072754', '10061366', '10069837', '10069218', '10069293', '10073300', '10069272', '10069873', '10041290', '10065535', '10059875', '10053487',
'10072753', '10069176', '10063601', '10012579', '10069266', '10069271', '10069871', '10068515', '10061087', '10071575', '10073311', '10072608', '10069292', '10070617',
'10059057', '10069864', '10071051', '10069329', '10050474', '10069267', '10070468', '10051297', '10070773', '10069232', '10071405', '10072645', '10069294', '10069298',
'10070470', '10062680', '10069300', '10069231', '10062197', '10038773', '10072082', '10065066', '10069841', '10069268', '10073302', '10069273', '10070765', '10059015',
'10069296', '10068444', '10057254', '10072342', '10069295', '10052371', '10069174', '10069867', '10069332', '10063599', '10073170', '10069877', '10060770', '10073306',
'10071409', '10071148', '10065963', '10069220', '10073594', '10073760', '10074266', '10073305', '10072950', '10051841', '10069861', '10069801', '10071587', '10073301',
'10069838', '10030020', '10070574', '10030018', '10042458', '10042464', '10022081', '10022086', '10022085', '10022061', '10022067', '10022111', '10022056', '10062519',
'10066083', '10061549', '10022095', '10022093', '10022052', '10022112', '10053424', '10003053', '10003041', '10003055', '10066044', '10022107', '10059203', '10057880',
'10070245', '10054266', '10053483', '10053183', '10051572', '10050104', '10064578', '10003060', '10003046', '10059830', '10022078', '10048943', '10003036', '10050057',
'10022075', '10048744', '10059005', '10061409', '10065577', '10065059', '10059650', '10048941', '10068317', '10017012', '10063783', '10049666', '10051116', '10017000',
'10051099', '10022076', '10022062', '10063772', '10051154', '10053664', '10048634', '10064774', '10063782', '10056520', '10022066', '10022079', '10052267', '10057688',
'10067620', '10059008', '10065600', '10052264', '10022105', '10067252', '10022055', '10059009', '10049041', '10063785', '10063587', '10052268', '10022072', '10022071',
'10050103', '10022104', '10066778', '10058043', '10059048', '10065614', '10022065', '10022082', '10058464', '10066797', '10054997', '10063860', '10025478', '10051101',
'10022044', '10003050', '10022088', '10066210', '10052162', '10063775', '10065615', '10068689', '10067253', '10067255', '10049043', '10053663', '10022090', '10021542',
'10072136', '10063765', '10063763', '10063862', '10058713', '10065488', '10065460', '10066211', '10053505', '10022045', '10063779', '10063867', '10063871', '10055123',
'10063850', '10049042', '10063683', '10063848', '10016777', '10067996', '10022048', '10067995', '10036769', '10063786', '10058062', '10065464', '10063857', '10054995',
'10054996', '10053482', '10059241', '10051100', '10063778', '10068607', '10055122', '10063839', '10050082', '10065489', '10063072', '10066209', '10073059', '10065485',
'10007811', '10063776', '10063762', '10054812', '10063868', '10068159', '10065491', '10063854', '10063858', '10063873', '10063774', '10065463', '10069667', '10063881',
'10050101', '10052270', '10072694', '10065490', '10059079', '10064109', '10063780', '10066149', '10063874', '10049044', '10053484', '10068922', '10065456', '10049660',
'10055662', '10049260', '10065455', '10065476', '10063771', '10065652', '10063865', '10065473', '10022083', '10053995', '10065653', '10003048', '10003059', '10052271',
'10066041', '10063870', '10065902', '10066214', '10050100', '10065458', '10073418', '10056270', '10073412', '10054092', '10060124', '10059386', '10063856', '10063784',
'10068954', '10073457', '10063863', '10001315', '10062102', '10055909', '10073174', '10073759', '10065461', '10052272', '10033905', '10073779', '10074011', '10048648',
'10066221', '10073989', '10073998', '10074008', '10074013', '10055117', '10073752', '10074001', '10065487', '10073993', '10069624', '10073992', '10073994', '10074004',
'10073416', '10065457', '10063880', '10074015', '10073996', '10074010', '10074012','10074005', '10074000', '10073624', '10073606', '10063864', '10065454', '10073990',
'10073615', '10068881', '10065475', '10073612', '10063960', '10013654', '10073768', '10013709', '10036556', '10045188', '10051076', '10072268', '10057362', '10073508',
'10013687', '10022523', '10033295', '10061623', '10013710', '10013722', '10013745', '10026923', '10000381', '10068071', '10014166', '10049177', '10060942', '10051118',
'10066053', '10045542', '10057857', '10056327', '10059866', '10052016', '10064306', '10018981', '10013718', '10029719', '10036573', '10063686', '10052804', '10066368',
'10036567', '10061132', '10067082', '10064373', '10048958', '10050895', '10013756', '10068072', '10028243', '10067667', '10049998', '10060940', '10049975', '10050845',
'10063370', '10049463', '10013752', '10057856', '10062015', '10052805', '10061133', '10050192', '10061452', '10054807', '10060320', '10060321', '10063122', '10048407',
'10013717', '10036575', '10049055', '10060144', '10013753', '10072426', '10050846', '10052744', '10061824', '10062014', '10064374', '10067010', '10066468', '10052237',
'10060145', '10036574', '10050425', '10064773', '10063222', '10052806', '10048723', '10036578', '10059641', '10067688', '10073085', '10066266', '10053580', '10064381',
'10064937', '10073954', '10052970', '10073702', '10013744', '10072385', '10046735', '10051792', '10044439', '10067482', '10053716', '10052428', '10062932', '10018794',
'10048038', '10029897', '10036410', '10048629', '10072170', '10060933', '10050325', '10048031', '10061468', '10061613', '10053669', '10023439', '10053692', '10010185',
'10051358', '10051373', '10050858', '10010162', '10025127', '10047920', '10020364', '10066337', '10019314', '10059442', '10024714', '10058672', '10050778', '10065044',
'10057677', '10063181', '10066900', '10019315', '10038533', '10043903', '10024715', '10054923', '10058041', '10061730', '10050852', '10058042', '10059185', '10063581',
'10051604', '10066194', '10061890', '10041899', '10065240', '10011643', '10057925', '10058845', '10010183', '10057679', '10054108', '10065386', '10068179', '10059444',
'10052277', '10010186', '10056409', '10010184', '10010187', '10065242', '10049169', '10060345', '10035148', '10059032', '10048870', '10062355', '10013754', '10061822',
'10063671', '10060872', '10003051', '10003054', '10022064', '10022094', '10027091', '10048396', '10050114', '10050464', '10050729', '10053425', '10053998', '10054846',
'10054994', '10057581', '10057843', '10058142', '10058974', '10059058', '10061111', '10061153', '10061649', '10063781', '10063866', '10064355', '10064366', '10064382',
'10064385', '10064505', '10064998', '10065117', '10065484', '10066967', '10068003', '10068383', '10069216', '10069842', '10069902', '10071430', '10072720', '10073303',
'10073336', '10074425', '10074495', '10074497', '10074498', '10074508', '10074555', '10074586', '10074704', '10074758', '10074796', '10074853', '10074860', '10074868',
'10074896', '10074902', '10074903', '10074904', '10074905', '10074906', '10074946', '10075097', '10075103', '10075107', '10075333', '10075373', '10075461', '10075511',
'10075571', '10075573', '10075574', '10075578', '10075580', '10075585', '10075765', '10075928', '10075933', '10075965', '10075967', '10075971', '10076053', '10076065',
'10076070', '10076073', '10076087', '10076089', '10076091', '10076101', '10076128', '10076133', '10076141', '10076182', '10076232', '10076273', '10076308', '10076309',
'10076368', '10076470', '10076476', '10076481', '10076503', '10076542', '10076544', '10076573', '10076637', '10076639', '10076869', '10076874', '10076897', '10076936',
'10076991', '10077040', '10077107', '10077455', '10077643', '10077659', '10077672', '10077678', '10077767', '10077796', '10077800', '10077801', '10077812', '10078105',
'10078156', '10078325', '10078340', '10078390', '10078504', '10078525', '10078668', '10078675', '10079007', '10079078', '10079212', '10079213', '10079221', '10079277',
'10079315', '10079316', '10079317', '10079381', '10079400', '10079404', '10079466', '10079523', '10079645', '10079843', '10079846', '10079849', '10079903', '10080000',
'10080001', '10080092', '10080099', '10080179', '10080231', '10080304', '10080357','10080359', '10080459', '10080648', '10080714', '10080718', '10080751', '10080753',
'10080754', '10080804', '10080901', '10080903', '10080974', '10081202', '10081301', '10081359', '10081478', '10081479', '10081480', '10081540', '10081572', '10081574',
'10081575', '10081576', '10081577', '10081578', '10081579', '10081580', '10081581','10081675', '10081704', '10081742', '10081743', '10081770', '10081771', '10082169',
'10082200', '10082201', '10082202', '10082204', '10082205', '10082292', '10082458', '10082527', '10083420', '10083599', '10083995', '10061427', '10002653', '10077122',
'10071095', '10001756', '10002730', '10008453', '10014404', '10025250', '10034998', '10037794', '10051082', '10051083', '10052909', '10053073', '10053468', '10053469',
'10054976', '10054977', '10056613', '10057374', '10057480', '10058909', '10059283', '10059828', '10059862', '10061018', '10061758', '10062035', '10062117', '10064728',
'10065100', '10065154', '10065357', '10066377', '10066401', '10067768', '10068048', '10068492', '10072806', '10074079', '10074300', '10074746', '10074842', '10074950',
'10074982', '10078115', '10078798', '10079637', '10080422', '10083202']
meddra_drop_list = ['10003051', '10003054', '10022064', '10022094', '10027091', '10048396', '10050114', '10050464', '10050729', '10053425', '10053998', '10054846', '10054994', '10057581',
'10057843', '10058142', '10058974', '10059058', '10061111', '10061153', '10061649', '10063781', '10063866', '10064355', '10064366', '10064382', '10064385', '10064505',
'10064998', '10065117', '10065484', '10066967', '10068003', '10068383', '10069216', '10069842', '10069902', '10071430', '10072720', '10073303', '10073336', '10074425',
'10074495', '10074497', '10074498', '10074508', '10074555', '10074586', '10074704','10074758', '10074796', '10074853', '10074860', '10074868', '10074896', '10074902',
'10074903', '10074904', '10074905', '10074906', '10074946', '10075097', '10075103', '10075107', '10075333', '10075373', '10075461', '10075511', '10075571', '10075573',
'10075574', '10075578', '10075580', '10075585', '10075765', '10075928', '10075933', '10075965', '10075967', '10075971', '10076053', '10076065', '10076070', '10076073',
'10076087', '10076089', '10076091', '10076101', '10076128', '10076133', '10076141', '10076182', '10076232', '10076273', '10076308', '10076309', '10076368', '10076470',
'10076476', '10076481', '10076503', '10076542', '10076544', '10076573', '10076637', '10076639', '10076869', '10076874', '10076897', '10076936', '10076991', '10077040',
'10077107', '10077455', '10077643', '10077659', '10077672', '10077678', '10077767', '10077796', '10077800', '10077801', '10077812', '10078105', '10078156', '10078325',
'10078340', '10078390', '10078504', '10078525', '10078668', '10078675', '10079007', '10079078', '10079212', '10079213', '10079221', '10079277', '10079315', '10079316',
'10079317', '10079381', '10079400', '10079404', '10079466', '10079523', '10079645', '10079843', '10079846', '10079849', '10079903', '10080000', '10080001', '10080092',
'10080099', '10080179', '10080231', '10080304', '10080357', '10080359', '10080459', '10080648', '10080714', '10080718', '10080751', '10080753', '10080754', '10080804',
'10080901', '10080903', '10080974', '10081202', '10081301', '10081359', '10081478', '10081479', '10081480', '10081540', '10081572', '10081574', '10081575', '10081576',
'10081577', '10081578', '10081579', '10081580', '10081581', '10081675', '10081704', '10081742', '10081743', '10081770', '10081771', '10082169', '10082200', '10082201',
'10082202', '10082204', '10082205', '10082292', '10082458', '10082527', '10083420', '10083599', '10083995', '10061427' ,'10002653', '10077122', '10071095', '10001756',
'10002730', '10008453', '10014404', '10025250', '10034998', '10037794', '10051082','10051083', '10052909', '10053073', '10053468', '10053469', '10054976', '10054977', '10056613',
'10057374', '10057480', '10058909', '10059283', '10059828', '10059862', '10061018', '10061758', '10062035', '10062117', '10064728', '10065100', '10065154', '10065357',
'10068048', '10068492', '10072806', '10074079', '10074300', '10074746', '10074842', '10074950', '10074982', '10078115', '10078798', '10079637', '10080422', '10083202',
'10040560', '10060938', '10012335', '10011762','10011906','10016256', '10040642', '10000059', '10016322','10033371','10042209', '10079987', '10022116','10050953', '10046274',
'10066377', '10066401', '10067768', '10013971', '10013969', '10036590', '10084268', '10051905', '10084271','10084451','10070255','10084380']
# '10084268', '10051905', '10084271','10084451','10070255','10084380','10016256',
## '10084268' is covid-19; we remove it along with all other explicit covid-19 symptoms
## '10051905': 'coronavirus infection'
## '10084271' 'sars-cov-2 test positive'
## '10084451' 'suspected covid-19'
## 10070255 coronavirus test positive
## 10084380 covid-19 pneumonia
## 10077122 device delivery system issue
## '10016256' 'fatigue'
## '10071095' 'growth failure'
drop_SE_meddra.extend(meddra_drop_list)
# pickle.dump(drop_SE_name, open('../Data/pandemic/drop_SE_name.pk','wb'))
drop_list = drop_SE_meddra
# drop_SE_name
SE_uncondition = SE_uncondition.drop_duplicates('SE')
idd = [i not in drop_list for i in SE_uncondition['SE']]
SE_uncondition = SE_uncondition[idd]
"""Nan = 0/0, in our case means nothing, so we drop them first."""
SE_uncondition = SE_uncondition[SE_uncondition['2019_ROR'].notna()]
# """Find the ID of nonsense SE by keywords, and then copy the IDs to the above drop_list"""
# """Remove the SE with specific word"""
# # drop_word = ['device', 'issue', 'product', 'equipment', 'exposure', 'broken','falling', 'suicide','idea', 'site',
# # 'crime', 'foreign', 'quality','drug','pregnancy','dose', 'nonspecific' ,'homicid','event','wound',
# # 'idea', 'transplant', 'thoughts', 'user','infusion', 'plague', 'technique', 'medication']
# drop_word = ['therapy']
# drop_index = [any(word in se for word in drop_word) for se in SE_uncondition.name]
# drop_list_1 = SE_uncondition[drop_index]
# SE_uncondition.shape, drop_list_1.shape
# print(list(drop_list_1.SE))
# ll = ['eating disorder', 'incoherent', 'out of specification test results', 'antibody test negative', 'gene mutation identification test positive', 'gun shot wound',
# 'bed sharing','antibody test positive', 'large for dates baby,viral load', 'small for dates baby', 'x-ray',
# 'scan', 'blood test','female condom', 'sleep study','boredom',' toxicity to various agents',
# 'transplant failure', 'pregnancy after post coital contraception','drug intolerance', 'drug withdrawal syndrome', 'gustatory and olfactory', 'anosmia']
# for i in ll:
# print(list(SE_uncondition[SE_uncondition.name==i].SE))
SE_uncondition.head(3)
SE_uncondition_2019 = SE_uncondition[['SE','name','2019_A', '2019_B', '2020_A','2020_B','2019_ROR','2019_Delta']]
SE_uncondition_2019['p_value'] = SE_uncondition_2019.apply(lambda row: stats.fisher_exact([[row['2019_A'], row['2019_B']], [row['2020_A'], row['2020_B']]])[1], axis = 1)
# multipletests
SE_uncondition_2019['sig'], SE_uncondition_2019['p_corrected'] = multipletests(pvals=SE_uncondition_2019['p_value'], alpha=0.05, method='bonferroni')[0:2]
# calculate 95% confidence interval
### for volcano plot, keep the ROR and P-value of all SE
pickle.dump(SE_uncondition_2019, open('../Data/pandemic/SE_uncondition_2019_volcano.pk', 'wb')) # update the dataframe with ROR and Delta
SE_uncondition_2019_sig = SE_uncondition_2019[SE_uncondition_2019['sig']==True].copy()  # copy to avoid SettingWithCopyWarning
SE_uncondition_2019_sig['CI_upper'] = SE_uncondition_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[0], axis = 1)
SE_uncondition_2019_sig['CI_lower'] = SE_uncondition_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[1], axis = 1)
print('the data for the volcano figure is saved')
SE_uncondition_2019_sig_over = SE_uncondition_2019_sig[SE_uncondition_2019_sig['2019_Delta']>0]
SE_uncondition_2019_sig_under = SE_uncondition_2019_sig[SE_uncondition_2019_sig['2019_Delta']<0]
SE_uncondition_2019_sig_under.sort_values('p_corrected', ascending=True).head()
```
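The significance pipeline used above (a Fisher's exact test per SE, followed by Bonferroni correction across all tests) can be exercised on a few toy contingency tables (values invented for illustration):

```python
import scipy.stats as stats
from statsmodels.stats.multitest import multipletests

# Three hypothetical SEs, each with a 2x2 table [[2019_A, 2019_B], [2020_A, 2020_B]].
tables = [
    [[50, 950], [10, 990]],  # strong shift between the two periods
    [[30, 970], [25, 975]],  # weak shift
    [[20, 980], [20, 980]],  # no shift at all
]
p_values = [stats.fisher_exact(t)[1] for t in tables]

# Bonferroni correction across all tests, as in the cells above:
# returns the reject mask and the corrected p-values.
sig, p_corrected = multipletests(pvals=p_values, alpha=0.05, method='bonferroni')[0:2]
```

Only the strongly shifted table survives correction; Bonferroni can only raise p-values, never lower them.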
# Conditioned on gender (sex)
The gender field contains:
- Male
- Female
- unknown

Therefore the sum of the male and female counts may not equal the unconditioned total. In the analysis, we omit records with unknown gender.
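As a minimal toy illustration (invented records, not the real reporting schema) of why the subgroup counts fall short of the total:

```python
import pandas as pd

# Toy reports table; one record has unknown sex.
reports = pd.DataFrame({'report_id': [1, 2, 3, 4, 5],
                        'sex': ['M', 'F', 'M', None, 'F']})
n_total = len(reports)
n_male = int((reports['sex'] == 'M').sum())
n_female = int((reports['sex'] == 'F').sum())
# The unknown-sex record belongs to neither subgroup, so n_male + n_female < n_total.
```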
## Male
```
SE_male = pickle.load(open('../Data/pandemic/SE_male.pk', 'rb'))
idd_male = [i not in drop_list for i in SE_male['SE']] # drop the nonsense SE
SE_male = SE_male[idd_male]
SE_male = SE_male.drop_duplicates('SE')
SE_male = SE_male[SE_male['2019_ROR'].notna()]
SE_male_2019 = SE_male[['SE','name','2019_A', '2019_B', '2020_A','2020_B','2019_ROR','2019_Delta']]
SE_male_2019['p_value'] = SE_male_2019.apply(lambda row: stats.fisher_exact([[row['2019_A'], row['2019_B']], [row['2020_A'], row['2020_B']]])[1], axis = 1)
# SE_male_2019['p_value'] = SE_male_2019.apply(lambda row: stats.fisher_exact([[row['2020_A'], row['2020_B']], [row['2019_A'], row['2019_B']]])[1], axis = 1)
# multipletests
SE_male_2019['sig'], SE_male_2019['p_corrected'] = multipletests(pvals=SE_male_2019['p_value'], alpha=0.05, method='bonferroni')[0:2]
SE_male_2019_sig = SE_male_2019[SE_male_2019['sig']==True].copy()  # copy to avoid SettingWithCopyWarning
SE_male_2019_sig['CI_upper'] = SE_male_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[0], axis = 1)
SE_male_2019_sig['CI_lower'] = SE_male_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[1], axis = 1)
SE_male_2019_sig.sort_values('p_corrected', ascending=True)
SE_male_2019_sig_over = SE_male_2019_sig[SE_male_2019_sig['2019_Delta']>0]
SE_male_2019_sig_under = SE_male_2019_sig[SE_male_2019_sig['2019_Delta']<0]
SE_male_2019_sig_under.sort_values('p_corrected', ascending=True).head()
l_male = list(SE_male_2019_sig.SE)
l_uncondition = list(SE_uncondition_2019_sig.SE)
set(l_male) - set(l_uncondition)
```
## Female
```
SE_female = pickle.load(open('../Data/pandemic/SE_female.pk', 'rb'))
idd_female = [i not in drop_list for i in SE_female['SE']]
SE_female = SE_female[idd_female]
SE_female = SE_female.drop_duplicates('SE')
SE_female = SE_female[SE_female['2019_ROR'].notna()]
SE_female_2019 = SE_female[['SE','name','2019_A', '2019_B', '2020_A','2020_B','2019_ROR','2019_Delta']]
SE_female_2019['p_value'] = SE_female_2019.apply(lambda row: stats.fisher_exact([[row['2019_A'], row['2019_B']], [row['2020_A'], row['2020_B']]])[1], axis = 1)
# multipletests
SE_female_2019['sig'], SE_female_2019['p_corrected'] = multipletests(pvals=SE_female_2019['p_value'],
alpha=0.05, method='bonferroni')[0:2]
SE_female_2019_sig = SE_female_2019[SE_female_2019['sig']==True].copy()  # copy to avoid SettingWithCopyWarning
SE_female_2019_sig['CI_upper'] = SE_female_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[0], axis = 1)
SE_female_2019_sig['CI_lower'] = SE_female_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[1], axis = 1)
SE_female_2019_sig_over = SE_female_2019_sig[SE_female_2019_sig['2019_Delta']>0]
SE_female_2019_sig_under = SE_female_2019_sig[SE_female_2019_sig['2019_Delta']<0]
SE_female_2019_sig_under.sort_values('p_corrected', ascending=True).head()
## Any SE significant in females but not in the unconditioned population?
l_male = list(SE_male_2019_sig.SE)
l_female = list(SE_female_2019_sig.SE)
l_uncondition = list(SE_uncondition_2019_sig.SE)
set(l_female) - set(l_uncondition)
```
# Conditioned on age
## Young (1-19 years)
```
SE_young = pickle.load(open('../Data/pandemic/SE_young.pk', 'rb'))
idd_young = [i not in drop_list for i in SE_young['SE']]
SE_young = SE_young[idd_young]
SE_young = SE_young.drop_duplicates('SE')
SE_young = SE_young[SE_young['2019_ROR'].notna()]
SE_young_2019 = SE_young[['SE','name','2019_A', '2019_B', '2020_A','2020_B','2019_ROR','2019_Delta']].copy()
SE_young_2019['p_value'] = SE_young_2019.apply(lambda row: stats.fisher_exact([[row['2019_A'], row['2019_B']], [row['2020_A'], row['2020_B']]])[1], axis = 1)
# multipletests
SE_young_2019['sig'], SE_young_2019['p_corrected'] = multipletests(pvals=SE_young_2019['p_value'], alpha=0.05, method='bonferroni')[0:2]
SE_young_2019_sig = SE_young_2019[SE_young_2019['sig']==True].copy()
SE_young_2019_sig['CI_upper'] = SE_young_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[0], axis = 1)
SE_young_2019_sig['CI_lower'] = SE_young_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[1], axis = 1)
SE_young_2019_sig_over = SE_young_2019_sig[SE_young_2019_sig['2019_Delta']>0]
SE_young_2019_sig_under = SE_young_2019_sig[SE_young_2019_sig['2019_Delta']<0]
SE_young_2019_sig_over.sort_values('p_corrected', ascending=True).head()
```
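The Fisher exact test plus Bonferroni correction is repeated identically for every stratum above. A self-contained toy example of the same pipeline — the counts here are made up for illustration and are not FAERS data:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Toy 2x2 counts per adverse event: A/B are event/non-event report counts
# in one period, C/D in the comparator period (illustrative numbers only).
toy = pd.DataFrame({
    'SE': ['ae1', 'ae2', 'ae3'],
    'A':  [90, 10, 50],
    'B':  [10, 90, 50],
    'C':  [10, 12, 48],
    'D':  [90, 88, 52],
})
# Two-sided Fisher exact p-value per row, then Bonferroni correction;
# multipletests returns (reject, pvals_corrected, ...) so [0:2] unpacks
# into the 'sig' flag and the corrected p-value.
toy['p_value'] = toy.apply(
    lambda r: stats.fisher_exact([[r['A'], r['B']], [r['C'], r['D']]])[1], axis=1)
toy['sig'], toy['p_corrected'] = multipletests(
    pvals=toy['p_value'], alpha=0.05, method='bonferroni')[0:2]
print(toy[['SE', 'p_value', 'p_corrected', 'sig']])
```

Only `ae1`, with a strongly shifted 2×2 table, survives the Bonferroni threshold; the near-balanced tables do not.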
## Adult (20-65 years)
```
SE_adult = pickle.load(open('../Data/pandemic/SE_adult.pk', 'rb'))
idd_adult = [i not in drop_list for i in SE_adult['SE']]
SE_adult = SE_adult[idd_adult]
SE_adult = SE_adult.drop_duplicates('SE')
SE_adult = SE_adult[SE_adult['2019_ROR'].notna()]
SE_adult_2019 = SE_adult[['SE','name','2019_A', '2019_B', '2020_A','2020_B','2019_ROR','2019_Delta']].copy()
SE_adult_2019['p_value'] = SE_adult_2019.apply(lambda row: stats.fisher_exact([[row['2019_A'], row['2019_B']], [row['2020_A'], row['2020_B']]])[1], axis = 1)
# multipletests
SE_adult_2019['sig'], SE_adult_2019['p_corrected'] = multipletests(pvals=SE_adult_2019['p_value'], alpha=0.05, method='bonferroni')[0:2]
SE_adult_2019_sig = SE_adult_2019[SE_adult_2019['sig']==True].copy()
SE_adult_2019_sig['CI_upper'] = SE_adult_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[0], axis = 1)
SE_adult_2019_sig['CI_lower'] = SE_adult_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[1], axis = 1)
SE_adult_2019_sig_over = SE_adult_2019_sig[SE_adult_2019_sig['2019_Delta']>0]
SE_adult_2019_sig_under = SE_adult_2019_sig[SE_adult_2019_sig['2019_Delta']<0]
SE_adult_2019_sig_over.sort_values('p_corrected', ascending=True).head()
```
## Elderly (>65 years)
```
SE_elderly = pickle.load(open('../Data/pandemic/SE_elderly.pk', 'rb'))
idd_elderly = [i not in drop_list for i in SE_elderly['SE']]
SE_elderly = SE_elderly[idd_elderly]
SE_elderly = SE_elderly.drop_duplicates('SE')
SE_elderly = SE_elderly[SE_elderly['2019_ROR'].notna()]
SE_elderly_2019 = SE_elderly[['SE','name','2019_A', '2019_B', '2020_A','2020_B','2019_ROR','2019_Delta']].copy()
SE_elderly_2019['p_value'] = SE_elderly_2019.apply(lambda row: stats.fisher_exact([[row['2019_A'], row['2019_B']], [row['2020_A'], row['2020_B']]])[1], axis = 1)
# multipletests
SE_elderly_2019['sig'], SE_elderly_2019['p_corrected'] = multipletests(pvals=SE_elderly_2019['p_value'], alpha=0.05, method='bonferroni')[0:2]
SE_elderly_2019_sig = SE_elderly_2019[SE_elderly_2019['sig']==True].copy()
SE_elderly_2019_sig['CI_upper'] = SE_elderly_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[0], axis = 1)
SE_elderly_2019_sig['CI_lower'] = SE_elderly_2019_sig.apply(lambda row: CI(row['2019_ROR'], row['2019_A'], row['2019_B'],row['2020_A'], row['2020_B'])[1], axis = 1)
SE_elderly_2019[SE_elderly_2019['name']=='pyrexia']  # spot-check a single adverse event
SE_elderly_2019_sig_over = SE_elderly_2019_sig[SE_elderly_2019_sig['2019_Delta']>0]
SE_elderly_2019_sig_under = SE_elderly_2019_sig[SE_elderly_2019_sig['2019_Delta']<0]
SE_elderly_2019_sig_under.sort_values('p_corrected', ascending=True).head()
```
# Save all the populations in disproportionality estimation
```
condition_list = ['SE_uncondition_2019_sig_over', 'SE_uncondition_2019_sig_under', 'SE_male_2019_sig_over', 'SE_male_2019_sig_under',
'SE_female_2019_sig_over', 'SE_female_2019_sig_under',
'SE_young_2019_sig_over', 'SE_young_2019_sig_under', 'SE_adult_2019_sig_over', 'SE_adult_2019_sig_under',
'SE_elderly_2019_sig_over', 'SE_elderly_2019_sig_under']
for condition in condition_list:
pickle.dump(locals()[condition], open('../Data/pandemic/results/'+condition+'_step1.pk', 'wb'))
print(condition,'saved')
```
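The mirror of this save loop is a plain `pickle.load`. A minimal round-trip sketch with a toy payload and a temporary path — the notebook's real files live under `../Data/pandemic/results/`:

```python
import os
import pickle
import tempfile

# Toy stand-in for one of the significant-SE result tables saved above.
payload = {'SE': ['ae1'], 'p_corrected': [0.01]}

path = os.path.join(tempfile.mkdtemp(), 'SE_demo_step1.pk')
with open(path, 'wb') as f:
    pickle.dump(payload, f)          # same pattern as the save loop
with open(path, 'rb') as f:
    restored = pickle.load(f)        # how a later step would read it back
```

Pickle reconstructs an equal but distinct object, so downstream steps can mutate `restored` without touching the saved artifact.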
# Lateral Movement
The adversary is trying to move through your environment.
Lateral Movement consists of techniques that adversaries use to enter and control remote systems on a network. Following through on their primary objective often requires exploring the network to find their target and subsequently gaining access to it. Reaching their objective often involves pivoting through multiple systems and accounts to gain access. Adversaries might install their own remote access tools to accomplish Lateral Movement or use legitimate credentials with native network and operating system tools, which may be stealthier.
## Techniques
| ID | Name | Description |
| :--------: | :---------: | :---------: |
T1570 | Lateral Tool Transfer | Adversaries may transfer tools or other files between systems in a compromised environment. Files may be copied from one system to another to stage adversary tools or other files over the course of an operation. Adversaries may copy files laterally between internal victim systems to support lateral movement using inherent file sharing protocols such as file sharing over SMB to connected network shares or with authenticated connections with [SMB/Windows Admin Shares](https://attack.mitre.org/techniques/T1021/002) or [Remote Desktop Protocol](https://attack.mitre.org/techniques/T1021/001). Files can also be copied over on Mac and Linux with native tools like scp, rsync, and sftp.
T1563.002 | RDP Hijacking | Adversaries may hijack a legitimate user’s remote desktop session to move laterally within an environment. Remote desktop is a common feature in operating systems. It allows a user to log into an interactive session with a system desktop graphical user interface on a remote system. Microsoft refers to its implementation of the Remote Desktop Protocol (RDP) as Remote Desktop Services (RDS).(Citation: TechNet Remote Desktop Services)<br>Adversaries may perform RDP session hijacking which involves stealing a legitimate user's remote session. Typically, a user is notified when someone else is trying to steal their session. With System permissions and using Terminal Services Console, `c:\windows\system32\tscon.exe [session number to be stolen]`, an adversary can hijack a session without the need for credentials or prompts to the user.(Citation: RDP Hijacking Korznikov) This can be done remotely or locally and with active or disconnected sessions.(Citation: RDP Hijacking Medium) It can also lead to [Remote System Discovery](https://attack.mitre.org/techniques/T1018) and Privilege Escalation by stealing a Domain Admin or higher privileged account session. All of this can be done by using native Windows commands, but it has also been added as a feature in red teaming tools.(Citation: Kali Redsnarf)
T1563.001 | SSH Hijacking | Adversaries may hijack a legitimate user's SSH session to move laterally within an environment. Secure Shell (SSH) is a standard means of remote access on Linux and macOS systems. It allows a user to connect to another system via an encrypted tunnel, commonly authenticating through a password, certificate or the use of an asymmetric encryption key pair.<br>In order to move laterally from a compromised host, adversaries may take advantage of trust relationships established with other systems via public key authentication in active SSH sessions by hijacking an existing connection to another system. This may occur through compromising the SSH agent itself or by having access to the agent's socket. If an adversary is able to obtain root access, then hijacking SSH sessions is likely trivial.(Citation: Slideshare Abusing SSH)(Citation: SSHjack Blackhat)(Citation: Clockwork SSH Agent Hijacking)(Citation: Breach Post-mortem SSH Hijack)<br>[SSH Hijacking](https://attack.mitre.org/techniques/T1563/001) differs from use of [SSH](https://attack.mitre.org/techniques/T1021/004) because it hijacks an existing SSH session rather than creating a new session using [Valid Accounts](https://attack.mitre.org/techniques/T1078).
T1563 | Remote Service Session Hijacking | Adversaries may take control of preexisting sessions with remote services to move laterally in an environment. Users may use valid credentials to log into a service specifically designed to accept remote connections, such as telnet, SSH, and RDP. When a user logs into a service, a session will be established that will allow them to maintain a continuous interaction with that service.<br>Adversaries may commandeer these sessions to carry out actions on remote systems. [Remote Service Session Hijacking](https://attack.mitre.org/techniques/T1563) differs from use of [Remote Services](https://attack.mitre.org/techniques/T1021) because it hijacks an existing session rather than creating a new session using [Valid Accounts](https://attack.mitre.org/techniques/T1078).(Citation: RDP Hijacking Medium)(Citation: Breach Post-mortem SSH Hijack)
T1021.006 | Windows Remote Management | Adversaries may use [Valid Accounts](https://attack.mitre.org/techniques/T1078) to interact with remote systems using Windows Remote Management (WinRM). The adversary may then perform actions as the logged-on user.<br>WinRM is the name of both a Windows service and a protocol that allows a user to interact with a remote system (e.g., run an executable, modify the Registry, modify services).(Citation: Microsoft WinRM) It may be called with the `winrm` command or by any number of programs such as PowerShell.(Citation: Jacobsen 2014)
T1021.005 | VNC | Adversaries may use [Valid Accounts](https://attack.mitre.org/techniques/T1078) to remotely control machines using Virtual Network Computing (VNC). The adversary may then perform actions as the logged-on user.<br>VNC is a desktop sharing system that allows users to remotely control another computer’s display by relaying mouse and keyboard inputs over the network. VNC does not necessarily use standard user credentials. Instead, a VNC client and server may be configured with sets of credentials that are used only for VNC connections.
T1021.004 | SSH | Adversaries may use [Valid Accounts](https://attack.mitre.org/techniques/T1078) to log into remote machines using Secure Shell (SSH). The adversary may then perform actions as the logged-on user.<br>SSH is a protocol that allows authorized users to open remote shells on other computers. Many Linux and macOS versions come with SSH installed by default, although typically disabled until the user enables it. The SSH server can be configured to use standard password authentication or public-private keypairs in lieu of or in addition to a password. In this authentication scenario, the user’s public key must be in a special file on the computer running the server that lists which keypairs are allowed to login as that user.(Citation: SSH Secure Shell)
T1021.003 | Distributed Component Object Model | Adversaries may use [Valid Accounts](https://attack.mitre.org/techniques/T1078) to interact with remote machines by taking advantage of Distributed Component Object Model (DCOM). The adversary may then perform actions as the logged-on user.<br>The Windows Component Object Model (COM) is a component of the native Windows application programming interface (API) that enables interaction between software objects, or executable code that implements one or more interfaces. Through COM, a client object can call methods of server objects, which are typically Dynamic Link Libraries (DLL) or executables (EXE). Distributed COM (DCOM) is transparent middleware that extends the functionality of COM beyond a local computer using remote procedure call (RPC) technology.(Citation: Fireeye Hunting COM June 2019)(Citation: Microsoft COM)<br>Permissions to interact with local and remote server COM objects are specified by access control lists (ACL) in the Registry.(Citation: Microsoft Process Wide Com Keys) By default, only Administrators may remotely activate and launch COM objects through DCOM.(Citation: Microsoft COM ACL)<br>Through DCOM, adversaries operating in the context of an appropriately privileged user can remotely obtain arbitrary and even direct shellcode execution through Office applications(Citation: Enigma Outlook DCOM Lateral Movement Nov 2017) as well as other Windows objects that contain insecure methods.(Citation: Enigma MMC20 COM Jan 2017)(Citation: Enigma DCOM Lateral Movement Jan 2017) DCOM can also execute macros in existing documents(Citation: Enigma Excel DCOM Sept 2017) and may also invoke Dynamic Data Exchange (DDE) execution directly through a COM created instance of a Microsoft Office application(Citation: Cyberreason DCOM DDE Lateral Movement Nov 2017), bypassing the need for a malicious document.
T1021.002 | SMB/Windows Admin Shares | Adversaries may use [Valid Accounts](https://attack.mitre.org/techniques/T1078) to interact with a remote network share using Server Message Block (SMB). The adversary may then perform actions as the logged-on user.<br>SMB is a file, printer, and serial port sharing protocol for Windows machines on the same network or domain. Adversaries may use SMB to interact with file shares, allowing them to move laterally throughout a network. Linux and macOS implementations of SMB typically use Samba.<br>Windows systems have hidden network shares that are accessible only to administrators and provide the ability for remote file copy and other administrative functions. Example network shares include `C$`, `ADMIN$`, and `IPC$`. Adversaries may use this technique in conjunction with administrator-level [Valid Accounts](https://attack.mitre.org/techniques/T1078) to remotely access a networked system over SMB,(Citation: Wikipedia Server Message Block) to interact with systems using remote procedure calls (RPCs),(Citation: TechNet RPC) transfer files, and run transferred binaries through remote Execution. Example execution techniques that rely on authenticated sessions over SMB/RPC are [Scheduled Task/Job](https://attack.mitre.org/techniques/T1053), [Service Execution](https://attack.mitre.org/techniques/T1569/002), and [Windows Management Instrumentation](https://attack.mitre.org/techniques/T1047). Adversaries can also use NTLM hashes to access administrator shares on systems with [Pass the Hash](https://attack.mitre.org/techniques/T1550/002) and certain configuration and patch levels.(Citation: Microsoft Admin Shares)
T1021.001 | Remote Desktop Protocol | Adversaries may use [Valid Accounts](https://attack.mitre.org/techniques/T1078) to log into a computer using the Remote Desktop Protocol (RDP). The adversary may then perform actions as the logged-on user.<br>Remote desktop is a common feature in operating systems. It allows a user to log into an interactive session with a system desktop graphical user interface on a remote system. Microsoft refers to its implementation of the Remote Desktop Protocol (RDP) as Remote Desktop Services (RDS).(Citation: TechNet Remote Desktop Services)<br>Adversaries may connect to a remote system over RDP/RDS to expand access if the service is enabled and allows access to accounts with known credentials. Adversaries will likely use Credential Access techniques to acquire credentials to use with RDP. Adversaries may also use RDP in conjunction with the [Accessibility Features](https://attack.mitre.org/techniques/T1546/008) technique for Persistence.(Citation: Alperovitch Malware)
T1550.004 | Web Session Cookie | Adversaries can use stolen session cookies to authenticate to web applications and services. This technique bypasses some multi-factor authentication protocols since the session is already authenticated.(Citation: Pass The Cookie)<br>Authentication cookies are commonly used in web applications, including cloud-based services, after a user has authenticated to the service so credentials are not passed and re-authentication does not need to occur as frequently. Cookies are often valid for an extended period of time, even if the web application is not actively used. After the cookie is obtained through [Steal Web Session Cookie](https://attack.mitre.org/techniques/T1539), the adversary may then import the cookie into a browser they control and is then able to use the site or application as the user for as long as the session cookie is active. Once logged into the site, an adversary can access sensitive information, read email, or perform actions that the victim account has permissions to perform.<br>There have been examples of malware targeting session cookies to bypass multi-factor authentication systems.(Citation: Unit 42 Mac Crypto Cookies January 2019)
T1550.001 | Application Access Token | Adversaries may use stolen application access tokens to bypass the typical authentication process and access restricted accounts, information, or services on remote systems. These tokens are typically stolen from users and used in lieu of login credentials.<br>Application access tokens are used to make authorized API requests on behalf of a user and are commonly used as a way to access resources in cloud-based applications and software-as-a-service (SaaS).(Citation: Auth0 - Why You Should Always Use Access Tokens to Secure APIs Sept 2019) OAuth is one commonly implemented framework that issues tokens to users for access to systems. These frameworks are used collaboratively to verify the user and determine what actions the user is allowed to perform. Once identity is established, the token allows actions to be authorized, without passing the actual credentials of the user. Therefore, compromise of the token can grant the adversary access to resources of other sites through a malicious application.(Citation: okta)<br>For example, with a cloud-based email service once an OAuth access token is granted to a malicious application, it can potentially gain long-term access to features of the user account if a "refresh" token enabling background access is awarded.(Citation: Microsoft Identity Platform Access 2019) With an OAuth access token an adversary can use the user-granted REST API to perform functions such as email searching and contact enumeration.(Citation: Staaldraad Phishing with OAuth 2017)<br>Compromised access tokens may be used as an initial step in compromising other services. For example, if a token grants access to a victim’s primary email, the adversary may be able to extend access to all other services which the target subscribes by triggering forgotten password routines. Direct API access through a token negates the effectiveness of a second authentication factor and may be immune to intuitive countermeasures like changing passwords. Access abuse over an API channel can be difficult to detect even from the service provider end, as the access can still align well with a legitimate workflow.
T1550.003 | Pass the Ticket | Adversaries may “pass the ticket” using stolen Kerberos tickets to move laterally within an environment, bypassing normal system access controls. Pass the ticket (PtT) is a method of authenticating to a system using Kerberos tickets without having access to an account's password. Kerberos authentication can be used as the first step to lateral movement to a remote system.<br>In this technique, valid Kerberos tickets for [Valid Accounts](https://attack.mitre.org/techniques/T1078) are captured by [OS Credential Dumping](https://attack.mitre.org/techniques/T1003). A user's service tickets or ticket granting ticket (TGT) may be obtained, depending on the level of access. A service ticket allows for access to a particular resource, whereas a TGT can be used to request service tickets from the Ticket Granting Service (TGS) to access any resource the user has privileges to access.(Citation: ADSecurity AD Kerberos Attacks)(Citation: GentilKiwi Pass the Ticket)<br>[Silver Ticket](https://attack.mitre.org/techniques/T1558/002) can be obtained for services that use Kerberos as an authentication mechanism and are used to generate tickets to access that particular resource and the system that hosts the resource (e.g., SharePoint).(Citation: ADSecurity AD Kerberos Attacks)<br>[Golden Ticket](https://attack.mitre.org/techniques/T1558/001) can be obtained for the domain using the Key Distribution Service account KRBTGT account NTLM hash, which enables generation of TGTs for any account in Active Directory.(Citation: Campbell 2014)
T1550.002 | Pass the Hash | Adversaries may “pass the hash” using stolen password hashes to move laterally within an environment, bypassing normal system access controls. Pass the hash (PtH) is a method of authenticating as a user without having access to the user's cleartext password. This method bypasses standard authentication steps that require a cleartext password, moving directly into the portion of the authentication that uses the password hash. In this technique, valid password hashes for the account being used are captured using a Credential Access technique. Captured hashes are used with PtH to authenticate as that user. Once authenticated, PtH may be used to perform actions on local or remote systems.<br>Windows 7 and higher with KB2871997 require valid domain user credentials or RID 500 administrator hashes.(Citation: NSA Spotting)
T1550 | Use Alternate Authentication Material | Adversaries may use alternate authentication material, such as password hashes, Kerberos tickets, and application access tokens, in order to move laterally within an environment and bypass normal system access controls.<br>Authentication processes generally require a valid identity (e.g., username) along with one or more authentication factors (e.g., password, pin, physical smart card, token generator, etc.). Alternate authentication material is legitimately generated by systems after a user or application successfully authenticates by providing a valid identity and the required authentication factor(s). Alternate authentication material may also be generated during the identity creation process.(Citation: NIST Authentication)(Citation: NIST MFA)<br>Caching alternate authentication material allows the system to verify an identity has successfully authenticated without asking the user to reenter authentication factor(s). Because the alternate authentication must be maintained by the system—either in memory or on disk—it may be at risk of being stolen through [Credential Access](https://attack.mitre.org/tactics/TA0006) techniques. By stealing alternate authentication material, adversaries are able to bypass system access controls and authenticate to systems without knowing the plaintext password or any additional authentication factors.
T1534 | Internal Spearphishing | Adversaries may use internal spearphishing to gain access to additional information or exploit other users within the same organization after they already have access to accounts or systems within the environment. Internal spearphishing is a multi-staged attack where an email account is owned either by controlling the user's device with previously installed malware or by compromising the account credentials of the user. Adversaries attempt to take advantage of a trusted internal account to increase the likelihood of tricking the target into falling for the phish attempt.(Citation: Trend Micro When Phishing Starts from the Inside 2017)<br>Adversaries may leverage [Spearphishing Attachment](https://attack.mitre.org/techniques/T1566/001) or [Spearphishing Link](https://attack.mitre.org/techniques/T1566/002) as part of internal spearphishing to deliver a payload or redirect to an external site to capture credentials through [Input Capture](https://attack.mitre.org/techniques/T1056) on sites that mimic email login interfaces.<br>There have been notable incidents where internal spearphishing has been used. The Eye Pyramid campaign used phishing emails with malicious attachments for lateral movement between victims, compromising nearly 18,000 email accounts in the process.(Citation: Trend Micro When Phishing Starts from the Inside 2017) The Syrian Electronic Army (SEA) compromised email accounts at the Financial Times (FT) to steal additional account credentials. Once FT learned of the attack and began warning employees of the threat, the SEA sent phishing emails mimicking the Financial Times IT department and were able to compromise even more users.(Citation: THE FINANCIAL TIMES LTD 2019.)
T1210 | Exploitation of Remote Services | Adversaries may exploit remote services to gain unauthorized access to internal systems once inside of a network. Exploitation of a software vulnerability occurs when an adversary takes advantage of a programming error in a program, service, or within the operating system software or kernel itself to execute adversary-controlled code. A common goal for post-compromise exploitation of remote services is for lateral movement to enable access to a remote system.<br>An adversary may need to determine if the remote system is in a vulnerable state, which may be done through [Network Service Scanning](https://attack.mitre.org/techniques/T1046) or other Discovery methods looking for common, vulnerable software that may be deployed in the network, the lack of certain patches that may indicate vulnerabilities, or security software that may be used to detect or contain remote exploitation. Servers are likely a high value target for lateral movement exploitation, but endpoint systems may also be at risk if they provide an advantage or access to additional resources.<br>There are several well-known vulnerabilities that exist in common services such as SMB (Citation: CIS Multiple SMB Vulnerabilities) and RDP (Citation: NVD CVE-2017-0176) as well as applications that may be used within internal networks such as MySQL (Citation: NVD CVE-2016-6662) and web server services. (Citation: NVD CVE-2014-7169)<br>Depending on the permissions level of the vulnerable remote service an adversary may achieve [Exploitation for Privilege Escalation](https://attack.mitre.org/techniques/T1068) as a result of lateral movement exploitation as well.
T1175 | Component Object Model and Distributed COM | **This technique has been deprecated. Please use [Distributed Component Object Model](https://attack.mitre.org/techniques/T1021/003) and [Component Object Model](https://attack.mitre.org/techniques/T1559/001).**<br>Adversaries may use the Windows Component Object Model (COM) and Distributed Component Object Model (DCOM) for local code execution or to execute on remote systems as part of lateral movement.<br>COM is a component of the native Windows application programming interface (API) that enables interaction between software objects, or executable code that implements one or more interfaces.(Citation: Fireeye Hunting COM June 2019) Through COM, a client object can call methods of server objects, which are typically Dynamic Link Libraries (DLL) or executables (EXE).(Citation: Microsoft COM) DCOM is transparent middleware that extends the functionality of Component Object Model (COM) (Citation: Microsoft COM) beyond a local computer using remote procedure call (RPC) technology.(Citation: Fireeye Hunting COM June 2019)<br>Permissions to interact with local and remote server COM objects are specified by access control lists (ACL) in the Registry. (Citation: Microsoft COM ACL)(Citation: Microsoft Process Wide Com Keys)(Citation: Microsoft System Wide Com Keys) By default, only Administrators may remotely activate and launch COM objects through DCOM.<br>Adversaries may abuse COM for local command and/or payload execution. Various COM interfaces are exposed that can be abused to invoke arbitrary execution via a variety of programming languages such as C, C++, Java, and VBScript.(Citation: Microsoft COM) Specific COM objects also exists to directly perform functions beyond code execution, such as creating a [Scheduled Task/Job](https://attack.mitre.org/techniques/T1053), fileless download/execution, and other adversary behaviors such as Privilege Escalation and Persistence.(Citation: Fireeye Hunting COM June 2019)(Citation: ProjectZero File Write EoP Apr 2018)<br>Adversaries may use DCOM for lateral movement. Through DCOM, adversaries operating in the context of an appropriately privileged user can remotely obtain arbitrary and even direct shellcode execution through Office applications (Citation: Enigma Outlook DCOM Lateral Movement Nov 2017) as well as other Windows objects that contain insecure methods.(Citation: Enigma MMC20 COM Jan 2017)(Citation: Enigma DCOM Lateral Movement Jan 2017) DCOM can also execute macros in existing documents (Citation: Enigma Excel DCOM Sept 2017) and may also invoke [Dynamic Data Exchange](https://attack.mitre.org/techniques/T1173) (DDE) execution directly through a COM created instance of a Microsoft Office application (Citation: Cyberreason DCOM DDE Lateral Movement Nov 2017), bypassing the need for a malicious document.
T1091 | Replication Through Removable Media | Adversaries may move onto systems, possibly those on disconnected or air-gapped networks, by copying malware to removable media and taking advantage of Autorun features when the media is inserted into a system and executes. In the case of Lateral Movement, this may occur through modification of executable files stored on removable media or by copying malware and renaming it to look like a legitimate file to trick users into executing it on a separate system. In the case of Initial Access, this may occur through manual manipulation of the media, modification of systems used to initially format the media, or modification to the media's firmware itself.
T1080 | Taint Shared Content |
Adversaries may deliver payloads to remote systems by adding content to shared storage locations, such as network drives or internal code repositories. Content stored on network drives or in other shared locations may be tainted by adding malicious programs, scripts, or exploit code to otherwise valid files. Once a user opens the shared tainted content, the malicious portion can be executed to run the adversary's code on a remote system. Adversaries may use tainted shared content to move laterally.
A directory share pivot is a variation on this technique that uses several other techniques to propagate malware when users access a shared network directory. It uses [Shortcut Modification](https://attack.mitre.org/techniques/T1547/009) of directory .LNK files that use [Masquerading](https://attack.mitre.org/techniques/T1036) to look like the real directories, which are hidden through [Hidden Files and Directories](https://attack.mitre.org/techniques/T1564/001). The malicious .LNK-based directories have an embedded command that executes the hidden malware file in the directory and then opens the real intended directory so that the user's expected action still occurs. When used with frequently used network directories, the technique may result in frequent reinfections and broad access to systems and potentially to new and higher privileged accounts. (Citation: Retwin Directory Share Pivot)
Adversaries may also compromise shared network directories through binary infections by appending or prepending their code to the healthy binary on the shared network directory. The malware may modify the original entry point (OEP) of the healthy binary to ensure that it is executed before the legitimate code. The infection could continue to spread via the newly infected file when it is executed by a remote system. These infections may target both binary and non-binary formats that end with extensions including, but not limited to, .EXE, .DLL, .SCR, .BAT, and .VBS.
T1072 | Software Deployment Tools | Adversaries may gain access to and use third-party software suites installed within an enterprise network, such as administration, monitoring, and deployment systems, to move laterally through the network. Third-party applications and software deployment systems may be in use in the network environment for administration purposes (e.g., SCCM, VNC, HBSS, Altiris, etc.).
Access to a third-party network-wide or enterprise-wide software system may enable an adversary to have remote code execution on all systems that are connected to such a system. The access may be used to laterally move to other systems, gather information, or cause a specific effect, such as wiping the hard drives on all endpoints.
The permissions required for this action vary by system configuration; local credentials may be sufficient with direct access to the third-party system, or specific domain credentials may be required. However, the system may require an administrative account to log in or to perform its intended purpose.
T1051 | Shared Webroot | **This technique has been deprecated and should no longer be used.**
Adversaries may add malicious content to an internally accessible website through an open network file share that contains the website's webroot or Web content directory (Citation: Microsoft Web Root OCT 2016) (Citation: Apache Server 2018) and then browse to that content with a Web browser to cause the server to execute the malicious content. The malicious content will typically run under the context and permissions of the Web server process, often resulting in local system or administrative privileges, depending on how the Web server is configured.
This mechanism of shared access and remote execution could be used for lateral movement to the system running the Web server. For example, a Web server running PHP with an open network share could allow an adversary to upload a remote access tool and PHP script to execute the RAT on the system running the Web server when a specific page is visited. (Citation: Webroot PHP 2011)
T1021 | Remote Services | Adversaries may use [Valid Accounts](https://attack.mitre.org/techniques/T1078) to log into a service specifically designed to accept remote connections, such as telnet, SSH, and VNC. The adversary may then perform actions as the logged-on user.
In an enterprise environment, servers and workstations can be organized into domains. Domains provide centralized identity management, allowing users to log in using one set of credentials across the entire network. If an adversary is able to obtain a set of valid domain credentials, they could log in to many different machines using remote access protocols such as secure shell (SSH) or remote desktop protocol (RDP).(Citation: SSH Secure Shell)(Citation: TechNet Remote Desktop Services)
```
#Invoke-AtomicTest-By can be downloaded from https://github.com/cyb3rbuff/ART-Utils/Invoke-AtomicTest-By
Invoke-AtomicTest-By -Tactic lateral-movement
```
| github_jupyter |
```
import json
import pandas as pd
import operator
with open('../docs/data/dams.geojson') as f:
    in_json = json.load(f)
in_ftrs = in_json['features']
ftr1 = in_ftrs[0]
prop1 = ftr1['properties']
var_names = prop1.keys()
types = {}
vals = {}
to_skip = ['Url_Address','NID_ID','key']
for v in prop1:
    if v not in to_skip:
        types[v] = type(prop1[v])
        if prop1[v] is None:
            # Walk forward until some feature has a non-null value for this field
            ix = 1
            varval = None
            while varval is None and ix < len(in_ftrs):
                varval = in_ftrs[ix]['properties'][v]
                ix += 1
            types[v] = type(varval)
        if v == 'All_Purposes':
            tmp = [i['properties'][v] for i in in_ftrs]
            tmp = [str(i).split(', ') for i in tmp]
            vals[v] = list(set([item for sublist in tmp for item in sublist]))
        elif types[v] == str:
            vals[v] = list(set([i['properties'][v] for i in in_ftrs]))
        elif types[v] in [float, int]:
            vals[v] = []
            tmp = [i['properties'][v] for i in in_ftrs if i['properties'][v] is not None]
            vals[v].append(min(tmp))
            vals[v].append(max(tmp))
        for ix, i in enumerate(vals[v]):
            if i is None:
                if types[v] == str: vals[v][ix] = "unlabled"
        # Store the type as a short string, e.g. "<class 'str'>" -> "str"
        types[v] = str(types[v])[8:-2]
types['All_Purposes'] = 'multiple'
types['Dam_Name'] = 'open_text'
types['Owner_Name'] = 'open_text'
types['River'] = 'open_text'
del types['Submit_Date']
del vals['Submit_Date']
# Field Names
names = {i:i.replace('_',' ') for i in types.keys()}
names['Fed_Operation'] = "Federal Agency Operating"
names['All_Purposes'] = 'Dam Purpose (all)'
names['Primary_Purpose'] = 'Dam Purpose (primary)'
names['Fed_Owner'] = 'Federal Agency Owning'
names['Max_Storage'] = 'Max Storage (acre-feet)'
names['Normal_Storage'] = 'Normal Storage (acre-feet)'
names['Hydraulic_Height'] = 'Hydraulic Height (feet)'
names['Structural_Height'] = 'Structural Height (feet)'
names['Dam_Height'] = 'Dam Height (feet)'
names['NID_Height'] = 'Max Height (feet)'
names['Dam_Length'] = 'Dam Length (feet)'
names['Source_Agency'] = 'Data Source'
names['State_Reg_Agency'] = 'State Regulatory Agency'
names['State_Reg_Dam'] = 'State Regulation'
names['Surface_Area'] = 'Surface Area (acres)'
names = sorted(names.items(), key=operator.itemgetter(1))
for v in vals:
    vals[v] = sorted(vals[v])
out_json = {
    'types': types,
    'vals': vals,
    'names': names
}
with open('../docs/data/filterText.json', 'w') as f:
    json.dump(out_json, f)
```
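The hand-rolled type sniffing above can also be done with pandas; a minimal sketch on hypothetical records (the field names below are illustrative, not from the real GeoJSON):

```python
import pandas as pd

# Hypothetical records standing in for the feature['properties'] dicts
records = [
    {'Dam_Name': 'Alpha', 'Max_Storage': 120.0, 'State': 'CA'},
    {'Dam_Name': 'Beta',  'Max_Storage': None,  'State': 'OR'},
]
props = pd.json_normalize(records)
# One call gives a per-field dtype, with nulls handled for free
types = {col: str(dtype) for col, dtype in props.dtypes.items()}
# Min/max ranges for the numeric fields, skipping missing values
vals = {col: (props[col].min(), props[col].max())
        for col in props.select_dtypes('number').columns}
```

`json_normalize` also flattens nested property dicts, which the manual loop above would need extra code for.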
| github_jupyter |
```
%matplotlib inline
%config InlineBackend.figure_formats = {'png', 'retina'}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_key = pd.read_csv('key.csv')
data_key = data_key[data_key['station_nbr'] != 5]
data_weather = pd.read_csv('weather.csv')
data_weather = data_weather[data_weather['station_nbr'] != 5]  # drop station 5 rows
data_train = pd.read_csv('train.csv')
df = pd.merge(data_weather, data_key)
station_nbr = df['station_nbr']
df.drop('station_nbr', axis=1, inplace=True)
df['station_nbr'] = station_nbr
df = pd.merge(df, data_train)
# Merge complete, with station 5 excluded
# Before replacing 'M' and '-' with np.nan, first convert the trace (' T') readings: snowfall = 0.05, preciptotal = 0.005
df.loc[df['snowfall'] == ' T', 'snowfall'] = 0.05
df.loc[df['preciptotal'] == ' T', 'preciptotal'] = 0.005
# ' T' values handled. Now split the data by station (station 5 omitted)
df_s_1 = df[df['station_nbr'] == 1]; df_s_8 = df[df['station_nbr'] == 8]; df_s_15 = df[df['station_nbr'] == 15]
df_s_2 = df[df['station_nbr'] == 2]; df_s_9 = df[df['station_nbr'] == 9]; df_s_16 = df[df['station_nbr'] == 16]
df_s_3 = df[df['station_nbr'] == 3]; df_s_10 = df[df['station_nbr'] == 10]; df_s_17 = df[df['station_nbr'] == 17]
df_s_4 = df[df['station_nbr'] == 4]; df_s_11 = df[df['station_nbr'] == 11]; df_s_18 = df[df['station_nbr'] == 18]
df_s_5 = df[df['station_nbr'] == 5]; df_s_12 = df[df['station_nbr'] == 12]; df_s_19 = df[df['station_nbr'] == 19]
df_s_6 = df[df['station_nbr'] == 6]; df_s_13 = df[df['station_nbr'] == 13]; df_s_20 = df[df['station_nbr'] == 20]
df_s_7 = df[df['station_nbr'] == 7]; df_s_14 = df[df['station_nbr'] == 14]
# Coerce each station's avgspeed to numeric so that 'M' (missing) becomes np.nan
df_s_1_avgspeed = df_s_1['avgspeed'].copy(); df_s_1_avgspeed = pd.to_numeric(df_s_1_avgspeed, errors = 'coerce')
df_s_2_avgspeed = df_s_2['avgspeed'].copy(); df_s_2_avgspeed = pd.to_numeric(df_s_2_avgspeed, errors = 'coerce')
df_s_3_avgspeed = df_s_3['avgspeed'].copy(); df_s_3_avgspeed = pd.to_numeric(df_s_3_avgspeed, errors = 'coerce')
df_s_4_avgspeed = df_s_4['avgspeed'].copy(); df_s_4_avgspeed = pd.to_numeric(df_s_4_avgspeed, errors = 'coerce')
df_s_5_avgspeed = df_s_5['avgspeed'].copy(); df_s_5_avgspeed = pd.to_numeric(df_s_5_avgspeed, errors = 'coerce')
df_s_6_avgspeed = df_s_6['avgspeed'].copy(); df_s_6_avgspeed = pd.to_numeric(df_s_6_avgspeed, errors = 'coerce')
df_s_7_avgspeed = df_s_7['avgspeed'].copy(); df_s_7_avgspeed = pd.to_numeric(df_s_7_avgspeed, errors = 'coerce')
df_s_8_avgspeed = df_s_8['avgspeed'].copy(); df_s_8_avgspeed = pd.to_numeric(df_s_8_avgspeed, errors = 'coerce')
df_s_9_avgspeed = df_s_9['avgspeed'].copy(); df_s_9_avgspeed = pd.to_numeric(df_s_9_avgspeed, errors = 'coerce')
df_s_10_avgspeed = df_s_10['avgspeed'].copy(); df_s_10_avgspeed = pd.to_numeric(df_s_10_avgspeed, errors = 'coerce')
df_s_11_avgspeed = df_s_11['avgspeed'].copy(); df_s_11_avgspeed = pd.to_numeric(df_s_11_avgspeed, errors = 'coerce')
df_s_12_avgspeed = df_s_12['avgspeed'].copy(); df_s_12_avgspeed = pd.to_numeric(df_s_12_avgspeed, errors = 'coerce')
df_s_13_avgspeed = df_s_13['avgspeed'].copy(); df_s_13_avgspeed = pd.to_numeric(df_s_13_avgspeed, errors = 'coerce')
df_s_14_avgspeed = df_s_14['avgspeed'].copy(); df_s_14_avgspeed = pd.to_numeric(df_s_14_avgspeed, errors = 'coerce')
df_s_15_avgspeed = df_s_15['avgspeed'].copy(); df_s_15_avgspeed = pd.to_numeric(df_s_15_avgspeed, errors = 'coerce')
df_s_16_avgspeed = df_s_16['avgspeed'].copy(); df_s_16_avgspeed = pd.to_numeric(df_s_16_avgspeed, errors = 'coerce')
df_s_17_avgspeed = df_s_17['avgspeed'].copy(); df_s_17_avgspeed = pd.to_numeric(df_s_17_avgspeed, errors = 'coerce')
df_s_18_avgspeed = df_s_18['avgspeed'].copy(); df_s_18_avgspeed = pd.to_numeric(df_s_18_avgspeed, errors = 'coerce')
df_s_19_avgspeed = df_s_19['avgspeed'].copy(); df_s_19_avgspeed = pd.to_numeric(df_s_19_avgspeed, errors = 'coerce')
df_s_20_avgspeed = df_s_20['avgspeed'].copy(); df_s_20_avgspeed = pd.to_numeric(df_s_20_avgspeed, errors = 'coerce')
# Fill each station's NaN values with that station's mean (computed above), to compare against simply excluding the NaNs
df_s_1_avgspeed_with_mean = df_s_1_avgspeed.copy(); df_s_1_avgspeed_with_mean[df_s_1_avgspeed_with_mean.isnull()] = df_s_1_avgspeed.mean()
df_s_2_avgspeed_with_mean = df_s_2_avgspeed.copy(); df_s_2_avgspeed_with_mean[df_s_2_avgspeed_with_mean.isnull()] = df_s_2_avgspeed.mean()
df_s_3_avgspeed_with_mean = df_s_3_avgspeed.copy(); df_s_3_avgspeed_with_mean[df_s_3_avgspeed_with_mean.isnull()] = df_s_3_avgspeed.mean()
df_s_4_avgspeed_with_mean = df_s_4_avgspeed.copy(); df_s_4_avgspeed_with_mean[df_s_4_avgspeed_with_mean.isnull()] = df_s_4_avgspeed.mean()
df_s_5_avgspeed_with_mean = df_s_5_avgspeed.copy(); df_s_5_avgspeed_with_mean[df_s_5_avgspeed_with_mean.isnull()] = df_s_5_avgspeed.mean()
df_s_6_avgspeed_with_mean = df_s_6_avgspeed.copy(); df_s_6_avgspeed_with_mean[df_s_6_avgspeed_with_mean.isnull()] = df_s_6_avgspeed.mean()
df_s_7_avgspeed_with_mean = df_s_7_avgspeed.copy(); df_s_7_avgspeed_with_mean[df_s_7_avgspeed_with_mean.isnull()] = df_s_7_avgspeed.mean()
df_s_8_avgspeed_with_mean = df_s_8_avgspeed.copy(); df_s_8_avgspeed_with_mean[df_s_8_avgspeed_with_mean.isnull()] = df_s_8_avgspeed.mean()
df_s_9_avgspeed_with_mean = df_s_9_avgspeed.copy(); df_s_9_avgspeed_with_mean[df_s_9_avgspeed_with_mean.isnull()] = df_s_9_avgspeed.mean()
df_s_10_avgspeed_with_mean = df_s_10_avgspeed.copy(); df_s_10_avgspeed_with_mean[df_s_10_avgspeed_with_mean.isnull()] = df_s_10_avgspeed.mean()
df_s_11_avgspeed_with_mean = df_s_11_avgspeed.copy(); df_s_11_avgspeed_with_mean[df_s_11_avgspeed_with_mean.isnull()] = df_s_11_avgspeed.mean()
df_s_12_avgspeed_with_mean = df_s_12_avgspeed.copy(); df_s_12_avgspeed_with_mean[df_s_12_avgspeed_with_mean.isnull()] = df_s_12_avgspeed.mean()
df_s_13_avgspeed_with_mean = df_s_13_avgspeed.copy(); df_s_13_avgspeed_with_mean[df_s_13_avgspeed_with_mean.isnull()] = df_s_13_avgspeed.mean()
df_s_14_avgspeed_with_mean = df_s_14_avgspeed.copy(); df_s_14_avgspeed_with_mean[df_s_14_avgspeed_with_mean.isnull()] = df_s_14_avgspeed.mean()
df_s_15_avgspeed_with_mean = df_s_15_avgspeed.copy(); df_s_15_avgspeed_with_mean[df_s_15_avgspeed_with_mean.isnull()] = df_s_15_avgspeed.mean()
df_s_16_avgspeed_with_mean = df_s_16_avgspeed.copy(); df_s_16_avgspeed_with_mean[df_s_16_avgspeed_with_mean.isnull()] = df_s_16_avgspeed.mean()
df_s_17_avgspeed_with_mean = df_s_17_avgspeed.copy(); df_s_17_avgspeed_with_mean[df_s_17_avgspeed_with_mean.isnull()] = df_s_17_avgspeed.mean()
df_s_18_avgspeed_with_mean = df_s_18_avgspeed.copy(); df_s_18_avgspeed_with_mean[df_s_18_avgspeed_with_mean.isnull()] = df_s_18_avgspeed.mean()
df_s_19_avgspeed_with_mean = df_s_19_avgspeed.copy(); df_s_19_avgspeed_with_mean[df_s_19_avgspeed_with_mean.isnull()] = df_s_19_avgspeed.mean()
df_s_20_avgspeed_with_mean = df_s_20_avgspeed.copy(); df_s_20_avgspeed_with_mean[df_s_20_avgspeed_with_mean.isnull()] = df_s_20_avgspeed.mean()
# Overall mean and std of avgspeed per station when the np.nan (missing) values are excluded
print('#station1_without_nan_mean:', round(df_s_1_avgspeed.mean(),4), '#station1_without_nan_std:', round(df_s_1_avgspeed.std(),4));
print('#station2_without_nan_mean:', round(df_s_2_avgspeed.mean(),4), '#station2_without_nan_std:', round(df_s_2_avgspeed.std(),4));
print('#station3_without_nan_mean:', round(df_s_3_avgspeed.mean(),4), '#station3_without_nan_std:', round(df_s_3_avgspeed.std(),4));
print('#station4_without_nan_mean:', round(df_s_4_avgspeed.mean(),4), '#station4_without_nan_std:', round(df_s_4_avgspeed.std(),4));
print('#station5_without_nan_mean:', round(df_s_5_avgspeed.mean(),4), '#station5_without_nan_std:', round(df_s_5_avgspeed.std(),4));
print('#station6_without_nan_mean:', round(df_s_6_avgspeed.mean(),4), '#station6_without_nan_std:', round(df_s_6_avgspeed.std(),4));
print('#station7_without_nan_mean:', round(df_s_7_avgspeed.mean(),4), '#station7_without_nan_std:', round(df_s_7_avgspeed.std(),4));
print('#station8_without_nan_mean:', round(df_s_8_avgspeed.mean(),4), '#station8_without_nan_std:', round(df_s_8_avgspeed.std(),4));
print('#station9_without_nan_mean:', round(df_s_9_avgspeed.mean(),4), '#station9_without_nan_std:', round(df_s_9_avgspeed.std(),4));
print('#station10_without_nan_mean:', round(df_s_10_avgspeed.mean(),4), '#station10_without_nan_std:', round(df_s_10_avgspeed.std(),4));
print('#station11_without_nan_mean:', round(df_s_11_avgspeed.mean(),4), '#station11_without_nan_std:', round(df_s_11_avgspeed.std(),4));
print('#station12_without_nan_mean:', round(df_s_12_avgspeed.mean(),4), '#station12_without_nan_std:', round(df_s_12_avgspeed.std(),4));
print('#station13_without_nan_mean:', round(df_s_13_avgspeed.mean(),4), '#station13_without_nan_std:', round(df_s_13_avgspeed.std(),4));
print('#station14_without_nan_mean:', round(df_s_14_avgspeed.mean(),4), '#station14_without_nan_std:', round(df_s_14_avgspeed.std(),4));
print('#station15_without_nan_mean:', round(df_s_15_avgspeed.mean(),4), '#station15_without_nan_std:', round(df_s_15_avgspeed.std(),4));
print('#station16_without_nan_mean:', round(df_s_16_avgspeed.mean(),4), '#station16_without_nan_std:', round(df_s_16_avgspeed.std(),4));
print('#station17_without_nan_mean:', round(df_s_17_avgspeed.mean(),4), '#station17_without_nan_std:', round(df_s_17_avgspeed.std(),4));
print('#station18_without_nan_mean:', round(df_s_18_avgspeed.mean(),4), '#station18_without_nan_std:', round(df_s_18_avgspeed.std(),4));
print('#station19_without_nan_mean:', round(df_s_19_avgspeed.mean(),4), '#station19_without_nan_std:', round(df_s_19_avgspeed.std(),4));
print('#station20_without_nan_mean:', round(df_s_20_avgspeed.mean(),4), '#station20_without_nan_std:', round(df_s_20_avgspeed.std(),4));
print('stat1_nan_as_mean:',round(df_s_1_avgspeed_with_mean.mean(),4),'#stat1_nan_as_std:', round(df_s_1_avgspeed_with_mean.std(),4));
print('stat2_nan_as_mean:',round(df_s_2_avgspeed_with_mean.mean(),4),'#stat2_nan_as_std:', round(df_s_2_avgspeed_with_mean.std(),4));
print('stat3_nan_as_mean:',round(df_s_3_avgspeed_with_mean.mean(),4),'#stat3_nan_as_std:', round(df_s_3_avgspeed_with_mean.std(),4));
print('stat4_nan_as_mean:',round(df_s_4_avgspeed_with_mean.mean(),4),'#stat4_nan_as_std:', round(df_s_4_avgspeed_with_mean.std(),4));
print('stat5_nan_as_mean:',round(df_s_5_avgspeed_with_mean.mean(),4),'#stat5_nan_as_std:', round(df_s_5_avgspeed_with_mean.std(),4));
print('stat6_nan_as_mean:',round(df_s_6_avgspeed_with_mean.mean(),4),'#stat6_nan_as_std:', round(df_s_6_avgspeed_with_mean.std(),4));
print('stat7_nan_as_mean:',round(df_s_7_avgspeed_with_mean.mean(),4),'#stat7_nan_as_std:', round(df_s_7_avgspeed_with_mean.std(),4));
print('stat8_nan_as_mean:',round(df_s_8_avgspeed_with_mean.mean(),4),'#stat8_nan_as_std:', round(df_s_8_avgspeed_with_mean.std(),4));
print('stat9_nan_as_mean:',round(df_s_9_avgspeed_with_mean.mean(),4),'#stat9_nan_as_std:', round(df_s_9_avgspeed_with_mean.std(),4));
print('stat10_nan_as_mean:',round(df_s_10_avgspeed_with_mean.mean(),4),'#stat10_nan_as_std:', round(df_s_10_avgspeed_with_mean.std(),4));
print('stat11_nan_as_mean:',round(df_s_11_avgspeed_with_mean.mean(),4),'#stat11_nan_as_std:', round(df_s_11_avgspeed_with_mean.std(),4));
print('stat12_nan_as_mean:',round(df_s_12_avgspeed_with_mean.mean(),4),'#stat12_nan_as_std:', round(df_s_12_avgspeed_with_mean.std(),4));
print('stat13_nan_as_mean:',round(df_s_13_avgspeed_with_mean.mean(),4),'#stat13_nan_as_std:', round(df_s_13_avgspeed_with_mean.std(),4));
print('stat14_nan_as_mean:',round(df_s_14_avgspeed_with_mean.mean(),4),'#stat14_nan_as_std:', round(df_s_14_avgspeed_with_mean.std(),4));
print('stat15_nan_as_mean:',round(df_s_15_avgspeed_with_mean.mean(),4),'#stat15_nan_as_std:', round(df_s_15_avgspeed_with_mean.std(),4));
print('stat16_nan_as_mean:',round(df_s_16_avgspeed_with_mean.mean(),4),'#stat16_nan_as_std:', round(df_s_16_avgspeed_with_mean.std(),4));
print('stat17_nan_as_mean:',round(df_s_17_avgspeed_with_mean.mean(),4),'#stat17_nan_as_std:', round(df_s_17_avgspeed_with_mean.std(),4));
print('stat18_nan_as_mean:',round(df_s_18_avgspeed_with_mean.mean(),4),'#stat18_nan_as_std:', round(df_s_18_avgspeed_with_mean.std(),4));
print('stat19_nan_as_mean:',round(df_s_19_avgspeed_with_mean.mean(),4),'#stat19_nan_as_std:', round(df_s_19_avgspeed_with_mean.std(),4));
print('stat20_nan_as_mean:',round(df_s_20_avgspeed_with_mean.mean(),4),'#stat20_nan_as_std:', round(df_s_20_avgspeed_with_mean.std(),4));
y1 = np.array([df_s_1_avgspeed.mean(), df_s_2_avgspeed.mean(), df_s_3_avgspeed.mean(), df_s_4_avgspeed.mean(), df_s_5_avgspeed.mean(),
df_s_6_avgspeed.mean(), df_s_7_avgspeed.mean(), df_s_8_avgspeed.mean(), df_s_9_avgspeed.mean(), df_s_10_avgspeed.mean(),
df_s_11_avgspeed.mean(), df_s_12_avgspeed.mean(), df_s_13_avgspeed.mean(), df_s_14_avgspeed.mean(), df_s_15_avgspeed.mean(),
df_s_16_avgspeed.mean(), df_s_17_avgspeed.mean(), df_s_18_avgspeed.mean(), df_s_19_avgspeed.mean(), df_s_20_avgspeed.mean()])
y2 = np.array([df_s_1_avgspeed_with_mean.mean(), df_s_2_avgspeed_with_mean.mean(), df_s_3_avgspeed_with_mean.mean(), df_s_4_avgspeed_with_mean.mean()
,df_s_5_avgspeed_with_mean.mean(), df_s_6_avgspeed_with_mean.mean(), df_s_7_avgspeed_with_mean.mean(), df_s_8_avgspeed_with_mean.mean()
,df_s_9_avgspeed_with_mean.mean(), df_s_10_avgspeed_with_mean.mean(), df_s_11_avgspeed_with_mean.mean(), df_s_12_avgspeed_with_mean.mean()
,df_s_13_avgspeed_with_mean.mean(), df_s_14_avgspeed_with_mean.mean(), df_s_15_avgspeed_with_mean.mean(), df_s_16_avgspeed_with_mean.mean()
,df_s_17_avgspeed_with_mean.mean(),df_s_18_avgspeed_with_mean.mean(),df_s_19_avgspeed_with_mean.mean(),df_s_20_avgspeed_with_mean.mean()])
plt.figure(figsize=(10,5))
plt.plot()
x = range(1, 21)
plt.xlabel('Station Number')
plt.ylabel('avgspeed')
plt.bar(x, y1, color='y')
plt.bar(x, y2, color='r')
plt.show()
#### For avgspeed:
# The graph shows that y1 and y2 are nearly identical: both the 'r' and 'y' bars are drawn,
# but only one is visible, meaning the difference is too small to see by eye.
# Replacing the missing values (np.nan) with each station's mean barely changes anything,
# so mean imputation of the missing values looks reasonable to use for the regression analysis.
plt.figure(figsize=(12,6))
xticks = range(1,21)
plt.plot(xticks, y1, 'ro-', label='stations_without_nan_mean')
plt.plot(xticks, y2, 'bd:', label='stations_nan_as_mean')
plt.legend(loc=0)
plt.show()
y3 = np.array([df_s_1_avgspeed.std(), df_s_2_avgspeed.std(), df_s_3_avgspeed.std(), df_s_4_avgspeed.std(), df_s_5_avgspeed.std(),
df_s_6_avgspeed.std(), df_s_7_avgspeed.std(), df_s_8_avgspeed.std(), df_s_9_avgspeed.std(), df_s_10_avgspeed.std(),
df_s_11_avgspeed.std(), df_s_12_avgspeed.std(), df_s_13_avgspeed.std(), df_s_14_avgspeed.std(), df_s_15_avgspeed.std(),
df_s_16_avgspeed.std(), df_s_17_avgspeed.std(), df_s_18_avgspeed.std(), df_s_19_avgspeed.std(), df_s_20_avgspeed.std()])
y4 = np.array([df_s_1_avgspeed_with_mean.std(), df_s_2_avgspeed_with_mean.std(), df_s_3_avgspeed_with_mean.std(), df_s_4_avgspeed_with_mean.std()
,df_s_5_avgspeed_with_mean.std(), df_s_6_avgspeed_with_mean.std(), df_s_7_avgspeed_with_mean.std(), df_s_8_avgspeed_with_mean.std()
,df_s_9_avgspeed_with_mean.std(), df_s_10_avgspeed_with_mean.std(), df_s_11_avgspeed_with_mean.std(), df_s_12_avgspeed_with_mean.std()
,df_s_13_avgspeed_with_mean.std(), df_s_14_avgspeed_with_mean.std(), df_s_15_avgspeed_with_mean.std(), df_s_16_avgspeed_with_mean.std()
,df_s_17_avgspeed_with_mean.std(),df_s_18_avgspeed_with_mean.std(),df_s_19_avgspeed_with_mean.std(),df_s_20_avgspeed_with_mean.std()])
plt.figure(figsize=(10,5))
plt.plot()
x = range(1, 21)
plt.xlabel('Station Number')
plt.ylabel('avgspeed')
plt.bar(x, y3, color='r')
plt.bar(x, y4, color='g')
plt.show()
plt.figure(figsize=(12,6))
xticks = range(1,21)
plt.plot(xticks, y3, 'ro-', label='stations_without_nan_std')
plt.plot(xticks, y4, 'bd:', label='stations_nan_as_std')
plt.legend(loc=0)
plt.show()
```
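The twenty near-identical per-station blocks above can be collapsed into a few `groupby`/`transform` calls; a sketch on toy data (column names follow the cell above):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'station_nbr': [1, 1, 1, 2, 2, 2],
    'avgspeed':    [4.0, np.nan, 6.0, 10.0, 12.0, np.nan],
})
# Coerce to numeric (a no-op here; on the real data this turns 'M' into NaN)
toy['avgspeed'] = pd.to_numeric(toy['avgspeed'], errors='coerce')
# Fill each station's NaNs with that station's own mean
filled = toy.groupby('station_nbr')['avgspeed'].transform(lambda s: s.fillna(s.mean()))
# Mean imputation leaves each station's mean unchanged
means_before = toy.groupby('station_nbr')['avgspeed'].mean()
means_after = filled.groupby(toy['station_nbr']).mean()
```

This reproduces the comparison in the cell above (imputed vs. dropped) without one variable per station.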
| github_jupyter |
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import sys
from tqdm import tqdm
sys.path.insert(0,'..')
%matplotlib inline
from dataset import Dataset
from models import CNP
from train import Trainer
from utils.dataset_utils import (load_data,train_test_split, make_features)
from types import SimpleNamespace
# `trainconfig` is assumed to be a dict of training settings defined in an earlier cell
config = SimpleNamespace(**trainconfig)
df = pd.read_pickle('../data/btc_30m.pkl')
x,ts = train_test_split(df,test_mode=config.test_mode)
x, index, stats = make_features(x,
seq_len=config.seq_len,
preprocess=config.preprocess,
lags=config.lags,
use_x=config.use_x,
)
dataset = Dataset(x, index,stats, seq_len=config.seq_len, pred_len=config.pred_len,random_start=True)
model = CNP(dataset.shape, config)
trainer = Trainer(model, config, path=config.path)
trainer.init_sess()
losses = []
ep = 0
for t in tqdm(range(config.steps), desc='Train steps: '):
try:
batch,_ = dataset.next()
loss = trainer.fit(batch, stats=dataset.stats)
losses.append(loss)
except StopIteration:
ep+=1
if ep % 50 == 0:
tqdm.write('Ep: {}, Avg_loss: {}'.format(ep, np.mean(losses)), end='\n')
dataset.reset()
losses=[]
x, index, stats = make_features(ts,
seq_len=config.seq_len,
preprocess=config.preprocess,
lags=config.lags,
use_x=config.use_x,
)
dataset = Dataset(x, index,stats, seq_len=config.seq_len, pred_len=config.pred_len,random_start=False, window=config.pred_len)
preds = {'d':[], 'h':[],'sigma':[], 'y':[]}
for batch, date in dataset:
mu, sigma = trainer.sess.run([model.h, model.sigma], feed_dict=trainer._get_dict(batch))
preds['d'].append(date)
preds['h'].append(mu)
preds['sigma'].append(sigma)
preds['y'].append(batch['y'])
y = pd.Series(np.vstack(preds['y']).flatten(), name='y')
mu = pd.Series(np.vstack(preds['h']).flatten(), name='h')
sigma = pd.Series(np.vstack(preds['sigma']).flatten(), name='sigma')
y_hat = mu * dataset.stats['std'] + dataset.stats['mu']
df = pd.concat([y,y_hat, sigma],axis=1)
fig, ax = plt.subplots(1,1, figsize=(20,5))
ax.plot(mu)
ax.fill_between(np.arange(0, y_hat.shape[0], 1),mu + 1.28*sigma, mu - 1.28*sigma,alpha=1,color='red')
ax.set_xlabel('time')
ax.set_ylabel('btc_price')
fig, ax = plt.subplots(1,1, figsize=(20,5))
ax.plot(df.y)
ax.plot(df.h)
ax.plot(rnn_df.h)  # rnn_df: predictions from a separate RNN baseline run, defined elsewhere
ax.set_xlabel('time')
ax.set_ylabel('btc_price')
ax.legend(['y','y_hat', 'y_hat_rnn'])
fig.savefig('np_sml.png')
```
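The prediction cell un-standardizes the model output with `mu * dataset.stats['std'] + dataset.stats['mu']`; assuming `make_features` applies the usual z-score transform, the round trip looks like this:

```python
import numpy as np

x = np.array([10.0, 12.0, 14.0, 20.0])
stats = {'mu': x.mean(), 'std': x.std()}
z = (x - stats['mu']) / stats['std']      # what make_features is assumed to produce
x_back = z * stats['std'] + stats['mu']   # the inverse applied to model outputs above
```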
| github_jupyter |
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS-109B Introduction to Data Science
## Lab 5: Convolutional Neural Networks
**Harvard University**<br>
**Spring 2020**<br>
**Instructors:** Mark Glickman, Pavlos Protopapas, and Chris Tanner<br>
**Lab Instructors:** Chris Tanner and Eleni Angelaki Kaxiras<br>
**Content:** Eleni Angelaki Kaxiras, Pavlos Protopapas, Patrick Ohiomoba, and David Sondak
---
```
# RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
```
## Learning Goals
In this lab we will look at Convolutional Neural Networks (CNNs), and their building blocks.
By the end of this lab, you should:
- have a good understanding of how images, a common type of data for a CNN, are represented in the computer and how to think of them as arrays of numbers.
- be familiar with preprocessing images with `tf.keras` and `scipy`.
- know how to put together the building blocks used in CNNs - such as convolutional layers and pooling layers - in `tensorflow.keras` with an example.
- run your first CNN.
```
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (5,5)
import numpy as np
from scipy.optimize import minimize
from sklearn.utils import shuffle
%matplotlib inline
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Dropout, Flatten, Activation, Input
from tensorflow.keras.layers import Conv2D, Conv1D, MaxPooling2D, MaxPooling1D,\
GlobalAveragePooling1D, GlobalMaxPooling1D
from tensorflow.keras.optimizers import Adam, SGD, RMSprop
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.metrics import AUC, Precision, Recall, FalsePositives, FalseNegatives, \
TruePositives, TrueNegatives
from tensorflow.keras.regularizers import l2
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
tf.keras.backend.clear_session() # For easy reset of notebook state.
print(tf.__version__) # You should see a > 2.0.0 here!
```
## Part 0: Running on SEAS JupyterHub
**PLEASE READ**: [Instructions for Using SEAS JupyterHub](https://canvas.harvard.edu/courses/65462/pages/instructions-for-using-seas-jupyterhub?module_item_id=638544)
SEAS and FAS are providing you with a platform in AWS to use for the class (accessible from the 'Jupyter' menu link in Canvas). These are AWS p2 instances with a GPU, 10GB of disk space, and 61 GB of RAM, for faster training for your networks. Most of the libraries such as keras, tensorflow, pandas, etc. are pre-installed. If a library is missing you may install it via the Terminal.
**NOTE : The AWS platform is funded by SEAS and FAS for the purposes of the class. It is not running against your individual credit.**
**NOTE NOTE NOTE: You are not allowed to use it for purposes not related to this course.**
**Help us keep this service: Make sure you stop your instance as soon as you do not need it.**

*source:CS231n Stanford: Google Cloud Tutorial*
## Part 1: Parts of a Convolutional Neural Net
We can have
- 1D CNNs which are useful for time-series or 1-Dimensional data,
- 2D CNNs used for 2-Dimensional data such as images, and also
- 3-D CNNs used for video.
### a. Convolutional Layers.
Convolutional layers are comprised of **filters** and **feature maps**. The filters are essentially the **neurons** of the layer. They have the weights and produce the input for the next layer. The feature map is the output of one filter applied to the previous layer.
Convolutions operate over 3D tensors, called feature maps, with two spatial axes (height and width) as well as a depth axis (also called the channels axis). For an RGB image, the dimension of the depth axis is 3, because the image has three color channels: red, green, and blue. For a black-and-white picture, like the MNIST digits, the depth is 1 (levels of gray). The convolution operation extracts patches from its input feature map and applies the same transformation to all of these patches, producing an output feature map. This output feature map is still a 3D tensor: it has a width and a height. Its depth can be arbitrary, because the output depth is a parameter of the layer, and the different channels in that depth axis no longer stand for specific colors as in RGB input; rather, they stand for filters. Filters encode specific aspects of the input data: at a high level, a single filter could encode the concept “presence of a face in the input,” for instance.
In the MNIST example that we will see, the first convolution layer takes a feature map of size (28, 28, 1) and outputs a feature map of size (26, 26, 32): it computes 32 filters over its input. Each of these 32 output channels contains a 26×26 grid of values, which is a response map of the filter over the input, indicating the response of that filter pattern at different locations in the input.
Convolutions are defined by two key parameters:
- Size of the patches extracted from the inputs. These are typically 3×3 or 5×5
- The number of filters computed by the convolution.
**Padding**: One of "valid", "causal" or "same" (case-insensitive). "valid" means no padding. "same" pads the input so that the output has the same length as the original input. "causal" results in causal (dilated) convolutions, useful for temporal data where the model should not violate the temporal order.
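The shape bookkeeping above ((28, 28) in, (26, 26) out for a 3×3 valid convolution) follows from a small formula; a sketch mirroring how output lengths are computed per spatial axis:

```python
import math

def conv_output_length(n, kernel, stride=1, padding='valid', dilation=1):
    """Output length of a convolution along one axis (apply per axis in 2D)."""
    k = dilation * (kernel - 1) + 1  # effective kernel size after dilation
    if padding in ('same', 'causal'):
        return math.ceil(n / stride)
    if padding == 'valid':
        return math.ceil((n - k + 1) / stride)
    raise ValueError(padding)
```

For example, a 28-wide input through a 3-wide valid convolution gives 26, matching the MNIST example above.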
#### 1D Convolutional Network
In `tf.keras` see [1D convolutional layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D)

*image source: Deep Learning with Python by François Chollet*
#### 2D Convolutional Network
In `tf.keras` see [2D convolutional layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D)

**keras.layers.Conv2D** (filters, kernel_size, strides=(1, 1), padding='valid', activation=None, use_bias=True,
kernel_initializer='glorot_uniform', data_format='channels_last',
bias_initializer='zeros')
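What one filter of such a layer computes can be sketched in plain NumPy: a "valid" cross-correlation of a single-channel feature map with a single kernel (bias and activation omitted):

```python
import numpy as np

def conv2d_single_filter(image, kernel):
    """'valid' cross-correlation of one 2D feature map with one filter."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # Each output value is the sum of an input patch weighted by the kernel
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

fmap = np.arange(16.0).reshape(4, 4)
response = conv2d_single_filter(fmap, np.ones((3, 3)))  # shape (2, 2): 4 - 3 + 1 per axis
```

A real Conv2D layer repeats this for every filter and sums over input channels, which is why the output depth equals the number of filters.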
### b. Pooling Layers.
Pooling layers are also comprised of filters and feature maps. Let's say the pooling layer has a 2x2 receptive field and a stride of 2. This stride results in feature maps that are one half the size of the input feature maps. We can use a max() operation for each receptive field.
In `tf.keras` see [2D pooling layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D)
**keras.layers.MaxPooling2D**(pool_size=(2, 2), strides=None, padding='valid', data_format=None)

### c. Dropout Layers.
Dropout randomly sets a fraction `rate` of the input units to 0 at each update during training time, which helps prevent overfitting.
In `tf.keras` see [Dropout layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout)
tf.keras.layers.Dropout(rate, seed=None)
rate: float between 0 and 1. Fraction of the input units to drop.<br>
seed: A Python integer to use as random seed.
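A sketch of (inverted) dropout in NumPy: at training time a fraction `rate` of units is zeroed and the survivors are scaled by 1/(1−rate) so the expected activation is unchanged; at inference time it is the identity:

```python
import numpy as np

def dropout(x, rate, training=True, seed=None):
    """Inverted dropout on an array of activations."""
    if not training or rate == 0.0:
        return x  # identity at inference time
    rng = np.random.default_rng(seed)
    keep = rng.random(x.shape) >= rate          # keep each unit with prob 1 - rate
    return np.where(keep, x / (1.0 - rate), 0.0)  # rescale survivors

x = np.ones(1000)
y = dropout(x, 0.5, seed=0)  # values are 0.0 or 2.0, mean stays close to 1.0
```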
References
[Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf)
### d. Fully Connected Layers.
A fully connected layer flattens the square feature map into a vector. Then we can use a sigmoid or softmax activation function to output probabilities of classes.
In `tf.keras` see [Fully Connected layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)
**keras.layers.Dense**(units, activation=None, use_bias=True,
kernel_initializer='glorot_uniform', bias_initializer='zeros')
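The classification head described above — a dense layer feeding a softmax — can be sketched in NumPy:

```python
import numpy as np

def dense_softmax(x, W, b):
    """A dense layer followed by softmax: flattened features in, class probabilities out."""
    logits = x @ W + b
    z = np.exp(logits - logits.max())  # subtract the max for numerical stability
    return z / z.sum()

# With zero weights every class gets the same probability
probs = dense_softmax(np.zeros(4), np.zeros((4, 3)), np.zeros(3))
```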
## Part 2: Preprocessing the data
```
img = plt.imread('../images/cat.1700.jpg')
height, width, channels = img.shape
print(f'PHOTO: height = {height}, width = {width}, number of channels = {channels}, \
image datatype = {img.dtype}')
img.shape
# let's look at the image
imgplot = plt.imshow(img)
```
#### Visualizing the different channels
```
colors = [plt.cm.Reds, plt.cm.Greens, plt.cm.Blues, plt.cm.Greys]
subplots = np.arange(221,224)
for i in range(3):
    plt.subplot(subplots[i])
    plt.imshow(img[:,:,i], cmap=colors[i])
plt.subplot(224)
plt.imshow(img)
plt.show()
```
If you want to learn more: [Image Processing with Python and Scipy](http://prancer.physics.louisville.edu/astrowiki/index.php/Image_processing_with_Python_and_SciPy)
## Part 3: Putting the Parts together to make a small ConvNet Model
Let's put all the parts together to make a convnet for classifying our good old MNIST digits.
```
# Load data and preprocess
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data(
path='mnist.npz') # load MNIST data
train_images.shape
```
**Notice:** These images do not have a third (channel) dimension because they are grayscale.
```
train_images.max(), train_images.min()
train_images = train_images.reshape((60000, 28, 28, 1)) # Reshape to get third dimension
test_images = test_images.reshape((10000, 28, 28, 1))
train_images = train_images.astype('float32') / 255 # Normalize between 0 and 1
test_images = test_images.astype('float32') / 255
# Convert labels to categorical data
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
mnist_cnn_model = Sequential() # Create sequential model
# Add network layers
mnist_cnn_model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
mnist_cnn_model.add(MaxPooling2D((2, 2)))
mnist_cnn_model.add(Conv2D(64, (3, 3), activation='relu'))
mnist_cnn_model.add(MaxPooling2D((2, 2)))
mnist_cnn_model.add(Conv2D(64, (3, 3), activation='relu'))
```
The next step is to feed the last output tensor (of shape (3, 3, 64)) into a densely connected classifier network like those you’re already familiar with: a stack of Dense layers. These classifiers process vectors, which are 1D, whereas the output of the last conv layer is a 3D tensor. First we have to flatten the 3D outputs to 1D, and then add a few Dense layers on top.
```
mnist_cnn_model.add(Flatten())
mnist_cnn_model.add(Dense(32, activation='relu'))
mnist_cnn_model.add(Dense(10, activation='softmax'))
mnist_cnn_model.summary()
```
<div class="Question"><b>Question</b> Why are we using cross-entropy here?</div>
```
loss = tf.keras.losses.categorical_crossentropy
optimizer = Adam(lr=0.001)
#optimizer = RMSprop(lr=1e-2)
# see https://www.tensorflow.org/api_docs/python/tf/keras/metrics
metrics = ['accuracy']
# Compile model
mnist_cnn_model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics)
```
<div class="discussion"><b>Discussion</b> How can we choose the batch size?</div>
```
%%time
# Fit the model
verbose, epochs, batch_size = 1, 10, 64 # try a different num epochs and batch size : 30, 16
history = mnist_cnn_model.fit(train_images, train_labels,
epochs=epochs,
batch_size=batch_size,
verbose=verbose,
validation_split=0.2,
# validation_data=(X_val, y_val) # IF you have val data
shuffle=True)
print(history.history.keys())
print(history.history['val_accuracy'][-1])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
#plt.savefig('../images/batch8.png')
mnist_cnn_model.metrics_names
# Evaluate the model on the test data:
score = mnist_cnn_model.evaluate(test_images, test_labels,
batch_size=batch_size,
verbose=0, callbacks=None)
#print("%s: %.2f%%" % (mnist_cnn_model.metrics_names[1], score[1]*100))
test_acc = mnist_cnn_model.evaluate(test_images, test_labels)
test_acc
```
<div class="discussion"><b>Discussion</b> Compare validation accuracy and test accuracy? Comment on whether we have overfitting.</div>
### Data Preprocessing : Meet the `ImageDataGenerator` class in `keras`
[(keras ImageGenerator documentation)](https://keras.io/preprocessing/image/)
The MNIST and other pre-loaded datasets are formatted in a way that is almost ready for feeding into the model. What about plain images? They should be formatted into appropriately preprocessed floating-point tensors before being fed into the network.
The Dogs vs. Cats dataset that you’ll use isn’t packaged with Keras. It was made available by Kaggle as part of a computer-vision competition in late 2013, back when convnets weren’t mainstream. The data has been downloaded for you from https://www.kaggle.com/c/dogs-vs-cats/data The pictures are medium-resolution color JPEGs.
```
# TODO: set your base dir to your correct local location
base_dir = '../data/cats_and_dogs_small'
import os, shutil
# Set up directory information
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
train_cats_dir = os.path.join(train_dir, 'cats')
train_dogs_dir = os.path.join(train_dir, 'dogs')
validation_cats_dir = os.path.join(validation_dir, 'cats')
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
test_cats_dir = os.path.join(test_dir, 'cats')
test_dogs_dir = os.path.join(test_dir, 'dogs')
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))
```
So you do indeed have 2,000 training images, 1,000 validation images, and 1,000 test images. Each split contains the same number of samples from each class: this is a balanced binary-classification problem, which means classification accuracy will be an appropriate measure of success.
<div class="discussion"><b>Discussion</b> Should you always do your own splitting of the data? How about shuffling? Does it always make sense?</div>
```
img_path = '../data/cats_and_dogs_small/train/cats/cat.70.jpg'
# We preprocess the image into a 4D tensor
from keras.preprocessing import image
import numpy as np
img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img)
img_tensor = np.expand_dims(img_tensor, axis=0)
# Remember that the model was trained on inputs
# that were preprocessed in the following way:
img_tensor /= 255.
# Its shape is (1, 150, 150, 3)
print(img_tensor.shape)
plt.imshow(img_tensor[0])
plt.show()
```
Why do we need an extra dimension here?
#### Building the network
```
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
```
For the compilation step, you’ll go with the RMSprop optimizer. Because you ended the network with a single sigmoid unit, you’ll use binary crossentropy as the loss.
```
loss = tf.keras.losses.binary_crossentropy
#optimizer = Adam(lr=0.001)
optimizer = RMSprop(lr=1e-2)
metrics = ['accuracy']
# Compile model
model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics)
```
The steps for getting it into the network are roughly as follows:
1. Read the picture files.
2. Convert the JPEG content to RGB grids of pixels.
3. Convert these into floating-point tensors.
4. Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values).
It may seem a bit daunting, but fortunately Keras has utilities to take care of these steps automatically with the class `ImageDataGenerator`, which lets you quickly set up Python generators that can automatically turn image files on disk into batches of preprocessed tensors. This is what you’ll use here.
```
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
```
Let’s look at the output of one of these generators: it yields batches of 150×150 RGB images (shape (20, 150, 150, 3)) and binary labels (shape (20,)). There are 20 samples in each batch (the batch size). Note that the generator yields these batches indefinitely: it loops endlessly over the images in the target folder. For this reason, you need to break the iteration loop at some point:
```
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break
```
Let’s fit the model to the data using the generator. You do so using the `.fit_generator` method, the equivalent of `.fit` for data generators like this one. It expects as its first argument a Python generator that will yield batches of inputs and targets indefinitely, like this one does.
Because the data is being generated endlessly, the Keras model needs to know how many samples to draw from the generator before declaring an epoch over. This is the role of the `steps_per_epoch` argument: after drawing `steps_per_epoch` batches from the generator (that is, after running `steps_per_epoch` gradient-descent steps), the fitting process moves on to the next epoch. In this case, batches contain 20 samples, so it takes 100 batches to reach the target of 2,000 samples.
When using `fit_generator`, you can pass a `validation_data` argument, much as with the `fit` method. This argument is allowed to be a data generator, but it can also be a tuple of Numpy arrays. If you pass a generator as `validation_data`, it is expected to yield batches of validation data endlessly; you should therefore also specify the `validation_steps` argument, which tells the process how many batches to draw from the validation generator for evaluation.
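The arithmetic is simple. As a sketch, with the 2,000 training and 1,000 validation images of this dataset and batches of 20:

```
import math

num_train, num_val, batch_size = 2000, 1000, 20

# one epoch = enough batches to see every training sample once
steps_per_epoch = math.ceil(num_train / batch_size)
validation_steps = math.ceil(num_val / batch_size)

print(steps_per_epoch, validation_steps)  # 100 50
```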
```
%%time
# Fit the model <--- always a good idea to time it
verbose, epochs, batch_size, steps_per_epoch = 1, 5, 64, 100
history = model.fit_generator(
train_generator,
steps_per_epoch=steps_per_epoch,
epochs=5, # TODO: should be 100
validation_data=validation_generator,
validation_steps=50)
# It’s good practice to always save your models after training.
model.save('cats_and_dogs_small_1.h5')
```
Let’s plot the loss and accuracy of the model over the training and validation data during training:
```
print(history.history.keys())
print(history.history['val_accuracy'][-1])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# plt.savefig('../images/batch8.png')  # note: call savefig() before plt.show(), otherwise the saved figure is empty
```
Let's try data augmentation
```
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
```
These are just a few of the options available (for more, see the Keras documentation).
Let’s quickly go over this code:
- rotation_range is a value in degrees (0–180), a range within which to randomly rotate pictures.
- width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures vertically or horizontally.
- shear_range is for randomly applying shearing transformations.
- zoom_range is for randomly zooming inside pictures.
- horizontal_flip is for randomly flipping half the images horizontally, which is relevant when there are no assumptions of horizontal asymmetry (for example, real-world pictures).
- fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
Let’s look at the augmented images
```
from keras.preprocessing import image
fnames = [os.path.join(train_dogs_dir, fname) for
fname in os.listdir(train_dogs_dir)]
img_path = fnames[3] # Chooses one image to augment
img = image.load_img(img_path, target_size=(150, 150))
# Reads the image and resizes it
x = image.img_to_array(img) # Converts it to a Numpy array with shape (150, 150, 3)
x = x.reshape((1,) + x.shape) # Reshapes it to (1, 150, 150, 3)
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()
```
If you train a new network using this data-augmentation configuration, the network will never see the same input twice. But the inputs it sees are still heavily intercorrelated, because they come from a small number of original images—you can’t produce new information, you can only remix existing information. As such, this may not be enough to completely get rid of overfitting. To further fight overfitting, you’ll also add a **Dropout** layer to your model right before the densely connected classifier.
```
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
loss = tf.keras.losses.binary_crossentropy
optimizer = RMSprop(lr=1e-4)
metrics = ['accuracy']
# Compile model
model.compile(loss=loss,
optimizer=optimizer,
metrics=metrics)
# Let’s train the network using data augmentation and dropout.
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
test_datagen = ImageDataGenerator(rescale=1./255)
# Note that the validation data shouldn’t be augmented!
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=5, # TODO: should be 100
validation_data=validation_generator,
validation_steps=50)
# save model if needed
model.save('cats_and_dogs_small_2.h5')
```
And let’s plot the results again. Thanks to data augmentation and dropout, you’re no longer overfitting: the training curves closely track the validation curves. You now reach an accuracy of 82%, a 15% relative improvement over the non-regularized model. (Note: these numbers are for 100 epochs.)
```
print(history.history.keys())
print(history.history['val_accuracy'][-1])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Accuracy with data augmentation')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Loss with data augmentation')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
#plt.savefig('../images/batch8.png')
```
By using regularization techniques even further, and by tuning the network’s parameters (such as the number of filters per convolution layer, or the number of layers in the network), you may be able to get an even better accuracy, likely up to 86% or 87%. But it would prove difficult to go any higher just by training your own convnet from scratch, because you have so little data to work with. As a next step to improve your accuracy on this problem, you’ll have to use a pretrained model.
```
# default_exp naive_bayes
#hide
from nbdev.showdoc import *
# all_flag
```
# Naive Bayes Classifier
> Summary: Naive Bayes, Text classification, Sentiment analysis, bag-of-words, BOW
## What is Naive Bayes Method?
The Naive Bayes technique is a supervised, probabilistic learning method for classifying documents, particularly text documents. It works based on the Naive Bayes assumption: the features $x_1, x_2, \cdots, x_n$ are **conditionally independent** given the class label $y$. In other words:
$$P(x_1,x_2,\cdots,x_n|y)=P(x_1|y)P(x_2|y)\cdots P(x_n|y)=\prod_{i=1}^{n}P(x_i|y)$$
Although the $x_i$'s are often not truly conditionally independent in the real world, the approach nevertheless performs surprisingly well in practice, particularly for document classification and spam filtering.
## Naive Bayes (NB) for Text Classification
Let $D$ be a corpus of documents where each document $d$ is represented by a bag of words (i.e. the order of words does not matter) and has a label $y_d$. If the total number of labels is two, $y=\{0,1\}$, meaning that every document belongs to class 0 or class 1, then the problem is a binary classification. Otherwise it is called a multi-class classification, $y=\{0,1,\cdots,k\}$.
For instance, for sentiment analysis we have two classes, "positive" and "negative", therefore $y=\{0,1\}$, with 0 representing the "negative" class and 1 the "positive" class.
The words of the documents are the features and the total number of features is the size of the vocabulary $|v|$ (i.e. total number of unique words in the corpus). For classifying documents, first we represent each document $d$ as a vector of the words $d=<x_{1}, x_{2},\cdots,x_{v}>$. Then the probability of document $d$ being in class $y=c$ is:
$$P(y=c|x_1,x_2,\cdots,x_v)=\dfrac{P(y=c)P(x_1,x_2,\cdots,x_v|y=c)}{P(x_1,x_2,\cdots,x_v)}$$
Assuming conditional independence between $x_i$'s:
$$P(y=c|x_1,x_2,\cdots,x_v)=\dfrac{P(y=c)\prod_{i=1}^{|v|}P(x_i|y=c)}{P(x_1,x_2,\cdots,x_v)}$$
We can drop the denominator as it is a normalization constant. Thus we have:
$$P(y=c|x_1,x_2,\cdots,x_v)\propto P(y=c)\prod_{i=1}^{|v|}P(x_i|y=c)$$
In text classification, our goal is to find the *best* class for the document $d$. The best class in NB classification is the most likely or *maximum a posteriori (MAP)* class $y^{map}$. Therefore:
$$y^{map} =\underset{y}{\arg\max}\quad P(y=y_k)\prod_i^{|v|} P(x_i|y=y_k)$$
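In an actual implementation, the product of many small probabilities quickly underflows, so the argmax is usually computed over sums of log-probabilities instead; since the logarithm is monotonic, the argmax is unchanged. A minimal sketch, with hypothetical class names and probabilities:

```
import math

def map_class(priors, cond_probs, doc_words):
    """Return the MAP class for a bag of words `doc_words`.
    `priors[c]` is P(y=c); `cond_probs[c][w]` is P(w|y=c)."""
    scores = {}
    for c in priors:
        # log P(y=c) + sum_i log P(x_i | y=c)
        scores[c] = math.log(priors[c]) + sum(math.log(cond_probs[c][w]) for w in doc_words)
    return max(scores, key=scores.get)

# hypothetical two-class example
priors = {'pos': 0.5, 'neg': 0.5}
cond_probs = {'pos': {'good': 0.4, 'bad': 0.1},
              'neg': {'good': 0.1, 'bad': 0.4}}
print(map_class(priors, cond_probs, ['good', 'good', 'bad']))  # 'pos'
```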
## How to Estimate Parameters $p(y)$ and $p(x_i|y)$
In the context of text classification, to estimate the parameters $P(y)$ and $P(x_i|y)$ we use relative frequency, which assigns the most likely value of each parameter given the training data. For estimating the prior:
$$\hat{P}(y=c)=\dfrac{N_c}{|D|}$$
where $N_c$ is the total number of documents with label $c$ and $|D|$ is total number of documents in the corpus $D$.
We estimate the conditional probability $\hat{P}(x_i|y=c)$ as the relative frequency of term $x_i$ in documents belonging to class $c$:
$$\hat{P}(x_i=t|y=c)=\dfrac{N_{ct} + \alpha}{\sum_{t'=1}^v (N_{ct'}+\alpha)}=\dfrac{N_{ct} + \alpha}{\sum_{t'=1}^v N_{ct'}+\alpha|v|}$$
where $N_{ct}$ is the number of occurrences of $t$ (i.e. the count or frequency of $t$) in training documents from class $c$, and $\sum_{t'=1}^v N_{ct'}$ is the total count of all the words in documents with label $c$. The parameter $\alpha \geq 0$ is called the *smoothing prior*. You can think of it as "virtual" or "imaginary" counts, which prevent the probabilities from becoming zero when a feature does not exist in the training data. When $\alpha = 1$, it is called *Laplace smoothing*.
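As a small sketch, the Laplace-smoothed estimate above can be written directly; plugging in the counts from the toy example in the next section ($N_{ct}=5$ occurrences of "Chinese" in class *yes*, 8 words total in that class, and a vocabulary of 6 words):

```
def cond_prob(n_ct, n_c_total, vocab_size, alpha=1.0):
    """Smoothed estimate of P(x_i = t | y = c): (N_ct + alpha) / (sum_t' N_ct' + alpha*|v|)."""
    return (n_ct + alpha) / (n_c_total + alpha * vocab_size)

print(cond_prob(5, 8, 6))  # 6/14, i.e. 3/7
print(cond_prob(0, 8, 6))  # an unseen word still gets a nonzero probability: 1/14
```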
## Toy Example
Let's assume that we have the small dataset shown below (the example is adapted from [here](https://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html)):

According to this dataset, we want to classify the test document. The vocabulary has 6 words: {"Chinese", "Tokyo", "Japan", "Beijing", "Shanghai", "Macao"}. Therefore, each document is a vector of these words. For example, document $d_1 = <2,0,0,1,0,0>$. Now, the first step to classify the test document $d_5$ is to calculate the priors:
$\hat{P}(y=yes)=\frac{3}{4}$ and $\hat{P}(y=no)=\frac{1}{4}$. Then, we compute the conditional probabilities, assuming $\alpha=1$:
$$\hat{P}(Chinese|y=yes)=\dfrac{5+1}{8+6}=\dfrac{6}{14}=\dfrac{3}{7}$$
$$\hat{P}(Tokyo|y=yes)=\hat{P}(Japan|y=yes)=\dfrac{0+1}{8+6}=\dfrac{1}{14}$$
$$\hat{P}(Chinese|y=no)=\dfrac{1+1}{3+6}=\dfrac{2}{9}$$
$$\hat{P}(Tokyo|y=no)=\hat{P}(Japan|y=no)=\dfrac{1+1}{3+6}=\dfrac{2}{9}$$
We then get,
$$\hat{P}(y=yes|d_5)\propto 3/4 \cdot (3/7)^3 \cdot 1/14 \cdot 1/14 \approx 0.0003$$
$$\hat{P}(y=no|d_5)\propto 1/4 \cdot (2/9)^3 \cdot 2/9 \cdot 2/9 \approx 0.0001$$
Thus, the classifier assigns the test document to class $c=yes$, i.e. China. The reason for this classification decision is that the three occurrences of the positive indicator *Chinese* in $d_5$ outweigh the occurrences of the two negative indicators *Japan* and *Tokyo*.
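The two posterior scores above can be checked with a couple of lines:

```
# posterior scores for the test document d_5 = (Chinese, Chinese, Chinese, Tokyo, Japan)
p_yes = (3/4) * (3/7)**3 * (1/14) * (1/14)
p_no  = (1/4) * (2/9)**3 * (2/9)  * (2/9)

print(round(p_yes, 4), round(p_no, 4))  # 0.0003 0.0001
```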
## Implementation
For sentiment classification, I used the popular IMDB dataset from [here](https://ai.stanford.edu/~amaas/data/sentiment/). This dataset provides a set of 25,000 highly polar movie reviews for training, and 25,000 for testing.
```
#hide
import os
import sys
import time
import numpy as np
from tqdm import tqdm
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
#export
def read_dir(dir_path, label):
    """Read all the files in the directory `dir_path` with the label `label` and
    return a list of tuples (text, label)."""
    files = os.listdir(dir_path)
    data = []
    for file in tqdm(files, file=sys.stdout):
        with open(os.path.join(dir_path, file), 'r', encoding='utf-8') as f:
            data.append((f.read(), label))
    return data
def load_data(task):
    """Load all the positive and negative examples for the training or test set
    according to the argument `task`, and return the shuffled data."""
    neg_dir_path = os.path.join('imdb_data', task, 'neg')
    pos_dir_path = os.path.join('imdb_data', task, 'pos')
    neg_data = read_dir(neg_dir_path, label=0)
    pos_data = read_dir(pos_dir_path, label=1)
    data = neg_data + pos_data
    np.random.shuffle(data)
    return data
def train(x_train, y_train, stop_words='english', ngram_range=(1, 1), max_features=None):
    """Create the BOW (i.e. word-count matrix) for the training set, depending on the
    input arguments: whether or not to remove `stop_words`, the `ngram_range` (unigrams,
    bigrams, etc.) and `max_features` (the size of the vocabulary).
    Return the fitted model and vectorizer objects."""
    vectorizer = CountVectorizer(stop_words=stop_words, ngram_range=ngram_range,
                                 min_df=10, max_df=0.9, max_features=max_features)
    print('Vectorizing training data, i.e. creating the word-count matrix...', end=' ')
    x_train_vectorized = vectorizer.fit_transform(x_train)
    print('done!\n')
    print('Start training...')
    model = MultinomialNB()
    model.fit(x_train_vectorized, y_train)
    print('Training done!')
    print('Number of documents = {} | Number of features = {}'.format(
        x_train_vectorized.shape[0], x_train_vectorized.shape[1]))
    return model, vectorizer
def test(x_test, y_test, model):
    """Perform the prediction on the test set `x_test` and measure the accuracy based on
    the actual labels `y_test`. Return the predictions and accuracy."""
    print('Start testing...')
    predictions = model.predict(x_test)
    accuracy = model.score(x_test, y_test)
    print('done!')
    return predictions, accuracy
def run(x_train, y_train, x_test, y_test, stop_words, ngram_range, max_features):
    """Vectorize the data, create the model, run training and testing, and measure
    the accuracy for the given input arguments."""
    st_time = time.time()
    model, vectorizer = train(x_train, y_train, stop_words, ngram_range, max_features)
    en_time = time.time() - st_time
    print('Training time: {:.2f} s'.format(en_time))
    print()
    print('Vectorizing test data...', end=' ')
    x_test_vectorized = vectorizer.transform(x_test)
    print('done!')
    print('Test data shape = ', x_test_vectorized.shape)
    predictions, accuracy = test(x_test_vectorized, y_test, model)
    print('accuracy = {:.2f}'.format(accuracy))
    return predictions, accuracy
```
First, we read the files and create our training and test sets.
```
print('Loading training data...')
train_data = load_data('train')
print('done!')
print('Loading test data...')
test_data = load_data('test')
print('done!')
# Unpack the (text, label) for training and test data
x_train, y_train = zip(*train_data)
x_test, y_test = zip(*test_data)
```
There are many different configurations that we can experiment with. To start off, we train the model with the input arguments below: remove stop words, build unigrams and bigrams for all the documents, and take the entire vocabulary into account. After training, we get useful information such as the accuracy of the model and the total training time.
```
# Training the model with following parameters
# Remove stop words
stop_words = 'english'
# Consider uni-grams and bi-grams
ngram_range = (1, 2)
# Take all the features into account
max_features = None
preds, acc = run(x_train, y_train, x_test, y_test, stop_words, ngram_range, max_features)
```
In this experiment, we only consider unigrams (i.e. representing documents by single terms). The remaining input parameters are as before. We can see that the accuracy decreases, which indicates that bigrams help the classification task.
```
# Another configuration
stop_words = 'english'
ngram_range = (1, 1)
max_features = None
preds, acc = run(x_train, y_train, x_test, y_test, stop_words, ngram_range, max_features)
```
In this configuration, we keep only the 5,000 most frequent features, but still take bigrams into account when performing the classification task. It gives better accuracy compared with the previous experiment.
```
# Another configuration
stop_words = 'english'
ngram_range = (1, 2)
max_features = 5000
preds, acc = run(x_train, y_train, x_test, y_test, stop_words, ngram_range, max_features)
```
In this experiment, we keep the stop words but limit the vocabulary size to 5,000 and only consider unigrams. The result shows that cutting the vocabulary size lowers the accuracy.
```
# Another configuration
stop_words = None
ngram_range = (1, 1)
max_features = 5000
preds, acc = run(x_train, y_train, x_test, y_test, stop_words, ngram_range, max_features)
```
In the last experiment, we keep the stop words, consider both unigrams and bigrams, and take the entire vocabulary into account. It gives us the best accuracy among all the configurations we experimented with. This demonstrates that, unlike in some other classification tasks, keeping stop words helps sentiment analysis.
```
# Another configuration
stop_words = None
ngram_range = (1, 2)
max_features = None
preds, acc = run(x_train, y_train, x_test, y_test, stop_words, ngram_range, max_features)
```
There are other configurations that I did not experiment with. But I leave that as an exercise. ;)
## Training an LSTM model on the IMDB sentiment classification task
The code below is adapted directly from the [Keras website](https://keras.io/examples/imdb_lstm/). It implements an LSTM model on the same IMDB dataset for the sentiment analysis task.
```
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding
from tensorflow.keras.layers import LSTM
from tensorflow.keras.datasets import imdb
max_features = 20000
# cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
```
It takes a long time to run on a CPU, so I ran it on Google Colab, where training took roughly 30 minutes. You can see the result below:
```
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 80)
x_test shape: (25000, 80)
Build model...
Train...
Train on 25000 samples, validate on 25000 samples
Epoch 1/15
25000/25000 [==============================] - 137s 5ms/sample - loss: 0.4546 - accuracy: 0.7889 - val_loss: 0.3738 - val_accuracy: 0.8394
Epoch 2/15
25000/25000 [==============================] - 133s 5ms/sample - loss: 0.2930 - accuracy: 0.8816 - val_loss: 0.3865 - val_accuracy: 0.8281
Epoch 3/15
25000/25000 [==============================] - 130s 5ms/sample - loss: 0.2111 - accuracy: 0.9173 - val_loss: 0.4440 - val_accuracy: 0.8348
Epoch 4/15
25000/25000 [==============================] - 128s 5ms/sample - loss: 0.1484 - accuracy: 0.9449 - val_loss: 0.4976 - val_accuracy: 0.8283
Epoch 5/15
25000/25000 [==============================] - 127s 5ms/sample - loss: 0.1022 - accuracy: 0.9627 - val_loss: 0.6127 - val_accuracy: 0.8238
Epoch 6/15
25000/25000 [==============================] - 126s 5ms/sample - loss: 0.0781 - accuracy: 0.9734 - val_loss: 0.6076 - val_accuracy: 0.8173
Epoch 7/15
25000/25000 [==============================] - 126s 5ms/sample - loss: 0.0587 - accuracy: 0.9794 - val_loss: 0.7733 - val_accuracy: 0.8188
Epoch 8/15
25000/25000 [==============================] - 126s 5ms/sample - loss: 0.0430 - accuracy: 0.9860 - val_loss: 0.9582 - val_accuracy: 0.7603
Epoch 9/15
25000/25000 [==============================] - 125s 5ms/sample - loss: 0.0482 - accuracy: 0.9847 - val_loss: 0.7996 - val_accuracy: 0.8190
Epoch 10/15
25000/25000 [==============================] - 125s 5ms/sample - loss: 0.0270 - accuracy: 0.9916 - val_loss: 0.8270 - val_accuracy: 0.8106
Epoch 11/15
25000/25000 [==============================] - 125s 5ms/sample - loss: 0.0226 - accuracy: 0.9927 - val_loss: 0.8667 - val_accuracy: 0.8120
Epoch 12/15
25000/25000 [==============================] - 126s 5ms/sample - loss: 0.0176 - accuracy: 0.9944 - val_loss: 1.0325 - val_accuracy: 0.8141
Epoch 13/15
25000/25000 [==============================] - 126s 5ms/sample - loss: 0.0149 - accuracy: 0.9948 - val_loss: 0.9852 - val_accuracy: 0.8104
Epoch 14/15
25000/25000 [==============================] - 126s 5ms/sample - loss: 0.0140 - accuracy: 0.9955 - val_loss: 1.0443 - val_accuracy: 0.8127
Epoch 15/15
25000/25000 [==============================] - 125s 5ms/sample - loss: 0.0133 - accuracy: 0.9956 - val_loss: 1.1763 - val_accuracy: 0.8127
25000/25000 [==============================] - 16s 656us/sample - loss: 1.1763 - accuracy: 0.8127
Test score: 1.1762849241028726
Test accuracy: 0.81268
```
## Comparing Naive Bayes with LSTM for Sentiment Analysis
If we compare Naive Bayes with the LSTM, some interesting observations emerge:
1. Implementing Naive Bayes is very straightforward compared to an LSTM.
2. Training NB is extremely fast (a few seconds), whereas this LSTM implementation takes about 30 minutes on a GPU, even though it caps the vocabulary at the 20,000 most frequent words.
3. NB has very few parameters to tune, whereas an LSTM requires a lot of fine-tuning.
4. Scaling Naive Bayes to large datasets with millions of documents is easy, whereas an LSTM would need substantially more resources.
If you look at the image below, you will notice that the state of the art for sentiment analysis belongs to a technique that uses Naive Bayes with bag-of-n-gram features; see the [paper](https://www.aclweb.org/anthology/P19-2057.pdf) for the details. This method achieves higher accuracy than many purely deep-learning methods such as BERT and LSTM+CNN.
**Don't get me wrong!** I am a big fan of neural networks and deep learning techniques. The point I am trying to make is that in many cases we may not need very complex methods for our tasks. My approach is to always start off with simpler techniques and, only if they are not satisfactory, move to more sophisticated ones.
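To illustrate point 1, a multinomial Naive Bayes sentiment classifier fits in a few lines of plain Python. This is only a sketch: the toy reviews, labels, and function names below are invented for illustration.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Fit multinomial Naive Bayes with add-one (Laplace) smoothing."""
    classes = sorted(set(labels))
    priors = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    counts = {c: Counter() for c in classes}
    for doc, label in zip(docs, labels):
        counts[label].update(doc.lower().split())
    vocab = {w for c in classes for w in counts[c]}
    loglik = {}
    for c in classes:
        total = sum(counts[c].values()) + len(vocab)
        loglik[c] = {w: math.log((counts[c][w] + 1) / total) for w in vocab}
        loglik[c]["<unk>"] = math.log(1 / total)  # smoothed mass for unseen words
    return priors, loglik

def predict_nb(doc, priors, loglik):
    """Return the class with the highest log posterior for a document."""
    def score(c):
        lk = loglik[c]
        return priors[c] + sum(lk.get(w, lk["<unk>"]) for w in doc.lower().split())
    return max(priors, key=score)

docs = ["great movie loved it", "terrible plot awful acting",
        "loved the acting great fun", "awful movie terrible"]
labels = ["pos", "neg", "pos", "neg"]
priors, loglik = train_nb(docs, labels)
print(predict_nb("loved it great", priors, loglik))  # "pos"
```

Training is a single counting pass over the corpus, which is why NB scales so easily compared with gradient-based training.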

*State-of-the-art sentiment analysis on the IMDB dataset. [[Image Source](https://paperswithcode.com/sota/sentiment-analysis-on-imdb)]*
```
from utils import crps_normal
mean = 1
sigma = 3
obs = 2
crps_normal(mean, sigma, obs)
from scipy.stats import norm
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
x = np.linspace(-10, 10, 100)
plt.plot(x, norm.pdf(x, mean, sigma))
plt.axvline(obs)
plt.plot(x, norm.cdf(x, mean, sigma))
plt.axvline(obs)
bins = np.arange(-10, 11)
bins
from scipy.stats import binned_statistic
discrete_pdf = binned_statistic(x, norm.pdf(x, mean, sigma), bins=bins)[0]
discrete_pdf
plt.bar(bins[:-1], discrete_pdf)
plt.axvline(obs)
discrete_cdf = binned_statistic(x, norm.cdf(x, mean, sigma), bins=bins)[0]
discrete_cdf
plt.bar(bins[:-1], discrete_cdf)
plt.bar(bins[:-1], np.asarray(bins[:-1] > obs, dtype='float'), color='g', zorder=0.1)
plt.axvline(obs)
plt.savefig('./test_plot')
np.cumsum(discrete_pdf), discrete_cdf
discrete_cdf - np.asarray(bins[:-1] > obs, dtype='float')
np.mean((discrete_cdf - np.asarray(bins[:-1] > obs, dtype='float'))**2)
```
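The cells above approximate the CRPS as the mean squared difference between the discretized forecast CDF and the step-function CDF of the observation. A minimal sketch of that idea follows; the function name and the erf-based normal CDF are my own (the notebook uses `scipy.stats.norm` and bin-averaged values instead):

```python
import math
import numpy as np

def normal_cdf(x, mean, sigma):
    """Closed-form normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sigma * math.sqrt(2.0))))

def discrete_crps(mean, sigma, obs, bin_edges):
    """Brier-score-style approximation of the CRPS on a discrete grid.

    Compares the forecast CDF at the left bin edges against the
    Heaviside step function of the observation.
    """
    edges = np.asarray(bin_edges, dtype=float)[:-1]
    cdf = np.array([normal_cdf(e, mean, sigma) for e in edges])
    obs_cdf = (edges > obs).astype(float)  # step function jumping at obs
    return float(np.mean((cdf - obs_cdf) ** 2))

score = discrete_crps(mean=1, sigma=3, obs=2, bin_edges=np.arange(-10, 11))
```

A forecast centred far from the observation should score worse; for instance `discrete_crps(1, 3, 8, np.arange(-10, 11))` is larger than the value above.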
## Sebastian's example
```
probs = np.array([0.14935065, 0.15097403, 0.04707792, 0.13149351, 0.10064935, 0.08116883,
0.11363636, 0.02110390, 0.09902597, 0.10551948])
probs.sum()
obs = 4.577418
bin_edges = np.arange(0, 11, 1)
plt.bar(bin_edges[:-1] + 0.5, probs, width=1)
plt.axvline(obs, c='r')
plt.show()
cum_probs = np.cumsum(probs)
cum_obs = np.array([0, 1])
cum_obs_bin_edges = np.array([0, obs, 10])
plt.bar(bin_edges[:-1] + 0.5, cum_probs, width=1, alpha=0.5)
plt.bar(cum_obs_bin_edges[:-1], cum_obs, width=np.diff(cum_obs_bin_edges),
zorder =0.1)
bin_obs = np.array((bin_edges[:-1] < obs) & (bin_edges[1:] > obs), dtype=int)
bin_obs
plt.bar(bin_edges[:-1] + 0.5, probs, width=1)
plt.bar(bin_edges[:-1] + 0.5, bin_obs, width=1)
plt.axvline(obs, c='r')
plt.show()
rps = np.sum((probs - bin_obs) ** 2)
rps
cum_bin_obs = np.cumsum(bin_obs)
plt.bar(bin_edges[:-1] + 0.5, cum_probs, width=1, alpha=0.8)
plt.bar(bin_edges[:-1] + 0.5, cum_bin_obs, width=1, zorder=0.1)
plt.axvline(obs, c='r')
plt.show()
approx_crps = np.sum((cum_probs - cum_bin_obs) ** 2)
approx_crps
insert_idx = np.where(obs<bin_edges)[0][0]
new_bin_edges = np.insert(np.array(bin_edges, dtype=float), insert_idx, obs)
new_bin_edges
new_probs = np.insert(probs, insert_idx, probs[insert_idx-1])
plt.bar(new_bin_edges[:-1] + 0.5 * np.diff(new_bin_edges), new_probs,
width=np.diff(new_bin_edges), linewidth=1, edgecolor='k')
plt.show()
new_bin_obs = np.array((new_bin_edges[:-1] <= obs) & (new_bin_edges[1:] > obs), dtype=int)
new_bin_edges[:-1], obs,(new_bin_edges[:-1] < obs)
new_bin_obs
new_cum_bin_obs = np.cumsum(new_bin_obs)
new_cum_probs = np.cumsum(new_probs * np.diff(new_bin_edges))
plt.bar(new_bin_edges[:-1] + 0.5 * np.diff(new_bin_edges), new_cum_probs,
width=np.diff(new_bin_edges), linewidth=1, edgecolor='k', alpha=0.8)
plt.bar(new_bin_edges[:-1] + 0.5 * np.diff(new_bin_edges), new_cum_bin_obs,
width=np.diff(new_bin_edges), linewidth=1, color='r', zorder=0.1)
plt.bar(bin_edges[:-1] + 0.5, cum_probs, width=1, alpha=0.8)
plt.axvline(obs, c='r')
plt.show()
crps = np.sum((((new_cum_probs - new_cum_bin_obs) * np.diff(new_bin_edges)) ** 2) )
crps
np.sum(((new_cum_probs - new_cum_bin_obs) **2 * np.diff(new_bin_edges)**2))
(cum_probs - cum_bin_obs) ** 2
np.diff(new_bin_edges)
a = np.concatenate(([0], new_cum_probs))
fix = (a[:-1] + a[1:]) / 2
plt.plot(new_bin_edges, np.concatenate(([0], new_cum_probs)))
plt.bar(new_bin_edges[:-1] + 0.5 * np.diff(new_bin_edges), new_cum_probs,
width=np.diff(new_bin_edges), linewidth=1, edgecolor='k', alpha=0.8)
plt.bar(new_bin_edges[:-1] + 0.5 * np.diff(new_bin_edges), fix,
width=np.diff(new_bin_edges), linewidth=1, color='r', zorder=2)
plt.plot(new_bin_edges, np.concatenate(([0], new_cum_bin_obs)))
plt.axvline(obs, c='g')
plt.bar(new_bin_edges[:-1] + 0.5 * np.diff(new_bin_edges), new_cum_probs,
width=np.diff(new_bin_edges), linewidth=1, edgecolor='k', alpha=0.8)
plt.bar(new_bin_edges[:-1] + 0.5 * np.diff(new_bin_edges), fix,
width=np.diff(new_bin_edges), linewidth=1, color='r', zorder=2)
plt.axvline(obs, c='r')
plt.show()
crps = np.sum((((fix - new_cum_bin_obs) ) ** 2) *np.diff(new_bin_edges))
crps
```
### Function for real(?) CRPS
```
# Test data
preds = np.stack([probs, probs])
preds.shape, preds # [sample, bin]
targets = np.array([obs, 9.0000])
targets.shape, targets
a = np.repeat(np.atleast_2d(bin_edges), targets.shape[0], axis=0)
a.shape
a.T - targets  # transpose so shape (bins, samples) broadcasts against (samples,)
b = a.T - targets
b[b < 0] = 999
b
insert_idx = np.argmin(b, axis=0)
insert_idx
a
new_bin_edges = np.insert(np.array(a, dtype=float), insert_idx, np.atleast_2d(targets), axis=1)
new_bin_edges
c = np.array([np.insert(np.array(bin_edges, dtype=float), insert_idx[i], targets[i])
for i in range(targets.shape[0])])
c
d = np.array([np.insert(preds[i], insert_idx[i], preds[i, insert_idx[i]-1])
for i in range(targets.shape[0])])
d
new_bin_obs = np.array([(c[i, :-1] <= targets[i]) & (c[i, 1:] > targets[i])
for i in range(targets.shape[0])], dtype=int)
new_bin_obs
d = d * np.diff(c, axis=1)
d
new_cum_bin_obs = np.cumsum(new_bin_obs, axis=1)
new_cum_probs = np.cumsum(d, axis=1)
new_cum_bin_obs, new_cum_probs
new_cum_probs = (new_cum_probs.T / new_cum_probs[:, -1]).T
new_cum_probs
f = np.concatenate((np.zeros((new_cum_probs.shape[0], 1)), new_cum_probs), axis=1)
fix = (a[:-1] + a[1:]) / 2
import pdb
def maybe_correct_crps(preds, targets, bin_edges):
#pdb.set_trace()
#Convert input arrays
preds = np.array(np.atleast_2d(preds), dtype='float')
targets = np.array(np.atleast_1d(targets), dtype='float')
# preds [sample, bins]
# Find insert index
mat_bins = np.repeat(np.atleast_2d(bin_edges), targets.shape[0], axis=0)
b = mat_bins.T - targets
b[b < 0] = 999
insert_idxs = np.argmin(b, axis=0)
# Insert
ins_bin_edges = np.array([np.insert(np.array(bin_edges, dtype=float),
insert_idxs[i], targets[i])
for i in range(targets.shape[0])])
ins_preds = np.array([np.insert(preds[i], insert_idxs[i], preds[i, insert_idxs[i]-1])
for i in range(targets.shape[0])])
# Get obs
bin_obs = np.array([(ins_bin_edges[i, :-1] <= targets[i]) &
(ins_bin_edges[i, 1:] > targets[i])
for i in range(targets.shape[0])], dtype=int)
# Cumsum with weights
ins_preds *= np.diff(ins_bin_edges, axis=1)
cum_bin_obs = np.cumsum(bin_obs, axis=1)
cum_probs = np.cumsum(ins_preds, axis=1)
cum_probs = (cum_probs.T / cum_probs[:, -1]).T
# Get adjusted preds
adj_cum_probs = np.concatenate((np.zeros((cum_probs.shape[0], 1)), cum_probs), axis=1)
adj_cum_probs = (adj_cum_probs[:, :-1] + adj_cum_probs[:, 1:]) / 2
# Compute CRPS
crps = np.mean(np.sum(((adj_cum_probs - cum_bin_obs) ** 2) *
np.diff(ins_bin_edges, axis=1), axis=1))
return crps
maybe_correct_crps(preds, targets, bin_edges)
def maybe_correct_crps2(preds, targets, bin_edges):
#pdb.set_trace()
#Convert input arrays
preds = np.array(np.atleast_2d(preds), dtype='float')
targets = np.array(np.atleast_1d(targets), dtype='float')
# preds [sample, bins]
# Find insert index
mat_bins = np.repeat(np.atleast_2d(bin_edges), targets.shape[0], axis=0)
b = mat_bins.T - targets
b[b < 0] = 999
insert_idxs = np.argmin(b, axis=0)
# Insert
ins_bin_edges = np.array([np.insert(np.array(bin_edges, dtype=float),
insert_idxs[i], targets[i])
for i in range(targets.shape[0])])
ins_preds = np.array([np.insert(preds[i], insert_idxs[i], preds[i, insert_idxs[i]-1])
for i in range(targets.shape[0])])
# Get obs
bin_obs = np.array([(ins_bin_edges[i, :-1] <= targets[i]) &
(ins_bin_edges[i, 1:] > targets[i])
for i in range(targets.shape[0])], dtype=int)
# Cumsum with weights
ins_preds *= np.diff(ins_bin_edges, axis=1)
cum_bin_obs = np.cumsum(bin_obs, axis=1)
cum_probs = np.cumsum(ins_preds, axis=1)
cum_probs = (cum_probs.T / cum_probs[:, -1]).T
# Get adjusted preds
adj_cum_probs = np.concatenate((np.zeros((cum_probs.shape[0], 1)), cum_probs), axis=1)
print(adj_cum_probs.shape, cum_bin_obs.shape)
sq_list = []
for i in range(cum_bin_obs.shape[1]):
x_l = np.abs(cum_bin_obs[:, i] - adj_cum_probs[:, i])
x_r = np.abs(cum_bin_obs[:, i] - adj_cum_probs[:, i + 1])
sq = 1./3. * (x_l ** 2 + x_l * x_r + x_r ** 2)
if np.isnan(np.mean(sq)):
pdb.set_trace()
sq_list.append(sq)
crps = np.sum(np.array(sq_list).T * np.diff(ins_bin_edges, axis=1), axis=1)
return crps
maybe_correct_crps2(preds, targets, bin_edges)
np.nanmax([np.nan, 1])
```
| github_jupyter |
# Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem** if the training dataset is not big enough. Sure, such a model does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!
**You will learn to:** Use regularization in your deep learning models.
Let's first import the packages you are going to use.
### <font color='darkblue'> Updates to Assignment </font>
#### If you were working on a previous version
* The current notebook filename is version "2a".
* You can find your work in the file directory as version "2".
* To see the file directory, click on the Coursera logo at the top left of the notebook.
#### List of Updates
* Clarified explanation of 'keep_prob' in the text description.
* Fixed a comment so that keep_prob and 1-keep_prob add up to 100%
* Updated print statements and 'expected output' for easier visual comparisons.
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
```
train_X, train_Y, test_X, test_Y = load_2D_dataset()
```
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
## 1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting the `keep_prob` to a value less than one
You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
```
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
```
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
```
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The non-regularized model is obviously overfitting the training set; it is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
## 2 - L2 Regularization
The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
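As a quick sanity check of formula (2)'s regularization term on hand-sized matrices (the values below are arbitrary):

```python
import numpy as np

W1 = np.array([[1., 2.], [3., 4.]])   # sum of squares = 30
W2 = np.array([[0.5, -0.5]])          # sum of squares = 0.5
lambd, m = 0.1, 10

# lambda/(2m) times the summed squared entries of all weight matrices
l2_cost = lambd / (2 * m) * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
print(l2_cost)  # 0.1/(2*10) * 30.5 = 0.1525
```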
```
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = lambd/(2*m)*(np.sum(np.square(W1))+np.sum(np.square(W2))+np.sum(np.square(W3)))
    ### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
```
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = \n"+ str(grads["dW1"]))
print ("dW2 = \n"+ str(grads["dW2"]))
print ("dW3 = \n"+ str(grads["dW3"]))
```
**Expected Output**:
```
dW1 =
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 =
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 =
[[-1.77691347 -0.11832879 -0.09397446]]
```
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
```
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
```
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
**What is L2-regularization actually doing?**:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
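The "weight decay" name can be made concrete: adding the $\frac{\lambda}{m} W$ term to the gradient is algebraically the same as first shrinking $W$ by a constant factor and then taking the ordinary gradient step. A small numerical sketch (the shapes and hyperparameter values are arbitrary):

```python
import numpy as np

np.random.seed(0)
W = np.random.randn(3, 2)
dW_plain = np.random.randn(3, 2)   # gradient of the unregularized cost
lambd, m, lr = 0.7, 50, 0.3

# Update with the explicit L2 gradient term (lambd/m) * W ...
W_l2 = W - lr * (dW_plain + (lambd / m) * W)

# ... is identical to decaying W first, then taking the plain step.
W_decay = W * (1 - lr * lambd / m) - lr * dW_plain

assert np.allclose(W_l2, W_decay)
```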
## 3 - Dropout
Finally, **dropout** is a widely used regularization technique that is specific to deep learning.
**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should definitely be possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in either the forward or the backward propagation of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of any one other neuron, because that other neuron might be shut down at any time.
### 3.1 - Forward propagation with dropout
**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
**Instructions**:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 1 with probability (`keep_prob`), and 0 otherwise.
**Hint:** Let's say that keep_prob = 0.8, which means that we want to keep about 80% of the neurons and drop out about 20% of them. We want to generate a vector that has 1's and 0's, where about 80% of them are 1 and about 20% are 0.
This python statement:
`X = (X < keep_prob).astype(int)`
is conceptually the same as this if-else statement (for the simple case of a one-dimensional array) :
```
for i,v in enumerate(x):
if v < keep_prob:
x[i] = 1
else: # v >= keep_prob
x[i] = 0
```
Note that the `X = (X < keep_prob).astype(int)` works with multi-dimensional arrays, and the resulting output preserves the dimensions of the input array.
Also note that without using `.astype(int)`, the result is an array of booleans `True` and `False`, which Python automatically converts to 1 and 0 if we multiply it with numbers. (However, it's better practice to convert data into the data type that we intend, so try using `.astype(int)`.)
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
```
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = (D1 < keep_prob).astype(int) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1*D1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = (D2 < keep_prob).astype(int)                          # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2*D2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
```
**Expected Output**:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
### 3.2 - Backward propagation with dropout
**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
**Instruction**:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
```
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = \n" + str(gradients["dA1"]))
print ("dA2 = \n" + str(gradients["dA2"]))
```
**Expected Output**:
```
dA1 =
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 =
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
```
Let's now run the model with dropout (`keep_prob = 0.86`). This means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
```
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
```
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Note**:
- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.
<font color='blue'>
**What you should remember about dropout:**
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even for values of keep_prob other than 0.5.
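The scaling argument above can be checked numerically. Below is a small illustrative sketch (not part of the graded assignment) showing that inverted dropout preserves the expected value of the activations:

```python
import numpy as np

np.random.seed(0)
keep_prob = 0.5
A = np.ones((1000, 1000))                  # activations with mean 1.0
D = np.random.rand(*A.shape) < keep_prob   # dropout mask, as in the exercise
A_dropped = (A * D) / keep_prob            # shut down neurons, then rescale

print(A.mean())                    # 1.0
print(round(A_dropped.mean(), 2))  # ~1.0: expected value preserved
```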
## 4 - Conclusions
**Here are the results of our three models**:
<table>
<tr>
<td>
**model**
</td>
<td>
**train accuracy**
</td>
<td>
**test accuracy**
</td>
</tr>
<tr>
<td>
3-layer NN without regularization
</td>
<td>
95%
</td>
<td>
91.5%
</td>
</tr>
<tr>
<td>
3-layer NN with L2-regularization
</td>
<td>
94%
</td>
<td>
93%
</td>
</tr>
<tr>
<td>
3-layer NN with dropout
</td>
<td>
93%
</td>
<td>
95%
</td>
</tr>
</table>
Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations for finishing this assignment! And also for revolutionizing French football. :-)
<font color='blue'>
**What we want you to remember from this notebook**:
- Regularization will help you reduce overfitting.
- Regularization will drive your weights to lower values.
- L2 regularization and Dropout are two very effective regularization techniques.
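As a minimal illustration of the last two points (a toy one-parameter example, not the assignment's model), gradient descent with an L2 penalty settles at a smaller weight than the unregularized optimum:

```python
# Toy example: minimize (w - 1)^2 + lambd * w^2 by gradient descent.
# The L2 term pulls the optimum from w = 1 down to w = 1 / (1 + lambd).
def fit(lambd, steps=200, lr=0.1):
    w = 5.0
    for _ in range(steps):
        grad = 2 * (w - 1.0) + 2 * lambd * w
        w -= lr * grad
    return w

print(round(fit(0.0), 3))   # -> 1.0  (no regularization)
print(round(fit(1.0), 3))   # -> 0.5  (weights driven to lower values)
```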
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import pymc3 as pm
import arviz as az
```
# Preparation of results for business: Newspaper Sales
We will illustrate how to prepare results for a business audience using ArviZ. The motivating example we'll use is a classic example in Industrial Engineering, the [newsvendor problem](https://en.wikipedia.org/wiki/Newsvendor_model). In brief, the problem is as follows:
As a newspaper girl, or boy, you face a dilemma: buy too many newspapers and you'll lose money from newspapers that expire; buy too few newspapers and you'll miss potential sales, and lose money. Using past sales information and ArviZ, we can create a compelling, data-centric pitch about what business choice to make in this scenario.
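For reference, the classic (non-Bayesian) newsvendor solution orders the critical-fractile quantile of the demand distribution. The sketch below assumes a $2 sale price and $1 unit cost, which are made-up numbers for illustration:

```python
from scipy import stats

price, cost = 2.0, 1.0                  # assumed sale price and unit cost
underage = price - cost                 # profit lost per missed sale
overage = cost                          # loss per unsold (expired) paper
critical_ratio = underage / (underage + overage)

demand = stats.poisson(30)              # same demand model we simulate below
order_qty = demand.ppf(critical_ratio)  # smallest q with P(demand <= q) >= ratio
print(order_qty)
```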
## Outline
* Posterior Estimation
* Posterior Predictive
### Simulation
As we have in past chapters, we're going to create some simulated sales data. Our latent (hidden) parameter is 30 sales per day on average, and we simulate 20 days of sales. We use the Poisson distribution because we can only sell positive whole numbers of newspapers (e.g., 12, 14, 21).
```
newspaper_sales = stats.poisson.rvs(30, size=20)
```
### The observed data
At the end of 20 days, these are the sales numbers we see. Given 20 days of sales, you must make a case for how many newspapers to buy in the future.
```
newspaper_sales
```
## Fitting the data
Before any results can be prepared, we must first fit a model and generate some results.
```
with pm.Model() as newspaper_model:
_lambda = pm.Normal("λ", newspaper_sales.mean(), 10)
sales = pm.Poisson("sales", _lambda, observed=newspaper_sales)
trace = pm.sample(draws=2000)
ppc = pm.sample_posterior_predictive(trace)
data = az.from_pymc3(trace=trace, posterior_predictive=ppc)
```
## Preparation of results
Using ArviZ there are numerous ways we can share the results depending on the audience. The three we will cover are a numerical summary with `az.summary`, posterior parameter estimation using `az.plot_posterior`, and posterior predictive visualization using `az.plot_ppc`.
### Numerical Summary of estimated parameters using az.summary
The simplest way to share the results is the numerical summarization in `az.summary`.
```
az.summary(data)
```
Using the above summary we get a couple of useful data points. The mean of the expected number of sales each day is ~30.2 and the standard deviation of sales is ~1.2. The advantages of a numerical summary are ease of communication and succinctness. Oftentimes in business settings you may need to describe results without the aid of a computer or chart. Additionally, in most businesses your surrounding colleagues will likely not be statisticians; however, most people can intuit the ideas of mean and standard deviation, and providing a mean and sd estimate is quick and not overwhelming.
There are some downsides to numerical summaries. One is that by summarizing to two numbers, a lot of information is "left on the table". Additionally, in contexts where a visualization can be used, a picture of the parameter estimation is often much more memorable than two summary statistics. Lastly, the mean, standard deviation, and even the HPD estimates don't characterize all distributions well. For example, in our newspaper problem, if the distribution were bimodal, perhaps because of weekday sales versus weekend sales, none of the numerical estimates would correctly communicate this condition.
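A quick sketch of that last point (simulated toy data, with made-up weekday/weekend rates): a bimodal sales distribution can have roughly the same mean as a unimodal one, so the mean alone hides the structure:

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed weekday/weekend rates, chosen so the overall mean is ~30
bimodal = np.concatenate([rng.poisson(25, 7_000), rng.poisson(42, 3_000)])
unimodal = rng.poisson(30, 10_000)

# Nearly identical means (~30), but only the first sample has two modes
print(round(unimodal.mean(), 1), round(bimodal.mean(), 1))
```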
### Visual Summary of estimated parameters using plot_posterior
```
az.plot_posterior(data);
```
With plot_posterior we get a nice visualization of the parameter estimates, as well as a point estimate and the Highest Posterior Density (HPD) interval. In contrast to the numerical estimates, a posterior plot is much harder to gloss over. Posterior plots also characterize all, or nearly all, of the information inferred about the estimated parameters, including uncertainty and the shape of the distribution.
As with any visualization, though, this cannot be used outside of a visual presentation format, and it can be awkward to describe verbally to non-statisticians. Additionally, posteriors by nature are estimated distributions over *parameter space*. Many people unfamiliar with Bayesian statistics will incorrectly assume that the x axis is in the units of the data, in this case newspaper sales. This interpretation can lead to incorrect conclusions, as colleagues may assume that "there is a 94% chance that 28 to 31 newspapers will be sold in a day" and are in for a surprise when, on many occasions, more or fewer papers are sold.
### Visual Summary of estimated sales using Posterior Predictive Plot
```
az.plot_ppc(data, num_pp_samples=100);
```
With plot_ppc we get a plot of the observed data along with the simulated outcomes from the parameter estimates. The benefits of the posterior predictive plot are the same as those of the posterior plot, with the notable change that the x axis is in the units of the data. Posterior predictive plots therefore lend themselves to the correct interpretation that "15 to 40 newspapers will be sold a day, and it is less likely that 10 or 45 will be sold".
The downside to the posterior predictive plot is that explaining how it is generated is not typically a straightforward exercise. From personal experience, if you try to explain that you fit parameters using Bayesian statistics to estimate the posterior, then generated "1000" (or any arbitrary number of) future simulations, then took the KDE of them, most colleagues lose interest quickly.
## Practical Advice for using ArviZ in business settings
As a statistician in the workplace, your job is to help others make good decisions using data. This naturally requires your analysis to be rigorous and thorough, but it also requires you to present it in a way that others will heed. If your boss becomes bored with the details of posterior plots and subsequently fails to understand the conclusion, then your analysis was as ineffective as not running an analysis at all.
My advice would be to know your audience, their tolerance for mathematical explanations, and the context in which you can explain the data. If you're giving a visual presentation and have a couple of minutes, consider using the posterior plot or posterior predictive plot, as they lend themselves to a longer explanation. If you're given less time, consider using the numerical summaries from `az.summary` to quickly convey your results.
In either scenario, I would suggest memorizing a couple of the numerical summaries, such as the mean, median, or HPD, so that if an opportunity arises in a "hallway" conversation, or during a meeting without a presentation, you are able to share results quickly and without a visual aid.
<a href="https://colab.research.google.com/github/rlworkgroup/garage/blob/master/examples/jupyter/trpo_gym_tf_cartpole.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This Jupyter notebook demonstrates usage of [garage](https://github.com/rlworkgroup/garage).
In particular, it demonstrates the example `trpo_gym_tf_cartpole.py` file already available in [garage/examples/tf/](https://github.com/rlworkgroup/garage/blob/master/examples/tf/).
## Install pre-requisites
```
%%shell
echo "abcd" > mujoco_fake_key
git clone --depth 1 https://github.com/rlworkgroup/garage/
cd garage
bash scripts/setup_colab.sh --mjkey ../mujoco_fake_key --no-modify-bashrc > /dev/null
raise Exception("Please restart your runtime so that the installed dependencies for 'garage' can be loaded, and then resume running the notebook")
```
---
---
---
## Prepare for the training
```
# The contents of this cell and the one after are mostly copied from garage/examples/...
# Note that these need to be run twice if for the first time on a colab.research.google.com instance
# 1st time is to create the "personal config from template"
# 2nd time is the charm
from garage.np.baselines import LinearFeatureBaseline
from garage.envs import normalize
# from garage.envs.box2d import CartpoleEnv
from garage.experiment import run_experiment
from garage.tf.algos import TRPO
from garage.tf.envs import TfEnv
#from garage.tf.policies import GaussianMLPPolicy
from garage.tf.policies import CategoricalMLPPolicy
import gym
from garage.experiment import LocalRunner
from garage.logger import logger, StdOutput
# garage version of CartPole environment, has Discrete action space instead of Box
#env = TfEnv(normalize(CartpoleEnv()))
#policy = GaussianMLPPolicy(
# name="policy", env_spec=env.spec, hidden_sizes=(32, 32))
# gym version of CartPole. Check note above
# env = TfEnv(normalize(gym.make("CartPole-v0")))
# garage updated method of getting env
env = TfEnv(env_name='CartPole-v1')
policy = CategoricalMLPPolicy(
name="policy", env_spec=env.spec, hidden_sizes=(32, 32))
# create baseline
baseline = LinearFeatureBaseline(env_spec=env.spec)
# specify an algorithm
algo = TRPO(
env_spec=env.spec,
policy=policy,
baseline=baseline,
# Use these settings for garage version of env
# max_path_length=100,
# n_itr=100,
# Use these for gym version
max_path_length=200,
n_itr=20,
discount=0.99,
max_kl_step=0.01
)
```
## Train the algorithm
```
# start a tensorflow session so that we can keep it open after training and use the trained network to see it performing
import tensorflow as tf
sess = tf.InteractiveSession()
# initialize
sess.run(tf.compat.v1.global_variables_initializer())
# log to stdout
logger.add_output(StdOutput())
# train the algo
runner = LocalRunner()
runner.setup(algo=algo, env=env)
# use n_epochs = 100 for practical example, n_epochs = 10 for quick demo, n_epochs = 1 for smoke testing
runner.train(n_epochs=10, batch_size=10000, plot=False)
```
## Visualize a video of the algorithm playing
```
%%shell
# Prepare display for seeing a video of the policy in action in the jupyter notebook
# Note that this doesn't require a runtime restart
# https://stackoverflow.com/a/51183488/4126114
apt-get install python-opengl ffmpeg xvfb
pip install pyvirtualdisplay
```
```
# Virtual display
from pyvirtualdisplay import Display
virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()
# bugfix?
# Set an "id" field since missing for some reason (uncovered in monitor wrapper below)
env.spec.id = 1
# wrap the gym environment for recording a video of the policy performance
# https://kyso.io/eoin/openai-gym-jupyter?utm_campaign=News&utm_medium=Community&utm_source=DataCamp.com
import gym
from gym import wrappers
env = wrappers.Monitor(env, "./gym-results", force=True)
obs = env.reset()
for i in range(1000):
#action = env.action_space.sample()
action, _ = policy.get_action(obs)
obs, reward, done, info = env.step(action)
if done: break
print("done at step %i"%i)
env.close()
# Display the video in the jupyter notebook
# Click the play button below to watch the video
import io
import base64
from IPython.display import HTML
video = io.open('./gym-results/openaigym.video.%s.video000000.mp4' % env.file_infix, 'r+b').read()
encoded = base64.b64encode(video)
vid_ascii = encoded.decode('ascii')
HTML(data='''
<video width="360" height="auto" alt="test" controls><source src="data:video/mp4;base64,{0}" type="video/mp4" /></video>'''
.format(vid_ascii))
```
#### Outline
- for each dataset:
- load dataset;
- for each network:
- load network
- project 1000 test dataset samples
- save to metric dataframe
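Each benchmark below repeats the same timing loop. As a sketch, the pattern can be factored into a helper (the `time_transform` name is introduced here for illustration and is not used in the notebook):

```python
import time
import numpy as np

def time_transform(transform, X, n_repeats=10):
    """Time `transform(X)` n_repeats times; return per-run seconds."""
    times = []
    for _ in range(n_repeats):
        start = time.monotonic()
        transform(X)
        times.append(time.monotonic() - start)
    return times

# e.g. times = time_transform(embedder.transform, X_test_flat)
times = time_transform(np.sort, np.random.rand(1000, 100))
print(np.mean(times))
```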
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU (this may not be needed on your computer)
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0
import numpy as np
import pickle
import pandas as pd
import time
from umap import UMAP
from tfumap.umap import tfUMAP
import tensorflow as tf
from sklearn.decomposition import PCA
from openTSNE import TSNE
from tqdm.autonotebook import tqdm
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
output_dir = MODEL_DIR/'projections'
projection_speeds = pd.DataFrame(columns = ['method_', 'dimensions', 'dataset', 'speed'])
```
### Cassins DTW
```
dataset = 'cassins_dtw'
dims = (32,31,1)
```
##### load dataset
```
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
syllable_df = pd.read_pickle(DATA_DIR/'cassins'/ 'cassins.pickle')
top_labels = (
pd.DataFrame(
{i: [np.sum(syllable_df.labels.values == i)] for i in syllable_df.labels.unique()}
)
.T.sort_values(by=0, ascending=False)[:20]
.T
)
syllable_df = syllable_df[syllable_df.labels.isin(top_labels.columns)]
syllable_df = syllable_df.reset_index()
specs = np.array(list(syllable_df.spectrogram.values))
specs.shape
syllable_df['subset'] = 'train'
syllable_df.loc[:1000, 'subset'] = 'valid'
syllable_df.loc[1000:1999, 'subset'] = 'test'
Y_train = np.array(list(syllable_df.labels.values[syllable_df.subset == 'train']))
Y_valid = np.array(list(syllable_df.labels.values[syllable_df.subset == 'valid']))
Y_test = np.array(list(syllable_df.labels.values[syllable_df.subset == 'test']))
X_train = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'train']))  # / 255.
X_valid = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'valid']))  # / 255.
X_test = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'test']))  # / 255.
X_train_flat = X_train.reshape((len(X_train), np.prod(np.shape(X_train)[1:])))
from sklearn.preprocessing import OrdinalEncoder
enc = OrdinalEncoder()
Y_train = enc.fit_transform([[i] for i in Y_train]).astype('int').flatten()
X_test_flat = X_test.reshape((len(X_test), np.prod(np.shape(X_test)[1:])))
```
#### Network
##### 2 dims
```
load_loc = output_dir / dataset / 'network'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
embedder.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
times.append(end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['network', 2, dataset, end_time - start_time]
z = embedder.transform(X_test_flat);
np.save( MODEL_DIR/'projections' / dataset / 'network' / 'z_test.npy', z)
```
##### Network CPU
```
with tf.device('/CPU:0'):
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
embedder.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
times.append(end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['network-cpu', 2, dataset, end_time - start_time]
```
##### 64 dims
```
load_loc = output_dir / dataset /"64"/ 'network'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
embedder.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
times.append(end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['network', 64, dataset, end_time - start_time]
z = embedder.transform(X_test_flat);
out = MODEL_DIR/'projections' / dataset / '64' / 'network' / 'z_test.npy'
np.save( out, z)
```
##### Network CPU
```
with tf.device('/CPU:0'):
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
embedder.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
times.append(end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['network-cpu', 64, dataset, end_time - start_time]
```
### AE
##### 2 dims
```
load_loc = output_dir / dataset / 'autoencoder'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
decoding_method = "autoencoder",
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
decoder = tf.keras.models.load_model((load_loc / 'decoder').as_posix())
embedder.decoder = decoder
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
embedder.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
times.append(end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['autoencoder', 2, dataset, end_time - start_time]
z = embedder.transform(X_test_flat);
np.save( MODEL_DIR/'projections' / dataset / 'autoencoder' / 'z_test.npy', z)
```
##### 64 dims
```
load_loc = output_dir / dataset /"64"/ 'autoencoder'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
decoding_method = "autoencoder",
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
decoder = tf.keras.models.load_model((load_loc / 'decoder').as_posix())
embedder.decoder = decoder
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
embedder.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
times.append(end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['autoencoder', 64, dataset, end_time - start_time]
z = embedder.transform(X_test_flat);
out = MODEL_DIR/'projections' / dataset / '64' / 'autoencoder' / 'z_test.npy'
np.save( out, z)
```
#### UMAP-learn
##### 2 dims
```
embedder = UMAP(n_components = 2, verbose=True)
z_umap = embedder.fit_transform(X_train_flat)
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
embedder.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
times.append(end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['umap-learn', 2, dataset, end_time - start_time]
z = embedder.transform(X_test_flat);
out = MODEL_DIR/'projections' / dataset / 'umap-learn' / 'z_test.npy'
ensure_dir(out)
np.save(out, z)
```
##### 64 dims
```
embedder = UMAP(n_components = 64, verbose=True)
z_umap = embedder.fit_transform(X_train_flat)
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
embedder.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['umap-learn', 64, dataset, end_time - start_time]
z = embedder.transform(X_test_flat);
out = MODEL_DIR/'projections' / dataset / '64' / 'umap-learn' / 'z_test.npy'
ensure_dir(out)
np.save(out, z)
```
#### PCA
##### 2 dims
```
pca = PCA(n_components=2)
z = pca.fit_transform(X_train_flat)
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
pca.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
times.append(end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['pca', 2, dataset, end_time - start_time]
z = pca.transform(X_test_flat);
out = MODEL_DIR/'projections' / dataset / 'PCA' / 'z_test.npy'
ensure_dir(out)
np.save(out, z)
```
##### 64 dims
```
pca = PCA(n_components=64)
z = pca.fit_transform(X_train_flat)
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
pca.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
times.append(end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['pca', 64, dataset, end_time - start_time]
z = pca.transform(X_test_flat);
out = MODEL_DIR/'projections' / dataset / "64" / 'PCA' / 'z_test.npy'
ensure_dir(out)
np.save(out, z)
```
#### TSNE
##### 2 dims
```
tsne = TSNE(
n_components = 2,
n_jobs=32,
verbose=True
)
embedding_train = tsne.fit(X_train_flat)
n_repeats = 10
times = []
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
embedding_train.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
times.append(end_time - start_time)
projection_speeds.loc[len(projection_speeds)] = ['TSNE', 2, dataset, end_time - start_time]
z = embedding_train.transform(X_test_flat);
out = MODEL_DIR/'projections' / dataset / 'TSNE' / 'z_test.npy'
ensure_dir(out)
np.save(out, z)
projection_speeds
```
### Save
```
save_loc = DATA_DIR / 'projection_speeds' / (dataset + '.pickle')
ensure_dir(save_loc)
projection_speeds.to_pickle(save_loc)
```
# Analyze consensus motif
The third output from the computational pipeline is a fasta file of the best predicted promoter for each input sequence. For more details about how robust these predictions are, see Section 2 of `inspect_BioProspector_results.ipynb`.
Given a fasta file of best predictions from a given set of input settings for the pipeline, we next want to
1. Visualize the consensus motif
2. Score consensus motif PSSM matches to hexamers of predicted promoters
3. Analyze occurrences of the consensus motif across the genome
For the above analyses, we use the BioProspector outputs from the top 3% of expressed genes across all conditions. However, this computational framework makes it easy to produce outputs for several different top-percentage thresholds. To demonstrate, we compare a range of outputs from different top-percentage thresholds here.
4. Compare consensus motifs from different top percentage thresholds
```
import altair as alt
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
import sys
sys.path.append('../') # use modules in main directory
import genbank_utils as gu
import consensus_viz_utils as cu
```
# 1. Visualize the consensus motif
```
# Load BioProspector output from predicting promoters from the top 9% of expressed loci
selection_f = "../med9/loci_in_top_9perc_upstream_regions_w150_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615863879_SELECTION.fa"
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(selection_f)
print(m1.consensus)
print(m2.consensus)
m1.pssm
m1.pwm
m1.pssm.mean()
# Sum of p*log2(p) over all bases and positions (negative entropy, in bits)
cum = 0
for base in m1.pwm:
    for val in m1.pwm[base]:
        cum += val * np.log2(val)
print(cum)
# Information content: sum over positions of (2 + sum_b p_b*log2(p_b)) bits
cum = 0
for i in range(m1.length):
    base_sum = 0
    for base in m1.pwm:
        val = m1.pwm[base][i]
        base_sum += val * np.log2(val)
    cum += 2 + base_sum
print(cum)
m1.pssm.calculate("TTGACA")
np.sum([m1.pwm[x][2] for x in m1.pwm])
# Load BioProspector output from predicting promoters from the top 4% of expressed loci
selection_f = "../med4/loci_in_top_4perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615799724_SELECTION.fa"
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(selection_f)
print(m1.consensus)
print(m2.consensus)
m1.pssm.mean()
# Load BioProspector output from predicting promoters from the top 3% of expressed loci
selection_f = "../med3/loci_in_top_3perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615799844_SELECTION.fa"
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(selection_f)
print(m1.consensus)
print(m2.consensus)
# Load BioProspector output from predicting promoters from the top 2% of expressed loci
selection_f = "../med2/loci_in_top_2perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1616202614_SELECTION.fa"
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(selection_f)
print(m1.consensus)
print(m2.consensus)
# Load BioProspector output from predicting promoters from the top 1% of expressed loci
selection_f = "../med1/loci_in_top_1perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1616202668_SELECTION.fa"
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(selection_f)
print(m1.consensus)
print(m2.consensus)
# Load BioProspector output from predicting promoters from the top 6% of expressed loci
selection_f = "../med6/loci_in_top_6perc_upstream_regions_w300_minus15_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1616125735_SELECTION.fa"
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(selection_f)
print(m1.consensus)
print(m2.consensus)
# Load BioProspector output from predicting promoters from the top 8% of expressed loci
selection_f = "../med8/loci_in_top_8perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615798640_SELECTION.fa"
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(selection_f)
print(m1.consensus)
print(m2.consensus)
# Load BioProspector output from predicting promoters from the top 9% of expressed loci
selection_f = "../med9/loci_in_top_9perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615798208_SELECTION.fa"
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(selection_f)
print(m1.consensus)
print(m2.consensus)
# Load BioProspector output from predicting promoters from the top 10% of expressed loci
selection_f = "../med10/loci_in_top_10perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615797619_SELECTION.fa"
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(selection_f)
print(m1.consensus)
print(m2.consensus)
print(m1.pssm.mean())
print(m2.pssm.mean())
# Load BioProspector output from predicting promoters from the top 10% of expressed loci
selection_f = "../med10/loci_in_top_10perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615797619_SELECTION.fa"
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(selection_f)
print(m1.consensus)
print(m2.consensus)
```
# 2. Score consensus motif PSSM matches to hexamers of predicted promoters
The above motif was derived from the first 6 and final 6 bases of each predicted promoter from the set of top loci. Here, we simply apply the Position-Specific Scoring Matrix (PSSM) back to the inputs to gauge, in general, how well this motif matches the inputs. If most inputs receive relatively high scores, it suggests that there was a clear signal among the promoter input sequences and that the consensus summarizes this signal well across the top-loci promoter predictions. Lower scores indicate that the consensus didn't summarize the predictions very well (perhaps the signal it found wasn't very clear, there were multiple competing signals, or individual inputs didn't have this particular promoter structure, possibly reflecting a different sigma factor structure).
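The log-odds scoring idea behind the PSSM can be sketched by hand. The hexamers and the 0.25 pseudocount below are toy assumptions for illustration, not the notebook's data; the score of a hexamer is the sum over positions of log2(p(base) / background):

```python
import numpy as np

instances = ["TTGACA", "TTGACT", "TTGATA"]      # toy -35-like hexamers (assumed)
bases = "ACGT"
counts = np.zeros((4, 6))
for seq in instances:
    for i, b in enumerate(seq):
        counts[bases.index(b), i] += 1
pwm = (counts + 0.25) / (len(instances) + 1.0)  # probabilities with a 0.25 pseudocount

def pssm_score(seq, background=0.25):
    """Log-odds score of seq against the PWM, uniform background."""
    return sum(np.log2(pwm[bases.index(b), i] / background) for i, b in enumerate(seq))

print(round(pssm_score("TTGACA"), 2))  # consensus-like hexamer: high score
print(round(pssm_score("GGGGGG"), 2))  # non-matching hexamer: negative score
```

Hexamers close to the consensus score well above non-matching ones, which is exactly what `hex1_score` and `hex2_score` capture below.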
```
hex_score_df = cu.score_predictions_to_motif(motif_blocks, m1, m2)
hex_score_df.head()
hex_score_df[hex_score_df['locus_tag']=='ompAp']
```
In this dataframe, `hex1_score` and `hex2_score` represent each prediction's log-odds score (how well it matches) against the consensus PSSM above. The `total_score` is the sum of `hex1_score` and `hex2_score`. Higher scores indicate better matches to the consensus.
```
# scatter plot of hex1 vs hex 2
scatter = alt.Chart(
hex_score_df,
).mark_point().encode(
x=alt.X('hex1_score:Q',axis=alt.Axis(title="-35 Consensus Match")),
y=alt.Y('hex2_score:Q',axis=alt.Axis(title="-10 Consensus Match")),
color=alt.Color('total_score:Q',scale=alt.Scale(scheme='viridis'),sort='descending'),
size=alt.value(100),
tooltip=["desc:N","hex1:N",'hex2:N','total_score:Q'],
).properties(
width=200,
height=200
).interactive()
# stripplot showing total score (hex1 + hex2)
stripplot = alt.Chart(
hex_score_df,
).mark_point().encode(
x=alt.X(
'jitter:Q',
title=None,
axis=alt.Axis(values=[0], ticks=True, grid=False, labels=False),
scale=alt.Scale()
),
y=alt.Y(
'total_score:Q',
axis=alt.Axis(title="Total Consensus Score")),
color=alt.Color(
'total_score:Q',
scale=alt.Scale(scheme='viridis'),
sort='descending'),
size=alt.value(100),
tooltip=["desc:N","hex1:N",'hex2:N','total_score:Q'],
).transform_calculate(
# Generate Gaussian jitter with a Box-Muller transform
jitter='sqrt(-2*log(random()))*cos(2*PI*random())'
).properties(height=200, width=50
).interactive()
# horizontally concat plots
combo = alt.hconcat(
scatter,
stripplot,
title=f"Consensus motif match scores to promoter predictions"
).configure_view(
stroke=None
).configure_axis(
labelFontSize=16,
titleFontSize=16,
).configure_title(
anchor='middle',
fontSize=20
)
#combo.save('consensus_scatter.html')
combo
```
#### Interactive plot!
* Hover over points to see which prediction they correspond to
* Zoom and pan to investigate more closely
Quick plot comparing each individual prediction's match to both the first (-35) and second (-10) promoter blocks. From the left panel, we notice more predictions had closer matches to the -35 consensus, while a few had close matches to the -10 consensus. No predictions were quite perfect in both the -35 and -10 positions. The panel on the right displays which predictions exhibited the highest overall scores.
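The stripplot's `transform_calculate` expression jitters points with a Box-Muller transform, which turns two uniform random numbers into one standard-normal sample. The same computation in NumPy (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
u1, u2 = rng.random(10_000), rng.random(10_000)
# Box-Muller: uniform (u1, u2) -> standard normal jitter values
jitter = np.sqrt(-2 * np.log(u1)) * np.cos(2 * np.pi * u2)
print(jitter.mean(), jitter.std())  # approximately 0 and 1
```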
# 3. Analyze occurrences of the consensus motif across the genome
To help convince ourselves that the consensus we found was meaningful, we wanted to make sure that this signal was specific to promoter regions. In particular, we searched for matches to the consensus across the entire _M. buryatense_ genome and recorded the position of each match. Positions were sorted into 4 categories:
1. *`in gene`*: Located inside a feature annotation
1. *`intergenic`*: Beyond 300bp of a feature start coordinate, not inside a feature
1. *`<300 to ATG`*: Within 300bp of a feature start coordinate (including matches from `<100 to ATG`)
1. *`<100 to ATG`*: Within 100bp of a feature start coordinate (usually an ATG, the translation start site)
In the following analysis, we count the number of PSSM matches that fall into each category and normalize by the total number of genome positions that belong to each category. This normalization indicates if PSSM matches tend to be overrepresented in regions closely upstream of translation start sites (likely promoter regions)
## Load genome and feature coords from genbank
```
gbFile_5G = '../data/ecoli_NC_000913.3.gb'
GENOME_FWD, GENOME_REV,GENOME_LEN = gu.get_genome_fwd_rev_len(gbFile_5G)
print("Genome length:", GENOME_LEN, "bps")
print(GENOME_FWD[:10])
print(GENOME_REV[-10:])
# put into a tuple for a later function that expects this format
genomes = [
('genome_fwd','genome_fwd',GENOME_FWD),
('genome_rev','genome_rev',GENOME_REV)
]
# extract tuples of just the feature coordinates from the genbank object
pos_feat_coords, neg_feat_coords = gu.get_pos_neg_relative_feature_coords(gbFile_5G, GENOME_LEN)
```
### Build arrays for the "distance to feature" and "nearest feature"
For every position in the genome, record:
* how far is it to the nearest feature start coordinate on the same strand
* what is the next nearest feature ID to this position in the genome
Use these arrays to count the baseline number of positions in the genome that fall into each category (we'll use this later to normalize PSSM match counts)
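A toy sketch of the idea behind these arrays (the notebook's `cu.build_feature_distance_index` is assumed to be more elaborate; this version ignores strand and uses absolute distance for simplicity):

```python
import numpy as np

def toy_feature_distance_index(feat_starts, genome_len):
    # For every genome position, the distance to the nearest feature start
    # and which start that is. O(positions * features): fine for toy genomes.
    positions = np.arange(genome_len)
    starts = np.asarray(sorted(feat_starts))
    dists = np.abs(positions[:, None] - starts[None, :])
    nearest = starts[dists.argmin(axis=1)]
    return dists.min(axis=1), nearest

dist, nearest = toy_feature_distance_index([10, 50], genome_len=60)
print(dist[12], nearest[12])  # position 12 is 2bp from the start at 10
```

Thresholding `dist` (e.g. `< 100`, `< 300`) is then all that is needed to assign every position to one of the categories above.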
```
pos_dist_array,pos_nearest_feat_array = cu.build_feature_distance_index(pos_feat_coords,GENOME_LEN)
neg_dist_array,neg_nearest_feat_array = cu.build_feature_distance_index(neg_feat_coords,GENOME_LEN)
# make category df for all positions in the genome to get baseline counts
# of each category
baseline_cat_df = cu.build_genome_position_category_df(pos_dist_array, neg_dist_array)
baseline_cat_df.head()
```
### Search for the consensus across the genome
Since BioProspector searched for 2-block motif patterns with a spacer anywhere from 15-18bp, we will similarly consider consensus matches with variable spacing. So first, we will construct a PSSM that combines the -35 and -10 blocks by inserting a matrix of 0's (neutral log-odds) between the blocks; the spacer matrix is 4x15, 4x16, 4x17, or 4x18. We'll search for all of these PSSMs across the genome and record all matches (genome positions with a log-odds score > 0, i.e. sequences that look more like the consensus than random). Based on each match's end coordinate, we will assign it to one of the 4 genome categories outlined above.
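The variable-spacer construction can be sketched as follows (assumed behavior of `cu.build_dict_of_motifs_to_try`; the block values here are random placeholders):

```python
import numpy as np

def spaced_pssm(block_35, block_10, spacer_len):
    # Glue the -35 and -10 log-odds blocks together with a 4 x spacer_len
    # block of zeros (log-odds 0 = neutral, i.e. the spacer bases don't
    # affect the match score).
    spacer = np.zeros((4, spacer_len))
    return np.hstack([block_35, spacer, block_10])

rng = np.random.default_rng(0)
b35, b10 = rng.standard_normal((4, 6)), rng.standard_normal((4, 6))
for s in range(15, 19):
    print(s, spaced_pssm(b35, b10, s).shape)  # (4, 6 + s + 6)
```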
```
# from the consensus motif blocks, build variably spaced PSSMs (with 15-18bp spacers)
var_spaced_motifs = cu.build_dict_of_motifs_to_try(m1, m2)
# search for PSSM matches in the forward and reverse direction
motif_match_df = cu.find_and_score_motifs_in_seqs(var_spaced_motifs,genomes,{})
motif_match_df = cu.add_genome_category_to_pssm_matches(
motif_match_df,
pos_dist_array,
pos_nearest_feat_array,
neg_dist_array,
neg_nearest_feat_array)
motif_match_df.head()
```
### Analyze the number of matches in each genome category
```
motif_match_cat_df = cu.analyze_motif_matches_across_genome(
motif_match_df,
baseline_cat_df)
motif_match_cat_df
```
This "category dataframe" shows the total number of positions in each category in the `total` column (a sum of the `pos_count` and `neg_count` columns). The `pssm_match_count` column is the count of PSSM matches. The `match_perc` column is the `pssm_match_count` divided by the `total`. This normalization is necessary because the total number of positions in the genome that are "intergenic" or "in gene" is far greater than the number of positions within 100bp of a start coordinate, and thus there are many more chances for matches to occur. Here, we're interested in whether relatively more matches occur in promoter regions (areas closely upstream of features), which would suggest the motif we found with BioProspector is indeed enriched in these areas and is not a random, non-specific signal.
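With toy numbers (hypothetical, for illustration only), the normalization and the log2 enrichment computed in the next cell look like this:

```python
import numpy as np

# Raw match counts are misleading because categories cover very different
# numbers of genome positions, so divide by the per-category totals.
counts = {'in gene': 900, 'intergenic': 300, '<300 to ATG': 120, '<100 to ATG': 80}
totals = {'in gene': 3_000_000, 'intergenic': 900_000,
          '<300 to ATG': 250_000, '<100 to ATG': 90_000}
match_perc = {c: counts[c] / totals[c] for c in counts}

# log2 enrichment of near-upstream matches relative to intergenic matches
enrich = np.log2(match_perc['<100 to ATG'] / match_perc['intergenic'])
print(round(enrich, 2))  # > 0 means over-representation near start codons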
### Visualize all normalized PSSM counts
```
cu.genome_category_normed_bar_v(motif_match_cat_df)
int_mat_perc = motif_match_cat_df[motif_match_cat_df['cat'] == 'intergenic']['match_perc'].values[0]
d100_mat_perc = motif_match_cat_df[motif_match_cat_df['cat'] == '<100 to ATG']['match_perc'].values[0]
np.log2(d100_mat_perc/int_mat_perc)
```
### Analyze the number of high scoring matches in each genome category
While the above chart indeed shows an enrichment of PSSM matches in regions closely upstream of features, many of these matches are relatively low quality (low log-odds score). Next, we simply filter the motif match data frame to only high-scoring matches (log-odds above a threshold, set to 10 below).
```
# set a log odds score threshold
threshold=10
top_motif_match_df = motif_match_df[motif_match_df['score']>threshold]
# analyze categories for top matches only
top_motif_match_cat_df = cu.analyze_motif_matches_across_genome(
top_motif_match_df,
baseline_cat_df)
top_motif_match_cat_df
cu.genome_category_normed_bar_v(top_motif_match_cat_df,threshold=threshold)
# horizontal versions
cu.genome_category_normed_bar_h(motif_match_cat_df)
cu.genome_category_normed_bar_h(top_motif_match_cat_df,threshold=10)
```
Indeed, when only considering extremely strong matches to the consensus motif, the enrichment in upstream regions (`<100 to ATG` and `<300 to ATG`) is amplified
# 4. Compare consensus motifs from different top percentage thresholds
While the above analyses use the top 3% of expressed genes, it may be useful to compare results for different percentage thresholds. Here we examine and compare the consensus results for a range of top-N% thresholds.
```
# dictionary of various BioProspector pipeline outputs
f_dict = {
# 3:'../med3/loci_in_top_3perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615799844_SELECTION.fa',
# 4:'../med4/loci_in_top_4perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615799724_SELECTION.fa',
5:'../pal5_op83/loci_in_top_5perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615880240_SELECTION.fa',
6:'../pal6_op83/loci_in_top_6perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615880152_SELECTION.fa',
7:'../pal7_op83/loci_in_top_7perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615880045_SELECTION.fa',
8:'../pal8_op83/loci_in_top_8perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615879911_SELECTION.fa',
9:'../pal9_op83/loci_in_top_9perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615879738_SELECTION.fa',
10:'../pal10_op83/loci_in_top_10perc_upstream_regions_w300_min20_trunc_W6_w6_G18_g15_d1_a1_n200_1615879406_SELECTION.fa'
}
def compare_consensus_motifs(f_dict,threshold=12):
'''
Given a file dict of bioprospector outputs:
1. determine the consensus motif
2. score the consensus against the input promoter predictions
3. search for the consensus across both strands of the genome
'''
# save various intermediate dfs to concat at the end
hex_score_dfs = []
motif_match_dfs = []
motif_match_cat_dfs = []
top_motif_match_cat_dfs = []
# loop through all selection files for different n-percent thresholds
for nperc in f_dict:
print(f"\nTop {nperc}% consensus")
# extract the consensus motif in 2 blocks
motif_blocks, m1, m2 = cu.build_2Bmotif_from_selection_file(f_dict[nperc])
hex_score_df = cu.score_predictions_to_motif(motif_blocks, m1, m2)
# add nperc column
hex_score_df['nperc'] = nperc
hex_score_dfs.append(hex_score_df)
# while we have a specific consensus motif block for this file, search
# for it across the genome
# from the consensus motif blocks, build variably spaced PSSMs (with 15-18bp spacers)
var_spaced_motifs = cu.build_dict_of_motifs_to_try(m1, m2)
# search for PSSM matches in the forward and reverse direction
motif_match_df = cu.find_and_score_motifs_in_seqs(var_spaced_motifs,genomes,{})
# add the genome category
motif_match_df = cu.add_genome_category_to_pssm_matches(motif_match_df,
pos_dist_array,
pos_nearest_feat_array,
neg_dist_array,
neg_nearest_feat_array)
# add nperc column
motif_match_df['nperc'] = nperc
motif_match_dfs.append(motif_match_df)
motif_match_cat_df = cu.analyze_motif_matches_across_genome(
motif_match_df,
baseline_cat_df)
# add nperc column
motif_match_cat_df['nperc'] = nperc
motif_match_cat_dfs.append(motif_match_cat_df)
# also calculate enrichment for the top scoring matches
print(f"Calculating for top scoring matches (threshold={threshold})")
top_motif_match_df = motif_match_df[motif_match_df['score']>threshold]
top_motif_match_cat_df = cu.analyze_motif_matches_across_genome(
top_motif_match_df,
baseline_cat_df)
# add nperc column
top_motif_match_cat_df['nperc'] = nperc
top_motif_match_cat_dfs.append(top_motif_match_cat_df)
# concat all dfs into combined version
    print("Concatenating final dfs")
all_hex_df = pd.concat(hex_score_dfs)
all_motif_match_df = pd.concat(motif_match_dfs)
all_motif_match_cat_df = pd.concat(motif_match_cat_dfs)
all_top_motif_match_cat_df = pd.concat(top_motif_match_cat_dfs)
return all_hex_df, all_motif_match_df, all_motif_match_cat_df, all_top_motif_match_cat_df
all_hex_df, \
all_motif_match_df, \
all_motif_match_cat_df, \
all_top_motif_match_cat_df = compare_consensus_motifs(f_dict,threshold=10)
all_hex_df.head()
all_motif_match_df.head()
all_motif_match_cat_df.head()
all_top_motif_match_cat_df.head()
```
### Visualize PSSM scores against input sequences
```
fig = plt.figure(figsize=(10,5))
sns.swarmplot(data=all_hex_df, x='nperc',y='total_score')
plt.xlabel("Top N% threshold")
plt.ylabel("Sum of PSSM match scores to hexamer predictions")
plt.title("Distribution of Consensus match scores to promoter predictions")
plt.show()
stripplot = alt.Chart(
all_hex_df,
#title=f"Prediction consensus matches"
).mark_point().encode(
x=alt.X(
'jitter:Q',
title=None,
axis=alt.Axis(values=[0], ticks=True, grid=False, labels=False),
scale=alt.Scale()
),
y=alt.Y('total_score:Q',axis=alt.Axis(title="Total Consensus Score")),
color=alt.Color('total_score:Q',scale=alt.Scale(scheme='viridis'),sort='descending'),
size=alt.value(100),
tooltip=["desc:N","hex1:N",'hex2:N','total_score:Q'],
column=alt.Column(
'nperc:N',
header=alt.Header(
labelFontSize=16,
labelAngle=0,
titleOrient='top',
labelOrient='bottom',
labelAlign='center',
labelPadding=25,
),
),
).transform_calculate(
# Generate Gaussian jitter with a Box-Muller transform
jitter='sqrt(-2*log(random()))*cos(2*PI*random())'
).configure_facet(
spacing=0
).configure_view(
stroke=None
).configure_axis(
labelFontSize=16,
titleFontSize=16
).properties(height=200, width=100)
stripplot.save("consensus_varying_n_stripplot.html")
stripplot
```
Same basic plot as the swarm plot above, but with tooltip interactivity
### Visualize genome category enrichment of consensus matches
```
def compare_genome_cat_enrichment(df,sci=False):
fig, axes = plt.subplots(nrows=2, ncols=8, sharey=True, figsize=(15,8))
axes_list = [item for sublist in axes for item in sublist]
genome_cat_order = ['in gene','intergenic','100:300 to ATG','<100 to ATG']
for nperc, sub_df in df.groupby("nperc"):
# calculate the rank of each match by vote count
# make the bar chart on the next axis
ax = axes_list.pop(0)
sns.barplot(data=sub_df,x='cat',y='match_perc',ax=ax,order=genome_cat_order)
# axis and title configs
ax.set_title(f"{nperc} %")#.split('|')[0])
ax.set_xticklabels(ax.get_xticklabels(),rotation=90)
#ax.set_yticklabels(ax.get_yticklabels(),fontsize=14)
ax.set_xlabel("")
ax.set_ylabel("")
ax.tick_params(axis="y", labelsize=14)
# Now use the matplotlib .remove() method to
# delete anything we didn't use
if sci:
plt.ticklabel_format(axis="y", style="sci", scilimits=(0,0))
for ax in axes_list:
ax.remove()
return fig.tight_layout()
# all pssm matches
compare_genome_cat_enrichment(all_motif_match_cat_df)
# high scoring pssm above 12
compare_genome_cat_enrichment(all_top_motif_match_cat_df)
```
Side by side, we can see for which percentage thresholds there was strong enrichment of the consensus in regions immediately upstream of annotated features. Notably, the consensus derived from the top 4% seems like an anomaly: the hexamer match to its own inputs is lower, and the overall information content of the consensus is lower than for the 3% and 5% consensuses. Due to this low information content it's more generic, so it finds more matches; however, this also leads to fewer strong matches. In fact, no sequence matches in the genome even reached a 12 log-odds score.
Otherwise, the consensus motif for the top 2% and 3% of genes seems to be the strongest candidate promoter signal demonstrating upstream enrichment.
| github_jupyter |
```
import sys
sys.path.append('../../')
import os
import dill
import numpy as np
import scipy as sc
import random as rand
from sklearn import preprocessing, linear_model
import matplotlib.pyplot as plt
from core.controllers import ConstantController
from koopman_core.dynamics import LinearLiftedDynamics, BilinearLiftedDynamics
from koopman_core.learning import Edmd_aut, KoopDnn, KoopmanNetAut
```
## Autonomous system with analytic finite dimensional Koopman operator
Consider the continuous-time dynamics
\begin{equation}
x = \begin{bmatrix} x_1\\x_2\\\dot{x}_1\\ \dot{x}_2 \end{bmatrix}, \qquad
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix}
= \begin{bmatrix}
x_3 \\
x_4 \\
\mu x_3\\
-\lambda x_3^2 + \lambda x_4
\end{bmatrix}
\end{equation}
By carefully choosing observables, the drift vector field of the dynamics can be reformulated as an equivalent linear system (a global linearization). Define the observables
\begin{equation}
\begin{bmatrix}
y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5
\end{bmatrix}
= \begin{bmatrix}
x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_3^2
\end{bmatrix}
\end{equation}
then the system can be rewritten as
\begin{equation}
\begin{bmatrix}
\dot{y}_1 \\ \dot{y}_2 \\ \dot{y}_3 \\ \dot{y}_4 \\ \dot{y}_5 \end{bmatrix} =
\begin{bmatrix}
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 0\\
0 & 0 & \mu & 0 & 0\\
0 & 0 & 0 & \lambda & - \lambda \\
0 & 0 & 0 & 0 & 2\mu \end{bmatrix}
\begin{bmatrix}
y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{bmatrix}
\end{equation}
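As a numerical sanity check (a sketch, not part of the notebook), note that $\dot{y}_5 = 2x_3\dot{x}_3 = 2\mu x_3^2 = 2\mu y_5$, so the last diagonal entry of the lifted system is $2\mu$; with that, the lifted linear model reproduces the nonlinear trajectory exactly:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

mu, lambd = -0.3, -0.6

def f(t, x):
    # original nonlinear drift
    return [x[2], x[3], mu * x[2], -lambd * x[2] ** 2 + lambd * x[3]]

# Lifted linear dynamics; last diagonal entry is 2*mu since d(y5)/dt = 2*mu*y5.
A = np.array([
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, mu, 0, 0],
    [0, 0, 0, lambd, -lambd],
    [0, 0, 0, 0, 2 * mu],
])

x0 = np.array([0.5, -0.2, 0.8, 0.1])
y0 = np.append(x0, x0[2] ** 2)            # lift the initial condition
T = 2.0
sol = solve_ivp(f, [0, T], x0, rtol=1e-10, atol=1e-12)
y_T = expm(A * T) @ y0                    # closed-form lifted prediction
err = np.abs(y_T[:4] - sol.y[:, -1]).max()
print(err)  # tiny: the lifting linearizes the dynamics exactly
```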
```
from core.dynamics import SystemDynamics
class FiniteKoopSys(SystemDynamics):
def __init__(self, mu, lambd):
SystemDynamics.__init__(self, 4, 0)
self.params = mu, lambd
def drift(self, x, t):
mu, lambd = self.params
return np.array([x[2], x[3], mu*x[2], -lambd*x[2]**2 + lambd*x[3]])
def eval_dot(self, x, u, t):
return self.drift(x, t)
n, m = 4, 0
mu, lambd = -0.3, -0.6
system = FiniteKoopSys(mu, lambd)
sys_name = 'analytic_koop_sys'
```
### Collect data for learning
To collect data, we simulate the autonomous system from random initial conditions sampled within the specified state limits. Since the system has no control input, no controller design is needed; each trajectory is simply the free evolution of the dynamics from its initial state.
2 data sets are collected:
- A training set
- A validation set (20% of the number of training trajectories)
```
# Data collection parameters:
collect_data = True
dt = 1.0e-2 # Time step length
traj_length_dc = 4. # Trajectory length, data collection
n_pred_dc = int(traj_length_dc/dt) # Number of time steps, data collection
t_eval = dt * np.arange(n_pred_dc + 1) # Simulation time points
n_traj_train = 100 # Number of trajectories to execute, data collection
n_traj_val = int(0.2*n_traj_train)
xmax = np.array([1, 1, 1, 1]) # State constraints, trajectory generation
xmin = -xmax
x0_max = xmax # Initial value limits
sub_sample_rate = 1 # Rate to subsample data for training
n_cols = 10 # Number of columns in training data plot
folder_plots = 'figures/' # Path to save plots
directory = os.path.abspath("") # Path to save learned models
from koopman_core.util import run_experiment
if collect_data:
xs_train, t_train = run_experiment(system, n, n_traj_train, n_pred_dc, t_eval, x0_max, plot_experiment_data=True)
xs_val, t_val = run_experiment(system, n, n_traj_val, n_pred_dc, t_eval, x0_max, plot_experiment_data=True)
data_list = [xs_train, t_train, n_traj_train, xs_val, t_val, n_traj_val]
outfile = open(directory + '/data/' + sys_name + '_data.pickle', 'wb')
dill.dump(data_list, outfile)
outfile.close()
else:
infile = open(directory + '/data/' + sys_name + '_data.pickle', 'rb')
xs_train, t_train, n_traj_train, xs_val, t_val, n_traj_val = dill.load(infile)
infile.close()
```
### Learn a lifted linear model with Koopman DNN
```
## import dill, os, torch
load_tuned_params = False
if load_tuned_params:
infile = open(os.path.abspath('') + '/data/analytic_koop_sys_best_params.pickle', 'rb')
net_params, val_loss, test_loss, open_loop_mse, open_loop_std = dill.load(infile)
infile.close()
else:
net_params = {}
net_params['state_dim'] = 4
net_params['encoder_hidden_width'] = 100
net_params['encoder_hidden_depth'] = 1
net_params['encoder_output_dim'] = 5
net_params['optimizer'] = 'adam'
net_params['lr'] = 1e-3
net_params['epochs'] = 100
net_params['batch_size'] = 256
net_params['lin_loss_penalty'] = 0.5
net_params['l2_reg'] = 0
net_params['l1_reg'] = 0
net_params['activation_type'] = 'relu'
net_params['first_obs_const'] = True
net_params['override_kinematics'] = False
net_params['override_C'] = False
net_params['dt'] = dt
print(net_params)
from sklearn import preprocessing
from koopman_core.util import fit_standardizer
standardizer_kdnn = fit_standardizer(xs_train, preprocessing.StandardScaler(with_mean=False))
#standardizer_kdnn = None
net = KoopmanNetAut(net_params, standardizer_x=standardizer_kdnn)
model_koop_dnn = KoopDnn(net)
model_koop_dnn.set_datasets(xs_train, t_train, x_val=xs_val, t_val=t_val)
model_koop_dnn.model_pipeline(net_params, early_stop=False)
model_koop_dnn.construct_koopman_model()
sys_koop_dnn = LinearLiftedDynamics(model_koop_dnn.A, None, model_koop_dnn.C, model_koop_dnn.basis_encode,
continuous_mdl=False, dt=dt, standardizer_x=standardizer_kdnn)
print(model_koop_dnn.net.loss_scaler_x, model_koop_dnn.net.loss_scaler_z)
print(model_koop_dnn.A)
print(model_koop_dnn.C)
train_loss = [l[0] for l in model_koop_dnn.train_loss_hist]
train_pred_loss = [l[1] for l in model_koop_dnn.train_loss_hist]
train_lin_loss = [l[2] for l in model_koop_dnn.train_loss_hist]
val_loss = [l[0] for l in model_koop_dnn.val_loss_hist]
val_pred_loss = [l[1] for l in model_koop_dnn.val_loss_hist]
val_lin_loss = [l[2] for l in model_koop_dnn.val_loss_hist]
epochs = np.arange(0, net_params['epochs'])
plt.figure(figsize=(15,8))
plt.plot(epochs, train_loss, color='tab:orange', label='Training loss')
plt.plot(epochs, train_pred_loss, '--', color='tab:orange', label='Training prediction loss')
plt.plot(epochs, train_lin_loss, ':', color='tab:orange', label='Training linear loss')
plt.plot(epochs, val_loss, color='tab:blue', label='Validation loss')
plt.plot(epochs, val_pred_loss, '--', color='tab:blue', label='Validation prediction loss')
plt.plot(epochs, val_lin_loss, ':', color='tab:blue', label='Validation linear loss')
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.yscale('log')
plt.show()
```
### Learn a linear model with dynamic mode decomposition (DMD)
To compare our method with existing techniques, we first learn a linear state-space model directly from data; this is known as dynamic mode decomposition (DMD). That is, we use linear regression with LASSO regularization to learn an approximate linear model with model structure
\begin{equation}
\mathbf{x}_{k+1} = A_{dmd} \mathbf{x}_k
\end{equation}
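Stripped of the regularization, DMD is just a least-squares fit over snapshot pairs. A minimal sketch (the notebook uses LASSO via scikit-learn instead; names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
X = rng.standard_normal((2, 200))   # states x_k as columns
Y = A_true @ X                      # successor states x_{k+1}
A_dmd = Y @ np.linalg.pinv(X)       # least-squares solution of Y = A X
print(np.abs(A_dmd - A_true).max()) # tiny on noiseless data
```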
```
#DMD parameters:
alpha_dmd = 5.4e-4
tune_mdl_dmd = False
basis = lambda x: x
C_dmd = np.eye(n)
cv_dmd = linear_model.MultiTaskLassoCV(fit_intercept=False, n_jobs=-1, cv=3, selection='random')
standardizer_dmd = preprocessing.StandardScaler(with_mean=False)
optimizer_dmd = linear_model.MultiTaskLasso(alpha=alpha_dmd, fit_intercept=False, selection='random')
model_dmd = Edmd_aut(n, basis, n, n_traj_train, optimizer_dmd, cv=cv_dmd, standardizer=standardizer_dmd, C=C_dmd, first_obs_const=False, continuous_mdl=False, dt=dt)
X_dmd, y_dmd = model_dmd.process(xs_train, np.tile(t_train,(n_traj_train,1)), downsample_rate=sub_sample_rate)
model_dmd.fit(X_dmd, y_dmd, cv=tune_mdl_dmd)
sys_dmd = LinearLiftedDynamics(model_dmd.A, None, model_dmd.C, model_dmd.basis, continuous_mdl=False, dt=dt)
if tune_mdl_dmd:
print('$\\alpha$ DMD: ',model_dmd.cv.alpha_)
```
### Learn a lifted linear model with extended dynamic mode decomposition (EDMD)
In addition, we compare our method with the current state of the art of Koopman-based learning, extended dynamic mode decomposition (EDMD). We use a dictionary of nonlinear functions $\boldsymbol{\phi(x)}$ to lift the state variables and learn a lifted state-space model of the dynamics. That is, we first lift and then use linear regression with LASSO regularization to learn an approximate lifted linear model with model structure
\begin{equation}
\mathbf{z}_{k+1} = A_{edmd}\mathbf{z}_k, \qquad \mathbf{z} = \boldsymbol{\phi(x)}
\end{equation}
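To see what the degree-2 polynomial lifting used below actually produces, here is the dictionary applied to a single 2-dimensional state (illustrative; the notebook wraps exactly this transform in its `basis` lambda for the 4-dimensional state):

```python
import numpy as np
from sklearn import preprocessing

phi = preprocessing.PolynomialFeatures(degree=2)
phi.fit(np.zeros((1, 2)))                    # 2-dimensional state for brevity
z = phi.transform(np.array([[0.5, -1.0]]))   # lift: z = phi(x)
print(z)  # columns: [1, x1, x2, x1^2, x1*x2, x2^2]
```

The constant leading column is why `C_edmd` below recovers the state from columns `1:n+1` of the lifted vector.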
```
#EDMD parameters:
alpha_edmd = 1.2e-4
tune_mdl_edmd = False
koop_features = preprocessing.PolynomialFeatures(degree=2)
koop_features.fit(np.zeros((1,n)))
basis = lambda x: koop_features.transform(x)
n_lift_edmd = basis(np.zeros((1,n))).shape[1]
C_edmd = np.zeros((n,n_lift_edmd))
C_edmd[:,1:n+1] = np.eye(n)
optimizer_edmd = linear_model.MultiTaskLasso(alpha=alpha_edmd, fit_intercept=False, selection='random', max_iter=2000)
cv_edmd = linear_model.MultiTaskLassoCV(fit_intercept=False, n_jobs=-1, cv=3, selection='random', max_iter=2000)
standardizer_edmd = preprocessing.StandardScaler(with_mean=False)
model_edmd = Edmd_aut(n, basis, n_lift_edmd, n_traj_train, optimizer_edmd, cv=cv_edmd, standardizer=standardizer_edmd, C=C_edmd, continuous_mdl=False, dt=dt)
X_edmd, y_edmd = model_edmd.process(xs_train, np.tile(t_train,(n_traj_train,1)), downsample_rate=sub_sample_rate)
model_edmd.fit(X_edmd, y_edmd, cv=tune_mdl_edmd)
sys_edmd = LinearLiftedDynamics(model_edmd.A, None, model_edmd.C, model_edmd.basis, continuous_mdl=False, dt=dt)
if tune_mdl_edmd:
print('$\\alpha$ EDMD: ', model_edmd.cv.alpha_)
```
### Evaluate open loop prediction performance
We now evaluate the open-loop prediction performance of the implemented methods.
This is done by generating a new data set in the same way as the training set, predicting the evolution of the system from the initial state of each trajectory with each of the models, and finally comparing the mean and standard deviation of the error between the true and predicted evolution over the trajectories.
```
# Prediction performance evaluation parameters:
n_traj_ol = n_traj_train # Number of trajectories to execute, open loop
from koopman_core.util import evaluate_ol_pred
from tabulate import tabulate
t_eval = dt * np.arange(4./dt + 1)
xs_ol = np.empty((n_traj_ol, t_eval.shape[0], n))
ctrl = ConstantController(system, 0.)
for ii in range(n_traj_ol):
x0 = np.asarray([rand.uniform(l, u) for l, u in zip(-x0_max, x0_max)])
xs_ol[ii,:,:], _ = system.simulate(x0, ctrl, t_eval)
mdl_lst = [sys_koop_dnn, sys_dmd, sys_edmd]
mdl_names = ['Koopman DNN', 'DMD', 'EDMD']
error, mse, std = [], [], []
for sys in mdl_lst:
err_tmp, mse_tmp, std_tmp = evaluate_ol_pred(sys, xs_ol, t_eval)
error.append(err_tmp)
mse.append(mse_tmp)
std.append(std_tmp)
print('\nOpen loop performance statistics:')
table_data = []
for name, mse_mdl, std_mdl in zip(mdl_names, mse, std):
table_data.append([name, "{:.5f}".format(mse_mdl), "{:.5f}".format(std_mdl)])
print(tabulate(table_data,
headers=['Mean squared error', 'Standard deviation']))
import matplotlib.pyplot as plt
import matplotlib
figwidth = 12
lw = 2
fs = 14
y_lim_gain = 1.2
row = 2
col = 2
#Plot open loop results:
plt.figure(figsize=(figwidth,8))
axs = [plt.subplot(row,col,jj+1) for jj in range(n)]
for ii, err in enumerate(error):
err_mean = np.mean(err, axis=0)
err_std = np.std(err, axis=0)
for jj in range(n):
axs[jj].plot(t_eval[1:], err_mean[:,jj], label=mdl_names[ii])
axs[jj].fill_between(t_eval[1:], err_mean[:,jj]-err_std[:,jj], err_mean[:,jj]+err_std[:,jj], alpha=0.1)
for jj in range(n):
axs[jj].set_xlabel('Time (sec)', fontsize=fs)
axs[jj].set_ylabel('$x_'+ str(jj+1) + '$', fontsize=fs)
plt.legend(frameon=False, fontsize=fs)
stitle=plt.suptitle('Open loop prediction performance of learned models', fontsize=fs+2)
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
plt.savefig(folder_plots + 'koop_sys_prediction.pdf', format='pdf', dpi=2400, bbox_extra_artists=(stitle,), bbox_inches="tight")
plt.show()
```
```
analytic_koop = sc.linalg.expm(np.array([[mu, 0, 0], [0, lambd, -lambd], [0, 0, 2*mu]])*dt)  # nontrivial block of the lifted system; d(y5)/dt = 2*mu*y5
eig_analytic = np.linalg.eigvals(analytic_koop)
eig_koop_dnn = np.sort(np.linalg.eigvals(sys_koop_dnn.A))[::-1]
eig_edmd = np.sort(np.linalg.eigvals(sys_edmd.A))[::-1]
n_eig = eig_analytic.size
eig_koop_dnn = eig_koop_dnn[:n_eig]
eig_edmd = eig_edmd[:n_eig]
ang = np.linspace(0,2*np.pi,100)
circle = np.array([np.cos(ang), np.sin(ang)])
plt.figure()
plt.plot(circle[0,:], circle[1,:])
plt.scatter(np.real(eig_analytic), np.imag(eig_analytic), marker='o', color='tab:blue', label='Analytic')
plt.scatter(np.real(eig_koop_dnn), np.imag(eig_koop_dnn), marker='*', color='tab:green', label='Koopman DNN')
plt.scatter(np.real(eig_edmd), np.imag(eig_edmd), marker='x', color='tab:orange', label='EDMD')
plt.legend(loc='upper right', fontsize=fs)
plt.grid()
plt.title('Learned VS analytic Koopman spectrum', fontsize=fs)
plt.xlabel('Real part', fontsize=fs)
plt.ylabel('Imaginary part', fontsize=fs)
plt.show()
from koopman_core.controllers import OpenLoopController
ii = 0
x0 = xs_ol[ii, 0, :].reshape(1,-1)
ctrl = ConstantController(sys, 0.)  # note: `sys` is the last model from the loop above (shadowing the sys module)
z0 = sys.basis(np.atleast_2d(x0)).squeeze()
zs_tmp, _ = sys.simulate(z0, ctrl, t_train)
xs_pred = standardizer_kdnn.inverse_transform(np.dot(sys.C, zs_tmp.T).T)
#xs_pred = np.dot(sys.C, zs_tmp.T).T
plt.figure(figsize=(15,6))
for ss in range(2):
plt.subplot(1,2,ss+1)
plt.plot(t_train, xs_ol[ii, :, ss], '--', color='tab:orange', label='True')
plt.plot(t_train, xs_pred[:, ss], '--', color='tab:blue', label='Learned')
plt.plot(t_train, xs_ol[ii, :, ss+2], color='tab:orange', label='True')
plt.plot(t_train, xs_pred[:, ss+2], color='tab:blue', label='Learned')
plt.legend()
plt.show()
```
| github_jupyter |
```
import numpy as np
import librosa
import glob
import os
from random import randint
import torch
import torch.nn as nn
from torch.utils import data
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.data import sampler
import matplotlib.pyplot as plt
%matplotlib inline
import import_ipynb
from Triplet_LSTM_Net import *
from Triplet_LSTM_Dataloader import *
torch.cuda.set_device(3)
print(torch.cuda.current_device())
tripleNet_LSTM_model = TripletLSTM().double().cuda()
state_dict = torch.load('./model/tripleNet_LSTM.pkl',map_location={'cuda:0':'cuda:3'})
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
name = k[7:]  # strip the 'module.' prefix added by nn.DataParallel
new_state_dict[name] = v
tripleNet_LSTM_model.load_state_dict(new_state_dict)
tripleNet_LSTM_model = tripleNet_LSTM_model.eval()
data_dir = "/media/data/cuixuange/ScrawlMusic/train_data_normalizatin/"
def load_exits_file(train_data,index):
str_data = train_data[index].split(",")
x1_name = data_dir + str_data[0]+".npy"
x2_name = data_dir + str_data[1]+".npy"
x3_name = data_dir + str_data[2]+".npy"
if os.path.isfile(x1_name) and os.path.isfile(x2_name) and os.path.isfile(x3_name):
return x1_name,x2_name,x3_name,index
else:
index += 6
return load_exits_file(train_data,index)
result_score_dict = {}
result = []
with torch.no_grad():
for key,token in enumerate(my_test_data):
x1_name,x2_name,x3_name,exit_index = load_exits_file(my_test_data,key)
x1 = torch.from_numpy(np.load(x1_name)).unsqueeze(0).unsqueeze(0).cuda()
x2 = torch.from_numpy(np.load(x2_name)).unsqueeze(0).unsqueeze(0).cuda()
x3 = torch.from_numpy(np.load(x3_name)).unsqueeze(0).unsqueeze(0).cuda()
predict_label = tripleNet_LSTM_model(x1,x2,x3).detach().cpu()
result_score_dict[token] = predict_label[0][0]
print(len(my_test_data))
print(len(result_score_dict))
# for key in result_score_dict:
# print(key,result_score_dict[key])
from __future__ import division
testLength = 1000
def get_result_key(index,x1,x2,x3):
return "song"+str(index)+"_"+str(x1)+","+"song"+str(index)+"_"+str(x2)+","+"song"+str(index)+"_"+str(x3)
def get_result_score(index):
scoreList = []
score_123 = result_score_dict[get_result_key(index,1,2,3)]
score_132 = result_score_dict[get_result_key(index,1,3,2)]
score_213 = result_score_dict[get_result_key(index,2,1,3)]
score_231 = result_score_dict[get_result_key(index,2,3,1)]
score_312 = result_score_dict[get_result_key(index,3,1,2)]
score_321 = result_score_dict[get_result_key(index,3,2,1)]
scoreList.append(score_123)
scoreList.append(score_132)
scoreList.append(score_213)
scoreList.append(score_231)
scoreList.append(score_312)
scoreList.append(score_321)
import operator
index, value = max(enumerate(scoreList), key=operator.itemgetter(1))
return index,value
def cal_accuracy(index):
if(index==0):
return 2,True
elif(index==1):
return 0,False
elif(index==2):
return 0,False
elif(index==3):
return 1,False
elif(index==4):
return 1,False
elif(index==5):
return 0,False
else:
return -1,False
GA = [0,0,0,0,0,0]
PA = 0.0
for i in range(6000,7000):
index, value = get_result_score(i)
# print(index,value)
pair,_ = cal_accuracy(index)
PA += pair/3
if(index == 0):
GA[0] += 1
if(index == 1):
GA[1] += 1
if(index == 2):
GA[2] += 1
if(index == 3):
GA[3] += 1
if(index == 4):
GA[4] += 1
if(index == 5):
GA[5] += 1
# GA[0] counts the correct ordering "123"
# GA[1..5] count the orderings "132", "213", "231", "312", "321"
print("Accuracy:",GA[0]/testLength)
# for i in range(6):
# print("Accuracy:",GA[i]/testLength)
```
---
## Lab 1: Tensor Manipulation
First Author: Seungjae Ryan Lee (seungjaeryanlee at gmail dot com)
Second Author: Ki Hyun Kim (nlp.with.deep.learning at gmail dot com)
<div class="alert alert-warning">
NOTE: This corresponds to <a href="https://www.youtube.com/watch?v=ZYX0FaqUeN4&t=23s&list=PLlMkM4tgfjnLSOjrEJN31gZATbcj_MpUm&index=25">Lab 8 of Deep Learning Zero to All Season 1 for TensorFlow</a>.
</div>
### Imports
```
import numpy as np
import torch
```
### NumPy Review
We hope that you are familiar with `numpy` and basic linear algebra.
#### 1D Array with NumPy
```
t = np.array([0., 1., 2., 3., 4., 5., 6.])
print(t)
print('Rank of t: ', t.ndim)
print('Shape of t: ', t.shape)
print('t[0] t[1] t[-1] = ', t[0], t[1], t[-1])
print('t[2:5] t[4:-1] = ', t[2:5], t[4:-1])
print('t[:2] t[3:] = ', t[:2], t[3:])
```
#### 2D Array with NumPy
```
t = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.], [10., 11., 12.]])
print(t)
print('Rank of t: ', t.ndim)
print('Shape of t: ', t.shape)
```
### PyTorch is like NumPy (but better)
#### 1D Array with PyTorch
```
t = torch.FloatTensor([0., 1., 2., 3., 4., 5., 6.])
print(t)
print(t.dim())
print(t.shape)
print(t.size())
print(t[0], t[1], t[-1])
print(t[2:5], t[4:-1])
print(t[:2], t[3:])
```
#### 2D Array with PyTorch
```
t = torch.FloatTensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.],
[10., 11., 12.]
])
print(t)
print(t.dim())
print(t.size())
print(t[:, 1])
print(t[:, 1].size())
print(t[:, :-1])
```
#### Shape, Rank, Axis
```
t = torch.FloatTensor([[[[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]],
[[13, 14, 15, 16],
[17, 18, 19, 20],
[21, 22, 23, 24]]
]])
print(t.dim())
print(t.size())
```
### Frequently Used Operations in PyTorch
#### Mul vs. Matmul
```
print()
print('-------------')
print('Mul vs Matmul')
print('-------------')
m1 = torch.FloatTensor([[1, 2], [3, 4]])
m2 = torch.FloatTensor([[1], [2]])
print('Shape of Matrix 1: ', m1.shape)
print('Shape of Matrix 2: ', m2.shape)
print(m1.matmul(m2)) # 2 x 1
m1 = torch.FloatTensor([[1, 2], [3, 4]])
m2 = torch.FloatTensor([[1], [2]])
print('Shape of Matrix 1: ', m1.shape)
print('Shape of Matrix 2: ', m2.shape)
print(m1 * m2)
print(m1.mul(m2))
```
#### Broadcasting
<div class="alert alert-warning">
Careless use of broadcasting can lead to code that is hard to debug.
</div>
```
m1 = torch.FloatTensor([[3, 3]])
m2 = torch.FloatTensor([[2, 2]])
print(m1 + m2)
m1 = torch.FloatTensor([[1, 2]])
m2 = torch.FloatTensor([3])
print(m1 + m2)
m1 = torch.FloatTensor([[1, 2]])
m2 = torch.FloatTensor([[3], [4]])
print(m1 + m2)
```
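To see why the warning above matters, here is a small sketch (not part of the original lab) of an easy-to-miss broadcast: adding a row vector to a column vector silently produces a full matrix instead of raising an error.

```python
import torch

v = torch.FloatTensor([1, 2, 3, 4])   # shape (4,)
col = v.unsqueeze(1)                  # shape (4, 1)

# No error is raised: the shapes broadcast to (4, 4), which is often
# the symptom of a missing transpose rather than the intended result.
result = v + col
print(result.shape)  # torch.Size([4, 4])
```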
#### Mean
```
t = torch.FloatTensor([1, 2])
print(t.mean())
t = torch.LongTensor([1, 2])
try:
print(t.mean())
except Exception as exc:
print(exc)
```
You can also use `t.mean` for higher-rank tensors to get the mean of all elements, or the mean along a particular dimension.
```
t = torch.FloatTensor([[1, 2], [3, 4]])
print(t)
print(t.mean())
print(t.mean(dim=0))
print(t.mean(dim=1))
print(t.mean(dim=-1))
```
#### Sum
```
t = torch.FloatTensor([[1, 2], [3, 4]])
print(t)
print(t.sum())
print(t.sum(dim=0))
print(t.sum(dim=1))
print(t.sum(dim=-1))
```
#### Max and Argmax
```
t = torch.FloatTensor([[1, 2], [3, 4]])
print(t)
```
The `max` operator returns one value if it is called without an argument.
```
print(t.max())
```
The `max` operator returns two values when called with a dimension specified. The first value is the maximum value, and the second value is the argmax: the index of the element with the maximum value.
```
print(t.max(dim=0))
print('Max: ', t.max(dim=0)[0])
print('Argmax: ', t.max(dim=0)[1])
print(t.max(dim=1))
print(t.max(dim=-1))
```
#### View
<div class="alert alert-warning">
This function is hard to master, but it is very useful!
</div>
```
t = np.array([[[0, 1, 2],
[3, 4, 5]],
[[6, 7, 8],
[9, 10, 11]]])
ft = torch.FloatTensor(t)
print(ft.shape)
print(ft.view([-1, 3]))
print(ft.view([-1, 3]).shape)
print(ft.view([-1, 1, 3]))
print(ft.view([-1, 1, 3]).shape)
```
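One caveat worth knowing (an addition, not from the original lab): `view` can only reinterpret memory that is contiguous. After a transpose, for example, `view` raises an error, while `reshape` copies the data when necessary. A quick sketch:

```python
import torch

t = torch.FloatTensor([[0, 1, 2], [3, 4, 5]])
tt = t.t()                     # transposing makes the tensor non-contiguous
print(tt.is_contiguous())      # False

try:
    tt.view(-1)                # view cannot flatten non-contiguous memory
except RuntimeError as exc:
    print(exc)

flat = tt.reshape(-1)          # reshape copies when it has to
print(flat.shape)              # torch.Size([6])
```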
#### Squeeze
```
ft = torch.FloatTensor([[0], [1], [2]])
print(ft)
print(ft.shape)
print(ft.squeeze())
print(ft.squeeze().shape)
```
#### Unsqueeze
```
ft = torch.Tensor([0, 1, 2])
print(ft.shape)
print(ft.unsqueeze(0))
print(ft.unsqueeze(0).shape)
print(ft.view(1, -1))
print(ft.view(1, -1).shape)
print(ft.unsqueeze(1))
print(ft.unsqueeze(1).shape)
print(ft.unsqueeze(-1))
print(ft.unsqueeze(-1).shape)
```
#### Scatter (for one-hot encoding)
<div class="alert alert-warning">
Scatter is a very flexible function. We only discuss how to use it to get a one-hot encoding of indices.
</div>
```
lt = torch.LongTensor([[0], [1], [2], [0]])
print(lt)
one_hot = torch.zeros(4, 3)
one_hot.scatter_(1, lt, 1)
print(one_hot)
```
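In newer PyTorch releases (1.1+, an assumption about your environment), `torch.nn.functional.one_hot` produces the same encoding without the scatter incantation; note it takes 1-D indices rather than the (4, 1) column used above:

```python
import torch
import torch.nn.functional as F

lt = torch.LongTensor([0, 1, 2, 0])            # 1-D indices, not (4, 1)
one_hot = F.one_hot(lt, num_classes=3).float()  # one_hot returns Long, so cast
print(one_hot)
```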
#### Casting
```
lt = torch.LongTensor([1, 2, 3, 4])
print(lt)
print(lt.float())
bt = torch.ByteTensor([True, False, False, True])
print(bt)
print(bt.long())
print(bt.float())
```
#### Concatenation
```
x = torch.FloatTensor([[1, 2], [3, 4]])
y = torch.FloatTensor([[5, 6], [7, 8]])
print(torch.cat([x, y], dim=0))
print(torch.cat([x, y], dim=1))
```
#### Stacking
```
x = torch.FloatTensor([1, 4])
y = torch.FloatTensor([2, 5])
z = torch.FloatTensor([3, 6])
print(torch.stack([x, y, z]))
print(torch.stack([x, y, z], dim=1))
print(torch.cat([x.unsqueeze(0), y.unsqueeze(0), z.unsqueeze(0)], dim=0))
```
#### Ones and Zeros Like
```
x = torch.FloatTensor([[0, 1, 2], [2, 1, 0]])
print(x)
print(torch.ones_like(x))
print(torch.zeros_like(x))
```
#### In-place Operation
```
x = torch.FloatTensor([[1, 2], [3, 4]])
print(x.mul(2.))
print(x)
print(x.mul_(2.))
print(x)
```
### Miscellaneous
#### Zip
```
for x, y in zip([1, 2, 3], [4, 5, 6]):
print(x, y)
for x, y, z in zip([1, 2, 3], [4, 5, 6], [7, 8, 9]):
print(x, y, z)
```
---
### Introduction
This notebook provides an example of how to use the PAKKR library in a training and validation pipeline using Fisher's iris dataset.
### Setup
Install the packages required for this example
```
%pip install numpy pandas scikit-learn
from typing import Callable, Dict, NamedTuple, List, Union, Tuple
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.base import BaseEstimator
from sklearn.linear_model import LogisticRegression, PassiveAggressiveClassifier
from sklearn.model_selection import StratifiedShuffleSplit
from pakkr import returns, Pipeline
class IrisData(NamedTuple):
data: np.ndarray
target: np.ndarray
target_names: np.ndarray
feature_names: List[str]
TestSize = float
```
### Defining the steps
```
@returns(stratified_sampler=StratifiedShuffleSplit)
def initialise_sampler(test_size) -> Dict[str, StratifiedShuffleSplit]:
"""
Saves the sampler into the meta to be consumed by a later step.
"""
return {
"stratified_sampler": StratifiedShuffleSplit(n_splits=1, test_size=test_size)
}
def load_iris_data() -> IrisData:
iris = datasets.load_iris()
return IrisData(**{k: iris[k] for k in IrisData._fields})
# This annotation informs PAKKR that this step returns two objects, rather than a tuple of two objects
@returns(pd.DataFrame, pd.Series)
def convert_to_pandas(iris_data: IrisData) -> Tuple[pd.DataFrame, pd.Series]:
features = pd.DataFrame(iris_data.data, columns=iris_data.feature_names)
labels = pd.Series(iris_data.target).map({
k: v for k, v in enumerate(iris_data.target_names)
})
return features, labels
@returns(pd.DataFrame, pd.Series, test_features=pd.DataFrame, test_labels=pd.Series)
def create_train_test_split(features: pd.DataFrame, labels: pd.Series, stratified_sampler: StratifiedShuffleSplit):
"""
Splits the dataset into training and testing sets.
Saves the test set into the meta to be consumed by a later step.
"""
train_idx, test_idx = next(stratified_sampler.split(features, labels))
return (
features.loc[train_idx], labels.loc[train_idx],
{"test_features": features.loc[test_idx], "test_labels": labels.loc[test_idx]}
)
def train_model(features: pd.DataFrame, labels: pd.Series, clf: BaseEstimator) -> BaseEstimator:
"""
Extracts clf from meta and fits to training data
"""
clf.fit(features, labels)
return clf
def validate_model(clf: BaseEstimator, test_features: pd.DataFrame, test_labels: pd.Series) -> float:
"""
Extracts test data from meta and scores the classifier
"""
return clf.score(test_features, test_labels)
```
### Constructing the pipeline object
```
pipeline: Callable[[BaseEstimator, TestSize], float] = Pipeline(
initialise_sampler,
load_iris_data,
convert_to_pandas,
create_train_test_split,
train_model,
validate_model
)
```
### Running the pipeline on a classifier
```
clf = LogisticRegression(multi_class="ovr", penalty="l2", solver='lbfgs')
pipeline(clf=clf, test_size=0.4)
```
---
# Class 5 Lab: Databases and ETL
## Objectives
- Configure Google Cloud SQL Databases
- Discover Database Security Options
- Connect to a MySQL DB via Python
- Generate UUIDs in Python
- Normalize API request payload
- Insert API request payloads into DB tables
## Requirements
In order to follow along, the following will need to be installed:
- Python libraries:
- notebook
- [jupyterlab](https://jupyter.org/install)
- pyyaml
- [mysql](https://dev.mysql.com/doc/connector-python/en/connector-python-installation-binary.html)
We will also be using the [WeatherAPI](https://www.weatherapi.com/). While not required, it is recommended to sign up for a free account to acquire an API Key and follow along with the lab.
## Lab
In this lab, we will write an ETL script that extracts data from an API, transforms the payload, and loads it into a MySQL database.
We will first create a MySQL database via Google Cloud SQL.
Then, we will once again be using the free WeatherAPI. We will use the historical weather API to get historical weather for the last 7 days for New York, Mexico City, and Houston. Feel free to use different cities, if you so choose. We will also write the SQL needed to create our database tables.
Finally, we'll transform our API payloads into normalized rows and load them into those tables.
### I. Configure Google Cloud SQL Database
First, let's create a new database instance via [Google Cloud SQL](https://cloud.google.com/sql). Cloud SQL is a fully managed, scalable solution for hosting MySQL, Postgres, and SQL Server Databases. Specifically, the following are managed for you:
- Provisioning Hardware
- Installing DBMS run time on VMs
- Backups
- Encryption
- User Generation
- Networking and Security
Let's navigate to the [Cloud Console](https://console.cloud.google.com/sql) and begin.
0. If you do not already have the Cloud SQL API enabled on your Project, you will need to do this first. Don't worry: you will not be billed for enabling the API.
1. Select Create Instance
2. Select your instance type and run time
1. For this lab, we will be using a MySQL Database
2. And we'll use version 8.0
3. Fill in your Instance Info
1. Select an instance ID. This will be the name of your Cloud SQL Instance, not of the database itself. Don't worry too much about this ID.
2. Select the password for the root user. In general, I would recommend letting Google generate the password for you and do not store it. We will provision user accounts later. Do NOT share root user passwords with anyone.
3. Under Database Version, select MySQL 8.0
4. Choose region and zonal availability
1. We'll be using us-central1 for environmental reasons, but feel free to choose a region that is closer to you.
2. For zonal availability, we will choose single zone as we don't need high availability for this demo and it costs less. In general, we do recommend multi-zone availability when creating production databases, however.
5. Customize your instance
1. Machine type: for this demo, let's use a lightweight instance with 1 CPU. For large databases, high memory is a better option.
2. Storage: Select SSD and 10 GB. HDD is a cheaper, lower performance option. Also note that while we can enable automatic storage increases, you cannot decrease the size of your storage. Also note the "Advanced Encryption Option" to provide a customer-managed encryption key.
3. Connections: In production, it is generally preferable to only allow private IP addresses. This restricts access to only computers on the same VPC as your database. For this demo, we will allow a public IP address. But, we can still restrict access through the allowed networks. Right now we will leave this blank and come back later when we create our DB users.
4. Automated Backups: we will leave this on the default settings. Note how GCP manages DB backups for you. In an on-prem setup or user-managed DB instance, you would have to create your own backups.
5. Maintenance: we will leave this on the default settings. Note how GCP automates maintenance for you. In an on-prem setup or user-managed DB instance, you would have to schedule and perform your own maintenance.
6. We will leave flags empty for now.
7. Under labels, let's add an environment label with the key-value `env: dev` so we know this is not a production database.
Now we can press `CREATE INSTANCE` and wait 3-5 minutes for our hardware to be provisioned and for our DB software to be installed. While we wait, let's dive into our security options.
Now would also be a good time to install the python-mysql-connector if you have not already done so.
### II. Discover Database Security Options
Let's talk a little bit about our options for securing our database. In general, we would recommend one of four patterns:
1. Private IP Address, only
2. Public IP Address with Cloud IAM
3. Public IP Address with SSL
4. Public IP Address with whitelisting (encryption optional)
Only allowing a Private IP Address restricts access to your DB from computers in the same VPC. Access can also be allowed by authenticating through a VPN. This is the strictest security option you can select, and is generally recommended for Enterprise Companies. It is more strict than anything we will need today.
Allowing a Public IP Address allows access to your DB from the Internet. This is not as scary as it sounds. By default, no IP Addresses are whitelisted, which means even though your DB is accessible from the internet, no device may access it. You have three options for access control with a publicly accessible database: IAM Roles, SSL, and whitelisting.
IAM Roles allow you to provision access only to members of your GCP Project. This is a great option for accessing your DB from other GCP services as you can provide your VMs with the IAM Roles needed for querying your data. It also allows you to limit what actions certain users can take. For example, you can have read-only roles that can query data, but not create, edit, or delete tables. Remote access (e.g. from your local machine) is also feasible, but be careful with storing keys for doing so.
SSL (Secure Socket Layer) is a technology for creating an encrypted link for transferring data. We can require SSL connections to our database, which will require all traffic to include SSL certification. This comes in the form of key files you can download from Google Cloud SQL. In general, we prefer security options that do not rely on key files as they can easily be lost or accidentally shared. Always, always store key files in secured Cloud Storage Buckets.
Finally, we can restrict traffic to our database by IP Address. We will be using this option for this demo. You can find your IP Address from [here](https://whatismyipaddress.com/). Then we'll create a new user in Google Cloud SQL that can only access our database from our IPv4 Address. Remember your username and password. We'll need that for our connection.
### III. Connect to a MySQL Database via Python
Now we're ready to connect to our Google Cloud SQL Instance. Let's take a look at [python-mysql-connector](https://dev.mysql.com/doc/connector-python/en/connector-python-example-connecting.html) documentation.
In order to connect to our database, we'll need to:
1. Instantiate a mysql connector object
2. And pass it the public IP Address for our DB, along with our username and password
```python
import mysql.connector
cnx = mysql.connector.connect(
user='scott',
password='password',
host='127.0.0.1'
)
```
Recall from the APIs for Data lab that including passwords in code is a terrible practice. So we will include this information, as well as our API Keys from the WeatherAPI, in a YAML file. Create a YAML file with that information and, in the cell below, write the code needed to import it. You should:
1. Open the path to your yaml file
2. Store your yaml as a dict called "config"
```
import yaml
config_file = open('labClass5.yaml', 'r')
config = yaml.safe_load(config_file)
```
In the cell below, import the mysql connector and instantiate a new connection with your config information from above.
```
import mysql.connector
client = mysql.connector.connect(**config['connection'])
```
In the cell below, instantiate a new `cursor` object and execute the following query:
```sql
CREATE DATABASE weather
```
```
cursor = client.cursor()
cursor.execute('CREATE DATABASE weather')
```
### IV. Generate UUIDs via Python
Recall from Class 5 that we need to replace repeated fields with an id. [UUIDs](https://docs.python.org/3/library/uuid.html) are a common and effective ID format. In the cell below, create a cities list where each element in the list is an object with two keys, `name` and `id`, where the ID is a UUID. I will be using three cities: New York, Houston, and Mexico City. You may use any number of cities.
```
import uuid
### Your Code Here
```
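One possible solution sketch (the city names are the ones used in this lab; swap in your own):

```python
import uuid

# Each city gets a random UUID, stored as a string so it fits the
# VARCHAR(40) id column created in the next cell.
cities = [{'name': name, 'id': str(uuid.uuid4())}
          for name in ['New York', 'Houston', 'Mexico City']]
print(cities)
```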
[Data Definition Language](https://www.techopedia.com/definition/1175/data-definition-language-ddl), or DDL, is SQL for creating new database tables. In the cell below, you will see the DDL needed to create a `cities` table with two fields: name and id. Both fields are of type variable character (VARCHAR), which are strings that can have up to a specified number of characters. Also note that to execute the query, we will use our cursor's `execute()` method.
```
ddl = (
"CREATE TABLE weather.cities("
"name VARCHAR(255),"
"id VARCHAR(40)"
")"
)
cursor.execute(ddl)
```
[Data Manipulation Language](https://www.techopedia.com/definition/1179/data-manipulation-language-dml), or DML, is SQL for adding or updating data in database tables. In the cell below, you will see the code needed to insert each of our cities into our new cities table. Note that in this case, we will pass to our cursor both our DML string and a tuple containing the names and IDs for each city.
```
dml = (
"INSERT INTO weather.cities ("
"name,"
"id"
")"
"VALUES ("
"%s,"
"%s"
")"
)
for city in cities:
cursor.execute(dml, (city['name'], city['id']))
```
To validate that our DML was successful, we will run a query to select each row from our cities table in the cell below. Note that we are using the `fetchall()` method to extract the results of our query.
```
cursor.execute("SELECT * FROM weather.cities")
cursor.fetchall()
```
Finally, after verifying the data in our cities table, we have to commit the changes to our table using our MySQL client's `commit()` method.
```
client.commit()
```
## BREAK
### V. Normalize API Request Payloads
Let's review. To this point we have:
1. Created a MySQL database via Google Cloud SQL
2. Connected to our database via Python
3. Created a cities table in our database
Now that we know how to interact with our database, let's replicate the ETL process we ran in Lab 4, but write our API Request Payloads to a database table, rather than a CSV. Let's start by generating our request bodies. Free accounts for the Weather API can query historical data up to 7 days. The following code creates a list with the last 7 days as well as the components you need to create API calls for the history endpoint.
Where indicated, please append to the request_bodies list an API call body for each city in your cities list and each date in the dates list. Note that requests to the history endpoint are formatted as
https://api.weatherapi.com/v1/history.json?key={weather_key}&q={city}&dt={date}
```
import requests
from datetime import date, timedelta
weather_key = config['weather_key']
end = date.today()
start = end - timedelta(7)
dates = [str(start+timedelta(days=x)) for x in range((end-start).days)]
base_url = 'http://api.weatherapi.com/v1/'
history_api = 'history.json?'
auth = f'key={weather_key}'
request_bodies = []
### Your Code Here
```
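A sketch of one way to complete the cell above, shown with stand-in values for `cities`, `dates`, and the API key (in the lab these come from the earlier cells and your config file):

```python
# Stand-ins for objects defined earlier in the lab
base_url = 'http://api.weatherapi.com/v1/'
history_api = 'history.json?'
auth = 'key=YOUR_WEATHER_KEY'                       # hypothetical key
cities = [{'name': 'New York'}, {'name': 'Houston'}, {'name': 'Mexico City'}]
dates = ['2021-05-01', '2021-05-02']                # stand-in for the real 7 days

# One request URL per (city, date) pair, following the format shown above
request_bodies = []
for city in cities:
    for d in dates:
        request_bodies.append(f"{base_url}{history_api}{auth}&q={city['name']}&dt={d}")
print(len(request_bodies))  # len(cities) * len(dates)
```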
Great. Now that we have our request bodies, in the cell below:
1. Loop through each request body
2. Make a get request
3. Convert the request payload to a dict using the .json() method
4. And append the payload to the data list
```
import requests
data = []
### Your Code Here
```
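The four steps above can be wrapped in a small helper; injecting the `get` function (a design choice for this sketch, not part of the lab) makes the loop testable without hitting the network:

```python
import requests

def fetch_payloads(request_bodies, get=requests.get):
    """GET each URL and collect the decoded JSON payloads."""
    data = []
    for body in request_bodies:
        response = get(body)
        data.append(response.json())
    return data

# In the lab you would simply call: data = fetch_payloads(request_bodies)
```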
Now that we have our payloads, we need to once again parse them into daily and hourly forecasts. The following code loops through each element in our data and pulls out the city, date, and daily forecast. Given that we have three cities and seven days per city, we should end up with 21 elements in our forecast_day list.
```
forecast_day = []
for row in data:
for day in row['forecast']['forecastday']:
forecast_day.append(
{'city': row['location']['name'], 'date': day['date'], 'forecast': day['day']}
)
print(len(forecast_day))
```
In the cell below, replicate the process above for the hourly forecasts. Each element in the forecast_hour list should be a dict with the following keys:
- city
- date
- hour
- forecast
Because we have three cities, seven days per city, and 24 hours per day, we should end up with 504 elements in forecast_hour.
```
forecast_hour = []
### Your Code here
###
print(len(forecast_hour))
```
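A sketch of the hourly parsing, run against a minimal stand-in payload shaped like a WeatherAPI history response (the `hour` list and its `time` field are assumptions based on the API's documented schema):

```python
# Minimal stand-in payload: one city, one day, two hours
data = [{
    'location': {'name': 'New York'},
    'forecast': {'forecastday': [{
        'date': '2021-05-01',
        'hour': [{'time': '2021-05-01 00:00', 'temp_f': 51.0},
                 {'time': '2021-05-01 01:00', 'temp_f': 50.2}],
    }]},
}]

forecast_hour = []
for row in data:
    for day in row['forecast']['forecastday']:
        for hour in day['hour']:
            forecast_hour.append({
                'city': row['location']['name'],
                'date': day['date'],
                'hour': hour['time'],
                'forecast': hour,
            })
print(len(forecast_hour))  # 2 here; 504 with 3 cities x 7 days x 24 hours
```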
To normalize our request payloads, we need to add the corresponding city id to each object in our forecast_day and forecast_hour lists.
In the cell below, write a for loop that for each object in forecast_day, appends a key `city_id` equal to the city_id corresponding to the city of that forecast.
```
### Your Code Here
```
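One way to complete the loop above, shown with stand-in data (building a name-to-id lookup dict once avoids a nested loop over cities):

```python
# Stand-ins for the cities list and one parsed daily forecast
cities = [{'name': 'New York', 'id': 'ny-uuid'},
          {'name': 'Houston', 'id': 'hou-uuid'}]
forecast_day = [{'city': 'Houston', 'date': '2021-05-01', 'forecast': {}}]

# Build the lookup once, then tag each forecast with its city's id
city_ids = {c['name']: c['id'] for c in cities}
for day in forecast_day:
    day['city_id'] = city_ids[day['city']]
print(forecast_day[0]['city_id'])  # hou-uuid
```

The same lookup works verbatim for the forecast_hour list in the following cell.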
In the cell below, write a for loop that for each object in forecast_hour, appends a key city_id equal to the city_id corresponding to the city of that forecast.
```
### Your Code Here
```
### VI. Insert API Request Payloads into Database Tables
Ok, now we're ready to write our request payloads to our MySQL Database. In the cell below, I am reading from the "create_daily_forecast.sql" file for my DDL. You may instead choose to write your DDL as a string like above, but it is generally preferable to read SQL files into your code rather than to include the SQL directly as a string.
```
with open("create_daily_forecast.sql") as ddl:
cursor.execute(ddl.read())
```
Remember from the DML for our cities table that we need to pass our data into the cursor as a tuple. In the cell below, I am looping through the forecast_day list and for each day, creating a tuple with:
- A UUID to act as the primary key
- city_id
- Date
- Max Temp
- Min Temp
We would probably want to include additional fields if we were creating a full weather database, but this is enough for a demo.
Once we have our tuples ready, we can use the `executemany()` method, which applies a list of tuples to our DML code. Also note that we are reading from the "insert_daily_forecast.sql" file for our DML.
```
daily_forecast = []
for day in forecast_day:
daily_forecast.append((
str(uuid.uuid4()),
day['city_id'],
day['date'],
day['forecast']['maxtemp_f'],
day['forecast']['mintemp_f'])
)
with open("insert_daily_forecast.sql") as dml:
cursor.executemany(dml.read(), daily_forecast)
```
Once we insert our rows, we'll need to commit the changes to our DB
```
client.commit()
```
Now to verify the success of our DML, we'll run the following query to get the number of rows in our daily_forecast table. It should match the number of elements in our forecast_day list (21).
Note that the cursor.fetchall() method always returns a list of tuples. As such, we will need to first take the first element of the returned list and then the first element of that tuple to get a single number.
```
query = 'SELECT COUNT(*) FROM weather.daily_forecast'
cursor.execute(query)
output = cursor.fetchall()
print(f"{output[0][0]} rows inserted")
```
In the cell below, please run the DDL to create the hourly forecast table, found in "create_hourly_forecast.sql"
```
### Your Code Here
```
In the cell below, please run the DML to insert the data from forecast_hour into the newly created hourly_forecast table.
```
### Your code Here
```
After running your code above, commit the changes in the cell below.
```
client.commit()
```
Finally, run the cell below to verify your insert was successful. You should have inserted 504 rows (or however many records you had in your forecast_hour list).
```
query = 'SELECT COUNT(*) FROM weather.hourly_forecast'
cursor.execute(query)
output = cursor.fetchall()
print(f"{output[0][0]} rows inserted")
```
---
<a href="https://colab.research.google.com/github/Harrow-Enigma/TeamEngima-ProjectEco-AI/blob/main/Project_Eco_AI_Beta_Testing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Project ECO AI Beta Testing Model
Copyright 2021 YIDING SONG
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
!pip install tabulate
!curl -o label_classes.py https://raw.githubusercontent.com/Harrow-Enigma/TeamEngima-ProjectEco-AI/main/label_classes.py
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
from label_classes import Label, FloatLabel, IntClass, IntClassMap
from datetime import timedelta, timezone
from datetime import datetime as dt
from tqdm import tqdm
import pickle as pkl
import requests
import json
import os
mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False
mpl.rcParams['axes.titlecolor'] = 'green'
mpl.rcParams['axes.labelcolor'] = 'black'
mpl.rcParams['xtick.color'] = 'black'
mpl.rcParams['ytick.color'] = 'black'
np.random.seed(219)
tf.random.set_seed(219)
```
## Loading the data
```
beta_api_url = 'https://dev-test.projecteco.ml/api/v1/rest/output/forms/'
response = requests.get(beta_api_url)
json_data = response.json()
for k in range(len(json_data)):
try:
_ = json_data[k]['localpollutiondata']
_has_key = True
except:
print('Error: No `localpollutiondata` key')
_has_key = False
if _has_key:
for pol_key in list(json_data[k]['localpollutiondata']\
['data'].keys()):
json_data[k][pol_key] = json_data[k]['localpollutiondata']\
['data'][pol_key]['v']
json_data[k].pop('localpollutiondata')
print(json.dumps(json_data, indent=4, sort_keys=True))
```
## Defining the labels
```
FEATURE_CLASSES = [
FloatLabel('h'), FloatLabel('no2'), FloatLabel('o3'),
FloatLabel('p'), FloatLabel('pm10'), FloatLabel('pm25'),
FloatLabel('t'), FloatLabel('w'), FloatLabel('wg'),
]
LABEL_CLASSES = [
IntClass('Q1 (General feeling)', _key='q1'),
IntClass('Q2 (Concentration)', _key='q2'),
IntClass('Q3 (Work Stress)', _key='q3'),
IntClassMap('Q4 (Dizziness, headaches, shortness of breath)',
{'no': 0, 'yes': 1},
_key='q4'),
IntClassMap('Q5 (Allergic responses)',
{'no': 0, 'yes': 1},
_key='q5')
]
```
## Data Pre-Processing
```
def handle_exception(feat_class, timestep_data):
try:
_val = timestep_data[feat_class.key]
if _val is not None:
return feat_class.fwd_call(_val)
return feat_class.fallback
except:
return feat_class.fallback
features = []
labels = []
for t in json_data:
features.append([handle_exception(c, t)
for c in FEATURE_CLASSES])
labels.append([handle_exception(c, t)
for c in LABEL_CLASSES])
features = np.array(features, np.float32)
labels = np.array(labels, np.float32)
def standardize(arr):
m = arr.mean(0)
s = arr.std(0)
for i in range(len(s)):
if s[i] == 0:
s[i] = 1e-8
arr = (arr - m)/s
return arr, m, s
def destandardize(arr, m, s):
return arr * s + m
def rescale(arr, delta=0.01):
arr_max = arr.max(axis=0) + delta
arr_min = arr.min(axis=0) - delta
arr_range = arr_max - arr_min
arr_ofst = (arr_max + arr_min) / 2
return (arr - arr_ofst) / arr_range, arr_ofst, arr_range
def descale(arr, arr_ofst, arr_range):
return arr * arr_range + arr_ofst
def normalize(arr):
arr_std, m, s = standardize(arr)
arr_norm, o, r = rescale(arr_std)
return arr_norm, m, s, o, r
def denormalize(arr, m, s, o, r):
return destandardize(descale(arr, o, r), m, s)
(features_norm,
features_mean,
features_stddv,
features_ofst,
features_range) = normalize(features)
(labels_norm,
labels_mean,
labels_stddv,
labels_ofst,
labels_range) = normalize(labels)
# Sanity check: de-normalizing should recover the originals (up to float error)
assert np.allclose(features, denormalize(features_norm, features_mean,
                                         features_stddv, features_ofst,
                                         features_range))
assert np.allclose(labels, denormalize(labels_norm, labels_mean,
                                       labels_stddv, labels_ofst,
                                       labels_range))
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset_size = len(features_norm)
ratio = 9/10
train_no = int(ratio * dataset_size)
test_no = dataset_size - train_no
print('Number of training samples: {}'.format(train_no))
print('Number of testing samples: {}'.format(test_no))
train_features_norm = features_norm[:train_no]
test_features_norm = features_norm[train_no:]
train_labels_norm = labels_norm[:train_no]
test_labels_norm = labels_norm[train_no:]
train_dataset = tf.data.Dataset.from_tensor_slices(
(train_features_norm, train_labels_norm)
).batch(2, drop_remainder=False)
test_dataset = tf.data.Dataset.from_tensor_slices(
(test_features_norm, test_labels_norm)
).batch(2, drop_remainder=False)
pkl.dump({
'features': {
'classes': FEATURE_CLASSES,
'mean': features_mean,
'std_dev': features_stddv,
'offset': features_ofst,
'range': features_range
},
'labels': {
'classes': LABEL_CLASSES,
'mean': labels_mean,
'std_dev': labels_stddv,
'offset': labels_ofst,
'range': labels_range
},
}, open('beta_data_aux.data', 'wb'))
```
## Building the model
```
class DNNModel(tf.keras.Model):
def __init__(self,
inp_shape: int,
out_shape: int,
units=[32, 64, 64],
name='DNNModel'):
super(DNNModel, self).__init__(name = name)
self.inp_shape = inp_shape
self.out_shape = out_shape
self.stack = [tf.keras.layers.Dense(i, activation='relu') for i in units]
self.out = tf.keras.layers.Dense(self.out_shape, activation='tanh')
def call(self, inp):
x = inp
for _layer in self.stack:
x = _layer(x)
return self.out(x)
def functional(self):
inputs = tf.keras.Input(self.inp_shape)
outputs = self.call(inputs)
return tf.keras.Model(inputs, outputs, name=self.name)
sample_model = DNNModel(
len(FEATURE_CLASSES), len(LABEL_CLASSES)
)
for i in dataset.take(1):
sample_pred = sample_model(tf.expand_dims(i[0], 0))
print('Sample prediction of shape {}:\n{}'.format(
sample_pred.shape, sample_pred
))
```
### Model Visualization
```
sample_model.summary()
tf.keras.utils.plot_model(sample_model.functional(), to_file="model.png")
```
## Defining losses and optimizers
```
mae = tf.keras.losses.MeanAbsoluteError()
optim = tf.keras.optimizers.Adam(1e-3)
model = DNNModel(
len(FEATURE_CLASSES), len(LABEL_CLASSES)
)
model.compile(optimizer = optim, loss = mae)
```
## Defining training checkpoints
```
checkpoint_dir = './ProjectECO_Beta_Checkpoints/'
if not os.path.exists(checkpoint_dir):
os.mkdir(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath = checkpoint_prefix,
save_weights_only = True)
```
## TRAINING!!
```
model.fit(train_dataset.repeat(), callbacks=[checkpoint_callback],
steps_per_epoch = 1000, epochs = 10)
model.evaluate(test_dataset)
model.save_weights('beta_weights.h5')
model.functional().save('beta_model_func.h5')
from google.colab import files
for name in os.listdir('./'):
if not os.path.isdir(name):
files.download(name)
```
## Standalone Prediction
```
!pip install tabulate
!curl -o label_classes.py https://raw.githubusercontent.com/Harrow-Enigma/TeamEngima-ProjectEco-AI/main/label_classes.py
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
from label_classes import Label, FloatLabel, IntClass, IntClassMap
from datetime import timedelta, timezone
from datetime import datetime as dt
from tabulate import tabulate
from tqdm import tqdm
import pickle as pkl
import requests
import json
import os
```
### Pre-Requisite Objects
```
def standardize_on_params(arr, m, s):
return (arr - m)/s
def destandardize(arr, m, s):
return arr * s + m
def rescale_on_params(arr, arr_ofst, arr_range):
return (arr - arr_ofst) / arr_range
def descale(arr, arr_ofst, arr_range):
return arr * arr_range + arr_ofst
def normalize_on_params(arr, m, s, o, r):
arr_std = standardize_on_params(arr, m, s)
arr_norm = rescale_on_params(arr_std, o, r)
return arr_norm
def denormalize(arr, m, s, o, r):
return destandardize(descale(arr, o, r), m, s)
obj = pkl.load(open('beta_data_aux.data', 'rb'))
FEATURE_CLASSES = obj['features']['classes']
features_mean = obj['features']['mean']
features_stddv = obj['features']['std_dev']
features_ofst = obj['features']['offset']
features_range = obj['features']['range']
LABEL_CLASSES = obj['labels']['classes']
labels_mean = obj['labels']['mean']
labels_stddv = obj['labels']['std_dev']
labels_ofst = obj['labels']['offset']
labels_range = obj['labels']['range']
model = tf.keras.models.load_model('beta_model_func.h5')
```
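As a sanity check, `normalize_on_params` and `denormalize` should invert each other. A standalone round-trip sketch (the helper bodies are restated inline here so the snippet runs in isolation):

```python
import numpy as np

# Restated copies of the helpers above, so this sketch runs standalone.
def normalize_on_params(arr, m, s, o, r):
    # standardize, then rescale
    return ((arr - m) / s - o) / r

def denormalize(arr, m, s, o, r):
    # descale, then destandardize
    return (arr * r + o) * s + m

x = np.array([1.0, 2.0, 3.0, 4.0])
m, s = x.mean(), x.std()
x_std = (x - m) / s
o, r = x_std.min(), x_std.max() - x_std.min()

x_norm = normalize_on_params(x, m, s, o, r)
print(np.allclose(denormalize(x_norm, m, s, o, r), x))  # True
```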
### Pre-Requisite Data
```
def handle_exception(feat_class, timestep_data):
try:
_val = timestep_data[feat_class.key]
if _val is not None:
return feat_class.fwd_call(_val)
return feat_class.fallback
except:
return feat_class.fallback
beta_api_url = 'https://dev-test.projecteco.ml/api/v1/rest/output/forms/'
response = requests.get(beta_api_url)
json_data = response.json()
for k in range(len(json_data)):
try:
_ = json_data[k]['localpollutiondata']
_has_key = True
except:
print('Error: No `localpollutiondata` key')
_has_key = False
if _has_key:
for pol_key in list(json_data[k]['localpollutiondata']\
['data'].keys()):
json_data[k][pol_key] = json_data[k]['localpollutiondata']\
['data'][pol_key]['v']
json_data[k].pop('localpollutiondata')
server_features = []
server_labels = []
for t in json_data:
server_features.append([handle_exception(c, t)
for c in FEATURE_CLASSES])
server_labels.append([handle_exception(c, t)
for c in LABEL_CLASSES])
server_features = np.array(server_features, np.float32)
server_labels = np.array(server_labels, np.float32)
```
### Visualization Code
```
def visualize_inputs(inps, headers=('Input name', 'Input value')):
data_arr = [[c.name, c.fwd_call(val)] for c, val in zip(FEATURE_CLASSES, inps)]
print(tabulate(data_arr, headers=headers))
def visualize_features(feat, headers=('Feature name', 'Feature value')):
data_arr = [[c.name, val] for c, val in zip(FEATURE_CLASSES, feat)]
print(tabulate(data_arr, headers=headers))
def visualize_preds(preds, headers=('Prediction name', 'Prediction value')):
data_arr = [[c.name, c.rev_call(val)] for c, val in zip(LABEL_CLASSES, preds)]
print(tabulate(data_arr, headers))
def preproc(inps):
feats = np.array([[c.fwd_call(val) for c, val in zip(FEATURE_CLASSES, entry)]
for entry in inps])
feats_norm = normalize_on_params(feats, features_mean, features_stddv,
features_ofst, features_range)
return feats_norm
def denorm_preds(preds_norm):
return denormalize(preds_norm, labels_mean, labels_stddv,
labels_ofst, labels_range)
def visualize(inps, see_feats=False):
feats_norm = preproc(inps)
preds_norm = model.predict(feats_norm)
preds = denormalize(preds_norm, labels_mean, labels_stddv,
labels_ofst, labels_range)
for e, (i, f, p) in enumerate(zip(inps, feats_norm, preds)):
print('Visualizing input {}'.format(e))
print()
visualize_inputs(i)
print()
if see_feats:
visualize_features(f)
print()
visualize_preds(p)
print('\n=====================================================================\n\n')
def compare_vis(inps, labs, see_feats=False):
feats_norm = preproc(inps)
preds_norm = model.predict(feats_norm)
preds = denorm_preds(preds_norm)
for e, (i, f, p, g) in enumerate(zip(inps, feats_norm, preds, labs)):
print('Visualizing input {}'.format(e))
print()
visualize_inputs(i)
print()
if see_feats:
visualize_features(f)
print()
visualize_preds(p, headers=("Prediction Name", "Model Output"))
print()
visualize_preds(g, headers=("Prediction Name", "Ground Truth"))
print('\n==================================================================\n\n')
def locate_label_by_name(_name, classes):
for e, i in enumerate(classes):
if i.name == _name:
return e
def replace_in_arr(arr, idx, newval):
ret = arr.copy()
ret[idx] = newval
return ret
def makegraph(x, y, xlab='', ylab='', title=''):
plt.figure(figsize=(8, 5), dpi=150)
plt.plot(x, y)
plt.xlabel(xlab)
plt.ylabel(ylab)
plt.title(title)
plt.show()
def autorange(featurename, sub_d=100):
feats = server_features[:, locate_label_by_name(featurename,
FEATURE_CLASSES)]
step = (feats.max() - feats.min()) / (sub_d - 1)
return np.arange(feats.min(), feats.max() + step, step)
def plot_relation_by_name(featurename,
labelname,
_range,
_vars=server_features[0]):
_feat_loc = locate_label_by_name(featurename, FEATURE_CLASSES)
_lab_loc = locate_label_by_name(labelname, LABEL_CLASSES)
_x = [replace_in_arr(_vars, _feat_loc, i) for i in _range]
_y = denorm_preds(model.predict(preproc(_x)))[:, _lab_loc]
makegraph(_range, _y, featurename, labelname,
'How {} responses vary with {} levels'.format(labelname,
featurename))
def plot_relation_from_data(featurename,
labelname):
_x = server_features[:, locate_label_by_name(featurename, FEATURE_CLASSES)]
_y = server_labels[:, locate_label_by_name(labelname, LABEL_CLASSES)]
_idx = np.argsort(_x)
_x = _x[_idx]
_y = _y[_idx]
makegraph(_x, _y, featurename, labelname,
'How {} responses vary with {} levels'.format(labelname,
featurename))
```
### Visualization
```
visualize(server_features)
compare_vis(server_features, server_labels)
server_features[:, locate_label_by_name('pm10', FEATURE_CLASSES)]
# How stress levels vary with PM10 concentration,
# assuming that all other values follow server_features[0]
plot_relation_by_name('pm10', 'Q3 (Work Stress)',
autorange('pm25', 100))
plot_relation_by_name('no2', 'Q2 (Concentration)',
autorange('pm25', 100))
plot_relation_from_data('no2', 'Q2 (Concentration)')
```
```
#Required libraries
import pandas as pd
import numpy as np
import requests
from bs4 import BeautifulSoup
#Lists to collect results
url_list = []
prices_list = []
propTitles = []
propValues = []
#Scraping the product properties
for i in range(1,2): #replace 2 with the number of pages to scrape
    url = "https://www.trendyol.com/cep-telefonu-x-c103498?pi=" + str(i) #update the URL on each iteration of the loop
    r = requests.get(url) #request the prepared URL
    source = BeautifulSoup(r.content,"lxml") #parse the content of the requested page
    urls = source.find_all("div",attrs={"class":"p-card-chldrn-cntnr"}) #find all product cards
    for url in urls:
        url_phone = "https://www.trendyol.com/"+url.a.get("href") #build the product link
        url_list.append(url_phone) #save the link
        print(url_phone)
        r_phone = requests.get(url_phone) #request the product page
        source_phone = BeautifulSoup(r_phone.content,"lxml") #parse the product page content
        properties = source_phone.find_all("div",attrs={"class":"prop-item"}) #find all property items
        for prop in properties:
            prop_title = prop.find("div",attrs={"class":"item-key"}).text
            prop_value = prop.find("div",attrs={"class":"item-value"}).text
            propTitles.append(prop_title)
            propValues.append(prop_value)
    prices = source.find_all("div",attrs={"class":"prc-box-sllng"}) #find all price elements
    for price in prices:
        prices_list.append(price.text) #append each found price to the list
        print(price.text)
print(str(len(url_list))+" links found.")
print(str(len(prices_list))+" prices found.")
print(str(len(propTitles))+" property titles found.")
print(str(len(propValues))+" property values found.")
#Write the URLs and prices into a DataFrame
df_urls = pd.DataFrame()
df_urls["urls"] = url_list
df_urls["prices"] = prices_list
df_urls.head()
#Number of products found
phones = len(url_list)
#Get the unique property titles
columns = np.array(propTitles)
columns = np.unique(columns)
#Build a new DataFrame with the titles as columns, plus URL and price
df = pd.DataFrame(columns=columns)
df["url"] = url_list
df["price"] = prices_list
#Show the resulting DataFrame
df.head()
#Fetch every product again and write the values into the property columns
for i in range(0,phones):
    url = df['url'].loc[i]
    r = requests.get(url)
    source = BeautifulSoup(r.content,"lxml")
    properties = source.find_all("div",attrs={"class":"prop-item"})
    for prop in properties:
        prop_title = prop.find("div",attrs={"class":"item-key"}).text
        prop_value = prop.find("div",attrs={"class":"item-value"}).text
        print(prop_title+prop_value)
        df[prop_title].loc[i] = prop_value
df.head()
#Save the DataFrame as CSV
df.to_csv("phone_trendyol_data.csv",index=False)
df.columns
```
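The `find_all` pattern used throughout the scraper can be illustrated on a static HTML snippet, without hitting the live site. The class names below match the ones parsed above, but the HTML itself is synthetic:

```python
from bs4 import BeautifulSoup

# Synthetic HTML mimicking the product-property markup parsed above.
html = """
<div class="prop-item">
  <div class="item-key">RAM</div>
  <div class="item-value">8 GB</div>
</div>
<div class="prop-item">
  <div class="item-key">Screen</div>
  <div class="item-value">6.1 inch</div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")  # "lxml" also works if installed
# Map each property key to its value, exactly as the scraping loop does.
props = {p.find("div", attrs={"class": "item-key"}).text:
         p.find("div", attrs={"class": "item-value"}).text
         for p in soup.find_all("div", attrs={"class": "prop-item"})}
print(props)  # {'RAM': '8 GB', 'Screen': '6.1 inch'}
```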
# CIFAR10 CNN Classification
Note: This notebook is designed to run with Python3 and GPU runtime.

This notebook uses TensorFlow 2.x.
```
%tensorflow_version 2.x
```
####[CCC-01]
Import modules and set a random seed.
```
import numpy as np
from pandas import DataFrame
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models, initializers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.datasets import cifar10
np.random.seed(20190610)
tf.random.set_seed(20190610)
```
####[CCC-02]
Download the CIFAR10 dataset and store into NumPy arrays.
```
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
train_images = train_images.astype('float32') / 255
test_images = test_images.astype('float32') / 255
train_labels = tf.keras.utils.to_categorical(train_labels, 10)
test_labels = tf.keras.utils.to_categorical(test_labels, 10)
```
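`to_categorical` simply one-hot encodes the integer labels. The same transform can be sketched with plain NumPy (`one_hot` is a helper introduced here for illustration, not part of Keras):

```python
import numpy as np

def one_hot(labels, num_classes):
    """One-hot encode integer class labels, mirroring keras.utils.to_categorical."""
    # np.eye builds the identity matrix; indexing it picks the one-hot rows.
    return np.eye(num_classes, dtype='float32')[np.asarray(labels).ravel()]

print(one_hot([0, 2, 1], 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```

`ravel()` handles the column-vector shape that `cifar10.load_data()` returns for its labels.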
####[CCC-03]
Show sample images for each category.
```
fig = plt.figure(figsize=(8, 10))
c = 0
for i in range(10):
for j in range(len(train_images)):
if np.argmax(train_labels[j]) == i:
c += 1
subplot = fig.add_subplot(10, 8, c)
subplot.set_xticks([])
subplot.set_yticks([])
subplot.imshow(train_images[j])
if c % 8 == 0:
break
```
####[CCC-04]
Define the image data generator.
```
datagen = ImageDataGenerator(
rotation_range=10,
width_shift_range=0.1,
height_shift_range=0.1,
zoom_range=[0.8, 1.2],
horizontal_flip=True,
channel_shift_range=0.2)
```
####[CCC-05]
Show some generated images for each category.
```
fig = plt.figure(figsize=(8, 10))
c = 0
for i in range(10):
for j in range(len(train_images)):
if np.argmax(train_labels[j]) == i:
break
c += 1
subplot = fig.add_subplot(10, 8, c)
subplot.set_xticks([])
subplot.set_yticks([])
subplot.imshow(train_images[j])
for _ in range(7):
img = datagen.flow(np.array([train_images[j]]), batch_size=1)[0][0]
c += 1
subplot = fig.add_subplot(10, 8, c)
subplot.set_xticks([])
subplot.set_yticks([])
subplot.imshow(img)
```
####[CCC-06]
Define a CNN model.
```
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), padding='same',
kernel_initializer=initializers.TruncatedNormal(),
use_bias=True, activation='relu',
input_shape=(32, 32, 3),
name='conv_filter1-1'))
model.add(layers.Conv2D(32, (3, 3), padding='same',
kernel_initializer=initializers.TruncatedNormal(),
use_bias=True, activation='relu',
name='conv_filter1-2'))
model.add(layers.MaxPooling2D((2, 2), name='max_pooling1'))
model.add(layers.Dropout(rate=0.25, name='dropout1'))
model.add(layers.Conv2D(64, (3, 3), padding='same',
kernel_initializer=initializers.TruncatedNormal(),
use_bias=True, activation='relu',
name='conv_filter2-1'))
model.add(layers.Conv2D(64, (3, 3), padding='same',
kernel_initializer=initializers.TruncatedNormal(),
use_bias=True, activation='relu',
name='conv_filter2-2'))
model.add(layers.MaxPooling2D((2, 2), name='max_pooling2'))
model.add(layers.Dropout(rate=0.25, name='dropout2'))
model.add(layers.Flatten(name='flatten'))
model.add(layers.Dense(512, activation='relu',
kernel_initializer=initializers.TruncatedNormal(),
name='hidden'))
model.add(layers.Dropout(rate=0.5, name='dropout3'))
model.add(layers.Dense(10, activation='softmax', name='softmax'))
model.summary()
```
####[CCC-07]
Compile the model using the Adam optimizer and cross entropy as the loss function.
```
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['acc'])
```
####[CCC-08]
Train the model. It achieves about 82% accuracy.
```
batch_size = 64
history = model.fit_generator(
datagen.flow(train_images, train_labels, batch_size=batch_size),
validation_data=(test_images, test_labels),
steps_per_epoch=len(train_images) / batch_size,
epochs=20)
```
####[CCC-09]
Plot charts for accuracy and loss values.
```
DataFrame({'acc': history.history['acc'],
'val_acc': history.history['val_acc']}).plot()
DataFrame({'loss': history.history['loss'],
'val_loss': history.history['val_loss']}).plot()
```
# About the data
The 20 newsgroups dataset comprises around 18000 newsgroup posts on 20 topics. The classification problem is to identify the newsgroup a post was submitted to, given the text of the post.
There are a few versions of this dataset from different sources online. Below, we use the version within scikit-learn which is already split into a train and test/eval set. For a longer introduction to this dataset, see the [scikit-learn website](http://scikit-learn.org/stable/datasets/twenty_newsgroups.html)
This sequence of notebooks will write files to the file system under the WORKSPACE_PATH folder. Feel free to change this location in the next cell.
```
# Note this path must be under /content/datalab in Datalab.
# It is not recommended to use paths in /content/datalab/docs
WORKSPACE_PATH = '/content/datalab/workspace/tf/text_classification_20newsgroup'
!mkdir -p {WORKSPACE_PATH}
import os
os.chdir(WORKSPACE_PATH)
import numpy as np
import pandas as pd
import os
import re
import csv
from sklearn.datasets import fetch_20newsgroups
# data will be downloaded. Note that an error message saying something like "No handlers could be found for
# logger sklearn.datasets.twenty_newsgroups" might be printed, but this is not an error.
news_train_data = fetch_20newsgroups(subset='train', shuffle=True, random_state=42, remove=('headers', 'footers', 'quotes'))
news_test_data = fetch_20newsgroups(subset='test', shuffle=True, random_state=42, remove=('headers', 'footers', 'quotes'))
```
# Inspecting and cleaning the raw data
The distributions of labels/newsgroups within the training and test datasets are almost uniform. More importantly, the distribution between test and training is about the same. Note that the first column is the integer id for the newsgroup while the second column is the number of text examples with this newsgroup label.
Printing the 3rd element in the test dataset shows the data contains text with newlines, punctuation, misspellings, and other items common in text documents. To build a model, we will clean up the text by removing some of these issues.
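The per-newsgroup counts described above can be computed with `np.bincount`. A standalone sketch on synthetic targets (the real counts would come from `news_train_data.target`):

```python
import numpy as np

# Synthetic stand-in for news_train_data.target: 20 classes, near-uniform.
rng = np.random.RandomState(42)
targets = rng.randint(0, 20, size=1000)

# bincount gives one count per class id; minlength guarantees all 20 slots.
counts = np.bincount(targets, minlength=20)
print(counts.min(), counts.max())  # the spread indicates how uniform the labels are
```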
```
news_train_data.data[2], news_train_data.target_names[news_train_data.target[2]]
def clean_and_tokenize_text(news_data):
"""Cleans some issues with the text data
Args:
news_data: list of text strings
Returns:
For each text string, an array of tokenized words are returned in a list
"""
cleaned_text = []
for text in news_data:
x = re.sub('[^\w]|_', ' ', text) # only keep numbers and letters and spaces
x = x.lower()
x = re.sub(r'[^\x00-\x7f]',r'', x) # remove non ascii texts
tokens = [y for y in x.split(' ') if y] # remove empty words
tokens = ['[number]' if x.isdigit() else x for x in tokens]
# As an exercise, try stemming each token using python's nltk package.
cleaned_text.append(tokens)
return cleaned_text
clean_train_tokens = clean_and_tokenize_text(news_train_data.data)
clean_test_tokens = clean_and_tokenize_text(news_test_data.data)
def get_unique_tokens_per_row(text_token_list):
"""Collect unique tokens per row.
Args:
text_token_list: list, where each element is a list containing tokenized text
Returns:
One list containing the unique tokens in every row. For example, if row one contained
['pizza', 'pizza'] while row two contained ['pizza', 'cake', 'cake'], then the output list
would contain ['pizza' (from row 1), 'pizza' (from row 2), 'cake' (from row 2)]
"""
words = []
for row in text_token_list:
words.extend(list(set(row)))
return words
# Make a plot where the x-axis is a token, and the y-axis is how many text documents
# that token is in.
words = pd.DataFrame(get_unique_tokens_per_row(clean_train_tokens) , columns=['words'])
token_frequency = words['words'].value_counts() # how many documents contain each token.
token_frequency.plot(logy=True)
```
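The cleaning pipeline above can be exercised on a single string. A standalone restatement of the same steps (`tokenize` is introduced here for illustration; it mirrors `clean_and_tokenize_text`):

```python
import re

def tokenize(text):
    """Mirror of the cleaning steps above: keep word characters, lowercase,
    strip non-ASCII, drop empty tokens, map digit runs to '[number]'."""
    x = re.sub(r'[^\w]|_', ' ', text)      # only keep numbers, letters and spaces
    x = x.lower()
    x = re.sub(r'[^\x00-\x7f]', r'', x)    # remove non-ASCII characters
    tokens = [t for t in x.split(' ') if t]  # drop empty words
    return ['[number]' if t.isdigit() else t for t in tokens]

print(tokenize('Hello, World! 42'))  # ['hello', 'world', '[number]']
```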
Note that most of our tokens only appear in one document, while some appear in almost every document. To build a good model, we should remove these low-frequency tokens.
```
len(news_train_data.data), len(token_frequency) # There are many more tokens than examples!
# Filter tokens.
vocab = token_frequency[token_frequency > 10]
vocab.plot(logy=True)
import six
CONTROL_WORDS = ['<s>', '</s>', '<unk>']
vocab_id = {v[0]: idx + 1 for idx, v in enumerate(sorted(six.iteritems(vocab), key=lambda x: x[1], reverse=True))}
for c in CONTROL_WORDS:
vocab_id[c] = len(vocab_id)
def filter_text_by_vocab(news_data, vocab_id):
"""Removes tokens if not in vocab.
Args:
news_data: list, where each element is a token list
vocab: set containing the tokens to keep.
Returns:
List of strings containing the final cleaned text data
"""
wids_all = []
for row in news_data:
wids = [vocab_id[token] if (token in vocab_id) else vocab_id['<unk>'] for token in row]
wids = [vocab_id['<s>']] + wids + [vocab_id['</s>']]
wids = wids[:128]
wids_all.append(wids)
return wids_all
clean_train_data = filter_text_by_vocab(clean_train_tokens, vocab_id)
clean_test_data = filter_text_by_vocab(clean_test_tokens, vocab_id)
# As a check, let's make sure we didn't remove any data rows.
len(clean_train_data), len(news_train_data.data), len(clean_test_data), len(news_test_data.data)
# vocab size
len(vocab_id)
def pad_wids(wids, length):
    """Truncate or right-pad each instance to a fixed length."""
    padded = []
    for r in wids:
        if len(r) >= length:
            padded.append(r[0:length])
        else:
            padded.append(r + [0] * (length - len(r)))
    return padded
padded_train_data = pad_wids(clean_train_data, 128)
padded_test_data = pad_wids(clean_test_data, 128)
```
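The padding logic can be checked in isolation. A minimal restatement of `pad_wids` as a one-liner (`pad_to` is introduced here for illustration):

```python
def pad_to(rows, length, pad_value=0):
    """Truncate or right-pad each row to a fixed length (mirrors pad_wids above)."""
    return [r[:length] if len(r) >= length else r + [pad_value] * (length - len(r))
            for r in rows]

print(pad_to([[1, 2], [1, 2, 3, 4]], 3))  # [[1, 2, 0], [1, 2, 3]]
```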
# Build a DNN model
We'll first build a simple DNN model with only 3 layers: input, embeddings, and output.
```
import tensorflow as tf
import shutil
import tensorflow.contrib.learn as tflearn
import tensorflow.contrib.layers as tflayers
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
from google.datalab.ml import Summary
TRAIN_BATCH_SIZE = 64
EVAL_BATCH_SIZE = 512
EMBEDDING_SIZE = 512
def dnn_model(batch_size, train_data, targets, mode):
"""Build an DNN Model. """
with tf.name_scope(mode):
raw_data = tf.convert_to_tensor(train_data, dtype=tf.int64)
targets = tf.convert_to_tensor(targets, dtype=tf.int64)
batch_num = len(train_data) // batch_size - 1
i = tf.train.range_input_producer(batch_num, shuffle=True).dequeue()
input_seqs = raw_data[i * batch_size: (i + 1) * batch_size]
targets = targets[i * batch_size: (i + 1) * batch_size]
length = tf.count_nonzero(input_seqs, axis=1, dtype=tf.int32)
embedding_map = tf.get_variable(
name="embeddings_map",
shape=[len(vocab_id), EMBEDDING_SIZE])
seq_embeddings = tf.nn.embedding_lookup(embedding_map, input_seqs)
# Simply combine embeddings.
combined = tf.sqrt(tf.reduce_sum(tf.square(seq_embeddings), 1))
logits = tf.contrib.layers.fully_connected(
inputs=combined,
num_outputs=20,
activation_fn=None)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=targets, logits=logits)
losses= tf.reduce_mean(cross_entropy, name='xentropy_mean')
predictions = tf.argmax(logits, 1)
_, accuracy = tf.contrib.metrics.streaming_accuracy(targets, predictions)
correct_predictions = tf.count_nonzero(tf.equal(predictions, targets))
return losses, accuracy, correct_predictions
def train(model_fn, train_steps, model_dir):
    """Model trainer."""
    g = tf.Graph()
    with g.as_default():
        uniform_initializer = tf.random_uniform_initializer(minval=-0.08, maxval=0.08)
        with tf.variable_scope("Model", reuse=None, initializer=uniform_initializer):
            losses_train, _, _ = model_fn(TRAIN_BATCH_SIZE, padded_train_data, news_train_data.target, 'train')
        with tf.variable_scope("Model", reuse=True):
            _, accuracy, correct_predictions = model_fn(EVAL_BATCH_SIZE, padded_test_data, news_test_data.target, 'eval')
        tf.summary.scalar('accuracy', accuracy)
        tf.summary.scalar('losses', losses_train)
        merged = tf.summary.merge_all()
        global_step = tf.Variable(
            initial_value=0,
            name="global_step",
            trainable=False,
            collections=[tf.GraphKeys.GLOBAL_STEP, tf.GraphKeys.GLOBAL_VARIABLES])
        train_op = tf.contrib.layers.optimize_loss(
            loss=losses_train,
            global_step=global_step,
            learning_rate=0.001,
            optimizer='Adam')
        def train_step_fn(sess, *args, **kwargs):
            total_loss, should_stop = tf.contrib.slim.python.slim.learning.train_step(sess, *args, **kwargs)
            if train_step_fn.train_steps % 50 == 0:
                summary = sess.run(merged)
                train_step_fn.eval_writer.add_summary(summary, train_step_fn.train_steps)
                total_correct_predictions = 0
                num_eval_batches = len(padded_test_data) // EVAL_BATCH_SIZE
                for i in range(num_eval_batches):
                    total_correct_predictions += sess.run(correct_predictions)
                print('accuracy: %.4f' % (float(total_correct_predictions)/(num_eval_batches*EVAL_BATCH_SIZE)))
            train_step_fn.train_steps += 1
            return [total_loss, should_stop]
        train_step_fn.train_steps = 0
        train_step_fn.eval_writer = tf.summary.FileWriter(os.path.join(model_dir, 'eval'))
        tf.contrib.slim.learning.train(
            train_op,
            model_dir,
            graph=g,
            global_step=global_step,
            number_of_steps=train_steps,
            log_every_n_steps=50,
            train_step_fn=train_step_fn)
        train_step_fn.eval_writer.close()
# Start from fresh. Note that you can skip this step to continue training from previous checkpoint.
!rm -rf dnn
train(dnn_model, train_steps=1501, model_dir='dnn')
summary = Summary('dnn/eval')
summary.plot(['losses', 'accuracy'])
```
# Build a bidirectional LSTM model
Let's try an LSTM based sequential model and see if it can beat DNN.
```
LSTM_SIZE=128
def lstm_model(batch_size, train_data, targets, mode):
"""Build an LSTM Model. """
with tf.name_scope(mode):
raw_data = tf.convert_to_tensor(train_data, dtype=tf.int64)
targets = tf.convert_to_tensor(targets, dtype=tf.int64)
batch_num = len(train_data) // batch_size - 1
i = tf.train.range_input_producer(batch_num, shuffle=True).dequeue()
input_seqs = raw_data[i * batch_size: (i + 1) * batch_size]
targets = targets[i * batch_size: (i + 1) * batch_size]
length = tf.count_nonzero(input_seqs, axis=1, dtype=tf.int32)
embedding_map = tf.get_variable(
name="embeddings_map",
shape=[len(vocab_id), EMBEDDING_SIZE])
seq_embeddings = tf.nn.embedding_lookup(embedding_map, input_seqs)
# This section is different from DNN model function.
#===================================================
lstm_cellf = tf.contrib.rnn.BasicLSTMCell(num_units=LSTM_SIZE)
lstm_cellb = tf.contrib.rnn.BasicLSTMCell(num_units=LSTM_SIZE)
lstm_outputs, states = tf.nn.bidirectional_dynamic_rnn(
cell_fw=lstm_cellf,
cell_bw=lstm_cellb,
inputs=seq_embeddings,
dtype=tf.float32)
lstm_outputs = tf.concat(lstm_outputs, 2)
indices = tf.range(tf.shape(length)[0])
slices = tf.stack([indices, length-1], axis=1)
lstm_outputs = tf.gather_nd(lstm_outputs, indices=slices)
#===================================================
logits = tf.contrib.layers.fully_connected(
inputs=lstm_outputs,
num_outputs=20,
activation_fn=None)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=targets, logits=logits)
losses= tf.reduce_mean(cross_entropy, name='xentropy_mean')
predictions = tf.argmax(logits, 1)
_, accuracy = tf.contrib.metrics.streaming_accuracy(targets, predictions)
correct_predictions = tf.count_nonzero(tf.equal(predictions, targets))
return losses, accuracy, correct_predictions
# Start from fresh. Note that you can skip this step to continue training from previous checkpoint.
!rm -rf lstm
# We will use the same trainer. Note that the training steps is greater based on experiments and
# training time is much longer.
train(lstm_model, train_steps=1501, model_dir='lstm')
summary = Summary('lstm/eval')
summary.plot(['losses', 'accuracy'])
```
# Compare Model Performance
With DNN, we aggregate the word embeddings in each sentence. This appears more efficient and accurate. With LSTM, we have an overfitting problem: although both models produce about the same loss value towards the end of training, the accuracy on the eval data differs.
```
summary = Summary(['dnn/eval', 'lstm/eval'])
summary.plot(['losses', 'accuracy'])
```
Perhaps the results of the LSTM model would get closer to those of the DNN with more training data. It would probably also help to add a convolutional layer before the LSTM layer.
```
import OnePy as op
%matplotlib inline
```
# Introduction to Cleaners
```
from OnePy.sys_module.base_cleaner import CleanerBase
class SMA(CleanerBase):
    """
    To write your own cleaner, create a subclass of CleanerBase and
    override the calculate method.
    A data dict is provided by default, keyed as ticker_frequency, e.g. 000001_D.
    Each entry in self.data is itself a dict keyed by open, high, low,
    close and volume, and each event loop only keeps data of a fixed
    rolling_window length.
    """
    def calculate(self, ticker):
        key = f'{ticker}_{self.frequency}'
        close = self.data[key]['close']
        return sum(close)/len(close)
class SmaStrategy(op.StrategyBase):
    def __init__(self):
        super().__init__()
        self.sma1 = SMA(rolling_window=3,  # length of the data kept in the cleaner
                        buffer_day=40,  # length of preloaded data; values that are too small are adjusted automatically
                        frequency='D'  # data frequency used for the calculation; defaults to the system frequency
                        ).calculate
        self.sma2 = SMA(5, 40).calculate
    def handle_bar(self):
        for ticker in self.env.tickers:
            if self.sma1(ticker) > self.sma2(ticker):
                self.buy(100, ticker, takeprofit=15,
                         stoploss=100)
            else:
                self.sell(100, ticker)
TICKER_LIST = ['000001']
INITIAL_CASH = 20000
FREQUENCY = 'D'
START, END = '2018-03-01', '2018-10-01'
# Instantiate the strategy; it is added to env.strategies automatically, keyed by class name
SmaStrategy()
# Backtest against the MongoDB database
go = op.backtest.stock(TICKER_LIST, FREQUENCY, INITIAL_CASH, START, END)
go.sunny()
from OnePy.builtin_module.mongodb_saver.tushare_saver import multi_tushare_to_mongodb
from OnePy.builtin_module.mongodb_saver.utils import MongoDBFunc
# Save H1 data
FREQUENCY = ["H1"]  # note that Tushare's H1 data only starts from 2018
START = '2018-02-01'
TICKER_LIST = ['000001']
multi_tushare_to_mongodb(ticker_list=TICKER_LIST,
period_list=FREQUENCY,
fromdate=START)
MongoDBFunc().drop_duplicates(TICKER_LIST, FREQUENCY, 'tushare')
```
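The `SMA.calculate` method above is just an arithmetic mean over the rolling window. A standalone sketch of the same computation, with no OnePy dependency (`sma` is a helper introduced here for illustration):

```python
def sma(closes, window):
    """Simple moving average over the last `window` closes,
    mirroring the SMA cleaner's sum(close)/len(close)."""
    tail = closes[-window:]  # the cleaner only ever holds rolling_window bars
    return sum(tail) / len(tail)

print(sma([10.0, 11.0, 12.0, 13.0], 3))  # 12.0
```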
**Try backtesting with a cleaner at a different frequency, so that the SMA signal is computed from H1 data each time**
```
class SmaStrategy(op.StrategyBase):
def __init__(self):
super().__init__()
self.sma1 = SMA(rolling_window=3,
buffer_day= 10,
frequency='H1'
).calculate
self.sma2 = SMA(5, 10,'H1').calculate
def handle_bar(self):
for ticker in self.env.tickers:
if self.sma1(ticker) > self.sma2(ticker):
self.buy(100, ticker, takeprofit=15,
stoploss=100)
else:
self.sell(100, ticker)
TICKER_LIST = ['000001']
INITIAL_CASH = 20000
FREQUENCY = 'D'
START, END = '2018-04-01', '2018-10-01'
# Instantiate the strategy; it is added to env.strategies automatically
SmaStrategy()
# Backtest against the MongoDB database
go = op.backtest.stock(TICKER_LIST, FREQUENCY, INITIAL_CASH, START, END)
go.sunny(show_process=False)
go.output.show_setting()
```
## Introduction to Strategies
```
class SmaStrategy(op.StrategyBase):
def __init__(self):
        super().__init__()  # the parent class constructor must be called
def handle_bar(self):
pass
## Accessing account information
# go.env.recorder.holding_pnl.latest(ticker='000001', long_or_short='long')
# go.env.recorder.realized_pnl
# go.env.recorder.commission
# go.env.recorder.market_value
# go.env.recorder.margin
# go.env.recorder.position
# go.env.recorder.avg_price
# go.env.recorder.cash
# go.env.recorder.frozen_cash
# go.env.recorder.balance
## Function
# self.cur_price
# self.buy
# self.sell
# self.short
# self.cover
# self.cancel_pending
# self.cancel_tst
```
```
import numpy as np
np.random.seed(1)
# grAdapt
import grAdapt
from grAdapt.space.datatype import Float, Integer
from grAdapt.optimizer import AMSGrad, Adam, AMSGradBisection
from grAdapt.surrogate import GPRSlidingWindow, NoModel, NoGradient
from grAdapt.models import Sequential
# sklearn
# Import datasets, classifiers and performance metrics
from sklearn.metrics import log_loss
from sklearn import datasets, svm, metrics
from sklearn.model_selection import train_test_split
# The digits dataset
digits = datasets.load_digits()
# plot
import matplotlib.pyplot as plt
```
## 1. Load NIST Dataset
```
import os
plot_path = 'plots/'
if not os.path.exists(plot_path):
os.makedirs(plot_path)
# The digits dataset
digits = datasets.load_digits()
# To apply a classifier on this data, we need to flatten the image, to
# turn the data in a (samples, feature) matrix:
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
```
## 2. Fit SVM
```
# Create a classifier: a support vector classifier
classifier = svm.SVC(gamma=0.001, probability=True)
# Split data into train and test subsets
X_train, X_test, y_train, y_test = train_test_split(
data, digits.target, test_size=0.5, shuffle=False)
# We learn the digits on the first half of the digits
classifier.fit(X_train, y_train)
classifier.score(X_test, y_test)
```
## 3. Display a test image which has been rightfully classified
### 3.1 Setting target image
```
target_idx = 1
plt.imshow(X_test[target_idx].reshape(8, 8), cmap=plt.cm.binary)
y_test[target_idx]
classifier.predict(X_test[target_idx].reshape(1, -1)) == y_test[target_idx]
```
### 3.2 Setting target label
```
np.argmax(classifier.predict_log_proba(X_test[target_idx].reshape(1, -1)))
target_label = np.argsort(np.max(classifier.predict_proba(X_test[target_idx].reshape(1, -1)), axis=0))[-2]
target_label
```
## 4. Goal
We aim to perturb the image above such that it will be misclassified. We have two objectives: steer the image towards another label and, secondly, minimize the norm of the perturbation. We have to set bounds on our objective such that the created image is a valid image.
The image dataset has been normalized to 0-16. We now have to constrain the domain accordingly.
### 4.1 Define Black-Box
```
def categorical_distance(x1, x2):
if x1 == x2:
return 0
else:
return 1
def adversarial_examples(perturbation):
perturbated_image = (X_test[target_idx] + perturbation).reshape(1, -1)
pred_label_perturbated = classifier.predict(perturbated_image)
pred_label = classifier.predict(X_test[target_idx].reshape(1, -1))
# distance for categorical attributes
categorical_loss = 1 - categorical_distance(pred_label_perturbated, pred_label)
# minimal perturbation
perturbation_loss = np.linalg.norm(perturbation)/len(perturbation)
return categorical_loss + perturbation_loss
```
```
def adversarial_examples(perturbation):
    # Targeted variant, following "Intriguing properties of neural networks" (Szegedy et al.):
    # maximize the probability of the target label while keeping the perturbation small.
    perturbated_image = (X_test[target_idx] + perturbation).reshape(1, -1)
    loss = classifier.predict_proba(perturbated_image).T
    loss_target = -loss[target_label][0]
    perturbation_loss = np.linalg.norm(perturbation) / len(perturbation)
    return loss_target + perturbation_loss
```
```
upper_bounds = 16*np.ones(X_test[target_idx].shape[0]) - X_test[target_idx]  # max allowed positive perturbation per pixel
bounds = [Float(0, x) for x in upper_bounds]
```
### 4.2 grAdapt
Using `NoGradient` as the surrogate accelerates training for high-dimensional optimization problems (here, 64 dimensions). Only the escape functions are used to obtain the next point: the best point found so far is used as the mean, and surrounding points are then evaluated.
```
surrogate = NoGradient()
model = Sequential(surrogate=surrogate)
res = model.minimize(adversarial_examples, bounds, 1000)
```
#### 4.2.1 Plot Loss
```
plt.title('Loss')
plt.plot(res['y'], label='grAdapt: Training loss')
plt.legend(loc='upper right')
#plt.yscale('log')
plt.show()
plt.title('Loss')
plt.scatter(np.arange(len(res['y'])), res['y'], label='grAdapt: Training loss', s=1)
plt.legend(loc='upper right')
#plt.yscale('log')
plt.show()
```
#### 4.2.2 Plot original and perturbated image
```
perturbated_image = np.array(res['x_sol'] + X_test[target_idx], dtype=float)  # np.float is deprecated
perturbated_image
fig=plt.figure(figsize=(8, 8))
columns = 2
rows = 1
img = [X_test[target_idx], perturbated_image]
labels = ['Original label: ' + str(classifier.predict(X_test[target_idx].reshape(1, -1))[0]),
          'Perturbated label: ' + str(classifier.predict(perturbated_image.reshape(1, -1))[0])]
for i in range(1, columns*rows + 1):
    fig.add_subplot(rows, columns, i, title=labels[i-1])
    plt.imshow(img[i-1].reshape(8, 8), cmap=plt.cm.binary)
plt.show()
classifier.predict(perturbated_image.reshape(1, -1))
classifier.predict(perturbated_image.reshape(1, -1)) != classifier.predict(X_test[target_idx].reshape(1, -1))
```
The perturbed image has indeed been misclassified.
### 4.3 BFGS with scipy
```
import scipy
x0 = grAdapt.utils.sampling.sample_points_bounds(bounds, 1, random_state=1)
res_scipy = scipy.optimize.minimize(adversarial_examples, x0, bounds=bounds)
res_scipy
```
#### 4.3.1 Plot original and perturbated image
```
perturbated_image_scipy = np.round(res_scipy.x + X_test[target_idx])
fig=plt.figure(figsize=(8, 8))
columns = 2
rows = 1
img = [X_test[target_idx], perturbated_image_scipy]
labels = ['Original', 'Perturbated', ]
for i in range(1, columns*rows +1):
fig.add_subplot(rows, columns, i, title=labels[i-1])
plt.imshow(img[i-1].reshape(8, 8), cmap=plt.cm.binary)
plt.show()
classifier.predict(perturbated_image_scipy.reshape(1, -1))
res_scipy.fun
```
## 5. Conclusion
In this optimization formulation, BFGS struggles to find an adversarial example, whereas grAdapt is able to perturb the target image so that it is misclassified.
```
#
# Convolution Neural Network Image classifier
#
# @author becxer
# @email becxer87@gmail.com
# @reference https://github.com/sjchoi86/Tensorflow-101
#
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
print ("packages are loaded")
# Load npz data
npz_path = "images/MYIMG/my_img.npz"
load_data = np.load(npz_path)
print ("Load data : " + str(load_data.files))
train_img = load_data['train_img']
train_label = load_data['train_label']
valid_img = load_data['valid_img']
valid_label = load_data['valid_label']
test_img = load_data['test_img']
test_label = load_data['test_label']
print ("train_img shape : " + str(train_img.shape))
print ("valid_img shape : " + str(valid_img.shape))
print ("test_img shape : " + str(test_img.shape))
# Plot image
rand_idx = np.arange(train_img.shape[0])
np.random.shuffle(rand_idx)
rand_idx = rand_idx[:5]
for idx in rand_idx:
label = np.argmax(train_label[idx])
img = np.reshape(train_img[idx], (64,64))
plt.matshow(img,cmap=plt.get_cmap('gray'))
plt.colorbar()
plt.title("Label : " + str(label))
plt.show()
# Options for training
learning_rate = 0.001
training_epochs = 200
batch_size = 100
display_step = 10
# Options for Convolution
x_conv_shape = [-1,64,64,1] # input vectors are flattened 64x64 grayscale images; reshape for conv2d
n_conv_shapes = [[7,7,1,128],[3,3,128,128]]
n_conv_strides = [[1,1,1,1],[1,1,1,1]]
n_maxp_shapes = [[1,2,2,1],[1,2,2,1]]
n_maxp_strides = [[1,2,2,1],[1,2,2,1]]
# Options for Dense layer
x_dense_size = 16 * 16 * 128
n_dense = [128]
w_dev = 0.1
# Options for dropout
drop_out_ratio = 0.7
# Build Graph of Convolution Neural Network
# Define placeholder & Variables
x = tf.placeholder("float", [None, train_img.shape[1]])
y = tf.placeholder("float", [None, train_label.shape[1]])
drop_out_prob = tf.placeholder("float")
def one_cnn_layer(_x, _weight_C, _stride_C, _bias_C, _shape_MP, _stride_MP, _dop):
_conv1 = tf.nn.conv2d(_x, _weight_C, strides=_stride_C, padding='SAME')
_conv2 = tf.nn.batch_normalization(_conv1, 0.001, 1.0, 0, 1, 0.0001)
_conv3 = tf.nn.bias_add(_conv2, _bias_C)
_conv4 = tf.nn.relu(_conv3)
_pool = tf.nn.max_pool(_conv4, ksize=_shape_MP, strides=_stride_MP, padding='SAME')
_out = tf.nn.dropout(_pool, _dop)
return {'conv1':_conv1, 'conv2':_conv2, 'conv3':_conv3, 'conv4':_conv4, 'pool':_pool, 'out':_out}
def one_dense_layer(_x, _W, _b, _dop):
return tf.nn.dropout(tf.nn.relu(tf.add(tf.matmul(_x, _W),_b)), _dop)
WS = {}
BS = {}
last_input_layer = tf.reshape(x, shape = x_conv_shape)
for idx in range(len(n_conv_shapes)):
_weight_C = tf.Variable(tf.random_normal(n_conv_shapes[idx], stddev=w_dev))
_stride_C = n_conv_strides[idx]
_bias_C = tf.Variable(tf.random_normal([n_conv_shapes[idx][-1]], stddev=w_dev))
_shape_MP = n_maxp_shapes[idx]
_stride_MP = n_maxp_strides[idx]
layer = one_cnn_layer(last_input_layer, _weight_C, _stride_C, _bias_C, _shape_MP, _stride_MP, drop_out_prob)
last_input_layer = layer['out']
WS['wc_' + str(idx)] = _weight_C
BS['bc_' + str(idx)] = _bias_C
last_input_layer_size = x_dense_size
last_input_layer = tf.reshape(last_input_layer, [-1, x_dense_size])
for idx, hl_size in enumerate(n_dense):
_W = tf.Variable(tf.random_normal([last_input_layer_size, hl_size], stddev=w_dev))
_b = tf.Variable(tf.random_normal([hl_size]))
last_input_layer = one_dense_layer(last_input_layer, _W, _b, drop_out_prob)
last_input_layer_size = hl_size
WS['wd_' + str(idx)] = _W
BS['bd_' + str(idx)] = _b
WS['out'] = tf.Variable(tf.random_normal([last_input_layer_size, train_label.shape[1]], stddev=w_dev))
BS['out'] = tf.Variable(tf.random_normal([train_label.shape[1]], stddev=w_dev))
# Define operators
out = tf.add(tf.matmul(last_input_layer, WS['out']), BS['out'])
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=out, labels=y))
optm = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
pred = tf.argmax(out, 1)
accr = tf.reduce_mean(tf.cast(tf.equal(pred, tf.argmax(y, 1)),"float"))
init = tf.global_variables_initializer()  # tf.initialize_all_variables is deprecated
print ("Graph build")
# Training Graph
sess = tf.Session()
sess.run(init)
for epoch in range(training_epochs):
avg_cost = 0.
num_batch = int(train_img.shape[0]/batch_size)
for i in range(num_batch):
randidx = np.random.randint(train_img.shape[0], size=batch_size)
batch_xs = train_img[randidx, :]
batch_ys = train_label[randidx, :]
sess.run(optm, feed_dict={x: batch_xs, y: batch_ys, drop_out_prob : drop_out_ratio})
avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys, drop_out_prob : 1.})/num_batch
if epoch % display_step == 0:
train_img_acc = sess.run(accr , ({x: batch_xs, y: batch_ys, drop_out_prob : 1.}))
print ("epoch: %03d/%03d , cost: %.6f , train_img_acc: %.3f" \
% (epoch, training_epochs, avg_cost, train_img_acc))
test_batch_size = 10
avg_acc = 0.
num_batch_test = int(test_img.shape[0]/test_batch_size)
for i in range(num_batch_test):
batch_xs_test = test_img[i * test_batch_size : (i+1) * test_batch_size ]
batch_ys_test = test_label[i * test_batch_size : (i+1) * test_batch_size ]
avg_acc += sess.run(accr, feed_dict={x : batch_xs_test, y : batch_ys_test, drop_out_prob : 1.})
print ("Training complete, Accuracy : %.6f" \
% (avg_acc / num_batch_test,))
```
# Building Dense Vectors Using Transformers
We will be using the [`sentence-transformers/stsb-distilbert-base`](https://huggingface.co/sentence-transformers/stsb-distilbert-base) model to build our dense vectors.
```
from transformers import AutoTokenizer, AutoModel
import torch
```
First we initialize our model and tokenizer:
```
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/stsb-distilbert-base')
model = AutoModel.from_pretrained('sentence-transformers/stsb-distilbert-base')
```
Then we tokenize a sentence just as we have been doing before. Notice that the sentence is padded or truncated to a maximum length of 128 tokens; the model will produce a 768-value embedding for each token.
```
text = "hello world what a time to be alive!"
tokens = tokenizer.encode_plus(text, max_length=128,
truncation=True, padding='max_length',
return_tensors='pt')
```
We process these tokens through our model:
```
outputs = model(**tokens)
outputs
```
The dense vector representations of our `text` are contained within the `outputs` **'last_hidden_state'** tensor, which we access like so:
```
embeddings = outputs.last_hidden_state
embeddings
embeddings.shape
```
After we have produced our dense vectors `embeddings`, we need to perform a *mean pooling* operation on them to create a single vector encoding (the **sentence embedding**). To do this mean pooling operation we multiply each value in our `embeddings` tensor by its respective `attention_mask` value, so that we ignore the non-real (padding) tokens.
To perform this operation, we first resize our `attention_mask` tensor:
```
attention_mask = tokens['attention_mask']
attention_mask.shape
mask = attention_mask.unsqueeze(-1).expand(embeddings.size()).float()
mask.shape
attention_mask
mask[0][0].shape
mask
```
Each vector above represents a single token's attention mask: each token now has a 768-value vector representing its *attention_mask* status. Then we multiply the two tensors to apply the attention mask:
```
masked_embeddings = embeddings * mask
masked_embeddings.shape
masked_embeddings
```
Then we sum the masked embeddings along axis `1`:
```
summed = torch.sum(masked_embeddings, 1)
summed.shape
```
Then sum the number of values that must be given attention in each position of the tensor:
```
summed_mask = torch.clamp(mask.sum(1), min=1e-9)
summed_mask.shape
summed_mask
```
Finally, we calculate the mean as the sum of the embedding activations `summed` divided by the number of values that should be given attention in each position `summed_mask`:
```
mean_pooled = summed / summed_mask
mean_pooled
```
And that is how we calculate our dense similarity vector.
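The masked mean-pooling recipe above can be condensed into one helper function. Below is a minimal NumPy sketch on synthetic values (the real pipeline uses the PyTorch tensors shown above; the array shapes and `mean_pool` name are illustrative):

```python
import numpy as np

def mean_pool(embeddings, attention_mask):
    # embeddings: (batch, seq_len, dim); attention_mask: (batch, seq_len) of 0/1
    mask = attention_mask[..., None].astype(float)   # expand to (batch, seq_len, 1)
    summed = (embeddings * mask).sum(axis=1)         # sum embeddings of real tokens only
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # number of real tokens, clamped like torch.clamp
    return summed / counts                           # the sentence embedding

# toy check: two real tokens [1,1,1] and [3,3,3]; the padding token is ignored
emb = np.array([[[1., 1., 1.], [3., 3., 3.], [99., 99., 99.]]])
att = np.array([[1, 1, 0]])
print(mean_pool(emb, att))  # → [[2. 2. 2.]]
```

The padding token's large values never reach the result, which is exactly why the mask multiplication matters.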
# 2 Dead reckoning
*Dead reckoning* is a means of navigation that does not rely on external observations. Instead, a robot’s position is estimated by summing its incremental movements relative to a known starting point.
Estimates of the distance traversed are usually obtained from measuring how many times the wheels have turned, and how many times they have turned in relation to each other. For example, the wheels of the robot could be attached to an odometer, similar to the device that records the mileage of a car.
In RoboLab we will calculate the position of a robot from how long it moves in a straight line or rotates about its centre. We will assume that the length of time for which the motors are switched on is directly related to the distance travelled by the wheels.
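The position estimate described above can be sketched as a simple pose update: each motion segment contributes a displacement computed from speed and time, summed from a known starting point. This is a toy model for illustration, not part of the simulator API:

```python
import math

def dead_reckon(segments, x=0.0, y=0.0, heading=0.0):
    """Sum incremental movements relative to a known starting pose.

    segments: list of ('drive', speed, time) or ('turn', angle_radians) tuples.
    """
    for seg in segments:
        if seg[0] == 'drive':
            _, speed, t = seg
            dist = speed * t                 # distance ~ speed x time motors are on
            x += dist * math.cos(heading)
            y += dist * math.sin(heading)
        elif seg[0] == 'turn':
            heading += seg[1]                # rotate on the spot
    return x, y, heading

# drive 2 units, turn 90 degrees left, drive 3 units
print(dead_reckon([('drive', 1.0, 2.0), ('turn', math.pi / 2), ('drive', 1.0, 3.0)]))
```

With perfect motion the robot ends near (2, 3) facing "north"; the rest of this notebook explores what happens when the motion is not perfect.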
## 2.1 Activity – Dead reckoning
An environment for the simulated robot to navigate is shown below, based on the 2018 First Lego League ‘Into Orbit’ challenge.
The idea is that the robot must get to the target satellite from its original starting point by avoiding the obstacles in its direct path.

The [First Lego League (FLL)](https://www.firstlegoleague.org/) is a friendly international youth based robot competition in which teams compete at national and international level on an annual basis. School teams are often coached by volunteers. In the UK, volunteers often coach teams under the auspices of the [STEM Ambassadors Scheme](https://www.stem.org.uk/stem-ambassadors). Many companies run volunteering schemes that allow employees to volunteer their skills in company time using schemes such as STEM Ambassadors.
Load in the simulator in the usual way:
```
from nbev3devsim.load_nbev3devwidget import roboSim, eds
%load_ext nbev3devsim
```
To navigate the environment, we will use a small robot configuration within the simulator. The robot configuration can be set via the simulator user interface, or by passing the `-r Small_Robot` parameter setting in the simulator magic.
The following program should drive the robot from its starting point to the target, whilst avoiding the obstacles. We define the obstacle as being avoided if it is not crossed by the robot’s *pen down* trail.
Load the *FLL_2018_Into_Orbit* background into the simulator. Run the following code cell to download the program to the simulator and then, with the *Pen Down*, run the program in the simulator.
Remember, you can use the `-P / --pencolor` flag to change the pen color and the `-C / --clear` option to clear the pen trace.
Does the robot reach the target satellite without encountering any obstacles?
```
%%sim_magic_preloaded -b FLL_2018_Into_Orbit -p -r Small_Robot
# Turn on the spot to the right
tank_turn.on_for_rotations(100, SpeedPercent(70), 1.7 )
# Go forwards
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 20)
# Slight graceful turn to left
tank_drive.on_for_rotations(SpeedPercent(35), SpeedPercent(50), 8.5)
# Turn on the spot to the left
tank_turn.on_for_rotations(-100, SpeedPercent(75), 0.8)
# Forwards a bit
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 2.0)
#Turn on the spot a bit more to the right
tank_turn.on_for_rotations(100, SpeedPercent(60), 0.4 )
# Go forwards a bit more and dock on the satellite
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 1.0)
say("Hopefully I have docked with the satellite...")
```
*Add your notes on how well the simulated robot performed the task here.*
To set the speeds and times, I used a bit of trial and error.
If the route had been much more complex then I would have been tempted to comment out the steps up I had already run and add new steps that would be applied from wherever the robot was currently located.
Note that the robot could have taken other routes to get to the satellite – I just thought I should avoid the asteroid!
### 2.1.1 Using motor tacho counts to identify how far the robot has travelled
In the above example, the motors were turned on for a specific amount of time to move the robot on each leg of its journey. This would not be an appropriate control strategy if we wanted to collect sensor data along the route, because the `on_for_X()` motor commands are blocking commands.
However, suppose we replaced the forward driving `tank_drive.on_for_rotations()` commands with commands of the form:
```python
from time import sleep
tank_drive.on(SPEED)
while int(tank_drive.left_motor.position) < DISTANCE:
# We need something that takes a finite time
# to run in the loop or the program will hang
sleep(0.1)
```
Now we could drive the robot forwards until the motor tacho count exceeds a specified `DISTANCE` and at the same time, optionally include additional commands, such as sensor data logging commands, inside the body of each `while` loop.
*As well as `tank_drive.left_motor.position` we can also refer to `tank_drive.right_motor.position`. Also note that these values are returned as strings and need to be cast to integers for numerical comparisons.*
### 2.1.2 Activity — Dead reckoning over distances (optional)
Use the `.left_motor.position` and/or `.right_motor.position` motor tacho counts in a program that allows the robot to navigate from its home base to the satellite rendezvous.
*Your design notes here.*
```
# YOUR CODE HERE
```
*Your notes and observations here.*
## 2.2 Challenge – Reaching the moon base
In the following code cell, write a program to move the simulated robot from its location servicing the satellite to the moon base identified as the circular area marked on the moon in the top right-hand corner of the simulated world.
In the simulator, set the robot’s X location to `1250` and Y location `450`.
Use the following code cell to write your own dead reckoning program to drive the robot to the moon base at location `(2150, 950)`.
```
%%sim_magic_preloaded
# YOUR CODE HERE
```
## 2.3 Dead reckoning with noise
The robot traverses its path using timing information for dead reckoning. In principle, if the simulated robot had a map then it could calculate all the distances and directions for itself, convert these to times, and dead reckon its way to the target. However, there is a problem with dead reckoning: *noise*.
In many physical systems, a perfect intended behaviour is subject to *noise* – random perturbations that arise within the system as time goes on as a side effect of its operation. In a robot, noise might arise in the behaviour of the motors, the transmission or the wheels. The result is that the robot does not execute its motion without error. We can model noise effects in the mobility system of our robot by adding a small amount of noise to the motor speeds as the simulator runs. This noise component may speed up or slow down the speed of each motor, in a random way. As with real systems, the noise represents slight random deviations from the theoretical, ideal behaviour.
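The effect of wheel noise on dead reckoning can be illustrated outside the simulator: perturb each wheel's speed at every time step of a differential-drive model and compare end positions across runs. This is a toy model of our own devising, not the simulator's internal noise implementation:

```python
import numpy as np

def drive_straight(noise_std, steps=200, dt=0.05, speed=1.0, track=0.2, seed=None):
    """A differential-drive robot told to go straight; each wheel speed gets random noise."""
    rng = np.random.default_rng(seed)
    x = y = heading = 0.0
    for _ in range(steps):
        v_left = speed + rng.normal(0, noise_std)
        v_right = speed + rng.normal(0, noise_std)
        v = (v_left + v_right) / 2        # forward speed
        w = (v_right - v_left) / track    # wheel mismatch turns the robot
        x += v * np.cos(heading) * dt
        y += v * np.sin(heading) * dt
        heading += w * dt
    return x, y

print(drive_straight(0.0, seed=0))  # noise-free: straight line along x
print(drive_straight(0.3, seed=0))  # noisy: drifts off the line
print(drive_straight(0.3, seed=1))  # same noise level, different path
```

Just as in the simulator experiments below, each noisy run ends somewhere different even though the commanded motion is identical.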
For the following experiment, create a new, empty background cleared of pen traces.
```
%sim_magic -b Empty_Map --clear
```
Run the following code cell to download the program to the simulator using an empty background (select the *Empty_Map*) and the *Pen Down* mode selected. Also reset the initial location of the robot to an X value of `150` and Y value of `400`.
Run the program in the simulator and observe what happens.
```
%%sim_magic_preloaded -b Empty_Map -p -x 150 -y 400 -r Small_Robot --noisecontrols
tank_drive.on_for_rotations(SpeedPercent(30),
SpeedPercent(30), 10)
```
*Record your observations here describing what happens when you run the program.*
When you run the program, you should see the robot drive forwards a short way in a straight line, leaving a straight line trail behind it.
Reset the location of the robot. Within the simulator, use the *Noise controls* to increase the *Wheel noise* value from zero by dragging the slider to the right a little way. Alternatively, add noise in the range `0..500` using the `--motornoise / -M` magic flag.
Run the program in the simulator again.
You should notice this time that the robot does not travel in a straight line. Instead, it drifts from side to side, although possibly to one side of the line.
Move the robot back to the start position, or rerun the previous code cell to do so, and run the program in the simulator again. This time, you should see it follows yet another different path.
Depending on how severe the noise setting is, the robot will travel closer (low noise) to the original straight line, or follow an ever-more erratic path (high noise).
*Record your own notes and observations here describing the behaviour of the robot for different levels of motor noise.*
Clear the pen traces from the simulator by running the following line magic:
```
%sim_magic -C
```
Now run the original satellite-finding dead reckoning program again, using the *FLL_2018_Into_Orbit* background, but in the presence of *Wheel noise*. How well does it perform this time compared to previously?
```
%%sim_magic_preloaded -b FLL_2018_Into_Orbit -p -r Small_Robot
# Turn on the spot to the right
tank_turn.on_for_rotations(100, SpeedPercent(70), 1.7 )
# Go forwards
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 20)
# Slight graceful turn to left
tank_drive.on_for_rotations(SpeedPercent(35), SpeedPercent(50), 8.5)
# Turn on the spot to the left
tank_turn.on_for_rotations(-100, SpeedPercent(75), 0.8)
# Forwards a bit
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 2.0)
#Turn on the spot a bit more to the right
tank_turn.on_for_rotations(100, SpeedPercent(60), 0.4 )
# Go forwards a bit more and dock on the satellite
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 1.0)
say("Did I avoid crashing and dock with the satellite?")
```
Reset the robot to its original location and run the program in the simulator again. Even with the same level of motor noise as on the previous run, how does the path followed by the robot this time compare with the previous run?
*Add your own notes and observations here.*
## 2.4 Summary
In this notebook you have seen how we can use dead reckoning to move the robot along a specified path. Using the robot's motor speeds, and by monitoring how long the motors are switched on for, we can use distance-time calculations to estimate the robot's path. If we add in accurate measurements of how far we want the robot to travel, and in what direction, this provides one way of helping the robot navigate to a particular waypoint.
However, in the presence of noise, this approach is likely to be very unreliable: whilst the robot may think it is following one path, as determined by how long it has turned its motors on, and at what speed, it may in fact be following another. In a real robot, noise may be introduced in all sorts of ways, including friction in the motor bearings, the time taken to accelerate from a standing start and get up to speed, and loss-of-traction effects such as wheel spin and slip as the robot's wheels turn.
Whilst in some cases it may reach the target safely, in others it may end somewhere completely different, or encounter an obstacle along the way.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20In%20Practice/Course%204%20-%20S%2BP/S%2BP%20Week%204%20Lesson%201.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.shuffle(shuffle_buffer)
ds = ds.map(lambda w: (w[:-1], w[1:]))
return ds.batch(batch_size).prefetch(1)
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 30
train_set = windowed_dataset(x_train, window_size, batch_size=128, shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 200)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
#batch_size = 16
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=3,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 200)
])
optimizer = tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset,epochs=500)
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
```
# Best-practices for Cloud-Optimized Geotiffs
**Part 4. Dask GatewayCluster**
Unlike LocalCluster, a Dask GatewayCluster gives us the ability to dynamically increase our CPU and RAM across many machines! This is extremely powerful, because now we can load very large datasets into RAM for efficient calculations. There is a complication in that we are now running computations on many physical machines instead of just one, so network communication is more challenging and the Dask workers likely don't have access to your local files. When COGs are stored on S3, though, we can access them from any machine!
```
import xarray as xr
import s3fs
import pandas as pd
import os
import dask
from dask.distributed import Client, progress
from dask_gateway import Gateway
# use the same GDAL environment settings as we did for the single COG case
env = dict(GDAL_DISABLE_READDIR_ON_OPEN='EMPTY_DIR',
AWS_NO_SIGN_REQUEST='YES',
GDAL_MAX_RAW_BLOCK_CACHE_SIZE='200000000',
GDAL_SWATH_SIZE='200000000',
VSI_CURL_CACHE_SIZE='200000000')
os.environ.update(env)
```
### Dask GatewayCluster
Dask Gateway allows us to connect to a Kubernetes cluster so that we can go beyond the RAM and CPU of a single machine. It can take several minutes for these machines to initialize on the Cloud, so be patient when starting a cluster.
```
# dask gateway allow us to connect to a Kubernetes Cluster so that we can go beyond the RAM and CPU of a single machine
# NOTE: we have to explicitly pass local environment variables to the cluster now
# By default each worker has 2 cores and 4GB memory and effectively runs as a separate process
# NOTE: how to deal with cores vs threads in a gateway cluster?
gateway = Gateway()
options = gateway.cluster_options()
options.environment = env
cluster = gateway.new_cluster(options)
cluster.scale(4) # let's get the same number of "workers" as our previous LocalCluster examples
# The dashboard link can also be pasted into the dask lab-extension
cluster
# NOTE: just like with a LocalCluster, it's good to explicitly connect to our GatewayCluster
client = Client(cluster)
# the dashboard link works just like a localcluster
client
options
# Make sure that your dask workers see GDAL environment variables
def get_env(env):
import os
return os.environ.get(env)
print(client.run(get_env, 'GDAL_DISABLE_READDIR_ON_OPEN'))
%%time
s3 = s3fs.S3FileSystem(anon=True)
objects = s3.glob('sentinel-s1-rtc-indigo/tiles/RTC/1/IW/10/T/ET/**Gamma0_VV.tif')
images = ['s3://' + obj for obj in objects]
print(len(images))
images.sort(key=lambda x: x[-32:-24]) #sort list in place by date in filename
# Let's use first 100 images for simplicity
images = images[:100]
dates = [pd.to_datetime(x[-32:-24]) for x in images]
@dask.delayed
def lazy_open(href):
chunks=dict(band=1, x=2745, y=2745)
return xr.open_rasterio(href, chunks=chunks)
%%time
# ~6.5 s
dataArrays = dask.compute(*[lazy_open(href) for href in images])
da = xr.concat(dataArrays, dim='band', join='override', combine_attrs='drop').rename(band='time')
da['time'] = dates
da
%%time
# 41 s
da.mean(dim=['x','y']).compute()
%%time
# 44 s
# just like with a LocalCluster this workflow requires pulling (nCOGS x chunk size) into worker RAM to get mean through time for each chunk (3GB)
# Now we have 4GB per worker (and we can adjust this via cluster settings)
da.mean(dim='time').compute()
```
### Visualization
Using hvplot like we've done before will utilize the dask cluster as you request to plot each image
```
import hvplot.xarray
da.hvplot.image(rasterize=True,
aspect='equal', frame_width=500,
cmap='gray', clim=(0,0.4))
```
# Stacked Bidirectional LSTM Sentiment Classifier
In this notebook, we use a *stacked* bidirectional LSTM to classify IMDB movie reviews by sentiment.
[](https://colab.research.google.com/github/rickiepark/dl-illustrated/blob/master/notebooks/11-7.stacked_bi_lstm_sentiment_classifier.ipynb)
#### Load dependencies
```
from tensorflow import keras
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Embedding, SpatialDropout1D, LSTM
from tensorflow.keras.layers import Bidirectional
from tensorflow.keras.callbacks import ModelCheckpoint
import os
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
%matplotlib inline
```
#### Set Hyperparameters
```
# output directory
output_dir = 'model_output/stackedLSTM'
# training
epochs = 4
batch_size = 128
# vector-space embedding
n_dim = 64
n_unique_words = 10000
max_review_length = 200
pad_type = trunc_type = 'pre'
drop_embed = 0.2
# LSTM layer architecture
n_lstm_1 = 64 # lower
n_lstm_2 = 64 # new!
drop_lstm = 0.2
```
#### Load Data
```
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words) # n_words_to_skip removed
```
#### Preprocess Data
```
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
```
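As a sanity check on the padding settings, here is a minimal pure-Python sketch of what `pad_sequences` does with `padding='pre'` and `truncating='pre'` (an illustration, not the Keras implementation):

```
def pre_pad(seq, maxlen, value=0):
    # mimic pad_sequences(padding='pre', truncating='pre') for one sequence
    if len(seq) >= maxlen:
        return seq[-maxlen:]  # keep the most recent tokens
    return [value] * (maxlen - len(seq)) + seq

pre_pad([5, 6, 7], 5)           # [0, 0, 5, 6, 7]
pre_pad([1, 2, 3, 4, 5, 6], 5)  # [2, 3, 4, 5, 6]
```

Left-padding keeps the end of each review adjacent to the recurrent layers' final time steps, which is why `'pre'` is a common default for RNNs.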
#### Design Neural Network Architecture
```
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(SpatialDropout1D(drop_embed))
model.add(Bidirectional(LSTM(n_lstm_1, dropout=drop_lstm,
return_sequences=True)))
model.add(Bidirectional(LSTM(n_lstm_2, dropout=drop_lstm)))
model.add(Dense(1, activation='sigmoid'))
# the LSTM layers' parameter counts double because backpropagation runs in both directions
model.summary()
```
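The doubled parameter count from bidirectionality can be checked by hand: a Keras LSTM layer has 4·(u·(u + d) + u) weights for input dimension d and u units, and the bidirectional wrapper trains two such layers. A quick back-of-the-envelope sketch (the result should match the first bidirectional layer's count in `model.summary()`):

```
def lstm_params(d, u):
    # 4 gates, each with input weights, recurrent weights, and a bias
    return 4 * (u * (u + d) + u)

one_direction = lstm_params(64, 64)  # n_dim=64 inputs, n_lstm_1=64 units -> 33024
bidirectional = 2 * one_direction    # 66048
```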
#### Configure Model
```
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
```
#### Train!
```
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
```
#### Evaluate
```
model.load_weights(output_dir+"/weights.02.hdf5")
y_hat = model.predict(x_valid)
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
"{:0.2f}".format(roc_auc_score(y_valid, y_hat)*100.0)
```
| github_jupyter |
```
import pandas as pd
df = pd.read_csv('/Users/pbhagwat/DEV/CohortAnalysis/Cohort-Analysis/Data/Telco-Customer-Churn.csv')
pd.set_option('display.max_columns', 100)
df.head()
dummies = pd.get_dummies(
df[['gender', 'SeniorCitizen', 'Partner', 'Dependents', 'tenure', 'PhoneService', 'MultipleLines',
'InternetService', 'OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport',
'StreamingTV',
'StreamingMovies', 'Contract', 'PaperlessBilling', 'PaymentMethod', 'Churn']]
)
dummies = dummies[['gender_Female', 'Partner_Yes', 'Dependents_Yes', 'PhoneService_Yes', 'MultipleLines_Yes', 'InternetService_DSL', 'InternetService_Fiber optic', 'OnlineSecurity_Yes', 'DeviceProtection_Yes', 'TechSupport_Yes', 'StreamingTV_Yes', 'StreamingMovies_Yes', 'Contract_One year', 'Contract_Two year', 'PaperlessBilling_Yes', 'PaymentMethod_Bank transfer (automatic)', 'PaymentMethod_Credit card (automatic)', 'PaymentMethod_Electronic check','Churn_Yes']]
data = dummies.join(df[['customerID','MonthlyCharges', 'TotalCharges','tenure']])
data.set_index('customerID', inplace=True)
data['TotalCharges'] = data[['TotalCharges']].replace([' '], '0')
data['TotalCharges'] = pd.to_numeric(data['TotalCharges'])
data.head()
from lifelines import CoxPHFitter
from sklearn.model_selection import train_test_split
x_select = ['gender_Female', 'Partner_Yes', 'Dependents_Yes', 'PhoneService_Yes', 'MultipleLines_Yes', 'InternetService_DSL', 'InternetService_Fiber optic', 'OnlineSecurity_Yes', 'DeviceProtection_Yes', 'TechSupport_Yes', 'StreamingTV_Yes', 'StreamingMovies_Yes', 'Contract_One year', 'Contract_Two year', 'PaperlessBilling_Yes', 'PaymentMethod_Bank transfer (automatic)', 'PaymentMethod_Credit card (automatic)', 'PaymentMethod_Electronic check','MonthlyCharges', 'TotalCharges','tenure', 'Churn_Yes']
temp_x_select = ['gender_Female','Contract_One year','Contract_Two year','PaymentMethod_Bank transfer (automatic)','PaymentMethod_Credit card (automatic)','PaymentMethod_Electronic check','InternetService_DSL', 'InternetService_Fiber optic','MonthlyCharges', 'TotalCharges', 'tenure', 'Churn_Yes']
cph_train, cph_test = train_test_split(data[x_select], test_size=0.2)
cph = CoxPHFitter()
cph.fit(cph_train, 'tenure', 'Churn_Yes')
cph.print_summary()
cph.plot()
cph.plot_partial_effects_on_outcome('TotalCharges',values=[0,4000], cmap='coolwarm').set_xlabel('tenure period')
```
## Churn Prediction
### Next step is to derive some insights and to make predictions of the existing customer behaviour
```
# a censored observation is one that has yet to have an 'event', i.e. customers who are yet to churn
censored_subjects = data.loc[data['Churn_Yes']==0]
print("Number of customers yet to churn:", len(censored_subjects))
# predict_survival_function() creates a matrix containing a survival probability for each remaining customer
# we call it the 'unconditioned' survival function because some of these curves predict churn before the customer's current tenure
# row index => tenure period; column index => data index where Churn_Yes=0
unconditioned_sf = cph.predict_survival_function(censored_subjects)
unconditioned_sf.head()
# We have to condition the prediction on the fact that these customers were still with us when the data was collected
# c.name => row index of the data where Churn_Yes=0
# data.loc[c.name, 'tenure'] => tenure value at index c.name in the original data
# dividing each curve by its value at the customer's current tenure (and clipping at 1) encodes that the customer is known to have survived at least that long
conditioned_sf = unconditioned_sf.apply(lambda c:(c/c.loc[data.loc[c.name, 'tenure']]).clip(upper=1))
conditioned_sf.head()
import matplotlib.pyplot as plt
customer = '1452-KIOVK'
df.loc[df['customerID'] == customer]
# investigate individual customers and see how the conditioning has affected their survival over the base line
plt.figure(figsize=(10, 5))
subject = customer
unconditioned_sf[subject].plot(ls="--", color="#A60628", label="unconditioned")
conditioned_sf[subject].plot(color="#A60628", label=("conditioned on $T>%s$" % data.loc[subject]['tenure'])) # conditioning on T > tenure reflects that the customer is still active at their observed tenure
plt.xlabel('tenure period')
plt.legend()
```
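The conditioning above is just the survival-analysis identity S(t | T > t0) = S(t) / S(t0), clipped at 1. A minimal NumPy sketch with toy numbers (not the model's actual curves):

```
import numpy as np

surv = np.array([1.0, 0.9, 0.7, 0.5, 0.3])  # unconditioned S(t) at t = 0..4
t0 = 2                                       # customer already survived to t = 2
conditioned = np.clip(surv / surv[t0], a_min=None, a_max=1.0)
# conditioned is 1.0 up through t0, then 0.5/0.7 and 0.3/0.7 afterwards
```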
### Getting actionable insights out of the model
```
from lifelines.utils import median_survival_times, qth_survival_times
# Predict the month number where the survival chance of customer is 50%
# This can also be modified as predictions_50 = qth_survival_times(.50, conditioned_sf), where the percentile can be modified depending on our requirement
percentile = 0.5
predictions_50 = qth_survival_times(percentile, conditioned_sf)
# predictions_10 = qth_survival_times(.10, conditioned_sf) #This provides the month where survival chance of customer is 10%
predictions_50
predictions_50[[customer]]
# Investigate the predicted remaining value that a customer has for the business
values = predictions_50.T.join(data[['MonthlyCharges','tenure']])
values['RemainingValue'] = values['MonthlyCharges'] * (values[0.5] - values['tenure']) # With this we can predict which customers might inflict the highest damage to the business
values.loc[[customer]]
```
## Churn prevention - What can we do to keep them?
```
# From the coefficient chart we concluded that these 4 features (Contract_Two year, Contract_One year, PaymentMethod_Credit card (automatic), PaymentMethod_Bank transfer (automatic))
# positively promote survival chances, so let's focus on those: flip the existing values and see the resulting survival chances
upgrades = ['PaymentMethod_Credit card (automatic)', 'PaymentMethod_Bank transfer (automatic)', 'Contract_One year', 'Contract_Two year']
results_dict = {}
# Run this for all the customers
actual = data.loc[[customer]]
change = data.loc[[customer]]
results_dict[customer] = [cph.predict_median(actual)]
for upgrade in upgrades:
change[upgrade] = 1 if list(change[upgrade]) == [0] else 0
results_dict[customer].append(cph.predict_median(change))
change[upgrade] = 1 if list(change[upgrade]) == [0] else 0
results_dict
result_df = pd.DataFrame(results_dict).T
result_df.columns = ['baseline'] + upgrades
actions = values.join(result_df).drop([0.5], axis=1)
data.loc[[customer],upgrades]
# Notice that if we get this customer to pay by credit card we increase the predicted survival period, e.g. customer '5575-GNVDE' goes from 46 months (baseline) to 51 (PaymentMethod_Credit card (automatic)), and so on
# Note: customer 5575-GNVDE already had Contract_One year; after reverting it, the predicted survival drops from 46 to 37
actions.loc[[customer]]
```
##### Calculate what impact it has financially
```
actions['CreditCard Diff'] = (actions['PaymentMethod_Credit card (automatic)'] - actions['baseline']) * actions['MonthlyCharges']
actions['BankTransfer Diff'] = (actions['PaymentMethod_Bank transfer (automatic)'] - actions['baseline']) * actions['MonthlyCharges']
actions['1yrContract Diff'] = (actions['Contract_One year'] - actions['baseline']) * actions['MonthlyCharges']
actions['2yrContract Diff'] = (actions['Contract_Two year'] - actions['baseline']) * actions['MonthlyCharges']
actions.loc[[customer]]
```
### Accuracy and Calibration
#### Calibration is the propensity of the model to produce predicted probabilities that match observed churn frequencies over time
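The Brier score used below is the mean squared difference between predicted probabilities and binary outcomes; lower is better, and 0.25 is the score of an uninformative constant 0.5 prediction. A toy hand computation (illustrative values only):

```
def brier(y_true, y_prob):
    # mean squared difference between outcome (0/1) and predicted probability
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

brier([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.3])  # 0.0375
```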
```
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss
import numpy as np
cph_test.head()
plt.figure(figsize=(10, 10))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
probs = 1-np.array(cph.predict_survival_function(cph_test).loc[13]) # here tenure=13
actual = cph_test['Churn_Yes']
fraction_of_positives, mean_predicted_value = calibration_curve(actual, probs, n_bins=10, normalize=False)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-", label="%s" % ("CoxPH"))
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.set_xlabel("mean_predicted_value")
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
# To understand how far away the line is from the perfect calibration we use brier_score_loss
brier_score_loss(cph_test['Churn_Yes'], 1-np.array(cph.predict_survival_function(cph_test).loc[13]), pos_label=1)
# Inspect the calibration of the model at all the time periods (above one is just for tenure=13)
loss_dict = {}
for i in range(1,73):
score=brier_score_loss(cph_test['Churn_Yes'], 1-np.array(cph.predict_survival_function(cph_test).loc[i]), pos_label=1)
loss_dict[i] = [score]
loss_df = pd.DataFrame(loss_dict).T
fig, ax = plt.subplots()
ax.plot(loss_df.index, loss_df.values)
ax.set(xlabel='Prediction Time', ylabel='Calibration Loss', title='Cox PH Model Calibration Loss / Time')
ax.grid()
# Here we can see that the model is well calibrated between 5 and 25 months
plt.show()
# upper and lower bounds for the expected return on investment from getting customers to make changes
loss_df.columns = ['loss']
temp_df = actions.reset_index().set_index('PaymentMethod_Credit card (automatic)').join(loss_df)
temp_df = temp_df.set_index('index')
actions['CreditCard Lower'] = temp_df['CreditCard Diff'] - (temp_df['loss'] * temp_df['CreditCard Diff'])
actions['CreditCard Upper'] = temp_df['CreditCard Diff'] + (temp_df['loss'] * temp_df['CreditCard Diff'])
temp_df = actions.reset_index().set_index('PaymentMethod_Bank transfer (automatic)').join(loss_df)
temp_df = temp_df.set_index('index')
actions['BankTransfer Lower'] = temp_df['BankTransfer Diff'] - (.5 * temp_df['loss'] * temp_df['BankTransfer Diff'])
actions['BankTransfer Upper'] = temp_df['BankTransfer Diff'] + (.5 * temp_df['loss'] * temp_df['BankTransfer Diff'])
temp_df = actions.reset_index().set_index('Contract_One year').join(loss_df)
temp_df = temp_df.set_index('index')
actions['1yrContract Lower'] = temp_df['1yrContract Diff'] - (.5 * temp_df['loss'] * temp_df['1yrContract Diff'])
actions['1yrContract Upper'] = temp_df['1yrContract Diff'] + (.5 * temp_df['loss'] * temp_df['1yrContract Diff'])
temp_df = actions.reset_index().set_index('Contract_Two year').join(loss_df)
temp_df = temp_df.set_index('index')
actions['2yrContract Lower'] = temp_df['2yrContract Diff'] - (.5 * temp_df['loss'] * temp_df['2yrContract Diff'])
actions['2yrContract Upper'] = temp_df['2yrContract Diff'] + (.5 * temp_df['loss'] * temp_df['2yrContract Diff'])
actions.loc[[customer]]
```
| github_jupyter |
# Import Libraries
```
#from __future__ import print_function
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from datetime import datetime
from matplotlib import pyplot
from math import sqrt
from numpy import concatenate
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import mean_squared_error
from keras.layers import Dense, LSTM
from keras.models import Sequential
```
# Preprocessing
```
inputFile = 'data.csv'
def parse(x):
#function for parsing data into required format
return datetime.strptime(x, '%Y %m %d %H')
df = read_csv(inputFile,
parse_dates = [['year', 'month', 'day', 'hour']],
index_col=0,
date_parser=parse)
df.head()
df.drop('No', axis=1, inplace=True)
df.head()
# manually specify column names
df.columns = ['pollution', 'dew', 'temp', 'press', 'wnd_dir', 'wnd_spd', 'snow', 'rain']
df.index.name = 'date'
df.head()
# mark all NA values with 0
df['pollution'].fillna(0, inplace=True)
df.head()
# drop the first 24 hours as it has 0 value
df = df[24:]
df.head()
def visualize(df,groups):
    # plot columns defined in "groups" from the input file
values = df.values
i = 1
# plot each column
pyplot.figure()
for group in groups:
pyplot.subplot(len(groups), 1, i)
pyplot.plot(values[:, group])
pyplot.title(df.columns[group], y=0.5, loc='right')
i += 1
pyplot.show()
#call function for visualizing data
visualize(df,[0, 1, 2, 3, 5, 6, 7])
def convert_timeseries(data, n_in=1, n_out=1, dropnan=True):
    #convert time-series data into supervised (t-n ... t-1, t) form
    #n_in defines how many previous time steps should be taken into consideration
n_vars = 1 if type(data) is list else data.shape[1]
df = DataFrame(data)
cols, names = list(), list()
# input sequence (t-n, ... t-1)
for i in range(n_in, 0, -1):
cols.append(df.shift(i))
names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
# forecast sequence (t, t+1, ... t+n)
for i in range(0, n_out):
cols.append(df.shift(-i))
if i == 0:
names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
else:
names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
# put it all together
agg = concat(cols, axis=1)
agg.columns = names
# drop rows with NaN values
if dropnan:
agg.dropna(inplace=True)
return agg
```
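Before applying it to the full dataset, it helps to see what this shift-based framing produces on toy data. A minimal pandas sketch of the same idea (not a call to `convert_timeseries` itself):

```
import pandas as pd

s = pd.DataFrame({'var1': [10, 20, 30, 40]})
framed = pd.concat([s.shift(1), s], axis=1).dropna()
framed.columns = ['var1(t-1)', 'var1(t)']
# each row pairs a value with its predecessor: (10, 20), (20, 30), (30, 40)
```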
## Actual code starts here
```
# load dataset
values = df.values
print(values)
# encode direction into integer
encoder = LabelEncoder()
values[:,4] = encoder.fit_transform(values[:,4])
print(values[:,4])
# ensure all data is float
values = values.astype('float32')
print(values)
# normalize features
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
print(scaled)
# frame as supervised learning
reframed = convert_timeseries(scaled, 1, 1)
print(reframed.head())
# drop columns we don't want to predict
# need to change this if we change N or change dataset
reframed.drop(reframed.columns[[9,10,11,12,13,14,15]], axis=1, inplace=True)
print(reframed.head())
```
## Splitting Dataset
```
# split into train and test sets
values = reframed.values
print(values)
n_train_hours = 365 * 24 #1 year
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
print('train shape: ', train.shape)
print('test shape: ',test.shape)
# split into input and outputs
train_X= train[:, :-1]
train_y= train[:, -1]
test_X= test[:, :-1]
test_y= test[:, -1]
print('train_X shape: ', train_X.shape)
print('test_X shape: ',test_X.shape)
print('train_y shape: ', train_y.shape)
print('test_y shape: ',test_y.shape)
print(train_X)
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
```
## Create the LSTM Model
```
# design network
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
# fit network
history = model.fit(train_X,
train_y,
epochs=50,
batch_size=72,
validation_data=(test_X, test_y),
verbose=2,
shuffle=False)
# plot history
pyplot.plot(history.history['loss'], label='Training Loss')
pyplot.plot(history.history['val_loss'], label='Validation Loss')
pyplot.legend()
pyplot.show()
```
## Make a Prediction
```
# make a prediction
yhat = model.predict(test_X)
test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
print(yhat)
print(yhat.shape)
print(test_X)
print(test_X.shape)
# invert scaling for forecast to revert data into original form
inv_yhat = concatenate((yhat, test_X[:, 1:]), axis=1)
print(inv_yhat)
inv_yhat = scaler.inverse_transform(inv_yhat)
print(inv_yhat)
# Actual input
inv_xp=inv_yhat[:,1:]
#predicted output
inv_yhat = inv_yhat[:,0]
print('inv_xp: ', inv_xp)
print('inv_yhat: ', inv_yhat)
# invert scaling for actual
test_y = test_y.reshape((len(test_y), 1))
inv_y = concatenate((test_y, test_X[:, 1:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)
#Actual output
inv_y = inv_y[:,0]
print("Actual Input:")
print(inv_xp)
print("Actual Output:")
print(inv_y)
#predicted output will be offset by 1
print("Predicted Output:")
print(inv_yhat)
# calculate RMSE
rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
print('Test RMSE: %.3f' % rmse)
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
```
The Fourier transform receives a signal, applies its filters to find the appropriate cycles, and outputs the full set of cyclic components. There are two algorithms:
- the Discrete Fourier Transform (DFT), which requires $O(n^2)$ operations (for $n$ samples)
- the Fast Fourier Transform (FFT), which requires $O(n \log n)$ operations
## DFT
\begin{equation}
X_k = \sum_{n=0}^{N-1} x_n e^{-i 2 \pi k n / N}
\end{equation}
\begin{equation}
x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{i 2 \pi k n / N}
\end{equation}
where:
- $X_k$ amount of frequency $k$ in the signal; each $k$th value is a complex number encoding strength (amplitude) and phase shift
- $N$ number of samples
- $n$ current sample, $n\in{0\cdots N−1}$
- $k$ current frequency, between $0$ Hz to $N-1$ Hz
- $1/N$ not strictly necessary, but it recovers the actual amplitudes of the time-domain spikes
- $n/N$ is the percent of the time we’ve gone through
- $2\pi{k}$ the speed in radians/second
- $e^{-ix}$ the backwards-moving circular path. These last three terms tell how far we've moved, for this speed and time.
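A quick sanity check of the definition, using NumPy's FFT as a reference: a pure sine with $k$ cycles over the window should produce a spike in bin $k$ (and its mirror in bin $N-k$), with magnitude $N/2$.

```
import numpy as np

N = 64
n = np.arange(N)
x = np.sin(2 * np.pi * 5 * n / N)  # 5 cycles over N samples
X = np.fft.fft(x)
peak = int(np.argmax(np.abs(X[:N // 2])))  # peak == 5, magnitude N/2
```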
```
# step by step dft
def dft_k(x, k):
N = len(x)
return sum((x[n]*np.e**(-1j*2*np.pi*k*n/N) for n in range(N)))
# step by step idft
def idft_n(X, n):
    N = len(X)
return sum((1/N * X[k] * np.e**(1j*2*np.pi*k*n/N) for k in range(N)))
def remove_frequencies(Xn, N, fs, fre_low, fre_high):
# remove specific frequencies
Yn = np.copy(Xn)
fre_low_n = fre_low * N // fs
fre_high_n = fre_high * N // fs
    # remove the two symmetric frequency bands
Yn[fre_low_n:fre_high_n] = 0
Yn[N-fre_high_n:N-fre_low_n] = 0
return Yn
N = 1000
fs = 10000
T = 1/fs
t = np.linspace(0, N * T, N)
x = np.sin(2*np.pi*50*t) + 2 * np.sin(2*np.pi*150*t) + 0.5 * np.sin(2*np.pi*1000*t)
# dft: analysis
Xn = np.zeros(N, dtype=complex)  # np.complex is deprecated; use the builtin complex
for i in range(N):
X = dft_k(x, i)
Xn[i] = X
# filter: remove specific requencies
Yn = remove_frequencies(Xn, N, fs, 800, 1200)
# idft: synthesis
Re = np.zeros(N, dtype=complex)
for i in range(N):
Re[i] = idft_n(Yn, i)
# one side frequency range
xf = np.linspace(0.0, 1.0/(2.0*T), N//2)
xn = 2.0/N * np.abs(Xn[0:N//2])
yn = 2.0/N * np.abs(Yn[0:N//2])
#------------------------------------------------------------
# Set up the plots
fig = plt.figure(figsize=(10, 10))
fig.subplots_adjust(left=0.09, bottom=0.09, right=0.95, top=0.95,
hspace=0.05, wspace=0.05)
#----------------------------------------
# plot the original signal
ax1 = fig.add_subplot(221)
ax1.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)
ax1.plot(t, x, '-k', label=r'data $D(x)$')
plt.setp(ax1.get_xticklabels(), visible=False)
#----------------------------------------
# plot the dft and area to remove
ax2 = fig.add_subplot(222)
ax2.plot(xf, xn, '-k')
ax2.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)
ax2.axvspan(800, 1200, facecolor='#b7b7b7', alpha=0.5)
ax2.yaxis.tick_right()
plt.setp(ax2.get_xticklabels(), visible=False)
#----------------------------------------
# plot the left frequencies
ax3 = fig.add_subplot(224, sharex=ax2)
ax3.plot(xf, yn, '-k')
ax3.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)
ax3.yaxis.tick_right()
#----------------------------------------
# plot the filtered signal
ax4 = fig.add_subplot(223, sharex=ax1)
ax4.plot(t, Re.real, '-k')
ax4.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)
#------------------------------------------------------------
# Plot flow arrows
ax = fig.add_axes([0, 0, 1, 1], xticks=[], yticks=[], frameon=False)
arrowprops = dict(arrowstyle="simple",
color="#333333", alpha=0.5,
shrinkA=5, shrinkB=5,
patchA=None,
patchB=None,
connectionstyle="arc3,rad=-0.35")
ax.annotate('', [0.57, 0.57], [0.47, 0.57],
arrowprops=arrowprops,
transform=ax.transAxes)
ax.annotate('', [0.57, 0.47], [0.57, 0.57],
arrowprops=arrowprops,
transform=ax.transAxes)
ax.annotate('', [0.47, 0.47], [0.57, 0.47],
arrowprops=arrowprops,
transform=ax.transAxes)
plt.show()
```
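The band-stop filtering above can be cross-checked end to end against NumPy's FFT. This self-contained sketch uses bin-aligned frequencies built with `np.arange` (the `np.linspace` grid above includes the endpoint, so its components are not perfectly bin-aligned): zeroing the two symmetric bins of the 1000 Hz component should leave exactly the 50 Hz sine.

```
import numpy as np

N, fs = 1000, 10000
t = np.arange(N) / fs
sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
X = np.fft.fft(sig)
k = 1000 * N // fs          # bin index of the 1000 Hz component
X[k] = X[N - k] = 0         # zero both symmetric bins
filtered = np.fft.ifft(X).real
err = np.max(np.abs(filtered - np.sin(2 * np.pi * 50 * t)))  # ~1e-13
```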
## Reference:
- [Fourier Transform: A R Tutorial](http://www.di.fc.ul.pt/~jpn/r/fourier/fourier.html)
- [An Interactive Guide To The Fourier Transform](https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/)
- [An Interactive Introduction to Fourier Transforms](http://www.jezzamon.com/fourier/index.html)
| github_jupyter |
# cyBERT: a flexible log parser based on the BERT language model
## Table of Contents
* Introduction
* Generating Labeled Logs
* Subword Tokenization
* Data Loading
* Fine-tuning pretrained BERT
* Model Evaluation
* Parsing with cyBERT
## Introduction
One of the most arduous tasks of any security operation (and equally time-consuming for a data scientist) is ETL and parsing. This notebook illustrates how to train a BERT language model using a toy dataset of just 1000 previously parsed Apache server logs as labeled data. We will fine-tune a pretrained BERT model from [HuggingFace](https://github.com/huggingface) with a classification layer for Named Entity Recognition.
```
import torch
from torch.optim import Adam
from torch.utils.data import TensorDataset, DataLoader
from torch.utils.data.dataset import random_split
from torch.utils.dlpack import from_dlpack
import torch.nn.functional as F
from seqeval.metrics import classification_report,accuracy_score,f1_score
from tqdm import tqdm,trange
from collections import defaultdict
import pandas as pd
import numpy as np
import s3fs
from os import path
import cupy
import cudf
from transformers import BertForTokenClassification
```
## Generating Labels For Our Training Dataset
To train our model we begin with a dataframe containing parsed logs and an additional `raw` column containing the whole raw log as a string. We will use the column names as our labels.
```
# download log data
APACHE_SAMPLE_CSV = "apache_sample_1k.csv"
S3_BASE_PATH = "rapidsai-data/cyber/clx"
if not path.exists(APACHE_SAMPLE_CSV):
fs = s3fs.S3FileSystem(anon=True)
fs.get(S3_BASE_PATH + "/" + APACHE_SAMPLE_CSV, APACHE_SAMPLE_CSV)
logs_df = cudf.read_csv(APACHE_SAMPLE_CSV)
# sample parsed log
logs_df.sample(1)
# sample raw log
print(logs_df.raw.loc[10])
def labeler(index_no, cols):
"""
label the words in the raw log with the column name from the parsed log
"""
raw_split = logs_df.raw_preprocess[index_no].split()
# words in raw but not in parsed logs labeled as 'other'
label_list = ['other'] * len(raw_split)
# for each parsed column find the location of the sequence of words (sublist) in the raw log
for col in cols:
if str(logs_df[col][index_no]) not in {'','-','None','NaN'}:
sublist = str(logs_df[col][index_no]).split()
sublist_len=len(sublist)
match_count = 0
for ind in (i for i,el in enumerate(raw_split) if el==sublist[0]):
# words in raw log not present in the parsed log will be labeled with 'other'
if (match_count < 1) and (raw_split[ind:ind+sublist_len]==sublist) and (label_list[ind:ind+sublist_len] == ['other'] * sublist_len):
label_list[ind:ind+sublist_len] = [col] * sublist_len
match_count = 1
return label_list
logs_df['raw_preprocess'] = logs_df.raw.str.replace('"','')
logs_df_small = logs_df.sample(10)
# column names to use as labels
cols = logs_df.columns.values.tolist()
# do not use raw columns as labels
cols.remove('raw')
cols.remove('raw_preprocess')
# label with a for loop until string UDF capability lands in RAPIDS; it is currently slow
labels = []
for indx in range(len(logs_df_small)):
labels.append(labeler(indx, cols))
print(labels[10])
```
## Subword Labeling
We are using the `bert-base-cased` tokenizer vocabulary. This tokenizer further splits our whitespace-separated words into in-vocabulary sub-word pieces. The model eventually uses the label from the first piece of a word as the sole label for the word, so we do not care about the model's ability to predict individual labels for the sub-word pieces. For training, the label used for these pieces is `X`. To learn more see the [BERT paper](https://arxiv.org/abs/1810.04805)
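The labeling rule itself is simple to state in pure Python: the word's label goes to the first sub-word piece and every additional piece gets `X` (a sketch of the scheme, independent of the cudf tokenizer used below):

```
def expand_labels(word_labels, pieces_per_word):
    # first sub-word piece keeps the word label; extra pieces are tagged 'X'
    out = []
    for label, n_pieces in zip(word_labels, pieces_per_word):
        out.append(label)
        out.extend(['X'] * (n_pieces - 1))
    return out

expand_labels(['url', 'other'], [3, 1])  # ['url', 'X', 'X', 'other']
```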
```
def subword_labeler(log_list, label_list):
"""
label all subword pieces in tokenized log with an 'X'
"""
subword_labels = []
for log, tags in zip(log_list,label_list):
temp_tags = []
words = cudf.Series(log.split())
words_size = len(words)
subword_counts = words.str.subword_tokenize("data/bert-cased-vocab-hash.txt", 10000, 10000,\
max_num_strings=words_size,max_num_chars=10000,\
max_rows_tensor=words_size,\
do_lower=False, do_truncate=False)[2].reshape(words_size, 3)[:,2]
for i, tag in enumerate(tags):
temp_tags.append(tag)
temp_tags.extend('X'* subword_counts[i].item())
subword_labels.append(temp_tags)
return subword_labels, subword_counts
subword_labels = subword_labeler(logs_df.iloc[0:].raw_preprocess.to_arrow().to_pylist(), labels)
logs_df.raw_preprocess.to_arrow().to_pylist()[0]
print(subword_labels[10])
```
We create a set list of all labels from our dataset, add `X` for wordpiece tokens we will not have tags for and `[PAD]` for logs shorter than the length of the model's embedding.
```
# set of labels
label_values = list(set(x for l in labels for x in l))
label_values[:0] = ['[PAD]']
label_values.append('X')
# Set a dict for mapping id to tag name
label2idx = {t: i for i, t in enumerate(label_values)}
print(label2idx)
def pad(l, content, width):
l.extend([content] * (width - len(l)))
return l
padded_labels = [pad(x[:256], '[PAD]', 256) for x in subword_labels]
int_labels = [[label2idx.get(l) for l in lab] for lab in padded_labels]
label_tensor = torch.tensor(int_labels).to('cuda')
```
# Training and Validation Datasets
For training and validation our datasets need three features:
1. `input_ids`: subword tokens as integers, padded to the model's sequence length
2. `attention_mask`: a binary mask that lets the model ignore padding
3. `labels`: the corresponding integer labels for the tokens
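The attention mask is the simplest of these: 1 for real tokens, 0 for padding. A pure-Python sketch (toy token IDs, assuming right-padding to the model length):

```
def pad_and_mask(token_ids, max_len, pad_id=0):
    # right-pad to max_len; mark real tokens with 1 and padding with 0
    n = min(len(token_ids), max_len)
    ids = token_ids[:n] + [pad_id] * (max_len - n)
    mask = [1] * n + [0] * (max_len - n)
    return ids, mask

pad_and_mask([101, 7592, 102], 6)  # ([101, 7592, 102, 0, 0, 0], [1, 1, 1, 0, 0, 0])
```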
```
def bert_cased_tokenizer(strings):
"""
    converts a cudf.Series of strings to two torch tensors: token IDs and an attention mask, with padding
"""
num_strings = len(strings)
num_bytes = strings.str.byte_count().sum()
token_ids, mask = strings.str.subword_tokenize("data/bert-cased-vocab-hash.txt", 256, 256,
max_num_strings=num_strings,
max_num_chars=num_bytes,
max_rows_tensor=num_strings,
do_lower=False, do_truncate=True)[:2]
# convert from cupy to torch tensor using dlpack
input_ids = from_dlpack(
token_ids
.reshape(num_strings,256)
.astype(cupy.float)
.toDlpack()
)
attention_mask = from_dlpack(
mask
.reshape(num_strings,256)
.astype(cupy.float)
.toDlpack()
)
return input_ids.type(torch.long), attention_mask.type(torch.long)
input_ids, attention_masks = bert_cased_tokenizer(logs_df.raw_preprocess)
num_strings = len(logs_df.raw_preprocess)
num_bytes = logs_df.raw_preprocess.str.byte_count().sum()
logs_df.raw_preprocess.str.subword_tokenize("data/bert-cased-vocab-hash.txt", 256, 256,
max_num_strings=num_strings,
max_num_chars=num_bytes,
max_rows_tensor=num_strings,
do_lower=False, do_truncate=True)[:2][0].reshape(num_strings, 256).shape
len(logs_df)
# create dataset
dataset = TensorDataset(input_ids, attention_masks, label_tensor)
# use pytorch random_split to create training and validation data subsets
dataset_size = len(input_ids)
training_dataset, validation_dataset = random_split(dataset, (int(dataset_size*.8), int(dataset_size*.2)))
# create dataloader
train_dataloader = DataLoader(dataset=training_dataset, shuffle=True, batch_size=32)
val_dataloader = DataLoader(dataset=validation_dataset, shuffle=False, batch_size=1)
```
# Fine-tuning pretrained BERT
Download pretrained model from HuggingFace and move to GPU
```
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=len(label2idx))
# model to gpu
model.cuda();
```
Define optimizer and learning rate for training
```
FULL_FINETUNING = True
if FULL_FINETUNING:
#fine tune all layer parameters
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
else:
# only fine tune classifier parameters
param_optimizer = list(model.classifier.named_parameters())
optimizer_grouped_parameters = [{"params": [p for n, p in param_optimizer]}]
optimizer = Adam(optimizer_grouped_parameters, lr=3e-5)
%%time
# using 2 epochs to avoid overfitting
epochs = 2
max_grad_norm = 1.0
for _ in trange(epochs, desc="Epoch"):
# TRAIN loop
model.train()
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
for step, batch in enumerate(train_dataloader):
b_input_ids, b_input_mask, b_labels = batch
# forward pass
loss, scores = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
# backward pass
loss.backward()
# track train loss
tr_loss += loss.item()
nb_tr_examples += b_input_ids.size(0)
nb_tr_steps += 1
# gradient clipping
torch.nn.utils.clip_grad_norm_(parameters=model.parameters(), max_norm=max_grad_norm)
# update parameters
optimizer.step()
model.zero_grad()
# print train loss per epoch
print("Train loss: {}".format(tr_loss/nb_tr_steps))
```
## Model Evaluation
```
# no dropout or batch norm during eval
model.eval();
# Mapping index to name
idx2label={label2idx[key] : key for key in label2idx.keys()}
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
y_true = []
y_pred = []
for step, batch in enumerate(val_dataloader):
input_ids, input_mask, label_ids = batch
with torch.no_grad():
outputs = model(input_ids, token_type_ids=None,
attention_mask=input_mask,)
# For eval mode, the first result of outputs is logits
logits = outputs[0]
# Get NER predicted result
logits = torch.argmax(F.log_softmax(logits,dim=2),dim=2)
logits = logits.detach().cpu().numpy()
# Get NER true result
label_ids = label_ids.detach().cpu().numpy()
# Only evaluate against the ground truth; positions with mask=0 are padding and are not scored
input_mask = input_mask.detach().cpu().numpy()
# Compare the valuable predict result
for i,mask in enumerate(input_mask):
# ground truth
temp_1 = []
# Prediction
temp_2 = []
for j, m in enumerate(mask):
# Mask=0 is PAD, do not compare
if m: # Exclude the X label
if idx2label[label_ids[i][j]] != "X" and idx2label[label_ids[i][j]] != "[PAD]":
temp_1.append(idx2label[label_ids[i][j]])
temp_2.append(idx2label[logits[i][j]])
else:
break
y_true.append(temp_1)
y_pred.append(temp_2)
print("f1 score: %f"%(f1_score(y_true, y_pred)))
print("Accuracy score: %f"%(accuracy_score(y_true, y_pred)))
# Get acc , recall, F1 result report
print(classification_report(y_true, y_pred,digits=4))
```
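The nested loop above drops padded positions and auxiliary labels before scoring. A tiny self-contained illustration of that filtering (toy tags and mask, not real model output):

```python
idx2label = {0: "[PAD]", 1: "O", 2: "B-PER", 3: "X"}

label_ids = [2, 1, 3, 0, 0]   # ground-truth indices for one sentence
pred_ids  = [2, 1, 1, 1, 2]   # predicted indices
mask      = [1, 1, 1, 0, 0]   # 1 = real token, 0 = padding

y_true, y_pred = [], []
for m, t, p in zip(mask, label_ids, pred_ids):
    if not m:                            # padding: stop comparing
        break
    if idx2label[t] in ("X", "[PAD]"):   # skip auxiliary labels
        continue
    y_true.append(idx2label[t])
    y_pred.append(idx2label[p])

print(y_true)  # ['B-PER', 'O']
print(y_pred)  # ['B-PER', 'O']
```

Only the two real, non-auxiliary tokens survive, so the metrics above are computed over exactly the labels that matter.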
## Saving model files for future parsing with cyBERT
```
# model weights
#torch.save(model.state_dict(), 'path/to/save.pth')

# label map (use a separate text file, not the weights path)
#with open('path/to/labels.txt', mode="wt") as output:
#    for label in idx2label.values():
#        output.write(label + "\n")
```
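The commented-out snippet writes one label per line; reading the file back in the same order restores the index-to-label mapping. A hedged, self-contained sketch of that round trip using a temporary file (the paths and names are illustrative):

```python
import os
import tempfile

idx2label = {0: "O", 1: "B-PER", 2: "I-PER"}

# write one label per line, in index order
path = os.path.join(tempfile.mkdtemp(), "labels.txt")
with open(path, "wt") as output:
    for idx in sorted(idx2label):
        output.write(idx2label[idx] + "\n")

# read it back: line number == label index
with open(path) as f:
    restored = {i: line.strip() for i, line in enumerate(f)}

print(restored == idx2label)  # True
```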
```
# Imports
from datetime import datetime, timedelta
from Database import db
import numpy as np
import pickle
import os
import re
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook
from keras.optimizers import RMSprop
from keras.models import Sequential, load_model, Model
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Input, concatenate, SpatialDropout1D, GRU
from keras.layers import Dense, Flatten, Embedding, LSTM, Activation, BatchNormalization, Dropout, Conv1D, MaxPooling1D
from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint, TensorBoard
import keras.backend as K
from keras.utils import plot_model
# Options
stocks = ['AAPL', 'AMD', 'AMZN', 'GOOG', 'MSFT', 'INTC']
all_sources = ['reddit', 'reuters', 'twitter', 'seekingalpha', 'fool', 'wsj', 'thestreet']
sample_size = 5
tick_window = 30
max_length = 600
vocab_size = None # Set by tokenizer
emb_size = 300
model_type = 'multireg'
epochs = 100
batch_size = 64
test_cutoff = datetime(2018, 2, 14)
def add_time(date, days):
    return (date + timedelta(days=days)).strftime('%Y-%m-%d')

def clean(sentence):
    sentence = sentence.lower()
    sentence = sentence.replace('-', ' ').replace('_', ' ').replace('&', ' ')
    sentence = re.sub(r'\$?\d+%?\w?', 'numbertoken', sentence)
    sentence = ''.join(c for c in sentence if c in "abcdefghijklmnopqrstuvwxyz ")
    sentence = re.sub(r'\s+', ' ', sentence)
    return sentence.strip()
def make_headline_to_effect_data():
    """
    Headline -> Effect

    Creates essentially the X, Y data for the embedding model to use
    when analyzing/encoding headlines. Returns a list of headlines and
    a list of corresponding 'effects' which represent a change in the stock price.
    """
    all_headlines, all_tick_hist, all_effects, test_indexes = [], [], [], []

    with db() as (conn, cur):
        for stock in stocks:
            ## Headline For Every Date ##
            cur.execute("SELECT DISTINCT date FROM headlines WHERE stock=? ORDER BY date ASC LIMIT 1", [stock])
            start_date = cur.fetchall()[0][0]

            cur.execute("SELECT DISTINCT date FROM ticks WHERE stock=? AND date >= ? ORDER BY date ASC", [stock, start_date])
            dates = [date[0] for date in cur.fetchall()]

            for date in tqdm_notebook(dates, desc=stock):
                ## Collect Headlines ##
                event_date = datetime.strptime(date, '%Y-%m-%d')

                cur.execute("SELECT date, source, rawcontent FROM headlines WHERE stock=? AND date BETWEEN ? AND ? ORDER BY date DESC",
                            [stock, add_time(event_date, -14), date])
                headlines = [(date, source, clean(content), (event_date - datetime.strptime(date, '%Y-%m-%d')).days)
                             for (date, source, content) in cur.fetchall() if content]

                if len(headlines) < sample_size:
                    continue

                ## Find corresponding tick data ##
                cur.execute("""SELECT open, high, low, adjclose, volume FROM ticks WHERE stock=? AND date BETWEEN ? AND ? ORDER BY date DESC""",
                            [stock,
                             add_time(event_date, -30 - tick_window),
                             add_time(event_date, 0)])
                before_headline_ticks = cur.fetchall()[:tick_window]

                if len(before_headline_ticks) != tick_window:
                    continue

                cur.execute("""SELECT AVG(adjclose) FROM ticks WHERE stock=? AND date BETWEEN ? AND ? ORDER BY date""",
                            [stock,
                             add_time(event_date, 1),
                             add_time(event_date, 4)])
                after_headline_ticks = cur.fetchall()

                if len(after_headline_ticks) == 0:
                    continue

                previous_tick = before_headline_ticks[0][3]
                result_tick = after_headline_ticks[0][0]

                if not previous_tick or not result_tick:
                    continue

                tick_hist = np.array(before_headline_ticks)
                tick_hist -= np.mean(tick_hist, axis=0)
                tick_hist /= np.std(tick_hist, axis=0)

                ## Create training example ##
                probs = [1 / (headline[3] + 1) for headline in headlines]
                probs /= np.sum(probs)

                contents = [headline[2] for headline in headlines]
                num_samples = len(contents) // sample_size

                effect = [(result_tick - previous_tick) / previous_tick / 0.023]

                for i in range(num_samples):
                    indexes = np.random.choice(np.arange(len(headlines)), sample_size, replace=False, p=probs)
                    sample = [headlines[i] for i in indexes]

                    if event_date > test_cutoff:  # Mark as Test Example
                        test_indexes.append(len(all_headlines))

                    all_headlines.append(sample)
                    all_tick_hist.append(tick_hist)
                    all_effects.append(effect)

    return all_headlines, np.array(all_tick_hist), np.array(all_effects), np.array(test_indexes)
def encode_sentences(headlines, tokenizer=None, max_length=100):
    """
    Encoder

    Takes a list of headlines and converts them into vectors
    """
    ## Encoding Sentences

    sentences = []

    for example in headlines:
        sentences.append(" ".join([data[2] for data in example]))  # Merge headlines into one long headline

    if not tokenizer:
        tokenizer = Tokenizer(filters='', lower=False)  # Already PreProcessed
        tokenizer.fit_on_texts(sentences)

    encoded_headlines = tokenizer.texts_to_sequences(sentences)
    padded_headlines = pad_sequences(encoded_headlines, maxlen=max_length, padding='post')

    ## Encoding Meta Data
    # TODO

    return padded_headlines, tokenizer
def split_data(X, X2, Y, test_indexes):
    """
    Splits X/Y to Train/Test
    """
    indexes = np.arange(X.shape[0])
    np.random.shuffle(indexes)

    train_indexes = np.setdiff1d(indexes, test_indexes, assume_unique=True)

    trainX, testX = X[train_indexes], X[test_indexes]
    trainX2, testX2 = X2[train_indexes], X2[test_indexes]
    trainY, testY = Y[train_indexes], Y[test_indexes]

    return trainX, trainX2, trainY, testX, testX2, testY
def get_embedding_matrix(tokenizer, pretrained_file='glove.840B.300d.txt'):
    embedding_matrix = np.zeros((vocab_size + 1, emb_size))

    if not pretrained_file:
        return embedding_matrix, None

    ## Load Glove File (Super Slow) ##
    glove_db = dict()

    with open(os.path.join('..', 'data', pretrained_file), 'r', encoding="utf-8") as glove:
        for line in tqdm_notebook(glove, desc='Glove', total=2196017):
            values = line.split(' ')
            word = values[0].replace('-', '').lower()
            coefs = np.asarray(values[1:], dtype='float32')
            glove_db[word] = coefs

    ## Set Embeddings ##
    for word, i in tokenizer.word_index.items():
        embedding_vector = glove_db.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector

    return embedding_matrix, glove_db
def correct_sign_acc(y_true, y_pred):
    """
    Accuracy of Being Positive or Negative
    """
    diff = K.equal(y_true > 0, y_pred > 0)
    return K.mean(diff, axis=-1)
def get_model(emb_matrix):

    ## Headline ##
    headline_input = Input(shape=(max_length,), name="headlines")

    emb = Embedding(vocab_size + 1, emb_size, input_length=max_length, weights=[emb_matrix], trainable=True)(headline_input)
    emb = SpatialDropout1D(.2)(emb)

    # (TODO) MERGE META WITH EMBEDDINGS

    text_rnn = LSTM(300, dropout=0.3, recurrent_dropout=0.3, return_sequences=True)(emb)
    text_rnn = Activation('selu')(text_rnn)
    text_rnn = BatchNormalization()(text_rnn)

    text_rnn = LSTM(300, dropout=0.3, recurrent_dropout=0.3, return_sequences=False)(text_rnn)
    text_rnn = Activation('selu')(text_rnn)
    text_rnn = BatchNormalization()(text_rnn)

    ## Ticks ##
    tick_input = Input(shape=(tick_window, 5), name="stockticks")

    tick_conv = Conv1D(filters=64, kernel_size=5, padding='same', activation='selu')(tick_input)
    tick_conv = MaxPooling1D(pool_size=2)(tick_conv)

    tick_rnn = LSTM(200, dropout=0.3, recurrent_dropout=0.3, return_sequences=False)(tick_conv)
    tick_rnn = Activation('selu')(tick_rnn)
    tick_rnn = BatchNormalization()(tick_rnn)

    ## Combined ##
    merged = concatenate([text_rnn, tick_rnn])

    final_dense = Dense(400)(merged)
    final_dense = Activation('selu')(final_dense)
    final_dense = BatchNormalization()(final_dense)
    final_dense = Dropout(0.5)(final_dense)

    final_dense = Dense(200)(final_dense)  # stack on the previous dense block rather than on `merged`
    final_dense = Activation('selu')(final_dense)
    final_dense = BatchNormalization()(final_dense)
    final_dense = Dropout(0.5)(final_dense)

    pred_dense = Dense(1)(final_dense)
    out = pred_dense

    model = Model(inputs=[headline_input, tick_input], outputs=out)

    model.compile(optimizer=RMSprop(lr=0.001), loss='mse', metrics=[correct_sign_acc])

    return model
if __name__ == "__main__":

    headlines, tick_hists, effects, test_indexes = make_headline_to_effect_data()
    encoded_headlines, toke = encode_sentences(headlines, max_length=max_length)

    vocab_size = len(toke.word_counts)
    emb_matrix, glove_db = get_embedding_matrix(toke)

    trainX, trainX2, trainY, testX, testX2, testY = split_data(encoded_headlines, tick_hists, effects, test_indexes)

    print(trainX.shape, trainX2.shape, testY.shape)
# TRAIN MODEL

if __name__ == "__main__":

    ## Save Tokenizer ##
    with open(os.path.join('..', 'models', 'toke2-tick.pkl'), 'wb') as toke_file:
        pickle.dump(toke, toke_file, protocol=pickle.HIGHEST_PROTOCOL)

    ## Create Model ##
    model = get_model(emb_matrix)

    monitor_mode = 'correct_sign_acc'

    tensorboard = TensorBoard(log_dir="logs/{}".format(datetime.now().strftime("%Y,%m,%d-%H,%M,%S,tick," + model_type)))
    e_stopping = EarlyStopping(monitor='val_loss', patience=50)
    checkpoint = ModelCheckpoint(os.path.join('..', 'models', 'media-headlines-ticks-' + model_type + '.h5'),
                                 monitor=monitor_mode,
                                 verbose=0,
                                 save_best_only=True)

    plot_model(model, to_file='model.png', show_shapes=True)

    ## Train ##
    history = model.fit([trainX, trainX2],
                        trainY,
                        epochs=epochs,
                        batch_size=batch_size,
                        validation_data=([testX, testX2], testY),
                        verbose=0,
                        callbacks=[e_stopping, checkpoint, tensorboard])

    ## Display Train History ##
    plt.plot(np.log(history.history['loss']))
    plt.plot(np.log(history.history['val_loss']))
    plt.legend(['LogTrainLoss', 'LogTestLoss'])
    plt.show()

    plt.plot(history.history[monitor_mode])
    plt.plot(history.history['val_' + monitor_mode])
    plt.legend(['TrainAcc', 'TestAcc'])
    plt.show()
# Predict (TEST)

def predict(stock, model=None, toke=None, current_date=None, predict_date=None):

    import keras.metrics
    keras.metrics.correct_sign_acc = correct_sign_acc

    if not model or not toke:
        with open(os.path.join('..', 'models', 'toke2-tick.pkl'), 'rb') as toke_file:
            toke = pickle.load(toke_file)
        model = load_model(os.path.join('..', 'models', 'media-headlines-ticks-' + model_type + '.h5'))

    vocab_size = len(toke.word_counts)

    if not current_date:
        current_date = datetime.today()

    if not predict_date:
        predict_date = current_date + timedelta(days=1)

    all_headlines, all_tick_hist = [], []

    with db() as (conn, cur):
        event_date = current_date
        date = datetime.strftime(event_date, '%Y-%m-%d')

        cur.execute("SELECT date, source, rawcontent FROM headlines WHERE stock=? AND date BETWEEN ? AND ? ORDER BY date DESC",
                    [stock, add_time(event_date, -14), date])
        headlines = [(date, source, clean(content), (event_date - datetime.strptime(date, '%Y-%m-%d')).days)
                     for (date, source, content) in cur.fetchall() if content]

        ## Find corresponding tick data ##
        cur.execute("""SELECT open, high, low, adjclose, volume FROM ticks WHERE stock=? AND date BETWEEN ? AND ? ORDER BY date DESC""",
                    [stock,
                     add_time(event_date, -30 - tick_window),
                     add_time(event_date, 0)])
        before_headline_ticks = cur.fetchall()[:tick_window]

        actual_current = before_headline_ticks[0][3]

        tick_hist = np.array(before_headline_ticks)
        tick_hist -= np.mean(tick_hist, axis=0)
        tick_hist /= np.std(tick_hist, axis=0)

        ## Create prediction samples ##
        probs = [1 / (headline[3] + 1) for headline in headlines]
        probs /= np.sum(probs)

        contents = [headline[2] for headline in headlines]
        num_samples = len(contents) // sample_size

        for i in range(num_samples):
            indexes = np.random.choice(np.arange(len(headlines)), sample_size, replace=False, p=probs)
            sample = [headlines[i] for i in indexes]

            all_headlines.append(sample)
            all_tick_hist.append(tick_hist)

    ## Process ##
    encoded_headlines, toke = encode_sentences(all_headlines, tokenizer=toke, max_length=max_length)
    tick_hists = np.array(all_tick_hist)

    predictions = model.predict([encoded_headlines, tick_hists])[:, 0]
    prices = predictions * 0.023 * actual_current + actual_current

    return predictions, prices
# [TEST] Spot Testing

if __name__ == "__main__":

    ## **This Test May Overlap w/Train Data** ##

    ## Options ##
    stock = 'INTC'
    current_date = '2018-03-07'
    predict_date = '2018-03-08'

    ## Run ##
    predictions, prices = predict(stock,
                                  current_date=datetime.strptime(current_date, '%Y-%m-%d'),
                                  predict_date=datetime.strptime(predict_date, '%Y-%m-%d'))

    ## Find Actual Value ##
    with db() as (conn, cur):
        cur.execute("""SELECT adjclose FROM ticks WHERE stock=? AND date BETWEEN ? AND ? ORDER BY date ASC LIMIT 1""",
                    [stock,
                     add_time(datetime.strptime(predict_date, '%Y-%m-%d'), 0),
                     add_time(datetime.strptime(predict_date, '%Y-%m-%d'), 6)])
        after_headline_ticks = cur.fetchall()

    try:
        actual_result = after_headline_ticks[0][0]
    except IndexError:
        actual_result = -1

    ## Display ##
    parse = lambda num: str(round(num, 2))

    print("Predicting Change Coef: " + parse(np.mean(predictions)))
    print("Predicting Price: " + parse(np.mean(prices)))
    print("Actual Price: " + parse(actual_result))
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/GetStarted/05_map_function.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/GetStarted/05_map_function.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/GetStarted/05_map_function.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except ImportError:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee

try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# This function gets NDVI from Landsat 8 imagery.
def addNDVI(image):
    return image.addBands(image.normalizedDifference(['B5', 'B4']))

# Load the Landsat 8 raw data, filter by location and date.
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1') \
    .filterBounds(ee.Geometry.Point(-122.262, 37.8719)) \
    .filterDate('2014-06-01', '2014-10-01')

# Map the function over the collection.
ndviCollection = collection.map(addNDVI)

first = ndviCollection.first()
print(first.getInfo())

bandNames = first.bandNames()
print(bandNames.getInfo())
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("..")
from optimus import Optimus
# Create optimus
op = Optimus("dask", verbose = True)
```
# Mysql
```
# !pip install mysqlclient
# Put your db credentials here
db = op.connect(
driver="mysql",
host="165.227.196.70",
database= "optimus",
user= "test",
password = "test")
db.tables()
db.table_to_df("test_data").ext.display()
db.execute(query="SELECT * FROM test_data", partition_column= "id", table_name = "test_data").ext.display()
db.execute(query="SELECT * FROM test_data", table_name = "test_data", num_partitions = 9).ext.display()
db.execute(query="SELECT * FROM test_data")
a = """SELECT *, NTILE (4) OVER (ORDER BY id) id FROM test_data;"""
%%time
db.execute(query="SELECT * FROM test_data").ext.display()
db.execute(query=a).ext.display()
df = db.table_to_df("test_data", limit=None)
db.tables_names_to_json()
```
# Postgres
```
# Put your db credentials here
db = op.connect(
driver="postgresql",
host="165.227.196.70",
database= "optimus",
user= "testuser",
password = "test")
db.tables()
db.table_to_df("test_data").table()
db.tables_names_to_json()
```
## MSSQL
```
# Put your db credentials here
db = op.connect(
driver="sqlserver",
host="165.227.196.70",
database= "optimus",
user= "test",
password = "test*0261")
db.tables()
db.table_to_df("test_data").table()
db.tables_names_to_json()
```
## Redshift
```
# Put your db credentials here
db = op.connect(
driver="redshift",
host="165.227.196.70",
database= "optimus",
user= "testuser",
password = "test")
db.tables()
db.table_to_df("test_data").table()
```
## Oracle
```
# Put your db credentials here
db = op.connect(
driver="oracle",
host="165.227.196.70",
database= "optimus",
user= "testuser",
password = "test")
```
## SQLite
```
# Put your db credentials here
db = op.connect(
driver="sqlite",
host="chinook.db",
database= "employes",
user= "testuser",
password = "test")
db.tables()
db.table_to_df("albums",limit="all").table()
db.tables_names_to_json()
```
## Redis
```
df = op.load.csv("https://raw.githubusercontent.com/ironmussa/Optimus/master/examples/data/foo.csv", sep=",", header='true', infer_schema='true', charset="UTF-8", null_value="None")
df.table()
# Put your db credentials here
db = op.connect(
driver="redis",
host="165.227.196.70",
port = 6379,
database= 1,
password = "")
db.df_to_table(df, "hola1", redis_primary_key="id")
# https://stackoverflow.com/questions/56707978/how-to-write-from-a-pyspark-dstream-to-redis
db.table_to_df(0)
```
# Migrating from PyTorch Lightning
[PyTorch Lightning](https://www.pytorchlightning.ai/) is a popular and very well designed framework for training deep learning models. If you are interested in trying our efficient algorithms and using the Composer trainer, below is a quick guide on how to adapt your models.
If you are running in Colab, or haven't installed composer yet, do so:
```
!pip install mosaicml
!pip install pytorch-lightning
```
# Getting started
In this section, we'll go through the process of migrating the Resnet18 on CIFAR10 model from PTL to Composer. We will be following the PTL example [here](https://pytorch-lightning.readthedocs.io/en/stable/notebooks/lightning_examples/cifar10-baseline.html).
First, some relevant imports, as well as creating the model as in the PTL tutorial.
```
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
from pytorch_lightning import LightningModule
from torch.optim.lr_scheduler import OneCycleLR
from torchmetrics.functional import accuracy
def create_model():
    model = torchvision.models.resnet18(pretrained=False, num_classes=10)
    model.conv1 = nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    model.maxpool = nn.Identity()
    return model
```
# Training data
As is standard, we setup the training data for CIFAR10 using `torchvision` datasets.
```
import torch
import torchvision
transform = torchvision.transforms.Compose(
[
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
]
)
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
train_dataloader = torch.utils.data.DataLoader(trainset, batch_size=256, shuffle=True)
test_dataloader = torch.utils.data.DataLoader(testset, batch_size=256, shuffle=False)
```
## Importing the PTL LightningModule
Following the PTL tutorial, we use the `LitResnet` model:
```
class LitResnet(LightningModule):
    def __init__(self, lr=0.05):
        super().__init__()
        self.save_hyperparameters()
        self.model = create_model()

    def forward(self, x):
        out = self.model(x)
        return F.log_softmax(out, dim=1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = F.nll_loss(logits, y)
        self.log("train_loss", loss)
        return loss

    def evaluate(self, batch, stage=None):
        x, y = batch
        logits = self(x)
        loss = F.nll_loss(logits, y)
        preds = torch.argmax(logits, dim=1)
        acc = accuracy(preds, y)

        if stage:
            self.log(f"{stage}_loss", loss, prog_bar=True)
            self.log(f"{stage}_acc", acc, prog_bar=True)

    def validation_step(self, batch, batch_idx):
        self.evaluate(batch, "val")

    def test_step(self, batch, batch_idx):
        self.evaluate(batch, "test")

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(
            self.model.parameters(),
            lr=self.hparams.lr,
            momentum=0.9,
            weight_decay=5e-4,
        )
        steps_per_epoch = 45000 // 256
        scheduler_dict = {
            "scheduler": OneCycleLR(
                optimizer,
                0.1,
                epochs=30,
                steps_per_epoch=steps_per_epoch,
            ),
            "interval": "step",
        }
        return {"optimizer": optimizer, "lr_scheduler": scheduler_dict}

PTLModel = LitResnet(lr=0.05)
```
## Lightning module to Composer
Notice that up to this point we have only used PyTorch Lightning code. Now we will convert the PTL module to be compatible with Composer. There are a few major differences:
* The `training_step` is broken into two parts, the `forward` and the `loss` methods. This is needed since our algorithms (such as label smoothing or selective backprop) sometimes need to intercept and modify the loss.
* Optimizers and schedulers are passed directly to the `Trainer` during initialization.
* Our `forward` step accepts the entire batch as input and has to take care of unpacking it.
For more information about the `ComposerModel` format, see our [guide](https://docs.mosaicml.com/en/stable/composer_model.html).
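To see why the `forward`/`loss` split matters, here is a minimal pure-Python sketch (not Composer's actual internals; all names and the toy arithmetic are illustrative) of how a training loop that calls the two methods separately can let an algorithm rewrite the loss before backpropagation:

```python
class TinyModel:
    def forward(self, batch):
        x, _ = batch
        return x * 0.5              # stand-in for the network's outputs

    def loss(self, outputs, batch):
        _, y = batch
        return (outputs - y) ** 2   # stand-in for F.nll_loss

def label_smoothing_like(loss_value):
    # stand-in for an algorithm that intercepts and modifies the loss
    return 0.9 * loss_value

model = TinyModel()
batch = (4.0, 1.0)                  # (input, target)

outputs = model.forward(batch)      # step 1: forward
loss = model.loss(outputs, batch)   # step 2: loss, computed separately...
loss = label_smoothing_like(loss)   # ...so an algorithm can modify it here

print(loss)  # 0.9 * (2.0 - 1.0)**2 = 0.9
```

With a monolithic `training_step`, the loss is computed and returned in one opaque call, leaving no hook between the two steps.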
```
from torchmetrics.classification.accuracy import Accuracy
from composer.models.base import ComposerModel
class MosaicResnet(ComposerModel):
    def __init__(self):
        super().__init__()
        self.model = create_model()
        self.acc = Accuracy()

    def loss(self, outputs, batch, *args, **kwargs):
        """
        Accepts the outputs from forward() and the batch
        """
        x, y = batch  # unpack the labels
        return F.nll_loss(outputs, y)

    def metrics(self, train):
        return self.acc

    def forward(self, batch):
        x, _ = batch
        y = self.model(x)
        return F.log_softmax(y, dim=1)

    def validate(self, batch):
        _, targets = batch
        outputs = self.forward(batch)
        return outputs, targets
```
## Training
We instantiate the Mosaic trainer similarly by specifying
the model, dataloaders, optimizers, and max_duration (epochs). For more details on the trainer arguments, see our [Using the Trainer](https://docs.mosaicml.com/en/stable/trainer/using_the_trainer.html) guide.
Now you are ready to insert your algorithms! As an example, here we add [BlurPool](https://docs.mosaicml.com/en/latest/method_cards/blurpool.html); other algorithms, such as [Label Smoothing](https://docs.mosaicml.com/en/latest/method_cards/label_smoothing.html), can be appended to the `algorithms` list in the same way.
```
from composer import Trainer
from composer.algorithms import LabelSmoothing, BlurPool
model = MosaicResnet()

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.05,
    momentum=0.9,
    weight_decay=5e-4,
)

steps_per_epoch = 45000 // 256

scheduler = OneCycleLR(
    optimizer,
    0.1,
    epochs=30,
    steps_per_epoch=steps_per_epoch,
)

trainer = Trainer(
    model=model,
    algorithms=[
        BlurPool(
            replace_convs=True,
            replace_maxpools=True,
            blur_first=True
        )
    ],
    train_dataloader=train_dataloader,
    device="gpu" if torch.cuda.is_available() else "cpu",
    eval_dataloader=test_dataloader,
    optimizers=optimizer,
    schedulers=scheduler,
    step_schedulers_every_batch=True,  # interval should be step
    max_duration='2ep',
    validate_every_n_epochs=1,
)

trainer.fit()
```
```
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import numpy as np
algos_labels = ['CBS', 'CBS+PC', 'CBS+DS', 'CBS+H']
def represent_scatter(min_agents, max_agents, results, ylabel, title, ax):
    ax.xaxis.set_major_locator(MaxNLocator(integer=True))
    agents = range(min_agents, max_agents + 1)
    for values, label in zip(results, algos_labels):
        ax.scatter(agents, values, label=label)
        ax.plot(agents, values)
    ax.set_xlabel('Number of agents')
    ax.set_ylabel(ylabel)
    ax.set_title(title)
    ax.legend()

def represent_hist(min_agents, max_agents, results, ylabel, title, ax):
    n = len(algos_labels)
    agents = list(map(str, range(min_agents, max_agents + 1)))

    x = np.arange(len(agents))  # the label locations
    width = 0.5  # the total width allotted to each group of bars

    rects = []
    for i, (values, label) in enumerate(zip(results, algos_labels)):
        rects.append(ax.bar(x + (i - n/2 + 0.5)*width/n, values, width/n, label=label))

    # Add some text for labels, title and custom x-axis tick labels, etc.
    ax.set_ylabel(ylabel)
    ax.set_xlabel('Number of agents')
    ax.set_title(title)
    ax.set_xticks(x)
    ax.set_xticklabels(agents)
    ax.legend()

    for rect in rects:
        ax.bar_label(rect, padding=3)

    fig.tight_layout()
    plt.show()
```
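The offset expression `x + (i - n/2 + 0.5)*width/n` in `represent_hist` centers the group of `n` bars on each tick: bar `i` is shifted by `(i - n/2 + 0.5)*width/n`, so the offsets are symmetric around zero and adjacent bars sit exactly one bar-width (`width/n`) apart. A quick standalone check of that arithmetic:

```python
n, width = 4, 0.5
offsets = [(i - n/2 + 0.5) * width / n for i in range(n)]
print(offsets)  # [-0.1875, -0.0625, 0.0625, 0.1875]

# symmetric around the tick, spaced one bar-width apart
assert abs(sum(offsets)) < 1e-12
assert all(abs((b - a) - width/n) < 1e-12 for a, b in zip(offsets, offsets[1:]))
```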
## den520d
```
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
ratios_CBS = np.load('../stats/ratios_den520d.npy')
ratios_PC = np.load('../stats/ratios_den520d_PC.npy')
ratios_DS = np.load('../stats/ratios_den520d_DS.npy')
ratios_H = np.load('../stats/ratios_den520d_H.npy')
represent_hist(5, 10, [ratios_CBS, ratios_PC, ratios_DS, ratios_H], 'Success rate', 'den520d map', ax)
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
nodes_CBS = np.load('../stats/nodes_den520d.npy')
nodes_CBS = nodes_CBS.mean(axis=1)
nodes_PC = np.array([
np.mean([1, 1, 1, 1, 1, 3, 1, 1, 3, 5, 1, 1, 1, 3, 1, 1, 3, 1, 1, 1]),
np.mean([1, 3, 1, 1, 1, 1, 1, 5, 3, 1, 1, 5, 1, 3, 1, 1, 1, 3, 1]),
np.mean([1, 1, 1, 3, 1, 1, 3, 1, 9, 1, 1, 1, 3, 1, 3, 1, 5, 1, 1, 1]),
np.mean([1, 5, 1, 1, 1, 1, 3, 7, 3, 1, 1, 1, 1, 1, 3, 1, 1, 5, 3, 3]),
np.mean([1, 1, 5, 1, 3, 1, 3, 1, 3, 3, 1, 5, 9, 1, 9, 3, 1, 1, 1, 3]),
np.mean([3, 1, 3, 1, 1, 11, 1, 1, 3, 3, 3, 1, 3, 1, 1, 1, 7, 5, 7, 1])
])
nodes_DS = np.load('../stats/nodes_den520d_DS.npy')
nodes_DS = nodes_DS.mean(axis=1)
nodes_H = np.load('../stats/nodes_den520d_H.npy')
nodes_H = np.array([
np.mean([1, 1, 1, 1, 1, 3, 1, 1, 3, 7, 1, 1, 1, 3, 1, 1, 3, 1, 1, 1]),
np.mean([1, 3, 1, 1, 1, 1, 1, 7, 3, 1, 1, 7, 1, 3, 1, 1, 1, 3, 1]),
np.mean([1, 1, 1, 3, 1, 1, 3, 1, 17, 1, 1, 1, 3, 1, 3, 1, 7, 1, 1, 1]),
np.mean([1, 3, 1, 1, 1, 1, 3, 11, 3, 1, 1, 1, 1, 1, 3, 1, 1, 7, 3, 3]),
np.mean([1, 1, 7, 1, 3, 1, 3, 1, 3, 3, 1, 5, 17, 1, 19, 3, 1, 1, 1, 3]),
np.mean([3, 1, 3, 1, 1, 33, 1, 1, 3, 3, 3, 1, 3, 1, 1, 1, 13, 7, 11, 1])
])
represent_hist(5, 10, [nodes_CBS, nodes_PC, nodes_DS, nodes_H], 'CT nodes created', 'den520d map', ax)
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
times_CBS = np.load('../stats/times_den520d.npy')
times_CBS = times_CBS.mean(axis=1)
times_PC = np.load('../stats/times_den520d_PC.npy')
times_PC = times_PC.mean(axis=1)
times_DS = np.load('../stats/times_den520d_DS.npy')
times_DS = times_DS.mean(axis=1)
times_H = np.load('../stats/times_den520d_H.npy')
times_H = times_H.mean(axis=1)
represent_hist(5, 10, [times_CBS, times_PC, times_DS, times_H], 'Time', 'den520d map', ax)
```
## room
```
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
ratios_CBS = np.load('../stats/ratios_room.npy')
ratios_PC = np.load('../stats/ratios_room_PC.npy')
ratios_DS = np.load('../stats/ratios_room_DS.npy')
ratios_H = np.load('../stats/ratios_room_H.npy')
represent_hist(5, 10, [ratios_CBS, ratios_PC, ratios_DS, ratios_H], 'Success rate', 'Room map', ax)
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
nodes_CBS = np.load('../stats/nodes_room.npy')
nodes_PC = np.load('../stats/nodes_room_PC.npy')
nodes_DS = np.load('../stats/nodes_room_DS.npy')
nodes_H = np.load('../stats/nodes_room_H.npy')
represent_hist(5, 10, [nodes_CBS, nodes_PC, nodes_DS, nodes_H], 'CT nodes created', 'Room map', ax)
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
times_CBS = np.load('../stats/times_room.npy')
times_PC = np.load('../stats/times_room_PC.npy')
times_DS = np.load('../stats/times_room_DS.npy')
times_H = np.load('../stats/times_room_H.npy')
represent_hist(5, 10, [times_CBS, times_PC, times_DS, times_H], 'Time', 'Room map', ax)
```
# Empty 8x8
```
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
ratios_CBS = np.load('../stats/ratios_empty_8.npy')
ratios_PC = np.load('../stats/ratios_empty_8_PC.npy')
ratios_DS = np.load('../stats/ratios_empty_8_DS.npy')
ratios_H = np.load('../stats/ratios_empty_8_H.npy')
represent_hist(5, 20, [ratios_CBS, ratios_PC, ratios_DS, ratios_H], 'Success rate', 'Empty 8x8 map', ax)
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
ax.set_yscale('log')
nodes_CBS = np.load('../stats/nodes_empty_8.npy')
nodes_PC = np.load('../stats/nodes_empty_8_PC.npy')
nodes_DS = np.load('../stats/nodes_empty_8_DS.npy')
nodes_H = np.load('../stats/nodes_empty_8_H.npy')
represent_hist(5, 20, [nodes_CBS, nodes_PC, nodes_DS, nodes_H], 'CT nodes created', 'Empty 8x8 map', ax)
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
ax.set_yscale('log')
times_CBS = np.load('../stats/times_empty_8.npy')
times_PC = np.load('../stats/times_empty_8_PC.npy')
times_DS = np.load('../stats/times_empty_8_DS.npy')
times_H = np.load('../stats/times_empty_8_H.npy')
represent_hist(5, 20, [times_CBS, times_PC, times_DS, times_H], 'Time', 'Empty 8x8 map', ax)
```
# Room 32x32
```
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
# ratios_CBS = np.load('../stats/ratios_room.npy')
# ratios_PC = np.load('../stats/ratios_room_PC.npy')
# ratios_DS = np.load('../stats/ratios_room_DS.npy')
ratios_H = np.load('../stats/ratios_room_H.npy')
represent_hist(5, 20, [ratios_H], 'Success rate', 'Room map', ax)
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
ax.set_yscale('log')
# nodes_CBS = np.load('../stats/nodes_empty_8.npy')
# nodes_PC = np.load('../stats/nodes_empty_8_PC.npy')
# nodes_DS = np.load('../stats/nodes_empty_8_DS.npy')
nodes_H = np.load('../stats/nodes_room_H.npy')
represent_hist(5, 20, [nodes_H], 'CT nodes created', 'Room map', ax)
fig, ax = plt.subplots()
fig.set_size_inches(18.5, 10.5)
ax.set_yscale('log')
# times_CBS = np.load('../stats/times_empty_8.npy')
# times_PC = np.load('../stats/times_empty_8_PC.npy')
# times_DS = np.load('../stats/times_empty_8_DS.npy')
times_H = np.load('../stats/times_room_H.npy')
represent_hist(5, 20, [times_H], 'Time', 'Room map', ax)
```
| github_jupyter |
# 4c. Improving the training loop
Now that we can compute the loss for our training data, we can train the model with the same couple of steps that we encountered at the end of [**Notebook 2**](../2_Tensors/2b_Tensors_features_Solution.ipynb).
We will take this as a starting point to introduce the `torch.optim` package which provides us with the `Optimizer` API that greatly simplifies the training loop.
## Key concepts of this section
1. `Optimizer` API from the `torch.optim` package
```
import random
import collections
import math
from typing import Tuple, List
import torch
import torchvision
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import seaborn as sns
sns.set(style="darkgrid")
from intro_to_pytorch import data
```
## Training loop so far
Going back to the last section of [**Notebook 4b**](4b_loss_functions_Solutions.ipynb), we finished by calculating the per-batch loss.
```
# Nothing new here, just repeating definitions for clarity ... don't do this at home!
class MnistDataSet:
def __init__(self, train=True):
subset = "training" if train else "test"
self.x, self.y = torch.load(data.DATA_PATH / f"MNIST/processed/{subset}.pt")
self.x = self.x.float()
def __getitem__(self, key) -> Tuple[torch.Tensor, torch.Tensor]:
return self.x[key], self.y[key]
def __len__(self):
return len(self.x)
class MnistDataLoader:
def __init__(self, dataset, batch_size, shuffle, transform=None):
self.dataset, self.batch_size, self.shuffle, self.transform = dataset, batch_size, shuffle, transform
def __iter__(self):
self.idx = list(range(len(self.dataset)))
if self.shuffle:
random.shuffle(self.idx)
return self
def __next__(self):
if self.idx:
batch, self.idx = self.idx[:self.batch_size], self.idx[self.batch_size:]
x, y = self.dataset[batch]
if self.transform:
return self.transform(x, y)
return x, y
raise StopIteration()
class ImageNormalizer:
def __init__(self, mean, std):
self.mean, self.std = mean, std
def __call__(self, x, y):
return (x - self.mean).div_(self.std), y
class Flatten(torch.nn.Module):
def forward(self, x):
return x.reshape(x.shape[0], -1)
def get_model():
return torch.nn.Sequential(collections.OrderedDict([
("reshape", torch.nn.Flatten()),
("hidden", torch.nn.Linear(28*28,256)),
("sigmoid", torch.nn.Sigmoid()),
("output", torch.nn.Linear(256,10)),
]))
train_dl = MnistDataLoader(MnistDataSet(train=True), 1024, True, ImageNormalizer(33.32, 78.57))
test_dl = MnistDataLoader(MnistDataSet(train=False), 1024, True, ImageNormalizer(33.32, 78.57))
ce = torch.nn.CrossEntropyLoss()
def accuracy(preds, target):
return (preds.max(-1)[1] == target).float().mean()
# for x, y in train_dl:
# preds = model(x)
# print(ce(preds, y))
# print(accuracy(preds, y))
```
In [**Notebook 2**](../2_Tensors/2b_Tensors_features_Solution.ipynb) we already saw such a loop when finding the parameters for a **linear fit** to some data by minimizing the **MSE**.
Let's try to reproduce this algorithm with our current setup!
```
def train_nn(model, n_epochs, lr):
train_losses = np.array([])
test_losses = np.array([])
accuracies = np.array([])
for epoch in range(n_epochs):
for x, y in train_dl:
train_loss = ce(model(x), y)
train_loss.backward()
train_losses = np.append(train_losses, train_loss.item())
with torch.no_grad():
for p in model.parameters():
p += - lr * p.grad
p.grad.data.zero_()
test_loss, acc = evaluate_nn(model, test_dl)
test_losses = np.append(test_losses, test_loss)
accuracies = np.append(accuracies, acc)
print(f"Epoch: {epoch} \t Training loss: {train_losses[-1]} \t Test loss: {test_losses[-1]} \t Test accuracy: {accuracies[-1]}")
return train_losses, test_losses, accuracies
def evaluate_nn(model, test_dl):
preds = torch.tensor([])
targets = torch.tensor([]).long()
with torch.no_grad():
for x, y in test_dl:
targets = torch.cat([targets, y])
preds = torch.cat([preds, model(x)])
test_loss = ce(preds, targets)
return test_loss.item(), accuracy(preds, targets).item()
def plot_metrics(train_losses, test_losses, accuracies):
fig, (ax0, ax1) = plt.subplots(1, 2, sharex=True, figsize=(15,5))
x = np.array(range(len(train_losses)))
iterations_per_epoch = int(len(train_losses)/ len(test_losses))
x_val = x[iterations_per_epoch - 1 :: iterations_per_epoch]
ax0.plot(x, train_losses, label='train')
ax0.plot(x_val, test_losses, label='test')
ax0.legend()
ax0.set_ylabel("Loss")
ax0.set_xlabel("Iteration")
ax1.set_ylabel("Accuracy")
ax1.plot(x_val, accuracies)
ax1.set_xlabel("Iteration")
plt.tight_layout()
model = get_model()
train_losses, test_losses, accuracies = train_nn(model, n_epochs=10, lr=0.01)
plot_metrics(train_losses, test_losses, accuracies)
```
## PyTorch Optimizers
The above already works quite neatly: the actual training code is only the part from l.6 to l.15, while the rest is mainly for logging.
However, the part from l.12 to l.15 is very generic and can certainly be refactored away:
```
class Optimizer:
def __init__(self, parameters, lr):
self.parameters, self.lr = list(parameters), lr
def step(self):
with torch.no_grad():
for p in self.parameters:
p += - self.lr * p.grad
def zero_grad(self):
with torch.no_grad():
for p in self.parameters:
if p.grad is not None:
p.grad.data.zero_()
def train_nn(model, optim, n_epochs):
train_losses = np.array([])
test_losses = np.array([])
accuracies = np.array([])
for epoch in range(n_epochs):
for x, y in train_dl:
train_loss = ce(model(x), y)
train_loss.backward()
train_losses = np.append(train_losses, train_loss.item())
optim.step()
optim.zero_grad()
test_loss, acc = evaluate_nn(model, test_dl)
test_losses = np.append(test_losses, test_loss)
accuracies = np.append(accuracies, acc)
print(f"Epoch: {epoch} \t Training loss: {train_losses[-1]} \t Test loss: {test_losses[-1]} \t Test accuracy: {accuracies[-1]}")
return train_losses, test_losses, accuracies
model = get_model()
train_losses, test_losses, accuracies = train_nn(model, Optimizer(model.parameters(), lr=0.01), n_epochs=10)
plot_metrics(train_losses, test_losses, accuracies)
torch.save(model, data.DATA_PATH / "model.pt")
```
This basically introduces the core of what the [torch.optim.Optimizer](https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer) does.
The `torch.optim` package, however, offers far more than this bare-bones wrapper for updating model parameters: it comes with all the deep learning batteries included. In our simple case, the weight update is performed with the **SGD** rule:
$$\omega_t = \omega_{t-1} - \lambda \, \nabla_{\omega} \mathcal{L}$$
where $\mathcal{L}$ is the loss and $\lambda$ the learning rate.
Much more advanced algorithms exist to perform the weight update, like **SGD with momentum**, **Adagrad**, **Adam**, ...
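For intuition, here is a minimal pure-Python sketch (not PyTorch's implementation) of what the first of those, **SGD with momentum**, does to a single scalar parameter:

```python
# A minimal sketch of the SGD-with-momentum update rule on a scalar
# parameter (illustrative only; in practice you would simply use
# torch.optim.SGD(model.parameters(), lr=..., momentum=...)).
def sgd_momentum_steps(grad_fn, w0, lr=0.1, mu=0.9, n_steps=200):
    """Minimize a 1D function given its gradient, using momentum:

    v_t = mu * v_{t-1} + grad(w_{t-1})   (velocity accumulates past gradients)
    w_t = w_{t-1} - lr * v_t
    """
    w, v = w0, 0.0
    for _ in range(n_steps):
        v = mu * v + grad_fn(w)
        w = w - lr * v
    return w

# Minimizing f(w) = (w - 3)^2, whose gradient is 2 * (w - 3);
# the iterates converge to the minimum at w = 3.
w_final = sgd_momentum_steps(lambda w: 2 * (w - 3), w0=0.0)
```

The velocity term damps oscillations across steep directions and accelerates progress along shallow, consistent ones, which is why momentum often converges faster than plain SGD.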
## Exercise 1:
Check the documentation for a couple of other optimizers and see if you can improve the model performance!
```
model = get_model()
train_losses, test_losses, accuracies = train_nn(model, torch.optim.AdamW(model.parameters(), lr=0.01), n_epochs=10)
plot_metrics(train_losses, test_losses, accuracies)
torch.save(model, data.DATA_PATH / "model_optimized_with_adam.pt")
class AdamOptimizer(Optimizer):
def __init__(self, parameters, lr, beta1 = 0.9, beta2 = 0.999, epsilon=1e-8, wd=1e-2):
self.parameters, self.lr, self.beta1, self.beta2, self.epsilon, self.wd = list(parameters), lr, beta1, beta2, epsilon, wd
self.t, self.state = 0, [{"m": 0., "v": 0.} for _ in self.parameters]
def step(self):
self.t += 1
with torch.no_grad():
for s, p in zip(self.state, self.parameters):
p.mul_(1-self.lr*self.wd)
s["m"] = self.beta1 * s["m"] + (1-self.beta1)*p.grad
s["v"] = self.beta2 * s["v"] + (1-self.beta2)*p.grad.pow(2)
p += - self.lr * math.sqrt(1-self.beta2**self.t) / (1 - self.beta1**self.t) * s["m"] / (s["v"].sqrt() + self.epsilon)
model = get_model()
train_losses, test_losses, accuracies = train_nn(model, AdamOptimizer(model.parameters(), lr=0.01), n_epochs=10)
plot_metrics(train_losses, test_losses, accuracies)
torch.save(model, data.DATA_PATH / "model_optimized_with_adam.pt")
class LambOptimizer(Optimizer):
def __init__(self, parameters, lr, beta1 = 0.9, beta2 = 0.999, epsilon=1e-8, wd=1e-2):
self.parameters, self.lr, self.beta1, self.beta2, self.epsilon, self.wd = list(parameters), lr, beta1, beta2, epsilon, wd
self.t, self.state = 0, [{"m": 0., "v": 0.} for _ in self.parameters]
def step(self):
self.t += 1
with torch.no_grad():
for s, p in zip(self.state, self.parameters):
s["m"] = self.beta1 * s["m"] + (1-self.beta1)*p.grad
s["v"] = self.beta2 * s["v"] + (1-self.beta2)*p.grad.pow(2)
r = s["m"] / (s["v"].sqrt() + self.epsilon)* math.sqrt(1-self.beta2**self.t) / (1 - self.beta1**self.t)
scale = r + self.wd*p
p += - self.lr * scale * torch.norm(p).clamp(0., 10.) / torch.norm(scale)
model = get_model()
train_losses, test_losses, accuracies = train_nn(model, LambOptimizer(model.parameters(), lr=0.01), n_epochs=10)
plot_metrics(train_losses, test_losses, accuracies)
torch.save(model, data.DATA_PATH / "model_optimized_with_lamb.pt")
```
## Exercise 2:
What other API is exposed by the `torch.optim` package?
```
# TODO: Exercise 2
# Answer: the learning-rate scheduler API, torch.optim.lr_scheduler (base class `_LRScheduler`)
```
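The schedulers in `torch.optim.lr_scheduler` build on that base class. As a rough pure-Python illustration of the rule that, e.g., `StepLR` implements (decay the learning rate by a factor `gamma` every `step_size` epochs; a sketch, not PyTorch's actual code):

```python
# Illustrative re-implementation of the StepLR decay rule; in practice use
# torch.optim.lr_scheduler.StepLR(optim, step_size=..., gamma=...).
def step_lr(base_lr, epoch, step_size=30, gamma=0.1):
    """Learning rate after `epoch` epochs: base_lr * gamma ** (epoch // step_size)."""
    return base_lr * gamma ** (epoch // step_size)

# With step_size=2 and gamma=0.5 the rate halves every two epochs:
rates = [step_lr(0.01, e, step_size=2, gamma=0.5) for e in range(6)]
```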
## Exercise 3:
Save only the model's parameters to *../data/model_params.pt*.
```
torch.save(model.state_dict(), data.DATA_PATH / "model_params.pt")
```
## Section summary
The `torch.optim` package provides useful APIs and state-of-the-art algorithms for performing weight updates and learning rate scheduling.
| github_jupyter |
This is the notebook associated with the blog post titled *Interactive Explainable Machine Learning with SAS Viya, Streamlit and Docker*.
Install SWAT if you haven't done so already, then import the required modules.
```
#!pip install swat
from swat import CAS, options
import pandas as pd
import numpy as np
```
Connect to CAS and load the required action sets
```
host = ""
port = ""
username = ""
password = ""
s = CAS(host, port, username, password)
s.loadActionSet('autotune')
s.loadactionset('aStore')
s.loadactionset('decisionTree')
s.loadactionset("explainModel")
s.loadactionset('table')
```
Load and inspect the dataset
```
hmeq = pd.read_csv('hmeq.csv')
hmeq
```
Load the DataFrame into a CAS table, then train a model and perform hyperparameter optimization
```
s.upload(hmeq,casout={'name' : 'hmeqTest', 'caslib' : 'public','replace' : True})
result = s.autotune.tuneGradientBoostTree(
trainOptions = {
"table" : {"name":'hmeqTest', 'caslib' : 'public'},
"inputs" : {'LOAN','MORTDUE','VALUE','YOJ','DEROG','DELINQ','CLAGE','NINQ','CLNO','DEBTINC','REASON', 'JOB'},
"target" : 'BAD',
"nominal" : {'BAD','REASON', 'JOB'},
"casout" : {"name":"gradboosthmeqtest", "caslib":"public",'replace':True},
"varImp" : True
},
tunerOptions={"seed":12345, "maxTime":60}
)
```
Promote the table with the training data, export the astore, and promote the astore to global scope. This is important for the Streamlit portion.
```
s.table.promote(name="hmeqTest", caslib='public',target="hmeqTest",targetLib='public')
modelAstore = s.decisionTree.dtreeExportModel(modelTable = {"caslib":"public","name":"gradboosthmeqtest" },
casOut = {"caslib":"public","name":'hmeqTestAstore','replace':True})
s.table.promote(name='hmeqTestAstore', caslib='public',target='hmeqTestAstore',targetLib='public')
```
Let's test out the model. Create a sample observation, convert it to a pandas DataFrame and then to a CAS table, and score it against the model
```
# Convert a dictionary of input data to a pandas DataFrame (a tabular data format for scoring)
datadict = {'LOAN':140,'MORTDUE':3000, 'VALUE':40000, 'REASON':'HomeImp','JOB':'Other','YOJ':12,
'DEROG':0.0,'DELINQ':0.0, 'CLAGE':89,'NINQ':1.0, 'CLNO':10.0, 'DEBTINC':0.05}
```
Create a small helper function to convert the Python dictionary to a pandas DataFrame. This could be done with a single line of code, but the data types end up changing; hence this slightly verbose function.
```
def dicttopd(datadict):
for key in datadict:
datadict[key] = [datadict[key]]
return pd.DataFrame.from_dict(datadict)
samplepd = dicttopd(datadict)
samplepd
```
Score this against the model
```
s.upload(samplepd,casout={'name' : 'realtime', 'caslib' : 'public','replace' : True})
s.aStore.score(rstore = {"caslib":"public","name":"hmeqTestAstore"},
table = {"caslib":'public',"name":'realtime'},
out = {"caslib":'public',"name":'realscore', 'replace':True})
```
Inspect the scores
```
scoredData = s.CASTable(name='realscore',caslib='public')
datasetDict = scoredData.to_dict()
scores = pd.DataFrame(datasetDict, index=[0])
scores
```
Convert this to a neat little function for later use in the app
```
def score(samplepd):
s.upload(samplepd,casout={'name' : 'realtime', 'caslib' : 'public','replace' : True})
s.aStore.score(rstore = {"caslib":"public","name":"hmeqTestAstore"},
table = {"caslib":'public',"name":'realtime'},
out = {"caslib":'public',"name":'realscore', 'replace':True})
#scoretable2= s.table.fetch(score_tableName)
scoredData = s.CASTable(name='realscore',caslib='public')
datasetDict = scoredData.to_dict()
scores = pd.DataFrame(datasetDict, index=[0])
return scores
```
Test to make sure this works
```
score(samplepd)
```
Let's add the I_BAD value to the 'BAD' field in `samplepd`
```
samplepd['BAD'] = scores.I_BAD.to_list()
samplepd
```
Get interpretability scores using the Kernel SHAP algorithm via the `linearExplainer` action
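As a reminder of what Kernel SHAP approximates: a feature's Shapley value is its average marginal contribution to the prediction over all feature orderings. For a hypothetical two-feature value function, this can be computed exactly:

```python
# Exact Shapley values for a toy two-feature value function
# (Kernel SHAP estimates these via a weighted linear regression).
# v[S] = model output when only the features in S are "present".
v = {(): 0.0, ("a",): 4.0, ("b",): 2.0, ("a", "b"): 10.0}

# Average marginal contribution over the two orderings (a, b) and (b, a):
phi_a = 0.5 * ((v[("a",)] - v[()]) + (v[("a", "b")] - v[("b",)]))
phi_b = 0.5 * ((v[("b",)] - v[()]) + (v[("a", "b")] - v[("a",)]))

# Efficiency property: the attributions sum to the total effect.
assert phi_a + phi_b == v[("a", "b")] - v[()]
```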
```
s.upload(samplepd,casout={'name' : 'realtime', 'caslib' : 'public','replace' : True})
shapvals = s.linearExplainer(
table = {"name" : 'hmeqTest','caslib':'public'},
query = {"name" : 'realtime','caslib':'public'},
modelTable = {"name" :"hmeqTestAstore",'caslib':'public'},
modelTableType = "ASTORE",
predictedTarget = 'P_BAD1',
seed = 1234,
preset = "KERNELSHAP",
inputs = ['LOAN','MORTDUE','VALUE','YOJ','DEROG','DELINQ','CLAGE','NINQ','CLNO','DEBTINC','REASON', 'JOB','BAD'],
nominals = ['REASON', 'JOB','BAD']
)
shap1 = shapvals['ParameterEstimates']
shap = shap1[['Variable','Estimate']][0:10]
```
Inspect the results
```
shap
!pip install altair
import altair as alt
alt.Chart(shap).mark_bar().encode(
x='Variable',
y='Estimate'
)
```
| github_jupyter |
```
%matplotlib inline
%run notebook_setup
```
# Interpolation with PyMC3
## A 1D example
To start, we'll do a simple 1D example where we have a model evaluated at control points and we interpolate between them to estimate the model value.
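For reference, the 1D case is conceptually the same as plain NumPy interpolation between control points (assuming linear interpolation here); the advantage of the `RegularGridInterpolator` used below is that it is a Theano op, so it can be differentiated and used inside a PyMC3 model:

```python
import numpy as np

# Plain-NumPy analogue of evaluating between 1D control points
# (illustrative; assumes linear interpolation between the points).
x = np.array([0.0, 1.0, 2.0])
values = np.array([0.0, 10.0, 0.0])

y = np.interp(1.5, x, values)  # halfway between 10.0 and 0.0
```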
```
import numpy as np
import matplotlib.pyplot as plt
import exoplanet as xo
np.random.seed(42)
x = np.sort(np.random.uniform(-5, 5, 25))
points = [x]
values = x**3-x**2
interpolator = xo.interp.RegularGridInterpolator(points, values[:, None])
t = np.linspace(-6, 6, 5000)
plt.plot(t, interpolator.evaluate(t[:, None]).eval(), label="interpolation")
plt.plot(x, values, "o", label="control points")
plt.xlabel("x")
plt.ylabel("y")
plt.legend(fontsize=12);
```
Here's how we build the PyMC3 model:
```
import pymc3 as pm
truth = 45.0
data_sd = 8.0
data_mu = truth + data_sd * np.random.randn()
with pm.Model() as model:
# The value passed into the interpolator must have the shape
# (ntest, ndim), but in our case that is (1, 1)
xval = pm.Uniform("x", lower=-8, upper=8, shape=(1, 1))
# Evaluate the interpolated model and extract the scalar value
# we want
mod = pm.Deterministic("y", interpolator.evaluate(xval)[0, 0])
# The usual likelihood
pm.Normal("obs", mu=mod, sd=data_sd, observed=data_mu)
# Sampling!
trace = pm.sample(draws=1000, tune=2000, step_kwargs=dict(target_accept=0.9))
```
And here are the results:
```
t = np.linspace(-6, 6, 5000)
plt.plot(trace["x"][:, 0, 0], trace["y"], ".k", alpha=0.1, label="posterior samples")
plt.axhline(truth, color="k", lw=3, alpha=0.5, label="truth")
plt.plot(t, interpolator.evaluate(t[:, None]).eval(), label="interpolation")
plt.xlabel("x")
plt.ylabel("y")
plt.legend(fontsize=12);
```
## A 2D example
In this case, we'll interpolate a 2D function. This one is harder because the posterior is a ring, but it demonstrates the principle.
```
points = [
np.linspace(-5, 5, 50),
np.linspace(-1, 1, 25),
]
values = np.exp(-0.5*(points[0]**2)[:, None] - 0.5*(points[1]**2 / 0.5)[None, :] - points[0][:, None]*points[1][None, :])
interpolator = xo.interp.RegularGridInterpolator(points, values[:, :, None], nout=1)
plt.pcolor(points[0], points[1], values.T)
plt.colorbar()
plt.xlabel("x")
plt.ylabel("y");
```
Set things up and sample.
```
import theano.tensor as tt
data_mu = 0.6
data_sd = 0.1
with pm.Model() as model:
xval = pm.Uniform("x", lower=-5, upper=5, shape=(1,))
yval = pm.Uniform("y", lower=-1, upper=1, shape=(1,))
xtest = tt.stack([xval, yval], axis=-1)
mod = interpolator.evaluate(xtest)
# The usual likelihood
pm.Normal("obs", mu=mod, sd=data_sd, observed=data_mu)
# Sampling!
trace = pm.sample(draws=4000, tune=4000, step_kwargs=dict(target_accept=0.9))
```
And here are the results:
```
import corner
samples = pm.trace_to_dataframe(trace)
corner.corner(samples);
```
| github_jupyter |
```
#Basics
import pandas as pd
import numpy as np
#sklearn
from sklearn.model_selection import train_test_split,cross_val_score,GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestRegressor,RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, confusion_matrix, precision_recall_curve, roc_auc_score
# Visualisation
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(color_codes=True)
pal = sns.color_palette("Set2", 10)
sns.set_palette(pal)
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
```
### Load data
```
df = pd.read_csv('train.csv')
df_predict = pd.read_csv('predict.csv')
```
### Split data into training and testing sets
```
# feature columns (drop PassengerId and the Survived target)
X = df.iloc[:,2:]
# only the Survived column
y = pd.DataFrame(df['Survived'])
Xtrain1, Xtest1, ytrain1,ytest1 = train_test_split(X,y, test_size = 0.2, random_state = 42, stratify =y)
df_train = pd.merge(Xtrain1, ytrain1, left_index= True, right_index = True)
df_test = pd.merge(Xtest1, ytest1, left_index= True, right_index = True)
df_train.shape, df_test.shape
df_train.head(3)
```
### Visualization
```
# Gender
pivot_gender = df_train.pivot_table(index="Sex",values="Survived")
pivot_gender.plot.bar()
plt.show()
# Gender & Class
pivot_gender = df_train.pivot_table(index=["Pclass","Sex"],values="Survived")
pivot_gender.plot.bar()
plt.show()
# Agegroup
df_train['Age_group'] =(df_train['Age'] // 10*10)
pivot_age = df_train.pivot_table(index="Age_group",values="Survived")
pivot_age.plot.bar()
plt.show()
# Class
pivot_class= df_train.pivot_table(index="Pclass",values="Survived")
pivot_class.plot.bar()
plt.show()
# Title
df_train['Titles'] = df_train['Name'].str.split(r'\s*,\s*|\s*\.\s*').str[1]
pivot_class= df_train.pivot_table(index="Titles",values="Survived")
pivot_class.plot.bar()
plt.show()
df_train.drop(['Titles'], axis=1, inplace=True)
```
### Clean Data
```
# missing data
miss = df_train.isna().sum()
miss_perc =(1-df_train.notnull().mean())*100
pd.concat([miss,round(miss_perc)], axis=1, keys=['missing', 'in %'])
def cleaning(dataframe):
### Age
mean = dataframe['Age'].mean()
std = dataframe['Age'].std()
is_null = dataframe['Age'].isnull().sum()
# compute random numbers
rand_age = np.random.randint(mean - std, mean + std, size = is_null)
# fill NaN values in Age column
age_slice = dataframe['Age'].copy()
age_slice[np.isnan(age_slice)] = rand_age
dataframe['Age'] = age_slice
dataframe['Age'] = dataframe['Age'].astype(int)
### recoding the gender in two columns
f = dataframe['Sex'] == 'female'
dataframe['Gender'] =f.astype(int)
### defining age groups
def agegroup(row):
# age 0-5
if row < 6:
return 1
# age 6-14
elif row < 15:
return 2
# age 15-36
elif row < 37:
return 3
# age 37-55
elif row < 56:
return 4
# 56-80
elif row < 81:
return 5
else:
return 6
dataframe['Agegroup'] = dataframe['Age'].apply(agegroup)
### recoding the cabin into floor
# to get starting letter: df_train['Cabin'].str[:1].unique()
characters = ('N','A','B','C','D','E','F','G','T')
numbers = ('0','1','2','3','4','5','6','7','8')
df_deck = pd.DataFrame(
{'Deck': characters,
'Deck_No': numbers})
#replace NaN with N as cabin
dataframe['Cabin'].fillna('N', inplace=True)
#Cabin indicator as column
dataframe['Deck'] = dataframe['Cabin'].str[:1]
#merge with lookup of level aka deck no
dataframe = pd.merge(dataframe,df_deck,
on='Deck',
how='left')
### Embarking ports
port_dict = {'Q': 1, 'C': 2, 'S':3 }
dataframe['Ports'] = dataframe['Embarked'].map(port_dict)
dataframe['Ports'] = dataframe['Ports'].fillna(1)
### titles
# how it works: split the name on commas and periods (with surrounding
# whitespace) and take the second piece, i.e. the title
dataframe['Title'] = dataframe['Name'].str.split(r'\s*,\s*|\s*\.\s*').str[1]
titles_dummies = pd.get_dummies(dataframe['Title'], prefix='Title')
dataframe = pd.concat([dataframe, titles_dummies], axis=1)
### "Women and children first"
dataframe.loc[( (dataframe['Sex'] == 'female') & (dataframe['Age'] >= 15) ), 'Category_Code'] = 1
dataframe.loc[( (dataframe['Sex'] == 'male') & (dataframe['Age'] >= 15) ), 'Category_Code'] = 2
dataframe.loc[( dataframe['Age'] < 15 ), 'Category_Code'] = 3
return dataframe
df_train = cleaning(df_train)
df_test = cleaning(df_test)
# dropping unused columns
def dropping(dataframe):
dataframe.drop(['Sex','Name','Ticket','Cabin','Deck','Embarked'], axis=1, inplace=True)
return dataframe
df_train = dropping(df_train)
df_test = dropping(df_test)
```
### Build a Logistic Regression model
```
# Define X and Y for train data
y_train = df_train['Survived']
X_train = df_train[['Age', 'Pclass', 'SibSp', 'Parch', 'Fare','Deck_No','Gender',
'Title_Master','Title_Miss','Title_Mr','Title_Mrs','Title_Rev',
'Agegroup','Ports','Category_Code'
]]
m = LogisticRegression()
m.fit(X_train,y_train)
m.intercept_, m.coef_
m.score(X_train,y_train)
# Define X and Y for test data
y_test = df_test['Survived']
X_test = df_test[['Age', 'Pclass', 'SibSp', 'Parch', 'Fare','Deck_No','Gender',
'Title_Master','Title_Miss','Title_Mr','Title_Mrs','Title_Rev',
'Agegroup','Ports','Category_Code'
]]
m.score(X_test,y_test)
df_predict = pd.DataFrame(m.predict_proba(X_test), columns = ['Survived_No', 'Survived_Yes'])
df_predict = df_predict.round(decimals=2)
df_predict.head(3)
```
### Merge Data
```
df_predict2 = pd.merge(df_test,df_predict, left_index= True, right_index = True)
#df_predict2.head(3)
y_pred =m.predict(X_train)
confusion_matrix(y_pred, y_train)
precision_score(y_pred=y_pred, y_true=y_train)
recall_score(y_true=y_train,y_pred=y_pred)
```
### Precision-Recall-Curve
```
y_pred_prob = m.predict_proba(X_train)[:,1]
precision, recall, thresholds = precision_recall_curve(y_train, y_pred_prob)
plt.plot(precision, recall)
plt.xlabel('precision')
plt.ylabel('recall')
plt.title('Precision-Recall-Curve')
plt.show()
```
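A natural follow-up (shown here as a sketch on made-up toy inputs, not part of the analysis above) is to pick an operating threshold from the curve, for instance the one that maximizes F1:

```python
import numpy as np

# Pick the threshold maximizing F1 = 2*P*R / (P+R) from the arrays returned
# by sklearn's precision_recall_curve. Note that `precision` and `recall`
# have one more entry than `thresholds`, so the last point is dropped.
def best_f1_threshold(precision, recall, thresholds):
    p, r = np.asarray(precision[:-1]), np.asarray(recall[:-1])
    f1 = 2 * p * r / np.clip(p + r, 1e-12, None)  # guard against 0/0
    i = int(np.argmax(f1))
    return thresholds[i], f1[i]

# Toy example with made-up curve values:
thr, f1 = best_f1_threshold([0.5, 0.8, 1.0], [1.0, 0.6, 0.0], [0.3, 0.7])
```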
### Random Forest
```
rf_model = RandomForestRegressor (n_estimators=100, oob_score=True, random_state=42 )
rf_model.fit(X_test,y_test)
rf_model.oob_score_
y_oob = rf_model.oob_prediction_
"C-stat: ", roc_auc_score(y_test, y_oob)
```
### Crossvalidation
```
scores = cross_val_score(rf_model, X_test,y_test, cv=5)
print(scores)
print(sum(scores)/5)
for ntrees in range(1,20,3):
for depth in range (1,11):
rf_model = RandomForestClassifier(max_depth=depth, n_estimators=ntrees)
scores = cross_val_score(rf_model, X_test,y_test, cv=5)
print(ntrees, depth, sum(scores)/5)
rf_model = RandomForestClassifier(random_state=42)
params = {
'max_depth':[2,3,4,5,6],
'n_estimators':[1,3,5,7,10,15,20]
}
g = GridSearchCV(rf_model, param_grid=params)
g.fit (X_test,y_test)
g.score(X_test,y_test)
g.best_params_
g.cv_results_['mean_test_score'].reshape((5,7))
mtx= g.cv_results_['mean_test_score'].reshape((5,7))
sns.heatmap(mtx)
```
| github_jupyter |
# Module 2: Scraping with Selenium
## LATAM Airlines
<a href="https://www.latam.com/es_ar/"><img src="https://i.pinimg.com/originals/dd/52/74/dd5274702d1382d696caeb6e0f6980c5.png" width="420"></img></a>
<br>
We are going to scrape the Latam site to look up flight data as a function of origin and destination, date, and cabin. The information we expect to obtain for each flight is:
- Available price(s)
- Departure and arrival times (duration)
- Stopover information
**Let's get started!**
Let's use what we have learned so far to achieve the proposed goal
```
import requests
from bs4 import BeautifulSoup
url = 'https://www.latam.com/es_ar/apps/personas/booking?fecha1_dia=18&fecha1_anomes=2019-12&auAvailability=1&ida_vuelta=ida&vuelos_origen=Buenos%20Aires&from_city1=BUE&vuelos_destino=Madrid&to_city1=MAD&flex=1&vuelos_fecha_salida_ddmmaaaa=18/12/2019&cabina=Y&nadults=1&nchildren=0&ninfants=0&cod_promo=#/'
r = requests.get(url)
r.status_code
s = BeautifulSoup(r.text, 'lxml')
print(s.prettify())
```
We can see that the page's response does not contain the information we are looking for, since it only appears after the JavaScript code included in the response is executed.
## Selenium
Selenium is a tool that will let us control a browser, so we can use the JavaScript engine's functionality to load the content that does not come in the page's HTML. For this we need the `webdriver` module.
```
from selenium import webdriver
```
Step 1: instantiate a browser **driver**
```
options = webdriver.ChromeOptions()
options.add_argument('--incognito')
driver = webdriver.Chrome(executable_path='../chromedriver', options=options)
```
Step 2: have the browser load the web page.
```
driver.get(url)
```
Step 3: extract the information from the page
```
vuelos = driver.find_elements_by_xpath('//li[@class="flight"]')
vuelos
vuelo = vuelos[0]
vuelo
# Departure time
vuelo.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime')
# Arrival time
vuelo.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime')
# Flight duration
vuelo.find_element_by_xpath('.//span[@class="duration"]/time').get_attribute('datetime')
boton_escalas = vuelo.find_element_by_xpath('.//div[@class="flight-summary-stops-description"]/button')
boton_escalas.click()
segmentos = vuelo.find_elements_by_xpath('//div[@class="segments-graph"]/div[@class="segments-graph-segment"]')
segmentos
escalas = len(segmentos) - 1
escalas
segmento = segmentos[0]
# Origin
segmento.find_element_by_xpath('.//div[@class="departure"]/span[@class="ground-point-name"]').text
# Departure time
segmento.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime')
# Destination
segmento.find_element_by_xpath('.//div[@class="arrival"]/span[@class="ground-point-name"]').text
# Arrival time
segmento.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime')
# Flight duration
segmento.find_element_by_xpath('.//span[@class="duration flight-schedule-duration"]/time').get_attribute('datetime')
# Flight number
segmento.find_element_by_xpath('.//span[@class="equipment-airline-number"]').text
# Aircraft model
segmento.find_element_by_xpath('.//span[@class="equipment-airline-material"]').text
# Stopover duration
segmento.find_element_by_xpath('.//div[@class="stop connection"]//p[@class="stop-wait-time"]//time').get_attribute('datetime')
driver.find_element_by_xpath('//div[@class="modal-dialog"]//button[@class="close"]').click()
vuelo.click()
tarifas = vuelo.find_elements_by_xpath('.//div[@class="fares-table-container"]//tfoot//td[contains(@class, "fare-")]')
tarifas
precios = []
for tarifa in tarifas:
nombre = tarifa.find_element_by_xpath('.//label').get_attribute('for')
moneda = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="currency-symbol"]').text
valor = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="value"]').text
dict_tarifa={nombre:{'moneda':moneda, 'valor':valor}}
precios.append(dict_tarifa)
print(dict_tarifa)
def obtener_precios(vuelo):
'''
Function that returns a list of dictionaries with the different fares
'''
tarifas = vuelo.find_elements_by_xpath('.//div[@class="fares-table-container"]//tfoot//td[contains(@class, "fare-")]')
precios = []
for tarifa in tarifas:
nombre = tarifa.find_element_by_xpath('.//label').get_attribute('for')
moneda = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="currency-symbol"]').text
valor = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="value"]').text
dict_tarifa={nombre:{'moneda':moneda, 'valor':valor}}
precios.append(dict_tarifa)
return precios
def obtener_datos_escalas(vuelo):
'''
Function that returns a list of dictionaries with the information
about each flight's stopovers
'''
segmentos = vuelo.find_elements_by_xpath('//div[@class="segments-graph"]/div[@class="segments-graph-segment"]')
info_escalas = []
for segmento in segmentos:
# Origin
origen = segmento.find_element_by_xpath('.//div[@class="departure"]/span[@class="ground-point-name"]').text
# Departure time
dep_time = segmento.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime')
# Destination
destino = segmento.find_element_by_xpath('.//div[@class="arrival"]/span[@class="ground-point-name"]').text
# Arrival time
arr_time = segmento.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime')
# Flight duration
duracion_vuelo = segmento.find_element_by_xpath('.//span[@class="duration flight-schedule-duration"]/time').get_attribute('datetime')
# Flight number
numero_vuelo = segmento.find_element_by_xpath('.//span[@class="equipment-airline-number"]').text
# Aircraft model
modelo_avion = segmento.find_element_by_xpath('.//span[@class="equipment-airline-material"]').text
if segmento != segmentos[-1]:
# Stopover duration
duracion_escala = segmento.find_element_by_xpath('.//div[@class="stop connection"]//p[@class="stop-wait-time"]//time').get_attribute('datetime')
else:
duracion_escala = ''
data_dict={
'origen':origen,
'dep_time':dep_time,
'destino':destino,
'arr_time':arr_time,
'duracion_vuelo':duracion_vuelo,
'numero_vuelo':numero_vuelo,
'modelo_avion':modelo_avion,
'duracion_escala':duracion_escala,
}
info_escalas.append(data_dict)
return info_escalas
def obtener_tiempos(vuelo):
'''
Function that returns a dictionary with each flight's departure and arrival
times, including the duration.
Note: the flight duration is not the arrival time minus the departure time,
because there can be a time difference between the origin and the destination.
'''
# Departure time
salida = vuelo.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime')
# Arrival time
llegada = vuelo.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime')
# Duration
duracion = vuelo.find_element_by_xpath('.//span[@class="duration"]/time').get_attribute('datetime')
tiempos = {'hora_salida': salida, 'hora_llegada': llegada, 'duracion': duracion}
return tiempos
def obtener_info(driver):
vuelos = driver.find_elements_by_xpath('//li[@class="flight"]')
print(f'Found {len(vuelos)} flights.')
print('Starting scraping...')
info = []
for vuelo in vuelos:
# get each flight's overall times
tiempos = obtener_tiempos(vuelo)
# Click the stopovers button
vuelo.find_element_by_xpath('.//div[@class="flight-summary-stops-description"]/button').click()
escalas = obtener_datos_escalas(vuelo)
# Close the modal
driver.find_element_by_xpath('//div[@class="modal-dialog"]//button[@class="close"]').click()
# Click the flight to see the prices
vuelo.click()
precios = obtener_precios(vuelo)
vuelo.click()
info.append({'precios':precios, 'tiempos': tiempos, 'escalas':escalas})
return info
import time
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
options = webdriver.ChromeOptions()
options.add_argument('--incognito')
driver = webdriver.Chrome(executable_path='../chromedriver', options=options)
driver.get(url)
# Introduce a delay
delay = 10
try:
# smart (explicit) wait for the results to appear
vuelo = WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.XPATH, '//li[@class="flight"]')))
print('The page finished loading')
info_vuelos = obtener_info(driver)
except TimeoutException:
print('The page took too long to load')
info_vuelos = []
driver.close()
info_vuelos
```
Step 4: close the browser
```
driver.close()
```
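The `datetime` attributes scraped above come back as ISO-8601 duration strings such as `PT8H30M` rather than plain times. As a minimal sketch of how to turn them into something comparable (assuming only hour/minute components, which is typical for flight durations), a small stdlib-only parser:

```python
import re

def parse_iso_duration(value):
    """Convert an ISO-8601 duration like 'PT8H30M' to total minutes."""
    match = re.fullmatch(r'PT(?:(\d+)H)?(?:(\d+)M)?', value)
    if not match:
        raise ValueError(f'Unrecognized duration: {value}')
    hours = int(match.group(1) or 0)
    minutes = int(match.group(2) or 0)
    return hours * 60 + minutes

print(parse_iso_duration('PT8H30M'))  # 510
```

This makes it easy to sort or filter the scraped flights by duration after the fact.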
# Data setup
```
#Uploading Dataset
from google.colab import files
uploaded = files.upload()
# ignore the error
!pip install -U numpy pandas scikit-learn
import os
import glob
import datetime
from collections import defaultdict
import pandas as pd
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
import warnings
def warn(*args, **kwargs):
pass
warnings.warn = warn
sequence_track_df = pd.read_csv('sequence_data.csv')
print(sequence_track_df)
track_info_df = pd.read_csv('track_info.csv')
print(track_info_df)
sequence_track_df = sequence_track_df.rename(columns={"track_id_clean": "track_id"})
df = pd.merge(track_info_df, sequence_track_df, on='track_id')
len(df)
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
print(df.corr())
# Checking correlation among features.
df.dtypes
df.iloc[1]
```
# Preprocessing
```
df.session_length.value_counts().plot(kind = 'bar')
df.hour_of_day.value_counts().plot(kind="bar")
df.session_id.nunique()
hist_user_behavior_reason_start_map = {}
for idx, val in enumerate(df.hist_user_behavior_reason_start.unique()):
hist_user_behavior_reason_start_map[val] = idx
hist_user_behavior_reason_end_map = {}
for idx, val in enumerate(df.hist_user_behavior_reason_end.unique()):
hist_user_behavior_reason_end_map[val] = idx
context_type_map = {}
for idx, val in enumerate(df.context_type.unique()):
context_type_map[val] = idx
mode_map = {}
for idx, val in enumerate(df.loc[:, "mode"].unique()):
mode_map[val] = idx
session_id_map = {}
for idx, val in enumerate(df.loc[:, "session_id"].unique()):
session_id_map[val] = idx
track_id_map = {}
for idx, val in enumerate(df.loc[:, "track_id"].unique()):
track_id_map[val] = idx
skip_time_map = {
1: "skip_1",
2: "skip_2",
3: "skip_3",
4: "not_skipped",
5: "otherwise"
}
pause_behaviour_map = {
1: "no_pause_before_play",
2: "short_pause_before_play",
3: "long_pause_before_play",
4: "otherwise"
}
def convert_data_to_ordinal(df, skip_type):
df['track_id_int'] = df.apply(lambda row : track_id_map[row.track_id], axis = 1)
df['session_id_int'] = df.apply(lambda row : session_id_map[row.session_id], axis = 1)
df['weekday'] = df.apply(lambda row : datetime.datetime.strptime(row.date, "%Y-%m-%d").weekday(), axis = 1)
df['context_type_int'] = df.apply(lambda row : context_type_map[row.context_type], axis = 1)
df['hist_user_behavior_reason_end_int'] = df.apply(lambda row : hist_user_behavior_reason_end_map[row.hist_user_behavior_reason_end], axis = 1)
df['hist_user_behavior_reason_start_int'] = df.apply(lambda row : hist_user_behavior_reason_start_map[row.hist_user_behavior_reason_start], axis = 1)
df['mode_int'] = df.apply(lambda row : mode_map[row['mode']], axis = 1)
df['hist_user_behavior_is_shuffle_int'] = df.apply(lambda row : 1 if row.hist_user_behavior_is_shuffle else 0, axis = 1)
df['premium_int'] = df.apply(lambda row : 1 if row.premium else 0, axis = 1)
df['pause_behaviour'] = df.apply(lambda row : 1 if row.no_pause_before_play else 2 if row.short_pause_before_play else 3 if row.long_pause_before_play else 4, axis = 1)
df['skip_time'] = df.apply(lambda row : 1 if row.skip_1 else 2 if row.skip_2 else 3 if row.skip_3 else 4 if row.not_skipped else 5, axis = 1)
if skip_type == 1:
df['skipped'] = df.apply(lambda row : 1 if row.skip_1 else 0, axis = 1)
elif skip_type == 2:
df['skipped'] = df.apply(lambda row : 1 if row.skip_2 else 0, axis = 1)
elif skip_type == 3:
df['skipped'] = df.apply(lambda row : 1 if row.skip_3 else 0, axis = 1)
else:
df['skipped'] = df.apply(lambda row : 1 if row.not_skipped else 0, axis = 1)
df['session_poss_amal'] = df.apply(lambda row : str(int(row.session_id_int)) + '_' + str(int(row.session_position)), axis = 1)
return df
# Plot of distribution of data based on skip behaviour
def plot_skip_behaviour_distribution(df):
df = convert_data_to_ordinal(df, 1)
df.skip_time.value_counts().plot(kind='bar')
# Plot of distribution of data based on weekday
def plot_weekday_distribution(df):
df = convert_data_to_ordinal(df, 1)
df.weekday.value_counts().plot(kind='bar')
def df_drop_redundant(df):
return df.drop(columns=['skip_1', 'skip_2', 'skip_3', 'not_skipped', 'date', 'long_pause_before_play', 'short_pause_before_play', 'no_pause_before_play', 'track_id', 'session_id', 'context_type', 'hist_user_behavior_reason_end', 'hist_user_behavior_reason_start', 'mode', 'hist_user_behavior_is_shuffle', 'premium','skip_time'])
def df_setup(n, df):
df_map = {}
for i in range(1, n + 1):
df_map[i] = df.copy(deep=True)
df_map[i].columns = ['last_' + str(i) + '_' + str(col) for col in df_map[i].columns]
for i in range(1, n + 1):
# print('last_' + str(i) + '_session_poss_amal')
df['last_' + str(i) + '_session_poss_amal'] = df.apply(lambda row : str(int(row.session_id_int)) + '_' + str(int(row.session_position - i)) if (row.session_position >= i + 1) else None, axis = 1)
df = pd.merge(df, df_map[i], on="last_" + str(i) + "_session_poss_amal")
df_map.pop(i)
df = df.drop(columns=['last_' + str(i) + '_session_poss_amal', 'last_' + str(i) + '_session_id_int', 'last_' + str(i) + '_session_position', 'last_' + str(i) + '_session_length'])
df = df.drop(columns=['session_poss_amal'])
return df
def upscaling(x_train, y_train):
x = pd.concat([x_train, y_train], axis=1)
# separate minority and majority classes
not_skipped = x[x.skipped==0]
skipped = x[x.skipped==1]
skipped_upsampled = resample(skipped,
replace=True, # sample with replacement
n_samples=len(not_skipped), # match number in majority class
random_state=27) # reproducible results
# combine majority and upsampled minority
upsampled = pd.concat([not_skipped, skipped_upsampled])
x_train = upsampled[upsampled.columns.difference(['skip_time', 'skipped'])]
y_train = upsampled.loc[:, upsampled.columns == 'skipped']
# print(upsampled.skipped.value_counts())
return (x_train, y_train)
def scale_data(x_train, x_test):
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
# print(x_train)
# print(x_test)
return (x_train, x_test)
```
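The row-wise `df.apply(lambda row: ..., axis=1)` calls used for the ordinal encodings above work, but they are slow on large frames; `Series.map` produces the same encoding vectorized. A small sketch of the equivalent (with a toy `context_type` column, not the real dataset):

```python
import pandas as pd

df = pd.DataFrame({'context_type': ['radio', 'playlist', 'radio']})
context_type_map = {val: idx for idx, val in enumerate(df['context_type'].unique())}

# Same result as df.apply(lambda row: context_type_map[row.context_type], axis=1),
# but vectorized, so it scales far better on millions of rows.
df['context_type_int'] = df['context_type'].map(context_type_map)
print(df['context_type_int'].tolist())  # [0, 1, 0]
```

The same pattern applies to the session, track, mode, and behaviour-reason maps built above.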
# Analysis
```
def report_confusion_metrics(y_test, y_pred, algo_name):
lda_tn, lda_fp, lda_fn, lda_tp = confusion_matrix(y_test, y_pred).ravel()
lda_tpr = lda_tp/(lda_tp + lda_fn)
lda_tnr = lda_tn/(lda_tn + lda_fp)
lda_accuracy = (lda_tp + lda_tn) / (lda_tp + lda_fn + lda_tn + lda_fp)
lda_f1_accuracy = f1_score(y_test, y_pred)
confusion_matrix_info = pd.DataFrame({"TP": float(lda_tp),
"FP": lda_fp,
"TN": lda_tn,
"FN": lda_fn,
"TPR": lda_tpr,
"TNR": lda_tnr,
"Accuracy": float(lda_accuracy),
"F1 Accuracy": float(lda_f1_accuracy)},index=[algo_name])
print(confusion_matrix_info.to_markdown())
def generate_logistics_regression_results(x_train, y_train, x_test, y_test):
logistic_regression = LogisticRegression(max_iter=1000)
logistic_regression.fit(x_train, y_train.values.ravel())
logistic_regression_y_pred = logistic_regression.predict(x_test)
report_confusion_metrics(y_test, logistic_regression_y_pred, "Logistic Regression")
def generate_random_forest_results(x_train, y_train, x_test, y_test):
random_forest_classifier = RandomForestClassifier(random_state=0)
random_forest_classifier.fit(x_train, y_train.values.ravel())
random_forest_classifier_pred = random_forest_classifier.predict(x_test)
report_confusion_metrics(y_test, random_forest_classifier_pred, "Random Forest Classifier")
def generate_lda_results(x_train, y_train, x_test, y_test):
lda = LDA(solver="svd")
lda.fit(x_train, y_train.values.ravel())
lda_pred = lda.predict(x_test)
report_confusion_metrics(y_test, lda_pred, "LDA Classifier")
def generate_qda_results(x_train, y_train, x_test, y_test):
qda = QDA()
qda.fit(x_train, y_train.values.ravel())
qda_pred = qda.predict(x_test)
report_confusion_metrics(y_test, qda_pred, "QDA Classifier")
def generate_linear_regression_results(x_train, y_train, x_test, y_test):
linear_regression = LinearRegression()
linear_regression.fit(x_train, y_train.values.ravel())
linear_regression_y_pred = linear_regression.predict(x_test)
binary_linear_regression_y_pred = []
for i in linear_regression_y_pred:
if i <= 0.5:
binary_linear_regression_y_pred.append(0)
else:
binary_linear_regression_y_pred.append(1)
report_confusion_metrics(y_test, binary_linear_regression_y_pred, "Linear Regression")
def generate_knn_classifier_results(n, p, x_train, y_train, x_test, y_test):
knn = KNeighborsClassifier(n_neighbors=n, p=p, metric='minkowski', n_jobs=-1)
knn.fit(x_train, y_train.values.ravel())
y_pred = knn.predict(x_test)
report_confusion_metrics(y_test, y_pred, "KNN Classifier: No. of neighbours= " + str(n) + " and Power parameter= " + str(p))
def generate_gaussian_naive_bayes_results(x_train, y_train, x_test, y_test):
gnb = GaussianNB()
gnb_pred = gnb.fit(x_train, y_train).predict(x_test)
report_confusion_metrics(y_test, gnb_pred, "Gaussian Naive Bayes Classifier")
def generate_gradient_boosting_classifier(x_train, y_train, x_test, y_test):
gb_classifier = GradientBoostingClassifier(max_depth=1, random_state=0)
gb_classifier.fit(x_train, y_train.values.ravel())
gb_classifier_pred = gb_classifier.predict(x_test)
report_confusion_metrics(y_test, gb_classifier_pred, "Gradient Boosting Classifier")
def generate_adaptive_boosting_classifier(x_train, y_train, x_test, y_test):
ab_classifier = AdaBoostClassifier(random_state=0)
ab_classifier.fit(x_train, y_train.values.ravel())
ab_classifier_pred = ab_classifier.predict(x_test)
report_confusion_metrics(y_test, ab_classifier_pred, "Adaptive Boosting Classifier")
def create_train_test_data(train_df, test_df):
x_train = train_df[train_df.columns.difference(['skip_time', 'skipped'])]
x_test = test_df[test_df.columns.difference(['skipped', 'skip_time'])]
y_train = train_df.loc[:, train_df.columns == 'skipped']
y_test = test_df.loc[:, test_df.columns == 'skipped']
return x_train, y_train, x_test, y_test
def predict_results(df, skip_type, n):
df = convert_data_to_ordinal(df, skip_type)
df = df_drop_redundant(df)
df = df_setup(n, df)
train_df, test_df = train_test_split(df, test_size=0.5, random_state=42, shuffle=True)
x_train, y_train, x_test, y_test = create_train_test_data(train_df, test_df)
x_train, y_train = upscaling(x_train, y_train)
x_train, x_test = scale_data(x_train, x_test)
print()
generate_logistics_regression_results(x_train, y_train, x_test, y_test)
print()
generate_random_forest_results(x_train, y_train, x_test, y_test)
print()
generate_lda_results(x_train, y_train, x_test, y_test)
print()
generate_qda_results(x_train, y_train, x_test, y_test)
print()
generate_linear_regression_results(x_train, y_train, x_test, y_test)
print()
generate_knn_classifier_results(3, 1, x_train, y_train, x_test, y_test)
print()
generate_knn_classifier_results(3, 2, x_train, y_train, x_test, y_test)
print()
generate_knn_classifier_results(5, 2, x_train, y_train, x_test, y_test)
print()
generate_gaussian_naive_bayes_results(x_train, y_train, x_test, y_test)
print()
generate_gradient_boosting_classifier(x_train, y_train, x_test, y_test)
print()
generate_adaptive_boosting_classifier(x_train, y_train, x_test, y_test)
```
# Results
Due to session disconnects on the Google Colab platform, predictions are made in parts.
```
for skip_type in [1, 2, 3, 4]:
for n in [1, 3, 5, 10, 15, 19]:
print("For skip type =", skip_type, "and n =", n)
predict_results(df.copy(), skip_type, n)
for skip_type in [1]:
for n in [5, 10, 15, 19]:
print("For skip type =", skip_type, "and n =", n)
predict_results(df.copy(), skip_type, n)
for skip_type in [2, 3, 4]:
for n in [1, 3, 5, 10, 15, 19]:
print("For skip type =", skip_type, "and n =", n)
predict_results(df.copy(), skip_type, n)
for skip_type in [3]:
for n in [3, 5, 10, 15, 19]:
print("For skip type =", skip_type, "and n =", n)
predict_results(df.copy(), skip_type, n)
for skip_type in [4]:
for n in [1, 3, 5, 10, 15, 19]:
print("For skip type =", skip_type, "and n =", n)
predict_results(df.copy(), skip_type, n)
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Classifying Images of Clothing
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/courses/udacity_intro_to_tensorflow_for_deep_learning/l03c01_classifying_images_of_clot.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this tutorial, we'll build and train a neural network to classify images of clothing, like sneakers and shirts.
It's okay if you don't understand everything. This is a fast-paced overview of a complete TensorFlow program, with explanations along the way. The goal is to get the general sense of a TensorFlow project, not to catch every detail.
This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow.
## Install and import dependencies
We'll need [TensorFlow Datasets](https://www.tensorflow.org/datasets/), an API that simplifies downloading and accessing datasets, and provides several sample datasets to work with. We're also using a few helper libraries.
```
!pip install -U tensorflow_datasets
from __future__ import absolute_import, division, print_function
# Import TensorFlow and TensorFlow Datasets
import tensorflow as tf
import tensorflow_datasets as tfds
tf.logging.set_verbosity(tf.logging.ERROR)
# Helper libraries
import math
import numpy as np
import matplotlib.pyplot as plt
# Improve progress bar display
import tqdm
import tqdm.auto
tqdm.tqdm = tqdm.auto.tqdm
print(tf.__version__)
# This will go away in the future.
# If this gives an error, you might be running TensorFlow 2 or above
# If so, the just comment out this line and run this cell again
tf.enable_eager_execution()
```
## Import the Fashion MNIST dataset
This guide uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 $\times$ 28 pixels), as seen here:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc) in an identical format to the articles of clothing we'll use here.
This guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.
We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow, using the [Datasets](https://www.tensorflow.org/datasets) API:
```
dataset, metadata = tfds.load('fashion_mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
```
Loading the dataset returns metadata as well as a *training dataset* and *test dataset*.
* The model is trained using `train_dataset`.
* The model is tested against `test_dataset`.
The images are 28 $\times$ 28 arrays, with pixel values in the range `[0, 255]`. The *labels* are an array of integers, in the range `[0, 9]`. These correspond to the *class* of clothing the image represents:
<table>
<tr>
<th>Label</th>
<th>Class</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
Each image is mapped to a single label. Since the *class names* are not included with the dataset, store them here to use later when plotting the images:
```
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
### Explore the data
Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, and 10,000 images in the test set:
```
num_train_examples = metadata.splits['train'].num_examples
num_test_examples = metadata.splits['test'].num_examples
print("Number of training examples: {}".format(num_train_examples))
print("Number of test examples: {}".format(num_test_examples))
```
## Preprocess the data
The value of each pixel in the image data is an integer in the range `[0,255]`. For the model to work properly, these values need to be normalized to the range `[0,1]`. So here we create a normalization function, and then apply it to each image in the test and train datasets.
```
def normalize(images, labels):
images = tf.cast(images, tf.float32)
images /= 255
return images, labels
# The map function applies the normalize function to each element in the train
# and test datasets
train_dataset = train_dataset.map(normalize)
test_dataset = test_dataset.map(normalize)
```
### Explore the processed data
Let's plot an image to see what it looks like.
```
# Take a single image, and remove the color dimension by reshaping
for image, label in test_dataset.take(1):
break
image = image.numpy().reshape((28,28))
# Plot the image - voila a piece of fashion clothing
plt.figure()
plt.imshow(image, cmap=plt.cm.binary)
plt.colorbar()
plt.grid(False)
plt.show()
```
Display the first 25 images from the *training set* and display the class name below each image. Verify that the data is in the correct format and we're ready to build and train the network.
```
plt.figure(figsize=(10,10))
i = 0
for (image, label) in test_dataset.take(25):
image = image.numpy().reshape((28,28))
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image, cmap=plt.cm.binary)
plt.xlabel(class_names[label])
i += 1
plt.show()
```
## Build the model
Building the neural network requires configuring the layers of the model, then compiling the model.
### Setup the layers
The basic building block of a neural network is the *layer*. A layer extracts a representation from the data fed into it. Hopefully, a series of connected layers results in a representation that is meaningful for the problem at hand.
Much of deep learning consists of chaining together simple layers. Most layers, like `tf.keras.layers.Dense`, have internal parameters which are adjusted ("learned") during training.
```
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
```
This network has three layers:
* **input** `tf.keras.layers.Flatten` — This layer transforms the images from a 2d-array of 28 $\times$ 28 pixels to a 1d-array of 784 pixels (28\*28). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn, as it only reformats the data.
* **"hidden"** `tf.keras.layers.Dense`— A densely connected layer of 128 neurons. Each neuron (or node) takes input from all 784 nodes in the previous layer, weighting that input according to hidden parameters which will be learned during training, and outputs a single value to the next layer.
* **output** `tf.keras.layers.Dense` — A 10-node *softmax* layer, with each node representing a class of clothing. As in the previous layer, each node takes input from the 128 nodes in the layer before it. Each node weights the input according to learned parameters, and then outputs a value in the range `[0, 1]`, representing the probability that the image belongs to that class. The sum of all 10 node values is 1.
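The layer sizes described above determine the parameter counts that `model.summary()` would report. As a quick sanity check (pure arithmetic, not part of the tutorial code):

```python
# Each Dense layer has (inputs * outputs) weights plus one bias per output.
hidden_params = 784 * 128 + 128   # 100,480 parameters in the hidden layer
output_params = 128 * 10 + 10     # 1,290 parameters in the output layer
total = hidden_params + output_params
print(total)  # 101770
```

The Flatten layer contributes nothing to this total, since it only reshapes the data.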
### Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's *compile* step:
* *Loss function* — An algorithm for measuring how far the model's outputs are from the desired output. The goal of training is to minimize this loss.
* *Optimizer* — An algorithm for adjusting the inner parameters of the model in order to minimize loss.
* *Metrics* — Used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified.
```
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
## Train the model
First, we define the iteration behavior for the train dataset:
1. Repeat forever by specifying `dataset.repeat()` (the `epochs` parameter described below limits how long we perform training).
2. The `dataset.shuffle(60000)` randomizes the order so our model cannot learn anything from the order of the examples.
3. And `dataset.batch(32)` tells `model.fit` to use batches of 32 images and labels when updating the model variables.
Training is performed by calling the `model.fit` method:
1. Feed the training data to the model using `train_dataset`.
2. The model learns to associate images and labels.
3. The `epochs=5` parameter limits training to 5 full iterations of the training dataset, so a total of 5 * 60000 = 300000 examples.
(Don't worry about `steps_per_epoch`, the requirement to have this flag will soon be removed.)
```
BATCH_SIZE = 32
train_dataset = train_dataset.repeat().shuffle(num_train_examples).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
model.fit(train_dataset, epochs=5, steps_per_epoch=math.ceil(num_train_examples/BATCH_SIZE))
```
As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (or 88%) on the training data.
## Evaluate accuracy
Next, compare how the model performs on the test dataset. Use all examples we have in the test dataset to assess accuracy.
```
test_loss, test_accuracy = model.evaluate(test_dataset, steps=math.ceil(num_test_examples/32))
print('Accuracy on test dataset:', test_accuracy)
```
As it turns out, the accuracy on the test dataset is smaller than the accuracy on the training dataset. This is completely normal, since the model was trained on the `train_dataset`. When the model sees images it has never seen during training, (that is, from the `test_dataset`), we can expect performance to go down.
## Make predictions and explore
With the model trained, we can use it to make predictions about some images.
```
for test_images, test_labels in test_dataset.take(1):
test_images = test_images.numpy()
test_labels = test_labels.numpy()
predictions = model.predict(test_images)
predictions.shape
```
Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:
```
predictions[0]
```
A prediction is an array of 10 numbers. These describe the "confidence" of the model that the image corresponds to each of the 10 different articles of clothing. We can see which label has the highest confidence value:
```
np.argmax(predictions[0])
```
So the model is most confident that this image is a shirt, or `class_names[6]`. And we can check the test label to see this is correct:
```
test_labels[0]
```
We can graph this to look at the full set of 10 class predictions.
```
def plot_image(i, predictions_array, true_labels, images):
predictions_array, true_label, img = predictions_array[i], true_labels[i], images[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img[...,0], cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
```
Let's look at the 0th image, predictions, and prediction array.
```
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
```
Let's plot several images with their predictions. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent (out of 100) for the predicted label. Note that it can be wrong even when very confident.
```
# Plot the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
```
Finally, use the trained model to make a prediction about a single image.
```
# Grab an image from the test dataset
img = test_images[0]
print(img.shape)
```
`tf.keras` models are optimized to make predictions on a *batch*, or collection, of examples at once. So even though we're using a single image, we need to add it to a list:
```
# Add the image to a batch where it's the only member.
img = np.array([img])
print(img.shape)
```
Now predict the image:
```
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
```
`model.predict` returns a list of lists, one for each image in the batch of data. Grab the predictions for our (only) image in the batch:
```
np.argmax(predictions_single[0])
```
And, as before, the model predicts a label of 6 (shirt).
# Exercises
Experiment with different models and see how the accuracy results differ. In particular change the following parameters:
* Set training epochs to 1
* Change the number of neurons in the Dense layer following the Flatten one. For example, try values from as low as 10 up to 512 and see how accuracy changes
* Add additional Dense layers between the Flatten layer and the final Dense(10, activation=tf.nn.softmax) layer, and experiment with different numbers of units in these layers
* Don't normalize the pixel values, and see the effect that has
Remember to enable GPU to make everything run faster (Runtime -> Change runtime type -> Hardware accelerator -> GPU).
Also, if you run into trouble, simply reset the entire environment and start from the beginning:
* Edit -> Clear all outputs
* Runtime -> Reset all runtimes
```
```
# GCP Dataflow Component Sample
A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner.
## Intended use
Use this component to run Python Beam code as a Cloud Dataflow job within a step of a Kubeflow pipeline.
## Runtime arguments
Name | Description | Optional | Data type| Accepted values | Default |
:--- | :----------| :----------| :----------| :----------| :---------- |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |
region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |
## Input data schema
Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.
The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info-level logging before the start of the Cloud Dataflow job in the Python code. This is important because it allows the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.
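A minimal sketch of what this looks like at the top of the Beam Python file (this is the standard library `logging` setup, not component-specific code):

```python
import logging

# Enable info-level logging before any pipeline code runs, so the
# component can read the Dataflow job ID from the emitted log lines.
logging.getLogger().setLevel(logging.INFO)
```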
## Output
Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.
## Cautions & requirements
To use the components, the following requirements must be met:
- Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
## Detailed description
The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.
# Setup
```
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
```
## Install Pipeline SDK
```
!python3 -m pip install 'kfp>=0.1.31' --quiet
```
## Load the component using KFP SDK
```
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
```
## Use the wordcount python sample
In this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
```
!gsutil cat gs://ml-pipeline/sample-pipeline/word-count/wc.py
```
## Example pipeline that uses the component
```
import kfp
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = json.dumps(['--output', f'{staging_dir}/wc/wordcount.out']),
wait_interval = wait_interval)
```
## Submit the pipeline for execution
```
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
```
#### Inspect the output
```
!gsutil cat $output/wc/wordcount.out
```
## References
* [Component python code](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py)
* [Component docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)
* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/dataflow/launch_python/sample.ipynb)
* [Dataflow Python Quickstart](https://cloud.google.com/dataflow/docs/quickstarts/quickstart-python)
<small><small><i>
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/05_Python_Files)**
</i></small></small>
# Python Directory and Files Management
In this class, you'll learn about file and directory management in Python, i.e. creating a directory, renaming it, listing all directories, and working with them.
## Python Directory
If there are a large number of **[files](https://github.com/milaan9/05_Python_Files/blob/main/001_Python_File_Input_Output.ipynb)** to handle in our Python program, we can arrange our code within different directories to make things more manageable.
A directory or folder is a collection of files and subdirectories. Python has the **`os`** **[module](https://github.com/milaan9/04_Python_Functions/blob/main/007_Python_Function_Module.ipynb)** that provides us with many useful methods to work with directories (and files as well).
### Get Current Directory `getcwd()` -
We can get the present working directory using the **`getcwd()`** method of the os module.
This method returns the current working directory in the form of a string. We can also use the **`getcwdb()`** method to get it as a bytes object.
```
import os
print(os.getcwd())
import os
os.getcwdb()
```
The extra backslash in the output is an escape sequence; the **`print()`** function will render the path properly.
### Changing Directory `chdir()` -
We can change the current working directory by using the **`chdir()`** method.
The new path that we want to change into must be supplied as a string to this method. We can use both the forward-slash **`/`** or the backward-slash **`\`** to separate the path elements.
It is safer to use an escape sequence when using the backward slash.
**Syntax:**
**`os.chdir("newdir")`**
>**Remember to**:
1. First copy the path of your current working directory.
2. Windows and macOS users: first create an empty folder named "**xyz**" on your desktop.
```
import os
# Changing a directory to "C:\Users\Deepak\OneDrive\Desktop\xyz"
os.chdir(r"C:\Users\Deepak\OneDrive\Desktop\xyz")
print("Directory changed")
print(os.getcwd())
import os
print(os.getcwd())
import os
# Changing a directory back to original directory "C:\Users\Deepak\01_Learn_Python4Data\05_Python_Files"
os.chdir(r"C:\Users\Deepak\01_Learn_Python4Data\05_Python_Files")
print("Directory changed")
import os
print(os.getcwd())
```
### List Directories and Files `listdir()` -
The **`listdir()`** method displays all files and sub-directories inside a directory.
This method takes in a path and returns a list of subdirectories and files in that path. If no path is specified, it returns the list of subdirectories and files from the current working directory.
```
print(os.getcwd())
os.listdir()
os.listdir('C:\\')
```
### Making a New Directory `mkdir()` -
You can use the **`mkdir()`** method of the os module to create directories in the current directory. You need to supply an argument to this method, which contains the name of the directory to be created.
This method takes in the path of the new directory. If the full path is not specified, the new directory is created in the current working directory.
**Syntax:**
**`os.mkdir("dir_name")`**
```
import os
os.mkdir('python_study')
print("Directory created")
os.listdir()
```
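Note that **`mkdir()`** can only create one directory level at a time. For nested paths there is the related **`makedirs()`** method; a short sketch using a temporary directory (so it runs anywhere) is shown below:

```python
import os
import tempfile

# makedirs() creates intermediate directories as needed;
# exist_ok=True stops it raising an error if the path already exists.
base = tempfile.mkdtemp()
nested = os.path.join(base, 'projects', 'python_study')
os.makedirs(nested, exist_ok=True)
os.makedirs(nested, exist_ok=True)  # second call is a harmless no-op
print(os.path.isdir(nested))  # True
```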
### Renaming a Directory or a File `rename()` -
The **`rename()`** method can rename a directory or a file.
**Syntax:**
**`os.rename(current_file_name, new_file_name)`**
```
os.listdir()
os.rename('python_study','python_learning')
print("Directory renamed")
os.listdir()
import os
os.rename('data_1.txt','my_data.txt')
print("file renamed")
```
<div>
<img src="img/io2.png" width="1000"/>
</div>
### Removing a Directory or a File `remove()` and `rmdir()` -
A file can be removed (deleted) using the **`remove()`** method.
Similarly, the **`rmdir()`** method removes an empty directory. Before removing a directory, all the contents in it should be removed.
```
import os
# This would remove "C:\Users\Deepak\OneDrive\Desktop\xyz" directory.
os.rmdir(r"C:\Users\Deepak\OneDrive\Desktop\xyz")
print("Directory deleted")
os.listdir()
import os
os.remove('my_data.txt')
print("file deleted")
```
>**Note**: The **`rmdir()`** method can only remove empty directories.
In order to remove a non-empty directory, we can use the **`rmtree()`** method inside the **`shutil`** module.
```
import shutil
shutil.rmtree('python_learning')
os.listdir()
```
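As an aside, the standard library's **`pathlib`** module offers an object-oriented alternative to many of the **`os`** calls used above; a brief sketch:

```python
import os
from pathlib import Path

cwd = Path.cwd()                           # like os.getcwd()
entries = [p.name for p in cwd.iterdir()]  # like os.listdir()
print(cwd, len(entries))
```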
# Machine Learning with PySpark - Introduction
> Spark is a framework for working with Big Data. In this chapter you'll cover some background about Spark and Machine Learning. You'll then find out how to connect to Spark using Python and load CSV data.
You'll learn about these topics in this chapter. This is a summary of the DataCamp lecture "Machine Learning with PySpark".
- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Datacamp, PySpark, Machine_Learning]
- image: images/spark_process.png
```
import pyspark
import numpy as np
import pandas as pd
```
## Machine Learning & Spark

- Spark
- Compute across a distributed cluster.
- Data processed in memory
- Well documented high level API

## Connecting to Spark
### Creating a SparkSession
In this exercise, you'll spin up a local Spark cluster using all available cores. The cluster will be accessible via a SparkSession object.
The `SparkSession` class has a builder attribute, which is an instance of the `Builder` class. The `Builder` class exposes three important methods that let you:
- specify the location of the master node;
- name the application (optional); and
- retrieve an existing `SparkSession` or, if there is none, create a new one.
The `SparkSession` class has a `version` attribute which gives the version of Spark.
Find out more about `SparkSession` [here](https://spark.apache.org/docs/3.0.0/api/python/pyspark.sql.html#pyspark.sql.SparkSession).
Once you are finished with the cluster, it's a good idea to shut it down, which will free up its resources, making them available for other processes.
```
from pyspark.sql import SparkSession
# Create SparkSession object
spark = SparkSession.builder.master('local[*]').appName('test').getOrCreate()
# What version of Spark?
print(spark.version)
# Terminate the cluster
spark.stop()
```
## Loading Data
### Loading flights data
In this exercise you're going to load some airline flight data from a CSV file. To ensure that the exercise runs quickly, these data have been trimmed down to only 50,000 records. You can get a larger dataset in the same format [here](https://assets.datacamp.com/production/repositories/3918/datasets/e1c1a03124fb2199743429e9b7927df18da3eacf/flights-larger.csv).
Notes on CSV format:
- fields are separated by a comma (this is the default separator) and
- missing data are denoted by the string 'NA'.
Data dictionary:
- `mon` — month (integer between 1 and 12)
- `dom` — day of month (integer between 1 and 31)
- `dow` — day of week (integer; 1 = Monday and 7 = Sunday)
- `org` — origin airport (IATA code)
- `mile` — distance (miles)
- `carrier` — carrier (IATA code)
- `depart` — departure time (decimal hour)
- `duration` — expected duration (minutes)
- `delay` — delay (minutes)
```
spark = SparkSession.builder.master('local[*]').appName('flights').getOrCreate()
# Read data from CSV file
flights = spark.read.csv('./dataset/flights-larger.csv', sep=',', header=True, inferSchema=True,
nullValue='NA')
# Get number of records
print("The data contain %d records." % flights.count())
# View the first five records
flights.show(5)
# Check column data types
flights.printSchema()  # printSchema() prints directly and returns None
print(flights.dtypes)
```
### Loading SMS spam data
You've seen that it's possible to infer data types directly from the data. Sometimes it's convenient to have direct control over the column types. You do this by defining an explicit schema.
The file `sms.csv` contains a selection of SMS messages which have been classified as either 'spam' or 'ham'. These data have been adapted from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/sms+spam+collection). There are 5,574 SMS messages in total, of which 747 have been labelled as spam.
Notes on CSV format:
- no header record and
- fields are separated by a semicolon (this is not the default separator).
Data dictionary:
- `id` — record identifier
- `text` — content of SMS message
- `label` — spam or ham (integer; 0 = ham and 1 = spam)
```
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
# Specify column names and types
schema = StructType([
StructField("id", IntegerType()),
StructField("text", StringType()),
StructField("label", IntegerType())
])
# Load data from a delimited file
sms = spark.read.csv('./dataset/sms.csv', sep=';', header=False, schema=schema)
# Print schema of DataFrame
sms.printSchema()
```
<a href="https://colab.research.google.com/github/jan-kreischer/UZH_ML4NLP/blob/main/Project-01/index_jan.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Exercise 01 - Linear Classification
## Dependencies
```
!pip install demoji
!pip install googletrans==4.0.0rc1
```
## Imports
```
from googletrans import Translator
translator = Translator()
import csv
import re
import numpy as np
import pandas as pd
pd.set_option('display.max_rows', 200)
pd.set_option('display.max_columns', 200)
pd.set_option('display.width', 4000)
from io import StringIO
import requests
import matplotlib.pyplot as plt
import demoji
from sklearn.utils import resample
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
url_train_dev = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vTOZ2rC82rhNsJduoyKYTsVeH6ukd7Bpxvxn_afOibn3R-eadZGXu82eCU9IRpl4CK_gefEGsYrA_oM/pub?gid=1863430984&single=true&output=tsv'
url_test = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vT-KNR9nuYatLkSbzSRgpz6Ku1n4TN4w6kKmFLkA6QJHTfQzmX0puBsLF7PAAQJQAxUpgruDd_RRgK7/pub?gid=417546901&single=true&output=tsv'
#translated_data = pd.read_pickle('sample_data/augmented_data.pkl')
```
## Constants
```
TARGET_COLUMN = 'label'
TWEET_COLUMN = 'tweet'
SAMPLE_THRESHOLD = 20
```
## 1. Data Acquisition
```
def load_dataset(url):
r = requests.get(url)
data = r.content.decode('utf8')
df = pd.read_csv(StringIO(data), sep='\t')
df.columns = ['tweet', 'label']
return df
training_data = load_dataset(url_train_dev)
test_data = load_dataset(url_test)
dataset = pd.concat([training_data, test_data], axis=0) # Merge into one dataset for the pre-processing
print("The length of the combined dataset is {0} training samples + {1} test samples = {2} samples".format(len(training_data), len(test_data), len(dataset)))
dataset = dataset.sample(frac=1).reset_index(drop=True) # Randomly shuffle the data
dataset.head(10)
```
## 2. Data Exploration
```
def data_exploration(df):
n_labels = len(np.unique(df["label"]))
df = df.sort_values('label')
print("Dataset contains the columns: {}".format(list(df.keys())))
print("with a total of {} observations".format(len(df)))
print("and {} different possible labels.".format(n_labels))
print("The unique labels are {}".format(df["label"].unique()))
plt.figure(figsize=(15, 3))
plt.hist(df["label"], bins=n_labels)
plt.xticks(rotation=90)
plt.yscale("log")
plt.xlabel("Language")
plt.ylabel("#Occurences")
plt.show()
def get_underrepresented_languages(df, target_column, sample_threshold):
df = df.groupby(target_column).size().to_frame().reset_index(drop=False).rename(columns={0: 'occurences'})
underrepresented_languages = list(df[df['occurences'] < SAMPLE_THRESHOLD][target_column])
return underrepresented_languages
def print_number_of_underrepresented_languages(df, target_column, sample_threshold):
underrepresented_languages = get_underrepresented_languages(df, target_column, sample_threshold)
print("There are {} languages in this data set with fewer than {} samples.".format(len(underrepresented_languages), sample_threshold))
data_exploration(dataset)
print_number_of_underrepresented_languages(dataset, TARGET_COLUMN, SAMPLE_THRESHOLD)
```
## 3. Text Cleaning
This is generally a good idea, as many text classification tools rely on counting the occurrences of words. If both upper- and lower-case versions of the same word appear in the text, the algorithm will count them as different words even though the meaning is the same. The trade-off is that a capitalised version of a word can carry a distinct meaning (for example, the company Apple vs. the fruit apple), so lowercasing can hurt performance on some datasets. This is one area of NLP where you may try different methods to see how they affect the overall performance of the model.
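The effect is easy to see with plain word counts; a minimal illustration using only the standard library:

```python
from collections import Counter

text = "Apple shares rose while I ate an apple and another apple"

raw_counts = Counter(text.split())          # case-sensitive counting
lower_counts = Counter(text.lower().split())  # counting after lowercasing

print(raw_counts['Apple'], raw_counts['apple'])  # counted separately: 1 and 2
print(lower_counts['apple'])                     # merged after lowercasing: 3
```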
```
def remove_all_emojis(text):
dem = demoji.findall(text)
for item in dem.keys():
text = text.replace(item, '')
return text
def clean_data(df, column):
df = df.copy(deep=True) # Make deep copy of tweets
df[column] = df[column].str.lower() # Transform into all lowercase
patterns = []
retweet_pattern = '^RT'
patterns.append(retweet_pattern)
xml_pattern = '&\S+;'
patterns.append(xml_pattern)
hashtag_pattern = '#[A-Za-z0-9_]+'
patterns.append(hashtag_pattern)
twitter_mention_pattern = '@[A-Za-z0-9_]+'
patterns.append(twitter_mention_pattern)
http_pattern = 'http\S+'
patterns.append(http_pattern)
www_pattern = 'www\S+'
patterns.append(www_pattern)
tab_pattern = '\t'
patterns.append(tab_pattern)
punctuation_pattern = '[!"#$%&\\()*+,-./:;<=>?@\[\]^_`\'{}~]+'
patterns.append(punctuation_pattern)
numeric_pattern = '[0-9]+'
patterns.append(numeric_pattern)
regex = "|".join(patterns)
#df[column] = df[column].apply(lambda elem: re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)|^rt|http.+?", "", elem))
df[column] = df[column].apply(lambda elem: re.sub(r"{}".format(regex), "", elem))
df[column] = df[column].apply(remove_all_emojis)
return df
# Now we want to find out which special characters need to be removed from tweets in order to improve the predictions.
# We go over the printed list and note down the symbols which are not needed for language identification.
# These will be removed in a later step.
languages = list(np.unique(test_data['label']))
for language in languages:
localized_tweets = training_data[training_data['label'] == language]
# Clean and compare them
cleaned_localized_tweets = clean_data(localized_tweets, 'tweet')
comparison_view = pd.concat([localized_tweets.drop(['label'], axis=1), cleaned_localized_tweets], axis=1)
print(comparison_view.head(5))
#print(localized_tweets.head(5))
print("---")
# Symbols like @<mention>, #, http://link !, numeric values (e.g 16000), " do not help for language identification.
cleaned_dataset = clean_data(dataset, TWEET_COLUMN)  # clean the tweet text, not the labels
cleaned_dataset.isnull().values.any() # Dataset does not contain any rows with null values
```
# 4. Data Augmentation
```
def back_translation(df,target_languages=['en']):
translated_data = pd.DataFrame(columns={TWEET_COLUMN, TARGET_COLUMN})
for target_language in target_languages:
for index, row in df.iterrows():
try:
tweet = row['tweet']
source_language = row['label']
translated_data=translated_data.append({'tweet': translator.translate(translator.translate(tweet, dest=target_language).text, dest=source_language).text, 'label': source_language}, ignore_index=True)
except Exception as e:
print(e)
pass
return translated_data
print_number_of_underrepresented_languages(cleaned_dataset, TARGET_COLUMN, SAMPLE_THRESHOLD)
underrepresented_languages = get_underrepresented_languages(cleaned_dataset, TARGET_COLUMN, SAMPLE_THRESHOLD)
upsampled_dataset = cleaned_dataset.copy()
for l in underrepresented_languages:
    if not l.endswith('latn'):
        continue  # only augment latn-script languages; back translation does not work for the others
    underrepresented_language = upsampled_dataset[upsampled_dataset[TARGET_COLUMN]==l]
    upsampled_dataset.drop(upsampled_dataset[upsampled_dataset[TARGET_COLUMN]==l].index, inplace=True, axis=0)
    len_first = len(underrepresented_language)
    underrepresented_language = pd.concat([underrepresented_language, back_translation(underrepresented_language)], axis=0)
len_second = len(underrepresented_language)
if(len_second < SAMPLE_THRESHOLD):
underrepresented_language = resample(underrepresented_language, n_samples=SAMPLE_THRESHOLD)
len_third = len(underrepresented_language)
print("({0}): #{1}=>back_translation=>#{2}=>resampling=>#{3}".format(l, len_first, len_second, len_third))
upsampled_dataset = pd.concat([upsampled_dataset, underrepresented_language], axis=0)
upsampled_dataset.reset_index(drop=True)
upsampled_dataset.to_pickle('./dataset.pkl')
upsampled_dataset.to_csv('./dataset.csv', header=True, index=False)
get_underrepresented_languages(upsampled_dataset, TARGET_COLUMN, SAMPLE_THRESHOLD)
print_number_of_underrepresented_languages(upsampled_dataset, TARGET_COLUMN, SAMPLE_THRESHOLD)
#for l in underrepresented_languages:
# underrepresented_language = upsampled_dataset[upsampled_dataset[TARGET_COLUMN]==l]
# underrepresented_language = resample(underrepresented_language, n_samples=SAMPLE_THRESHOLD)
# upsampled_dataset = pd.concat([upsampled_dataset, underrepresented_language], axis=0)
print("The length of the upsampled dataset is {}".format(len(upsampled_dataset)))
```
# 5. Modeling & Evaluation
```
X = upsampled_dataset[TWEET_COLUMN]
y = upsampled_dataset[TARGET_COLUMN]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=True)
```
## 5.1 SGDClassifier
```
sgd_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf', SGDClassifier()),
])
parameters = {
'tfidf__ngram_range' : [(1,4)],
'tfidf__analyzer': ['char'],
'clf__alpha': [1e-4, 1e-5, 1e-6],
'clf__loss': [ "perceptron", "log", "hinge"],
'clf__penalty': ['elasticnet'],
'clf__class_weight': ['balanced'],
'clf__early_stopping': [True,False]
}
gs_sgd_clf = GridSearchCV(sgd_clf, parameters, cv=5, n_jobs=-1, verbose=10)
gs_sgd_clf.fit(X_train, y_train)
for param_name in sorted(parameters.keys()):
print("%s: %r" % (param_name, gs_sgd_clf.best_params_[param_name]))
print("Accuracy of the sgd_clf on the test data: {}".format(accuracy_score(y_test, gs_sgd_clf.predict(X_test))))
sgd_clf = Pipeline([('tfidf', TfidfVectorizer(analyzer='char', ngram_range=(1,4))),
('clf', SGDClassifier(alpha=1e-06, class_weight='balanced', early_stopping=False, loss='log', penalty='elasticnet')),
])
sgd_clf.fit(X_train, y_train)
print(classification_report(y_test, sgd_clf.predict(X_test)))
```
## 5.2 MultinomialNB
```
mnb_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf', MultinomialNB(fit_prior=True, class_prior=None)),
])
from sklearn.model_selection import GridSearchCV
parameters = {
'tfidf__ngram_range' : [(1,2), (1,3), (1,4)],
'tfidf__analyzer': ['char'],
'clf__alpha': (1, 0.8,0.6, 0.4, 0.2, 0)
}
gs_mnb_clf = GridSearchCV(mnb_clf, parameters, cv=5, n_jobs=-1, verbose=10)
gs_mnb_clf.fit(X_train, y_train)
for param in sorted(parameters.keys()):
print("%s: %r" % (param, gs_mnb_clf.best_params_[param]))
print(classification_report(y_test, gs_mnb_clf.predict(X_test)))
```
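`confusion_matrix` is imported in the imports cell above but never actually used. As a hedged sketch of what it would add to the evaluation, here is a toy example (made-up labels, not the real predictions from the classifiers above):

```python
from sklearn.metrics import confusion_matrix

y_true = ['en', 'en', 'de', 'de', 'fr']
y_pred = ['en', 'de', 'de', 'de', 'fr']

# Rows are true labels, columns are predicted labels,
# in the order given by the `labels` argument.
cm = confusion_matrix(y_true, y_pred, labels=['de', 'en', 'fr'])
print(cm)
```

Off-diagonal entries show which languages are confused with each other, which a bare accuracy score hides.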
# PyNNDescent Performance
How fast is PyNNDescent for approximate nearest neighbor search? How does it compare with other approximate nearest neighbor search algorithms and implementations? To answer these kinds of questions we'll make use of the [ann-benchmarks](https://github.com/erikbern/ann-benchmarks) suite of tools for benchmarking approximate nearest neighbor (ANN) search algorithms. The suite provides a wide array of datasets to benchmark on, and supports a wide array of ANN search libraries. Since the runtime of these benchmarks is quite large we'll be presenting results obtained earlier, and only for a selection of datasets and for the main state-of-the-art implementations. This page thus reflects the performance at a given point in time, and on a specific choice of benchmarking hardware. Implementations may (and likely will) improve, and different hardware will likely result in somewhat different performance characteristics amongst the implementations benchmarked here.
We chose the following implementations of ANN search based on their strong performance in ANN search benchmarks in general:
* Annoy (a tree based algorithm for comparison)
* HNSW from FAISS, Facebooks ANN library
* HNSW from nmslib, the reference implementation of the algorithm
* HNSW from hnswlib, a small spinoff library from nmslib
* ONNG from NGT, a more recent algorithm and implementation with impressive performance
* PyNNDescent version 0.5
Not all the algorithms ran entirely successfully on all the datasets; where an algorithm gave spurious or unrepresentative results we have left it out of the given benchmark.
The ann-benchmarks suite is designed to look at the trade-off in performance between search accuracy and search speed (or other performance statistic, such as index creation time or index size). Since this is a trade-off that can often be tuned by appropriately adjusting parameters, ann-benchmarks handles this by running a predefined range of parameters for each algorithm or implementation. It then finds the [pareto frontier](https://en.wikipedia.org/wiki/Pareto_efficiency#Use_in_engineering) for the optimal speed / accuracy trade-off and presents this as a curve. The various implementations can then be compared in terms of their pareto frontier curves. The default choice of measures for ann-benchmarks puts recall (effective search accuracy) along the x-axis and queries-per-second (search speed) on the y-axis. Thus curves that are further up and / or more to the right are providing better speed and / or more accuracy.
To get a good overview of the relative performance characteristics of the different implementations we'll look at the speed / accuracy trade-off curves for a variety of datasets. This is because the dataset size, dimensionality, distribution and metric can all have non-trivial impacts on performance in various ways, and results for one dataset are not necessarily representative of how things will look for a different dataset. We will introduce each dataset in turn, and then look at the performance curves. To start with we'll consider datasets which use Euclidean distance.
## Euclidean distance
Euclidean distance is the usual notion of distance that we are familiar with in everyday life, just extended to arbitrary dimensions (instead of only two or three). It is defined as $d(\bar{x}, \bar{y}) = \sqrt{\sum_i (x_i - y_i)^2}$ for vectors $\bar{x} = (x_1, x_2, \ldots, x_D)$ and $\bar{y} = (y_1, y_2, \ldots, y_D)$. It is widely used as a distance measure, but can have difficulties with high dimensional data in some cases.
The first dataset we will consider that uses Euclidean distance is the MNIST dataset. MNIST consists of grayscale images of handwritten digits (from 0 to 9). Each digit image is 28 by 28 pixels, which is usually unravelled into a single vectors of 784 dimensions. In total there are 70,000 images in the dataset, and ann-benchmarks uses the usual split into 60,000 training samples and 10,000 test samples. The ANN index is built on the training set, and then the test set is used as the query data for benchmarking.
<div align="middle"><img src="mnist.png" alt="MNIST performance" width=600px></div>
Remember that up and to the right is better. Also note that the y axis (queries per second) is plotted in *log scale* so each major grid step represents an order of magnitude performance difference. We can see that PyNNDescent performs very well here, outpacing the other ANN libraries in the high accuracy range. It is worth noting, however, that for lower accuracy queries it finishes essentially on par with ONNG, and unlike ONNG and nmslib's HNSW implementation, it does not extend to very high performance but low accuracy queries. If speed is absolutely paramount, and you only need to be in the vaguely right ballpark for accuracy then PyNNDescent may not be the right choice here.
The next dataset is Fashion-MNIST. This dataset was designed to be a drop-in replacement for MNIST, but meant to be more challenging for machine learning tasks. Instead of grayscale images of digits it is grayscale images of fashion items (dresses, shirts, pants, boots, sandals, handbags, etc.). Just like MNIST each image is 28 by 28 pixels resulting in 784-dimensional vectors. Also just like MNIST there are 70,000 total images, split into 60,000 training images and 10,000 test images.
<div align="middle"><img src="fmnist.png" alt="Fashion-MNIST performance" width=600px></div>
Again we see a very similar result (although this should not entirely be a surprise given the similarity of the dataset in terms of the number of samples and dimensionality). PyNNDescent performs very well in the high accuracy regime, but does not scale to the very high performance but low accuracy ranges that ONNG and nmslib's HNSW manage. It is also worth noting the clear difference between the various graph based search algorithms and the tree based Annoy -- while Annoy is a very impressive ANN search implementation it compares poorly to the graph based search techniques on these datasets.
Next up is the SIFT dataset. SIFT stands for [Scale-Invariant Feature Transform](https://en.wikipedia.org/wiki/Scale-invariant_feature_transform) and is a technique from computer vision for generating feature vectors from images. For ann-benchmarks this means that there exist some large databases of SIFT features from image datasets which can be used to test nearest neighbor search. In particular the SIFT dataset in ann-benchmarks is a dataset of one million SIFT vectors where each vector is 128-dimensional. This provides a good contrast to the earlier datasets, which had relatively high dimensionality but not an especially large number of samples. For ann-benchmarks the dataset is split into 990,000 training samples and 10,000 test samples for querying.
<div align="middle"><img src="sift.png" alt="SIFT-128 performance" width=600px></div>
Again we see that PyNNDescent performs very well. This time, however, with the more challenging search problem presented by a training set this large, it does produce some lower accuracy searches and in those cases both ONNG and nmslib's HNSW outperform it. It's also worth noting that on this lower dimensional dataset Annoy performs better, comparatively, than on the previous datasets. Still, over the Euclidean distance datasets tested here PyNNDescent remains a clear winner for high accuracy queries. Let's move on to the angular distance based datasets.
## Angular distance
Angular based distances measure the similarity of two vectors in terms of the angle they span -- the greater the angle the larger the distance between the vectors. Thus two vectors of different length can be viewed as being very close as long as they are pointing in the same direction. Another way of looking at this is to imagine that the data is being projected onto a high dimensional sphere (by intersecting a ray in the vectors direction with a unit sphere), and distances are measured in terms of arcs around the sphere.
In practice the most commonly used angular distance is cosine distance, defined as
$$d(\bar{x}, \bar{y}) = 1 - \sum_i \frac{x_i y_i}{\|\bar{x}\|_2 \|\bar{y}\|_2}$$
where $\|\bar{x}\|_2$ denotes the $\ell^2$ [norm](https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm) of $\bar{x}$. To see why this is a measure of angular distance, note that $\sum_i x_i y_i$ is the Euclidean dot product of $\bar{x}$ and $\bar{y}$, and that the Euclidean dot product formula gives $\bar{x}\cdot \bar{y} = \|\bar{x}\|_2 \|\bar{y}\|_2 \cos\theta$ where $\theta$ is the angle between the vectors.
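The cosine distance definition can be checked directly in NumPy -- a small illustrative sketch, not the implementation any of the benchmarked libraries actually use:

```python
import numpy as np

def cosine_distance(x, y):
    """Cosine distance: 1 minus the cosine of the angle between x and y."""
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Two vectors of different lengths pointing in the same direction
# have cosine distance 0 ...
x = np.array([1.0, 2.0, 3.0])
assert np.isclose(cosine_distance(x, 5.0 * x), 0.0)
# ... while orthogonal vectors have cosine distance 1.
assert np.isclose(cosine_distance(np.array([1.0, 0.0]), np.array([0.0, 1.0])), 1.0)
```

Note that scaling either vector leaves the distance unchanged, which is exactly the "project onto the sphere" intuition described above.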
In the case where the vectors all have unit norm the cosine distance reduces to just one minus the dot product of the vectors -- which is sometimes used as an angular distance measure. Indeed, that is the case for our first dataset, the LastFM dataset. This dataset consists of 64-dimensional factor vectors from a recommendation system for the Last FM online music service. It contains 292,385 training samples and 50,000 test samples. Compared to the other datasets explored so far this is considerably lower dimensional, and the distance computation is simpler. Let's see what results we get.
<div align="middle"><img src="lastfm.png" alt="LastFM performance" width=600px></div>
Here we see hnswlib and HNSW from nmslib performing extremely well -- outpacing ONNG, unlike what we saw on the previous Euclidean datasets. The HNSW implementation in FAISS is further behind. While PyNNDescent is not the fastest option on this dataset, it is highly competitive with the two top performing HNSW implementations.
The next dataset is a GloVe dataset of word vectors. The GloVe datasets are generated from a word-word co-occurrence count matrix generated from vast collections of text. Each word that occurs (frequently enough) in the text will get a resulting vector, with the principle that words with similar meanings will be assigned vectors that are similar (in angular distance). The dimensionality of the generated vectors is an input to the GloVe algorithm. For the first of the GloVe datasets we will be looking at the 25 dimensional vectors. Since GloVe vectors were trained using a vast corpus there are over one million different words represented, and thus we have 1,183,514 training samples and 10,000 test samples to work with. This gives us a low dimensional but extremely large dataset to work with.
<div align="middle"><img src="glove25.png" alt="GloVe-25 performance" width=600px></div>
In this case PyNNDescent and hnswlib are the apparent winners -- although PyNNDescent, similar to the earlier examples, performs less well once we get below about 80% accuracy.
Next we'll move up to a higher dimensional version of GloVe vectors. These vectors were trained on the same underlying text dataset, so we have the same number of samples (both for train and test), but now we have 100 dimensional vectors. This makes the problem more challenging as the underlying distance computation is a little more expensive given the higher dimensionality.
<div align="middle"><img src="glove100.png" alt="GloVe-100 performance" width=600px></div>
This time it is ONNG that surges to the front of the pack. Relatively speaking PyNNDescent is not too far behind. This goes to show, however, how much performance can vary based on the exact nature of the dataset: while ONNG was a (relatively) poor performer on the 25-dimensional version of this data with hnswlib out in front, the roles are reversed for this 100-dimensional data.
The last dataset is the NY-Times dataset. This is data generated as dimension reduced (via PCA) [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) vectors of New York Times articles. The resulting dataset has 290,000 training samples and 10,000 test samples in 256 dimensions. This is quite a challenging dataset, and all the algorithms have significantly lower query-per-second performance on this data.
<div align="middle"><img src="nytimes.png" alt="NY-Times performance" width=600px></div>
Here we see that PyNNDescent and ONNG are the best performing implementations, particularly at the higher accuracy range (ONNG has a slight edge on PyNNDescent here).
This concludes our examination of performance for now. Having examined performance for many different datasets, it is clear that the various algorithms and implementations vary in performance depending on the exact nature of the data. Nonetheless, we hope that this has demonstrated that PyNNDescent has excellent performance characteristics across a wide variety of datasets, often performing better than many state-of-the-art implementations.
# Mask R-CNN - Train on NewShapes Dataset
### Notes from implementation
This notebook shows how to train Mask R-CNN on your own dataset. To keep things simple we use a synthetic dataset of shapes (squares, triangles, and circles) which enables fast training. You'd still need a GPU, though, because the network backbone is a Resnet101, which would be too slow to train on a CPU. On a GPU, you can start to get okay-ish results in a few minutes, and good results in less than an hour.
The code of the *Shapes* dataset is included below. It generates images on the fly, so it doesn't require downloading any data. And it can generate images of any size, so we pick a small image size to train faster.
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../')
import tensorflow as tf
import keras.backend as KB
import numpy as np
from mrcnn.datagen import data_generator, load_image_gt
from mrcnn.callbacks import get_layer_output_1,get_layer_output_2
from mrcnn.utils import mask_string
import mrcnn.visualize as visualize
import mrcnn.new_shapes as new_shapes
from mrcnn.prep_notebook import prep_newshapes_train2
import pprint
pp = pprint.PrettyPrinter(indent=2, width=100)
##------------------------------------------------------------------------------------
## Build configuration object
##------------------------------------------------------------------------------------
config = new_shapes.NewShapesConfig()
config.FCN_LAYERS = True
config.BATCH_SIZE = 2 # Batch size is 2 (# GPUs * images/GPU).
config.IMAGES_PER_GPU = config.BATCH_SIZE # Must match BATCH_SIZE
config.STEPS_PER_EPOCH = 2
config.LEARNING_RATE = 0.000001
config.EPOCHS_TO_RUN = 300
config.FCN_INPUT_SHAPE = config.IMAGE_SHAPE[0:2]
config.LAST_EPOCH_RAN = 0
config.WEIGHT_DECAY = 2.0e-4
config.VALIDATION_STEPS = 100
config.REDUCE_LR_FACTOR = 0.2
config.REDUCE_LR_COOLDOWN = 30
config.REDUCE_LR_PATIENCE = 40
config.MIN_LR = 1.0e-10
config.TRAINING_IMAGES = 10000
config.VALIDATION_IMAGES = 2500
config.CHECKPOINT_FOLDER = 'newshape_fcn'
model_file = 'E:\\Models\\newshape_mrcnn\\shapes20180621T1554\\mask_rcnn_shapes_0565.h5'
model, dataset_train, dataset_val, train_generator, val_generator, config = \
prep_newshapes_train2(init_with = model_file, config=config)
# Load and display random samples
image_ids = np.random.choice(dataset_train.image_ids, 4)
for image_id in image_ids:
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names, limit=6)
```
### Print some model information
```
print('\n Inputs: ')
for i, out in enumerate(model.keras_model.inputs):
print(i , ' ', out)
print('\n Outputs: ')
for i, out in enumerate(model.keras_model.outputs):
print(i , ' ', out)
print('\n Losses (model.metrics_names): ')
pp.pprint(model.get_deduped_metrics_names())
# model.keras_model.summary(line_length = 150)
```
## Training - FCN
Train in two stages:
1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.
- #### Alternatively, we can now pass a list of the layers we want to train via `layers`!
2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all"` to train all layers.
```
model.config.MIN_LR
train_layers = ['fcn']
loss_names = ["fcn_norm_loss"]
# config.VALIDATION_STEPS= 125
# config.EPOCHS_TO_RUN = 100
config.STEPS_PER_EPOCH = 1
model.epoch = 0
model.train(dataset_train, dataset_val,
learning_rate = config.LEARNING_RATE,
epochs_to_run = config.EPOCHS_TO_RUN,
# epochs = 25, # total number of epochs to run (accross multiple trainings)
layers = train_layers,
losses = loss_names)
```
## Training head using Keras.model.fit_generator()
```
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
# Wed 09-05-2018
# train_layers = ['mrcnn', 'fpn','rpn']
# loss_names = [ "rpn_class_loss", "rpn_bbox_loss" , "mrcnn_class_loss", "mrcnn_bbox_loss", "mrcnn_mask_loss"]
train_layers = ['mrcnn', 'fpn','rpn']
loss_names = [ "rpn_class_loss", "rpn_bbox_loss" , "mrcnn_class_loss", "mrcnn_bbox_loss" ]
config.STEPS_PER_EPOCH = 8
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs_to_run = config.EPOCHS_TO_RUN,
# epochs = 2500,
# epochs_to_run =2,
layers = train_layers,
losses= loss_names
)
```
## - Training heads using train_in_batches ()
We need to use this method for the time being, as `fit_generator` does not provide easy access to the outputs in Keras callbacks. By training in batches, we pass a batch through the network, pick up the generated RoI detections and bounding boxes, and generate our semantic / gaussian tensors.
```
model.train_in_batches(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE/6,
epochs_to_run = 3,
layers='heads')
```
## Fine Tuning
Fine tune all layers
```
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE / 10,
epochs=211,
layers="all")
```
## Save
```
# Save weights
# Typically not needed because callbacks save after every epoch
# Uncomment to save manually
import os
model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes_2500.h5")
model.keras_model.save_weights(model_path)
```
### Define Data Generators, get next shapes from generator and display loaded shapes
### Define Data Generator
```
train_generator = data_generator(dataset_train, model.config, shuffle=True,
batch_size=model.config.BATCH_SIZE,
augment = False)
val_generator = data_generator(dataset_val, model.config, shuffle=True,
batch_size=model.config.BATCH_SIZE,
augment=False)
```
### Get next shapes from generator and display loaded shapes
```
train_batch_x, train_batch_y = next(train_generator)
# train_batch_x, train_batch_y = next(train_generator)
imgmeta_idx = model.keras_model.input_names.index('input_image_meta')
img_meta = train_batch_x[imgmeta_idx]
for img_idx in range(config.BATCH_SIZE):
image_id = img_meta[img_idx,0]
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
print('Image id: ',image_id)
print('Image meta', img_meta[img_idx])
print('Classes (1: circle, 2: square, 3: triangle ): ',class_ids)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
```
### Push Data thru model using get_layer_output()
```
layers_out = get_layer_output_2(model.keras_model, train_batch_x, 1)
input_gt_class_ids = train_batch_x[4]
target_class_ids = layers_out[5]
mrcnn_class_logits = layers_out[9]
rpn_class_loss = layers_out[13]
rpn_bbox_loss = layers_out[14]
mrcnn_class_loss = layers_out[15]
mrcnn_bbox_loss = layers_out[16]
mrcnn_mask_loss = layers_out[17]
active_class_ids = layers_out[20]
# pred_masks = tf.identity(layers_out[18])
# gt_masks = tf.identity(layers_out[19])
# shape = KB.int_shape(pred_masks)
print(rpn_class_loss, rpn_bbox_loss)
print(mrcnn_class_loss, mrcnn_bbox_loss, mrcnn_mask_loss)
print(active_class_ids)
print()
print(target_class_ids[1])
print()
print(mrcnn_class_logits[1])
print('gt class ids')
print(input_gt_class_ids)
```
### Simulate `mrcnn_class_loss` computation
```
print('\n>>> mrcnn_class_loss_graph ' )
print(' target_class_ids size :', target_class_ids.shape)
print('    pred_class_logits size :', pred_class_logits.shape)
print(' active_class_ids size :', active_class_ids.shape)
target_class_ids = tf.cast(target_class_ids, 'int64')
# Find predictions of classes that are not in the dataset.
pred_class_ids = tf.argmax(pred_class_logits, axis=2)
# TODO: Update this line to work with batch > 1. Right now it assumes all
# images in a batch have the same active_class_ids
pred_active = tf.gather(active_class_ids[0], pred_class_ids)
# Loss
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=target_class_ids, logits=pred_class_logits)
# Erase losses of predictions of classes that are not in the active
# classes of the image.
loss = loss * pred_active
# Compute loss mean. Use only predictions that contribute
# to the loss to get a correct mean.
loss = tf.reduce_sum(loss) / tf.reduce_sum(pred_active)
loss = KB.reshape(loss, [1, 1])
loss   # display the value (a bare 'return' would be a SyntaxError at notebook top level)
```
## Plot Predicted and Ground Truth Probability Heatmaps `pred_gaussian` and `gt_gaussian` (Tensorflow)
`pred_gaussian2` and `gt_gaussian2` from Tensorflow PCN layer
```
from mrcnn.visualize import plot_gaussian   # needed for the plots below
# gt_heatmap   = layers_out[27]          # gt_gaussian
# pred_heatmap= layers_out[24]           # pred_gaussian
gt_heatmap   = layers_out[21]          # gt_gaussian
pred_heatmap= layers_out[18]           # pred_gaussian
print('gt_gaussian heatmap shape : ', gt_heatmap.shape, ' pred_gaussian heatmap shape: ', pred_heatmap.shape)
num_images = 1 # config.IMAGES_PER_GPU
num_classes = config.NUM_CLASSES
img = 2
image_id = img_meta[img,0]
print('Image id: ',image_id)
print('Classes (1: circle, 2: square, 3: triangle ): ')
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
for cls in range(num_classes):
ttl = 'GROUND TRUTH HEATMAP - image : {} class: {} '.format(img,cls)
print(' *** Zout ', gt_heatmap[img,:,:,cls].shape, ttl)
plot_gaussian( gt_heatmap[img,:,:,cls], title = ttl)
ttl = 'PREDICTED heatmap - image : {} class: {} '.format(img,cls)
print(' *** pred_heatmap ', pred_heatmap[img,:,:,cls].shape, ttl)
plot_gaussian(pred_heatmap[img,:,:,cls], title = ttl)
```
### Plot Output from FCN network `fcn_bilinear` and compare with `pred_gaussian`
```
from mrcnn.visualize import plot_gaussian
import matplotlib.pyplot as plt
%matplotlib inline
img = 2
image_id = img_meta[img,0]
print('Image id: ',image_id)
print('Classes (1: circle, 2: square, 3: triangle ): ')
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
Zout  = layers_out[21]   # gt_gaussian
Zout2 = layers_out[12] # fcn_bilinear
print(Zout.shape, Zout2.shape)
num_images = config.IMAGES_PER_GPU
num_classes = config.NUM_CLASSES
for cls in range(num_classes):
ttl = 'GroundTruth - image : {} class: {} '.format(img,cls)
print(' *** Zout ', Zout[img,:,:,cls].shape, ttl)
plot_gaussian( Zout[img,:,:,cls], title = ttl)
ttl = 'FCN_Bilinear- image : {} class: {} '.format(img,cls)
print(' *** Zout2 ', Zout2[img,:,:,cls].shape, ttl)
plot_gaussian(Zout2[img,:,:,cls], title = ttl)
```
### Display ground truth bboxes from Shapes database (using `load_image_gt` )
Here we are displaying the ground truth bounding boxes as provided by the dataset
```
img = 0
image_id = img_meta[img,0]
print('Image id: ',image_id)
p_original_image, p_image_meta, p_gt_class_id, p_gt_bbox, p_gt_mask = \
load_image_gt(dataset_train, config, image_id, augment=False, use_mini_mask=True)
# print(p_gt_class_id.shape, p_gt_bbox.shape, p_gt_mask.shape)
print(p_gt_bbox[0:3,:])
print(p_gt_class_id)
visualize.draw_boxes(p_original_image, p_gt_bbox[0:3])
# image_id = img_meta[img,0]
# print('Image id: ',image_id)
# p_original_image, p_image_meta, p_gt_class_id, p_gt_bbox, p_gt_mask = \
# load_image_gt(dataset_train, config, image_id, augment=False, use_mini_mask=True)
# # print(p_gt_class_id.shape, p_gt_bbox.shape, p_gt_mask.shape)
# print(p_gt_bbox)
# print(p_gt_class_id)
# visualize.draw_boxes(p_original_image, p_gt_bbox)
```
### Display Predicted Ground Truth Bounding Boxes `gt_tensor` and `gt_tensor2`
layers_out[22] `gt_tensor` is based on input_gt_class_ids and input_normlzd_gt_boxes
layers_out[28] `gt_tensor2` is based on input_gt_class_ids and input_normlzd_gt_boxes, generated using Tensorflow
Display the Ground Truth bounding boxes from the tensor we've constructed
```
from mrcnn.utils import stack_tensors, stack_tensors_3d
# print(gt_bboxes)
# visualize.display_instances(p_original_image, p_gt_bbox, p_gt_mask, p_gt_class_id,
# dataset_train.class_names, figsize=(8, 8))
# pp.pprint(gt_bboxes)
img = 0
image_id = img_meta[img,0]
print('Image id: ',image_id)
p_image, p_image_meta, p_gt_class_id, p_gt_bbox, p_gt_mask = \
load_image_gt(dataset_train, config, image_id, augment=False, use_mini_mask=True)
gt_bboxes_stacked = stack_tensors_3d(layers_out[22][img])
print(gt_bboxes_stacked)
visualize.draw_boxes(p_image, gt_bboxes_stacked[0:2,2:6])
```
## Display RoI proposals `pred_bboxes` generated for one class
Display bounding boxes from tensor of proposals produced by the network
Square: 1 , Circle:2 , Triangle 3
```
img = 0
cls = 1 # <==== Class to display
pred_tensor = layers_out[19]    # numpy pred_tensor
# pred_tensor = layers_out[25] # tensorflow pred_tensor
image_id = img_meta[img,0]
print('Image id: ',image_id)
p_image, p_image_meta, p_gt_class_id, p_gt_bbox, p_gt_mask = \
load_image_gt(dataset_train, config, image_id, augment=False, use_mini_mask=True)
print(p_image_meta)
print(pred_tensor[img,cls,:].shape)
print(pred_tensor[img,cls])
#+'-'+str(np.around(int(x[1]),decimals = 3))
# class id: str(int(x[6]))+'-'+
caps = [str(int(x[0]))+'-'+str(np.around(x[1],decimals = 3)) for x in pred_tensor[img,cls,:].tolist() ]
print(caps)
visualize.draw_boxes(p_image, pred_tensor[img,cls,:,2:6], captions = caps)
layers_out[0][0] * [128, 128, 128, 128]   # output_rois scaled back to image coordinates
```
### Calculate mrcnn_bbox_loss
```
import keras.backend as K
from mrcnn.utils import apply_box_deltas
from mrcnn.loss import smooth_l1_loss
target_class_ids = layers_out[1][0:1]
target_bbox = layers_out[2][0:1]
mrcnn_bbox = layers_out[10][0:1]
mrcnn_class_ids = np.argmax(layers_out[9][0:1],axis = -1) # mrcnn_class_ids
print('target_class_ids', target_class_ids.shape)
print(target_class_ids) # tgt_class_ids
print(' class with max probability', mrcnn_class_ids.shape)
print(mrcnn_class_ids)
print('target_bboxes', target_bbox.shape)
# print(target_bbox) # tgt_bounding boxes
print('mrcnn_bboxes',mrcnn_bbox.shape)
# print(mrcnn_bbox) #mrcnn_bboxes
pred_bbox = mrcnn_bbox
# calc mrcnn_bbox_loss
target_class_ids = K.reshape(target_class_ids, (-1,))
print(target_class_ids.shape)
target_bbox = K.reshape(target_bbox, (-1, 4))
print('target_bboxx: ', target_bbox.shape)
pred_bbox = K.reshape(pred_bbox, (-1, pred_bbox.shape[2], 4))
print('pred_bbox : ', pred_bbox.shape)
positive_roi_ix = tf.where(target_class_ids > 0)[:, 0]
print(positive_roi_ix.eval())
positive_roi_class_ids = tf.cast( tf.gather(target_class_ids, positive_roi_ix), tf.int64)
print(positive_roi_class_ids.eval())
indices = tf.stack([positive_roi_ix, positive_roi_class_ids], axis=1)
print(indices.eval())
target_bbox = tf.gather(target_bbox, positive_roi_ix)
print(target_bbox.eval())
pred_bbox = tf.gather_nd(pred_bbox, indices)
print(pred_bbox.eval())
print('tf.size ',tf.size(target_bbox).eval())
diff = K.abs(target_bbox - pred_bbox)
print(diff.eval())
less_than_one = K.cast(K.less(diff, 1.0), "float32")
# print(less_than_one.eval())
loss = (less_than_one * 0.5 * diff**2) + (1 - less_than_one) * (diff - 0.5)
# print( (1-less_than_one).eval())
# loss = K.switch(tf.size(target_bbox) > 0,
# smooth_l1_loss(y_true=target_bbox, y_pred=pred_bbox),
# tf.constant(0.0))
print(loss.eval())
sumloss = K.sum(loss)
print(sumloss.eval())
print((sumloss/40).eval())
meanloss = K.mean(loss)
print(meanloss.eval())
```
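The element-wise expression `(less_than_one * 0.5 * diff**2) + (1 - less_than_one) * (diff - 0.5)` used above is the smooth L1 (Huber-style) loss Mask R-CNN uses for box regression. As a standalone illustration (a NumPy sketch, not the `mrcnn.loss.smooth_l1_loss` implementation itself):

```python
import numpy as np

def smooth_l1_np(y_true, y_pred):
    """Smooth L1 loss: quadratic for |diff| < 1, linear beyond that."""
    diff = np.abs(y_true - y_pred)
    less_than_one = (diff < 1.0).astype(np.float64)
    return less_than_one * 0.5 * diff**2 + (1.0 - less_than_one) * (diff - 0.5)

# Small deltas are penalized quadratically, large ones linearly:
# values are 0.125 (quadratic branch) and 2.5 (linear branch).
print(smooth_l1_np(np.array([0.0, 0.0]), np.array([0.5, 3.0])))
```

The quadratic region keeps gradients small near zero, while the linear region limits the influence of outlier boxes compared to a pure squared error.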
### Calculate mrcnn_class_loss
```
import keras.backend as K
from mrcnn.utils import apply_box_deltas
from mrcnn.loss import smooth_l1_loss
target_class_ids = layers_out[1][0:1]
pred_class_logits = layers_out[8][0:1]
active_class_ids = np.array([1,1,1,1])
# mrcnn_class_ids = np.argmax(layers_out[9][0:1],axis = -1) # mrcnn_class_ids
print(' target_class_ids', target_class_ids.shape)
print(target_class_ids) # tgt_class_ids
print(' class logits', pred_class_logits.shape)
print(pred_class_logits)
print(' active, class_ids ', active_class_ids.shape)
print(active_class_ids) # tgt_bounding boxes
pred_class_ids = tf.argmax(pred_class_logits, axis=2)
print(pred_class_ids.eval()) #mrcnn_bboxes
mrcnn_class_ids = np.argmax(layers_out[9][0:1],axis = -1) # mrcnn_class_ids
print(mrcnn_class_ids)
# pred_bbox = mrcnn_bbox
pred_active = tf.to_float(tf.gather(active_class_ids, pred_class_ids))
print(pred_active.eval())
# calc mrcnn_bbox_loss
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=target_class_ids, logits=pred_class_logits)
print(loss.eval())
loss = loss * tf.to_float(pred_active)
print(loss.eval())
print(tf.reduce_sum(loss).eval())
print(tf.reduce_sum(pred_active).eval())
loss = tf.reduce_sum(loss) / tf.reduce_sum(pred_active)
print(loss.eval())
```
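The `tf.nn.sparse_softmax_cross_entropy_with_logits` call above computes, for each RoI, the negative log of the softmax probability assigned to its true class. A minimal NumPy equivalent of that math (an illustrative sketch, not the TensorFlow implementation) looks like:

```python
import numpy as np

def sparse_softmax_ce(labels, logits):
    """Negative log-probability of the true class under a softmax over logits.
    labels: int array of shape (N,); logits: float array of shape (N, num_classes)."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # subtract max for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3]])
print(sparse_softmax_ce(np.array([0, 1]), logits))
```

Masking the result with `pred_active` and dividing by `tf.reduce_sum(pred_active)`, as in the cell above, then averages the loss over only the predictions whose classes are active in the image.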
### Calculate mrcnn_mask_loss
```
import keras.backend as K
from mrcnn.utils import apply_box_deltas
from mrcnn.loss import smooth_l1_loss
target_class_ids = layers_out[1][0:3]
target_masks = layers_out[3][0:3]
pred_masks = layers_out[11][0:3]
# mrcnn_class_ids = np.argmax(layers_out[9][0:1],axis = -1) # mrcnn_class_ids
print(' target_class_ids shape :', target_class_ids.shape)
print(' target_masks shape :', target_masks.shape)
print(' pred_masks shape :', pred_masks.shape)
target_class_ids = K.reshape(target_class_ids, (-1,))
print(' target_class_ids shape :', target_class_ids.shape, '\n', target_class_ids.eval())
mask_shape = tf.shape(target_masks)
print(' mask_shape shape :', mask_shape.shape, mask_shape.eval())
target_masks = K.reshape(target_masks, (-1, mask_shape[2], mask_shape[3]))
print(' target_masks shape :', tf.shape(target_masks).eval())
pred_shape = tf.shape(pred_masks)
print(' pred_shape shape :', pred_shape.shape, pred_shape.eval())
pred_masks = K.reshape(pred_masks, (-1, pred_shape[2], pred_shape[3], pred_shape[4]))
print(' pred_masks shape :', tf.shape(pred_masks).eval())
pred_masks = tf.transpose(pred_masks, [0, 3, 1, 2])
print(' pred_masks shape :', tf.shape(pred_masks).eval())
# Only positive ROIs contribute to the loss. And only
# the class specific mask of each ROI.
positive_ix = tf.where(target_class_ids > 0)[:, 0]
positive_class_ids = tf.cast(tf.gather(target_class_ids, positive_ix), tf.int64)
indices = tf.stack([positive_ix, positive_class_ids], axis=1)
print(indices.eval())
y_true = tf.gather(target_masks, positive_ix)
print(' y_true shape:', tf.shape(y_true).eval())
y_pred = tf.gather_nd(pred_masks, indices)
print(' y_pred shape:', tf.shape(y_pred).eval())
loss = K.switch(tf.size(y_true) > 0,
K.binary_crossentropy(target=y_true, output=y_pred),
tf.constant(0.0))
print(tf.shape(loss).eval())
loss = K.mean(loss)
print(' final loss shape:', tf.shape(loss).eval())
print(loss.eval())
loss = K.reshape(loss, [1, 1])
print(' final loss shape:', tf.shape(loss).eval())
print(loss.eval())
```
### Calculate a pixel loss on fcn_gaussian and gt_gaussian
```
import keras.backend as K
from mrcnn.utils import apply_box_deltas
from mrcnn.loss import smooth_l1_loss
pred_masks = layers_out[12][0:3]
target_masks = layers_out[27][0:3]
print(' target_masks shape :', tf.shape(target_masks).eval())
print(' pred_masks shape :', tf.shape(pred_masks).eval())
diff = K.abs(target_masks - pred_masks)
print(tf.shape(diff).eval())
less_than_one = K.cast(K.less(diff, 1.0), "float32")
print(tf.shape(less_than_one).eval())
loss = (less_than_one * 0.5 * diff**2) + (1 - less_than_one) * (diff - 0.5)
print(tf.shape(loss).eval())
# print( (1-less_than_one).eval())
# loss = K.switch(tf.size(y_true) > 0,
# K.binary_crossentropy(target=y_true, output=y_pred),
# tf.constant(0.0))
meanloss = K.mean(loss)
print(tf.shape(meanloss).eval())
print(meanloss.eval())
# loss = K.reshape(loss, [1, 1])
# print(' final loss shape:', loss.get_shape())
# return loss
mask_shape = tf.shape(target_masks)
print(' mask_shape shape :', tf.shape(mask_shape).eval())
target_masks = K.reshape(target_masks, (-1, mask_shape[1], mask_shape[2]))
print(' target_masks shape :', tf.shape(target_masks).eval())
pred_shape = tf.shape(pred_masks)
print(' pred_shape shape :', tf.shape(pred_shape).eval())
pred_masks = K.reshape(pred_masks, (-1, pred_shape[1], pred_shape[2]))
print(' pred_masks shape :', tf.shape(pred_masks).eval())
# Permute predicted masks to [N, num_classes, height, width]
# diff = K.abs(target_masks - pred_masks)
# print(tf.shape(diff).eval())
# less_than_one = K.cast(K.less(diff, 1.0), "float32")
# print(tf.shape(less_than_one).eval())
# loss = (less_than_one * 0.5 * diff**2) + (1 - less_than_one) * (diff - 0.5)
# print(tf.shape(loss).eval())
# meanloss = K.mean(loss)
# print(tf.shape(meanloss).eval())
# print(meanloss.eval())
loss = K.switch(tf.size(target_masks) > 0,
smooth_l1_loss(y_true=target_masks, y_pred=pred_masks),
tf.constant(0.0))
loss = K.mean(loss)
loss = K.reshape(loss, [1, 1])
print(' final loss shape:', loss.get_shape())
print(loss.eval())
```
### Mean values of GT, Pred, and FCN heatmaps
```
pred_masks = tf.identity(layers_out[24])
gt_masks = tf.identity(layers_out[27])
fcn_masks = tf.identity(layers_out[12])
print(gt_masks.shape, fcn_masks.shape)
for img in range(5):
for cls in range(4):
gt_mean = K.mean(gt_masks[img,:,:,cls])
fcn_mean= K.mean(fcn_masks[img,:,:,cls])
pred_mean= K.mean(pred_masks[img,:,:,cls])
print('Img/Cls: ', img, '/', cls,' gtmean: ', gt_mean.eval(), '\t fcn : ' , fcn_mean.eval(), '\t pred :', pred_mean.eval())
img = 0
class_probs = layers_out[9][img] # mrcnn_class
deltas = layers_out[10][img] # mrcnn_bbox
print(class_probs.shape)
print('class probabilities')
print(class_probs)
class_ids = np.argmax(layers_out[9][img],axis = 1) # mrcnn_class_ids
print(' class with max probability')
print(class_ids)
# layers_out[10][2,0,3]
print('deltas.shape :', deltas.shape)
print(deltas[0:4])
deltas_specific = deltas[np.arange(32),class_ids]
print('deltas of max prob class: ', deltas_specific.shape)
print(deltas_specific[0:5])
output_rois = layers_out[0][img]*[128,128,128,128]
print('output_rois: ', output_rois.shape)
print(output_rois[0:])
refined_rois = apply_box_deltas(output_rois, deltas_specific * config.BBOX_STD_DEV)
print('refined rois: ',refined_rois.shape)
print(refined_rois)
img = 0
cls = 0
fcn_out = layers_out[12][img]
fcn_sum = np.sum(fcn_out, axis=(0,1))
print(fcn_sum)
for cls in range(4):
print('min :', np.min(fcn_out[:,:,cls]), 'max :', np.max(fcn_out[:,:,cls]), )
print(train_batch_x[4][2])
print(train_batch_x[5][2]/[128,128,128,128])
```
# Momentum and AdaGrad
Presented during ML reading group, 2019-11-5.
Author: Ioana Plajer, ioana.plajer@unitbv.ro
```
#%matplotlib notebook
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
print(f'Numpy version: {np.__version__}')
```
# AdaGrad
The [AdaGrad paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf) comes with the idea of using different learning rates for each feature. Hence, instead of:
$$w_{t+1} = w_{t} - \eta \nabla J_{w}(w_t)$$
AdaGrad comes with:
$$w_{t+1}^{(j)} = w_{t}^{(j)} - \frac{\eta}{\sqrt{\varepsilon + \sum_{\tau=1}^{t}{(g_{\tau}^{(j)}})^2}} \nabla J_{w}(w_t^{(j)})$$
where $g_{\tau}$ is the gradient of the error function at iteration $\tau$, $g_{\tau}^{(j)}$ is the partial derivative of the error function with respect to the $j$-th feature at iteration $\tau$, and $m$ is the number of features, i.e.
$$g_{\tau}^{(j)} = \nabla J_{w}(w_\tau^{(j)})$$
AdaGrad specifies the update as:
$$w_{t+1} = w_{t} - \frac{\eta}{\sqrt{\varepsilon I + diag(G_t)}} \nabla J_{w}(w_t)$$
where:
* $\eta$ is the initial learning rate (hyperparameter)
* $n$ is the number of items in (mini)batch
* $G_t = \sum\limits_{\tau=1}^t \mathbf{g}_\tau \mathbf{g}_\tau^T$
* $diag(A)$ is the diagonal form of the square matrix $A$
* $\varepsilon > 0$ is used to avoid division by 0
* $I$ is the unit matrix of size $m$
* $G_t^{(j,j)} = \sum\limits_{\tau = 1}^{t}{(g_\tau^{(j)})^2}$ is the sum of the squared partial derivatives in direction
of the $j$ - th feature from the first iteration up to the current iteration
In a more detailed form, the update of the weights through AdaGrad is done by:
$$\left[\begin{array}{c}
w_{t+1}^{(1)}\\
w_{t+1}^{(2)}\\
\vdots\\
w_{t+1}^{(m)}
\end{array}\right] = \left[\begin{array}{c}
w_{t}^{(1)}\\
w_{t}^{(2)}\\
\vdots\\
w_{t}^{(m)}\end{array}\right] - \eta\left(\left[\begin{array}{cccc} \varepsilon & 0 & \ldots & 0\\
0 & \varepsilon & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & \varepsilon\end{array}\right]+
\left[\begin{array}{cccc}
G_{t}^{(1,1)} & 0 & \ldots & 0\\
0 & G_{t}^{(2,2)} & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & G_{t}^{(m,m)}\end{array}\right]\right)^{-1/2}
\left[\begin{array}{c}
g_t^{(1)}\\
g_t^{(2)}\\
\vdots\\
g_t^{(m)}\end{array}\right]$$
which simplifies to:
$$\left[\begin{array}{c}
w_{t+1}^{(1)}\\
w_{t+1}^{(2)}\\
\vdots\\
w_{t+1}^{(m)}
\end{array}\right] = \left[\begin{array}{c}
w_{t}^{(1)}\\
w_{t}^{(2)}\\
\vdots\\
w_{t}^{(m)}\end{array}\right] - \left[\begin{array}{c}
\frac{\eta}{\sqrt{\varepsilon+G_{t}^{(1,1)}}}g_t^{(1)}\\
\frac{\eta}{\sqrt{\varepsilon+G_{t}^{(2,2)}}}g_t^{(2)}\\
\vdots\\
\frac{\eta}{\sqrt{\varepsilon+G_{t}^{(m,m)}}}g_t^{(m)}
\end{array}\right]$$
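The per-coordinate update above amounts to only a few lines of code. Here is a minimal NumPy sketch of an AdaGrad step (an illustration of the formula, with a made-up diagonal quadratic objective, not the notebook's training code):

```python
import numpy as np

def adagrad_step(w, grad, G_diag, eta=0.1, eps=1e-8):
    """One AdaGrad update: accumulate squared gradients per coordinate,
    then scale the learning rate by 1 / sqrt(eps + accumulated sum)."""
    G_diag = G_diag + grad**2                   # running sum of squared partial derivatives
    w = w - eta * grad / np.sqrt(eps + G_diag)  # per-feature effective learning rates
    return w, G_diag

# Minimize f(w) = 0.5 * w @ A @ w for a badly scaled diagonal A.
A = np.diag([1.0, 100.0])
w, G = np.array([5.0, 5.0]), np.zeros(2)
for _ in range(500):
    w, G = adagrad_step(w, A @ w, G)   # gradient of f is A @ w
print(w)  # both coordinates shrink toward 0 despite very different curvatures
```

Note how coordinates with large historical gradients automatically get small steps, which is exactly why AdaGrad suits sparse data: rarely active features keep a comparatively large learning rate.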
## Generate data
```
from scipy.sparse import random #to generate sparse data
np.random.seed(10) # for reproducibility
m_data = 100
n_data = 4 #number of features of the data
_scales = np.array([1,10, 10,1 ]) # play with these...
_parameters = np.array([3, 0.5, 1, 7])
def gen_data(m, n, scales, parameters, add_noise=True):
# Adagrad is designed especially for sparse data.
# produce: X, a 2d tensor with m lines and n columns
# and X[:, k] uniformly distributed in [-scale_k, scale_k] with the first and the last column containing sparse data
#(approx 75% of the elements are 0)
#
# To generate a sparse data matrix with m rows and n columns
# and random values use S = random(m, n, density=0.25).A, where density = density of the data. S will be the
# resulting matrix
# more information at https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.random.html
#
# To obtain X - generate a random matrix with X[:, k] uniformly distributed in [-scale_k, scale_k]
# set X[:, 0] and X[:, -1] to 0 and add matrix S with the sparse data.
#
# let y be X@parameters.T + epsilon, with epsilon ~ N(0, 1); y is a vector with m elements
# parameters - the ideal weights, used to produce output values y
#
X = np.random.rand(m,n) *2*scales - scales
X[:, 0] = 0
X[:, -1] = 0
S = random(m, n, density=0.25).A
X = X + S
y = X@parameters.T + np.random.randn(m)
return X, y
X, y = gen_data(m_data, n_data, _scales, _parameters)
print(X)
print(y)
```
## Define error function, gradient, inference
```
def model_estimate(X, w):
'''Computes the linear regression estimation on the dataset X, using coefficients w
:param X: 2d tensor with m_data lines and n_data columns
:param w: a 1d tensor with n_data coefficients (no intercept)
:return: a 1d tensor with m_data elements y_hat = w @X.T
'''
y_hat = w@X.T
return y_hat
def J(X, y, w):
"""Computes the mean squared error of model. See the picture from last week's sheet.
:param X: input values, of shape m_data x n_data
:param y: ground truth, column vector with m_data values
:param w: column with n_data coefficients for the linear form
:return: a scalar value >= 0
:use the same formula as in the exercise from last week
"""
m = X.shape[0]
prod = model_estimate(X,w) - y
err = (1.0/(2*m))*prod.T@prod
return err
def gradient(X, y, w):
    '''Computes the gradient to be used for gradient descent.
    :param X: 2d tensor with training data
    :param y: 1d tensor with y.shape[0] == X.shape[0]
    :param w: 1d tensor with current values of the coefficients
    :return: the gradient of J with respect to w
    :use the same formula as in the exercise from last week
    '''
    m = len(y)
    y_hat = model_estimate(X, w)
    grad = 1.0/m * X.T @ (y_hat - y)
    return grad
#The function from last week for comparison
def gd_no_momentum(X, y, w_init, eta=1e-1, thresh=0.001):
    '''Iterates with the gradient descent algorithm.
    :param X: 2d tensor with data
    :param y: 1d tensor, ground truth
    :param w_init: 1d tensor with the X.shape[1] initial coefficients
    :param eta: the learning rate hyperparameter
    :param thresh: the threshold for the gradient norm (to stop iterations)
    :return: the list of successive errors and the found w* vector
    '''
    w = w_init
    w_err = []
    while True:
        grad = gradient(X, y, w)
        err = J(X, y, w)
        w_err.append(err)
        w = w - eta * grad
        if np.linalg.norm(grad) < thresh:
            break
    return w_err, w
w_init = np.array([0, 0, 0, 0])
errors, w_best = gd_no_momentum(X, y, w_init, 0.0001)
print(f'How many iterations were made: {len(errors)}')
w_best
fig, axes = plt.subplots()
axes.plot(list(range(len(errors))), errors)
axes.set_xlabel('Epochs')
axes.set_ylabel('Error')
axes.set_title('Optimization without momentum')
# TODO: show evolution of w on a 2d contour plot
```
## Momentum algorithm
```
# Gradient descent with a momentum term
def gd_with_momentum(X, y, w_init, eta=1e-1, gamma=0.9, thresh=0.001):
    """Applies gradient descent with a momentum coefficient.
    :params: as in gd_no_momentum
    :param gamma: momentum coefficient
    :param thresh: the threshold for the gradient norm (to stop iterations)
    :return: the list of successive errors and the found w* vector
    """
    w = w_init
    w_err = []
    delta = np.zeros_like(w)
    while True:
        grad = gradient(X, y, w)
        err = J(X, y, w)
        w_err.append(err)
        w_nou = w + gamma * delta - eta * grad  # the candidate new w
        delta = w_nou - w
        w = w_nou
        if np.linalg.norm(grad) < thresh:
            break
    return w_err, w
w_init = np.array([0, 0, 0, 0])
errors_momentum, w_best = gd_with_momentum(X, y, w_init,0.0001, 0.9)
print(f'How many iterations were made: {len(errors_momentum)}')
w_best
fig, axes = plt.subplots()
axes.plot(list(range(len(errors_momentum))), errors_momentum)
axes.set_xlabel('Epochs')
axes.set_ylabel('Error')
axes.set_title('Optimization with momentum')
```
## Apply AdaGrad and report resulting $\eta$'s
```
def ada_grad(X, y, w_init, eta_init=1e-1, eps=0.001, thresh=0.001):
    '''Iterates with the AdaGrad algorithm.
    :param X: 2d tensor with data
    :param y: 1d tensor, ground truth
    :param w_init: 1d tensor with the X.shape[1] initial coefficients
    :param eta_init: the initial learning rate hyperparameter
    :param eps: the epsilon value from the AdaGrad formula
    :param thresh: the threshold for the gradient norm (to stop iterations)
    :return: the list of successive errors w_err, the estimated coefficient
    vector w, and rates, the per-feature learning rates after the final
    iteration
    '''
    n = X.shape[1]
    w = w_init
    w_err = []
    sum_sq_grad = np.zeros(n)
    rates = np.zeros(n) + eta_init
    while True:
        grad = gradient(X, y, w)
        err = J(X, y, w)
        w_err.append(err)
        w = w - rates * grad
        sum_sq_grad += grad**2
        rates = eta_init / np.sqrt(eps + sum_sq_grad)
        if np.linalg.norm(grad) < thresh:
            break
    return w_err, w, rates
w_init = np.array([0,0,0,0])
adaGerr, w_ada_best, rates = ada_grad(X, y, w_init)
print(rates)
print(f'How many iterations were made: {len(adaGerr)}')
w_ada_best
fig, axes = plt.subplots()
axes.plot(list(range(len(adaGerr))),adaGerr)
axes.set_xlabel('Epochs')
axes.set_ylabel('Error')
axes.set_title('Optimization with AdaGrad')
```
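To see why the reported rates differ per feature, here is a self-contained toy example (independent of the dataset generated above): the feature with larger gradients accumulates a larger squared-gradient sum and therefore ends with a smaller learning rate.

```python
# Toy, self-contained illustration of AdaGrad's per-feature rates:
# the feature with larger gradients ends up with the smaller rate.
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(scale=10.0, size=200),  # large-scale feature
                     rng.normal(scale=0.1, size=200)])  # small-scale feature
y = X @ np.array([2.0, -3.0])

w = np.zeros(2)
eta, eps_ada = 0.1, 1e-3
sum_sq_grad = np.zeros(2)
for _ in range(100):
    grad = X.T @ (X @ w - y) / len(y)
    sum_sq_grad += grad**2
    w -= eta / np.sqrt(eps_ada + sum_sq_grad) * grad

rates = eta / np.sqrt(eps_ada + sum_sq_grad)
print(rates)  # rates[0] (large-scale feature) is much smaller than rates[1]
```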
```
import os
import cv2
import math
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, fbeta_score
from keras import optimizers
from keras import backend as K
from keras.models import Sequential, Model
from keras.applications.vgg16 import VGG16
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, EarlyStopping
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, Activation, BatchNormalization, GlobalAveragePooling2D, Input
# Set seeds to make the experiment more reproducible.
from tensorflow import set_random_seed
from numpy.random import seed
set_random_seed(0)
seed(0)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
train = pd.read_csv('../input/imet-2019-fgvc6/train.csv')
labels = pd.read_csv('../input/imet-2019-fgvc6/labels.csv')
test = pd.read_csv('../input/imet-2019-fgvc6/sample_submission.csv')
train["attribute_ids"] = train["attribute_ids"].apply(lambda x:x.split(" "))
train["id"] = train["id"].apply(lambda x: x + ".png")
test["id"] = test["id"].apply(lambda x: x + ".png")
print('Number of train samples: ', train.shape[0])
print('Number of test samples: ', test.shape[0])
print('Number of labels: ', labels.shape[0])
display(train.head())
display(labels.head())
```
### Model parameters
```
# Model parameters
BATCH_SIZE = 128
EPOCHS = 30
LEARNING_RATE = 0.0001
HEIGHT = 64
WIDTH = 64
CANAL = 3
N_CLASSES = labels.shape[0]
ES_PATIENCE = 5
DECAY_DROP = 0.5
DECAY_EPOCHS = 10
classes = list(map(str, range(N_CLASSES)))
def f2_score_thr(threshold=0.5):
def f2_score(y_true, y_pred):
beta = 2
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold), K.floatx())
true_positives = K.sum(K.clip(y_true * y_pred, 0, 1), axis=1)
predicted_positives = K.sum(K.clip(y_pred, 0, 1), axis=1)
possible_positives = K.sum(K.clip(y_true, 0, 1), axis=1)
precision = true_positives / (predicted_positives + K.epsilon())
recall = true_positives / (possible_positives + K.epsilon())
return K.mean(((1+beta**2)*precision*recall) / ((beta**2)*precision+recall+K.epsilon()))
return f2_score
def custom_f2(y_true, y_pred):
beta = 2
tp = np.sum((y_true == 1) & (y_pred == 1))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))
p = tp / (tp + fp + K.epsilon())
r = tp / (tp + fn + K.epsilon())
f2 = (1+beta**2)*p*r / (p*beta**2 + r + 1e-15)
return f2
def step_decay(epoch):
initial_lrate = LEARNING_RATE
drop = DECAY_DROP
epochs_drop = DECAY_EPOCHS
lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
return lrate
train_datagen=ImageDataGenerator(rescale=1./255, validation_split=0.25)
train_generator=train_datagen.flow_from_dataframe(
dataframe=train,
directory="../input/imet-2019-fgvc6/train",
x_col="id",
y_col="attribute_ids",
batch_size=BATCH_SIZE,
shuffle=True,
class_mode="categorical",
classes=classes,
target_size=(HEIGHT, WIDTH),
subset='training')
valid_generator=train_datagen.flow_from_dataframe(
dataframe=train,
directory="../input/imet-2019-fgvc6/train",
x_col="id",
y_col="attribute_ids",
batch_size=BATCH_SIZE,
shuffle=True,
class_mode="categorical",
classes=classes,
target_size=(HEIGHT, WIDTH),
subset='validation')
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_dataframe(
dataframe=test,
directory = "../input/imet-2019-fgvc6/test",
x_col="id",
target_size=(HEIGHT, WIDTH),
batch_size=1,
shuffle=False,
class_mode=None)
```
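The step-decay schedule defined above halves the learning rate every `DECAY_EPOCHS` epochs. A standalone restatement (same formula, default values copied from the parameters above) makes the schedule explicit:

```python
# Standalone restatement of the step_decay schedule defined above:
# with drop=0.5 and epochs_drop=10 the rate halves every ten epochs.
import math

def step_decay(epoch, initial_lrate=1e-4, drop=0.5, epochs_drop=10):
    return initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))

schedule = [step_decay(e) for e in (0, 9, 10, 19, 29)]
print(schedule)  # 1e-4 for epochs 0-8, then 5e-5, 2.5e-5, ...
```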
### Model
```
def create_model(input_shape, n_out):
input_tensor = Input(shape=input_shape)
base_model = VGG16(weights=None, include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.5)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
final_output = Dense(n_out, activation='sigmoid', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
# warm up model
# first: train only the top layers (which were randomly initialized)
model = create_model(input_shape=(HEIGHT, WIDTH, CANAL), n_out=N_CLASSES)
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
optimizer = optimizers.Adam(lr=LEARNING_RATE)
metrics = ["accuracy", "categorical_accuracy"]
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=ES_PATIENCE)
callbacks = [es]
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
model.summary()
```
#### Train top layers
```
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=2,
callbacks=callbacks,
verbose=2,
max_queue_size=16, workers=3, use_multiprocessing=True)
```
#### Fine-tune the complete model
```
for layer in model.layers:
layer.trainable = True
metrics = ["accuracy", "categorical_accuracy"]
lrate = LearningRateScheduler(step_decay)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=(ES_PATIENCE))
callbacks = [es, lrate]
optimizer = optimizers.Adam(lr=0.0001)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callbacks,
verbose=2,
max_queue_size=16, workers=3, use_multiprocessing=True)
```
### Build complete model
### Complete model graph loss
```
sns.set_style("whitegrid")
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex='col', figsize=(20,7))
ax1.plot(history.history['loss'], label='Train loss')
ax1.plot(history.history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history.history['acc'], label='Train Accuracy')
ax2.plot(history.history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
ax3.plot(history.history['categorical_accuracy'], label='Train Cat Accuracy')
ax3.plot(history.history['val_categorical_accuracy'], label='Validation Cat Accuracy')
ax3.legend(loc='best')
ax3.set_title('Cat Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
```
### Find best threshold value
```
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(valid_generator)
scores = model.predict(im, batch_size=valid_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
print(lastFullValPred.shape, lastFullValLabels.shape)
def find_best_fixed_threshold(preds, targs, do_plot=True):
score = []
thrs = np.arange(0, 0.5, 0.01)
for thr in thrs:
score.append(custom_f2(targs, (preds > thr).astype(int)))
score = np.array(score)
pm = score.argmax()
best_thr, best_score = thrs[pm], score[pm].item()
print(f'thr={best_thr:.3f}', f'F2={best_score:.3f}')
if do_plot:
plt.plot(thrs, score)
plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max())
plt.text(best_thr+0.03, best_score-0.01, f'$F_{2}=${best_score:.3f}', fontsize=14);
plt.show()
return best_thr, best_score
threshold, best_score = find_best_fixed_threshold(lastFullValPred, lastFullValLabels, do_plot=True)
```
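A toy version of the same sweep on synthetic scores (illustrative values, not the model's real outputs) shows the mechanics of the threshold search:

```python
# Toy threshold sweep with the F2 score on synthetic predictions.
import numpy as np

def f2_score(y_true, y_pred, beta=2):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    p = tp / (tp + fp + 1e-15)
    r = tp / (tp + fn + 1e-15)
    return (1 + beta**2) * p * r / (beta**2 * p + r + 1e-15)

y_true = np.array([1, 1, 0, 0, 1])
scores = np.array([0.9, 0.4, 0.35, 0.1, 0.6])
thrs = np.arange(0.0, 0.5, 0.05)
best_thr = max(thrs, key=lambda t: f2_score(y_true, (scores > t).astype(int)))
print(best_thr)  # 0.35 separates the two classes perfectly here
```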
### Apply model to test set and output predictions
```
test_generator.reset()
STEP_SIZE_TEST = test_generator.n//test_generator.batch_size
preds = model.predict_generator(test_generator, steps=STEP_SIZE_TEST)
predictions = []
for pred_ar in preds:
valid = []
for idx, pred in enumerate(pred_ar):
if pred > threshold:
valid.append(idx)
if len(valid) == 0:
valid.append(np.argmax(pred_ar))
predictions.append(valid)
filenames = test_generator.filenames
label_map = {valid_generator.class_indices[k] : k for k in valid_generator.class_indices}
results = pd.DataFrame({'id':filenames, 'attribute_ids':predictions})
results['id'] = results['id'].map(lambda x: str(x)[:-4])
results['attribute_ids'] = results['attribute_ids'].apply(lambda x: list(map(label_map.get, x)))
results["attribute_ids"] = results["attribute_ids"].apply(lambda x: ' '.join(x))
results.to_csv('submission.csv',index=False)
results.head(10)
```
# Integral Calculus
:label:`sec_integral_calculus`
Differentiation only makes up half of the content of a traditional calculus education. The other pillar, integration, starts out seeming a rather disjoint question, "What is the area underneath this curve?" While seemingly unrelated, integration is tightly intertwined with the differentiation via what is known as the *fundamental theorem of calculus*.
At the level of machine learning we discuss in this book, we will not need a deep understanding of integration. However, we will provide a brief introduction to lay the groundwork for any further applications we will encounter later on.
## Geometric Interpretation
Suppose that we have a function $f(x)$. For simplicity, let us assume that $f(x)$ is non-negative (never takes a value less than zero). What we want to try and understand is: what is the area contained between $f(x)$ and the $x$-axis?
```
%matplotlib inline
import tensorflow as tf
from IPython import display
from mpl_toolkits import mplot3d
from d2l import tensorflow as d2l
x = tf.range(-2, 2, 0.01)
f = tf.exp(-x**2)
d2l.set_figsize()
d2l.plt.plot(x, f, color='black')
d2l.plt.fill_between(x.numpy(), f.numpy())
d2l.plt.show()
```
In most cases, this area will be infinite or undefined (consider the area under $f(x) = x^{2}$), so people will often talk about the area between a pair of ends, say $a$ and $b$.
```
x = tf.range(-2, 2, 0.01)
f = tf.exp(-x**2)
d2l.set_figsize()
d2l.plt.plot(x, f, color='black')
d2l.plt.fill_between(x.numpy()[50:250], f.numpy()[50:250])
d2l.plt.show()
```
We will denote this area by the integral symbol below:
$$
\mathrm{Area}(\mathcal{A}) = \int_a^b f(x) \;dx.
$$
The inner variable is a dummy variable, much like the index of a sum in a $\sum$, and so this can be equivalently written with any inner value we like:
$$
\int_a^b f(x) \;dx = \int_a^b f(z) \;dz.
$$
There is a traditional way to try and understand how we might try to approximate such integrals: we can imagine taking the region in-between $a$ and $b$ and chopping it into $N$ vertical slices. If $N$ is large, we can approximate the area of each slice by a rectangle, and then add up the areas to get the total area under the curve. Let us take a look at an example doing this in code. We will see how to get the true value in a later section.
```
epsilon = 0.05
a = 0
b = 2
x = tf.range(a, b, epsilon)
f = x / (1 + x**2)
approx = tf.reduce_sum(epsilon*f)
true = tf.math.log(tf.constant([5.])) / 2
d2l.set_figsize()
d2l.plt.bar(x, f, width=epsilon, align='edge')
d2l.plt.plot(x, f, color='black')
d2l.plt.ylim([0, 1])
d2l.plt.show()
f'approximation: {approx}, truth: {true}'
```
The issue is that, while this can be done numerically, such a direct approach works analytically for only the simplest functions, like
$$
\int_a^b x \;dx.
$$
Anything somewhat more complex, like our example from the code above,
$$
\int_a^b \frac{x}{1+x^{2}} \;dx,
$$
is beyond what we can solve with such a direct method.
We will instead take a different approach. We will work intuitively with the notion of the area, and learn the main computational tool used to find integrals: the *fundamental theorem of calculus*. This will be the basis for our study of integration.
## The Fundamental Theorem of Calculus
To dive deeper into the theory of integration, let us introduce a function
$$
F(x) = \int_0^x f(y) dy.
$$
This function measures the area between $0$ and $x$ depending on how we change $x$. Notice that this is everything we need since
$$
\int_a^b f(x) \;dx = F(b) - F(a).
$$
This is a mathematical encoding of the fact that we can measure the area out to the far end-point and then subtract off the area to the near end point as indicated in :numref:`fig_area-subtract`.

:label:`fig_area-subtract`
Thus, we can figure out what the integral over any interval is by figuring out what $F(x)$ is.
To do so, let us consider an experiment. As we often do in calculus, let us imagine what happens when we shift the value by a tiny bit. From the comment above, we know that
$$
F(x+\epsilon) - F(x) = \int_x^{x+\epsilon} f(y) \; dy.
$$
This tells us that the function changes by the area under a tiny sliver of a function.
This is the point at which we make an approximation. If we look at a tiny sliver of area like this, it looks like this area is close to the rectangular area with height the value of $f(x)$ and the base width $\epsilon$. Indeed, one can show that as $\epsilon \rightarrow 0$ this approximation becomes better and better. Thus we can conclude:
$$
F(x+\epsilon) - F(x) \approx \epsilon f(x).
$$
However, we can now notice: this is exactly the pattern we expect if we were computing the derivative of $F$! Thus we see the following rather surprising fact:
$$
\frac{dF}{dx}(x) = f(x).
$$
This is the *fundamental theorem of calculus*. We may write it in expanded form as
$$\frac{d}{dx}\int_{-\infty}^x f(y) \; dy = f(x).$$
:eqlabel:`eq_ftc`
It takes the concept of finding areas (*a priori* rather hard), and reduces it to a statement about derivatives (something much more completely understood). One last comment that we must make is that this does not tell us exactly what $F(x)$ is. Indeed $F(x) + C$ for any $C$ has the same derivative. This is a fact of life in the theory of integration. Thankfully, notice that when working with definite integrals, the constants drop out, and thus are irrelevant to the outcome.
$$
\int_a^b f(x) \; dx = (F(b) + C) - (F(a) + C) = F(b) - F(a).
$$
This may seem like abstract nonsense, but let us take a moment to appreciate that it has given us a whole new perspective on computing integrals. Our goal is no longer to do some sort of chop-and-sum process to try and recover the area; rather, we need only find a function whose derivative is the function we have! This is incredible since we can now list many rather difficult integrals by just reversing the table from :numref:`sec_derivative_table`. For instance, we know that the derivative of $x^{n}$ is $nx^{n-1}$. Thus, we can say using the fundamental theorem :eqref:`eq_ftc` that
$$
\int_0^{x} ny^{n-1} \; dy = x^n - 0^n = x^n.
$$
Similarly, we know that the derivative of $e^{x}$ is itself, so that means
$$
\int_0^{x} e^{y} \; dy = e^{x} - e^{0} = e^x - 1.
$$
In this way, we can develop the entire theory of integration leveraging ideas from differential calculus freely. Every integration rule derives from this one fact.
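We can check this numerically for the example from the code above (a standalone NumPy sketch, rather than the TensorFlow code used in this section): an antiderivative of $x/(1+x^2)$ is $\frac{1}{2}\log(1+x^2)$, so the fundamental theorem predicts exactly the value the Riemann sum approaches.

```python
# Numeric check of the fundamental theorem for f(x) = x / (1 + x^2):
# an antiderivative is F(x) = 0.5 * log(1 + x^2), so the area on [0, 2]
# should match a direct Riemann sum.
import numpy as np

def F(x):
    return 0.5 * np.log(1 + x**2)

a, b, eps = 0.0, 2.0, 1e-4
x = np.arange(a, b, eps)
riemann = np.sum(x / (1 + x**2)) * eps
print(riemann, F(b) - F(a))  # both close to log(5) / 2 ≈ 0.8047
```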
## Change of Variables
:label:`integral_example`
Just as with differentiation, there are a number of rules which make the computation of integrals more tractable. In fact, every rule of differential calculus (like the product rule, sum rule, and chain rule) has a corresponding rule for integral calculus (integration by parts, linearity of integration, and the change of variables formula respectively). In this section, we will dive into what is arguably the most important from the list: the change of variables formula.
First, suppose that we have a function which is itself an integral:
$$
F(x) = \int_0^x f(y) \; dy.
$$
Let us suppose that we want to know how this function looks when we compose it with another to obtain $F(u(x))$. By the chain rule, we know
$$
\frac{d}{dx}F(u(x)) = \frac{dF}{du}(u(x))\cdot \frac{du}{dx}.
$$
We can turn this into a statement about integration by using the fundamental theorem :eqref:`eq_ftc` as above. This gives
$$
F(u(x)) - F(u(0)) = \int_0^x \frac{dF}{du}(u(y))\cdot \frac{du}{dy} \;dy.
$$
Recalling that $F$ is itself an integral gives that the left hand side may be rewritten to be
$$
\int_{u(0)}^{u(x)} f(y) \; dy = \int_0^x \frac{dF}{du}(u(y))\cdot \frac{du}{dy} \;dy.
$$
Similarly, recalling that $F$ is an integral allows us to recognize that $\frac{dF}{dx} = f$ using the fundamental theorem :eqref:`eq_ftc`, and thus we may conclude
$$\int_{u(0)}^{u(x)} f(y) \; dy = \int_0^x f(u(y))\cdot \frac{du}{dy} \;dy.$$
:eqlabel:`eq_change_var`
This is the *change of variables* formula.
For a more intuitive derivation, consider what happens when we take an integral of $f(u(x))$ between $x$ and $x+\epsilon$. For a small $\epsilon$, this integral is approximately $\epsilon f(u(x))$, the area of the associated rectangle. Now, let us compare this with the integral of $f(y)$ from $u(x)$ to $u(x+\epsilon)$. We know that $u(x+\epsilon) \approx u(x) + \epsilon \frac{du}{dx}(x)$, so the area of this rectangle is approximately $\epsilon \frac{du}{dx}(x)f(u(x))$. Thus, to make the area of these two rectangles to agree, we need to multiply the first one by $\frac{du}{dx}(x)$ as is illustrated in :numref:`fig_rect-transform`.

:label:`fig_rect-transform`
This tells us that
$$
\int_x^{x+\epsilon} f(u(y))\frac{du}{dy}(y)\;dy = \int_{u(x)}^{u(x+\epsilon)} f(y) \; dy.
$$
This is the change of variables formula expressed for a single small rectangle.
If $u(x)$ and $f(x)$ are properly chosen, this can allow for the computation of incredibly complex integrals. For instance, if we choose $f(y) = 1$ and $u(x) = e^{-x^{2}}$ (which means $\frac{du}{dx}(x) = -2xe^{-x^{2}}$), this shows that
$$
e^{-1} - 1 = \int_{e^{-0}}^{e^{-1}} 1 \; dy = -2\int_0^{1} ye^{-y^2}\;dy,
$$
and thus by rearranging that
$$
\int_0^{1} ye^{-y^2}\; dy = \frac{1-e^{-1}}{2}.
$$
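We can verify this value numerically with a standalone NumPy sketch:

```python
# Numeric check of the change-of-variables result above:
# the integral of y * exp(-y^2) over [0, 1] equals (1 - e^{-1}) / 2.
import numpy as np

eps = 1e-5
y = np.arange(0, 1, eps)
approx = np.sum(y * np.exp(-y**2)) * eps
exact = (1 - np.exp(-1)) / 2
print(approx, exact)  # both close to 0.3161
```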
## A Comment on Sign Conventions
Keen-eyed readers will observe something strange about the computations above. Namely, computations like
$$
\int_{e^{-0}}^{e^{-1}} 1 \; dy = e^{-1} -1 < 0,
$$
can produce negative numbers. When thinking about areas, it can be strange to see a negative value, and so it is worth digging into what the convention is.
Mathematicians take the notion of signed areas. This manifests itself in two ways. First, if we consider a function $f(x)$ which is sometimes less than zero, then the area will also be negative. So for instance
$$
\int_0^{1} (-1)\;dx = -1.
$$
Similarly, integrals which progress from right to left, rather than left to right are also taken to be negative areas
$$
\int_0^{-1} 1\; dx = -1.
$$
The standard area (from left to right of a positive function) is always positive. Anything obtained by flipping it (say flipping over the $x$-axis to get the integral of a negative number, or flipping over the $y$-axis to get an integral in the wrong order) will produce a negative area. And indeed, flipping twice will give a pair of negative signs that cancel out to have positive area
$$
\int_0^{-1} (-1)\;dx = 1.
$$
If this discussion sounds familiar, it is! In :numref:`sec_geometry-linear-algebraic-ops` we discussed how the determinant represented the signed area in much the same way.
## Multiple Integrals
In some cases, we will need to work in higher dimensions. For instance, suppose that we have a function of two variables, like $f(x, y)$ and we want to know the volume under $f$ when $x$ ranges over $[a, b]$ and $y$ ranges over $[c, d]$.
```
# Construct grid and compute function
x, y = tf.meshgrid(tf.linspace(-2., 2., 101), tf.linspace(-2., 2., 101))
z = tf.exp(- x**2 - y**2)
# Plot function
ax = d2l.plt.figure().add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z)
d2l.plt.xlabel('x')
d2l.plt.ylabel('y')
d2l.plt.xticks([-2, -1, 0, 1, 2])
d2l.plt.yticks([-2, -1, 0, 1, 2])
d2l.set_figsize()
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax.set_zlim(0, 1)
ax.dist = 12
```
We write this as
$$
\int_{[a, b]\times[c, d]} f(x, y)\;dx\;dy.
$$
Suppose that we wish to compute this integral. The claim is that we can do this by iteratively computing first the integral in $x$ and then shifting to the integral in $y$, that is to say
$$
\int_{[a, b]\times[c, d]} f(x, y)\;dx\;dy = \int_c^{d} \left(\int_a^{b} f(x, y) \;dx\right) \; dy.
$$
Let us see why this is.
Consider the figure above where we have split the function into $\epsilon \times \epsilon$ squares which we will index with integer coordinates $i, j$. In this case, our integral is approximately
$$
\sum_{i, j} \epsilon^{2} f(\epsilon i, \epsilon j).
$$
Once we discretize the problem, we may add up the values on these squares in whatever order we like, and not worry about changing the values. This is illustrated in :numref:`fig_sum-order`. In particular, we can say that
$$
\sum _ {j} \epsilon \left(\sum_{i} \epsilon f(\epsilon i, \epsilon j)\right).
$$

:label:`fig_sum-order`
The sum on the inside is precisely the discretization of the integral
$$
G(\epsilon j) = \int _a^{b} f(x, \epsilon j) \; dx.
$$
Finally, notice that if we combine these two expressions we get
$$
\sum _ {j} \epsilon G(\epsilon j) \approx \int _ {c}^{d} G(y) \; dy = \int _ {[a, b]\times[c, d]} f(x, y)\;dx\;dy.
$$
Thus putting it all together, we have that
$$
\int _ {[a, b]\times[c, d]} f(x, y)\;dx\;dy = \int _ c^{d} \left(\int _ a^{b} f(x, y) \;dx\right) \; dy.
$$
Notice that, once discretized, all we did was rearrange the order in which we added a list of numbers. This may make it seem like it is nothing, however this result (called *Fubini's Theorem*) is not always true! For the type of mathematics encountered when doing machine learning (continuous functions), there is no concern, however it is possible to create examples where it fails (for example the function $f(x, y) = xy(x^2-y^2)/(x^2+y^2)^3$ over the rectangle $[0,2]\times[0,1]$).
Note that the choice to do the integral in $x$ first, and then the integral in $y$ was arbitrary. We could have equally well chosen to do $y$ first and then $x$ to see
$$
\int _ {[a, b]\times[c, d]} f(x, y)\;dx\;dy = \int _ a^{b} \left(\int _ c^{d} f(x, y) \;dy\right) \; dx.
$$
Often times, we will condense down to vector notation, and say that for $U = [a, b]\times [c, d]$ this is
$$
\int _ U f(\mathbf{x})\;d\mathbf{x}.
$$
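Because the discretized sums can be reordered freely, the all-at-once grid sum and the iterated sums agree. A standalone NumPy check on the Gaussian example plotted above:

```python
# Numeric illustration of the iterated-integral identity for
# f(x, y) = exp(-x^2 - y^2) over [-2, 2] x [-2, 2]: summing over the
# whole grid equals doing the inner x-sum first and then the y-sum.
import numpy as np

eps = 0.01
grid = np.arange(-2, 2, eps)
x, y = np.meshgrid(grid, grid)
f = np.exp(-x**2 - y**2)

whole = np.sum(f) * eps**2          # sum over all squares at once
inner = np.sum(f, axis=1) * eps     # G(y_j): inner sum over x
iterated = np.sum(inner) * eps      # then the outer sum over y
print(whole, iterated)              # equal up to rounding; both near pi
```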
## Change of Variables in Multiple Integrals
As with single variables in :eqref:`eq_change_var`, the ability to change variables inside a higher dimensional integral is a key tool. Let us summarize the result without derivation.
We need a function that reparameterizes our domain of integration. We can take this to be $\phi : \mathbb{R}^n \rightarrow \mathbb{R}^n$, that is any function which takes in $n$ real variables and returns another $n$. To keep the expressions clean, we will assume that $\phi$ is *injective* which is to say it never folds over itself ($\phi(\mathbf{x}) = \phi(\mathbf{y}) \implies \mathbf{x} = \mathbf{y}$).
In this case, we can say that
$$
\int _ {\phi(U)} f(\mathbf{x})\;d\mathbf{x} = \int _ {U} f(\phi(\mathbf{x})) \left|\det(D\phi(\mathbf{x}))\right|\;d\mathbf{x}.
$$
where $D\phi$ is the *Jacobian* of $\phi$, which is the matrix of partial derivatives of $\boldsymbol{\phi} = (\phi_1(x_1, \ldots, x_n), \ldots, \phi_n(x_1, \ldots, x_n))$,
$$
D\boldsymbol{\phi} = \begin{bmatrix}
\frac{\partial \phi _ 1}{\partial x _ 1} & \cdots & \frac{\partial \phi _ 1}{\partial x _ n} \\
\vdots & \ddots & \vdots \\
\frac{\partial \phi _ n}{\partial x _ 1} & \cdots & \frac{\partial \phi _ n}{\partial x _ n}
\end{bmatrix}.
$$
Looking closely, we see that this is similar to the single variable chain rule :eqref:`eq_change_var`, except we have replaced the term $\frac{du}{dx}(x)$ with $\left|\det(D\phi(\mathbf{x}))\right|$. Let us see how to interpret this term. Recall that the $\frac{du}{dx}(x)$ term existed to say how much we stretched our $x$-axis by applying $u$. The same process in higher dimensions is to determine how much we stretch the area (or volume, or hyper-volume) of a little square (or little *hyper-cube*) by applying $\boldsymbol{\phi}$. If $\boldsymbol{\phi}$ were multiplication by a matrix, then we know that the determinant already gives the answer.
With some work, one can show that the *Jacobian* provides the best approximation to a multivariable function $\boldsymbol{\phi}$ at a point by a matrix in the same way we could approximate by lines or planes with derivatives and gradients. Thus the determinant of the Jacobian exactly mirrors the scaling factor we identified in one dimension.
It takes some work to fill in the details to this, so do not worry if they are not clear now. Let us see at least one example we will make use of later on. Consider the integral
$$
\int _ {-\infty}^{\infty} \int _ {-\infty}^{\infty} e^{-x^{2}-y^{2}} \;dx\;dy.
$$
Playing with this integral directly will get us no-where, but if we change variables, we can make significant progress. If we let $\boldsymbol{\phi}(r, \theta) = (r \cos(\theta), r\sin(\theta))$ (which is to say that $x = r \cos(\theta)$, $y = r \sin(\theta)$), then we can apply the change of variable formula to see that this is the same thing as
$$
\int _ 0^\infty \int_0 ^ {2\pi} e^{-r^{2}} \left|\det(D\mathbf{\phi}(\mathbf{x}))\right|\;d\theta\;dr,
$$
where
$$
\left|\det(D\mathbf{\phi}(\mathbf{x}))\right| = \left|\det\begin{bmatrix}
\cos(\theta) & -r\sin(\theta) \\
\sin(\theta) & r\cos(\theta)
\end{bmatrix}\right| = r(\cos^{2}(\theta) + \sin^{2}(\theta)) = r.
$$
Thus, the integral is
$$
\int _ 0^\infty \int _ 0 ^ {2\pi} re^{-r^{2}} \;d\theta\;dr = 2\pi\int _ 0^\infty re^{-r^{2}} \;dr = \pi,
$$
where the final equality follows by the same computation that we used in section :numref:`integral_example`.
We will meet this integral again when we study continuous random variables in :numref:`sec_random_variables`.
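We can confirm this value with a one-dimensional sum in polar coordinates (a standalone sketch, truncating the infinite radial range where the integrand is negligible):

```python
# Numeric check of the polar-coordinates computation: with the factor r
# from the Jacobian, 2*pi times the integral of r*exp(-r^2) over [0, inf)
# gives pi. The radial range is truncated at 10, where exp(-r^2) ~ 0.
import numpy as np

eps = 1e-3
r = np.arange(0, 10, eps)
integral = 2 * np.pi * np.sum(r * np.exp(-r**2)) * eps
print(integral, np.pi)  # both close to 3.1416
```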
## Summary
* The theory of integration allows us to answer questions about areas or volumes.
* The fundamental theorem of calculus allows us to leverage knowledge about derivatives to compute areas via the observation that the derivative of the area up to some point is given by the value of the function being integrated.
* Integrals in higher dimensions can be computed by iterating single variable integrals.
## Exercises
1. What is $\int_1^2 \frac{1}{x} \;dx$?
2. Use the change of variables formula to integrate $\int_0^{\sqrt{\pi}}x\sin(x^2)\;dx$.
3. What is $\int_{[0,1]^2} xy \;dx\;dy$?
4. Use the change of variables formula to compute $\int_0^2\int_0^1xy(x^2-y^2)/(x^2+y^2)^3\;dy\;dx$ and $\int_0^1\int_0^2f(x, y) = xy(x^2-y^2)/(x^2+y^2)^3\;dx\;dy$ to see they are different.
[Discussions](https://discuss.d2l.ai/t/1093)
# Latent Dirichlet Allocation for Text Data
In this assignment you will
* apply standard preprocessing techniques on Wikipedia text data
* use GraphLab Create to fit a Latent Dirichlet allocation (LDA) model
* explore and interpret the results, including topic keywords and topic assignments for documents
Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of *mixed membership*. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one.
With this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document.
**Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook.
## Text Data Preprocessing
We'll start by importing our familiar Wikipedia dataset.
The following code block will check if you have the correct version of GraphLab Create. Any version 1.8.5 or later will do. To upgrade, read [this page](https://turi.com/download/upgrade-graphlab-create.html).
```
import graphlab as gl
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(gl.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
# import wiki data
wiki = gl.SFrame('people_wiki.gl/')
wiki
```
In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a _bag of words_, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be.
Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. For both of these tasks we can use pre-implemented tools from GraphLab Create:
```
wiki_docs = gl.text_analytics.count_words(wiki['text'])
wiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True)
```
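GraphLab Create is no longer widely distributed, so here is a plain-Python sketch of the same two preprocessing steps, assuming a naive whitespace tokenizer and a small stand-in stopword list (the real stopword set is much larger):

```python
from collections import Counter

# A tiny stand-in stopword list; GraphLab's stopwords() returns far more words.
STOPWORDS = {'the', 'a', 'an', 'and', 'of', 'in', 'is'}

def bag_of_words(text):
    # Count words, then trim out stopwords -- the same two steps as above.
    counts = Counter(text.lower().split())
    return {w: c for w, c in counts.items() if w not in STOPWORDS}

doc = bag_of_words('the cat and the dog in the house')
# doc keeps only the informative words: {'cat': 1, 'dog': 1, 'house': 1}
```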
## Model fitting and interpretation
In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module.
Note: This may take several minutes to run.
```
topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200)
```
GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.
```
topic_model
```
It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will
* get the top words in each topic and use these to identify topic themes
* predict topic distributions for some example documents
* compare the quality of LDA "nearest neighbors" to the NN output from the first assignment
* understand the role of model hyperparameters alpha and gamma
## Load a fitted topic model
The method used to fit the LDA model is a _randomized algorithm_, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm is expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which always produce the same results given the same input and initialization.
It is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model.
We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above.
```
topic_model = gl.load_model('lda_assignment_topic_model')
```
# Identifying topic themes by top words
We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA.
In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme _and_ that all the topics are relatively distinct.
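To make this concrete, here is a toy numpy illustration (with invented numbers, not our fitted model) of extracting the top words from a topic-word probability matrix:

```python
import numpy as np

# Each row of topic_word is one topic: a probability distribution over the
# vocabulary (each row sums to one). The values here are made up.
vocab = np.array(['university', 'game', 'research', 'team', 'election'])
topic_word = np.array([
    [0.40, 0.05, 0.35, 0.05, 0.15],   # a science-flavored topic
    [0.05, 0.45, 0.05, 0.40, 0.05],   # a sports-flavored topic
])

def top_words(topic_id, k=2):
    # Sort word indices by probability, highest first, and take the top k.
    order = np.argsort(topic_word[topic_id])[::-1]
    return list(vocab[order[:k]])
```

Here `top_words(0)` returns `['university', 'research']`, which is exactly the kind of summary we will read off our real model with get_topics().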
We can use the GraphLab Create function get_topics() to view the top words (along with their associated probabilities) from each topic.
__Quiz Question:__ Identify the top 3 most probable words for the first topic.
__Quiz Question:__ What is the sum of the probabilities assigned to the top 50 words in the 3rd topic?
Let's look at the top 10 words for each topic to see if we can identify any themes:
```
[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)]
```
We propose the following themes for each topic:
- topic 0: Science and research
- topic 1: Team sports
- topic 2: Music, TV, and film
- topic 3: American college and politics
- topic 4: General politics
- topic 5: Art and publishing
- topic 6: Business
- topic 7: International athletics
- topic 8: Great Britain and Australia
- topic 9: International music
We'll save these themes for later:
```
themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \
          'art and publishing','business','international athletics','Great Britain and Australia','international music']
```
### Measuring the importance of top words
We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words.
We'll do this with two visualizations of the weights for the top words in each topic:
- the weights of the top 100 words, sorted by size
- the total weight of the top 10 words
Here's a plot for the top 100 words by weight in each topic:
```
for i in range(10):
plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score'])
plt.xlabel('Word rank')
plt.ylabel('Probability')
plt.title('Probabilities of Top 100 Words in each Topic')
```
In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!
Next we plot the total weight assigned by each topic to its top 10 words:
```
top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)]
ind = np.arange(10)
width = 0.5
fig, ax = plt.subplots()
ax.bar(ind-(width/2),top_probs,width)
ax.set_xticks(ind)
plt.xlabel('Topic')
plt.ylabel('Probability')
plt.title('Total Probability of Top 10 Words in each Topic')
plt.xlim(-0.5,9.5)
plt.ylim(0,0.15)
plt.show()
```
Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.
Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all.
# Topic distributions for some example documents
As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic.
We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition.
Topic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignment variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a _distribution_ over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama:
```
obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]])
pred1 = topic_model.predict(obama, output_type='probability')
pred2 = topic_model.predict(obama, output_type='probability')
print(gl.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]}))
```
To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document:
```
def average_predictions(model, test_document, num_trials=100):
avg_preds = np.zeros((model.num_topics))
for i in range(num_trials):
avg_preds += model.predict(test_document, output_type='probability')[0]
avg_preds = avg_preds/num_trials
result = gl.SFrame({'topics':themes, 'average predictions':avg_preds})
result = result.sort('average predictions', ascending=False)
return result
print(average_predictions(topic_model, obama, 100))
```
__Quiz Question:__ What is the topic most closely associated with the article about former US President George W. Bush? Use the average results from 100 topic predictions.
__Quiz Question:__ What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results from 100 topic predictions.
# Comparing LDA to nearest neighbors for document retrieval
So far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations.
In this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment.
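For reference, the cosine distance used by both nearest-neighbor models can be sketched in a few lines of numpy (a standalone illustration, not GraphLab's implementation; the example vectors are invented):

```python
import numpy as np

def cosine_distance(u, v):
    # Cosine distance = 1 - cosine similarity of the two vectors.
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 3-topic proportion vectors: two politics-heavy documents and one
# sports-heavy document.
politics_heavy = [0.7, 0.2, 0.1]
also_politics  = [0.6, 0.3, 0.1]
sports_heavy   = [0.1, 0.1, 0.8]
```

As expected, the two politics-heavy vectors are much closer to each other under this distance than either is to the sports-heavy vector.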
We'll start by creating the LDA topic distribution representation for each document:
```
wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')
```
Next we add the TF-IDF document representations:
```
wiki['word_count'] = gl.text_analytics.count_words(wiki['text'])
wiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count'])
```
For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model:
```
model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
model_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'],
method='brute_force', distance='cosine')
```
Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist:
```
model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
model_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
```
Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents.
With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada.
Our LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. This means that documents can be "close" if they share similar themes, even though they may not share many of the same keywords. For the article on Paul Krugman, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies.
__Quiz Question:__ Using the TF-IDF representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use `mylist.index(value)` to find the index of the first instance of `value` in `mylist`.)
__Quiz Question:__ Using the LDA representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use `mylist.index(value)` to find the index of the first instance of `value` in `mylist`.)
# Understanding the role of LDA model hyperparameters
Finally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic.
In the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document "likes" a topic (in the case of alpha) or how much each topic "likes" a word (in the case of gamma). In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences "smoother" over topics, and gamma makes the topic preferences "smoother" over words.
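We can illustrate this smoothing effect with numpy's Dirichlet sampler (a self-contained sketch of the prior, not GraphLab's inference code): draws with a small concentration parameter are peaked on one topic, while draws with a large one are nearly uniform.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10  # number of topics

# 1000 topic-proportion vectors drawn from each prior.
low  = rng.dirichlet([0.1] * K, size=1000)   # like a low-alpha prior
high = rng.dirichlet([50.0] * K, size=1000)  # like a high-alpha prior

peak_low  = low.max(axis=1).mean()   # near 1: a single topic dominates
peak_high = high.max(axis=1).mean()  # near 1/K: weight is spread evenly
```

Every draw sums to one, but the average size of the largest component is far bigger under the low-alpha prior, which is exactly the peaked-versus-smooth behavior we will see in the fitted models below.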
Our goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model.
__Quiz Question:__ What was the value of alpha used to fit our original topic model?
__Quiz Question:__ What was the value of gamma used to fit our original topic model? Remember that GraphLab Create uses "beta" instead of "gamma" to refer to the hyperparameter that influences topic distributions over words.
We'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model:
- tpm_low_alpha, a model trained with alpha = 1 and default gamma
- tpm_high_alpha, a model trained with alpha = 50 and default gamma
```
tpm_low_alpha = gl.load_model('lda_low_alpha')
tpm_high_alpha = gl.load_model('lda_high_alpha')
```
### Changing the hyperparameter alpha
Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha.
```
a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1]
b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1]
c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1]
ind = np.arange(len(a))
width = 0.3
def param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab):
fig = plt.figure()
ax = fig.add_subplot(111)
b1 = ax.bar(ind, a, width, color='lightskyblue')
b2 = ax.bar(ind+width, b, width, color='lightcoral')
b3 = ax.bar(ind+(2*width), c, width, color='gold')
ax.set_xticks(ind+width)
ax.set_xticklabels(range(10))
ax.set_ylabel(ylab)
ax.set_xlabel(xlab)
ax.set_ylim(0,ylim)
ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param])
plt.tight_layout()
param_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha',
xlab='Topics (sorted by weight of top 100 words)',ylab='Topic Probability for Obama Article')
```
Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics.
__Quiz Question:__ How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the **low alpha** model? Use the average results from 100 topic predictions.
__Quiz Question:__ How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the **high alpha** model? Use the average results from 100 topic predictions.
### Changing the hyperparameter gamma
Just as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models.
Now we will consider the following two models:
- tpm_low_gamma, a model trained with gamma = 0.02 and default alpha
- tpm_high_gamma, a model trained with gamma = 0.5 and default alpha
```
del tpm_low_alpha
del tpm_high_alpha
tpm_low_gamma = gl.load_model('lda_low_gamma')
tpm_high_gamma = gl.load_model('lda_high_gamma')
a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
b_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
c_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
a_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
b_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
c_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
ind = np.arange(len(a))
width = 0.3
param_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma',
xlab='Topics (sorted by weight of top 100 words)',
ylab='Total Probability of Top 100 Words')
param_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma',
xlab='Topics (sorted by weight of bottom 1000 words)',
ylab='Total Probability of Bottom 1000 Words')
```
From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary.
__Quiz Question:__ For each topic of the **low gamma model**, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the _cdf_\__cutoff_ argument).
__Quiz Question:__ For each topic of the **high gamma model**, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the _cdf_\__cutoff_ argument).
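The computation behind the _cdf_\__cutoff_ hint can be sketched in plain numpy (a stand-in for the GraphLab argument, using an invented toy distribution): sort word probabilities in descending order and count how many are needed to reach the target cumulative mass.

```python
import numpy as np

def words_for_mass(word_probs, mass=0.5):
    # Sort probabilities highest-first, accumulate, and find the first index
    # where the cumulative mass reaches the target.
    sorted_probs = np.sort(word_probs)[::-1]
    cdf = np.cumsum(sorted_probs)
    return int(np.searchsorted(cdf, mass)) + 1

toy_topic = [0.4, 0.3, 0.2, 0.1]
```

For this toy topic, two words (0.4 + 0.3 = 0.7) are enough to cross a cumulative mass of 0.5.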
We have now seen how the hyperparameters alpha and gamma influence the characteristics of our LDA topic model, but we haven't said anything about what settings of alpha or gamma are best. We know that these parameters are responsible for controlling the smoothness of the topic distributions for documents and word distributions for topics, but there's no simple conversion between smoothness of these distributions and quality of the topic model. In reality, there is no universally "best" choice for these parameters. Instead, finding a good topic model requires that we be able to both explore the output (as we did by looking at the topics and checking some topic predictions for documents) and understand the impact of hyperparameter settings (as we have in this section).
<center>
<img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# **SpaceX Falcon 9 first stage Landing Prediction**
# Lab 1: Collecting the data
Estimated time needed: **45** minutes
In this capstone, we will predict if the Falcon 9 first stage will land successfully. SpaceX advertises Falcon 9 rocket launches on its website at a cost of 62 million dollars; other providers cost upward of 165 million dollars each, and much of the savings comes from SpaceX's ability to reuse the first stage. Therefore, if we can determine whether the first stage will land, we can determine the cost of a launch. This information could be used if an alternate company wants to bid against SpaceX for a rocket launch. In this lab, you will collect data from an API and make sure it is in the correct format. The following is an example of a successful launch.

Several examples of an unsuccessful landing are shown here:

Most unsuccessful landings are planned. SpaceX performs a controlled landing in the oceans.
## Objectives
In this lab, you will make a GET request to the SpaceX API. You will also do some basic data wrangling and formatting.
* Request to the SpaceX API
* Clean the requested data
***
## Import Libraries and Define Auxiliary Functions
We will import the following libraries into the lab
```
# Requests allows us to make HTTP requests which we will use to get data from an API
import requests
# Pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
# NumPy is a library for the Python programming language,
# adding support for large, multi-dimensional arrays and matrices,
# along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
# Datetime is a library that allows us to represent dates
import datetime
# Setting this option will print all columns of a dataframe
pd.set_option('display.max_columns', None)
# Setting this option will print all of the data in a feature
pd.set_option('display.max_colwidth', None)
```
Below we will define a series of helper functions that will help us use the API to extract information using identification numbers in the launch data.
From the <code>rocket</code> column we would like to learn the booster name.
```
# Takes the dataset and uses the rocket column to call the API and append the data to the list
def getBoosterVersion(data):
for x in data['rocket']:
response = requests.get("https://api.spacexdata.com/v4/rockets/"+str(x)).json()
BoosterVersion.append(response['name'])
```
From the <code>launchpad</code> we would like to know the name of the launch site being used, the longitude, and the latitude.
```
# Takes the dataset and uses the launchpad column to call the API and append the data to the list
def getLaunchSite(data):
for x in data['launchpad']:
response = requests.get("https://api.spacexdata.com/v4/launchpads/"+str(x)).json()
Longitude.append(response['longitude'])
Latitude.append(response['latitude'])
LaunchSite.append(response['name'])
```
From the <code>payload</code> we would like to learn the mass of the payload and the orbit that it is going to.
```
# Takes the dataset and uses the payloads column to call the API and append the data to the lists
def getPayloadData(data):
for load in data['payloads']:
response = requests.get("https://api.spacexdata.com/v4/payloads/"+load).json()
PayloadMass.append(response['mass_kg'])
Orbit.append(response['orbit'])
```
From <code>cores</code> we would like to learn the outcome of the landing, the type of the landing, the number of flights with that core, whether gridfins were used, whether the core is reused, whether legs were used, the landing pad used, the block of the core (a number used to separate versions of cores), the number of times this specific core has been reused, and the serial of the core.
```
# Takes the dataset and uses the cores column to call the API and append the data to the lists
def getCoreData(data):
for core in data['cores']:
if core['core'] != None:
response = requests.get("https://api.spacexdata.com/v4/cores/"+core['core']).json()
Block.append(response['block'])
ReusedCount.append(response['reuse_count'])
Serial.append(response['serial'])
else:
Block.append(None)
ReusedCount.append(None)
Serial.append(None)
Outcome.append(str(core['landing_success'])+' '+str(core['landing_type']))
Flights.append(core['flight'])
GridFins.append(core['gridfins'])
Reused.append(core['reused'])
Legs.append(core['legs'])
LandingPad.append(core['landpad'])
```
Now let's start requesting rocket launch data from SpaceX API with the following URL:
```
spacex_url="https://api.spacexdata.com/v4/launches/past"
response = requests.get(spacex_url)
```
Check the content of the response
```
print(response.content)
```
You should see that the response contains a massive amount of information about SpaceX launches. Next, let's try to discover some more relevant information for this project.
### Task 1: Request and parse the SpaceX launch data using the GET request
To make the requested JSON results more consistent, we will use the following static response object for this project:
```
static_json_url='https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/API_call_spacex_api.json'
response = requests.get(static_json_url)
```
We should see that the request was successful with the 200 status response code
```
response.status_code
```
Now we decode the response content as JSON using <code>.json()</code> and turn it into a Pandas dataframe using <code>.json_normalize()</code>
```
# Use the json_normalize method to convert the JSON result into a dataframe
data = pd.json_normalize(response.json())
```
Using the dataframe <code>data</code> print the first 5 rows
```
# Get the head of the dataframe
data.head()
```
You will notice that a lot of the data are IDs. For example, the rocket column has no information about the rocket, just an identification number.
We will now use the API again to get information about the launches using the IDs given for each launch. Specifically we will be using columns <code>rocket</code>, <code>payloads</code>, <code>launchpad</code>, and <code>cores</code>.
```
# Let's take a subset of our dataframe keeping only the features we want, plus the flight number and date_utc.
data = data[['rocket', 'payloads', 'launchpad', 'cores', 'flight_number', 'date_utc']]
# We will remove rows with multiple cores (those are Falcon rockets with 2 extra rocket boosters) and rows that have multiple payloads in a single rocket.
data = data[data['cores'].map(len)==1]
data = data[data['payloads'].map(len)==1]
# Since payloads and cores are lists of size 1 we will also extract the single value in the list and replace the feature.
data['cores'] = data['cores'].map(lambda x : x[0])
data['payloads'] = data['payloads'].map(lambda x : x[0])
# We also want to convert the date_utc to a datetime datatype and then extract the date, leaving out the time
data['date'] = pd.to_datetime(data['date_utc']).dt.date
# Using the date we will restrict the dates of the launches
data = data[data['date'] <= datetime.date(2020, 11, 13)]
```
* From the <code>rocket</code> we would like to learn the booster name
* From the <code>payload</code> we would like to learn the mass of the payload and the orbit that it is going to
* From the <code>launchpad</code> we would like to know the name of the launch site being used, the longitude, and the latitude.
* From <code>cores</code> we would like to learn the outcome of the landing, the type of the landing, the number of flights with that core, whether gridfins were used, whether the core is reused, whether legs were used, the landing pad used, the block of the core (a number used to separate versions of cores), the number of times this specific core has been reused, and the serial of the core.
The data from these requests will be stored in lists and will be used to create a new dataframe.
```
#Global variables
BoosterVersion = []
PayloadMass = []
Orbit = []
LaunchSite = []
Outcome = []
Flights = []
GridFins = []
Reused = []
Legs = []
LandingPad = []
Block = []
ReusedCount = []
Serial = []
Longitude = []
Latitude = []
```
These functions will apply the outputs globally to the variables above. Let's take a look at the <code>BoosterVersion</code> variable. Before we apply <code>getBoosterVersion</code> the list is empty:
```
BoosterVersion
```
Now, let's apply the <code>getBoosterVersion</code> function to get the booster version
```
# Call getBoosterVersion
getBoosterVersion(data)
```
The list has now been updated:
```
BoosterVersion[0:5]
```
We can apply the rest of the functions here:
```
# Call getLaunchSite
getLaunchSite(data)
# Call getPayloadData
getPayloadData(data)
# Call getCoreData
getCoreData(data)
```
Finally, let's construct our dataset using the data we have obtained. We will combine the columns into a dictionary.
```
launch_dict = {'FlightNumber': list(data['flight_number']),
'Date': list(data['date']),
'BoosterVersion':BoosterVersion,
'PayloadMass':PayloadMass,
'Orbit':Orbit,
'LaunchSite':LaunchSite,
'Outcome':Outcome,
'Flights':Flights,
'GridFins':GridFins,
'Reused':Reused,
'Legs':Legs,
'LandingPad':LandingPad,
'Block':Block,
'ReusedCount':ReusedCount,
'Serial':Serial,
'Longitude': Longitude,
'Latitude': Latitude}
```
Then, we need to create a Pandas data frame from the dictionary launch_dict.
```
# Create a dataframe from launch_dict
launch_df = pd.DataFrame.from_dict(launch_dict)
```
Show the first few rows of the dataframe:
```
# Show the head of the dataframe
launch_df.head()
```
### Task 2: Filter the dataframe to only include `Falcon 9` launches
Finally, we will remove the Falcon 1 launches, keeping only the Falcon 9 launches. Filter the dataframe using the <code>BoosterVersion</code> column to only keep the Falcon 9 launches. Save the filtered data to a new dataframe called <code>data_falcon9</code>.
```
# Hint data['BoosterVersion']!='Falcon 1'
data_falcon9 = launch_df[launch_df['BoosterVersion'] == 'Falcon 9']
```
Now that we have removed some values, we should reset the <code>FlightNumber</code> column:
```
data_falcon9.loc[:,'FlightNumber'] = list(range(1, data_falcon9.shape[0]+1))
data_falcon9
```
## Data Wrangling
We can see below that some of the rows are missing values in our dataset.
```
data_falcon9.isnull().sum()
```
Before we can continue we must deal with these missing values. The <code>LandingPad</code> column will retain None values to represent when landing pads were not used.
### Task 3: Dealing with Missing Values
Calculate below the mean for the <code>PayloadMass</code> column using the <code>.mean()</code> method. Then use the mean and the <code>.replace()</code> function to replace the `np.nan` values in the data with the mean you calculated.
```
# Calculate the mean value of PayloadMass column
PayloadMass_mean = data_falcon9.PayloadMass.mean()
# Replace the np.nan values with its mean value
data_falcon9['PayloadMass'] = data_falcon9['PayloadMass'].replace(np.nan, PayloadMass_mean)
```
You should see the number of missing values of the <code>PayloadMass</code> column change to zero.
```
data_falcon9.isnull().sum()
```
Now we should have no missing values in our dataset except for in <code>LandingPad</code>.
We can now export it to a <b>CSV</b> for the next section, but to make the answers consistent, in the next lab we will provide the data in a pre-selected date range.
<code>data_falcon9.to_csv('dataset_part_1.csv', index=False)</code>
## Authors
<a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Joseph Santarcangelo</a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ----------------------------------- |
| 2020-09-20 | 1.1 | Joseph | get result each time you run |
| 2020-09-20 | 1.1 | Azim | Created Part 1 Lab using SpaceX API |
| 2020-09-20 | 1.0 | Joseph | Modified Multiple Areas |
Copyright © 2021 IBM Corporation. All rights reserved.
| github_jupyter |
```
# python packages
import numpy as np
import matplotlib.pyplot as plt
import sys
import os
import inspect
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D, Bidirectional, Activation
from keras.layers import CuDNNLSTM
from keras.utils.np_utils import to_categorical
# from keras.callbacks import EarlyStopping
from keras.layers import Dropout
from sklearn.model_selection import train_test_split
import importlib
import utilis
# custom
from keras.constraints import MinMaxNorm
from keras import backend as K
from keras.layers import Layer
from keras import initializers, regularizers, constraints, Input
from keras.models import Model
sys.path.append("..")
# custom python scripts
from packages import generator
importlib.reload(generator)
# # check version
# print(inspect.getsource(generator.Keras_DataGenerator))
```
# Bidirectional LSTM with Hypotheses
```
# Check that you are running GPU's
utilis.GPU_checker()
utilis.aws_setup()
```
# Config, generators and train
```
INPUT_TENSOR_NAME = "inputs_input"
SIGNATURE_NAME = "serving_default"
W_HYP = False
# LEARNING_RATE = 0.001
BATCH_SIZE = 64
# constants
VOCAB_SIZE = 1254
INPUT_LENGTH = 3000 if W_HYP else 1000
EMBEDDING_DIM = 512
print(INPUT_LENGTH)
importlib.reload(generator)
# generators
training_generator = generator.Keras_DataGenerator(data_dir='', subset_frac = 0.04, dataset='train_new', w_hyp=W_HYP)
print()
validation_generator = generator.Keras_DataGenerator(data_dir='', subset_frac = 0.04, dataset='valid_new', w_hyp=W_HYP)
# custom dot product function
def dot_product(x, kernel):
if K.backend() == 'tensorflow':
return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
else:
return K.dot(x, kernel)
# find a way to return attention weight vector a
class AttentionWithContext(Layer):
def __init__(self,
W_regularizer=None, u_regularizer=None, b_regularizer=None,
W_constraint=None, u_constraint=None, b_constraint=None,
bias=True, **kwargs):
self.supports_masking = True
# initialization of all learnable params
self.init = initializers.get('lecun_uniform')
# regularizers for params, init as None
self.W_regularizer = regularizers.get(W_regularizer)
self.u_regularizer = regularizers.get(u_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
# constraints for params, init as None
self.W_constraint = constraints.get(W_constraint)
self.u_constraint = constraints.get(u_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
super(AttentionWithContext, self).__init__(**kwargs)
def build(self, input_shape):
# assert len(input_shape) == 3
# weight matrix
self.W = self.add_weight((input_shape[-1], input_shape[-1],),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
# bias term
if self.bias:
self.b = self.add_weight((input_shape[-1],),
initializer='lecun_uniform',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
# context vector
self.u = self.add_weight((input_shape[-1],),
initializer=self.init,
name='{}_u'.format(self.name),
regularizer=self.u_regularizer,
constraint=self.u_constraint)
super(AttentionWithContext, self).build(input_shape)
def compute_mask(self, input, input_mask=None):
# do not pass the mask to the next layers
return None
def call(self, x, mask=None):
uit = dot_product(x, self.W)
if self.bias:
uit += self.b
uit = K.tanh(uit)
ait = dot_product(uit, self.u)
a = K.exp(ait)
# apply mask after the exp. will be re-normalized next
if mask is not None:
# Cast the mask to floatX to avoid float64 upcasting in theano
a *= K.cast(mask, K.floatx())
# in some cases especially in the early stages of training the sum may be almost zero
# and this results in NaN's. A workaround is to add a very small positive number ε to the sum.
# a /= K.cast(K.sum(a, axis=1, keepdims=True), K.floatx())
a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon()* 100, K.floatx())
a = K.expand_dims(a)
weighted_input = x * a
return K.sum(weighted_input, axis=1)
def compute_output_shape(self, input_shape):
return input_shape[0], input_shape[-1]
# model
def build_model(vocab_size, embedding_dim, input_length):
sequence_input = Input(shape=(input_length,), dtype='int32')
embedded_sequences = Embedding(vocab_size, embedding_dim, input_length=input_length)(sequence_input)
output_1 = SpatialDropout1D(0.9)(embedded_sequences)
output1 = Dropout(0.9)(output_1)
output_2 = Bidirectional(CuDNNLSTM(512, return_sequences=True,
kernel_constraint=MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0),
recurrent_constraint=MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0),
bias_constraint = MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0)))(output_1)
context_vec = AttentionWithContext(
W_constraint = MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0),
u_constraint = MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0),
b_constraint = MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0))(output_2)
context_vec = Dropout(0.9)(context_vec)
predictions = Dense(41, kernel_constraint = MinMaxNorm(min_value=0.0001, max_value=1.0, rate=1.0, axis=0), activation='softmax')(context_vec)
model = Model(inputs=sequence_input, outputs=predictions)
return model
```
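For intuition, the attention layer above scores each timestep as a_t proportional to exp(u · tanh(W h_t + b)) and returns the attention-weighted sum of the hidden states. Below is a minimal NumPy sketch of the same computation; the shapes, the max-subtraction, and the epsilon value are assumptions made for the sketch, mirroring but not reproducing the Keras layer.

```python
import numpy as np

def attention_pool(H, W, b, u):
    """Pool a sequence H of shape (T, d) into one vector via attention.

    a_t is proportional to exp(u . tanh(W h_t + b));
    the output is sum_t a_t * h_t.
    """
    uit = np.tanh(H @ W + b)             # (T, d): transformed hidden states
    ait = uit @ u                        # (T,): unnormalized attention scores
    a = np.exp(ait - ait.max())          # subtract max for numerical stability
    a = a / (a.sum() + 1e-10)            # normalize; epsilon avoids divide-by-zero
    return (H * a[:, None]).sum(axis=0)  # (d,): weighted sum over timesteps
```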
### testing generator
```
model = build_model(VOCAB_SIZE, EMBEDDING_DIM, INPUT_LENGTH)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
## ARE YOU LOADING A MODEL? IF YES, RUN THE FOLLOWING LINES
# from keras.models import model_from_json
# json_file = open('model.json', 'r')
# loaded_model_json = json_file.read()
# json_file.close()
# loaded_model = model_from_json(loaded_model_json)
# # load weights into new model
# loaded_model.load_weights("model.h5")
# print("Loaded model from disk")
# # REMEMBER TO COMPILE
# loaded_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#overwriting model
# model = loaded_model
# model.layers[4].get_weights()
%%time
#try and make it run until 9 am GMT+1
n_epochs = 22
history = model.fit_generator(generator=training_generator,
validation_data=validation_generator,
verbose=1,
use_multiprocessing=False,
epochs=n_epochs)
```
## Save model
```
# FOR SAVING MODEL
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
#WARNING_DECIDE_HOW_TO_NAME_LOG
#descriptionofmodel_personwhostartsrun
#e.g. LSTM_128encoder_etc_tanc
LOSS_FILE_NAME = "recs_1"
#WARNING NUMBER 2 - CURRENTLY, EVERY TIME YOU RERUN THE CELLS BELOW, THE FILES WITH THOSE NAMES GET OVERWRITTEN
# save history - WARNING FILE NAME
utilis.history_saver_bad(history, LOSS_FILE_NAME)
test_generator = generator.Keras_DataGenerator(data_dir='', subset_frac=0.01, dataset='test_new', w_hyp=W_HYP)
model.evaluate_generator(test_generator, verbose =1)
scores = model.predict_generator(test_generator)
lis = list(range(1, 42))           # the 41 class labels (defined before first use)
medians = np.mean(scores, axis=0)  # mean predicted score per class
medians.shape, np.array(lis).shape
plt.bar(lis, medians)
plt.xlabel("Label")
plt.ylabel("Frequency")
plt.title("Distribution of Labels - Attention G")
```
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/band_math.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/band_math.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/band_math.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Load two 5-year Landsat 7 composites.
landsat1999 = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003')
landsat2008 = ee.Image('LANDSAT/LE7_TOA_5YEAR/2008_2012')
# Compute NDVI the hard way.
ndvi1999 = landsat1999.select('B4').subtract(landsat1999.select('B3')) \
.divide(landsat1999.select('B4').add(landsat1999.select('B3')))
# Compute NDVI the easy way.
ndvi2008 = landsat2008.normalizedDifference(['B4', 'B3'])
# Compute the multi-band difference image.
diff = landsat2008.subtract(landsat1999)
Map.addLayer(diff,
{'bands': ['B4', 'B3', 'B2'], 'min': -32, 'max': 32},
'difference')
# Compute the squared difference in each band.
squaredDifference = diff.pow(2)
Map.addLayer(squaredDifference,
{'bands': ['B4', 'B3', 'B2'], 'max': 1000},
'squared diff.')
```
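NDVI is simply (NIR - Red) / (NIR + Red). Outside Earth Engine, the same per-pixel arithmetic can be sketched in plain NumPy; the sample reflectance values below are made up for illustration.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Dense vegetation reflects strongly in NIR (B4) and weakly in Red (B3).
print(ndvi([0.5, 0.3], [0.1, 0.3]))  # high NDVI ~0.67, then 0 for equal bands
```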
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
| github_jupyter |
# Class 4: Plotting with Matplotlib
Matplotlib is a powerful third-party plotting library for Python (it is not part of the standard library). The website for matplotlib is at http://matplotlib.org/. And you can find a bunch of examples at the following two locations: http://matplotlib.org/examples/index.html and http://matplotlib.org/gallery.html.
Matplotlib contains a module called pyplot that was written to provide a Matlab-style plotting interface.
```
# Import matplotlib.pyplot
import matplotlib.pyplot as plt
```
Next, we want to make sure that the plots that we create are displayed in this notebook. To achieve this we have to issue a command to be interpreted by Jupyter -- called a *magic* command. A magic command is preceded by a `%` character. Magics are *not* Python and will create errors if used in a Python program outside of the Jupyter notebook (e.g., if you are running Python in IDLE or executing a Python script from the command line).
```
# Magic command for the Jupyter Notebook to display plots in the notebook
%matplotlib inline
```
## An Introductory Example
Create a plot of the sine function for x values between -6 and 6. Add axis labels and a title.
```
# Import numpy as np
import numpy as np
# Create an array of x values from -6 to 6
x = np.arange(-6,6,0.001)
# Create a variable y equal to the sine of x
y = np.sin(x)
# Use the plot function to plot x on the horizontal axis and y on the vertical axis
plt.plot(x,y)
# Add a title and axis labels
plt.title('sin(x)')
plt.xlabel('x')
plt.ylabel('y')
```
## Style Sheets
Plots made using Matplotlib are highly customizable. Users can issue commands to control the limits of the axes, location of axis ticks, labels associated with axis ticks, and the color, thickness, opacity, and style of plotted data points.
But as a starting-point, Matplotlib uses a *style sheet* to establish initial plot settings until the user specifies different alternatives. The release of Matplotlib 2.0 included a major change in the default style and the new default doesn't appeal to me.
```
# Set the Matplotlib style sheet to 'classic'
plt.style.use('classic')
# Re-run the magic command for the Jupyter Notebook to display plots in the notebook
%matplotlib inline
# Use the plot function to plot the sine function
plt.plot(x,y)
# Add a title and axis labels
plt.title('sin(x)')
plt.xlabel('x')
plt.ylabel('y')
```
The matplotlib attribute `plt.style.available` stores as a `list` the default style sheets that you can choose from. Here's a gallery of the different styles compared with each other. Aside from `classic`, other styles that I think look nice include: `bmh`, `ggplot`, and `seaborn`. But try them all out and see what you like!
```
#Print the available matplotlib style sheets. PROVIDED
print(plt.style.available)
```
## The `plot` Function
The `plot` function creates a two-dimensional plot of one variable against another.
```
# Use the help function to see the documentation for plt.plot()
help(plt.plot)
```
### Example
Create a plot of $f(x) = x^2$ with $x$ between -2 and 2.
* Set the linewidth to 3 points
* Set the line opacity (alpha) to 0.6
* Set axis labels and title
* Add a grid to the plot
```
# Create an array of x values from -2 to 2
x = np.arange(-2,2,0.001)
# Create a variable y equal to x squared
y = x**2
# Use the plot function to plot the line
plt.plot(x,y,linewidth=3,alpha = 0.6)
# Add a title and axis labels
plt.title('$f(x) = x^2$')
plt.xlabel('x')
plt.ylabel('y')
# Add grid
plt.grid()
```
### Example
Create plots of the functions $f(x) = \log x$ (natural log) and $g(x) = 1/x$ between 0.01 and 5
* Make the line for $log(x)$ solid blue
* Make the line for $1/x$ dashed magenta
* Set the linewidth of each line to 3 points
* Set the line transparency (alpha) for each line to 0.6
* Set the limits for the $x$-axis to (0,5)
* Set the limits for the $y$-axis to (-2,5)
* Set axis labels and title
* Add a legend
* Add a grid to the plot
```
# Create an array of x values from 0.01 to 5
x = np.arange(0.01,5,0.01)
# Create y variables
y1 = np.log(x)
y2 = 1/x
# Use the plot function to plot the lines
plt.plot(x,y1,'b-',linewidth=3,alpha = 0.6,label='$\log(x)$')
plt.plot(x,y2,'m--',linewidth=3,alpha = 0.6,label='$1/x$')
# Add a title and axis labels
plt.title('Two functions')
plt.xlabel('x')
plt.ylabel('y')
# Set axis limits
plt.xlim([0,5])
plt.ylim([-2,5])
# Make legend
plt.legend(loc='lower right',ncol=2)
# Add grid
plt.grid()
```
### Example
Consider the linear regression model:
\begin{align}
y_i = \beta_0 + \beta_1 x_i + \epsilon_i
\end{align}
where $x_i$ is the independent variable, $\epsilon_i$ is a random regression error term, $y_i$ is the dependent variable and $\beta_0$ and $\beta_1$ are constants.
Let's simulate the model
* Set $\beta_0=1$ and $\beta_1=-0.5$
* Create an array of $x_i$ values from -5 to 5
* Create an array of $\epsilon_i$ values from the standard normal distribution equal in length to the array of $x_i$ values
* Create an array of $y_i$ values
* Plot y against x with either a circle ('o'), triangle ('^'), or square ('s') marker and transparency (alpha) to 0.5
* Add axis lables, a title, and a grid to the plot
```
# Set betas
beta0 = 1
beta1 = -0.5
# Create x values
x = np.arange(-5,5,0.01)
# Create epsilon values from the standard normal distribution
epsilon = np.random.normal(size=len(x))
# Create y
y = beta0 + beta1*x+epsilon
# Plot
plt.plot(x,y,'s',alpha = 0.5)
# Add a title and axis labels
plt.title('Data')
plt.xlabel('x')
plt.ylabel('y')
# Set axis limits
plt.xlim([-5,5])
# Add grid
plt.grid()
```
### Example
Create plots of the functions $f(x) = x$, $g(x) = x^2$, and $h(x) = x^3$ for $x$ between -2 and 2
* Use the optional string format argument to format the lines:
- $x$: solid blue line
- $x^2$: dashed green line
- $x^3$: dash-dot magenta line
* Set the linewidth of each line to 3 points
* Set transparency (alpha) for each line to 0.7
* Set axis labels and title
* Add a legend to lower right with 3 columns
* Add a grid to the plot
```
# Create an array of x values from -2 to 2
x = np.arange(-2,2,0.001)
# Create y variables
y1 = x
y2 = x**2
y3 = x**3
# Use the plot function to plot the lines
plt.plot(x,y1,'b-',lw=3,label='$x$',alpha=0.7)
plt.plot(x,y2,'g--',lw=3,label='$x^2$',alpha=0.7)
plt.plot(x,y3,'m-.',lw=3,label='$x^3$',alpha=0.7)
# Add a title and axis labels
plt.title('Three functions')
plt.xlabel('x')
plt.ylabel('y')
# Make legend
plt.legend(loc='lower right',ncol=3)
# Add grid
plt.grid()
```
## Figures, Axes, and Subplots
Often we want to create plots with multiple axes or we want to modify the size and shape of the plot areas. To be able to do these things, we need to explicitly create a figure and then create axes within the figure. The best way to see how this works is by example.
### Example: A Single Plot with Double Width
The default dimensions of a matplotlib figure are 6 inches by 4 inches. As we saw above, this leaves some whitespace on the right side of the figure in the Notebook. Suppose we want to remove that by making the plot area twice as wide.
Plot the sine function on -6 to 6 using a figure with dimensions 12 inches by 4 inches
```
# Create an array of x values from -6 to 6
x = np.arange(-6,6,0.001)
# Create y variables
y = np.sin(x)
# Create a new figure that is twice as wide as the default
fig = plt.figure(figsize=(12,4))
# Create axis within the figure
ax1 = fig.add_subplot(1,1,1)
# Plot
ax1.plot(x,y,lw=3,alpha = 0.6)
# Add grid
ax1.grid()
```
In the previous example the `fig = plt.figure()` function creates a new figure called `fig` and `fig.add_subplot()` puts a new axis on the figure. The command `ax1 = fig.add_subplot(1,1,1)` means divide the figure `fig` into a 1 by 1 grid and assign the first component of that grid to the variable `ax1`.
### Example: Two Plots Stacked
Create a new figure with two axes in a column and plot the sine function on -6 to 6 on the top axis and the cosine function on -6 to 6 on the bottom axis.
```
# Create an array of x values from -6 to 6
x = np.arange(-6,6,0.001)
# Create y variables
y1 = np.sin(x)
y2 = np.cos(x)
# Create a new figure that is twice as tall as the default
fig = plt.figure(figsize=(6,8))
# Create axis 1 and plot with title and axis labels
ax1 = fig.add_subplot(2,1,1)
ax1.plot(x,y1,lw=3,alpha = 0.6)
ax1.grid()
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('sin')
# Create axis 2 and plot with title and axis labels
ax2 = fig.add_subplot(2,1,2)
ax2.plot(x,y2,lw=3,alpha = 0.6)
ax2.grid()
ax2.set_xlabel('x')
ax2.set_ylabel('y')
ax2.set_title('cos')
# Improve the spacing between the plots
fig.tight_layout()
```
### Example: Block of Four Plots
Create a new figure with four axes in a two-by-two grid. Plot the following functions on the interval -2 to 2:
* $y = x$
* $y = x^2$
* $y = x^3$
* $y = x^4$
Set the figure size to 12in. by 8in. Run the command `plt.tight_layout()` to adjust the figure's margins after creating your figure, axes, and plots.
```
# Create data for plot
x = np.arange(-2,2,0.001)
y1 = x
y2 = x**2
y3 = x**3
y4 = x**4
# Create a new figure that is twice as wide and twice as tall as the default
fig = plt.figure(figsize=(12,8))
# Create axis 1 and plot with title
ax1 = fig.add_subplot(2,2,1)
ax1.plot(x,y1,lw=3,alpha = 0.6)
ax1.grid()
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('$x$')
# Create axis 2 and plot with title
ax2 = fig.add_subplot(2,2,2)
ax2.plot(x,y2,lw=3,alpha = 0.6)
ax2.grid()
ax2.set_xlabel('x')
ax2.set_ylabel('y')
ax2.set_title('$x^2$')
# Create axis 3 and plot with title
ax3 = fig.add_subplot(2,2,3)
ax3.plot(x,y3,lw=3,alpha = 0.6)
ax3.grid()
ax3.set_xlabel('x')
ax3.set_ylabel('y')
ax3.set_title('$x^3$')
# Create axis 4 and plot with title
ax4 = fig.add_subplot(2,2,4)
ax4.plot(x,y4,lw=3,alpha = 0.6)
ax4.grid()
ax4.set_xlabel('x')
ax4.set_ylabel('y')
ax4.set_title('$x^4$')
# Improve the spacing between the plots
plt.tight_layout()
```
## Exporting Figures to Image Files
Use the `plt.savefig()` function to save figures to images.
```
# Create data to plot sin(x)/x on [-20,20]
x = np.arange(-20,20,0.001)
y = np.sin(x)/x
# Plot y against x
plt.plot(x,y)
# Save figure as 'fig_econ126_class04_sine_over_x.png' at 120 DPI resolution
plt.savefig('fig_econ126_class04_sine_over_x.png',dpi=120)
```
In the previous example, the image is saved as a PNG file with 120 dots per inch. This resolution is high enough to look good even when projected on a large screen. The image format is inferred by the extension on the filename.
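Since the format is inferred from the filename extension, the same figure can be written to vector formats simply by changing the filename. The file names below are arbitrary, and the `Agg` backend line is only needed so the sketch also runs outside a notebook.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so this also runs outside a notebook
import matplotlib.pyplot as plt

x = np.arange(-20, 20, 0.01)
plt.plot(x, np.sin(x) / x)
plt.savefig('sine_over_x.pdf')  # PDF: resolution-independent vector output
plt.savefig('sine_over_x.svg')  # SVG: vector output for the web
```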
| github_jupyter |
# Multiclass Partition Explainer: Emotion Data Example
This notebook demonstrates how to use the partition explainer for a multiclass scenario with text data and visualize feature attributions towards individual classes. For computing SHAP values in a multiclass scenario, it uses the partition explainer over the text data and computes the attribution of a feature towards a given class based on its marginal contribution towards the difference between the one-vs-all logit for that class and its base value.
Below we demonstrate using the partition explainer on the Emotion dataset provided by Hugging Face and Emo-MobileBERT (https://huggingface.co/lordtt13/emo-mobilebert), a thin version of BERT-LARGE trained on the Emotion dataset to infer the underlying emotion of an utterance by choosing from four emotion classes: happy, sad, angry, and others.
```
import os
import copy
import shutil
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import shap
import scipy as sp
import nlp
import torch
pd.set_option('display.max_columns', None)
pd.set_option('max_colwidth', None)
pd.set_option("max_rows", None)
```
### Load data
```
train, test = nlp.load_dataset("emo", split = ["train", "test"])
id2label = {0: 'others', 1: 'happy', 2: 'sad', 3: 'angry'}
labels=list(id2label.values())
label2id = {}
for i,label in enumerate(labels):
label2id[label]=i
data={'text':[],
'emotion':[]}
for val in train:
if id2label[val['label']]!='others':
data['text'].append(val['text'])
data['emotion'].append(id2label[val['label']])
data = pd.DataFrame(data)
```
### Load model and tokenizer
```
tokenizer = AutoTokenizer.from_pretrained("lordtt13/emo-mobilebert",use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained("lordtt13/emo-mobilebert").cuda()
```
### Distribution of emotion labels
```
ax = data.emotion.value_counts().plot.bar()
```
### Sample data
```
data.head()
```
### Define function
```
def f(x):
tv = torch.tensor([tokenizer.encode(v, pad_to_max_length=True, max_length=128,truncation=True) for v in x]).cuda()
attention_mask = (tv!=0).type(torch.int64).cuda()
outputs = model(tv,attention_mask=attention_mask)[0].detach().cpu().numpy()
scores = (np.exp(outputs).T / np.exp(outputs).sum(-1)).T
val = sp.special.logit(scores)
return val
```
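The function `f` above turns raw model outputs into per-class probabilities (a softmax) and then into per-class log-odds. The same transform in isolation, using only NumPy and SciPy (the toy input below is made up):

```python
import numpy as np
import scipy.special

def outputs_to_logits(outputs):
    """Softmax over the last axis, then the per-class logit (log-odds)."""
    scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True)
    return scipy.special.logit(scores)

# Four equal raw outputs -> each class probability 0.25 -> logit ln(1/3)
print(outputs_to_logits(np.zeros((1, 4))))
```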
### Create an explainer object
```
explainer = shap.Explainer(f,tokenizer,output_names=labels)
```
### Compute shap values for multiclass scenario
```
shap_values_multiclass = explainer(data['text'][0:50])
```
### Wrapper functions for bar and text plots
```
def custom_masked_bar_plot(class_index,mask_type,viz_type):
#determine type of operation on the explanation object
if viz_type=='mean':
compute_shap=copy.copy(shap_values_multiclass.mean(0))
if viz_type=='sum':
compute_shap=copy.copy(shap_values_multiclass.sum(0))
if viz_type=='abs_mean':
compute_shap=copy.copy(shap_values_multiclass.abs.mean(0))
if viz_type=='abs_sum':
compute_shap=copy.copy(shap_values_multiclass.abs.sum(0))
#create a mask to visualize either positively or negatively contributing features
if mask_type=='pos':
mask=compute_shap.values[:,class_index]>=0
else:
mask=compute_shap.values[:,class_index]<=0
#slice values related to a given class
compute_shap.values=compute_shap.values[:,class_index][mask]
compute_shap.feature_names=list(np.array(compute_shap.feature_names)[mask])
#plot
shap.plots.bar(compute_shap.abs,order=shap.Explanation.identity)
def text_plot(class_index,start_idx,end_idx):
shap_values = copy.copy(shap_values_multiclass[:])
#slice values related to a given class
if len(shap_values.base_values.shape) == 1:
shap_values.values = shap_values.values[:,class_index]
shap_values.hierarchical_values = shap_values.hierarchical_values[:,class_index]
shap_values.base_values = shap_values.base_values[class_index]
else:
for i in range(start_idx,end_idx):
shap_values.values[i] = shap_values.values[i][:,class_index]
shap_values.hierarchical_values[i] = shap_values.hierarchical_values[i][:,class_index]
shap_values.base_values = shap_values.base_values[:,class_index]
#plot
shap.plots.text(shap_values[start_idx:end_idx])
```
### Top words towards emotion happy
```
label='happy'
shap.plots.bar(shap_values_multiclass.abs.mean(0)[:,label2id[label]])
```
### Top positive words towards emotion happy
```
label='happy'
custom_masked_bar_plot(label2id[label],'pos','mean')
```
### Top negative words towards emotion happy
```
label='happy'
custom_masked_bar_plot(label2id[label],'neg','mean')
```
### Top words towards emotion sad
```
label='sad'
shap.plots.bar(shap_values_multiclass.abs.mean(0)[:,label2id[label]])
```
### Top positive words towards emotion sad
```
label='sad'
custom_masked_bar_plot(label2id[label],'pos','mean')
```
### Top negative words towards emotion sad
```
label='sad'
custom_masked_bar_plot(label2id[label],'neg','mean')
```
### Top words towards emotion angry
```
label='angry'
shap.plots.bar(shap_values_multiclass.abs.mean(0)[:,label2id[label]])
```
### Top positive words towards emotion angry
```
label='angry'
custom_masked_bar_plot(label2id[label],'pos','mean')
```
### Top negative words towards emotion angry
```
label='angry'
custom_masked_bar_plot(label2id[label],'neg','mean')
```
### Visualizing text plots over attribution of features towards a given class
```
label='sad'
text_plot(label2id[label],0,3)
```
| github_jupyter |
```
from WenShuan import WenShuan
from bs4 import BeautifulSoup
import re
from matplotlib import pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = "retina"
```
# Organize WenShuan into Text and Comment Tuples
In this notebook, we split the texts and commentaries in WenShuan into a list of tuples. The final goal is to analyze the 李善 commentaries in WenShuan. Because Han-Ji uses a smaller font size for both commentaries and sound glosses, we have to get rid of the sound glosses first.
## Loading Data for WenShuan
First, we load the raw HTML pages from the `data/` folder.
```
wenshuan = WenShuan("2018-06-02", "MF" )
wenshuan.bookname = "wenshuan_rare_sinica_char"
wenshuan.load_htmls()
len(wenshuan.flat_bodies)
```
The `wenshuan.flat_bodies` attribute holds a list of `bs4` objects, each standing for one raw page from WenShuan. Because we want to design further methods to organize the information in WenShuan, keeping this list of `bs4` objects saves us from repeatedly looping over the Han-Ji web pages.
## Get the List of Bookmark
Since we have a list of `bs4` soups, we can loop over it and get the bookmark on every page via
```python
for soup in self.flat_bodies:
# extract gobookmark
path = soup.find('a', attrs={'class', 'gobookmark'}).text
self.paths.append(path)
```
and save in the `wenshuan.paths`.
```
wenshuan.extract_paths()
# preview some bookmarks
wenshuan.paths[:5]
```
## Extract MetaData
The extraction of metadata is done by organizing the bookmark. For the author names, however, we first extract the names on each page using `indent` & `padding` via `wenshuan.get_author_bag`, which saves the author names and the comments attached to them into a `defaultdict(list)` in `wenshuan.author_bag`. We then use `author_bag` to cross-check the author names in the bookmarks.
```
wenshuan.get_author_bag()
```
The warning shows that some of an author's comments may be cut off by the page divider. In that case, we attach the comments to the previous author name on the same page.
```
# all the author names gathered from every page
wenshuan.author_bag.keys()
wenshuan.extract_meta() # Warning will print all the pages without an author name in bookmark
```
Here we found an error in 集/總集/文選/卷第五十九 碑文下 墓誌/碑文下/王簡棲頭陁寺碑文: the author name we got from the raw page is 主簡棲, but we only have 王簡棲 in the bookmark.
```
# here are the metadata dicts
wenshuan.flat_meta[:5]
```
# Organize the Text and Commentary Tuples
In this step, we simply pair each piece of text with the smaller-font commentary next to it.
```
wenshuan.heads2tuples() # put head and all other thing behind a head to a tuple
wenshuan.passages2tuples() # put text and the smaller fontsize behind it to a tuple
wenshuan.flat_passages[5]
```
## Possible Methods to Cut Off the Sound Glosses
Usually, the sound glosses are shorter than the true commentary. We can test this hypothesis by plotting the lengths of all comments as a histogram.
```
plt.hist(
[len(c) for passage in wenshuan.flat_passages
for _,c in passage], log=1
)
```
The overall distribution ranges from 0 to ~1,000 characters, but we only want to focus on the shorter range, say 0–10 characters.
```
plt.hist(
[len(c) for passage in wenshuan.flat_passages
for _,c in passage],
bins=range(20)
)
```
There's a clear gap around 5 characters, so we can try setting the limit to 5 characters.
```
print([c for passage in wenshuan.flat_passages
for _,c in passage if len(c) < 5][::5])
```
It seems OK, but we notice that the 3-character phrases are those containing 「切」, 「反」, or 「作」, or the decompositions of rare characters. We also have to replace " " with "" and strip all punctuation. The irregularities are '其三。', '其八。', '其八。', '其六。', '其九。', '其七。', '其四。'; I do not have a good strategy to strip them at the current stage.
```
print([c for passage in wenshuan.flat_passages
for _,c in passage if len(c) == 3])
```
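The whitespace and punctuation stripping mentioned above can be wrapped in a small helper. This is just a sketch using the same punctuation set; the helper name `normalize_comment` is ours, not part of the WenShuan class:

```python
import re

def normalize_comment(comment):
    # strip spaces, then remove the common punctuation marks before measuring length
    return re.sub(r"[、。,?:;「」]", "", comment.replace(" ", ""))

print(len(normalize_comment("古恨切。")))  # the stripped gloss is 3 characters long
```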
## Punctuations Counting
The following is the trickiest part. The hypothesis here is that sound glosses fall into 2 categories:
- **Only sound glosses in the comment**: we should see zero or one punctuation mark. (mistakes⁇ why put punctuation in sound glosses 🤷)
- **Sound glosses before the 李善 commentary**: we can split on `。` and take the sound glosses before the first period. In this case, there should be zero punctuation marks before that first period.
So, this function simply counts the total number of punctuation marks in a string, based on the items in `pun_bag`.
```
def punctuation_count(phrase, pun_bag = {"、","。", ",", "?", ":", ";", "「", "」"}):
    '''Count the number of punctuation marks in phrase, based on a given pun_bag'''
    return sum(phrase.count(p) for p in pun_bag)
punctuation_count('東觀漢記曰:永平二年,詔曰:登靈臺,正儀度。休徵,已見上文。') # Yeah, 7 punctuation marks
```
So, my suggestion for dividing these two types of commentaries is as follows:
```
def test(text, comment):
    if (punctuation_count(comment) < 2 and # allow only 0 or 1 punctuation mark
        len(re.sub(r"[、。,?:;「」]", "", comment.replace(" ", ""))) < 5 and # restrict comment to fewer than 5 chars
        comment != ""): # do not count ""
        return (text[-1], comment)
    elif (punctuation_count(comment.split("。")[0]) == 0 and # no punctuation before the first period
        0 < len(comment.split("。")[0]) < 5): # length of comment before the period is < 5 and > 0
        return (text[-1], comment.split("。")[0])
    else: return None
# case 1
text, comment = wenshuan.flat_passages[29][45]
print("CASE 1:", text, comment, test(text, comment), "\n")
# case 2
text, comment = wenshuan.flat_passages[29][-12]
print("CASE 2:", text, comment, test(text, comment), "\n")
```
The hilarious thing is that `text[-1]` may refer to a punctuation mark. 🤦
```
for passage in wenshuan.flat_passages[:5]:
for text, comment in passage:
if test(text, comment) != None:
print(test(text, comment), "What's going on?", text)
```
## Avoid Attaching Sound Glosses to Punctuations 🤦
So the idea here is simple: search backward and find the first character that is not a punctuation mark.
```
def _backward_char_search(phrase, exclude = {" ", "、","。", ",", "?", ":", ";", "「", "」"}):
for char in phrase[::-1]:
if char not in exclude:
return char
else: continue
# Try it!
_backward_char_search("於是聖皇乃握乾符,闡坤珍。披皇圖,稽帝文。赫然發憤,應若興雲。霆擊昆陽,憑怒雷震。")
```
Yeah.
## Wrap All Things Up
```
def _punctuation_count(phrase, pun_bag = {"、","。", ",", "?", ":", ";", "「", "」"}):
    '''Count the number of punctuation marks in phrase, based on a given pun_bag'''
    return sum(phrase.count(p) for p in pun_bag)
def _backward_char_search(phrase, exclude = {" ", "、","。", ",", "?", ":", ";", "「", "」"}):
    '''Return the first char which is not in exclude.'''
    for char in phrase[::-1]:
        if char not in exclude:
            return char
        else: continue
def _sound_glosses_check(text, comment):
    '''Check whether the comment is a sound gloss.
    If it is, return (character referred to, sound) as a tuple.'''
    if (_punctuation_count(comment) < 2 and
        len(re.sub(r"[、。,?:;「」]", "", comment.replace(" ", ""))) < 5 and
        comment != ""):
        return _backward_char_search(text), re.sub(r"[、。,?:;「」]", "", comment)
    elif (_punctuation_count(comment.split("。")[0]) == 0 and
        0 < len(comment.split("。")[0]) < 5):
        return _backward_char_search(text), comment.split("。")[0]
    else: return None
# A place to save sound glosses
sound_glosses = []
for i,passage in enumerate(wenshuan.flat_passages):
for p,c in passage:
# check if c is a single phrase comment
sound_gloss = _sound_glosses_check(p, c)
if sound_gloss != None:
sound_glosses.append((i,) + sound_gloss)
```
Apparently, comments with length 4 are not sound glosses. 😑
```
print([c for i,p,c in sound_glosses if len(c) == 4][::5])
# Repeat the code above, but change the length condition for the sound glosses
def _sound_glosses_check(text, comment):
    '''Check whether the comment is a sound gloss.
    If it is, return (character referred to, sound) as a tuple.'''
    if (_punctuation_count(comment) < 2 and
        len(re.sub(r"[、。,?:;「」]", "", comment.replace(" ", ""))) < 4 and
        comment != ""):
        return _backward_char_search(text), re.sub(r"[、。,?:;「」]", "", comment)
    elif (_punctuation_count(comment.split("。")[0]) == 0 and
        0 < len(comment.split("。")[0]) < 4):
        return _backward_char_search(text), comment.split("。")[0]
    else: return None
# A place to save sound glosses
sound_glosses = []
for i,passage in enumerate(wenshuan.flat_passages):
for p,c in passage:
# check if c is a single phrase comment
sound_gloss = _sound_glosses_check(p, c)
if sound_gloss != None:
sound_glosses.append((i,) + sound_gloss)
print(sound_glosses[::15])
```
Yeah, so far seems ok.
## Put back to WenShuan Class
This part shows the class methods we defined in this notebook, as they appear in the WenShuan class:
```python
class WenShuan(Book):
...
    def _punctuation_count(self, phrase, pun_bag = {"、","。", ",", "?", ":", ";", "「", "」"}):
        '''Count the number of punctuation marks in phrase, based on a given pun_bag'''
        return sum(phrase.count(p) for p in pun_bag)
    def _backward_char_search(self, phrase, exclude = {" ", "、","。", ",", "?", ":", ";", "「", "」"}):
        '''Return the first char which is not in exclude.'''
        for char in phrase[::-1]:
            if char not in exclude:
                return char
            else: continue
    def _sound_glosses_check(self, text, comment):
        '''Check whether the comment is a sound gloss.
        If it is, return (character referred to, sound) as a tuple.'''
        if (self._punctuation_count(comment) < 2 and
            len(re.sub(r"[、。,?:;「」]", "", comment.replace(" ", ""))) < 4 and
            comment != ""):
            return self._backward_char_search(text), re.sub(r"[、。,?:;「」]", "", comment)
        elif (self._punctuation_count(comment.split("。")[0]) == 0 and
            0 < len(comment.split("。")[0]) < 4):
            return self._backward_char_search(text), comment.split("。")[0]
        else: return None
def extract_sound_glosses(self):
# A place to save sound glosses
self.sound_glosses = []
for i,passage in enumerate(self.flat_passages):
for p,c in passage:
# check if c is a single phrase comment
sound_gloss = self._sound_glosses_check(p, c)
if sound_gloss != None:
self.sound_glosses.append((i,) + sound_gloss)
```
## (Added) Take Out Sound Glosses
We not only want a bag of sound glosses as a dictionary, but we also want to remove the sound-gloss sentences from the original texts. To do that, we can use a buffer to hold phrases whose comments are sound glosses and concatenate them onto the next phrase whose comment is not.
```
passage = wenshuan.flat_passages[5]
# A place to collect the merged phrases
new_passage = []
p_previous_buffer = ''
for j,(p,c) in enumerate(passage):
    # check if c is a sound gloss
    sound_gloss = _sound_glosses_check(p, c)
    if sound_gloss != None:
        p_previous_buffer += p
    else:
        new_passage.append((p_previous_buffer + p, c))
        p_previous_buffer = ''
passage
new_passage
```
Wow, I notice we should take care of the punctuation inside the sound glosses. Otherwise, run-on phrases like '百穀蓁蓁,庶草蕃廡屢惟豊年,於皇樂胥。' would frequently happen.
## (Added) Bring Back the Inline Punctuations
```
sound_glosses = []
new_flat_passages = []
for i,passage in enumerate(wenshuan.flat_passages):
    # A place to collect the merged phrases for this page
    new_passage = []
    p_previous_buffer = ''
    for j,(p,c) in enumerate(passage):
        # check if c is a sound gloss
        sound_gloss = _sound_glosses_check(p, c)
        if sound_gloss != None:
            sound_glosses.append((i,) + sound_gloss)
            p_previous_buffer += p
            # CASE 2: Inline Sound Glosses
            if len(c) >= 5:
                if p_previous_buffer[-1] != "。":
                    p_previous_buffer += "。"
            # CASE 1: Single Phrase
            elif re.search(r"(.+)([、。,?:;「」])", c) != None:
                match = re.search(r"(.+)([、。,?:;「」])", c)
                p_previous_buffer += match.group(2)
        else:
            new_passage.append((p_previous_buffer + p, c))
            p_previous_buffer = ''
    new_flat_passages.append(new_passage)
for i,passage in enumerate(new_flat_passages):
for p,_ in passage:
for match in re.finditer(r"[、。,?:;][、。,?:;]", p):
print(i, p)
```
It is still not perfect!! 🤨
However, it would become too complicated if we added many more `if..then..` branches to the control flow. We can deal with the duplicate punctuation afterward.
So, we can update the method as:
```python
def extract_sound_glosses(self, remove_sound_glosses=True):
    # A place to save sound glosses
    self.sound_glosses = []
    new_flat_passages = []
    for i,passage in enumerate(self.flat_passages):
        new_passage = []
        p_previous_buffer = ''
        for j,(p,c) in enumerate(passage):
            # check if c is a sound gloss
            sound_gloss = self._sound_glosses_check(p, c)
            if sound_gloss != None:
                self.sound_glosses.append((i,) + sound_gloss)
                p_previous_buffer += p
                # CASE 2: Inline Sound Glosses
                if len(c) >= 5:
                    if p_previous_buffer[-1] != "。":
                        p_previous_buffer += "。"
                # CASE 1: Single Phrase
                elif re.search(r"(.+)([、。,?:;「」])", c) != None:
                    match = re.search(r"(.+)([、。,?:;「」])", c)
                    p_previous_buffer += match.group(2)
            else:
                new_passage.append((p_previous_buffer + p, c))
                p_previous_buffer = ''
        new_flat_passages.append(new_passage)
    if remove_sound_glosses:
        self.flat_passages = new_flat_passages
```
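For the duplicate punctuation we decided to deal with afterward, a minimal sketch could collapse consecutive marks in the merged phrases. The helper name `dedupe_punctuation` is an assumption, not an existing WenShuan method:

```python
import re

def dedupe_punctuation(phrase, puns="、。,?:;"):
    # collapse any run of consecutive punctuation marks down to its first mark
    pattern = r"([{0}])[{0}]+".format(puns)
    return re.sub(pattern, r"\1", phrase)

print(dedupe_punctuation("庶草蕃廡。。屢惟豊年,,於皇樂胥。"))
```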
### (Supplementary) Anti-Correlation between the Num of Single Phrase Sound Glosses and Sound Glosses Inline?
While trying to find a page to test the `extract_sound_glosses` function, I found it difficult to find a good sample! I could hardly find a page containing both inline sound glosses and single-phrase sound glosses at the same time, so I guessed there might be some kind of anti-correlation.
```
# store (num_of_single_sound, num_of_sound_inline)
single_inline_pairs = []
for i,passage in enumerate(wenshuan.flat_passages):
num_single = 0
num_inline = 0
    for j,(p,c) in enumerate(passage):
        sound_gloss = _sound_glosses_check(p, c)
        if sound_gloss != None:
            # inline sound gloss: longer comment
            if len(c) >= 5:
                num_inline += 1
            # single phrase sound gloss: short comment
            elif len(c) < 5:
                num_single += 1
single_inline_pairs.append((num_single, num_inline))
plt.scatter(
[p[0] for p in single_inline_pairs if sum(p) > 0],
[p[1] for p in single_inline_pairs if sum(p) > 0], alpha=0.5
)
plt.xlabel("Num of Single Phrase Sound Glosses in a Page")
plt.ylabel("Num of Inline Phrase Sound Glosses in a Page")
plt.hist([p[0] / sum(p) for p in single_inline_pairs if sum(p) > 0],)
plt.xlabel("(Single Phrase Sound Glosses)/(Num of Sound Glosses)")
```
It appears that single-phrase sound glosses do not often co-occur with inline sound glosses. Accordingly, in the histogram above, low and high values of `(Single Phrase Sound Glosses)/(Num of Sound Glosses)` cluster into distinct groups.
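One way to quantify this apparent anti-correlation is a Pearson coefficient over the per-page counts. This is only a sketch: `pairs` plays the role of `single_inline_pairs` above, and pages with no sound glosses are dropped, as in the plots:

```python
import numpy as np

def single_inline_correlation(pairs):
    # pairs: list of (num_single, num_inline) per page; keep pages with any sound gloss
    kept = [(s, i) for s, i in pairs if s + i > 0]
    singles = np.array([s for s, _ in kept], dtype=float)
    inlines = np.array([i for _, i in kept], dtype=float)
    return np.corrcoef(singles, inlines)[0, 1]

# pages dominated by one kind of gloss should give a negative coefficient
print(single_inline_correlation([(3, 0), (2, 0), (0, 2), (0, 3), (0, 0)]))
```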
# `AStream` Online Training
**[THIS IS WORK IN PROGRESS]**
This notebook performs online training of the **appearance stream parent model** on the **car-shadow** sequence, so make sure you've run the [`AStream` Offline Training](astream_offline_training.ipynb) notebook before running this one.
The online training of the `AStream` network is done by finetuning the parent model **on the first frame** of the video sequence. This is the only frame for which a mask is provided. It is augmented using scaling and vertical flipping. The network is trained for 500 iterations using the same training parameters as during offline training, except that deep supervision is disabled.

To monitor training, run:
```
tensorboard --logdir E:\repos\tf-video-seg\tfvos\models\astream_car-shadow
http://localhost:6006
```
```
"""
astream_online_training.ipynb
AStream online trainer
Written by Phil Ferriere
Licensed under the MIT License (see LICENSE for details)
Based on:
- https://github.com/scaelles/OSVOS-TensorFlow/blob/master/osvos_parent_demo.py
Written by Sergi Caelles (scaelles@vision.ee.ethz.ch)
This file is part of the OSVOS paper presented in:
Sergi Caelles, Kevis-Kokitsi Maninis, Jordi Pont-Tuset, Laura Leal-Taixe, Daniel Cremers, Luc Van Gool
One-Shot Video Object Segmentation
CVPR 2017
Unknown code license
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os, sys
from PIL import Image
import numpy as np
import tensorflow as tf
slim = tf.contrib.slim
import matplotlib.pyplot as plt
# Import model files
import model
import datasets
import visualize
```
## Configuration
```
# Model paths
seq_name = "car-shadow"
segnet_stream = 'astream'
parent_path = 'models/' + segnet_stream + '_parent/' + segnet_stream + '_parent.ckpt-50000'
ckpt_name = segnet_stream + '_' + seq_name
logs_path = 'models/' + ckpt_name
# Online training parameters
gpu_id = 0
max_training_iters = 500
learning_rate = 1e-8
save_step = max_training_iters
side_supervision = 3
display_step = 10
```
## Dataset load
```
# Load the DAVIS 2016 sequence
options = datasets._DEFAULT_DAVIS16_OPTIONS
options['use_cache'] = False
options['data_aug'] = True
# Set the following to wherever you have downloaded the DAVIS 2016 dataset
dataset_root = 'E:/datasets/davis2016/' if sys.platform.startswith("win") else '/media/EDrive/datasets/davis2016/'
test_frames = sorted(os.listdir(dataset_root + 'JPEGImages/480p/' + seq_name))
test_imgs = ['JPEGImages/480p/' + seq_name + '/' + frame for frame in test_frames]
train_imgs = ['JPEGImages/480p/' + seq_name + '/' + '00000.jpg ' + 'Annotations/480p/' + seq_name + '/' + '00000.png']
dataset = datasets.davis16(train_imgs, test_imgs, dataset_root, options)
# Display dataset configuration
dataset.print_config()
```
## Online Training
```
# Finetune this branch of the binary segmentation network
with tf.Graph().as_default():
with tf.device('/gpu:' + str(gpu_id)):
global_step = tf.Variable(0, name='global_step', trainable=False)
model.train_finetune(dataset, parent_path, side_supervision, learning_rate, logs_path, max_training_iters,
save_step, display_step, global_step, segnet_stream, iter_mean_grad=1, ckpt_name=ckpt_name)
```
## Training losses & learning rate
You should get training curves similar to the following:



## Testing
```
# Result path (if you want to check how well this branch is doing on its own)
result_path = dataset_root + 'Results/Segmentations/480p/' + ckpt_name
# Test this branch of the network
with tf.Graph().as_default():
    with tf.device('/gpu:' + str(gpu_id)):
        ckpt_path = logs_path + '/' + ckpt_name + '.ckpt-' + str(max_training_iters)
        model.test(dataset, ckpt_path, result_path)
```
Log output should be similar to the following:
```
INFO:tensorflow:Restoring parameters from models\car-shadow_new\car-shadow_new.ckpt-500
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00000.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00001.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00002.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00003.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00004.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00005.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00006.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00007.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00008.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00009.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00010.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00011.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00012.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00013.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00014.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00015.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00016.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00017.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00018.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00019.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00020.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00021.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00022.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00023.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00024.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00025.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00026.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00027.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00028.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00029.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00030.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00031.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00032.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00033.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00034.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00035.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00036.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00037.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00038.png
Saving E:/datasets/davis2016\Results\Segmentations\480p\OSVOS\car-shadow_new\00039.png
```
## Visual evaluation
Let's load the original images and their predicted masks to get an idea of how this branch of the network is doing. Note that the first mask is displayed in red overlay, as it is given to us. The predicted masks are displayed using a green overlay.
```
# Load results
frames = []
predicted_masks = []
for test_frame in test_frames:
    frame_num = test_frame.split('.')[0]
    frame = np.array(Image.open(dataset_root + 'JPEGImages/480p/' + seq_name + '/' + test_frame))
    predicted_mask = np.array(Image.open(result_path + '/' + frame_num + '.png'))
    frames.append(frame)
    predicted_masks.append(predicted_mask)
# Overlay the masks on top of the frames
frames_with_predictions = visualize.overlay_frames_with_predictions(frames, predicted_masks)
```
### Display individual frames
```
visualize.display_images(frames_with_predictions)
```
### Display results as a video clip
```
# Set path to video clips
video_clip_folder = dataset_root + 'clips/'
video_clip = video_clip_folder + ckpt_name + '.mp4'
# Combine images in a video clip
visualize.make_clip(video_clip, frames_with_predictions)
# Display video
visualize.show_clip(video_clip)
```
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Solution Notebook
## Problem: Given sorted arrays A, B, merge B into A in sorted order.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
## Constraints
* Does A have enough space for B?
* Yes
* Can the inputs have duplicate array items?
* Yes
* Can we assume the inputs are valid?
* No
* Do the inputs also include the actual sizes of A and B?
* Yes
* Can we assume this fits memory?
* Yes
## Test Cases
* A or B is None -> Exception
* index of last A or B < 0 -> Exception
* A or B is empty
* General case
* A = [1, 3, 5, 7, 9, None, None, None]
* B = [4, 5, 6]
* A = [1, 3, 4, 5, 5, 6, 7, 9]
## Algorithm
<pre>
i k
A = [1, 3, 5, 7, 9, None, None, None]
j
B = [4, 5, 6]
---
A[k] = max(A[i], B[j])
i k
A = [1, 3, 5, 7, 9, None, None, 9]
j
B = [4, 5, 6]
---
A[k] = max(A[i], B[j])
i k
A = [1, 3, 5, 7, 9, None, 7, 9]
j
B = [4, 5, 6]
---
A[k] = max(A[i], B[j])
i k
A = [1, 3, 5, 7, 9, 6, 7, 9]
j
B = [4, 5, 6]
---
A[k] = max(A[i], B[j])
i k
A = [1, 3, 5, 7, 5, 6, 7, 9]
j
B = [4, 5, 6]
---
A[k] = max(A[i], B[j])
i k
A = [1, 3, 5, 5, 5, 6, 7, 9]
j
B = [4, 5, 6]
---
A[k] = max(A[i], B[j])
i k
A = [1, 3, 4, 5, 5, 6, 7, 9]
j
B = [4, 5, 6]
---
A[k] = max(A[i], B[j])
ik
A = [1, 3, 4, 5, 5, 6, 7, 9]
B = [4, 5, 6]
---
A = [1, 3, 4, 5, 5, 6, 7, 9]
</pre>
Complexity:
* Time: O(m + n)
* Space: O(1)
## Code
```
class Array(object):
def merge_into(self, source, dest, source_end_index, dest_end_index):
if source is None or dest is None:
raise TypeError('source or dest cannot be None')
if source_end_index < 0 or dest_end_index < 0:
raise ValueError('end indices must be >= 0')
if not source:
return dest
if not dest:
return source
source_index = source_end_index - 1
dest_index = dest_end_index - 1
insert_index = source_end_index + dest_end_index - 1
while dest_index >= 0:
if source[source_index] > dest[dest_index]:
source[insert_index] = source[source_index]
source_index -= 1
else:
source[insert_index] = dest[dest_index]
dest_index -= 1
insert_index -= 1
return source
```
## Unit Test
```
%%writefile test_merge_into.py
import unittest
class TestArray(unittest.TestCase):
def test_merge_into(self):
array = Array()
self.assertRaises(TypeError, array.merge_into, None, None, None, None)
self.assertRaises(ValueError, array.merge_into, [1], [2], -1, -1)
a = [1, 2, 3]
self.assertEqual(array.merge_into(a, [], len(a), 0), [1, 2, 3])
a = [1, 3, 5, 7, 9, None, None, None]
b = [4, 5, 6]
expected = [1, 3, 4, 5, 5, 6, 7, 9]
self.assertEqual(array.merge_into(a, b, 5, len(b)), expected)
print('Success: test_merge_into')
def main():
test = TestArray()
test.test_merge_into()
if __name__ == '__main__':
main()
%run -i test_merge_into.py
```
# Named Entity Recognition (NER) With SpaCy
We will be performing NER on threads from the **Investing** subreddit, but first let's test SpaCy for named entity recognition (NER) using an example from */r/investing*.
```
import spacy
from spacy import displacy
!python -m spacy download en_core_web_sm
nlp = spacy.load('en_core_web_sm')
txt = ("Given the recent downturn in stocks especially in tech which is likely to persist as yields keep going up, "
"I thought it would be prudent to share the risks of investing in ARK ETFs, written up very nicely by "
"[The Bear Cave](https://thebearcave.substack.com/p/special-edition-will-ark-invest-blow). The risks comes "
"primarily from ARK's illiquid and very large holdings in small cap companies. ARK is forced to sell its "
"holdings whenever its liquid ETF gets hit with outflows as is especially the case in market downturns. "
"This could force very painful liquidations at unfavorable prices and the ensuing crash goes into a "
"positive feedback loop leading into a death spiral enticing even more outflows and predatory shorts.")
doc = nlp(txt)
displacy.render(doc, style='ent')
# displacy.serve(doc, style='ent') if not running in a notebook
```
Immediately we're able to produce not perfect, but pretty good NER. We are using the [`en_core_web_sm`](https://spacy.io/models/en) model - `en` referring to English and `sm` to small.
The model accurately identifies ARK as an organization. It also classifies ETF (exchange-traded fund) as an organization, which is not correct (an ETF is a grouping of securities on the markets), but it's easy to see why it is classified as one. The other tag we can see is `WORK_OF_ART`; it isn't inherently clear what exactly this means, so we can get more information using `spacy.explain`:
```
spacy.explain('WORK_OF_ART')
```
And we can see that this description fits well to the tagged item, which refers to an article (although not quite a book).
We have a visual output from our tagged text, but this won't be particularly useful programmatically. What we need is a way to extract the relevant tags (the organizations) from our text. To do that we can use `doc.ents`, which returns a list of all identified entities.
Each item in this entity list contains two attributes that we are interested in, `label_` and `text`:
```
for entity in doc.ents:
print(f"{entity.label_}: {entity.text}")
```
We're almost there. Now, we need to filter out any entities that are not `ORG` entities, and append those remaining `ORG`s to an organization list:
```
# initialize our list
org_list = []
for entity in doc.ents:
# if label_ is ORG, we append text, otherwise ignore
if entity.label_ == 'ORG':
org_list.append(entity.text)
org_list
# we don't need to see 'ARK' three times, so we use set() to remove duplicates, and then convert back to list
org_list = list(set(org_list))
org_list
```
# Named Entity Recognition (NER)
In this notebook, you will learn to build a model for the Named Entity Recognition (NER) task with Trax.
# Introduction
We start by defining named entity recognition (NER). NER is a subtask of information extraction that locates and classifies named entities in a text. The named entities could be organizations, persons, locations, times, etc.
For example:
<img src = 'ner.png' width="width" height="height" style="width:600px;height:150px;"/>
Is labeled as follows:
- French: geopolitical entity
- Morocco: geographic entity
- Christmas: time indicator
Everything else that is labeled with an `O` is not considered to be a named entity. In this notebook, you will train a named entity recognition system that can be trained in a few seconds (on a GPU) and reaches around 75% accuracy. Then, you will load the same model trained for a longer period of time, and evaluate it to get 96% accuracy! Finally, you will be able to test your named entity recognition system with your own sentences.
```
#!pip -q install trax==1.3.1
import trax
from trax import layers as tl
import os
import numpy as np
import pandas as pd
from utils import get_params, get_vocab
import random as rnd
# set random seeds to make this notebook easier to replicate
trax.supervised.trainer_lib.init_random_number_generators(33)
```
<a name="1"></a>
# Part 1: Exploring the data
We will be using a dataset from Kaggle, which we will preprocess for you. The original data consists of four columns, the sentence number, the word, the part of speech of the word, and the tags. A few tags you might expect to see are:
* geo: geographical entity
* org: organization
* per: person
* gpe: geopolitical entity
* tim: time indicator
* art: artifact
* eve: event
* nat: natural phenomenon
* O: filler word
```
# display original kaggle data
data = pd.read_csv("ner_dataset.csv", encoding = "ISO-8859-1")
train_sents = open('data/small/train/sentences.txt', 'r').readline()
train_labels = open('data/small/train/labels.txt', 'r').readline()
print('SENTENCE:', train_sents)
print('SENTENCE LABEL:', train_labels)
print('ORIGINAL DATA:\n', data.head(5))
del(data, train_sents, train_labels)
```
## 1.1 Importing the Data
In this part, we will import the preprocessed data and explore it.
```
vocab, tag_map = get_vocab('data/large/words.txt', 'data/large/tags.txt')
t_sentences, t_labels, t_size = get_params(vocab, tag_map, 'data/large/train/sentences.txt', 'data/large/train/labels.txt')
v_sentences, v_labels, v_size = get_params(vocab, tag_map, 'data/large/val/sentences.txt', 'data/large/val/labels.txt')
test_sentences, test_labels, test_size = get_params(vocab, tag_map, 'data/large/test/sentences.txt', 'data/large/test/labels.txt')
```
`vocab` is a dictionary that translates a word string to a unique number. Given a sentence, you can represent it as an array of numbers translating with this dictionary. The dictionary contains a `<PAD>` token.
When training an LSTM using batches, all your input sentences must be the same size. To accomplish this, you set the length of your sentences to a certain number and add the generic `<PAD>` token to fill all the empty spaces.
```
# vocab translates from a word to a unique number
print('vocab["the"]:', vocab["the"])
# Pad token
print('padded token:', vocab['<PAD>'])
```
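The encode-and-pad step described above can be sketched like this. This is a hypothetical helper (the actual preprocessing lives in `utils.get_params`), and the `UNK` fallback here is an assumption:

```python
def encode_and_pad(sentence, vocab, max_len):
    # map each word to its vocab id, falling back to an UNK id for unseen words
    ids = [vocab.get(w, vocab.get("UNK", 0)) for w in sentence.split()]
    ids = ids[:max_len]                              # truncate long sentences
    ids += [vocab["<PAD>"]] * (max_len - len(ids))   # pad short ones
    return ids

toy_vocab = {"<PAD>": 0, "UNK": 1, "the": 9}
print(encode_and_pad("the cat", toy_vocab, 4))  # [9, 1, 0, 0]
```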
The `tag_map` maps each possible tag a word can have to an index. Run the cell below to see the classes you will be predicting. The prefixes in the tags mean:
* I: Token is inside an entity.
* B: Token begins an entity.
```
print(tag_map)
```
So the coding scheme that tags the entities is a minimal one, where B- indicates the first token in a multi-token entity and I- indicates a token in the middle of one. If you had the sentence
**"Sharon flew to Miami on Friday"**
the outputs would look like:
```
Sharon B-per
flew O
to O
Miami B-geo
on O
Friday B-tim
```
your tags would reflect three tokens beginning with B-, since there are no multi-token entities in the sequence. But if you added Sharon's last name to the sentence:
**"Sharon Floyd flew to Miami on Friday"**
```
Sharon B-per
Floyd I-per
flew O
to O
Miami B-geo
on O
Friday B-tim
```
then your tags would change to show first "Sharon" as B-per, and "Floyd" as I-per, where I- indicates an inner token in a multi-token sequence.
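To make the scheme concrete, here is a small illustrative helper (not part of the assignment) that checks whether a tag sequence is consistent with this B-/I-/O scheme:

```python
# Illustrative sketch: an I- tag is only valid immediately after a
# B- or I- tag of the same entity type.
def is_valid_bio(tags):
    prev = 'O'
    for tag in tags:
        if tag.startswith('I-'):
            if prev == 'O' or prev[2:] != tag[2:]:
                return False
        prev = tag
    return True

tags = ["B-per", "I-per", "O", "O", "B-geo", "O", "B-tim"]
print(is_valid_bio(tags))            # True
print(is_valid_bio(["I-per", "O"]))  # False: I- with no preceding B-
```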
```
# Exploring information about the data
print('The number of outputs is', len(tag_map))
# The number of vocabulary tokens (including <PAD>)
g_vocab_size = len(vocab)
print(f"Num of vocabulary words: {g_vocab_size}")
print('The vocab size is', len(vocab))
print('The training size is', t_size)
print('The validation size is', v_size)
print('An example of the first sentence is', t_sentences[0])
print('An example of its corresponding label is', t_labels[0])
```
So you can see that we have already encoded each sentence into a tensor by converting each word into a number. We also have 17 possible classes, as shown in the tag map.
## 1.2 Data generator
In Python, a generator is a function that behaves like an iterator: each call returns the next item. Here is a [link](https://wiki.python.org/moin/Generators) to review Python generators.
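As a minimal refresher (illustrative, not part of the assignment), a generator uses `yield` inside a loop and can run forever; this wrap-around pattern is the same one the NER data generator below uses:

```python
# Minimal sketch of an infinite cycling generator: yield one item at a
# time and wrap around at the end, like the NER batch generator does.
def cycle(items):
    index = 0
    while True:  # infinite loop; the caller decides when to stop
        yield items[index]
        index = (index + 1) % len(items)

g = cycle(['a', 'b', 'c'])
print([next(g) for _ in range(5)])  # ['a', 'b', 'c', 'a', 'b']
```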
In many AI applications it is very useful to have a data generator. You will now implement a data generator for our NER application.
```
def data_generator(batch_size, x, y, pad, shuffle=False, verbose=False):
'''
Input:
batch_size - integer describing the batch size
x - list containing sentences where words are represented as integers
y - list containing tags associated with the sentences
shuffle - Shuffle the data order
pad - an integer representing a pad character
verbose - Print information during runtime
Output:
a tuple containing 2 elements:
X - np.ndarray of dim (batch_size, max_len) of padded sentences
Y - np.ndarray of dim (batch_size, max_len) of tags associated with the sentences in X
'''
# count the number of lines in data_lines
num_lines = len(x)
# create an array with the indexes of data_lines that can be shuffled
lines_index = [*range(num_lines)]
# shuffle the indexes if shuffle is set to True
if shuffle:
rnd.shuffle(lines_index)
index = 0 # tracks current location in x, y
while True:
        buffer_x = [0] * batch_size # Temporary array to store the raw x data for this batch
        buffer_y = [0] * batch_size # Temporary array to store the raw y data for this batch
### START CODE HERE (Replace instances of 'None' with your code) ###
# Copy into the temporal buffers the sentences in x[index : index + batch_size]
# along with their corresponding labels y[index : index + batch_size]
# Find maximum length of sentences in x[index : index + batch_size] for this batch.
# Reset the index if we reach the end of the data set, and shuffle the indexes if needed.
max_len = 0
for i in range(batch_size):
# if the index is greater than or equal to the number of lines in x
if index >= num_lines:
# then reset the index to 0
index = 0
# re-shuffle the indexes if shuffle is set to True
if shuffle:
rnd.shuffle(lines_index)
# The current position is obtained using `lines_index[index]`
# Store the x value at the current position into the buffer_x
buffer_x[i] = x[lines_index[index]]
# Store the y value at the current position into the buffer_y
buffer_y[i] = y[lines_index[index]]
lenx = len(buffer_x[i]) #length of current x[]
if lenx > max_len:
max_len = lenx #max_len tracks longest x[]
# increment index by one
index += 1
# create X,Y, NumPy arrays of size (batch_size, max_len) 'full' of pad value
X = np.full((batch_size, max_len), pad)
Y = np.full((batch_size, max_len), pad)
# copy values from lists to NumPy arrays. Use the buffered values
for i in range(batch_size):
# get the example (sentence as a tensor)
# in `buffer_x` at the `i` index
x_i = buffer_x[i]
# similarly, get the example's labels
# in `buffer_y` at the `i` index
y_i = buffer_y[i]
# Walk through each word in x_i
for j in range(len(x_i)):
# store the word in x_i at position j into X
X[i, j] = x_i[j]
# store the label in y_i at position j into Y
Y[i, j] = y_i[j]
### END CODE HERE ###
if verbose: print("index=", index)
yield((X,Y))
batch_size = 5
mini_sentences = t_sentences[0: 8]
mini_labels = t_labels[0: 8]
dg = data_generator(batch_size, mini_sentences, mini_labels, vocab["<PAD>"], shuffle=False, verbose=True)
X1, Y1 = next(dg)
X2, Y2 = next(dg)
print(Y1.shape, X1.shape, Y2.shape, X2.shape)
print(X1[0][:], "\n", Y1[0][:])
```
**Expected output:**
```
index= 5
index= 2
(5, 30) (5, 30) (5, 30) (5, 30)
[ 0 1 2 3 4 5 6 7 8 9 10 11
12 13 14 9 15 1 16 17 18 19 20 21
35180 35180 35180 35180 35180 35180]
[ 0 0 0 0 0 0 1 0 0 0 0 0
1 0 0 0 0 0 2 0 0 0 0 0
35180 35180 35180 35180 35180 35180]
```
# Part 2: Building the model
You will now implement the model using Google's Trax library. Your model will be able to distinguish the following:
<table>
<tr>
<td>
<img src = 'ner1.png' width="width" height="height" style="width:500px;height:150px;"/>
</td>
</tr>
</table>
The model architecture will be as follows:
<img src = 'ner2.png' width="width" height="height" style="width:600px;height:250px;"/>
Concretely:
* Use the input tensors you built in your data generator
* Feed it into an Embedding layer, to produce more semantic entries
* Feed it into an LSTM layer
* Run the output through a linear layer
* Run the result through a log softmax layer to get the predicted class for each word.
```
def NER(vocab_size=35181, d_model=50, tags=tag_map):
'''
Input:
vocab_size - integer containing the size of the vocabulary
d_model - integer describing the embedding size
Output:
model - a trax serial model
'''
### START CODE HERE (Replace instances of 'None' with your code) ###
model = tl.Serial(
tl.Embedding(vocab_size=vocab_size, d_feature=d_model), # Embedding layer
tl.LSTM(n_units=d_model), # LSTM layer
tl.Dense(n_units=len(tags)), # Dense layer with len(tags) units
tl.LogSoftmax() # LogSoftmax layer
)
### END CODE HERE ###
return model
# initializing your model
model = NER()
# display your model
print(model)
```
**Expected output:**
```
Serial[
Embedding_35181_50
LSTM_50
Dense_17
LogSoftmax
]
```
# Part 3: Train the Model
This section will train your model.
Before you start, you need to create the data generators for training and validation data. It is important that you mask padding in the loss weights of your data, which can be done using the `id_to_mask` argument of `trax.supervised.inputs.add_loss_weights`.
```
from trax.supervised import training
rnd.seed(33)
batch_size = 64
# Create training data, mask pad id=35180 for training.
train_generator = trax.supervised.inputs.add_loss_weights(
data_generator(batch_size, t_sentences, t_labels, vocab['<PAD>'], True),
id_to_mask=vocab['<PAD>'])
# Create validation data, mask pad id=35180 for training.
eval_generator = trax.supervised.inputs.add_loss_weights(
data_generator(batch_size, v_sentences, v_labels, vocab['<PAD>'], True),
id_to_mask=vocab['<PAD>'])
```
### 3.1 Training the model
You will now write a function that takes in your model and trains it.
```
def train_model(NER, train_generator, eval_generator, train_steps=1, output_dir='model'):
'''
Input:
NER - the model you are building
train_generator - The data generator for training examples
eval_generator - The data generator for validation examples,
train_steps - number of training steps
output_dir - folder to save your model
Output:
training_loop - a trax supervised training Loop
'''
### START CODE HERE (Replace instances of 'None' with your code) ###
train_task = training.TrainTask(
train_generator, # A train data generator
loss_layer = tl.CrossEntropyLoss(), # A cross-entropy loss function
optimizer = trax.optimizers.Adam(0.01), # The adam optimizer
)
eval_task = training.EvalTask(
labeled_data = eval_generator, # A labeled data generator
metrics = [tl.CrossEntropyLoss(), tl.Accuracy()], # Evaluate with cross-entropy loss and accuracy
n_eval_batches = 10 # Number of batches to use on each evaluation
)
training_loop = training.Loop(
NER, # A model to train
train_task, # A train task
eval_task = eval_task, # The evaluation task
output_dir = output_dir) # The output directory
# Train with train_steps
training_loop.run(n_steps = train_steps)
### END CODE HERE ###
return training_loop
train_steps = 100 # In coursera we can only train 100 steps
!rm -f 'model/model.pkl.gz' # Remove old model.pkl if it exists
# Train the model
training_loop = train_model(NER(), train_generator, eval_generator, train_steps)
```
**Expected output (Approximately)**
```
...
Step 1: train CrossEntropyLoss | 2.94375849
Step 1: eval CrossEntropyLoss | 1.93172036
Step 1: eval Accuracy | 0.78727312
Step 100: train CrossEntropyLoss | 0.57727730
Step 100: eval CrossEntropyLoss | 0.36356260
Step 100: eval Accuracy | 0.90943187
...
```
This value may change between executions, but it should be around 90% accuracy on the training and validation sets after 100 training steps.
```
# loading in a pretrained model..
model = NER()
model.init(trax.shapes.ShapeDtype((1, 1), dtype=np.int32))
# Load the pretrained model
model.init_from_file('model.pkl.gz', weights_only=True)
```
# Part 4: Compute Accuracy
You will now evaluate on the test set. Previously, you saw the accuracy on the training set and the validation (noted as eval) set. To get a good evaluation, you will need to create a mask to avoid counting the padding tokens when computing the accuracy.
```
# Example of a comparison on an array
a = np.array([1, 2, 3, 4])
a == 2
# create the evaluation inputs
x, y = next(data_generator(len(test_sentences), test_sentences, test_labels, vocab['<PAD>']))
print("input shapes", x.shape, y.shape)
# sample prediction
tmp_pred = model(x)
print(type(tmp_pred))
print(f"tmp_pred has shape: {tmp_pred.shape}")
```
Note that the model's prediction has 3 axes:
- the number of examples
- the number of words in each example (padded to be as long as the longest sentence in the batch)
- the number of possible targets (the 17 named entity tags).
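A tiny illustrative example (made-up numbers, plain NumPy) of how `np.argmax` over the last axis collapses the class dimension into one predicted tag index per word:

```python
import numpy as np

# Fake "log-probability" tensor with shape
# (num examples, seq length, num classes) = (2, 3, 4) -- illustrative values.
log_probs = np.array([[[0.1, 0.7, 0.1, 0.1],
                       [0.6, 0.2, 0.1, 0.1],
                       [0.1, 0.1, 0.1, 0.7]],
                      [[0.9, 0.0, 0.0, 0.1],
                       [0.1, 0.1, 0.7, 0.1],
                       [0.2, 0.5, 0.2, 0.1]]])
# argmax over the last axis picks the highest-scoring tag per word
preds = np.argmax(log_probs, axis=2)
print(preds.shape)  # (2, 3): one tag index per word per example
print(preds)        # [[1 0 3] [0 2 1]]
```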
```
def evaluate_prediction(pred, labels, pad):
"""
Inputs:
pred: prediction array with shape
(num examples, max sentence length in batch, num of classes)
labels: array of size (batch_size, seq_len)
pad: integer representing pad character
Outputs:
accuracy: float
"""
### START CODE HERE (Replace instances of 'None' with your code) ###
## step 1 ##
outputs = np.argmax(pred, axis=2)
print("outputs shape:", outputs.shape)
## step 2 ##
mask = (labels != pad)
print("mask shape:", mask.shape, "mask[0][20:30]:", mask[0][20:30])
## step 3 ##
accuracy = np.sum(outputs==labels) / float(np.sum(mask))
### END CODE HERE ###
return accuracy
accuracy = evaluate_prediction(model(x), y, vocab['<PAD>'])
print("accuracy: ", accuracy)
```
**Expected output (Approximately)**
```
outputs shape: (7194, 70)
mask shape: (7194, 70) mask[0][20:30]: [ True True True False False False False False False False]
accuracy: 0.9543761281155191
```
# Part 5: Testing with your own sentence
Below, you can test it out with your own sentence!
```
# This is the function you will be using to test your own sentence.
def predict(sentence, model, vocab, tag_map):
s = [vocab[token] if token in vocab else vocab['UNK'] for token in sentence.split(' ')]
batch_data = np.ones((1, len(s)))
batch_data[0][:] = s
sentence = np.array(batch_data).astype(int)
output = model(sentence)
outputs = np.argmax(output, axis=2)
labels = list(tag_map.keys())
pred = []
for i in range(len(outputs[0])):
idx = outputs[0][i]
pred_label = labels[idx]
pred.append(pred_label)
return pred
# Try the output for the introduction example
#sentence = "Many French citizens are going to visit Morocco for summer"
#sentence = "Sharon Floyd flew to Miami last Friday"
# New york times news:
sentence = "Peter Navarro, the White House director of trade and manufacturing policy of U.S, said in an interview on Sunday morning that the White House was working to prepare for the possibility of a second wave of the coronavirus in the fall, though he said it wouldn’t necessarily come"
s = [vocab[token] if token in vocab else vocab['UNK'] for token in sentence.split(' ')]
predictions = predict(sentence, model, vocab, tag_map)
for x,y in zip(sentence.split(' '), predictions):
if y != 'O':
print(x,y)
```
**Expected Results**
```
Peter B-per
Navarro, I-per
White B-org
House I-org
Sunday B-tim
morning I-tim
White B-org
House I-org
coronavirus B-tim
fall, B-tim
```
**Authors:** Jozef Hanč, Martina Hančová <br>
[Faculty of Science](https://www.upjs.sk/en/faculty-of-science/?prefferedLang=EN) *P. J. Šafárik University in Košice, Slovakia* <br>
email: [jozef.hanc@upjs.sk](mailto:jozef.hanc@upjs.sk)
***
**<font size=6 color=brown> Introduction</font>**
**<font size=4> Scholarly literature on Jupyter $-$ basic descriptive statistics</font>**
<font size=4> Computational tool: </font> **<font size=4> Scientific Python - Pandas </font>**
# Data
Numbers of papers in four scholarly databases
```
import pandas as pd

# load data as dataframe df
file = '../Data/00_Scholarly-literature.xlsx'
filename = file.rsplit('.', 1)[0]
df = pd.read_excel(file)
df
# dataframe columns
cols = list(df.columns)
print(cols)
```
# Numerical summary
```
# multi-index in table
multidx = ['database', 'keywords in search']
dfi = df.set_index(multidx).sort_index()
dfi
# latex code for table
subs = {'toprule':'hline', 'midrule':'hline', 'bottomrule':'hline'}

def replace_all(text, substitutions):
    # apply each textual substitution in turn
    for old, new in substitutions.items():
        text = text.replace(old, new)
    return text
latex_code = dfi.to_latex()
latex_code = replace_all(latex_code, subs)
print(latex_code)
```
# Graphical summary
```
# transposing the table and replacing zeros with nan values
import numpy as np

dff = dfi.transpose().replace({0:np.nan})
dff
```
## Graphical parameters
```
%matplotlib inline
import matplotlib.pyplot as plt
from itertools import product
name = 'ScholarLiterature.png'
typ = 'bar'
resolution = 300
font ='large'
size = (10,10)
legend_title = 'Keywords in database search (log scale y axis)'
path = '/media/sf_D_DRIVE/Dropbox/00 publikacie/Didaktika/2019Jupyter/clanok/figures/'
factor = [1.08, 1.04, 1.04, 1.03]
```
## Plots
```
# initial plot setup
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=size)
position = list(product([0,1], repeat=2))
databases = dff.columns.levels[0]
f = {databases[i]:factor[i] for i in range(len(factor))}
# generate plots for databases
for database, pos in zip(databases, position):
g = dff[database].plot(kind = typ, ax=axes[pos],legend=False,fontsize=font, rot=0)
g.set_title(database)
if database == databases[0]:
handles, labels = g.get_legend_handles_labels()
for p in g.patches:
if p.get_height()>0:
x, y = p.get_x(), p.get_height()
g.annotate(str(y)[:-2], (x, y*f[database]), fontsize=10)
g.set_yscale('log')
# legend for final figure
legend = fig.legend(handles, labels,
bbox_to_anchor=(0.047,1,0.935,0.1),
loc="lower center", mode="expand",
borderaxespad=0, ncol=len(databases),
title = legend_title,
fontsize =font,
title_fontsize =font
)
legend.get_frame().set_edgecolor('black')
legend.get_frame().set_facecolor('lightgrey')
# tight layout - smallest spaces among subplots
plt.tight_layout()
#save figure
#fig.savefig(name, bbox_extra_artists=(legend,), bbox_inches='tight', dpi=resolution)
#fig.savefig(path+name, bbox_extra_artists=(legend,), bbox_inches='tight', dpi=resolution)
```
---
layout: page
title: Confidence Intervals
nav_order: 9
---
[<img src="./colab_favicon_small.png" style="float: right;">](https://colab.research.google.com/github/icd-ufmg/icd-ufmg.github.io/blob/master/_lessons/09-ics.ipynb)
# Confidence Intervals
{: .no_toc .mb-2 }
A core concept for statistical research
{: .fs-6 .fw-300 }
{: .no_toc .text-delta }
Expected Outcomes
1. Understand how the sampling distribution supports inference
1. Use and understand CIs through the central limit theorem
1. Use and understand CIs through the bootstrap
1. How the two connect
---
**Contents**
1. TOC
{:toc}
---
```
# -*- coding: utf8
from IPython.display import HTML
from matplotlib import animation
from scipy import stats as ss
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
plt.style.use('seaborn-colorblind')
plt.rcParams['figure.figsize'] = (16, 10)
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['legend.fontsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['lines.linewidth'] = 4
plt.ion()
def despine(ax=None):
if ax is None:
ax = plt.gca()
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# Only show ticks on the left and bottom spines
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
```
## Introduction
Let us explore the idea of confidence intervals. First, recall the central limit theorem, which says: let $X_1, ..., X_n$ be random variables, all sampled from the same population with finite mean $\mu$ and finite standard deviation $\sigma$. Moreover, each variable is generated independently of the others, and all are identically distributed. When $n$ is large,
$$\frac{1}{n}(X_1 + \cdots + X_n)$$
is approximately distributed as a Normal with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$:
$$\frac{1}{n}(X_1 + \cdots + X_n) \sim Normal(\mu, \sigma/\sqrt{n})$$.
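A quick numerical check of this statement (an illustrative sketch; the exponential population, the seed, and the sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000          # size of each sample
n_samples = 2000  # how many sample means we draw

# Population: exponential with mean 1 and std 1 (clearly non-normal)
means = rng.exponential(scale=1.0, size=(n_samples, n)).mean(axis=1)

# CLT prediction: means ~ Normal(mu=1, sigma=1/sqrt(n))
print(means.mean())       # should be close to 1
print(means.std(ddof=1))  # should be close to 1/sqrt(1000) ~ 0.0316
```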
## Sampling distribution and Confidence Intervals
The distribution of the values a statistic takes across samples is called the *sampling distribution* of that statistic. It plays a central role, because it is by understanding it that we will estimate how much confidence we have in a statistic computed from a sample. In the example above, each $X_i$ is a sample and $X_i/n$ is that sample's mean. So $\frac{1}{n}(X_1 + \cdots + X_n)$ is the sampling distribution of the means!
The main thing to understand here is that if we know the sampling distribution, we know how far the statistic computed from a sample typically is from the one computed for the population. Knowing this, we can compute a margin of error for the estimate made from the sample; that estimate will be our confidence interval.
Let us start with a case where we know the distribution of the population.
## Coin Example (a Case Where We Know the Population!)
**It is important to say that for a good while this notebook will not compute CIs; pay attention to the flow of ideas.**
For simplicity, we will use a coin-tossing example. That is, we will explore the probability of a coin being fair using statistics and sampling (not mutually exclusive concepts).
Recall that we have a sample space:
\begin{align}
\mathcal{S} &= \{h, t\} \\
P(h) &= 0.5 \\
P(t) &= 0.5
\end{align}
In the coin case it is simple to know the **population distribution**. The number of successes in tosses of a coin follows a Binomial distribution, which looks quite similar to the Normal. The PMF of a Binomial is:
$$P(k; p, n) = \binom{n}{k} p^k (1-p)^{n-k}$$
where $n$ is the number of tosses and $k$ the number of heads.
```
p = 0.5 # probability of heads/tails
k = 30 # we have 30 tosses
x = np.arange(0, 31) # values on the x axis
prob_binom = ss.distributions.binom.pmf(x, k, p)
plt.stem(x, prob_binom)
plt.xlabel('Num Caras - x')
plt.ylabel('P(sair x caras)')
despine()
```
Using the `ppf` function we can see where $95\%$ of tosses of 30 coins fall. For this, we take $2.5\%$ on the left and $2.5\%$ on the right.
The `ppf` is the inverse of the CDF: given a percentile, it returns the value at that percentile, not the percentile of a given value.
```
p = 0.5 # probability of heads/tails
k = 30 # we have 30 tosses
x = np.arange(0, 31) # values on the x axis
prob_binom = ss.distributions.binom.cdf(x, k, p)
plt.step(x, prob_binom)
plt.xlabel('Num Caras - x')
plt.ylabel('P(X <= x)')
plt.title('CDF da Binomial')
despine()
# 2.5% of the data: P[X <= 10] = 0.025
ss.distributions.binom.ppf(0.025, k, p)
print(1-0.025)
# 2.5% of the data above: P[X > 20] = 0.025
ss.distributions.binom.ppf(1-0.025, k, p)
```
**Case 1: when we know the population, the question is easy to answer.**
$95\%$ of tosses of 30 fair coins should fall between 10 and 20. We just computed this above using the inverse of the CDF, the `ppf`.
```
p = 0.5 # probability of heads/tails
k = 30 # we have 30 tosses
x = np.arange(0, 31) # values on the x axis
prob_binom = ss.distributions.binom.pmf(x, k, p)
plt.stem(x, prob_binom)
plt.xlabel('Num Caras - x')
plt.ylabel('P(sair x caras)')
despine()
x2 = np.arange(10, 21) # values on the x axis
prob_binom = ss.distributions.binom.pmf(x2, k, p)
plt.fill_between(x2, prob_binom, color='r', alpha=0.5)
```
## Simulating
Now let us assume we do not know any of this. That is, we know nothing about ppf, pdf, pmf, cdf, etc. But we do know how to toss coins. Can we estimate the same effect?!
```
# Tossing a single coin
np.random.randint(0, 2)
# Tossing 30 coins
np.random.randint(0, 2, size=30)
NUM_SIMULACOES = 100000
resultados = []
for i in range(NUM_SIMULACOES):
    jogadas = np.random.randint(0, 2, size=30) # toss 30 coins
    n_caras = (jogadas == 1).sum() # count how many were == 1, heads
resultados.append(n_caras)
bins = np.arange(0, 31) + 0.5
plt.hist(resultados, bins=bins, edgecolor='k');
despine()
plt.xlabel('Numero de Caras')
plt.ylabel('Fração de Casos')
```
**Case 2: when we know how to generate data that follows the population, the question is easy to answer.**
We can verify the result empirically with the CDF. We use `side='left'` because, by default, statsmodels computes `P[X < x]` rather than `P[X <= x]`; `side='left'` corrects this.
```
from statsmodels.distributions.empirical_distribution import ECDF
ecdf = ECDF(resultados, side='left')
plt.plot(ecdf.x, ecdf.y)
plt.xlabel('Num caras')
plt.ylabel('P[X <= x]')
despine()
np.percentile(resultados, 2.5)
np.percentile(resultados, 97.5)
ecdf(10)
ecdf(21)
```
So far I have been assuming a lot:
1. I know the population.
1. I know how to sample uniformly from the population.
And what about when I have only 1 sample?!
1. samples = []
1. for each sample of size 100:
1. samples[i] = np.mean(sample)
1. plt.hist(samples) --> normal
1. I am working with one of them: samples[10]
## When we do not know much
**Pay attention from here on**
Now suppose I do not even know how to toss a coin. Pop the CLT off the stack.
Remember that the Binomial distribution captures the **mean** number of heads expected in _n_ tosses. Note that, by summing each of my experiments, I am going right back to the **CLT**. The sampling distribution here is the mean number of heads per 30 tosses. Thus, we can see the approximation below.
```
bins = np.arange(0, 31) + 0.5
plt.hist(resultados, bins=bins, edgecolor='k', density=True);
plt.xlabel('Numero de Caras')
plt.ylabel('Fração de Casos')
x = np.linspace(0, 31, 1000)
y = ss.distributions.norm.pdf(loc=np.mean(resultados),
                             scale=np.std(resultados, ddof=1), ## ddof=1 divides by n-1
x=x)
plt.plot(x, y, label='Aproximação Normal')
plt.legend()
despine()
```
**What does the plot above mean?**
1. Each experiment was n tosses, and we took the mean of the n.
1. We have the variance of the means, that is, the variance of the estimator (recall past lectures).
1. Final result --> Normal!
Observe how, with a single round of 30 coin tosses, I arrive at a normal very close to the previous one!
Each round is a vector of booleans.
```
np.random.randint(0, 2, size=30)
```
The mean is the fraction of heads
```
np.random.randint(0, 2, size=30).mean()
```
And multiplying by 30 gives the number of heads, since there were 30 tosses.
```
np.random.randint(0, 2, size=30).mean() * 30
```
Putting it all together, note that I am computing the standard deviation of the estimator! For this, use the variance of the estimator:
$Var(\hat{\mu}) = s^2 / n$
$Std(\hat{\mu}) = s / \sqrt{n}$
```
uma_vez = np.random.randint(0, 2, size=30)
mean_uma = np.mean(uma_vez) * 30
std_uma = np.std(uma_vez, ddof=1) * 30 # the standard deviation is in the same unit as the mean
std_est = std_uma / np.sqrt(30)
```
Observe a normal very close to the previous one, from a single round!
```
x = np.linspace(0, 31, 1000)
y = ss.distributions.norm.pdf(loc=mean_uma,
scale=std_est,
x=x)
plt.plot(x, y, label='Aproximação Normal com Uma Amostra')
plt.legend()
despine()
```
Observe that when we take several samples there is variability in the estimated normal. Let us understand this theoretically.
```
for _ in range(30):
uma_vez = np.random.randint(0, 2, size=30)
mean_uma = np.mean(uma_vez) * 30
std_uma = np.std(uma_vez, ddof=1) * 30
std_est = std_uma / np.sqrt(30)
x = np.linspace(0, 31, 1000)
y = ss.distributions.norm.pdf(loc=mean_uma,
scale=std_est,
x=x)
plt.plot(x, y)
despine()
```
## CIs with the Normal
Consider the case where I have only **ONE** sample. Here I no longer have a sampling distribution, since I collected data only once. Fortunately, I have something to help me: the CLT.
When the CLT holds, we can compute the confidence interval using a Normal. This is the common basis we motivated a few lectures ago. Let us play a bit with shifting/scaling our Normal. Knowing that:
$$\frac{1}{n}(X_1 + \cdots + X_n) \sim Normal(\mu, \sigma/\sqrt{n}).$$
Let us define:
$$Z_i = X_i - \mu$$
Note that we are just shifting everything to the left by $\mu$. The expected value (mean) of a RV $X$, $E[X]$, minus a constant $c$ is simply $E[X]-c$. Moreover, the variance does not change. See the properties on Wikipedia.
Thus:
$$\frac{1}{n}\sum_i Z_i = \frac{1}{n}\sum_i X_i - \mu \sim Normal(0, \sigma/\sqrt{n})$$
Now, let us divide each $Z_i$ by $\sigma/\sqrt{n}$. In this case, the standard deviation and the mean are divided by the same value:
$$\frac{1}{n}\sum_i Z_i = \frac{1}{n}\sum_i \frac{X_i - \mu}{\sigma/\sqrt{n}} \sim Normal(0, 1)$$
This means that **the mean** (note the sum and division by n) of the **sampling distributions** $Z_i$ follows a $Normal(0, 1)$. Note that we are assuming the CLT is in force. Sometimes (when we break IID or finite variance) it does not hold, but we will ignore such cases. Great, so what? **It does not matter what the initial population is; that is the beauty of the CLT!**
So, even without knowing the true population mean $\mu$, I can play with the equation above. First let us focus on the mean $\frac{1}{n}\sum_i Z_i$, and call this distribution simply $Z$. Knowing that a population follows a Normal, I can easily find where 95\% of the cases fall. This is similar to the coin and Binomial example above; however, note that I assume nothing about the population of the data.
A common way to compute such intervals is using tables or a figure like the one shown below. Nowadays, we can use the `ppf` function. It indicates that 95% of the cases lie between $-1.96$ and $1.96$.

```
ss.norm.ppf(0.975)
ss.norm.ppf(1-0.975)
```
Now I just need to go back to $X$. For this, we will use the unbiased estimator of $\sigma$, the sample standard deviation:
$$s = \sqrt{\frac{\sum_i ({x_i - \bar{x}})^2}{n-1}}$$
Setting $z=1.96$ and $P(-z \le Z \le z) = 0.95$:
\begin{align}
0.95 & = P(-z \le Z \le z)=P \left(-1.96 \le \frac {\bar X-\mu}{\sigma/\sqrt{n}} \le 1.96 \right) \\
& = P \left( \bar X - 1.96 \frac \sigma {\sqrt{n}} \le \mu \le \bar X + 1.96 \frac \sigma {\sqrt{n}}\right).
\end{align}
Substituting $\sigma$ with $s$: the probability that the population mean lies within $\bar{X} \pm 1.96 \frac{s}{\sqrt{n}}$ is 95%.
1. https://en.wikipedia.org/wiki/Variance#Properties
1. https://en.wikipedia.org/wiki/Expected_value#Basic_properties
## Computing a CI from the data
```
# play with this value a bit; observe the change in the cells below.
TAMANHO_AMOSTRA = 100
resultados = []
for i in range(TAMANHO_AMOSTRA):
    jogadas = np.random.randint(0, 2, size=30) # toss 30 coins
    n_caras = (jogadas == 1).sum() # count how many were == 1, heads
resultados.append(n_caras)
s = np.std(resultados, ddof=1)
s
s_over_n = s / np.sqrt(len(resultados))
s_over_n
mean = np.mean(resultados)
mean
mean - 1.96 * s_over_n
mean + 1.96 * s_over_n
# up to here.
```
## Understanding a CI
Unlike when we have a population distribution, we must interpret the CI differently. Note that:
1. **We are not computing where 95% of the population cases fall.** Just compare with the values above.
1. **We are not computing where 95% of the means fall.** Just compare with the values above.
We are solving:
$$P(-z \le Z \le z)=P \left(-1.96 \le \frac {\bar X-\mu}{\sigma/\sqrt{n}} \le 1.96 \right)$$
And arriving at:
$$P \left( \bar X - 1.96 \frac \sigma {\sqrt{n}} \le \mu \le \bar X + 1.96 \frac \sigma {\sqrt{n}}\right)$$
That is:
**THE PROBABILITY THAT THE TRUE MEAN $\mu$ FALLS WITHIN $\bar X \pm 1.96 \frac \sigma {\sqrt{n}}$ IS 95%**
or
**I HAVE 95% CONFIDENCE THAT THE MEAN IS WITHIN $\bar X \pm 1.96 \frac \sigma {\sqrt{n}}$**
or
**95% OF SAMPLES OF SIZE N WILL CONTAIN THE TRUE MEAN**
```
# Building a CI
(mean - 1.96 * s_over_n, mean + 1.96 * s_over_n)
```
**There is a 95% chance that the mean falls in the interval above. The interval does not include 22, so we can take that value as unexpected.**
Observe that there is a chance of making a mistake. What is it?
## The most common situation in real life
Normally we have only *one sample* of the population, so we do not know the sampling distribution. But we would like to use our sample to estimate where the statistic lies for the population.
Example: we want to estimate the proportion of people who will like the product (the statistic) among all users (the population) from the proportion of people who liked the product (the same statistic) in a test with 100 people (the sample).
Note that if we know how the statistic varies in the sampling distribution (e.g., 2 points up or down covers 99% of cases) and we have the statistic computed for the sample, we can estimate a range of values where we believe the statistic lies for the population _with 99% confidence_.
### The central idea we will use
To illustrate the case above, let us explore some real Billboard data.
The main idea we will use, in a technique called *bootstrapping*, is that _using the sample as a stand-in for the population, and simulating sampling through resampling with replacement, gives an accurate estimate of the variation in the sampling distribution_.
To implement the bootstrap, we will write the `bootstrap_median` function below. It uses `df.sample`, which draws a random sample of n elements from the df; its behavior is similar to `np.random.choice`. Note that we are bootstrapping the median; we could do the same for other central measures.
1. Given `n` and `size`
2. Generate `n` samples of size `size` with replacement
3. Take the median (it could be the mean or any other central measure)
4. Return the new samples and inspect their distribution
```
def bootstrap_median(df, n=5000, size=None):
if size is None:
size = len(df)
values = np.zeros(n)
for i in range(n):
sample = df.sample(size, replace=True)
values[i] = sample.median()
return values
# 1. reading the data
df = pd.read_csv('https://media.githubusercontent.com/media/icd-ufmg/material/master/aulas/09-ICs/billboard_2000_2018_spotify_lyrics.csv',
encoding='iso-8859-1', na_values='unknown')
# 2. dropping NA values
df = df.dropna()
df = df[['title', 'main_artist', 'duration_ms']]
# 3. converting to minutes
df['duration_m'] = df['duration_ms'] / (60*1000)
# 4. deleting the old column
del df['duration_ms']
df.head(5)
```
Imagine for now that the data we have, only 100 Billboard songs, is complete. I know there is more in the `df`, but we need small data here to run the notebook.
```
df = df.sample(100)
plt.hist(df['duration_m'], bins=30, edgecolor='k')
plt.xlabel('Duração em minutos')
plt.ylabel('P[mediana]')
despine()
```
The median was:
```
df['duration_m'].median()
```
If we compute the median for three samples with as many elements as our **fake** population, we get 3 different results. The idea of the bootstrap is to use samples of the original sample as different views of the population.
```
for _ in range(3):
    print(df.sample(len(df), replace=True).median())
    print()
```
If we do this many times, we can see how this variation behaves. In particular, let's do it 5000 times. Note that the code below is essentially the same as the `bootstrap_median` function above.
```
S = len(df)
N = 5000
values = np.zeros(N)
for i in range(N):
    sample = df.sample(S, replace=True)
    values[i] = sample.median()
print(values)

plt.hist(values, bins=30, edgecolor='k')
plt.xlabel('Median of resamples of size 100')
plt.ylabel('Frequency')
despine()
```
Using `np.percentile`, we can find the range that contains 95% of the synthetic medians.
```
np.percentile(values, 2.5)
np.percentile(values, 97.5)
```
We have just built a **confidence interval (CI)**.
Step by step:
* We treat the sample $A$, of size $n$, as a surrogate for the population.
* We repeat the following process $b$ times: we draw a resample of size $n$ by picking elements of $A$ at random, replacing each element after every draw.
* We compute the statistic of interest $e$ (mean, median, standard deviation, whatever) for each of the $b$ resamples.
As a result, we know how the statistic $e$ varies across a simulation of $b$ samplings. We can use the percentiles to build a CI: if we find $e_l$ and $e_h$ such that $P[E \le e_l] = 0.025$ and $P[E > e_h] = 0.025$ (equivalently, $P[E \le e_h] = 0.975$), our CI is $(e_l, e_h)$.
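The recipe above can be sketched with plain numpy. This is a minimal illustration, not the notebook's `bootstrap_median`; the `data` array here is synthetic and exists only so the sketch runs on its own:

```python
import numpy as np

def percentile_bootstrap_ci(data, stat=np.median, b=5000, alpha=0.05, seed=42):
    """Percentile bootstrap: resample with replacement b times,
    compute the statistic on each resample, take the percentiles."""
    rng = np.random.default_rng(seed)
    n = len(data)
    boots = np.array([stat(rng.choice(data, size=n, replace=True))
                      for _ in range(b)])
    lo = np.percentile(boots, 100 * (alpha / 2))
    hi = np.percentile(boots, 100 * (1 - alpha / 2))
    return lo, hi

# synthetic data centered at 4, standing in for song durations in minutes
data = np.random.default_rng(0).normal(loc=4.0, scale=1.0, size=100)
lo, hi = percentile_bootstrap_ci(data)
print(lo, hi)  # an interval around the true median of 4.0
```

The same function works for the mean or standard deviation by passing a different `stat`.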
1. We can use the bootstrap for non-extreme measures of central tendency.
1. The bootstrap fails when the data have heavy tails.
Below is a bit of animation code; feel free to ignore it!
**Ignore from here, if you wish**
```
def update_hist(num, data):
    plt.cla()
    plt.hist(data[0:100 * (num+1)], bins=20, edgecolor='k')
    plt.xlabel('Median of resamples of size 100')
    plt.ylabel('Frequency')
    despine()
values = bootstrap_median(df)
fig = plt.figure()
ani = animation.FuncAnimation(fig, update_hist, 30, fargs=(values, ))
HTML(ani.to_html5_video())
```
**Up to here!**
## Playing with the bootstrap
What if the sample were much smaller? It seems we run into some problems! We estimate the histogram again; what is the reason?
```
S = 2
N = 5000
values = bootstrap_median(df, size=S, n=N)
plt.hist(values, bins=20, edgecolor='k');
plt.xlabel('Median of resamples of size 2')
plt.ylabel('Frequency')
despine()
```
In general, we should draw resamples close to the size of the original sample, or at least reasonably large. Note that a few dozen elements, in this case, already behaves much like resamples of the full size of 100.
```
S = 30
N = 5000
values = bootstrap_median(df, size=S, n=N)
plt.hist(values, bins=20, edgecolor='k');
plt.xlabel('Median of resamples of size 30')
plt.ylabel('Frequency')
despine()
```
A new problem: what if we draw only a few resamples, even with a reasonable `S`?!
```
S = len(df)
N = 10
values = bootstrap_median(df, size=S, n=N)
plt.hist(values, bins=10, edgecolor='k');
plt.xlabel('Median of 10 resamples of size 100')
plt.ylabel('Frequency')
despine()
```
We could also generate 2000 bootstraps and observe which ones fail to capture the median; the code for that takes a bit of numpy magic to run quickly.
## Comparing the Bootstrap with the Theoretical Case
```
# back to the coins example
TAMANHO_AMOSTRA = 100
resultados = []
for i in range(TAMANHO_AMOSTRA):
    jogadas = np.random.randint(0, 2, size=30)  # flip 30 coins
    n_caras = (jogadas == 1).sum()              # count how many were == 1, i.e. heads
    resultados.append(n_caras)
resultados = np.array(resultados)

def bootstrap_mean(x, n=5000, size=None):
    if size is None:
        size = len(x)
    values = np.zeros(n)
    for i in range(n):
        sample = np.random.choice(x, size=size, replace=True)
        values[i] = sample.mean()
    return values

boot_samples = bootstrap_mean(resultados)
np.percentile(boot_samples, 2.5)
np.percentile(boot_samples, 97.5)

s = np.std(resultados, ddof=1)
s_over_n = s / np.sqrt(len(resultados))
mean = np.mean(resultados)
(mean - 1.96 * s_over_n, mean + 1.96 * s_over_n)
plt.hist(boot_samples, edgecolor='k');
```
---
The InfiniteHMM class can read a GROMACS trajectory file and convert the xy coordinates to radial coordinates with respect to the pore centers. This is all done in the `__init__` function. This notebook outlines how the radial coordinates are calculated.
```
import hdphmm
import mdtraj as md
```
First, let's load the trajectory.
```
traj = '5ms_nojump.xtc'
gro = 'em.gro'
first_frame = 7000
t = md.load(traj, top=gro)[first_frame:]
```
Now we will calculate the center of mass of the residue whose coordinates are being tracked
```
from hdphmm.utils import physical_properties
res = 'MET'
residue = physical_properties.Residue(res)
ndx = [a.index for a in t.topology.atoms if a.residue.name == res]
names = [a.name for a in t.topology.atoms if a.residue.name == res][:residue.natoms] # names of atoms in one residue
mass = [residue.mass[x] for x in names]
com = physical_properties.center_of_mass(t.xyz[:, ndx, :], mass)
```
Now we need to locate the pore centers. The pores are not perfectly straight, so we create a spline that runs through them and is a function of z.
```
monomer = physical_properties.Residue('NAcarb11V')
pore_atoms = [a.index for a in t.topology.atoms if a.name in monomer.pore_defining_atoms and
              a.residue.name in monomer.residues]
spline_params = {'npts_spline': 10, 'save': True, 'savename': 'test_spline.pl'}
spline = physical_properties.trace_pores(t.xyz[:, pore_atoms, :], t.unitcell_vectors,
                                         spline_params['npts_spline'], save=spline_params['save'],
                                         savename=spline_params['savename'])[0]
```
The physical_properties module can write out the coordinates of the spline in .gro format if you'd like to compare it to the actual system.
```
physical_properties.write_spline_coordinates(spline)
```
Now we just need to calculate the distance between each solute center of mass and the closest spline
```
import numpy as np
import tqdm
nres = com.shape[1]
radial_distances = np.zeros([t.n_frames, nres])
npores = 4
for f in tqdm.tqdm(range(t.n_frames), unit=' Frames'):
    d = np.zeros([npores, nres])
    for p in range(npores):
        # calculate radial distance between each solute and all splines
        d[p, :] = physical_properties.radial_distance_spline(spline[f, p, ...], com[f, ...], t.unitcell_vectors[f, ...])
    # record distance to the closest pore center
    radial_distances[f, :] = d[np.argmin(d, axis=0), np.arange(nres)]
radial_distances.shape
import matplotlib.pyplot as plt
traj_no = 2
rd = radial_distances[:, traj_no]
diff = rd[1:] - rd[:-1]
fig, ax = plt.subplots(2, 1, figsize=(12, 5))
ax[0].plot(rd)
ax[1].plot(diff)
plt.show()
```
---
## Import modules. Remember it is always good practice to do this at the beginning of a notebook.
If you don't have seaborn, you can install it with `conda install seaborn`
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
### Use notebook magic to render matplotlib figures inline with notebook cells.
```
%matplotlib inline
```
Let's begin!
We'll use pandas read_csv function to read in our data.
```
df = pd.read_csv('HCEPDB/HCEPDB_moldata.csv')
```
Let's take a look at the data to make sure it looks right with head, and then look at the shape of the dataframe
```
df.head()
df.shape
```
That's a lot of data. Let's take a random subsampling of the full dataframe to make playing with the data faster. This is something you may consider doing when you have large datasets and want to do data exploration. Pandas has a built-in method called sample that will do this for you.
```
df_sample = df.sample(frac=0.01)
df_sample.head()
df_sample.shape
```
We can use this to try some of our plotting functions. We will start with two variables in the dataset, PCE and HOMO energy.
There are multiple packages you can use for plotting. Pandas has some built-in object-oriented methods we can try first.
```
df.plot.scatter('pce', 'e_homo_alpha')
```
Oops! We used the wrong dataset. The full dataset took a while to plot. We can use %%timeit to see how long that took.
```
%%timeit -n 1 -r 1
df.plot.scatter('pce', 'e_homo_alpha')
```
Note that %%timeit repeats the function call a number of times and averages it. You can alter this behavior by changing the defaults. Let's see how long it takes to plot our subsample:
```
%%timeit -n 1 -r 7
df_sample.plot.scatter('pce', 'e_homo_alpha')
```
That's a lot quicker! It doesn't scale perfectly with data size (plotting took about 1/5 of the time with 1/10 of the data), likely due to code overhead.
But the default plot settings are pretty ugly. We can take advantage of the object-oriented nature of pandas plots to modify the output.
```
p_v_hplot = df_sample.plot.scatter('pce', 'e_homo_alpha')
p_v_hplot.set_xlabel('PCE')
p_v_hplot.set_ylabel('HOMO')
p_v_hplot.set_title('Photoconversion Efficiency vs. HOMO energy')
```
That's a bit better, but we can still make improvements, like adding gridlines, making the y-axis label more accurate, increasing the size, and adjusting the aspect ratio.
```
p_v_hplot = df_sample.plot.scatter('pce', 'e_homo_alpha', figsize=(6, 6))
p_v_hplot.set_xlabel('PCE')
p_v_hplot.set_ylabel('$E_{HOMO}$')
p_v_hplot.set_title('Photoconversion Efficiency vs. HOMO energy')
p_v_hplot.grid()
```
Note that we used LaTeX notation to create the subscript text. LaTeX can be used to generate mathematical expressions, symbols, and Greek letters for figures. One reference guide is included here: https://www.overleaf.com/learn/latex/Subscripts_and_superscripts
Take a moment to try to figure out the following using the pandas documentation:
* How to change the x range to be 2 to 10
* How to change the y range to be -6 to 2
* How to change the font size to 18
* how to change the colors and transparency.
You can access the documentation [here](https://pandas.pydata.org/pandas-docs/stable/visualization.html).
### An aside: Matplotlib can also be used to plot datasets in a similar fashion
Pandas visualization toolbox is a convenience feature built on top of Matplotlib.
```
p_v_hplot = plt.figure(figsize=(6, 6))
p_v_hplot.subplots_adjust(hspace=0.5)
ax1, ax2 = p_v_hplot.add_subplot(211), p_v_hplot.add_subplot(212)
ax1.scatter(df_sample['pce'], df_sample['e_homo_alpha'], alpha=0.1)
ax2.scatter(df_sample['pce'], df_sample['e_gap_alpha'], alpha=0.1)
ax1.set_xlabel('PCE')
ax1.set_ylabel('$E_{HOMO}$')
ax1.set_title('Photoconversion Efficiency vs. HOMO energy')
ax1.grid()
ax2.set_xlabel('PCE')
ax2.set_ylabel('$E_{GAP}$')
ax2.set_title('Photoconversion Efficiency vs. gap energy')
ax2.grid()
plt.show()
```
Note that pandas can also be used like matplotlib to create subplots. It just has a slightly different notation:
```
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(6, 6))
df_sample.plot(x='pce', y='e_homo_alpha', ax=axes[0], kind='scatter', alpha=0.1)
df_sample.plot(x='pce', y='e_gap_alpha', ax=axes[1], kind='scatter', alpha=0.1)
axes[0].grid()
axes[1].grid()
plt.show()
```
### Back to pandas: Quick dataset exploration tools
A very useful tool for quickly exploring relationships between variables in a dataset is the built-in pandas scatterplot matrix:
```
from pandas.plotting import scatter_matrix
scatter_matrix(df_sample, figsize=(10, 10), alpha=0.1)
```
That's a lot of information in one figure! Note the funky id plot at the left. IDs are the molecule ids and don't contain any useful information. Let's make that column the dataframe's index before moving on.
```
df_sample.set_index('id', inplace=True)
df_sample.head()
```
OK, let's move on to density plots. These show the probability density of particular values for a variable. Notice how we used an alternate way of specifying plot type.
```
df_sample['pce'].plot(kind='kde')
```
We can plot two different visualizations on top of each other, for instance a density plot and a histogram. Since the density plot has a different y-axis than the histogram, make sure to use a secondary y-axis
```
ax = df_sample['pce'].plot(kind='hist')
df_sample['pce'].plot(kind='kde', ax=ax, secondary_y=True)
```
### Alternate plot styles
As pandas is built on Matplotlib, you can use Matplotlib to alter the plot style. Styles are essentially a set of defaults for the plot's appearance, so you don't have to modify them all yourself. Let's try the ggplot style, which mimics the ggplot2 output from R.
```
import matplotlib
matplotlib.style.use('ggplot')
df_sample['pce'].plot(kind='kde')
```
You can find the list of matplotlib styles [here](https://tonysyu.github.io/raw_content/matplotlib-style-gallery/gallery.html)
### Seaborn improvements
Matplotlib can be used to create publication-quality images, but it has some limitations, including its capabilities with 3D plots. There's another package, Seaborn, that has a lot of built-in styles for very high-quality plots. Let's take a look at some of the options available:
```
sns.set_style('white')
sns.kdeplot(df_sample['pce'], df_sample['e_homo_alpha'])
sns.kdeplot(df_sample['pce'], df_sample['e_homo_alpha'], cmap='Reds',
shade=True, bw=0.1)
```
### In class exercise
Fix the above subplots so they aren't as shoddy. Add titles, increase font size, change colors and alpha, and change the margins and layout so they are side by side.
---
# ORF recognition by CNN
Use a variable number of bases between START and STOP. Thus, ncRNA will have its STOP out of frame or too close to the START, and pcRNA will have its STOP in frame and far from the START.
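To make the frame rule concrete, here is a small, hedged sketch of the distinction being simulated. It is not the notebook's actual generator (`Random_Base_Oracle` handles that); it only illustrates when a STOP codon "counts":

```python
# A STOP codon terminates an ORF only if it is in frame with the START
# (offset from the START divisible by 3); the ORF is "long" only if that
# STOP is far enough downstream (min_cds_len here mirrors CDS_LEN=16).
START = 'ATG'
STOPS = {'TAA', 'TAG', 'TGA'}

def has_long_orf(seq, min_cds_len=16):
    start = seq.find(START)
    if start < 0:
        return False
    for i in range(start + 3, len(seq) - 2, 3):  # scan codons in frame
        if seq[i:i+3] in STOPS:
            return (i - start) >= min_cds_len    # first in-frame STOP: far enough?
    return False

print(has_long_orf('ATG' + 'AAA' * 6 + 'TAA'))        # in-frame STOP, far away -> True
print(has_long_orf('ATG' + 'TAA' + 'AAA' * 10))       # in-frame but too close -> False
print(has_long_orf('ATG' + 'AA' + 'TAA' + 'A' * 20))  # STOP out of frame -> False
```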
```
import time
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
PC_SEQUENCES=32000 # how many protein-coding sequences
NC_SEQUENCES=32000 # how many non-coding sequences
PC_TESTS=1000
NC_TESTS=1000
RNA_LEN=32 # how long is each sequence
CDS_LEN=16 # min CDS len to be coding
ALPHABET=4 # how many different letters are possible
INPUT_SHAPE_2D = (RNA_LEN,ALPHABET,1) # Conv2D needs 3D inputs
INPUT_SHAPE = (RNA_LEN,ALPHABET) # Conv1D needs 2D inputs
FILTERS = 16 # how many different patterns the model looks for
NEURONS = 16
DROP_RATE = 0.4
WIDTH = 3 # how wide each pattern is, in bases
STRIDE_2D = (1,1) # For Conv2D how far in each direction
STRIDE = 1 # For Conv1D, how far between pattern matches, in bases
EPOCHS=500 # how many times to train on all the data
SPLITS=3 # SPLITS=3 means train on 2/3 and validate on 1/3
FOLDS=3 # train the model this many times (range 1 to SPLITS)
import sys
IN_COLAB = False
try:
    from google.colab import drive
    IN_COLAB = True
except:
    pass
if IN_COLAB:
    print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
    PATH='/content/drive/'
    #drive.mount(PATH,force_remount=True)  # hardly ever need this
    #drive.mount(PATH)  # Google will require login credentials
    DATAPATH=PATH+'My Drive/data/'  # must end in "/"
    import requests
    r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
    with open('RNA_describe.py', 'w') as f:
        f.write(r.text)
    from RNA_describe import ORF_counter
    from RNA_describe import Random_Base_Oracle
    r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')
    with open('RNA_prep.py', 'w') as f:
        f.write(r.text)
    from RNA_prep import prepare_inputs_len_x_alphabet
else:
    print("CoLab not working. On my PC, use relative paths.")
    DATAPATH='data/'  # must end in "/"
    sys.path.append("..")  # append parent dir in order to use sibling dirs
    from SimTools.RNA_describe import ORF_counter,Random_Base_Oracle
    from SimTools.RNA_prep import prepare_inputs_len_x_alphabet
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
from os import listdir
import csv
from zipfile import ZipFile
import numpy as np
import pandas as pd
from scipy import stats # mode
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Conv1D,Conv2D
from keras.layers import Flatten,MaxPooling1D,MaxPooling2D
from keras.losses import BinaryCrossentropy
# tf.keras.losses.BinaryCrossentropy
import matplotlib.pyplot as plt
from matplotlib import colors
mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1
np.set_printoptions(precision=2)
rbo=Random_Base_Oracle(RNA_LEN,True)
pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,10) # just testing
pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,PC_SEQUENCES+PC_TESTS)
print("Use",len(pc_all),"PC seqs")
print("Use",len(nc_all),"NC seqs")
# Describe the sequences
def describe_sequences(list_of_seq):
    oc = ORF_counter()
    num_seq = len(list_of_seq)
    rna_lens = np.zeros(num_seq)
    orf_lens = np.zeros(num_seq)
    for i in range(0, num_seq):
        rna_len = len(list_of_seq[i])
        rna_lens[i] = rna_len
        oc.set_sequence(list_of_seq[i])
        orf_len = oc.get_max_orf_len()
        orf_lens[i] = orf_len
    print("Average RNA length:", rna_lens.mean())
    print("Average ORF length:", orf_lens.mean())
print("Simulated sequences prior to adjustment:")
print("PC seqs")
describe_sequences(pc_all)
print("NC seqs")
describe_sequences(nc_all)
pc_train=pc_all[:PC_SEQUENCES]
nc_train=nc_all[:NC_SEQUENCES]
pc_test=pc_all[PC_SEQUENCES:]
nc_test=nc_all[NC_SEQUENCES:]
# Use code from our SimTools library.
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles
print("Data ready.")
def make_DNN():
    print("make_DNN")
    print("input shape:", INPUT_SHAPE)
    dnn = Sequential()
    #dnn.add(Embedding(input_dim=INPUT_SHAPE, output_dim=INPUT_SHAPE))
    dnn.add(Conv1D(filters=FILTERS, kernel_size=WIDTH, strides=STRIDE, padding="same",
                   input_shape=INPUT_SHAPE))
    dnn.add(Conv1D(filters=FILTERS, kernel_size=WIDTH, strides=STRIDE, padding="same"))
    dnn.add(MaxPooling1D())
    #dnn.add(Conv1D(filters=FILTERS, kernel_size=WIDTH, strides=STRIDE, padding="same"))
    #dnn.add(Conv1D(filters=FILTERS, kernel_size=WIDTH, strides=STRIDE, padding="same"))
    #dnn.add(MaxPooling1D())
    dnn.add(Flatten())
    dnn.add(Dense(NEURONS, activation="sigmoid", dtype=np.float32))
    dnn.add(Dropout(DROP_RATE))
    dnn.add(Dense(1, activation="sigmoid", dtype=np.float32))
    dnn.compile(optimizer='adam',
                loss=BinaryCrossentropy(from_logits=False),
                metrics=['accuracy'])  # accuracy is reported in addition to the loss
    dnn.build(input_shape=INPUT_SHAPE)
    #ln_rate = tf.keras.optimizers.Adam(learning_rate=LN_RATE)
    #bc = tf.keras.losses.BinaryCrossentropy(from_logits=False)
    #model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"])
    return dnn
model = make_DNN()
print(model.summary())
from keras.callbacks import ModelCheckpoint
def do_cross_validation(X, y):
    cv_scores = []
    fold = 0
    mycallbacks = [ModelCheckpoint(
        filepath=MODELPATH, save_best_only=True,
        monitor='val_accuracy', mode='max')]
    splitter = KFold(n_splits=SPLITS)  # this does not shuffle
    for train_index, valid_index in splitter.split(X):
        if fold < FOLDS:
            fold += 1
            X_train = X[train_index]  # inputs for training
            y_train = y[train_index]  # labels for training
            X_valid = X[valid_index]  # inputs for validation
            y_valid = y[valid_index]  # labels for validation
            print("MODEL")
            # Call the constructor on each CV fold. Else, we would keep improving the same model.
            model = make_DNN()
            print("FIT")  # model.fit() implements learning
            start_time = time.time()
            history = model.fit(X_train, y_train,
                                epochs=EPOCHS,
                                verbose=1,  # ascii art while learning
                                callbacks=mycallbacks,  # called at end of each epoch
                                validation_data=(X_valid, y_valid))
            end_time = time.time()
            elapsed_time = (end_time - start_time)
            print("Fold %d, %d epochs, %d sec" % (fold, EPOCHS, elapsed_time))
            # print(history.history.keys())  # all these keys will be shown in the figure
            pd.DataFrame(history.history).plot(figsize=(8, 5))
            plt.grid(True)
            plt.gca().set_ylim(0, 1)  # any losses > 1 will be off the scale
            plt.show()
do_cross_validation(X,y)
from keras.models import load_model
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
```
---
# Introduction to Python & Jupyter Notebooks
In this class, we will rely on Python as our main tool for data science, and we will be running Python in Jupyter Notebooks. Most of you are at home in Python and will only have to spend a few moments here. I have planned for three scenarios:
1. **You don't know anything about Python, or only have a bit of experience**. Then read this whole thing carefully and follow the advice in Part 3 carefully.
2. **You have a bit of experience with Python, but feel like you need a refresher**. Then go through this notebook, chill a bit with the video, and make sure that you can do everything asked of you in the exercises at the end of Part 2. Then you're good to go.
3. **You're a pretty good Python user and are used to working with Jupyter Notebooks**. Then you can safely skip the whole thing.
## Part 1: Installing Python
Now it's time to install Python.
* We recommend you use the _Anaconda distribution_ of Python. You can download it [**here**](https://www.anaconda.com/distribution/).
* Once Anaconda is installed, you start up the notebook system by typing `jupyter notebook` in your terminal, and the system should be ready to use in your favorite browser.
* Be sure to check the keyboard shortcuts under the "Help" heading, where you will find, for instance, the shortcuts for code completion (Tab) and documentation tooltips (Shift-Tab), which will save you a ton of time.
Part 2 will teach you how to use the Jupyter Notebook. Note that if you want to use another Python distribution, that's fine, but we cannot promise to help you with anything other than Anaconda.
## Part 2: Simple Python exercises
> **_Video lecture_**: If you'd like an intro to Jupyter Notebooks and iPython, there's one below. I talk a bit about why we use Python, demo MarkDown, and provide a few tips & tricks. If you already known Python and Jupyter Notebooks, this video lecture is safe to skip.
```
from IPython.display import YouTubeVideo
YouTubeVideo("H9YrBVIcXS4",width=600, height=337.5)
```
Video notes:
* Scipy (https://www.scipy.org)
* NetworkX Homepage (https://networkx.github.io)
* XKCD (https://xkcd.com/353/). About trying Python for the first time.
* Anaconda (https://www.anaconda.com/distribution/).
* Markdown (https://daringfireball.net/projects/markdown/syntax and https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)
* Stack Overflow (Don't search stack overflow. Google your problem and the stack overflow is usually the right link).
> _Exercises_
>
> * Download the IPython file that I've prepared for you and save it somewhere where you can find it again. The link is [**here**](https://raw.githubusercontent.com/socialcomplexitylab/socialgraphs2020/master/files/Training_notebook.ipynb). (**Hint**: Be careful not to save this in _.txt_ format - make sure the extension is _.ipynb_.)
> * Work through exercise 1-9 in the file, solving the simple Python exercises in there. Also use this as a chance to become familiar with how the IPython notebook works. (And a little bit of `json`).
# Part 3: Pandas
It'll also be nice for you to be a bit more on top of `pandas`. Pandas is a super-useful framework within Python to work with structured data. A lot of people think of it as Excel with superpowers.
I recommend you get familiar with pandas through the official tutorial below. It'll make the exercises for Week 1 much easier.
https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html
> *Exercise*:
> * Work through the tutorial above in your own Jupyter notebook.
# Part 4: A warning
### STOP: Super important notice
Now that you've completed working through the IPython notebook, it's time for the moment of truth! If you had great difficulty with the Python coding itself, you're going to be in trouble. Everything we do going forward in this class will depend on you being comfortable with Python. There is simply **no way** that you will be able to do well, if you're also struggling with Python on top of everything else you'll be learning.
**So if you're not 100% comfortable with Python, I recommend you stop right now, and follow a tutorial to teach you Python, for example [this one](https://www.learnpython.org), before proceeding**. This might seem tough, but the ability to program is a prerequisite for this class, and if you know how to program, you should be able to handle the Python questions above.
---
# Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
* [Pix2Pix](https://affinelayer.com/pixsrv/)
* [CycleGAN](https://github.com/junyanz/CycleGAN)
* [A whole list](https://github.com/wiseodd/generative-models)
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts whether the data it receives is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks _as close as possible_ to real data. The discriminator, in turn, is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.

The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
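For intuition, a batch of those latent samples is just random noise with one vector per image. A hedged numpy illustration (the exact distribution used at training time may differ):

```python
import numpy as np

batch_size, z_size = 16, 100
# one latent vector of length z_size per row; uniform noise in [-1, 1)
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
print(z.shape)  # (16, 100)
```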
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
```
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
```
## Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input `inputs_real` and the generator input `inputs_z`. We'll assign them the appropriate sizes for each of the networks.
```
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    return inputs_real, inputs_z
```
## Generator network

Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
#### Variable Scope
Here we need to use `tf.variable_scope` for two reasons. Firstly, we're going to make sure all the variable names start with `generator`. Similarly, we'll prepend `discriminator` to the discriminator variables. This will help out later when we're training the separate networks.
We could just use `tf.name_scope` to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also _sample from it_ as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the `reuse` keyword for `tf.variable_scope` to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use `tf.variable_scope`, you use a `with` statement:
```python
with tf.variable_scope('scope_name', reuse=False):
# code here
```
Here's more from [the TensorFlow documentation](https://www.tensorflow.org/programmers_guide/variable_scope#the_problem) to get another look at using `tf.variable_scope`.
#### Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this, you can take the outputs from a linear fully connected layer and pass them to `tf.maximum`. Typically, a parameter `alpha` sets the magnitude of the output for negative values. So, the output for negative input (`x`) values is `alpha*x`, and the output for positive `x` is `x`:
$$
f(x) = \max(\alpha \cdot x, x)
$$
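As a quick sanity check, here is the same function in plain numpy, separate from the TensorFlow graph built below (just an illustration):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # identity for positive inputs, a small slope of alpha for negative ones
    return np.maximum(alpha * x, x)

x = np.array([-100.0, -1.0, 0.0, 1.0, 100.0])
print(leaky_relu(x))  # -100 maps to -1.0 and -1 to -0.01; non-negative values pass through
```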
#### Tanh Output
The generator has been found to perform best with $\tanh$ as the output activation. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
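Since the MNIST pixels loaded by `input_data` lie in $[0, 1]$, that rescaling is a one-liner; a dummy array stands in for a real batch here:

```python
import numpy as np

batch = np.array([0.0, 0.25, 0.5, 1.0])  # dummy pixel values in [0, 1]
rescaled = batch * 2 - 1                 # now in [-1, 1], matching tanh's range
print(rescaled)  # [-1.  -0.5  0.   1. ]
```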
```
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('generator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(z, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)
        # Logits and tanh output
        logits = tf.layers.dense(h1, out_dim, activation=None)
        out = tf.tanh(logits)
        return out
```
## Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
```
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(x, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)
        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)
        return out, logits
```
## Hyperparameters
```
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
#g_hidden_size = 128
#d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
```
## Build network
Now we're building the network from the functions defined above.
First is to get our inputs, `input_real, input_z` from `model_inputs` using the sizes of the input and z.
Then, we'll create the generator, `generator(input_z, input_size)`. This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as `g_model`. So the real data discriminator is `discriminator(input_real)` while the fake discriminator is `discriminator(g_model, reuse=True)`.
```
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
```
## Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_loss_real + d_loss_fake`. The losses will be sigmoid cross-entropies, which we can get with `tf.nn.sigmoid_cross_entropy_with_logits`. We'll also wrap that in `tf.reduce_mean` to get the mean over all the images in the batch. So the losses will look something like
```python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
```
For the real image logits, we'll use `d_logits_real` which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter `smooth`. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like `labels = tf.ones_like(tensor) * (1 - smooth)`
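As a quick sanity check of the smoothing effect, here is a small NumPy sketch of the numerically stable formula that `tf.nn.sigmoid_cross_entropy_with_logits` computes; the helper name is ours, not TensorFlow's:

```python
import numpy as np

def sigmoid_cross_entropy(logits, labels):
    # Numerically stable form used by tf.nn.sigmoid_cross_entropy_with_logits:
    # max(x, 0) - x * z + log(1 + exp(-|x|))
    x, z = logits, labels
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

smooth = 0.1
logits = np.array([4.0])  # a confidently "real" prediction
loss_hard = sigmoid_cross_entropy(logits, np.array([1.0]))
loss_smooth = sigmoid_cross_entropy(logits, np.array([1.0]) * (1 - smooth))
# The smoothed target penalizes overconfident logits, which regularizes D.
```

With the hard label the loss at a confident logit is nearly zero, while the smoothed target leaves a residual penalty, discouraging the discriminator from saturating.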
The discriminator loss for the fake data is similar. The logits are `d_logits_fake`, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses `d_logits_fake`, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
```
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
```
## Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build separate optimizers for the two parts. To get all the trainable variables, we use `tf.trainable_variables()`. This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with `generator`. So, we just need to iterate through the list from `tf.trainable_variables()` and keep the variables whose names start with `generator`. Each variable object has an attribute `name` which holds the name of the variable as a string (`var.name == 'weights_0'` for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with `discriminator`.
Then, in the optimizer we pass the variable lists to `var_list` in the `minimize` method. This tells the optimizer to only update the listed variables. Something like `tf.train.AdamOptimizer().minimize(loss, var_list=var_list)` will only train the variables in `var_list`.
```
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
```
## Training
```
!mkdir checkpoints
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
```
## Training loss
Here we'll check out the training losses for the generator and discriminator.
```
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
```
## Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
```
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
```
These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
```
_ = view_samples(-1, samples)
```
Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
```
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
```
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.
## Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
```
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
```
---
```
import os
import sys
import random
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader
!pip install transformers
from transformers import BertTokenizer
from transformers import BertForSequenceClassification
from transformers import BertConfig
!pip install -U spacy[cuda92]
!python -m spacy download en_core_web_sm
import spacy
import en_core_web_sm
spacy.prefer_gpu()
spacy_nlp = en_core_web_sm.load()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
DIR = "question_generator/"
PRETRAINED_MODEL = 'bert-base-cased'
BATCH_SIZE = 16
SEQ_LENGTH = 512
tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL)
class QAEvalDataset(Dataset):
def __init__(self, csv):
self.df = pd.read_csv(csv, engine='python')
self.transforms = [self.shuffle, self.corrupt]
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
_, question, answer = self.df.iloc[idx]
label = random.choice([0, 1])
if label == 0:
question, answer = random.choice(self.transforms)(question, answer)
encoded_data = tokenizer(
text=question,
text_pair=answer,
padding='max_length',
max_length=SEQ_LENGTH,
truncation=True,
return_tensors="pt"
)
encoded_data['input_ids'] = torch.squeeze(encoded_data['input_ids'])
encoded_data['token_type_ids'] = torch.squeeze(encoded_data['token_type_ids'])
encoded_data['attention_mask'] = torch.squeeze(encoded_data['attention_mask'])
return (encoded_data.to(device), torch.tensor(label).to(device))
def shuffle(self, question, answer):
shuffled_answer = answer
while shuffled_answer == answer:
shuffled_answer = self.df.sample(1)['answer'].item()
return question, shuffled_answer
def corrupt(self, question, answer):
doc = spacy_nlp(question)
if len(doc.ents) > 1:
# Replace all entities in the sentence with the same thing
copy_ent = str(random.choice(doc.ents))
for ent in doc.ents:
question = question.replace(str(ent), copy_ent)
elif len(doc.ents) == 1:
# Replace the answer with an entity from the question
answer = str(doc.ents[0])
else:
question, answer = self.shuffle(question, answer)
return question, answer
train_set = QAEvalDataset(os.path.join(DIR, 'qa_eval_train.csv'))
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)
valid_set = QAEvalDataset(os.path.join(DIR, 'qa_eval_valid.csv'))
valid_loader = DataLoader(valid_set, batch_size=BATCH_SIZE, shuffle=False)
LR = 0.001
EPOCHS = 10
LOG_INTERVAL = 500
model = BertForSequenceClassification.from_pretrained(PRETRAINED_MODEL)
model = model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
SAVED_MODEL_PATH = "question_generator/qa_eval_model_trained.pth"
def train():
model.train()
total_loss = 0.
for batch_index, batch in enumerate(train_loader):
data, labels = batch
optimizer.zero_grad()
output = model(**data, labels=labels)
loss = output[0]
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
total_loss += loss.item()
if batch_index % LOG_INTERVAL == 0 and batch_index > 0:
cur_loss = total_loss / LOG_INTERVAL
print('| epoch {:3d} | '
'{:5d}/{:5d} batches | '
'loss {:5.2f}'.format(
epoch,
batch_index, len(train_loader),
cur_loss))
total_loss = 0
def evaluate(eval_model, data_loader):
eval_model.eval()
total_score = 0.
with torch.no_grad():
for batch_index, batch in enumerate(data_loader):
data, labels = batch
output = eval_model(**data, labels=labels)
preds = np.argmax(output[1].cpu(), axis=1)
total_score += (preds == labels.cpu()).sum()
# Multiply by 100 so the "{:5.2f}%" prints below read as percentages
return 100. * total_score / (len(data_loader) * BATCH_SIZE)
def save(epoch, model_state_dict, optimizer_state_dict, loss):
torch.save({
'epoch': epoch,
'model_state_dict': model_state_dict,
'optimizer_state_dict': optimizer_state_dict,
'best_loss': loss,
}, SAVED_MODEL_PATH)
print("| Model saved.")
print_line()
def load():
return torch.load(SAVED_MODEL_PATH)
def print_line():
LINE_WIDTH = 60
print('-' * LINE_WIDTH)
highest_accuracy = 0
accuracy = evaluate(model, valid_loader)
print_line()
print('| Before training | accuracy on valid set: {:5.2f}%'.format(accuracy))
print_line()
for epoch in range(1, EPOCHS + 1):
train()
accuracy = evaluate(model, valid_loader)
print_line()
print('| end of epoch {:3d} | accuracy on valid set: {:5.2f}%'.format(
epoch,
accuracy)
)
print_line()
if accuracy > highest_accuracy:
highest_accuracy = accuracy
save(
epoch,
model.state_dict(),
optimizer.state_dict(),
highest_accuracy
)
```
---
## Identifiability Test of Linear VAE on Synthetic Dataset
```
%load_ext autoreload
%autoreload 2
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, random_split
import leap
import numpy as np
from leap.datasets.sim_dataset import SimulationDataset
from leap.modules.linear_vae import AfflineVAESynthetic
import random
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
use_cuda = True
device = torch.device("cuda:0" if use_cuda else "cpu")
latent_size = 8
data = SimulationDataset(directory = '/data/datasets/logs/cmu_wyao/data/',
transition='linear_nongaussian')
num_validation_samples = 2500
train_data, val_data = random_split(data, [len(data)-num_validation_samples, num_validation_samples])
train_loader = DataLoader(train_data, batch_size=12800, shuffle=True, pin_memory=True)
val_loader = DataLoader(val_data, batch_size=16, shuffle=False, pin_memory=True)
model = AfflineVAESynthetic(latent_size,latent_size,2).to(device)
```
### Warm start spline flow
Do not run this block if already warm-started
```
batch_size = 64
spline_optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.spline.parameters()),
lr=1e-3,
weight_decay=0.0)
# Warm-start the prior to standard normal dist
for step in range(5000):
latent_size = 8
# y_t = torch.normal(0, 1, size=(batch_size, latent_size))
y_t = torch.distributions.laplace.Laplace(0,1).rsample((batch_size, latent_size))
dataset = y_t.to(device)
spline_optimizer.zero_grad()
z, logabsdet = model.spline(dataset)
logp = model.spline.base_dist.log_prob(z) + logabsdet
loss = -torch.mean(logp)
loss.backward(retain_graph=True)
spline_optimizer.step()
# This checkpoint will be loaded in lvae.py
torch.save(model.spline.state_dict(), "/home/cmu_wyao/spline.pth")
```
### Load model checkpoint
```
model = model.load_from_checkpoint("/data/datasets/logs/cmu_wyao/toy_linear_2lag/lightning_logs/version_6/checkpoints/epoch=99-step=777299.ckpt",
input_dim = latent_size, z_dim=latent_size, lag=2)
model.eval()
model.to('cpu')
```
### Compute permutation and sign flip
```
for batch in train_loader:
break
batch_size = batch['xt'].shape[0]
x_recon, mu, logvar, z = model.forward(batch)
mu = mu.view(batch_size, -1, latent_size)
A = mu[:,0,:].detach().cpu().numpy()
B = batch['yt'][:,0,:].detach().cpu().numpy()
C = np.zeros((latent_size,latent_size))
for i in range(latent_size):
C[i] = -np.abs(np.corrcoef(B, A, rowvar=False)[i,latent_size:])
from scipy.optimize import linear_sum_assignment
row_ind, col_ind = linear_sum_assignment(C)
A = A[:, col_ind]
mask = np.ones(latent_size)
for i in range(latent_size):
if np.corrcoef(B, A, rowvar=False)[i,latent_size:][i] > 0:
mask[i] = -1
print("Permutation:",col_ind)
print("Sign Flip:", mask)
fig = plt.figure(figsize=(4,4))
sns.heatmap(-C, vmin=0, vmax=1, annot=True, fmt=".2f", linewidths=.5, cbar=False, cmap='Greens')
plt.xlabel("Estimated latents ")
plt.ylabel("True latents ")
plt.title("MCC=0.98")
# Permute column here
mu = mu[:,:,col_ind]
# Flip sign here
mu = mu * torch.Tensor(mask, device=mu.device).view(1,1,latent_size)
mu = -mu
fig = plt.figure(figsize=(8,2))
col = 0
plt.plot(batch['yt_'].squeeze()[:250,col].detach().cpu().numpy(), color='b', label='True', alpha=0.75)
plt.plot(mu[:250,-1,col].detach().cpu().numpy(), color='r', label='Estimated', alpha=0.75)
plt.legend()
plt.title("Current latent variable $z_t$")
fig = plt.figure(figsize=(8,2))
col = 3
l = 1
plt.plot(batch['yt'].squeeze()[:250,l,col].detach().cpu().numpy(), color='b', label='True')
plt.plot(mu[:,:-1,:][:250,l,col].detach().cpu().numpy(), color='r', label="Estimated")
plt.xlabel("Sample index")
plt.ylabel("Latent variable value")
plt.legend()
plt.title("Past latent variable $z_l$")
fig = plt.figure(figsize=(2,2))
eps = model.sample(batch["xt"].cpu())
eps = eps.detach().cpu().numpy()
component_idx = 4
sns.distplot(eps[:,component_idx], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2});
plt.title("Learned noise prior")
```
### System identification (causal discovery)
```
from leap.modules.components.base import GroupLinearLayer
trans_func = GroupLinearLayer(din = 8,
dout = 8,
num_blocks = 2,
diagonal = False)
b = torch.nn.Parameter(0.001 * torch.randn(1, 8))
opt = torch.optim.Adam(trans_func.parameters(),lr=0.01)
lossfunc = torch.nn.L1Loss()
max_iters = 2
counter = 0
for step in range(max_iters):
for batch in train_loader:
batch_size = batch['yt'].shape[0]
x_recon, mu, logvar, z = model.forward(batch)
mu = mu.view(batch_size, -1, 8)
# Fix permutation before training
mu = mu[:,:,col_ind]
# Fix sign flip before training
mu = mu * torch.Tensor(mask, device=mu.device).view(1,1,8)
mu = -mu
pred = trans_func(mu[:,:-1,:]).sum(dim=1) + b
true = mu[:,-1,:]
loss = lossfunc(pred, true) #+ torch.mean(adaptive.lossfun((pred - true)))
opt.zero_grad()
loss.backward()
opt.step()
if counter % 100 == 0:
print(loss.item())
counter += 1
```
### Visualize causal matrix
```
B2 = model.trans_func.w[0][col_ind][:, col_ind].detach().cpu().numpy()
B1 = model.trans_func.w[1][col_ind][:, col_ind].detach().cpu().numpy()
B1 = B1 * mask.reshape(1,-1) * (-1*mask).reshape(-1,1)
B2 = B2 * mask.reshape(1,-1) * (-1*mask).reshape(-1,1)
BB2 = np.load("/data/datasets/logs/cmu_wyao/data/linear_nongaussian/W2.npy")
BB1 = np.load("/data/datasets/logs/cmu_wyao/data/linear_nongaussian/W1.npy")
b = np.concatenate((B1,B2), axis=0)
bb = np.concatenate((BB1,BB2), axis=0)
b = b / np.linalg.norm(b, axis=0).reshape(1, -1)
bb = bb / np.linalg.norm(bb, axis=0).reshape(1, -1)
pred = (b / np.linalg.norm(b, axis=0).reshape(1, -1)).reshape(-1)
true = (bb / np.linalg.norm(bb, axis=0).reshape(1, -1)).reshape(-1)
fig, ax = plt.subplots(figsize=(3,3))
ax.scatter(pred, true, s=10, cmap=plt.cm.coolwarm, zorder=10, color='b')
lims = [-1, 1]
# now plot both limits against each other
ax.plot(lims, lims, '-.', alpha=0.75, zorder=0)
ax.set_aspect('equal')
# ax.set_xlim(lims)
# ax.set_ylim(lims)
ax.set_xlabel("Estimated weight")
ax.set_ylabel("Ground truth weight")
ax.set_title("Affine VAE + Spline flow")
import numpy as numx
def calculate_amari_distance(matrix_one,
matrix_two,
version=1):
""" Calculate the Amari distance between two input matrices.
:param matrix_one: the first matrix
:type matrix_one: numpy array
:param matrix_two: the second matrix
:type matrix_two: numpy array
:param version: Variant to use.
:type version: int
:return: The amari distance between two input matrices.
:rtype: float
"""
if matrix_one.shape != matrix_two.shape:
    raise ValueError("Two matrices must have the same shape.")
product_matrix = numx.abs(numx.dot(matrix_one,
numx.linalg.inv(matrix_two)))
product_matrix_max_col = numx.array(product_matrix.max(0))
product_matrix_max_row = numx.array(product_matrix.max(1))
n = product_matrix.shape[0]
""" Formula from ESLII
Here it is referred to as the "amari error".
The value is in [0, N-1].
reference:
Bach, F. R.; Jordan, M. I. Kernel Independent Component
Analysis, J MACH LEARN RES, 2002, 3, 1--48
"""
amari_distance = product_matrix / numx.tile(product_matrix_max_col, (n, 1))
amari_distance += product_matrix / numx.tile(product_matrix_max_row, (n, 1)).T
amari_distance = amari_distance.sum() / (2 * n) - 1
amari_distance = amari_distance / (n-1)
return amari_distance
print("Amari distance for B1:", calculate_amari_distance(B1, BB1))
print("Amari distance for B2:", calculate_amari_distance(B2, BB2))
```
---
```
import sys
sys.path.append('..')
import torch
import pandas as pd
import numpy as np
import pickle
import argparse
import networkx as nx
from collections import Counter
from torch_geometric.utils import dense_to_sparse, degree
import matplotlib.pyplot as plt
from src.gcn import GCNSynthetic
from src.utils.utils import normalize_adj, get_neighbourhood
```
### Syn1 dataset (BA houses) , best params so far: SGD + momentum=0.9, epochs=500, LR=0.1, beta=0.5
#### Uses correct version of symmetry constraint
#### For BA houses, class 0 = BA, class 1 = middle, class 2 = bottom, class 3 = top
```
header = ["node_idx", "new_idx", "cf_adj", "sub_adj", "y_pred_cf", "y_pred_orig",
"label", "num_nodes", "node_dict", "loss_graph_dist"]
# For original model
dataset = "syn1"
hidden = 20
seed = 42
dropout = 0.0
# Load original dataset and model
with open("../data/gnn_explainer/{}.pickle".format(dataset), "rb") as f:
data = pickle.load(f)
adj = torch.Tensor(data["adj"]).squeeze() # Does not include self loops
features = torch.Tensor(data["feat"]).squeeze()
labels = torch.tensor(data["labels"]).squeeze()
idx_train = torch.tensor(data["train_idx"])
idx_test = torch.tensor(data["test_idx"])
edge_index = dense_to_sparse(adj)
norm_adj = normalize_adj(adj)
model = GCNSynthetic(nfeat=features.shape[1], nhid=hidden, nout=hidden,
nclass=len(labels.unique()), dropout=dropout)
model.load_state_dict(torch.load("../models/gcn_3layer_{}.pt".format(dataset)))
model.eval()
output = model(features, norm_adj)
y_pred_orig = torch.argmax(output, dim=1)
print("test set y_true counts: {}".format(np.unique(labels[idx_test].numpy(), return_counts=True)))
print("test set y_pred_orig counts: {}".format(np.unique(y_pred_orig[idx_test].numpy(), return_counts=True)))
print("Whole graph counts: {}".format(np.unique(labels.numpy(), return_counts=True)))
num_epochs = 500
# Load cf examples for test set
with open("../baselines/results/random_perturb/{}_baseline_cf_examples_epochs{}".format(dataset, num_epochs), "rb") as f:
cf_examples = pickle.load(f)
df = pd.DataFrame(cf_examples, columns=header)
print("ALL CF EXAMPLES")
print("Num cf examples found: {}/{}".format(len(df), len(idx_test)))
print("Average graph distance: {}".format(np.mean(df["loss_graph_dist"])))
df.head()
# Add num edges to df
num_edges = []
for i in df.index:
num_edges.append(sum(sum(df["sub_adj"][i]))/2)
df["num_edges"] = num_edges
```
### FINAL NUMBERS
```
print("Num cf examples found: {}/{}".format(len(df), len(idx_test)))
print("Coverage: {}".format(len(df)/len(idx_test)))
print("Average graph distance: {}".format(np.mean(df["loss_graph_dist"])))
print("Average prop comp graph perturbed: {}".format(np.mean(df["loss_graph_dist"]/df["num_edges"])))
font = {'weight' : 'normal',
'size' : 18}
plt.rc('font', **font)
# Plot graph loss of cf examples
bins = [0, 250, 500, 750, 1000, 1250, 1500, 1750]
plt.hist(df["loss_graph_dist"], bins=bins, weights=np.ones(len(df))/len(df))
# plt.title("BA-SHAPES")
plt.xlabel("Explanation Size")
plt.ylim(0, 1.1)
plt.ylabel("Prop CF examples")
# For accuracy, only look at motif nodes
df_motif = df[df["y_pred_orig"] != 0].reset_index(drop=True)
accuracy = []
# Get original predictions
dict_ypred_orig = dict(zip(sorted(np.concatenate((idx_train.numpy(), idx_test.numpy()))),
y_pred_orig.numpy()))
for i in range(len(df_motif)):
node_idx = df_motif["node_idx"][i]
new_idx = df_motif["new_idx"][i]
_, _, _, node_dict = get_neighbourhood(int(node_idx), edge_index, 4, features, labels)
# Confirm idx mapping is correct
if node_dict[node_idx] == df_motif["new_idx"][i]:
cf_adj = df_motif["cf_adj"][i]
sub_adj = df_motif["sub_adj"][i]
perturb = np.abs(cf_adj - sub_adj)
perturb_edges = np.nonzero(perturb) # Edge indices
nodes_involved = np.unique(np.concatenate((perturb_edges[0], perturb_edges[1]), axis=0))
perturb_nodes = nodes_involved[nodes_involved != new_idx] # Remove original node
# Retrieve original node idxs for original predictions
perturb_nodes_orig_idx = []
for j in perturb_nodes:
perturb_nodes_orig_idx.append([key for (key, value) in node_dict.items() if value == j])
perturb_nodes_orig_idx = np.array(perturb_nodes_orig_idx).flatten()
# Retrieve original predictions
perturb_nodes_orig_ypred = np.array([dict_ypred_orig[k] for k in perturb_nodes_orig_idx])
nodes_in_motif = perturb_nodes_orig_ypred[perturb_nodes_orig_ypred != 0]
prop_correct = len(nodes_in_motif)/len(perturb_nodes_orig_idx)
accuracy.append([node_idx, new_idx, perturb_nodes_orig_idx,
perturb_nodes_orig_ypred, nodes_in_motif, prop_correct])
df_accuracy = pd.DataFrame(accuracy, columns=["node_idx", "new_idx", "perturb_nodes_orig_idx",
"perturb_nodes_orig_ypred", "nodes_in_motif", "prop_correct"])
print("Accuracy", np.mean(df_accuracy["prop_correct"]))
```
---
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as ss
import tensorflow as tf
import math
import random
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.utils import get_custom_objects
from tensorflow.keras.layers import Activation
df=pd.read_excel('adj_data_new.xlsx')
df
ncode=np.unique(df['secID'].values)
ncode[0]
mon=[1,2,3,4]
year=[2009,2010,2011,2012,2013,2014,2015,2016,2017,2018,2019]
a1=df[df['year'].isin([2010])].copy().reset_index()
a2=df[df['year'].isin([2011])].copy().reset_index()
k=a2[a2.columns[2:14]]-a1[a1.columns[2:14]]
k
k['secID']=a2['secID']
k['date']=a2['date']
k['year']=a2['year']
k['mon']=a2['mon']
k
newset=[]
for i in range(len(year)-1):
temp=df[df['year'].isin([year[i]])].copy().reset_index()
temp1=df[df['year'].isin([year[i+1]])].copy().reset_index()
k=temp1[temp1.columns[2:14]]-temp[temp.columns[2:14]]
k['secID']=temp1['secID']
k['date']=temp1['date']
k['year']=temp1['year']
k['mon']=temp1['mon']
newset.append(k)
data=pd.concat(newset)
data
data.to_excel('diff_data.xlsx')
df=pd.read_excel('diff_data.xlsx')
df
ncode=np.unique(df['secID'].values)
ncode[0]
mon=[1,2,3,4]
trainyear=[2010,2011,2012,2013]
trainyear1=[2011,2012,2013,2014]
trainyear2=[2012,2013,2014,2015]
train=[]
for i in range(len(ncode)):
temp=df[df['secID'].isin([ncode[i]])].copy()
temp1=temp[temp['year'].isin(trainyear)].copy()
for j in mon:
temp2=temp1[temp1['mon'].isin([j])].copy()
values=temp2.loc[:,'revenue':'marketValue'].copy()
val=values.values
train.append(val)
for i in range(len(ncode)):
temp=df[df['secID'].isin([ncode[i]])].copy()
temp1=temp[temp['year'].isin(trainyear1)].copy()
for j in mon:
temp2=temp1[temp1['mon'].isin([j])].copy()
values=temp2.loc[:,'revenue':'marketValue'].copy()
val=values.values
train.append(val)
for i in range(len(ncode)):
temp=df[df['secID'].isin([ncode[i]])].copy()
temp1=temp[temp['year'].isin(trainyear2)].copy()
for j in mon:
temp2=temp1[temp1['mon'].isin([j])].copy()
values=temp2.loc[:,'revenue':'marketValue'].copy()
val=values.values
train.append(val)
x_train=np.array(train)
x_train.shape
testyear=[2014]
testyear1=[2015]
testyear2=[2016]
test=[]
for i in range(len(ncode)):
temp=df[df['secID'].isin([ncode[i]])].copy()
temp1=temp[temp['year'].isin(testyear)].copy()
for j in mon:
temp2=temp1[temp1['mon'].isin([j])].copy()
values=temp2.loc[:,'revenue':'marketValue'].copy()
val=values.values
test.append(val)
for i in range(len(ncode)):
temp=df[df['secID'].isin([ncode[i]])].copy()
temp1=temp[temp['year'].isin(testyear1)].copy()
for j in mon:
temp2=temp1[temp1['mon'].isin([j])].copy()
values=temp2.loc[:,'revenue':'marketValue'].copy()
val=values.values
test.append(val)
for i in range(len(ncode)):
temp=df[df['secID'].isin([ncode[i]])].copy()
temp1=temp[temp['year'].isin(testyear2)].copy()
for j in mon:
temp2=temp1[temp1['mon'].isin([j])].copy()
values=temp2.loc[:,'revenue':'marketValue'].copy()
val=values.values
test.append(val)
y_train=np.array(test)
y_train=y_train.reshape(2172,12)
y_train.shape
RNN_CELLSIZE = 100
SEQLEN = x_train.shape[1]
VAR=x_train.shape[2]
BATCHSIZE=256
LR = 0.005
```
The maximum likelihood estimator is $\theta^{MLE}=\arg\max_{\theta}\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi g_{\theta}(\mathbf{x_{i}})^{2}}}\exp\left(\frac{-(y_{i}-f_{\theta}(\mathbf{x_{i}}))^{2}}{2g_{\theta}(\mathbf{x_{i}})^{2}}\right)$. Taking the negative log, the loss function of the neural network is $\theta^{MLE}=\arg\min_{\theta}\sum_{i=1}^{n}\left(\log(g_{\theta}(\mathbf{x_{i}}))+\frac{(y_{i}-f_{\theta}(\mathbf{x_{i}}))^{2}}{2g_{\theta}(\mathbf{x_{i}})^{2}}\right)$.
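To connect the formula to code, here is a minimal NumPy version of the per-sample loss above (dropping constants that do not depend on $\theta$); the function name is ours:

```python
import numpy as np

def gaussian_nll(y, mean, std):
    # Per-sample negative log-likelihood of N(mean, std^2),
    # up to the additive constant 0.5 * log(2 * pi)
    return np.log(std) + (y - mean) ** 2 / (2 * std ** 2)

# When the prediction is exact, only the log-scale term remains
print(gaussian_nll(y=1.0, mean=1.0, std=2.0))  # log(2)
```

Note that the notebook's `loss_fun` below additionally squares the batch mean of this quantity, which is a modeling choice of the author rather than part of the standard negative log-likelihood.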
```
class GRUModel(tf.keras.Model):
def __init__(self, seq_length, cell_size,batchsize,var):
super().__init__()
self.batchsize=batchsize
self.var=var
self.seq_length = seq_length
self.cell_size = cell_size
self.layer1 = tf.keras.layers.Reshape((self.seq_length,self.var), batch_size = self.batchsize)
#self.layer2= tf.keras.layers.Dense(10,input_shape=(self.seq_length,self.var),activation='relu',
#kernel_initializer=tf.keras.initializers.TruncatedNormal(mean=1,stddev=1),
#bias_initializer=tf.keras.initializers.Constant(1))
self.layer_GRU = tf.keras.layers.GRU(self.cell_size,activation='relu', return_sequences=False)
self.layer_dense = tf.keras.layers.Dense(12,activation='softplus',bias_initializer=tf.keras.initializers.Constant(1),
kernel_initializer=tf.keras.initializers.TruncatedNormal(mean=2,stddev=1))
self.layer_dense1 = tf.keras.layers.Dense(12,activation='linear')
def call(self, inputs):
x1 = self.layer1(inputs)
#x1 = self.layer2(x1)
x2 = self.layer_GRU(x1)
output1=self.layer_dense(x2)
output2=self.layer_dense1(x2)
return output1,output2
def loss_fun(y_pred1,y_pred2,y_test):
#loss=tf.math.log(y_pred1)+tf.clip_by_value((tf.square(y_test-y_pred2))/(tf.square(y_pred1)*2),0,100)
loss=tf.math.log(y_pred1)+tf.square(y_test-y_pred2)/(tf.square(y_pred1)*2)
#loss=tf.math.log(tf.reduce_mean(y_pred1,axis=0))+tf.reduce_mean(tf.square(y_test-y_pred2),axis=0)/(tf.square(tf.reduce_mean(y_pred1,axis=0))*2)
return tf.square(tf.reduce_mean(loss))
model = GRUModel(SEQLEN,RNN_CELLSIZE,BATCHSIZE, VAR)
optimizer = tf.keras.optimizers.Adam(learning_rate = LR)
for epoch in range(100):
with tf.GradientTape() as tape:
y_pred = model(x_train)
loss=loss_fun(y_pred[0],y_pred[1],y_train)
#loss=tf.reduce_mean((y_pred-y_train)**2)
if epoch%1 == 0:
print("epoch: {}, loss: {}".format(epoch, loss))
grads = tape.gradient(loss, model.variables)
#print(grads)
optimizer.apply_gradients(zip(grads,model.variables))
ytestyear1=[2015,2016,2017,2018]
test_x=[]
for i in range(len(ncode)):
temp=df[df['secID'].isin([ncode[i]])].copy()
temp1=temp[temp['year'].isin(ytestyear1)].copy()
for j in mon:
temp2=temp1[temp1['mon'].isin([j])].copy()
values=temp2.loc[:,'revenue':'marketValue'].copy()
val=values.values
test_x.append(val)
x_test=np.array(test_x)
x_test.shape
result=model.predict(x_test)
EBIT_mkt=result[1][:,3]
```
risk-adjusted EBIT
```
adje=EBIT_mkt/result[0][:,3]
codelist=[]
for i in range(len(ncode)):
code=ncode[i]
for i in range(4):
codelist.append(code)
codelist
monlist=mon*len(ncode)
da=pd.DataFrame()
da['code']=codelist
da['adjEBIT']=adje
da['fo_EBIT']=result[1][:,3]
da['mon']=monlist
da['year']=2019
da
mul=(len(da['year']))/4
mul
date=['2018-12-28','2019-03-29','2019-06-28','2019-09-30']*181
date
da['date']=date
da
da.to_excel('2019diffdf.xlsx')
```
---
### 98. Validate Binary Search Tree
#### Content
<p>Given the <code>root</code> of a binary tree, <em>determine if it is a valid binary search tree (BST)</em>.</p>
<p>A <strong>valid BST</strong> is defined as follows:</p>
<ul>
<li>The left subtree of a node contains only nodes with keys <strong>less than</strong> the node's key.</li>
<li>The right subtree of a node contains only nodes with keys <strong>greater than</strong> the node's key.</li>
<li>Both the left and right subtrees must also be binary search trees.</li>
</ul>
<p> </p>
<p><strong>Example 1:</strong></p>
<img alt="" src="https://assets.leetcode.com/uploads/2020/12/01/tree1.jpg" style="width: 302px; height: 182px;" />
<pre>
<strong>Input:</strong> root = [2,1,3]
<strong>Output:</strong> true
</pre>
<p><strong>Example 2:</strong></p>
<img alt="" src="https://assets.leetcode.com/uploads/2020/12/01/tree2.jpg" style="width: 422px; height: 292px;" />
<pre>
<strong>Input:</strong> root = [5,1,4,null,null,3,6]
<strong>Output:</strong> false
<strong>Explanation:</strong> The root node's value is 5 but its right child's value is 4.
</pre>
<p> </p>
<p><strong>Constraints:</strong></p>
<ul>
<li>The number of nodes in the tree is in the range <code>[1, 10<sup>4</sup>]</code>.</li>
<li><code>-2<sup>31</sup> <= Node.val <= 2<sup>31</sup> - 1</code></li>
</ul>
#### Difficulty: Medium, AC rate: 29.3%
#### Question Tags:
- Tree
- Depth-First Search
- Binary Search Tree
- Binary Tree
#### Links:
🎁 [Question Detail](https://leetcode.com/problems/validate-binary-search-tree/description/) | 🎉 [Question Solution](https://leetcode.com/problems/validate-binary-search-tree/solution/) | 💬 [Question Discussion](https://leetcode.com/problems/validate-binary-search-tree/discuss/?orderBy=most_votes)
#### Hints:
#### Sample Test Case
[2,1,3]
---
What's your idea?
Transform the problem into checking whether the in-order traversal is strictly increasing.
---
```
from typing import List
# Definition for a binary tree node.
class TreeNode:
def __init__(self, val=0, left=None, right=None):
self.val = val
self.left = left
self.right = right
class Solution:
def isValidBST(self, root: TreeNode) -> bool:
curr_min = [(-2)**31 - 1]
return self.visit(root, curr_min)
def visit(self, node: TreeNode, curr_min: List[int]) -> bool:
is_left_valid = node.left is None or self.visit(node.left, curr_min)
if node.val <= curr_min[0]:
return False
else:
curr_min[0] = node.val
is_right_valid = node.right is None or self.visit(node.right, curr_min)
return is_left_valid and is_right_valid
s = Solution()
n1 = TreeNode(1)
n3 = TreeNode(3)
n2 = TreeNode(2, n1, n3)
s.isValidBST(n2)
n1 = TreeNode(1)
n3 = TreeNode(3)
n6 = TreeNode(6)
n4 = TreeNode(4, n3, n6)
n5 = TreeNode(5, n1, n4)
s.isValidBST(n5)
n_negative1 = TreeNode(-1)
n_0 = TreeNode(0, n_negative1)
s.isValidBST(n_0)
n3 = TreeNode(3)
n7 = TreeNode(7)
n6 = TreeNode(6, n3, n7)
n4 = TreeNode(4)
n5 = TreeNode(5, n4, n6)
s.isValidBST(n5)
n1_1 = TreeNode(1)
n1_2 = TreeNode(1, n1_1)
s.isValidBST(n1_2)
import sys, os; sys.path.append(os.path.abspath('..'))
from submitter import submit
submit(98)
```
# Rethinking Statistics course in Stan - Week 5
Lecture 9: Conditional Manatees
- [Video](https://www.youtube.com/watch?v=QhHfo6-Bx8o)
- [Slides](https://speakerdeck.com/rmcelreath/l09-statistical-rethinking-winter-2019)
Lecture 10: Markov Chain Monte Carlo
- [Video](https://youtu.be/v-j0UmWf3Us)
- [Slides](https://speakerdeck.com/rmcelreath/l10-statistical-rethinking-winter-2019)
[Proposed problems](https://github.com/gbosquechacon/statrethinking_winter2019/blob/master/homework/week05.pdf) and [solutions in R](https://github.com/gbosquechacon/statrethinking_winter2019/blob/master/homework/week05_solutions.pdf) for the exercises of the week.
```
import pandas as pd
import numpy as np
from cmdstanpy import CmdStanModel
from plotnine import *
%load_ext watermark
%watermark -n -u -p pandas,numpy,cmdstanpy,plotnine
```
## Exercise 1
> Consider the `Wines2012` data table. These data are expert ratings of 20 different French and American wines by 9 different French and American judges. Your goal is to model score, the subjective rating assigned by each judge to each wine. I recommend standardizing it.
> In this first problem, consider only variation among judges and wines. Construct index variables of judge and wine and then use these index variables to construct a linear regression model. Justify your priors. You should end up with 9 judge parameters and 20 wine parameters. Use `ulam` instead of `quap` to build this model, and be sure to check the chains for convergence. If you'd rather build the model directly in `Stan` or `PyMC3` or `NumPyro`, go ahead. I just want you to use Hamiltonian Monte Carlo instead of quadratic approximation.
> How do you interpret the variation among individual judges and individual wines? Do you notice any patterns, just by plotting the differences? Which judges gave the highest/lowest ratings? Which wines were rated worst/best on average?
Let's get the data.
```
d = pd.read_csv('./dat/Wines2012.csv', header=0, sep=';')
d.tail(3)
d.score = (d.score - d.score.mean())/d.score.std()
for feat in ['judge', 'flight', 'wine']:
d[feat] = d[feat].astype('category').cat.codes
d = d.rename(columns = {'wine.amer':'win_usa', 'judge.amer':'jdg_usa'})
d.tail(3)
```
The model is straightforward. The only issue is the priors. Since I've standardized the outcome, we can use the ordinary $N(0,0.5)$ prior from the examples in the text with standardized outcomes. Then the prior outcomes will stay largely within the possible outcome space. A bit more regularization than that wouldn't be a bad idea either.
```
nj = d.judge.unique().size
nw = d.wine.unique().size
model = '''
data {
int n;
int judge[n];
int wine[n];
vector[n] score;
}
parameters {
real a[%s];
real w[%s];
real sigma;
}
model {
// priors
a ~ normal(0, 0.5);
w ~ normal(0, 0.5);
sigma ~ exponential(1);
// likelihood
vector[n] mu;
for (i in 1:n) {
mu[i] = a[judge[i]] + w[wine[i]];
}
score ~ normal(mu, sigma);
}
''' % (nj, nw)
stan_file = './stn/week05_01.stan'
with open(stan_file, 'w') as f:
print(model, file=f)
stan_model = CmdStanModel(stan_file=stan_file)
stan_model.compile()
%%time
data = d[['judge', 'wine', 'score']]
data.judge = data.judge + 1
data.wine = data.wine + 1
data = data.to_dict(orient='list')
data['n'] = len(data['score'])
stan_fit = stan_model.sample(data=data, chains=4)
stan_fit.summary()
atts = ['a', 'w', 'sigma']
df = stan_fit.draws_pd(vars=atts)
im = pd.melt(df, value_vars=df.columns)
(
ggplot(im)
+ aes(x='variable', y='value')
+ coord_flip()
+ geom_boxplot(outlier_alpha=0.1)
+ theme_light()
+ theme(figure_size=(4, 6))
)
```
The `a` parameters are the judges. Each represents an average deviation of the scores. So judges with lower values are harsher on average. Judges with higher values liked the wines more on average. There is some noticeable variation here. It is fairly easy to tell the judges apart. The `w` parameters are the wines. Each represents an average score across all judges. Except for wine 18 (a New Jersey red I think), there isn't that much variation. These are good wines, after all. Overall, there is more variation from judge than from wine.
Indices start at zero in Python, so references in the original text to specific parameters or individuals, such as wine 18 above, correspond to one integer less in our results. In this case wine 18 is actually wine 17, because there is a wine 0. It's pretty clear just from looking at the forest plot, but we had better keep it in mind because on other occasions this could cause confusion.
## Exercise 2
> Now consider three features of the wines and judges:
> 1. `flight`: Whether the wine is red or white.
2. `win_usa`: Indicator variable for American wines.
3. `jdg_usa`: Indicator variable for American judges.
> Use indicator or index variables to model the influence of these features on the scores. Omit the individual judge and wine index variables from Problem 1. Do not include interaction effects yet. Justify your priors, and be sure to check the chains. What do you conclude about the differences among the wines and judges? Try to relate the results to the inferences in Problem 1.
By "indicator" he means a single 0/1 variable with one coefficient. By "index" he means what in machine learning we would call one-hot encoding: we split each categorical variable into as many variables as it has categories, with ones and zeros, so we end up with more variables and parameters.
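As a tiny illustration of the difference (made-up data; the column name is hypothetical):

```python
import pandas as pd

# Made-up data: an "indicator" is a single 0/1 column with one coefficient,
# while the "index"/one-hot version gets one column (one parameter) per category.
df = pd.DataFrame({"judge_nat": ["french", "american", "american"]})
indicator = (df["judge_nat"] == "american").astype(int)  # one column
one_hot = pd.get_dummies(df["judge_nat"])                # one column per category
print(indicator.tolist())        # [0, 1, 1]
print(sorted(one_hot.columns))   # ['american', 'french']
```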
### Indicator variables version
The easiest way to code the data is to use indicator variables. Let's look at that approach first. I'll do an index variable version next. I'll use the three indicator variables `W` (American wine), `J` (American judge), and `R` (red wine).
The model structure is just a linear model with an ordinary intercept. I'll put a relatively tight prior on the intercept, since it must be near zero (centered outcome). What about the coefficients for the indicator variables? Let's pretend we haven't already seen the results from Problem 1; there aren't any big wine differences to find there. Without that cheating foresight, we should consider what the most extreme effect could be. How big could the difference between American and French wines be? Could it be a full standard deviation? If so, then maybe a Normal(0,0.5) prior makes sense, since it places a full standard deviation difference out in the tails of the prior. I'd personally be inclined to something even tighter, so that it regularizes more. But let's go with these wide priors, which nevertheless stay within the outcome space. It would make even more sense to put a tighter prior on the difference between red and white wines: on average they should be no different, because judges only compare within flights. Here's the model:
```
model = '''
data {
int n;
vector[n] win_usa;
vector[n] jdg_usa;
vector[n] flight;
vector[n] score;
}
parameters {
real a;
real bW;
real bJ;
real bR;
real sigma;
}
model {
// priors
a ~ normal(0, 0.2);
bW ~ normal(0, 0.5);
bJ ~ normal(0, 0.5);
bR ~ normal(0, 0.5);
sigma ~ exponential(1);
// likelihood
vector[n] mu;
mu = a + bW*win_usa + bJ*jdg_usa + bR*flight;
score ~ normal(mu, sigma);
}
'''
stan_file = './stn/week05_02a.stan'
with open(stan_file, 'w') as f:
print(model, file=f)
model_2a = CmdStanModel(stan_file=stan_file)
model_2a.compile()
%%time
data = d[['score', 'jdg_usa', 'win_usa', 'flight']].to_dict(orient='list')
data['n'] = len(data['score'])
fit_2a = model_2a.sample(data=data, chains=4)
fit_2a.summary()
```
As expected, red and white wines are on average the same: `bR` is right on top of zero. American judges seem on average to be slightly more generous with ratings; `bJ` is slightly but reliably above zero. American wines have slightly lower average ratings than French wines; `bW` is mostly below zero, but not very large in absolute size.
### Index variables version
Okay, now for an index variable version. The thing about index variables is that you can easily end up with more parameters than in an equivalent indicator variable model. But it's still the same posterior distribution. You can convert from one to the other, if the priors are also equivalent. We'll need three index variables.
```
nJ = d.jdg_usa.unique().size
nW = d.win_usa.unique().size
nR = d.flight.unique().size
model = '''
data {
int n;
int jdg_usa[n];
int win_usa[n];
int flight[n];
vector[n] score;
}
parameters {
real j[%s];
real w[%s];
real r[%s];
real sigma;
}
model {
// priors
j ~ normal(0, 0.5);
w ~ normal(0, 0.5);
r ~ normal(0, 0.5);
sigma ~ exponential(1);
// likelihood
vector[n] mu;
for (i in 1:n) {
mu[i] = w[win_usa[i]] + j[jdg_usa[i]] + r[flight[i]];
}
score ~ normal(mu, sigma);
}
''' % (nJ, nW, nR)
stan_file = './stn/week05_02b.stan'
with open(stan_file, 'w') as f:
print(model, file=f)
model_2b = CmdStanModel(stan_file=stan_file)
model_2b.compile()
%%time
data = d[['score', 'jdg_usa', 'win_usa', 'flight']].copy()
data.jdg_usa = data.jdg_usa + 1
data.win_usa = data.win_usa + 1
data.flight = data.flight + 1
data = data.to_dict(orient='list')
data['n'] = len(data['score'])
fit_2b = model_2b.sample(data=data, chains=4)
fit_2b.summary()
```
To see that this model is the same as the previous, let's compute contrasts. The contrast between American and French wines is:
```
ppc = fit_2b.draws_pd(vars='w')
diff_w = ppc['w[2]'] - ppc['w[1]']
print(f'mu={diff_w.mean():.2f}, std={diff_w.std():.2f}')
```
That's almost exactly the same mean and standard deviation as `bW` in the first model. The other contrasts match as well. Something to notice about the two models is that the second one samples less efficiently. The `n_eff` (`ess_mean` in ArviZ) values are lower. This isn't a problem, but it is a consequence of the higher correlations in the posterior, a result of the redundant parameterization. This is because it is really the difference that matters, and many combinations of two numbers can produce the same difference. But the priors keep this from ruining our inference. If you tried the same thing without priors, it would likely fall apart and return very large standard errors.
## Exercise 3
> Now consider two-way interactions among the three features. You should end up with three different interaction terms in your model. These will be easier to build, if you use indicator variables. Again, justify your priors, and be sure to check the chains. Explain what each interaction means. Be sure to interpret the model's predictions on the outcome scale (mu, the expected score), not on the scale of individual parameters. You can use link to help with this, or just use your knowledge of the linear model instead.
> What do you conclude about the features and the scores? Can you relate the results of your model(s) to the individual judge and wine inferences from Exercise 1?
### Indicator variables version
I used the same priors as before for the main effects. I used tighter priors for the interactions. Why? Because interactions represent sub-categories of data, and if we keep slicing up the sample, differences can't keep getting bigger. Again, the most important thing is not to use flat priors like Normal(0,10) that produce impossible outcomes.
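A quick prior-predictive simulation (a numpy sketch, not the Stan model itself) shows that these priors keep the implied scores inside the standardized outcome space even with all effects switched on at once:

```python
import numpy as np

# Prior-predictive sketch: an intercept Normal(0, 0.2), three Normal(0, 0.5)
# main effects, and three Normal(0, 0.25) interactions, all added together.
rng = np.random.default_rng(0)
n = 100_000
mu = (rng.normal(0, 0.2, n)
      + rng.normal(0, 0.5, (3, n)).sum(axis=0)    # bW, bJ, bR
      + rng.normal(0, 0.25, (3, n)).sum(axis=0))  # bWJ, bWF, bJF
# fraction of prior scores within 3 SD of the standardized outcome: close to 1
print(np.mean(np.abs(mu) < 3))
```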
```
model = '''
data {
int n;
vector[n] win_usa;
vector[n] jdg_usa;
vector[n] flight;
vector[n] score;
}
parameters {
real a;
real bW;
real bJ;
real bR;
real bWJ;
real bWF;
real bJF;
real sigma;
}
transformed parameters {
vector[n] mu;
mu = a + bW*win_usa + bJ*jdg_usa + bR*flight + bWJ*win_usa.*jdg_usa + bWF*win_usa.*flight + bJF*jdg_usa.*flight;
}
model {
// prior
a ~ normal(0, 0.2);
bW ~ normal(0, 0.5);
bJ ~ normal(0, 0.5);
bR ~ normal(0, 0.5);
bWJ ~ normal(0, 0.25);
bWF ~ normal(0, 0.25);
bJF ~ normal(0, 0.25);
sigma ~ exponential(1);
// likelihood
score ~ normal(mu, sigma);
}
generated quantities {
vector[n] score_ppc;
for (i in 1:n) {
score_ppc[i] = normal_rng(mu[i], sigma);
}
}
'''
stan_file = './stn/week05_3a.stan'
with open(stan_file, 'w') as f:
print(model, file=f)
model_3a = CmdStanModel(stan_file=stan_file)
model_3a.compile()
%%time
data = d[['score', 'jdg_usa', 'win_usa', 'flight']].to_dict(orient='list')
data['n'] = len(data['score'])
fit_3a = model_3a.sample(data=data, chains=4)
atts = ['a', 'bW', 'bJ', 'bR', 'bWJ', 'bWF', 'bJF', 'sigma']
fit_3a.summary().loc[atts]
```
Reading the parameters this way is not easy. But right away you might notice that `bW` is now close to zero and overlaps it a lot on both sides. NJ wines are no longer on average worse. So the interactions did something. Glancing at the interaction parameters, you can see that only one of them has much mass away from zero: `bWF`, the interaction between NJ wines and the red flight, i.e. red NJ wines.
```
df = fit_3a.draws_pd(vars=atts)
im = pd.melt(df, value_vars=df.columns)
(
ggplot(im)
+ aes(x='value')
+ geom_density(fill='grey', alpha=0.2)
+ facet_grid('variable ~ .')
+ theme_light()
+ theme(figure_size=(4,8))
)
```
### Index variables version
Now let's do an index version. The way to think of this is to make unique parameters for each combination. If we consider all the interactions, including a three-way interaction between nation, judge and flight, there would be 8 combinations and so 8 parameters to estimate. Let's go ahead and do that, so we can simultaneously consider the 3-way interaction.
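One way to give every combination its own parameter is mixed-radix index arithmetic. A sketch in Python (the function name is illustrative):

```python
# With 1-based levels w, j, r each in {1, 2}, this maps every
# (wine, judge, flight) combination to a unique parameter index in 1..8.
def combo_index(w, j, r):
    return (w - 1) * 4 + (j - 1) * 2 + r

indices = sorted(combo_index(w, j, r)
                 for w in (1, 2) for j in (1, 2) for r in (1, 2))
print(indices)  # [1, 2, 3, 4, 5, 6, 7, 8]
```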
```
nJ = d.jdg_usa.unique().size
nW = d.win_usa.unique().size
nR = d.flight.unique().size
model = '''
data {
int n;
int ixJ[n];
int ixW[n];
int ixR[n];
vector[n] score;
}
parameters {
real w[%s];
real j[%s];
real f[%s];
real wj[%s];
real wf[%s];
real jf[%s];
real wjf[%s];
real sigma;
}
transformed parameters {
vector[n] mu;
for (i in 1:n) {
// map each (wine, judge, flight) pair/triple to a unique interaction
// parameter index; the elementwise products used previously are not
// valid for int scalars and would alias e.g. (1,2) with (2,1)
int iwj = (ixW[i] - 1) * 2 + ixJ[i];
int iwf = (ixW[i] - 1) * 2 + ixR[i];
int ijf = (ixJ[i] - 1) * 2 + ixR[i];
int iwjf = (ixW[i] - 1) * 4 + (ixJ[i] - 1) * 2 + ixR[i];
mu[i] = w[ixW[i]] + j[ixJ[i]] + f[ixR[i]]
+ wj[iwj] + wf[iwf] + jf[ijf]
+ wjf[iwjf];
}
}
model {
// priors
w ~ normal(0, 0.5);
j ~ normal(0, 0.5);
f ~ normal(0, 0.5);
wj ~ normal(0, 0.5);
wf ~ normal(0, 0.5);
jf ~ normal(0, 0.5);
wjf ~ normal(0, 0.5);
sigma ~ exponential(1);
// likelihood
score ~ normal(mu, sigma);
}
generated quantities {
vector[n] score_ppc;
for (i in 1:n) {
score_ppc[i] = normal_rng(mu[i], sigma);
}
}
''' % (nW, nJ, nR, nW*nJ, nW*nR, nJ*nR, nW*nJ*nR)
stan_file = './stn/week05_3b.stan'
with open(stan_file, 'w') as f:
print(model, file=f)
model_3b = CmdStanModel(stan_file=stan_file)
model_3b.compile()
%%time
data = d[['score', 'jdg_usa', 'win_usa', 'flight']].copy()
data.jdg_usa = data.jdg_usa + 1
data.win_usa = data.win_usa + 1
data.flight = data.flight + 1
data.columns = ['score', 'ixJ', 'ixW', 'ixR']
data = data.to_dict(orient='list')
data['n'] = len(data['score'])
fit_3b = model_3b.sample(data=data, chains=4)
df = fit_3b.summary()
atts = [att for att in df.index.to_list() if 'mu' not in att and 'ppc' not in att]
df.loc[atts]
```
Model predictions and comparison:
```
pp_31 = fit_3a.draws_pd(vars='score_ppc')
pp_32 = fit_3b.draws_pd(vars='score_ppc')
d = d.assign(score_hat_32=pp_32.T.mean(axis=1).values,
score_hat_31=pp_31.T.mean(axis=1).values)
aux = d.groupby(['win_usa', 'jdg_usa','flight'])[['score','score_hat_31','score_hat_32']].mean().reset_index()
combinations = ['FFW', 'FFR', 'FAW', 'FAR', 'AFW', 'AFR', 'AAW', 'AAR'] # wine, judge, flight order
aux['combination'] = combinations
aux = round(aux[['combination', 'score','score_hat_31', 'score_hat_32']],2)
aux
im = aux.melt(id_vars='combination',
value_vars=['score','score_hat_31','score_hat_32'],
var_name='score_origin',
value_name='scores')
(
ggplot(im)
+ aes(x='scores', y='combination', color='score_origin', shape='score_origin')
+ geom_point(size=3)
+ theme_light()
+ theme(figure_size=(6,4))
)
```
`score` is just the average for that category, `score_hat_31` is the posterior predictive mean for the indicator version and `score_hat_32` for the index version. FFW means: French wine, French judge, white flight. The two rows that jump out are the 4th and the 6th, FAR and AFR. Those are French red wines as judged by American judges, and NJ red wines as judged by French judges. French judges didn't like NJ reds so much (really only one NJ red, if you look back at Exercise 1), and American judges liked French reds more. Notice how hard it would be to figure this out from the table of coefficients alone.
The most noticeable change when comparing models is that FFW (French wines, French judges, white) have a lower expected rating in the full interaction model. There are some other minor differences as well. What has happened? The three way interaction would be, in the first model's indicator terms, when a wine is American, the judge is American, and the flight is red. In the first model, a prediction for such a wine is just a sum of parameters:
$\mu_{i} = \alpha + \beta_{W} + \beta_{J} + \beta_{R} + \beta_{WJ} + \beta_{WR} + \beta_{JR}$
This of course means that these parameters have to account for the AAR wine. In the full interaction model, an AAR wine gets its own parameter, as does every other combination. None of the parameters gets polluted by averaging over different combinations. Of course, there isn't a lot of evidence that prediction is improved much by allowing this extra parameter. The differences are small, overall. These wines are all quite good. But it is worth understanding how the full interaction model gains additional flexibility. This additional flexibility typically requires some additional regularization. When we arrive at multilevel models later, you'll see how we can handle regularization more naturally inside of a model.
```
top_directory = '/Users/iaincarmichael/Dropbox/Research/law/law-net/'
from __future__ import division
import os
import sys
import time
from math import *
import copy
import cPickle as pickle
# data
import numpy as np
import pandas as pd
# viz
import matplotlib.pyplot as plt
# graph
import igraph as ig
# NLP
from nltk.corpus import stopwords
# our code
sys.path.append(top_directory + 'code/')
from load_data import load_and_clean_graph, case_info
from pipeline.download_data import download_bulk_resource
sys.path.append(top_directory + 'explore/vertex_metrics_experiment/code/')
from make_snapshots import *
from make_edge_df import *
from attachment_model_inference import *
from compute_ranking_metrics import *
from pipeline_helper_functions import *
from make_case_text_files import *
from bag_of_words import *
# directory set up
data_dir = top_directory + 'data/'
experiment_data_dir = data_dir + 'vertex_metrics_experiment/'
court_name = 'scotus'
# jupyter notebook settings
%load_ext autoreload
%autoreload 2
%matplotlib inline
G = load_and_clean_graph(data_dir, court_name)
```
# load similarity matrix
```
similarity_matrix = load_sparse_csr(filename=experiment_data_dir + 'cosine_sims.npz')
with open(experiment_data_dir + 'CLid_to_index.p', 'rb') as f:
CLid_to_index = pickle.load(f)
```
# Look at similarities
```
def get_similarities(similarity_matrix, CLid_A, CLid_B, CLid_to_index):
"""
Returns the similarities for cases index by CL ids as a list
Parameters
----------
similarity_matrix: precomputed similarity matrix
CLid_A, CLid_B: two lists of CL ids whose similarities we want
CLid_to_index: dict that maps CL ids to similarity_matrix indices
"""
if len(CLid_A) != len(CLid_B):
raise ValueError('lists not the same length')
else:
N = len(CLid_A)
# list to return
similarities = [0] * N
# grab each entry
for i in range(N):
try:
# convert CL id to matrix index
idA = CLid_to_index[CLid_A[i]]
idB = CLid_to_index[CLid_B[i]]
similarities[i] = similarity_matrix[idA, idB]
except KeyError:
# if one of the CLid's is not in the similarity matrix return nan
similarities[i] = np.nan
return similarities
def save_similarity_matrix(experiment_data_dir, similarity_matrix, CLid_to_index):
"""
saves similarity matrix and CLid_to_index dict
"""
# save similarity matrix
save_sparse_csr(filename=experiment_data_dir + 'cosine_sims',
array=similarity_matrix)
# save clid to index map
with open(experiment_data_dir + 'CLid_to_index.p', 'wb') as fp:
pickle.dump(CLid_to_index, fp)
def load_similarity_matrix(experiment_data_dir):
"""
Load similarity matrix and CLid_to_index dict
Parameters
----------
experiment_data_dir:
Output
------
similarity_matrix, CLid_to_index
"""
similarity_matrix = load_sparse_csr(filename=experiment_data_dir + 'cosine_sims.npz')
with open(experiment_data_dir + 'CLid_to_index.p', 'rb') as f:
CLid_to_index = pickle.load(f)
return similarity_matrix, CLid_to_index
CLid_ing = []
CLid_ed = []
for e in G.es:
CLid_ing.append(G.vs[e.source]['name'])
CLid_ed.append(G.vs[e.target]['name'])
start = time.time()
sims = get_similarities(similarity_matrix, CLid_ing, CLid_ed, CLid_to_index)
runtime = time.time() - start
```
# surgery
```
len(CLid_to_index.keys())
map_clids = CLid_to_index.keys()
print 'there are %d keys' % len(CLid_to_index.keys())
len(G.vs)
G_clids = G.vs['name']
print 'there are %d vertices in the graph' % len(G.vs)
set(G_clids).difference(set(map_clids))
len(os.listdir(experiment_data_dir + 'textfiles/'))
```
# EODAG as STAC client
## STAC API
EODAG can perform an item search over a STAC compliant API. Found STAC items are returned as [EOProduct](../../api_reference/eoproduct.rst#eodag.api.product._product.EOProduct) objects with STAC metadata mapped to OGC OpenSearch Extension for Earth Observation.
EODAG comes with already configured providers, but you can also add new ones dynamically.
```
import os
# To have some basic feedback on what eodag is doing, we configure logging to output minimum information
from eodag import setup_logging
setup_logging(verbose=2)
from eodag.api.core import EODataAccessGateway
dag = EODataAccessGateway()
```
List already configured providers providing a STAC API compliant service:
```
[p.name for p in dag.providers_config.values() if hasattr(p, "search") and p.search.type == 'StacSearch']
```
Then, let's update EODAG's configuration with a new STAC provider
```
dag.update_providers_config("""
tamn:
search:
type: StacSearch
api_endpoint: https://tamn.snapplanet.io/search
products:
S2_MSI_L1C:
productType: S2
GENERIC_PRODUCT_TYPE:
productType: '{productType}'
download:
type: AwsDownload
base_uri: https://tamn.snapplanet.io
flatten_top_dirs: True
auth:
type: AwsAuth
credentials:
aws_access_key_id: PLEASE_CHANGE_ME
aws_secret_access_key: PLEASE_CHANGE_ME
""")
dag.set_preferred_provider("tamn")
```
Search some S2_MSI_L1C products over Luxembourg:
```
prods_S2L1C, _ = dag.search(productType="S2_MSI_L1C", locations=dict(country="LUX"), start="2020-05-01", end="2020-05-15", items_per_page=50)
```
Filter over any item property using crunchers:
```
from eodag.plugins.crunch.filter_property import FilterProperty
prods_S2L1C_filtered = prods_S2L1C.crunch(FilterProperty({"cloudCover": 10, "operator": "lt"}))
len(prods_S2L1C_filtered)
```
List available assets from the first retrieved product
```
[(key, asset["href"]) for key, asset in prods_S2L1C[0].assets.items()]
```
The same thing works with an unconfigured product type (it should match one of the provider's available collections).
For Tamn, Landsat-8 products are available in the L8 collection. Let's search for them over Spain:
```
prods_L8, _ = dag.search(productType="L8", country="ESP", start="2020-05-01", end="2020-05-15")
[(key, asset["href"]) for key, asset in prods_L8[0].assets.items()]
```
## STAC Static catalog
EODAG can search for items over a STAC static catalog. The path to the catalog must be set as `provider.search.api_endpoint` with `provider.search.type=StaticStacSearch`. A download plugin must also be set; which one depends on the provider.
Here is an example with a catalog from https://stacindex.org/catalogs/spot-orthoimages-canada-2005, which will use `HTTPDownload` as the download plugin, without credentials since no authentication is needed for download.
See [download plugins documentation](../../plugins_reference/download.rst) for other available plugins.
<div class="alert alert-warning">
Warning
Please note that `StaticStacSearch` plugin development is still at an early stage. If search is too slow using this plugin, please use a catalog with less elements.
</div>
```
# Decrease logging level
setup_logging(verbose=1)
# create a workspace
workspace = 'eodag_workspace_stac_client'
if not os.path.isdir(workspace):
os.mkdir(workspace)
# add the provider
dag.update_providers_config("""
stac_http_provider:
search:
type: StaticStacSearch
api_endpoint: https://canada-spot-ortho.s3.amazonaws.com/canada_spot_orthoimages/canada_spot5_orthoimages/S5_2007/catalog.json
products:
GENERIC_PRODUCT_TYPE:
productType: '{productType}'
download:
type: HTTPDownload
base_uri: https://fake-endpoint
flatten_top_dirs: True
outputs_prefix: %s
""" % os.path.abspath(workspace))
dag.set_preferred_provider("stac_http_provider")
```
Let's perform a search:
```
from shapely.geometry import Polygon
search_polygon = Polygon([(-70, 45), (-75, 47), (-80, 47), (-80, 44)])
query_args = {"start": "2007-05-01", "end": "2007-05-06" , "geom": search_polygon}
products, found = dag.search(**query_args)
print("%s product(s) found" % found)
```
Before downloading, clean up the products, since `canada-spot-ortho.s3.amazonaws.com` thumbnails are not available for download:
- remove thumbnail assets
- remove products with no assets
```
for idx, product in enumerate(products):
# remove thumbnail
if "thumbnail" in product.assets:
del products[idx].assets["thumbnail"]
# remove items with empty assets
if not product.assets:
del products[idx]
print("%s product(s) with valid assets" % len(products))
# plot products and search polygon on map
import ipyleaflet as ipyl
m = ipyl.Map(center=(45, -75), zoom=5)
polygon_layer = ipyl.GeoJSON(data=search_polygon.__geo_interface__, style=dict(color='blue'))
m.add_layer(polygon_layer)
items_layer = ipyl.GeoJSON(data=products.as_geojson_object(), style=dict(color='green'))
m.add_layer(items_layer)
m
```
Download items from the filtered search results:
```
paths = dag.download_all(products)
paths
for path in paths:
!basename $path
!ls {path.replace("file://","")}
```
## **MoroccoAI Data Challenge (Edition 001)**
This notebook walks through the process of detecting plates from images using our two Faster-RCNN models, trained on plate detection and Moroccan plate character detection, and the post-processing that follows the prediction.
<br>
### **Overview**
In Morocco, the number of registered vehicles doubled between 2000 and 2019. In 2019, a few months before lockdowns due to the Coronavirus pandemic, 8 road fatalities were recorded per 10,000 registered vehicles. This rate is extremely high compared with other IRTAD countries. The National Road Safety Agency (NARSA) established the road safety strategy 2017-26 with the main target of reducing the number of road deaths by 50% between 2015 and 2026 [1]. It is crucial for law enforcement and authorities to check the registration and licences of vehicles in order to assure the safety of the roads.
Therefore, automating this task is very beneficial.
**This Jupyter notebook only loads the trained checkpoints. You can find the training notebook at the following link.**
💡 Recommendation: [The Jupyter Notebook were we trained our models to detect Plates from pictures at first stage then detect Characters from Plates](https://colab.research.google.com/drive/1Niz1AVejRSm8UKFolP7DWla8WRpq6JwD?usp=sharing).
<br>
### **Dataset**
The dataset consists of 654 JPG pictures of the front or back of vehicles showing the license plate. They are of different sizes and are mostly cars. The license plates follow the Moroccan standard.
Each plate corresponds to a string (a series of numbers and Latin characters) labeled manually. The plate strings can contain a series of numbers and Latin letters of different lengths. Because the letters in the Moroccan license plate standard are Arabic letters, we will consider the following transliteration: a <=> أ, b <=> ب, j <=> ج (jamaa), d <=> د , h <=> ه , waw <=> و, w <=> w (newly licensed cars), p <=> ش (police), fx <=> ق س (auxiliary forces), far <=> ق م م (royal army forces), m <=>المغرب, m <=>M. For example:
the string “123ب45” have to be converted to “12345b”,<br>
the string “123و4567” to “1234567waw”,<br>
the string “12و4567” to “1234567waw”,<br>
the string “1234567ww” to “1234567ww”, (remain the same)<br>
the string “1234567far” to “1234567ق م م”,<br>
the string “1234567m” to “1234567المغرب", etc.
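A minimal Python sketch of this conversion (the mapping and matching logic below are assumptions reconstructed from the examples above, not the official preprocessing):

```python
# Rough sketch (an assumption, not the official preprocessing) of the label
# conversion described above: transliterate the Arabic token and move it to
# the end of the digit string.
ARABIC_TO_LATIN = {"أ": "a", "ب": "b", "ج": "j", "د": "d", "ه": "h",
                   "و": "waw", "ش": "p", "ق س": "fx", "ق م م": "far",
                   "المغرب": "m"}

def normalize_plate(s):
    # check longer tokens first so e.g. "المغرب" wins over its single letter "ب"
    for arabic, latin in sorted(ARABIC_TO_LATIN.items(), key=lambda kv: -len(kv[0])):
        if arabic in s:
            return s.replace(arabic, "") + latin
    return s  # strings with only Latin characters remain the same

print(normalize_plate("123ب45"))     # 12345b
print(normalize_plate("123و4567"))   # 1234567waw
print(normalize_plate("1234567ww"))  # 1234567ww (unchanged)
```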
<br>
We offer the plate strings of 450 images (training set). The remaining 204 unlabeled images will be the test set. The participants are asked to provide the plate strings in the test set.
<br>
### **Our Approach & Models**
Our approach was to use object detection to detect plate characters from images. We chose to build two models separately instead of using libraries like easyOCR or Tesseract directly, due to their weaknesses in handling the variance in the shapes of Moroccan license plates.
The first model was trained to detect the licence plate, which is then cropped from the original image and passed into the second model, trained to detect the characters.
This notebook shows a code example using a pretrained Faster-RCNN model for both object detection tasks, through a library called Detectron2, developed by Facebook AI Research (FAIR) and based on PyTorch.
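As a toy sketch of the character post-processing step (the detection values below are made up for illustration; the real predictors appear later in the notebook):

```python
# Given character detections from the second model as (x_min, class_label)
# pairs, sort them left-to-right to assemble the plate string.
def assemble_plate(detections):
    return "".join(label for _x, label in sorted(detections))

dets = [(120, "3"), (15, "1"), (60, "2"), (200, "b")]
print(assemble_plate(dets))  # 123b
```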
<br>
### **Detectron2**
Detectron2 is Facebook AI Research's next generation library that provides state-of-the-art detection and segmentation algorithms. It is the successor of Detectron and maskrcnn-benchmark. It supports a number of computer vision research projects and production applications in Facebook.
### **Fast-RCNN**
This version of the notebook doesn't contain the inference part, since it's still under development. Once the inference and deployment parts are done, this notebook will be updated.
### **About**
[MoroccoAI](https://morocco.ai/) is an initiative led by AI experts in Morocco and abroad to promote AI growth in Morocco across the spectrum.
<h2>Installing Detectron2</h2>
<p>
The first thing to do is to install detectron2 and restart the runtime so all installed libraries get loaded.
</p>
```
!pip install pyyaml==5.1
import torch
TORCH_VERSION = ".".join(torch.__version__.split(".")[:2])
CUDA_VERSION = torch.__version__.split("+")[-1]
print("torch: ", TORCH_VERSION, "; cuda: ", CUDA_VERSION)
!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/$CUDA_VERSION/torch$TORCH_VERSION/index.html
```
<h3>Importing libraries and Packages </h3>
```
from detectron2.engine import DefaultTrainer
from detectron2.config import get_cfg
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import statistics
import os, json, cv2, random
from matplotlib import pyplot as plt
from google.colab.patches import cv2_imshow
import pandas as pd
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.structures import BoxMode
from detectron2.utils.visualizer import ColorMode
```
<h3>Importing Faster-RCNN model and loading checkpoint for licence Plates detection </h3>
Since this notebook runs the models to predict images from the test set, we load our pretrained model and use "cpu" as MODEL.DEVICE.
```
from detectron2.engine import DefaultTrainer
cfg = get_cfg()
cfg.MODEL.DEVICE = "cpu"
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
#cfg.DATALOADER.NUM_WORKERS = 2
#cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml") # not needed for inference; the trained checkpoint below overrides it
cfg.SOLVER.IMS_PER_BATCH = 2
#cfg.SOLVER.BASE_LR = 0.00025 # pick a good LR
#cfg.SOLVER.MAX_ITER = 300 # 300 iterations seems good enough for this toy dataset; you will need to train longer for a practical dataset
cfg.SOLVER.STEPS = [] # do not decay learning rate
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this toy dataset (default: 512)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only has one class (licence). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets)
cfg.MODEL.WEIGHTS = os.path.join("./Plate_Detection_Model", "model_final.pth") # path to the model we just trained
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.9 # set a custom testing threshold
predictor = DefaultPredictor(cfg)
```
<h3>Test folder</h3>
<p>In order to generate a csv file of the plates contained in a folder,
pass the test folder's path to the Images_folder variable.
</p>
```
Images_folder = './test'
```
<h3>Extraction Folder</h3>
<p>This is the output folder for the plate images after they are extracted from the original images.
</p>
```
Extraction_folder = './Plate_detection'
if not os.path.exists(Extraction_folder):
    os.makedirs(Extraction_folder)
```
<h3>Plates Extraction</h3>
<p>In order to handle images containing multiple licence plates,
we created two functions: one that extracts all plates found in an image, and one that extracts only the plate the model predicted with the highest confidence.
</p>
```
def Plates_Detection_All_Plates_In_Image(Images_folder, Image, Extraction_folder):
    im = cv2.imread(os.path.join(Images_folder, Image))
    outputs = predictor(im)
    boxes = outputs['instances'].pred_boxes.tensor.cpu().numpy().tolist()
    scores = outputs['instances'].scores.cpu().numpy().tolist()
    if len(scores) > 0:
        Plates = {i: boxes[i] for i in range(len(scores))}
        for k, v in Plates.items():
            # crop each detected box (x_min=v[0], y_min=v[1], x_max=v[2], y_max=v[3])
            cv2.imwrite(os.path.join(Extraction_folder, Image[:-4] + '_' + str(k) + '.jpg'),
                        im[int(v[1]):int(v[3]), int(v[0]):int(v[2]), :])

def Plates_Detection_Top_Score_Plate_In_image(Images_folder, Image, Extraction_folder):
    im = cv2.imread(os.path.join(Images_folder, Image))
    outputs = predictor(im)
    boxes = outputs['instances'].pred_boxes.tensor.cpu().numpy().tolist()
    scores = outputs['instances'].scores.cpu().numpy().tolist()
    if len(scores) > 0:
        Plates = {i: boxes[i] for i in range(len(scores))}
        # sort the detections by confidence score, highest first
        Sorted_Plates_by_Score = sorted(Plates.items(), key=lambda e: scores[e[0]], reverse=True)
        Top_Score_Plate = Sorted_Plates_by_Score[0][1]
        cv2.imwrite(os.path.join(Extraction_folder, Image),
                    im[int(Top_Score_Plate[1]):int(Top_Score_Plate[3]),
                       int(Top_Score_Plate[0]):int(Top_Score_Plate[2]), :])
```
<p>If you want to take into consideration all licence plates in each image,
set the parameter all_plates to True; otherwise keep the default value False.
</p>
```
def Detect_Plates_From_Images(Images_folder, all_plates=False):
    for image in os.listdir(Images_folder):
        if image.lower().endswith(('.png', '.jpg', '.jpeg')):
            if all_plates:
                Plates_Detection_All_Plates_In_Image(Images_folder, image, Extraction_folder)
            else:
                Plates_Detection_Top_Score_Plate_In_image(Images_folder, image, Extraction_folder)
```
Next we run our main function, which predicts and saves the detected plates into the extraction folder.
```
Detect_Plates_From_Images(Images_folder,all_plates=False)
```
<h3>Importing Faster-RCNN model and loading checkpoint for character detection</h3>
```
cfg_ocr = get_cfg()
cfg_ocr.MODEL.DEVICE = "cpu"
cfg_ocr.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
#cfg_ocr.DATALOADER.NUM_WORKERS = 2
#cfg_ocr.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg_ocr.SOLVER.IMS_PER_BATCH = 2
cfg_ocr.SOLVER.STEPS = [] # do not decay learning rate
cfg_ocr.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 # the default value
cfg_ocr.MODEL.ROI_HEADS.NUM_CLASSES = 20 # number of character classes that appear on Moroccan licence plates
cfg_ocr.MODEL.WEIGHTS = os.path.join("./Characters_Detection_Model", "model_final.pth") # path to the model we just trained
cfg_ocr.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set a custom testing threshold
predictor_ocr = DefaultPredictor(cfg_ocr)
```
We create a dictionary that maps the class ID (int) returned by the predictor to the exact string representation requested in the challenge.
```
classlist = ["0","1","2","3","4","5","6","7","8","9", "a","b","h","w","d","p","waw","j","m","m"]
classestolettres = { i : classlist[i] for i in range(0, len(classlist) ) }
```
<p>Function that returns the characters and their bounding boxes (without order)</p>
```
def OCR_Predictor(Extraction_folder, Image):
    im = cv2.imread(os.path.join(Extraction_folder, Image))
    outputs = predictor_ocr(im)
    boxes = outputs['instances'].pred_boxes.tensor.cpu().numpy().tolist()
    classes = outputs['instances'].pred_classes.to('cpu').tolist()
    dict_Of_predection = {i: [classes[i], boxes[i]] for i in range(len(classes))}
    return dict_Of_predection
```
This post-processing function aims to generate the right sequence of characters to match the content of a licence plate:
1. Split the characters based on the median Y_Min of all detected letter boxes: characters whose Y_Max is smaller than Median_Y_Mins go into a string called top_characters, and those whose Y_Max is greater than Median_Y_Mins go into bottom_characters.
2. Order the characters in the top and bottom lists from left to right based on the X_Min of each character's detected box.
```
def OCR_Plates_Post_Processing(plate, dict_Of_predection):
    # collect the y_min of every detected character box
    Y_Mins = [item[1][1][1] for item in dict_Of_predection.items()]
    if len(Y_Mins) == 0:
        return {'plate_id': plate[:-4], 'plate_string': ''}
    medYmin = statistics.median(Y_Mins)
    toplettres = dict()
    bottomlettres = dict()
    for k, v in dict_Of_predection.items():
        if v[1][3] <= medYmin:  # box y_max above the median line
            toplettres[k] = v
        else:
            bottomlettres[k] = v
    # order each row from left to right by box x_min
    TopRes = sorted(toplettres.items(), key=lambda e: e[1][1][0])
    BottomRes = sorted(bottomlettres.items(), key=lambda e: e[1][1][0])
    TopPlate = "".join(classestolettres[item[1][0]] for item in TopRes)
    BottomPlate = "".join(classestolettres[item[1][0]] for item in BottomRes)
    return {'plate_id': plate[:-4], 'plate_string': BottomPlate + TopPlate}
```
This function returns a pandas DataFrame of pairs of image names and their predictions.
```
def Get_OCR_From_Plates(Extraction_folder):
    # DataFrame.append was removed in recent pandas versions; collect rows in a list instead
    rows = []
    for plate in os.listdir(Extraction_folder):
        if plate.lower().endswith(('.png', '.jpg', '.jpeg')):
            dict_Of_predection = OCR_Predictor(Extraction_folder, plate)
            rows.append(OCR_Plates_Post_Processing(plate, dict_Of_predection))
    submission_result = pd.DataFrame(rows, columns=["plate_id", "plate_string"])
    submission_result = submission_result.sort_values(by=['plate_id'], ascending=True)
    return submission_result
submission_result = Get_OCR_From_Plates(Extraction_folder)
submission_result
```
<p>Saving the Data Frame as csv File</p>
```
submission_result.to_csv("sample_submission.csv", encoding='utf-8', index=False)
```
| github_jupyter |
# Gene enrichment analysis
```
from pymodulon.enrichment import *
from pymodulon.example_data import load_ecoli_data, trn
ica_data = load_ecoli_data()
```
## General functions
To perform a basic enrichment test between two gene sets, use the ``compute_enrichment`` function.
Optional arguments:
* ``label``: Label for your target gene set (e.g. regulator name or GO term)
```
gene_set = ['b0002','b0003','b0004'] # e.g. an iModulon or other gene set
target_genes = ['b0002','b0005','b0007'] # e.g. a regulon or genes in a COG category
all_genes = ica_data.gene_names # List of all genes in your expression dataset
compute_enrichment(gene_set,target_genes,all_genes)
```
To perform an enrichment test against a regulon in a TRN table, use the ``compute_regulon_enrichment`` function.
```
compute_regulon_enrichment(gene_set,'lrp',all_genes,trn)
```
Each row of the TRN table represents a regulatory interaction. The table requires the columns ``regulator`` and ``gene_id``.
```
trn.head()
```
To perform an enrichment against all regulons in a TRN table, use the ``compute_trn_enrichment`` function. This function includes false discovery rate correction using the Benjamini-Hochberg procedure to compute [Q-values](https://en.wikipedia.org/wiki/Q-value_(statistics)). By default, the false discovery rate threshold is ``1e-5``, and it can be updated using the ``fdr`` argument.
```
compute_trn_enrichment(gene_set,all_genes,trn)
```
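pymodulon handles this correction internally; as a rough illustration of what the Benjamini-Hochberg procedure computes, here is a minimal sketch (the function name and interface are our own for illustration, not part of pymodulon):

```python
def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values), preserving input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices sorted by p-value
    qvals = [0.0] * n
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotonicity of q-values
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        qvals[i] = running_min
    return qvals

print(bh_qvalues([0.001, 0.04, 0.03, 0.5]))
```

Enrichments whose q-value falls below the ``fdr`` threshold are the ones reported as significant.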
``compute_trn_enrichment`` can perform enrichment against complex regulons in boolean combinations. The ``max_regs`` argument determines the number of regulators to include in the complex regulon, and the ``method`` argument determines how to combine complex regulons. ``method='or'`` computes enrichments against the union of regulons, ``'and'`` computes enrichment against intersection of regulons, and ``'both'`` performs both tests (default).
```
compute_trn_enrichment(gene_set,all_genes,trn,max_regs=2).head()
```
Increasing the number of regulators for complex regulons beyond three can lead to a combinatorial explosion. If you are sure you want to search for larger complex regulons, use ``force=True``
```
compute_trn_enrichment(gene_set,all_genes,trn,max_regs=3,force=True).head()
```
## Using the ``IcaData`` object
```
# Compare a single iModulon to a simple regulon
ica_data.compute_regulon_enrichment('GlpR',regulator='glpR')
# Compare a single iModulon to a complex regulon
ica_data.compute_regulon_enrichment('Copper','cusR/cueR')
# Get all significant enrichments for all iModulons.
# Optional parameters are same as the global compute_trn_enrichment function
ica_data.compute_trn_enrichment()
```
### Save to iModulon table
You can directly save iModulon enrichments to the iModulon table using either the `ica_data.compute_regulon_enrichment` function or `ica_data.compute_trn_enrichment` function. Columns unrelated to regulon enrichments (e.g. `Function` and `Category`) are left alone.
```
ica_data.imodulon_table.loc[['GlpR']]
ica_data.compute_regulon_enrichment('GlpR',regulator='crp',save=True)
ica_data.imodulon_table.loc[['GlpR']]
# The enrichment with the lowest qvalue is saved as the official enrichment
ica_data.compute_trn_enrichment(save=True)
ica_data.imodulon_table
```
| github_jupyter |
```
# !export SPOTIPY_CLIENT_ID='63594c9b2f99411a8cbd18df04851fc4'
# !export SPOTIPY_CLIENT_SECRET='096168b2bd1f4378ae410726955c9ed8'
# !export SPOTIPY_REDIRECT_URI='https://www.google.com/'
# ! SPOTIPY_CLIENT_ID
import os
import sys
import json
import spotipy
import webbrowser
import spotipy.util as util
from json.decoder import JSONDecodeError
USERNAME = 'basslaughter' #your spotify username
CLIENT_ID = '63594c9b2f99411a8cbd18df04851fc4' #set at your developer account
CLIENT_SECRET = '096168b2bd1f4378ae410726955c9ed8' #set at your developer account
REDIRECT_URI = 'http://google.com/' #set at your developer account, usually "http://localhost:8000"
SCOPE = 'user-library-read' # or else
# ps. REDIRECT_URI is crucial here. if http://localhost:8000 is not set, or with a single '/' misplaced, it will generate a connection error.
# then pass them:
token = util.prompt_for_user_token(username = USERNAME,
scope = SCOPE,
client_id = CLIENT_ID,
client_secret = CLIENT_SECRET,
redirect_uri = REDIRECT_URI)
token
# if token:
# sp = spotipy.Spotify(auth=token)
token
sp = spotipy.Spotify(auth=token)
results = sp.current_user_saved_tracks()
for item in results['items']:
    track = item['track']
    print(track['name'] + ' - ' + track['artists'][0]['name'])
content = {"token": "BQCFw6c8mbhbrRhloc1eKRUzc7FqF2tFROc0SF7je07DiVIB4uosYNqxh8KsBs8-ypBnibBchF-1DyFn-dtee8NHYugxHVQcc8GBS4qGcsuXkUSb9WEr2gSRFJEfqDegUYOGJJsF_t6QIHX8Iio_HdsRlClW0L6-LOJdYvr4otMB5H5m6QopDKYGaeBMb07sq0zvoN_G-A"
}
content["token"]
results["href"]
# results = sp.search()
# q=artist%3Aqueen%20track%3Awe%20will%20rock%20you
# name = "Queen We Will Rock You"
artist = "Queen"
track = "We Will Rock You"
# query = f'q=artist:{artist}%20track:{track}&type=album'
results = sp.search(q='artist:' + artist + ' ' + 'track:' + track, type='track')
# results = sp.search(q='artist:' + name, type='track')
# results = sp.search(q=name, type='track')
results
# items = results['artists']['items']
# if len(items) > 0:
# artist = items[0]
# print(artist['name'], artist['images'][0]['url'])
# query=artist%3Aqueen+track%3Awe+will+rock+you&type=track&offset=0&limit=20"
results["tracks"]["items"][0]["id"]
for k, v in sp.audio_features('54flyrjcdnQdco7300avMJ')[0].items():
    print(k, v)
import pandas as pd
song_list = pd.read_pickle("./song_list.pkl")
song_list.head()
song_list = song_list.drop([0,1], axis=1)
song_list.head()
{'danceability': 0.693,
'energy': 0.497,
'key': 2,
'loudness': -7.316,
'mode': 1,
'speechiness': 0.119,
'acousticness': 0.679,
'instrumentalness': 0,
'liveness': 0.258,
'valence': 0.473,
'tempo': 81.308,
'type': 'audio_features',
'id': '54flyrjcdnQdco7300avMJ',
'uri': 'spotify:track:54flyrjcdnQdco7300avMJ',
'track_href': 'https://api.spotify.com/v1/tracks/54flyrjcdnQdco7300avMJ',
'analysis_url': 'https://api.spotify.com/v1/audio-analysis/54flyrjcdnQdco7300avMJ',
'duration_ms': 122067,
'time_signature': 4}
```
| github_jupyter |
```
%matplotlib inline
from keras.datasets import mnist
from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras.losses import binary_crossentropy
from keras.callbacks import LearningRateScheduler
import numpy as np
import matplotlib.pyplot as plt
import keras.backend as K
import tensorflow as tf
# batch size
m = 50
# dimensionality of the latent variable
n_z = 2
n_epoch = 10
# Q(z|X) -- encoder
inputs = Input(shape=(784, ))
h_q = Dense(512, activation='relu')(inputs)
# generate mu and sigma from the hidden layer h_q
# z is sampled from N(mu, sigma)
mu = Dense(n_z, activation='linear')(h_q)
log_sigma = Dense(n_z, activation='linear')(h_q)
def sample_z(args):
    mu, log_sigma = args
    eps = K.random_normal(shape=(m, n_z), mean=0.0, stddev=1.0)
    return mu + K.exp(log_sigma / 2) * eps
sample_z((0, 1))
# sample using the encoder outputs [mu, log_sigma]
# this yields m samples of z
# sample z ~ Q(z|X)
z = Lambda(sample_z)([mu, log_sigma])
# P(X|z) -- decoder
decoder_hidden = Dense(512, activation='relu')
decoder_out = Dense(784, activation='sigmoid')
h_p = decoder_hidden(z)
outputs = decoder_out(h_p)
outputs
# Overall VAE model, for reconstruction and training
vae = Model(inputs, outputs)
# Encoder model, to encode input into latent variable
# We use the mean as the output as it is the center point, representative of Gaussian
encoder = Model(inputs, mu)
# Generator model, generate new data given latent variable z
d_in = Input(shape=(n_z, ))
d_h = decoder_hidden(d_in)
d_out = decoder_out(d_h)
decoder = Model(d_in, d_out)
def vae_loss(y_true, y_pred):
    # E[log P(X|z)] -> maximize: X reconstructed from z should be close to the original X
    # (K.binary_crossentropy takes the target first, then the prediction)
    recon = K.sum(K.binary_crossentropy(y_true, y_pred), axis=1)
    # D_KL(Q(z|X) || P(z)) -> minimize: used directly as a loss term
    kl = 0.5 * K.sum(K.exp(log_sigma) + K.square(mu) - 1.0 - log_sigma, axis=1)
    return recon + kl
vae.compile(optimizer='adam', loss=vae_loss)
vae
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
X_train.shape
X_train = X_train.reshape((-1, 28 * 28))
X_test = X_test.reshape((-1, 28 * 28))
X_train.shape
vae.fit(X_train, X_train, batch_size=m, epochs=5)
z_test = encoder.predict(X_test)
z_test.shape
y_test.shape
plt.figure()
plt.scatter(z_test[:, 0], z_test[:, 1], c=y_test)
plt.colorbar()
z = np.random.normal(size=100)
z = z.reshape((50, 2))
z.shape
decoder.summary()
x = decoder.predict(z)
x.shape
x = x.reshape(-1, 28, 28)
x.shape
plt.imshow(x[0], cmap='gray')
```
| github_jupyter |
# Exercise- Neural Network
As introduced in the previous section, a neural network is a powerful tool often utilized in machine learning. Because neural networks are, fundamentally, very mathematical, we'll use them to motivate Numpy!
We review the simplest neural network here:

The output of the neural network, $z_1$, depends on the inputs $x_1$ and $x_2$. The importance of each input is given by values called weights. There is one weight from each input to each output. We show this here:

The inputs are given by $x$, and the outputs are given by $z_1$. Here, $w_{11}$ is the weight of input 1 on output 1 (our only output in this case), and $w_{21}$ is the weight of input 2 on output 1. In general, $w_{ij}$ represents the weight of input $i$ on output $j$.
The output, $z_1$, is given by $z_1 = f(w_{11} x_1 + w_{21} x_2)$:

where $f$ is a specified nonlinear function, usually the hyperbolic tangent function, $\tanh$.
If we express our inputs and weights as matrices, as shown here,

then we can develop an elegant mathematical expression: $z_1 = \tanh(w^T \vec{x})$.
1. Here, we will write a function neural_network, which will apply a neural network operation with 2 inputs and 1 output and a given weight matrix.
Your function should take two arguments: inputs and weights, two NumPy arrays of shape (2,1) and should return a NumPy array of shape (1,1) , the output of the neural network. Do not forget the tanh activation.
```
import numpy as np
def neural_network(inputs, weights):
    """
    Takes an input vector and runs it through a 1-layer neural network
    with a given weight matrix and returns the output.

    Args:
        inputs - 2 x 1 NumPy array
        weights - 2 x 1 NumPy array
    Returns (in this order):
        out - a 1 x 1 NumPy array, representing the output of the neural network
    """
    # Your code here
    raise NotImplementedError
## Incorrect submission because I was receiving a vector of shape (2,) when it was supposed to be a 2 x 1 array
def neural_network(inputs, weights):
    multiplication = np.tanh(np.dot(inputs, weights))
    return multiplication

inputs = np.array([2, 1])
weights = np.array([3, 1])
neural_network(inputs, weights)

## I have to squeeze the inputs and weights to 1-D vectors, multiply them, and then wrap the result back into a 2-D array
def neural_network(inputs, weights):
    multiplication = np.tanh(np.dot(np.squeeze(inputs), np.squeeze(weights)))
    multiplication = np.array(multiplication, ndmin=2)
    return multiplication
### Taking into account a 2 x 1 NumPy array
inputs = np.array([0.628692, 0.8245354])
inputs = np.array(inputs, ndmin=2).T
print("inputs shape", inputs, inputs.shape)  # a 2 x 1 NumPy array

weights = np.array([0.30583591, 0.23121399])
weights = np.array(weights, ndmin=2).T
print("weights shape", weights, weights.shape)  # a 2 x 1 NumPy array

## Testing the function, which gives a result with an output shape of (1, 1)
result = neural_network(inputs, weights)
print("Final result", result)
```
| github_jupyter |
# 1.4 Data types
`Kiddo explanation 😇: `
We might use many materials like sand, bricks, and concrete to construct a house. These are basic and essential needs to get the construction done, and each of them has a specific role or usage.
Likewise, we need various data types like string, boolean, integer, dictionary etc. for the development of code. We need to know where to use a specific data type and what its functionality is. 😊
We have various built-in data types that come out of the box 😎.
| Data type | Mutable? |
|-----------------|------------|
| None | ❌ |
| bytes | ❌ |
| bool | ❌ |
| int | ❌ |
| float | ❌ |
| complex | ❌ |
| str | ❌ |
| tuple | ❌ |
| list | ✅ |
| set | ✅ |
| dictionary | ✅ |
The First question we would be interested in is "What is Mutable?🤔".
If an object can be altered after its creation, it is mutable; otherwise it is immutable.
### **None**
None is a singleton object, which represents empty or null.
#### *Example of None usage*:
In this example, Let's try getting the environment variables 😉
We would be using the `os` module's `getenv` method to fetch the environment variable's value, if there isn't that environment variable, it would be returning `None`
```
import os
# let's set a env variable first
new_environment_variable_name: str = input("Enter the variable name: \n>>>")
new_environment_variable_value: str = input("Enter the variable's value: \n>>>")
os.environ[new_environment_variable_name] = new_environment_variable_value
# Now let's try to fetch an environment variable's value
env_variable_name: str = input("Enter the variable name to be searched: \n>>>")
value = os.getenv(env_variable_name)
if value is None:
    print(f"There is no environment variable named {env_variable_name}")
else:
    print(
        f"The value assigned for the environment variable named {env_variable_name} is {value}"
    )
```
### **bytes**
byte objects are sequences of bytes; they are the machine-readable form and can be stored on disk. The encoding format determines how the bytes are interpreted.
bytes can be converted to a string by decoding; the reverse direction (string to bytes) is known as encoding.
bytes objects can be created by prefixing `b` before a string literal.
```
bytes_obj: bytes = b"Hello Python Enthusiast!"
print(bytes_obj)
```
We see that they are visually the same as string when printed. But actually they are ASCII values, for the convenience of the developer, we see them as human readable strings.
But how to see the actual representation of bytes object? 🤔
It's pretty simple 😉! We can typecast the bytes object to a list and we see each character as it's respective ASCII value.
```
print(list(bytes_obj))
```
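As a quick illustration of the decoding/encoding round trip described above:

```python
text = "Héllo 🐍"
encoded = text.encode("utf-8")      # str -> bytes (encoding)
print(encoded)
decoded = encoded.decode("utf-8")   # bytes -> str (decoding)
assert decoded == text
# non-ASCII characters take more than one byte in UTF-8
print(len(text), len(encoded))
```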
### **bool**
bool objects have only two values: `True`✅ and `False`❌, integer equivalent of True is 1 and for False is 0
```
do_we_love_python = True
if do_we_love_python:
    print("🐍 Python too loves and takes care of you ❤️")
else:
    print("🐍 Python still loves you ❤️")
```
PS: Boolean values in simple terms mean **Yes** for `True` and **No** for `False`
### **int**
int objects are mathematical integers. Pretty easy, right? 😎
```
# Integer values can be used for any integer arithmetics.
# A few simple operations are addition, subtraction, multiplication, division etc..
operand_1 = int(input("Enter an integer value: \n>>>"))
operand_2 = int(input("Enter an integer value: \n>>>"))
print(operand_1 + operand_2)
```
### **float**
float objects are any rational numbers.
```
# Like integer objects float objects are used for decimal arithmetics
# A few simple operations are addition, subtraction, multiplication, division etc..
# We are typecasting the integer or float input to float values explicitly.
operand_1 = float(input("Enter the integer/float value: \n>>>"))
operand_2 = float(input("Enter the integer/float value: \n>>>"))
print(operand_1 + operand_2)
```
### **complex**
complex objects aren't so complex to understand 😉
complex objects hold a Real number and an imaginary number.
While creating the complex object, we would be having a `j` beside the imaginary number.
```
operand_1 = 10 + 5j
operand_2 = 3 + 4j
print(operand_1 * operand_2)
```
explanation for the above math: 😉
```math
(3+4j)*(10+5j)
3(10+5j) + 4j(10+5j)
30 + 15j + 40j + 20(j*j)
30 + 15j + 40j + 20(-1)
30 + 15j + 40j - 20
30 - 20 + 15j + 40j
10 + 55j
```
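We can let Python confirm the working above:

```python
operand_1 = 10 + 5j
operand_2 = 3 + 4j
product = operand_1 * operand_2
print(product)                      # matches the hand computation above
print(product.real, product.imag)   # the real and imaginary parts
```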
### **str**
string objects hold a sequence of characters.
```
my_string = "🐍 Python is cool"
print(my_string)
```
### **tuple**
tuple objects are an immutable data type which can contain objects of any data type; a tuple is created by enclosing the objects in parentheses `()`, separated by commas.
Once a tuple object is created, the tuple itself can't be modified, although if the objects inside the tuple are mutable, they can be changed 😊
The objects in a tuple are ordered, so they can be accessed by index, ranging from 0 to (number of elements - 1).
```
# tuples are best suited for data which doesn't change in its lifetime.
apple_and_its_colour = ("apple", "red")
watermelon_and_its_colour = ("watermelon", "green")
language_initial_release_year = ("Golang", 2012)
language_initial_release_year = ("Angular", 2010)
language_initial_release_year = ("Python", 1990)
# We can't add new objects to a tuple, delete its existing objects,
# or change the values of the existing objects.
# We can get the values by index.
print(
f"{language_initial_release_year[0]} is released in {language_initial_release_year[1]}"
)
```
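To see the caveat about mutable objects inside a tuple in action:

```python
record = ("scores", [10, 20])   # a tuple holding a string and a list
record[1].append(30)            # the list *inside* the tuple can still change
print(record)

try:
    record[0] = "marks"         # replacing an element of the tuple itself...
except TypeError as err:
    print("...raises:", err)    # tuples don't support item assignment
```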
### **list**
list objects are similar to tuples; the difference is that a list object is mutable, so we can add or remove objects in the list even after its creation. It is created using `[]`.
```
about_python = [
"interpreted",
"object-oriented",
"dynamically typed",
"open source",
"high level language",
"🐍",
1990,
]
print(about_python)
# We can add more values to the above list. append method of list object is used to add a new object.
# let's give a try 🙃
about_python.append("Guido Van Rossum")
print(about_python)
```
### **set**
set objects are unordered, unindexed, non-repetitive collections of objects.
Mathematical set-theory operations can be applied using set objects. 😊
A set is created using `{}`.
PS: `{}` denotes a dictionary, we need to use `set()` for creating an empty set, there won't be this issue when creating set objects containing objects, for example: `{1,"a"}`
set objects are good for having the mathematical set operations.
```
set_obj = {6, 4, 4, 3, 10, "Python", "Python", "Golang"}
# We see that we have created a set with 8 objects.
print(set_obj)
# But when printed, we see that only 6 are present because set doesn't allow same objects repeated.
```
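A few of the mathematical set-theory operations mentioned above (the example values are illustrative):

```python
python_traits = {"interpreted", "garbage-collected", "dynamic"}
golang_traits = {"compiled", "garbage-collected", "static"}

print(python_traits & golang_traits)   # intersection: traits shared by both
print(python_traits | golang_traits)   # union: traits of either language
print(python_traits - golang_traits)   # difference: traits only Python has
```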
### **dict**
dictionary objects are used for creating key-value pairs. Keys must be unique, while values can be repeated.
The object assigned to a key can be fetched using `<dict_obj>[key]`, which raises a KeyError when the key is not found. The other way to fetch it is `<dict_obj>.get(key)`, which returns `None` by default if the key is not found.
```
dict_datatype = {
    "language": "Python",
    "Inventor": "Guido Van Rossum",
    "release_year": 1991,
}
print(f"The programming language is: {dict_datatype['language']}")
# We could use get method to prevent KeyError if the given Key is not found.
result = dict_datatype.get("LatestRelease")
# Value of the result would be None as the key LatestRelease is not present in dict_datatype
print(f"The result is: {result}")
```
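To contrast the two lookup styles (`[]` vs `.get`) described above:

```python
dict_datatype = {"language": "Python", "Inventor": "Guido Van Rossum"}

try:
    dict_datatype["release_year"]                 # [] raises on a missing key
except KeyError:
    print("KeyError: 'release_year' is not in the dictionary")

print(dict_datatype.get("release_year"))          # returns None, no exception
print(dict_datatype.get("release_year", 1991))    # a default can be supplied
```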
| github_jupyter |
### This is a common homework assignment for both frameworks
This week's assignment appears to be unusually grand, so please read the submission/grading guidelines before you upload it for review.
__Submission__: To ease mutual pain, please submit
- Some kind of readable report with links to your evaluations, gym uploads, investigation results, etc.
- Explicitly state that you took on a bonus task and where to find it [to make sure it is found and graded].
__Grading__: The main purpose (and source of points) for this notebook is your investigation, not squeezing out average rewards from random environments.
Getting near/above state of the art performance on one particular game will earn you some bonus points, but you can get much more by digging deeper into what makes the algorithms tick and how they compare to one another.
Okay, now brace yourselves, here comes an assignment!
#### 7.2 Deep kung-fu (3 pts)
Implement and train recurrent actor-critic on `KungFuMaster-v0` env in the second notebook. Try to get a score of >=20k.
__[bonus points]__ +1 point per each +5k average reward over 20k baseline (25k = +1, 30k = +2, ...)
#### 7.3 Comparing what we know (7+ pts)
_Please read this assignment carefully._
Choose a partially-observable environment for experimentation out of the [atari](https://gym.openai.com/envs#atari), [doom](https://gym.openai.com/envs#doom) or [pygame](https://gym.openai.com/envs#pygame) catalogues (if you really want to try some other POMDP, feel free to proceed at your own risk).
Not all atari environments are bug-free, and these minor bugs can hurt learning performance.
We recommend to pick one of those:
* [Assault-v0](https://gym.openai.com/envs/Assault-v0)
* [DoomDefendCenter-v0](https://gym.openai.com/envs/DoomDefendCenter-v0) (use env code from [this](https://github.com/yandexdataschool/Practical_RL/blob/master/week4/Seminar4.2_conv_agent.ipynb) notebook)
* [RoadRunner-v0](https://gym.openai.com/envs/RoadRunner-v0)
Unless you have an aesthetic preference, we would appreciate it if you chose an env out of the recommended ones via `random.choice`.
__Your task__ is to implement DRQN and A3C (seminar code may be reused) and apply them __both__ to the environment of your choice. Then compare them on the chosen game (convergence / sample efficiency / final performance).
* It's probably a good idea to compare a3c vs q-learning with similar network complexity.
* Also remember that you can only use large experience replay for 1-step q-learning
__Tips__:
Your new environment may require some tuning before it gives up to your agent:
* Different preprocessing. Mostly cropping.
* In some cases, even larger screen size or colorization.
* View resulting image to figure that out.
* Reward scaling.
* Kung-fu used `rewards=replay.rewards/100.` because you got +100 per success.
* Just avoid training on raw +100 rewards or it's gonna blow up mean squared error.
* Deterministic/FrameSkip
* For doom/pygame/custom, use frameskip to speed up learning
* ```from gym.wrappers import SkipWrapper```
* ```env = SkipWrapper(how_many_frames_to_skip)(your_env)``` in your make_env
* For atari only, consider __training__ on deterministic version of environment
* Works by appending Deterministic to env name: `AssaultDeterministic-v0`, `KungFuMasterDeterministic-v0`
* Expect faster training due to less variance.
    * You still need to __switch back to the normal env for evaluation__ (there's no leaderboard for deterministic envs)
* Knowledge transfer
* If you want to switch network mid-game, you are recommended to use some pre-trained layers
* At minimum, save convolutional weights and re-use them in every new architecture using fine-tuning
    * At its darkest, [soft-targets](http://www.kdnuggets.com/2015/05/dark-knowledge-neural-network.html), [policy distillation](https://arxiv.org/pdf/1511.06295.pdf), [net2net](https://arxiv.org/abs/1511.05641) or similar __[bonus points]__.
#### For the curious
- __[4+ bonus points]__ Implement attentive model for DQRN/A3C (see lecture slides for implementation details). How does it compare to the vanilla architecture?
* __[2+ bonus points]__ If you have any q-learning modiffications from week5 (double q-l, prioritized replay, etc.), they are most welcome here!
* __[2+ bonus points]__ How different memory amounts and types (LSTM / GRU / RNN / combo / custom) affects DRQN / A2C performance? Try to find optimal configuration.
- __[2+ bonus points]__ No one said l2 loss is perfect. Implement Huber or MAE loss for DRQN and/or the A2C critic and compare its performance on the game of your choice (pass a proper `loss_function` to `get_elementwise_objective()`).
- __[1++ bonus points]__ Does it help to add recurrent units in an MDP scenario, e.g. the fully observable "CartPole-v0"? How about if you only give it access to position observations? Only speed observations? Try that out!
- __[4+ bonus points]__ See the very end of this notebook. Some of the games (right side) benefit a lot from additional LSTM memory, but others (left side) do not. That is interesting. Pick one or several games from the left side and try to figure out why A2C performance decreases when an LSTM is added to the feedforward architecture.
#### Bonus: Neural Maps (a LOT of points if successful)
Pick up either [DoomMyWayHome-v0](https://gym.openai.com/envs/DoomMyWayHome-v0) or [RaycastMaze-v0](https://gym.openai.com/envs/RaycastMaze-v0) and apply Neural Map to it. Main details of Neural Map are given in lecture slides and you could also benefit from reading [Neural Map article](https://arxiv.org/abs/1702.08360).
[hse/ysda] Feel free to ask Pavel Shvechikov / Fedor Ratnikov any questions, guidance and clarifications on the topic.
This block is highly experimental and may come with some additional difficulties compared to the main track. With a brief description of your work you could get additional points.
_Scoring points are not pre-determined for this task because we're uncertain of implementation complexity._
__You can use the following template for DRQN implementation or throw it away entirely__
```
# Imports, assuming the agentnet/lasagne stack used in this course
import theano
import lasagne
from lasagne.layers import InputLayer, Conv2DLayer, DenseLayer, reshape
from agentnet.memory import WindowAugmentation
from agentnet.resolver import EpsilonGreedyResolver
from agentnet.target_network import TargetNetwork
from agentnet.agent import Agent
from agentnet.experiments.openai_gym.pool import EnvPool
from agentnet.learning import qlearning

N_ACTIONS = env.action_space.n
OBS_SHAPE = env.observation_space.shape
OBS_CHANNELS, OBS_HEIGHT, OBS_WIDTH = OBS_SHAPE
N_SIMULTANEOUS_GAMES = 2 # this is also known as number of agents in exp_replay_pool
MAX_POOL_SIZE = 1000
REPLAY_SIZE = 100
SEQ_LENGTH = 15
N_POOL_UPDATES = 1
EVAL_EVERY_N_ITER = 10
N_EVAL_GAMES = 1
N_FRAMES_IN_BUFFER = 4 # number of consequent frames to feed in CNN
observation_layer = InputLayer((None,) + OBS_SHAPE)
prev_wnd = InputLayer((None, N_FRAMES_IN_BUFFER) + OBS_SHAPE)
new_wnd = WindowAugmentation(observation_layer, prev_wnd)
wnd_reshape = reshape(
    new_wnd, [-1, N_FRAMES_IN_BUFFER * OBS_CHANNELS, OBS_HEIGHT, OBS_WIDTH])
conv1 = Conv2DLayer(wnd_reshape, num_filters=32, filter_size=(8, 8), stride=4)
conv2 = Conv2DLayer(conv1, num_filters=64, filter_size=(4, 4), stride=2)
conv3 = Conv2DLayer(conv2, num_filters=64, filter_size=(3, 3), stride=1)
dense1 = DenseLayer(conv3, num_units=512)
qvalues_layer = DenseLayer(dense1, num_units=N_ACTIONS, nonlinearity=None)
action_layer = EpsilonGreedyResolver(qvalues_layer)
targetnet = TargetNetwork(qvalues_layer)
qvalues_old_layer = targetnet.output_layers
agent = Agent(observation_layers=observation_layer,
              policy_estimators=(qvalues_layer, qvalues_old_layer),
              action_layers=action_layer,
              agent_states={new_wnd: prev_wnd})
pool = EnvPool(agent, make_env=make_env,
               n_games=N_SIMULTANEOUS_GAMES, max_size=MAX_POOL_SIZE)
replay = pool.experience_replay.sample_session_batch(REPLAY_SIZE)
# .get_sessions() returns env_states, observations, agent_states, actions, policy_estimators
(qvalues_seq, old_qvalues_seq) = agent.get_sessions(
    replay, session_length=SEQ_LENGTH, experience_replay=True)[-1]
elwise_mse_loss = qlearning.get_elementwise_objective(
    qvalues_seq,
    replay.actions[0],
    replay.rewards,
    replay.is_alive,
    qvalues_target=old_qvalues_seq,
    gamma_or_gammas=0.999,
    n_steps=1
)
loss = elwise_mse_loss.sum() / replay.is_alive.sum()
weights = lasagne.layers.get_all_params(action_layer, trainable=True)
updates = lasagne.updates.adam(loss, weights, learning_rate=1e-4)
train_step = theano.function([], loss, updates=updates)
```
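The template's `EpsilonGreedyResolver` still needs an exploration schedule. A minimal linear-annealing sketch (the start/end values and horizon are assumptions to tune; in agentnet one would typically push this value into the resolver's epsilon shared variable each training iteration):

```python
def linear_epsilon(step, eps_start=1.0, eps_end=0.05, anneal_steps=10000):
    # Linearly anneal the exploration rate from eps_start down to eps_end
    frac = min(step / anneal_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

print(linear_epsilon(0))       # 1.0
print(linear_epsilon(5000))
print(linear_epsilon(100000))  # stays at the floor after anneal_steps
```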
***
***
***
### A3C feedforward vs A3C LSTM on Atari games
```
a3c_ff = [518.4, 263.9, 5474.9, 22140.5, 4474.5, 911091.0, 970.1, 12950.0, 22707.9, 817.9, 35.1, 59.8, 681.9, 3755.8, 7021.0, 112646.0, 56533.0, 113308.4, -0.1, -82.5, 18.8, 0.1, 190.5, 10022.8, 303.5, 32464.1, -2.8, 541.0, 94.0,
5560.0, 28819.0, 67.0, 653.7, 10476.1, 52894.1, -78.5, 5.6, 206.9, 15148.8, 12201.8, 34216.0, 32.8, 2355.4, -10911.1, 1956.0, 15730.5, 138218.0, -9.7, -6.3, 12679.0, 156.3, 74705.7, 23.0, 331628.1, 17244.0, 7157.5, 24622.0]
a3c_lstm = [945.3, 173.0, 14497.9, 17244.5, 5093.1, 875822.0, 932.8, 20760.0, 24622.2, 862.2, 41.8, 37.3, 766.8, 1997.0, 10150.0, 138518.0, 233021.5, 115201.9, 0.1, -82.5, 22.6, 0.1, 197.6, 17106.8, 320.0, 28889.5, -1.7, 613.0,
125.0, 5911.4, 40835.0, 41.0, 850.7, 12093.7, 74786.7, -135.7, 10.7, 421.1, 21307.5, 6591.9, 73949.0, 2.6, 1326.1, -14863.8, 1936.4, 23846.0, 164766.0, -8.3, -6.4, 27202.0, 144.2, 105728.7, 25.0, 470310.5, 18082.0, 5615.5, 23519.0]
game_names = "Alien Amidar Assault Asterix Asteroids Atlantis Bank Battle Beam Berzerk Bowling Boxing Breakout Centipede Chopper Crazy Defender Demon Double Enduro Fishing Freeway Frostbite Gopher Gravitar H.E.R.O. Ice James Kangaroo Krull Kung-Fu Montezuma's Ms. Name Phoenix Pit Pong Private Q*Bert River Road Robotank Seaquest Skiing Solaris Space Star Surround Tennis Time Tutankham Up Venture Video Wizard Yars Zaxxon".split(
    " ")
score_difference = np.array(a3c_lstm) - np.array(a3c_ff)
idxs = np.argsort(score_difference)
plt.figure(figsize=(15, 6))
plt.plot(np.sort(score_difference))
plt.yscale("symlog")
plt.xticks(np.arange(len(game_names)), np.array(
    game_names)[idxs], rotation='vertical')
plt.grid()
plt.title("Comparison A3C on atari games: with and without LSTM memory")
plt.ylabel("Difference between A3C_LSTM and A3C_FeedForward scores")
```
# Data Structures
* tuple
* list
* dict
* set
## tuple
A tuple is a one dimensional, fixed-length, immutable sequence.
Create a tuple:
```
tup = (1, 2, 3)
tup
```
Convert to a tuple:
```
list_1 = [1, 2, 3]
type(tuple(list_1))
```
Create a nested tuple:
```
nested_tup = ([1, 2, 3], (4, 5))
nested_tup
```
Access a tuple's elements by index O(1):
```
nested_tup[0]
```
Although tuples are immutable, their contents can contain mutable objects.
Modify a tuple's contents:
```
nested_tup[0].append(4)
nested_tup[0]
```
Concatenate tuples by creating a new tuple and copying objects:
```
(1, 3, 2) + (4, 5, 6)
```
Multiply tuples to copy references to objects (objects themselves are not copied):
```
('foo', 'bar') * 2
```
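Because multiplication copies references rather than objects, mutating a shared element shows up at every position; a quick illustration:

```python
inner = ['foo']
doubled = (inner,) * 2  # both slots reference the same list object
inner.append('bar')
print(doubled)                   # (['foo', 'bar'], ['foo', 'bar'])
print(doubled[0] is doubled[1])  # True
```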
Unpack tuples:
```
a, b = nested_tup
a, b
```
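Unpacking also gives the idiomatic way to swap variables without a temporary:

```python
x, y = 1, 2
x, y = y, x  # the right-hand tuple is built first, then unpacked
print(x, y)  # 2 1
```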
Unpack nested tuples:
```
(a, b, c, d), (e, f) = nested_tup
a, b, c, d, e, f
```
A common use of variable unpacking is when iterating over sequences of tuples or lists:
```
seq = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
for a, b, c in seq:
    print(a, b, c)
```
## list
A list is a one dimensional, variable-length, mutable sequence.
Create a list:
```
list_1 = [1, 2, 3]
list_1
```
Convert to a list:
```
type(list(tup))
```
Create a nested list:
```
nested_list = [(1, 2, 3), [4, 5]]
nested_list
```
**Access a list's elements** by index O(1):
```
nested_list[1]
```
Append an element to a list O(1):
```
nested_list.append(6)
nested_list
```
Insert an element to a list at a specific index (note that insert is expensive as it has to shift subsequent elements O(n)):
```
nested_list.insert(0, 'start')
nested_list
```
Pop is expensive as it has to shift subsequent elements O(n). The operation is O(1) if pop is used for the last element.
Remove and return an element from a specified index:
```
nested_list.pop(0)
nested_list
```
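When front insertions and removals dominate, `collections.deque` (a supplementary note, not covered above) offers O(1) appends and pops at both ends, avoiding list's O(n) shifting:

```python
from collections import deque

d = deque([1, 2, 3])
d.appendleft('start')  # O(1), unlike list.insert(0, ...)
print(d)               # deque(['start', 1, 2, 3])
d.popleft()            # O(1), unlike list.pop(0)
print(d)               # deque([1, 2, 3])
```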
Locate the first occurrence of a value and remove it O(n):
```
nested_list.remove((1, 2, 3))
nested_list
```
Check if a list contains a value O(n):
```
6 in nested_list
```
Concatenate lists by creating a new list and copying objects:
```
[1, 3, 2] + [4, 5, 6]
```
Extend a list by appending elements (faster than concatenating lists, as it does not have to create a new list):
```
nested_list.extend([7, 8, 9])
nested_list
```
## dict
A dict is also known as a hash map or associative array. A dict is a mutable collection of key-value pairs.
Note: Big O complexities are listed as average case, with most worst case complexities being O(n).
Create a dict:
```
dict_1 = { 'a' : 'foo', 'b' : [0, 1, 2, 3] }
dict_1
```
Access a dict's elements by key O(1):
```
dict_1['b']
```
Insert or set a dict's elements by key O(1):
```
dict_1[5] = 'bar'
dict_1
```
Check if a dict contains a key O(1):
```
5 in dict_1
```
Delete a value from a dict O(1):
```
dict_2 = dict(dict_1)
del dict_2[5]
dict_2
```
Remove and return an element for a specified key O(1):
```
value = dict_2.pop('b')
print(value)
print(dict_2)
```
Both get() and pop() can be called with a default value to use if the key is not found. Without one, get() returns None and pop() raises a KeyError for a missing key.
```
value = dict_1.get('z', 0)
value
```
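pop() accepts a default in the same way, which avoids the KeyError for a missing key:

```python
dict_1 = {'a': 'foo', 'b': [0, 1, 2, 3]}
value = dict_1.pop('z', 0)  # 'z' is absent, so the default is returned
print(value)                # 0
print(dict_1)               # unchanged
```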
setdefault() returns the existing value for a key; if the key is missing, it inserts the key with the given default and returns that default:
```
print(dict_1.setdefault('b', None))
print(dict_1.setdefault('z', None))
```
By contrast to setdefault(), defaultdict lets you specify the default when the container is initialized, which works well if the default is appropriate for all keys:
```
from collections import defaultdict
seq = ['foo', 'bar', 'baz']
first_letter = defaultdict(list)
for elem in seq:
    first_letter[elem[0]].append(elem)
first_letter
```
dict keys must be "hashable", i.e. they must be immutable objects like scalars (int, float, string) or tuples whose objects are all immutable. Lists are mutable and therefore are not hashable, although you can convert the list portion to a tuple as a quick fix.
```
print(hash('string'))
print(hash((1, 2, (3, 4))))
```
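The quick fix mentioned above, converting the list to a tuple so it becomes hashable, looks like this:

```python
key_list = [1, 2, 3]
d = {}
# d[key_list] = 'value' would raise TypeError: unhashable type: 'list'
d[tuple(key_list)] = 'value'
print(d)  # {(1, 2, 3): 'value'}
```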
Get the keys in no particular order (though repeated calls output the keys in the same order). In Python 3, keys() returns a view object instead of a list.
```
dict_1.keys()
```
Get the values in no particular order (though repeated calls output the values in the same order). In Python 3, values() returns a view object instead of a list.
```
dict_1.values()
```
Iterate through a dictionary's keys and values:
```
for key, value in dict_1.items():
    print(key, value)
```
Merge one dict into another:
```
dict_1.update({'e' : 'elephant', 'f' : 'fish'})
dict_1
```
Pair up two sequences element-wise in a dict:
```
mapping = dict(zip(range(7), reversed(range(7))))
mapping
```
## set
A set is an unordered collection of unique elements.
Create a set:
```
set_1 = set([0, 1, 2, 3, 4, 5])
set_1
set_2 = {1, 2, 3, 5, 8, 13}
set_2
```
Sets support set operations like union, intersection, difference, and symmetric difference.
Union O(len(set_1) + len(set_2)):
```
set_1 | set_2
```
Intersection O(min(len(set_1), len(set_2))):
```
set_1 & set_2
```
Difference O(len(set_1)):
```
set_1 - set_2
```
Symmetric Difference O(len(set_1)):
```
set_1 ^ set_2
```
Subset O(len(set_3)):
```
set_3 = {1, 2, 3}
set_3.issubset(set_2)
```
Superset O(len(set_3)):
```
set_2.issuperset(set_3)
```
Equal O(min(len(set_1), len(set_2))):
```
{1, 2, 3} == {3, 2, 1}
```
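Sets are also mutable (a supplementary note): elements can be added and removed in O(1) average time:

```python
set_4 = {1, 2, 3}
set_4.add(4)       # O(1) average
set_4.discard(99)  # no error if the element is absent
set_4.remove(4)    # raises KeyError if the element is absent
print(set_4)       # {1, 2, 3}
```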
# Python Crash Course Exercises
This is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as [Complete Python Bootcamp](https://www.udemy.com/complete-python-bootcamp)
## Exercises
Answer the questions or complete the tasks outlined in bold below; use the specific method described if applicable.
**What is 7 to the power of 4?**
```
7**4
```
**Split this string:**
s = "Hi there Sam!"
**into a list.**
```
s = 'Hi there Sam!'
s.split()
```
**Given the variables:**
planet = "Earth"
diameter = 12742
**Use .format() to print the following string:**
The diameter of Earth is 12742 kilometers.
```
planet = "Earth"
diameter = 12742
'The diameter of {} is {} kilometers.'.format(planet,diameter)
```
**Given this nested list, use indexing to grab the word "hello"**
```
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
lst[3][1][2][0]
```
**Given this nested dictionary grab the word "hello". Be prepared, this will be annoying/tricky**
```
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
d['k1'][3]['tricky'][3]['target'][3]
```
**What is the main difference between a tuple and a list?**
```
# Tuple is immutable, list items can be changed
```
**Create a function that grabs the email website domain from a string in the form:**
user@domain.com
**So for example, passing "user@domain.com" would return: domain.com**
```
def domainGet(inp):
    return inp.split('@')[1]
domainGet('user@domain.com')
```
**Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like punctuation being attached to the word dog, but do account for capitalization.**
```
def findDog(inp):
    return 'dog' in inp.lower().split()
findDog('Is there a dog here?')
```
**Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.**
```
def countDog(inp):
    dog = 0
    for x in inp.lower().split():
        if x == 'dog':
            dog += 1
    return dog
countDog('This dog runs faster than the other dog dude!')
```
**Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:**
seq = ['soup','dog','salad','cat','great']
**should be filtered down to:**
['soup','salad']
```
seq = ['soup','dog','salad','cat','great']
list(filter(lambda item:item[0]=='s',seq))
```
### Final Problem
**You are driving a little too fast, and a police officer stops you. Write a function to return one of 3 possible results: "No ticket", "Small ticket", or "Big Ticket". If your speed is 60 or less, the result is "No Ticket". If speed is between 61 and 80 inclusive, the result is "Small Ticket". If speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all cases.**
```
def caught_speeding(speed, is_birthday):
    if is_birthday:
        speed = speed - 5
    if speed > 80:
        return 'Big Ticket'
    elif speed > 60:
        return 'Small Ticket'
    else:
        return 'No Ticket'
caught_speeding(81,True)
caught_speeding(81,False)
```
# Great job!
## Model compression demo
This notebook demonstrates model compression through quantization using TFLite. We trained a ResNet50 mask/no-mask model to demonstrate this, which can be found in ../data/classifier_model_weights/resnet50_classifier.h5. Of course you are free to train your own model using the train-mask-nomask notebook.
```
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn import metrics
from pathlib import Path
import numpy as np
```
### Extract test data and set up generator
```
test_dir = Path('../data/test')
model_dir = Path('../data/classifier_model_weights')
target_size = (112,112)
batch_size = 32
test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
test_generator = test_datagen.flow_from_directory(test_dir,
                                                  target_size=target_size,
                                                  batch_size=batch_size,
                                                  class_mode='binary',
                                                  classes=['not_masked', 'masked'],
                                                  shuffle=False)
```
### Load the original model and check accuracy
```
model = tf.keras.models.load_model(str(model_dir / 'resnet50_classifier.h5'))
preds = [x[0] > 0.5 for x in model.predict(test_generator)]
acc = metrics.accuracy_score(test_generator.classes, preds)
print(f"The original model accuracy = {acc:.3f}")
```
### Convert to tflite model and check accuracy
```
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open(model_dir / 'resnet50_classifier.tflite', 'wb') as f:
f.write(tflite_model)
```
We see that the TFLite file is slightly smaller than the original .h5 file, but this is only due to the format conversion; no compression has been applied at this point.
```
!ls -lh $model_dir
```
As this is still the same model, just in a different format, we expect to see the same accuracy.
```
interpreter = tf.lite.Interpreter(str(model_dir / 'resnet50_classifier.tflite'))
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.allocate_tensors()
preds = []
for batch_idx in range(len(test_generator)):
    for img in test_generator[batch_idx][0]:
        interpreter.set_tensor(input_details[0]['index'], np.expand_dims(img, axis=0))
        interpreter.invoke()
        output_data = interpreter.get_tensor(output_details[0]['index'])
        preds.append(output_data[0][0])
preds = [x > 0.5 for x in preds]
tflite_acc = metrics.accuracy_score(test_generator.labels, preds)
print(f"The TFLite model accuracy = {tflite_acc:.3f}")
```
### Dynamic range quantization
We do the same as before, but now enabling the default optimization. This will result in the weights being quantized to 8-bit precision.
```
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open(model_dir / 'resnet50_classifier_quantized.tflite', 'wb') as f:
f.write(tflite_model)
!ls -lh $model_dir
```
We can see that, as expected, the quantized model takes up ca. 1/4 of the disk space.
Let's check the accuracy of this model as well.
```
interpreter = tf.lite.Interpreter(str(model_dir / 'resnet50_classifier_quantized.tflite'))
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.allocate_tensors()
preds = []
for batch_idx in range(len(test_generator)):
    for img in test_generator[batch_idx][0]:
        interpreter.set_tensor(input_details[0]['index'], np.expand_dims(img, axis=0))
        interpreter.invoke()
        output_data = interpreter.get_tensor(output_details[0]['index'])
        preds.append(output_data[0][0])
preds = [x > 0.5 for x in preds]
tflite_acc = metrics.accuracy_score(test_generator.labels, preds)
print(f"The quantized TFLite model accuracy = {tflite_acc:.3f}")
```
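The roughly 4x size reduction comes from storing each weight in 8 bits instead of 32. A rough numpy sketch of a symmetric per-tensor scheme (an illustration of the idea only, not TFLite's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)   # stand-in for a weight tensor

scale = np.abs(w).max() / 127.0                # map the float range onto int8
w_q = np.round(w / scale).astype(np.int8)      # quantized weights, 1 byte each
w_dq = w_q.astype(np.float32) * scale          # dequantized values used at inference

print(w.nbytes // w_q.nbytes)                  # 4
print(float(np.abs(w - w_dq).max()) <= scale)  # rounding error is bounded
```

Dynamic range quantization applies this idea per tensor at conversion time; activations stay in float, which is why accuracy barely moves.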
### Benchmark RAM memory usage
We use the TFLite benchmark tool to compare inference memory usage.
```
!wget https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/linux_x86-64_benchmark_model
!chmod +x linux_x86-64_benchmark_model
non_compressed_model_path = (model_dir / 'resnet50_classifier.tflite').as_posix()
!./linux_x86-64_benchmark_model --graph=$non_compressed_model_path --num_threads=4
compressed_model_path = (model_dir / 'resnet50_classifier_quantized.tflite').as_posix()
!./linux_x86-64_benchmark_model --graph=$compressed_model_path --num_threads=4
```
```
# Copyright 2021 Google LLC
# Use of this source code is governed by an MIT-style
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.
# Author(s): Kevin P. Murphy (murphyk@gmail.com) and Mahmoud Soliman (mjs@aucegypt.edu)
```
<a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a>
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/figures//chapter21_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Cloning the pyprobml repo
```
!git clone https://github.com/probml/pyprobml
%cd pyprobml/scripts
```
# Installing required software (this may take a few minutes)
```
!apt install octave -qq > /dev/null
!apt-get install liboctave-dev -qq > /dev/null
```
## Figure 21.2:
(a) An example of single link clustering using city block distance. Pairs (1,3) and (4,5) are both distance 1 apart, so get merged first. (b) The resulting dendrogram. Adapted from Figure 7.5 of [Alpaydin04].
Figure(s) generated by [agglomDemo.m](https://github.com/probml/pmtk3/blob/master/demos/agglomDemo.m)
```
!octave -W agglomDemo.m >> _
```
## Figure 21.4:
Hierarchical clustering of yeast gene expression data. (a) Single linkage. (b) Complete linkage. (c) Average linkage.
Figure(s) generated by [hclustYeastDemo.m](https://github.com/probml/pmtk3/blob/master/demos/hclustYeastDemo.m)
```
!octave -W hclustYeastDemo.m >> _
```
## Figure 21.5:
(a) Some yeast gene expression data plotted as a heat map. (b) Same data plotted as a time series.
Figure(s) generated by [kmeansYeastDemo.m](https://github.com/probml/pmtk3/blob/master/demos/kmeansYeastDemo.m)
```
!octave -W kmeansYeastDemo.m >> _
```
## Figure 21.6:
Hierarchical clustering applied to the yeast gene expression data. (a) The rows are permuted according to a hierarchical clustering scheme (average link agglomerative clustering), in order to bring similar rows close together. (b) 16 clusters induced by cutting the average linkage tree at a certain height.
Figure(s) generated by [hclustYeastDemo.m](https://github.com/probml/pmtk3/blob/master/demos/hclustYeastDemo.m)
```
!octave -W hclustYeastDemo.m >> _
```
## Figure 21.7:
Illustration of K-means clustering in 2d. We show the result of using two different random seeds. Adapted from Figure 9.5 of [Geron2019].
Figure(s) generated by [kmeans_voronoi.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_voronoi.py)
```
%run ./kmeans_voronoi.py
```
## Figure 21.8:
Clustering the yeast data from Figure 21.5 using K-means clustering with $K=16$. (a) Visualizing all the time series assigned to each cluster. (b) Visualizing the 16 cluster centers as prototypical time series.
Figure(s) generated by [kmeansYeastDemo.m](https://github.com/probml/pmtk3/blob/master/demos/kmeansYeastDemo.m)
```
!octave -W kmeansYeastDemo.m >> _
```
## Figure 21.9:
An image compressed using vector quantization with a codebook of size $K$. (a) $K=2$. (b) $K=4$.
Figure(s) generated by [vqDemo.m](https://github.com/probml/pmtk3/blob/master/demos/vqDemo.m)
```
!octave -W vqDemo.m >> _
```
## Figure 21.10:
Illustration of batch vs mini-batch K-means clustering on the 2d data from Figure 21.7. Left: distortion vs $K$. Right: training time vs $K$. Adapted from Figure 9.6 of [Geron2019].
Figure(s) generated by [kmeans_minibatch.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_minibatch.py)
```
%run ./kmeans_minibatch.py
```
## Figure 21.11:
Performance of K-means and GMM vs $K$ on the 2d dataset from Figure 21.7. (a) Distortion on validation set vs $K$.
Figure(s) generated by [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py) [gmm_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_2d.py) [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py)
```
%run ./kmeans_silhouette.py
%run ./gmm_2d.py
%run ./kmeans_silhouette.py
```
## Figure 21.12:
Voronoi diagrams for K-means for different $K$ on the 2d dataset from Figure 21.7.
Figure(s) generated by [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py)
```
%run ./kmeans_silhouette.py
```
## Figure 21.13:
Silhouette diagrams for K-means for different $K$ on the 2d dataset from Figure 21.7.
Figure(s) generated by [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py)
```
%run ./kmeans_silhouette.py
```
## Figure 21.14:
Some data in 2d fit using a GMM with $K=5$ components. Left column: marginal distribution $p(\mathbf x )$. Right column: visualization of each mixture distribution, and the hard assignment of points to their most likely cluster. (a-b) Full covariance. (c-d) Tied full covariance. (e-f) Diagonal covariance. (g-h) Spherical covariance. Color coding is arbitrary.
Figure(s) generated by [gmm_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_2d.py)
```
%run ./gmm_2d.py
```
## Figure 21.15:
Some 1d data, with a kernel density estimate superimposed. Adapted from Figure 6.2 of [Martin2018].
Figure(s) generated by [gmm_identifiability_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_identifiability_pymc3.py)
```
%run ./gmm_identifiability_pymc3.py
```
## Figure 21.16:
Illustration of the label switching problem when performing posterior inference for the parameters of a GMM. We show a KDE estimate of the posterior marginals derived from 1000 samples from 4 HMC chains. (a) Unconstrained model. Posterior is symmetric. (b) Constrained model, where we add a penalty to ensure $\mu _0 < \mu _1$. Adapted from Figures 6.6-6.7 of [Martin2018].
Figure(s) generated by [gmm_identifiability_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_identifiability_pymc3.py)
```
%run ./gmm_identifiability_pymc3.py
```
## Figure 21.17:
Fitting GMMs with different numbers of clusters $K$ to the data in Figure 21.15. Black solid line is KDE fit. Solid blue line is posterior mean; faint blue lines are posterior samples. Dotted lines show the individual Gaussian mixture components, evaluated by plugging in their posterior mean parameters. Adapted from Figure 6.8 of [Martin2018].
Figure(s) generated by [gmm_chooseK_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_chooseK_pymc3.py)
```
%run ./gmm_chooseK_pymc3.py
```
## Figure 21.18:
WAIC scores for the different GMMs. The empty circle is the posterior mean WAIC score for each model, and the black lines represent the standard error of the mean. The solid circle is the in-sample deviance of each model, i.e., the unpenalized log-likelihood. The dashed vertical line corresponds to the maximum WAIC value. The gray triangle is the difference in WAIC score for that model compared to the best model. Adapted from Figure 6.10 of [Martin2018].
Figure(s) generated by [gmm_chooseK_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_chooseK_pymc3.py)
```
%run ./gmm_chooseK_pymc3.py
```
## Figure 21.19:
We fit a mixture of 20 Bernoullis to the binarized MNIST digit data. We visualize the estimated cluster means $ \boldsymbol \mu _k$. The numbers on top of each image represent the estimated mixing weights $ \pi _k$. No labels were used when training the model.
Figure(s) generated by [mixBerMnistEM.m](https://github.com/probml/pmtk3/blob/master/demos/mixBerMnistEM.m)
```
!octave -W mixBerMnistEM.m >> _
```
## Figure 21.20:
Clustering data consisting of 2 spirals. (a) K-means. (b) Spectral clustering.
Figure(s) generated by [spectral_clustering_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/spectral_clustering_demo.py)
```
%run ./spectral_clustering_demo.py
```
# Project 1
- **Team Members**: Chika Ozodiegwu, Kelsey Wyatt, Libardo Lambrano, Kurt Pessa

### Data set used:
* https://open-fdoh.hub.arcgis.com/datasets/florida-covid19-case-line-data
##### Dependencies
```
import step1_raw_data_collection as step1
import step2_data_processing_and_cleaning as step2
import pandas as pd
```
# Process of Data Analysis

## Step 1: Raw Data Collection
-----
```
df = step1.get_data()
#help(step1.get_data)
```
## Step 2: Data Processing & Data Cleaning
-----
##### Single group
```
#df = step2.get_hospitalized_data()
#df = step2.get_df_with_datetime_and_formatted_column()
df = step2.get_hospitalizations_by_casedatetime(df)
#group_name = "Gender"
#group_name = "Age_group"
#group_name = "Travel_related"
#group_name = "Jurisdiction"
#group_name = "County"
#df = step2.get_group(group_name)
#df
```
##### Two groups: before and after opening
```
#df1, df2 = step2.get_groups_before_and_after_opening_date()
#df1, df2 = step2.get_groups_by_casedatetime()
group_name = "Gender"
#group_name = "Age_group"
#group_name = "Travel_related"
#group_name = "Jurisdiction"
#group_name = "County"
df1,df2 = step2.get_groups(group_name)
#df
#pd.concat([df1,df2],axis=1)
df = step2.get_df_with_datetime_and_formatted_column()
filt = df["Gender"]=="Male"
df = df[filt]
df = step2.get_hospitalizations_by_casedatetime(df)
male_by_week = df.groupby(pd.Grouper(freq='W',key='CaseDateTime')).sum()
df = step2.get_df_with_datetime_and_formatted_column()
filt = df["Gender"]=="Female"
df = df[filt]
df = step2.get_hospitalizations_by_casedatetime(df)
female_by_week = df.groupby(pd.Grouper(freq='W',key='CaseDateTime')).sum()
male_perc = male_by_week['Hospitalized']/(male_by_week['Hospitalized']+female_by_week['Hospitalized'])*100
female_perc = female_by_week['Hospitalized']/(male_by_week['Hospitalized']+female_by_week['Hospitalized'])*100
import matplotlib.pyplot as plt
plt.figure(figsize=(8,6))
p1 = plt.bar(male_perc.index,male_perc,width=5,label='male',alpha=0.5)
p2 = plt.bar(female_perc.index,female_perc,bottom=male_perc,width=5,label='female',alpha=0.5)
plt.hlines(y=50,xmin=male_perc.index[0],xmax=male_perc.index[-1],alpha=0.8)
plt.ylabel("Percent")
plt.xticks(male_perc.index,[date.strftime("%m/%d") for date in male_perc.index],rotation='vertical')
plt.title("Percent Male/Female for Hospitalizations by week")
plt.legend(handles=[p1,p2])
plt.show()
#clean csv
new_csv_data_df = df[['ObjectId', "County",'Age',"Age_group", "Gender", "Jurisdiction", "Travel_related", "Hospitalized","Case1"]]
new_csv_data_df.head()
#new_csv_data_df.to_csv(new_csv_data_df, 'new_cleaned_data.csv')
new_csv_data_df.to_csv("new_covid_dataframe.csv")  # index=False, header=True
```
# There is no change in hospitalizations since reopening
### Research Question to Answer:
* “Has there been a change in hospitalizations since reopening?”
### Part 1: Six (6) Steps for Hypothesis Testing
<details><summary> click to expand </summary>
#### 1. Identify
- **Populations** (divide Hospitalization data in two groups of data):
1. Prior to opening
2. After opening
* Decide on the **date**:
* May 4th - restaurants opening to 25% capacity
* June (Miami opening beaches)
- **Distribution**: the comparison distribution of the difference between the two groups
#### 2. State the hypotheses
- **H0**: There is no change in hospitalizations after Florida has reopened
- **H1**: There is a change in hospitalizations after Florida has reopened
#### 3. Characteristics of the comparison distribution
- Population means, standard deviations
#### 4. Critical values
- p = 0.05
- Our hypothesis is nondirectional so our hypothesis test is **two-tailed**
#### 5. Calculate
#### 6. Decide!
</details>
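Steps 5 and 6 (calculate and decide) could be carried out with a two-sample t-test; a hedged sketch on synthetic counts (the real analysis would substitute the before/after hospitalization groups from step 2 and check the test's assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
before = rng.poisson(lam=50, size=60)  # placeholder daily hospitalization counts
after = rng.poisson(lam=50, size=60)

# Welch's t-test: does not assume equal variances between the two groups
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)

alpha = 0.05  # two-tailed critical value from step 4
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Reject H0: hospitalizations changed after reopening")
else:
    print("Fail to reject H0: no detectable change")
```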
### Part 2: Visualization
* Trends
### Further Inquries
* Increases in groups?
* Age
* Gender
* Ethnicity
### Rough Breakdown of Tasks
* Data Massaging
```
#Calculate total number of cases
Total_covid_cases = new_csv_data_df["ObjectId"].nunique()
Total_covid_cases = pd.DataFrame({"Total Number of Cases": [Total_covid_cases]})
Total_covid_cases
#Total number of cases by county (Kelsey) Include bar chart
#Total number of cases by gender (Kelsey) Include pie chart
#Top 10 counties with the most cases (Libardo)
#Divide hospitalization data in two groups of data prior to reopening and create new dataframe (Kurt) consider total (Chika)
#Divide hospitalization data in two groups of data after reopening and create new dataframe (Kurt) consider total (Chika)
#Total number of hospitalization for all counties (Libardo)
#Total number of hospitalization for each county and put in DataFrame # Create a visualization (Kelsey)
#Percentage of hospitalization by gender # Create Visualization (Libardo)
#Percentage of hospitalization by age group (Chika) #Create visualization
#Percentage of hospitalization before shut down (Not done yet) (Rephrase) (Chika)
#Percentage of hospitalization during shut down (backburner)
#Percentage of hospitalization after reopening(Not done yet) (Rephrase) (Chika)
#Compare travel-related hospitalization to non-travelrelated cases (Not done yet) (Chika)
#Average number of hospitalization by county (Not done yet) (Kelsey)
#Hospitalization by case date/month (needs more) (Libardo)
```
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import mglearn
from IPython.display import display
from sklearn.model_selection import train_test_split
%matplotlib inline
mglearn.plots.plot_knn_regression(n_neighbors=1)
mglearn.plots.plot_knn_regression(n_neighbors=3)
# Implementing the knn for regression
from sklearn.neighbors import KNeighborsRegressor
X, y = mglearn.datasets.make_wave(n_samples=40)
# Split the wave dataset into a training and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Instantiate the model and set the number of neighbors to consider to 3
reg = KNeighborsRegressor(n_neighbors=3)
# Fit the model using the training data and training targets
reg.fit(X_train, y_train)
# Making predictions
print("Test set predictions: {}".format(reg.predict(X_test)))
# Evaluate the model with reg.score, which returns the R² score, also known as the
# coefficient of determination: a measure of goodness of a prediction for a
# regression model, usually between 0 and 1.
# 1 = perfect prediction; 0 = a constant model that just predicts the mean of the
# training set responses, y_train.
print("Test set R^2: {:.2f}".format(reg.score(X_test, y_test)))
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
# Create 1,000 data points evenly spaced between -3 and 3
line = np.linspace(-3, 3, 1000).reshape(-1, 1)
for n_neighbors, ax in zip([1, 3, 9], axes):
    # make predictions using 1, 3, or 9 neighbors
    reg = KNeighborsRegressor(n_neighbors=n_neighbors)
    reg.fit(X_train, y_train)
    ax.plot(line, reg.predict(line))
    ax.plot(X_train, y_train, '^', c=mglearn.cm2(0), markersize=8)
    ax.plot(X_test, y_test, 'v', c=mglearn.cm2(1), markersize=8)
    ax.set_title("{} neighbor(s)\n train score: {:.2f} test score: {:.2f}".format(
        n_neighbors, reg.score(X_train, y_train), reg.score(X_test, y_test)))
    ax.set_xlabel("Feature")
    ax.set_ylabel("Target")
axes[0].legend(["Model predictions", "Training data/target", "Test data/target"], loc="best")
```
Calculating the best number of neighbors to use for the best predictions is beyond the scope of the book, but the main points are:
* In many cases, using 3-5 neighbors works well
* You should definitely verify which number of neighbors gives the best result for your data
* Euclidean distance (the default) is used to find the nearest neighbors
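Although the book leaves model selection out of scope, a brute-force check of several k values is straightforward. The sketch below uses a tiny hand-rolled kNN on synthetic 1-D data (standing in for `make_wave`), scoring each candidate k by test-set mean squared error:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D regression data standing in for mglearn's make_wave
X = rng.uniform(-3, 3, 60)
y = np.sin(4 * X) + X + rng.normal(scale=0.3, size=60)
X_train, X_test = X[:40], X[40:]
y_train, y_test = y[:40], y[40:]

def knn_predict(x, k):
    # Euclidean distance in 1-D is just the absolute difference
    nearest = np.argsort(np.abs(X_train - x))[:k]
    return y_train[nearest].mean()

# Score each candidate k by mean squared error on the held-out points
for k in (1, 3, 5, 9):
    preds = np.array([knn_predict(x, k) for x in X_test])
    mse = np.mean((preds - y_test) ** 2)
    print(f"k={k}: test MSE = {mse:.3f}")
```

In practice the same search is usually done with cross-validation rather than a single hold-out split.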
**Note:** While the nearest k-neighbors algorithm is easy to understand, it is not often used in practice, due to prediction being slow and its inability to handle many features. The method we discuss next has neither of these drawbacks.
# Randomized Image Sampling for Explanations (RISE)
```
import os
import numpy as np
from matplotlib import pyplot as plt
from skimage.transform import resize
from tqdm import tqdm
```
## Change code below to incorporate your *model* and *input processing*
### Define your model here:
```
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras import backend as K
class Model():
    def __init__(self):
        K.set_learning_phase(0)
        self.model = ResNet50()
        self.input_size = (224, 224)

    def run_on_batch(self, x):
        return self.model.predict(x)
```
### Load and preprocess image
```
from keras.preprocessing import image
def load_img(path):
    img = image.load_img(path, target_size=model.input_size)
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return img, x
```
---
## RISE
```
def generate_masks(N, s, p1):
    cell_size = np.ceil(np.array(model.input_size) / s)
    up_size = (s + 1) * cell_size

    grid = np.random.rand(N, s, s) < p1
    grid = grid.astype('float32')

    masks = np.empty((N, *model.input_size))
    for i in tqdm(range(N), desc='Generating masks'):
        # Random shifts
        x = np.random.randint(0, cell_size[0])
        y = np.random.randint(0, cell_size[1])
        # Linear upsampling and cropping
        masks[i, :, :] = resize(grid[i], up_size, order=1, mode='reflect',
                                anti_aliasing=False)[x:x + model.input_size[0], y:y + model.input_size[1]]
    masks = masks.reshape(-1, *model.input_size, 1)
    return masks
batch_size = 100
def explain(model, inp, masks):
    # N, p1 and batch_size are read from the surrounding notebook scope
    preds = []
    # Make sure multiplication is being done for correct axes
    masked = inp * masks
    for i in tqdm(range(0, N, batch_size), desc='Explaining'):
        preds.append(model.run_on_batch(masked[i:min(i + batch_size, N)]))
    preds = np.concatenate(preds)
    sal = preds.T.dot(masks.reshape(N, -1)).reshape(-1, *model.input_size)
    sal = sal / N / p1
    return sal
```
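At its core, `explain` computes a score-weighted average of the masks. The toy NumPy sketch below strips away the ResNet, the upsampling and the random shifting, and uses a fake "model" that simply measures how much of a bright 2×2 object survives each binary mask; the object region should come out more salient than the background:

```python
import numpy as np

rng = np.random.default_rng(42)
H = W = 8
N, p1 = 2000, 0.5

# Toy "image" whose object is the bright 2x2 top-left corner
img = np.zeros((H, W))
img[:2, :2] = 1.0

# Plain binary masks (the full RISE code also upsamples and shifts them)
masks = (rng.random((N, H, W)) < p1).astype(float)

# Fake model score: how much object intensity survives each mask
scores = (masks * img).sum(axis=(1, 2))

# Saliency = score-weighted sum of masks, normalized as in explain()
sal = (scores @ masks.reshape(N, -1)).reshape(H, W) / (N * p1)
print(sal[:2, :2].mean(), sal[2:, 2:].mean())
```

This is only a sketch of the estimator's logic, not a substitute for the real pipeline above.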
---
## Running explanations
```
def class_name(idx):
    return decode_predictions(np.eye(1, 1000, idx))[0][0][1]
model = Model()
img, x = load_img('catdog.png')
N = 2000
s = 8
p1 = 0.5
masks = generate_masks(N, s, p1)
sal = explain(model, x, masks)
class_idx = 243
plt.title('Explanation for `{}`'.format(class_name(class_idx)))
plt.axis('off')
plt.imshow(img)
plt.imshow(sal[class_idx], cmap='jet', alpha=0.5)
# plt.colorbar()
plt.show()
```
<center>
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# Simple Linear Regression
Estimated time needed: **15** minutes
## Objectives
After completing this lab you will be able to:
* Use scikit-learn to implement simple Linear Regression
* Create a model, train it, test it and use the model
### Importing Needed packages
```
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
```
### Downloading Data
To download the data, we will use !wget to download it from IBM Object Storage.
```
!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
```
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
## Understanding the Data
### `FuelConsumption.csv`:
We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01)
* **MODELYEAR** e.g. 2014
* **MAKE** e.g. Acura
* **MODEL** e.g. ILX
* **VEHICLE CLASS** e.g. SUV
* **ENGINE SIZE** e.g. 4.7
* **CYLINDERS** e.g 6
* **TRANSMISSION** e.g. A6
* **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9
* **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9
* **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2
* **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0
## Reading the data in
```
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
```
### Data Exploration
Let's first have a descriptive exploration on our data.
```
# summarize the data
df.describe()
```
Let's select some features to explore more.
```
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
```
We can plot each of these features:
```
viz = cdf[['CYLINDERS','ENGINESIZE','CO2EMISSIONS','FUELCONSUMPTION_COMB']]
viz.hist()
plt.show()
```
Now, let's plot each of these features against the Emission, to see how linear their relationship is:
```
plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("FUELCONSUMPTION_COMB")
plt.ylabel("Emission")
plt.show()
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
```
## Practice
Plot **CYLINDERS** against the Emission, to see how linear their relationship is:
```
plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Cylinders")
plt.ylabel("Emission")
plt.show()
```
<details><summary>Click here for the solution</summary>
```python
plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Cylinders")
plt.ylabel("Emission")
plt.show()
```
</details>
#### Creating train and test dataset
Train/Test Split involves splitting the dataset into training and testing sets that are mutually exclusive. After which, you train with the training set and test with the testing set.
This will provide a more accurate evaluation of out-of-sample accuracy because the testing dataset is not part of the dataset that has been used to train the model. Therefore, it gives us a better understanding of how well our model generalizes to new data.
This means that we know the outcome of each data point in the testing dataset, making it great to test with! Since this data has not been used to train the model, the model has no knowledge of the outcome of these data points. So, in essence, it is truly an out-of-sample testing.
Let's split our dataset into train and test sets. 80% of the entire dataset will be used for training and 20% for testing. We create a mask to select random rows using **np.random.rand()** function:
```
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
```
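Note that the random mask yields only an *approximate* 80/20 split, since each row is an independent coin flip, and it produces a different split on every run unless the generator is seeded (the lab's cell has no seed). A small sketch:

```python
import numpy as np

np.random.seed(42)  # added here for reproducibility; the lab's cell omits a seed
n = 1000
msk = np.random.rand(n) < 0.8

# Every row lands in exactly one set, but the 80/20 ratio is only approximate
# because each row is an independent coin flip
n_train, n_test = msk.sum(), (~msk).sum()
print(n_train, n_test)
```

`sklearn.model_selection.train_test_split` achieves the same goal with an exact split size and a `random_state` argument.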
### Simple Regression Model
Linear Regression fits a linear model with coefficients B = (B1, ..., Bn) to minimize the 'residual sum of squares' between the actual value y in the dataset, and the predicted value yhat using linear approximation.
#### Train data distribution
```
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
```
#### Modeling
Using sklearn package to model data.
```
from sklearn import linear_model
regr = linear_model.LinearRegression()
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit (train_x, train_y)
# The coefficients
print ('Coefficients: ', regr.coef_)
print ('Intercept: ', regr.intercept_)
```
As mentioned before, **Coefficient** and **Intercept** in the simple linear regression, are the parameters of the fit line.
Given that it is a simple linear regression, with only 2 parameters, and knowing that the parameters are the intercept and slope of the line, sklearn can estimate them directly from our data.
Notice that all of the data must be available to traverse and calculate the parameters.
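For a single feature, the least-squares estimates have a closed form: slope = cov(x, y)/var(x) and intercept = mean(y) - slope*mean(x). This can be checked directly with NumPy on toy data (the numbers below are invented stand-ins for ENGINESIZE and CO2EMISSIONS, not the lab's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 6.0, 100)                           # invented engine sizes
y = 40.0 * x + 120.0 + rng.normal(scale=5.0, size=100)   # invented CO2-like target

# Closed-form ordinary least squares for one feature
# (ddof must match between cov and var for the ratio to be correct)
slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
intercept = y.mean() - slope * x.mean()
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```

sklearn's `LinearRegression` computes exactly these values in the one-feature case, so `regr.coef_` and `regr.intercept_` can be sanity-checked this way.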
#### Plot outputs
We can plot the fit line over the data:
```
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.plot(train_x, regr.coef_[0][0]*train_x + regr.intercept_[0], '-r')
plt.xlabel("Engine size")
plt.ylabel("Emission")
```
#### Evaluation
We compare the actual values and predicted values to calculate the accuracy of a regression model. Evaluation metrics play a key role in the development of a model, as they provide insight into areas that require improvement.
There are different model evaluation metrics; let's use MSE here to calculate the accuracy of our model based on the test set:
* Mean Absolute Error (MAE): the mean of the absolute value of the errors. This is the easiest metric to understand, since it's just the average error.
* Mean Squared Error (MSE): the mean of the squared errors. It's more popular than MAE because squaring penalizes large errors more heavily than small ones.
* Root Mean Squared Error (RMSE): the square root of the MSE, expressed in the same units as the target.
* R-squared is not an error, but rather a popular metric to measure the performance of your regression model. It represents how close the data points are to the fitted regression line. The higher the R-squared value, the better the model fits your data. The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse).
```
from sklearn.metrics import r2_score
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_y_ = regr.predict(test_x)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y , test_y_) )
```
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="https://www.ibm.com/analytics/spss-statistics-software?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://www.ibm.com/cloud/watson-studio?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01">Watson Studio</a>
### Thank you for completing this lab!
## Author
Saeed Aghabozorgi
### Other Contributors
<a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01" target="_blank">Joseph Santarcangelo</a>
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ------------- | ---------------------------------- |
| 2020-11-03 | 2.1 | Lakshmi Holla | Changed URL of the csv |
| 2020-08-27 | 2.0 | Lavanya | Moved lab to course repo in GitLab |
<h3 align="center">© IBM Corporation 2020. All rights reserved.</h3>