<a href="https://colab.research.google.com/github/wileyw/DeepLearningDemos/blob/master/SinGAN/DoubleGAN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# SinGAN
[Official SinGAN Repository](https://github.com/tamarott/SinGAN)
In this notebook, we will implement SinGAN and put together a homework assignment so that others can learn how to implement SinGAN as well.
```
%cd /content/
!git clone https://github.com/mswang12/SinGAN.git
%cd /content/SinGAN/
!git checkout experimental
# Explore Input images here:
%cd /content/SinGAN/Input/Images/
!ls
import cv2
import glob
from google.colab.patches import cv2_imshow
print('original image')
original_img_path = '/content/SinGAN/Input/Images/volacano.png'
img = cv2.imread(original_img_path)
print(img.shape)
cv2_imshow(img)
print('original image')
original_img_path = '/content/SinGAN/Input/Images/colusseum.png'
img2 = cv2.imread(original_img_path)
new_img = cv2.resize(img2, (img.shape[1], img.shape[0]))
print(new_img.shape)
cv2_imshow(new_img)
cv2.imwrite('/content/SinGAN/Input/Images/tree_resized.png', new_img)
!ls
```
# Let's train SinGAN here
## Notes
1. mode: "rand" vs "rec". "rand" generates noise on the fly and uses Z_opt only for its size; "rec" reuses the recorded Z_opt without changing it
2. z_opt is a fixed, unique noise sample; it is used to monitor training results with the same noise every time
3. Things to figure out, sample around the images for cropping
4. draw_concat() creates a new image with the inputs (noise + previous image, previous image)
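The first two notes can be sketched as a self-contained toy (numpy only; `generate_noise` here is a simplified stand-in for SinGAN's `functions.generate_noise`, and the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_noise(shape):
    # Stand-in for SinGAN's functions.generate_noise
    return rng.standard_normal(shape)

# z_opt is drawn once and then kept fixed; "rec" mode reuses it unchanged
# so training progress can be monitored with the same noise every epoch.
z_opt = generate_noise((1, 3, 25, 34))

def get_noise(mode, z_opt):
    if mode == 'rand':
        # fresh noise on every call; z_opt only supplies the target size
        return generate_noise(z_opt.shape)
    if mode == 'rec':
        # the recorded z_opt, returned without modification
        return z_opt
    raise ValueError(mode)

a = get_noise('rand', z_opt)
b = get_noise('rand', z_opt)
c = get_noise('rec', z_opt)
print(a.shape == z_opt.shape)  # True: "rand" takes only the size from z_opt
print(np.array_equal(a, b))    # False: "rand" resamples on every call
print(c is z_opt)              # True: "rec" reuses the stored noise object
```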
```
# Let's pull out the important functions we want to reimplement
def train_single_scale2(netD,netG,reals,Gs,Zs,in_s,NoiseAmp,opt,centers=None):
print('placeholder')
real = reals[len(Gs)]
opt.nzx = real.shape[2]#+(opt.ker_size-1)*(opt.num_layer)
opt.nzy = real.shape[3]#+(opt.ker_size-1)*(opt.num_layer)
opt.receptive_field = opt.ker_size + ((opt.ker_size-1)*(opt.num_layer-1))*opt.stride
pad_noise = int(((opt.ker_size - 1) * opt.num_layer) / 2)
pad_image = int(((opt.ker_size - 1) * opt.num_layer) / 2)
if opt.mode == 'animation_train':
opt.nzx = real.shape[2]+(opt.ker_size-1)*(opt.num_layer)
opt.nzy = real.shape[3]+(opt.ker_size-1)*(opt.num_layer)
pad_noise = 0
m_noise = nn.ZeroPad2d(int(pad_noise))
m_image = nn.ZeroPad2d(int(pad_image))
alpha = opt.alpha
fixed_noise = functions.generate_noise([opt.nc_z,opt.nzx,opt.nzy],device=opt.device)
z_opt = torch.full(fixed_noise.shape, 0, device=opt.device)
z_opt = m_noise(z_opt)
# setup optimizer
optimizerD = optim.Adam(netD.parameters(), lr=opt.lr_d, betas=(opt.beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=opt.lr_g, betas=(opt.beta1, 0.999))
schedulerD = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizerD,milestones=[1600],gamma=opt.gamma)
schedulerG = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizerG,milestones=[1600],gamma=opt.gamma)
errD2plot = []
errG2plot = []
D_real2plot = []
D_fake2plot = []
z_opt2plot = []
# NOTE: Train for only 5 epochs to speed things up
#for epoch in range(int(opt.niter / 2)):
for epoch in range(5):
if (Gs == []) & (opt.mode != 'SR_train'):
z_opt = functions.generate_noise([1,opt.nzx,opt.nzy], device=opt.device)
z_opt = m_noise(z_opt.expand(1,3,opt.nzx,opt.nzy))
noise_ = functions.generate_noise([1,opt.nzx,opt.nzy], device=opt.device)
noise_ = m_noise(noise_.expand(1,3,opt.nzx,opt.nzy))
else:
noise_ = functions.generate_noise([opt.nc_z,opt.nzx,opt.nzy], device=opt.device)
noise_ = m_noise(noise_)
############################
# (1) Update D network: maximize D(x) + D(G(z))
###########################
for j in range(int(opt.Dsteps)):
# train with real
netD.zero_grad()
output = netD(real).to(opt.device)
#D_real_map = output.detach()
errD_real = -output.mean()#-a
errD_real.backward(retain_graph=True)
D_x = -errD_real.item()
# train with fake
if (j==0) & (epoch == 0):
if (Gs == []) & (opt.mode != 'SR_train'):
prev = torch.full([1,opt.nc_z,opt.nzx,opt.nzy], 0, device=opt.device)
in_s = prev
prev = m_image(prev)
z_prev = torch.full([1,opt.nc_z,opt.nzx,opt.nzy], 0, device=opt.device)
z_prev = m_noise(z_prev)
opt.noise_amp = 1
elif opt.mode == 'SR_train':
z_prev = in_s
criterion = nn.MSELoss()
RMSE = torch.sqrt(criterion(real, z_prev))
opt.noise_amp = opt.noise_amp_init * RMSE
z_prev = m_image(z_prev)
prev = z_prev
else:
prev = draw_concat(Gs,Zs,reals,NoiseAmp,in_s,'rand',m_noise,m_image,opt)
prev = m_image(prev)
z_prev = draw_concat(Gs,Zs,reals,NoiseAmp,in_s,'rec',m_noise,m_image,opt)
criterion = nn.MSELoss()
RMSE = torch.sqrt(criterion(real, z_prev))
opt.noise_amp = opt.noise_amp_init*RMSE
z_prev = m_image(z_prev)
else:
prev = draw_concat(Gs,Zs,reals,NoiseAmp,in_s,'rand',m_noise,m_image,opt)
prev = m_image(prev)
if opt.mode == 'paint_train':
prev = functions.quant2centers(prev,centers)
plt.imsave('%s/prev.png' % (opt.outf), functions.convert_image_np(prev), vmin=0, vmax=1)
if (Gs == []) & (opt.mode != 'SR_train'):
noise = noise_
else:
noise = opt.noise_amp*noise_+prev
fake = netG(noise.detach(),prev)
output = netD(fake.detach())
# NOTE: netD outputs a tensor. The discriminator is fully convolutional and does not depend on the size of the image.
# An image is real or fake depending on the mean of the output tensor.
# Maybe we can talk about this in our Blog post?
errD_fake = output.mean()
errD_fake.backward(retain_graph=True)
D_G_z = output.mean().item()
gradient_penalty = functions.calc_gradient_penalty(netD, real, fake, opt.lambda_grad, opt.device)
gradient_penalty.backward()
errD = errD_real + errD_fake + gradient_penalty
optimizerD.step()
errD2plot.append(errD.detach())
############################
# (2) Update G network: maximize D(G(z))
###########################
for j in range(opt.Gsteps):
netG.zero_grad()
output = netD(fake)
#D_fake_map = output.detach()
errG = -output.mean()
errG.backward(retain_graph=True)
if alpha!=0:
loss = nn.MSELoss()
if opt.mode == 'paint_train':
z_prev = functions.quant2centers(z_prev, centers)
plt.imsave('%s/z_prev.png' % (opt.outf), functions.convert_image_np(z_prev), vmin=0, vmax=1)
Z_opt = opt.noise_amp*z_opt+z_prev
rec_loss = alpha*loss(netG(Z_opt.detach(),z_prev),real)
rec_loss.backward(retain_graph=True)
rec_loss = rec_loss.detach()
else:
Z_opt = z_opt
rec_loss = 0
optimizerG.step()
errG2plot.append(errG.detach()+rec_loss)
D_real2plot.append(D_x)
D_fake2plot.append(D_G_z)
z_opt2plot.append(rec_loss)
if epoch % 25 == 0 or epoch == (opt.niter-1):
print('scale %d:[%d/%d]' % (len(Gs), epoch, opt.niter))
if epoch % 500 == 0 or epoch == (opt.niter-1):
plt.imsave('%s/fake_sample.png' % (opt.outf), functions.convert_image_np(fake.detach()), vmin=0, vmax=1)
plt.imsave('%s/G(z_opt).png' % (opt.outf), functions.convert_image_np(netG(Z_opt.detach(), z_prev).detach()), vmin=0, vmax=1)
#plt.imsave('%s/D_fake.png' % (opt.outf), functions.convert_image_np(D_fake_map))
#plt.imsave('%s/D_real.png' % (opt.outf), functions.convert_image_np(D_real_map))
#plt.imsave('%s/z_opt.png' % (opt.outf), functions.convert_image_np(z_opt.detach()), vmin=0, vmax=1)
#plt.imsave('%s/prev.png' % (opt.outf), functions.convert_image_np(prev), vmin=0, vmax=1)
#plt.imsave('%s/noise.png' % (opt.outf), functions.convert_image_np(noise), vmin=0, vmax=1)
#plt.imsave('%s/z_prev.png' % (opt.outf), functions.convert_image_np(z_prev), vmin=0, vmax=1)
torch.save(z_opt, '%s/z_opt.pth' % (opt.outf))
schedulerD.step()
schedulerG.step()
functions.save_networks(netG,netD,z_opt,opt)
return z_opt,in_s,netG
# Define the Networks here
import torch
import torch.nn as nn
import numpy as np
import torch.nn.functional as F
class ConvBlock(nn.Sequential):
def __init__(self, in_channel, out_channel, ker_size, padd, stride):
super(ConvBlock, self).__init__()
# NOTE: Is there a reason why BatchNorm2d is before and not after LeakyReLU?
self.add_module('conv', nn.Conv2d(in_channel, out_channel, kernel_size=ker_size, stride=stride, padding=padd)),
self.add_module('norm', nn.BatchNorm2d(out_channel)),
self.add_module('LeakyRelu', nn.LeakyReLU(0.2, inplace=True))
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv2d') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('Norm') != -1:
# NOTE: normal_(1.0, 0.01) draws from a normal distribution with mean 1.0 and std 0.01 (not zero mean / unit variance).
# BatchNorm scale weights are initialized near 1 so each layer starts close to an identity scaling; the bias below is zeroed.
# https://forums.fast.ai/t/how-is-batch-norm-initialized/39818
m.weight.data.normal_(1.0, 0.01)
m.bias.data.fill_(0)
class WDiscriminator2(nn.Module):
def __init__(self, opt):
super(WDiscriminator2, self).__init__()
self.is_cuda = torch.cuda.is_available()
N = int(opt.nfc)
self.head = ConvBlock(opt.nc_im,N,opt.ker_size,opt.padd_size,1)
self.body = nn.Sequential()
for i in range(opt.num_layer-2):
N = int(opt.nfc/pow(2,(i+1)))
block = ConvBlock(max(2*N,opt.min_nfc),max(N,opt.min_nfc),opt.ker_size,opt.padd_size,1)
self.body.add_module('block%d'%(i+1),block)
self.tail = nn.Conv2d(max(N,opt.min_nfc),1,kernel_size=opt.ker_size,stride=1,padding=opt.padd_size)
def forward(self,x):
x = self.head(x)
x = self.body(x)
x = self.tail(x)
return x
class GeneratorConcatSkip2CleanAdd2(nn.Module):
def __init__(self, opt):
super(GeneratorConcatSkip2CleanAdd2, self).__init__()
self.is_cuda = torch.cuda.is_available()
N = opt.nfc
self.head = ConvBlock(opt.nc_im,N,opt.ker_size,opt.padd_size,1) #GenConvTransBlock(opt.nc_z,N,opt.ker_size,opt.padd_size,opt.stride)
self.body = nn.Sequential()
for i in range(opt.num_layer-2):
N = int(opt.nfc/pow(2,(i+1)))
block = ConvBlock(max(2*N,opt.min_nfc),max(N,opt.min_nfc),opt.ker_size,opt.padd_size,1)
self.body.add_module('block%d'%(i+1),block)
self.tail = nn.Sequential(
nn.Conv2d(max(N,opt.min_nfc),opt.nc_im,kernel_size=opt.ker_size,stride =1,padding=opt.padd_size),
nn.Tanh()
)
def forward(self,x,y):
x = self.head(x)
x = self.body(x)
x = self.tail(x)
# NOTE: No down/upsampling here: y is center-cropped to x's spatial size (removing the padding) before the residual addition.
ind = int((y.shape[2]-x.shape[2])/2)
y = y[:,:,ind:(y.shape[2]-ind),ind:(y.shape[3]-ind)]
return x+y
class DummyOpt:
def __init__(self):
self.nfc = 32
self.nc_im = 3
self.ker_size = 3
self.padd_size = 1
self.num_layer = 5
self.min_nfc = 3
opt_example = DummyOpt()
D_example = WDiscriminator2(opt_example)
G_example = GeneratorConcatSkip2CleanAdd2(opt_example)
print(D_example)
print(G_example)
def read_image2(opt):
x = img.imread('%s/%s' % (opt.input_dir,opt.input_name))
x = functions.np2torch(x,opt)
x = x[:,0:3,:,:]
x2 = img.imread('%s/%s' % (opt.input_dir,opt.input_name2))
x2 = functions.np2torch(x2,opt)
x2 = x2[:,0:3,:,:]
array = [x, x2]
return array
def train2(opt,Gs,Zs,reals, reals2,NoiseAmp):
array = read_image2(opt)
real_ = array[0]
real_2 = array[1]
in_s = 0
scale_num = 0
real = imresize(real_,opt.scale1,opt)
real2 = imresize(real_2,opt.scale1,opt)
reals = functions.creat_reals_pyramid(real,reals,opt)
reals2 = functions.creat_reals_pyramid(real2,reals2,opt)
nfc_prev = 0
while scale_num<opt.stop_scale+1:
opt.nfc = min(opt.nfc_init * pow(2, math.floor(scale_num / 4)), 128)
opt.min_nfc = min(opt.min_nfc_init * pow(2, math.floor(scale_num / 4)), 128)
opt.out_ = functions.generate_dir2save(opt)
opt.outf = '%s/%d' % (opt.out_,scale_num)
try:
os.makedirs(opt.outf)
except OSError:
pass
#plt.imsave('%s/in.png' % (opt.out_), functions.convert_image_np(real), vmin=0, vmax=1)
#plt.imsave('%s/original.png' % (opt.out_), functions.convert_image_np(real_), vmin=0, vmax=1)
plt.imsave('%s/real_scale.png' % (opt.outf), functions.convert_image_np(reals[scale_num]), vmin=0, vmax=1)
D_curr,G_curr = init_models(opt)
if (nfc_prev==opt.nfc):
G_curr.load_state_dict(torch.load('%s/%d/netG.pth' % (opt.out_,scale_num-1)))
D_curr.load_state_dict(torch.load('%s/%d/netD.pth' % (opt.out_,scale_num-1)))
for j in range(100):
z_curr,in_s,G_curr = train_single_scale2(D_curr,G_curr,reals,Gs,Zs,in_s,NoiseAmp,opt)
z_curr,in_s,G_curr = train_single_scale2(D_curr,G_curr,reals2,Gs,Zs,in_s,NoiseAmp,opt)
G_curr = functions.reset_grads(G_curr,False)
G_curr.eval()
D_curr = functions.reset_grads(D_curr,False)
D_curr.eval()
Gs.append(G_curr)
Zs.append(z_curr)
NoiseAmp.append(opt.noise_amp)
torch.save(Zs, '%s/Zs.pth' % (opt.out_))
torch.save(Gs, '%s/Gs.pth' % (opt.out_))
torch.save(reals, '%s/reals.pth' % (opt.out_))
torch.save(NoiseAmp, '%s/NoiseAmp.pth' % (opt.out_))
scale_num+=1
nfc_prev = opt.nfc
del D_curr,G_curr
return
%cd /content/SinGAN
!git checkout experimental
![ -d TrainedModels ] && rm -r TrainedModels
import torch
import torch.nn as nn
import sys
import os
# Import helper functions from SinGAN
import config
from config import get_arguments
from SinGAN.manipulate import *
from SinGAN.training import *
import SinGAN.functions as functions
print('Implement SinGAN here...')
# Replace the specific functions we want to reimplement
SinGAN.training.train_single_scale = train_single_scale2
SinGAN.training.train = train2
SinGAN.models.WDiscriminator = WDiscriminator2
#SinGAN.models.GeneratorConcatSkip2CleanAdd = GeneratorConcatSkip2CleanAdd2
functions.read_image = read_image2
del sys.argv[:]
sys.argv.append('main_train.py')
parser = get_arguments()
parser.add_argument('--input_dir', help='input image dir', default='Input/Images')
parser.add_argument('--input_name', help='input image name', default='birds.png')
parser.add_argument('--mode', help='task to be done', default='train')
opt = parser.parse_args()
opt = functions.post_config(opt)
Gs = []
Zs = []
reals = []
reals2 = []
NoiseAmp = []
dir2save = functions.generate_dir2save(opt)
if os.path.exists(dir2save):
print('trained model already exists: {}'.format(dir2save))
else:
try:
os.makedirs(dir2save)
except OSError:
pass
print(opt)
print(functions.read_image)
#opt.ref_image = ''
opt.input_name = 'volacano.png'
opt.input_name2 = 'tree_resized.png'
array = functions.read_image(opt)
real = array[0]
real2 = array[1]
#real = functions.read_image(opt)
functions.adjust_scales2image(real, opt)
train2(opt, Gs, Zs, reals, reals2, NoiseAmp)
SinGAN_generate(Gs,Zs,reals,NoiseAmp,opt)
```
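The NOTE inside `train_single_scale2` (the discriminator is fully convolutional, so its verdict is the mean of a patch-score map whose size follows the input) can be illustrated without PyTorch. Below is a minimal single-channel sketch; the kernel is random and has nothing to do with SinGAN's learned weights:

```python
import numpy as np

def conv2d_same(x, kernel):
    """3x3 single-channel convolution, stride 1, zero padding 1."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * kernel)
    return out

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))

# The same "network" accepts any input size: with stride 1 and padding
# (k-1)/2, the patch-score map has the same spatial size as the input.
for shape in [(32, 48), (64, 64)]:
    img = rng.standard_normal(shape)
    score_map = conv2d_same(img, kernel)
    decision = score_map.mean()  # a single scalar judges real vs. fake
    print(shape, score_map.shape)
```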
# Evaluation: Let's generate some SinGAN images and look at the results
```
!python3 random_samples.py --input_name volacano.png --mode random_samples_arbitrary_sizes --scale_h 1 --scale_v 1
!ls
!ls -l Output/RandomSamples/volacano
!ls -l Output/RandomSamples/volacano/gen_start_scale=0
import cv2
import glob
from google.colab.patches import cv2_imshow
print('original image')
original_img_path = 'Input/Images/volacano.png'
img = cv2.imread(original_img_path)
cv2_imshow(img)
print('original image')
original_img_path = 'Input/Images/colusseum.png'
img = cv2.imread(original_img_path)
cv2_imshow(img)
# Get generated images
img_paths = glob.glob('Output/RandomSamples/volacano/gen_start_scale=0/*.png')
print('random sample')
img = cv2.imread(img_paths[0])
cv2_imshow(img)
print('random sample')
img = cv2.imread(img_paths[1])
cv2_imshow(img)
print('random sample')
img = cv2.imread(img_paths[2])
cv2_imshow(img)
print('random sample')
img = cv2.imread(img_paths[3])
cv2_imshow(img)
print('random sample')
img = cv2.imread(img_paths[4])
cv2_imshow(img)
print('random sample')
img = cv2.imread(img_paths[5])
cv2_imshow(img)
```
# 12. Analysing proteins using python
In previous sections we focused on showing you the basic components of Python, primarily looking at small examples where some input data is processed to generate a text or numerical output.
In this section we want to show you how you can go beyond this and use python to do everything from loading complex structure files to generating graphs and interactive objects. We don't necessarily expect you to learn exactly how all of this works; instead, we want to show you what can be done should you wish to look further into these tools and libraries.
The particular use case we are looking at is some basic analysis of crystallographic coordinates for a protein (HIV-1 protease) in complex with the ligand indinavir. It assumes that you have a certain amount of prior knowledge about the type of data that can be collected and deposited in the RCSB PDB from crystallographic experiments. For more information please see the [RCSB PDB website](https://www.rcsb.org/).
It is worth noting that we are only providing a very minimal overview of some of the things you could do. If you want to chat about how you could use these tools in your own work, please do get in contact with one of the course instructors.
### Python libraries
In this tutorial we will be using three main non-standard python libraries:
1. [MDAnalysis](https://www.mdanalysis.org/):
MDAnalysis is a python library primarily developed to help with the analysis of Molecular Dynamics (MD) trajectories. Beyond just MD, it offers many different tools and functions that can be useful when trying to explore atomistic models.
2. [NGLView](https://github.com/nglviewer/nglview)
NGLView is a powerful widget that allows you to visualise molecular models within jupyter notebooks.
3. [Matplotlib](https://matplotlib.org/)
One of the main plotting tools for python, matplotlib offers a wide range of functionality to generate graphs of everything from a simple scatter plot to [complex animated 3D plots](https://matplotlib.org/gallery/animation/random_walk.html#sphx-glr-gallery-animation-random-walk-py).
## Using MDAnalysis to load a PDB structure
Here we will look at how we can use MDAnalysis to load a PDB file (stored under `datafiles/1HSG.pdb`) and look at its basic properties (e.g. number of atoms, residues, chains, non-protein atoms).
We will only be giving a very superficial overview of MDAnalysis, if you want to know more, please have a look at the [MDAnalysis user guide](https://userguide.mdanalysis.org/1.0.0/index.html).
One of the core components of MDAnalysis is the `Universe` class. You can consider this as the container where we store all the information about the structure file. In a PDB structure, this includes (amongst many other things): 3D coordinates for all the heavy atoms, atom names (i.e. pseudo-arbitrary labels about the types of atoms in the structure), elements, residue names, chain identifiers, and temperature factors.
First, let us create a `Universe` class and call it `pdb` by passing it a string with the path to our PDB file:
```
import MDAnalysis
pdb = MDAnalysis.Universe('datafiles/1HSG.pdb')
```
The `Universe` object has plenty of different attributes and methods, most of which we will not cover here.
The main one that you will work with in the MD tutorial is `trajectory`, which allows you to traverse through a simulation trajectory. However since we only have a single PDB structure, we don't have to deal with this here.
Let's use the `Universe` to gather some basic information about the 1HSG structure. Take some time to look at its [PDB entry](https://www.rcsb.org/structure/1hsg). From the page, we can see that the structure has a total of 1686 atoms, 198 residues, and two chains (called `A` and `B`). We can use MDAnalysis to recover this data.
```
# We can get the number of atoms using the "atoms" sub-class
# "atoms" handles all the information about the atoms in a structure
# here it has a `n_atoms` attribute which tells you how many atoms there are
print("number of atoms: ", pdb.atoms.n_atoms)
# We can also use `n_residues` to get the number of residues
print("number of residues: ", pdb.atoms.n_residues)
# And `n_segments` for chains (MDAnalysis calls chains "segments")
print("number of chains", pdb.atoms.n_segments)
```
As you probably noticed, the number of residues was reported as 326, not 198. Why do you think this is?
> Answer: the PDB page states the number of protein residues, so there are 128 non-protein residues
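The arithmetic behind this answer can be checked directly (198 protein residues from the PDB page, 326 total from MDAnalysis):

```python
total_residues = 326    # reported by pdb.atoms.n_residues
protein_count = 198     # protein residues listed on the PDB page
non_protein = total_residues - protein_count
print(non_protein)  # 128
```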
Let's use MDAnalysis to get a little bit more information about these residues.
Here we use one of the `Universe` methods `select_atoms`. Similar to what you may get a chance to do with VMD (MD tutorial), and Pymol (docking/homology modelling tutorial), this allows you to use a text based selection to capture a specific portion of your `Universe`.
For example, if we wanted to get all the protein residues:
```
protein_residues = pdb.select_atoms('protein')
print("number of protein residues: ", protein_residues.atoms.n_residues)
```
Similarly, we can do the same to get the number of non-protein residues:
```
non_protein_residues = pdb.select_atoms('not protein')
print("number of non-protein residues:", non_protein_residues.atoms.n_residues)
```
We can keep using `select_atoms` on these newly created subsampled objects to go deeper into the details.
How many of them are waters?
```
# Create a selection from non_protein_residues that only includes waters
# In the PDB waters are named HOH, so we can make a selection from this
# Here we use the "resname" selection to select by residue name
waters = non_protein_residues.select_atoms('resname HOH')
print("number of waters: ", waters.atoms.n_residues)
# What about non-water non-protein residues?
not_water = non_protein_residues.select_atoms('not resname HOH')
print("number of non-water, non-protein residues: ", not_water.atoms.n_residues)
```
As we can see, there is 1 non-protein non-water residue.
Let's find out more information about it.
First let's see what this residue is called. Here we will be using the `residues` object, which is like `atoms`, but rather than containing atomic information it contains information about the residues. Specifically here we are looking at `resnames` that tells us what the residue name is:
```
print("residue name: ", not_water.residues.resnames)
```
MK1 is the PDB name for the drug indinavir. You can look at the PDB entry for it [here](https://www.rcsb.org/ligand/MK1).
Since the PDB file contains per-atom information (in `atoms`), we can use MDAnalysis to list the atom types that make up indinavir:
```
print(not_water.atoms.types)
```
We can also use the coordinates from the PDB file to obtain more information about MK1. Since MDAnalysis takes the coordinate information from the PDB file, we could use the MK1 coordinates (accessible under `not_water.atoms.positions`) to calculate the center of geometry. MDAnalysis provides a simple method for doing this called `center_of_geometry()`:
```
print(not_water.center_of_geometry())
```
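Under the hood, the (unweighted) center of geometry is just the mean of the atomic coordinates, which is easy to verify with numpy on made-up positions:

```python
import numpy as np

positions = np.array([  # made-up coordinates, shape (n_atoms, 3)
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 2.0],
])
cog = positions.mean(axis=0)  # what center_of_geometry() computes
print(cog)  # [0.5 0.5 0.5]
```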
#### Exercise 1 - Protein center of geometry
What about the center of the geometry of the protein? Using the `protein_residues` subselection we made earlier, apply the same thing to work out what the center of geometry of the protein atoms is.
```
# Exercise 1:
print(protein_residues.center_of_geometry())
```
## Visualising a PDB using NGLView
Having access to all the information contained in a PDB file is great, however looking at a text or numerical outputs can be quite a lot to digest. Here we can use NGLView to have a look at the visual representation of our protein.
Handily, nglview offers a direct interface to read in MDAnalysis objects through the `show_mdanalysis` method. To facilitate things, we will be doing so here to look at the `Universe` named `pdb` that we created earlier. There are plenty of other ways to feed information to and customise NGLView, but we will leave it to you to look into it more, if it is something you are interested in.
### NGLView controls
After executing the code below, you should see a widget pop up with the representation of a protein in cartoon form.
NGLView widgets can be directly interacted with, here are some basic things you can do:
1. Rotating the structure
This can be done by left-clicking within the protein viewer and dragging a given direction.
2. Zooming into the structure
This can be done by scrolling with your mouse wheel.
3. Translating the structure
This can be done by right-clicking and dragging with your mouse.
4. Going full screen
This can be done by going to "view" in the toolbar and clicking on "Full screen".
Once entered, you can exit full screen by pressing the "Esc" button on your keyboard.
```
import nglview
# Use the `show_mdanalysis` method to parse an MDAnalysis Universe class
pdbview = nglview.show_mdanalysis(pdb)
# Here we set this gui_style attribute so we get a nice interface to interact with
pdbview.gui_style = 'ngl'
# The defaults for NGLView are great, but let's customise a little bit
pdbview.clear_representations()
# We make the protein residues show up as cartoons coloured by their secondary structure
pdbview.add_representation('cartoon', selection='protein', color='sstruc')
# We make the ligand show up in a licorice representation
pdbview.add_representation('licorice', selection='MK1')
# We make the waters show up in red, using a ball-and-stick representation
pdbview.add_representation('ball+stick', selection='water', color='red')
# Finally we call the NGLView object to get an output
pdbview
```
## Looking at temperature factors
Up until now, we've done things that could mostly be done by looking at the [PDB entry for 1HSG](https://www.rcsb.org/structure/1hsg). Let's apply these things to look at something that could be useful on a day to day basis.
Here we will analyse the protein's temperature factors (also known as bfactors) to know which parts of the protein are moving the most. If you want to know more about temperature factors, see [this useful guide by the PDB](https://pdb101.rcsb.org/learn/guide-to-understanding-pdb-data/dealing-with-coordinates).
Temperature factors are recorded in PDB files and are read by MDAnalysis when available. These can be found as an attribute of the `atoms` class.
```
# Temperature factors of the protein residues
print(protein_residues.atoms.tempfactors)
```
Just printing the raw numbers isn't very informative.
What we can do here is plot the temperature factors of the alpha carbons in our protein.
To do this, let us first create a selection of the alpha carbon atoms (named "CA") for each chain:
```
# Alpha carbons for chain A (also known as segid A)
chainA_alphaC = protein_residues.select_atoms('name CA and segid A')
# Alpha carbons for chain B (also known as segid B)
chainB_alphaC = protein_residues.select_atoms('name CA and segid B')
```
Now let's use the plotting library matplotlib to create a plot of the alpha carbon temperature factors for each residue in each chain.
```
# We import pyplot from matplotlib
# Note the "inline" call is some jupyter magic to be able to show the plot
%matplotlib inline
from matplotlib import pyplot as plt
# We pass the residue ids and alpha carbon temperature factors to pyplot's plot function
plt.plot(chainA_alphaC.resids, chainA_alphaC.atoms.tempfactors, label='chain A')
plt.plot(chainB_alphaC.resids, chainB_alphaC.atoms.tempfactors, label='chain B')
# Let's add some titles and legends
plt.title('Plot of alpha carbon temperature factors')
plt.xlabel('residue number')
plt.ylabel('temperature factor')
plt.legend()
# We call show() to show the plot
plt.show()
```
Here we have a plot with the blue line showing the alpha carbon temperature factors for chain A, and the orange line for chain B. As we can see, the two chains don't completely agree, but there are clear patterns to observe. Specifically, we see very low temperature factors in the regions around residues 25 and 80, and defined peaks near residues 15 and 70.
Knowing this information can be quite useful when trying to work out what parts of your protein are moving and what might be influencing this motion.
That being said, a plot alone does not tell the whole story. What we can also do is use NGLView to map the temperature factors directly onto the cartoon representation of our protein. We do this in the following way:
```
# Create an NGL view from the pdb Universe we loaded earlier
pdbview = nglview.show_mdanalysis(pdb)
# Set the interaction session interface type
pdbview.gui_style = 'ngl'
# Clear the representations and add a cartoon representation coloured by "beta factor"
pdbview.clear_representations()
pdbview.add_representation('cartoon', color='bfactor')
# We'll also show the ligand atoms as licorice
pdbview.add_representation('licorice', selection='MK1')
# Show the widget
pdbview
```
Using the plot we created, can you work out what the colouring scheme of NGLView shows?
> Answer: Here we go from red being low beta factor regions, to blue being high ones. That is to say that bluer regions are more mobile.
Using the plot and the NGLView representation, can you explain why some regions are more mobile than others?
> Answer: think about which areas are more solvent exposed and therefore more likely to be in motion.
Looking at where the ligand is situated, are there any mobile residues that may influence binding?
> Answer: the loops composed of residues 49-52 are quite mobile and close to the ligand. In fact previous [work by Hornak et al.](https://www.pnas.org/content/103/4/915) shows that these can spontaneously open and close. Doing a molecular dynamics simulation (as you will in the MD tutorial), might be helpful in elucidating how these loops move.
<img src="../meta/logo.png" width=400 align="left"/>
Contributors:
- *Liubov Elkhovskaya* <span style="color:blue">lelkhovskaya@itmo.ru</span>
- *Alexander Kshenin* <span style="color:blue">adkshenin@itmo.ru</span>
- *Marina Balakhontceva* <span style="color:blue">mbalakhontceva@itmo.ru</span>
- *Sergey Kovalchuk* <span style="color:blue">kovalchuk@itmo.ru</span>
ProFIT automatically builds business process models from data. The input is an event log containing records of case identifiers and executed activities, ordered by their registration time in the system. The process model is represented as a directed graph, where the green node is the start of the process and the red node is its end.
<img src="../meta/pm_general.png" width=600 align="center"/>
## Package location
*To run this demo from the repository in a Jupyter notebook.*
Project link: https://github.com/Siella/ProFIT.
```
import os
import configparser
PATH = os.getcwd()[:os.getcwd().rfind('\\')] # path to the ProFIT directory
config = configparser.ConfigParser()
config.add_section("packageLocation")
config.set("packageLocation", "workingDir", PATH)
config.set("packageLocation", "packageDir", PATH+'\\profit')
```
## Import
```
import sys
sys.path.append(config["packageLocation"]["workingDir"])
sys.path.append(config["packageLocation"]["packageDir"])
from profit import ProcessMap
```
## How to use
To start using ProFIT, declare a variable and assign it an instance of the ProcessMap class, then pass the path to a log in CSV/TXT/XES format (or the data itself as a pandas.DataFrame) via the set_log method.
```
monitoring = PATH + "/demo/log_examples/remote_monitoring.csv"
declarations = PATH + "/demo/log_examples/DomesticDeclarations.xes"
import pandas as pd
# log demo
df_monitoring = pd.read_csv(monitoring, encoding='cp1251')
df_monitoring.head()
pm = ProcessMap()
pm.set_log(FILE_PATH = monitoring,
# data = df_monitoring,
encoding = 'cp1251')
pm.update()
```
After each configuration change, the update method must be called!
The render method returns the process model as a directed graph in the DOT language (supported and visualized by the Graphviz toolset). To display the model and save it in a Graphviz-supported format, call this method and pass the path to the directory where the result should be saved.
```
# without saving
pm.render()
```
С помощью методов set_rates и set_params пользователь может настроить несколько входных параметров:
- уровни отображения событий и переходов, регулирующие детализацию модели процесса;
- опцию построения оптимальной модели процесса, основанной на комбинированной оценке сложности и точности модели;
- параметр оптимизации, регулирующий простоту восприятия и полноту модели;
- опцию выделения мета-состояний путём агрегации циклов в модели;
- способы агрегации и проч. (см. документацию)
По умолчанию (без настроек) ищется оптимальная модель процесса, поэтому при изменении уровней отображений необходимо выставить optimize=False, чтобы отключить их автонастройку. Задача оптимизации выглядит следующим образом:
$$\mathcal{Q}(p, X^l) = (1-\lambda)\cdot F + \lambda\cdot C_{\mathcal{J}} \longrightarrow \min_{\theta},$$
where $\mathcal{Q}$ is the quality functional, $p$ is the algorithm that discovers a process model from the data, $X^l$ is a sub-sample of the log, $\lambda$ is a regularization coefficient, $F$ is the fitness function, $C_\mathcal{J}$ is the complexity function, and $\theta$ denotes the display rates.
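To make the objective concrete, here is a minimal, hypothetical Python sketch: `fitness` and `complexity` stand in for $F$ and $C_\mathcal{J}$ (both treated as losses, lower is better), and the candidate scores below are made up for illustration. The real ProFIT implementation computes these quantities from the event log rather than taking them as inputs.

```python
def quality(fitness, complexity, lam=0.5):
    """Combined quality Q = (1 - lam) * F + lam * C (lower is better)."""
    return (1 - lam) * fitness + lam * complexity

def best_rates(candidates, lam=0.5):
    """Pick the (activity_rate, path_rate) pair minimizing Q.

    `candidates` maps (activity_rate, path_rate) -> (F, C) scores.
    """
    return min(candidates, key=lambda k: quality(*candidates[k], lam=lam))

# Illustrative, made-up scores for three display-rate settings.
candidates = {
    (100, 100): (0.9, 1.0),  # full model: poor fitness score, high complexity
    (80, 15): (0.2, 0.3),
    (50, 5): (0.4, 0.2),
}
print(best_rates(candidates))  # -> (80, 15)
```

This is the grid-search view of the problem; the library's optimizer tunes $\theta$ automatically unless optimize=False is set.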
```
pm2 = ProcessMap()
pm2.set_log(FILE_PATH=declarations)
pm2.update()
pm2.render()
pm.set_rates(activity_rate=80, path_rate=15)
pm.set_params(optimize=False, aggregate=False)
pm.update()
pm.render()
```
### Meta-states discovering
By meta-states we mean significant cycles, i.e., cycles that occur frequently in the log.
<img src="../meta/cycles_joining.png" width=600 align="center"/>
Example of process model restructuring: (a) the initial model; (b) collapsing cycles with outer joining;
(c) collapsing cycles with inner joining and the all heuristic; (d) collapsing cycles with inner joining and the frequent heuristic.
```
pm.set_rates(activity_rate=80, path_rate=5)
pm.set_params(optimize=False,
              aggregate=True,
              heuristic='all',
              agg_type='inner')
pm.update()
pm.render()
```
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Data split
Data splitting is one of the most vital tasks in assessing recommendation systems. The splitting strategy strongly shapes the evaluation protocol, so practitioners should always choose it with care.
The code below shows how to apply different splitting strategies in specific scenarios.
## 0 Global settings
```
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import pyspark
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
from reco_utils.common.spark_utils import start_or_get_spark
from reco_utils.dataset.download_utils import maybe_download
from reco_utils.dataset.python_splitters import (
python_random_split,
python_chrono_split,
python_stratified_split
)
from reco_utils.dataset.spark_splitters import (
spark_random_split,
spark_chrono_split,
spark_stratified_split,
spark_timestamp_split
)
print("System version: {}".format(sys.version))
print("Pyspark version: {}".format(pyspark.__version__))
DATA_URL = "http://files.grouplens.org/datasets/movielens/ml-100k/u.data"
DATA_PATH = "ml-100k.data"
COL_USER = "UserId"
COL_ITEM = "MovieId"
COL_RATING = "Rating"
COL_PREDICTION = "Rating"
COL_TIMESTAMP = "Timestamp"
```
## 1 Data preparation
### 1.1 Data understanding
For illustration purposes, the examples below use the MovieLens-100K dataset.
```
filepath = maybe_download(DATA_URL, DATA_PATH)
data = pd.read_csv(filepath, sep="\t", names=[COL_USER, COL_ITEM, COL_RATING, COL_TIMESTAMP])
```
A glimpse at the data
```
data.head()
```
A little more...
```
data.describe()
```
And, more...
```
print(
"Total number of ratings are\t{}".format(data.shape[0]),
"Total number of users are\t{}".format(data[COL_USER].nunique()),
"Total number of items are\t{}".format(data[COL_ITEM].nunique()),
sep="\n"
)
```
### 1.2 Data transformation
Original timestamps are converted to ISO format.
```
data[COL_TIMESTAMP]= data.apply(
lambda x: datetime.strftime(datetime(1970, 1, 1, 0, 0, 0) + timedelta(seconds=x[COL_TIMESTAMP].item()), "%Y-%m-%d %H:%M:%S"),
axis=1
)
data.head()
```
## 2 Experimentation protocol
Experimentation protocol is usually set up to favor a reasonable evaluation for a specific recommendation scenario. For example,
* *Recommender-A* recommends movies to people based on collaborative rating similarities. To make sure the evaluation is statistically sound, the same set of users should be used for both model building and testing (to avoid user cold-start issues), so a stratified splitting strategy should be taken.
* *Recommender-B* recommends fashion products to customers. Its evaluation should consider the time-dependency of customer purchases, since customers' tastes in fashion items may drift over time. In this case, a chronological split should be used.
## 3 Data split
### 3.1 Random split
Random split simply takes in a data set and outputs the splits of the data, given the split ratios.
```
data_train, data_test = python_random_split(data, ratio=0.7)
data_train.shape[0], data_test.shape[0]
```
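Under the hood, a seeded random split amounts to shuffling row indices and cutting at the ratio. A minimal standard-library sketch follows; it is not the library implementation (`python_random_split` operates on DataFrames), but it illustrates why a fixed seed yields reproducible splits:

```python
import random

def random_split(n_rows, ratio=0.7, seed=42):
    """Shuffle row indices with a fixed seed and cut at `ratio`."""
    rng = random.Random(seed)
    idx = list(range(n_rows))
    rng.shuffle(idx)
    cut = int(n_rows * ratio)
    return idx[:cut], idx[cut:]

train_a, test_a = random_split(100)
train_b, test_b = random_split(100)
assert train_a == train_b  # same seed -> identical split
print(len(train_a), len(test_a))  # 70 30
```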
Sometimes a multi-split is needed.
```
data_train, data_validate, data_test = python_random_split(data, ratio=[0.6, 0.2, 0.2])
data_train.shape[0], data_validate.shape[0], data_test.shape[0]
```
Ratios can be integers as well.
```
data_train, data_validate, data_test = python_random_split(data, ratio=[3, 1, 1])
```
The integer ratios are normalized, producing the same proportions as the fractional ratios above.
```
data_train.shape[0], data_validate.shape[0], data_test.shape[0]
```
### 3.2 Chronological split
The chronological splitting method takes in a dataset and splits it on timestamps.
#### 3.2.1 "Filter by"
Chrono splitting can be done by either "user" or "item". For example, if it is by "user" with a splitting ratio of 0.7, the first 70% of each user's ratings (in time order) go into one split and the remaining 30% into the other. Note that a chronological split is not "random", since splitting is timestamp-dependent.
```
data_train, data_test = python_chrono_split(
data, ratio=0.7, filter_by="user",
col_user=COL_USER, col_item=COL_ITEM, col_timestamp=COL_TIMESTAMP
)
```
Take a look at the results for one particular user:
* The last 10 rows of the train data:
```
data_train[data_train[COL_USER] == 1].tail(10)
```
* The first 10 rows of the test data:
```
data_test[data_test[COL_USER] == 1].head(10)
```
All timestamps in the train data precede those in the test data.
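The per-user logic can be sketched as follows. This is a simplified stand-in, not the `python_chrono_split` implementation (which works on DataFrames): sort one user's rows by timestamp and send the earliest fraction to train.

```python
def chrono_split_user(rows, ratio=0.7):
    """Sort one user's rows by timestamp; earliest `ratio` fraction -> train."""
    rows = sorted(rows, key=lambda r: r["timestamp"])
    cut = int(len(rows) * ratio)
    return rows[:cut], rows[cut:]

rows = [{"item": i, "timestamp": t} for i, t in enumerate([5, 1, 9, 3, 7])]
train, test = chrono_split_user(rows)
# Every train timestamp precedes every test timestamp.
assert max(r["timestamp"] for r in train) <= min(r["timestamp"] for r in test)
```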
#### 3.2.2 Min-rating filter
A min-rating filter can be applied to the data before the chronological split. This ensures that each user/item has a sufficient number of ratings for a multi-way split.
For example, the following means splitting only applies to users that have at least 10 ratings.
```
data_train, data_test = python_chrono_split(
data, filter_by="user", min_rating=10, ratio=0.7,
col_user=COL_USER, col_item=COL_ITEM, col_timestamp=COL_TIMESTAMP
)
```
The row counts of the resulting splits may not sum to that of the original data, since users with fewer than 10 ratings are filtered out during the split.
```
data_train.shape[0] + data_test.shape[0], data.shape[0]
```
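The filtering step itself is simple to sketch: count ratings per user and drop users below the threshold. This is an illustrative stand-in for the library's filter, not its actual code:

```python
from collections import Counter

def min_rating_filter(rows, min_rating=10, key="user"):
    """Keep only rows whose `key` value occurs at least `min_rating` times."""
    counts = Counter(r[key] for r in rows)
    return [r for r in rows if counts[r[key]] >= min_rating]

rows = [{"user": "a"}] * 12 + [{"user": "b"}] * 3
kept = min_rating_filter(rows, min_rating=10)
assert {r["user"] for r in kept} == {"a"}  # user "b" has too few ratings
```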
### 3.3 Stratified split
The stratified splitting method takes in a dataset and splits it by either user or item. The split is stratified so that the same set of users or items appears in both the training and testing data sets.
Similar to chronological splitter, `filter_by` and `min_rating_filter` also apply to the stratified splitter.
The following example shows the split of the sample data with a ratio of 0.7, and for each user there should be at least 10 ratings.
```
data_train, data_test = python_stratified_split(
data, filter_by="user", min_rating=10, ratio=0.7,
col_user=COL_USER, col_item=COL_ITEM
)
data_train.shape[0] + data_test.shape[0], data.shape[0]
```
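The stratification property (every user appears in both splits) can be sketched like this. It is a simplification of the library splitter, which also randomizes rows within each group:

```python
from collections import defaultdict

def stratified_split(rows, ratio=0.7, key="user"):
    """Split each user's rows independently so every user lands in both sets."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[key]].append(r)
    train, test = [], []
    for g in groups.values():
        cut = max(1, int(len(g) * ratio))  # keep at least one row in train
        train += g[:cut]
        test += g[cut:]
    return train, test

rows = [{"user": u, "item": i} for u in "abc" for i in range(10)]
train, test = stratified_split(rows)
assert {r["user"] for r in train} == {r["user"] for r in test} == set("abc")
```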
### 3.4 Data split in scale
Spark DataFrames are used for scalable splitting. This allows the splitting operation to be performed on large datasets distributed across a Spark cluster.
For example, the code below illustrates a random split on a Spark DataFrame. For simplicity, the same MovieLens data, currently a pandas DataFrame, is loaded as a Spark DataFrame and used for splitting.
```
spark = start_or_get_spark()
data_spark = spark.read.csv(filepath)
data_spark_train, data_spark_test = spark_random_split(data_spark, ratio=0.7)
```
Note that Spark's random split does not guarantee a deterministic result. This can cause issues when the data is relatively small and a precise split is required.
```
data_spark_train.count(), data_spark_test.count()
```
## References
1. Dimitris Paraschakis et al, "Comparative Evaluation of Top-N Recommenders in e-Commerce: An Industrial Perspective", IEEE ICMLA, 2015, Miami, FL, USA.
2. Guy Shani and Asela Gunawardana, "Evaluating Recommendation Systems", Recommender Systems Handbook, Springer, 2015.
3. Apache Spark, url: https://spark.apache.org/.
# Tutorial 1: Neural Nets and Datasets
In this first tutorial, we'll cover the basics of training neural networks and loading/generating datasets. We've extended PyTorch neural networks with a number of handy tools, all of which we'll need to evaluate Lipschitz constants. Since we frequently work with neural networks trained on real or synthetic datasets, with or without regularization, we also cover some tools that help with these tasks.
This Jupyter notebook is intended to be hosted by a server running in the main `LipMIP/` folder (so the imports play nice).
```
# Step 1: Import things
import sys
sys.path.append('..')
import torch
import utilities as utils
from relu_nets import ReLUNet
import neural_nets.data_loaders as data_loaders
import neural_nets.train as train
import neural_nets.adv_attacks as adv_attacks
```
## 1: Building a neural net
We only consider neural networks built as compositions of affine and ReLU layers, and we develop a particular type of PyTorch `nn.Module` to encapsulate exactly these networks. They are initialized randomly and defined a priori based only on the size of each layer.
```
network_A = ReLUNet([4,8,16,2], bias=True) # defines a network R^4->R^2 with fully connected layers with biases
x = torch.rand((10, 4)) # 10 example inputs to network_A
y = network_A(x) # we directly feed inputs to network_A
print("Input: ", x)
print("Output:", y)
# We can also recover the inputs to each ReLU unit as such
preacts = network_A(x, return_preacts=True)
print(len(preacts)) # i'th element is the input to the i'th relu (starting from 0)
assert torch.all(y == preacts[-1]) # final element of preacts is the output of network_A(x)
print([_.shape for _ in preacts])
```
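The `return_preacts` behavior can be sketched framework-free. This hypothetical pure-Python version mirrors the idea (an affine stack with ReLU between layers, recording each pre-activation), but it is not the `ReLUNet` implementation:

```python
def relu_net_forward(x, weights, biases):
    """Affine+ReLU stack that records each layer's pre-activation."""
    preacts = []
    h = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        # affine step: z = W h + b
        z = [sum(w * v for w, v in zip(row, h)) + b_i for row, b_i in zip(W, b)]
        preacts.append(z)
        # ReLU on every layer except the last
        h = [max(0.0, v) for v in z] if i < len(weights) - 1 else z
    return h, preacts

# Tiny 1-1-1 network computing y = 2 * relu(x - 1)
out, pre = relu_net_forward([3.0], [[[1.0]], [[2.0]]], [[-1.0], [0.0]])
assert out == [4.0] and pre == [[2.0], [4.0]]
```

As in `ReLUNet`, the final pre-activation equals the network output because no ReLU follows the last affine layer.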
## 2: Loading or generating a dataset
Here we describe how to load the MNIST dataset, as well as the medley of synthetic datasets we use.
```
# The standard MNIST dataset can be loaded as
mnist_train = data_loaders.load_mnist_data('train', batch_size=16, shuffle=True) # Training data
mnist_val = data_loaders.load_mnist_data('val', batch_size=16, shuffle=True) # Validation data
# We can collect and display MNIST images as such
mnist_batch = next(iter(mnist_val))[0]
utils.display_images(mnist_batch)
# To select only a subset of the MNIST digits,
mnist17_train = data_loaders.load_mnist_data('train', digits=[1,7], batch_size=16, shuffle=True) # Training data
mnist17_val = data_loaders.load_mnist_data('val', digits=[1,7], batch_size=16, shuffle=True) # Validation data
mnist17_batch = next(iter(mnist17_val))[0]
utils.display_images(mnist17_batch)
'''
We define several synthetic datasets. Primarily, we use one we'll call a Random K-Cluster
This defines a collection of points over [0,1]^d where each has a label 1...C
Parameters are the :
- num_points: number of elements in the dataset (training AND validation)
- dimension: specifies the d, where the points reside in [0,1]^d
- num_classes: number of distinct labels
- radius: how far each point (in l2 norm) must be from the other points
- k: number of 'leaders' we select
The data generation works by randomly sampling num_points points from [0,1]^d,
such that they are all sufficiently separated. Then we randomly select k points to be 'leaders',
and uniformly randomly assign each 'leader' a label. Then we assign every other point the label
of their closest 'leader'.
'''
# Parameters of datasets are controlled with the RandomKParameters object
data_params = data_loaders.RandomKParameters(num_points=512, dimension=2, num_classes=2, k=10, radius=0.01)
# RandomDataset objects represent actual instantiations of a random dataset defined by the params above
random_dataset = data_loaders.RandomDataset(data_params, random_seed=1234)
random_train, random_val = random_dataset.split_train_val(0.75) # split data into training(75%) and val(25%) sets
# If 2-dimensional, we can visualize the dataset as such
random_dataset.plot_2d()
```
## 3: Training neural nets
With neural nets and datasets defined, we can perform training
```
network_B = ReLUNet([2, 16, 16, 16, 2]) # make a new net to classify the random_dataset defined above
# Parameters regarding how training are performed are contained within the TrainParameters object
# By default, we use CrossEntropyLoss and the Adam optimizer with lr=0.001, and test after every epoch
vanilla_train_params = train.TrainParameters(random_train, random_val, 500, # train for 500 epochs
test_after_epoch=100) # test after every 100 epochs
train.training_loop(network_B, vanilla_train_params)
# We can visualize the decision boundaries learned for networks with 2d inputs
network_B.display_decision_bounds((0.0, 1.0), (0.0, 1.0), 100)
# And we can overlay the dataset on top of this
ax = network_B.display_decision_bounds((0.0, 1.0), (0.0, 1.0), 100)
random_dataset.plot_2d(ax=ax)
```
## 4: Training With Regularization
We can incorporate custom regularizers into our training loop defined above. As an example, we'll apply standard Tikhonov (l2) regularization to the training of an MNIST network. We will also apply FGSM regularization against an $\ell_\infty$-bounded adversary.
```
# Training with l2-regularization
network_MNIST = ReLUNet([784, 32, 32, 10]) # simple MNIST network
# Reload the MNIST datasets
mnist_train = data_loaders.load_mnist_data('train', batch_size=128, shuffle=True) # Training data
mnist_val = data_loaders.load_mnist_data('val', batch_size=128, shuffle=True) # Validation data
# Build the components of the loss function
cross_entropy_loss = train.XEntropyReg(scalar=1.0)
l2_loss = train.LpWeightReg(lp='l2', scalar=0.01)
# Build the loss function to use
loss_functional = train.LossFunctional(regularizers=[cross_entropy_loss, l2_loss])
loss_functional.attach_network(network_MNIST)
# Train the network
mnist_train_params = train.TrainParameters(mnist_train, mnist_val, 10, loss_functional=loss_functional)
train.training_loop(network_MNIST, mnist_train_params)
# This can be sped up with the 'use_cuda=True' kwarg in training_loop(...)
# Training with FGSM Regularizers
network_MNIST.re_init_weights() # reset the weights to random
# Build the FGSM loss
fgsm_loss = train.LossFunctional(regularizers=[train.FGSM(0.1)]) #FGSM with adversary with 0.1 L_inf bound
fgsm_loss.attach_network(network_MNIST)
# Train the network
mnist_train_params = train.TrainParameters(mnist_train, mnist_val, 10, loss_functional=fgsm_loss)  # use the FGSM loss built above
train.training_loop(network_MNIST, mnist_train_params)
# Evaluate the adversarially trained network
import neural_nets.adv_attacks as adv_attacks
adversary = adv_attacks.build_attack_partial(adv_attacks.fgsm,linf_bound=0.1)
adv_attacks.eval_dataset(network_MNIST, mnist_val, adversary)
```
# Example: CanvasXpress correlation Chart No. 3
This example page demonstrates how to, using the Python package, create a chart that matches the CanvasXpress online example located at:
https://www.canvasxpress.org/examples/correlation-3.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.
Everything required for the chart to render is included in the code below. Simply run the code block.
```
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="correlation3",
data={
"z": {
"Annt1": [
"Desc:1",
"Desc:2",
"Desc:3",
"Desc:4"
],
"Annt2": [
"Desc:A",
"Desc:B",
"Desc:A",
"Desc:B"
],
"Annt3": [
"Desc:X",
"Desc:X",
"Desc:Y",
"Desc:Y"
],
"Annt4": [
5,
10,
15,
20
],
"Annt5": [
8,
16,
24,
32
],
"Annt6": [
10,
20,
30,
40
]
},
"x": {
"Factor1": [
"Lev:1",
"Lev:2",
"Lev:3",
"Lev:1",
"Lev:2",
"Lev:3"
],
"Factor2": [
"Lev:A",
"Lev:B",
"Lev:A",
"Lev:B",
"Lev:A",
"Lev:B"
],
"Factor3": [
"Lev:X",
"Lev:X",
"Lev:Y",
"Lev:Y",
"Lev:Z",
"Lev:Z"
],
"Factor4": [
5,
10,
15,
20,
25,
30
],
"Factor5": [
8,
16,
24,
32,
40,
48
],
"Factor6": [
10,
20,
30,
40,
50,
60
]
},
"y": {
"vars": [
"V1",
"V2",
"V3",
"V4"
],
"smps": [
"S1",
"S2",
"S3",
"S4",
"S5",
"S6"
],
"data": [
[
5,
10,
25,
40,
45,
50
],
[
95,
80,
75,
70,
55,
40
],
[
25,
30,
45,
60,
65,
70
],
[
55,
40,
35,
30,
15,
1
]
]
}
},
config={
"correlationAnchorLegend": True,
"correlationAnchorLegendAlignWidth": 20,
"correlationAxis": "variables",
"graphType": "Correlation",
"title": "Correlation Plot",
"yAxisTitle": "Correlation Title"
},
width=613,
height=713,
events=CXEvents(),
after_render=[],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="correlation_3.html")
```
# Personal Pools
Launch this tutorial in a Jupyter Notebook on Binder:
[](https://mybinder.org/v2/gh/htcondor/htcondor-python-bindings-tutorials/master?urlpath=lab/tree/Personal-Pools.ipynb)
A Personal HTCondor Pool is an HTCondor Pool that has a single owner, who is:
- The pool’s administrator.
- The only submitter who is allowed to submit jobs to the pool.
- The owner of all resources managed by the pool.
The HTCondor Python bindings provide a submodule, `htcondor.personal`, which allows you to manage personal pools from Python.
Personal pools are useful for:
- Utilizing local computational resources (e.g., all of the cores on a lab server).
- Creating an isolated testing/development environment for HTCondor workflows.
- Serving as an entrypoint to other computational resources, like annexes or flocked pools (not yet implemented).
We can start a personal pool by instantiating a `PersonalPool`.
This object represents the personal pool and lets us manage its "lifecycle": start up and shut down.
We can also use the `PersonalPool` to interact with the HTCondor pool once it has been started up.
Each Personal Pool must have a unique "local directory", corresponding to the HTCondor configuration parameter `LOCAL_DIR`. For this tutorial, we'll put it in the current working directory so that it's easy to find.
> Advanced users can configure the personal pool using the `PersonalPool` constructor. See the documentation for details on the available options.
```
import htcondor
from htcondor.personal import PersonalPool
from pathlib import Path
pool = PersonalPool(local_dir = Path.cwd() / "personal-condor")
pool
```
To tell the personal pool to start running, call the `start()` method:
```
pool.start()
```
`start()` doesn't return until the personal pool is `READY`, which means that it can accept commands (e.g., job submission).
`Schedd` and `Collector` objects for the personal pool are available as properties on the `PersonalPool`:
```
pool.schedd
pool.collector
```
For example, we can submit jobs using `pool.schedd`:
```
sub = htcondor.Submit(
executable = "/bin/sleep",
arguments = "$(ProcID)s",
)
schedd = pool.schedd
with schedd.transaction() as txn:
    cluster_id = sub.queue(txn, 10)
print(f"ClusterID is {cluster_id}")
```
And we can query for the state of those jobs:
```
for ad in pool.schedd.query(
    constraint = f"ClusterID == {cluster_id}",
    projection = ["ClusterID", "ProcID", "JobStatus"]
):
    print(repr(ad))
```
We can use the collector to query the state of the pool:
```
# get 3 random ads from the daemons in the pool
for ad in pool.collector.query()[:3]:
    print(ad)
```
When you're done using the personal pool, you can `stop()` it:
```
pool.stop()
```
`stop()`, like `start()`, will not return until the personal pool has actually stopped running.
The personal pool will also automatically be stopped if the `PersonalPool` object is garbage-collected, or when the Python interpreter stops running.
> To prevent the pool from being automatically stopped in these situations, call the `detach()` method. The corresponding `attach()` method can be used to "re-connect" to a detached personal pool.
When working with a personal pool in a script, you may want to use it as a context manager. This pool will automatically start and stop at the beginning and end of the context:
```
with PersonalPool(local_dir = Path.cwd() / "another-personal-condor") as pool:  # note: no need to call start()
    print(pool.get_config_val("LOCAL_DIR"))
```
<table width="100%">
<tr style="border-bottom:solid 2pt #009EE3">
<td style="text-align:left" width="10%">
<a href="generation_of_time_axis.dwipynb" download><img src="../../images/icons/download.png"></a>
</td>
<td style="text-align:left" width="10%">
<a href="https://mybinder.org/v2/gh/biosignalsnotebooks/biosignalsnotebooks/biosignalsnotebooks_binder?filepath=biosignalsnotebooks_environment%2Fcategories%2FPre-Process%2Fgeneration_of_time_axis.dwipynb" target="_blank"><img src="../../images/icons/program.png" title="Be creative and test your solutions !"></a>
</td>
<td></td>
<td style="text-align:left" width="5%">
<a href="../MainFiles/biosignalsnotebooks.ipynb"><img src="../../images/icons/home.png"></a>
</td>
<td style="text-align:left" width="5%">
<a href="../MainFiles/contacts.ipynb"><img src="../../images/icons/contacts.png"></a>
</td>
<td style="text-align:left" width="5%">
<a href="https://github.com/biosignalsnotebooks/biosignalsnotebooks" target="_blank"><img src="../../images/icons/github.png"></a>
</td>
<td style="border-left:solid 2pt #009EE3" width="15%">
<img src="../../images/ost_logo.png">
</td>
</tr>
</table>
<link rel="stylesheet" href="../../styles/theme_style.css">
<!--link rel="stylesheet" href="../../styles/header_style.css"-->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<table width="100%">
<tr>
<td id="image_td" width="15%" class="header_image_color_4"><div id="image_img" class="header_image_4"></div></td>
<td class="header_text"> Generation of a time axis (conversion of samples into seconds) </td>
</tr>
</table>
<div id="flex-container">
<div id="diff_level" class="flex-item">
<strong>Difficulty Level:</strong> <span class="fa fa-star checked"></span>
<span class="fa fa-star checked"></span>
<span class="fa fa-star"></span>
<span class="fa fa-star"></span>
<span class="fa fa-star"></span>
</div>
<div id="tag" class="flex-item-tag">
<span id="tag_list">
<table id="tag_list_table">
<tr>
<td class="shield_left">Tags</td>
<td class="shield_right" id="tags">pre-process☁time☁conversion</td>
</tr>
</table>
</span>
<!-- [OR] Visit https://img.shields.io in order to create a tag badge-->
</div>
</div>
All electrophysiological signals collected by *Plux* acquisition systems are, in essence, time series.
The raw data contained in the generated .txt, .h5 and .edf files consists of samples, each stored as an 8- or 16-bit raw value that needs to be converted to a physical unit by the respective transfer function.
Plux provides example conversion rules for each sensor (in separate .pdf files), which can be accessed at the <a href="http://biosignalsplux.com/en/learn/documentation">"Documentation>>Sensors" section <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a> of the <strong><span class="color2">biosignalsplux</span></strong> website.
<img src="../../images/pre-process/sensors_section.gif">
Although each file returned by <strong><span class="color2">OpenSignals</span></strong> contains a sequence number linked to each sample, which gives a notion of "time order" and can be used as the x axis, working with real time units is often more intuitive.
So, the present **<span class="color5">Jupyter Notebook</span>** describes how to associate a time axis with an acquired signal, taking into account the number of acquired samples and the respective sampling rate.
<hr>
<p class="steps">1 - Importation of the needed packages </p>
```
# Package dedicated to download files remotely
from wget import download
# Package used for loading data from the input text file and for generation of a time axis
from numpy import loadtxt, linspace
# Package used for loading data from the input h5 file
import h5py
# biosignalsnotebooks own package.
import biosignalsnotebooks as bsnb
```
<p class="steps"> A - Text Files</p>
<p class="steps">A1 - Load of support data inside .txt file (described in a <span class="color5">Jupyter Notebook</span> entitled <a href="../Load/open_txt.ipynb"><strong> "Load acquired data from .txt file" <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></a>) </p>
```
# Download of the text file followed by content loading.
txt_file_url = "https://drive.google.com/uc?export=download&id=1m7E7PnKLfcd4HtOASH6vRmyBbCmIEkLf"
txt_file = download(txt_file_url, out="download_file_name.txt")
txt_file = open(txt_file, "r")
# [Internal code for overwrite file if already exists]
import os
import shutil
txt_file.close()
if os.path.exists("download_file_name.txt"):
    shutil.move(txt_file.name, "download_file_name.txt")
txt_file = "download_file_name.txt"
txt_file = open(txt_file, "r")
```
<p class="steps">A2 - Load of acquisition samples (in this case from the third column of the text file - list entry 2)</p>
```
txt_signal = loadtxt(txt_file)[:, 2]
```
<p class="steps">A3 - Determination of the number of acquired samples</p>
```
# Number of acquired samples
nbr_samples_txt = len(txt_signal)
from sty import fg, rs
print(fg(98,195,238) + "\033[1mNumber of samples (.txt file):\033[0m" + fg.rs + " " + str(nbr_samples_txt))
```
<p class="steps"> B - H5 Files</p>
<p class="steps">B1 - Load of support data inside .h5 file (described in the <span class="color5">Jupyter Notebook</span> entitled <a href="../Load/open_h5.ipynb"><strong> "Load acquired data from .h5 file"<img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></a>) </p>
```
# Download of the .h5 file followed by content loading.
h5_file_url = "https://drive.google.com/uc?export=download&id=1UgOKuOMvHTm3LlQ_e7b6R_qZL5cdL4Rv"
h5_file = download(h5_file_url, out="download_file_name.h5")
h5_object = h5py.File(h5_file, "r")
# [Internal code for overwrite file if already exists]
import os
import shutil
h5_object.close()
if os.path.exists("download_file_name.h5"):
    shutil.move(h5_file, "download_file_name.h5")
h5_file = "download_file_name.h5"
h5_object = h5py.File(h5_file, "r")
```
<p class="steps">B2 - Load of acquisition samples inside .h5 file</p>
```
# Device mac-address.
mac_address = list(h5_object.keys())[0]
# Access to signal data acquired by the device identified by "mac_address" in "channel_1".
h5_signal = list(h5_object.get(mac_address).get("raw").get("channel_1"))
```
<p class="steps">B3 - Determination of the number of acquired samples</p>
```
# Number of acquired samples
nbr_samples_h5 = len(h5_signal)
print(fg(232,77,14) + "\033[1mNumber of samples (.h5 file):\033[0m" + fg.rs + " " + str(nbr_samples_h5))
```
As it can be seen, the number of samples is equal for both file types.
```
print(fg(98,195,238) + "\033[1mNumber of samples (.txt file):\033[0m" + fg.rs + " " + str(nbr_samples_txt))
print(fg(232,77,14) + "\033[1mNumber of samples (.h5 file):\033[0m" + fg.rs + " " + str(nbr_samples_h5))
```
So, we can simplify and reduce the number of variables:
```
nbr_samples = nbr_samples_txt
```
As described in the Notebook intro, generating a time axis requires the <strong><span class="color4">number of acquired samples</span></strong> and the <strong><span class="color7">sampling rate</span></strong>.
Currently the only unknown parameter is the <strong><span class="color7">sampling rate</span></strong>, which can be easily accessed for .txt and .h5 files as described in <a href="../Load/signal_loading_preparatory_steps.ipynb" target="_blank">"Signal Loading - Working with File Header"<img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>.
For our acquisition the sampling rate is:
```
sampling_rate = 1000 # Hz
```
<p class="steps">AB4 - Determination of acquisition time in seconds</p>
```
# Conversion between sample number and seconds
acq_time = nbr_samples / sampling_rate
print ("Acquisition Time: " + str(acq_time) + " s")
```
<p class="steps">AB5 - Creation of the time axis (between 0 and 417.15 seconds) through <span class="color4">linspace</span> function</p>
```
time_axis = linspace(0, acq_time, nbr_samples)
print ("Time-Axis: \n" + str(time_axis))
```
<p class="steps">AB6 - Plot of the acquired signal (first 10 seconds) with the generated time-axis</p>
```
bsnb.plot(time_axis[:10*sampling_rate], txt_signal[:10*sampling_rate])
```
*This procedure can be automatically done by **generate_time** function in **conversion** module of **<span class="color2">biosignalsnotebooks</span>** package*
```
time_axis_auto = bsnb.generate_time(h5_file_url)
from numpy import array
print ("Time-Axis returned by generateTime function:")
print (array(time_axis_auto))
```
Time is a really important "dimension" in our daily lives and particularly in signal processing. Without a time "anchor" like the <strong><span class="color7">sampling rate</span></strong>, it is very difficult to link the acquired digital data with real events.
Concepts like "temporal duration" or "time rate" become meaningless, making it harder to draw adequate conclusions.
However, as shown, a researcher in possession of the data and a single parameter (the sampling rate) can easily generate a time axis by following the demonstrated procedure.
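The whole procedure reduces to a few lines; here is a minimal standard-library sketch. Note that it differs slightly from the `linspace(0, acq_time, nbr_samples)` approach used above, since `linspace` includes the endpoint and therefore uses a step of `acq_time / (nbr_samples - 1)` rather than exactly `1 / sampling_rate`:

```python
def generate_time_axis(n_samples, sampling_rate):
    """Convert sample indices to seconds: sample i occurs at i / sampling_rate."""
    return [i / sampling_rate for i in range(n_samples)]

axis = generate_time_axis(5, 1000)
print(axis)  # [0.0, 0.001, 0.002, 0.003, 0.004]
```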
<strong><span class="color7">We hope that you have enjoyed this guide. </span><span class="color2">biosignalsnotebooks</span><span class="color4"> is an environment in continuous expansion, so don't stop your journey and learn more with the remaining <a href="../MainFiles/biosignalsnotebooks.ipynb">Notebooks <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a></span></strong> !
<hr>
<table width="100%">
<tr>
<td style="border-right:solid 3px #009EE3" width="20%">
<img src="../../images/ost_logo.png">
</td>
<td width="40%" style="text-align:left">
<a href="../MainFiles/aux_files/biosignalsnotebooks_presentation.pdf" target="_blank">☌ Project Presentation</a>
<br>
<a href="https://github.com/biosignalsnotebooks/biosignalsnotebooks" target="_blank">☌ GitHub Repository</a>
<br>
<a href="https://pypi.org/project/biosignalsnotebooks/" target="_blank">☌ How to install biosignalsnotebooks Python package ?</a>
<br>
<a href="../MainFiles/signal_samples.ipynb">☌ Signal Library</a>
</td>
<td width="40%" style="text-align:left">
<a href="../MainFiles/biosignalsnotebooks.ipynb">☌ Notebook Categories</a>
<br>
<a href="../MainFiles/by_diff.ipynb">☌ Notebooks by Difficulty</a>
<br>
<a href="../MainFiles/by_signal_type.ipynb">☌ Notebooks by Signal Type</a>
<br>
<a href="../MainFiles/by_tag.ipynb">☌ Notebooks by Tag</a>
</td>
</tr>
</table>
```
from biosignalsnotebooks.__notebook_support__ import css_style_apply
css_style_apply()
%%html
<script>
// AUTORUN ALL CELLS ON NOTEBOOK-LOAD!
require(
['base/js/namespace', 'jquery'],
function(jupyter, $) {
$(jupyter.events).on("kernel_ready.Kernel", function () {
console.log("Auto-running all cells-below...");
jupyter.actions.call('jupyter-notebook:run-all-cells-below');
jupyter.actions.call('jupyter-notebook:save-notebook');
});
}
);
</script>
```
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Quantum Convolutional Neural Network
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/qcnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial implements a simplified <a href="https://www.nature.com/articles/s41567-019-0648-8" class="external">Quantum Convolutional Neural Network</a> (QCNN), a proposed quantum analogue to a classical convolutional neural network that is also *translationally invariant*.
This example demonstrates how to detect certain properties of a quantum data source, such as a quantum sensor or a complex simulation from a device. The quantum data source is a <a href="https://arxiv.org/pdf/quant-ph/0504097.pdf" class="external">cluster state</a> that may or may not have an excitation—which is what the QCNN will learn to detect. (The dataset used in the paper was SPT phase classification.)
## Setup
```
!pip install tensorflow==2.3.1
```
Install TensorFlow Quantum:
```
!pip install tensorflow-quantum
```
Now import TensorFlow and the module dependencies:
```
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
```
## 1. Build a QCNN
### 1.1 Assemble circuits in a TensorFlow graph
TensorFlow Quantum (TFQ) provides layer classes designed for in-graph circuit construction. One example is the `tfq.layers.AddCircuit` layer that inherits from `tf.keras.Layer`. This layer can either prepend or append to the input batch of circuits, as shown in the following figure.
<img src="./images/qcnn_1.png" width="700">
The following snippet uses this layer:
```
qubit = cirq.GridQubit(0, 0)
# Define some circuits.
circuit1 = cirq.Circuit(cirq.X(qubit))
circuit2 = cirq.Circuit(cirq.H(qubit))
# Convert to a tensor.
input_circuit_tensor = tfq.convert_to_tensor([circuit1, circuit2])
# Define a circuit that we want to append
y_circuit = cirq.Circuit(cirq.Y(qubit))
# Instantiate our layer
y_appender = tfq.layers.AddCircuit()
# Run our circuit tensor through the layer and save the output.
output_circuit_tensor = y_appender(input_circuit_tensor, append=y_circuit)
```
Examine the input tensor:
```
print(tfq.from_tensor(input_circuit_tensor))
```
And examine the output tensor:
```
print(tfq.from_tensor(output_circuit_tensor))
```
While it is possible to run the examples below without using `tfq.layers.AddCircuit`, it's a good opportunity to understand how complex functionality can be embedded into TensorFlow compute graphs.
### 1.2 Problem overview
You will prepare a *cluster state* and train a quantum classifier to detect if it is "excited" or not. The cluster state is highly entangled but not necessarily difficult for a classical computer. For clarity, this is a simpler dataset than the one used in the paper.
For this classification task you will implement a deep <a href="https://arxiv.org/pdf/quant-ph/0610099.pdf" class="external">MERA</a>-like QCNN architecture since:
1. Like the QCNN, the cluster state on a ring is translationally invariant.
2. The cluster state is highly entangled.
This architecture should be effective at reducing entanglement, obtaining the classification by reading out a single qubit.
<img src="./images/qcnn_2.png" width="1000">
An "excited" cluster state is defined as a cluster state that had a `cirq.rx` gate applied to any of its qubits. Qconv and QPool are discussed later in this tutorial.
### 1.3 Building blocks for TensorFlow
<img src="./images/qcnn_3.png" width="1000">
One way to solve this problem with TensorFlow Quantum is to implement the following:
1. The input to the model is a circuit tensor—either an empty circuit or an X gate on a particular qubit indicating an excitation.
2. The rest of the model's quantum components are constructed with `tfq.layers.AddCircuit` layers.
3. For inference a `tfq.layers.PQC` layer is used. This reads $\langle \hat{Z} \rangle$ and compares it to a label of 1 for an excited state, or -1 for a non-excited state.
### 1.4 Data
Before building your model, you can generate your data. In this case it's going to be excitations to the cluster state (the original paper uses a more complicated dataset). Excitations are represented with `cirq.rx` gates: rotations with an angle in $[-\pi/2, \pi/2]$ are labeled `1`, and larger rotations are labeled `-1`.
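The labeling rule used in `generate_data` below can be sketched in plain NumPy, independent of Cirq or TFQ (a hypothetical standalone check, not part of the tutorial itself):

```python
import numpy as np

# Draw rotation angles the same way generate_data does.
rng = np.random.default_rng(seed=42)
angles = rng.uniform(-np.pi, np.pi, size=8)

# Label 1 when the angle lies in [-pi/2, pi/2], otherwise -1 (mirrors the code below).
labels = np.where((-np.pi / 2 <= angles) & (angles <= np.pi / 2), 1, -1)
```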
```
def generate_data(qubits):
"""Generate training and testing data."""
n_rounds = 20 # Produces n_rounds * n_qubits datapoints.
excitations = []
labels = []
for n in range(n_rounds):
for bit in qubits:
rng = np.random.uniform(-np.pi, np.pi)
excitations.append(cirq.Circuit(cirq.rx(rng)(bit)))
labels.append(1 if (-np.pi / 2) <= rng <= (np.pi / 2) else -1)
split_ind = int(len(excitations) * 0.7)
train_excitations = excitations[:split_ind]
test_excitations = excitations[split_ind:]
train_labels = labels[:split_ind]
test_labels = labels[split_ind:]
return tfq.convert_to_tensor(train_excitations), np.array(train_labels), \
tfq.convert_to_tensor(test_excitations), np.array(test_labels)
```
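As a quick sanity check on the dataset size (assuming the 8-qubit ring used for training later in this tutorial): 20 rounds over 8 qubits gives 160 datapoints, split 70/30 by `split_ind`:

```python
# Reproduce the dataset-size arithmetic from generate_data.
n_rounds, n_qubits = 20, 8
n_points = n_rounds * n_qubits          # 160 datapoints in total
split_ind = int(n_points * 0.7)
train_size, test_size = split_ind, n_points - split_ind
# → 112 training and 48 testing datapoints
```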
You can see that, just like with regular machine learning, you create training and testing sets to benchmark the model. You can quickly look at some datapoints with:
```
sample_points, sample_labels, _, __ = generate_data(cirq.GridQubit.rect(1, 4))
print('Input:', tfq.from_tensor(sample_points)[0], 'Output:', sample_labels[0])
print('Input:', tfq.from_tensor(sample_points)[1], 'Output:', sample_labels[1])
```
### 1.5 Define layers
Now define the layers shown in the figure above in TensorFlow.
#### 1.5.1 Cluster state
The first step is to define the <a href="https://arxiv.org/pdf/quant-ph/0504097.pdf" class="external">cluster state</a> using <a href="https://github.com/quantumlib/Cirq" class="external">Cirq</a>, a Google-provided framework for programming quantum circuits. Since this is a static part of the model, embed it using the `tfq.layers.AddCircuit` functionality.
```
def cluster_state_circuit(bits):
"""Return a cluster state on the qubits in `bits`."""
circuit = cirq.Circuit()
circuit.append(cirq.H.on_each(bits))
for this_bit, next_bit in zip(bits, bits[1:] + [bits[0]]):
circuit.append(cirq.CZ(this_bit, next_bit))
return circuit
```
Display a cluster state circuit for a rectangle of <a href="https://cirq.readthedocs.io/en/stable/generated/cirq.GridQubit.html" class="external"><code>cirq.GridQubit</code></a>s:
```
SVGCircuit(cluster_state_circuit(cirq.GridQubit.rect(1, 4)))
```
#### 1.5.2 QCNN layers
Define the layers that make up the model using the <a href="https://arxiv.org/abs/1810.03787" class="external">Cong and Lukin QCNN paper</a>. There are a few prerequisites:
* The one- and two-qubit parameterized unitary matrices from the <a href="https://arxiv.org/abs/quant-ph/0507171" class="external">Tucci paper</a>.
* A general parameterized two-qubit pooling operation.
```
def one_qubit_unitary(bit, symbols):
    """Make a Cirq circuit enacting a rotation of the Bloch sphere about the X,
    Y and Z axes, depending on the values in `symbols`.
    """
return cirq.Circuit(
cirq.X(bit)**symbols[0],
cirq.Y(bit)**symbols[1],
cirq.Z(bit)**symbols[2])
def two_qubit_unitary(bits, symbols):
"""Make a Cirq circuit that creates an arbitrary two qubit unitary."""
circuit = cirq.Circuit()
circuit += one_qubit_unitary(bits[0], symbols[0:3])
circuit += one_qubit_unitary(bits[1], symbols[3:6])
circuit += [cirq.ZZ(*bits)**symbols[6]]
circuit += [cirq.YY(*bits)**symbols[7]]
circuit += [cirq.XX(*bits)**symbols[8]]
circuit += one_qubit_unitary(bits[0], symbols[9:12])
circuit += one_qubit_unitary(bits[1], symbols[12:])
return circuit
def two_qubit_pool(source_qubit, sink_qubit, symbols):
"""Make a Cirq circuit to do a parameterized 'pooling' operation, which
attempts to reduce entanglement down from two qubits to just one."""
pool_circuit = cirq.Circuit()
sink_basis_selector = one_qubit_unitary(sink_qubit, symbols[0:3])
source_basis_selector = one_qubit_unitary(source_qubit, symbols[3:6])
pool_circuit.append(sink_basis_selector)
pool_circuit.append(source_basis_selector)
pool_circuit.append(cirq.CNOT(control=source_qubit, target=sink_qubit))
pool_circuit.append(sink_basis_selector**-1)
return pool_circuit
```
To see what you created, print out the one-qubit unitary circuit:
```
SVGCircuit(one_qubit_unitary(cirq.GridQubit(0, 0), sympy.symbols('x0:3')))
```
And the two-qubit unitary circuit:
```
SVGCircuit(two_qubit_unitary(cirq.GridQubit.rect(1, 2), sympy.symbols('x0:15')))
```
And the two-qubit pooling circuit:
```
SVGCircuit(two_qubit_pool(*cirq.GridQubit.rect(1, 2), sympy.symbols('x0:6')))
```
##### 1.5.2.1 Quantum convolution
As in the <a href="https://arxiv.org/abs/1810.03787" class="external">Cong and Lukin</a> paper, define the 1D quantum convolution as the application of a two-qubit parameterized unitary to every pair of adjacent qubits with a stride of one.
```
def quantum_conv_circuit(bits, symbols):
"""Quantum Convolution Layer following the above diagram.
Return a Cirq circuit with the cascade of `two_qubit_unitary` applied
to all pairs of qubits in `bits` as in the diagram above.
"""
circuit = cirq.Circuit()
for first, second in zip(bits[0::2], bits[1::2]):
circuit += two_qubit_unitary([first, second], symbols)
for first, second in zip(bits[1::2], bits[2::2] + [bits[0]]):
circuit += two_qubit_unitary([first, second], symbols)
return circuit
```
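To see which qubit pairs the two loops above touch, here is a plain-Python sketch of the pairing logic (no Cirq needed); the second pass wraps around the ring:

```python
def conv_pairs(bits):
    """Return the qubit pairs visited by the 1D quantum convolution on a ring."""
    even_pass = list(zip(bits[0::2], bits[1::2]))             # (b0,b1), (b2,b3), ...
    odd_pass = list(zip(bits[1::2], bits[2::2] + [bits[0]]))  # (b1,b2), ..., wraps to b0
    return even_pass + odd_pass
```

For four qubits this yields `(0,1), (2,3)` on the first pass and `(1,2), (3,0)` on the second, so every adjacent pair on the ring is covered with a stride of one.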
Display the (very horizontal) circuit:
```
SVGCircuit(
quantum_conv_circuit(cirq.GridQubit.rect(1, 8), sympy.symbols('x0:15')))
```
##### 1.5.2.2 Quantum pooling
A quantum pooling layer pools from $N$ qubits to $\frac{N}{2}$ qubits using the two-qubit pool defined above.
```
def quantum_pool_circuit(source_bits, sink_bits, symbols):
"""A layer that specifies a quantum pooling operation.
A Quantum pool tries to learn to pool the relevant information from two
qubits onto 1.
"""
circuit = cirq.Circuit()
for source, sink in zip(source_bits, sink_bits):
circuit += two_qubit_pool(source, sink, symbols)
return circuit
```
Examine a pooling component circuit:
```
test_bits = cirq.GridQubit.rect(1, 8)
SVGCircuit(
quantum_pool_circuit(test_bits[:4], test_bits[4:], sympy.symbols('x0:6')))
```
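The source→sink wiring used in the call above can be sketched without Cirq (a hypothetical helper, for intuition only):

```python
def pool_pairs(bits):
    """Source→sink qubit pairs when pooling N qubits down to N // 2."""
    half = len(bits) // 2
    return list(zip(bits[:half], bits[half:]))
```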
### 1.6 Model definition
Now use the defined layers to construct a purely quantum CNN. Start with eight qubits, pool down to one, then measure $\langle \hat{Z} \rangle$.
```
def create_model_circuit(qubits):
"""Create sequence of alternating convolution and pooling operators
which gradually shrink over time."""
model_circuit = cirq.Circuit()
symbols = sympy.symbols('qconv0:63')
# Cirq uses sympy.Symbols to map learnable variables. TensorFlow Quantum
# scans incoming circuits and replaces these with TensorFlow variables.
model_circuit += quantum_conv_circuit(qubits, symbols[0:15])
model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],
symbols[15:21])
model_circuit += quantum_conv_circuit(qubits[4:], symbols[21:36])
model_circuit += quantum_pool_circuit(qubits[4:6], qubits[6:],
symbols[36:42])
model_circuit += quantum_conv_circuit(qubits[6:], symbols[42:57])
model_circuit += quantum_pool_circuit([qubits[6]], [qubits[7]],
symbols[57:63])
return model_circuit
# Create our qubits and readout operators in Cirq.
cluster_state_bits = cirq.GridQubit.rect(1, 8)
readout_operators = cirq.Z(cluster_state_bits[-1])
# Build a sequential model enacting the logic in 1.3 of this notebook.
# Here the static cluster state preparation is baked in via AddCircuit, and the
# "quantum datapoints" come in as excitation circuits.
excitation_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state = tfq.layers.AddCircuit()(
excitation_input, prepend=cluster_state_circuit(cluster_state_bits))
quantum_model = tfq.layers.PQC(create_model_circuit(cluster_state_bits),
readout_operators)(cluster_state)
qcnn_model = tf.keras.Model(inputs=[excitation_input], outputs=[quantum_model])
# Show the keras plot of the model
tf.keras.utils.plot_model(qcnn_model,
show_shapes=True,
show_layer_names=False,
dpi=70)
```
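As a sanity check on the `sympy.symbols('qconv0:63')` budget above: each convolution layer consumes 15 symbols and each pooling layer 6, and the model halves the qubit count three times (8 → 4 → 2 → 1):

```python
CONV_PARAMS, POOL_PARAMS = 15, 6  # per-layer symbol counts used above

total, qubits = 0, 8
while qubits > 1:
    total += CONV_PARAMS + POOL_PARAMS  # one conv + one pool stage
    qubits //= 2
# → total == 63, matching sympy.symbols('qconv0:63')
```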
### 1.7 Train the model
Train the model over the full batch to simplify this example.
```
# Generate some training data.
train_excitations, train_labels, test_excitations, test_labels = generate_data(
cluster_state_bits)
# Custom accuracy metric.
@tf.function
def custom_accuracy(y_true, y_pred):
y_true = tf.squeeze(y_true)
y_pred = tf.map_fn(lambda x: 1.0 if x >= 0 else -1.0, y_pred)
return tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))
qcnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
history = qcnn_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations, test_labels))
plt.plot(history.history['loss'][1:], label='Training')
plt.plot(history.history['val_loss'][1:], label='Validation')
plt.title('Training a Quantum CNN to Detect Excited Cluster States')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
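The `custom_accuracy` metric above thresholds the readout at zero and compares signs; a plain-NumPy analogue (a hypothetical helper for intuition, not used by the model) looks like:

```python
import numpy as np

def sign_accuracy(y_true, y_pred):
    """Map predictions to ±1 by sign, then report the fraction matching the ±1 labels."""
    y_hat = np.where(np.asarray(y_pred) >= 0, 1.0, -1.0)
    return float(np.mean(y_hat == np.asarray(y_true, dtype=float)))
```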
## 2. Hybrid models
You don't have to go from eight qubits to one qubit using quantum convolution—you could have done one or two rounds of quantum convolution and fed the results into a classical neural network. This section explores quantum-classical hybrid models.
### 2.1 Hybrid model with a single quantum filter
Apply one layer of quantum convolution, reading out $\langle \hat{Z}_n \rangle$ on all bits, followed by a densely-connected neural network.
<img src="./images/qcnn_5.png" width="1000">
#### 2.1.1 Model definition
```
# 1-local operators to read out
readouts = [cirq.Z(bit) for bit in cluster_state_bits[4:]]
def multi_readout_model_circuit(qubits):
    """Make a model circuit with fewer quantum pool and conv operations."""
model_circuit = cirq.Circuit()
symbols = sympy.symbols('qconv0:21')
model_circuit += quantum_conv_circuit(qubits, symbols[0:15])
model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],
symbols[15:21])
return model_circuit
# Build a model enacting the logic in 2.1 of this notebook.
excitation_input_dual = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state_dual = tfq.layers.AddCircuit()(
excitation_input_dual, prepend=cluster_state_circuit(cluster_state_bits))
quantum_model_dual = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_dual)
d1_dual = tf.keras.layers.Dense(8)(quantum_model_dual)
d2_dual = tf.keras.layers.Dense(1)(d1_dual)
hybrid_model = tf.keras.Model(inputs=[excitation_input_dual], outputs=[d2_dual])
# Display the model architecture
tf.keras.utils.plot_model(hybrid_model,
show_shapes=True,
show_layer_names=False,
dpi=70)
```
#### 2.1.2 Train the model
```
hybrid_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
hybrid_history = hybrid_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations,
test_labels))
plt.plot(history.history['val_custom_accuracy'], label='QCNN')
plt.plot(hybrid_history.history['val_custom_accuracy'], label='Hybrid CNN')
plt.title('Quantum vs Hybrid CNN performance')
plt.xlabel('Epochs')
plt.legend()
plt.ylabel('Validation Accuracy')
plt.show()
```
As you can see, with very modest classical assistance, the hybrid model will usually converge faster than the purely quantum version.
### 2.2 Hybrid convolution with multiple quantum filters
Now let's try an architecture that uses multiple quantum convolutions and a classical neural network to combine them.
<img src="./images/qcnn_6.png" width="1000">
#### 2.2.1 Model definition
```
excitation_input_multi = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state_multi = tfq.layers.AddCircuit()(
excitation_input_multi, prepend=cluster_state_circuit(cluster_state_bits))
# apply 3 different filters and measure expectation values
quantum_model_multi1 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
quantum_model_multi2 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
quantum_model_multi3 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
# concatenate outputs and feed into a small classical NN
concat_out = tf.keras.layers.concatenate(
[quantum_model_multi1, quantum_model_multi2, quantum_model_multi3])
dense_1 = tf.keras.layers.Dense(8)(concat_out)
dense_2 = tf.keras.layers.Dense(1)(dense_1)
multi_qconv_model = tf.keras.Model(inputs=[excitation_input_multi],
outputs=[dense_2])
# Display the model architecture
tf.keras.utils.plot_model(multi_qconv_model,
show_shapes=True,
show_layer_names=True,
dpi=70)
```
#### 2.2.2 Train the model
```
multi_qconv_model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
multi_qconv_history = multi_qconv_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations,
test_labels))
plt.plot(history.history['val_custom_accuracy'][:25], label='QCNN')
plt.plot(hybrid_history.history['val_custom_accuracy'][:25], label='Hybrid CNN')
plt.plot(multi_qconv_history.history['val_custom_accuracy'][:25],
label='Hybrid CNN \n Multiple Quantum Filters')
plt.title('Quantum vs Hybrid CNN performance')
plt.xlabel('Epochs')
plt.legend()
plt.ylabel('Validation Accuracy')
plt.show()
```
TSG073 - InfluxDB logs
======================
Steps
-----
### Parameters
```
import re
tail_lines = 2000
pod = None # All
container = "influxdb"
log_files = [ "/var/log/supervisor/log/influxdb*.log" ]
expressions_to_analyze = []
```
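`expressions_to_analyze` is a list of compiled regular expressions matched against every log line in the tail cell below; a hypothetical entry that flags error lines might look like:

```python
import re

# Hypothetical example: flag any line mentioning "error" (case-insensitive).
# Note: re.match anchors at the start of the line, hence the leading ".*".
expressions_to_analyze = [re.compile(".*error.*", re.IGNORECASE)]
```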
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
try:
from kubernetes import client, config
from kubernetes.stream import stream
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
        except Exception:
            from IPython.display import Markdown
            display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
            raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
except ImportError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get tail for log
```
# Display the last 'tail_lines' of files in 'log_files' list
pods = api.list_namespaced_pod(namespace)
entries_for_analysis = []
for p in pods.items:
if pod is None or p.metadata.name == pod:
for c in p.spec.containers:
if container is None or c.name == container:
for log_file in log_files:
print (f"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'")
try:
output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], container=c.name, stderr=True, stdout=True)
except Exception:
print (f"FAILED to get LOGS for CONTAINER: {c.name} in POD: {p.metadata.name}")
else:
for line in output.split('\n'):
for expression in expressions_to_analyze:
if expression.match(line):
entries_for_analysis.append(line)
print(line)
print("")
print(f"{len(entries_for_analysis)} log entries found for further analysis.")
```
### Analyze log entries and suggest relevant Troubleshooting Guides
```
# Analyze log entries and suggest further relevant troubleshooting guides
from IPython.display import Markdown
import os
import json
import requests
import ipykernel
import datetime
from urllib.parse import urljoin
from notebook import notebookapp
def get_notebook_name():
"""Return the full path of the jupyter notebook. Some runtimes (e.g. ADS)
have the kernel_id in the filename of the connection file. If so, the
notebook name at runtime can be determined using `list_running_servers`.
    Other runtimes (e.g. azdata) do not have the kernel_id in the filename of
    the connection file, therefore we are unable to establish the notebook
    name at runtime.
    """
connection_file = os.path.basename(ipykernel.get_connection_file())
# If the runtime has the kernel_id in the connection filename, use it to
# get the real notebook name at runtime, otherwise, use the notebook
# filename from build time.
try:
kernel_id = connection_file.split('-', 1)[1].split('.')[0]
except:
pass
else:
for servers in list(notebookapp.list_running_servers()):
try:
response = requests.get(urljoin(servers['url'], 'api/sessions'), params={'token': servers.get('token', '')}, timeout=.01)
except:
pass
else:
for nn in json.loads(response.text):
if nn['kernel']['id'] == kernel_id:
return nn['path']
def load_json(filename):
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def get_notebook_rules():
"""Load the notebook rules from the metadata of this notebook (in the .ipynb file)"""
file_name = get_notebook_name()
    if file_name is None:
return None
else:
j = load_json(file_name)
if "azdata" not in j["metadata"] or \
"expert" not in j["metadata"]["azdata"] or \
"log_analyzer_rules" not in j["metadata"]["azdata"]["expert"]:
return []
else:
return j["metadata"]["azdata"]["expert"]["log_analyzer_rules"]
rules = get_notebook_rules()
if rules is None:
    print("")
    print("Log Analysis only available when run in Azure Data Studio. Not available when run in azdata.")
else:
hints = 0
if len(rules) > 0:
for entry in entries_for_analysis:
for rule in rules:
if entry.find(rule[0]) != -1:
print (entry)
display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.'))
hints = hints + 1
print("")
print(f"{len(entries_for_analysis)} log entries analyzed (using {len(rules)} rules). {hints} further troubleshooting hints made inline.")
print('Notebook execution complete.')
```
<a href="https://colab.research.google.com/github/PGM-Lab/probai-2021-pyro/blob/main/Day1/notebooks/students_PPLs_Intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
# Setup
Let's begin by installing and importing the modules we'll need.
```
!pip install -q --upgrade pyro-ppl torch
import pyro
import torch
import pyro.distributions as dist
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
# 1. **Pyro’s distributions** (http://docs.pyro.ai/en/stable/distributions.html) :
---
* Pyro provides a wide range of distributions: **Normal, Beta, Cauchy, Dirichlet, Gumbel, Poisson, Pareto, etc.**
---
```
normal = dist.Normal(0,1)
normal
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* Samples from the distributions are [PyTorch Tensor objects](https://pytorch.org/cppdocs/notes/tensor_creation.html) (i.e. multidimensional arrays).
---
```
sample = normal.sample()
sample
sample = normal.sample(sample_shape=[3,4,5])
sample
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* We can query the **dimensionality** of a tensor with the ``shape`` property
---
```
sample = normal.sample(sample_shape=[3,4,5])
sample.shape
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* Operations, like **log-likelihood**, are defined over tensors.
---
```
normal.log_prob(sample)
torch.sum(normal.log_prob(sample))
```
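For reference, the per-element quantity that `log_prob` computes for a Normal distribution is the familiar Gaussian log-density; a stdlib-only check of the formula:

```python
import math

def normal_logpdf(x, mu=0.0, sigma=1.0):
    """Log-density of N(mu, sigma^2) — what dist.Normal(mu, sigma).log_prob(x) returns per element."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

# At x = 0 the standard normal log-density is -0.5 * log(2*pi) ≈ -0.9189
```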
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* **Multiple distributions** can be embedded in a single object.
* Below we define **three Normal distributions with different means but the same scale** in a single object.
---
```
normal = dist.Normal(torch.tensor([1.,2.,3.]),1.)
normal
normal.sample()
normal.log_prob(normal.sample())
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
### **<span style="color:red">Exercise: Open the notebook and play around</span>**
* Test that everything works.
* Play a bit with the code in Section 1 of the notebook.
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
# 2. **Pyro’s models** (http://pyro.ai/examples/intro_part_i.html) :
---
* In Pyro, a probabilistic model is defined as a **stochastic function** (i.e. every time it is run, it returns a new sample).
* Each random variable is associated with a **primitive stochastic function** using the construct ``pyro.sample(...)``.
---
### 2.1 A Temperature Model
As an initial running example, we consider the problem of **modelling the temperature**. We start with a simple model where the temperature is modeled as a Normal random variable.
```
def model():
temp = pyro.sample('temp', dist.Normal(15.0, 2.0))
return temp
print(model())
print(model())
```
See how the model is a stochastic function that **returns a different value every time it is invoked**.
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
### 2.2 A Temperature-Sensor Model
---
* In Pyro, a stochastic function is defined as a **composition of primitive stochastic functions**.
* Extending the temperature model, we now consider the presence of a **temperature sensor**.
* The temperature sensor gives **noisy observations** about the real temperature.
* The **error** of the sensor's measurements **is known**.
* A graphical representation of this model:
<center>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/PGM-Tem-Sensor.png?raw=1" alt="Drawing" width="150">
</center>
---
```
def model():
temp = pyro.sample('temp', dist.Normal(15.0, 2.0))
sensor = pyro.sample('sensor', dist.Normal(temp, 1.0))
return (temp, sensor)
out1 = model()
out1
```
---
* The above method defines a joint probability distribution:
$$p(sensor, temp) = p(sensor|temp)p(temp)$$
* In this case, we have a simple dependency between the variables. But, as we are in a PPL, dependencies can be expressed in terms of complex deterministic functions (more examples later).
---
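To make the point about deterministic dependencies concrete, here is a plain-Python analogue of the generative process above, using the stdlib `random` module instead of Pyro (the affine bias function is purely hypothetical):

```python
import random

def sensor_model(seed=None):
    """Sample a temperature, then a noisy sensor reading whose mean is a
    deterministic function of the temperature (here an arbitrary affine bias)."""
    rnd = random.Random(seed)
    temp = rnd.gauss(15.0, 2.0)
    sensor_mean = 0.98 * temp + 0.5   # any deterministic function of temp works here
    sensor = rnd.gauss(sensor_mean, 1.0)
    return temp, sensor
```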
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
# 3. **Pyro’s inference** (http://pyro.ai/examples/intro_part_ii.html) :
### Auxiliary inference functions (more details on Day 3)
To make inference in Pyro, we will use a variational inference method, which performs gradient-based optimization to solve the inference problem. More details will be given on Day 3.
```
from torch.distributions import constraints
from pyro.optim import SGD
from pyro.infer import Trace_ELBO
import matplotlib.pyplot as plt
from pyro.contrib.autoguide import AutoDiagonalNormal
def svi(temperature_model, guide, obs, num_steps = 5000, plot = False):
pyro.clear_param_store()
svi = pyro.infer.SVI(model=temperature_model,
guide=guide,
optim=SGD({"lr": 0.001, "momentum":0.1}),
loss=Trace_ELBO())
losses, a,b = [], [], []
for t in range(num_steps):
losses.append(svi.step(obs))
if t%250==0:
print('Step: '+str(t)+'. Loss: ' +str(losses[-1]))
if (plot):
plt.plot(losses)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss");
plt.show()
```
---
* To make inference in Pyro over a given model, we need to define a *guide*; the *guide* has the same signature as its counterpart model.
* The guide must provide samples, again via the ``pyro.sample`` construct, for the model variables that are not observed.
* Guides are also parametrized using Pyro's parameters (``pyro.param``), so the variational inference algorithm will optimize these parameters.
* All of that will be explained in detail on Day 3.
---
```
#The guide
def guide(obs):
a = pyro.param("mean", torch.tensor(0.0))
b = pyro.param("scale", torch.tensor(1.), constraint=constraints.positive)
temp = pyro.sample('temp', dist.Normal(a, b))
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
### 3.1 Conditioning on a single observation
Now, we continue with the last model defined in section 2.2, and assume we have a sensor reading and we want to compute the posterior distribution over the real temperature.
<center>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/PGM-Tem-Sensor.png?raw=1" alt="Drawing" width="150">
</center>
---
* This can be achieved by introducing **observations on the random variable** with the keyword ``obs=``.
---
```
# The observations
obs = {'sensor': torch.tensor(18.0)}
def model(obs):
temp = pyro.sample('temp', dist.Normal(15.0, 2.0))
sensor = pyro.sample('sensor', dist.Normal(temp, 1.0), obs=obs['sensor'])
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* Inference is made using the previously defined auxiliary functions, ``svi`` and ``guide``.
* We can query the **posterior probability distribution**:
$$p(temp | sensor=18)=\frac{p(sensor=18|temp)p(temp)}{\int p(sensor=18|temp)p(temp) dtemp}$$
---
```
#Run inference
svi(model,guide,obs, plot=True)
#Print results
print("P(Temperature|Sensor=18.0) = ")
print(dist.Normal(pyro.param("mean").item(), pyro.param("scale").item()))
print("")
```
---
* Inference is an **optimization procedure**.
* The loss (the **negative ELBO**) is **minimized** during variational inference; equivalently, the ELBO is maximized.
---
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
### 3.2 Learning from a bunch of observations
---
* Let us assume we have a **set of observations** about the temperature at different time steps.
* In this case, and following a probabilistic modelling approach, we define a **set of random variables**.
* One random variable for each **observation**, using a standard ``for-loop``.
---
```
# The observations
obs = {'sensor': torch.tensor([18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1])}
def model(obs):
for i in range(obs['sensor'].shape[0]):
temp = pyro.sample(f'temp_{i}', dist.Normal(15.0, 2.0))
sensor = pyro.sample(f'sensor_{i}', dist.Normal(temp, 1.0), obs=obs['sensor'][i])
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* What if we do **not know the mean temperature**?
* We can **infer it from the data** by, e.g., using a **maximum likelihood** approach,
$$ \mu_{t} = \arg\max_\mu \ln p(s_1,\ldots,s_n|\mu) = \arg\max_\mu \prod_i \int_{t_i} p(s_i|t_i)p(t_i|\mu) dt_i $$
where $s_i$ and $t_i$ denote the sensor reading and the real temperature at time $i$.
* The graphical model:
<center>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/PGM-Tem_sensor4.png?raw=1" alt="Drawing" width="150">
</center>
* With PPLs, we do not have to care about the **underlying inference problem**. We just define the model and let the **PPL's engine** do the work for us.
* We use Pyro's parameters (defined as ``pyro.param``), which are free variables we can optimize.
---
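Since each reading is $s_i = t_i + \text{noise}$ with $t_i \sim \mathcal{N}(\mu, 2)$ and sensor noise of scale 1, integrating out $t_i$ gives $s_i \mid \mu \sim \mathcal{N}(\mu, \sqrt{5})$, so the maximum-likelihood estimate of $\mu$ is just the sample mean of the readings. A quick check (my own derivation, not tutorial code) of the value the optimization should approach:

```python
# MLE of the mean temperature after integrating out the per-step temperatures
# (illustrative check; each reading is marginally Normal(mu, sqrt(2**2 + 1**2))).
readings = [18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1]
mu_mle = sum(readings) / len(readings)
print(mu_mle)  # close to 19.75
```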
```
# The observations
obs = {'sensor': torch.tensor([18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1])}
def model(obs):
mean_temp = pyro.param('mean_temp', torch.tensor(15.0))
for i in range(obs['sensor'].shape[0]):
temp = pyro.sample(f'temp_{i}', dist.Normal(mean_temp, 2.0))
sensor = pyro.sample(f'sensor_{i}', dist.Normal(temp, 1.0), obs=obs['sensor'][i])
#@title
#Define the guide
def guide(obs):
for i in range(obs['sensor'].shape[0]):
mean_i = pyro.param(f'mean_{i}', obs['sensor'][i])
scale_i = pyro.param(f'scale_{i}', torch.tensor(1.), constraint=constraints.positive)
temp = pyro.sample(f'temp_{i}', dist.Normal(mean_i, scale_i))
#@title
#Run inference
svi(model, guide, obs, num_steps=1000)
#Print results
print("Estimated Mean Temperature")
print(pyro.param("mean_temp").item())
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* Instead of performing *maximum likelihood* learning, we can perform **Bayesian learning**.
* We treat the unknown quantity as a **random variable**.
* This model can be graphically represented as follows:
<center>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/PGM-Tem-Sensor2.png?raw=1" alt="Drawing" width="150">
</center>
---
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
```
# The observations
obs = {'sensor': torch.tensor([18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1])}
def model(obs):
mean_temp = pyro.sample('mean_temp', dist.Normal(15.0, 2.0))
for i in range(obs['sensor'].shape[0]):
temp = pyro.sample(f'temp_{i}', dist.Normal(mean_temp, 2.0))
sensor = pyro.sample(f'sensor_{i}', dist.Normal(temp, 1.0), obs=obs['sensor'][i])
```
---
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* We perform inference over this model:
$$ p(\mu_t | s_1,\ldots, s_n)=\frac{p(\mu_t)\prod_{i=1}^n \int p(s_i|t_i)p(t_i|\mu_t)\,dt_i }{\int p(\mu_t)\prod_{i=1}^n \int p(s_i|t_i)p(t_i|\mu_t)\,dt_i \,d\mu_t} $$
---
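Here too the collapsed model is conjugate: with $t_i$ integrated out, each $s_i \mid \mu_t \sim \mathcal{N}(\mu_t, \sqrt{5})$, so the posterior over $\mu_t$ is available exactly. The sketch below (my own check, not part of the tutorial) computes it, giving a reference the SVI output can be compared with:

```python
import math

# Exact posterior over the mean temperature in the collapsed conjugate model
# (illustrative check, not tutorial code).
readings = [18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1]
prior_mean, prior_var = 15.0, 2.0**2
marg_var = 2.0**2 + 1.0**2                      # variance of each s_i given mu_t
post_var = 1.0 / (1.0 / prior_var + len(readings) / marg_var)
post_mean = post_var * (prior_mean / prior_var + sum(readings) / marg_var)
print(post_mean, math.sqrt(post_var))  # about 19.22 and 0.667
```

The variational parameters ``mean`` and ``scale`` learned below should land close to these values.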
```
#@title
#Define the guide
def guide(obs):
mean = pyro.param("mean", torch.mean(obs['sensor']))
scale = pyro.param("scale", torch.tensor(1.), constraint=constraints.positive)
mean_temp = pyro.sample('mean_temp', dist.Normal(mean, scale))
for i in range(obs['sensor'].shape[0]):
mean_i = pyro.param(f'mean_{i}', obs['sensor'][i])
scale_i = pyro.param(f'scale_{i}', torch.tensor(1.), constraint=constraints.positive)
temp = pyro.sample(f'temp_{i}', dist.Normal(mean_i, scale_i))
import time
#Run inference
start = time.time()
svi(model, guide, obs, num_steps=1000)
#Print results
print("P(mean_temp|Sensor=[18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1]) =")
print(dist.Normal(pyro.param("mean").item(), pyro.param("scale").item()))
print("")
end = time.time()
print(f"{(end - start)} seconds")
```
---
* The result of the learning is **not a point estimate**.
* We have a **posterior distribution** which captures **uncertainty** about the estimation.
---
```
import numpy as np
import scipy.stats as stats
mu = 19.312837600708008     # posterior mean taken from the SVI run above
scale = 0.6332376003265381  # posterior scale taken from the SVI run above
x = np.linspace(mu - 3*scale, mu + 3*scale, 100)
plt.plot(x, stats.norm.pdf(x, mu, scale), label='Posterior')
point = 19.123859405517578
plt.plot([point, point],[0., 1.], label='Point Estimate')
plt.legend()
plt.show()
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
### 3.3 The use of ``plate`` construct
---
* Pyro can exploit **conditional independencies and vectorization** to make inference much faster.
* This can be done with the construct **``plate``**.
* With this construct, we can indicate that the variables $s_i$ and $t_i$ are **conditionally independent** of other variables $s_j$ and $t_j$ given $\mu_t$.
<center>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/PGM-Tem-Sensor2.png?raw=1" alt="Drawing" width="150">
</center>
---
```
# The observations
obs = {'sensor': torch.tensor([18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1])}
def model(obs):
mean_temp = pyro.sample('mean_temp', dist.Normal(15.0, 2.0))
with pyro.plate('a', obs['sensor'].shape[0]):
temp = pyro.sample('temp', dist.Normal(mean_temp, 2.0))
sensor = pyro.sample('sensor', dist.Normal(temp, 1.0), obs=obs['sensor'])
```
---
* The ``plate`` construct reflects the standard notational use in graphical models denoting the **repetition of some parts of the graph**.
<center>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/PGM-Tem-Sensor3.png?raw=1" alt="Drawing" width="250">
</center>
* We can here make a distinction between **local** and **global** random variables:
>* **Local random variables** capture **specific information** about the $i$-th data sample (i.e. the real temperature at this moment in time).
>* **Global random variables** capture **common information** about all the data samples (i.e. the average temperature of all data samples).
---
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
Observe how inference in this model is much **faster**.
```
#@title
#Define the guide
def guide(obs):
mean = pyro.param("mean", torch.mean(obs['sensor']))
scale = pyro.param("scale", torch.tensor(1.), constraint=constraints.positive)
mean_temp = pyro.sample('mean_temp', dist.Normal(mean, scale))
with pyro.plate('a', obs['sensor'].shape[0]) as i:
mean_i = pyro.param('mean_i', obs['sensor'][i])
scale_i = pyro.param('scale_i', torch.tensor(1.), constraint=constraints.positive)
temp = pyro.sample('temp', dist.Normal(mean_i, scale_i))
#Run inference
start = time.time()
svi(model, guide, obs, num_steps=1000)
#Print results
print("P(mean_temp|Sensor=[18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1]) =")
print(dist.Normal(pyro.param("mean").item(), pyro.param("scale").item()))
print("")
end = time.time()
print(f"{(end - start)} seconds")
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
### **<span style="color:red">Exercise 1: </span>The role of *prior distributions* in learning**
In this case we just want to illustrate how the outcome of learning depends on the particular prior we introduce in the model. Play with different options and draw conclusions:
1. What happens if we change the mean of the prior?
2. What happens if we change the scale of the prior?
3. What happens to the posterior if the number of data samples decreases or increases?
```
# The observations
sample_size = 10
obs = {'sensor': torch.tensor(np.random.normal(18,2,sample_size))}
def model(obs):
mean_temp = pyro.sample('mean_temp', dist.Normal(15.0, 2.0))
with pyro.plate('a', obs['sensor'].shape[0]):
temp = pyro.sample('temp', dist.Normal(mean_temp, 2.0))
sensor = pyro.sample('sensor', dist.Normal(temp, 1.0), obs=obs['sensor'])
#Run inference
svi(model, guide, obs, num_steps=1000)
#Print results
print("P(mean_temp | data) = ")
print(dist.Normal(pyro.param("mean").item(), pyro.param("scale").item()))
x = np.linspace(16, 20, 100)
plt.plot(x, stats.norm.pdf(x, pyro.param("mean").item(), pyro.param("scale").item()), label='Posterior')
point = 18
plt.plot([point, point],[0., 1.], label='True Mean (18)')
plt.xlim(16,20)
plt.legend()
plt.show()
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
# **4. Icecream Shop**
* We have an ice-cream shop and we **record the ice-cream sales and the average temperature of the day** (using a temperature sensor).
* We know **temperature affects the sales** of ice-creams.
* We want to **precisely model** how temperature affects ice-cream sales.
<center>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/raw/main/Day1/Figures/Ice-cream_shop_-_Florida.jpg" alt="Drawing" width=300 >
</center>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* We have **observations** from temperature and sales.
* Sales are modeled with a **Poisson** distribution:
>- The rate of the Poisson **depends linearly on the real temperature**.
---
The next figure provides a graphical and a probabilistic description of the model:
<center>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/Ice-Cream-Shop-Model.png?raw=1" alt="Drawing" width=700>
</center>
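Before fitting the full model, a rough sanity check of the linear-rate assumption (my own illustration, separate from the Pyro model): an ordinary least-squares fit of sales against the sensor readings.

```python
# Ordinary least-squares fit of sales on temperature readings (illustration only;
# the Pyro model instead infers alpha and beta with uncertainty).
temps = [18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1]
sales = [46., 47., 49., 44., 50., 54., 51., 52., 49., 53.]
t_bar = sum(temps) / len(temps)
y_bar = sum(sales) / len(sales)
beta_hat = (sum((t - t_bar) * (y - y_bar) for t, y in zip(temps, sales))
            / sum((t - t_bar) ** 2 for t in temps))
alpha_hat = y_bar - beta_hat * t_bar
print(alpha_hat, beta_hat)  # roughly 8.5 and 2.1: sales clearly rise with temperature
```

The posterior means of ``alpha`` and ``beta`` inferred below should land in the same ballpark.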
```
#The observatons
obs = {'sensor': torch.tensor([18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1]),
'sales': torch.tensor([46., 47., 49., 44., 50., 54., 51., 52., 49., 53.])}
def model(obs):
mean_temp = pyro.sample('mean_temp', dist.Normal(15.0, 2.0))
alpha = pyro.sample('alpha', dist.Normal(0.0, 100.0))
beta = pyro.sample('beta', dist.Normal(0.0, 100.0))
with pyro.plate('a', obs['sensor'].shape[0]):
temp = pyro.sample('temp', dist.Normal(mean_temp, 2.0))
sensor = pyro.sample('sensor', dist.Normal(temp, 1.0), obs=obs['sensor'])
rate = torch.max(torch.tensor(0.001), alpha + beta*temp)
sales = pyro.sample('sales', dist.Poisson(rate), obs=obs['sales'])
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
```
#@title
#Define the guide
def guide(obs):
mean = pyro.param("mean", torch.mean(obs['sensor']))
scale = pyro.param("scale", torch.tensor(1.), constraint=constraints.positive)
mean_temp = pyro.sample('mean_temp', dist.Normal(mean, scale))
alpha_mean = pyro.param("alpha_mean", torch.mean(obs['sensor']))
alpha_scale = pyro.param("alpha_scale", torch.tensor(1.), constraint=constraints.positive)
alpha = pyro.sample('alpha', dist.Normal(alpha_mean, alpha_scale))
beta_mean = pyro.param("beta_mean", torch.tensor(1.0))
beta_scale = pyro.param("beta_scale", torch.tensor(1.), constraint=constraints.positive)
beta = pyro.sample('beta', dist.Normal(beta_mean, beta_scale))
with pyro.plate('a', obs['sensor'].shape[0]) as i:
mean_i = pyro.param('mean_i', obs['sensor'][i])
scale_i = pyro.param('scale_i', torch.tensor(1.), constraint=constraints.positive)
temp = pyro.sample('temp', dist.Normal(mean_i, scale_i))
```
---
* We run the **(variational) inference engine** and get the results.
* With PPLs, we only care about modeling, **not about the low-level details** of the machine-learning solver.
---
```
#Run inference
svi(model, guide, obs, num_steps=1000)
#Print results
print("Posterior temperature mean")
print(dist.Normal(pyro.param("mean").item(), pyro.param("scale").item()))
print("")
print("Posterior alpha")
print(dist.Normal(pyro.param("alpha_mean").item(), pyro.param("alpha_scale").item()))
print("")
print("Posterior Beta")
print(dist.Normal(pyro.param("beta_mean").item(), pyro.param("beta_scale").item()))
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
### <span style="color:red">Exercise 2: Introduce Humidity in the Icecream shop model </span>
---
* Assume we also have a bunch of **humidity sensor measurements**.
* Assume the **sales are also linearly influenced by the humidity**.
* **Extend the above model** in order to integrate all of that.
---
The next figure provides a graphical and a probabilistic description of the model:
<center>
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/Ice-Cream-Shop-Model-Humidity.png?raw=1" alt="Drawing" width=700>
</center>
```
# The observations
obs = {'sensor': torch.tensor([18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1]),
'sales': torch.tensor([46., 47., 49., 44., 50., 54., 51., 52., 49., 53.]),
'sensor_humidity': torch.tensor([82.8, 87.6, 69.1, 74.2, 80.3, 94.2, 91.2, 92.2, 99.1, 93.2])}
def model(obs):
mean_temp = pyro.sample('mean_temp', dist.Normal(15.0, 2.0))
## Introduce a random variable "mean_humidity"
alpha = pyro.sample('alpha', dist.Normal(0.0, 100.0))
beta = pyro.sample('beta', dist.Normal(0.0, 100.0))
## Introduce a coefficient for the humidity "gamma"
with pyro.plate('a', obs['sensor'].shape[0]):
temp = pyro.sample('temp', dist.Normal(mean_temp, 2.0))
sensor = pyro.sample('sensor', dist.Normal(temp, 1.0), obs=obs['sensor'])
#Add the 'humidity' variable and the 'sensor_humidity' variable
#Add the linear dependency for the rate with respect to temp and humidity (keep torch.max to avoid numerical stability issues)
rate = torch.max(torch.tensor(0.001), ????)
sales = pyro.sample('sales', dist.Poisson(rate), obs=obs['sales'])
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* We run the **(variational) inference engine** and get the results.
* With PPLs, we only care about modeling, **not about the low-level details** of the machine-learning solver.
---
```
#@title
#Auxiliary Guide Code
def guide(obs):
mean = pyro.param("mean", torch.mean(obs['sensor']))
scale = pyro.param("scale", torch.tensor(1.), constraint=constraints.positive)
mean_temp = pyro.sample('mean_temp', dist.Normal(mean, scale))
meanH = pyro.param("meanH", torch.mean(obs['sensor_humidity']))
scaleH = pyro.param("scaleH", torch.tensor(1.), constraint=constraints.positive)
mean_humidity = pyro.sample('mean_humidity', dist.Normal(meanH, scaleH))
alpha_mean = pyro.param("alpha_mean", torch.mean(obs['sensor']), constraint=constraints.positive)
alpha_scale = pyro.param("alpha_scale", torch.tensor(1.), constraint=constraints.positive)
alpha = pyro.sample('alpha', dist.Normal(alpha_mean, alpha_scale))
beta_mean = pyro.param("beta_mean", torch.tensor(1.0), constraint=constraints.positive)
beta_scale = pyro.param("beta_scale", torch.tensor(1.), constraint=constraints.positive)
beta = pyro.sample('beta', dist.Normal(beta_mean, beta_scale))
gamma_mean = pyro.param("gamma_mean", torch.tensor(1.0), constraint=constraints.positive)
gamma_scale = pyro.param("gamma_scale", torch.tensor(1.), constraint=constraints.positive)
gamma = pyro.sample('gamma', dist.Normal(gamma_mean, gamma_scale))
with pyro.plate('a', obs['sensor'].shape[0]) as i:
mean_i = pyro.param('mean_i', obs['sensor'][i])
scale_i = pyro.param('scale_i', torch.tensor(1.), constraint=constraints.positive)
temp = pyro.sample('temp', dist.Normal(mean_i, scale_i))
meanH_i = pyro.param('meanH_i', obs['sensor_humidity'][i])
scaleH_i = pyro.param('scaleH_i', torch.tensor(1.), constraint=constraints.positive)
humidity = pyro.sample('humidity', dist.Normal(meanH_i, scaleH_i))
#Run inference
svi(model, guide, obs, num_steps=1000)
#Print results
print("Posterior Temperature Mean")
print(dist.Normal(pyro.param("mean").item(), pyro.param("scale").item()))
print("")
print("Posterior Humidity Mean")
print(dist.Normal(pyro.param("meanH").item(), pyro.param("scaleH").item()))
print("")
print("Posterior Alpha")
print(dist.Normal(pyro.param("alpha_mean").item(), pyro.param("alpha_scale").item()))
print("")
print("Posterior Beta")
print(dist.Normal(pyro.param("beta_mean").item(), pyro.param("beta_scale").item()))
print("")
print("Posterior Gamma")
print(dist.Normal(pyro.param("gamma_mean").item(), pyro.param("gamma_scale").item()))
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
# 5. **Temporal Models**
If we think there is a temporal dependency between the variables, we can easily encode that with PPLs.
---
* Let us assume that there is a **temporal dependency** between the variables.
* E.g. the current **real temperature must be similar to the real temperature in the previous time step**.
* This temporal dependency can **be modeled** using a **for-loop** in Pyro.
* Consider the **graphical representation**.
---
<img src="https://github.com/PGM-Lab/probai-2021-pyro/raw/main/Day1/Figures/tempmodel-temporal-III.png" alt="Drawing" style="width: 350px;" >
```
# The observations
obs = {'sensor': torch.tensor([18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1])}
def model(obs):
mean_temp = pyro.sample('mean_temp', dist.Normal(15.0, 2.0))
for i in range(obs['sensor'].shape[0]):
if i==0:
temp = pyro.sample(f'temp_{i}', dist.Normal(mean_temp, 2.0))
else:
temp = pyro.sample(f'temp_{i}', dist.Normal(prev_temp, 2.0))
sensor = pyro.sample(f'sensor_{i}', dist.Normal(temp, 1.0), obs=obs['sensor'][i])
prev_temp = temp
```
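To build intuition for this random-walk prior, one can simulate temperature trajectories from the model. The following sketch (my own illustration, using Python's ``random`` module with a fixed seed rather than Pyro) mirrors the generative process above:

```python
import random

# Simulate one trajectory from the temporal prior (illustration only):
# temp_0 ~ Normal(mean_temp, 2), temp_i ~ Normal(temp_{i-1}, 2).
random.seed(0)
mean_temp = 15.0
trajectory = []
prev_temp = random.gauss(mean_temp, 2.0)
for _ in range(10):
    trajectory.append(prev_temp)
    prev_temp = random.gauss(prev_temp, 2.0)
print(trajectory)  # consecutive values stay close to each other
```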
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* We run the **(variational) inference engine** and get the results.
* With PPLs, we only care about modeling, **not about the low-level details** of the machine-learning solver.
---
```
#@title
#Define the guide
def guide(obs):
mean = pyro.param("mean", torch.mean(obs['sensor']))
scale = pyro.param("scale", torch.tensor(1.), constraint=constraints.positive)
mean_temp = pyro.sample('mean_temp', dist.Normal(mean, scale))
for i in range(obs['sensor'].shape[0]):
mean_i = pyro.param(f'mean_{i}', obs['sensor'][i])
scale_i = pyro.param(f'scale_{i}', torch.tensor(1.), constraint=constraints.positive)
temp = pyro.sample(f'temp_{i}', dist.Normal(mean_i, scale_i))
import time
#Run inference
svi(model, guide, obs, num_steps=2000)
smooth_temp=[]
for i in range(obs['sensor'].shape[0]):
smooth_temp.append(pyro.param(f'mean_{i}').item())
print('Finished')
```
---
* Plot the **observed measurements** of the temperature **against** the inferred **real temperature**.
* By querying the **local hidden variables** we can **smooth** the temperature.
* The **recovered temperature** is much less noisy than the measured one.
---
```
import matplotlib.pyplot as plt
plt.plot([18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1], label='Sensor Temp')
plt.plot(smooth_temp, label='Smooth Temp')
plt.legend()
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
### <span style="color:red">Exercise 3: Temporal Extension of the Ice-cream shop model </span>
---
* **Extends** Exercise 2.
* Assume temperature depends on the **temperature of the previous day**.
* Assume humidity depends on the **humidity of the previous day**.
* Assume sales depend on the **current temperature and humidity**.
* Use the following **graphical representation for reference**.
* Consider here that the plate representation has to be coded in Pyro using a **``for-loop``**.
---
<img src="https://github.com/PGM-Lab/probai-2021-pyro/raw/main/Day1/Figures/icecream-model-temporal.png" alt="Drawing" width=500 >
```
# The observations
obs = {'sensor': torch.tensor([18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1]),
'sales': torch.tensor([46., 47., 49., 44., 50., 54., 51., 52., 49., 53.]),
'sensor_humidity': torch.tensor([82.8, 87.6, 69.1, 74.2, 80.3, 94.2, 91.2, 92.2, 99.1, 93.2])}
def model(obs):
mean_temp = pyro.sample('mean_temp', dist.Normal(15.0, 2.0))
## Introduce a random variable "mean_humidity"
alpha = pyro.sample('alpha', dist.Normal(0.0, 100.0))
beta = pyro.sample('beta', dist.Normal(0.0, 100.0))
## Introduce a coefficient for the humidity "gamma"
for i in range(obs['sensor'].shape[0]):
if i==0:
temp = pyro.sample(f'temp_{i}', dist.Normal(mean_temp, 2.0))
#Introduce the 'humidity' variable at time 0.
else:
temp = pyro.sample(f'temp_{i}', dist.Normal(prev_temp, 2.0))
#Introduce the f'humidity_{i}' variable defining the transition
sensor = pyro.sample(f'sensor_{i}', dist.Normal(temp, 1.0), obs=obs['sensor'][i])
#Introduce the f'sensor_humidity_{i}' variable.
#Add the linear dependency for the rate with respect to temp and humidity (keep torch.max to avoid numerical stability issues)
rate = torch.max(torch.tensor(0.01),????)
sales = pyro.sample(f'sales_{i}', dist.Poisson(rate), obs=obs['sales'][i])
prev_temp = temp
#Keep humidity for the next time step.
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* We run the **(variational) inference engine** and get the results.
* With PPLs, we only care about modeling, **not about the low-level details** of the machine-learning solver.
---
```
#@title
#Define the guide
def guide(obs):
mean = pyro.param("mean", torch.mean(obs['sensor']))
scale = pyro.param("scale", torch.tensor(1.), constraint=constraints.greater_than(0.01))
mean_temp = pyro.sample('mean_temp', dist.Normal(mean, scale))
meanH = pyro.param("meanH", torch.mean(obs['sensor_humidity']), constraint=constraints.positive)
scaleH = pyro.param("scaleH", torch.tensor(1.), constraint=constraints.greater_than(0.01))
humidity_mean = pyro.sample('mean_humidity', dist.Normal(meanH, scaleH))
alpha_mean = pyro.param("alpha_mean", torch.mean(obs['sensor']))
alpha_scale = pyro.param("alpha_scale", torch.tensor(1.), constraint=constraints.greater_than(0.01))
alpha = pyro.sample('alpha', dist.Normal(alpha_mean, alpha_scale))
beta_mean = pyro.param("beta_mean", torch.tensor(0.0))
beta_scale = pyro.param("beta_scale", torch.tensor(1.), constraint=constraints.greater_than(0.01))
beta = pyro.sample('beta', dist.Normal(beta_mean, beta_scale))
gamma_mean = pyro.param("gamma_mean", torch.tensor(0.0))
gamma_scale = pyro.param("gamma_scale", torch.tensor(1.), constraint=constraints.greater_than(0.01))
gamma = pyro.sample('gamma', dist.Normal(gamma_mean, gamma_scale))
for i in range(obs['sensor'].shape[0]):
mean_i = pyro.param(f'mean_{i}', obs['sensor'][i])
scale_i = pyro.param(f'scale_{i}', torch.tensor(1.), constraint=constraints.greater_than(0.01))
temp = pyro.sample(f'temp_{i}', dist.Normal(mean_i, scale_i))
meanH_i = pyro.param(f'meanH_{i}', obs['sensor_humidity'][i])
scaleH_i = pyro.param(f'scaleH_{i}', torch.tensor(1.), constraint=constraints.greater_than(0.01))
humidity_i = pyro.sample(f'humidity_{i}', dist.Normal(meanH_i, scaleH_i))
import time
#Run inference
svi(model, guide, obs, num_steps=2000)
smooth_temp=[]
smooth_humidity=[]
for i in range(obs['sensor'].shape[0]):
smooth_temp.append(pyro.param(f'mean_{i}').item())
smooth_humidity.append(pyro.param(f'meanH_{i}').item())
print('Finished')
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
---
* We can plot the observed measurements of the temperature against the **inferred real temperature** by our model.
* The **recovered temperature** is much less noisy than the measured one.
---
```
plt.plot([18., 18.7, 19.2, 17.8, 20.3, 22.4, 20.3, 21.2, 19.5, 20.1], label='Sensor Temp')
plt.plot(smooth_temp, label='Smooth Temp')
plt.legend()
```
---
* We can plot the observed measurements of the humidity against the **inferred real humidity** by our model.
* The **recovered humidity** is much less noisy than the measured one.
---
```
humidity = torch.tensor([82.8, 87.6, 69.1, 74.2, 80.3, 94.2, 91.2, 92.2, 99.1, 93.2])
plt.plot(humidity.detach().numpy(), label='Sensor Humidity')
plt.plot(smooth_humidity, label='Smooth Humidity')
plt.legend()
```
<img src="https://github.com/PGM-Lab/probai-2021-pyro/blob/main/Day1/Figures/blue.png?raw=1" alt="Drawing" width=2000 height=20>
# DATA WRANGLING AND CLEANING
#### Data wrangling is the process of cleaning, structuring and enriching raw data into a desired format for better decision making in less time. Data wrangling is increasingly ubiquitous at today’s top firms. Data has become more diverse and unstructured, demanding increased time spent culling, cleaning, and organizing data ahead of broader analysis. At the same time, with data informing just about every business decision, business users have less time to wait on technical resources for prepared data.
#### Data cleaning is the process of preparing data for analysis by removing or modifying data that is incorrect, incomplete, irrelevant, duplicated, or improperly formatted. This data is usually not necessary or helpful when it comes to analyzing data because it may hinder the process or provide inaccurate results.
## UNDERSTANDING THE DATASET
### THE DATASET IS DIVIDED INTO TWO FEATURE SETS WHICH ARE COMBINED TOGETHER:
#### 1. Basic Features
#### 2. Flow based Features
## 1.BASIC FEATURES
<img src="images/feature.jpg">
#### The basic features span columns 0 to 29, plus the last three columns
### Data Wrangling on Basic Features
```
import pandas as pd
import numpy as np
df_original = pd.read_csv("../dataset/DDoSdata.csv")
df = df_original.copy()
df.head()
df.columns
df.rename(columns = {'mean':'average_dur', 'stddev':'stddev_dur',
'sum':'total_dur','min':'min_dur','max':'max_dur'}, inplace = True)
df.shape
df.proto.unique()
df.flgs.value_counts()
pd.set_option('display.float_format', lambda x: '%.5f' % x)
df.stime.value_counts()
df.flgs_number.unique()
df.saddr.value_counts()[:5]
df.daddr.value_counts()[:5]
df.sport.value_counts()[:5]
df.dport.value_counts()[:5]
df.pkts.value_counts()
df.pkts.describe()
df.bytes.value_counts()
df.bytes.describe()
df.state.unique()
df.state_number.unique()
df.ltime
df.seq.value_counts()
df.dur.value_counts()
df.dur.describe()
df.average_dur.describe()
df.stddev_dur.describe()
df.total_dur.describe()
df.min_dur.describe()
df.spkts.value_counts()[:10]
df.dpkts.value_counts()[:10]
df.sbytes.value_counts()
df.sbytes.describe()
df.rate.describe()
df.srate.describe()
df.drate.describe()
df.columns
df_ml = df.copy()
#df.drop(['column_nameA', 'column_nameB'], axis=1, inplace=True)
```
## Data Cleaning and Information of basic features
### 1. Remove the first and second column
#### Define:
The first and second columns are simply row identifiers and are not required for analysis.
#### Code:
```
df.drop(['Unnamed: 0','pkSeqID'], axis=1, inplace=True)
```
#### Test:
```
df.columns
```
### 2. Drop flgs and flgno as its of no use for analytics
#### Define:
The flgs column records the "Flow state flags seen in transactions", and flgs_number is its numerical counterpart. We remove both because the flags are handled separately in an upcoming column and are not required for this analysis.
#### Code:
```
df.drop(['flgs', 'flgs_number'], axis=1, inplace=True)
```
#### Test:
```
df.columns
```
### 3. stime(start time) and ltime(last time) required for our analysis and can it be dropped?
#### Define:
Start time is when the machine began recording the packet transfer, and last time is when the transfer completed. Since the upcoming flow-based features are derived from these columns, we do not need them for either analysis or machine learning, so they can be dropped from the dataframe.
#### Code:
```
df.drop(['stime', 'ltime'], axis=1, inplace=True)
```
#### Test:
```
df.columns
df.sport
```
### 4.sport and dport are port numbers but the type available for them is object.
#### Define:
We need to convert both dport and sport to a numeric type, as they are port numbers.
However, icmp rows contain string values in these columns, so we drop them; they are very few in number and not useful for analysis.
#### Code:
```
df = df[df.proto != 'icmp']
df["sport"] = pd.to_numeric(df['sport'])
df["dport"] = pd.to_numeric(df['dport'])
```
#### Test:
```
df.sport.dtype
df.dport.dtype
```
### 5. We have -1 as a port number, so we need to see what is wrong and remove it if needed
#### INFO:
Valid port numbers range from 0 to (2^16)-1, i.e. 0-65,535 (port 0 is reserved and unavailable).
Negative values can also occur when a NAT translation is taking place.
NAT: translation of a private IP to a public IP.
Here the -1 stands in for port 0 and can be ignored. We won't use it for analysis, but it does not need to be removed.
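If we wanted to flag the out-of-range values explicitly, a helper like the following could be used (a hypothetical sketch, not part of the notebook):

```python
# Flag port values outside the valid 16-bit TCP/UDP range 0-65535
# (port 0 is reserved; -1 is a capture-tool placeholder, not a real port).
def is_valid_port(p):
    return 1 <= p <= 65535

ports = [-1, 0, 80, 443, 65535, 70000]
invalid = [p for p in ports if not is_valid_port(p)]
print(invalid)  # [-1, 0, 70000]
```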
### 6. State number just duplicates the state column, so it isn't needed and can be removed.
#### Define:
state_number is a numerical representation of the state column, which we already have for analytics. We can add it back later if needed, so we remove it for now.
#### Code:
```
df.drop(['state_number'], axis=1, inplace=True)
```
#### Test:
```
df.columns
```
### 7. Sequence number also isn't required for analysis and can be removed.
#### Define:
The sequence number carries no information about botnet attacks; it only numbers the transferred packets, so it can be removed for the analysis phase.
#### Code:
```
df.drop(['seq'], axis=1, inplace=True)
```
#### Test:
```
df.columns
```
### Basic features have been cleaned and are ready to export to a new CSV.
```
df.to_csv(r'basic_cleaned.csv')
```
# 2.Flow Based Features
<img src="images/flow_features.png">
### Data Wrangling on Flow Based Features
```
df.columns
```
#### Flow-based features are derived from the basic features, so no cleaning is required.
## Dataset for Machine learning:
Based on the cleaning and wrangling above, we can prepare a dataset for machine learning by removing the unwanted columns.
```
df.info()
df_ml = df.copy()
df_ml.drop(['proto', 'proto_number', 'saddr', 'sport', 'daddr', 'dport','state','category', 'subcategory',], axis=1, inplace=True)
df_ml.to_csv(r'ml_dataset.csv')
df_ml.head()
df_ml.describe()
```
# We have made a dataset that is ready for machine learning classification
Copyright (C) 2017 Ashish Gupta<br>
<br>
This program is free software: you can redistribute it and/or modify<br>
it under the terms of the GNU General Public License as published by<br>
the Free Software Foundation, either version 3 of the License, or<br>
(at your option) any later version.<br>
<br>
This program is distributed in the hope that it will be useful,<br>
but WITHOUT ANY WARRANTY; without even the implied warranty of<br>
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the<br>
GNU General Public License for more details.<br>
<br>
You should have received a copy of the GNU General Public License<br>
along with this program. If not, see <http://www.gnu.org/licenses/>.<br>
<br>
author Ashish Gupta<br>
version 0.1.0
```
from __future__ import print_function
```
built-in modules
```
import os
```
third-party modules
```
from skimage import io
from skimage import exposure
from skimage import color
from skimage import img_as_float
import numpy as np
from sklearn.metrics import mutual_info_score as mis
import matplotlib.pyplot as plt
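# Small self-contained sketch (added for illustration; the function below relies on
# sklearn's mutual_info_score instead): mutual information computed directly from a
# joint histogram as MI = sum_ij p_ij * ln( p_ij / (p_i * p_j) ).
import math

def _mutual_information_from_joint(joint):
    """Mutual information (in nats) of a 2-D joint count histogram."""
    total = float(sum(sum(row) for row in joint))
    row_p = [sum(row) / total for row in joint]
    col_p = [sum(joint[i][j] for i in range(len(joint))) / total
             for j in range(len(joint[0]))]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, count in enumerate(row):
            if count:
                p = count / total
                mi += p * math.log(p / (row_p[i] * col_p[j]))
    return mi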
def __compute_contrast_quality_for_image(input_image, num_bins=128):
"""
Computes a score of the quality of contrast in input image based on divergence
from an intensity equalized histogram.
We compute the mutual information (MI) (a measure of entropy) between histogram of intensity
of image and its contrast equalized histogram.
MI is not a true metric, but a symmetric and non-negative similarity measure that
takes high values for similar images (though estimators may return small negative values).
Intuitively, mutual information measures the information that histogram_image and histogram_equalized
share: it measures how much knowing one of these variables reduces uncertainty about the other.
The entropy is defined as:
.. math::
H(X) = - \sum_i p(g_i) \ln p(g_i)
with :math:`p(g_i)` being the probability of the images intensity value :math:`g_i`.
Assuming two histograms :math:`R` and :math:`T`, the mutual information is then computed by comparing the
histogram entropy values (i.e. a measure how well-structured the common histogram is).
The distance metric is then calculated as follows:
.. math::
MI(R,T) = H(R) + H(T) - H(R,T) = H(R) - H(R|T) = H(T) - H(T|R)
A maximization of the mutual information is equal to a minimization of the joint
entropy.
:param input_image : 2-D array
:param num_bins : integer : the number of bins in histogram, it has a small scaling effect on
the mutual information score since it slightly modifies the shape of the histogram
:return: quality of contrast : float
:raises: argument error if input image data is corrupted
"""
# Check the dimensions of the input image
# If the input has only two dimensions it is a gray-scale image,
# so convert it to RGB first
if input_image.ndim == 2:
input_image = color.gray2rgb(input_image)
# Convert the RGB image to HSV. Exposure is primarily correlated with Value rather
# than Hue and Saturation
image_hsv = color.rgb2hsv(input_image)
# The intensity channel is third in HSV format image
v_channel = image_hsv[:, :, 2]
# compute the contrast equalized array of intensity channel of image
v_channel_equalized = exposure.equalize_hist(v_channel, nbins=num_bins)
# compute the histogram of intensity channel
v_channel_histogram, histogram_bin_edges = np.histogram(img_as_float(v_channel), bins=num_bins, density=True)
# compute the histogram of contrast equalized intensity channel
v_channel_equalized_histogram, _ = np.histogram(img_as_float(v_channel_equalized), bins=num_bins, density=True, range=(histogram_bin_edges[0], histogram_bin_edges[-1]))
# compute the mutual information based contrast quality measure
return mis(v_channel_histogram, v_channel_equalized_histogram)
def contrast_quality(image_url):
"""
Computes the contrast quality of image file
:param image_url: string : path to image file resource
:return: float : contrast quality of input image
"""
try:
image = io.imread(image_url)
return __compute_contrast_quality_for_image(image)
except IOError as err:
# modify print function in case of incompatibility between Python 2.x and 3.x
print(err)
def contrast_quality_collection(image_folder_url, output_folder=None):
"""
Computes the contrast quality of all image files in directory URL provided
:param image_folder_url: URL of image data folder
:return: URL of log file to which contrast quality is recorded
"""
# Check if provided folder URL exists
if not os.path.exists(image_folder_url):
# modify print function in case of incompatibility between Python 2.x and 3.x
print('{} does not exist.'.format(image_folder_url))
return None
# output file path
if output_folder:
result_file_url = os.path.join(output_folder, 'contrast_quality.log')
else:
result_file_url = os.path.join(image_folder_url, '../', 'contrast_quality.log')
with open(result_file_url, 'w') as output_log_file:
# process only the valid image files in directory
file_list = os.listdir(image_folder_url)
# increment valid image file extensions as required
valid_extensions = ['jpg', 'jpeg', 'png', 'pgm', 'bmp']
for file in file_list:
if file.split('.')[-1] in valid_extensions:
image_url = os.path.join(image_folder_url, file)
cq = contrast_quality(image_url)
# echo to console
# print('%s,%08.6f\n' % (image_url, cq))
output_log_file.write('%s,%08.6f\n' % (file, cq))
return result_file_url
def demo():
"""
Demonstrate contrast_quality measure for sample photographs
The results of estimated contrast quality are consistent with the expectation of
contrast quality of each of the photographs
:return: None
"""
# photograph samples, some of the image sets contain the same image with different exposures
# or contrast enhancement processing
image_set_1 = ['./examples/retinex1.jpg', './examples/retinex2.jpg', './examples/retinex3.jpg', './examples/retinex4.jpg']
image_set_2 = ['./examples/under1.jpg', './examples/over1.jpg', './examples/correct1.jpg']
image_set_3 = ['examples/3894541598_bb37af2dcd_o.jpg', 'examples/18056401685_c5b313e712_o.jpg']
image_set_list = [image_set_1, image_set_2, image_set_3]
for image_set in image_set_list:
num_images = len(image_set)
fig = plt.figure(figsize=(num_images*5, 8))
for i in range(num_images):
plt.subplot(1, num_images, i+1)
image_url = image_set[i]
image = io.imread(image_url)
cq = contrast_quality(image_url)
plt.imshow(image)
plt.title('contrast Q : %08.6f' % cq)
plt.xticks([])
plt.yticks([])
plt.tight_layout()
plt.show()
if __name__ == '__main__':
# run demo on image files in examples directory
demo()
# run demo on all image files in a folder (in this case the examples directory)
contrast_quality_collection('./examples/')
```
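The entropy relations above can be verified numerically. This sketch is not part of the module and the joint probability table is made up; it simply checks MI(R,T) = H(R) + H(T) - H(R,T) with NumPy:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a probability array; zero bins are skipped."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A small joint probability table for two 3-bin histograms R and T
joint = np.array([[0.20, 0.05, 0.00],
                  [0.05, 0.30, 0.05],
                  [0.00, 0.05, 0.30]])
p_r = joint.sum(axis=1)   # marginal distribution of R
p_t = joint.sum(axis=0)   # marginal distribution of T

mi = entropy(p_r) + entropy(p_t) - entropy(joint.ravel())
# MI is non-negative and bounded above by each marginal entropy
assert 0.0 <= mi <= min(entropy(p_r), entropy(p_t)) + 1e-12
```

Maximizing MI for fixed marginals is the same as minimizing the joint entropy term, which is the relationship the docstring describes.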
# 5.9 Networks with Parallel Concatenations (GoogLeNet)
```
import time
import torch
from torch import nn, optim
import torch.nn.functional as F
import sys
sys.path.append("..")
import d2lzh_pytorch as d2l
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(torch.__version__)
print(device)
```
## 5.9.1 Inception Blocks
```
class Inception(nn.Module):
# c1 - c4 are the output channel counts of the layers on each path
def __init__(self, in_c, c1, c2, c3, c4):
super(Inception, self).__init__()
# Path 1: a single 1 x 1 convolution
self.p1_1 = nn.Conv2d(in_c, c1, kernel_size=1)
# Path 2: a 1 x 1 convolution followed by a 3 x 3 convolution
self.p2_1 = nn.Conv2d(in_c, c2[0], kernel_size=1)
self.p2_2 = nn.Conv2d(c2[0], c2[1], kernel_size=3, padding=1)
# Path 3: a 1 x 1 convolution followed by a 5 x 5 convolution
self.p3_1 = nn.Conv2d(in_c, c3[0], kernel_size=1)
self.p3_2 = nn.Conv2d(c3[0], c3[1], kernel_size=5, padding=2)
# Path 4: a 3 x 3 max pooling layer followed by a 1 x 1 convolution
self.p4_1 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
self.p4_2 = nn.Conv2d(in_c, c4, kernel_size=1)
def forward(self, x):
p1 = F.relu(self.p1_1(x))
p2 = F.relu(self.p2_2(F.relu(self.p2_1(x))))
p3 = F.relu(self.p3_2(F.relu(self.p3_1(x))))
p4 = F.relu(self.p4_2(self.p4_1(x)))
return torch.cat((p1, p2, p3, p4), dim=1)  # concatenate the outputs along the channel dimension
```
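An Inception block's output channel count is simply the sum of the four branch widths, which is where each subsequent block's `in_c` comes from. A small check (the helper below is illustrative, not from the book):

```python
def inception_out_channels(c1, c2, c3, c4):
    # the four paths concatenate on the channel dimension:
    # 1x1, 1x1->3x3, 1x1->5x5, pool->1x1
    return c1 + c2[1] + c3[1] + c4

# First block of b3: Inception(192, 64, (96, 128), (16, 32), 32)
assert inception_out_channels(64, (96, 128), (16, 32), 32) == 256
# Last block of b3: its 480 output channels match b4's first in_c
assert inception_out_channels(128, (128, 192), (32, 96), 64) == 480
```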
## 5.9.2 The GoogLeNet Model
```
b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
nn.ReLU(),
nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
b2 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1),
nn.Conv2d(64, 192, kernel_size=3, padding=1),
nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
b3 = nn.Sequential(Inception(192, 64, (96, 128), (16, 32), 32),
Inception(256, 128, (128, 192), (32, 96), 64),
nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
b4 = nn.Sequential(Inception(480, 192, (96, 208), (16, 48), 64),
Inception(512, 160, (112, 224), (24, 64), 64),
Inception(512, 128, (128, 256), (24, 64), 64),
Inception(512, 112, (144, 288), (32, 64), 64),
Inception(528, 256, (160, 320), (32, 128), 128),
nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
b5 = nn.Sequential(Inception(832, 256, (160, 320), (32, 128), 128),
Inception(832, 384, (192, 384), (48, 128), 128),
d2l.GlobalAvgPool2d())
net = nn.Sequential(b1, b2, b3, b4, b5, d2l.FlattenLayer(), nn.Linear(1024, 10))
X = torch.rand(1, 1, 96, 96)
for blk in net.children():
X = blk(X)
print('output shape: ', X.shape)
```
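Each of the five blocks halves the spatial size, so the 96×96 input reaches `b5` as a small map before global average pooling. A sketch of the standard output-size formula (assuming PyTorch's floor convention):

```python
def conv_out(size, kernel, stride=1, padding=0):
    # PyTorch convention: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

s = 96
s = conv_out(s, 7, 2, 3)   # b1 conv  -> 48
s = conv_out(s, 3, 2, 1)   # b1 pool  -> 24
s = conv_out(s, 3, 2, 1)   # b2 pool  -> 12
s = conv_out(s, 3, 2, 1)   # b3 pool  -> 6
s = conv_out(s, 3, 2, 1)   # b4 pool  -> 3
print(s)  # the 3x3 map entering b5's global average pool
```

The padded 1×1 and 3×3 convolutions inside the blocks leave the spatial size unchanged, so only the strided layers above matter.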
## 5.9.3 Loading the Data and Training the Model
```
batch_size = 128
# If you see an "out of memory" error, reduce batch_size or the resize value
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96)
lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
d2l.train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)
```
# Modeling and Simulation in Python
Chapter 23
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
```
### Code from the previous chapter
```
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
degree = UNITS.degree
params = Params(x = 0 * m,
y = 1 * m,
g = 9.8 * m/s**2,
mass = 145e-3 * kg,
diameter = 73e-3 * m,
rho = 1.2 * kg/m**3,
C_d = 0.3,
angle = 45 * degree,
velocity = 40 * m / s,
t_end = 20 * s)
def make_system(params):
"""Make a system object.
params: Params object with angle, velocity, x, y,
diameter, duration, g, mass, rho, and C_d
returns: System object
"""
unpack(params)
# convert angle to radians
theta = np.deg2rad(angle)
# compute x and y components of velocity
vx, vy = pol2cart(theta, velocity)
# make the initial state
init = State(x=x, y=y, vx=vx, vy=vy)
# compute area from diameter
area = np.pi * (diameter/2)**2
return System(params, init=init, area=area)
def drag_force(V, system):
"""Computes drag force in the opposite direction of `V`.
V: velocity
system: System object with rho, C_d, area
returns: Vector drag force
"""
unpack(system)
mag = -rho * V.mag**2 * C_d * area / 2
direction = V.hat()
f_drag = mag * direction
return f_drag
def slope_func(state, t, system):
"""Computes derivatives of the state variables.
state: State (x, y, x velocity, y velocity)
t: time
system: System object with g, rho, C_d, area, mass
returns: sequence (vx, vy, ax, ay)
"""
x, y, vx, vy = state
unpack(system)
V = Vector(vx, vy)
a_drag = drag_force(V, system) / mass
a_grav = Vector(0, -g)
a = a_grav + a_drag
return vx, vy, a.x, a.y
def event_func(state, t, system):
"""Stop when the y coordinate is 0.
state: State object
t: time
system: System object
returns: y coordinate
"""
x, y, vx, vy = state
return y
```
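Plugging the parameters above into the drag formula F = ½·ρ·v²·C_d·A gives a feel for the magnitudes involved. A standalone sketch with plain floats (values copied from `params`, units dropped):

```python
import math

rho = 1.2          # air density, kg/m^3
C_d = 0.3          # drag coefficient
diameter = 73e-3   # ball diameter, m
v = 40.0           # launch speed, m/s
mass = 145e-3      # ball mass, kg

area = math.pi * (diameter / 2) ** 2
f_drag = rho * v**2 * C_d * area / 2   # magnitude of the drag force, N
a_drag = f_drag / mass                 # drag deceleration, m/s^2

print(round(f_drag, 2), round(a_drag, 1))  # roughly 1.21 N and 8.3 m/s^2
```

At launch speed the drag deceleration is comparable to gravity, which is why the optimal angle ends up noticeably below 45 degrees.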
### Optimal launch angle
To find the launch angle that maximizes distance from home plate, we need a function that takes launch angle and returns range.
```
def range_func(angle, params):
"""Computes range for a given launch angle.
angle: launch angle in degrees
params: Params object
returns: distance in meters
"""
params = Params(params, angle=angle)
system = make_system(params)
results, details = run_ode_solver(system, slope_func, events=event_func)
x_dist = get_last_value(results.x) * m
return x_dist
```
Let's test `range_func`.
```
%time range_func(45, params)
```
And sweep through a range of angles.
```
angles = linspace(20, 80, 21)
sweep = SweepSeries()
for angle in angles:
x_dist = range_func(angle, params)
print(angle, x_dist)
sweep[angle] = x_dist
```
Plotting the `Sweep` object, it looks like the peak is between 40 and 45 degrees.
```
plot(sweep, color='C2')
decorate(xlabel='Launch angle (degree)',
ylabel='Range (m)',
title='Range as a function of launch angle',
legend=False)
savefig('figs/chap10-fig03.pdf')
```
We can use `max_bounded` to search for the peak efficiently.
```
%time res = max_bounded(range_func, [0, 90], params)
```
`res` is a `ModSimSeries` object with detailed results:
```
res
```
`x` is the optimal angle and `fun` the optimal range.
```
optimal_angle = res.x * degree
max_x_dist = res.fun
```
### Under the hood
Read the source code for `max_bounded` and `min_bounded`, below.
Add a print statement to `range_func` that prints `angle`. Then run `max_bounded` again so you can see how many times it calls `range_func` and what the arguments are.
```
%psource max_bounded
%psource min_bounded
```
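The same bounded search can be sketched directly with SciPy. This assumes `max_bounded` behaves like a thin wrapper over `scipy.optimize.minimize_scalar` with a negated objective, which matches how it is used in this notebook:

```python
from scipy.optimize import minimize_scalar

def max_bounded_sketch(func, bounds, *args):
    # maximize func on [lo, hi] by minimizing its negation
    res = minimize_scalar(lambda x: -func(x, *args),
                          bounds=tuple(bounds), method='bounded')
    res.fun = -res.fun   # flip back so res.fun is the maximum value
    return res

# toy objective with a known peak at x = 42
res = max_bounded_sketch(lambda x, offset: -(x - offset) ** 2, [0, 90], 42)
print(res.x)
```

Adding a print statement inside the objective, as suggested above, shows the bounded method probing only a handful of points before converging.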
### The Manny Ramirez problem
Finally, let's solve the Manny Ramirez problem:
*What is the minimum effort required to hit a home run in Fenway Park?*
Fenway Park is a baseball stadium in Boston, Massachusetts. One of its most famous features is the "Green Monster", which is a wall in left field that is unusually close to home plate, only 310 feet along the left field line. To compensate for the short distance, the wall is unusually high, at 37 feet.
Although the problem asks for a minimum, it is not an optimization problem. Rather, we want to solve for the initial velocity that just barely gets the ball to the top of the wall, given that it is launched at the optimal angle.
And we have to be careful about what we mean by "optimal". For this problem, we don't want the longest range, we want the maximum height at the point where it reaches the wall.
If you are ready to solve the problem on your own, go ahead. Otherwise I will walk you through the process with an outline and some starter code.
As a first step, write a function called `height_func` that takes a launch angle and a params object as parameters, simulates the flight of a baseball, and returns the height of the baseball when it reaches a point 94.5 meters (310 feet) from home plate.
```
# Solution goes here
```
Always test the slope function with the initial conditions.
```
# Solution goes here
# Solution goes here
```
Test your function with a launch angle of 45 degrees:
```
# Solution goes here
```
Now use `max_bounded` to find the optimal angle. Is it higher or lower than the angle that maximizes range?
```
# Solution goes here
# Solution goes here
# Solution goes here
```
With initial velocity 40 m/s and an optimal launch angle, the ball clears the Green Monster with a little room to spare.
Which means we can get over the wall with a lower initial velocity.
### Finding the minimum velocity
Even though we are finding the "minimum" velocity, we are not really solving a minimization problem. Rather, we want to find the velocity that makes the height at the wall exactly 11 m, given that it's launched at the optimal angle. And that's a job for `fsolve`.
Write an error function that takes a velocity and a `Params` object as parameters. It should use `max_bounded` to find the highest possible height of the ball at the wall, for the given velocity. Then it should return the difference between that optimal height and 11 meters.
```
# Solution goes here
```
Test your error function before you call `fsolve`.
```
# Solution goes here
```
Then use `fsolve` to find the answer to the problem, the minimum velocity that gets the ball out of the park.
```
# Solution goes here
# Solution goes here
```
And just to check, run `error_func` with the value you found.
```
# Solution goes here
```
```
import matplotlib.pyplot as plt
%matplotlib inline
```
# Functions For Reading Tint Data
```
import struct
import os
import numpy as np
import numpy.matlib
def getspikes(fullpath):
"""
This function will return the spike data, spike times, and spike parameters from Tint tetrode data.
Example:
tetrode_fullpath = 'C:\\example\\tetrode_1.1'
ts, ch1, ch2, ch3, ch4, spikeparam = getspikes(tetrode_fullpath)
Args:
fullpath (str): the fullpath to the Tint tetrode file you want to acquire the spike data from.
Returns:
ts (ndarray): an Nx1 array for the spike times, where N is the number of spikes.
ch1 (ndarray): an NxM matrix containing the spike data for channel 1, N is the number of spikes,
and M is the chunk length.
ch2 (ndarray): an NxM matrix containing the spike data for channel 2, N is the number of spikes,
and M is the chunk length.
ch3 (ndarray): an NxM matrix containing the spike data for channel 3, N is the number of spikes,
and M is the chunk length.
ch4 (ndarray): an NxM matrix containing the spike data for channel 4, N is the number of spikes,
and M is the chunk length.
spikeparam (dict): a dictionary containing the header values from the tetrode file.
- Geoff Barrett
"""
spikes, spikeparam = importspikes(fullpath)
ts = spikes['t']
nspk = spikeparam['num_spikes']
spikelen = spikeparam['samples_per_spike']
ch1 = spikes['ch1']
ch2 = spikes['ch2']
ch3 = spikes['ch3']
ch4 = spikes['ch4']
return ts, ch1, ch2, ch3, ch4, spikeparam
def importspikes(filename):
"""Reads through the tetrode file as an input and returns two things, a dictionary containing the following:
timestamps, ch1-ch4 waveforms, and it also returns a dictionary containing the spike parameters
- Geoff Barrett
"""
with open(filename, 'rb') as f:
for line in f:
if 'data_start' in str(line):
spike_data = np.frombuffer((line + f.read())[len('data_start'):-len('\r\ndata_end\r\n')], dtype='uint8')
break
elif 'num_spikes' in str(line):
num_spikes = int(line.decode(encoding='UTF-8').split(" ")[1])
elif 'bytes_per_timestamp' in str(line):
bytes_per_timestamp = int(line.decode(encoding='UTF-8').split(" ")[1])
elif 'samples_per_spike' in str(line):
samples_per_spike = int(line.decode(encoding='UTF-8').split(" ")[1])
elif 'bytes_per_sample' in str(line):
bytes_per_sample = int(line.decode(encoding='UTF-8').split(" ")[1])
elif 'timebase' in str(line):
timebase = int(line.decode(encoding='UTF-8').split(" ")[1])
elif 'duration' in str(line):
duration = int(line.decode(encoding='UTF-8').split(" ")[1])
elif 'sample_rate' in str(line):
samp_rate = int(line.decode(encoding='UTF-8').split(" ")[1])
# calculating the big-endian and little endian matrices so we can convert from bytes -> decimal
big_endian_vector = 256 ** np.arange(bytes_per_timestamp - 1, -1, -1)
little_endian_matrix = np.arange(0, bytes_per_sample).reshape(bytes_per_sample, 1)
little_endian_matrix = 256 ** numpy.matlib.repmat(little_endian_matrix, 1, samples_per_spike)
number_channels = 4
# calculating the timestamps
t_start_indices = np.linspace(0, num_spikes * (bytes_per_sample * samples_per_spike * 4 +
bytes_per_timestamp * 4), num=num_spikes, endpoint=False).astype(
int).reshape(num_spikes, 1)
t_indices = t_start_indices
for chan in np.arange(1, number_channels):
t_indices = np.hstack((t_indices, t_start_indices + chan))
t = spike_data[t_indices].reshape(num_spikes, bytes_per_timestamp) # acquiring the time bytes
t = np.sum(np.multiply(t, big_endian_vector), axis=1) / timebase # converting from bytes to float values
t_indices = None
waveform_data = np.zeros((number_channels, num_spikes, samples_per_spike)) # (dimensions, rows, columns)
bytes_offset = 0
# read the t,ch1,t,ch2,t,ch3,t,ch4
for chan in range(number_channels): # only really care about the first time that gets written
chan_start_indices = t_start_indices + chan * samples_per_spike + bytes_per_timestamp + bytes_per_timestamp * chan
for spike_sample in np.arange(1, samples_per_spike):
chan_start_indices = np.hstack((chan_start_indices, t_start_indices +
chan * samples_per_spike + bytes_per_timestamp +
bytes_per_timestamp * chan + spike_sample))
waveform_data[chan][:][:] = spike_data[chan_start_indices].reshape(num_spikes, samples_per_spike).astype(
'int8') # acquiring the channel bytes
waveform_data[chan][:][:][np.where(waveform_data[chan][:][:] > 127)] -= 256
waveform_data[chan][:][:] = np.multiply(waveform_data[chan][:][:], little_endian_matrix)
spikeparam = {'timebase': timebase, 'bytes_per_sample': bytes_per_sample, 'samples_per_spike': samples_per_spike,
'bytes_per_timestamp': bytes_per_timestamp, 'duration': duration, 'num_spikes': num_spikes,
'sample_rate': samp_rate}
return {'t': t.reshape(num_spikes, 1), 'ch1': np.asarray(waveform_data[0][:][:]),
'ch2': np.asarray(waveform_data[1][:][:]),
'ch3': np.asarray(waveform_data[2][:][:]), 'ch4': np.asarray(waveform_data[3][:][:])}, spikeparam
def get_setfile_parameter(parameter, set_filename):
"""
This function will return the parameter value of a given parameter name for a given set filename.
Example:
set_fullpath = 'C:\\example\\tetrode_1.1'
parameter_name = 'duration'
duration = get_setfile_parameter(parameter_name, set_fullpath)
Args:
parameter (str): the name of the set file parameter that you want to obtain.
set_filename (str): the full path of the .set file that you want to obtain the parameter value from.
Returns:
parameter_value (str): the value for the given parameter
- Geoff Barrett
"""
if not os.path.exists(set_filename):
return
# adding the encoding because tint data is created via windows and if you want to run this in linux, you need
# to explicitly say this
with open(set_filename, 'r+', encoding='cp1252') as f:
for line in f:
if parameter in line:
if line.split(' ')[0] == parameter:
# prevents part of the parameter being in another parameter name
new_line = line.strip().split(' ')
if len(new_line) == 2:
return new_line[-1]
else:
return ' '.join(new_line[1:])
```
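The `big_endian_vector` dot product above is a manual `int.from_bytes`. A standalone check, assuming 4-byte timestamps as in a typical Tint tetrode file:

```python
import numpy as np

bytes_per_timestamp = 4
big_endian_vector = 256 ** np.arange(bytes_per_timestamp - 1, -1, -1)
# big_endian_vector == [16777216, 65536, 256, 1]

raw = np.array([0, 0, 1, 44], dtype='uint8')  # example timestamp bytes
value = int(np.sum(raw * big_endian_vector))  # 1*256 + 44 = 300

# same result as Python's built-in big-endian byte decoding
assert value == int.from_bytes(bytes(raw.tolist()), byteorder='big')
print(value)
```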
# Pre-Processing (Whiten)
```
from mountainlab_pytools import mdaio
import numpy as np
import multiprocessing
import time
import os
class SharedChunkInfo():
def __init__(self,num_chunks):
self.timer_timestamp = multiprocessing.Value('d',time.time(),lock=False)
self.last_appended_chunk = multiprocessing.Value('l',-1,lock=False)
self.num_chunks=num_chunks
self.num_completed_chunks = multiprocessing.Value('l',0,lock=False)
self.lock = multiprocessing.Lock()
def reportChunkCompleted(self,num):
with self.lock:
self.num_completed_chunks.value+=1
def reportChunkAppended(self,num):
with self.lock:
self.last_appended_chunk.value=num
def lastAppendedChunk(self):
with self.lock:
return self.last_appended_chunk.value
def resetTimer(self):
with self.lock:
self.timer_timestamp.value=time.time()
def elapsedTime(self):
with self.lock:
return time.time()-self.timer_timestamp.value
def printStatus(self):
with self.lock:
print('Processed {} of {} chunks...'.format(self.num_completed_chunks.value,self.num_chunks))
def compute_AAt_matrix_for_chunk(num):
opts=g_opts
in_fname=opts['timeseries'] # The entire (large) input file
out_fname=opts['timeseries_out'] # The entire (large) output file
chunk_size=opts['chunk_size']
X=mdaio.DiskReadMda(in_fname)
t1=int(num*opts['chunk_size']) # first timepoint of the chunk
t2=int(np.minimum(X.N2(),(t1+chunk_size))) # last timepoint of chunk (+1)
# Ensure the chunk is floating point to avoid SVD complications;
# we also drop the first row, which holds the time values
chunk=X.readChunk(i1=0,N1=X.N1(),i2=t1,N2=t2-t1)[1:,:].astype(np.float32) # Read the chunk
ret=chunk @ np.transpose(chunk)
return ret
def whiten_chunk(num,W):
#print('Whitening {}'.format(num))
opts=g_opts
#print('Whitening chunk {} of {}'.format(num,opts['num_chunks']))
in_fname=opts['timeseries'] # The entire (large) input file
out_fname=opts['timeseries_out'] # The entire (large) output file
chunk_size=opts['chunk_size']
X=mdaio.DiskReadMda(in_fname)
t1=int(num*opts['chunk_size']) # first timepoint of the chunk
t2=int(np.minimum(X.N2(),(t1+chunk_size))) # last timepoint of chunk (+1)
chunk=X.readChunk(i1=0,N1=X.N1(),i2=t1,N2=t2-t1) # Read the chunk
# ensuring that the time points aren't whitened
chunk[1:,:] = W @ chunk[1:,:]
###########################################################################################
# Now we wait until we are ready to append to the output file
# Note that we need to append in order, thus the shared_data object
###########################################################################################
g_shared_data.reportChunkCompleted(num) # Report that we have completed this chunk
while True:
if num == g_shared_data.lastAppendedChunk()+1:
break
time.sleep(0.005) # so we don't saturate the CPU unnecessarily
# Append the filtered chunk (excluding the padding) to the output file
mdaio.appendmda(chunk,out_fname)
# Report that we have appended so the next chunk can proceed
g_shared_data.reportChunkAppended(num)
# Print status if it has been long enough
if g_shared_data.elapsedTime()>4:
g_shared_data.printStatus()
g_shared_data.resetTimer()
def whiten(*, timeseries,timeseries_out,
clip_size=50, num_clips_per_chunk=6000, num_processes=os.cpu_count()
):
"""
Whiten a multi-channel timeseries
Parameters
----------
timeseries : INPUT
MxN raw timeseries array (M = #channels, N = #timepoints)
timeseries_out : OUTPUT
Whitened output (MxN array)
"""
chunk_size = int(clip_size * num_clips_per_chunk)
X=mdaio.DiskReadMda(timeseries)
M=X.N1() # Number of channels (+1 since time is the 1st row)
N=X.N2() # Number of timepoints
# chunk_size=N # right now we are putting all the data into one chunk
num_chunks_for_computing_cov_matrix=10
num_chunks=int(np.ceil(N/chunk_size))
print ('Chunk size: {}, Num chunks: {}, Num processes: {}'.format(chunk_size,num_chunks,num_processes))
opts={
"timeseries":timeseries,
"timeseries_out":timeseries_out,
"chunk_size":chunk_size,
"num_processes":num_processes,
"num_chunks":num_chunks
}
global g_opts
g_opts=opts
pool = multiprocessing.Pool(processes=num_processes)
step=int(np.maximum(1,np.floor(num_chunks/num_chunks_for_computing_cov_matrix)))
AAt_matrices=pool.map(compute_AAt_matrix_for_chunk,range(0,num_chunks,step),chunksize=1)
AAt=np.zeros((M-1,M-1),dtype='float64')
for M0 in AAt_matrices:
##important: need to fix the denominator here to account for possible smaller chunk
AAt+=M0/(len(AAt_matrices)*chunk_size)
U, S, Ut = np.linalg.svd(AAt, full_matrices=True)
W = (U @ np.diag(1/np.sqrt(S))) @ Ut
#print ('Whitening matrix:')
#print (W)
global g_shared_data
g_shared_data=SharedChunkInfo(num_chunks)
mdaio.writemda32(np.zeros([M,0]),timeseries_out)
pool = multiprocessing.Pool(processes=num_processes)
pool.starmap(whiten_chunk,[(num,W) for num in range(0,num_chunks)],chunksize=1)
return True
```
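The whitening matrix W = U·diag(1/√S)·Uᵀ built above can be verified on synthetic data: after applying W, the channel covariance becomes the identity. A standalone NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 20000
# correlated channels: mix independent noise through a random matrix
X = rng.standard_normal((M, M)) @ rng.standard_normal((M, N))

AAt = (X @ X.T) / N                      # channel covariance (zero-mean data)
U, S, Ut = np.linalg.svd(AAt, full_matrices=True)
W = (U @ np.diag(1 / np.sqrt(S))) @ Ut   # ZCA-style whitening matrix

Y = W @ X
cov_Y = (Y @ Y.T) / N
assert np.allclose(cov_Y, np.eye(M), atol=1e-8)  # identity covariance
```

Because W is built from the sample covariance itself, W·AAt·Wᵀ is the identity up to floating-point error, which is exactly what the assertion checks.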
# Mountain Sort 4 - Snippets Edition
```
import numpy as np
import isosplit5
from mountainlab_pytools import mdaio
import sys
import os
import multiprocessing
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import h5py
warnings.resetwarnings()
def get_channel_neighborhood(m, Geom, *, adjacency_radius):
M = Geom.shape[0]
if adjacency_radius < 0:
return np.arange(M)
deltas = Geom - np.tile(Geom[m, :], (M, 1))
distsqrs = np.sum(deltas ** 2, axis=1)
inds = np.where(distsqrs <= adjacency_radius ** 2)[0]
inds = np.sort(inds)
return inds.ravel()
def subsample_array(X, max_num):
if X.size == 0:
return X
if max_num >= len(X):
return X
inds = np.random.choice(len(X), max_num, replace=False)
return X[inds]
def compute_principal_components(X, num_components):
u, s, vt = np.linalg.svd(X)
u = u[:, :num_components]
return u
def compute_template_channel_peaks(templates, *, detect_sign):
if detect_sign < 0:
templates = templates * (-1)
elif detect_sign == 0:
templates = np.abs(templates)
else:
pass
tc_peaks = np.max(templates, axis=1)
tc_peak_times = np.argmax(templates, axis=1)
return tc_peaks, tc_peak_times
def compute_sliding_maximum_snippet(snippets, radius=10):
"""This will be a sliding maximum.
It will return a nSpikes x clip_size matrix (ret), for each
sample it will look at samples half of the radius before and half after
and to determine the local max for each sample.
snippets: nSpikes x clip_size matrix"""
ret = np.zeros_like(snippets)
max_i = snippets.shape[-1]
half_radius = int(radius / 2)
for i in np.arange(snippets.shape[-1]):
start = i - half_radius
stop = i + half_radius
if start < 0: start = 0
if stop > max_i: stop = max_i
ret[:, i] = np.amax(snippets[:, start:stop], axis=1)
return ret.flatten()
def remove_zero_features(X):
maxvals = np.max(np.abs(X), axis=1)
features_to_use = np.where(maxvals > 0)[0]
return X[features_to_use, :]
def cluster(features, *, npca):
num_events_for_pca = np.minimum(features.shape[1], 1000)
subsample_inds = np.random.choice(features.shape[1], num_events_for_pca, replace=False)
u, s, vt = np.linalg.svd(features[:, subsample_inds])
features2 = (u.transpose())[0:npca, :] @ features
features2 = remove_zero_features(features2)
labels = isosplit5.isosplit5(features2)
return labels
def branch_cluster(features, *, branch_depth=2, npca=10):
if features.size == 0:
return np.array([])
min_size_to_try_split = 20
labels1 = cluster(features, npca=npca).ravel().astype('int64')
if np.min(labels1) < 0:
tmp_fname = '/tmp/isosplit5-debug-features.mda'
mdaio.writemda32(features, tmp_fname)
raise Exception('Unexpected error in isosplit5. Features written to {}'.format(tmp_fname))
K = int(np.max(labels1))
if K <= 1 or branch_depth <= 1:
return labels1
label_offset = 0
labels_new = np.zeros(labels1.shape, dtype='int64')
for k in range(1, K + 1):
inds_k = np.where(labels1 == k)[0]
if len(inds_k) > min_size_to_try_split:
labels_k = branch_cluster(features[:, inds_k], branch_depth=branch_depth - 1, npca=npca)
K_k = int(np.max(labels_k))
labels_new[inds_k] = label_offset + labels_k
label_offset += K_k
else:
labels_new[inds_k] = label_offset + 1
label_offset += 1
return labels_new
def write_firings_file(channels, times, labels, clip_inds, fname):
L = len(channels)
X = np.zeros((4, L), dtype='float64')
X[0, :] = channels
X[1, :] = times
X[2, :] = labels
X[3, :] = clip_inds
mdaio.writemda64(X, fname)
def detect_on_neighborhood_from_snippets_model(X, channel_number, *, nbhd_channels, detect_threshold, detect_interval,
detect_sign, clip_size, chunk_infos):
"""
X - Snippet Model
channel_number - The channel that you want to get the times from
"""
t1 = chunk_infos[0]['t1']
t2 = chunk_infos[0]['t2']
# data_t = X.getChunk(t1=t1,t2=t2,channels=[0]).astype(np.int32)
# we add 1 to the channel number since the first channel is the times
# data = X.getChunk(t1=t1,t2=t2,channels=[channel_number+1]).astype(np.int32)
data = X.getChunk(t1=t1, t2=t2, channels=nbhd_channels + 1).astype(np.int32)
M = X.numChannels() - 1 # number of data channels
channel_rel = np.where(nbhd_channels == channel_number)[0][
0] # The relative index of the central channel in the neighborhood
if detect_sign < 0:
# negative peaks
data = np.multiply(data, -1)
elif detect_sign == 0:
# both negative and positive peaks
data = np.abs(data)
elif detect_sign > 0:
# positive peaks
pass
# find the max values of the channel data (reshaped like nSpikes x clip_size)
max_inds = np.argmax(data[channel_rel, :].reshape((-1, clip_size)), axis=1)
clip_inds = np.arange(len(max_inds)) # also returning the clip indices
# converting it so the indices matches the flattened data
max_inds = np.arange(len(max_inds)) * clip_size + max_inds
# find the max values to compare to threshold
max_vals = data[channel_rel, :][max_inds]
# return indices where the threshold has been reached
threshold_bool = np.where(max_vals >= detect_threshold)[0]
# the sample number that refers to the spike events on the chosen channel
times = max_inds[threshold_bool]
# the snippet index number corresponding to that event
clip_inds = clip_inds[threshold_bool]
# now we will calculate if the peak belongs to this channel's neighborhood
data = data.reshape((M, -1, clip_size))
# this will find the local neighborhood maximum for each point
nearby_neighborhood_maximum0 = compute_sliding_maximum_snippet(np.amax(data, axis=0), radius=detect_interval)
vals = data[channel_rel, :].flatten()[times]
assign_to_this_neighborhood = (vals == nearby_neighborhood_maximum0[times])
return times, clip_inds, assign_to_this_neighborhood
def compute_event_features_from_snippets(X, times, clip_ind, *, nbhd_channels, clip_size, max_num_clips_for_pca,
num_features, chunk_infos):
"""compute_event_features_from_snippets
X - the snippets model
times - sample value of the channel peaks (for the central channel chosen)
clip_ind - the snippet index corresponding to each event
"""
if times.size == 0:
return np.array([])
# N=X.numTimepoints()
# X_neigh=X.getChunk(t1=0,t2=N,channels=nbhd_channels)
M_neigh = len(nbhd_channels)
# padding=clip_size*10
# Subsample and extract clips for pca
# times_for_pca=subsample_array(times,max_num_clips_for_pca)
clips_inds_for_pca = subsample_array(clip_ind, max_num_clips_for_pca)
t1 = chunk_infos[0]['t1']
t2 = chunk_infos[0]['t2']
# we add 1 to the channel number since the first channel is the times
clips = X.getChunk(t1=t1, t2=t2, channels=nbhd_channels + 1).astype(np.int32)
clips = clips.reshape((M_neigh, -1, clip_size))
clips = np.swapaxes(clips, 1, 2) # swapping axes
clips_for_pca = clips[:, :, clips_inds_for_pca]
# Compute the principal components
# use twice as many features, because of branch method
principal_components = compute_principal_components(
clips_for_pca.reshape((M_neigh * clip_size, len(clips_inds_for_pca))),
num_features * 2) # (MT x 2F)
# Compute the features for all the clips
# projecting clip data onto the principal component axis
features = principal_components.transpose() @ clips[:, :, clip_ind].reshape(
(M_neigh * clip_size, len(times))) # (2F x MT) @ (MT x L0) -> (2F x L0)
return features
def get_real_times(X, times, *, time_channel, chunk_infos):
"""The times that we are dealing with are really just index values if we concatenated
all the snippets to a nCh x clip_size*n_spikes matrix.
We will bring back the time information as the 1st row of data in the .mda file represents
the sample number that the chunk sample was recorded at (before it was removed from the continuous data).
"""
t1 = chunk_infos[0]['t1']
t2 = chunk_infos[0]['t2']
# we add 1 to the channel number since the first channel is the times
data_t = X.getChunk(t1=t1, t2=t2, channels=[time_channel]).astype(np.int32)
return data_t[0, times]
def compute_templates_from_snippets_model(X, times, clip_ind, labels, *, nbhd_channels, clip_size, chunk_infos):
# TODO: subsample smartly here
M0 = len(nbhd_channels)
t1 = chunk_infos[0]['t1']
t2 = chunk_infos[0]['t2']
# we add 1 to the channel number since the first channel is the times
clips = X.getChunk(t1=t1, t2=t2, channels=nbhd_channels + 1).astype(np.int32)
clips = clips.reshape((M0, -1, clip_size))
clips = clips[:, clip_ind, :]
clips = np.swapaxes(clips, 1, 2) # swapping axes
K = np.max(labels) if labels.size > 0 else 0
template_sums = np.zeros((M0, clip_size, K), dtype='float64')
template_counts = np.zeros(K, dtype='float64')
for k in range(K):
inds_k = np.where(labels == (k + 1))[0]
if len(inds_k) > 0:
template_counts[k] += len(inds_k)
template_sums[:, :, k] += np.sum(clips[:, :, inds_k], axis=2).reshape((M0, clip_size))
templates = np.zeros((M0, clip_size, K))
for k in range(K):
if template_counts[k]:
templates[:, :, k] = template_sums[:, :, k] / template_counts[k]
return templates
def create_chunk_infos(*, N):
chunk_infos = []
chunk_size = N # I have changed it so all the data is on one chunk
num_chunks = int(np.ceil(N / chunk_size))
for i in range(num_chunks):
chunk = {
't1': i * chunk_size,
't2': np.minimum(N, (i + 1) * chunk_size)
}
chunk_infos.append(chunk)
return chunk_infos
class _NeighborhoodSorter:
def __init__(self):
self._sorting_opts = None
self._clip_size = None
self._snippets = None
self._geom = None
self._central_channel = None
self._hdf5_path = None
self._num_assigned_event_time_arrays = 0
self._num_assigned_event_clip_ind_arrays = 0
def setSortingOpts(self, opts):
self._sorting_opts = opts
def setSnippetsModel(self, model):
self._snippets = model
def setHdf5FilePath(self, path):
self._hdf5_path = path
def setGeom(self, geom):
self._geom = geom
def setCentralChannel(self, m):
self._central_channel = m
def getPhase1ClipInds(self):
with h5py.File(self._hdf5_path, "r") as f:
return np.array(f.get('phase1-clip_inds'))
def getPhase1Times(self):
with h5py.File(self._hdf5_path, "r") as f:
return np.array(f.get('phase1-times'))
def getPhase1ChannelAssignments(self):
with h5py.File(self._hdf5_path, "r") as f:
return np.array(f.get('phase1-channel-assignments'))
def getPhase2ClipInds(self):
with h5py.File(self._hdf5_path, "r") as f:
return np.array(f.get('phase2-clip_inds'))
def getPhase2Times(self):
with h5py.File(self._hdf5_path, "r") as f:
return np.array(f.get('phase2-times'))
def getPhase2Labels(self):
with h5py.File(self._hdf5_path, "r") as f:
return np.array(f.get('phase2-labels'))
def addAssignedEventTimes(self, times):
with h5py.File(self._hdf5_path, "a") as f:
f.create_dataset('assigned-event-times-{}'.format(self._num_assigned_event_time_arrays), data=times)
self._num_assigned_event_time_arrays += 1
def addAssignedEventClipIndices(self, clip_inds):
with h5py.File(self._hdf5_path, "a") as f:
f.create_dataset('assigned-event-clip_inds-{}'.format(self._num_assigned_event_clip_ind_arrays),
data=clip_inds)
self._num_assigned_event_clip_ind_arrays += 1
def runPhase1Sort(self):
self.runSort(mode='phase1')
def runPhase2Sort(self):
self.runSort(mode='phase2')
def runSort(self, *, mode):
X = self._snippets
M_global = X.numChannels() - 1
N = X.numTimepoints()
o = self._sorting_opts
m_central = self._central_channel
clip_size = o['clip_size']
detect_interval = o['detect_interval']
detect_sign = o['detect_sign']
detect_threshold = o['detect_threshold']
num_features = o['num_features']
# num_features=10 # TODO: make this a sorting opt
geom = self._geom
if geom is None:
geom = np.zeros((M_global, 2))
# chunk_infos=create_chunk_infos(N=N,chunk_size=100000)
chunk_infos = create_chunk_infos(N=N)
nbhd_channels = get_channel_neighborhood(m_central, geom, adjacency_radius=o['adjacency_radius'])
# M_neigh = len(nbhd_channels)
m_central_rel = np.where(nbhd_channels == m_central)[0][0]
if mode == 'phase1':
print('Detecting events on channel {} ({})...'.format(m_central + 1, mode));
sys.stdout.flush()
# these times are really indices of the peaks of the flattened channel data, if you wanted the sample index
# relating to when the chunk was taken, you need to use this as an index of the time data (1st row of X)
times, clip_ind, assign_to_this_neighborhood = detect_on_neighborhood_from_snippets_model(X,
m_central,
clip_size=clip_size,
nbhd_channels=nbhd_channels,
detect_threshold=detect_threshold,
detect_interval=detect_interval,
detect_sign=detect_sign,
chunk_infos=chunk_infos)
else:
# get the times and clip_ind values from the phase1 sort
times_list = []
clip_ind_list = []
with h5py.File(self._hdf5_path, "r") as f:
for ii in range(self._num_assigned_event_time_arrays):
times_list.append(np.array(f.get('assigned-event-times-{}'.format(ii))))
clip_ind_list.append(np.array(f.get('assigned-event-clip_inds-{}'.format(ii))))
times = np.concatenate(times_list) if times_list else np.array([])
times = times.astype(np.int64) # force this to avoid index errors
clip_ind = np.concatenate(clip_ind_list) if clip_ind_list else np.array([])
print('Computing PCA features for channel {} ({})...'.format(m_central + 1, mode));
sys.stdout.flush()
# max_num_clips_for_pca=1000 # TODO: this should be a setting somewhere
max_num_clips_for_pca = o['max_num_clips_for_pca']
# Note: we use twice as many features, because of branch method (MT x F)
features = compute_event_features_from_snippets(X, times, clip_ind, nbhd_channels=nbhd_channels,
clip_size=clip_size,
max_num_clips_for_pca=max_num_clips_for_pca,
num_features=num_features * 2, chunk_infos=chunk_infos)
# The clustering
print('Clustering for channel {} ({})...'.format(m_central + 1, mode));
sys.stdout.flush()
labels = branch_cluster(features, branch_depth=2, npca=num_features)
K = np.max(labels) if labels.size > 0 else 0
print('Found {} clusters for channel {} ({})...'.format(K, m_central + 1, mode));
sys.stdout.flush()
if mode == 'phase1':
print('Computing templates for channel {} ({})...'.format(m_central + 1, mode));
sys.stdout.flush()
templates = compute_templates_from_snippets_model(X, times, clip_ind, labels, nbhd_channels=nbhd_channels,
clip_size=clip_size, chunk_infos=chunk_infos)
print('Re-assigning events for channel {} ({})...'.format(m_central + 1, mode));
sys.stdout.flush()
# tc_peaks = the peak values for each channel in the templates, tc_peak_times = index where the peaks occur
tc_peaks, tc_peak_times = compute_template_channel_peaks(templates, detect_sign=detect_sign) # M_neigh x K
peak_channels = np.argmax(tc_peaks, axis=0) # The channels on which the peaks occur
# make channel assignments and offset times
inds2 = np.where(assign_to_this_neighborhood)[0]
times2 = times[inds2]
clip_ind2 = clip_ind[inds2]
labels2 = labels[inds2]
channel_assignments2 = np.zeros(len(times2))
for k in range(K):
assigned_channel_within_neighborhood = peak_channels[k]
dt = tc_peak_times[assigned_channel_within_neighborhood][k] - tc_peak_times[m_central_rel][k]
inds_k = np.where(labels2 == (k + 1))[0]
if len(inds_k) > 0:
times2[inds_k] += dt
channel_assignments2[inds_k] = nbhd_channels[assigned_channel_within_neighborhood]
if m_central != nbhd_channels[assigned_channel_within_neighborhood]:
print('Re-assigning {} events from {} to {} with dt={} (k={})'.format(len(inds_k),
m_central + 1,
nbhd_channels[
assigned_channel_within_neighborhood] + 1,
dt, k + 1));
sys.stdout.flush()
# add the phase 1 values to the hdf5 file
with h5py.File(self._hdf5_path, "a") as f:
f.create_dataset('phase1-times', data=times2)
f.create_dataset('phase1-clip_inds', data=clip_ind2)
f.create_dataset('phase1-channel-assignments', data=channel_assignments2)
elif mode == 'phase2':
with h5py.File(self._hdf5_path, "a") as f:
f.create_dataset('phase2-times', data=times)
f.create_dataset('phase2-clip_inds', data=clip_ind)
f.create_dataset('phase2-labels', data=labels)
class SnippetModel_Hdf5:
def __init__(self, path):
self._hdf5_path = path
with h5py.File(self._hdf5_path, "r") as f:
self._num_chunks = np.array(f.get('num_chunks'))[0]
self._chunk_size = np.array(f.get('chunk_size'))[0]
self._padding = np.array(f.get('padding'))[0]
self._num_channels = np.array(f.get('num_channels'))[0]
self._num_timepoints = np.array(f.get('num_timepoints'))[0]
def numChannels(self):
return self._num_channels
def numTimepoints(self):
return self._num_timepoints
def getChunk(self, *, t1, t2, channels):
if (t1 < 0) or (t2 > self.numTimepoints()):
ret = np.zeros((len(channels), t2 - t1))
t1a = np.maximum(t1, 0)
t2a = np.minimum(t2, self.numTimepoints())
ret[:, t1a - (t1):t2a - (t1)] = self.getChunk(t1=t1a, t2=t2a, channels=channels)
return ret
else:
c1 = int(t1 / self._chunk_size)
c2 = int((t2 - 1) / self._chunk_size)
ret = np.zeros((len(channels), t2 - t1))
with h5py.File(self._hdf5_path, "r") as f:
for cc in range(c1, c2 + 1):
if cc == c1:
t1a = t1
else:
t1a = self._chunk_size * cc
if cc == c2:
t2a = t2
else:
t2a = self._chunk_size * (cc + 1)
for ii in range(len(channels)):
m = channels[ii]
assert (cc >= 0)
assert (cc < self._num_chunks)
key = 'part-{}-{}'.format(m, cc)  # renamed from str to avoid shadowing the builtin
offset = self._chunk_size * cc - self._padding
ret[ii, t1a - t1:t2a - t1] = f[key][t1a - offset:t2a - offset]
return ret
def prepare_snippet_hdf5(snippet_fname, timeseries_hdf5_fname):
with h5py.File(timeseries_hdf5_fname, "w") as f:
X = mdaio.DiskReadMda(snippet_fname)
M = X.N1() # Number of channels
N = X.N2() # Number of timepoints
chunk_size = N
padding = 0
chunk_size_with_padding = chunk_size + 2 * padding
num_chunks = int(np.ceil(N / chunk_size))
f.create_dataset('chunk_size', data=[chunk_size])
f.create_dataset('num_chunks', data=[num_chunks])
f.create_dataset('padding', data=[padding])
f.create_dataset('num_channels', data=[M])
f.create_dataset('num_timepoints', data=[N])
for j in range(num_chunks):
padded_chunk = np.zeros((X.N1(), chunk_size_with_padding), dtype=X.dt())
t1 = int(j * chunk_size) # first timepoint of the chunk
t2 = int(np.minimum(X.N2(), (t1 + chunk_size))) # last timepoint of chunk (+1)
s1 = int(np.maximum(0, t1 - padding)) # first timepoint including the padding
s2 = int(np.minimum(X.N2(), t2 + padding)) # last timepoint (+1) including the padding
# determine aa so that t1-s1+aa = padding
# so, aa = padding-(t1-s1)
aa = padding - (t1 - s1)
padded_chunk[:, aa:aa + s2 - s1] = X.readChunk(i1=0, N1=X.N1(), i2=s1, N2=s2 - s1) # Read the padded chunk
for m in range(M):
f.create_dataset('part-{}-{}'.format(m, j), data=padded_chunk[m, :].ravel())
def run_phase1_sort(neighborhood_sorter):
neighborhood_sorter.runPhase1Sort()
def run_phase2_sort(neighborhood_sorter):
neighborhood_sorter.runPhase2Sort()
class MountainSort4_snippets:
def __init__(self):
self._sorting_opts = {
"adjacency_radius": -1,
"detect_sign": None, # must be set explicitly
"detect_interval": 10,
"detect_threshold": 3,
"num_features": 10,
'max_num_clips_for_pca': 1000,
}
self._snippets = None
self._firings_out_path = None
self._geom = None
self._temporary_directory = None
self._num_workers = 0
def setSortingOpts(self, adjacency_radius=None, detect_sign=None, detect_interval=None, detect_threshold=None,
clip_size=None, num_features=None, max_num_clips_for_pca=None):
if clip_size is not None:
self._sorting_opts['clip_size'] = clip_size
if adjacency_radius is not None:
self._sorting_opts['adjacency_radius'] = adjacency_radius
if detect_sign is not None:
self._sorting_opts['detect_sign'] = detect_sign
if detect_interval is not None:
self._sorting_opts['detect_interval'] = detect_interval
if detect_threshold is not None:
self._sorting_opts['detect_threshold'] = detect_threshold
if num_features is not None:
self._sorting_opts['num_features'] = num_features
if max_num_clips_for_pca is not None:
self._sorting_opts['max_num_clips_for_pca'] = max_num_clips_for_pca
def setSnippetPath(self, snippets_path):
self._snippets_path = snippets_path
def setFiringsOutPath(self, path):
self._firings_out_path = path
def setNumWorkers(self, num_workers):
self._num_workers = num_workers
def setGeom(self, geom):
self._geom = geom
def setTemporaryDirectory(self, tempdir):
self._temporary_directory = tempdir
def sort(self):
if not self._temporary_directory:
raise Exception('Temporary directory not set.')
num_workers = self._num_workers
if num_workers <= 0:
num_workers = multiprocessing.cpu_count()
# clip_size = self._sorting_opts['clip_size']
temp_hdf5_path = self._temporary_directory + '/snippets.hdf5'
if os.path.exists(temp_hdf5_path):
os.remove(temp_hdf5_path)
'''hdf5_chunk_size=1000000
hdf5_padding=clip_size*10
print ('Preparing {}...'.format(temp_hdf5_path))
prepare_timeseries_hdf5(self._timeseries_path,temp_hdf5_path,chunk_size=hdf5_chunk_size,padding=hdf5_padding)
X=TimeseriesModel_Hdf5(temp_hdf5_path)'''
# hdf5_chunk_size=1000000
# hdf5_padding = 0
# prepare_snippet_hdf5(self._snippets_path, temp_hdf5_path,chunk_size=hdf5_chunk_size,padding=hdf5_padding)
prepare_snippet_hdf5(self._snippets_path, temp_hdf5_path)
X = SnippetModel_Hdf5(temp_hdf5_path)
M = X.numChannels() - 1 # the top row of data are the sample numbers
N = X.numTimepoints()
print('Preparing neighborhood sorters...');
sys.stdout.flush()
neighborhood_sorters = []
# return self._sorting_opts, self._geom
for m in range(M):
NS = _NeighborhoodSorter()
NS.setSortingOpts(self._sorting_opts)
NS.setSnippetsModel(X)
NS.setGeom(self._geom)
NS.setCentralChannel(m)
fname0 = self._temporary_directory + '/neighborhood-{}.hdf5'.format(m)
if os.path.exists(fname0):
os.remove(fname0)
NS.setHdf5FilePath(fname0)
neighborhood_sorters.append(NS)
pool = multiprocessing.Pool(num_workers)
pool.map(run_phase1_sort, neighborhood_sorters)
# for each sorter, check the assignments of the spikes and assign them to the respective
# neighborhood_sorter
for m in range(M):
times_m = neighborhood_sorters[m].getPhase1Times()
clip_inds_m = neighborhood_sorters[m].getPhase1ClipInds()
channel_assignments_m = neighborhood_sorters[m].getPhase1ChannelAssignments()
for m2 in range(M):
inds_m_m2 = np.where(channel_assignments_m == m2)[0]
if len(inds_m_m2) > 0:
neighborhood_sorters[m2].addAssignedEventTimes(times_m[inds_m_m2])
neighborhood_sorters[m2].addAssignedEventClipIndices(clip_inds_m[inds_m_m2])
pool = multiprocessing.Pool(num_workers)
pool.map(run_phase2_sort, neighborhood_sorters)
print('Preparing output...');
sys.stdout.flush()
all_times_list = []
all_labels_list = []
all_channels_list = []
all_clip_inds_list = []
k_offset = 0
for m in range(M):
labels = neighborhood_sorters[m].getPhase2Labels()
all_times_list.append(neighborhood_sorters[m].getPhase2Times())
all_clip_inds_list.append(neighborhood_sorters[m].getPhase2ClipInds())
all_labels_list.append(labels + k_offset)
all_channels_list.append(np.ones(len(neighborhood_sorters[m].getPhase2Times())) * (m + 1))
k_offset += np.max(labels) if labels.size > 0 else 0
all_times = np.concatenate(all_times_list)
all_labels = np.concatenate(all_labels_list)
all_channels = np.concatenate(all_channels_list)
all_clip_inds = np.concatenate(all_clip_inds_list)
# since we are sorting by time we technically might not need
# to do the all_clip_inds since that will sort those too
sort_inds = np.argsort(all_times)
all_times = all_times[sort_inds]
all_labels = all_labels[sort_inds]
all_channels = all_channels[sort_inds]
all_clip_inds = all_clip_inds[sort_inds]
chunk_infos = create_chunk_infos(N=N)
all_times = get_real_times(X, all_times, time_channel=0, chunk_infos=chunk_infos)
print('Writing firings file...');
sys.stdout.flush()
write_firings_file(all_channels, all_times, all_labels, all_clip_inds, self._firings_out_path)
print('Done.');
sys.stdout.flush()
def readMDA(filename):
with open(filename, 'rb') as f:
code = struct.unpack('<l', f.read(4))[0]
if code > 0:
num_dims = code
code = -1
else:
f.read(4)
num_dims = struct.unpack('<l', f.read(4))[0]
S = np.zeros((1, num_dims))
for j in np.arange(num_dims):
S[0, j] = struct.unpack('<l', f.read(4))[0]
N = int(np.prod(S)) # total number of elements
A = np.zeros((int(S[0, 0]), int(S[0, 1])))
if code == -1:
# complex float
M = np.zeros((1, N * 2))
# there are N*2 samples, and 4 bytes per float
M[0, :] = np.asarray(struct.unpack('<%df' % (N * 2), f.read(N * 2 * 4)))
A = (M[0, 0:N * 2:2] + 1j * M[0, 1:N * 2:2]).reshape(
A.shape, order='F')
elif code == -2:
# uint8
A = np.asarray(struct.unpack('<%dB' % (N), f.read(N))).reshape(A.shape, order='F') # 1 byte per uint8
elif code == -3:
# float, float32
A = np.asarray(
struct.unpack('<%df' % (N), f.read(N * 4))).reshape(
A.shape, order='F') # 4 bytes per float
elif code == -4:
# short, int16
A = np.asarray(
struct.unpack('<%dh' % (N), f.read(N * 2))).reshape(
A.shape, order='F') # 2 bytes per short
elif code == -5:
# int, int32
A = np.asarray(
struct.unpack('<%di' % (N), f.read(N * 4))).reshape(
A.shape, order='F') # 4 bytes per int32
elif code == -6:
# uint16
A = np.asarray(
struct.unpack('<%dH' % (N), f.read(N * 2))).reshape(
A.shape, order='F') # 2 bytes per uint16
elif code == -7:
# double, float64
# B = struct.unpack('<%dd' % (N), f.read(N*8))
A = np.asarray(
struct.unpack('<%dd' % (N), f.read(N * 8))).reshape(
A.shape, order='F') # 8 bytes per double
elif code == -8:
# uint32
A = np.asarray(
struct.unpack('<%dI' % (N), f.read(N * 4))).reshape(
A.shape, order='F') # 4 bytes per uint32
else:
raise ValueError('Unsupported MDA data type code: {}'.format(code))
return A, code
```
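Several helpers used above (`subsample_array`, `compute_principal_components`, `branch_cluster`, `get_channel_neighborhood`, ...) are defined elsewhere in the repository. As one illustration, a minimal `subsample_array` might look like the sketch below — this is an assumption about its behavior (evenly spaced subsampling), not the repository's exact implementation:

```python
import numpy as np

def subsample_array(x, max_num):
    """Return at most max_num entries of x, evenly spaced across the array
    (a guessed stand-in for the helper used above)."""
    x = np.asarray(x)
    if x.size <= max_num:
        return x
    inds = np.linspace(0, x.size - 1, max_num).astype(np.int64)
    return x[inds]
```

Any even-spacing scheme would serve the same purpose here: keeping the PCA input bounded by `max_num_clips_for_pca` without biasing toward one end of the recording.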
# Reading / Converting Tetrode Data
```
# choose the example directory
notebook_path = os.path.dirname(os.path.abspath("__file__"))
data_path = os.path.join(os.path.dirname(notebook_path), 'data')
print('Data fullpath: ', data_path)
example_data_paths = [os.path.join(data_path, file) for file in os.listdir(data_path) if file != 'temp']
print('Available example paths:', example_data_paths)
# choose one (if more than one example session)
example_data_directory = example_data_paths[0]
# reading in the tetrode data
# Tint records 50 samples, where (in general) the 10th index is where the signal crossed a defined threshold
# (chosen before recording).
# Each session comes with a .set file which contains all the parameters (threshold, how many pre-threshold
# and post-threshold samples are recorded, Sampling Rate, etc.)
# Each tetrode will have its own recording (.N)
# ts - the time of each snippet, representing where the data crossed threshold (the 10th index)
# find the set files within the directory
set_files = [os.path.join(example_data_directory, file) for file in os.listdir(example_data_directory) if '.set' in file]
print('Available sessions: ')
for file in set_files:
print(file)
# choose a session to analyze
set_filename = set_files[0]
session = os.path.splitext(os.path.basename(set_filename))[0]
print('chosen set file: ', set_filename)
# get the number of samples recorded pre-threshold
pre_threshold = int(get_setfile_parameter('pretrigSamps', set_filename))
Fs = int(get_setfile_parameter('rawRate', set_filename))
threshold = int(get_setfile_parameter('threshold', set_filename)) # this threshold is 16 bit, but the data
# is ultimately saved in 8 bit, so we must convert for it to be applicable to the clips
threshold /= 256
print('Threshold: %s' % (str(threshold)))
def is_tetrode(file, session):
if os.path.splitext(file)[0] == session:
try:
tetrode_number = int(os.path.splitext(file)[1][1:])
return True
except ValueError:
return False
else:
return False
tetrode_files = [os.path.join(example_data_directory, file) for file in os.listdir(example_data_directory) if is_tetrode(file, session)]
print('Available tetrode files: ')
for file in tetrode_files:
print(file)
# choose a tetrode to analyze
tetrode_filename = tetrode_files[0]
print('chosen tetrode file: ', tetrode_filename)
# read tetrode data
ts, ch1, ch2, ch3, ch4, spikeparam = getspikes(tetrode_filename)
n_spikes, n_samples = ch1.shape
n_channels = 4
# converting back to sample number; chose np.rint with .astype(np.int32) in case of any strange float precision:
# the product could be X.99, and int(X.99) would give X rather than X+1
# remove the pre trigger values so the time represents the start of the snippet
ts = np.rint(np.multiply(ts, Fs)).astype(np.int32) - pre_threshold
snippets = np.vstack((ch1, ch2, ch3, ch4)).reshape((n_channels,n_spikes,n_samples))
# making it so each sample within the snippet has a sample time
ts_new = (np.tile(ts,50) + np.arange(50)).reshape((1,-1))
# acquire session filename information
tetrode = int(os.path.splitext(tetrode_filename)[-1][1:])
directory = os.path.dirname(tetrode_filename)
basename = os.path.basename(os.path.splitext(tetrode_filename)[0])
print(tetrode, directory, basename)
# create a .mda of this data so it can be read in by MountainSort
mda_data = np.vstack((ch1, ch2, ch3, ch4)).reshape((n_channels,-1))
# we will add the time stamps to the top of the data
mda_data = np.vstack((ts_new, mda_data))
mda_fname = os.path.join(directory, '%s_T%d_snippets.mda' % (basename, tetrode))
print(mda_fname)
mdaio.writemda64(mda_data,mda_fname)
```
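The `np.rint(...).astype(np.int32)` choice above guards against floating-point truncation. A small sketch of the failure mode it avoids (the sampling rate `Fs` here is made up for illustration):

```python
import numpy as np

ts_sec = np.array([0.0999999999, 0.2])  # spike times in seconds
Fs = 100  # hypothetical sampling rate

naive = (ts_sec * Fs).astype(np.int32)            # truncates 9.99999999 -> 9
rounded = np.rint(ts_sec * Fs).astype(np.int32)   # rounds 9.99999999 -> 10
```

With plain truncation the first sample lands one index early; rounding first recovers the intended sample number.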
# Checking Successful Conversion
```
# checking that the data was written correctly
A, code = readMDA(mda_fname)
print('Conversion Successful: %s' % str(np.array_equal(A, mda_data)))
print(A.shape)
A = None; code=None
```
# Running Sort
```
temp_directory = os.path.join(os.path.dirname(notebook_path), 'data', 'temp')
if not os.path.exists(temp_directory):
os.mkdir(temp_directory)
firing_output = temp_directory + '/firing.mda'
print(temp_directory)
snippet_path = mda_fname
MS = MountainSort4_snippets()
MS.setSnippetPath(snippet_path)
MS.setTemporaryDirectory(temp_directory)
MS.setFiringsOutPath(firing_output)
MS.setSortingOpts(adjacency_radius=-1,detect_sign=1,detect_interval=10,detect_threshold=threshold,clip_size=50,num_features=10,
max_num_clips_for_pca=1000)
MS.sort()
```
# Visualize Output
```
from math import sqrt
from functools import reduce
def factors_f(n):
step = 2 if n%2 else 1
return set(reduce(list.__add__,
([i, n//i] for i in range(1, int(sqrt(n))+1, step) if n % i == 0)))
A, code = readMDA(firing_output)
spike_channel = A[0, :].astype(int) # the channel which the spike belongs to
spike_times = A[1, :].astype(int) # at this stage it is in index values (0-based)
cell_number = A[2, :].astype(int)
clip_inds = A[3, :].astype(int)
cell_ids = np.unique(cell_number)
print('%d cells found!' % len(cell_ids))
n_cells = len(cell_ids)
if np.sqrt(n_cells).is_integer():
rows = int(np.sqrt(n_cells))
cols = int(rows)
else:
'''Finding geometry for the subplots'''
value1 = int(np.ceil(np.sqrt(n_cells)))
value2 = int(np.floor(np.sqrt(n_cells)))
if value1*value2 < n_cells:
value2 = int(np.ceil(np.sqrt(n_cells)))
cols, rows = sorted(np.array([value1,value2]))
fig, axs = plt.subplots(rows, cols, figsize=(12, 10))
for i, ax in enumerate(axs.flatten()):
try:
cell = cell_ids[i]
except IndexError:
continue
cell_ind = np.where(cell_number == cell)[0]
cell_data = snippets[:,clip_inds[cell_ind],:]
cell_data = np.mean(cell_data, axis=1)
ax.plot(cell_data.T)
# these are 8 bit values, thus we will set the limits to -128 and 127
ax.set_ylim([-128,127])
```
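The rows/cols computation above is repeated in a few cells; it could be factored into a helper. A sketch of the same logic:

```python
import numpy as np

def subplot_grid(n):
    """Rows and cols for n subplots, mirroring the geometry logic above:
    a square grid when n is a perfect square, otherwise the smallest
    near-square grid with rows * cols >= n."""
    if np.sqrt(n).is_integer():
        r = int(np.sqrt(n))
        return r, r
    value1 = int(np.ceil(np.sqrt(n)))
    value2 = int(np.floor(np.sqrt(n)))
    if value1 * value2 < n:
        value2 = value1
    cols, rows = sorted((value1, value2))
    return rows, cols
```

For example, 5 cells get a 3x2 grid and 7 cells get a 3x3 grid.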
# Visualize Output from Tint (Uses KlustaKwik)
```
def find_unit(tetrode_path, tetrode_list):
"""Inputs:
tetrode_path: the path of the tetrode (not including the filename and extension)
example: C:Location\of\File\filename.ext
tetrode_list: list of tetrodes to find the units that are in the tetrode_path
example [1,2,3], will check just the first 3 tetrodes
-------------------------------------------------------------
Outputs:
cut_list: an nx1 list for n-tetrodes in the tetrode_list containing a list of unit numbers that each spike belongs to
unique_cell_list: an nx1 list for n-tetrodes in the tetrode list containing a list of unique unit numbers"""
cut_list = []
unique_cell_list = []
for tet_file in tetrode_list:
cut_fname = os.path.join(tetrode_path, ''.join([os.path.splitext(os.path.basename(tet_file))[0],
'_', os.path.splitext(tet_file)[1][1:], '.cut']))
extract_cut = False
with open(cut_fname, 'r') as f:
for line in f:
if 'Exact_cut' in line: # finding the beginning of the cut values
extract_cut = True
if extract_cut: # read all the cut values
cut_values = str(f.readlines())
for string_val in ['\\n', ',', "'", '[', ']']: # removing non base10 integer values
cut_values = cut_values.replace(string_val, '')
cut_values = [int(val) for val in cut_values.split()]
cut_list.append(cut_values)
unique_cell_list.append(list(set(cut_values)))
return np.asarray(cut_list), np.asarray(unique_cell_list)
tetrode_path = os.path.dirname(set_filename)
print(tetrode_path)
tetrode_list = [tetrode_filename]
cell_number_tint, cell_ids_tint = find_unit(tetrode_path, tetrode_list)
cell_number_tint = cell_number_tint.flatten()
cell_ids_tint = cell_ids_tint.flatten()
# in Tint the 0 cell is the dummy cell, so we will remove it
if 0 in cell_ids_tint:
cell_ids_tint = cell_ids_tint[np.where(cell_ids_tint!=0)]
# we also skip a cell number to separate the good cells from the bad cells (so we can keep the good and bad)
# i.e. cell numbers => [1,2,3,4,6,7,8], we are missing 5, so the 1,2,3,4 are the good cells, and anything 6+ are noise
bad_cells_ind = np.where((np.diff(cell_ids_tint) == 1) == False)[0][0]
if cell_ids_tint[bad_cells_ind] != cell_ids_tint[bad_cells_ind-1]:
# double checking that we have the correct index
print('All Good')
cell_ids_tint = cell_ids_tint[:bad_cells_ind+1]
print(cell_ids_tint)
n_cells = len(cell_ids_tint)
if np.sqrt(n_cells).is_integer():
rows = int(np.sqrt(n_cells))
cols = int(rows)
else:
'''Finding geometry for the subplots'''
value1 = int(np.ceil(np.sqrt(n_cells)))
value2 = int(np.floor(np.sqrt(n_cells)))
if value1*value2 < n_cells:
value2 = int(np.ceil(np.sqrt(n_cells)))
cols, rows = sorted(np.array([value1,value2]))
print(cols, rows)
```
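The gap-detection trick used above (a deliberately skipped cell number separates good cells from noise cells) can be seen on a toy example:

```python
import numpy as np

cell_ids = np.array([1, 2, 3, 4, 6, 7, 8])  # 5 is deliberately skipped
# index of the first non-consecutive step in the sorted ids
gap_ind = np.where(np.diff(cell_ids) != 1)[0][0]
good_cells = cell_ids[:gap_ind + 1]  # everything before the gap
```

Here `good_cells` is `[1, 2, 3, 4]`; ids 6 and above would be treated as noise clusters.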
### Initial notes: our original/manual KlustaKwik method still produces more cells than MountainSort; this may be because we have not whitened the data yet.
```
fig, axs = plt.subplots(rows, cols, figsize=(12, 10))
for i, ax in enumerate(axs.flatten()):
try:
cell = cell_ids_tint[i]
except IndexError:
continue
cell_ind = np.where(cell_number_tint == cell)[0]
cell_data = snippets[:,cell_ind,:]
cell_data = np.mean(cell_data, axis=1)
ax.plot(cell_data.T)
# these are 8 bit values, thus we will set the limits to -128 and 127
ax.set_ylim([-128,127])
```
# Below Here is Random Testing Code
```
temp_hdf5_path = MS._temporary_directory + '/snippets.hdf5'
X=SnippetModel_Hdf5(temp_hdf5_path)
M=X.numChannels()-1
N=X.numTimepoints()
chunk_infos=create_chunk_infos(N=N)
t1 = chunk_infos[0]['t1']
t2 = chunk_infos[0]['t2']
data = X.getChunk(t1=t1,t2=t2,channels=[1,2,3,4]).astype(np.int32)
```
### Testing Whitening
```
output_filename = os.path.join(directory, '%s_T%d_snippets_pre.mda' % (basename, tetrode))
whiten(timeseries=mda_fname,timeseries_out=output_filename,
clip_size=50,num_clips_per_chunk=6000,num_processes=os.cpu_count())
A, code = readMDA(output_filename)
whitened_data = A[1:,:]
n_ch = A.shape[0]
whitened_data=whitened_data.reshape((n_channels,-1,50))
whitened_t = A[0,:].astype(np.int64)
whitened_t
spike_index = 12
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(211)
ax.plot(snippets[:,spike_index,:].T)
ax = fig.add_subplot(212)
plt.plot(whitened_data[:,spike_index,:].T)
temp_directory = os.path.join(os.path.dirname(notebook_path), 'data', 'temp')
if not os.path.exists(temp_directory):
os.mkdir(temp_directory)
firing_output = temp_directory + '/firing_whitened.mda'
print(temp_directory)
print(output_filename)
pre_snippet_path = output_filename
MS = MountainSort4_snippets()
MS.setSnippetPath(pre_snippet_path)
MS.setTemporaryDirectory(temp_directory)
MS.setFiringsOutPath(firing_output)
MS.setSortingOpts(adjacency_radius=-1,detect_sign=1,detect_interval=10,detect_threshold=3,clip_size=50,num_features=10,
max_num_clips_for_pca=1000)
MS.sort()
A, code = readMDA(firing_output)
spike_channel = A[0, :].astype(int) # the channel which the spike belongs to
spike_times = A[1, :].astype(int) # at this stage it is in index values (0-based)
cell_number = A[2, :].astype(int)
clip_inds = A[3, :].astype(int)
cell_ids = np.unique(cell_number)
print('%d cells found!' % len(cell_ids))
n_cells = len(cell_ids)
if np.sqrt(n_cells).is_integer():
rows = int(np.sqrt(n_cells))
cols = int(rows)
else:
'''Finding geometry for the subplots'''
value1 = int(np.ceil(np.sqrt(n_cells)))
value2 = int(np.floor(np.sqrt(n_cells)))
if value1*value2 < n_cells:
value2 = int(np.ceil(np.sqrt(n_cells)))
cols, rows = sorted(np.array([value1,value2]))
fig, axs = plt.subplots(rows, cols, figsize=(12, 10))
for i, ax in enumerate(axs.flatten()):
try:
cell = cell_ids[i]
except IndexError:
continue
cell_ind = np.where(cell_number == cell)[0]
cell_data = snippets[:,clip_inds[cell_ind],:]
cell_data = np.mean(cell_data, axis=1)
ax.plot(cell_data.T)
# these are 8 bit values, thus we will set the limits to -128 and 127
ax.set_ylim([-128,127])
```
# A brief intro to pydeck
pydeck is made for visualizing data points in 2D or 3D maps. Specifically, it handles
- rendering large (>1M points) data sets, like LIDAR point clouds or GPS pings
- large-scale updates to data points, like plotting points with motion
- making beautiful maps
Under the hood, it's powered by the [deck.gl](https://github.com/visgl/deck.gl/) JavaScript framework.
pydeck is strongest when used in tandem with [Pandas](https://pandas.pydata.org/), but it doesn't have to be.
Please note that **these demo notebooks are best when executed cell-by-cell**, so ideally clone this repo or run it from mybinder.org.
```
import pydeck as pdk
print("Welcome to pydeck version", pdk.__version__)
```
# There are three steps for most pydeck visualizations
We'll walk through pydeck using a visualization of vehicle accident data in the United Kingdom.
## 1. Choose your data
Here, we'll use the history of accident data throughout the United Kingdom. This data set contains the latitude and longitude of every car accident in the UK in 2014 ([source](https://data.gov.uk/dataset/053a6529-6c8c-42ac-ae1e-455b2708e535/road-traffic-accidents)).
```
import pandas as pd
UK_ACCIDENTS_DATA = 'https://raw.githubusercontent.com/visgl/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv'
pd.read_csv(UK_ACCIDENTS_DATA).head()
```
## 2. Configure the visualization: Choose your layer(s) and viewport
pydeck's **`Layer`** object takes two positional and many keyword arguments:
- First, a string specifying the layer type, with our example below using `'HexagonLayer'`
- Next, a data URL–below you'll see the `UK_ACCIDENTS_DATA` that we set above, but we could alternately pass a data frame or list of dictionaries
- Finally, keywords representing that layer's attributes–in our example, this would include `elevation_scale`, `elevation_range`, `extruded`, `coverage`. `pickable=True` also allows us to add a tooltip that appears on hover.
```python
layer = pdk.Layer(
'HexagonLayer',
UK_ACCIDENTS_DATA,
get_position='[lng,lat]',
elevation_scale=50,
pickable=True,
auto_highlight=True,
elevation_range=[0, 3000],
extruded=True,
coverage=1)
```
There is of course an entire catalog of layers which you're welcome to check out within the [deck.gl documentation](https://deck.gl/#/documentation/deckgl-api-reference/layers/overview).
### Configure your viewport
We also have to specify a **`ViewState`** object.
The **`ViewState`** object specifies a camera angle relative to the map data. If you don't want to manually specify it, the function **`pydeck.data_utils.compute_view`** can take your data and automatically zoom to it.
pydeck also provides some controls, most of which should be familiar from map applications throughout the web. By default, you can hold down and drag to rotate the map.
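To illustrate what an automatic viewport computation involves, here is a rough, self-contained sketch of the idea: center the camera on the data and derive a zoom level from its bounding box. The `auto_view` helper below is hypothetical and only illustrates the concept; in practice, use `pydeck.data_utils.compute_view`.

```python
import math

def auto_view(points):
    """Center on the data and pick a zoom level covering its bounding box
    (hypothetical helper, not the actual pydeck implementation)."""
    lngs = [p[0] for p in points]
    lats = [p[1] for p in points]
    center = (sum(lngs) / len(lngs), sum(lats) / len(lats))
    # In web-mercator tiling, each zoom level halves the visible span
    # (360 degrees of longitude are visible at zoom 0)
    span = max(max(lngs) - min(lngs), max(lats) - min(lats), 1e-9)
    zoom = max(0, min(20, math.floor(math.log2(360 / span))))
    return center, zoom

center, zoom = auto_view([(-1.0, 52.0), (-2.0, 53.0)])
```

For real data you would pass the `[lng, lat]` columns of your data frame instead of hard-coded points.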
```
layer = pdk.Layer(
'HexagonLayer',
UK_ACCIDENTS_DATA,
get_position='[lng, lat]',
auto_highlight=True,
elevation_scale=50,
pickable=True,
elevation_range=[0, 3000],
extruded=True,
coverage=1)
# Set the viewport location
view_state = pdk.ViewState(
longitude=-1.415,
latitude=52.2323,
zoom=6,
min_zoom=5,
max_zoom=15,
pitch=40.5,
bearing=-27.36)
# Combine it all and render a viewport
r = pdk.Deck(layers=[layer], initial_view_state=view_state)
r.show()
```
## Render an update to the visualization
Execute the cell below and look at the map in the cell above–you'll notice a seamless rendered update on the map
```
layer.elevation_range = [0, 10000]
r.update()
```
## Support updates over time
We can combine any Python function with our work here, of course. Execute the cell below to update our map above over time.
```
import time
r.show()
for i in range(0, 10000, 1000):
layer.elevation_range = [0, i]
r.update()
time.sleep(0.1)
```
# pydeck without Jupyter
If you prefer not to use Jupyter or you'd like to export your map to a separate file, you can also write out maps to HTML locally using the `.to_html` function.
(Note that if you're executing this example on mybinder.org, it won't render, since write access in the binder environment is restricted.)
```
r.to_html('deck-iframe.html')
```
# Jupyter Notebook
Jupyter Notebooks are a good place to start new Python experiments...
This page is self-editable, so feel free to play...
* CTRL-Enter => Run a cell.
* CTRL-S => Save your notebook.
* H => More shortcuts...
What is it good for?
* Interactive documentation.
* Sandboxes.
* Write documented tests.
## Let's start with a small task
### Run some basic Python
```
import datetime
print("Today is: {}".format(datetime.date.today()))
```
### Run some basic Bash
```
!echo "Python version: " && python --version
!echo
!echo "Python packages: " && pip freeze
!echo
!echo "Environnement variables: " && env
```
## A few python useful modules and tricks
### Begins
How to easily create command line tools, although it is a bit hard to test in a notebook:
```
import begin
@begin.start
def my_function(model_path=None, data_path=None, action="help"):
"Do some stuff on some data"
print("Input args are:")
print(" model_path: {}".format(model_path))
print(" data_path: {}".format(data_path))
print(" action: {}".format(action))
my_function()
```
### Tqdm
[LINK](https://github.com/tqdm/tqdm)
tqdm is a nice progress bar, always useful with machine learning!
```
from tqdm import tqdm
from time import sleep
my_data = ["a", "b", "c", "d"]
text = ""
for char in tqdm(my_data, ncols=100, desc="Processing my data", unit="Data processed per second"):
sleep(0.25)
text = text + char
```
With two or more bars (not ideal in a notebook; this works better in a shell):
```
my_data_1 = ["a", "b", "c", "d"]
my_data_2 = ["1", "2", "3", "4"]
text = ""
for char in tqdm(my_data_1, ncols=100, desc="Processing my data", unit="item", position=0):
    for index in tqdm(my_data_2, ncols=100, desc="Processing my index", unit="item", position=1):
        sleep(0.25)
        text = text + char
```
### Showing an image
Always nice to visualize images.
```
from IPython.display import Image
# source: https://dribbble.com/shots/4307976-Python-Logo-Abstract
Image(filename='/source/data/image/python.png')
```
### Create interactive matplotlib graphs
[Source tutorial](https://towardsdatascience.com/how-to-produce-interactive-matplotlib-plots-in-jupyter-environment-1e4329d71651)
Interactive graphs allow better data visualization and understanding... a must-have...
#### A simple plotting example
```
%matplotlib widget
import pandas as pd
import matplotlib.pyplot as plt
url = "https://raw.githubusercontent.com/plotly/datasets/master/tips.csv"
df = pd.read_csv(url)
print("CSV red")
# Matplotlib Scatter Plot
plt.scatter('total_bill', 'tip',data=df)
plt.xlabel('Total Bill')
plt.ylabel('Tip')
plt.show()
print("Showing graph")
```
#### Interactive map
To run this example, please install the following Python modules into your Docker image:
`contextily` and `geopandas`
```
# install a python package into the docker
!pip install contextily geopandas
import geopandas as gpd
carshare = "https://raw.githubusercontent.com/plotly/datasets/master/carshare.csv"
df_carshare = pd.read_csv(carshare)
gdf = gpd.GeoDataFrame(df_carshare, geometry=gpd.points_from_xy(df_carshare.centroid_lon, df_carshare.centroid_lat),
crs="EPSG:4326")
import contextily as ctx
fig, ax = plt.subplots()
gdf.to_crs(epsg=3857).plot(ax=ax, color="red", edgecolor="white")
ctx.add_basemap(ax, url=ctx.providers.CartoDB.Positron)
plt.title("Car Share", fontsize=30, fontname="Palatino Linotype", color="grey")
ax.axis("off")
plt.show()
```
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
from sklearn.metrics import accuracy_score
import pickle
from sklearn.metrics import r2_score
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import average_precision_score
from sklearn.metrics import precision_recall_curve
from xgboost import XGBClassifier
from datetime import datetime
from haversine import haversine, Unit
from haversine import haversine_vector
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
# %%
# reading dataset
# https://opendatasus.saude.gov.br/dataset/bd-srag-2020
df = pd.read_csv('/home/pedro/bkp/code/dataset/INFLUD-21-09-2020.csv',sep=';',encoding = "ISO-8859-1")
# Applying constraints to the dataset
# Positive cases:
df = df[df['PCR_SARS2'] == 1]
print(df.shape)
# Hospitalized patients aged at most 110:
df = df[(df['HOSPITAL'] == 1) & (df['NU_IDADE_N'] <= 110)]
print(df.shape)
# Patients with a known outcome (EVOLUCAO not 3, 9 or null):
df = df[(df['EVOLUCAO'] != 3) & (df['EVOLUCAO'] != 9) & df['EVOLUCAO'].notnull()]
print(df.shape)
# %%
# Latitudes and longitudes table from municipalities
df_cod = pd.read_csv('/home/pedro/bkp/code/dataset/municipios.csv', sep=',')
# %%
# Remove the last digit from each municipality code
df_cod['CO_MUN_RES'] = df_cod['CO_MUN_RES'].astype(str).str[:-1].astype(np.int64)
df_cod['CO_MU_INTE'] = df_cod['CO_MU_INTE'].astype(str).str[:-1].astype(np.int64)
# %%
# Match the catalogues on the municipality code
# latitude and longitude of the patient's residence
result_01 = pd.merge(df, df_cod[['CO_MUN_RES','latitude_res','longitude_res']], on='CO_MUN_RES', how="left")
result_02 = pd.merge(df, df_cod[['CO_MU_INTE','latitude_int','longitude_int']], on='CO_MU_INTE', how="left")
#print(result_01.shape)
#print(result_02.shape)
# %%
# Convert to coordinate arrays
patient_mun_code = result_01[['latitude_res','longitude_res']].to_numpy()
hospital_mun_code = result_02[['latitude_int','longitude_int']].to_numpy()
#print(patient_mun_code.shape)
#print(hospital_mun_code.shape)
# %%
# Calculate the distance from patient to hospital (distance between municipality centers, in km)
df['distance'] = haversine_vector(patient_mun_code, hospital_mun_code, Unit.KILOMETERS)
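# %%
# For reference, a self-contained numpy sketch of the haversine formula used
# above (hedged illustration; inputs are (lat, lon) pairs in degrees, output
# in km assuming a 6371 km Earth radius):
import numpy as np

def haversine_km(p1, p2):
    # p1, p2: arrays of shape (n, 2) holding (lat, lon) in degrees
    lat1, lon1 = np.radians(p1[:, 0]), np.radians(p1[:, 1])
    lat2, lon2 = np.radians(p2[:, 0]), np.radians(p2[:, 1])
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))
# One degree of longitude at the equator is roughly 111 km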
# %%
# overcrowded dataset
# dataset with code from hospital (cnes) with epidemiology week and the overcrowded status of hospital.
df_cod = pd.read_csv('/home/pedro/bkp/code/dataset/hospital_overcrowded.csv', sep=',')
# CO_UNI_NOT, SEM_NOT, Overcrowded
# Overload = number of COVID-19 hospitalizations in the epidemiological week / the hospital's total 2019 SARS hospitalizations
# %%
# Merge the overload status into the main dataframe
df = pd.merge(df, df_cod, on=['CO_UNI_NOT', 'SEM_NOT'], how="left")
#print(df.shape)
# %%
# Initial number of municipalities
# patient municipality code number
#print(len(df['CO_MUN_NOT']))
# reporting health unit code number
#print(len(df['CO_UNI_NOT']))
print(df['CO_MUN_NOT'].nunique())
print(df['CO_MUN_RES'].nunique())
# %%
# IDHM
# Read the IBGE code for each municipality and bin it by IDHM index
df_atlas = pd.read_excel (r'/home/pedro/bkp/code/dataset/AtlasBrasil_Consulta.xlsx')
# remove the last digit of the 'code' variable
df_atlas['code'] = df_atlas['code'].astype(str).str[:-1].astype(np.int64)
# Divide the IDHM into bins
IDHM_veryhigh = set(df_atlas['code'][df_atlas['IDHM2010']>=0.800])
#print(len(IDHM_veryhigh))
IDHM_high = set(df_atlas['code'][((df_atlas['IDHM2010']>=0.700)&(df_atlas['IDHM2010']<0.800))])
#print(len(IDHM_high))
IDHM_medium = set(df_atlas['code'][((df_atlas['IDHM2010']>=0.600)&(df_atlas['IDHM2010']<0.700))])
#print(len(IDHM_medium))
IDHM_low = set(df_atlas['code'][((df_atlas['IDHM2010']>=0.500)&(df_atlas['IDHM2010']<0.600))])
#print(len(IDHM_low))
IDHM_verylow = set(df_atlas['code'][df_atlas['IDHM2010']<0.500])
#print(len(IDHM_verylow))
df.loc[df['CO_MUN_NOT'].isin(IDHM_veryhigh) == True, 'IDHM'] = 5
df.loc[df['CO_MUN_NOT'].isin(IDHM_high) == True, 'IDHM'] = 4
df.loc[df['CO_MUN_NOT'].isin(IDHM_medium) == True, 'IDHM'] = 3
df.loc[df['CO_MUN_NOT'].isin(IDHM_low) == True, 'IDHM'] = 2
df.loc[df['CO_MUN_NOT'].isin(IDHM_verylow) == True, 'IDHM'] = 1
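# %%
# Side note: the five isin/loc assignments above can be collapsed into one
# binned lookup with pd.cut. Demo on toy data (the _demo_* names are
# illustrative only, not part of the pipeline):
import pandas as pd
_demo_atlas = pd.DataFrame({'code': [1, 2, 3], 'IDHM2010': [0.45, 0.65, 0.85]})
_demo_bins = pd.cut(_demo_atlas['IDHM2010'],
                    bins=[0, 0.500, 0.600, 0.700, 0.800, 1.0],
                    labels=[1, 2, 3, 4, 5], right=False)
# Codes 1, 2, 3 fall into IDHM classes 1, 3 and 5 respectively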
# %%
# Private and public hospital separation
df_hospital = pd.read_csv('/home/pedro/bkp/code/dataset/CNES_SUS.txt', sep='\t')
public = set(df_hospital.iloc[:,0][df_hospital.iloc[:,3]=='S'])
private = set(df_hospital.iloc[:,0][df_hospital.iloc[:,3]=='N'])
df.loc[df['CO_UNI_NOT'].isin(public) == True, 'HEALTH_SYSTEM'] = 1
df.loc[df['CO_UNI_NOT'].isin(private) == True, 'HEALTH_SYSTEM'] = 0
# %%
# Constraint on the dataset: we only analyze patients with known evolution, IDHM and health-system values
df = df[df['IDHM'].notnull() & ((df['HEALTH_SYSTEM'] == 1) | (df['HEALTH_SYSTEM'] == 0))]
#print(df.shape)
# %%
# Municipalities number
#print(len(df['CO_MUN_NOT']))
#print(len(df['CO_MU_INTE']))
#print(df['CO_MUN_NOT'].nunique())
#print(df['CO_MU_INTE'].nunique())
# %%
# Select features and target
df = df[['Overload', 'distance','NU_IDADE_N','CS_SEXO','IDHM','CS_RACA','CS_ESCOL_N','SG_UF_NOT','CS_ZONA',\
'HEALTH_SYSTEM','CS_GESTANT','FEBRE','VOMITO','TOSSE','GARGANTA','DESC_RESP','DISPNEIA','DIARREIA',\
'SATURACAO','CARDIOPATI','HEPATICA','ASMA','PNEUMOPATI','RENAL','HEMATOLOGI','DIABETES',\
'OBESIDADE','NEUROLOGIC','IMUNODEPRE','EVOLUCAO']]
# %%
# Count comorbidities per patient
df['SUM_COMORBIDITIES'] = df.iloc[:,19:-1].replace([9,2], 0).fillna(0).sum(axis=1)
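# %%
# Toy check of the recoding above (1 = yes, 2 = no, 9 = unknown in this
# dataset's coding): 2 and 9 are recoded to 0, so the row sum counts only
# the "yes" answers. The _demo name is illustrative only:
import pandas as pd
_demo = pd.DataFrame({'a': [1, 2, 9], 'b': [1, 1, 2]})
_demo_sum = _demo.replace([9, 2], 0).fillna(0).sum(axis=1)
# Row sums come out as 2, 1 and 0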
# %%
# Ordering features
df = df[['Overload','distance','NU_IDADE_N','CS_SEXO','CS_RACA','IDHM','CS_ESCOL_N','SG_UF_NOT','SUM_COMORBIDITIES',\
'HEALTH_SYSTEM','CS_ZONA','CARDIOPATI','HEPATICA','ASMA','PNEUMOPATI','RENAL','HEMATOLOGI',\
'DIABETES','OBESIDADE','NEUROLOGIC','IMUNODEPRE','EVOLUCAO']]
# %%
# Pre-Processing
df = df[df['EVOLUCAO'].notnull() & (df['EVOLUCAO'] != 9) & (df['EVOLUCAO'] != 3)]
df['CS_SEXO']=df['CS_SEXO'].replace({'M': 1, 'F':0, 'I':9, 'NaN':np.nan})
# replacing 2 by 0 (deceased patients and "no" answers)
df.iloc[:,11:] = df.iloc[:,11:].replace(to_replace = 2.0, value =0)
df['SG_UF_NOT'] = df['SG_UF_NOT'].map({'SP': 0, 'RJ':1, 'MG': 2 , 'ES':3, \
'RS':4, 'SC': 5, 'PR': 6, 'MT': 7, 'MS': 8, 'GO':9, 'DF':10, 'RO':11,'AC':12,'AM':13,\
'RR':14,'PA':15,'AP':16,'TO':17,'MA':18,'PI':19,'BA':20,'CE':21,'RN':22,'PB':23,'PE':24,'AL':25,'SE':26})
# Missing values in comorbidities and symptoms are filled with 0.
df.iloc[:,11:-1] = df.iloc[:,11:-1].fillna(0)
# Remaining NaN values in the first 11 columns are imputed with the column mean
imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
df.iloc[:,:11] = imp_mean.fit_transform(df.iloc[:,:11])
# %%
# Ordering features
df = df[['Overload','distance','NU_IDADE_N','CS_SEXO','CS_RACA','IDHM','CS_ESCOL_N','SG_UF_NOT','SUM_COMORBIDITIES',\
'HEALTH_SYSTEM','CS_ZONA','CARDIOPATI','HEPATICA','ASMA','PNEUMOPATI','RENAL','HEMATOLOGI',\
'DIABETES','OBESIDADE','NEUROLOGIC','IMUNODEPRE','EVOLUCAO']]
df = df.rename(columns={'Overload':'Overload','distance':'Distance','NU_IDADE_N':'Age','CS_SEXO':'Sex','CS_RACA':'Race','IDHM':'IDHM',\
'CS_ESCOL_N':'Education','SG_UF_NOT':'State','SUM_COMORBIDITIES':'Sum Comorbidities',\
'HEALTH_SYSTEM':'Health System','CS_ZONA':'Urbanity','CARDIOPATI':'Cardiopathy','HEPATICA':'Liver',\
'ASMA':'Asthma','PNEUMOPATI':'Pneumopathy','RENAL':'Renal','HEMATOLOGI':'Hematological',\
'DIABETES':'Diabetes','OBESIDADE':'Obesity','NEUROLOGIC':'Neurological',\
'IMUNODEPRE':'Immunosuppression','EVOLUCAO':'EVOLUCAO'})
# %%
# Analyze the NaNs of the Distance and Overload features
print(df.shape)
print(df['Overload'].isna().sum())
print(df['Distance'].isna().sum())
# %%
# feature
x = df.iloc[:,:-1]
# labels
y = df['EVOLUCAO']
# %%
# data separation
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = \
train_test_split(x, y, test_size=0.2, random_state=5)
# load the model from disk
loaded_model_rf = pickle.load(open('rf_without_symptoms.sav', 'rb'))
# Predict probabilities on the test set
rf_pred = loaded_model_rf.predict_proba(x_test)
# CI 95% calculation AUC
from mlxtend.evaluate import bootstrap
def auc_CI(df_test):
x_test, y_test = df_test[:, :-1], df_test[:,[-1]]
pred = loaded_model_rf.predict_proba(x_test)
fpr, tpr, thresholds = roc_curve(y_test, pred[:,1], pos_label=1)
return auc(x=fpr, y=tpr)
df_test = pd.merge(x_test, y_test, left_index=True, right_index=True)
original, std_err, ci_bounds = bootstrap(df_test.values , num_rounds=100, func = auc_CI, ci=0.95, seed=123)
print('Mean: %.3f, SE: +/- %.3f, CI95: [%.3f, %.3f]' % (original, std_err, ci_bounds[0], ci_bounds[1]))
# CI 95% calculation AP Death
def ap_CI_death(df_test):
x_test, y_test = df_test[:, :-1], df_test[:,[-1]]
pred = loaded_model_rf.predict_proba(x_test)
return average_precision_score(1-y_test, pred[:,0])
df_test = pd.merge(x_test, y_test, left_index=True, right_index=True)
original, std_err, ci_bounds = bootstrap(df_test.values , num_rounds=100, func = ap_CI_death, ci=0.95, seed=123)
print('Mean Death: %.3f, SE: +/- %.3f, CI95: [%.3f, %.3f]' % (original, std_err, ci_bounds[0], ci_bounds[1]))
# CI 95% calculation AP Cure
def ap_CI_cure(df_test):
x_test, y_test = df_test[:, :-1], df_test[:,[-1]]
pred = loaded_model_rf.predict_proba(x_test)
return average_precision_score(y_test, pred[:,1])
df_test = pd.merge(x_test, y_test, left_index=True, right_index=True)
original, std_err, ci_bounds = bootstrap(df_test.values , num_rounds=100, func = ap_CI_cure, ci=0.95, seed=123)
print('Mean Cure: %.3f, SE: +/- %.3f, CI95: [%.3f, %.3f]' % (original, std_err, ci_bounds[0], ci_bounds[1]))
```
```
import numpy as np
import matplotlib.pyplot as plt
import os
import cv2
from numpy import linalg as LA
def list_files(directory):
    if not os.path.exists(directory):
        return None
    return [x for x in os.listdir(directory) if os.path.isfile(os.path.join(directory, x))]
def LoadImageData(dPath, fileNames):
    if not os.path.exists(dPath) or not isinstance(fileNames, list):
        return None
Images = list()
Labels = list()
for f in fileNames:
filePath = dPath + '/' + f
Images.append(cv2.imread(filePath, 0))
if f[-5].isdigit():
Labels.append(np.float64(f[-5]))
else:
            raise ValueError('The file name does not end with a digit.')
return Images , Labels
def ReconstructData(Images, Labels):
    if not isinstance(Images, list) or not isinstance(Labels, list):
        return None
m,n = Images[0].shape
k = len(Images)
DataMat = np.zeros((k, m*n))
LabelVec = np.array(Labels).reshape((-1,1))
for i in range(k):
DataMat[i] = Images[i].reshape((1,-1))
return DataMat, LabelVec
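# Quick shape check of the flattening above: k images of m x n pixels become
# one (k, m*n) data matrix, one flattened image per row (self-contained toy
# example, independent of the helpers above):
import numpy as np
_imgs = [np.zeros((4, 3)), np.ones((4, 3))]
_mat = np.zeros((2, 12))
for _i in range(2):
    _mat[_i] = _imgs[_i].reshape((1, -1))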
def LoadMnistData():
dataPath = os.getcwd() + '/Data'
testPath = dataPath + '/test'
trainPath = dataPath + '/train'
TestFileNames = list_files(testPath)
TrainFileNames = list_files(trainPath)
TestList, TestLabelList = LoadImageData(testPath, TestFileNames)
TrainList, TrainLabelList = LoadImageData(trainPath, TrainFileNames)
TestDataMat, TestLabelVec = ReconstructData(TestList, TestLabelList)
TrainDataMat, TrainLabelVec = ReconstructData(TrainList, TrainLabelList)
return TrainDataMat, TrainLabelVec, TestDataMat, TestLabelVec
def BinarizeData(TrainDataMat, TrainLabelVec, TestDataMat, TestLabelVec, num1, num2):
indListTr = list()
for k in range(len(TrainLabelVec)):
if TrainLabelVec[k] == num1:
indListTr.append(k)
TrainLabelVec[k] = 0.
elif TrainLabelVec[k] == num2:
indListTr.append(k)
TrainLabelVec[k] = 1.
indListTe = list()
for k in range(len(TestLabelVec)):
if TestLabelVec[k] == num1:
indListTe.append(k)
TestLabelVec[k] = 0.
elif TestLabelVec[k] == num2:
indListTe.append(k)
TestLabelVec[k] = 1.
return TrainDataMat[indListTr, :], TrainLabelVec[indListTr, :], TestDataMat[indListTe, :], TestLabelVec[indListTe, :]
def SigmoidF(w,x):
return 1 / (1 + np.exp(-x.dot(w)))
def myLogisticRegression(TrainData, TrainLabel, eps=0.01, sensitivity=10 ** (-7)):
w0 = np.random.uniform(0, np.max(TrainData), (TrainData.shape[1],1))
dl0 = TrainData.transpose().dot(TrainLabel - SigmoidF(w0,TrainData))
w1 = w0 + eps * dl0
dl1 = TrainData.transpose().dot(TrainLabel - SigmoidF(w1,TrainData))
df = dl1 - dl0
NormVal = LA.norm(df, 2) ** 2
while NormVal > sensitivity:
w0 = w1
dl0 = dl1
w1 = w0 + eps * dl0
dl1 = TrainData.transpose().dot(TrainLabel - SigmoidF(w1, TrainData))
df = dl1 - dl0
NormVal = LA.norm(df, 2) ** 2
return w1
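# Self-contained sanity check of the gradient-ascent update used above: on
# toy 1-D data where y = 1 for x > 0, repeated steps along
# X^T (y - sigmoid(Xw)) should drive the weight positive.
import numpy as np
_x = np.array([[-2.0], [-1.0], [1.0], [2.0]])
_y = np.array([[0.0], [0.0], [1.0], [1.0]])
_w = np.zeros((1, 1))
for _ in range(200):
    _w = _w + 0.1 * _x.T.dot(_y - 1 / (1 + np.exp(-_x.dot(_w))))
# A positive weight means higher x -> higher probability of class 1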
def myLogisticClassification(TestData, w):
appLabel = SigmoidF(w,TestData)
appLabel[appLabel >= 0.5] = 1
appLabel[appLabel < 0.5] = 0
LabelVec = 1 - appLabel.reshape((-1,1))
return LabelVec
TrainDataMat, TrainLabelVec, TestDataMat, TestLabelVec = LoadMnistData()
num1, num2 = 0., 1.
bTrainDataMat, bTrainLabelVec, bTestDataMat, bTestLabelVec = BinarizeData(TrainDataMat, TrainLabelVec, TestDataMat, TestLabelVec, num1, num2)
W = myLogisticRegression(bTrainDataMat, bTrainLabelVec)
print(W.shape)
ApproximateLabel = myLogisticClassification(bTestDataMat, W)
acc = 100 * (1 - np.sum(np.abs(ApproximateLabel - bTestLabelVec)) / len(bTestLabelVec))
print(acc)
```
# Model Fitting - XGBoost
Fit the XGBoost model using the training dataset. XGBoost is faster and potentially more accurate, which allows me to use more features and test changes faster.
```
%load_ext autoreload
%autoreload 2
%matplotlib notebook
import numpy as np
from numpy import mean
from numpy import std
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.ensemble import GradientBoostingClassifier
from matplotlib.lines import Line2D
import joblib
from src.data.labels_util import load_labels, LabelCol, get_labels_file, load_clean_labels, get_workouts
from src.data.imu_util import (
get_sensor_file, ImuCol, load_imu_data, Sensor, fix_epoch, resample_uniformly, time_to_row_range, get_data_chunk,
normalize_with_bounds, data_to_features, list_imu_abspaths, clean_imu_data
)
from src.data.util import find_nearest, find_nearest_index, shift, low_pass_filter, add_col
from src.data.workout import Activity, Workout
from src.data.data import DataState
from src.data.build_features import main as build_features
from src.data.features_util import list_test_files
from src.model.train import evaluate_model_accuracy, train_model, create_xgboost
from src.model.predict import evaluate_on_test_data, evaluate_on_test_data_plot
from src.visualization.visualize import multiplot
from src.config import (
TRAIN_BOOT_DIR, TRAIN_POLE_DIR, TRAIN_FEATURES_FILENAME, TRAIN_LABELS_FILENAME, BOOT_MODEL_FILE,
POLE_MODEL_FILE
)
# import data types
from pandas import DataFrame
from numpy import ndarray
from typing import List, Tuple, Optional
```
### Evaluate quality of model and training data
Use k-fold cross-validation to evaluate the performance of the model.
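The real helper `evaluate_model_accuracy` lives in `src.model.train`; since the cell below is slow and commented out, here is a minimal, self-contained sketch of the same repeated stratified k-fold idea on synthetic data (assuming scikit-learn is installed; `make_classification` stands in for the real feature and label arrays):

```python
from numpy import mean, std
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for the real feature/label arrays
X, y = make_classification(n_samples=200, n_features=8, random_state=1)
# 5-fold stratified CV, repeated twice -> 10 accuracy scores
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=1)
scores = cross_val_score(GradientBoostingClassifier(random_state=1), X, y,
                         scoring='accuracy', cv=cv)
print('Accuracy: %.3f (+/- %.3f)' % (mean(scores), std(scores)))
```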
```
# UNCOMMENT to use. It's very slow.
# print('Boot model:')
# features: ndarray = np.load(TRAIN_BOOT_DIR / TRAIN_FEATURES_FILENAME)
# labels: ndarray = np.load(TRAIN_BOOT_DIR / TRAIN_LABELS_FILENAME)
# evaluate_model_accuracy(features, labels, create_xgboost())
# print('Pole model:')
# features: ndarray = np.load(TRAIN_POLE_DIR / TRAIN_FEATURES_FILENAME)
# labels: ndarray = np.load(TRAIN_POLE_DIR / TRAIN_LABELS_FILENAME)
# evaluate_model_accuracy(features, labels)
```
### Train model
```
print('Train boot model:')
# train_model(Activity.Boot, create_xgboost())
print('Train pole model:')
# train_model(Activity.Pole, create_xgboost())
```
### Test model
**NOTE**: Move the trained model (the pickle files) to the ```models``` directory and edit the paths in ```config.py``` to point to the latest model.
```
print('Test boot model:')
evaluate_on_test_data_plot(Activity.Boot, False, test_idx=0)
print('Test pole model:')
evaluate_on_test_data_plot(Activity.Pole, False, test_idx=0)
```
This program will prepare a basic automation routine for functionalizing CO molecules on a clean Cu(111) surface. It is provided to demonstrate the automation capabilities for CO-AFM using a CreaTec STM/AFM system.
Programs from other publications may provide fully autonomous tip construction and preparation for STM tips, but here we wish to focus on the basics of CO functionalization:
- loading the model,
- utilizing CV tools to help segment your images,
- functionalizing the tip,
- using the model to determine the quality of CO functionalization.
This is a tool designed for SPM practitioners who have some understanding of Python programming, machine learning, and hardware automation. Our hope is that hardware manufacturers will integrate some of these techniques into their own programs to help experimental researchers speed up their processes. The motivation for this program is that automated STM tip preparation has been shown in numerous examples, but automation of functionalization is relatively unexplored.
Keep in mind that automated CO functionalization does not mean taking into account every corner case imaginable. Surface variations, impurities, and cleaning targets will need to be integrated into future models and programs.
```
### Initialization of libraries ###
import numpy as np
import win32com.client
import tensorflow as tf
from tensorflow import keras
import cv2
import time
### Load CreaTec scan read parameters ###
# Channel Selection #
chantopo = 1
chancurrent = 2
# Unit Selection #
unit_default = 0
unit_volt = 1
unit_dac = 2
unit_ampere = 3
unit_nm = 4
unit_Hz = 5
### Load CreaTec scan parameters ###
# This assumes a 512x512 pixel size with 400x400 Å size and a scanning speed equivalent to about 5-6 minutes per image. Why?
# This is approximately the limit of reasonable CO imaging and gives you a large area to test with. Too large of an image
# may yield odd CO images. The program will attempt to scale smaller images as well, but your results may be poor.
def SetScanParameters():
# Set X x Y image size (Pixels) #
NumX = 512
NumY = 512
stm.setparam('Num.X', NumX)
stm.setparam('Num.Y', NumY)
# Set the X x Y image size (Å)
DeltaX = 119
DeltaY = 119
stm.setparam('Delta X [DAC]', DeltaX)
stm.setparam('Delta Y [DAC]', DeltaY)
# Set the speed #
stm.setparam('DX/DDeltaX', 33)
# Set the Topography and Current channels #
stm.setparam('Channelselectval', 3)
# Set to Forward + Backward
stm.setparam('ScanXMode', 0)
# Set to Constant Current mode #
stm.setparam('CHMode', 0)
# Set Bias Voltage (mV) #
stm.setparam('BiasVolt.[mV]', 100)
# Set Setpoint Current (pA) #
stm.setparam('FBLogIset', 50)
### Load point spectra parameters ###
def CO_pickup():
    # Set the voltages for the five pulse points #
    # NOTE: indexed Vpoint0..Vpoint4 parameters are assumed here; the
    # original code set 'Vpoint0.V' five times, so only the last value
    # would take effect.
    stm.setparam('Vpoint0.V', 0)
    stm.setparam('Vpoint1.V', 2600)
    stm.setparam('Vpoint2.V', 2600)
    stm.setparam('Vpoint3.V', 2600)
    stm.setparam('Vpoint4.V', 0)
    # Set the times #
    stm.setparam('Vpoint0.t', 0)
    stm.setparam('Vpoint1.t', 20)
    stm.setparam('Vpoint2.t', 500)
    stm.setparam('Vpoint3.t', 980)
    stm.setparam('Vpoint4.t', 1000)
# Set the parameters #
stm.setparam('Vertchannelselectval',4097)
stm.setparam('Vertmangain', 6)
stm.setparam('Vertmandelay', 100) # Need to calculate this value (clock cycle)
stm.setparam('VertSpecBack', 0)
stm.setparam('VertSpecAvrgnr', 1)
stm.setparam('VertAvrgdelay', 100)
stm.setparam('VertRepeatCounter', 1)
stm.setparam('VertLineCount', 2)
stm.setparam('VertFBMode', 4)
stm.setparam('VertFBLogiset', 0.400)
stm.setparam('VertLatddx', 15.223)
stm.setparam('VertLatdelay', 166)
def CO_dropoff():
    # Set the voltages for the five pulse points #
    # NOTE: indexed Vpoint0..Vpoint4 parameters are assumed here; the
    # original code set 'Vpoint0.V' five times, so only the last value
    # would take effect.
    stm.setparam('Vpoint0.V', 0)
    stm.setparam('Vpoint1.V', 3300)
    stm.setparam('Vpoint2.V', 3300)
    stm.setparam('Vpoint3.V', 3300)
    stm.setparam('Vpoint4.V', 0)
    # Set the times #
    stm.setparam('Vpoint0.t', 0)
    stm.setparam('Vpoint1.t', 20)
    stm.setparam('Vpoint2.t', 500)
    stm.setparam('Vpoint3.t', 980)
    stm.setparam('Vpoint4.t', 1000)
# Set the parameters #
stm.setparam('Vertchannelselectval', 4097)
stm.setparam('Vertmangain', 6)
stm.setparam('Vertmandelay', 100) # Need to calculate this value (clock cycle)
stm.setparam('VertSpecBack', 0)
stm.setparam('VertSpecAvrgnr', 1)
stm.setparam('VertAvrgdelay', 100)
stm.setparam('VertRepeatCounter', 1)
stm.setparam('VertLineCount', 2)
stm.setparam('VertFBMode', 4)
stm.setparam('VertFBLogiset', 0.400)
stm.setparam('VertLatddx', 15.223)
stm.setparam('VertLatdelay', 166)
# Start a Scan
def RunScan():
stm.scanstart()
while stm.scanstatus > 0:
time.sleep(0.1)
### Read the scan image ###
def ReadSTMImage(channel, unit):
image_STM = stm.scandata(channel,unit)
image = np.asarray(image_STM)
return image
### Normalize the image for import into OpenCV ###
def NormalizeSTMImage(image):
# Normalize the image which is needed for later usage in OpenCV tools.
img_n = cv2.normalize(src=image, dst=None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
return img_n
### Size filter to remove points with areas significantly larger features ###
def FilterCO(kp):
# Gather the CO sizes
s = [int(kp[i].size) for i in range(len(kp))]
# Extract COs based on the feature size. This is needed for SURF, where the feature detection and CO points are somewhat
# correlated. A better way to do this is HoughCircle in OpenCV, but this has a habit of detecting COs at the edge.
kp_XY = [kp[i].pt for i in range(len(kp)) if kp[i].size <= np.median(s)]
# Extract the CO sizes. This value isn't really used, but was in testing.
kp_size = [kp[i].size for i in range(len(kp)) if kp[i].size <= np.median(s)]
return s, kp_XY, kp_size
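# Self-contained toy check of the median-size filter above: detections
# larger than the median size are treated as non-CO features and dropped.
import numpy as np
_sizes = np.array([4, 5, 5, 6, 30])  # one oversized outlier
_keep = _sizes <= np.median(_sizes)
# Only the first three detections survive the filter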
# Generate mask for COs for import into empty area function
def GenerateCOMask(image, kp_XY):
# Extract dimensions from image
height, width = image.shape[:2]
    # Generate a blank mask of all zeros (rows = height, columns = width)
    binaryMap = np.zeros((height, width))
# Extract the points from the filtered CO list and convert to X,Y lists
CO_list = [(int(element[0]), int(element[1])) for element in kp_XY]
    # Generate a circle mask around each point, sized from the median feature
    # size (s comes from FilterCO in the notebook's global scope)
    for i in range(len(CO_list)):
        cv2.circle(binaryMap, (CO_list[i][0], CO_list[i][1]), int(np.median(s) / 2), (255, 255, 255), thickness=-1)
# Output a binary map
th, dst = cv2.threshold(binaryMap, 0, 255, cv2.THRESH_BINARY)
return th, dst
### Crop and Store COs ###
def CropCO(norm_image, kp_XY):
    # Collect the cropped images in a list (a numpy array has no append method)
    CO_crop = []
    # Crop each CO with input size to feed into the model
    for i in range(len(kp_XY)):
        # Set a zoom factor to gather the entire CO (center + surrounding halo)
        zoom = 1.5
        # Convert the coordinates for easier reference (x, y)
        x, y = int(kp_XY[i][0]), int(kp_XY[i][1])
        # Radius from the feature size scaled by the zoom factor
        # (kp_size comes from FilterCO in the notebook's global scope)
        radius = int(kp_size[i] / zoom)
        # Crop a square around the target coordinates (rows are y, columns are x)
        CO_crop.append(norm_image[y - radius:y + radius, x - radius:x + radius])
    return CO_crop
def FindEmptySquareArea(mat, ZERO=0):
#Find the largest square of ZERO's in the matrix `mat`.
#Source: https://stackoverflow.com/a/1726667
# Extract array shape
nrows, ncols = mat.shape
    # Check for a null array / empty matrix or rows
    if not (nrows and ncols):
        return 0, -1, -1
# Initiate null array based on row and column sizes
counts = np.zeros((nrows, ncols))
# For each row
for i in reversed(range(nrows)):
# The matrix must be rectangular (actually square)
assert len(mat[i]) == ncols
# For each element in the row
for j in reversed(range(ncols)):
# If the element is zero, check around it
if mat[i][j] == ZERO:
counts[i][j] = (1 + min(
counts[i][j+1], # east
counts[i+1][j], # south
counts[i+1][j+1] # south-east
)) if i < (nrows - 1) and j < (ncols - 1) else 1 # edges
mx = -1
lx = -1
ly = -1
for row in range(len(counts)):
for col in range(len(counts[row])):
if counts[row, col] > mx:
mx = counts[row, col]
ly = row
lx = col
return mx, ly, lx
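# Standalone toy check of the dynamic-programming idea above: counts[i][j]
# holds the side of the largest all-zero square with top-left corner (i, j),
# built from the east, south and south-east neighbours.
import numpy as np
_mat = np.array([[0, 0, 1],
                 [0, 0, 0],
                 [1, 0, 0]])
_n = _mat.shape[0]
_c = np.zeros_like(_mat)
for _i in reversed(range(_n)):
    for _j in reversed(range(_n)):
        if _mat[_i, _j] == 0:
            _c[_i, _j] = 1 if _i == _n - 1 or _j == _n - 1 else 1 + min(
                _c[_i, _j + 1], _c[_i + 1, _j], _c[_i + 1, _j + 1])
# The largest empty square in this mask has side 2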
def FindEmptySquareAreaPosition(img):
# Returns the Y,X center of the largest empty square in an image and size.
# Prints a nice reference image for sanity check.
mx, ly, lx = FindEmptySquareArea(img, 0)
# Calculate the center
ly = ly+mx/2
lx = lx+mx/2
# Visual output of the square
# Normalize the image
img = (img-img.min())/(img.max()-img.min())
img *= 255
ly = int(ly)
lx = int(lx)
# Draw a box around the edge of the square
img[int(ly),int(lx)]=255
img[int(ly-1),int(lx-1)]=255
img[int(ly-1),int(lx+1)]=255
img[int(ly+1),int(lx-1)]=255
img[int(ly+1),int(lx+1)]=255
for i in range(int(mx)):
img[int(ly+i-mx/2),int(lx-mx/2)] = 255
img[int(ly+i-mx/2),int(lx+mx-mx/2)] = 255
img[int(ly-mx/2),int(lx+i-mx/2)] = 255
img[int(ly+mx-mx/2),int(lx+i-mx/2)] = 255
# Print the image (quick reference)
#plt.imshow(img, cmap = "gray");
return ly, lx, mx
### Initialize CreaTec STMAFM program ###
stm = win32com.client.Dispatch("pstmafm.stmafmrem")
### Import Keras model ###
reconstructed_model = keras.models.load_model("/pretrained_weights/model.h5")
### Simplified program to run autonomous CO functionalization on a single scan area ###
score = 0
target = 0.9
CO_count = 0
pulse = 0
while score < target:
# Load STM Scan Parameters #
SetScanParameters()
# Start Scan #
RunScan()
# Load STM Image #
stm_image = ReadSTMImage(chantopo, unit_volt)
# Convert to Image Format Compatible with SURF
norm_image = NormalizeSTMImage(stm_image)
# Load SURF parameters
surf = cv2.xfeatures2d.SURF_create(3000)
# Find keypoints and descriptors
kp, des = surf.detectAndCompute(norm_image,None)
# If no COs, alert operator
if len(kp) == 0:
print("No objects detected. Please check scan area or image dimensions.")
break
# Filter out non-CO positions
s, kp_XY, kp_size = FilterCO(kp)
    # Plot the SURF-identified features (drawKeypoints needs KeyPoint objects)
    surf_image = cv2.drawKeypoints(norm_image, kp, None, (255, 0, 0), 4)
    #plt.imshow(surf_image);
    # Extract the points from the filtered CO list and convert to an X,Y array
    CO_list = np.array([(int(element[0]), int(element[1])) for element in kp_XY])
    ### Sort the centerpoints from top to bottom, left to right ###
    CO_list_sorted = CO_list[np.lexsort((CO_list[:, 0], CO_list[:, 1]))]
# Generate a CO mask to identify empty areas
_, CO_mask = GenerateCOMask(norm_image, CO_list_sorted)
    # Crop and store CO images
    # Cropped images need to be scaled to 32x32 for analysis by the model
    CO_crop = CropCO(norm_image, kp_XY)
    resized_CO = np.array([cv2.resize(CO, (32, 32)) for CO in CO_crop])
    # Analyze with the model (add_norm_CO is the notebook's normalization helper)
    add_norm_CO(resized_CO)
    outputs = reconstructed_model.predict_on_batch(resized_CO)
probs = np.squeeze(np.array(outputs))
score = np.median(probs)
if CO_count >= 10:
# Load dropoff parameters
CO_dropoff()
# Target tip to an empty area
ly, lx, mx = FindEmptySquareAreaPosition(CO_mask)
# Run vertical manipulation
stm.btn_vertspec(lx, ly)
# Brief sleep due to STMAFM program quirks
time.sleep(0.05)
        # Track the number of tip pulses
        pulse += 1
# Reset counter back to 0
CO_count = 0
else:
        # Score comparison
        # Typically a score below 0.005 means (from initial tests) that no CO is attached.
        # This is not always true; further refinement is needed to detect blank or extremely tilted COs.
if score < 0.005:
# Load pickup parameters
CO_pickup()
# Run vertical manipulation at first point in list
stm.btn_vertspec(CO_list_sorted[0][0], CO_list_sorted[0][1])
# Brief sleep due to STMAFM program quirks
time.sleep(0.05)
# Track the number of attempted CO pickups
        CO_count += 1
# For scores between 0.005 and the target, CO or O-terminated is already attached. Drop-off is needed.
elif score >= 0.005 and score < target:
# Load dropoff parameters
CO_dropoff()
# Target tip to an empty area
ly, lx, mx = FindEmptySquareAreaPosition(CO_mask)
# Run vertical manipulation
stm.btn_vertspec(lx, ly)
# Brief sleep due to STMAFM program quirks
time.sleep(0.05)
# Track the number of tip pulses
        pulse += 1
# Final condition is greater than or equal to target, which means that the model identifies the image as CO-terminated
# within the target parameters.
else:
print("CO termination completed")
print("Target: ", target)
print("Score:", score)
print("# of Pulses: ", pulse)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import wikipedia
import xml.etree.ElementTree as ET
import re
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score, KFold
import xgboost as xgb
from sklearn.metrics import r2_score
%matplotlib inline
df = pd.read_csv('sysarmy2019-1.csv')
sal_col = 'Salario mensual BRUTO (en tu moneda local)'
df = df[(df[sal_col] >= 10_000) & (df[sal_col] < 1_000_000)]
df = df[df['Años en la empresa actual'] < 40]
df = df[df['Me identifico'] != 'Otros']
df.head()
h, m = (
df[df['Me identifico'] == 'Hombre'][sal_col].median(),
df[df['Me identifico'] == 'Mujer'][sal_col].median(),
)
(h-m)/h
df.columns
best = {'colsample_bytree': 0.7000000000000001, 'gamma': 0.8500000000000001, 'learning_rate': 0.025, 'max_depth': 16, 'min_child_weight': 15.0, 'n_estimators': 175, 'subsample': 0.8099576733552297}
regions_map = {
'Ciudad Autónoma de Buenos Aires': 'AMBA',
'GBA': 'AMBA',
'Catamarca': 'NOA',
'Chaco': 'NEA',
'Chubut': 'Patagonia',
'Corrientes': 'NEA',
'Entre Ríos': 'NEA',
'Formosa': 'NEA',
'Jujuy': 'NOA',
'La Pampa': 'Pampa',
'La Rioja': 'NOA',
'Mendoza': 'Cuyo',
'Misiones': 'NEA',
'Neuquén': 'Patagonia',
'Río Negro': 'Patagonia',
'Salta': 'NOA',
'San Juan': 'Cuyo',
'San Luis': 'Cuyo',
'Santa Cruz': 'Patagonia',
'Santa Fe': 'Pampa',
'Santiago del Estero': 'NOA',
'Tucumán': 'NOA',
'Córdoba': 'Pampa',
'Provincia de Buenos Aires': 'Pampa',
'Tierra del Fuego': 'Patagonia',
}
class Model:
def __init__(self, **params):
self.regressor_ = xgb.XGBRegressor(**params)
def get_params(self, deep=True):
return self.regressor_.get_params(deep=deep)
def set_params(self, **params):
return self.regressor_.set_params(**params)
def clean_words(self, field, value):
value = value.replace('Microsoft Azure (Tables, CosmosDB, SQL, etc)', 'Microsoft Azure(TablesCosmosDBSQLetc)')
value = value.replace('Snacks, golosinas, bebidas', 'snacks')
value = value.replace('Descuentos varios (Clarín 365, Club La Nación, etc)', 'descuentos varios')
value = value.replace('Sí, de forma particular', 'de forma particular')
value = value.replace('Sí, los pagó un empleador', 'los pagó un empleador')
value = value.replace('Sí, activa', 'activa')
value = value.replace('Sí, pasiva', 'pasiva')
return [self.clean_word(field, v) for v in value.split(',') if self.clean_word(field, v)]
def clean_word(self, field, word):
val = str(word).lower().strip().replace(".", "")
if val in ('ninguno', 'ninguna', 'no', '0', 'etc)', 'nan'):
return ''
if field == 'Lenguajes de programación' and val == 'Microsoft Azure(TablesCosmosDBSQLetc)':
return 'Microsoft Azure (Tables, CosmosDB, SQL, etc)'
if field == '¿A qué eventos de tecnología asististe en el último año?' and val in ('pycon', 'pyconar'):
return 'pyconar'
if field == '¿A qué eventos de tecnología asististe en el último año?' and val in ('nodeconf', 'nodeconfar'):
return 'nodeconfar'
if field == '¿A qué eventos de tecnología asististe en el último año?' and val in ('meetup', 'meetups'):
return 'meetups'
if field == '¿A qué eventos de tecnología asististe en el último año?':
return val.replace(' ', '')
if field == 'Beneficios extra' and val == 'snacks':
return 'snacks, golosinas, bebidas'
if field == 'Beneficios extra' and val == 'descuentos varios':
return 'descuentos varios (clarín 365, club la nación, etc)'
return val
def row_to_words(self, row):
return [
f'{key}={row.fillna("")[key]}'
for key
in (
'Me identifico',
'Nivel de estudios alcanzado',
'Universidad',
'Estado',
'Carrera',
'¿Contribuís a proyectos open source?',
'¿Programás como hobbie?',
'Trabajo de',
'¿Qué SO usás en tu laptop/PC para trabajar?',
'¿Y en tu celular?',
'Tipo de contrato',
'Orientación sexual',
'Cantidad de empleados',
'Actividad principal',
)
] + [
f'{k}={v}' for k in (
'¿Tenés guardias?',
'Realizaste cursos de especialización',
'¿A qué eventos de tecnología asististe en el último año?',
'Beneficios extra',
'Plataformas',
'Lenguajes de programación',
'Frameworks, herramientas y librerías',
'Bases de datos',
'QA / Testing',
'IDEs',
'Lenguajes de programación'
) for v in self.clean_words(k, row.fillna('')[k])
] + [
f'region={regions_map[row["Dónde estás trabajando"]]}'
]
def encode_row(self, row):
ws = self.row_to_words(row)
return pd.Series([w in ws for w in self.valid_words_] + [
row['¿Gente a cargo?'],
row['Años de experiencia'],
row['Tengo'],
])
def fit(self, X, y, **params):
counts = {}
for i in range(X.shape[0]):
for word in self.row_to_words(X.iloc[i]):
counts[word] = counts.get(word, 0) + 1
self.valid_words_ = [word for word, c in counts.items() if c > 0.01*X.shape[0]]
self.regressor_.fit(X.apply(self.encode_row, axis=1).astype(float), y, **params)
return self
def predict(self, X):
return self.regressor_.predict(X.apply(self.encode_row, axis=1).astype(float))
def score(self, X, y):
return r2_score(y, self.predict(X))
kf = KFold(n_splits=5, shuffle=True, random_state=99)
kf_models = []
for train_index, test_index in kf.split(df):
model = Model(**best).fit(df.iloc[train_index], df.iloc[train_index][sal_col].astype(float))
df.loc[df.index[test_index], 'e(salary)'] = model.predict(df.iloc[test_index])
df['Me identifico'] = df['Me identifico'].apply(lambda g: {'Hombre': 'Mujer', 'Mujer': 'Hombre'}[g])
df.loc[df.index[test_index], 'e_gr(salary)'] = model.predict(df.iloc[test_index])
df['Me identifico'] = df['Me identifico'].apply(lambda g: {'Hombre': 'Mujer', 'Mujer': 'Hombre'}[g])
kf_models.append(model)
df['e_h(salary)'] = df.apply(lambda row: row['e(salary)'] if row['Me identifico'] == 'Hombre' else row['e_gr(salary)'], axis=1)
df['e_m(salary)'] = df.apply(lambda row: row['e(salary)'] if row['Me identifico'] == 'Mujer' else row['e_gr(salary)'], axis=1)
df['e_g_diff(salary)'] = (df['e_h(salary)'] - df['e_m(salary)']) / df['e_h(salary)']
r2_score(df[sal_col], df['e(salary)'])
df['e_g_diff(salary)'].median()
```
```
try:
from openmdao.utils.notebook_utils import notebook_mode
except ImportError:
!python -m pip install openmdao[notebooks]
```
# ExecComp
`ExecComp` is a component that provides a shortcut for building an ExplicitComponent that
represents a set of simple mathematical relationships between inputs and outputs. The ExecComp
automatically takes care of all of the component API methods, so you just need to instantiate
it with an equation or a list of equations.
## ExecComp Options
```
import openmdao.api as om
om.show_options_table("openmdao.components.exec_comp.ExecComp")
```
## ExecComp Constructor
The call signature for the `ExecComp` constructor is:
```{eval-rst}
.. automethod:: openmdao.components.exec_comp.ExecComp.__init__
:noindex:
```
The values of the `kwargs` can be `dicts` which define the initial value for the variables along with
other metadata. For example,
```
ExecComp('xdot=x/t', x={'units': 'ft'}, t={'units': 's'}, xdot={'units': 'ft/s'})
```
Here is a list of the possible metadata that can be assigned to a variable in this way. The **Applies To** column indicates
whether the metadata is appropriate for input variables, output variables, or both.
```{eval-rst}
================ ====================================================== ============================================================= ============== ========
Name Description Valid Types Applies To Default
================ ====================================================== ============================================================= ============== ========
value Initial value in user-defined units float, list, tuple, ndarray input & output 1
shape Variable shape, only needed if not an array int, tuple, list, None input & output None
shape_by_conn Determine variable shape based on its connection bool input & output False
copy_shape Determine variable shape based on named variable str input & output None
units Units of variable str, None input & output None
desc Description of variable str input & output ""
res_units Units of residuals str, None output None
ref Value of variable when scaled value is 1 float, ndarray output 1
ref0 Value of variable when scaled value is 0 float, ndarray output 0
res_ref Value of residual when scaled value is 1 float, ndarray output 1
lower Lower bound of variable float, list, tuple, ndarray, Iterable, None output None
upper            Upper bound of variable                                float, list, tuple, ndarray, Iterable, None                   output         None
src_indices Global indices of the variable int, list of ints, tuple of ints, int ndarray, Iterable, None input None
flat_src_indices If True, src_indices are indices into flattened source bool input None
tags Used to tag variables for later filtering str, list of strs input & output None
================ ====================================================== ============================================================= ============== ========
```
These metadata are passed to the `Component` methods `add_input` and `add_output`.
For more information about these metadata, see the documentation for the arguments to these Component methods:
- [add_input](../../../_srcdocs/packages/core/component.html#openmdao.core.component.Component.add_input)
- [add_output](../../../_srcdocs/packages/core/component.html#openmdao.core.component.Component.add_output)
## Registering User Functions
To get your own functions added to the internal namespace of ExecComp so you can call them
from within an ExecComp expression, you can use the `ExecComp.register` function.
```{eval-rst}
.. automethod:: openmdao.components.exec_comp.ExecComp.register
:noindex:
```
Note that you're required, when registering a new function, to indicate whether that function
is complex safe or not.
## ExecComp Example: Simple
For example, here is a simple component that takes the input and adds one to it.
```
import openmdao.api as om
prob = om.Problem()
model = prob.model
model.add_subsystem('comp', om.ExecComp('y=x+1.'))
model.set_input_defaults('comp.x', 2.0)
prob.setup()
prob.run_model()
print(prob.get_val('comp.y'))
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob.get_val('comp.y'), 3.0, 0.00001)
```
## ExecComp Example: Multiple Outputs
You can also create an ExecComp with multiple outputs by placing the expressions in a list.
```
prob = om.Problem()
model = prob.model
model.add_subsystem('comp', om.ExecComp(['y1=x+1.', 'y2=x-1.']), promotes=['x'])
prob.setup()
prob.set_val('x', 2.0)
prob.run_model()
print(prob.get_val('comp.y1'))
print(prob.get_val('comp.y2'))
assert_near_equal(prob.get_val('comp.y1'), 3.0, 0.00001)
assert_near_equal(prob.get_val('comp.y2'), 1.0, 0.00001)
```
## ExecComp Example: Arrays
You can declare an ExecComp with arrays for inputs and outputs, but when you do, you must also
pass in a correctly-sized array as an argument to the ExecComp call, or set the 'shape' metadata
for that variable as described earlier. If specifying the value directly, it can be the initial value
in the case of unconnected inputs, or just an empty array with the correct size.
```
import numpy as np
prob = om.Problem()
model = prob.model
model.add_subsystem('comp', om.ExecComp('y=x[1]',
x=np.array([1., 2., 3.]),
y=0.0))
prob.setup()
prob.run_model()
print(prob.get_val('comp.y'))
assert_near_equal(prob.get_val('comp.y'), 2.0, 0.00001)
```
## ExecComp Example: Math Functions
Functions from the math library are available for use in the expression strings.
```
import numpy as np
prob = om.Problem()
model = prob.model
model.add_subsystem('comp', om.ExecComp('z = sin(x)**2 + cos(y)**2'))
prob.setup()
prob.set_val('comp.x', np.pi/2.0)
prob.set_val('comp.y', np.pi/2.0)
prob.run_model()
print(prob.get_val('comp.z'))
assert_near_equal(prob.get_val('comp.z'), 1.0, 0.00001)
```
## ExecComp Example: Variable Properties
You can also declare properties like 'units', 'upper', or 'lower' on the inputs and outputs. In this
example we declare all our inputs to be inches to trigger conversion from a variable expressed in feet
in one connection source.
```
prob = om.Problem()
model = prob.model
model.add_subsystem('comp', om.ExecComp('z=x+y',
x={'val': 0.0, 'units': 'inch'},
y={'val': 0.0, 'units': 'inch'},
z={'val': 0.0, 'units': 'inch'}))
prob.setup()
prob.set_val('comp.x', 12.0, units='inch')
prob.set_val('comp.y', 1.0, units='ft')
prob.run_model()
print(prob.get_val('comp.z'))
assert_near_equal(prob.get_val('comp.z'), 24.0, 0.00001)
```
## ExecComp Example: Diagonal Partials
If all of your ExecComp's array inputs and array outputs are the same size and happen to have
diagonal partials, you can make computation of derivatives for your ExecComp faster by specifying a
`has_diag_partials=True` argument
to `__init__` or via the component options. This will cause the ExecComp to solve for its partials
by complex stepping all entries of an array input at once instead of looping over each entry individually.
```
import numpy as np
p = om.Problem()
model = p.model
model.add_subsystem('comp', om.ExecComp('y=3.0*x + 2.5',
has_diag_partials=True,
x=np.ones(5), y=np.ones(5)))
p.setup()
p.set_val('comp.x', np.ones(5))
p.run_model()
J = p.compute_totals(of=['comp.y'], wrt=['comp.x'], return_format='array')
print(J)
from numpy.testing import assert_almost_equal
assert_almost_equal(J, np.eye(5)*3., decimal=6)
```
## ExecComp Example: Options
Other options that can apply to all the variables in the component are variable shape and units.
These can also be set as a keyword argument in the constructor or via the component options. In the
following example the variables all share the same shape, which is specified in the constructor, and
common units that are specified by setting the option.
```
model = om.Group()
xcomp = model.add_subsystem('comp', om.ExecComp('y=2*x', shape=(2,)))
xcomp.options['units'] = 'm'
prob = om.Problem(model)
prob.setup()
prob.set_val('comp.x', [100., 200.], units='cm')
prob.run_model()
print(prob.get_val('comp.y'))
assert_near_equal(prob.get_val('comp.y'), [2., 4.], 0.00001)
```
## ExecComp Example: User function registration
If the function is complex safe, then you don't need to do anything differently than you
would for any other ExecComp.
```
try:
om.ExecComp.register("myfunc", lambda x: x * x, complex_safe=True)
except NameError:
pass
p = om.Problem()
comp = p.model.add_subsystem("comp", om.ExecComp("y = 2 * myfunc(x)"))
p.setup()
p.run_model()
J = p.compute_totals(of=['comp.y'], wrt=['comp.x'])
print(J['comp.y', 'comp.x'][0][0])
assert_near_equal(J['comp.y', 'comp.x'][0][0], 4., 1e-10)
```
## ExecComp Example: Complex unsafe user function registration
If the function isn't complex safe, then derivatives involving that function
will have to be computed using finite difference instead of complex step. The way to specify
that `fd` should be used for a given derivative is to call `declare_partials`.
```
try:
om.ExecComp.register("unsafe", lambda x: x * x, complex_safe=False)
except NameError:
pass
p = om.Problem()
comp = p.model.add_subsystem("comp", om.ExecComp("y = 2 * unsafe(x)"))
# because our function is complex unsafe, we must declare that the partials
# with respect to 'x' use 'fd' instead of 'cs'
comp.declare_partials('*', 'x', method='fd')
p.setup()
p.run_model()
J = p.compute_totals(of=['comp.y'], wrt=['comp.x'])
print(J['comp.y', 'comp.x'][0][0])
assert_near_equal(J['comp.y', 'comp.x'][0][0], 4., 1e-5)
```
## ExecComp Example: Adding Expressions
You can add additional expressions to an `ExecComp` with the "add_expr" method.
```
import numpy as np
class ConfigGroup(om.Group):
def setup(self):
excomp = om.ExecComp('y=x',
x={'val' : 3.0, 'units' : 'mm'},
y={'shape' : (1, ), 'units' : 'cm'})
self.add_subsystem('excomp', excomp, promotes=['*'])
def configure(self):
self.excomp.add_expr('z = 2.9*x',
z={'shape' : (1, ), 'units' : 's'})
p = om.Problem()
p.model.add_subsystem('sub', ConfigGroup(), promotes=['*'])
p.setup()
p.run_model()
print(p.get_val('z'))
print(p.get_val('y'))
assert_near_equal(p.get_val('z'), 8.7, 1e-8)
assert_near_equal(p.get_val('y'), 3.0, 1e-8)
```
### Generating `publications.json` partitions
This is a template notebook for generating metadata on publications - most importantly, the linkage between the publication and dataset (datasets are enumerated in `datasets.json`).
Process goes as follows:
1. Import a CSV with publication-dataset linkages. Your csv should have, at a minimum, the following fields (spelled as below):
   * `dataset` to hold the dataset ids, and
   * `title` for the publication title.
   Update the csv with these field names to ensure this code will run. We read in, dedupe, and format the titles.
2. Match to `datasets.json` -- alert if a given dataset doesn't exist yet.
3. Generate a list of dicts with publication metadata.
4. Write to a `publications.json` file.
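The steps above can be sketched end-to-end on toy data. Everything here is hypothetical: `rows` stands in for the linkage csv and `known_ids` for the ids enumerated in `datasets.json`.

```python
import json

# Hypothetical linkage rows (dataset id(s), publication title) and known dataset ids.
rows = [
    {"dataset": "dataset-123", "title": "A study of X"},
    {"dataset": "dataset-123, dataset-456", "title": "Another study"},
]
known_ids = {"dataset-123", "dataset-456"}

publications = []
for row in rows:
    # A row may link one publication to several comma-separated dataset ids.
    ds_ids = [d.strip() for d in row["dataset"].split(",") if d.strip()]
    missing = [d for d in ds_ids if d not in known_ids]
    if missing:
        print(f"add these ids to datasets.json first: {missing}")
    publications.append({"title": row["title"], "datasets": ds_ids})

print(json.dumps(publications, indent=2))
```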
#### Import CSV containing publication-dataset linkages
Set `linkages_path` to the location of the csv containing dataset-publication linkages and read in the csv.
```
import pandas as pd
import datetime
import os
file_name = 'foodaps_usda_linkages.csv'
rcm_subfolder = '20190619_usda_foodaps'
linkages_path = os.path.join('/Users/andrewnorris/RichContextMetadata/metadata',rcm_subfolder,file_name)
# linkages_path = os.path.join(os.getcwd(),'SNAP_DATA_DIMENSIONS_SEARCH_DEMO.csv')
linkages_csv = pd.read_csv(linkages_path)
linkages_path
```
Format/clean linkage data - apply `scrub_unicode` to `title` field.
```
import unicodedata
def scrub_unicode (text):
"""
try to handle the unicode edge cases encountered in source text,
as best as possible
"""
x = " ".join(map(lambda s: s.strip(), text.split("\n"))).strip()
x = x.replace('“', '"').replace('”', '"')
x = x.replace("‘", "'").replace("’", "'").replace("`", "'")
x = x.replace("`` ", '"').replace("''", '"')
x = x.replace('…', '...').replace("\\u2026", "...")
x = x.replace("\\u00ae", "").replace("\\u2122", "")
x = x.replace("\\u00a0", " ").replace("\\u2022", "*").replace("\\u00b7", "*")
x = x.replace("\\u2018", "'").replace("\\u2019", "'").replace("\\u201a", "'")
x = x.replace("\\u201c", '"').replace("\\u201d", '"')
x = x.replace("\\u20ac", "€")
x = x.replace("\\u2212", " - ") # minus sign
x = x.replace("\\u00e9", "é")
x = x.replace("\\u017c", "ż").replace("\\u015b", "ś").replace("\\u0142", "ł")
x = x.replace("\\u0105", "ą").replace("\\u0119", "ę").replace("\\u017a", "ź").replace("\\u00f3", "ó")
x = x.replace("\\u2014", " - ").replace('–', '-').replace('—', ' - ')
x = x.replace("\\u2013", " - ").replace("\\u00ad", " - ")
x = str(unicodedata.normalize("NFKD", x).encode("ascii", "ignore").decode("utf-8"))
# some content returns text in bytes rather than as a str ?
try:
assert type(x).__name__ == "str"
except AssertionError:
print("not a string?", type(x), x)
return x
```
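A minimal sketch of the same idea, using a hypothetical `ascii_fold` helper that applies only a couple of the replacements above before the final NFKD/ASCII fold:

```python
import unicodedata

def ascii_fold(text):
    # Normalize a few common punctuation cases, then let NFKD plus
    # ascii/ignore strip any remaining non-ASCII characters (accents fold to
    # their base letter; characters with no ASCII decomposition are dropped).
    text = text.replace('“', '"').replace('”', '"').replace('’', "'")
    return unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("utf-8")

print(ascii_fold('“Café” résumé'))  # -> "Cafe" resume
```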
Scrub titles of problematic characters, drop nulls and dedupe
```
linkages_csv.head()
linkages_csv['title'] = linkages_csv['title'].apply(scrub_unicode)
linkages_csv = linkages_csv.loc[pd.notnull(linkages_csv.dataset)].drop_duplicates()
linkages_csv = linkages_csv.loc[pd.notnull(linkages_csv.title)].drop_duplicates()
pub_metadata_fields = ['title']
original_metadata_cols = list(set(linkages_csv.columns.values.tolist()) - set(pub_metadata_fields)-set(['dataset']))
```
#### Generate list of dicts of metadata
Read in `datasets.json`. Update `datasets_path` to your local.
```
import json
datasets_path = '/Users/andrewnorris/RCDatasets/datasets.json'
with open(datasets_path) as json_file:
datasets = json.load(json_file)
```
Create a list of dictionaries of publication metadata. `create_pub_dict` iterates through the `linkages_csv` dataframe and splits the `dataset` field (for when multiple datasets are listed); it prints an alert if a dataset doesn't exist yet and needs to be added to `datasets.json`.
```
def create_pub_dict(linkages_dataframe,datasets):
pub_dict_list = []
for i, r in linkages_dataframe.iterrows():
r['title'] = scrub_unicode(r['title'])
ds_id_list = [f for f in [d.strip() for d in r['dataset'].split(",")] if f not in [""," "]]
for ds in ds_id_list:
check_ds = [b for b in datasets if b['id'] == ds]
if len(check_ds) == 0:
                print("dataset {} isn't listed in datasets.json. Please add it to the file".format(ds))
required_metadata = r[pub_metadata_fields].to_dict()
required_metadata.update({'datasets':ds_id_list})
pub_dict = required_metadata
if len(original_metadata_cols) > 0:
original_metadata = r[original_metadata_cols].to_dict()
original_metadata.update({'date_added':datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')})
pub_dict.update({'original':original_metadata})
pub_dict_list.append(pub_dict)
return pub_dict_list
```
Generate publication metadata and export to json
```
linkage_list = create_pub_dict(linkages_csv,datasets)
```
Update `json_pub_path` to be:
`<name_of_subfolder>_publications.json`
```
json_pub_path = os.path.join('/Users/andrewnorris/RCPublications/partitions/',rcm_subfolder+'_publications.json')
with open(json_pub_path, 'w') as outfile:
json.dump(linkage_list, outfile, indent=2)
```
# Bootstrap distances to the future
Estimate uncertainty of distance to the future values per sample and model using the bootstrap of observed distances across time.
## Define inputs, outputs, and parameters
```
# Define inputs.
model_distances = snakemake.input.model_distances
# Define outputs.
output_table = snakemake.output.output_table
bootstrap_figure_for_simulated_sample_validation = snakemake.output.bootstrap_figure_for_simulated_sample_validation
bootstrap_figure_for_simulated_sample_test = snakemake.output.bootstrap_figure_for_simulated_sample_test
bootstrap_figure_for_natural_sample_validation = snakemake.output.bootstrap_figure_for_natural_sample_validation
bootstrap_figure_for_natural_sample_test = snakemake.output.bootstrap_figure_for_natural_sample_test
# Define parameters.
n_bootstraps = snakemake.params.n_bootstraps
error_types = ["validation", "test"]
```
## Import dependencies
```
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
```
## Configure plots and analyses
```
sns.set_style("white")
# Display figures at a reasonable default size.
mpl.rcParams['figure.figsize'] = (6, 4)
# Disable top and right spines.
mpl.rcParams['axes.spines.top'] = False
mpl.rcParams['axes.spines.right'] = False
# Display and save figures at higher resolution for presentations and manuscripts.
mpl.rcParams['savefig.dpi'] = 200
mpl.rcParams['figure.dpi'] = 120
# Display text at sizes large enough for presentations and manuscripts.
mpl.rcParams['font.weight'] = "normal"
mpl.rcParams['axes.labelweight'] = "normal"
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 14
mpl.rcParams['legend.fontsize'] = 12
mpl.rcParams['xtick.labelsize'] = 12
mpl.rcParams['ytick.labelsize'] = 12
mpl.rc('text', usetex=False)
color_by_predictor = {
'naive': '#cccccc',
'offspring': '#000000',
'normalized_fitness': '#999999',
'fitness': '#000000',
'ep': '#4575b4',
'ep_wolf': '#4575b4',
'ep_star': '#4575b4',
'ep_x': '#4575b4',
'ep_x_koel': '#4575b4',
'ep_x_wolf': '#4575b4',
'oracle_x': '#4575b4',
'rb': '#4575b4',
'cTiter': '#91bfdb',
'cTiter_x': '#91bfdb',
'cTiterSub': '#91bfdb',
'cTiterSub_star': '#91bfdb',
'cTiterSub_x': '#91bfdb',
'fra_cTiter_x': '#91bfdb',
'ne_star': '#2ca25f',
'dms_star': '#99d8c9',
"dms_nonepitope": "#99d8c9",
"dms_entropy": "#99d8c9",
'unnormalized_lbi': '#fc8d59',
'lbi': '#fc8d59',
'delta_frequency': '#d73027',
'ep_x-ne_star': "#ffffff",
'ep_star-ne_star': "#ffffff",
'lbi-ne_star': "#ffffff",
'ne_star-lbi': "#ffffff",
'cTiter_x-ne_star': "#ffffff",
'cTiter_x-ne_star-lbi': "#ffffff",
'fra_cTiter_x-ne_star': "#ffffff"
}
histogram_color_by_predictor = {
'naive': '#cccccc',
'offspring': '#000000',
'normalized_fitness': '#000000',
'fitness': '#000000',
'ep': '#4575b4',
'ep_wolf': '#4575b4',
'ep_star': '#4575b4',
'ep_x': '#4575b4',
'ep_x_koel': '#4575b4',
'ep_x_wolf': '#4575b4',
'oracle_x': '#4575b4',
'rb': '#4575b4',
'cTiter': '#91bfdb',
'cTiter_x': '#91bfdb',
'cTiterSub': '#91bfdb',
'cTiterSub_star': '#91bfdb',
'cTiterSub_x': '#91bfdb',
'fra_cTiter_x': '#91bfdb',
'ne_star': '#2ca25f',
'dms_star': '#99d8c9',
"dms_nonepitope": "#99d8c9",
"dms_entropy": "#99d8c9",
'unnormalized_lbi': '#fc8d59',
'lbi': '#fc8d59',
'delta_frequency': '#d73027',
'ep_x-ne_star': "#999999",
'ep_star-ne_star': "#999999",
'lbi-ne_star': "#999999",
'ne_star-lbi': "#999999",
'cTiter_x-ne_star': "#999999",
'cTiter_x-ne_star-lbi': "#999999",
'fra_cTiter_x-ne_star': "#999999"
}
name_by_predictor = {
"naive": "naive",
"offspring": "observed fitness",
"normalized_fitness": "true fitness",
"fitness": "estimated fitness",
"ep": "epitope mutations",
"ep_wolf": "Wolf epitope mutations",
"ep_star": "epitope ancestor",
"ep_x": "epitope antigenic\nnovelty",
"ep_x_koel": "Koel epitope antigenic novelty",
"ep_x_wolf": "Wolf epitope antigenic novelty",
"oracle_x": "oracle antigenic novelty",
"rb": "Koel epitope mutations",
"cTiter": "antigenic advance",
"cTiter_x": "HI antigenic novelty",
"cTiterSub": "linear HI mut phenotypes",
"cTiterSub_star": "ancestral HI mut phenotypes",
"cTiterSub_x": "HI sub cross-immunity",
"fra_cTiter_x": "FRA antigenic novelty",
"ne_star": "mutational load",
"dms_star": "DMS mutational\neffects",
"dms_nonepitope": "DMS mutational load",
"dms_entropy": "DMS entropy",
"unnormalized_lbi": "unnormalized LBI",
"lbi": "LBI",
"delta_frequency": "delta frequency",
'ep_x-ne_star': "mutational load +\nepitope antigenic\nnovelty",
'ep_star-ne_star': "mutational load +\nepitope ancestor",
'lbi-ne_star': "mutational load +\n LBI",
'ne_star-lbi': "mutational load +\n LBI",
'cTiter_x-ne_star': "mutational load +\nHI antigenic novelty",
'cTiter_x-ne_star-lbi': "mutational load +\nHI antigenic novelty +\nLBI",
'fra_cTiter_x-ne_star': "mutational load +\nFRA antigenic novelty"
}
name_by_sample = {
"simulated_sample_3": "simulated populations",
"natural_sample_1_with_90_vpm_sliding": "natural populations"
}
color_by_model = {name_by_predictor[predictor]: color for predictor, color in color_by_predictor.items()}
predictors_by_sample = {
"simulated_sample_3": [
"naive",
"normalized_fitness",
"ep_x",
"ne_star",
"lbi",
"delta_frequency",
"ep_star-ne_star",
"ep_x-ne_star",
"lbi-ne_star"
],
"natural_sample_1_with_90_vpm_sliding": [
"naive",
"ep_x",
"cTiter_x",
"ne_star",
"dms_star",
"lbi",
"delta_frequency",
"ep_star-ne_star",
"ep_x-ne_star",
"cTiter_x-ne_star",
"ne_star-lbi",
"cTiter_x-ne_star-lbi"
]
}
df = pd.read_table(model_distances)
```
## Bootstrap hypothesis tests
Perform [bootstrap hypothesis tests](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)#Bootstrap_hypothesis_testing) (Efron and Tibshirani 1993) between biologically-informed models and the naive model for each dataset.
The following procedure is copied from the article linked above to motivate the functions defined below.

1. Calculate the test statistic:

   $$
   t = \frac{\bar{x}-\bar{y}}{\sqrt{\sigma_x^2/n + \sigma_y^2/m}}
   $$

2. Create two new data sets whose values are $x_i^{'} = x_i - \bar{x} + \bar{z}$ and $y_i^{'} = y_i - \bar{y} + \bar{z}$, where $\bar{z}$ is the mean of the combined sample.
3. Draw a random sample ($x_i^*$) of size $n$ with replacement from $x_i^{'}$ and another random sample ($y_i^*$) of size $m$ with replacement from $y_i^{'}$.
4. Calculate the test statistic $t^* = \frac{\bar{x^*}-\bar{y^*}}{\sqrt{\sigma_x^{*2}/n + \sigma_y^{*2}/m}}$.
5. Repeat steps 3 and 4 $B$ times (e.g. $B=1000$) to collect $B$ values of the test statistic.
6. Estimate the p-value as $p = \frac{\sum_{i=1}^B I\{t_i^* \geq t\}}{B}$, where $I(\text{condition}) = 1$ when the condition is true and 0 otherwise.
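The procedure can be sanity-checked in isolation; this sketch uses synthetic normal samples (not the notebook's distance data), with one sample shifted by a full standard deviation so the test should clearly reject the null.

```python
import numpy as np

rng = np.random.default_rng(0)

def t_stat(x, y):
    # Test statistic from the formula above.
    return (x.mean() - y.mean()) / np.sqrt(x.var() / x.size + y.var() / y.size)

def bootstrap_p(x, y, n_boot=1000):
    t = t_stat(x, y)
    # Recenter both samples on the combined mean to form the null distributions.
    z_mean = np.concatenate([x, y]).mean()
    xa = x - x.mean() + z_mean
    ya = y - y.mean() + z_mean
    # Resample with replacement and collect bootstrap t statistics.
    t_star = np.array([
        t_stat(rng.choice(xa, xa.size), rng.choice(ya, ya.size))
        for _ in range(n_boot)
    ])
    return (t_star >= t).sum() / n_boot

x = rng.normal(1.0, 1.0, 50)  # mean shifted by one standard deviation
y = rng.normal(0.0, 1.0, 50)
print(bootstrap_p(x, y))
```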
```
def get_model_distances_by_build(df, sample, error_type, predictors):
return df.query(
f"(sample == '{sample}') & (error_type == '{error_type}') & (predictors == '{predictors}')"
)["validation_error"].values
def calculate_t_statistic(x_dist, y_dist):
"""Calculate the t statistic between two given distributions.
"""
# Calculate mean and variance for the two input distributions.
x_mean = x_dist.mean()
x_var = np.var(x_dist)
x_length = x_dist.shape[0]
y_mean = y_dist.mean()
y_var = np.var(y_dist)
y_length = y_dist.shape[0]
# Calculate the test statistic t.
t = (x_mean - y_mean) / np.sqrt((x_var / x_length) + (y_var / y_length))
return t
def bootstrap_t(x_dist_adjusted, y_dist_adjusted):
"""For a given pair of distributions that have been recentered on the mean of the union of their original distributions,
create a single bootstrap sample from each distribution and calculate the corresponding t statistic for that sample.
"""
x_dist_adjusted_sample = np.random.choice(x_dist_adjusted, size=x_dist_adjusted.shape[0], replace=True)
y_dist_adjusted_sample = np.random.choice(y_dist_adjusted, size=y_dist_adjusted.shape[0], replace=True)
return calculate_t_statistic(x_dist_adjusted_sample, y_dist_adjusted_sample)
def compare_distributions_by_bootstrap(x_dist, y_dist, n_bootstraps):
"""Compare the means of two given distributions by a bootstrap hypothesis test.
Returns the p-value, t statistic, and the bootstrap distribution of t values.
"""
# Calculate means of input distributions.
x_mean = x_dist.mean()
y_mean = y_dist.mean()
# Calculate the test statistic t.
t = calculate_t_statistic(x_dist, y_dist)
# Calculate mean of joint distribution.
z_dist = np.concatenate([x_dist, y_dist])
z_mean = z_dist.mean()
# Create new distributions centered on the mean of the joint distribution.
x_dist_adjusted = x_dist - x_mean + z_mean
y_dist_adjusted = y_dist - y_mean + z_mean
bootstrapped_t_dist = np.array([
bootstrap_t(x_dist_adjusted, y_dist_adjusted)
for i in range(n_bootstraps)
])
p_value = (bootstrapped_t_dist >= t).sum() / n_bootstraps
return (p_value, t, bootstrapped_t_dist)
example_model_dist = get_model_distances_by_build(
df,
"simulated_sample_3",
"validation",
"normalized_fitness"
)
example_naive_dist = get_model_distances_by_build(
df,
"simulated_sample_3",
"validation",
"naive"
)
example_model_difference = example_model_dist - example_naive_dist
example_null_difference = example_model_difference - example_model_difference.mean()
example_model_dist
example_naive_dist
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
bins = np.arange(
min(example_model_difference.min(), example_null_difference.min()),
max(example_model_difference.max(), example_null_difference.max()),
0.5
)
ax.hist(example_model_difference, bins=bins, label="true fitness", alpha=0.5)
ax.hist(example_null_difference, bins=bins, label="null model", alpha=0.5)
ax.axvline(x=example_model_difference.mean(), label="model mean", color="blue")
ax.axvline(x=example_null_difference.mean(), label="null model mean", color="orange")
ax.set_xlim(-6, 6)
ax.set_xlabel("Model - naive distance to future (AAs)")
ax.set_ylabel("Number of timepoints")
ax.set_title(
"Example model and null distributions\nfor differences between distances to the future",
fontsize=12
)
ax.legend(frameon=False)
# Compare all model distributions to the corresponding naive model distribution for
# all samples and error types. Store the resulting p-values and metadata in a new
# data frame.
p_values = []
bootstrapped_t_distributions = []
for sample, predictors in predictors_by_sample.items():
sample_df = df.query(f"sample == '{sample}'")
for error_type in error_types:
error_type_df = sample_df.query(f"error_type == '{error_type}'")
naive_dist = error_type_df.query("predictors == 'naive'")["validation_error"].values
for predictor in predictors:
if predictor == "naive":
continue
predictor_dist = error_type_df.query(f"predictors == '{predictor}'")["validation_error"].values
# Calculate the difference between the model's distance to the future
# and the naive model's at the same timepoint. This difference should
# account for timepoint-to-timepoint variation observed across all models.
difference_dist = predictor_dist - naive_dist
# Center the observed distribution by its mean to produce a null distribution
# with the same variance and a mean of zero. We want to test whether the
# observed differences between this model and the naive model are different
# from zero.
null_difference_dist = difference_dist - difference_dist.mean()
# Perform the bootstrap hypothesis test between the differences distributions.
p_value, t, bootstrapped_t_dist = compare_distributions_by_bootstrap(
null_difference_dist,
difference_dist,
n_bootstraps
)
p_values.append({
"sample": sample,
"error_type": error_type,
"predictors": predictor,
"t": t,
"p_value": p_value
})
bootstrapped_t_distributions.append(
pd.DataFrame({
"sample": sample,
"error_type": error_type,
"predictors": predictor,
"empirical_t": t,
"p_value": p_value,
"bootstrap_t": bootstrapped_t_dist
})
)
bootstrapped_t_distributions_df = pd.concat(bootstrapped_t_distributions)
bootstrapped_t_distributions_df.head()
bootstrapped_t_distributions_df.shape
def plot_histogram_by_sample_and_type(df, sample, error_type):
example_df = df.query(f"(sample == '{sample}') & (error_type == '{error_type}')")
example_df = example_df.sort_values("empirical_t", ascending=False).copy()
grouped_df = example_df.groupby("predictors", sort=False)
predictors = grouped_df["predictors"].first().values
empirical_t_values = grouped_df["empirical_t"].first().values
sample_p_values = grouped_df["p_value"].first().values
n_rows = int(np.ceil(sample_p_values.shape[0] / 2.0))
n_cells = 2 * n_rows
fig, all_axes = plt.subplots(
n_rows,
2,
figsize=(8, n_rows),
sharex=True,
sharey=True
)
axes = all_axes.flatten()
bins = np.arange(-5, 5, 0.25)
for i, predictor in enumerate(predictors):
ax = axes[i]
if sample_p_values[i] < 1.0 / n_bootstraps:
p_value = f"$p$ < {1.0 / n_bootstraps}"
else:
p_value = f"$p$ = {sample_p_values[i]}"
ax.hist(
example_df.query(f"predictors == '{predictor}'")["bootstrap_t"].values,
bins=bins,
color=histogram_color_by_predictor[predictor]
)
ax.axvline(
empirical_t_values[i],
color="orange"
)
ax.text(
0.01,
0.9,
f"$t$ = {empirical_t_values[i]:.2f}, {p_value}",
horizontalalignment="left",
verticalalignment="center",
transform=ax.transAxes,
fontsize=10
)
ax.set_title(
name_by_predictor[predictor].replace("\n", " "),
fontsize=10
)
if (i >= n_cells - 2) or (n_cells > len(predictors) and i == n_cells - 3):
ax.set_xlabel("$t$ statistic")
ax.xaxis.set_ticks_position('bottom')
ax.tick_params(which='major', width=1.00, length=5)
ax.xaxis.set_major_locator(ticker.AutoLocator())
else:
ax.xaxis.set_ticks([])
ax.xaxis.set_visible(False)
# Clear subplots that do not have any data.
for i in range(len(predictors), n_cells):
axes[i].axis("off")
fig.text(
0.0,
0.5,
"bootstrap samples",
rotation="vertical",
horizontalalignment="center",
verticalalignment="center"
)
fig.text(
0.5,
0.99,
f"{name_by_sample[sample]}, {error_type} period",
horizontalalignment="center",
verticalalignment="center",
fontsize=12
)
fig.tight_layout(pad=0.75, w_pad=1.0, h_pad=0.1)
return fig, axes
sample = "simulated_sample_3"
error_type = "validation"
fig, axes = plot_histogram_by_sample_and_type(
bootstrapped_t_distributions_df,
sample,
error_type
)
plt.savefig(bootstrap_figure_for_simulated_sample_validation, bbox_inches="tight")
sample = "simulated_sample_3"
error_type = "test"
fig, axes = plot_histogram_by_sample_and_type(
bootstrapped_t_distributions_df,
sample,
error_type
)
plt.savefig(bootstrap_figure_for_simulated_sample_test, bbox_inches="tight")
sample = "natural_sample_1_with_90_vpm_sliding"
error_type = "validation"
fig, axes = plot_histogram_by_sample_and_type(
bootstrapped_t_distributions_df,
sample,
error_type
)
plt.savefig(bootstrap_figure_for_natural_sample_validation, bbox_inches="tight")
sample = "natural_sample_1_with_90_vpm_sliding"
error_type = "test"
fig, axes = plot_histogram_by_sample_and_type(
bootstrapped_t_distributions_df,
sample,
error_type
)
plt.savefig(bootstrap_figure_for_natural_sample_test, bbox_inches="tight")
p_value_df = pd.DataFrame(p_values)
p_value_df
```
Identify models whose mean distances are significantly closer to future populations than the naive model ($\alpha=0.05$).
```
p_value_df[p_value_df["p_value"] < 0.05]
p_value_df.to_csv(output_table, sep="\t", index=False)
```
## Compare distributions of composite and individual models
Perform bootstrap hypothesis tests between composite models and their respective individual models to determine whether any composite models are significantly more accurate. We perform these comparisons for both the simulated and natural samples.
```
composite_models = {
"simulated_sample_3": [
{
"individual": ["ne_star", "lbi"],
"composite": "lbi-ne_star"
},
{
"individual": ["ep_x", "ne_star"],
"composite": "ep_x-ne_star"
},
{
"individual": ["ep_star", "ne_star"],
"composite": "ep_star-ne_star"
}
],
"natural_sample_1_with_90_vpm_sliding": [
{
"individual": ["cTiter_x", "ne_star"],
"composite": "cTiter_x-ne_star"
},
{
"individual": ["ne_star", "lbi"],
"composite": "ne_star-lbi"
},
{
"individual": ["ep_x", "ne_star"],
"composite": "ep_x-ne_star"
},
{
"individual": ["ep_star", "ne_star"],
"composite": "ep_star-ne_star"
}
]
}
composite_vs_individual_p_values = []
for error_type in error_types:
for sample, models in composite_models.items():
for model in models:
composite_dist = get_model_distances_by_build(df, sample, error_type, model["composite"])
for individual_model in model["individual"]:
individual_dist = get_model_distances_by_build(df, sample, error_type, individual_model)
# Calculate the difference between the composite model's distance to the future
# and the individual model's at the same timepoint. This difference should
# account for timepoint-to-timepoint variation observed across all models.
difference_dist = composite_dist - individual_dist
# Center the observed distribution by its mean to produce a null distribution
# with the same variance and a mean of zero. We want to test whether the
# observed differences between the composite and individual models are different
# from zero.
null_difference_dist = difference_dist - difference_dist.mean()
p_value, t, bootstrapped_t_dist = compare_distributions_by_bootstrap(
null_difference_dist,
difference_dist,
n_bootstraps
)
composite_vs_individual_p_values.append({
"sample": sample,
"error_type": error_type,
"individual_model": individual_model,
"composite_model": model["composite"],
"t": t,
"p_value": p_value
})
composite_vs_individual_p_values_df = pd.DataFrame(composite_vs_individual_p_values)
composite_vs_individual_p_values_df.query("p_value < 0.05")
```
## Calculate bootstraps for all models and samples
```
df["error_difference"] = df["validation_error"] - df["null_validation_error"]
bootstrap_distances = []
for (sample, error_type, predictors), group_df in df.groupby(["sample", "error_type", "predictors"]):
if sample not in predictors_by_sample:
continue
if predictors not in predictors_by_sample[sample]:
continue
print(f"Processing: {sample}, {error_type}, {predictors}")
# Calculate difference between validation error
bootstrap_distribution = [
group_df["error_difference"].sample(frac=1.0, replace=True).mean()
for i in range(n_bootstraps)
]
bootstrap_distances.append(pd.DataFrame({
"sample": sample,
"error_type": error_type,
"predictors": predictors,
"bootstrap_distance": bootstrap_distribution
}))
bootstraps_df = pd.concat(bootstrap_distances)
bootstraps_df["model"] = bootstraps_df["predictors"].map(name_by_predictor)
bootstraps_df.head()
def plot_bootstrap_distances(bootstraps_df, predictors, title, width=16, height=8):
fig, axes = plt.subplots(2, 1, figsize=(width, height), gridspec_kw={"hspace": 0.5})
sample_name = bootstraps_df["sample"].drop_duplicates().values[0]
bootstrap_df = bootstraps_df.query("error_type == 'validation'")
bootstrap_df = bootstrap_df[bootstrap_df["predictors"].isin(predictors)].copy()
# Use this order for both validation and test facets as in Tables 1 and 2.
models_order = bootstrap_df.groupby("model")["bootstrap_distance"].mean().sort_values().reset_index()["model"].values
predictors_order = bootstrap_df.groupby("predictors")["bootstrap_distance"].mean().sort_values().reset_index()["predictors"].values
median_naive_distance = bootstrap_df.query("predictors == 'naive'")["bootstrap_distance"].median()
validation_ax = axes[0]
validation_ax = sns.violinplot(
x="model",
y="bootstrap_distance",
data=bootstrap_df,
order=models_order,
ax=validation_ax,
palette=color_by_model,
cut=0
)
max_distance = bootstrap_df["bootstrap_distance"].max() + 0.3
validation_ax.set_ylim(top=max_distance + 0.6)
for index, predictor in enumerate(predictors_order):
if predictor == "naive":
continue
p_value = p_value_df.query(f"(sample == '{sample_name}') & (error_type == 'validation') & (predictors == '{predictor}')")["p_value"].values[0]
if p_value < (1.0 / n_bootstraps):
p_value_string = f"p < {1.0 / n_bootstraps}"
else:
p_value_string = f"p = {p_value:.4f}"
validation_ax.text(
index,
max_distance,
p_value_string,
fontsize=12,
horizontalalignment="center",
verticalalignment="center"
)
validation_ax.axhline(y=median_naive_distance, label="naive", color="#999999", zorder=-10)
validation_ax.title.set_text(f"Validation of {title}")
validation_ax.set_xlabel("Model")
validation_ax.set_ylabel("Bootstrapped model - naive\ndistance to future (AAs)")
bootstrap_df = bootstraps_df.query("error_type == 'test'")
bootstrap_df = bootstrap_df[bootstrap_df["predictors"].isin(predictors)].copy()
median_naive_distance = bootstrap_df.query("predictors == 'naive'")["bootstrap_distance"].median()
test_ax = axes[1]
test_ax = sns.violinplot(
x="model",
y="bootstrap_distance",
data=bootstrap_df,
order=models_order,
ax=test_ax,
palette=color_by_model,
cut=0
)
max_distance = bootstrap_df["bootstrap_distance"].max() + 0.3
test_ax.set_ylim(top=max_distance + 0.6)
for index, predictor in enumerate(predictors_order):
if predictor == "naive":
continue
p_value = p_value_df.query(f"(sample == '{sample_name}') & (error_type == 'test') & (predictors == '{predictor}')")["p_value"].values[0]
if p_value < (1.0 / n_bootstraps):
p_value_string = f"p < {1.0 / n_bootstraps}"
else:
p_value_string = f"p = {p_value:.4f}"
test_ax.text(
index,
max_distance,
p_value_string,
fontsize=12,
horizontalalignment="center",
verticalalignment="center"
)
test_ax.set_xlabel("Model")
test_ax.set_ylabel("Bootstrapped model - naive\ndistance to future (AAs)")
test_ax.axhline(y=median_naive_distance, label="naive", color="#999999", zorder=-10)
test_ax.title.set_text(f"Test of {title}")
sns.despine()
return fig, axes
sample = "simulated_sample_3"
fig, axes = plot_bootstrap_distances(
bootstraps_df.query(f"sample == '{sample}'"),
predictors_by_sample[sample],
name_by_sample[sample],
width=16
)
plt.tight_layout()
#plt.savefig(bootstrap_figure_for_simulated_sample, bbox_inches="tight")
sample = "natural_sample_1_with_90_vpm_sliding"
fig, axes = plot_bootstrap_distances(
bootstraps_df.query(f"sample == '{sample}'"),
predictors_by_sample[sample],
name_by_sample[sample],
width=24
)
plt.tight_layout()
#plt.savefig(bootstrap_figure_for_natural_sample, bbox_inches="tight")
```
| github_jupyter |
```
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor
import optuna
df = pd.read_csv("../input/30days-folds/train_folds.csv")
df_test = pd.read_csv("../input/30-days-of-ml/test.csv")
sample_submission = pd.read_csv("../input/30-days-of-ml/sample_submission.csv")
useful_features = [c for c in df.columns if c not in ("id", "target", "kfold")]
object_cols = [col for col in useful_features if col.startswith("cat")]
df_test = df_test[useful_features]
for col in object_cols:
temp_df = []
temp_test_feat = None
for fold in range(5):
xtrain = df[df.kfold != fold].reset_index(drop=True)
xvalid = df[df.kfold == fold].reset_index(drop=True)
feat = xtrain.groupby(col)["target"].agg("mean")
feat = feat.to_dict()
xvalid.loc[:, f"tar_enc_{col}"] = xvalid[col].map(feat)
temp_df.append(xvalid)
if temp_test_feat is None:
temp_test_feat = df_test[col].map(feat)
else:
temp_test_feat += df_test[col].map(feat)
temp_test_feat /= 5
df_test.loc[:, f"tar_enc_{col}"] = temp_test_feat
df = pd.concat(temp_df)
useful_features = [c for c in df.columns if c not in ("id", "target", "kfold")]
object_cols = [col for col in useful_features if col.startswith("cat")]
df_test = df_test[useful_features]
def run(trial):
fold = 0
learning_rate = trial.suggest_float("learning_rate", 1e-2, 0.25, log=True)
reg_lambda = trial.suggest_float("reg_lambda", 1e-8, 100.0, log=True)
reg_alpha = trial.suggest_float("reg_alpha", 1e-8, 100.0, log=True)
subsample = trial.suggest_float("subsample", 0.1, 1.0)
colsample_bytree = trial.suggest_float("colsample_bytree", 0.1, 1.0)
max_depth = trial.suggest_int("max_depth", 1, 7)
xtrain = df[df.kfold != fold].reset_index(drop=True)
xvalid = df[df.kfold == fold].reset_index(drop=True)
ytrain = xtrain.target
yvalid = xvalid.target
xtrain = xtrain[useful_features]
xvalid = xvalid[useful_features]
ordinal_encoder = preprocessing.OrdinalEncoder()
xtrain[object_cols] = ordinal_encoder.fit_transform(xtrain[object_cols])
xvalid[object_cols] = ordinal_encoder.transform(xvalid[object_cols])
model = XGBRegressor(
random_state=42,
tree_method="gpu_hist",
gpu_id=1,
predictor="gpu_predictor",
n_estimators=7000,
learning_rate=learning_rate,
reg_lambda=reg_lambda,
reg_alpha=reg_alpha,
subsample=subsample,
colsample_bytree=colsample_bytree,
max_depth=max_depth,
)
model.fit(xtrain, ytrain, early_stopping_rounds=300, eval_set=[(xvalid, yvalid)], verbose=1000)
preds_valid = model.predict(xvalid)
rmse = mean_squared_error(yvalid, preds_valid, squared=False)
return rmse
study = optuna.create_study(direction="minimize")
study.optimize(run, n_trials=5)
study.best_params
# Refit a model on the full training data with the best parameters found by
# Optuna, then predict the test set for the submission. (The original cell
# mistakenly assigned the Optuna study object itself to the target column.)
xtrain_full = df[useful_features].copy()
ytrain_full = df.target
ordinal_encoder = preprocessing.OrdinalEncoder()
xtrain_full[object_cols] = ordinal_encoder.fit_transform(xtrain_full[object_cols])
xtest = df_test[useful_features].copy()
xtest[object_cols] = ordinal_encoder.transform(xtest[object_cols])
final_model = XGBRegressor(random_state=42, n_estimators=5000, **study.best_params)
final_model.fit(xtrain_full, ytrain_full)
sample_submission.target = final_model.predict(xtest)
sample_submission.to_csv("submission.csv", index=False)
```
| github_jupyter |
```
import pandas as pd
import numpy as np
from typing import List, Union
from scipy.special import erf, binom
from statdepth.depth._depthcalculations import _subsequences
def _norm_cdf(x: np.ndarray, mu: float, sigma: float):
"""
Estimate the CDF at x for the normal distribution parametrized by mu and sigma^2
"""
print(f'erf term is {erf((x - mu) / (sigma * np.sqrt(2)))}')
return 0.5 * (1 + erf((x - mu) / (sigma * np.sqrt(2))))
def _uncertain_depth_univariate(data: pd.DataFrame, curve: Union[str, int], sigma2: pd.DataFrame, J: int=2, strict=False):
"""
Calculate uncertain depth for the given curve, assuming each entry in our data comes from a normal distribution
where the mean is the observed value and the variance is the corresponding entry in sigma2.
Parameters:
-----------
data: pd.DataFrame
An n x p matrix, where we have p real-valued functions collected at n discrete time intervals
curve: int or str
Column (function) to calculate depth for
sigma2: pd.DataFrame
An n x p matrix where each entry is the variance of the distribution at that entry
Returns:
----------
pd.Series: Depth values for each function (column)
"""
n, p = data.shape
depth = 0
sigma = sigma2.pow(.5)
# Drop our current curve from our data
if curve in data.columns:
data = data.drop(curve, axis=1)
subseq = _subsequences(data.columns, J)
if J == 2:
for seq in subseq:
d = 1
f1 = seq[0]
f2 = seq[1]
for time in data.index:
p1 = _norm_cdf(data.loc[time, f1], data.loc[time, f1], sigma.loc[time, f1])
p2 = _norm_cdf(data.loc[time, f2], data.loc[time, f2], sigma.loc[time, f2])
print(f'p1 is {p1}, p2 is {p2}')
if strict:
d *= p1 + p2 - 2 * p1 * p2
else:
d += p1 + p2 - 2 * p1 * p2
depth += d
elif J == 3:
for seq in subseq:
d = 1
f1, f2, f3 = seq[0], seq[1], seq[2]
for time in data.index:
p1 = _norm_cdf(data.loc[time, f1], data.loc[time, f1], sigma.loc[time, f1])
p2 = _norm_cdf(data.loc[time, f2], data.loc[time, f2], sigma.loc[time, f2])
p3 = _norm_cdf(data.loc[time, f3], data.loc[time, f3], sigma.loc[time, f3])
if strict:
d *= p1 + p2 + p3 - p1 * p2 - p2*p3 - p1*p3
else:
d += p1 + p2 + p3 - p1 * p2 - p2*p3 - p1*p3
depth += d
else: # Handle J=4 later, not sure about computation
pass
return depth / binom(data.shape[1], J) if strict else depth / binom(data.shape[1], J) * n / p # Because in the nonstrict case we are summing 1/|D| n times
from statdepth.testing import generate_noisy_univariate
df = generate_noisy_univariate() * 5
sigma2 = generate_noisy_univariate() * 5
def probabilistic_depth(data: pd.DataFrame, sigma2: pd.DataFrame, J: int=2, strict=False):
depths = []
cols = data.columns
for col in data.columns:
depths.append(_uncertain_depth_univariate(data=data, curve=col, sigma2=sigma2, J=J, strict=strict))
return pd.Series(index=cols, data=depths)
p = '*'
exec(f'p=5{p}5')
p
```
| github_jupyter |
# AR6 WG1 - SPM.4
This notebook reproduces the panel a) of **Figure SPM.4** of the IPCC's *Working Group I contribution to the Sixth Assessment Report* ([AR6 WG1](https://www.ipcc.ch/assessment-report/ar6/)).
The data supporting the SPM figure is published under a Creative Commons CC-BY license at
the [Centre for Environmental Data Analyis (CEDA)](https://catalogue.ceda.ac.uk/uuid/ae4f1eb6fce24adcb92ddca1a7838a5c).
This notebook uses a version of that data which was processed for interoperability with the format used by IPCC WG3, the so-called IAMC format.
The notebook is available under an open-source [BSD-3 License](https://github.com/openscm/AR6-WG1-Data-Compilation/blob/main/LICENSE) in the [openscm/AR6-WG1-Data-Compilation](https://github.com/openscm/AR6-WG1-Data-Compilation) GitHub repository.
The notebook uses the Python package [pyam](https://pyam-iamc.readthedocs.io), which provides a suite of features and methods for the analysis, validation and visualization of reference data and scenario results
generated by integrated assessment models, macro-energy tools and other frameworks
in the domain of energy transition, climate change mitigation and sustainable development.
```
import matplotlib.pyplot as plt
import pyam
import utils
rc = pyam.run_control()
rc.update("plotting.yaml")
```
## Import and inspect the scenario data
Import the scenario data as a [pyam.IamDataFrame](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html)
and display the timeseries data in wide, IAMC-style format
using [timeseries()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.html#pyam.IamDataFrame.timeseries)...
```
df = pyam.IamDataFrame(utils.DATA_DIR / "processed" / "fig-spm4" / "fig-spm4-timeseries.csv")
df
df.timeseries()
```
## Create a simple plot for each species
Use [matplotlib](https://matplotlib.org) and
the [pyam plotting module](https://pyam-iamc.readthedocs.io/en/stable/gallery/index.html)
to create a multi-panel figure.
```
species = ["CO2", "CH4", "N2O", "Sulfur"]
# We first create a matplotlib figure with several "axes" objects (i.e., individual plots)
fig, ax = plt.subplots(1, len(species), figsize=(15, 5))
# Then, we iterate over the axes, plotting the graph for each species as we go along
for i, s in enumerate(species):
# Show the legend only for the right-most panel
show_legend = i == len(species) - 1
(
df.filter(variable=f"Emissions|{s}")
.plot(ax=ax[i], color="scenario", legend=dict(loc="outside right") if show_legend else False)
)
# We can also modify the axes objects directly to produce a better figure
ax[i].set_title(s)
# Clean and show the plot
plt.tight_layout()
fig
```
| github_jupyter |
# Linear Regression
Example from [Introduction to Computation and Programming Using Python](https://mitpress.mit.edu/books/introduction-computation-and-programming-using-python-revised-and-expanded-edition)
```
import matplotlib.pyplot as plot
from numpy import (
array,
asarray,
correlate,
cov,
genfromtxt,
mean,
median,
polyfit,
std,
var,
)
from scipy.stats import linregress
```
## Calculate regression line using linear fit _y = ax + b_
```
def linear_fit_byhand(x_vals, y_vals):
x_sum = sum(x_vals)
y_sum = sum(y_vals)
xy_sum = sum(x_vals * y_vals)
xsquare_sum = sum(x_vals ** 2)
count = len(x_vals)
# y = ax + b
# a = (NΣXY - (ΣX)(ΣY)) / (NΣX^2 - (ΣX)^2)
a_value = ((count * xy_sum) - (x_sum * y_sum)) / (
(count * xsquare_sum) - x_sum ** 2
)
# b = (ΣY - a(ΣX)) / N
b_value = (y_sum - a_value * x_sum) / count
est_yvals = a_value * x_vals + b_value
# calculate spring constant
k = 1 / a_value
# plot regression line
plot.plot(
x_vals,
est_yvals,
label="Linear fit by hand, k = "
+ str(round(k))
+ ", RSquare = "
+ str(r_square(y_vals, est_yvals)),
)
```
## Least-squares regression of scipy
```
def linear_regression(x_vals, y_vals):
a_value, b_value, r_value, p_value, std_err = linregress(x_vals, y_vals)
est_yvals = a_value * x_vals + b_value
k = 1 / a_value
plot.plot(
x_vals,
est_yvals,
label="Least-squares fit, k = "
+ str(round(k))
+ ", RSquare = "
+ str(r_value ** 2),
)
```
## Calculate regression line using linear fit _y = ax + b_
```
def linear_fit(x_vals, y_vals):
a_value, b_value = polyfit(x_vals, y_vals, 1)
est_yvals = a_value * array(x_vals) + b_value
# calculate spring constant
k = 1 / a_value
# plot regression line
plot.plot(
x_vals,
est_yvals,
label="Linear fit, k = "
+ str(round(k))
+ ", RSquare = "
+ str(r_square(y_vals, est_yvals)),
)
```
## Calculate quadratic fit _ax^2+bx+c_
```
def quadratic_fit(x_vals, y_vals):
a_value, b_value, c_value = polyfit(x_vals, y_vals, 2)
est_yvals = a_value * (x_vals ** 2) + b_value * (x_vals) + c_value
plot.plot(
x_vals,
est_yvals,
label="Quadratic fit, RSquare = " + str(r_square(y_vals, est_yvals)),
)
```
## Calculate cubic fit _ax^3+bx^2+cx+d_
```
def cubic_fit(x_vals, y_vals):
a_value, b_value, c_value, d_value = polyfit(x_vals, y_vals, 3)
est_yvals = a_value * (x_vals ** 3) + b_value * (x_vals ** 2)
est_yvals += c_value * x_vals + d_value
plot.plot(
x_vals,
est_yvals,
label="Cubic fit, RSquare = " + str(r_square(y_vals, est_yvals)),
)
```
## Method to display summary statistics
```
def display_statistics(x_vals, y_vals):
print("Mean(x)=%s Mean(Y)=%s" % (mean(x_vals), mean(y_vals)))
print("Median(x)=%s Median(Y)=%s" % (median(x_vals), median(y_vals)))
print("StdDev(x)=%s StdDev(Y)=%s" % (std(x_vals), std(y_vals)))
print("Var(x)=%s Var(Y)=%s" % (var(x_vals), var(y_vals)))
print("Cov(x,y)=%s" % cov(x_vals, y_vals))
# Note: numpy's correlate returns an unnormalized cross-correlation (an inner
# product), not the Pearson correlation coefficient.
print("Cor(x,y)=%s" % correlate(x_vals, y_vals))
```
## Plot data (x and y values) together with regression lines
```
def plot_data(vals):
x_vals = asarray([i[0] * 9.81 for i in vals])
y_vals = asarray([i[1] for i in vals])
# plot measurement values
plot.plot(x_vals, y_vals, "bo", label="Measured displacements")
plot.title("Measurement Displacement of Spring", fontsize="x-large")
plot.xlabel("|Force| (Newtons)")
plot.ylabel("Distance (meters)")
linear_fit_byhand(x_vals, y_vals)
linear_fit(x_vals, y_vals)
linear_regression(x_vals, y_vals)
quadratic_fit(x_vals, y_vals)
cubic_fit(x_vals, y_vals)
display_statistics(x_vals, y_vals)
```
## Calculate Coefficient of Determination (R^2)
Takes `measured` and `estimated` one dimensional arrays:
- `measured` is the one dimensional array of measured values
- `estimated` is the one dimensional array of predicted values
and calculates
$$R^2$$
where
$$R^2=1-\frac{EE}{MV}$$
and
$$0 \leq R^2 \leq 1$$.
- `EE` is the estimated error
- `MV` is the variance of the actual data
|Result |Interpretation|
|---------|--------------|
|$$R^2=1$$| the model explains all of the variability in the data |
|$$R^2=0$$| there is no linear relationship |
```
def r_square(measured, estimated):
estimated_error = ((estimated - measured) ** 2).sum()
m_mean = measured.sum() / float(len(measured))
m_variance = ((m_mean - measured) ** 2).sum()
return 1 - (estimated_error / m_variance)
```
## Test the equations
```
plot_data(genfromtxt("data/spring.csv", delimiter=","))
plot.legend(loc="best")
plot.tight_layout()
plot.show()
```
| github_jupyter |
Example 4 - Anisotropic Bearings.
====
In this example, we use the rotor seen in Example 5.9.2 from 'Dynamics of Rotating Machinery' by MI Friswell, JET Penny, SD Garvey & AW Lees, published by Cambridge University Press, 2010.
Both bearings have a stiffness of 1 MN/m in the x direction and 0.8 MN/m in the
y direction. Calculate the eigenvalues and mode shapes at 0 and 4,000 rev/min
and plot the natural frequency map for rotational speeds up to 4,500 rev/min.
```
from bokeh.io import output_notebook, show
import ross as rs
import numpy as np
output_notebook()
#Classic Instantiation of the rotor
shaft_elements = []
bearing_seal_elements = []
disk_elements = []
Steel = rs.steel
for i in range(6):
shaft_elements.append(rs.ShaftElement(L=0.25, material=Steel, n=i, i_d=0, o_d=0.05))
disk_elements.append(rs.DiskElement.from_geometry(n=2,
material=Steel,
width=0.07,
i_d=0.05,
o_d=0.28
)
)
disk_elements.append(rs.DiskElement.from_geometry(n=4,
material=Steel,
width=0.07,
i_d=0.05,
o_d=0.35
)
)
bearing_seal_elements.append(rs.BearingElement(n=0, kxx=1e6, kyy=.8e6, cxx=0, cyy=0))
bearing_seal_elements.append(rs.BearingElement(n=6, kxx=1e6, kyy=.8e6, cxx=0, cyy=0))
rotor592c = rs.Rotor(shaft_elements=shaft_elements,
bearing_seal_elements=bearing_seal_elements,
disk_elements=disk_elements,n_eigen = 12)
rotor592c.plot_rotor(plot_type='bokeh')
#From_section class method instantiation.
bearing_seal_elements = []
disk_elements = []
shaft_length_data = 3*[0.5]
i_d = 3*[0]
o_d = 3*[0.05]
disk_elements.append(rs.DiskElement.from_geometry(n=1,
material=Steel,
width=0.07,
i_d=0.05,
o_d=0.28
)
)
disk_elements.append(rs.DiskElement.from_geometry(n=2,
material=Steel,
width=0.07,
i_d=0.05,
o_d=0.35
)
)
bearing_seal_elements.append(rs.BearingElement(n=0, kxx=1e6, kyy=.8e6, cxx=0, cyy=0))
bearing_seal_elements.append(rs.BearingElement(n=3, kxx=1e6, kyy=.8e6, cxx=0, cyy=0))
rotor592fs = rs.Rotor.from_section(brg_seal_data=bearing_seal_elements,
disk_data=disk_elements,leng_data=shaft_length_data,
i_ds_data=i_d,o_ds_data=o_d )
rotor592fs.plot_rotor(plot_type='bokeh')
#Obtaining results (wn is in rad/s)
print('Normal Instantiation =', rotor592c.wn)
print('\n')
print('From Section Instantiation =', rotor592fs.wn)
#Obtaining results for w=4000RPM (wn is in rad/s)
rotor592c.w=4000*np.pi/30
print('Normal Instantiation =', rotor592c.wn)
```
- Campbell Diagram
```
campbell = rotor592c.run_campbell(np.linspace(0,4000*np.pi/30,100))
show(campbell.plot(plot_type='bokeh'))
```
- Mode Shapes
```
mode_shapes = rotor592c.run_mode_shapes()
for i in np.arange(0,5.1,1):
mode_shapes.plot(mode=int(i))
```
| github_jupyter |
# Heterogeneous Effects
> **Author**
- [Paul Schrimpf *UBC*](https://economics.ubc.ca/faculty-and-staff/paul-schrimpf/)
**Prerequisites**
- [Regression](regression.ipynb)
- [Machine Learning in Economics](ml_in_economics.ipynb)
**Outcomes**
- Understand potential outcomes and treatment effects
- Apply generic machine learning inference to data from a randomized experiment
```
# Uncomment following line to install on colab
#! pip install qeds fiona geopandas xgboost gensim folium pyLDAvis descartes
import pandas as pd
import numpy as np
import patsy
from sklearn import linear_model, ensemble, base, neural_network
import statsmodels.formula.api as smf
import statsmodels.api as sm
from sklearn.utils.testing import ignore_warnings
from sklearn.exceptions import ConvergenceWarning
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
```
In this notebook, we will learn how to apply machine learning methods to analyze
results of a randomized experiment. We typically begin analyzing
experimental results by calculating the difference in mean
outcomes between the treated and control groups. This difference estimates well
the average treatment effect. We can obtain more
nuanced results by recognizing that the effect of most experiments
might be heterogeneous. That is, different people could be affected by
the experiment differently. We will use machine learning methods to
explore this heterogeneity in treatment effects.
## Outline
- [Heterogeneous Effects](#Heterogeneous-Effects)
- [Background and Data](#Background-and-Data)
- [Potential Outcomes and Treatment Effects](#Potential-Outcomes-and-Treatment-Effects)
- [Generic Machine Learning Inference](#Generic-Machine-Learning-Inference)
- [Causal Trees and Forests](#Causal-Trees-and-Forests)
- [References](#References)
## Background and Data
We are going to use data from a randomized experiment in Indonesia
called Program Keluarga Harapan (PKH). PKH was a conditional cash
transfer program designed to improve child health. Eligible pregnant
women would receive a cash transfer if they attended at least 4
pre-natal and 2 post-natal visits, received iron supplements, and had
their baby delivered by a doctor or midwife. The cash transfers were
given quarterly and were about 60-220 dollars or 15-20 percent of
quarterly consumption. PKH eligibility was randomly assigned at the
kecamatan (district) level. All pregnant women living in a treated
kecamatan could choose to participate in the experiment. For more
information see [[hetACE+11]](#het-alatas2011) or [[hetTri16]](#het-triyana2016).
We are using the data provided with [[hetTri16]](#het-triyana2016).
```
url = "https://datascience.quantecon.org/assets/data/Triyana_2016_price_women_clean.csv.gz"
df = pd.read_csv(url)
df.describe()
```
## Potential Outcomes and Treatment Effects
Since program eligibility was randomly assigned (and what
policymakers could choose to change), we will focus on estimating
the effect of eligibility. We will let
$ d_i $ be a 1 if person $ i $ was eligible and be 0 if not.
Let $ y_i $ be an outcome of interest. Below, we
will look at midwife usage and birth weight as outcomes.
It is
useful to think about potential outcomes of the treatment. The potential treated
outcome is $ y_i(1) $. For subjects who actually were treated,
$ y_i(1) = y_i $ is the observed outcome. For untreated subjects,
$ y_i(1) $ is what mother i ‘s baby’s birth weight would have
been if she had been eligible for the program. Similarly, we can
define the potential untreated outcome $ y_i(0) $ .
The individual treatment effect for subject i is $ y_i(1) - y_i(0) $.
Individual treatment effects are impossible to know since we always
only observe $ y_i(1) $ or $ y_i(0) $, but never both.
When treatment is randomly assigned, we can estimate average treatment
effects because
$$
\begin{align*}
E[y_i(1) - y_i(0) ] = & E[y_i(1)] - E[y_i(0)] \\
& \text{random assignment } \\
= & E[y_i(1) | d_i = 1] - E[y_i(0) | d_i = 0] \\
= & E[y_i | d_i = 1] - E[y_i | d_i = 0 ]
\end{align*}
$$
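A small simulated example (the numbers below are made up for illustration and are not from the PKH data) shows why the difference in observed group means recovers the average treatment effect under random assignment, even though no individual treatment effect is ever observed:

```
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Potential outcomes with heterogeneous individual effects; the true ATE is 2.
y0 = rng.normal(0, 1, n)
y1 = y0 + 2 + rng.normal(0, 1, n)
d = rng.integers(0, 2, n)         # random assignment, independent of (y0, y1)
y = np.where(d == 1, y1, y0)      # we only ever observe one potential outcome
ate_hat = y[d == 1].mean() - y[d == 0].mean()
```

With this sample size, `ate_hat` comes out very close to the true value of 2.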
### Average Treatment Effects
Let’s estimate the average treatment effect.
```
# some data prep for later
formula = """
bw ~ pkh_kec_ever +
C(edu)*C(agecat) + log_xp_percap + hh_land + hh_home + C(dist) +
hh_phone + hh_rf_tile + hh_rf_shingle + hh_rf_fiber +
hh_wall_plaster + hh_wall_brick + hh_wall_wood + hh_wall_fiber +
hh_fl_tile + hh_fl_plaster + hh_fl_wood + hh_fl_dirt +
hh_water_pam + hh_water_mechwell + hh_water_well + hh_water_spring + hh_water_river +
hh_waterhome +
hh_toilet_own + hh_toilet_pub + hh_toilet_none +
hh_waste_tank + hh_waste_hole + hh_waste_river + hh_waste_field +
hh_kitchen +
hh_cook_wood + hh_cook_kerosene + hh_cook_gas +
tv + fridge + motorbike + car + goat + cow + horse
"""
bw, X = patsy.dmatrices(formula, df, return_type="dataframe")
# some categories are empty after dropping rows with Null, drop them now
X = X.loc[:, X.sum() > 0]
bw = bw.iloc[:, 0]
treatment_variable = "pkh_kec_ever"
treatment = X["pkh_kec_ever"]
Xl = X.drop(["Intercept", "pkh_kec_ever", "C(dist)[T.313175]"], axis=1)
#scale = bw.std()
#center = bw.mean()
loc_id = df.loc[X.index, "Location_ID"].astype("category")
import re
# remove [ ] from names for compatibility with xgboost
Xl = Xl.rename(columns=lambda x: re.sub(r'\[|\]', '_', x))
# Estimate average treatment effects
from statsmodels.iolib.summary2 import summary_col
tmp = pd.DataFrame(dict(birthweight=bw,treatment=treatment,assisted_delivery=df.loc[X.index, "good_assisted_delivery"]))
usage = smf.ols("assisted_delivery ~ treatment", data=tmp).fit(cov_type="cluster", cov_kwds={'groups':loc_id})
health = smf.ols("birthweight ~ treatment", data=tmp).fit(cov_type="cluster", cov_kwds={'groups':loc_id})
summary_col([usage, health])
```
The program did increase the percent of births assisted by a medical
professional, but on average, did not affect birth weight.
### Conditional Average Treatment Effects
Although we can never estimate individual treatment effects, the
logic that lets us estimate unconditional average treatment effects
also suggests that we can estimate conditional average treatment effects.
$$
\begin{align*}
E[y_i(1) - y_i(0) | X_i = x] &= E[y_i(1)|X_i = x] - E[y_i(0)|X_i = x] \\
&= E[y_i(1) | d_i = 1, X_i = x] - E[y_i(0) | d_i = 0, X_i = x] && \text{(random assignment)} \\
&= E[y_i | d_i = 1, X_i = x] - E[y_i | d_i = 0, X_i = x]
\end{align*}
$$
Conditional average treatment effects tell us whether there are
identifiable (by X) groups of people for whom treatment effects vary.
Since conditional average treatment effects involve conditional
expectations, machine learning methods might be useful.
However, if we want to be able to perform statistical inference, we
must use machine learning methods carefully. We will detail one
approach below. [[hetAI16]](#het-athey2016b) and [[hetWA18]](#het-wager2018) are
alternative approaches.
## Generic Machine Learning Inference
In this section, we will describe the “generic machine learning
inference” method of [[hetCDDFV18]](#het-cddf2018) to explore heterogeneity in
conditional average treatment effects.
This approach allows any
machine learning method to be used to estimate $ E[y_i(1) -
y_i(0) |X_i=x] $.
Inference for functions estimated by machine learning methods is
typically either impossible or requires very restrictive assumptions.
[[hetCDDFV18]](#het-cddf2018) gets around this problem by focusing on inference for
certain summary statistics of the machine learning prediction for
$ E[y_i(1) - y_i(0) |X_i=x] $ rather than
$ E[y_i(1) - y_i(0) |X_i=x] $ itself.
### Best Linear Projection of CATE
Let $ s_0(x) = E[y_i(1) - y_i(0) |X_i=x] $ denote the true
conditional average treatment effect. Let $ S(x) $ be an estimate
or noisy proxy for $ s_0(x) $. One way to summarize how well
$ S(x) $ approximates $ s_0(x) $ is to look at the best linear
projection of $ s_0(x) $ on $ S(x) $.
$$
\DeclareMathOperator*{\argmin}{arg\,min}
\beta_0, \beta_1 = \argmin_{b_0,b_1} E[(s_0(x) -
b_0 - b_1 (S(x)-E[S(x)]))^2]
$$
Showing that $ \beta_0 = E[y_i(1) - y_i(0)] $
is the unconditional average treatment effect is not difficult. More interestingly,
$ \beta_1 $ is related to how well $ S(x) $ approximates
$ s_0(x) $. If $ S(x) = s_0(x) $, then $ \beta_1=1 $. If
$ S(x) $ is completely uncorrelated with $ s_0(x) $, then
$ \beta_1 = 0 $.
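These two limiting cases can be verified numerically. The sketch below (purely simulated data) computes the projection slope, which for centered $ S(x) $ is $ \beta_1 = \mathrm{Cov}(s_0(X), S(X)) / \mathrm{Var}(S(X)) $, once for a perfect proxy and once for pure noise.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200_000)
s0 = 1.0 + 0.5 * x**2              # true CATE as a function of x

def blp_slope(S):
    # OLS slope of the best linear projection of s0 on centered S
    return np.cov(s0, S)[0, 1] / np.var(S)

slope_perfect = blp_slope(s0)                      # S = s0 -> slope near 1
slope_noise = blp_slope(rng.normal(size=x.size))   # uncorrelated S -> slope near 0
print(round(slope_perfect, 2), round(slope_noise, 2))
```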
The best linear projection of the conditional average treatment
effect tells us something about how well $ S(x) $ approximates
$ s_0(x) $, but does not directly quantify how much the conditional
average treatment effect varies with $ x $. We could try looking
at $ S(x) $ directly, but if $ x $ is high dimensional, reporting or visualizing
$ S(x) $ will be difficult. Moreover, most
machine learning methods offer no satisfactory way to perform inference
on $ S(x) $. This is problematic if we want to use
$ S(x) $ to shape future policy decisions. For example, we might
want to use $ S(x) $ to target the treatment to people with
different $ x $. If we do this, we need to know whether the
estimated differences across $ x $ in $ S(x) $ are precise or
caused by noise.
### Grouped Average Treatment Effects
To deal with both these issues, [[hetCDDFV18]](#het-cddf2018) focuses on
grouped average treatment effects (GATE) with groups defined by
$ S(x) $. Partition the data into a fixed, finite number of groups
based on $ S(x) $ . Let
$ G_{k}(x) = 1\{\ell_{k-1} \leq S(x) < \ell_k \} $ where
$ \ell_k $ could be a constant chosen by the researcher or evenly
spaced quantiles of $ S(x) $. The $ k $ th grouped average
treatment effect is then $ \gamma_k = E[y(1) - y(0) | G_k(x)] $.
If the true $ s_0(x) $ is not constant, and $ S(x) $
approximates $ s_0(x) $ well, then the grouped average treatment
effects will increase with $ k $. If the conditional average treatment effect
has no heterogeneity (i.e. $ s_0(x) $ is constant) and/or
$ S(x) $ is a poor approximation to $ s_0(x) $,
then the grouped average treatment effect will tend to
be constant with $ k $ and may even be non-monotonic due to
estimation error.
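A quantile-based grouping like this is easy to sketch on simulated data (hypothetical numbers, not the application below): when the proxy $ S(x) $ tracks the true effect, the estimated GATEs increase across groups.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_group = 50_000, 5
S = rng.normal(size=n)        # proxy for the CATE; here it tracks the truth
tau = 1.0 + 0.8 * S           # true individual treatment effects

# Group indicators G_k from evenly spaced quantiles of S
cutoffs = np.quantile(S, np.linspace(0, 1, n_group + 1))
groups = np.digitize(S, cutoffs[1:-1])   # group labels 0, ..., n_group - 1

# gamma_k = E[y(1) - y(0) | G_k]; monotone in k when S tracks the true effect
gate = np.array([tau[groups == k].mean() for k in range(n_group)])
print(np.round(gate, 2))
```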
### Estimation
We can estimate both the best linear projection of the conditional average treatment
effect and the grouped treatment effects by using
particular regressions. Let $ B(x) $ be an estimate of the outcome
conditional on no treatment, i.e. $ B(x) = \widehat{E[y(0)|x]} $
. Then the estimates of $ \beta $ from the regression
$$
y_i = \alpha_0 + \alpha_1 B(x_i) + \beta_0 (d_i-P(d=1)) + \beta_1
(d_i-P(d=1))(S(x_i) - E[S(x_i)]) + \epsilon_i
$$
are consistent estimates of the best linear projection of the
conditional average treatment effect if $ B(x_i) $ and
$ S(x_i) $ are uncorrelated with $ y_i $ . We can ensure that
$ B(x_i) $ and $ S(x_i) $ are uncorrelated with $ y_i $ by
using the familiar idea of sample-splitting and cross-validation. The
usual regression standard errors will also be valid.
Similarly, we can estimate grouped average treatment effects from the
following regression.
$$
y_i = \alpha_0 + \alpha_1 B(x_i) + \sum_k \gamma_k (d_i-P(d=1)) 1(G_k(x_i)) +
u_i
$$
The resulting estimates of $ \gamma_k $ will be consistent and
asymptotically normal with the usual regression standard errors.
```
# for clustering standard errors
def get_treatment_se(fit, cluster_id, rows=None):
if cluster_id is not None:
if rows is None:
rows = [True] * len(cluster_id)
vcov = sm.stats.sandwich_covariance.cov_cluster(fit, cluster_id.loc[rows])
return np.sqrt(np.diag(vcov))
return fit.HC0_se
def generic_ml_model(x, y, treatment, model, n_split=10, n_group=5, cluster_id=None):
nobs = x.shape[0]
blp = np.zeros((n_split, 2))
blp_se = blp.copy()
gate = np.zeros((n_split, n_group))
gate_se = gate.copy()
baseline = np.zeros((nobs, n_split))
cate = baseline.copy()
lamb = np.zeros((n_split, 2))
for i in range(n_split):
main = np.random.rand(nobs) > 0.5
rows1 = ~main & (treatment == 1)
rows0 = ~main & (treatment == 0)
mod1 = base.clone(model).fit(x.loc[rows1, :], (y.loc[rows1]))
mod0 = base.clone(model).fit(x.loc[rows0, :], (y.loc[rows0]))
B = mod0.predict(x)
S = mod1.predict(x) - B
baseline[:, i] = B
cate[:, i] = S
ES = S.mean()
## BLP
# assume P(treat|x) = P(treat) = mean(treat)
p = treatment.mean()
reg_df = pd.DataFrame(dict(
y=y, B=B, treatment=treatment, S=S, main=main, excess_S=S-ES
))
reg = smf.ols("y ~ B + I(treatment-p) + I((treatment-p)*(S-ES))", data=reg_df.loc[main, :])
reg_fit = reg.fit()
blp[i, :] = reg_fit.params.iloc[2:4]
blp_se[i, :] = get_treatment_se(reg_fit, cluster_id, main)[2:]
lamb[i, 0] = reg_fit.params.iloc[-1]**2 * S.var()
## GATEs
cutoffs = np.quantile(S, np.linspace(0,1, n_group + 1))
cutoffs[-1] += 1
for k in range(n_group):
reg_df[f"G{k}"] = (cutoffs[k] <= S) & (S < cutoffs[k+1])
g_form = "y ~ B + " + " + ".join([f"I((treatment-p)*G{k})" for k in range(n_group)])
g_reg = smf.ols(g_form, data=reg_df.loc[main, :])
g_fit = g_reg.fit()
gate[i, :] = g_fit.params.values[2:] #g_fit.params.filter(regex="G").values
gate_se[i, :] = get_treatment_se(g_fit, cluster_id, main)[2:]
lamb[i, 1] = (gate[i,:]**2).sum()/n_group
out = dict(
gate=gate, gate_se=gate_se,
blp=blp, blp_se=blp_se,
Lambda=lamb, baseline=baseline, cate=cate,
name=type(model).__name__
)
return out
def generic_ml_summary(generic_ml_output):
out = {
x: np.nanmedian(generic_ml_output[x], axis=0)
for x in ["blp", "blp_se", "gate", "gate_se", "Lambda"]
}
out["name"] = generic_ml_output["name"]
return out
kw = dict(x=Xl, treatment=treatment, n_split=11, n_group=5, cluster_id=loc_id)
@ignore_warnings(category=ConvergenceWarning)
def evaluate_models(models, y, **other_kw):
all_kw = kw.copy()
all_kw["y"] = y
all_kw.update(other_kw)
return list(map(lambda x: generic_ml_model(model=x, **all_kw), models))
def generate_report(results):
summaries = list(map(generic_ml_summary, results))
df_plot = pd.DataFrame({
mod["name"]: np.median(mod["cate"], axis=1)
for mod in results
})
print("Correlation in median CATE:")
display(df_plot.corr())
sns.pairplot(df_plot, diag_kind="kde", kind="reg")
print("\n\nBest linear projection of CATE")
df_cate = pd.concat({
s["name"]: pd.DataFrame(dict(blp=s["blp"], se=s["blp_se"]))
for s in summaries
}).T.stack()
display(df_cate)
print("\n\nGroup average treatment effects:")
df_groups = pd.concat({
s["name"]: pd.DataFrame(dict(gate=s["gate"], se=s["gate_se"]))
for s in summaries
}).T.stack()
display(df_groups)
import xgboost as xgb
models = [
linear_model.LassoCV(cv=10, n_alphas=25, max_iter=500, tol=1e-4, n_jobs=1),
ensemble.RandomForestRegressor(n_estimators=200, min_samples_leaf=20),
xgb.XGBRegressor(n_estimators=200, max_depth=3, reg_lambda=2.0, reg_alpha=0.0, objective="reg:squarederror"),
neural_network.MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=500, activation="logistic",
solver="adam", tol=1e-3, early_stopping=True, alpha=0.0001)
]
results = evaluate_models(models, y=bw)
generate_report(results)
```
From the second table above, we see that regardless of the machine
learning method, the estimated intercept (the first row of the table)
is near 0 and statistically insignificant. Given our results for the unconditional
ATE above, we should expect this. The estimate of the
slopes are also either near 0, very imprecise, or both. This means
that either the conditional average treatment effect is near 0 or that all
four machine learning methods are very poor proxies for the true
conditional average treatment effect.
### Assisted Delivery
Let’s see what we get when we look at assisted delivery.
```
ad = df.loc[X.index, "good_assisted_delivery"]  # alternative outcome: "midwife_birth"
results_ad = evaluate_models(models, y=ad)
generate_report(results_ad)
```
Now, the results are more encouraging. For all four machine learning
methods, the slope estimate is positive and statistically
significant. From this, we can conclude that the true conditional
average treatment effect must vary with at least some covariates, and
the machine learning proxies are at least somewhat correlated with the
true conditional average treatment effect.
### Covariate Means by Group
Once we’ve detected heterogeneity in the grouped average treatment effects
of using medical professionals for assisted delivery, it’s interesting to see
how effects vary across groups. This could help
us understand why the treatment effect varies or how to
develop simple rules for targeting future treatments.
```
df2 = df.loc[X.index, :]
df2["edu99"] = df2.edu == 99
df2["educ"] = df2["edu"]
df2.loc[df2["edu99"], "educ"] = np.nan
variables = [
"log_xp_percap","agecat","educ","tv","goat",
"cow","motorbike","hh_cook_wood","pkh_ever"
]
def cov_mean_by_group(y, res, cluster_id):
n_group = res["gate"].shape[1]
gate = res["gate"].copy()
gate_se = gate.copy()
dat = y.to_frame()
for i in range(res["cate"].shape[1]):
S = res["cate"][:, i]
cutoffs = np.quantile(S, np.linspace(0, 1, n_group+1))
cutoffs[-1] += 1
for k in range(n_group):
dat[f"G{k}"] = ((cutoffs[k] <= S) & (S < cutoffs[k+1])) * 1.0
g_form = "y ~ -1 + " + " + ".join([f"G{k}" for k in range(n_group)])
g_reg = smf.ols(g_form, data=dat.astype(float))
g_fit = g_reg.fit()
gate[i, :] = g_fit.params.filter(regex="G").values
rows = ~y.isna()
gate_se[i, :] = get_treatment_se(g_fit, cluster_id, rows)
out = pd.DataFrame(dict(
mean=np.nanmedian(gate, axis=0),
se=np.nanmedian(gate_se, axis=0),
group=list(range(n_group))
))
return out
def compute_group_means_for_results(results):
to_cat = []
for res in results:
for v in variables:
to_cat.append(
cov_mean_by_group(df2[v], res, loc_id)
.assign(method=res["name"], variable=v)
)
group_means = pd.concat(to_cat, ignore_index=True)
group_means["plus2sd"] = group_means.eval("mean + 1.96*se")
group_means["minus2sd"] = group_means.eval("mean - 1.96*se")
return group_means
group_means_ad = compute_group_means_for_results(results_ad)
g = sns.FacetGrid(group_means_ad, col="variable", col_wrap=3, hue="method", sharey=False)
g.map(plt.plot, "group", "mean")
g.map(plt.plot, "group", "plus2sd", ls="--")
g.map(plt.plot, "group", "minus2sd", ls="--")
g.add_legend();
```
From this, we see that the mothers predicted to be most affected by the
treatment are less educated, less likely to own a TV or
motorbike, and more likely to participate in the program.
If we wanted to maximize the program impact with a limited budget, targeting the program towards
less educated and less wealthy mothers could be a good idea. The existing financial incentive
already does this to some extent. As one might expect, a fixed-size
cash incentive has a bigger behavioral impact on less wealthy
individuals. If we want to further target these individuals, we
could alter eligibility rules and/or increase the cash transfer
for those with lower wealth.
### Caution
When exploring treatment heterogeneity like above, we need to
interpret our results carefully. In particular, looking at grouped
treatment effects and covariate means conditional on group leads to
many hypothesis tests (although we never stated null hypotheses or
reported p-values, the inevitable eye-balling of differences in the
above graphs compared to their confidence intervals has the same
issues as formal hypothesis tests). When we perform many hypothesis
tests, we will likely stumble upon some statistically
significant differences by chance. Therefore, writing about a single large difference found in the
above analysis as though it is our main
finding would be misleading (and perhaps unethical). The correct thing to do is to present all
results that we have looked at. See [this excellent news article](https://slate.com/technology/2013/07/statistics-and-psychology-multiple-comparisons-give-spurious-results.html)
by statistician Andrew Gelman for more information.
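The multiple-testing problem is easy to demonstrate in a simulation: the sketch below runs many t-tests on data where the true difference is always zero, and roughly 5% of them come out "significant" at the 5% level anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n = 100, 50

# 100 two-sample t-tests where the true difference is always zero
false_positives = 0
for _ in range(n_tests):
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    _, p = stats.ttest_ind(a, b)
    false_positives += p < 0.05
print(false_positives)   # roughly 5 of the 100 null tests look "significant"
```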
## Causal Trees and Forests
[[hetAI16]](#het-athey2016b) develop the idea of “causal trees.” The purpose and
method are qualitatively similar to the grouped average treatment
effects. The main difference is that the groups in [[hetAI16]](#het-athey2016b)
are determined by a low-depth regression tree instead of by quantiles
of a noisy estimate of the conditional average treatment effect. As
above, sample-splitting is used to facilitate inference.
Causal trees share many downsides of regression trees. In
particular, the branches of the tree and subsequent results can be
sensitive to small changes in the data. [[hetWA18]](#het-wager2018) develop a
causal forest estimator to address this concern. This causal forest
estimates $ E[y_i(1) - y_i(0) |X_i=x] $ directly. Unlike most
machine learning estimators, [[hetWA18]](#het-wager2018) prove that causal
forests are consistent and pointwise asymptotically normal, albeit
with a slower than $ \sqrt{n} $ rate of convergence. In practice,
this means that the sample size must be very large (and/or $ x $
relatively low-dimensional) to get precise estimates.
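The sample-splitting idea behind honest causal trees can be illustrated with ordinary scikit-learn tools. This is a simplified sketch on simulated data, not the actual algorithm of [[hetAI16]](#het-athey2016b) or the `grf`/`econml` implementations: one half of the sample grows a shallow tree on a transformed-outcome proxy for the CATE, and the held-out half estimates the treatment effect within each leaf.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
n = 50_000
x = rng.normal(size=(n, 2))
d = rng.integers(0, 2, size=n)
tau = np.where(x[:, 0] > 0, 3.0, 0.0)   # true CATE depends only on x[:, 0]
y = x[:, 1] + tau * d + rng.normal(size=n)

# Honest splitting: one half grows the tree, the other half estimates effects
fit_half = rng.random(n) < 0.5
p = d[fit_half].mean()
# Transformed outcome: E[y * (d - p) / (p * (1 - p)) | x] equals the CATE
proxy = y[fit_half] * (d[fit_half] - p) / (p * (1 - p))
tree = DecisionTreeRegressor(max_depth=1).fit(x[fit_half], proxy)

# Estimate the effect within each leaf using only the held-out half
est = ~fit_half
leaves = tree.apply(x[est])
effects = []
for leaf in np.unique(leaves):
    m = leaves == leaf
    effect = (y[est][m & (d[est] == 1)].mean()
              - y[est][m & (d[est] == 0)].mean())
    effects.append(float(effect))
    print(int(leaf), round(float(effect), 2))
```

Because the leaf membership is decided by one half of the data and the effects are estimated on the other, the usual difference-in-means inference within each leaf remains valid.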
## References
<a id='het-alatas2011'></a>
\[hetACE+11\] Vivi Alatas, Nur Cahyadi, Elisabeth Ekasari, Sarah Harmoun, Budi Hidayat, Edgar Janz, Jon Jellema, H Tuhiman, and M Wai-Poi. Program keluarga harapan : impact evaluation of indonesia’s pilot household conditional cash transfer program. Technical Report, World Bank, 2011. URL: [http://documents.worldbank.org/curated/en/589171468266179965/Program-Keluarga-Harapan-impact-evaluation-of-Indonesias-Pilot-Household-Conditional-Cash-Transfer-Program](http://documents.worldbank.org/curated/en/589171468266179965/Program-Keluarga-Harapan-impact-evaluation-of-Indonesias-Pilot-Household-Conditional-Cash-Transfer-Program).
<a id='het-athey2016b'></a>
\[hetAI16\] Susan Athey and Guido Imbens. Recursive partitioning for heterogeneous causal effects. *Proceedings of the National Academy of Sciences*, 113(27):7353–7360, 2016. URL: [http://www.pnas.org/content/113/27/7353](http://www.pnas.org/content/113/27/7353), [doi:10.1073/pnas.1510489113](https://doi.org/10.1073/pnas.1510489113).
<a id='het-cddf2018'></a>
\[hetCDDFV18\] Victor Chernozhukov, Mert Demirer, Esther Duflo, and Iván Fernández-Val. Generic machine learning inference on heterogenous treatment effects in randomized experiments. Working Paper 24678, National Bureau of Economic Research, June 2018. URL: [http://www.nber.org/papers/w24678](http://www.nber.org/papers/w24678), [doi:10.3386/w24678](https://doi.org/10.3386/w24678).
<a id='het-triyana2016'></a>
\[hetTri16\] Margaret Triyana. Do health care providers respond to demand-side incentives? evidence from indonesia. *American Economic Journal: Economic Policy*, 8(4):255–88, November 2016. URL: [http://www.aeaweb.org/articles?id=10.1257/pol.20140048](http://www.aeaweb.org/articles?id=10.1257/pol.20140048), [doi:10.1257/pol.20140048](https://doi.org/10.1257/pol.20140048).
<a id='het-wager2018'></a>
\[hetWA18\] Stefan Wager and Susan Athey. Estimation and inference of heterogeneous treatment effects using random forests. *Journal of the American Statistical Association*, 0(0):1–15, 2018. URL: [https://doi.org/10.1080/01621459.2017.1319839](https://doi.org/10.1080/01621459.2017.1319839), [doi:10.1080/01621459.2017.1319839](https://doi.org/10.1080/01621459.2017.1319839).
# Construction of Regression Models using Data
Author: Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Jesús Cid Sueiro (jcid@tsc.uc3m.es)
Notebook version: 2.1 (Sep 27, 2019)
Changes:
* v.1.0 - First version. Extracted from regression_intro_knn v.1.0.
* v.1.1 - Compatibility with python 2 and python 3.
* v.2.0 - New notebook generated. Fuses code from Notebooks R1, R2, and R3.
* v.2.1 - Updated index notation.
```
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import numpy as np
import scipy.io # To read matlab files
import pandas as pd # To read data tables from csv files
# For plots and graphical results
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pylab
# For the student tests (only for python 2)
import sys
if sys.version_info.major==2:
from test_helper import Test
# That's default image size for this interactive session
pylab.rcParams['figure.figsize'] = 9, 6
```
## 1. The regression problem
The goal of regression methods is to predict the value of some *target* variable $S$ from the observation of one or more *input* variables $X_0, X_1, \ldots, X_{N-1}$ (that we will collect in a single vector $\bf X$).
Regression problems arise in situations where the value of the target variable is not easily accessible, but we can measure other dependent variables, from which we can try to predict $S$.
<img src="figs/block_diagram.png" width=600>
The only information available to estimate the relation between the inputs and the target is a *dataset* $\mathcal D$ containing several observations of all variables.
$$\mathcal{D} = \{{\bf x}_k, s_k\}_{k=0}^{K-1}$$
The dataset $\mathcal{D}$ must be used to find a function $f$ that, for any observation vector ${\bf x}$, computes an output $\hat{s} = f({\bf x})$ that is a good prediction of the true value of the target, $s$.
<img src="figs/predictor.png" width=300>
Note that for the generation of the regression model, we exploit the statistical dependence between random variable $S$ and random vector ${\bf X}$. In this respect, we can assume that the available dataset $\mathcal{D}$ consists of i.i.d. points from the joint distribution $p_{S,{\bf X}}(s,{\bf x})$. If we had access to the true distribution, a statistical approach would be more accurate; however, in many situations such knowledge is not available, but using training data to do the design is feasible (e.g., relying on historic data, or by manual labelling of a set of patterns).
## 2. Examples of regression problems.
The <a href=http://scikit-learn.org/>scikit-learn</a> package contains several <a href=http://scikit-learn.org/stable/datasets/> datasets</a> related to regression problems.
* <a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston > Boston dataset</a>: the target variable contains housing values in different suburbs of Boston. The goal is to predict these values based on several social, economic and demographic variables taken from these suburbs (you can get more details in the <a href = https://archive.ics.uci.edu/ml/datasets/Housing > UCI repository </a>).
* <a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html#sklearn.datasets.load_diabetes /> Diabetes dataset</a>.
We can load these datasets as follows:
```
from sklearn import datasets
# Load the dataset. Select it by uncommenting the appropriate line
D_all = datasets.load_boston()
#D_all = datasets.load_diabetes()
# Extract data and data parameters.
X = D_all.data          # Input data matrix (one row per observation)
S = D_all.target        # Target variable
n_samples = X.shape[0]  # Number of observations
n_vars = X.shape[1]     # Number of input variables
```
This dataset contains
```
print(n_samples)
```
observations of the target variable and
```
print(n_vars)
```
input variables.
## 3. Scatter plots
### 3.1. 2D scatter plots
When the instances of the dataset are multidimensional, they cannot be visualized directly, but we can get a first rough idea about the regression task if we plot the target variable versus one of the input variables. These representations are known as <i>scatter plots</i>
Python methods `plot` and `scatter` from the `matplotlib` package can be used for these graphical representations.
```
# Scatter plot of the target versus each input variable
nrows = 4
ncols = 1 + (X.shape[1] - 1) // nrows   # integer division: subplot needs an int
# Some adjustment for the subplot.
pylab.subplots_adjust(hspace=0.2)
# Plot all variables
for idx in range(X.shape[1]):
ax = plt.subplot(nrows,ncols,idx+1)
ax.scatter(X[:,idx], S) # <-- This is the key command
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
plt.ylabel('Target')
```
## 4. Evaluating a regression task
In order to evaluate the performance of a given predictor, we need to quantify the quality of predictions. This is usually done by means of a loss function $l(s,\hat{s})$. Two common losses are
- Square error: $l(s, \hat{s}) = (s - \hat{s})^2$
- Absolute error: $l(s, \hat{s}) = |s - \hat{s}|$
Note that both the square and absolute errors are functions of the estimation error $e = s-{\hat s}$. However, this is not necessarily the case. As an example, imagine a situation in which we would like to introduce a penalty which increases with the magnitude of the estimated variable. For such case, the following cost would better fit our needs: $l(s,{\hat s}) = s^2 \left(s-{\hat s}\right)^2$.
```
# In this section we will plot together the square and absolute errors
grid = np.linspace(-3,3,num=100)
plt.plot(grid, grid**2, 'b-', label='Square error')
plt.plot(grid, np.absolute(grid), 'r--', label='Absolute error')
plt.xlabel('Error')
plt.ylabel('Cost')
plt.legend(loc='best')
plt.show()
```
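To make the difference concrete, the snippet below evaluates the square loss and the magnitude-weighted loss $l(s,{\hat s}) = s^2(s-{\hat s})^2$ for the same estimation error at two target magnitudes.

```python
import numpy as np

def square_loss(s, s_hat):
    return (s - s_hat) ** 2

def weighted_loss(s, s_hat):
    # the penalty grows with the magnitude of the target, not just the error
    return s ** 2 * (s - s_hat) ** 2

# the same estimation error (0.5) at two target magnitudes
for s in (1.0, 4.0):
    print(s, square_loss(s, s - 0.5), weighted_loss(s, s - 0.5))
# the square loss is 0.25 in both cases; the weighted loss grows from 0.25 to 4.0
```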
In general, we do not care much about an isolated application of the regression model, but instead, we are looking for a generally good behavior, for which we need to average the loss function over a set of samples. In this notebook, we will use the average of the square loss, to which we will refer as the `mean-square error` (MSE).
$$\text{MSE} = \frac{1}{K}\sum_{k=0}^{K-1} \left(s^{(k)}- {\hat s}^{(k)}\right)^2$$
The following code fragment defines a function to compute the MSE based on the availability of two vectors, one of them containing the predictions of the model, and the other the true target values.
```
# We start by defining a function that calculates the average square error
def square_error(s, s_est):
# Squeeze is used to make sure that s and s_est have the appropriate dimensions.
y = np.mean(np.power((np.squeeze(s) - np.squeeze(s_est)), 2))
return y
```
### 4.1. Training and test data
The major goal of the regression problem is that the predictor should make good predictions for arbitrary new inputs, not taken from the dataset used by the regression algorithm.
Thus, in order to evaluate the prediction accuracy of some regression algorithm, we need some data, not used during the predictor design, to *test* the performance of the predictor under new data. To do so, the original dataset is usually divided in (at least) two disjoint sets:
* **Training set**, $\cal{D}_{\text{train}}$: Used by the regression algorithm to determine predictor $f$.
* **Test set**, $\cal{D}_{\text{test}}$: Used to evaluate the performance of the regression algorithm.
A good regression algorithm uses $\cal{D}_{\text{train}}$ to obtain a predictor with small average loss based on $\cal{D}_{\text{test}}$
$$
{\bar R}_{\text{test}} = \frac{1}{K_{\text{test}}}
\sum_{ ({\bf x},s) \in \mathcal{D}_{\text{test}}} l(s, f({\bf x}))
$$
where $K_{\text{test}}$ is the size of the test set.
As a designer, you only have access to training data. However, for illustration purposes, you may be given a test dataset for many examples in this course. Note that in such a case, using the test data to adjust the regression model is completely forbidden. You should work as if such test data set were not available at all, and recur to it just to assess the performance of the model after the design is complete.
To model the availability of a train/test partition, we next split the Boston dataset into training and test partitions, using 60% and 40% of the data, respectively.
```
from sklearn.model_selection import train_test_split
X_train, X_test, s_train, s_test = train_test_split(X, S, test_size=0.4, random_state=0)
```
### 4.2. A first example: A baseline regression model
A first very simple method to build the regression model is to use the average of all the target values in the training set as the output of the model, discarding the value of the observation input vector.
This approach can be considered as a baseline, given that any other method making an effective use of the observation variables, statistically related to $s$, should improve the performance of this method.
The following code fragment uses the train data to compute the baseline regression model, and it shows the MSE calculated over the test partitions.
```
S_baseline = np.mean(s_train)
print('The baseline estimator is:', S_baseline)
#Compute MSE for the train data
#MSE_train = square_error(s_train, S_baseline)
#Compute MSE for the test data. IMPORTANT: Note that we still use
#S_baseline as the prediction.
MSE_test = square_error(s_test, S_baseline)
#print('The MSE for the training data is:', MSE_train)
print('The MSE for the test data is:', MSE_test)
```
## 5. Parametric and non-parametric regression models
Generally speaking, we can distinguish two approaches when designing a regression model:
- Parametric approach: In this case, the estimation function is given <i>a priori</i> a parametric form, and the goal of the design is to find the most appropriate values of the parameters according to a certain goal
For instance, we could assume a linear expression
$${\hat s} = f({\bf x}) = {\bf w}^\top {\bf x}$$
and adjust the parameter vector in order to minimize the average of the quadratic error over the training data. This is known as least-squares regression, and we will study it in Section 8 of this notebook.
- Non-parametric approach: In this case, the analytical shape of the regression model is not assumed <i>a priori</i>.
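For the parametric case above, the least-squares solution can be sketched directly with NumPy (synthetic data; the notebook develops this properly in Section 8):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
w_true = np.array([1.0, -2.0, 0.5])
s = X @ w_true + 0.1 * rng.normal(size=n)

# Least-squares: w_hat minimizes the average of (s_k - w @ x_k)^2
w_hat, *_ = np.linalg.lstsq(X, s, rcond=None)
print(np.round(w_hat, 2))  # close to w_true
```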
## 6. Non parametric method: Regression with the $k$-nn method
The principles of the $k$-nn method are the following:
- For each point where a prediction is to be made, find the $k$ closest neighbors to that point (in the training set)
- Obtain the estimation averaging the labels corresponding to the selected neighbors
The number of neighbors is a hyperparameter that plays an important role in the performance of the method. You can test its influence by changing $k$ in the following piece of code.
```
from sklearn import neighbors
n_neighbors = 1
knn = neighbors.KNeighborsRegressor(n_neighbors)
knn.fit(X_train, s_train)
s_hat_train = knn.predict(X_train)
s_hat_test = knn.predict(X_test)
print('The MSE for the training data is:', square_error(s_train, s_hat_train))
print('The MSE for the test data is:', square_error(s_test, s_hat_test))
max_k = 25
n_neighbors_list = np.arange(max_k)+1
MSE_train = []
MSE_test = []
for n_neighbors in n_neighbors_list:
knn = neighbors.KNeighborsRegressor(n_neighbors)
knn.fit(X_train, s_train)
s_hat_train = knn.predict(X_train)
s_hat_test = knn.predict(X_test)
MSE_train.append(square_error(s_train, s_hat_train))
MSE_test.append(square_error(s_test, s_hat_test))
plt.plot(n_neighbors_list, MSE_train,'bo', label='Training square error')
plt.plot(n_neighbors_list, MSE_test,'ro', label='Test square error')
plt.xlabel('$k$')
plt.axis('tight')
plt.legend(loc='best')
plt.show()
```
Although the figure above illustrates the evolution of the training and test MSE for different numbers of neighbors, it is important to note that **this figure, and in particular the red points, cannot be used to select the value of this parameter**. Remember that the test data may only be used to assess the final performance of the method, which also means that any parameters inherent to the method must be adjusted using the training data only.
## 7. Hyperparameter selection via cross-validation
An inconvenient of the application of the $k$-nn method is that the selection of $k$ influences the final error of the algorithm. In the previous experiments, we kept the value of $k$ that minimized the square error on the training set. However, we also noticed that the location of the minimum is not necessarily the same from the perspective of the test data. Ideally, we would like that the designed regression model works as well as possible on future unlabeled patterns that are not available during the training phase. This property is known as <i>generalization</i>. Fitting the training data is only pursued in the hope that we are also indirectly obtaining a model that generalizes well. In order to achieve this goal, there are some strategies that try to guarantee a correct generalization of the model. One of such approaches is known as <b>cross-validation</b>
Since using the test labels during the training phase is not allowed (they should be kept aside to simulate the future application of the regression model on unseen patterns), we need some way to estimate the hyperparameter using only training data. Cross-validation allows us to do so via the following steps:
- Split the training data into several (generally non-overlapping) subsets. If we use $M$ subsets, the method is referred to as $M$-fold cross-validation. If we consider each pattern a different subset, the method is usually referred to as leave-one-out (LOO) cross-validation.
- Carry out the training of the system $M$ times. For each run, use a different partition as a <i>validation</i> set, and use the remaining partitions as the training set. Evaluate the performance for different choices of the hyperparameter (i.e., for different values of $k$ for the $k$-NN method).
- Average the validation error over all partitions, and pick the hyperparameter that provided the minimum validation error.
- Rerun the algorithm using all the training data, keeping the value of the parameter that came out of the cross-validation process.
<img src="https://chrisjmccormick.files.wordpress.com/2013/07/10_fold_cv.png">
**Exercise**: Use the `KFold` function from the `sklearn` library to validate parameter `k`. Use a 10-fold validation strategy. What is the best number of neighbors according to this strategy? What is the corresponding MSE averaged over the test data?
```
from sklearn.model_selection import KFold
max_k = 25
n_neighbors_list = np.arange(max_k)+1
MSE_val = np.zeros((max_k,))
nfolds = 10
kf = KFold(n_splits=nfolds)
for train, val in kf.split(X_train):
    for idx, n_neighbors in enumerate(n_neighbors_list):
        knn = neighbors.KNeighborsRegressor(n_neighbors)
        knn.fit(X_train[train,:], s_train[train])
        s_hat_val = knn.predict(X_train[val,:])
        MSE_val[idx] += square_error(s_train[val], s_hat_val)
MSE_val = [el/nfolds for el in MSE_val]
selected_k = np.argmin(MSE_val) + 1
plt.plot(n_neighbors_list, MSE_train,'bo', label='Training square error')
plt.plot(n_neighbors_list, MSE_val,'ro', label='Validation square error')
plt.plot(selected_k, MSE_test[selected_k-1],'gs', label='Test square error')
plt.xlabel('$k$')
plt.axis('tight')
plt.legend(loc='best')
plt.show()
print('Cross-validation selected the following value for the number of neighbors:', selected_k)
print('Test MSE:', MSE_test[selected_k-1])
```
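The hand-written loop above can also be expressed with scikit-learn's built-in hyperparameter search. The following is a self-contained sketch: it uses synthetic data from `make_regression` in place of the notebook's `X_train` and `s_train`, which you would pass instead.

```
# Hedged sketch: the same 10-fold search for k, using GridSearchCV
# instead of a hand-written loop. Synthetic data keeps it runnable.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=200, n_features=4, noise=5.0, random_state=0)

grid = GridSearchCV(KNeighborsRegressor(),
                    param_grid={'n_neighbors': list(range(1, 26))},
                    scoring='neg_mean_squared_error',
                    cv=10)
grid.fit(X, y)
print(grid.best_params_['n_neighbors'])
```

`GridSearchCV` averages the validation score over the 10 folds for each candidate `k` and refits on the full training set with the winner, which is exactly the procedure described in the bullet points above.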
## 8. A parametric regression method: Least squares regression
### 8.1. Problem definition
- The goal is to learn a (possibly non-linear) regression model from a set of $K$ labeled points, $\{{\bf x}_k,s_k\}_{k=0}^{K-1}$.
- We assume a parametric function of the form:
$${\hat s}({\bf x}) = f({\bf x}) = w_0 z_0({\bf x}) + w_1 z_1({\bf x}) + \dots + w_{m-1} z_{m-1}({\bf x})$$
where $z_i({\bf x})$ are particular transformations of the input vector variables.
Some examples are:
- If ${\bf z} = {\bf x}$, the model is just a linear combination of the input variables
- If ${\bf z} = \left[\begin{array}{c}1\\{\bf x}\end{array}\right]$, we have again a linear combination with the inclusion of a constant term.
- For a unidimensional input $x$, ${\bf z} = [1, x, x^2, \dots, x^{m-1}]^\top$ would implement a polynomial of degree $m-1$.
- Note that the variables of ${\bf z}$ could also be computed by combining different variables of ${\bf x}$. E.g., if ${\bf x} = [x_1,x_2]^\top$, a degree-two polynomial would be implemented with
$${\bf z} = \left[\begin{array}{c}1\\x_1\\x_2\\x_1^2\\x_2^2\\x_1 x_2\end{array}\right]$$
- The above expression is not restricted to polynomial models. For instance, we could consider ${\bf z} = [\log(x_1),\log(x_2)]^\top$
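As an illustration, the degree-two transformation shown above can be written as a small helper function (a sketch; the name `poly2_features` is ours):

```
import numpy as np

def poly2_features(x):
    """Map a 2-D input [x1, x2] to the degree-two feature vector z."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

z = poly2_features(np.array([2.0, 3.0]))
print(z)  # [1. 2. 3. 4. 9. 6.]
```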
Least squares (LS) regression finds the coefficients of the model with the aim of minimizing the sum of squared residuals. If we define ${\bf w} = [w_0,w_1,\dots,w_{m-1}]^\top$, the LS solution is defined as
\begin{equation}{\bf w}_{LS} = \arg \min_{\bf w} \sum_{k=0}^{K-1} e_k^2 = \arg \min_{\bf w} \sum_{k=0}^{K-1} \left[s_k - {\hat s}_k \right]^2 \end{equation}
### 8.2. Vector Notation
In order to solve the LS problem it is convenient to define the following vectors and matrices:
- We can group together all available target values to form the following vector
$${\bf s} = \left[s_0, s_1, \dots, s_{K-1} \right]^\top$$
- The estimation of the model for a single input vector ${\bf z}_k$ (which would be computed from ${\bf x}_k$), can be expressed as the following inner product
$${\hat s}_k = {\bf z}_k^\top {\bf w}$$
- If we now group all input vectors into a matrix ${\bf Z}$, so that each row of ${\bf Z}$ contains the transpose of the corresponding ${\bf z}_k$, we can express
$$
\hat{{\bf s}}
= \left[{\hat s}_0, {\hat s}_1, \dots, {\hat s}_{K-1} \right]^\top
= {\bf Z} {\bf w}, \;\;\;\; \text{with} \;\;
{\bf Z} = \left[\begin{array}{c} {\bf z}_0^\top \\
{\bf z}_1^\top \\
\vdots \\
{\bf z}_{K-1}^\top \\
\end{array}\right]$$
### 8.3. Least-squares solution
- Using the previous notation, the cost minimized by the LS model can be expressed as
$$C({\bf w}) = \sum_{k=0}^{K-1} \left[s_k - {\hat s}_k \right]^2 = \|{\bf s} - {\hat{\bf s}}\|^2 = \|{\bf s} - {\bf Z}{\bf w}\|^2$$
- Since the above expression depends quadratically on ${\bf w}$ and is non-negative, we know that there is only one point where the derivative of $C({\bf w})$ becomes zero, and that point is necessarily a minimum of the cost
$$\nabla_{\bf w} \|{\bf s} - {\bf Z}{\bf w}\|^2\Bigg|_{{\bf w} = {\bf w}_{LS}} = {\bf 0}$$
<b>Exercise:</b>
Solve the previous problem to show that
$${\bf w}_{LS} = \left( {\bf Z}^\top{\bf Z} \right)^{-1} {\bf Z}^\top{\bf s}$$
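A quick numerical check of this closed-form solution, as a self-contained sketch with synthetic data. It also shows `np.linalg.lstsq`, which solves the same problem with better numerical behavior than explicitly inverting ${\bf Z}^\top{\bf Z}$:

```
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 3))                          # 50 patterns, 3 features
s = Z @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

# Normal-equation solution: w_LS = (Z^T Z)^{-1} Z^T s
w_ls = np.linalg.inv(Z.T @ Z) @ (Z.T @ s)

# Same problem solved by lstsq (preferable numerically)
w_lstsq, *_ = np.linalg.lstsq(Z, s, rcond=None)

print(np.allclose(w_ls, w_lstsq))  # True
```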
The next fragment of code fits polynomials of increasing degree to randomly generated training data.
```
n_points = 20
n_grid = 200
frec = 3
std_n = 0.2
max_degree = 20
colors = 'brgcmyk'
#Location of the training points
X_tr = (3 * np.random.random((n_points,1)) - 0.5)
#Labels are obtained from a sinusoidal function, and contaminated by noise
S_tr = np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
#Equally spaced points in the X-axis
X_grid = np.linspace(np.min(X_tr),np.max(X_tr),n_grid)
#We start by building the Z matrix
Z = []
for el in X_tr.tolist():
    Z.append([el[0]**k for k in range(max_degree+1)])
Z = np.matrix(Z)
Z_grid = []
for el in X_grid.tolist():
    Z_grid.append([el**k for k in range(max_degree+1)])
Z_grid = np.matrix(Z_grid)
plt.plot(X_tr,S_tr,'b.')
for k in [1, 2, n_points]: # range(max_degree+1):
    Z_iter = Z[:,:k+1]
    # Least squares solution via the normal equations
    #w_LS = (np.linalg.inv(Z_iter.T.dot(Z_iter))).dot(Z_iter.T).dot(S_tr)
    # Least squares solution, with fewer numerical errors
    w_LS, resid, rank, s = np.linalg.lstsq(Z_iter, S_tr, rcond=None)
    # Estimates at all grid points
    fout = Z_grid[:,:k+1].dot(w_LS)
    fout = np.array(fout).flatten()
    plt.plot(X_grid, fout, colors[k%len(colors)]+'-', label='Degree '+str(k))
plt.legend(loc='best')
plt.ylim(1.2*np.min(S_tr), 1.2*np.max(S_tr))
plt.show()
```
It may seem that increasing the degree of the polynomial is always beneficial, as we can implement a more expressive function. A polynomial of degree $M$ includes all polynomials of lower degree as particular cases. However, if we increase the number of parameters without control, the polynomial eventually becomes expressive enough to fit any given set of training points to arbitrary precision, which does not necessarily mean that the resulting model can be extrapolated to new data.
The conclusion is that, when adjusting a parametric model using least squares, we need to validate the model, for which we can use the cross-validation techniques we introduced in Section 7. In this context, validating the model implies:
- Validating the kind of model that will be used, e.g., linear, polynomial, logarithmic, etc ...
- Validating any additional parameters that the model may have, e.g., if selecting a polynomial model, the degree of the polynomial.
The code below shows the performance of different models. However, no validation process is carried out, so the reported test MSEs should not be used as criteria to select the best model.
```
# Linear model with no bias
w_LS, resid, rank, s = np.linalg.lstsq(X_train, s_train, rcond=None)
s_hat_test = X_test.dot(w_LS)
print('Test MSE for linear model without bias:', square_error(s_test, s_hat_test))
# Linear model with bias
Z_train = np.hstack((np.ones((X_train.shape[0],1)), X_train))
Z_test = np.hstack((np.ones((X_test.shape[0],1)), X_test))
w_LS, resid, rank, s = np.linalg.lstsq(Z_train, s_train, rcond=None)
s_hat_test = Z_test.dot(w_LS)
print('Test MSE for linear model with bias:', square_error(s_test, s_hat_test))
# Polynomial model of degree 2
Z_train = np.hstack((np.ones((X_train.shape[0],1)), X_train, X_train**2))
Z_test = np.hstack((np.ones((X_test.shape[0],1)), X_test, X_test**2))
w_LS, resid, rank, s = np.linalg.lstsq(Z_train, s_train, rcond=None)
s_hat_test = Z_test.dot(w_LS)
print('Test MSE for polynomial model (degree 2):', square_error(s_test, s_hat_test))
```
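To choose among these candidate models *without* touching the test set, one could cross-validate the design matrix itself. The following is a self-contained sketch with synthetic quadratic data standing in for the notebook's `X_train` / `s_train`:

```
# Hedged sketch: selecting among the three candidate designs with
# 10-fold cross-validation on the training data only.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
s = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 0]**2 + 0.1 * rng.normal(size=200)

designs = {
    'no bias':  lambda X: X,
    'bias':     lambda X: np.hstack([np.ones((len(X), 1)), X]),
    'degree 2': lambda X: np.hstack([np.ones((len(X), 1)), X, X**2]),
}

cv_mse = {}
for name, phi in designs.items():
    errors = []
    for tr, val in KFold(n_splits=10).split(X):
        Z_tr, Z_val = phi(X[tr]), phi(X[val])
        w, *_ = np.linalg.lstsq(Z_tr, s[tr], rcond=None)
        errors.append(np.mean((s[val] - Z_val @ w)**2))
    cv_mse[name] = np.mean(errors)

best = min(cv_mse, key=cv_mse.get)
print(best)  # 'degree 2' wins on this quadratic data
```

Only after this selection would the winning model be refit on all the training data and scored once on the test set.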
# Web Data Scraping
[Spring 2021 ITSS Mini-Course](https://www.colorado.edu/cartss/programs/interdisciplinary-training-social-sciences-itss/mini-course-web-data-scraping) — ARSC 5040
[Brian C. Keegan, Ph.D.](http://brianckeegan.com/)
[Assistant Professor, Department of Information Science](https://www.colorado.edu/cmci/people/information-science/brian-c-keegan)
University of Colorado Boulder
Copyright and distributed under an [MIT License](https://opensource.org/licenses/MIT)
## Class outline
* **Week 1**: Introduction to Jupyter, browser console, structured data, ethical considerations
* **Week 2**: Scraping HTML with `requests` and `BeautifulSoup`
* **Week 3**: Scraping web data with Selenium
* **Week 4**: Scraping an API with `requests` and `json`, Wikipedia and Reddit
* **Week 5**: Scraping data from Twitter
## Acknowledgements
Thank you also to Professor [Terra McKinnish](https://www.colorado.edu/economics/people/faculty/terra-mckinnish) for coordinating the ITSS seminars.
## Class 5 goals
* Sharing accomplishments and challenges with last week's material
* Using the `twitter` wrapper library to handle authentication
* Retrieving and parsing a single tweet
* Rehydrating a list of tweet IDs
* Pulling a user's timeline
* Pulling a user's friend and follower lists
* Using the search endpoint of the API
* Listen to the streaming API
* Detecting bot accounts using IU's Botometer
Start with our usual suspect packages.
```
# Lets Jupyter Notebook display images in-line
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
# Import our helper libraries
import numpy as np
import pandas as pd
from datetime import datetime, timedelta
import json
import requests
from bs4 import BeautifulSoup
import time
from urllib.parse import quote, unquote
```
We're going to use a library called VADER to help with sentiment analysis of tweets. We need to do some setup first! You should only need to do this step once.
```
import nltk
nltk.download('vader_lexicon')
```
Now try to import.
```
# Import the VADER sentiment analyzer
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# Instantiate the model
sia = SentimentIntensityAnalyzer()
```
## Installing a Twitter API wrapper
As was the case with Reddit, we will take advantage of a wrapper library to handle the heavy lifting of authenticating, making specific requests, handling rate-limiting, *etc*. There is no shortage of Python wrappers for the Twitter API, but the most popular are:
* [twitter](https://github.com/python-twitter-tools/twitter)
* [python-twitter](https://python-twitter.readthedocs.io/en/latest/)
* [Tweepy](http://docs.tweepy.org/en/latest/)
* [Twython](https://twython.readthedocs.io/en/latest/)
There are other wrapper libraries linked from the [Twitter developer utilities documentation](https://developer.twitter.com/en/docs/twitter-api/tools-and-libraries).
I'm going to use `twitter` just because it is very lightweight and replicates the official Twitter API's design.
You will need to install this since it does not come with conda by default. At the Terminal:
`pip install twitter`
Once you've installed it, you can import the `twitter` wrapper library.
```
import twitter
```
## Authenticating
I don't want to share my Twitter credentials with the world, so I load them from my local machine. If you want to do this too, your key file should take this format:
```
{"consumer_key":"API key",
"consumer_secret":"API secret key",
"access_token_key":"Access token",
"access_token_secret":"Access token secret"
}
```
```
# Load my key information from disk
with open('twitter_keys.json','r') as f:
twitter_keys = json.load(f)
# Authenticate with the Twitter API using the twitter_keys dictionary
# tweet_mode='extended' is passed on individual requests to see the full 280 characters in tweets
api = twitter.Twitter(auth=twitter.OAuth(twitter_keys['access_token_key'],
twitter_keys['access_token_secret'],
twitter_keys['consumer_key'],
twitter_keys['consumer_secret']),
)
```
Alternatively, you can pass your keys directly to the `twitter.OAuth` constructor.
```
api = twitter.Twitter(auth=twitter.OAuth('Access token',
                                         'Access token secret',
                                         'API key',
                                         'API secret key'))
```
Test that you can connect to the API. Retrieve *Daily Camera* journalist [@mitchellbyars](https://twitter.com/mitchellbyars)'s account information.
```
api.users.show(screen_name='mitchellbyars')
```
We can also retrieve his most recent tweets. Obnoxiously, you also need to add a parameter `tweet_mode='extended'` ([docs](https://developer.twitter.com/en/docs/twitter-api/tweets/timelines/migrate/standard-to-twitter-api-v2)) to get the full 280 characters of text.
```
api.statuses.user_timeline(screen_name='mitchellbyars',
count=5,
tweet_mode='extended')
```
## Getting the payload of a single tweet
Wikipedia helpfully maintains a [List of most-retweeted tweets](https://en.wikipedia.org/wiki/List_of_most-retweeted_tweets). Go to one of the tweets and pull out the ID at the end of the URL.
```
tweet = api.statuses.show(_id='849813577770778624')
tweet
```
Access the attributes of this dictionary.
```
# When was the tweet created
tweet['created_at']
# Number of favorites (at the time of the API call)
tweet['favorite_count']
# Number of retweets (at the time of the API call)
tweet['retweet_count']
# Text of the tweet
tweet['text']
# Location (if it geo-located)
tweet['geo']
# List of hashtags present
tweet['entities']['hashtags']
# Tweet ID
tweet['id']
# A guess at the language of the tweet
tweet['lang']
```
These next two attributes return `User` and `Media` objects rather than simple strings, ints, *etc*. that have their own attributes and methods.
```
tweet['user']
```
We can access attributes of this `User` object.
```
# Screen name of the user
tweet['user']['screen_name']
# Displayed name of the user
tweet['user']['name']
# User biography
tweet['user']['description']
# Account creation time
tweet['user']['created_at']
# Self-reported location
tweet['user']['location']
# Number of tweets from the user
tweet['user']['statuses_count']
# Number of followers
tweet['user']['followers_count']
# Number of friends (accounts this account follows, followees, etc.)
tweet['user']['friends_count']
```
Similarly, the `Media` object inside this list contains information about the type and the URLs of the media inside this object. If there were multiple images in this tweet, there would be a `Media` item in the list for each of them.
```
tweet['entities']['media'][0]['media_url']
```
## Rehydrating a list of tweets
Twitter's Terms of Service do not allow datasets of statuses to be shared, but researchers are permitted to share the identifiers for tweets in their datasets. Researchers then need to "rehydrate" these statuses by requesting the full payloads from Twitter's API. A list of resources with links to tweet IDs used in research:
* [DocNow's Tweet ID Datasets](https://www.docnow.io/catalog/)
* [FollowTheHashtag's Free Twitter Datasets](http://followthehashtag.com/datasets/)
* [AcademicTorrents](http://academictorrents.com/browse.php?search=twitter)
* [FiveThirtyEight's Russian Troll Tweets](https://github.com/fivethirtyeight/russian-troll-tweets/)
* [Harvard Dataverse](https://dataverse.harvard.edu/dataverse/harvard?q=twitter&types=datasets&sort=score&order=desc&page=1)
This has some privacy benefits: Twitter's [compliance statement](https://developer.twitter.com/en/docs/twitter-api/enterprise/compliance-firehose-api/overview) describes that users should retain the option to delete tweets or their accounts and this rehydration arrangement—theoretically—prevents their tweet content from circulating without their consent. In practice, many of the largest Twitter corpora come from the streaming API (more on that later in this notebook) and Twitter has a "[compliance stream](https://developer.twitter.com/en/docs/tweets/compliance/api-reference/compliance-firehose)" that indicates that a user has deleted a tweet, protected their account, Twitter has suspended an account, Twitter has withheld the status, *etc*. and the tweet should be removed from your streaming dataset as well. The Sunlight Foundation and ProPublica maintain a list of deleted tweets from politicians called [Politiwoops](https://projects.propublica.org/politwoops/).
I am going to use a [list of tweets made by Senators in the 115th Congress](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/UIVHQR) collected by Justin Littman in 2017. Load the text file:
```
with open('senators-115.txt','r') as f:
senators_tweet_ids = [tweet_id.strip() for tweet_id in f.readlines()]
"There are {0:,} tweets IDs in the file.".format(len(senators_tweet_ids))
```
Look at the first 10 statuses.
```
senators_tweet_ids[:10]
```
Use the `statuses.lookup` API endpoint, which accepts a string containing up to 100 comma-separated tweet IDs. We will use the "`map=True`" parameter to keep track of any tweets that were not returned (which should be `None`s rather than `Status`es).
```
senators_10_tweets = api.statuses.lookup(_id=','.join(senators_tweet_ids[:10]),
map=True,
tweet_mode='extended'
)
```
Inspect.
```
list(senators_10_tweets['id'].values())[0]
```
Now for a bit of accounting on rate limits. According to the API documentation for [get statuses/lookup](https://developer.twitter.com/en/docs/tweets/post-and-engage/api-reference/get-statuses-lookup), you can ask for up to 100 tweets per request and make 900 requests per 15-minute window. This means you can theoretically rehydrate 90,000 tweets per 15 minutes, or 360,000 tweets per hour, so it would take approximately 90 minutes to rehydrate all 500,000 of these senators' tweets.
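Given those limits, here is a sketch of how one might batch the full ID list into groups of 100 for rehydration. The `chunks` helper is ours; the commented-out calls assume the `api` object and `senators_tweet_ids` list defined above.

```
import time

def chunks(ids, size=100):
    """Yield successive batches of at most `size` tweet IDs."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

# Hypothetical usage with the api object and ID list from above:
# hydrated = {}
# for batch in chunks(senators_tweet_ids):
#     hydrated.update(api.statuses.lookup(_id=','.join(batch), map=True,
#                                         tweet_mode='extended')['id'])
#     time.sleep(1)  # 900 requests per window; pace conservatively

print(len(list(chunks(list(range(250))))))  # 3 batches: 100 + 100 + 50
```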
Access the field with the rate limit status for looking up statuses.
```
api.application.rate_limit_status()['resources']['statuses']['/statuses/lookup']
```
Parse out when the API limit will reset.
```
# Access the reset field
reset_datetime = api.application.rate_limit_status()['resources']['statuses']['/statuses/lookup']['reset']
# Convert from UNIX time to something interpretable
print(datetime.fromtimestamp(reset_datetime))
```
There should be 10 `Status` objects returned by the API.
```
len(senators_10_tweets['id'])
```
Write a loop to go through the ten tweets and print out information.
```
for status in senators_10_tweets['id'].values():
    screen_name = status['user']['screen_name']
    created = pd.to_datetime(status['created_at'])
    text = status['full_text']
    retweets = status['retweet_count']
    formatted_str = '{0} on {1} said {2}, which received {3} retweets.\n'
    print(formatted_str.format(screen_name, created, text, retweets))
```
Alternatively, extract the relevant fields, save these as a list of dictionaries, and convert the list of dictionaries to a DataFrame. Note that when an account retweets another account, a second `Status` object is embedded under the "`.retweeted_status`" attribute that contains the parent tweet's information. In these cases, the "`.created_at`" from the Senator's account is when s/he retweeted the status and the "`.created_at`" for the "`.retweeted_status`" is when the parent tweet was first posted.
```
list(senators_10_tweets['id'].values())[1]

# Make an empty list to store the data after we process it below
statuses_list = []

# Loop through each of the status objects
for status in senators_10_tweets['id'].values():
    # Check to make sure the status is not empty/None
    if status != None:
        # Create an empty dictionary to store relevant fields
        payload = {}
        payload['id'] = status['id_str']
        payload['screen_name'] = status['user']['screen_name']
        payload['created'] = pd.to_datetime(status['created_at'])
        payload['retweets'] = status['retweet_count']
        payload['favorites'] = status['favorite_count']
        # If an account retweets another account, we should store that information
        if 'retweeted_status' in status:
            payload['text'] = status['retweeted_status']['full_text']
            payload['retweeted'] = True
            payload['retweeted_screen_name'] = status['retweeted_status']['user']['screen_name']
            payload['retweeted_created'] = status['retweeted_status']['created_at']
        # If there is no retweeted_status then it's not a retweet
        else:
            payload['text'] = status['full_text']
            payload['retweeted'] = False
            payload['retweeted_screen_name'] = False
            payload['retweeted_created'] = False
        # Store the payload dictionary in our list
        statuses_list.append(payload)

# Convert to a DataFrame
df = pd.DataFrame(statuses_list)

# Inspect
df.head()
```
## Pulling a user's timeline
In general, if you want to retrieve the tweets from a user's timeline, we can use Twitter's API to get the 3,200 most recent tweets from the [get statuses/user_timeline](https://developer.twitter.com/en/docs/tweets/timelines/api-reference/get-statuses-user_timeline.html) API endpoint. This will include retweets of other statuses. We can retrieve up to 200 tweets per request and can make 900 requests per 15-minute window, so we can get 18,000 tweets per window or 72,000 tweets per hour. This means we could theoretically get up to 22 users' most recent 3,200 tweets per hour.
Alexandria Ocasio-Cortez has written 6,975 tweets on her personal account, "[AOC](https://twitter.com/aoc)". She also has an official account, "[RepAOC](https://twitter.com/repaoc)", but it has only 14 tweets. Let's get her 3,200 most recent tweets from the API. Disappointingly, the `twitter` wrapper does not handle the "pagination" for us, so we can only ask for 200 tweets at a time and have to keep track of where to ask for the next batch.
```
aoc_tweets = api.statuses.user_timeline(screen_name='aoc',
count=200,
tweet_mode='extended')
print("{0:,} tweets were returned.".format(len(aoc_tweets)))
```
The first tweet returned is the most recent tweet.
```
aoc_tweets[0]['created_at']
```
The last tweet returned (the 200th).
```
aoc_tweets[-1]['created_at']
```
We really care about the final tweet's ID so we can make an API query that asks for the next 200 statuses before the last tweet returned.
```
aoc_tweets[-1]['id']
```
Now we make the next query.
```
aoc_tweets2 = api.statuses.user_timeline(screen_name='aoc',
count=200,
tweet_mode='extended',
include_rts=True,
max_id=aoc_tweets[-1]['id'])
```
Check the first tweet in here.
```
aoc_tweets2[0]['id']
```
Compare to the last tweet in the first set (`aoc_tweets`).
```
aoc_tweets[-1]['id']
```
Let's also check in on how much of our API rate limit we've used.
```
api.application.rate_limit_status()['resources']['statuses']['/statuses/user_timeline']
```
Now we'll write a loop to get all 3,200 tweets.
```
# Start with the list of the 200 most-recent tweets
aoc_timeline_tweets = api.statuses.user_timeline(screen_name='aoc',
count=200,
tweet_mode='extended',
include_rts=True)
# Initialize a counter so we don't go overboard with our requests
request_counter = 1
# 3,200 tweets / 200 per request = 16 requests needed; try up to 20 to be safe
while request_counter < 20:
    # Get the oldest tweet ID returned so far
    final_status_id = aoc_timeline_tweets[-1]['id']
    # Pass this tweet ID into the max_id parameter, minus 1 so we don't duplicate it
    aoc_timeline_tweets += api.statuses.user_timeline(screen_name='aoc',
                                                      count=200,
                                                      tweet_mode='extended',
                                                      include_rts=True,
                                                      max_id=final_status_id-1)
    # Increment our request_counter
    request_counter += 1
```
I just used a chunk of my API rate limit.
```
api.application.rate_limit_status()['resources']['statuses']['/statuses/user_timeline']
```
Somehow a few more tweets snuck in there.
```
len(aoc_timeline_tweets)
```
We can also abstract this into a function we could use for anyone's timeline.
```
def get_all_user_timeline(screen_name, count=200, include_rts=True, exclude_replies=False):
    # Start with the list of the 200 most-recent tweets
    timeline_tweets = api.statuses.user_timeline(screen_name = screen_name,
                                                 count = count,
                                                 tweet_mode = 'extended',
                                                 include_rts = include_rts,
                                                 exclude_replies = exclude_replies
                                                 )
    # Initialize a counter so we don't go overboard with our requests
    request_counter = 1
    # 16 requests cover 3,200 tweets; try up to 20 to be safe
    while request_counter < 20:
        # Get the oldest tweet ID returned so far
        final_status_id = timeline_tweets[-1]['id']
        # Pass this tweet ID into the max_id parameter, minus 1 so we don't duplicate it
        timeline_tweets += api.statuses.user_timeline(screen_name = screen_name,
                                                      count = count,
                                                      tweet_mode = 'extended',
                                                      include_rts = include_rts,
                                                      exclude_replies = exclude_replies,
                                                      max_id = final_status_id-1)
        # Increment our request_counter
        request_counter += 1
    return timeline_tweets
```
Test this on [@joebiden](https://twitter.com/joebiden).
```
biden_tweets = get_all_user_timeline('joebiden')
len(biden_tweets)
```
I've added a lot more sugar into our loop to grab information about replies, user mentions, and hashtags.
```
# Make an empty list to store the data after we process it below
statuses_list = []

# Loop through each of the status objects
for status in biden_tweets:
    # Check to make sure the status is not empty/None
    if status != None:
        # Create an empty dictionary to store relevant fields
        payload = {}
        payload['id'] = status['id_str']
        payload['screen_name'] = status['user']['screen_name']
        payload['created'] = pd.to_datetime(status['created_at'])
        payload['retweets'] = status['retweet_count']
        payload['favorites'] = status['favorite_count']
        payload['reply_screen_name'] = status['in_reply_to_screen_name']
        payload['reply_id'] = status['in_reply_to_status_id']
        payload['source'] = BeautifulSoup(status['source'], 'html.parser').text
        if len(status['entities']['user_mentions']) > 0:
            payload['user_mentions'] = '; '.join([m['screen_name'] for m in status['entities']['user_mentions']])
        else:
            payload['user_mentions'] = None
        if len(status['entities']['hashtags']) > 0:
            payload['hashtags'] = '; '.join([h['text'] for h in status['entities']['hashtags']])
        else:
            payload['hashtags'] = None
        # If an account retweets another account, we should store that information
        if 'retweeted_status' in status:
            payload['text'] = status['retweeted_status']['full_text']
            payload['retweeted'] = True
            payload['retweeted_screen_name'] = status['retweeted_status']['user']['screen_name']
            payload['retweeted_created'] = status['retweeted_status']['created_at']
            payload['retweeted_source'] = BeautifulSoup(status['retweeted_status']['source'], 'html.parser').text
            if len(status['retweeted_status']['entities']['hashtags']) > 0:
                payload['hashtags'] = '; '.join([h['text'] for h in status['retweeted_status']['entities']['hashtags']])
            else:
                payload['hashtags'] = None
        # If there is no retweeted_status then it's not a retweet
        else:
            payload['text'] = status['full_text']
            payload['retweeted'] = False
            payload['retweeted_screen_name'] = False
            payload['retweeted_created'] = False
        # Store the payload dictionary in our list
        statuses_list.append(payload)

# Convert to a DataFrame
df = pd.DataFrame(statuses_list)

# Inspect
df.head()
```
Convert the "created" column into a proper `datetime` object and extract the dates as another column.
```
df['timestamp'] = pd.to_datetime(df['created'])
df['date'] = df['timestamp'].apply(lambda x:x.date())
df['weekday'] = df['timestamp'].apply(lambda x: x.weekday())
df['hour'] = df['timestamp'].apply(lambda x:x.hour)
```
Make a plot of the number of tweets by date.
```
# Group by date and aggregate by number of tweets on that date
_s = df.groupby(pd.Grouper(key='timestamp',freq='1D')).agg({'id':len,'retweeted':'sum','reply_id':lambda x:sum(x.notnull())})
# Reindex the data to be continuous over the range, fill in missing dates as 0s
_s.columns = ['Tweets','Retweets','Replies']
_s_frac = _s[['Retweets','Replies']].div(_s['Tweets'],axis=0).fillna(0)
# Make the plot
f,axs = plt.subplots(2,1,figsize=(8,6),sharex=True)
_s['Tweets'].rolling(3).mean().plot(ax=axs[0])
_s_frac.rolling(3).mean().plot(ax=axs[1],legend=False)
axs[0].legend(loc='center left',bbox_to_anchor=(1,.5))
axs[1].legend(loc='center left',bbox_to_anchor=(1,.5))
axs[0].set_ylabel('Count')
axs[1].set_ylabel('Fraction of tweets')
# Annotate the plot with lines corresponding to major events
for ax in axs:
ax.axvline(pd.Timestamp('2020-04-08'),lw=3,c='k',alpha=.25) # Biden cinches
ax.axvline(pd.Timestamp('2020-08-20'),lw=3,c='k',alpha=.25) # DNC speech
ax.axvline(pd.Timestamp('2020-11-03'),lw=3,c='k',alpha=.25) # Election day
ax.axvline(pd.Timestamp('2021-01-21'),lw=3,c='k',alpha=.25) # Swearing in
axs[0].text(x=pd.Timestamp('2020-04-08')+pd.Timedelta(3,'d'),y=47.5,s='Biden\ncinches',va='center')
axs[0].text(x=pd.Timestamp('2020-08-20')+pd.Timedelta(3,'d'),y=47.5,s='DNC\nspeech',va='center')
axs[0].text(x=pd.Timestamp('2020-11-03')+pd.Timedelta(3,'d'),y=47.5,s='Election',va='center')
axs[0].text(x=pd.Timestamp('2021-01-21')+pd.Timedelta(3,'d'),y=47.5,s='Sworn\nin',va='center')
f.tight_layout()
# f.savefig('aoc_activity.png',dpi=300,bbox_inches='tight')
# Group by the date and aggregate by the sum of retweets and favorites for all tweets on that date
_s = df.groupby(pd.Grouper(key='timestamp',freq='1D')).agg({'retweets':'sum','favorites':'sum'})
# Make the plot
f,ax = plt.subplots(1,1,figsize=(8,4))
ax = _s.rolling(3).mean().plot(legend=False,lw=2,ax=ax)
ax.set_ylim((0,7000000))
ax.legend(loc='center left',bbox_to_anchor=(1,.5))
ax.set_title('Daily engagement with @joebiden tweets')
# Annotate the plot with lines corresponding to major events
ax.axvline(pd.Timestamp('2020-04-08'),lw=3,c='k',alpha=.25) # Biden cinches
ax.axvline(pd.Timestamp('2020-08-20'),lw=3,c='k',alpha=.25) # DNC speech
ax.axvline(pd.Timestamp('2020-11-03'),lw=3,c='k',alpha=.25) # Election day
ax.axvline(pd.Timestamp('2021-01-21'),lw=3,c='k',alpha=.25) # Swearing in
_y = 6.5e6
ax.text(x=pd.Timestamp('2020-04-08')+pd.Timedelta(3,'d'),y=_y,s='Biden\ncinches',va='center')
ax.text(x=pd.Timestamp('2020-08-20')+pd.Timedelta(3,'d'),y=_y,s='DNC\nspeech',va='center')
ax.text(x=pd.Timestamp('2020-11-03')+pd.Timedelta(3,'d'),y=_y,s='Election',va='center')
ax.text(x=pd.Timestamp('2021-01-21')+pd.Timedelta(3,'d'),y=_y,s='Sworn\nin',va='center')
f.tight_layout()
# f.savefig('joebiden_engagement.png',dpi=300,bbox_inches='tight')
```
We can also do a bit of sentiment analysis. You'll likely need to [install the NLTK data](https://www.nltk.org/data.html) for this to work. We are going to use the [VADER sentiment analysis tool](https://github.com/cjhutto/vaderSentiment) that was specifically trained for social media text: [see paper here](http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf).
```
# Get sentiment scores for each tweet's text
df['sentiment'] = df['text'].apply(lambda x:sia.polarity_scores(x)['compound'])
```
Plot out the daily sentiment of tweets with major events annotated.
```
# Group by the date and aggregate by the average sentiment for all tweets on that date
_s = df.groupby(pd.Grouper(key='timestamp',freq='1D')).agg({'sentiment':'mean'})
# Make the plot with a 7-day rolling average
f,ax = plt.subplots(1,1,figsize=(8,4))
ax = _s.rolling(3).mean().fillna(method='ffill').plot(legend=False,ax=ax)
ax.set_ylim((-.3,.8))
ax.axhline(0,ls='--',c='k',lw=1)
ax.set_title('Sentiment of @joebiden tweets')
ax.set_ylabel('Compound VADER score')
# Annotate the plot with lines corresponding to major events
ax.axvline(pd.Timestamp('2020-04-08'),lw=3,c='k',alpha=.25) # Biden cinches
ax.axvline(pd.Timestamp('2020-08-20'),lw=3,c='k',alpha=.25) # DNC speech
ax.axvline(pd.Timestamp('2020-11-03'),lw=3,c='k',alpha=.25) # Election day
ax.axvline(pd.Timestamp('2021-01-21'),lw=3,c='k',alpha=.25) # Swearing in
_y = .7
ax.text(x=pd.Timestamp('2020-04-08')+pd.Timedelta(3,'d'),y=_y,s='Biden\ncinches',va='center')
ax.text(x=pd.Timestamp('2020-08-20')+pd.Timedelta(3,'d'),y=_y,s='DNC\nspeech',va='center')
ax.text(x=pd.Timestamp('2020-11-03')+pd.Timedelta(3,'d'),y=_y,s='Election',va='center')
ax.text(x=pd.Timestamp('2021-01-21')+pd.Timedelta(3,'d'),y=_y,s='Sworn\nin',va='center')
f.tight_layout()
# f.savefig('joebiden_sentiment.png',dpi=300,bbox_inches='tight')
```
Compute the engagement for @joebiden tweets, ignoring retweets and replies, and normalizing for total tweet activity on that day.
```
c1 = ~df['retweeted']
c2 = df['reply_id'].isnull()
pure_tweets_df = df[c1 & c2]
print("There are {0:,} tweets that are not retweets or replies.".format(len(pure_tweets_df)))
_s = pure_tweets_df.groupby(pd.Grouper(key='timestamp',freq='1D')).agg({'retweets':'sum','favorites':'sum','id':len})
_s = _s[['retweets','favorites']].div(_s['id'],axis=0)
_s.columns = ['Retweets','Favorites']
# Make the plot
f,ax = plt.subplots(1,1,figsize=(8,4))
ax = _s.rolling(3).mean().plot(legend=False,lw=2,ax=ax)
# ax.set_yscale('symlog')
# ax.set_ylim((1e1,1e6))
ax.legend(loc='center left',bbox_to_anchor=(1,.5))
ax.set_title('Daily engagement with @joebiden tweets')
ax.set_ylabel('Engagement per tweet')
# Annotate the plot with lines corresponding to major events
ax.axvline(pd.Timestamp('2020-04-08'),lw=3,c='k',alpha=.25) # Biden cinches
ax.axvline(pd.Timestamp('2020-08-20'),lw=3,c='k',alpha=.25) # DNC speech
ax.axvline(pd.Timestamp('2020-11-03'),lw=3,c='k',alpha=.25) # Election day
ax.axvline(pd.Timestamp('2021-01-21'),lw=3,c='k',alpha=.25) # Swearing in
_y = 5.5e5
ax.text(x=pd.Timestamp('2020-04-08')+pd.Timedelta(3,'d'),y=_y,s='Biden\ncinches',va='center')
ax.text(x=pd.Timestamp('2020-08-20')+pd.Timedelta(3,'d'),y=_y,s='DNC\nspeech',va='center')
ax.text(x=pd.Timestamp('2020-11-03')+pd.Timedelta(3,'d'),y=_y,s='Election',va='center')
ax.text(x=pd.Timestamp('2021-01-21')+pd.Timedelta(3,'d'),y=_y,s='Sworn\nin',va='center')
f.tight_layout()
# f.savefig('joebiden_engagement_no_rt_reply.png',dpi=300,bbox_inches='tight')
```
Plot retweets per favorite.
```
f,ax = plt.subplots(1,1,figsize=(8,4))
(_s['Retweets']/_s['Favorites']).fillna(0).rolling(3).mean().plot(ax=ax)
ax.set_ylim((0,.5))
# Annotate the plot with lines corresponding to major events
ax.axvline(pd.Timestamp('2020-04-08'),lw=3,c='k',alpha=.25) # Biden cinches
ax.axvline(pd.Timestamp('2020-08-20'),lw=3,c='k',alpha=.25) # DNC speech
ax.axvline(pd.Timestamp('2020-11-03'),lw=3,c='k',alpha=.25) # Election day
ax.axvline(pd.Timestamp('2021-01-21'),lw=3,c='k',alpha=.25) # Swearing in
_y = .45
ax.text(x=pd.Timestamp('2020-04-08')+pd.Timedelta(3,'d'),y=_y,s='Biden\ncinches',va='center')
ax.text(x=pd.Timestamp('2020-08-20')+pd.Timedelta(3,'d'),y=_y,s='DNC\nspeech',va='center')
ax.text(x=pd.Timestamp('2020-11-03')+pd.Timedelta(3,'d'),y=_y,s='Election',va='center')
ax.text(x=pd.Timestamp('2021-01-21')+pd.Timedelta(3,'d'),y=_y,s='Sworn\nin',va='center')
```
What are the top tweets by retweets per favorite? They're primarily from before the election.
```
df['rt_fav_ratio'] = (df['retweets']/df['favorites']).replace({np.inf:np.nan})
top_retweets = df['rt_fav_ratio'].dropna().sort_values(ascending=False).head(10)
df.loc[top_retweets.index,['created','text','retweets','favorites']]
```
Is there an interesting relationship between sentiment and the retweet/favorite ratio? We can fit a simple univariate LOESS regression for the relationship between sentiment and the retweet-per-favorite ratio. It appears that extremely negative and positive tweets have higher ratios than neutral tweets.
```
g = sb.lmplot(x='sentiment',y='rt_fav_ratio',data=df,lowess=True,aspect=2,
line_kws={'color':'red','linewidth':10,'alpha':.5})
ax = g.axes[0,0]
ax.set_ylim((0,.6))
```
## Pulling a user's friends
In the parlance of the Twitter API, the people who follow an account are "followers" and the people followed by an account are "friends". There's unfortunately no timestamp metadata about when friend and follower relationships were created. The API limits here are much more stringent than for other API calls: only 200 accounts per request and only 15 requests per 15-minute window, which works out to roughly 200 accounts per minute, or 3,000 accounts before you hit the rate limit. Our example account has 1,417 friends, so it takes 8 API requests to get them all, leaving 7 requests in this 15-minute window.
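A quick sanity check of that request arithmetic (the counts are just the ones quoted above):

```python
from math import ceil

# Hypothetical sanity check of the request arithmetic quoted above
n_friends = 1417      # friends of the example account
per_request = 200     # friends/list returns at most 200 accounts per call
requests_needed = ceil(n_friends / per_request)
print(requests_needed)  # 8
```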
```
friends = api.friends.list(screen_name='joebiden',count=200,skip_status=True)
print("There are {0:,} friends.".format(len(friends['users'])))
friends['users'][0]
```
We can check my API rate limit status too.
```
api.application.rate_limit_status()['resources']['friends']['/friends/list']
datetime.fromtimestamp(api.application.rate_limit_status()['resources']['friends']['/friends/list']['reset'])
```
I think "friends" convey much more valuable information about an account than followers, primarily because an account doesn't choose who follows it. However, if you wanted to get the followers of an account, we can use the `followers/list` endpoint. I'm only going to grab 200 so I don't burn more API calls.
```
followers = api.followers.list(screen_name='joebiden',skip_status=True,count=200)
api.application.rate_limit_status()['resources']['followers']['/followers/list']
```
We can access these user objects to pull out interesting meta-data.
```
friends['users'][0]['screen_name']
friends['users'][0]['description']
friends['users'][0]['name']
friends['users'][0]['created_at']
friends['users'][0]['statuses_count']
friends['users'][0]['followers_count']
friends['users'][0]['friends_count']
friends['users'][0]['verified']
friends['users'][0]['id']
```
Loop through all the friends of @joebiden and turn it into a DataFrame.
```
friends_payloads = []
for friend in friends['users']:
p = {}
p['name'] = friend['name']
p['description'] = friend['description']
p['screen_name'] = friend['screen_name']
p['created_at'] = friend['created_at']
p['statuses_count'] = friend['statuses_count']
p['followers_count'] = friend['followers_count']
p['friends_count'] = friend['friends_count']
p['verified'] = friend['verified']
p['id'] = friend['id']
friends_payloads.append(p)
friends_df = pd.DataFrame(friends_payloads)
friends_df['created_at'] = pd.to_datetime(friends_df['created_at'])
friends_df['created_at'] = friends_df['created_at'].dt.tz_convert(None)
friends_df['account_age'] = friends_df['created_at'].apply(lambda x:(datetime.now() - x)/pd.Timedelta(1,'d'))
friends_df.head()
```
In this sample of Twitter accounts, are there any interesting trends in verified accounts?
```
f,axs = plt.subplots(1,4,figsize=(16,4),sharey=True)
sb.barplot(x='verified',y='followers_count',data=friends_df,ax=axs[0],estimator=np.mean,errwidth=5)
sb.barplot(x='verified',y='friends_count',data=friends_df,ax=axs[1],estimator=np.mean,errwidth=5)
sb.barplot(x='verified',y='statuses_count',data=friends_df,ax=axs[2],estimator=np.mean,errwidth=5)
sb.barplot(x='verified',y='account_age',data=friends_df,ax=axs[3],estimator=np.mean,errwidth=5)
axs[0].set_title('Followers')
axs[1].set_title('Friends')
axs[2].set_title('Statuses')
axs[3].set_title('Account age (days)')
# As we'll see below, having more than 5,000 friends could complicate our sampling
axs[0].axhline(5000,ls='--',c='k')
axs[1].axhline(5000,ls='--',c='k')
axs[2].axhline(3200,ls='--',c='k')
for ax in axs:
ax.set_ylim((1e0,1e8))
ax.set_yscale('symlog')
ax.set_ylabel(None)
f.tight_layout()
```
Are these differences statistically significant? Let's run some [t-tests](https://en.wikipedia.org/wiki/T-test).
```
from scipy import stats
for var in ['followers_count','friends_count','statuses_count','account_age']:
vals1 = friends_df.loc[friends_df['verified'] == True,var]
vals2 = friends_df.loc[friends_df['verified'] == False,var]
test,pvalue = stats.ttest_ind(vals1,vals2)
str_fmt = "The differences in {0}: t = {1:.2f} \t p={2:.3f}"
print(str_fmt.format(var,test,pvalue))
```
Or use the non-parametric [Mann-Whitney U-test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test) since our data is so skewed.
```
for var in ['followers_count','friends_count','statuses_count','account_age']:
vals1 = friends_df.loc[friends_df['verified'] == True,var]
vals2 = friends_df.loc[friends_df['verified'] == False,var]
test,pvalue = stats.mannwhitneyu(vals1,vals2)
str_fmt = "The differences in {0}: U = {1:,.0f} \t p = {2:,.3f}"
print(str_fmt.format(var,test,pvalue))
```
Unsurprisingly, verified accounts have more followers and are older than non-verified accounts. But they also appear to be more active and have more friends.
### Ego-network
We can make a 1.5-step ego-network of the accounts @joebiden follows and the accounts each of them follow. Using the `friends/list` endpoint is too "expensive": it costs 8 API calls to get a single account's friends since it returns only 200 accounts at a time. Twitter also exposes a [get friends/ids](https://developer.twitter.com/en/docs/accounts-and-users/follow-search-get-users/api-reference/get-friends-ids) end-point that will return up to 5,000 user IDs per request. The limit remains 15 requests per 15-minute window, but we can now get the friend networks for 15 accounts per 15 minutes rather than maybe only 1 or 2. The challenge with this is that we will need to "rehydrate" these user IDs at some point.
Here, we'll use the `count` parameter to limit each request to 5,000 user IDs in case one of these accounts follows many thousands of accounts.
Store the data in a dictionary keyed by user ID, with lists of integer user IDs as values. Initialize with @joebiden.
```
api.users.lookup(screen_name='joebiden')[0]['id']
friends_d = {'939091':api.friends.ids(user_id='939091',count=5000)}
api.application.rate_limit_status()['resources']['friends']['/friends/ids']
datetime.fromtimestamp(1616698614)
```
How many accounts have more than 5,000 friends? About 10% of our network will be incomplete if we limit ourselves to a single "page" of 5,000 user IDs per account.
```
gt5000_friends_df = friends_df[friends_df['friends_count'] > 5000]
print("There are {0:,} accounts with more than 5,000 friends.".format(len(gt5000_friends_df)))
gt5000_friends_df.head()
```
Who are some of these high-friend accounts? Even at 5,000 friends per request, it will still cost you 119 API requests (and thus 119 minutes) to get Barack Obama's 593,000 friends.
```
# Make a list of the high-friend accounts to skip
gt5000_friends_ids = gt5000_friends_df['id'].values.tolist()
```
This loop will go through @joebiden's friend IDs (from `friends_d`) and get up to 5,000 friends' user IDs for each of them. With rate limits of 15 requests per 15 minutes, it will take about 1,417 minutes (23.6 hours) to get our sample of data for the 1,417 friends. You can now start to see the appeal of parallelizing requests!
You probably don't want to run this loop.
```
# Loop through each of @joebiden's friend IDs
for friend_id in friends_d['939091']['ids']:
# Check to make sure the friend ID isn't already in the dictionary and is not a high-friend account
if friend_id not in friends_d.keys() and friend_id not in gt5000_friends_ids:
# Try to get the account's friends
try:
friends_d[friend_id] = api.friends.ids(user_id=friend_id,count=5000)
# If you get a TwitterError, assume its a rate limit problem
except twitter.TwitterError:
# Get the current rate limit status
reset_time = api.application.rate_limit_status()['resources']['friends']['/friends/ids']['reset']
# Wait until the API limit refreshed and add a second for good measure
sleep_time = (datetime.fromtimestamp(reset_time) - datetime.now())/timedelta(seconds=1) + 1
# Print out to make sure
print("At {0}, sleeping for {1} seconds.".format(datetime.now(),sleep_time))
# Sleep until our API limit refreshes
time.sleep(sleep_time)
# Try to get the friend ID again
friends_d[friend_id] = api.friends.ids(user_id=friend_id,count=5000)
# Write the friend IDs out to disk after each friend ID
with open('joebiden_friends_ids.json','w') as f:
json.dump(friends_d,f)
```
Instead, I've done this scraping for you and saved the results in a JSON file.
```
with open('joebiden_friends_ids.json','r') as f:
friends_d = json.load(f)
```
Now we want to make a network of who follows whom.
```
friends_l = []
# Turn the dictionary into an edgelist
for user_id, friend_ids in friends_d.items():
for friend_id in friend_ids['ids']:
friends_l.append((str(user_id),str(friend_id)))
# Turn the list of dictionaries into a DataFrame
friends_gdf = pd.DataFrame(friends_l,columns=['user_id','friend_id'])
# Get the unique user_ids for @joebiden's friends
unique_friend_ids = friends_gdf['user_id'].unique()
# Just keep friends of joebiden in the list
# Throw away friends of friends who aren't direct friends of joebiden
subset_friends_df = friends_gdf[friends_gdf['friend_id'].isin(unique_friend_ids)]
# Print out number of edges remaining
print('Edges before: {:,}'.format(len(friends_gdf)),'\nEdges after: {:,}'.format(len(subset_friends_df)))
# Inspect
subset_friends_df.head()
```
Map the numeric user_id back to screen_name.
```
ids_to_screen_name_map = {str(user['id']):user['screen_name'] for user in friends['users']}
ids_to_screen_name_map['939091'] = 'joebiden'
```
Building on the [shared audience measure](http://faculty.washington.edu/kstarbi/Stewart_Starbird_Drawing_the_Lines_of_Contention-final.pdf) used by Stewart, *et al.* (2017), I computed [Jaccard coefficients](https://en.wikipedia.org/wiki/Jaccard_index) for the friend sets of each account. The intuition here is that if two accounts are friends with all the same accounts, their score would be 1, while if two accounts had no friends in common, their score would be 0. This has the benefit of giving us a numerical weight to otherwise binary friend relationships: friend relations are "stronger" if they are more strongly embedded in a network with other overlapping friend relations and "weaker" if there is less overlap. This requires pair-wise evaluations of $1420 \times 1419 = 2,014,980$ combinations, which takes about 20 minutes on my computer.
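As a toy illustration of the coefficient (hypothetical friend-ID sets, not the real data):

```python
# Toy friend-ID sets (hypothetical, not the real data)
friends_a = {101, 102, 103}
friends_b = {102, 103, 104}

# Jaccard coefficient: shared friends over all distinct friends
jaccard = len(friends_a & friends_b) / len(friends_a | friends_b)
print(jaccard)  # 2 shared / 4 distinct = 0.5
```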
```
jaccard_l = []
for f1 in unique_friend_ids:
for f2 in unique_friend_ids:
if f1 != f2 and int(f1) not in gt5000_friends_ids and int(f2) not in gt5000_friends_ids:
try:
# f1_int = int(f1)
# f2_int = int(f2)
jaccard = len(set(friends_d[f1]['ids']) & set(friends_d[f2]['ids']))/len(set(friends_d[f1]['ids']) | set(friends_d[f2]['ids']))
jaccard_l.append({'user':f1,'friend':f2,'jaccard':jaccard})
except:
print(f1,f2)
pass
friend_jaccard_df = pd.DataFrame(jaccard_l)[['user','friend','jaccard']]
friend_jaccard_df.to_csv('all_friend_jaccard.csv')
friend_jaccard_df.head()
```
We can combine the `subset_friends_df` with `friend_jaccard_df` using pandas's [`merge`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html) command.
```
# Left-join the subset_friends and friend_jaccard DataFrames
friend_el = pd.merge(subset_friends_df,
friend_jaccard_df,
left_on=['user_id','friend_id'],
right_on=['user','friend'],how='left')
# Keep a few columns
friend_el = friend_el[['user','friend','jaccard']]
# Map the user_ids back to screen_names
friend_el['user'] = friend_el['user'].apply(str).map(ids_to_screen_name_map)
friend_el['friend'] = friend_el['friend'].apply(str).map(ids_to_screen_name_map)
# Save to disk
# friend_el.to_csv('joebiden_friends_edgelist.csv')
# Inspect
friend_el.tail()
```
We are going to use the [`networkx`](https://networkx.github.io/documentation/stable/) library (which comes with Anaconda by default) to convert this edgelist into a Graph object.
```
# Import networkx
import networkx as nx
```
This raw network is very dense: there are about 100 times more edges than nodes. A general heuristic for graph visualization is you want the number of nodes and edges to be about the same order of magnitude to prevent [overplotting](https://www.displayr.com/what-is-overplotting/).
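One simple way to thin a dense network before plotting is to drop low-weight edges. A sketch on toy data (the threshold of 0.1 is an arbitrary illustration, not a value from the original analysis):

```python
# Hypothetical weighted edgelist: (user, friend, jaccard weight)
edges = [
    ('a', 'b', 0.45),
    ('a', 'c', 0.02),
    ('b', 'c', 0.30),
    ('c', 'd', 0.01),
]

# Keep only edges whose weight clears an (arbitrary) threshold
threshold = 0.1
sparse_edges = [e for e in edges if e[2] >= threshold]
print(len(sparse_edges))  # 2
```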
```
g_dense = nx.from_pandas_edgelist(friend_el,source='user',target='friend',edge_attr='jaccard',create_using=nx.Graph)
print("There are {0:,} nodes and {1:,} edges.".format(g_dense.number_of_nodes(),g_dense.number_of_edges()))
nx.write_gexf(g_dense,'all_friends.gexf')
```
Visualize. There are better packages for doing this (like Gephi), but let's do something easy.
```
f,ax = plt.subplots(1,1,figsize=(12,12))
pos = nx.layout.rescale_layout_dict(nx.layout.spring_layout(g_dense,iterations=100),1.2)
nx.draw_networkx_nodes(g_dense,pos,node_size=[i*1e3 for i in nx.degree_centrality(g_dense).values()])
nx.draw_networkx_edges(g_dense,pos,width=[d['jaccard']*33 for i,j,d in g_dense.edges(data=True)],alpha=.2)
nx.draw_networkx_labels(g_dense,pos,font_size=8);
```
## Using the streaming API
We can also sit on Twitter's [Streaming API](https://developer.twitter.com/en/docs/tweets/sample-realtime/api-reference/get-statuses-sample) and get a sample of tweets that are produced in real time. The `statuses.sample()` method returns a [generator](https://wiki.python.org/moin/Generators), which is an advanced type of object that doesn't store any data *per se* but points to successive locations where you can find data. In this case, the generator points to where we can find the next tweet in the sample. For 1,000 tweets on a stream sampling approximately 1% of live tweets, this may take a few minutes.
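As a reminder of how generators behave, a minimal stand-in example (not Twitter code):

```python
def fake_stream():
    """A stand-in generator: values are produced only when requested."""
    for i in range(3):
        yield {'id': i, 'text': 'status %d' % i}

gen = fake_stream()
first = next(gen)                   # pulls a single item from the generator
remaining = [s['id'] for s in gen]  # exhausts the rest
print(first['id'], remaining)       # 0 [1, 2]
```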
```
# Make the generator
stream = twitter.TwitterStream(auth=twitter.OAuth(twitter_keys['access_token_key'],
twitter_keys['access_token_secret'],
twitter_keys['consumer_key'],
twitter_keys['consumer_secret']))
tweet_stream = stream.statuses.sample()
# Make an empty list to store the tweet statuses
stream_list = []
# Start iterating through the stream
for status in tweet_stream:
# As long as we have fewer than this many tweets
if len(stream_list) < 1000:
# And if it's not a delete status request
if 'delete' not in status:
# Add another tweet to our list
stream_list.append(status)
# Otherwise stop
else:
break
"There are {0:,} tweets from the stream.".format(len(stream_list))
```
Look at one of our statuses.
```
stream_list[0]
```
We can adapt our previous tweet_cleaner code to turn this JSON data into a DataFrame.
```
def tweet_cleaner(status):
payload = {}
payload['screen_name'] = status['user']['screen_name']
payload['created'] = pd.to_datetime(status['created_at'])
payload['retweets'] = status['retweet_count']
payload['favorites'] = status['favorite_count']
payload['id'] = status['id']
payload['reply_screen_name'] = status['in_reply_to_screen_name']
payload['reply_id'] = status['in_reply_to_status_id']
payload['source'] = BeautifulSoup(status['source']).text
payload['lang'] = status['lang']
if status['place']:
payload['place'] = status['place']['country']
else:
payload['place'] = None
if len(status['entities']['user_mentions']) > 0:
payload['user_mentions'] = '; '.join([m['screen_name'] for m in status['entities']['user_mentions']])
else:
payload['user_mentions'] = None
if len(status['entities']['hashtags']) > 0:
payload['hashtags'] = '; '.join([h['text'] for h in status['entities']['hashtags']])
else:
payload['hashtags'] = None
# If an account retweets another account, we should store that information
if 'retweeted_status' in status:
rt_status = status['retweeted_status']
if 'extended_tweet' in rt_status:
payload['text'] = rt_status['extended_tweet']['full_text']
if len(rt_status['extended_tweet']['entities']['hashtags']) > 0:
payload['hashtags'] = '; '.join([h['text'] for h in rt_status['extended_tweet']['entities']['hashtags']])
else:
payload['hashtags'] = None
else:
try:
payload['text'] = rt_status['text']
except:
payload['text'] = rt_status['full_text']
if len(rt_status['entities']['hashtags']) > 0:
payload['hashtags'] = '; '.join([h['text'] for h in rt_status['entities']['hashtags']])
else:
payload['hashtags'] = None
payload['is_retweet'] = True
payload['retweeted_screen_name'] = rt_status['user']['screen_name']
payload['retweeted_created'] = rt_status['created_at']
payload['retweeted_source'] = BeautifulSoup(rt_status['source']).text
else:
if status['truncated']:
payload['text'] = status['extended_tweet']['full_text']
else:
try:
payload['text'] = status['text']
except:
payload['text'] = status['full_text']
payload['is_retweet'] = False
payload['retweeted_screen_name'] = False
payload['retweeted_created'] = False
payload['retweeted_source'] = False
return payload
```
Loop through our list of dictionaries (including the delete stream objects) and flatten the dictionaries out into something we can read into a DataFrame. Include some exception handling that keeps track of which tweets throw errors and prints the first 50 of those tweets' index positions in the `stream_list` for us to diagnose.
```
stream_statuses_flat = []
errors = []
for i,status in enumerate(stream_list):
try:
payload = tweet_cleaner(status)
stream_statuses_flat.append(payload)
except:
errors.append(str(i))
if len(errors) == 0:
print("There were no errors!")
else:
print("There were errors at the following indices:", '; '.join(errors[:50]))
```
Make our DataFrame, clean up some columns, and make some new ones.
```
stream_df = pd.DataFrame(stream_statuses_flat)
stream_df['created'] = pd.to_datetime(stream_df['created'])
stream_df['created'] = stream_df['created'].dt.tz_convert(None)
stream_df.tail()
```
Where are people writing their tweets in this sample?
```
stream_df['source'].value_counts().head(20)
```
What languages are these tweets in?
```
stream_df['lang'].value_counts().head(10)
```
If a tweet is geolocated, where is it?
```
stream_df['place'].value_counts()
```
How many tweets are retweets?
```
stream_df['is_retweet'].value_counts()
```
How many tweets are replies?
```
stream_df['reply_id'].notnull().value_counts()
```
Which users are getting a lot of retweets right now?
```
stream_df['retweeted_screen_name'].value_counts().head(20)
```
### Filtered streams
We can also filter the tweets in the stream. Here we only get tweets that mention "Biden" and have been auto-classified as written in English.
```
# Make the generator
filtered_stream = stream.statuses.filter(track='Biden',languages='en')
# Make an empty list to store the tweet statuses
filtered_stream_list = []
# What time did we start?
start = time.time()
# Start iterating through the stream
for status in filtered_stream:
# As long as we have fewer than this many tweets
if len(filtered_stream_list) < 1000:
# And if it's not a delete status request
if 'delete' not in status:
# Add another tweet to our list
filtered_stream_list.append(status)
# Otherwise stop
else:
break
# What time did we stop?
stop = time.time()
elapsed = stop - start
"There are {0:,} tweets from the stream after {1:.0f} seconds.".format(len(filtered_stream_list),elapsed)
```
Clean this up into a DataFrame.
```
filtered_stream_statuses_flat = []
filtered_errors = []
for i,status in enumerate(filtered_stream_list):
try:
payload = tweet_cleaner(status)
filtered_stream_statuses_flat.append(payload)
except:
filtered_errors.append(str(i))
if len(filtered_errors) == 0:
print("There were no errors!")
else:
print("There were errors at the following indices:", '; '.join(filtered_errors[:50]))
filtered_stream_df = pd.DataFrame(filtered_stream_statuses_flat)
filtered_stream_df['created'] = pd.to_datetime(filtered_stream_df['created'])
filtered_stream_df['created'] = filtered_stream_df['created'].dt.tz_convert(None)
filtered_stream_df.tail()
```
Let's measure the sentiment of the tweets in this filtered sample and plot the distribution of their sentiment values.
```
# Compute the sentiment scores
filtered_stream_df['sentiment'] = filtered_stream_df['text'].apply(lambda x:sia.polarity_scores(x)['compound'])
# Plot the distribution
filtered_stream_df['sentiment'].plot(kind='hist',bins=20)
```
How many retweets in this sample?
```
filtered_stream_df['is_retweet'].value_counts()
```
Given the higher fraction of retweets, who is being retweeted?
```
filtered_stream_df['retweeted_screen_name'].value_counts().head(10)
```
## Search API
Twitter's [search API](https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets) provides an endpoint to search for tweets matching a query for terms, accounts, hashtags, language, locations, and date ranges. This API endpoint has a rate limit of 180 requests per 15-minute window with 100 statuses per request: or 18,000 statuses per window or 72,000 statuses per hour.
You can explore some of the search functionality through Twitter's [advanced search interface](https://twitter.com/search-advanced). Note that the [standard search API](https://developer.twitter.com/en/docs/tweets/search/overview/standard) only provides limited access to a sample of tweets from the past 7 days; you'll need to pay to access the [historical APIs](https://developer.twitter.com/en/docs/tutorials/choosing-historical-api.html).
```
query = api.search.tweets(q='boulder',
count=100,
lang='en',
result_type='recent',
tweet_mode='extended')
```
Loop through these 100 tweets.
```
search_statuses_flat = []
search_errors = []
for i,status in enumerate(query['statuses']):
try:
payload = tweet_cleaner(status)
search_statuses_flat.append(payload)
except:
search_errors.append(str(i))
if len(search_errors) == 0:
print("There were no errors!")
else:
print("There were errors at the following indices:", '; '.join(search_errors[:50]))
search_df = pd.DataFrame(search_statuses_flat)
search_df['created'] = pd.to_datetime(search_df['created'])
search_df['created'] = search_df['created'].dt.tz_convert(None)
search_df.tail()
```
Write a loop to try to get more tweets. The `query` dictionary includes a sub-dictionary under the "search_metadata" key that includes information about paginating to find the next set of results.
```
search_tweets = []
while True:
# When to stop?
if len(search_tweets) >= 2500:
break
# Get the first set of tweets
if len(search_tweets) == 0:
query = api.search.tweets(q='boulder',
count=100,
lang='en',
result_type='recent',
tweet_mode='extended')
search_tweets += query['statuses']
# Keep getting tweets
else:
# Find the last tweet to use as a max_id
max_id = search_tweets[-1]['id']
# Get the next set of tweets
query = api.search.tweets(q='boulder',
count=100,
lang='en',
result_type='recent',
tweet_mode='extended',
max_id = max_id - 1)
# Add them to the list of tweets
search_tweets += query['statuses']
print("There are {0:,} tweets in the collection.".format(len(search_tweets)))
search_statuses_flat = []
search_errors = []
for i,status in enumerate(search_tweets):
try:
payload = tweet_cleaner(status)
search_statuses_flat.append(payload)
except:
search_errors.append(str(i))
if len(search_errors) == 0:
print("There were no errors!")
else:
print("There were errors at the following indices:", '; '.join(search_errors[:50]))
search_df = pd.DataFrame(search_statuses_flat)
search_df['created'] = pd.to_datetime(search_df['created'])
search_df['created'] = search_df['created'].dt.tz_convert(None)
search_df.tail()
# Compute the sentiment scores
search_df['sentiment'] = search_df['text'].apply(lambda x:sia.polarity_scores(x)['compound'])
# Plot the distribution
search_df['sentiment'].plot(kind='hist',bins=20)
```
Copyright © 2017-2021 ABBYY Production LLC
```
#@title
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Identity recurrent neural network (IRNN)
[Download the tutorial as a Jupyter notebook](https://github.com/neoml-lib/neoml/blob/master/NeoML/docs/en/Python/tutorials/IRNN.ipynb)
In this tutorial, we'll demonstrate that an identity recurrent neural network (IRNN) can efficiently process long temporal sequences, reproducing one of the experiments described in the [Identity RNN article](https://arxiv.org/pdf/1504.00941.pdf).
The experiment tests the IRNN on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, first transforming its 28 x 28 images into 784-pixel-long sequences. The article claims that IRNN can achieve 0.9+ accuracy in these conditions.
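The image-to-sequence transformation can be sketched in plain Python (placeholder pixel values, just to show the shape change):

```python
# A 28 x 28 "image" as nested lists (values are just placeholders)
image = [[row * 28 + col for col in range(28)] for row in range(28)]

# Read the image row by row into one long pixel sequence
sequence = [pixel for row in image for pixel in row]
print(len(sequence))  # 784
```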
The tutorial includes the following steps:
* [Download and prepare the dataset](#Download-and-prepare-the-dataset)
* [Build the network](#Build-the-network)
* [Train the network and evaluate the results](#Train-the-network-and-evaluate-the-results)
## Download and prepare the dataset
We will download the MNIST dataset from scikit-learn.
```
from sklearn.datasets import fetch_openml
X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
```
Now we need to normalize it and convert to 32-bit datatypes for NeoML.
```
import numpy as np
# Normalize
X = (255 - X) * 2 / 255 - 1
# Fix data types
X = X.astype(np.float32)
y = y.astype(np.int32)
```
Finally, we'll split the data into subsets used for training and for testing.
```
# Split into train/test
train_size = 60000
X_train, X_test = X[:train_size], X[train_size:]
y_train, y_test = y[:train_size], y[train_size:]
del X, y
```
## Build the network
### Choose the device
We need to create a math engine that will perform all calculations and allocate data for the neural network. The math engine is tied to the processing device.
In this tutorial we'll use a single-threaded CPU math engine.
```
import neoml
math_engine = neoml.MathEngine.CpuMathEngine(1)
```
### Create the network and connect layers
Create a `neoml.Dnn.Dnn` object that represents a neural network (a directed graph of layers). The network requires a math engine to perform its operations; it must be specified at creation and can't be changed later.
```
dnn = neoml.Dnn.Dnn(math_engine)
```
A `neoml.Dnn.Source` layer feeds the data into the network.
```
data = neoml.Dnn.Source(dnn, 'data') # source for data
```
Now we need to transpose this data into sequences of 784 pixels each. We can do that using the `neoml.Dnn.Transpose` layer, which swaps 2 dimensions of the blob.
Original data will be wrapped into a 2-dimensional blob with `BatchWidth` equal to batch size and `Channels` equal to image size. (We're creating blobs before training the network, [see below](#Train-the-network-and-evaluate-the-results).) This layer will transform it into sequences (`BatchLength`) of image size, where each element of the sequence will be of size `1`.
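In numpy terms, the shape change performed by the transposition can be sketched like this (a sketch only, not NeoML code; the batch size of 32 is a hypothetical value):

```python
import numpy as np

# A hypothetical batch of 32 flattened images: (BatchWidth=32, Channels=784)
batch = np.zeros((32, 784), dtype=np.float32)

# After swapping batch_length <-> channels, each image becomes a
# 784-step sequence whose elements have size 1
as_sequences = batch.T[:, :, np.newaxis]
print(as_sequences.shape)  # (784, 32, 1)
```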
```
transpose = neoml.Dnn.Transpose(data, first_dim='batch_length',
second_dim='channels', name='transpose')
```
We add the `neoml.Dnn.Irnn` layer, connecting its input to the output of the transposition layer.
```
hidden_size = 100
irnn = neoml.Dnn.Irnn(transpose, hidden_size, identity_scale=1.,
input_weight_std=1e-3, name='irnn')
```
But recurrent layers in NeoML usually return whole sequences. To reproduce the experiment, we only need the last element of each. The `neoml.Dnn.SubSequence` layer will help us here.
```
subseq = neoml.Dnn.SubSequence(irnn, start_pos=-1,
length=1, name='subseq')
```
Now we use a fully-connected layer to form logits (non-normalized distribution) over MNIST classes.
```
n_classes = 10
fc = neoml.Dnn.FullyConnected(subseq, n_classes, name='fc')
```
To train the network, we also need to define a loss function to be optimized. In this tutorial we'll be optimizing cross-entropy loss.
A loss function needs to compare the network output with the correct labels, so we'll add another source layer to pass the correct labels in.
```
labels = neoml.Dnn.Source(dnn, 'labels') # Source for labels
loss = neoml.Dnn.CrossEntropyLoss((fc, labels), name='loss')
```
NeoML also provides a `neoml.Dnn.Accuracy` layer to calculate network accuracy. Let's connect this layer and create an additional `neoml.Dnn.Sink` layer for extracting its output.
```
# Auxiliary layers in order to get statistics
accuracy = neoml.Dnn.Accuracy((fc, labels), name='accuracy')
# The accuracy layer writes its result to its output
# We need an additional sink layer to extract it
accuracy_sink = neoml.Dnn.Sink(accuracy, name='accuracy_sink')
```
### Create a solver
A solver is the object that optimizes the weights using gradient values; it is required for training the network. In this sample we'll use a `neoml.Dnn.AdaptiveGradient` solver, which is the NeoML implementation of [Adam](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Adam).
```
lr = 1e-6
# Create solver
dnn.solver = neoml.Dnn.AdaptiveGradient(math_engine, learning_rate=lr,
l1=0., l2=0., # no regularization
max_gradient_norm=1., # clip gradients
moment_decay_rate=0.9,
second_moment_decay_rate=0.999)
```
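The hyperparameters above map directly onto the standard Adam update rule. A minimal NumPy sketch of one Adam step, purely for illustration (NeoML applies this internally; the function and variable names here are our own):

```
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-6,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; beta1/beta2 correspond to
    moment_decay_rate / second_moment_decay_rate above."""
    m = beta1 * m + (1 - beta1) * grad          # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
w, m, v = adam_step(w, np.array([0.5, -0.5]), m, v, t=1)
```

With the tiny learning rate used in this tutorial, each parameter moves by roughly `lr` in the direction opposite its gradient sign.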
## Train the network and evaluate the results
NeoML networks accept data only as `neoml.Blob.Blob`.
Blobs are 7-dimensional arrays located in device memory. Each dimension has a specific purpose:
1. `BatchLength` - temporal axis (used in recurrent layers)
2. `BatchWidth` - classic batch
3. `ListSize` - list axis, used when objects are related to the same entity, but without ordering (unlike `BatchLength`)
4. `Height` - height of the image
5. `Width` - width of the image
6. `Depth` - depth of the 3-dimensional image
7. `Channels` - channels of the image (also used when object is a 1-dimensional vector)
We will use `ndarray` to split data into batches, then create blobs from these batches right before feeding them into the network.
```
def irnn_data_iterator(X, y, batch_size, math_engine):
"""Slices numpy arrays into batches and wraps them in blobs"""
def make_blob(data, math_engine):
"""Wraps numpy data into neoml blob"""
shape = data.shape
if len(shape) == 2: # data
# Wrap 2-D array into blob of (BatchWidth, Channels) shape
return neoml.Blob.asblob(math_engine, data,
(1, shape[0], 1, 1, 1, 1, shape[1]))
elif len(shape) == 1: # dense labels
# Wrap 1-D array into blob of (BatchWidth,) shape
return neoml.Blob.asblob(math_engine, data,
(1, shape[0], 1, 1, 1, 1, 1))
else:
assert(False)
start = 0
data_size = y.shape[0]
while start < data_size:
yield (make_blob(X[start : start+batch_size], math_engine),
make_blob(y[start : start+batch_size], math_engine))
start += batch_size
```
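The slicing half of this iterator can be sanity-checked without NeoML at all — a quick NumPy sketch of how the data is walked in `batch_size` steps (the blob wrapping is NeoML-specific and omitted here):

```
import numpy as np

X = np.random.rand(100, 784).astype(np.float32)  # 100 flattened 28x28 images
y = np.random.randint(0, 10, size=100)

batch_size = 40
batches = []
start = 0
while start < y.shape[0]:
    # The final batch may be smaller than batch_size
    batches.append((X[start:start + batch_size],
                    y[start:start + batch_size]))
    start += batch_size

sizes = [b[1].shape[0] for b in batches]  # 100 samples -> 40, 40, 20
```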
To train the network, call `dnn.learn` with data as its argument.
To run the network without training, call `dnn.run` with data as its argument.
The input data is a `dict` where each key is a `neoml.Dnn.Source` layer name and the corresponding value is the `neoml.Blob.Blob` that should be passed in to this layer.
```
def run_net(X, y, batch_size, dnn, is_train):
"""Runs dnn on given data"""
start = time.time()
total_loss = 0.
run_iter = dnn.learn if is_train else dnn.run
math_engine = dnn.math_engine
layers = dnn.layers
loss = layers['loss']
accuracy = layers['accuracy']
sink = layers['accuracy_sink']
accuracy.reset = True # Reset previous statistics
# Iterate over batches
for X_batch, y_batch in irnn_data_iterator(X, y, batch_size, math_engine):
# Run the network on the batch data
run_iter({'data': X_batch, 'labels': y_batch})
total_loss += loss.last_loss * y_batch.batch_width # Update epoch loss
accuracy.reset = False # Don't reset statistics within one epoch
avg_loss = total_loss / y.shape[0]
avg_acc = sink.get_blob().asarray()[0]
run_time = time.time() - start
return avg_loss, avg_acc, run_time
```
*Note*: Training will take 3-4 hours. You may uncomment the print statements to see the progress.
```
%%time
import time
batch_size = 40
n_epoch = 200
for epoch in range(n_epoch):
# Train
train_loss, train_acc, run_time = run_net(X_train, y_train, batch_size,
dnn, is_train=True)
# print(f'Train #{epoch}\tLoss: {train_loss:.4f}\t'
# f'Accuracy: {train_acc:.4f}\tTime: {run_time:.2f} sec')
# Test
test_loss, test_acc, run_time = run_net(X_test, y_test, batch_size,
dnn, is_train=False)
# print(f'Test #{epoch}\tLoss: {test_loss:.4f}\t'
# f'Accuracy: {test_acc:.4f}\tTime: {run_time:.2f} sec')
print(f'Final test acc: {test_acc:.4f}')
```
As we can see, this model achieves 0.9+ accuracy on these long sequences, confirming the paper's results.
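The IRNN itself is just a vanilla ReLU RNN whose recurrent weight matrix starts as the (scaled) identity, which is what lets gradients survive sequences of length 784. A hedged NumPy sketch of the forward recurrence, with weights initialized the same way as the layer above:

```
import numpy as np

def irnn_forward(x_seq, w_in, w_rec, bias):
    """x_seq: (seq_len, input_size); returns the final hidden state."""
    h = np.zeros(w_rec.shape[0])
    for x_t in x_seq:
        # ReLU recurrence: h_t = max(0, x_t W_in + h_{t-1} W_rec + b)
        h = np.maximum(0.0, x_t @ w_in + h @ w_rec + bias)
    return h

rng = np.random.default_rng(0)
hidden, seq_len = 100, 784
w_in = rng.normal(0.0, 1e-3, size=(1, hidden))  # input_weight_std=1e-3
w_rec = np.eye(hidden)                           # identity_scale=1.0
bias = np.zeros(hidden)

h_last = irnn_forward(rng.random((seq_len, 1)), w_in, w_rec, bias)
```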
| github_jupyter |
# Descriptor Example: Attribute Validation
## LineItem Take #3: A Simple Descriptor
```
class Quantity:
def __init__(self, storage_name):
self.storage_name = storage_name
def __set__(self, instance, value):
if value > 0:
instance.__dict__[self.storage_name] = value
else:
raise ValueError('value must be > 0')
class LineItem:
weight = Quantity('weight')
price = Quantity('price')
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
truffle = LineItem('White truffle', 100, 0)
truffle = LineItem('White truffle', 100, 1)
truffle.__dict__
```
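The first instantiation above raises `ValueError`, since a price of `0` fails validation. A standalone sketch (classes repeated here so the snippet runs on its own) showing both the failure and the success path:

```
class Quantity:
    def __init__(self, storage_name):
        self.storage_name = storage_name
    def __set__(self, instance, value):
        if value > 0:
            instance.__dict__[self.storage_name] = value
        else:
            raise ValueError('value must be > 0')

class LineItem:
    weight = Quantity('weight')
    price = Quantity('price')
    def __init__(self, description, weight, price):
        self.description = description
        self.weight = weight
        self.price = price
    def subtotal(self):
        return self.weight * self.price

try:
    LineItem('White truffle', 100, 0)  # price of 0 is rejected by __set__
except ValueError as exc:
    print(exc)  # value must be > 0

ok = LineItem('White truffle', 100, 1)
print(ok.subtotal())  # 100
```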
## LineItem Take #4: Automatic Storage Attribute Names
```
class Quantity:
__counter = 0
def __init__(self):
cls = self.__class__
prefix = cls.__name__
index = cls.__counter
self.storage_name = '_{}#{}'.format(prefix, index)
cls.__counter += 1
def __get__(self, instance, owner):
return getattr(instance, self.storage_name)
def __set__(self, instance, value):
if value > 0:
setattr(instance, self.storage_name, value)
else:
raise ValueError('value must be > 0')
class LineItem:
weight = Quantity()
price = Quantity()
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
coconuts = LineItem('Brazilian coconut', 20, 17.95)
coconuts.weight, coconuts.price
getattr(coconuts, '_Quantity#0'), getattr(coconuts, '_Quantity#1')
LineItem.weight
class Quantity:
__counter = 0
def __init__(self):
cls = self.__class__
prefix = cls.__name__
index = cls.__counter
self.storage_name = '_{}#{}'.format(prefix, index)
cls.__counter += 1
def __get__(self, instance, owner):
if instance is None:
return self
else:
return getattr(instance, self.storage_name)
def __set__(self, instance, value):
if value > 0:
setattr(instance, self.storage_name, value)
else:
raise ValueError('value must be > 0')
class LineItem:
weight = Quantity()
price = Quantity()
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
LineItem.price
br_nuts = LineItem('Brazil nuts', 10, 34.95)
br_nuts.price
```
## LineItem Take #5: A New Descriptor Type
```
import abc
class AutoStorage:
__counter = 0
def __init__(self):
cls = self.__class__
prefix = cls.__name__
index = cls.__counter
self.storage_name = '_{}#{}'.format(prefix, index)
cls.__counter += 1
def __get__(self, instance, owner):
if instance is None:
return self
else:
return getattr(instance, self.storage_name)
def __set__(self, instance, value):
setattr(instance, self.storage_name, value)
class Validated(abc.ABC, AutoStorage):
def __set__(self, instance, value):
value = self.validate(instance, value)
super().__set__(instance, value)
@abc.abstractmethod
def validate(self, instance, value):
"""return validated value or raise ValueError"""
class Quantity(Validated):
"""a number greater than zero"""
def validate(self, instance, value):
if value <= 0:
raise ValueError('value must be > 0')
return value
class NonBlank(Validated):
"""a string with at least one non-space character"""
def validate(self, instance, value):
value = value.strip()
if len(value) == 0:
raise ValueError('value cannot be empty or blank')
return value
class LineItem:
description = NonBlank()
weight = Quantity()
price = Quantity()
def __init__(self, description, weight, price):
self.description = description
self.weight = weight
self.price = price
def subtotal(self):
return self.weight * self.price
```
# Overriding Versus Nonoverriding Descriptors
```
### auxiliary functions for display only ###
def cls_name(obj_or_cls):
cls = type(obj_or_cls)
if cls is type:
cls = obj_or_cls
return cls.__name__.split('.')[-1]
def display(obj):
cls = type(obj)
if cls is type:
return '<class {}>'.format(obj.__name__)
elif cls in [type(None), int]:
return repr(obj)
else:
return '<{} object>'.format(cls_name(obj))
def print_args(name, *args):
pseudo_args = ', '.join(display(x) for x in args)
print('-> {}.__{}__({})'.format(cls_name(args[0]), name, pseudo_args))
### essential classes for this example ###
class Overriding:
"""a.k.a. data descriptor or enforded descriptor"""
def __get__(self, instance, owner):
print_args('get', self, instance, owner)
def __set__(self, instance, value):
print_args('set', self, instance, value)
class OverridingNoGet:
"""an overriding descriptor without ``__get__``"""
def __set__(self, instance, value):
print_args('set', self, instance, value)
class NonOverriding:
"""a.k.a. non-data or shadowable descriptor"""
def __get__(self, instance, owner):
print_args('get', self, instance, owner)
class Managed:
over = Overriding()
over_no_get = OverridingNoGet()
non_over = NonOverriding()
def spam(self):
print('-> Managed.spam({})'.format(display(self)))
```
## Overriding Descriptor
```
obj = Managed()
obj.over
Managed.over
obj.over = 7
obj.over
obj.__dict__['over'] = 8
vars(obj)
obj.over
```
## Overriding Descriptor Without __get__
```
obj.over_no_get
Managed.over_no_get
obj.over_no_get = 7
obj.over_no_get
obj.__dict__['over_no_get'] = 9
obj.over_no_get
obj.over_no_get = 7
obj.over_no_get
```
## Nonoverriding Descriptor
```
obj = Managed()
obj.non_over
obj.non_over = 7
obj.non_over
Managed.non_over
del obj.non_over
obj.non_over
```
## Overwriting a Descriptor in the Class
```
obj = Managed()
Managed.over = 1
Managed.over_no_get = 2
Managed.non_over = 3
obj.over, obj.over_no_get, obj.non_over
```
# Methods Are Descriptors
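Plain functions implement `__get__`, so they behave as nonoverriding descriptors: accessing one through an instance returns a bound method, while accessing it through the class returns the bare function. A standalone sketch using a minimal `Managed` class:

```
class Managed:
    def spam(self):
        return 42

obj = Managed()

print(Managed.spam)   # the plain function: <function Managed.spam ...>
print(obj.spam)       # a bound method: <bound method Managed.spam ...>

# The binding is just the descriptor protocol at work:
bound = Managed.spam.__get__(obj, Managed)
print(bound() == obj.spam())  # True
```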
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.metrics import mean_squared_error,r2_score
from sklearn.preprocessing import MinMaxScaler
from scipy.stats import iqr
from keras.models import load_model
from keras.models import Sequential
from keras.models import Model
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.layers import Input
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
from keras.layers import Activation
import import_ipynb
import libs
X_train,y_train,X_test,y_test,data_hr1,sc1=libs.get_data(k2=12)
print("X_train",X_train.shape,np.min(X_train),np.max(X_train))
print("y_train",y_train.shape,np.min(y_train),np.max(y_train))
print("X_test",X_test.shape,np.min(X_test),np.max(X_test))
print("y_test",y_train.shape,np.min(y_test),np.max(y_test))
print("data_hr1",data_hr1.shape,np.min(data_hr1),np.max(data_hr1))
X=[]
y=[]
for i in range(1,13):
_,_,X_t,y_t,_,_=libs.get_data(k2=i)
X.append(X_t)
    y.append(y_t)
model=[]
pred=[]
nr=12
for i in range(nr):
model.append(load_model('model/corr_selection/model'+str(i+1)+'.h5'))
pred1= model[i].predict(X[i])
pred1=pred1.reshape(-1,1)
pred1 = sc1.inverse_transform(pred1)
pred1=pred1[:,0]
pred.append(pred1[-24*4:])
pred=np.array(pred)
fig = plt.figure(figsize=(8,1.2*nr))
#ax = fig.add_subplot(1,1,1)
for i in range(nr):
#pred1= model[i].predict(X_test[:,:,:2+(i+1)*2])
predi=pred[i][-24*4:]
#print(pred1.shape)
ax = fig.add_subplot(nr,1,i+1)
    # Ticks are set manually here; a datetime-based locator could be used instead
major_ticks = np.arange(0, len(predi)+1, 24)
minor_ticks = np.arange(0, len(predi)+1, 1)
plt.plot(data_hr1[-len(predi):,0],marker='.',ls='--',c='orchid', label = 'Real')
ax.plot(predi, marker='.',ls='-',c='dodgerblue', label = 'Pred')
#ax.legend()
#ax.set_title('model'+str(i+1))
ax.set_xticks(major_ticks)
ax.set_xticks(minor_ticks, minor=True)
ax.grid(which='both')
ax.grid(which='minor', alpha=0.3)
ax.grid(which='major', alpha=1.0,linewidth=1.8,color='orange',axis='x')
if i!=nr-1: ax.xaxis.set_ticklabels([])
ax.set_xlim([-0.5, 96.5])
plt.savefig('result.png')
plt.show()
fig = plt.figure(figsize=(20,4))
#ax = fig.add_subplot(1,1,1)
#pred1= model[i].predict(X_test[:,:,:2+(i+1)*2])
#print(pred1.shape)
#ax = fig.add_subplot(1,1,1)
# Ticks are set manually here; a datetime-based locator could be used instead
major_ticks = np.arange(0, len(predi)+1, 24)
minor_ticks = np.arange(0, len(predi)+1, 1)
for i in range(nr):
predi=pred[i]
plt.plot(predi, label = 'Pred')
plt.plot(data_hr1[-len(predi):,0],marker='o',ls='--',c='r', label = 'Real')
plt.legend()
#plt.set_title('model'+str(i+1))
#plt.set_xticks(major_ticks)
#plt.set_xticks(minor_ticks, minor=True)
plt.grid(which='both')
plt.grid(which='minor', alpha=0.3)
plt.grid(which='major', alpha=1.0,linewidth=1.8,color='r',axis='x')
plt.show()
from matplotlib.font_manager import FontProperties
fontP = FontProperties()
fontP.set_size('small')
#rmse/max
y_true=sc1.inverse_transform(y_test[:,:,0]).flatten()[-len(predi):]
nrmse=[]
for j in range(len(pred)):
nrmsei=[]
for i in range(4):
r=np.sqrt(mean_squared_error(y_true[i*24:(i+1)*24],pred[j][i*24:(i+1)*24]))
nrmsei.append(r/np.max(y_true[i*24:(i+1)*24]))
nrmse.append(nrmsei)
nrmse=np.array(nrmse);nrmse.shape
fig = plt.figure()
ax = plt.subplot(111)
for i in range(len(nrmse)):
ax.plot(nrmse[i],label="model"+str(i))
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.grid()
plt.ylabel("nrmse")
plt.xlabel("day")
plt.show()
y_true=sc1.inverse_transform(y_test[:,:,0]).flatten()[-len(predi):]
y_true.shape
rmse=[]
nrmse_max=[]
nrmse_mean=[]
nrmse_std=[]
nrmse_iqr=[]
wape=[]
for i in range(len(pred)):
r=np.sqrt(mean_squared_error(y_true,pred[i]))
rmse.append(r)
nrmse_max.append(r/np.max(y_true))
nrmse_mean.append(r/np.mean(y_true))
nrmse_std.append(r/np.std(y_true))
nrmse_iqr.append(r/iqr(y_true))
wape.append(np.mean(np.abs((y_true -pred[i])/np.mean(y_true))))
print("model"+str(i)+"==========================")
print('rmse: %.3f'%r)
print('rmse/max',r/np.max(y_true))
print('rmse/mean',r/np.mean(y_true))
print('rmse/std',r/np.std(y_true))
print('rmse/IQR',r/iqr(y_true))
print('WAPE: ',wape[i])
error_list=[nrmse_max,nrmse_mean,nrmse_std,nrmse_iqr,wape]
#max nrmse
'''plt.figure(figsize=(10,2))
plt.bar(["model"+str(i+1) for i in range(12)],wape)
plt.grid()
plt.show()'''
fig=plt.figure(figsize=(10,2*5))
for i in range(5):
    ax = fig.add_subplot(5, 1, i + 1)  # create each axes once and reuse it
    ax.bar(["model" + str(j + 1) for j in range(12)], error_list[i])
    ax.set_ylabel("nrmse")
    ax.grid()
plt.show()
```
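The normalisation variants computed above can be collected into one helper. A standalone NumPy sketch (the IQR denominator mirrors what `scipy.stats.iqr` returns; the function name is our own):

```
import numpy as np

def nrmse(y_true, y_pred):
    """RMSE normalised by several spread measures of y_true."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    iqr = np.percentile(y_true, 75) - np.percentile(y_true, 25)
    return {
        'rmse': rmse,
        'rmse/max': rmse / np.max(y_true),
        'rmse/mean': rmse / np.mean(y_true),
        'rmse/std': rmse / np.std(y_true),
        'rmse/iqr': rmse / iqr,
    }

scores = nrmse([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 5.0])
```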
| github_jupyter |
# Introduction to GeoPandas
This quick tutorial introduces the key concepts and basic features of GeoPandas to help you get started with your projects.
## Concepts
GeoPandas, as the name suggests, extends the popular data science library [pandas](https://pandas.pydata.org) by adding support for geospatial data. If you are not familiar with `pandas`, we recommend taking a quick look at its [Getting started documentation](https://pandas.pydata.org/docs/getting_started/index.html#getting-started) before proceeding.
The core data structure in GeoPandas is the `geopandas.GeoDataFrame`, a subclass of `pandas.DataFrame` that can store geometry columns and perform spatial operations. The `geopandas.GeoSeries`, a subclass of `pandas.Series`, handles the geometries. Therefore, your `GeoDataFrame` is a combination of `pandas.Series` with traditional data (numerical, boolean, text etc.) and `geopandas.GeoSeries` with geometries (points, polygons etc.). You can have as many geometry columns as you wish; there is no single-geometry limit of the kind typical in desktop GIS software.

Each `GeoSeries` can contain any geometry type (you can even mix them within a single array) and has a `GeoSeries.crs` attribute, which stores information about the projection (CRS stands for Coordinate Reference System). Therefore, each `GeoSeries` in a `GeoDataFrame` can be in a different projection, allowing you to have, for example, multiple versions (different projections) of the same geometry.
Only one `GeoSeries` in a `GeoDataFrame` is considered the _active_ geometry, which means that all geometric operations applied to a `GeoDataFrame` operate on this _active_ column.
<div class="alert alert-info">
User Guide
See more on [data structures in the User Guide](../docs/user_guide/data_structures.rst).
</div>
Let's see how some of these concepts work in practice.
## Reading and writing files
First, we need to read some data.
### Reading files
Assuming you have a file containing both data and geometry (e.g. GeoPackage, GeoJSON, Shapefile), you can read it using `geopandas.read_file()`, which automatically detects the filetype and creates a `GeoDataFrame`. This tutorial uses the `"nybb"` dataset, a map of New York boroughs, which is part of the GeoPandas installation. Therefore, we use `geopandas.datasets.get_path()` to retrieve the path to the dataset.
```
import geopandas
path_to_data = geopandas.datasets.get_path("nybb")
gdf = geopandas.read_file(path_to_data)
gdf
```
### Writing files
To write a `GeoDataFrame` back to file use `GeoDataFrame.to_file()`. The default file format is Shapefile, but you can specify your own with the `driver` keyword.
```
gdf.to_file("my_file.geojson", driver="GeoJSON")
```
<div class="alert alert-info">
User Guide
See more on [reading and writing data in the User Guide](../docs/user_guide/io.rst).
</div>
## Simple accessors and methods
Now we have our `GeoDataFrame` and can start working with its geometry.
Since there was only one geometry column in the New York Boroughs dataset, this column automatically becomes the _active_ geometry and spatial methods used on the `GeoDataFrame` will be applied to the `"geometry"` column.
### Measuring area
To measure the area of each polygon (or MultiPolygon in this specific case), access the `GeoDataFrame.area` attribute, which returns a `pandas.Series`. Note that `GeoDataFrame.area` is just `GeoSeries.area` applied to the _active_ geometry column.
But first, to make the results easier to read, set the names of the boroughs as the index:
```
gdf = gdf.set_index("BoroName")
gdf["area"] = gdf.area
gdf["area"]
```
### Getting polygon boundary and centroid
To get the boundary of each polygon (LineString), access the `GeoDataFrame.boundary`:
```
gdf['boundary'] = gdf.boundary
gdf['boundary']
```
Since we have saved boundary as a new column, we now have two geometry columns in the same `GeoDataFrame`.
We can also create new geometries, which could be, for example, a buffered version of the original one (i.e., `GeoDataFrame.buffer(10)`) or its centroid:
```
gdf['centroid'] = gdf.centroid
gdf['centroid']
```
### Measuring distance
We can also measure how far each centroid is from the first centroid location.
```
first_point = gdf['centroid'].iloc[0]
gdf['distance'] = gdf['centroid'].distance(first_point)
gdf['distance']
```
Note that `geopandas.GeoDataFrame` is a subclass of `pandas.DataFrame`, so we have all the pandas functionality available to use on the geospatial dataset — we can even perform data manipulations with the attributes and geometry information together.
For example, to calculate the average of the distances measured above, access the 'distance' column and call the mean() method on it:
```
gdf['distance'].mean()
```
## Making maps
GeoPandas can also plot maps, so we can check how the geometries appear in space. To plot the active geometry, call `GeoDataFrame.plot()`. To color code by another column, pass in that column as the first argument. In the example below, we plot the active geometry column and color code by the `"area"` column. We also want to show a legend (`legend=True`).
```
gdf.plot("area", legend=True)
```
Switching the active geometry (`GeoDataFrame.set_geometry`) to centroids, we can plot the same data using point geometry.
```
gdf = gdf.set_geometry("centroid")
gdf.plot("area", legend=True)
```
And we can also layer both `GeoSeries` on top of each other. We just need to use one plot as an axis for the other.
```
ax = gdf["geometry"].plot()
gdf["centroid"].plot(ax=ax, color="black")
```
Now we set the active geometry back to the original `GeoSeries`.
```
gdf = gdf.set_geometry("geometry")
```
<div class="alert alert-info">
User Guide
See more on [mapping in the User Guide](../docs/user_guide/mapping.rst).
</div>
## Geometry creation
We can further work with the geometry and create new shapes based on those we already have.
### Convex hull
If we are interested in the convex hull of our polygons, we can access `GeoDataFrame.convex_hull`.
```
gdf["convex_hull"] = gdf.convex_hull
ax = gdf["convex_hull"].plot(alpha=.5) # saving the first plot as an axis and setting alpha (transparency) to 0.5
gdf["boundary"].plot(ax=ax, color="white", linewidth=.5) # passing the first plot and setting linewitdth to 0.5
```
### Buffer
In other cases, we may need to buffer the geometry using `GeoDataFrame.buffer()`. Geometry methods are automatically applied to the active geometry, but we can apply them directly to any `GeoSeries` as well. Let's buffer the boroughs and their centroids and plot both on top of each other.
```
# buffering the active geometry by 10 000 feet (geometry is already in feet)
gdf["buffered"] = gdf.buffer(10000)
# buffering the centroid geometry by 10 000 feet (geometry is already in feet)
gdf["buffered_centroid"] = gdf["centroid"].buffer(10000)
ax = gdf["buffered"].plot(alpha=.5) # saving the first plot as an axis and setting alpha (transparency) to 0.5
gdf["buffered_centroid"].plot(ax=ax, color="red", alpha=.5) # passing the first plot as an axis to the second
gdf["boundary"].plot(ax=ax, color="white", linewidth=.5) # passing the first plot and setting linewitdth to 0.5
```
<div class="alert alert-info">
User Guide
See more on [geometry creation and manipulation in the User Guide](../docs/user_guide/geometric_manipulations.rst).
</div>
## Geometry relations
We can also ask about the spatial relations of different geometries. Using the geometries above, we can check which of the buffered boroughs intersect the original geometry of Brooklyn, i.e., is within 10 000 feet from Brooklyn.
First, we get a polygon of Brooklyn.
```
brooklyn = gdf.loc["Brooklyn", "geometry"]
brooklyn
```
The polygon is a [shapely geometry object](https://shapely.readthedocs.io/en/stable/manual.html#geometric-objects), as any other geometry used in GeoPandas.
```
type(brooklyn)
```
Then we can check which of the geometries in `gdf["buffered"]` intersects it.
```
gdf["buffered"].intersects(brooklyn)
```
Only the Bronx (in the north) is more than 10 000 feet away from Brooklyn. All the others are closer and intersect our polygon.
Alternatively, we can check which buffered centroids are entirely within the original boroughs polygons. In this case, both `GeoSeries` are aligned, and the check is performed for each row.
```
gdf["within"] = gdf["buffered_centroid"].within(gdf)
gdf["within"]
```
We can plot the results on the map to confirm the finding.
```
gdf = gdf.set_geometry("buffered_centroid")
ax = gdf.plot("within", legend=True, categorical=True, legend_kwds={'loc': "upper left"}) # using categorical plot and setting the position of the legend
gdf["boundary"].plot(ax=ax, color="black", linewidth=.5) # passing the first plot and setting linewitdth to 0.5
```
## Projections
Each `GeoSeries` has its Coordinate Reference System (CRS) accessible at `GeoSeries.crs`. The CRS tells GeoPandas where the coordinates of the geometries are located on the earth's surface. In some cases, the CRS is geographic, which means that the coordinates are in latitude and longitude. In those cases, its CRS is WGS84, with the authority code `EPSG:4326`. Let's see the projection of our NY boroughs `GeoDataFrame`.
```
gdf.crs
```
Geometries are in `EPSG:2263` with coordinates in feet. We can easily re-project a `GeoSeries` to another CRS, like `EPSG:4326` using `GeoSeries.to_crs()`.
```
gdf = gdf.set_geometry("geometry")
boroughs_4326 = gdf.to_crs("EPSG:4326")
boroughs_4326.plot()
boroughs_4326.crs
```
Notice the difference in coordinates along the axes of the plot. Where we had 120 000 - 280 000 (feet) before, we now have 40.5 - 40.9 (degrees). In this case, `boroughs_4326` has a `"geometry"` column in WGS84, but all the other columns (centroids etc.) remain in the original CRS.
<div class="alert alert-warning">
Warning
For operations that rely on distance or area, you always need to use a projected CRS (in meters, feet, kilometers etc.) not a geographic one (in degrees). GeoPandas operations are planar, whereas degrees reflect the position on a sphere. Therefore, spatial operations using degrees may not yield correct results. For example, the result of `gdf.area.sum()` (projected CRS) is 8 429 911 572 ft<sup>2</sup> but the result of `boroughs_4326.area.sum()` (geographic CRS) is 0.083.
</div>
<div class="alert alert-info">
User Guide
See more on [projections in the User Guide](../docs/user_guide/projections.rst).
</div>
## What next?
With GeoPandas we can do much more than what has been introduced so far, from [aggregations](../docs/user_guide/aggregation_with_dissolve.rst), to [spatial joins](../docs/user_guide/mergingdata.rst), to [geocoding](../docs/user_guide/geocoding.rst), and [much more](../gallery/index.rst).
Head over to the [User Guide](../docs/user_guide.rst) to learn more about the different features of GeoPandas, the [Examples](../gallery/index.rst) to see how they can be used, or to the [API reference](../docs/reference.rst) for the details.
| github_jupyter |
# Advanced Usage Exampes for Seldon Client
## Istio Gateway Request with token over HTTPS - no SSL verification
Test against a current kubeflow cluster with Dex token authentication.
1. Install kubeflow with Dex authentication
```
INGRESS_HOST=!kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
ISTIO_GATEWAY=INGRESS_HOST[0]
ISTIO_GATEWAY
```
Get a token from the Dex gateway. At present, as Dex does not support password credentials via curl, you will need to get the token from your browser while logged into the cluster. Open up a browser console and run `document.cookie`
```
TOKEN = "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NjM2MjA0ODYsImlhdCI6MTU2MzUzNDA4NiwiaXNzIjoiMzQuNjUuNzMuMjU1IiwianRpIjoiYjllNDQxOGQtZjNmNC00NTIyLTg5ODEtNDcxOTY0ODNmODg3IiwidWlmIjoiZXlKcGMzTWlPaUpvZEhSd2N6b3ZMek0wTGpZMUxqY3pMakkxTlRvMU5UVTJMMlJsZUNJc0luTjFZaUk2SWtOcFVYZFBSMFUwVG1wbk1GbHBNV3RaYW1jMFRGUlNhVTU2VFhSUFZFSm9UMU13ZWxreVVYaE9hbGw0V21wVk1FNXFXVk5DVjNoMldUSkdjeUlzSW1GMVpDSTZJbXQxWW1WbWJHOTNMV0YxZEdoelpYSjJhV05sTFc5cFpHTWlMQ0psZUhBaU9qRTFOak0yTWpBME9EWXNJbWxoZENJNk1UVTJNelV6TkRBNE5pd2lZWFJmYUdGemFDSTZJbE5OWlZWRGJUQmFOVkZoUTNCdVNHTndRMWgwTVZFaUxDSmxiV0ZwYkNJNkltRmtiV2x1UUhObGJHUnZiaTVwYnlJc0ltVnRZV2xzWDNabGNtbG1hV1ZrSWpwMGNuVmxMQ0p1WVcxbElqb2lZV1J0YVc0aWZRPT0ifQ.7CQIz4A1s9m6lJeWTqpz_JKGArGX4e_zpRCOXXjVRJgguB3z48rSfei_KL7niMCWpruhU11c8UIw9E79PwHNNw"
```
## Start Seldon Core
Use the setup notebook to [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core) with [Istio Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Istio). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
**Note** When running helm install for this example you will need to set the istio.gateway flag to kubeflow-gateway (```--set istio.gateway=kubeflow-gateway```).
```
deployment_name = "test1"
namespace = "default"
from seldon_core.seldon_client import (
SeldonCallCredentials,
SeldonChannelCredentials,
SeldonClient,
)
sc = SeldonClient(
deployment_name=deployment_name,
namespace=namespace,
gateway_endpoint=ISTIO_GATEWAY,
debug=True,
channel_credentials=SeldonChannelCredentials(verify=False),
call_credentials=SeldonCallCredentials(token=TOKEN),
)
r = sc.predict(gateway="istio", transport="rest", shape=(1, 4))
print(r)
```
It's not presently possible to use gRPC without access to the certificates. We will update this once it's clear how to obtain them from a Kubeflow cluster setup.
## Istio - SSL Endpoint - Client Side Verification - No Authentication
1. First run through the [Istio Secure Gateway SDS example](https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-sds/) and make sure this works for you.
* This will create certificates for `httpbin.example.com` and test them out.
1. Update your `/etc/hosts` file to include an entry for the ingress gateway for `httpbin.example.com` e.g. add a line like: `10.107.247.132 httpbin.example.com` replacing the ip address with your ingress gateway ip address.
```
# Set to folder where the httpbin certificates are
ISTIO_HTTPBIN_CERT_FOLDER = "/home/clive/work/istio/httpbin.example.com"
```
## Start Seldon Core
Use the setup notebook to [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core) with [Istio Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Istio). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
**Note** When running ```helm install``` for this example you will need to set the ```istio.gateway``` flag to ```mygateway``` (```--set istio.gateway=mygateway```) used in the example.
```
deployment_name = "mymodel"
namespace = "default"
from seldon_core.seldon_client import (
SeldonCallCredentials,
SeldonChannelCredentials,
SeldonClient,
)
sc = SeldonClient(
deployment_name=deployment_name,
namespace=namespace,
gateway_endpoint="httpbin.example.com",
debug=True,
channel_credentials=SeldonChannelCredentials(
certificate_chain_file=ISTIO_HTTPBIN_CERT_FOLDER
+ "/2_intermediate/certs/ca-chain.cert.pem",
root_certificates_file=ISTIO_HTTPBIN_CERT_FOLDER
+ "/4_client/certs/httpbin.example.com.cert.pem",
private_key_file=ISTIO_HTTPBIN_CERT_FOLDER
+ "/4_client/private/httpbin.example.com.key.pem",
),
)
r = sc.predict(gateway="istio", transport="rest", shape=(1, 4))
print(r)
r = sc.predict(gateway="istio", transport="grpc", shape=(1, 4))
print(r)
```
| github_jupyter |
## Introduction to PySpark
This article aims to give hands on experience in working with the DataFrame API in PySpark. You can download this article as a notebook and run the code yourself by clicking on the download button above and selecting `.ipynb`.
We will not aim to cover all the PySpark DataFrame functionality or go into detail of how Spark works, but instead focus on practicality by performing some common operations on an example dataset. There are a handful of exercises that you can complete while reading this article.
Prerequisites for this article are some basic knowledge of Python and pandas. If you are completely new to Python then it is recommended to complete an introductory course first; your organisation may have specific Python training. Other resources include the [Python Tutorial](https://docs.python.org/3/tutorial/) and [10 Minutes to pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html). If you are an R user, the [Introduction to sparklyr](../sparklyr-intro/sparklyr-intro) article follows a similar format.
### PySpark: a quick introduction
Although this article focusses on practical usage to enable you to quickly use PySpark, you do need to understand some basic theory of Spark and distributed computing.
Spark is a powerful tool for processing huge datasets efficiently. We can access Spark in Python with the PySpark package. Spark has DataFrames, consisting of rows and columns, similar to pandas. Many of the operations are also similar, if not identically named: e.g. you can select and add columns, filter rows, group and aggregate.
The key difference between PySpark and pandas is where the DataFrame is processed:
- pandas DataFrames are processed on the *driver*; this could be on a local machine using a desktop IDE such as Spyder or PyCharm, or on a server, e.g. in a dedicated Docker container (such as a CDSW session). The amount of data you can process is limited to the driver memory, so pandas is suitable for smaller data.
- PySpark DataFrames are processed on the *Spark cluster*. This is a big pool of linked machines, called *nodes*. PySpark DataFrames are distributed into *partitions*, and are processed in parallel on the nodes in the Spark cluster. You can have much greater memory capacity with Spark and so is suitable for big data.
The DataFrame is also processed differently:
- In pandas, the DataFrame changes in memory at each point, e.g. you could create a DataFrame by reading from a CSV file, select some columns, filter the rows, add a column and then write the data out. With each operation, the DataFrame is physically changing in memory. This can be useful for debugging as it is easy to see intermediate outputs.
- In PySpark, DataFrames are *lazily evaluated*. We give Spark a set of instructions, called *transformations*, which are only evaluated when necessary, for instance to get a row count or write out data to a file, referred to as an *action*. In the example above, the plan is triggered once the data are set to write out to a file.
For more detail on how Spark works, you can refer to the articles in the Understanding and Optimising Spark chapter of this book. [Databricks: Learning Spark](https://pages.databricks.com/rs/094-YMS-629/images/LearningSpark2.0.pdf) is another useful resource.
### The `pyspark` Package
As with all coding scripts or notebooks the first thing we do is to import the relevant packages. When coding in PySpark there are two particular imports we need.
Firstly we will import the [`SparkSession`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.SparkSession.html) class, which we will use to create a `SparkSession` object for processing data using Spark.
The second is the [`pyspark.sql.functions`](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql.html#functions) module, which contains functions that can be applied to Spark DataFrames, or to columns within them. The standard method is to [import the `functions` module with the alias `F`](../ancillary-topics/module-imports), which means that whenever we want to call a function from this module we write `F.function_name()`.
```
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
```
We will also want to import the [`pandas`](https://pandas.pydata.org/) module as `pd` as it makes viewing data much easier and neater, and `yaml` for reading the config file, which contains the file path of the source data.
```
import pandas as pd
import yaml
with open("../../config.yaml") as f:
config = yaml.safe_load(f)
```
As `pyspark.sql.functions` is just a Python module, [`dir()`](https://docs.python.org/3/library/functions.html#dir), to list its contents, and [`help()`](https://docs.python.org/3/library/functions.html#help), to view documentation, both work as normal, although the easiest way is to look at the [documentation](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql.html#functions). There are a lot of functions in this module and you are very unlikely to use them all.
### Create a Spark session: `SparkSession.builder`
With our `SparkSession` class imported we now want to create a connection to the Spark cluster. We use [`SparkSession.builder`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.SparkSession.html#pyspark.sql.SparkSession.builder) and assign this to `spark`. `SparkSession.builder` has many options; see the [Guidance on Spark Sessions](../spark-overview/spark-session-guidance) and also [Example Spark Sessions](../spark-overview/example-spark-sessions) to get an idea of what sized session to use.
For this article, we are using a tiny dataset by Spark standards, and so are using a *local* session. This also means that you can run this code without having access to a Spark cluster.
Note that only one Spark session can be running at once. If a session already exists then a new one will not be created; instead the connection to the existing session will be used, hence the [`.getOrCreate()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.SparkSession.builder.getOrCreate.html) method.
```
spark = (SparkSession.builder.master("local[2]")
.appName("pyspark-intro")
.getOrCreate())
```
### Reading data: `spark.read.csv()`
For this article we will look at some open data on animal rescue incidents from the London Fire Brigade. The data are stored as a CSV, although the parquet file format is the most common when using Spark. The reason for using CSV in this article is because it is a familiar file format and allows you to adapt this code easily for your own sample data. See the article on [Reading Data in PySpark](../pyspark-intro/reading-data-pyspark) for more information.
Often your data will be large and stored using Hadoop, on the Hadoop Distributed File System (HDFS). This example uses a local file, enabling us to get started quickly; see the article on [Data Storage](../spark-overview/data-storage) for more information.
To read in from a CSV file, use [`spark.read.csv()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrameReader.csv.html). The file path is stored in the config file as `rescue_path_csv`. Using `header=True` means that the DataFrame will use the column headers from the CSV as the column names. CSV files do not contain information about the [data types](../spark-overview/data-types), so use `inferSchema=True` which makes Spark scan the file to infer the data types.
```
rescue_path_csv = config["rescue_path_csv"]
rescue = spark.read.csv(rescue_path_csv, header=True, inferSchema=True)
```
### Preview data: `.printSchema()`
To view the column names and data types use [`.printSchema()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.printSchema.html). This will not return any actual data.
```
rescue.printSchema()
```
### Show data: `.show()`
The [`.show()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.show.html) function previews a DataFrame. `.show()` is an *action*, meaning that all previous *transformations* will be run on the Spark cluster; often the plan will contain many *transformations*, but here we only have one, reading in the data.
By default `.show()` will display 20 rows and will truncate the columns to a fixed width.
When there are many columns the output can be hard to read, e.g. although the output of `.show(3)` looks fine in this article, in the notebook version every row will appear over several lines:
```
rescue.show(3)
```
### Convert to pandas: `.toPandas()`
The returned results can look pretty ugly with `.show()` when you have a lot of columns, so often the best way to view the data is to convert it to a pandas DataFrame. Be careful: the DataFrame is currently on the *Spark cluster*, which has lots of memory capacity, whereas pandas DataFrames are stored on the *driver*, which will have much less. Trying to use [`.toPandas()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.toPandas.html) on a huge PySpark DataFrame will not work. If converting to a pandas DataFrame just to view the data, use [`.limit()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.limit.html) to bring back only a small number of rows.
`.toPandas()` is an *action* and will process the whole plan on the Spark cluster. In this example, it will read the CSV file, take three rows, and then return the result to the driver as a pandas DataFrame.
```
rescue.limit(3).toPandas()
```
See the article on [Returning Data from Cluster to Driver](../pyspark-intro/returning-data) for more details.
### Select columns `.select()`
Often your data will have many columns that are not relevant, so we can use [`.select()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.select.html) to keep just the ones that are of interest. In our example, we can reduce the number of columns returned so that the output of `.show()` is much neater.
Selecting columns is a *transformation*, and so will only be processed once an *action* is called. As such we are chaining this with `.show()` to preview the data:
```
(rescue
.select("IncidentNumber", "DateTimeofCall", "FinalDescription")
.show(5, truncate=False))
```
### Get the row count: `.count()`
Sometimes your data will be small enough that you do not even need to use Spark. As such it is useful to know the row count, and then make the decision on whether to use Spark or just use pandas.
Note that unlike pandas DataFrames the row count is not automatically determined when the data are read in as a PySpark DataFrame. This is an example of *lazy evaluation*. As such, [`.count()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.count.html) is an *action* and has to be explicitly called.
```
rescue.count()
```
### Drop columns: `.drop()`
[`.drop()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.drop.html) is the opposite of `.select()`: we specify the columns that we want to remove. There are several columns related to the location of the animal rescue incidents that we will not use; these can be removed with `.drop()`. You do not need to specify the columns as a list with `[]`; just use a comma separator.
Note that we have overwritten our previous DataFrame by re-assigning to `rescue`; unlike pandas DataFrames, PySpark DataFrames are *immutable*, so they cannot be changed in place.
We then use `.printSchema()` to verify that the columns have been removed.
```
rescue = rescue.drop(
"WardCode",
"BoroughCode",
"Easting_m",
"Northing_m",
"Easting_rounded",
"Northing_rounded"
)
rescue.printSchema()
```
### Rename columns: `.withColumnRenamed()`
The source data has the column names in `CamelCase`, but when using Python we generally prefer to use `snake_case`. The source data also has columns containing special characters (`£`), which can be problematic when writing out data as they are not compatible with all storage formats.
To rename columns, use [`.withColumnRenamed()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.withColumnRenamed.html). This has two arguments: the original column name, followed by the new name. You need a separate `.withColumnRenamed()` statement for each column name but can chain these together. Once again we are re-assigning the DataFrame as it is immutable.
```
rescue = (rescue
.withColumnRenamed("IncidentNumber", "incident_number")
.withColumnRenamed("AnimalGroupParent", "animal_group")
.withColumnRenamed("CalYear", "cal_year")
.withColumnRenamed("IncidentNotionalCost(£)", "total_cost")
.withColumnRenamed("PumpHoursTotal", "job_hours")
.withColumnRenamed("PumpCount", "engine_count"))
rescue.printSchema()
```
Be careful using `.withColumnRenamed()`; if the column is not in the DataFrame then nothing will happen and an error will not be raised until you try and use the new column name.
### Exercise 1
#### Exercise 1a
Rename the following columns in the `rescue` DataFrame:
`FinalDescription` --> `description`
`PostcodeDistrict` --> `postcode_district`
#### Exercise 1b
Select these columns and the six columns that were renamed in the cell above; you should have eight in total. Reassign the result to the `rescue` DataFrame.
#### Exercise 1c
Preview the first five rows of the DataFrame.
<details>
<summary><b>Exercise 1: Solution</b></summary>
```
# 1a: Chain two .withColumnRenamed() operations
rescue = (rescue
.withColumnRenamed("FinalDescription", "description")
.withColumnRenamed("PostcodeDistrict", "postcode_district"))
# 1b: Select the eight columns and assign to rescue
rescue = rescue.select(
"incident_number",
"animal_group",
"cal_year",
"total_cost",
"job_hours",
"engine_count",
"description",
"postcode_district")
# 1c Preview with .show(); can also use .limit(5).toPandas()
rescue.show(5)
```
</details>
### Filter rows: `.filter()` and `F.col()`
Rows of PySpark DataFrames can be filtered with [`.filter()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.filter.html), which takes a logical condition. [`F.col()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.col.html) is used to reference a column in a DataFrame by name and is the [most robust method to use](../pyspark-intro/f-col).
For instance, if we want to select all the rows where `animal_group` is equal to `Hamster`, we can use `F.col("animal_group") == "Hamster"`. Note the double equals sign used in a condition. We do not want to change the `rescue` DataFrame, so assign the result to a new DataFrame, `hamsters`:
```
hamsters = rescue.filter(F.col("animal_group") == "Hamster")
hamsters.select("incident_number", "animal_group").show(3)
```
Alternatively you can just input a string into `.filter()`, although this can be messy to look at if the condition contains a string:
```
cats = rescue.filter("animal_group == 'Cat'")
cats.select("incident_number", "animal_group").show(3)
```
Multiple conditions should be in brackets; putting each condition on a new line makes the code easier to read:
```
expensive_olympic_dogs = rescue.filter(
(F.col("animal_group") == "Dog") &
(F.col("total_cost") >= 750) &
(F.col("cal_year") == 2012))
(expensive_olympic_dogs
.select("incident_number", "animal_group", "cal_year", "total_cost")
.show())
```
### Exercise 2
Create a new DataFrame which consists of all the rows where `animal_group` is equal to `"Fox"` and preview the first ten rows.
<details>
<summary><b>Exercise 2: Solution</b></summary>
```
# Use F.col() to filter, ensuring that a new DataFrame is created
# Can also use a string condition instead, e.g. rescue.filter("animal_group == 'Fox'")
foxes = rescue.filter(F.col("animal_group") == "Fox")
# Preview with .show() or .limit(10).toPandas()
foxes.show(10)
```
</details>
### Adding Columns: `.withColumn()`
[`.withColumn()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.withColumn.html) can be used to either create a new column, or overwrite an existing one. The first argument is the new column name, the second is the value of the column. Often this will be derived from other columns. For instance, we do not have a column for how long an incident took in the data, but do have the columns available to derive this:
- `job_hours` gives the total number of hours for engines attending the incident, e.g. if 2 engines attended for an hour `job_hours` will be `2`
- `engine_count` gives the number of engines in attendance
So to get the duration of the incident, which we will call `incident_duration`, we have to divide `job_hours` by `engine_count`:
```
rescue = rescue.withColumn(
"incident_duration",
F.col("job_hours") / F.col("engine_count")
)
```
Now preview the data with `.limit(5).toPandas()`:
```
rescue.limit(5).toPandas()
```
Note that previewing the data took longer to process than defining the new column. Why? Remember that Spark is built on the concept of **transformations** and **actions**:
* **Transformations** are lazily evaluated expressions. These form the set of instructions called the execution plan.
* **Actions** trigger computation to be performed on the cluster and results returned to the driver. It is actions that trigger the execution plan.
Multiple transformations can be combined, as we did to preprocess the `rescue` DataFrame above. Only when an action is called, for example `.toPandas()`, `.show()` or `.count()`, are these transformations and action executed on the cluster, after which the results are returned to the driver.
### Sorting: `.orderBy()`
An important Spark concept is that DataFrames are [not ordered by default](../spark-concepts/df-order), unlike a pandas DF which has an index. Remember that a Spark DataFrame is distributed into partitions, and there is no guarantee of the order of the rows within these partitions, or which partition a particular row is on.
To sort the data, use [`.orderBy()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.orderBy.html). By default this will sort ascending; to sort descending, use `ascending=False`. To show the highest cost incidents:
```
(rescue
.orderBy("total_cost", ascending=False)
.select("incident_number", "total_cost", "animal_group")
.show(10))
```
Horses make up a lot of the more expensive calls, which makes sense, given that they are large animals.
There are actually multiple ways to sort data in PySpark; as well as `.orderBy()`, you can use [`.sort()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.sort.html) or even an SQL expression. The same is true of sorting the data descending, where [`F.desc(column_name)`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.desc.html) can be used instead of `ascending=False`. The most important principle here is consistency; try and use the same syntax as your colleagues to make the code easier to read.
Note that sorting the DataFrame is an expensive operation, as the rows move between partitions. This is a key Spark concept called a [*shuffle*](../spark-concepts/shuffling). When you are ready to optimise your Spark code you will want to read the article on Shuffling.
### Exercise 3
Sort the incidents in terms of their duration, look at the top 10 and the bottom 10. Do you notice anything strange?
<details>
<summary><b>Exercise 3: Solution</b></summary>
```
# To get the top 10, sort the DF descending
top10 = (rescue
.orderBy("incident_duration", ascending=False)
.limit(10))
top10.show()
# The bottom 10 can just be sorted ascending
# Note that .tail() does not exist in Spark 2.4
bottom10 = (rescue
.orderBy("incident_duration")
.limit(10))
# When previewing the results, the incident_duration are all null
bottom10.show()
```
</details>
### Grouping and Aggregating: `.groupBy()`, `.agg()` and `.alias()`
In most cases, we want to get insights into the raw data, for instance, by taking the sum or average of a column, or getting the largest or smallest values. This is key to what the Office for National Statistics does: we release statistics!
In Spark, grouping and aggregating is similar to pandas or SQL: we first group the data with [`.groupBy()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.groupBy.html), then aggregate it in some way with a function from the `functions` module inside [`.agg()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.agg.html), e.g. [`F.sum()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.sum.html), [`F.max()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.max.html). For instance, to find the average cost by `animal_group` we use [`F.mean()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.mean.html):
```
cost_by_animal = (rescue
.groupBy("animal_group")
.agg(F.mean("total_cost")))
cost_by_animal.show(5)
```
The new column has been returned as `avg(total_cost)`. We want to give it a more sensible name and avoid using brackets. We saw earlier that we could use `.withColumnRenamed()` once the column has been created, but it is easier to use [`.alias()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.Column.alias.html) to rename the column directly in the aggregation:
```
cost_by_animal = (rescue
.groupBy("animal_group")
.agg(F.mean("total_cost").alias("average_cost")))
cost_by_animal.show(5)
```
Remember that Spark DFs are not ordered unless we specifically do so with `.orderBy()`; now we have renamed the column `average_cost` this is easy to do:
```
cost_by_animal.orderBy("average_cost", ascending=False).show(10)
```
It looks like `Goat` could be an outlier, as its average is significantly higher than those of the other expensive animal groups. We can investigate this in more detail using `.filter()`:
```
goats = rescue.filter(F.col("animal_group") == "Goat")
goats.count()
```
Just one expensive goat incident! Let's see the description:
```
goats.select("incident_number", "animal_group", "description").toPandas()
```
Note that we did not use `.limit()` before `.toPandas()` here. This is because we know the row count is tiny, and so there was no danger of overloading the driver with too much data.
### Reading data from a Parquet file: `spark.read.parquet()`
The next section covers how to join data in Spark, but before we do, we need to read in another dataset. In our rescue data, we have a column for the postcode district, which represents the first part of the postcode. We have data for the population by postcode in another dataset, `population`.
These data are stored as a parquet file. Parquet files are the most efficient way to store data when using Spark. They are compressed and so take up much less storage space, and reading parquet files with Spark is many times quicker than reading CSVs. The drawback is that they are not human readable, although you can store them as a Hive table, which means they can easily be interrogated with SQL. See the article on Parquet files for more information.
The syntax for reading in a parquet file is similar to a CSV: [`spark.read.parquet()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrameReader.parquet.html). There is no need for the `header` or `inferSchema` arguments as, unlike CSVs, parquet files already have the schema defined. We can then preview the data with `.limit(5).toPandas()`:
```
population_path = config["population_path"]
population = spark.read.parquet(population_path)
population.limit(5).toPandas()
```
### Joining Data `.join()`
Now we have read the population data in, we can join it to the rescue data to get the population by postcode. This can be done with the [`.join()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.join.html) method.
This article assumes that you are familiar with joins. Those who know SQL will be familiar with this term, although pandas and R users sometimes use the term *merge*. If you do not know how a join works, please read about [joins in SQL](https://en.wikipedia.org/wiki/Join_(SQL)) first; the principles are the same in Spark. Joins are expensive in Spark as they involve shuffling the data and this can make larger joins slow. See the article on [Optimising Joins](../spark-concepts/join-concepts) for more information on how to make them more efficient.
`.join()` is a DataFrame method, so we start with the `rescue` DataFrame. The other arguments we need are:
- `other`: the DataFrame on the right hand side of the join, `population`;
- `on`: which specifies the mapping. Here we have a common column name and so can simply supply the column name;
- `how`: the type of join to use, `"left"` in this case.
To make the code easier to read we have put these arguments on new lines:
```
rescue_with_pop = (
rescue.join(
population,
on="postcode_district",
how="left"))
```
Once again, note how quickly this code runs. This is because, although a join is an expensive operation, we have only created the plan at this point. We need an action to run the plan and return a result; sort the joined DataFrame, subset the columns and then use `.limit(5).toPandas()`:
```
rescue_with_pop = (rescue_with_pop
.orderBy("incident_number")
.select("incident_number", "animal_group", "postcode_district", "population"))
rescue_with_pop.limit(5).toPandas()
```
### Writing data: file choice
In this article so far, we have been calling actions to preview the data, bringing back only a handful of rows each time. This is useful when developing and debugging code, but in production pipelines you will want to write the results.
The format in which the results are written out depends on what you want to do next with the data:
- If the data are intended to be human readable, e.g. as the basis for a presentation, or as a publication on the ONS website, then you will likely want to output the data as a CSV
- If the data are intended to be used as an input to another Spark process, then use parquet or a Hive table.
There are other use cases, e.g. JSON can be useful if you want the results to be analysed with a different programming language, although here we only focus on CSV and parquet. See the article on [Writing Data](../spark-functions/writing-data) for more information.
### Write to a parquet: `.write.parquet()`
To write out our DataFrame as a parquet file, use [`rescue_with_pop.write.parquet()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrameWriter.parquet.html), using the file path as the argument.
The key difference between writing out data with Spark and writing out data with pandas is that the data will be distributed, which means that multiple files will be created, stored in a parent folder. Spark can read in these parent folders as one DataFrame. There will be one file written out per partition of the DataFrame.
```
output_path_parquet = config["rescue_with_pop_path_parquet"]
rescue_with_pop.write.parquet(output_path_parquet)
```
It is worth looking at the raw data that is written out to see that it has been stored in several files in a parent folder.
When reading the data in, Spark will treat every individual file as a partition. See the article on [Managing Partitions](../spark-concepts/partitions) for more information.
### Write to a CSV: `.write.csv()` and `.coalesce()`
CSVs will also be written out in a distributed manner as multiple files. While this is desirable in a parquet, it is not very useful with CSV, as the main benefit is to make them human readable. First, write out the data with [`.write.csv()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrameWriter.csv.html), using the path defined in the config:
```
output_path_csv = config["rescue_with_pop_path_csv"]
rescue_with_pop.write.csv(output_path_csv, header=True)
```
Again, look at the raw data in a file browser. You can see that it has written out a folder called `rescue_with_pop.csv`, with multiple files inside. Each of these on their own is a legitimate CSV file, with the correct headers.
To reduce the number of partitions, use [`.coalesce(numPartitions)`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.coalesce.html); this will combine existing partitions. Setting `numPartitions` to `1` will put all of the data on the same partition.
As the file will already exist, we need to tell Spark to overwrite the existing file. Use [`.mode("overwrite")`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrameWriter.mode.html) after `.write` to do this:
```
(rescue_with_pop
.coalesce(1)
.write
.mode("overwrite")
.csv(output_path_csv, header=True))
```
Checking the file again, you can see that although the folder still exists, it will contain only one CSV file.
A neater way of writing out CSV files to a Hadoop file system is with [Pydoop](../ancillary-topics/pydoop), allowing you to convert the Spark DataFrame to pandas first before making use of [`.to_csv()`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html) from pandas, which has many options to control the output format.
### Removing files
Spark has no native way of removing files, so either use the standard Python methods, or delete them manually through a file browser. If on a local file system, use [`os.remove()`](https://docs.python.org/3/library/os.html#os.remove) to delete a file and [`shutil.rmtree()`](https://docs.python.org/3/library/shutil.html#shutil.rmtree) to remove a directory. If using HDFS or similar, then use [`subprocess.run()`](https://docs.python.org/3/library/subprocess.html#subprocess.run). Be careful when using the `subprocess` module as you will not get a warning when deleting files.
```
import subprocess
for f in [output_path_parquet, output_path_csv]:
cmd = f"hdfs dfs -rm -r -skipTrash {f}"
subprocess.run(cmd, shell=True)
```
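For comparison, a sketch of the equivalent clean-up on a local file system, using only the standard library (the paths here are illustrative):

```
import os
import shutil

# Create then delete a single file with os.remove()
with open("demo_file.txt", "w") as f:
    f.write("temporary")
os.remove("demo_file.txt")

# Create then delete a whole directory with shutil.rmtree(),
# e.g. a folder of part files written out by Spark
os.makedirs("demo_output_dir", exist_ok=True)
shutil.rmtree("demo_output_dir")
```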
### Further Resources
Spark at the ONS Articles:
- [Avoiding Module Import Conflicts](../ancillary-topics/module-imports)
- [Guidance on Spark Sessions](../spark-overview/spark-session-guidance)
- [Example Spark Sessions](../spark-overview/example-spark-sessions)
- [Reading Data in PySpark](../pyspark-intro/reading-data-pyspark)
- [Data Storage](../spark-overview/data-storage)
- [Data Types in Spark](../spark-overview/data-types)
- [Returning Data from Cluster to Driver](../pyspark-intro/returning-data)
- [Reference columns by name: `F.col()`](../pyspark-intro/f-col)
- [Spark DataFrames Are Not Ordered](../spark-concepts/df-order)
- [Shuffling](../spark-concepts/shuffling)
- [Optimising Joins](../spark-concepts/join-concepts)
- [Writing Data](../spark-functions/writing-data)
- [Managing Partitions](../spark-concepts/partitions)
- [Pydoop](../ancillary-topics/pydoop)
PySpark Documentation:
- [`SparkSession`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.SparkSession.html)
- [`pyspark.sql.functions`](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql.html#functions)
- [`SparkSession.builder`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.SparkSession.html#pyspark.sql.SparkSession.builder)
- [`.getOrCreate()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.SparkSession.builder.getOrCreate.html)
- [`spark.read.csv()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrameReader.csv.html)
- [`.printSchema()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.printSchema.html)
- [`.show()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.show.html)
- [`.toPandas()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.toPandas.html)
- [`.limit()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.limit.html)
- [`.select()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.select.html)
- [`.count()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.count.html)
- [`.drop()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.drop.html)
- [`.withColumnRenamed()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.withColumnRenamed.html)
- [`.filter()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.filter.html)
- [`F.col()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.col.html)
- [`.withColumn()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.withColumn.html)
- [`.orderBy()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.orderBy.html)
- [`.sort()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.sort.html)
- [`F.desc()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.desc.html)
- [`.groupBy()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.groupBy.html)
- [`.agg()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.agg.html)
- [`F.sum()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.sum.html)
- [`F.max()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.max.html)
- [`F.mean()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.mean.html)
- [`.alias()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.Column.alias.html)
- [`spark.read.parquet()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrameReader.parquet.html)
- [`.join()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.join.html)
- [`.write.parquet()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrameWriter.parquet.html)
- [`.write.csv()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrameWriter.csv.html)
- [`.coalesce()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.coalesce.html)
- [`.mode()`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrameWriter.mode.html)
Python Documentation:
- [Python Tutorial](https://docs.python.org/3/tutorial/)
- [`dir()`](https://docs.python.org/3/library/functions.html#dir)
- [`help()`](https://docs.python.org/3/library/functions.html#help)
- [`os.remove()`](https://docs.python.org/3/library/os.html#os.remove)
- [`shutil.rmtree()`](https://docs.python.org/3/library/shutil.html#shutil.rmtree)
- [`subprocess.run()`](https://docs.python.org/3/library/subprocess.html#subprocess.run)
pandas Documentation:
- [10 Minutes to pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html)
- [`.to_csv()`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html)
Other Links:
- [Databricks: Learning Spark](https://pages.databricks.com/rs/094-YMS-629/images/LearningSpark2.0.pdf)
- [Wikipedia: Join (SQL)](https://en.wikipedia.org/wiki/Join_(SQL))
#### Acknowledgements
Thanks to Karina Marks, Chris Musselle, Greg Payne and Dave Beech for creating the initial version of this article.
```
# import libraries
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
import numpy as np
from tqdm import tqdm
from torch.utils.tensorboard import SummaryWriter
from utils import get_num_correct
from custom_dataset import ChestXRayDataset
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # set the device type
device
# declare transformations
transform = {
'train': transforms.Compose([
transforms.Resize((224, 224)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
]),
'test': transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
])
}
# get train and test directories
dirs = {
'train': {
'covid': 'COVID-19 Radiography Database/train/covid',
'normal': 'COVID-19 Radiography Database/train/normal',
'viral': 'COVID-19 Radiography Database/train/viral'
},
'test': 'COVID-19 Radiography Database/test'
}
# prepare the train data-loader
train_set = ChestXRayDataset(dirs['train'], transform['train'])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=8, shuffle=True, num_workers=2)
# prepare the test and validation data-loader
test_set = datasets.ImageFolder(dirs['test'], transform['test'])
valid_size = 0.5 # fraction of test_set to be used as validation set
# obtain test indices that will be used for validation
num_test = len(test_set)
indices = list(range(num_test))
np.random.shuffle(indices)
split = int(np.floor(valid_size*num_test))
test_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining test and validation batches
valid_sampler = SubsetRandomSampler(valid_idx)
test_sampler = SubsetRandomSampler(test_idx)
# prepare the data loaders
valid_loader = torch.utils.data.DataLoader(test_set, batch_size=8, sampler=valid_sampler, num_workers=2)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=8, sampler=test_sampler, num_workers=2)
# resnet18 = torchvision.models.resnet18(pretrained=True)
resnet18 = torchvision.models.resnet18(pretrained=False)
resnet18.load_state_dict(
torch.load('../models/resnet18.pth',
map_location=device)
) # load the pretrained resnet18 model
# change the last fc layer so that it could output 3 classes
resnet18.fc = torch.nn.Linear(in_features=512, out_features=3)
resnet18.to(device) # move to GPU (if available)
criterion = nn.CrossEntropyLoss() # loss function (categorical cross-entropy)
optimizer = optim.Adam(resnet18.parameters(), lr=3e-5)
comment = '-resnet18-covid' # will be used for naming the run
tb = SummaryWriter(comment=comment)
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf # set initial minimum to infinity
num_epochs = 3 # number of epochs used for training
len_train = len(train_set)
len_val = len(valid_loader.sampler)
len_test = len(test_loader.sampler)
for epoch in range(num_epochs):
train_loss, train_correct = 0, 0 # will be used to track the running loss and correct
#######################
# fine-tune the model #
#######################
train_loop = tqdm(train_loader)
resnet18.train() # set the model to train mode
for batch in train_loop:
images, labels = batch[0].to(device), batch[1].to(device) # load the batch to the available device
preds = resnet18(images) # forward pass
loss = criterion(preds, labels) # calculate loss
optimizer.zero_grad() # clear the accumulated gradients from the previous pass
loss.backward() # backward pass
optimizer.step() # perform a single optimization step
train_loss += loss.item() * labels.size(0) # update the running loss
train_correct += get_num_correct(preds, labels) # update running num correct
train_loop.set_description(f'Epoch [{epoch+1:2d}/{num_epochs}]')
train_loop.set_postfix(loss=loss.item(), acc=train_correct/len_train)
# add train loss and train accuracy for the current epoch to tensorboard
tb.add_scalar('Train Loss', train_loss, epoch)
tb.add_scalar('Train Accuracy', train_correct/len_train, epoch)
resnet18.eval() # set the model to evaluation mode
with torch.no_grad(): # turn off grad tracking, as we don't need gradients for validation
valid_loss, valid_correct = 0, 0 # will be used to track the running validation loss and correct
######################
# validate the model #
######################
for batch in valid_loader:
images, labels = batch[0].to(device), batch[1].to(device) # load the batch to the available device
preds = resnet18(images) # forward pass
loss = criterion(preds, labels) # calculate the loss
valid_loss += loss.item() * labels.size(0) # update the running loss
valid_correct += get_num_correct(preds, labels) # update running num correct
# add validation loss and validation accuracy for the current epoch to tensorboard
tb.add_scalar('Validation Loss', valid_loss, epoch)
tb.add_scalar('Validation Accuracy', valid_correct/len_val, epoch)
# print training/validation statistics
# calculate average loss over an epoch
train_loss = train_loss/len_train
valid_loss = valid_loss/len_val
train_loop.write(f'\t\tAvg training loss: {train_loss:.6f}\tAvg validation loss: {valid_loss:.6f}')
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
train_loop.write(f'\t\tvalid_loss decreased ({valid_loss_min:.6f} --> {valid_loss:.6f}) saving model...')
torch.save(resnet18.state_dict(), f'./model/lr3e-5{comment}.pth')
valid_loss_min = valid_loss
test_loss, test_correct = 0, 0 # will be used to track the running test loss and correct
##################
# test the model #
##################
for batch in test_loader:
images, labels = batch[0].to(device), batch[1].to(device) # load the batch to available device
preds = resnet18(images) # forward pass
loss = criterion(preds, labels) # calculate the loss
test_loss += loss.item() * labels.size(0) # update the running loss
test_correct += get_num_correct(preds, labels) # update running num correct
# add test loss and test accuracy for the current epoch to tensorboard
tb.add_scalar('Test Loss', test_loss, epoch)
tb.add_scalar('Test Accuracy', test_correct/len_test, epoch)
```
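The training loop above uses `get_num_correct`, imported from a local `utils` module that is not shown. A minimal sketch of what such a helper conventionally looks like (the real `utils.get_num_correct` may differ in detail):

```python
import torch

def get_num_correct(preds, labels):
    # Count predictions whose highest-scoring class matches the label.
    return preds.argmax(dim=1).eq(labels).sum().item()

preds = torch.tensor([[0.1, 0.9],
                      [0.8, 0.2],
                      [0.3, 0.7]])
labels = torch.tensor([1, 0, 0])
print(get_num_correct(preds, labels))  # 2
```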
# Sentiment Analysis with TreeLSTMs in TensorFlow Fold
The [Stanford Sentiment Treebank](http://nlp.stanford.edu/sentiment/treebank.html) is a corpus of ~10K one-sentence movie reviews from Rotten Tomatoes. The sentences have been parsed into binary trees with words at the leaves; every sub-tree has a label ranging from 0 (highly negative) to 4 (highly positive); 2 means neutral.
For example, `(4 (2 Spiderman) (3 ROCKS))` is a sentence with two words, corresponding to a binary tree with three nodes. The label at the root, for the entire sentence, is `4` (highly positive). The label for the left child, a leaf corresponding to the word `Spiderman`, is `2` (neutral). The label for the right child, a leaf corresponding to the word `ROCKS`, is `3` (moderately positive).
This notebook shows how to use TensorFlow Fold to train a model on the treebank using binary TreeLSTMs and [GloVe](http://nlp.stanford.edu/projects/glove/) word embedding vectors, as described in the paper [Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks](http://arxiv.org/pdf/1503.00075.pdf) by Tai et al. The original [Torch](http://torch.ch) source code for the model, provided by the authors, is available [here](https://github.com/stanfordnlp/treelstm).
The model illustrates three of the more advanced features of Fold, namely:
1. [Compositions](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/blocks.md#wiring-things-together-in-more-complicated-ways) to wire up blocks to form arbitrary directed acyclic graphs
2. [Forward Declarations](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/blocks.md#recursion-and-forward-declarations) to create recursive blocks
3. [Metrics](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/py/td.md#class-tdmetric) to create models where the size of the output is not fixed, but varies as a function of the input data.
```
# boilerplate
import codecs
import functools
import os
import tempfile
import zipfile
from nltk.tokenize import sexpr
import numpy as np
from six.moves import urllib
import tensorflow as tf
sess = tf.InteractiveSession()
import tensorflow_fold as td
```
## Get the data
Begin by fetching the word embedding vectors and treebank sentences.
```
data_dir = tempfile.mkdtemp()
print('saving files to %s' % data_dir)
def download_and_unzip(url_base, zip_name, *file_names):
zip_path = os.path.join(data_dir, zip_name)
url = url_base + zip_name
print('downloading %s to %s' % (url, zip_path))
urllib.request.urlretrieve(url, zip_path)
out_paths = []
with zipfile.ZipFile(zip_path, 'r') as f:
for file_name in file_names:
print('extracting %s' % file_name)
out_paths.append(f.extract(file_name, path=data_dir))
return out_paths
full_glove_path, = download_and_unzip(
'http://nlp.stanford.edu/data/', 'glove.840B.300d.zip',
'glove.840B.300d.txt')
train_path, dev_path, test_path = download_and_unzip(
'http://nlp.stanford.edu/sentiment/', 'trainDevTestTrees_PTB.zip',
'trees/train.txt', 'trees/dev.txt', 'trees/test.txt')
```
Filter out words that don't appear in the dataset, since the full dataset is a bit large (5GB). This is purely a performance optimization and has no effect on the final results.
```
filtered_glove_path = os.path.join(data_dir, 'filtered_glove.txt')
def filter_glove():
vocab = set()
# Download the full set of unlabeled sentences separated by '|'.
sentence_path, = download_and_unzip(
'http://nlp.stanford.edu/~socherr/', 'stanfordSentimentTreebank.zip',
'stanfordSentimentTreebank/SOStr.txt')
with codecs.open(sentence_path, encoding='utf-8') as f:
for line in f:
# Drop the trailing newline and strip backslashes. Split into words.
vocab.update(line.strip().replace('\\', '').split('|'))
nread = 0
nwrote = 0
with codecs.open(full_glove_path, encoding='utf-8') as f:
with codecs.open(filtered_glove_path, 'w', encoding='utf-8') as out:
for line in f:
nread += 1
line = line.strip()
if not line: continue
if line.split(u' ', 1)[0] in vocab:
out.write(line + '\n')
nwrote += 1
print('read %s lines, wrote %s' % (nread, nwrote))
filter_glove()
```
Load the filtered word embeddings into a matrix and build a dict from words to indices into the matrix. Add a random embedding vector for out-of-vocabulary words.
```
def load_embeddings(embedding_path):
"""Loads embeddings, returns weight matrix and dict from words to indices."""
print('loading word embeddings from %s' % embedding_path)
weight_vectors = []
word_idx = {}
with codecs.open(embedding_path, encoding='utf-8') as f:
for line in f:
word, vec = line.split(u' ', 1)
word_idx[word] = len(weight_vectors)
weight_vectors.append(np.array(vec.split(), dtype=np.float32))
# Annoying implementation detail; '(' and ')' are replaced by '-LRB-' and
# '-RRB-' respectively in the parse-trees.
word_idx[u'-LRB-'] = word_idx.pop(u'(')
word_idx[u'-RRB-'] = word_idx.pop(u')')
# Random embedding vector for unknown words.
weight_vectors.append(np.random.uniform(
-0.05, 0.05, weight_vectors[0].shape).astype(np.float32))
return np.stack(weight_vectors), word_idx
weight_matrix, word_idx = load_embeddings(filtered_glove_path)
```
Finally, load the treebank data.
```
def load_trees(filename):
with codecs.open(filename, encoding='utf-8') as f:
# Drop the trailing newline and strip backslashes.
trees = [line.strip().replace('\\', '') for line in f]
print('loaded %s trees from %s' % (len(trees), filename))
return trees
train_trees = load_trees(train_path)
dev_trees = load_trees(dev_path)
test_trees = load_trees(test_path)
```
## Build the model
We want to compute a hidden state vector $h$ for every node in the tree. The hidden state is the input to a linear layer with softmax output for predicting the sentiment label.
At the leaves of the tree, words are mapped to word-embedding vectors which serve as the input to a binary tree-LSTM with $0$ for the previous states. At the internal nodes, the LSTM takes $0$ as input, and previous states from its two children. More formally,
\begin{align}
h_{word} &= TreeLSTM(Embedding(word), 0, 0) \\
h_{left, right} &= TreeLSTM(0, h_{left}, h_{right})
\end{align}
where $TreeLSTM(x, h_{left}, h_{right})$ is a special kind of LSTM cell that takes two hidden states as inputs, and has a separate forget gate for each of them. Specifically, it is [Tai et al.](http://arxiv.org/pdf/1503.00075.pdf) eqs. 9-14 with $N=2$. One modification here from Tai et al. is that instead of L2 weight regularization, we use recurrent dropout as described in the paper [Recurrent Dropout without Memory Loss](http://arxiv.org/pdf/1603.05118.pdf).
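Written out in the notation of the cell implemented below (children states $(c_0, h_0)$ and $(c_1, h_1)$, input $x$, logistic sigmoid $\sigma$, elementwise product $\odot$, forget bias $b_f$), the update is:
\begin{align}
[i,\, j,\, f_0,\, f_1,\, o] &= W\,[x;\, h_0;\, h_1] + b \\
c' &= c_0 \odot \sigma(f_0 + b_f) + c_1 \odot \sigma(f_1 + b_f) + \sigma(i) \odot \tanh(j) \\
h' &= \tanh(c') \odot \sigma(o)
\end{align}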
We can implement $TreeLSTM$ by subclassing the TensorFlow [`BasicLSTMCell`](https://www.tensorflow.org/versions/r1.0/api_docs/python/contrib.rnn/rnn_cells_for_use_with_tensorflow_s_core_rnn_methods#BasicLSTMCell).
```
class BinaryTreeLSTMCell(tf.contrib.rnn.BasicLSTMCell):
"""LSTM with two state inputs.
This is the model described in section 3.2 of 'Improved Semantic
Representations From Tree-Structured Long Short-Term Memory
Networks' <http://arxiv.org/pdf/1503.00075.pdf>, with recurrent
dropout as described in 'Recurrent Dropout without Memory Loss'
<http://arxiv.org/pdf/1603.05118.pdf>.
"""
def __init__(self, num_units, keep_prob=1.0):
"""Initialize the cell.
Args:
num_units: int, The number of units in the LSTM cell.
keep_prob: Keep probability for recurrent dropout.
"""
super(BinaryTreeLSTMCell, self).__init__(num_units)
self._keep_prob = keep_prob
def __call__(self, inputs, state, scope=None):
with tf.variable_scope(scope or type(self).__name__):
lhs, rhs = state
c0, h0 = lhs
c1, h1 = rhs
concat = tf.contrib.layers.linear(
tf.concat([inputs, h0, h1], 1), 5 * self._num_units)
# i = input_gate, j = new_input, f = forget_gate, o = output_gate
i, j, f0, f1, o = tf.split(value=concat, num_or_size_splits=5, axis=1)
j = self._activation(j)
if not isinstance(self._keep_prob, float) or self._keep_prob < 1:
j = tf.nn.dropout(j, self._keep_prob)
new_c = (c0 * tf.sigmoid(f0 + self._forget_bias) +
c1 * tf.sigmoid(f1 + self._forget_bias) +
tf.sigmoid(i) * j)
new_h = self._activation(new_c) * tf.sigmoid(o)
new_state = tf.contrib.rnn.LSTMStateTuple(new_c, new_h)
return new_h, new_state
```
Use a placeholder for the dropout keep probability, with a default of 1 (for eval).
```
keep_prob_ph = tf.placeholder_with_default(1.0, [])
```
Create the LSTM cell for our model. In addition to recurrent dropout, apply dropout to inputs and outputs, using TF's built-in dropout wrapper. Put the LSTM cell inside of a [`td.ScopedLayer`](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/py/td.md#class-tdscopedlayer) in order to manage variable scoping. This ensures that our LSTM's variables are encapsulated from the rest of the graph and get created exactly once.
```
lstm_num_units = 300 # Tai et al. used 150, but our regularization strategy is more effective
tree_lstm = td.ScopedLayer(
tf.contrib.rnn.DropoutWrapper(
BinaryTreeLSTMCell(lstm_num_units, keep_prob=keep_prob_ph),
input_keep_prob=keep_prob_ph, output_keep_prob=keep_prob_ph),
name_or_scope='tree_lstm')
```
Create the output layer using [`td.FC`](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/py/td.md#class-tdfc).
```
NUM_CLASSES = 5 # number of distinct sentiment labels
output_layer = td.FC(NUM_CLASSES, activation=None, name='output_layer')
```
Create the word embedding using [`td.Embedding`](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/py/td.md#class-tdembedding). Note that the built-in Fold layers like `Embedding` and `FC` manage variable scoping automatically, so there is no need to put them inside scoped layers.
```
word_embedding = td.Embedding(
*weight_matrix.shape, initializer=weight_matrix, name='word_embedding')
```
We now have layers that encapsulate all of the trainable variables for our model. The next step is to create the Fold blocks that define how inputs (s-expressions encoded as strings) get processed and used to make predictions. Naturally this requires a recursive model, which we handle in Fold using a [forward declaration](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/blocks.md#recursion-and-forward-declarations). The recursive step is to take a subtree (represented as a string) and convert it into a hidden state vector (the LSTM state), thus embedding it in an $n$-dimensional space (here $n=300$).
```
embed_subtree = td.ForwardDeclaration(name='embed_subtree')
```
The core of the model is a block that takes as input a list of tokens. The tokens will be either:
* `[word]` - a leaf with a single word, the base-case for the recursion, or
* `[lhs, rhs]` - an internal node consisting of a pair of sub-expressions
The outputs of the block will be a pair consisting of logits (the prediction) and the LSTM state.
```
def logits_and_state():
"""Creates a block that goes from tokens to (logits, state) tuples."""
unknown_idx = len(word_idx)
lookup_word = lambda word: word_idx.get(word, unknown_idx)
word2vec = (td.GetItem(0) >> td.InputTransform(lookup_word) >>
td.Scalar('int32') >> word_embedding)
pair2vec = (embed_subtree(), embed_subtree())
# Trees are binary, so the tree layer takes two states as its input_state.
zero_state = td.Zeros((tree_lstm.state_size,) * 2)
# Input is a word vector.
zero_inp = td.Zeros(word_embedding.output_type.shape[0])
word_case = td.AllOf(word2vec, zero_state)
pair_case = td.AllOf(zero_inp, pair2vec)
tree2vec = td.OneOf(len, [(1, word_case), (2, pair_case)])
return tree2vec >> tree_lstm >> (output_layer, td.Identity())
```
Note that we use the call operator `()` to create blocks that reference the `embed_subtree` forward declaration, for the recursive case.
Define a per-node loss function for training.
```
def tf_node_loss(logits, labels):
return tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels)
```
Additionally calculate fine-grained and binary hits (i.e. un-normalized accuracy) for evals. Fine-grained accuracy is defined over all five class labels and will be calculated for all labels, whereas binary accuracy is defined over negative vs. positive classification and will not be calculated for neutral labels.
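Stripped of TensorFlow, the two hit computations reduce to the following (a NumPy re-expression for intuition only, not part of the Fold model):

```python
import numpy as np

def fine_grained_hits(logits, labels):
    # Hit if the argmax over the 5 classes equals the 5-way label.
    return (logits.argmax(axis=1) == labels).astype(float)

def binary_hits(logits, labels):
    # Predict positive iff p(3) + p(4) > p(0) + p(1); intended for
    # non-neutral labels only (label 2 is excluded by the caller).
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    pred_positive = p[:, 3] + p[:, 4] > p[:, 0] + p[:, 1]
    return (pred_positive == (labels > 2)).astype(float)

logits = np.array([[0., 0., 0., 0., 5.],
                   [5., 0., 0., 0., 0.]])
labels = np.array([4, 1])
print(fine_grained_hits(logits, labels))  # [1. 0.]
print(binary_hits(logits, labels))        # [1. 1.]
```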
```
def tf_fine_grained_hits(logits, labels):
predictions = tf.cast(tf.argmax(logits, 1), tf.int32)
return tf.cast(tf.equal(predictions, labels), tf.float64)
def tf_binary_hits(logits, labels):
softmax = tf.nn.softmax(logits)
binary_predictions = (softmax[:, 3] + softmax[:, 4]) > (softmax[:, 0] + softmax[:, 1])
binary_labels = labels > 2
return tf.cast(tf.equal(binary_predictions, binary_labels), tf.float64)
```
The [`td.Metric`](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/py/td.md#class-tdmetric) block provides a mechanism for accumulating results across sequential and recursive computations without having to thread them through explicitly as return values. Metrics are wired up here inside of a [`td.Composition`](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/blocks.md#wiring-things-together-in-more-complicated-ways) block, which allows us to explicitly specify the inputs of sub-blocks with calls to `Block.reads()` inside of a [`Composition.scope()`](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/py/td.md#tdcompositionscope) context manager.
For training, we will sum the loss over all nodes. But for evals, we would like to separately calculate accuracies for the root (i.e. entire sentences) to match the numbers presented in the literature. We also need to distinguish between neutral and non-neutral sentiment labels, because binary sentiment doesn't get calculated for neutral nodes.
This is easy to do by putting our block creation code for calculating metrics inside of a function and passing it indicators. Note that this needs to be done in Python-land, because we can't inspect the contents of a tensor inside of Fold (since it hasn't been run yet).
```
def add_metrics(is_root, is_neutral):
"""A block that adds metrics for loss and hits; output is the LSTM state."""
c = td.Composition(
name='predict(is_root=%s, is_neutral=%s)' % (is_root, is_neutral))
with c.scope():
# destructure the input; (labels, (logits, state))
labels = c.input[0]
logits = td.GetItem(0).reads(c.input[1])
state = td.GetItem(1).reads(c.input[1])
# calculate loss
loss = td.Function(tf_node_loss)
td.Metric('all_loss').reads(loss.reads(logits, labels))
if is_root: td.Metric('root_loss').reads(loss)
# calculate fine-grained hits
hits = td.Function(tf_fine_grained_hits)
td.Metric('all_hits').reads(hits.reads(logits, labels))
if is_root: td.Metric('root_hits').reads(hits)
# calculate binary hits, if the label is not neutral
if not is_neutral:
binary_hits = td.Function(tf_binary_hits).reads(logits, labels)
td.Metric('all_binary_hits').reads(binary_hits)
if is_root: td.Metric('root_binary_hits').reads(binary_hits)
# output the state, which will be read by our parent's LSTM cell
c.output.reads(state)
return c
```
Use [NLTK](http://www.nltk.org/) to define a `tokenize` function to split S-exprs into left and right parts. We need this to run our `logits_and_state()` block since it expects to be passed a list of tokens and our raw input is strings.
```
def tokenize(s):
label, phrase = s[1:-1].split(None, 1)
return label, sexpr.sexpr_tokenize(phrase)
```
Try it out.
```
tokenize('(X Y)')
tokenize('(X Y Z)')
```
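`sexpr_tokenize` above comes from NLTK. For intuition, here is a dependency-free sketch that approximates its behavior on this corpus — parenthesized groups stay intact and bare words are split on spaces (the real NLTK tokenizer is more general):

```python
def simple_sexpr_tokenize(s):
    # Split a string into top-level s-expression tokens.
    tokens, depth, cur = [], 0, []
    for ch in s:
        if ch == '(':
            depth += 1
            cur.append(ch)
        elif ch == ')':
            depth -= 1
            cur.append(ch)
            if depth == 0:
                tokens.append(''.join(cur))
                cur = []
        elif ch == ' ' and depth == 0:
            if cur:
                tokens.append(''.join(cur))
                cur = []
        else:
            cur.append(ch)
    if cur:
        tokens.append(''.join(cur))
    return tokens

def tokenize(s):
    label, phrase = s[1:-1].split(None, 1)
    return label, simple_sexpr_tokenize(phrase)

print(tokenize('(X Y)'))                        # ('X', ['Y'])
print(tokenize('(4 (2 Spiderman) (3 ROCKS))'))  # ('4', ['(2 Spiderman)', '(3 ROCKS)'])
```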
Embed trees (represented as strings) by tokenizing and piping (`>>`) to `logits_and_state`, distinguishing between neutral and non-neutral labels. We don't know here whether or not we are the root node (since this is a recursive computation), so that gets threaded through as an indicator.
```
def embed_tree(logits_and_state, is_root):
"""Creates a block that embeds trees; output is tree LSTM state."""
return td.InputTransform(tokenize) >> td.OneOf(
key_fn=lambda pair: pair[0] == '2', # label 2 means neutral
case_blocks=(add_metrics(is_root, is_neutral=False),
add_metrics(is_root, is_neutral=True)),
pre_block=(td.Scalar('int32'), logits_and_state))
```
Put everything together and create our top-level (i.e. root) model. It is rather simple.
```
model = embed_tree(logits_and_state(), is_root=True)
```
Resolve the forward declaration for embedding subtrees (the non-root case) with a second call to `embed_tree`.
```
embed_subtree.resolve_to(embed_tree(logits_and_state(), is_root=False))
```
[Compile](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/running.md#batching-inputs) the model.
```
compiler = td.Compiler.create(model)
print('input type: %s' % model.input_type)
print('output type: %s' % model.output_type)
```
## Setup for training
Calculate means by summing the raw metrics.
```
metrics = {k: tf.reduce_mean(v) for k, v in compiler.metric_tensors.items()}
```
Magic numbers.
```
LEARNING_RATE = 0.05
KEEP_PROB = 0.75
BATCH_SIZE = 100
EPOCHS = 20
EMBEDDING_LEARNING_RATE_FACTOR = 0.1
```
Training with [Adagrad](https://www.tensorflow.org/versions/master/api_docs/python/train/optimizers#AdagradOptimizer).
```
train_feed_dict = {keep_prob_ph: KEEP_PROB}
loss = tf.reduce_sum(compiler.metric_tensors['all_loss'])
opt = tf.train.AdagradOptimizer(LEARNING_RATE)
```
Important detail from section 5.3 of [Tai et al.](http://arxiv.org/pdf/1503.00075.pdf): downscale the gradients for the word embedding vectors 10x, otherwise we overfit horribly.
```
grads_and_vars = opt.compute_gradients(loss)
found = 0
for i, (grad, var) in enumerate(grads_and_vars):
if var == word_embedding.weights:
found += 1
grad = tf.scalar_mul(EMBEDDING_LEARNING_RATE_FACTOR, grad)
grads_and_vars[i] = (grad, var)
assert found == 1 # internal consistency check
train = opt.apply_gradients(grads_and_vars)
saver = tf.train.Saver()
```
The TF graph is now complete; initialize the variables.
```
sess.run(tf.global_variables_initializer())
```
## Train the model
Start by defining a function that does a single step of training on a batch and returns the loss.
```
def train_step(batch):
train_feed_dict[compiler.loom_input_tensor] = batch
_, batch_loss = sess.run([train, loss], train_feed_dict)
return batch_loss
```
Now similarly for an entire epoch of training.
```
def train_epoch(train_set):
return sum(train_step(batch) for batch in td.group_by_batches(train_set, BATCH_SIZE))
```
Use [`Compiler.build_loom_inputs()`](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/py/td.md#tdcompilerbuild_loom_inputsexamples-metric_labelsfalse-chunk_size100-orderedfalse) to transform `train_trees` into individual loom inputs (i.e. wiring diagrams) that we can use to actually run the model.
```
train_set = compiler.build_loom_inputs(train_trees)
```
Use [`Compiler.build_feed_dict()`](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/py/td.md#tdcompilerbuild_feed_dictexamples-batch_sizenone-metric_labelsfalse-orderedfalse) to build a feed dictionary for validation on the dev set. This is marginally faster and more convenient than calling `build_loom_inputs`. We used `build_loom_inputs` on the train set so that we can shuffle the individual wiring diagrams into different batches for each epoch.
```
dev_feed_dict = compiler.build_feed_dict(dev_trees)
```
Define a function to do an eval on the dev set and pretty-print some stats, returning accuracy on the dev set.
```
def dev_eval(epoch, train_loss):
dev_metrics = sess.run(metrics, dev_feed_dict)
dev_loss = dev_metrics['all_loss']
dev_accuracy = ['%s: %.2f' % (k, v * 100) for k, v in
sorted(dev_metrics.items()) if k.endswith('hits')]
print('epoch:%4d, train_loss: %.3e, dev_loss_avg: %.3e, dev_accuracy:\n [%s]'
% (epoch, train_loss, dev_loss, ' '.join(dev_accuracy)))
return dev_metrics['root_hits']
```
Run the main training loop, saving the model after each epoch if it has the best accuracy on the dev set. Use the [`td.epochs`](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/py/td.md#tdepochsitems-nnone-shuffletrue-prngnone) utility function to memoize the loom inputs and shuffle them after every epoch of training.
```
best_accuracy = 0.0
save_path = os.path.join(data_dir, 'sentiment_model')
for epoch, shuffled in enumerate(td.epochs(train_set, EPOCHS), 1):
train_loss = train_epoch(shuffled)
accuracy = dev_eval(epoch, train_loss)
if accuracy > best_accuracy:
best_accuracy = accuracy
checkpoint_path = saver.save(sess, save_path, global_step=epoch)
print('model saved in file: %s' % checkpoint_path)
```
The model starts to overfit pretty quickly even with dropout, as the LSTM begins to memorize the training set (which is rather small).
## Evaluate the model
Restore the model from the last checkpoint, where we saw the best accuracy on the dev set.
```
saver.restore(sess, checkpoint_path)
```
See how we did.
```
test_results = sorted(sess.run(metrics, compiler.build_feed_dict(test_trees)).items())
print(' loss: [%s]' % ' '.join(
'%s: %.3e' % (name.rsplit('_', 1)[0], v)
for name, v in test_results if name.endswith('_loss')))
print('accuracy: [%s]' % ' '.join(
'%s: %.2f' % (name.rsplit('_', 1)[0], v * 100)
for name, v in test_results if name.endswith('_hits')))
```
Not bad! See section 3.5.1 of [our paper](https://arxiv.org/abs/1702.02181) for discussion and a comparison of these results to the state of the art.
Lists and Tuples
===
In this notebook, you will learn about lists, a super important data structure that allows you to store more than one value in a single variable. This is one of the most powerful ideas in programming, and it introduces a number of other central concepts such as loops.
[Previous: Variables, Strings, and Numbers](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/var_string_num.ipynb) |
[Home](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/index.ipynb) |
[Next: Introducing Functions](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/introducing_functions.ipynb)
Contents
===
- [Lists](#Lists)
- [Introducing Lists](#Introducing-Lists)
- [Example](#Example)
- [Naming and defining a list](#Naming-and-defining-a-list)
- [Accessing one item in a list](#Accessing-one-item-in-a-list)
- [Exercises](#Exercises-lists)
- [Lists and Looping](#Lists-and-Looping)
- [Accessing all elements in a list](#Accessing-all-elements-in-a-list)
- [Enumerating a list](#Enumerating-a-list)
- [Exercises](#Exercises-loops)
- [Common List Operations](#Common-List-Operations)
- [Modifying elements in a list](#Modifying-elements-in-a-list)
- [Finding an element in a list](#Finding-an-element-in-a-list)
- [Testing whether an element is in a list](#Testing-whether-an-element-is-in-a-list)
- [Adding items to a list](#Adding-items-to-a-list)
- [Creating an empty list](#Creating-an-empty-list)
- [Sorting a list](#Sorting-a-list)
- [Finding the length of a list](#Finding-the-length-of-a-list)
- [Exercises](#Exercises-operations)
- [Removing Items from a List](#Removing-Items-from-a-List)
- [Removing items by position](#Removing-items-by-position)
- [Removing items by value](#Removing-items-by-value)
- [Popping items](#Popping-items)
- [Exercises](#Exercises-removing)
- [Want to see what functions are?](#Want-to-see-what-functions-are?)
- [Slicing a List](#Slicing-a-list)
- [Copying a list](#Copying-a-list)
- [Exercises](#Exercises_slicing)
- [Numerical Lists](#Numerical-Lists)
- [The *range()* function](#The-*range()*-function)
- [The *min()*, *max()*, *sum()* functions](#min_max_sum)
- [Exercises](#Exercises_numerical)
- [List Comprehensions](#List-Comprehensions)
- [Numerical comprehensions](#Numerical-comprehensions)
- [Non-numerical comprehensions](#Non-numerical-comprehensions)
- [Exercises](#Exercises_comprehensions)
- [Strings as Lists](#Strings-as-lists)
- [Strings as a list of characters](#Strings-as-a-list-of-characters)
- [Slicing strings](#Slicing-strings)
- [Finding substrings](#Finding-substrings)
- [Replacing substrings](#Replacing-substrings)
- [Counting substrings](#Counting-substrings)
- [Splitting strings](#Splitting-strings)
- [Other string methods](#Other-string-methods)
- [Exercises](#Exercises-strings-as-lists)
- [Challenges](#Challenges-strings-as-lists)
- [Tuples](#Tuples)
- [Defining tuples, and accessing elements](#Defining-tuples,-and-accessing-elements)
- [Using tuples to make strings](#Using-tuples-to-make-strings)
- [Exercises](#Exercises_tuples)
- [Coding Style: PEP 8](#Coging-Style:-PEP-8)
- [Why have style conventions?](#Why-have-style-conventions?)
- [What is a PEP?](#What-is-a-PEP?)
- [Basic Python style guidelines](#Basic-Python-style-guidelines)
- [Exercises](#Exercises-pep8)
- [Overall Challenges](#Overall-Challenges)
Lists
===
Introducing Lists
===
Example
---
A list is a collection of items that is stored in a variable. The items should be related in some way, but there are no restrictions on what can be stored in a list. Here is a simple example of a list, and how we can quickly access each item in the list.
<div class="alert alert-block alert-info">
Lists are called "arrays" in many languages. Python has a related data-structure called an array that is part of the `numpy` (numerical python) package. We will talk about differences between lists and arrays later on.
</div>
Naming and defining a list
---
Since lists are collections of objects, it is good practice to give them a plural name. If each item in your list is an image, call the list `images`. If each item is a trial, call it `trials`. This gives you a straightforward way to refer to the entire list ('images'), and to a single item in the list ('image').
In Python, lists are designated by square brackets. You can define an empty list like this:
```
images = []
```
To define a list with some initial values, you include the values within the square brackets
```
images = ['dog', 'cat', 'panda']
```
Accessing one item in a list
---
Items in a list are identified by their position in the list, starting with zero. This sometimes trips people up.
To access the first element in a list, you give the name of the list, followed by a zero in square brackets.
```
images = ['dog', 'cat', 'panda']
print(images[0])
```
The number in square brackets is called the **index** of the item. Because lists start at zero, the index of an item is always one less than its position in the list. So to get the second item in the list, we need to use an index of 1.
```
images = ['dog', 'cat', 'panda']
print(images[1])
```
### Accessing the last items in a list
You can probably see that to get the last item in this list, we would use an index of 2. This works, but only because our list happens to have exactly three items. Because it is so common to need the *last* value of a list, Python provides a simple way of getting it without needing to know how long the list is. To get the last item of the list, we use the index -1.
```
###highlight=[4]
images = ['dog', 'cat', 'panda']
print(images[-1])
```
This syntax also works for the second to last item, the third to last, and so forth.
```
###highlight=[4]
images = ['dog', 'cat', 'panda']
print(images[-2])
```
If you attempt to use a negative index whose magnitude is larger than the length of the list, you will get an IndexError:
```
###highlight=[4]
images = ['dog', 'cat', 'panda']
print(images[-4])
```
<div class="alert alert-block alert-info">
If you are used to the syntax of some other languages, you may be tempted to get the last element in a list using syntax like `images[len(images)-1]`. This gives the same output as `images[-1]` but is more verbose, less clear, and thus dispreferred. (Note that `images[len(images)]`, without the `-1`, raises an IndexError, because indexing starts at zero.)
</div>
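To convince yourself that the two spellings pick out the same item, here is a minimal sketch (the list contents are just an example):

```python
images = ['dog', 'cat', 'panda']

# Both expressions select the final item; -1 is the idiomatic form.
last_verbose = images[len(images) - 1]
last_idiomatic = images[-1]
print(last_verbose, last_idiomatic)
```

Both lines print `panda`, no matter how long the list grows.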
[top](#)
<a id="Exercises-lists"></a>
Exercises
---
#### First List
- Store the values 'python', 'c', and 'java' in a list. Print each of these values out, using their position in the list.
#### First Neat List
- Store the values 'python', 'c', and 'java' in a list. Print a statement about each of these values, using their position in the list.
- Your statement could simply be, 'A nice programming language is *value*.'
#### Your First List
- Think of something you can store in a list. Make a list with three or four items, and then print a message that includes at least one item from your list. Your sentence could be as simple as, "One item in my list is a ____."
[top](#)
Lists and Looping
===
Accessing all elements in a list
---
This is one of the most important concepts related to lists. You can have a list with a million items in it, and in three lines of code you can write a sentence for each of those million items. If you want to understand lists, and become a competent programmer, make sure you take the time to understand this section.
We use a loop to access all the elements in a list. A loop is a block of code that repeats itself until it runs out of items to work with, or until a certain condition is met. In this case, our loop will run once for every item in our list. With a list that is three items long, our loop will run three times.
Let's take a look at how we access all the items in a list, and then try to understand how it works.
```
images = ['dog', 'cat', 'red tailed raccoon']
for image in images:
    print(image)
```
<div class="alert alert-block alert-info">
If you want to see all the values in a list, e.g., for purposes of debugging, you can simply print the entire list:
`print(images)`
</div>
```
print(images)
```
We have already seen how to create a list, so we are really just trying to understand how the last two lines work. These last two lines make up a loop, and the language here can help us see what is happening:
for image in images:
- The keyword "for" tells Python to get ready to use a loop.
- The variable "image", with no "s" on it, is a temporary placeholder variable. This is the variable that Python will place each item in the list into, one at a time.
<div class="alert alert-block alert-info">
This variable can be given any name, e.g., cur_image or image_to_show, but using a convention like image/images makes your code more understandable.
</div>
- The first time through the loop, the value of "image" will be 'dog'.
- The second time through the loop, the value of "image" will be 'cat'.
- The third time through, the value of "image" will be 'red tailed raccoon'.
- After this, there are no more items in the list, and the loop will end.
<div class="alert alert-block alert-info">
Notice that the last element in the list has several words. Despite containing multiple words, it is a single string. List values need not be strings. They can be any data-type including other lists, files, and functions. See [these examples](https://swcarpentry.github.io/python-novice-inflammation/03-lists/) for slightly more involved usages of lists.</div>
The site <a href="http://pythontutor.com/visualize.html#code=dogs+%3D+%5B'border+collie',+'australian+cattle+dog',+'labrador+retriever'%5D%0A%0Afor+dog+in+dogs%3A%0A++++print(dog)&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=3&curInstr=0">pythontutor.com</a> allows you to run Python code one line at a time. As you run the code, there is also a visualization on the screen that shows you how the variable "dog" holds different values as the loop progresses. There is also an arrow that moves around your code, showing you how some lines are run just once, while other lines are run multiple times. If you would like to see this in action, click the Forward button and watch the visualization, and the output as it is printed to the screen. Tools like this are incredibly valuable for seeing what Python is doing with your code.
### Doing more with each item
Switching to a list of dogs, we can do whatever we want with the value of "dog" inside the loop. In the simplest case, we just print the name of the dog.
    print(dog)
We are not limited to just printing the word dog. We can do whatever we want with this value, and this action will be carried out for every item in the list. Let's say something about each dog in our list.
```
###highlight=[5]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
    print('I like ' + dog + 's.')
```
Visualize this on <a href="http://pythontutor.com/visualize.html#code=dogs+%3D+%5B'border+collie',+'australian+cattle+dog',+'labrador+retriever'%5D%0A%0Afor+dog+in+dogs%3A%0A++++print('I+like+'+%2B+dog+%2B+'s.')&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=3&curInstr=0">pythontutor</a>.
### Inside and outside the loop
Python uses indentation to decide what is inside the loop and what is outside the loop. Code that is inside the loop will be run for every item in the list. Code that is not indented, which comes after the loop, will be run once just like regular code.
```
###highlight=[6,7,8]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
    print('I like ' + dog + 's.')
    print('No, I really really like ' + dog +'s!\n')
print("\nThat's just how I feel about dogs.")
```
Notice that the last line only runs once, after the loop is completed. Also notice the use of newlines ("\n") to make the output easier to read. Run this code on <a href="http://pythontutor.com/visualize.html#code=dogs+%3D+%5B'border+collie',+'australian+cattle+dog',+'labrador+retriever'%5D%0A%0Afor+dog+in+dogs%3A%0A++++print('I+like+'+%2B+dog+%2B+'s.')%0A++++print('No,+I+really+really+like+'+%2B+dog+%2B's!%5Cn')%0A++++%0Aprint(%22%5CnThat's+just+how+I+feel+about+dogs.%22)&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=3&curInstr=0">pythontutor</a>.
[top](#)
Enumerating a list
---
When you are looping through a list, you may sometimes not only want to access the current list element, but also want to know the index of the current item. The preferred (*Pythonic*) way of doing this is to use the `enumerate()` function which conveniently tracks the index of each item for you, as you loop through the list:
To enumerate a list, you need to add an *index* variable to hold the current index. So instead of
for dog in dogs:
You have
    for index, dog in enumerate(dogs):
The value in the variable *index* is always an integer. If you want to print it in a string, you have to turn the integer into a string:
str(index)
The index always starts at 0, so in this example the printed person number is the current index, plus one:
```
people = ['Desia', 'Pablo', 'Matt', 'Vincent', 'Tamara', 'Mengguo', 'Ian', 'Rui', 'Yuvraj', 'Steven', 'Katharine', 'Sasha', 'Nathan', 'Kristina', 'Olivia']
for i, person in enumerate(sorted(people)):
    print("Person number " + str(i + 1) + " in the class is " + person)
```
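If you want the numbering to begin at 1 without adding one by hand, `enumerate()` also accepts an optional starting count as its second argument. A minimal sketch, using a shortened example list:

```python
people = ['Desia', 'Pablo', 'Matt']

# enumerate(..., start=1) counts from 1, so no manual "+ 1" is needed.
lines = []
for place, person in enumerate(sorted(people), start=1):
    lines.append("Person number " + str(place) + " in the class is " + person)

for line in lines:
    print(line)
```

This prints "Person number 1 in the class is Desia", and so on.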
### A common looping error
One common looping error occurs when instead of using the single variable *dog* inside the loop, we accidentally use the variable that holds the entire list:
```
###highlight=[5]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
    print(dogs)
```
In this example, instead of printing each dog in the list, we print the entire list every time we go through the loop. Python puts each individual item in the list into the variable *dog*, but we never use that variable. Sometimes you will just get an error if you try to do this:
```
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
    print('I like ' + dogs + 's.')
```
<a id="Exercises-loops"></a>
Exercises
---
#### First List - Loop
- Repeat *First List*, but this time use a loop to print out each value in the list.
#### First Neat List - Loop
- Repeat *First Neat List*, but this time use a loop to print out your statements. Make sure you are writing the same sentence for all values in your list. Loops are not effective when you are trying to generate different output for each value in your list.
#### Your First List - Loop
- Repeat *Your First List*, but this time use a loop to print out your message for each item in your list. Again, if you came up with different messages for each value in your list, decide on one message to repeat for each value in your list.
[top](#)
Common List Operations
===
Modifying elements in a list
---
You can change the value of any element in a list if you know the position of that item.
```
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs[0] = 'australian shepherd'
print(dogs)
```
Finding an element in a list
---
If you want to find out the position of an element in a list, you can use the index() function.
```
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print(dogs.index('australian cattle dog'))
```
This method raises a ValueError if the requested item is not in the list.
```
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print(dogs.index('poodle'))
```
Testing whether an item is in a list
---
You can test whether an item is in a list using the "in" keyword. This will become more useful after learning how to use if-else statements.
```
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print('australian cattle dog' in dogs)
print('poodle' in dogs)
```
Adding items to a list
---
### Appending items to the end of a list
We can add an item to a list using the append() method. This method adds the new item to the end of the list.
```
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs.append('poodle')
for dog in dogs:
    print(dog.title() + "s are cool.")
```
### Inserting items into a list
We can also insert items anywhere we want in a list, using the **insert()** function. We specify the position we want the item to have, and everything from that point on is shifted one position to the right. In other words, the index of every item after the new item is increased by one.
```
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs.insert(1, 'poodle')
print(dogs)
```
Note that you have to give the position of the new item first, and then the value of the new item. If you do it in the reverse order, you will get an error.
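As a quick check of the argument order, here is a small sketch: passing the value first raises a TypeError, because *insert()* expects an integer index as its first argument.

```python
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']

# Correct order: index first, then the value.
dogs.insert(1, 'poodle')

# Reversed order: the string 'poodle' cannot be used as an index.
try:
    dogs.insert('poodle', 1)
    reversed_order_failed = False
except TypeError:
    reversed_order_failed = True

print(dogs)
print(reversed_order_failed)
```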
Creating an empty list
---
Now that we know how to add items to a list after it is created, we can use lists more dynamically. We are no longer stuck defining our entire list at once.
A common approach with lists is to define an empty list, and then let your program add items to the list as necessary. This approach works, for example, when starting to build an interactive web site. Your list of users might start out empty, and then as people register for the site it will grow. This is a simplified approach to how web sites actually work, but the idea is realistic.
Here is a brief example of how to start with an empty list, start to fill it up, and work with the items in the list. The only new thing here is the way we define an empty list, which is just an empty set of square brackets.
```
# Create an empty list to hold our users.
names = []
# Add some users.
names.append('Desia')
names.append('Pablo')
names.append('Matt')
# Greet everyone.
for name in names:
    print("Welcome, " + name + '!')
```
If we don't change the order in our list, we can use the list to figure out who our oldest and newest users are.
```
###highlight=[10,11,12]
# Create an empty list to hold our users.
names = []
# Add some users.
names.append('Desia')
names.append('Pablo')
names.append('Matt')
# Greet everyone.
for name in names:
    print("Welcome, " + name + '!')
# Recognize our first user, and welcome our newest user.
print("\nThank you for being our very first user, " + names[0].title() + '!')
print("And a warm welcome to our newest user, " + names[-1].title() + '!')
```
Note that the code welcoming our newest user will always work, because we have used the index -1. If we had used the index 2 we would always get the third user, even as our list of users grows and grows.
Sorting a List
---
We can sort a list alphabetically, in either order.
```
students = ['bernice', 'aaron', 'cody']
# Put students in alphabetical order.
students.sort()
# Display the list in its current order.
print("Our students are currently in alphabetical order.")
for student in students:
    print(student.title())
# Put students in reverse alphabetical order.
students.sort(reverse=True)
# Display the list in its current order.
print("\nOur students are now in reverse alphabetical order.")
for student in students:
    print(student.title())
```
### *sorted()* vs. *sort()*
Whenever you consider sorting a list with *sort()*, keep in mind that you cannot recover the original order. If you want to display a list in sorted order, but preserve the original order, you can use the *sorted()* function. The *sorted()* function also accepts the optional *reverse=True* argument.
```
students = ['bernice', 'aaron', 'cody']
# Display students in alphabetical order, but keep the original order.
print("Here is the list in alphabetical order:")
for student in sorted(students):
    print(student.title())
# Display students in reverse alphabetical order, but keep the original order.
print("\nHere is the list in reverse alphabetical order:")
for student in sorted(students, reverse=True):
    print(student.title())
print("\nHere is the list in its original order:")
# Show that the list is still in its original order.
for student in students:
    print(student.title())
```
### Reversing a list
We have seen three possible orders for a list:
- The original order in which the list was created
- Alphabetical order
- Reverse alphabetical order
There is one more order we can use, and that is the reverse of the original order of the list. The *reverse()* function gives us this order.
```
students = ['bernice', 'aaron', 'cody']
students.reverse()
print(students)
```
Note that reverse is permanent, although you could follow up with another call to *reverse()* and get back the original order of the list.
### Sorting a numerical list
All of the sorting functions work for numerical lists as well.
```
numbers = [1, 3, 4, 2]
# sort() puts numbers in increasing order.
numbers.sort()
print(numbers)
# sort(reverse=True) puts numbers in decreasing order.
numbers.sort(reverse=True)
print(numbers)
numbers = [1, 3, 4, 2]
# sorted() preserves the original order of the list:
print(sorted(numbers))
print(numbers)
numbers = [1, 3, 4, 2]
# The reverse() function also works for numerical lists.
numbers.reverse()
print(numbers)
```
Finding the length of a list
---
You can find the length of a list using the *len()* function.
```
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print(user_count)
```
There are many situations where you might want to know how many items are in a list. If you have a list that stores your users, you can find the length of your list at any time, and know how many users you have.
```
# Create an empty list to hold our users.
usernames = []
# Add some users, and report on how many users we have.
usernames.append('bernice')
user_count = len(usernames)
print("We have " + str(user_count) + " user!")
usernames.append('cody')
usernames.append('aaron')
user_count = len(usernames)
print("We have " + str(user_count) + " users!")
```
On a technical note, the *len()* function returns an integer, which can't be concatenated directly with strings. We use the *str()* function to turn the integer into a string so that it prints nicely:
```
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print("This will cause an error: " + user_count)
```
```
###highlight=[5]
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print("This will work: " + str(user_count))
```
<a id="Exercises-operations"></a>
Exercises
---
#### Working List
- Make a list that includes four careers, such as 'programmer' and 'truck driver'.
- Use the *list.index()* function to find the index of one career in your list.
- Use the *in* keyword to show that this career is in your list.
- Use the *append()* function to add a new career to your list.
- Use the *insert()* function to add a new career at the beginning of the list.
- Use a loop to show all the careers in your list.
#### Starting From Empty
- Create the list you ended up with in *Working List*, but this time start your file with an empty list and fill it up using *append()* statements.
- Print a statement that tells us what the first career you thought of was.
- Print a statement that tells us what the last career you thought of was.
#### Ordered Working List
- Start with the list you created in *Working List*.
- You are going to print out the list in a number of different orders.
- Each time you print the list, use a for loop rather than printing the raw list.
- Print a message each time telling us what order we should see the list in.
- Print the list in its original order.
- Print the list in alphabetical order.
- Print the list in its original order.
- Print the list in reverse alphabetical order.
- Print the list in its original order.
- Print the list in the reverse order from what it started.
- Print the list in its original order
- Permanently sort the list in alphabetical order, and then print it out.
- Permanently sort the list in reverse alphabetical order, and then print it out.
#### Ordered Numbers
- Make a list of 5 numbers, in a random order.
- You are going to print out the list in a number of different orders.
- Each time you print the list, use a for loop rather than printing the raw list.
- Print a message each time telling us what order we should see the list in.
- Print the numbers in the original order.
- Print the numbers in increasing order.
- Print the numbers in the original order.
- Print the numbers in decreasing order.
- Print the numbers in their original order.
- Print the numbers in the reverse order from how they started.
- Print the numbers in the original order.
- Permanently sort the numbers in increasing order, and then print them out.
- Permanently sort the numbers in decreasing order, and then print them out.
#### List Lengths
- Copy two or three of the lists you made from the previous exercises, or make up two or three new lists.
- Print out a series of statements that tell us how long each list is.
[top](#)
Removing Items from a List
===
Hopefully you can see by now that lists are a dynamic structure. We can define an empty list and then fill it up as information comes into our program. To become really dynamic, we need some ways to remove items from a list when we no longer need them. You can remove items from a list through their position, or through their value.
Removing items by position
---
If you know the position of an item in a list, you can remove that item using the *del* command. To use this approach, give the command *del* and the name of your list, with the index of the item you want to remove in square brackets:
```
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
# Remove the first dog from the list.
del dogs[0]
print(dogs)
```
Removing items by value
---
You can also remove an item from a list if you know its value. To do this, we use the *remove()* function. Give the name of the list, followed by the word remove with the value of the item you want to remove in parentheses. Python looks through your list, finds the first item with this value, and removes it.
```
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
# Remove australian cattle dog from the list.
dogs.remove('australian cattle dog')
print(dogs)
```
Be careful to note, however, that *only* the first item with this value is removed. If you have multiple items with the same value, you will have some items with this value left in your list.
```
letters = ['a', 'b', 'c', 'a', 'b', 'c']
# Remove the letter a from the list.
letters.remove('a')
print(letters)
```
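If you want to remove *every* occurrence of a value, one simple approach is to keep calling *remove()* for as long as the value is still in the list, using the *in* keyword from earlier. This sketch uses a *while* loop, which is covered in detail later; the idea is simply "repeat until no copies are left":

```python
letters = ['a', 'b', 'c', 'a', 'b', 'c']

# Keep removing 'a' until the list no longer contains it.
while 'a' in letters:
    letters.remove('a')

print(letters)
```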
Popping items from a list
---
There is a cool concept in programming called "popping" items from a collection. Every programming language has some sort of data structure similar to Python's lists. All of these structures can be used as stacks or queues, and there are various ways of processing the items they hold.
One simple approach is to start with an empty list, and then add items to that list. When you want to work with the items in the list, you always take the last item from the list, do something with it, and then remove that item. The *pop()* function makes this easy. It removes the last item from the list, and gives it to us so we can work with it. This is easier to show with an example:
```
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
last_dog = dogs.pop()
print(last_dog)
print(dogs)
```
This is an example of a first-in, last-out approach. The first item in the list would be the last item processed if you kept using this approach. We will see a full implementation of this approach later on, when we learn about *while* loops.
You can actually pop any item you want from a list, by giving the index of the item you want to pop. So we could do a first-in, first-out approach by popping the first item in the list:
```
###highlight=[3]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
first_dog = dogs.pop(0)
print(first_dog)
print(dogs)
```
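Combining *pop(0)* with a loop gives a complete first-in, first-out sketch. This uses a *while* loop, covered in detail later; an empty list counts as "false", so the loop stops once every dog has been popped:

```python
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']

processed = []
# Keep popping from the front until the list is empty.
while dogs:
    dog = dogs.pop(0)
    processed.append(dog)

print(processed)
print(dogs)
```

The dogs are processed in their original order, and the list ends up empty.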
<a id="Exercises-removing"></a>
Exercises
---
#### Famous People
- Make a list that includes the names of four famous people.
- Remove each person from the list, one at a time, using each of the four methods we have just seen:
- Pop the last item from the list, and pop any item except the last item.
- Remove one item by its position, and one item by its value.
- Print out a message that there are no famous people left in your list, and print your list to prove that it is empty.
[top](#)
Want to see what functions are?
===
At this point, you might have noticed we have a fair bit of repetitive code in some of our examples. This repetition will disappear once we learn how to use functions. If this repetition is bothering you already, you might want to go look at [Introducing Functions](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/introducing_functions.ipynb) before you do any more exercises in this section.
Slicing a List
===
Since a list is a collection of items, we should be able to get any subset of those items. For example, if we want to get just the first three items from the list, we should be able to do so easily. The same should be true for any three items in the middle of the list, or the last three items, or any x items from anywhere in the list. These subsets of a list are called *slices*.
To get a subset of a list, we give the position of the first item we want, and the position of the first item we do *not* want to include in the subset. So the slice *list[0:3]* will return a list containing items 0, 1, and 2, but not item 3. Here is how you get a batch containing the first three items.
```
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[0:3]
for user in first_batch:
    print(user.title())
```
If you want to grab everything up to a certain position in the list, you can also leave the first index blank:
```
###highlight=[5]
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[:3]
for user in first_batch:
    print(user.title())
```
When we grab a slice from a list, the original list is not affected:
```
###highlight=[7,8,9]
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[0:3]
# The original list is unaffected.
for user in usernames:
    print(user.title())
```
We can get any segment of a list we want, using the slice method:
```
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab a batch from the middle of the list.
middle_batch = usernames[1:4]
for user in middle_batch:
    print(user.title())
```
To get all items from one position in the list to the end of the list, we can leave off the second index:
```
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab all users from the third to the end.
end_batch = usernames[2:]
for user in end_batch:
    print(user.title())
```
### Copying a list
You can use the slice notation to make a copy of a list, by leaving out both the starting and the ending index. This causes the slice to consist of everything from the first item to the last, which is the entire list.
```
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Make a copy of the list.
copied_usernames = usernames[:]
print("The full copied list:\n\t", copied_usernames)
# Remove the first two users from the copied list.
del copied_usernames[0]
del copied_usernames[0]
print("\nTwo users removed from copied list:\n\t", copied_usernames)
# The original list is unaffected.
print("\nThe original list:\n\t", usernames)
```
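The slice copy matters because plain assignment does *not* copy a list; it just gives the same list a second name. A small sketch contrasting the two (the names `alias` and `copied` are just illustrative):

```python
usernames = ['bernice', 'cody', 'aaron']

alias = usernames      # a second name for the SAME list
copied = usernames[:]  # a genuinely new list with the same items

alias.append('ever')   # this changes the original list too
copied.append('dalia') # this leaves the original list alone

print(usernames)
print(copied)
```

After this runs, 'ever' appears in the original list but 'dalia' does not.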
<a id="Exercises-slicing"></a>
Exercises
---
#### Alphabet Slices
- Store the first ten letters of the alphabet in a list.
- Use a slice to print out the first three letters of the alphabet.
- Use a slice to print out any three letters from the middle of your list.
- Use a slice to print out the letters from any point in the middle of your list, to the end.
#### Protected List
- Your goal in this exercise is to prove that copying a list protects the original list.
- Make a list with three people's names in it.
- Use a slice to make a copy of the entire list.
- Add at least two new names to the new copy of the list.
- Make a loop that prints out all of the names in the original list, along with a message that this is the original list.
- Make a loop that prints out all of the names in the copied list, along with a message that this is the copied list.
[top](#)
Numerical Lists
===
There is nothing special about lists of numbers, but there are some functions you can use to make working with numerical lists more efficient. Let's make a list of the first ten numbers, and start working with it to see how we can use numbers in a list.
```
# Print out the first ten numbers.
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for number in numbers:
    print(number)
```
The *range()* function
---
This works, but it is not very efficient if we want to work with a large set of numbers. The *range()* function helps us generate long sequences of numbers. Here is the same output, produced using the *range()* function:
```
# Print the first ten numbers.
for number in range(1,11):
    print(number)
```
The *range()* function takes a starting number and an end number. You get all integers from the start, up to but not including the end number. You can also add a *step* value, which tells *range()* how big of a step to take between numbers:
```
# Print the first ten odd numbers.
for number in range(1,21,2):
    print(number)
```
If we want to store these numbers in a list, we can use the *list()* function. This function takes in a range, and turns it into a list:
```
# Create a list of the first ten numbers.
numbers = list(range(1,11))
print(numbers)
```
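The *step* argument works together with *list()* as well, so you can build, for example, the even numbers up to twenty in a single line:

```python
# range(start, end, step): every second number from 2 up to (not including) 21.
evens = list(range(2, 21, 2))
print(evens)
```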
This is incredibly powerful; we can now create a list of the first million numbers, just as easily as we made a list of the first ten numbers. It doesn't really make sense to print the million numbers here, but we can show that the list really does have one million items in it, and we can print the last ten items to show that the list is correct.
```
# Store the first million numbers in a list.
numbers = list(range(1,1000001))
# Show the length of the list:
print("The list 'numbers' has " + str(len(numbers)) + " numbers in it.")
# Show the last ten numbers:
print("\nThe last ten numbers in the list are:")
for number in numbers[-10:]:
    print(number)
```
There are two things here that might be a little unclear. The expression
str(len(numbers))
takes the length of the *numbers* list, and turns it into a string that can be printed.
The expression
numbers[-10:]
gives us a *slice* of the list. The index `-1` is the last item in the list, and the index `-10` is the item ten places from the end of the list. So the slice `numbers[-10:]` gives us everything from that item to the end of the list.
The *min()*, *max()*, and *sum()* functions
---
There are three functions you can easily use with numerical lists. As you might expect, the *min()* function returns the smallest number in the list, the *max()* function returns the largest number in the list, and the *sum()* function returns the total of all numbers in the list.
```
ages = [23, 16, 14, 28, 19, 11, 38]
youngest = min(ages)
oldest = max(ages)
total_years = sum(ages)
print("Our youngest reader is " + str(youngest) + " years old.")
print("Our oldest reader is " + str(oldest) + " years old.")
print("Together, we have " + str(total_years) + " years worth of life experience.")
```
<a id="Exercises-numerical"></a>
Exercises
---
#### First Twenty
- Use the *range()* function to store the first twenty numbers (1-20) in a list, and print them out.
#### Larger Sets
- Take the *first\_twenty.py* program you just wrote. Change your end number to a much larger number. How long does it take your computer to print out the first million numbers? (Most people will never see a million numbers scroll before their eyes. You can now see this!)
#### Five Wallets
- Imagine five wallets with different amounts of cash in them. Store these five values in a list, and print out the following sentences:
- "The fattest wallet has $ *value* in it."
- "The skinniest wallet has $ *value* in it."
- "All together, these wallets have $ *value* in them."
[top](#)
List Comprehensions
===
I thought carefully before including this section. If you are brand new to programming, list comprehensions may look confusing at first. They are a shorthand way of creating and working with lists. It is good to be aware of list comprehensions, because you will see them in other people's code, and they are really useful when you understand how to use them. That said, if they don't make sense to you yet, don't worry about using them right away. When you have worked with enough lists, you will want to use comprehensions. For now, it is good enough to know they exist, and to recognize them when you see them. If you like them, go ahead and start trying to use them now.
Numerical Comprehensions
---
Let's consider how we might make a list of the first ten square numbers. We could do it like this:
```
# Store the first ten square numbers in a list.
# Make an empty list that will hold our square numbers.
squares = []
# Go through the first ten numbers, square them, and add them to our list.
for number in range(1,11):
    new_square = number**2
    squares.append(new_square)
# Show that our list is correct.
for square in squares:
    print(square)
```
This should make sense at this point. If it doesn't, go over the code with these thoughts in mind:
- We make an empty list called *squares* that will hold the values we are interested in.
- Using the *range()* function, we start a loop that will go through the numbers 1-10.
- Each time we pass through the loop, we find the square of the current number by raising it to the second power.
- We add this new value to our list *squares*.
- We go through our newly-defined list and print out each square.
Now let's make this code more efficient. We don't really need to store the new square in its own variable *new_square*; we can just add it directly to the list of squares. The line
new_square = number**2
is taken out, and the next line takes care of the squaring:
```
###highlight=[8]
# Store the first ten square numbers in a list.
# Make an empty list that will hold our square numbers.
squares = []
# Go through the first ten numbers, square them, and add them to our list.
for number in range(1,11):
    squares.append(number**2)
# Show that our list is correct.
for square in squares:
    print(square)
```
List comprehensions allow us to collapse the first three lines of code into one line. Here's what it looks like:
```
###highlight=[2,3]
# Store the first ten square numbers in a list.
squares = [number**2 for number in range(1,11)]
# Show that our list is correct.
for square in squares:
    print(square)
```
It should be pretty clear that this code is more efficient than our previous approach, but it may not be clear what is happening. Let's take a look at everything that is happening in that first line:
We define a list called *squares*.
Look at the second part of what's in square brackets:
for number in range(1,11)
This sets up a loop that goes through the numbers 1-10, storing each value in the variable *number*. Now we can see what happens to each *number* in the loop:
number**2
Each number is raised to the second power, and this is the value that is stored in the list we defined. We might read this line in the following way:
squares = [raise *number* to the second power, for each *number* in the range 1-10]
### Another example
It is probably helpful to see a few more examples of how comprehensions can be used. Let's try to make the first ten even numbers, the longer way:
```
# Make an empty list that will hold the even numbers.
evens = []
# Loop through the numbers 1-10, double each one, and add it to our list.
for number in range(1,11):
    evens.append(number*2)
# Show that our list is correct:
for even in evens:
    print(even)
```
Here's how we might think of doing the same thing, using a list comprehension:
evens = [multiply each *number* by 2, for each *number* in the range 1-10]
Here is the same line in code:
```
###highlight=[2,3]
# Make a list of the first ten even numbers.
evens = [number*2 for number in range(1,11)]
for even in evens:
    print(even)
```
Non-numerical comprehensions
---
We can use comprehensions with non-numerical lists as well. In this case, we will create an initial list, and then use a comprehension to make a second list from the first one. Here is a simple example, without using comprehensions:
```
# Consider some students.
students = ['bernice', 'aaron', 'cody']
# Let's turn them into great students.
great_students = []
for student in students:
    great_students.append(student.title() + " the great!")
# Let's greet each great student.
for great_student in great_students:
    print("Hello, " + great_student)
```
To use a comprehension in this code, we want to write something like this:
great_students = [add 'the great' to each *student*, for each *student* in the list of *students*]
Here's what it looks like:
```
###highlight=[5,6]
# Consider some students.
students = ['bernice', 'aaron', 'cody']
# Let's turn them into great students.
great_students = [student.title() + " the great!" for student in students]
# Let's greet each great student.
for great_student in great_students:
    print("Hello, " + great_student)
```
<a id="Exercises-comprehensions"></a>
Exercises
---
If these examples are making sense, go ahead and try to do the following exercises using comprehensions. If not, try the exercises without comprehensions. You may figure out how to use comprehensions after you have solved each exercise the longer way.
#### Multiples of Ten
- Make a list of the first ten multiples of ten (10, 20, 30... 90, 100). There are a number of ways to do this, but try to do it using a list comprehension. Print out your list.
#### Cubes
- We saw how to make a list of the first ten squares. Make a list of the first ten cubes (1, 8, 27... 1000) using a list comprehension, and print them out.
#### Awesomeness
- Store five names in a list. Make a second list that adds the phrase "is awesome!" to each name, using a list comprehension. Print out the awesome version of the names.
#### Working Backwards
- Write out the following code without using a list comprehension:
plus_thirteen = [number + 13 for number in range(1,11)]
[top](#)
Strings as Lists
===
Now that you have some familiarity with lists, we can take a second look at strings. A string is really a list of characters, so many of the concepts you have learned for lists work the same way with strings.
Strings as a list of characters
---
We can loop through a string using a *for* loop, just like we loop through a list:
```
message = "Hello!"
for letter in message:
    print(letter)
```
We can create a list from a string. The list will have one element for each character in the string:
```
message = "Hello world!"
message_list = list(message)
print(message_list)
```
Slicing strings
---
We can access any character in a string by its position, just as we access individual items in a list:
```
message = "Hello World!"
first_char = message[0]
last_char = message[-1]
print(first_char, last_char)
```
We can extend this to take slices of a string:
```
message = "Hello World!"
first_three = message[:3]
last_three = message[-3:]
print(first_three, last_three)
```
Finding substrings
---
Now that you have seen what indexes mean for strings, we can search for *substrings*. A substring is a series of characters that appears in a string.
You can use the *in* keyword to find out whether a particular substring appears in a string:
```
message = "I like cats and dogs."
dog_present = 'dog' in message
print(dog_present)
```
If you want to know where a substring appears in a string, you can use the *find()* method. The *find()* method tells you the index at which the substring begins.
```
message = "I like cats and dogs."
dog_index = message.find('dog')
print(dog_index)
```
Note, however, that this function only returns the index of the first appearance of the substring you are looking for. If the substring appears more than once, you will miss the later occurrences.
```
###highlight=[2]
message = "I like cats and dogs, but I'd much rather own a dog."
dog_index = message.find('dog')
print(dog_index)
```
If you want to find the last appearance of a substring, you can use the *rfind()* function:
```
###highlight=[3,4]
message = "I like cats and dogs, but I'd much rather own a dog."
last_dog_index = message.rfind('dog')
print(last_dog_index)
```
Replacing substrings
---
You can use the *replace()* function to replace any substring with another substring. To use the *replace()* function, give the substring you want to replace, and then the substring you want to replace it with. You also need to store the new string, either in the same string variable or in a new variable.
```
message = "I like cats and dogs, but I'd much rather own a dog."
message = message.replace('dog', 'snake')
print(message)
```
Counting substrings
---
If you want to know how many times a substring appears within a string, you can use the *count()* method.
```
message = "I like cats and dogs, but I'd much rather own a dog."
number_dogs = message.count('dog')
print(number_dogs)
```
Splitting strings
---
Strings can be split into a set of substrings when they are separated by a repeated character. If a string consists of a simple sentence, the string can be split based on spaces. The *split()* function returns a list of substrings. The *split()* function takes one argument, the character that separates the parts of the string.
```
message = "I like cats and dogs, but I'd much rather own a dog."
words = message.split(' ')
print(words)
```
Notice that the punctuation is left in the substrings.
It is more common to split strings that are really lists, separated by something like a comma. The *split()* function gives you an easy way to turn a comma-separated string, which is hard to work with directly, into a list. Once you have your data in a list, you can work with it in much more powerful ways.
```
animals = "dog, cat, tiger, mouse, liger, bear"
# Rewrite the string as a list, and store it in the same variable
animals = animals.split(',')
print(animals)
```
Notice that in this case, the comma is the only separator, so every item after the first keeps a leading space. It is a good idea to test the output of the *split()* function and make sure it is doing what you want with the data you are interested in; the *strip()* method will remove any extra whitespace.
One use of this is to work with spreadsheet data in your Python programs. Most spreadsheet applications allow you to dump your data into a comma-separated text file. You can read this file into your Python program, or even copy and paste from the text file into your program file, and then turn the data into a list. You can then process your spreadsheet data using a *for* loop.
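As a sketch of that workflow, here is one comma-separated row (the data is made up) turned into a clean list. The *strip()* method removes the extra spaces that *split()* leaves behind:

```python
# One row copied from a hypothetical spreadsheet export.
row = "Eric, 23, blue, Soccer"

# Split on commas, then strip the leading space from each item.
values = [value.strip() for value in row.split(',')]
print(values)   # prints ['Eric', '23', 'blue', 'Soccer']
```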
Other string methods
---
There are a number of [other string methods](http://docs.python.org/3.3/library/stdtypes.html#string-methods) that we won't go into right here, but you might want to take a look at them. Most of these methods should make sense to you at this point. You might not have use for any of them right now, but it is good to know what you can do with strings. This way you will have a sense of how to solve certain problems, even if it means referring back to the list of methods to remind yourself how to write the correct syntax when you need it.
<a id="Exercises-strings-as-lists"></a>
Exercises
---
#### Listing a Sentence
- Store a single sentence in a variable. Use a for loop to print each character from your sentence on a separate line.
#### Sentence List
- Store a single sentence in a variable. Create a list from your sentence. Print your raw list (don't use a loop, just print the list).
#### Sentence Slices
- Store a sentence in a variable. Using slices, print out the first five characters, any five consecutive characters from the middle of the sentence, and the last five characters of the sentence.
#### Finding Python
- Store a sentence in a variable, making sure you use the word *Python* at least twice in the sentence.
- Use the *in* keyword to prove that the word *Python* is actually in the sentence.
- Use the *find()* function to show where the word *Python* first appears in the sentence.
- Use the *rfind()* function to show the last place *Python* appears in the sentence.
- Use the *count()* function to show how many times the word *Python* appears in your sentence.
- Use the *split()* function to break your sentence into a list of words. Print the raw list, and use a loop to print each word on its own line.
- Use the *replace()* function to change *Python* to *Ruby* in your sentence.
<a id="Challenges-strings-as-lists"></a>
Challenges
---
#### Counting DNA Nucleotides
- [Project Rosalind](http://rosalind.info/problems/locations/) is a [problem set](http://rosalind.info/problems/list-view/) based on biotechnology concepts. It is meant to show how programming skills can help solve problems in genetics and biology.
- If you have understood this section on strings, you have enough information to solve the first problem in Project Rosalind, [Counting DNA Nucleotides](http://rosalind.info/problems/dna/). Give the sample problem a try.
- If you get the sample problem correct, log in and try the full version of the problem!
#### Transcribing DNA into RNA
- You also have enough information to try the second problem, [Transcribing DNA into RNA](http://rosalind.info/problems/rna/). Solve the sample problem.
- If you solved the sample problem, log in and try the full version!
#### Complementing a Strand of DNA
- You guessed it, you can now try the third problem as well: [Complementing a Strand of DNA](http://rosalind.info/problems/revc/). Try the sample problem, and then try the full version if you are successful.
[top](#)
Tuples
===
Tuples are basically lists that can never be changed. Lists are dynamic; they grow as you append and insert items and they can shrink as you remove items. You can modify any element you want to in a list. Sometimes we like this behavior, but other times we may want to ensure that no user or no part of a program can change a list. That's what tuples are for.
<div class="alert alert-block alert-info">
Technically, lists are *mutable* objects and tuples are *immutable* objects. Mutable objects can change (think of *mutations*), and immutable objects can not change.
</div>
Defining tuples, and accessing elements
---
You define a tuple just like you define a list, except you use parentheses instead of square brackets. Once you have a tuple, you can access individual elements just like you can with a list, and you can loop through the tuple with a *for* loop:
```
colors = ('red', 'green', 'blue')
print("The first color is: " + colors[0])
print("\nThe available colors are:")
for color in colors:
    print("- " + color)
```
If you try to add something to a tuple, you will get an error:
```
colors = ('red', 'green', 'blue')
colors.append('purple')
```
The same kind of thing happens when you try to remove something from a tuple, or modify one of its elements. Once you define a tuple, you can be confident that its values will not change.
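As a quick sketch, you can confirm this behavior by catching the error Python raises when you try to modify an element:

```python
colors = ('red', 'green', 'blue')

# Reading elements works just like with a list.
print(colors[0])          # prints red

# Assigning to an element raises a TypeError, because tuples are immutable.
try:
    colors[0] = 'purple'
except TypeError:
    print("Tuples can't be modified.")
```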
Using tuples to make strings
---
We have seen that it is pretty useful to be able to mix raw English strings with values that are stored in variables, as in the following:
```
animal = 'dog'
print("I have a " + animal + ".")
```
This was especially useful when we had a series of similar statements to make:
```
animals = ['dog', 'cat', 'bear']
for animal in animals:
    print("I have a " + animal + ".")
```
I like this approach of using the plus sign to build strings because it is fairly intuitive. We can see that we are adding several smaller strings together to make one longer string. This is intuitive, but it is a lot of typing. There is a shorter way to do this, using *placeholders*.
Python ignores most of the characters we put inside of strings. There are a few characters that Python pays attention to, as we saw with strings such as "\t" and "\n". Python also pays attention to "%s" and "%d". These are placeholders. When Python sees the "%s" placeholder, it looks ahead and pulls in the first argument after the % sign:
```
animal = 'dog'
print("I have a %s." % animal)
```
This is a much cleaner way of generating strings that include values. We compose our sentence all in one string, and then tell Python what values to pull into the string, in the appropriate places.
This is called *string formatting*, and it looks the same when you use a list:
```
animals = ['dog', 'cat', 'bear']
for animal in animals:
print("I have a %s." % animal)
```
If you have more than one value to put into the string you are composing, you have to pack the values into a tuple:
```
animals = ['dog', 'cat', 'bear']
print("I have a %s, a %s, and a %s." % (animals[0], animals[1], animals[2]))
```
### String formatting with numbers
If you recall, printing a number with a string can cause an error:
```
number = 23
print("My favorite number is " + number + ".")
```
Python knows that you could be talking about the value 23, or the characters '23'. So it throws an error, forcing us to clarify that we want Python to treat the number as a string. We do this by *casting* the number into a string using the *str()* function:
```
###highlight=[3]
number = 23
print("My favorite number is " + str(number) + ".")
```
The format string "%d" takes care of this for us. Watch how clean this code is:
```
###highlight=[3]
number = 23
print("My favorite number is %d." % number)
```
If you want to use a series of numbers, you pack them into a tuple just like we saw with strings:
```
numbers = [7, 23, 42]
print("My favorite numbers are %d, %d, and %d." % (numbers[0], numbers[1], numbers[2]))
```
Just for clarification, look at how much longer the code is if you use concatenation instead of string formatting:
```
###highlight=[3]
numbers = [7, 23, 42]
print("My favorite numbers are " + str(numbers[0]) + ", " + str(numbers[1]) + ", and " + str(numbers[2]) + ".")
```
You can mix string and numerical placeholders in any order you want.
```
names = ['eric', 'ever']
numbers = [23, 2]
print("%s's favorite number is %d, and %s's favorite number is %d." % (names[0].title(), numbers[0], names[1].title(), numbers[1]))
```
There are more sophisticated ways to do string formatting in Python 3, but we will save that for later because it's a bit less intuitive than this approach. For now, you can use whichever approach consistently gets you the output that you want to see.
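If you are curious, here is a brief sketch of the newer *format()* approach; we won't use it yet, but you may see it in other people's code:

```python
names = ['eric', 'ever']
numbers = [23, 2]

# The format() method uses {} placeholders instead of %s and %d,
# and handles numbers and strings without any casting.
sentence = "{}'s favorite number is {}.".format(names[0].title(), numbers[0])
print(sentence)   # prints Eric's favorite number is 23.
```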
<a id="Exercises-tuples"></a>
Exercises
---
#### Gymnast Scores
- A gymnast can earn a score between 1 and 10 from each judge; nothing lower, nothing higher. All scores are integer values; there are no decimal scores from a single judge.
- Store the possible scores a gymnast can earn from one judge in a tuple.
- Print out the sentence, "The lowest possible score is \_\_\_, and the highest possible score is \_\_\_." Use the values from your tuple.
- Print out a series of sentences, "A judge can give a gymnast ___ points."
- Don't worry if your first sentence reads "A judge can give a gymnast 1 points."
- However, you get 1000 bonus internet points if you can use a for loop, and have correct grammar. [hint](#hints_gymnast_scores)
#### Revision with Tuples
- Choose a program you have already written that uses string concatenation.
- Save the program with the same filename, but add *\_tuple.py* to the end. For example, *gymnast\_scores.py* becomes *gymnast\_scores_tuple.py*.
- Rewrite your string sections using *%s* and *%d* instead of concatenation.
- Repeat this with two other programs you have already written.
[top](#)
Coding Style: PEP 8
===
You are now starting to write Python programs that have a little substance. Your programs are growing a little longer, and there is a little more structure to your programs. This is a really good time to consider your overall style in writing code.
Why do we need style conventions?
---
The people who originally developed Python made some of their decisions based on the realization that code is read much more often than it is written. They paid as much attention to making the language easy to read as to making it easy to write. Python has gained a lot of respect as a programming language because of how readable the code is. You have seen that Python uses indentation to show which lines in a program are grouped together. This makes the structure of your code visible to anyone who reads it. There are, however, some styling decisions we get to make as programmers that can make our programs more readable for ourselves, and for others.
There are several audiences to consider when you think about how readable your code is.
#### Yourself, 6 months from now
- You know what you are thinking when you write code for the first time. But how easily will you recall what you were thinking when you come back to that code tomorrow, next week, or six months from now? We want our code to be as easy to read as possible six months from now, so we can jump back into our projects when we want to.
#### Other programmers you might want to collaborate with
- Every significant project is the result of collaboration these days. If you stay in programming, you will work with others in jobs and in open source projects. If you write readable code with good comments, people will be happy to work with you in any setting.
#### Potential employers
- Most people who hire programmers will ask to see some code you have written, and they will probably ask you to write some code during your interview. If you are in the habit of writing code that is easy to read, you will do well in these situations.
What is a PEP?
---
A PEP is a *Python Enhancement Proposal*. When people want to suggest changes to the actual Python language, someone drafts a Python Enhancement Proposal. One of the earliest PEPs was a collection of guidelines for writing code that is easy to read. It was PEP 8, the [Style Guide for Python Code](http://www.python.org/dev/peps/pep-0008/). There is a lot in there that won't make sense to you for some time yet, but there are some suggestions that you should be aware of from the beginning. Starting with good style habits now will help you write clean code from the beginning, which will help you make sense of your code as well.
Basic Python style guidelines
---
#### Indentation
- Use 4 spaces for indentation. This is enough space to give your code some visual structure, while leaving room for multiple indentation levels. There are configuration settings in most editors to automatically convert tabs to 4 spaces, and it is a good idea to check this setting. On Geany, this is under Edit>Preferences>Editor>Indentation; set Width to 4, and Type to *Spaces*.
#### Line Length
- Use up to 79 characters per line of code, and 72 characters for comments. This is a style guideline that some people adhere to and others completely ignore. This used to relate to a limit on the display size of most monitors. Now almost every monitor is capable of showing much more than 80 characters per line. But we often work in terminals, which are not always high-resolution. We also like to have multiple code files open, next to each other. It turns out this is still a useful guideline to follow in most cases. There is a secondary guideline of sticking to 99 characters per line, if you want longer lines.
Many editors have a setting that shows a vertical line that helps you keep your lines to a certain length. In Geany, you can find this setting under Edit>Preferences>Editor>Display. Make sure "Long Line Marker" is enabled, and set "Column" to 79.
#### Blank Lines
- Use single blank lines to break up your code into meaningful blocks. You have seen this in many examples so far. You can use two blank lines in longer programs, but don't get excessive with blank lines.
#### Comments
- Use a single space after the pound sign at the beginning of a line. If you are writing more than one paragraph, use an empty line with a pound sign between paragraphs.
#### Naming Variables
- Name variables and program files using only lowercase letters, underscores, and numbers. Python won't complain or throw errors if you use capitalization, but you will mislead other programmers if you use capital letters in variables at this point.
That's all for now. We will go over more style guidelines as we introduce more complicated programming structures. If you follow these guidelines for now, you will be well on your way to writing readable code that professionals will respect.
<a id="Exercises-pep8"></a>
Exercises
---
#### Skim PEP 8
- If you haven't done so already, skim [PEP 8 - Style Guide for Python Code](http://www.python.org/dev/peps/pep-0008/#block-comments). As you continue to learn Python, go back and look at this every once in a while. I can't stress enough that many good programmers will take you much more seriously from the start if you are following community-wide conventions as you write your code.
#### Implement PEP 8
- Take three of your longest programs, and add the extension *\_pep8.py* to the filename of each program. Revise your code so that it meets the styling conventions listed above.
[top](#)
Overall Challenges
===
#### Programming Words
- Make a list of the most important words you have learned in programming so far. You should have terms such as *list*, *loop*, and *variable*.
- Make a corresponding list of definitions. Fill your list with 'definition'.
- Use a for loop to print out each word and its corresponding definition.
- Maintain this program until you get to the section on Python's Dictionaries.
[top](#)
- - -
[Previous: Variables, Strings, and Numbers](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/var_string_num.ipynb) |
[Home](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/index.ipynb) |
[Next: Introducing Functions](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/introducing_functions.ipynb)
Hints
===
These are placed at the bottom, so you can have a chance to solve exercises without seeing any hints.
#### Gymnast Scores
- Hint: Use a slice.
[top](#)
# Operations on word vectors
Welcome to your first assignment of this week!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
**After this assignment you will be able to:**
- Load pre-trained word vectors, and measure similarity using cosine similarity
- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
- Modify word embeddings to reduce their gender bias
Let's get started! Run the following cell to load the packages you will need.
```
import numpy as np
from w2v_utils import *
```
Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the `word_to_vec_map`.
```
words, word_to_vec_map = read_glove_vecs('../../readonly/glove.6B.50d.txt')
```
You've loaded:
- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
You've seen that one-hot vectors do not do a good job capturing what words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.
# 1 - Cosine similarity
To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> **Figure 1**: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>
**Exercise**: Implement the function `cosine_similarity()` to evaluate similarity between word vectors.
**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
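Before filling in the graded function, it may help to see the formula at work on small made-up vectors (this is only an illustration with toy inputs, not the graded solution):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, 4.0, 6.0])   # v is a scaled copy of u, so the angle between them is 0

# cos(theta) = (u . v) / (||u||_2 * ||v||_2)
cos_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos_sim)   # close to 1.0, because u and v point in the same direction
```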
```
# GRADED FUNCTION: cosine_similarity

def cosine_similarity(u, v):
    """
    Cosine similarity reflects the degree of similarity between u and v

    Arguments:
        u -- a word vector of shape (n,)
        v -- a word vector of shape (n,)

    Returns:
        cosine_similarity -- the cosine similarity between u and v defined by the formula above.
    """

    distance = 0.0

    ### START CODE HERE ###
    # Compute the dot product between u and v (≈1 line)
    dot = None
    # Compute the L2 norm of u (≈1 line)
    norm_u = None
    # Compute the L2 norm of v (≈1 line)
    norm_v = None
    # Compute the cosine similarity defined by formula (1) (≈1 line)
    cosine_similarity = None
    ### END CODE HERE ###

    return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
```
**Expected Output**:
<table>
<tr>
<td>
**cosine_similarity(father, mother)** =
</td>
<td>
0.890903844289
</td>
</tr>
<tr>
<td>
**cosine_similarity(ball, crocodile)** =
</td>
<td>
0.274392462614
</td>
</tr>
<tr>
<td>
**cosine_similarity(france - paris, rome - italy)** =
</td>
<td>
-0.675147930817
</td>
</tr>
</table>
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around the cosine similarity of other inputs will give you a better sense of how word vectors behave.
## 2 - Word analogy task
In the word analogy task, we complete the sentence <font color='brown'>"*a* is to *b* as *c* is to **____**"</font>. An example is <font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>. In detail, we are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
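As a sketch of the idea with tiny made-up two-dimensional embeddings (real GloVe vectors are 50-dimensional, and these toy values are purely illustrative):

```python
import numpy as np

# Toy embeddings: the second coordinate encodes "gender", the first "royalty".
e_man, e_woman = np.array([1.0, 1.0]), np.array([1.0, -1.0])
e_king, e_queen = np.array([5.0, 1.0]), np.array([5.0, -1.0])

# The analogy "man is to woman as king is to queen" holds because
# e_woman - e_man and e_queen - e_king are the same difference vector.
print(e_woman - e_man)
print(e_queen - e_king)
```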
**Exercise**: Complete the code below to be able to perform word analogies!
```
# GRADED FUNCTION: complete_analogy

def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
    """
    Performs the word analogy task as explained above: a is to b as c is to ____.

    Arguments:
        word_a -- a word, string
        word_b -- a word, string
        word_c -- a word, string
        word_to_vec_map -- dictionary that maps words to their corresponding vectors.

    Returns:
        best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
    """

    # convert words to lower case
    word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()

    ### START CODE HERE ###
    # Get the word embeddings e_a, e_b and e_c (≈1-3 lines)
    e_a, e_b, e_c = None
    ### END CODE HERE ###

    words = word_to_vec_map.keys()
    max_cosine_sim = -100    # Initialize max_cosine_sim to a large negative number
    best_word = None         # Initialize best_word with None, it will help keep track of the word to output

    # loop over the whole word vector set
    for w in words:
        # to avoid best_word being one of the input words, pass on them.
        if w in [word_a, word_b, word_c]:
            continue

        ### START CODE HERE ###
        # Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
        cosine_sim = None

        # If the cosine_sim is more than the max_cosine_sim seen so far,
        # then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
        if None > None:
            max_cosine_sim = None
            best_word = None
        ### END CODE HERE ###

    return best_word
```
Run the cell below to test your code; this may take 1-2 minutes.
```
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
```
**Expected Output**:
<table>
<tr>
<td>
**italy -> italian** ::
</td>
<td>
spain -> spanish
</td>
</tr>
<tr>
<td>
**india -> delhi** ::
</td>
<td>
japan -> tokyo
</td>
</tr>
<tr>
<td>
**man -> woman** ::
</td>
<td>
boy -> girl
</td>
</tr>
<tr>
<td>
**small -> smaller** ::
</td>
<td>
large -> larger
</td>
</tr>
</table>
Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?.
### Congratulations!
You've come to the end of this assignment. Here are the main points you should remember:
- Cosine similarity is a good way to compare the similarity between pairs of word vectors. (Though L2 distance works too.)
- For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.
Even though you have finished the graded portions, we recommend you also take a look at the rest of this notebook.
Congratulations on finishing the graded portions of this notebook!
## 3 - Debiasing word vectors (OPTIONAL/UNGRADED)
In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.
Let's first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector for the word *woman*, and $e_{man}$ represents the word vector for the word *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
```
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
```
Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
```
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
```
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
### 3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center> **Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation. </center></caption>
**Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.
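On toy 2-d vectors (not the graded 50-d GloVe case), equations (2) and (3) reduce to a single orthogonal-projection step:

```python
import numpy as np

# Toy 2-d vectors, not GloVe embeddings: remove the component of e along g
e = np.array([2.0, 1.0])
g = np.array([1.0, 0.0])

e_bias = (np.dot(e, g) / np.sum(g * g)) * g  # equation (2): projection onto g
e_debiased = e - e_bias                      # equation (3)

print(e_debiased)             # [0. 1.]
print(np.dot(e_debiased, g))  # 0.0, i.e. no component left along g
```

The graded `neutralize()` below is the same computation with `e` looked up from `word_to_vec_map`.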
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection onto a vector-axis $v$ and its projection onto the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where: $u_B = \frac{u \cdot v}{||v||_2^2} v$ and $u_{\perp} = u - u_B$
-->
```
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = None
# Compute e_biascomponent using the formula give above. (≈ 1 line)
e_biascomponent = None
# Neutralize e by substracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = None
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
```
**Expected Output**: The second result is essentially 0, up to numerical round-off (on the order of $10^{-17}$).
<table>
<tr>
<td>
**cosine similarity between receptionist and g, before neutralizing:** :
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
**cosine similarity between receptionist and g, after neutralizing:** :
</td>
<td>
-3.26732746085e-17
</td>
</tr>
</table>
### 3.2 - Equalization algorithm for gender-specific words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor" is. By applying neutralization to "babysit," we can reduce the gender stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words is equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized vectors are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{7}$$
$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{8}$$
$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {||(e_{w1} - \mu_{\perp}) - \mu_B||} \tag{9}$$
$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {||(e_{w2} - \mu_{\perp}) - \mu_B||} \tag{10}$$
$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$
$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$
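As a quick numerical check, the whole chain (4)-(12) can be run on made-up 2-d vectors (so the algebra can be eyeballed; these are not the GloVe inputs used below). The two outputs end up with equal-magnitude, opposite-sign cosine similarity to the bias axis:

```python
import numpy as np

def equalize_sketch(e_w1, e_w2, g):
    # Equations (4)-(12) written out on plain NumPy arrays
    mu = (e_w1 + e_w2) / 2                                   # (4)
    mu_B = (np.dot(mu, g) / np.sum(g * g)) * g               # (5)
    mu_orth = mu - mu_B                                      # (6)
    e_w1B = (np.dot(e_w1, g) / np.sum(g * g)) * g            # (7)
    e_w2B = (np.dot(e_w2, g) / np.sum(g * g)) * g            # (8)
    scale = np.sqrt(abs(1 - np.sum(mu_orth ** 2)))
    c1 = scale * (e_w1B - mu_B) / np.linalg.norm((e_w1 - mu_orth) - mu_B)  # (9)
    c2 = scale * (e_w2B - mu_B) / np.linalg.norm((e_w2 - mu_orth) - mu_B)  # (10)
    return c1 + mu_orth, c2 + mu_orth                        # (11), (12)

cos = lambda u, v: np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
g = np.array([1.0, 0.0])  # toy bias axis
e1, e2 = equalize_sketch(np.array([0.5, 0.3]), np.array([-0.2, 0.4]), g)
print(round(cos(e1, g), 6), round(cos(e2, g), 6))  # equal magnitude, opposite sign
```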
**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
```
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = None
e_w1, e_w2 = None
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = None
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = None
mu_orth = None
# Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
e_w1B = None
e_w2B = None
# Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
corrected_e_w1B = None
corrected_e_w2B = None
# Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
e1 = None
e2 = None
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
```
**Expected Output**:
cosine similarities before equalizing:
<table>
<tr>
<td>
**cosine_similarity(word_to_vec_map["man"], gender)** =
</td>
<td>
-0.117110957653
</td>
</tr>
<tr>
<td>
**cosine_similarity(word_to_vec_map["woman"], gender)** =
</td>
<td>
0.356666188463
</td>
</tr>
</table>
cosine similarities after equalizing:
<table>
<tr>
<td>
**cosine_similarity(e1, gender)** =
</td>
<td>
-0.700436428931
</td>
</tr>
<tr>
<td>
**cosine_similarity(e2, gender)** =
</td>
<td>
0.700436428931
</td>
</tr>
</table>
Please feel free to play with the input words in the cell above, to apply equalization to other pairs of words.
These debiasing algorithms are very helpful for reducing bias, but are not perfect and do not eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the "gender" dimension in the 50 dimensional word embedding space. Feel free to play with such variants as well.
### Congratulations
You have come to the end of this notebook, and have seen a lot of the ways that word vectors can be used as well as modified.
Congratulations on finishing this notebook!
**References**:
- The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to
Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)
- The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)
# **Birth weight prediction**
---
## Load Libraries
```
import os
import numpy as np
import pandas as pd
import seaborn as sn
import matplotlib.pyplot as plt
from joblib import dump, load
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
```
## Load The data
```
data = pd.read_csv('./inputs/baby-weights-dataset2.csv')
data.columns
data.head()
```
### Dropping columns that may introduce bias
```
columns = ['ID', 'MARITAL', 'FEDUC', 'MEDUC', 'HISPMOM', 'HISPDAD']
data.drop(columns, inplace=True, axis=1, errors='ignore')
data.columns
data.head()
```
## Correlation Analysis
### Standardize the data
```
y = data[['BWEIGHT']]
sc_y = StandardScaler()
# fit and transform the data
y_std = sc_y.fit_transform(y)
y_std = pd.DataFrame(y_std, columns = y.columns)
y_std.head()
df = pd.DataFrame(sc_y.inverse_transform(y_std), columns=['y_std'])
#Pounds to kilograms
df['y_kg'] = df['y_std'].apply(lambda x: x * .454)
plt.scatter(df.index, df['y_kg'], color = 'red', label = 'Real data')
plt.legend()
plt.show()
X = data.drop(['BWEIGHT'], axis=1)
sc_X = StandardScaler()
# fit and transform the data
X_std = sc_X.fit_transform(X)
X_std = pd.DataFrame(X_std, columns = X.columns)
X_std.head()
#To save scalers
os.makedirs('outputs', exist_ok=True)
dump(sc_X, './outputs/std_scaler_X.bin', compress=True)
dump(sc_y, './outputs/std_scaler_y.bin', compress=True)
corrMatrix = X_std.corr()
print (corrMatrix)
sn.heatmap(corrMatrix)
plt.show()
```
### Dropping highly correlated features
```
corr_abs = corrMatrix.abs()
upper_tri = corr_abs.where(np.triu(np.ones(corr_abs.shape), k=1).astype(bool))  # astype(bool): np.bool is deprecated
to_drop = [column for column in upper_tri.columns if any(upper_tri[column] > 0.7)]
print(to_drop)
#data_std.drop(to_drop, inplace=True, axis=1, errors='ignore')
#data_std.columns
```
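The upper-triangle trick above can be sanity-checked on a tiny NumPy-only example (toy matrix, not the baby-weights data): only entries above the diagonal are inspected, so each correlated pair is counted once and only the second column of the pair is flagged.

```python
import numpy as np

# Toy stand-in for the feature matrix: column b is exactly 2*a, column c is noise
X = np.array([[1, 2, 4], [2, 4, 1], [3, 6, 3], [4, 8, 2]], dtype=float)
names = ["a", "b", "c"]

corr_abs = np.abs(np.corrcoef(X, rowvar=False))  # |correlation| matrix
upper = np.triu(corr_abs, k=1)                   # entries above the diagonal only
to_drop = [names[j] for j in range(len(names)) if np.any(upper[:, j] > 0.7)]
print(to_drop)  # ['b'] (perfectly correlated with 'a')
```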
## Split the data
```
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X_std, y_std, test_size=0.3, random_state=42)
X_test, X_val, y_test, y_val = train_test_split(X_test, y_test, test_size=0.33, random_state=42)
X_val.to_json('./inputs/X_validation_data.json', orient="split")
y_val.to_json('./inputs/y_validation_data.json', orient="split")
import json
columns = X_val.columns.to_list()
with open("./outputs/columns.txt", "w") as fp:
json.dump(columns, fp)
```
## Model Training
```
from azureml.core import Workspace
ws = Workspace.get(name='demo-aml',subscription_id='YOUR-SUBSCRIPTION-ID',resource_group='demo-aml')
from azureml.core import Experiment
experiment_name = 'BABY-WEIGHT-EXP'
exp = Experiment(workspace=ws, name=experiment_name)
```
### Model1: Xgboost
```
run = exp.start_logging()
from xgboost import XGBRegressor
# define model
model = XGBRegressor()
# fit model
model.fit(X_train, y_train)
```
#### Evaluation
```
yhat = model.predict(X_test)
df1 = pd.DataFrame(sc_y.inverse_transform(yhat.reshape(-1, 1)), columns=['y_hat'])  # predict() returns 1-D; the scaler expects 2-D
df2 = pd.DataFrame(sc_y.inverse_transform(y_test), columns=['y_test'])
df = pd.concat([df1, df2], axis=1)
#Pounds to kilograms
df['y_hat_kg'] = df['y_hat'].apply(lambda x: x * .454)
df['y_test_kg'] = df['y_test'].apply(lambda x: x * .454)
df.head()
plt.scatter(df.index, df['y_test_kg'], color = 'red', label = 'Real data')
plt.scatter(df.index, df['y_hat_kg'], color = 'blue', label = 'Predicted data')
plt.title('Prediction')
plt.legend()
run.log_image(name='prediction_model', plot=plt)
plt.show()
# evaluate predictions
mse = mean_squared_error(y_test, yhat)
mae = mean_absolute_error(y_test, yhat)
r2 = r2_score(y_test, yhat)
print('MSE: %.3f' % mse)
run.log('MSE', mse)
print('MAE: %.3f' % mae)
run.log('MAE', mae)
print('R2: %.3f' % r2)
run.log('R2',r2)
#Save model
model.save_model("./outputs/model.bst")
#dump(value=model, filename='./outputs/model.pkl')
tags = { "Model": 'BABY-WEIGHT',
"Type": 'Xgboost',
"User": 'alejandra.taborda-londono@capgemini.com'}
run.set_tags(tags)
run.complete()
### Model Registry
model_name = 'BABY-WEIGHT'
run.register_model(model_name= model_name,
model_path = './outputs/',
tags = tags,
                   description="XGBoost regressor for birth weight prediction")
```
```
from IPython.display import HTML
# Cell visibility - COMPLETE:
#tag = HTML('''<style>
#div.input {
# display:none;
#}
#</style>''')
#display(tag)
#Cell visibility - TOGGLE:
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<p style="text-align:right">
Toggle code visibility <a href="javascript:code_toggle()">here</a>.</p>''')
display(tag)
```
## Function graphs
In this simple example, you can analyze the effects of frequency, amplitude, phase, damping, and other relevant parameters on the waveforms of some basic functions that are often used in control theory. For each function (selected from the dropdown menu), the corresponding waveform is visualized on a graph, and the signal can be further modified according to the values chosen with the interactive sliders. The following functions can be analyzed:
* sine wave,
* cosine wave,
* damped wave,
* Dirac delta function,
* step function,
* ramp function (unit ramp),
* triangle wave.
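For example, the damped wave from the list boils down to one line of NumPy (the parameter values below are arbitrary, picked only to show the decaying envelope):

```python
import numpy as np

# f(t) = A * exp(-lambda * t) * cos(2*pi*f*t + phi), with arbitrary parameters
A, lam, f, phi = 2.0, 0.5, 1.0, 0.0
t = np.linspace(0, 10, 5)
y = A * np.exp(-lam * t) * np.cos(2 * np.pi * f * t + phi)
print(y.round(3))  # amplitude shrinks toward zero as t grows
```

The interactive cell below draws the same family of curves, with the parameters bound to sliders.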
```
%matplotlib inline
#%config InlineBackend.close_figures=False
from ipywidgets import interactive
from ipywidgets import widgets
from IPython.display import Latex, display, Markdown # For displaying Markdown and LaTeX code
import matplotlib.pyplot as plt
import numpy as np
import math
import matplotlib.patches as mpatches
from IPython.display import HTML, clear_output
from IPython.display import display
# Functions descriptions
sine_text = "Sine: A sine wave, or sinusoid, is a mathematical curve that describes a smooth periodic oscillation. A sine wave is a continuous wave. It is named after the sine function."
cosine_text = "Cosine: A cosine wave is a signal waveform very similar to a sine wave, except that every point on the cosine wave occurs exactly 1/4 cycle earlier than the corresponding point on the sine wave. Typically, if a cosine wave and a corresponding sine wave have the same frequency, the cosine wave leads the sine wave by 90 degrees of phase."
dumped_text = "Damped wave: A damped wave is a signal whose oscillation amplitude decreases with time, eventually decaying to zero (an exponentially decaying sine wave). The ideal damped wave is a sinusoid that decays exponentially: an oscillating sine (or cosine) wave whose peak amplitude decreases at an exponential rate from an initial maximum toward zero."
delta_text = "Delta function: In mathematics, the Dirac delta function (δ function) is a generalized function, or distribution, introduced by the physicist Paul Dirac. It is used to model the density of an idealized point mass or point charge as a function that equals zero everywhere except at zero and whose integral over the entire real line equals one."
step_text = "Step function: A function that can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function with only finitely many pieces."
ramp_text = "Ramp function: The ramp function is a unary real function whose graph is shaped like a ramp. It can be expressed by a number of definitions, e.g. '0 for negative inputs, output equal to input for non-negative inputs'. The term 'ramp' can also be used for other functions obtained by scaling and shifting; the base function in this example is the unit ramp function (slope 1, starting at 0)."
triang_text = "Triangle wave: A triangle wave is a non-sinusoidal waveform named for its triangular shape. It is a periodic, piecewise linear, continuous real function."
# Sine widgets
slider_a = widgets.FloatSlider(description='Amplitude', min=0., max=4., step=0.25, continuous_update=False)
slider_f = widgets.FloatSlider(description='Frequency', min=0., max=10., step=0.5, continuous_update=False)
slider_p = widgets.FloatSlider(description='Phase', min=-10.0, max=10.0, step=0.5, continuous_update=False)
slider_f.value = 5
slider_a.value = 2
slider_p.value = 0
formula_sine = r'$f(t)= Asin(2{\pi}ft + {\varphi})$'
# Cosine widgets
slider_acos = widgets.FloatSlider(description='Amplitude', min=0., max=4., step=0.25, continuous_update=False)
slider_fcos = widgets.FloatSlider(description='Frequency', min=0., max=10., step=0.5, continuous_update=False)
slider_pcos = widgets.FloatSlider(description='Phase', min=-10.0, max=10.0, step=0.5, continuous_update=False)
slider_fcos.value = 5
slider_acos.value = 2
slider_pcos.value = 0
formula_cosine = r'$f(t)= Acos(2{\pi}ft + {\varphi})$'
# Damping widgets
slider_adamp = widgets.FloatSlider(description='Amplitude', min=0., max=4., step=0.25, continuous_update=False)
slider_fdamp = widgets.FloatSlider(description='Frequency', min=0., max=10., step=0.5, continuous_update=False)
slider_pdamp = widgets.FloatSlider(description='Phase', min=-10.0, max=10.0, step=0.5, continuous_update=False)
slider_d = widgets.FloatSlider(description='Damping', min=0., max=3., step=0.2, continuous_update=False)
slider_fdamp.value = 5
slider_adamp.value = 2
slider_pdamp.value = 0
slider_d.value = 0
formula_damp = r'$f(t)= Ae^{-{\lambda}t}cos(2{\pi}ft + {\varphi})$'
# Delta widgets
slider_adelta = widgets.FloatSlider(description='Parameter a', value = 0.01, min=0.01, max=1.5, step=0.01, continuous_update=False)
formula = r'$\delta_{a}(x)= \frac{1}{|a|\sqrt{\pi}}e^{-(x/a)^2}$'
# Step widget
formula_step = r'$ f(x) = \begin{cases} b, & \mbox{if } x > a \\ 0, & \mbox{if } x \leq a \end{cases} $'
slider_astep = widgets.FloatSlider(description='Parameter a', value = 0., min=-10, max=10, step=0.5, continuous_update=False)
slider_bstep = widgets.FloatSlider(description='Parameter b', value = 1, min=0, max=5., step=0.5, continuous_update=False)
# Ramp widgets
formula_ramp = r'$ f(x) = \begin{cases} x-a, & \mbox{if } x > a \\ 0, & \mbox{if } x \leq a \end{cases} $'
slider_aramp = widgets.FloatSlider(description='Parameter a', value = 0., min=-10, max=10, step=0.5, continuous_update=False)
# Triangle widgets
slider_atri = widgets.FloatSlider(description='Amplitude', min=1, max=4., step=0.5, continuous_update=False)
slider_ptri = widgets.FloatSlider(description='Period', min=1, max=10., step=0.5, continuous_update=False)
formula_triangle = r'$f(t)= \frac{2a}{\pi}arcsin(sin(\frac{2\pi}{p}t))$'
# Layouts
info_layout = widgets.Layout(border='solid black', width = '100%', height = '200', padding='5px')
panel_layout = widgets.Layout(border='solid blue', width = '35%', height = '175', padding='5px')
plot_layout = widgets.Layout(border='solid red', width = '65%', height = '175', padding='5px')
output_info = widgets.Output(layout = info_layout)
output_panel = widgets.Output(layout = panel_layout)
output_plot = widgets.Output(layout = plot_layout)
# Dropdown widget
dd_order = widgets.Dropdown(
options=['Sinusni val', 'Kosinusni val', 'Prigušeni val', 'Delta funkcija', 'Step funkcija', 'Rampa funkcija', 'Trokutasti val'],
value='Sinusni val',
    description='Select a function:',
disabled=False,
style = {'description_width': 'initial'},
)
# Functions
def f_sin(A, frequency, phase):
plt.figure(figsize=(10,5))
t = np.linspace(-10, 10, num=1000)
plt.plot(t, A * np.sin(t*frequency + phase), 'b-')
plt.xlim(-10, 10)
plt.ylim(-5, 5)
plt.grid(True)
plt.axhline(y=0,lw=0.8,color='k')
plt.axvline(x=0,lw=0.8,color='k')
plt.xlabel('$t [s]$')
plt.ylabel('$f(t)$')
plt.grid(True)
with output_plot:
output_plot.clear_output(wait=True)
plt.show()
def f_cos(A, frequency, phase):
plt.figure(figsize=(10,5))
t = np.linspace(-10, 10, num=1000)
plt.plot(t, A * np.cos(t*frequency + phase), 'r-')
plt.xlim(-10, 10)
plt.ylim(-5, 5)
plt.grid(True)
plt.axhline(y=0,lw=0.8,color='k')
plt.axvline(x=0,lw=0.8,color='k')
plt.xlabel('$t [s]$')
plt.ylabel('$f(t)$')
plt.grid(True)
with output_plot:
output_plot.clear_output(wait=True)
plt.show()
def f_damping(A, frequency, phase, decay):
plt.figure(figsize=(10,5))
x = np.linspace(0, 10, num=1000)
#plt.plot(x, [A * math.exp(-decay * t) *(np.cos(t*frequency + phase) + np.sin(t*frequency + phase)) for t in x])
plt.plot(x, [A * math.exp(-decay * t) *(np.cos(t*frequency + phase)) for t in x], "g-")
plt.xlim(0, 10)
plt.ylim(-5, 5)
plt.grid(True)
plt.axhline(y=0,lw=0.8,color='k')
plt.axvline(x=0,lw=0.8,color='k')
plt.xlabel('$t [s]$')
plt.ylabel('$f(t)$')
plt.grid(True)
with output_plot:
output_plot.clear_output(wait=True)
plt.show()
def f_delta(a):
plt.figure(figsize=(10,5))
x = np.linspace(-10, 10, num=1000)
plt.plot(x, [1 / (abs(a)*math.sqrt(np.pi)) * np.e **(-(t/a)**2) for t in x], "b-")
plt.xlim(-10, 10)
plt.ylim(-3, 6)
plt.grid(True)
plt.axhline(y=0,lw=0.8,color='k')
plt.axvline(x=0,lw=0.8,color='k')
plt.xlabel('$t [s]$')
plt.ylabel('$f(t)$')
plt.grid(True)
with output_plot:
output_plot.clear_output(wait=True)
plt.show()
def f_step(a,b):
plt.figure(figsize=(10,5))
step = lambda x, a, b: b if x > a else 0
x = np.linspace(-10, 10, num=1000)
plt.plot(x, [step(t, a, b) for t in x] , "r-")
plt.xlim(-10, 10)
plt.ylim(-3, 6)
plt.grid(True)
plt.axhline(y=0,lw=0.8,color='k')
plt.axvline(x=0,lw=0.8,color='k')
plt.xlabel('$t [s]$')
plt.ylabel('$f(t)$')
plt.grid(True)
with output_plot:
output_plot.clear_output(wait=True)
plt.show()
def f_ramp(a):
plt.figure(figsize=(10,5))
step = lambda x, a: x - a if x > a else 0
x = np.linspace(-10, 10, num=1000)
plt.plot(x, [step(t, a) for t in x] , "g-")
plt.xlim(-10, 10)
plt.ylim(-3, 6)
plt.grid(True)
plt.axhline(y=0,lw=0.8,color='k')
plt.axvline(x=0,lw=0.8,color='k')
plt.xlabel('$t [s]$')
plt.ylabel('$f(t)$')
plt.grid(True)
with output_plot:
output_plot.clear_output(wait=True)
plt.show()
def f_triangle(a, p):
plt.figure(figsize=(10,5))
t = np.linspace(-10, 10, num=1000)
plt.plot(t, [2 * a / np.pi * np.arcsin(np.sin(2*np.pi * x / p)) for x in t], 'b-')
plt.xlim(-10, 10)
plt.ylim(-5, 5)
plt.grid(True)
plt.axhline(y=0,lw=0.8,color='k')
plt.axvline(x=0,lw=0.8,color='k')
plt.xlabel('$t [s]$')
plt.ylabel('$f(t)$')
plt.grid(True)
with output_plot:
output_plot.clear_output(wait=True)
plt.show()
def first_setup():
with output_info:
output_info.clear_output()
display(Markdown(sine_text))
with output_panel:
output_panel.clear_output()
display(Markdown(formula_sine))
display(interactive(f_sin, A=slider_a, frequency=slider_f, phase=slider_p))
def dropdown_eventhandler(change):
if (dd_order.value == 'Sinusni val'):
with output_info:
output_info.clear_output()
display(Markdown(sine_text))
with output_panel:
output_panel.clear_output()
display(Markdown(formula_sine))
display(interactive(f_sin, A=slider_a, frequency=slider_f, phase=slider_p))
if (dd_order.value == 'Kosinusni val'):
with output_info:
output_info.clear_output()
display(Markdown(cosine_text))
with output_panel:
output_panel.clear_output()
display(Markdown(formula_cosine))
display(interactive(f_cos, A=slider_acos, frequency=slider_fcos, phase=slider_pcos))
if (dd_order.value == 'Prigušeni val'):
with output_info:
output_info.clear_output()
display(Markdown(dumped_text))
with output_panel:
output_panel.clear_output()
display(Markdown(formula_damp))
display(interactive(f_damping, A=slider_adamp, frequency=slider_fdamp, phase=slider_pdamp, decay=slider_d))
if (dd_order.value == 'Delta funkcija'):
with output_info:
output_info.clear_output()
display(Markdown(delta_text))
with output_panel:
output_panel.clear_output()
display(Markdown(formula))
display(interactive(f_delta, a = slider_adelta))
if (dd_order.value == 'Step funkcija'):
with output_info:
output_info.clear_output()
display(Markdown(step_text))
with output_panel:
output_panel.clear_output()
display(Markdown(formula_step))
display(interactive(f_step, a = slider_astep, b = slider_bstep))
if (dd_order.value == 'Rampa funkcija'):
with output_info:
output_info.clear_output()
display(Markdown(ramp_text))
with output_panel:
output_panel.clear_output()
display(Markdown(formula_ramp))
display(interactive(f_ramp, a = slider_aramp))
if (dd_order.value == 'Trokutasti val'):
with output_info:
output_info.clear_output()
display(Markdown(triang_text))
with output_panel:
output_panel.clear_output()
display(Markdown(formula_triangle))
display(interactive(f_triangle, a = slider_atri, p = slider_ptri))
display(dd_order, output_info, widgets.HBox([output_panel, widgets.Label(" "), output_plot]) )
first_setup()
dd_order.observe(dropdown_eventhandler, names='value')
```
# Name
Deploying a trained model to Cloud Machine Learning Engine
# Label
Cloud Storage, Cloud ML Engine, Kubeflow, Pipeline
# Summary
A Kubeflow Pipeline component to deploy a trained model from a Cloud Storage location to Cloud ML Engine.
# Details
## Intended use
Use the component to deploy a trained model to Cloud ML Engine. The deployed model can serve online or batch predictions in a Kubeflow Pipeline.
## Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------|-----------------|---------|
| model_uri | The URI of a Cloud Storage directory that contains a trained model file.<br/> Or <br/> An [Estimator export base directory](https://www.tensorflow.org/guide/saved_model#perform_the_export) that contains a list of subdirectories named by timestamp. The directory with the latest timestamp is used to load the trained model file. | No | GCSPath | | |
| project_id | The ID of the Google Cloud Platform (GCP) project of the serving model. | No | GCPProjectID | | |
| model_id | The name of the trained model. | Yes | String | | None |
| version_id | The name of the version of the model. If it is not provided, the operation uses a random name. | Yes | String | | None |
| runtime_version | The Cloud ML Engine runtime version to use for this deployment. If it is not provided, the default stable version, 1.0, is used. | Yes | String | | None |
| python_version | The version of Python used in the prediction. If it is not provided, version 2.7 is used. You can use Python 3.5 if runtime_version is set to 1.4 or above. Python 2.7 works with all supported runtime versions. | Yes | String | | 2.7 |
| model | The JSON payload of the new [model](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models). | Yes | Dict | | None |
| version | The new [version](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions) of the trained model. | Yes | Dict | | None |
| replace_existing_version | Indicates whether to replace the existing version in case of a conflict (if the same version number is found.) | Yes | Boolean | | FALSE |
| set_default | Indicates whether to set the new version as the default version in the model. | Yes | Boolean | | FALSE |
| wait_interval | The number of seconds to wait in case the operation has a long run time. | Yes | Integer | | 30 |
## Input data schema
The component looks for a trained model in the location specified by the `model_uri` runtime argument. The accepted trained models are:
* [Tensorflow SavedModel](https://cloud.google.com/ml-engine/docs/tensorflow/exporting-for-prediction)
* [Scikit-learn & XGBoost model](https://cloud.google.com/ml-engine/docs/scikit/exporting-for-prediction)
The accepted file formats are:
* *.pb
* *.pbtext
* model.bst
* model.joblib
* model.pkl
`model_uri` can also be an [Estimator export base directory](https://www.tensorflow.org/guide/saved_model#perform_the_export), which contains a list of subdirectories named by timestamp. The directory with the latest timestamp is used to load the trained model file.
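As a sketch of that convention (a hypothetical helper, not part of the component), picking the latest timestamped export subdirectory can look like this:

```python
import os

def latest_export_dir(base_dir):
    # Export subdirectories are named by timestamp, e.g. "1517000000";
    # the most recent export is simply the numerically largest name.
    subdirs = [d for d in os.listdir(base_dir)
               if d.isdigit() and os.path.isdir(os.path.join(base_dir, d))]
    if not subdirs:
        raise ValueError("no timestamped export subdirectories in %s" % base_dir)
    return os.path.join(base_dir, max(subdirs, key=int))
```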
## Output
| Name | Description | Type |
|:------- |:---- | :--- |
| job_id | The ID of the created job. | String |
| job_dir | The Cloud Storage path that contains the trained model output files. | GCSPath |
## Cautions & requirements
To use the component, you must:
* [Set up the cloud environment](https://cloud.google.com/ml-engine/docs/tensorflow/getting-started-training-prediction#setup).
* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/#gcp-service-accounts) in a Kubeflow cluster. For example:
```python
mlengine_deploy_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
* Grant read access to the Cloud Storage bucket that contains the trained model to the Kubeflow user service account.
## Detailed description
Use the component to:
* Locate the trained model at the Cloud Storage location you specify.
* Create a new model if the model you specify doesn’t exist.
* Delete the existing model version if `replace_existing_version` is enabled.
* Create a new version of the model from the trained model.
* Set the new version as the default version of the model if `set_default` is enabled.
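For illustration, the `version` payload sent to the Cloud ML Engine `projects.models.versions.create` call can be assembled as below. This is a hedged sketch: the field names follow the public [Version resource](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions), but the helper itself is not part of the component.

```python
def build_version_body(version_id, deployment_uri,
                       runtime_version="1.10", python_version="2.7"):
    # Minimal Version resource for projects.models.versions.create.
    # deployment_uri points at the Cloud Storage folder holding the model.
    return {
        "name": version_id,
        "deploymentUri": deployment_uri,
        "runtimeVersion": runtime_version,
        "pythonVersion": python_version,
    }
```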
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
```
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
```
2. Load the component using KFP SDK
```
import kfp.components as comp
mlengine_deploy_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/a97f1d0ad0e7b92203f35c5b0b9af3a314952e05/components/gcp/ml_engine/deploy/component.yaml')
help(mlengine_deploy_op)
```
### Sample
Note: The following sample code works in IPython notebook or directly in Python code.
In this sample, you deploy a pre-built trained model from `gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/` to Cloud ML Engine. The deployed model is `kfp_sample_model`. A new version is created every time the sample is run, and the latest version is set as the default version of the deployed model.
#### Set sample parameters
```
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
# Optional Parameters
EXPERIMENT_NAME = 'CLOUDML - Deploy'
TRAINED_MODEL_PATH = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/'
```
#### Example pipeline that uses the component
```
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='CloudML deploy pipeline',
description='CloudML deploy pipeline'
)
def pipeline(
model_uri = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/',
project_id = PROJECT_ID,
model_id = 'kfp_sample_model',
version_id = '',
runtime_version = '1.10',
python_version = '',
version = '',
replace_existing_version = 'False',
set_default = 'True',
wait_interval = '30'):
task = mlengine_deploy_op(
model_uri=model_uri,
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=runtime_version,
python_version=python_version,
version=version,
replace_existing_version=replace_existing_version,
set_default=set_default,
wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
#### Compile the pipeline
```
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```
#### Submit the pipeline for execution
```
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```
## References
* [Component python code](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/ml_engine/_deploy.py)
* [Component docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)
* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/ml_engine/deploy/sample.ipynb)
* [Cloud Machine Learning Engine Model REST API](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models)
* [Cloud Machine Learning Engine Version REST API](https://cloud.google.com/ml-engine/reference/rest/v1/projects.versions)
## License
By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
# MSTICpy - Mordor data provider and browser
### Description
This notebook provides a guided example of using the Mordor data provider and browser included with MSTICpy.
For more information on the Mordor data sets see the [Open Threat Research Forge Mordor GitHub repo](https://github.com/OTRF/mordor)
You must have msticpy installed to run this notebook:
```
%pip install --upgrade msticpy
```
This notebook requires MSTICpy version >= 0.8.5.
### Contents:
- Using the Mordor data provider to retrieve data sets
- Listing queries
- Running a query to retrieve data
- Optional parameters
- Searching for queries by Mordor property
- Mordor Browser
## Using the Data Provider to download datasets
Using the data provider you can download and render event data as a pandas DataFrame.
> **Note** - Mordor includes both host event data and network capture data.<br>
> Although Capture files can be downloaded and unpacked<br>
> they currently cannot be populated into a pandas DataFrame.
> This is the case for most `network` datasets.<br>
> `Host` event data is retrieved and populated into DataFrames.
```
from msticpy.data import QueryProvider
CACHE_FOLDER = "~/.msticpy/mordor"
mdr_data = QueryProvider("Mordor", save_folder=CACHE_FOLDER)
mdr_data.connect()
```
### List Queries
> Note: Many Mordor data entries have multiple data sets, so we see more queries than Mordor entries.
(Only first 15 shown)
```
mdr_data.list_queries()[:15]
```
### Retrieving/querying a data set
```
mdr_data.atomic.windows.credential_access.host.covenant_dcsync_dcerpc_drsuapi_DsGetNCChanges().head(3)
```
### Optional parameters
The data provider and the query functions support some parameters to control
aspects of the query operation.
- **use_cached** : bool, optional<br>
Try to use locally saved file first,
by default True. If you’ve previously downloaded a file, it will use
this rather than downloading a new copy.
- **save_folder** : str, optional<br>
Path to output folder, by default
".". The path that downloaded and extracted files are saved to.
- **silent** : bool<br>
If True, suppress feedback. By default, False.
If you specify these when you initialize the data provider, the settings
will apply to all queries.
```
mdr_data = QueryProvider("Mordor", save_folder="./mordor")
mdr_data.connect()
```
Using these parameters in the query will override the provider settings
and defaults for that query.
```
mdr_data.atomic.windows.credential_access.host.covenant_dcsync_dcerpc_drsuapi_DsGetNCChanges(silent=True, save_folder="./mordor").head(2)
!dir mordor
```
## Getting summary data about a query
Call the query function with a single "?" parameter.
```
mdr_data.atomic.windows.credential_access.host.covenant_dcsync_dcerpc_drsuapi_DsGetNCChanges("?")
```
### Searching for Queries with QueryProvider.search_queries()
Search queries for matching attributes.
#### Parameters
**search** : str Search string.
Substrings separated by commas will be treated as OR terms - e.g. "a, b" == "a" or "b".<br>
Substrings separated by "+" will be treated as AND terms - e.g. "a + b" == "a" and "b"
#### Returns
List of matching query names.
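The comma/plus semantics can be sketched as a simple matcher. This is only an illustration of the documented behaviour, not MSTICpy's actual implementation:

```python
def matches(text, search):
    # "a, b" -> OR terms; "a + b" -> AND terms (case-insensitive substrings)
    text = text.casefold()
    if "+" in search:
        return all(t.strip().casefold() in text for t in search.split("+"))
    return any(t.strip().casefold() in text for t in search.split(","))
```

For example, `matches("empire_dcsync_T1222", "Empire + T1222")` is true, while `matches("aws_cloudtrail", "AWS + Azure")` is false.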
```
mdr_data.search_queries("AWS")
mdr_data.search_queries("Empire + T1222")
mdr_data.search_queries("Empire + Credential")
```
## Mordor Browser
We've also built a more specialized browser for Mordor data. This uses the metadata in the repository to let you view full details of the dataset.
You can also preview the dataset (if it is convertible to a DataFrame).
For details of the data shown please see the [Mordor GitHub repo](https://github.com/OTRF/mordor)<br> and the [Threat Hunter Playbook](https://threathunterplaybook.com/introduction.html)
```
from msticpy.data.browsers.mordor_browser import MordorBrowser
mdr_browser = MordorBrowser()
```
### Mordor Browser Details
The top scrollable list is a list of the Mordor datasets. Selecting one of these updates the data in the lower half of the browser.
#### Filter Drop-down
To narrow your search you can filter using a text search or filter by Mitre Attack Techniques or Tactics.
- The Filter text box uses the same syntax as the provider `search_queries()` function.
- Simple text string will find matches for datasets that contain this string
- Strings separated by "," are treated as OR terms - i.e. it will match items that contain ANY of the substrings
- Strings separated by "+" are treated as AND terms - i.e. it will match items that contain ALL of the substrings
- The Mitre Techniques and Tactics lists are multi-select lists. Only items that have techniques and tactics matching
the selected items will be shown.
- Reset Filter button will clear any filtering.
#### Main Details Window
- title, ID, author, creation date, modification date and description are self-explanatory.
- tags can be used for searching
- file_paths (see below)
- attacks - lists related Mitre Technique and Tactics. The item title is a link to the Mitre page describing the technique or tactic.
- notebooks - if there is a notebook in the Threat Hunter Playbook site, a link to it is shown here. (multiple notebooks might be shown)
- simulation - raw data listing the steps in the attack (and useful for replaying the attack in a demo environment).
- references - links to any external data about the attack.
#### File_paths
This section allows you to select, download and (in most cases) display the event data relating to the attack.
Select a file and click on the Download button.
The zipped file is downloaded and extracted. If it is event data, this is converted to a
pandas DataFrame and displayed below the rest of the data.
The current dataset is available as an attribute of the browser:
```
mdr_browser.current_dataset
```
Datasets that you've downloaded and displayed in this session are also cached in the browser and available in the
`mdr_browser.datasets` attribute.
#### Downloaded files
By default files are downloaded and extracted to the current folder. You can change this with the
`save_folder` parameter when creating the `MordorBrowser` object.
You can also specify the `use_cached` parameter. By default, this is `True`, which causes downloaded files not
to be deleted after extraction. These local copies are used if you try to view the same data set again.
This also works across sessions.
If `use_cached` is set to `False`, files are deleted immediately after downloading, extracting, and populating the
DataFrame.
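The caching behaviour can be pictured with a small sketch. This is a hypothetical helper, not MSTICpy's code; `download` stands in for the actual HTTP fetch:

```python
import os

def fetch(url, download, save_folder=".", use_cached=True):
    # download: callable(url, dest_path) that writes the file to dest_path.
    path = os.path.join(save_folder, os.path.basename(url))
    if use_cached and os.path.exists(path):
        return path  # reuse the local copy; this also works across sessions
    download(url, path)
    return path
```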
### Using the standard query browser
> **Note** - In the `Example` section, ignore the examples of parameters<br>
> passed to the query - these are not needed and ignored.
```
mdr_data.browse_queries()
```
```
import tensorflow as tf
```
`tf.train.Coordinator`: helps multiple threads stop together and reports exceptions to a program that waits for them to stop.
`tf.train.QueueRunner`: creates a number of threads cooperating to **enqueue** tensors in the **same** queue.
## Coordinator
### Key method
tf.train.Coordinator.should_stop
Returns `True` if the threads should stop. This is called from the worker threads, so each thread knows whether it should stop.
tf.train.Coordinator.request_stop
Requests that the threads stop. Once this is called, calls to `should_stop` will return `True`.
tf.train.Coordinator.join
This call blocks until a set of threads have terminated.
Note: the documentation mentions **exc_info**; it is worth checking, as it is an opportunity to study how to handle exceptions in threads.
### Simple example
```
import time
import threading
import random
def worker(coord):
while not coord.should_stop():
time.sleep(10)
thread_id = threading.current_thread().name
print("worker %s running" % (thread_id,))
rand_int = random.randint(0,10)
if rand_int > 5:
print("worker %s requst stop" % (thread_id,))
coord.request_stop()
print("worker %s stopped" % (thread_id,))
coord = tf.train.Coordinator()
threads = [threading.Thread(target=worker, args=(coord,),name=str(i)) for i in range(3)]
for t in threads:
t.start()
coord.join(threads)
```
## Queue
### RandomShuffleQueue
A queue is a TensorFlow data structure that stores tensors across multiple steps and exposes operations that enqueue and dequeue tensors.
A classic usage of a queue is:
* Multiple threads prepare training examples and enqueue them.
* A training thread executes a training op that dequeues mini-batches from the queue
## QueueRunner
The `QueueRunner` class creates a number of threads that repeatedly run an `enqueue` op. These threads can use a coordinator to stop together. In addition, a queue runner runs a closer thread that automatically closes the queue if an exception is reported to the coordinator.
## Put it together
```
def simple_shuffle_batch(source, capacity, batch_size=10):
queue = tf.RandomShuffleQueue(capacity=capacity, min_after_dequeue=int(0.9*capacity),
shapes=source.shape, dtypes=source.dtype)
enqueue = queue.enqueue(source)
num_threads = 4
qr = tf.train.QueueRunner(queue,[enqueue]*num_threads)
tf.train.add_queue_runner(qr)
return queue.dequeue_many(batch_size)
```
`simple_shuffle_batch` uses a `QueueRunner` to execute the `enqueue` op, but the queue runners don't start yet. Now we need to start the queue runners and start a main thread to dequeue elements from the queue.
```
input = tf.constant(list(range(1, 100)))
input = tf.data.Dataset.from_tensor_slices(input)
input = input.make_one_shot_iterator().get_next()
get_batch = simple_shuffle_batch(input, capacity=20)
# start queue runner directly
sess = tf.Session()
with sess.as_default() as sess:
tf.train.start_queue_runners()
while True:
try:
print(sess.run(get_batch))
except tf.errors.OutOfRangeError:
print("queue is empty")
break
input.dtype
```
Or, we can start the queue runners indirectly with `tf.train.MonitoredSession`:
```
input = tf.constant(list(range(1, 100)))
input = tf.data.Dataset.from_tensor_slices(input)
input = input.make_one_shot_iterator().get_next()
get_batch = simple_shuffle_batch(input, capacity=20)
# start queue runners indirectly via MonitoredSession
with tf.train.MonitoredSession() as sess:
while not sess.should_stop():
print(sess.run(get_batch))
```
## To-Do list
`tf.train.shuffle_batch`.
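Conceptually, `tf.train.shuffle_batch` keeps a bounded buffer, refills it from the input, and emits batches sampled at random from that buffer. A pure-Python sketch of that behaviour (not the TensorFlow API; it assumes `capacity >= batch_size` and drops a partial final batch, as TF does by default):

```python
import random

def shuffle_batch(source, batch_size, capacity, min_after_dequeue, seed=None):
    # Bounded buffer of at most `capacity` items; items leave in random order,
    # which is what shuffles the batches.  min_after_dequeue controls how full
    # the buffer is kept in TF (and hence how well mixed it is); here we
    # simply refill to capacity whenever the input has items left.
    rng = random.Random(seed)
    buf, it, exhausted = [], iter(source), False
    while True:
        while not exhausted and len(buf) < capacity:
            try:
                buf.append(next(it))
            except StopIteration:
                exhausted = True
        if exhausted and len(buf) < batch_size:
            return  # not enough items left for a full batch
        yield [buf.pop(rng.randrange(len(buf))) for _ in range(batch_size)]
```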
```
class Opion():
def __init__(self):
self.dataroot= r'I:\irregular holes\paris_eval_gt' #image dataroot
self.maskroot= r'I:\irregular holes\testing_mask_dataset'#mask dataroot
self.batchSize= 1 # Need to be set to 1
self.fineSize=256 # image size
self.input_nc=3 # input channel size for first stage
self.input_nc_g=6 # input channel size for second stage
self.output_nc=3# output channel size
self.ngf=64 # inner channel
self.ndf=64# inner channel
self.which_model_netD='basic' # patch discriminator
self.which_model_netF='feature'# feature patch discriminator
self.which_model_netG='unet_csa'# seconde stage network
self.which_model_netP='unet_256'# first stage network
self.triple_weight=1
self.name='CSA_inpainting'
self.n_layers_D='3' # network depth
self.gpu_ids=[0]
self.model='csa_net'
self.checkpoints_dir=r'.\checkpoints' #
self.norm='instance'
self.fixed_mask=1
self.use_dropout=False
self.init_type='normal'
self.mask_type='center'
self.lambda_A=100
self.threshold=5/16.0
self.stride=1
self.shift_sz=1 # size of feature patch
self.mask_thred=1
self.bottleneck=512
self.gp_lambda=10.0
self.ncritic=5
self.constrain='MSE'
self.strength=1
self.init_gain=0.02
self.cosis=1
self.gan_type='lsgan'
self.gan_weight=0.2
self.overlap=4
self.skip=0
self.display_freq=1000
self.print_freq=50
self.save_latest_freq=5000
self.save_epoch_freq=2
self.continue_train=False
self.epoch_count=1
self.phase='train'
self.which_epoch=''
self.niter=20
self.niter_decay=100
self.beta1=0.5
self.lr=0.0002
self.lr_policy='lambda'
self.lr_decay_iters=50
self.isTrain=True
import time
from util.data_load import Data_load
from models.models import create_model
import torch
import os
import torchvision
from torch.utils import data
import torchvision.transforms as transforms
opt = Opion()
transform_mask = transforms.Compose(
[transforms.Resize((opt.fineSize,opt.fineSize)),
transforms.ToTensor(),
])
transform = transforms.Compose(
[
transforms.Resize((opt.fineSize,opt.fineSize)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3)])
dataset_test = Data_load(opt.dataroot, opt.maskroot, transform, transform_mask)
iterator_test = (data.DataLoader(dataset_test, batch_size=opt.batchSize,shuffle=True))
print(len(dataset_test))
model = create_model(opt)
total_steps = 0
load_epoch=30
model.load(load_epoch)
save_dir = './measure/true'
if os.path.exists(save_dir) is False:
os.makedirs(save_dir)
epoch=1
i=0
for image, mask in (iterator_test):
iter_start_time = time.time()
image=image.cuda()
mask=mask.cuda()
mask=mask[0][0]
mask=torch.unsqueeze(mask,0)
mask=torch.unsqueeze(mask,1)
mask=mask.byte()
model.set_input(image,mask)
model.set_gt_latent()
model.test()
real_A,real_B,fake_B=model.get_current_visuals()
pic = (torch.cat([real_A, real_B,fake_B], dim=0) + 1) / 2.0
torchvision.utils.save_image(pic, '%s/Epoch_(%d)_(%dof%d).jpg' % (
save_dir, epoch, total_steps + 1, len(dataset_test)), nrow=1)
```
# Errors and Exception Handling
In this lecture we will learn about Errors and Exception Handling in Python. You've definitely already encountered errors by this point in the course. For example:
```
print 'Hello
```
Note how we get a SyntaxError, with the further description that it was an EOL (End of Line Error) while scanning the string literal. This is specific enough for us to see that we forgot a single quote at the end of the line. Understanding these various error types will help you debug your code much faster.
This type of error and description is known as an Exception. Even if a statement or expression is syntactically correct, it may cause an error when an attempt is made to execute it. Errors detected during execution are called exceptions and are not unconditionally fatal.
You can check out the full list of built-in exceptions [here](https://docs.python.org/2/library/exceptions.html). Now let's learn how to handle errors and exceptions in our own code.
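For example, the following addition is syntactically valid but fails during execution with a `TypeError`, because an int and a string cannot be added (the `try` wrapper here just captures the message; `try` itself is covered in the next section):

```python
try:
    result = 2 + 'two'   # syntactically correct, but fails at runtime
except TypeError as err:
    error_message = str(err)   # "unsupported operand type(s) for +: ..."
```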
## try and except
The basic terminology and syntax used to handle errors in Python are the **try** and **except** statements. The code which can cause an exception to occur is put in the *try* block, and the handling of the exception is then implemented in the *except* block of code. The syntax form is:
try:
You do your operations here...
...
except ExceptionI:
If there is ExceptionI, then execute this block.
except ExceptionII:
If there is ExceptionII, then execute this block.
...
else:
If there is no exception then execute this block.
We can also check for any exception by simply using `except:`. To get a better understanding of all this, let's check out an example: we will look at some code that opens and writes a file:
```
try:
f = open('testfile','w')
f.write('Test write this')
except IOError:
# This will only check for an IOError exception and then execute this print statement
print "Error: Could not find file or read data"
else:
print "Content written successfully"
f.close()
```
Now let's see what would happen if we did not have write permission (opening only with 'r'):
```
try:
f = open('testfile','r')
f.write('Test write this')
except IOError:
# This will only check for an IOError exception and then execute this print statement
print "Error: Could not find file or read data"
else:
print "Content written successfully"
f.close()
```
Great! Notice how we only printed a statement! The code still ran and we were able to continue doing actions and running code blocks. This is extremely useful when you have to account for possible input errors in your code. You can be prepared for the error and keep running code, instead of your code just breaking as we saw above.
We could have also just said `except:` if we weren't sure what exception would occur. For example:
```
try:
f = open('testfile','r')
f.write('Test write this')
except:
# This will check for any exception and then execute this print statement
print "Error: Could not find file or read data"
else:
print "Content written successfully"
f.close()
```
Great! Now we don't actually need to memorize that list of exception types! Now what if we kept wanting to run code after the exception occurred? This is where **finally** comes in.
## finally
The finally: block of code will always be run regardless if there was an exception in the try code block. The syntax is:
try:
Code block here
...
Due to any exception, this code may be skipped!
finally:
This code block would always be executed.
For example:
```
try:
f = open("testfile", "w")
f.write("Test write statement")
finally:
print "Always execute finally code blocks"
```
We can use this in conjunction with `except`. Let's see a new example that takes into account a user putting in the wrong input:
```
def askint():
try:
val = int(raw_input("Please enter an integer: "))
except:
print "Looks like you did not enter an integer!"
finally:
print "Finally, I executed!"
print val
askint()
askint()
```
Notice how we got an error when trying to print val (because it was never properly assigned). Let's remedy this by asking the user and checking to make sure the input type is an integer:
```
def askint():
try:
val = int(raw_input("Please enter an integer: "))
except:
print "Looks like you did not enter an integer!"
val = int(raw_input("Try again-Please enter an integer: "))
finally:
print "Finally, I executed!"
print val
askint()
```
Hmmm...that only did one check. How can we continually keep checking? We can use a while loop!
```
def askint():
while True:
try:
val = int(raw_input("Please enter an integer: "))
except:
print "Looks like you did not enter an integer!"
continue
else:
print 'Yep thats an integer!'
break
finally:
print "Finally, I executed!"
print val
askint()
```
**Great! Now you know how to handle errors and exceptions in Python with the try, except, else, and finally notation!**
### Problem Statement
The task is to predict whether a potential promotee at a checkpoint in the test set will be promoted or not after the evaluation process.
```
import pandas as pd
import numpy as np
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from sklearn import cross_validation, metrics
from sklearn.grid_search import GridSearchCV
import matplotlib.pylab as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 4
train_data='D:/My Personal Documents/Learnings/Data Science/Data Sets/WNS Analytics/train_LZdllcl.csv'
test_data='D:/My Personal Documents/Learnings/Data Science/Data Sets/WNS Analytics/test_2umaH9m.csv'
train_result='D:/My Personal Documents/Learnings/Data Science/Data Sets/WNS Analytics/train_result.csv'
train=pd.read_csv(train_data)
test=pd.read_csv(test_data)
test_results = pd.read_csv(train_result)
target = 'is_promoted'
IDcol = 'employee_id'
exclusion=['education_Below Secondary','region_region_10','region_region_21','region_region_12','region_region_19','region_region_6','region_region_14','region_region_33','region_region_3']
train['source']='train'
test['source']='test'
data = pd.concat([train, test],ignore_index=True)
print (train.shape, test.shape, data.shape)
```

```
def f_trans(col1,col2):
    if(col1 in (5.0,4.0,3.0) and col2==1):
        return 1
    else:
        return 0
data['top_performer'] = data.apply(lambda x: f_trans(x['previous_year_rating'],x['KPIs_met >80%']), axis=1)
def age_trans(age):
    if(age<30):
        return 'Young'
    elif(age>=30 and age<40):
        return 'Middle Age'
    elif(age >=40):
        return 'Senior'
data['age']=data['age'].apply(age_trans)
```

```
def rating_trans(rating):
if(rating>=4.0):
return 'High'
elif(rating==3.0):
return 'Medium'
elif(rating < 3.0):
return 'low'
data['previous_year_rating']=data['previous_year_rating'].apply(rating_trans)
data.previous_year_rating[data.previous_year_rating.isnull()]=3.0
data.education[data.education.isnull()]="Bachelor's"
data=pd.get_dummies(data,columns=['department','education','gender','recruitment_channel','region','previous_year_rating'])
#data.drop(exclusion,axis=1,inplace=True)
train=data[data.source=='train']
train.drop('source',axis=1,inplace=True)
test=data[data.source=='test']
test.drop(['source','is_promoted'],axis=1,inplace=True)
from sklearn.model_selection import train_test_split
trainset, testset = train_test_split(train, test_size=0.0001)
testset.head()
def modelfit(alg, dtrain, dtest, predictors,useTrainCV=True, cv_folds=5, early_stopping_rounds=50):
if useTrainCV:
xgb_param = alg.get_xgb_params()
xgtrain = xgb.DMatrix(dtrain[predictors].values, label=dtrain[target].values)
xgtest = xgb.DMatrix(dtest[predictors].values)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds,
metrics='auc', early_stopping_rounds=early_stopping_rounds)
alg.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
alg.fit(dtrain[predictors], dtrain[target],eval_metric='auc')
#Predict training set:
dtrain_predictions = alg.predict(dtrain[predictors])
dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1]
#Print model report:
print ("\nModel Report")
print ("Accuracy : %.4g" % metrics.accuracy_score(dtrain[target].values, dtrain_predictions))
print ("AUC Score (Train): %f" % metrics.roc_auc_score(dtrain[target], dtrain_predprob))
# Predict on testing data:
dtest['predprob'] = alg.predict_proba(dtest[predictors])[:,1]
results = test_results.merge(dtest[['employee_id','predprob']], on='employee_id')
print ('AUC Score (Test): %f' % metrics.roc_auc_score(results['is_promoted'], results['predprob']))
feat_imp = pd.Series(alg.get_booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
predictors = [x for x in train.columns if x not in [target, IDcol]]
xgb1 = XGBClassifier(
learning_rate =0.1,
n_estimators=200,
max_depth=5,
min_child_weight=4,
reg_alpha=1,
gamma=0,
subsample=0.85,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb1, trainset, testset, predictors)
pred=xgb1.predict(test[predictors])
s=pd.Series(pred.tolist()).astype(int)
s.to_csv('D:/My Personal Documents/Learnings/Data Science/Data Sets/WNS Analytics/s19.csv')
s.value_counts()
# Grid search on reg_alpha
# Choose all predictors except target & IDcols
param_test6 = {
'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]
}
gsearch6 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=5,
min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test6, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch6.fit(train[predictors],train[target])
gsearch6.grid_scores_, gsearch6.best_params_, gsearch6.best_score_
# Grid search on subsample and colsample_bytree
# Choose all predictors except target & IDcols
param_test5 = {
'subsample':[i/100.0 for i in range(75,90,5)],
'colsample_bytree':[i/100.0 for i in range(75,90,5)]
}
gsearch5 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch5.fit(train[predictors],train[target])
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
# Grid search on max_depth and min_child_weight
# Choose all predictors except target & IDcols
param_test2 = {
'max_depth':[4,5,6,8],
'min_child_weight':[4,5,6,8]
}
gsearch2 = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=200, max_depth=5,
min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch2.fit(train[predictors],train[target])
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
test.info()
```
# Sentiment Analysis Using RNN
We use a Sequential LSTM to create a supervised learning approach for predicting the sentiment of an article. This notebook was adapted from https://www.kaggle.com/ngyptr/lstm-sentiment-analysis-keras.
#### Data and Packages Importing
```
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from sklearn.model_selection import train_test_split
from keras.utils.np_utils import to_categorical
import re
training_data_folder = "./Sentiment Training Data/"
```
#### Initializing the Word Dictionaries
We use pandas to load the word dictionary corpus from NTU and standardize all words to lower case.
```
vocabulary = pd.read_csv(training_data_folder + "vocabulary.csv")
vocabulary["Word"] = vocabulary["Word"].str.lower()
vocabulary_string = ""
for word in vocabulary["Word"]:
    vocabulary_string += word + " "
max_features = 2000
```
### Tokenizer
We tokenize the word dictionary in order to train the RNN.
```
tokenizer = Tokenizer(num_words=len(vocabulary["Word"]), split=" ", char_level=False)
tokenizer.fit_on_texts(vocabulary["Word"].values)
X = tokenizer.texts_to_sequences(vocabulary["Word"].values)
temp = []
for i in X:
    temp.append(len(i))
X = pad_sequences(X, maxlen=max_features)
Y = []
for i, row in vocabulary.iterrows():
    y = row["Sentiment"]
    Y.append(y)
```
#### Building the LSTM Model
```
embed_dim = 128
lstm_out = 196
max_features = X.shape[0]
model = Sequential()
model.add(Embedding(max_features, embed_dim,input_length = X.shape[1]))
model.add(SpatialDropout1D(0.4))
model.add(LSTM(lstm_out, dropout=0.5, recurrent_dropout=0.5))
model.add(Dense(2,activation='softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer='adam',metrics = ['accuracy'])
print(model.summary())
```
#### Building a Training and Test Set
```
Y_dummies = []
for i in Y:
    # Column 1 holds the positive-sentiment weight and column 0 its
    # complement, matching the later use of predictions[:, 1] as the
    # positive score
    Y_dummies.append([(1 - i), i])
Y_dummies = np.matrix(Y_dummies)
X_train, X_test, Y_train, Y_test = train_test_split(X,Y_dummies, test_size = 0.5, random_state = 42)
print(X_train.shape,Y_train.shape)
print(X_test.shape,Y_test.shape)
```
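As a toy numpy sketch (names and values introduced here, not from the notebook) of the intended two-column target layout, where column 1 carries the positive-sentiment weight so that `predictions[:, 1]` can later be read as the positive score:

```python
import numpy as np

# Toy sentiment scores in [0, 1]; build soft two-column targets where
# column 1 is the positive weight and column 0 its complement
scores = np.array([0.9, 0.2, 0.5])
targets = np.column_stack([1 - scores, scores])
```

Each row sums to 1, so the targets stay valid inputs for `categorical_crossentropy`.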
#### Training the RNN
Multiple resources recommend training the RNN for around 4000 iterations over the data. After experimentation, we identified a batch_size of 32 as minimizing the loss. Hence, we calculate the number of epochs from the batch size, to ensure that we reach roughly 4000 iterations.
```
batch_size = 32
epochs = int(4000 / (X_train.shape[0] / batch_size))
model.fit(X_train, Y_train, epochs = epochs, batch_size=batch_size, verbose = 1)
```
#### Evaluating the Model
```
score,acc = model.evaluate(X_test, Y_test, verbose = 1, batch_size = batch_size)
print("score: %.2f" % (score))
print("acc: %.2f" % (acc))
```
#### Testing RNN Accuracy on Hand-Labelled Articles
```
import pandas as pd
import numpy as np
df1 = pd.read_csv("./Sentiment Analysis Data/Classified Articles.csv")
df2 = pd.read_csv("./Sentiment Analysis Data/Articles Reading Assignment.csv")
df2 = df2.dropna()
df2["Sentiment"] += 1
df2["Sentiment"] /= 2
df2["Content"] = ["" for i in range(len(df2))]
df2["Content Length"] = [0 for i in range(len(df2))]
for i, row in df2.iterrows():
    x = row["URL"]
    key_words = df1[df1["source_url"] == x][:1]["contents"].values[0]
    df2.at[i, "Content"] = str(key_words)
    df2.at[i, "Content Length"] = len(key_words)
tokenizer.fit_on_texts(df2['Content'].values)
X = tokenizer.texts_to_sequences(df2['Content'].values)
X = pad_sequences(X, maxlen = 2000)
predictions = model.predict(X, batch_size=batch_size, verbose=1, steps=None)
numerical_predictions_assigned = []
qualitative_prediction_assigned = []
for i in predictions:
    i = i[1]
    if i > 0.60:
        numerical_predictions_assigned.append(1)
    elif i > 0.40:
        numerical_predictions_assigned.append(0.5)
    else:
        numerical_predictions_assigned.append(0)
    if i > 0.8:
        qualitative_prediction_assigned.append("Strongly Positive")
    elif i > 0.6:
        qualitative_prediction_assigned.append("Moderately Positive")
    elif i > 0.4:
        qualitative_prediction_assigned.append("Neutral")
    elif i > 0.2:
        qualitative_prediction_assigned.append("Moderately Negative")
    else:
        qualitative_prediction_assigned.append("Strongly Negative")
df2["Predicted Quantitative Sentiment"] = numerical_predictions_assigned
df2["Predicted Qualitative Sentiment"] = qualitative_prediction_assigned
df2.head()
success_count = 0
total_count = 0
for i, row in df2.iterrows():
    manual_sentiment = row["Sentiment"]
    predicted_sentiment = row["Predicted Quantitative Sentiment"]
    qualitative_sentiment = row["Predicted Qualitative Sentiment"]
    if manual_sentiment == predicted_sentiment:
        success_count += 1
    total_count += 1
    # print(manual_sentiment, predicted_sentiment, qualitative_sentiment)
print("The accuracy on the manually labelled data set is {}".format(float(success_count/total_count)))
```
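The five-way bucketing used above can be factored into a small helper. This is only a sketch using the notebook's thresholds; `qualitative_label` is a name introduced here:

```python
def qualitative_label(p):
    """Map a positive-class probability to one of five sentiment buckets."""
    buckets = [(0.8, "Strongly Positive"),
               (0.6, "Moderately Positive"),
               (0.4, "Neutral"),
               (0.2, "Moderately Negative")]
    for threshold, label in buckets:
        if p > threshold:
            return label
    return "Strongly Negative"
```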
#### Testing RNN Accuracy on Spring 2018 Data
```
sp18_df = pd.read_csv("./Sentiment Analysis Data/Classified Articles.csv")
print(sp18_df.shape)
sp18_df.head()
tokenizer.fit_on_texts(sp18_df['contents'].values)
X = tokenizer.texts_to_sequences(sp18_df['contents'].values)
X = pad_sequences(X, maxlen = 2000)
predictions = model.predict(X, batch_size=batch_size, verbose=1, steps=None)
numerical_predictions_assigned = []
qualitative_prediction_assigned = []
for i in predictions:
    if i[1] > i[0]:
        numerical_predictions_assigned.append(1)
    else:
        numerical_predictions_assigned.append(0)
    i = i[1]
    if i > 0.8:
        qualitative_prediction_assigned.append("Strongly Positive")
    elif i > 0.6:
        qualitative_prediction_assigned.append("Moderately Positive")
    elif i > 0.4:
        qualitative_prediction_assigned.append("Neutral")
    elif i > 0.2:
        qualitative_prediction_assigned.append("Moderately Negative")
    else:
        qualitative_prediction_assigned.append("Strongly Negative")
sp18_df["Predicted Quantitative Sentiment"] = numerical_predictions_assigned
sp18_df["Predicted Qualitative Sentiment"] = qualitative_prediction_assigned
sp18_df.head()
success_count = 0
total_count = 0
for i, row in sp18_df.iterrows():
    manual_sentiment = row["marks"]
    predicted_sentiment = row["Predicted Quantitative Sentiment"]
    qualitative_sentiment = row["Predicted Qualitative Sentiment"]
    if manual_sentiment == predicted_sentiment:
        success_count += 1
    total_count += 1
    # print(manual_sentiment, predicted_sentiment, qualitative_sentiment)
print("The accuracy on the manually labelled data set is {}".format(float(success_count/total_count)))
```
## Section 7.1: A First Plotly Streaming Plot
Welcome to Plotly's Python API User Guide.
> Links to the other sections can be found on the User Guide's [homepage](https://plot.ly/python/user-guide#Table-of-Contents:)
Section 7 is divided into separate notebooks, as follows:
* [7.0 Streaming API introduction](https://plot.ly/python/intro_streaming)
* [7.1 A First Plotly Streaming Plot](https://plot.ly/python/streaming_part1)
<hr>
Check which version is installed on your machine and please upgrade if needed.
```
# (*) Import plotly package
import plotly
# Check plotly version (if not latest, please upgrade)
plotly.__version__
```
<hr>
Import a few modules and sign in to Plotly using our credentials file:
```
# (*) To communicate with Plotly's server, sign in with credentials file
import plotly.plotly as py
# (*) Useful Python/Plotly tools
import plotly.tools as tls
# (*) Graph objects to piece together plots
from plotly.graph_objs import *
import numpy as np # (*) numpy for math functions and arrays
```
Finally, retrieve the stream ids in our credentials file as set up in <a href="https://plot.ly/python/streaming-tutorial#Get-your-stream-tokens" target="_blank">subsection 7.1</a>:
```
stream_ids = tls.get_credentials_file()['stream_ids']
```
### 7.1 A first Plotly streaming plot
Making Plotly streaming plots comes down to working with two objects:
* A stream id object (`Stream` in the `plotly.graph_objs` module),
* A stream link object (`py.Stream`).
The stream id object is a graph object that embeds a particular stream id into each of your plot's traces. Like all graph objects, the stream id object is equipped with extensive `help()` documentation, key validation and a nested update method. In brief, the stream id object initializes the connection between a trace in your Plotly graph and a data stream (for that trace).
Meanwhile, the stream link object, like all objects in the `plotly.plotly` module, links content in your Python/IPython session to Plotly's servers. More precisely, it is the interface that updates the data in the plotted traces in real time (as identified with the unique stream id used in the corresponding stream id object).
If you find the `Stream`/`py.Stream` terminology too confusing --- and you do not mind losing access to the methods associated with Plotly graph objects --- you can forgo the stream id object and substitute a Python `dict()` in the following examples.
So, we start by making an instance of the stream id object:
```
help(Stream) # call help() to see the specifications of the Stream object!
# Get stream id from stream id list
stream_id = stream_ids[0]
# Make instance of stream id object
stream = Stream(
token=stream_id, # (!) link stream id to 'token' key
maxpoints=80 # (!) keep a max of 80 pts on screen
)
```
The `'maxpoints'` key sets the maximum number of points to keep on the plot from an incoming stream.
Streaming Plotly plots are initialized with a standard (i.e. REST API) call to `py.plot()` or `py.iplot()` that embeds your unique stream ids in each of the plot's traces.
Each Plotly trace object (e.g. `Scatter`, `Bar`, `Histogram`, etc. More in <a href="https://plot.ly/python/overview#0.4-Plotly's-graph-objects" target="_blank">Section 0</a>) has a `'stream'` key made available to link the trace object in question to a corresponding stream object.
In our first example, we link a scatter trace object to the stream:
```
# Initialize trace of streaming plot by embedding the unique stream_id
trace1 = Scatter(
x=[],
y=[],
mode='lines+markers',
stream=stream # (!) embed stream id, 1 per trace
)
data = Data([trace1])
```
Then, add a title to the layout object and initialize your Plotly streaming plot:
```
# Add title to layout object
layout = Layout(title='Time Series')
# Make a figure object
fig = Figure(data=data, layout=layout)
# (@) Send fig to Plotly, initialize streaming plot, open new tab
unique_url = py.plot(fig, filename='s7_first-stream')
```
Great! Your Plotly streaming plot is initialized. Here's a screenshot:
<img src="http://i.imgur.com/Lx7ICLI.png" />
<br>
Now, let's add data on top of it, or more precisely, send a *stream* of data to it.
So, first
```
help(py.Stream) # run help() of the Stream link object
# (@) Make instance of the `Stream link` object
# with the same stream id as the `Stream Id` object
s = py.Stream(stream_id)
# (@) Open the stream
s.open()
```
We can now use the Stream Link object `s` in order to `stream` data to our plot.
As an example, we will send a time stream and some random numbers (for 200 iterations):
```
# (*) Import modules to keep track of and format the current time
import datetime
import time
i = 0 # a counter
k = 5 # some shape parameter
N = 200 # number of points to be plotted
# Delay start of stream by 5 sec (time to switch tabs)
time.sleep(5)
while i < N:
    i += 1  # add to counter
    # Current time on x-axis, random numbers on y-axis
    x = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')
    y = (np.cos(k*i/50.)*np.cos(i/50.)+np.random.randn(1))[0]
    # (-) Both x and y are numbers (i.e. not lists nor arrays)
    # (@) write to Plotly stream!
    s.write(dict(x=x, y=y))
    # (!) Write numbers to stream to append current data on plot,
    #     write lists to overwrite existing data on plot (more in 7.2).
    time.sleep(0.08)  # (!) plot a point every 80 ms, for smoother plotting
# (@) Close the stream when done plotting
s.close()
```
A stream of data totalling 200 points is sent to Plotly's servers in real-time.
Watching this unfold in an open plot.ly tab looks something like this (in the old UI):
```
from IPython.display import YouTubeVideo
YouTubeVideo('OVQ2Guypp_M', width='100%', height='350')
```
It took Plotly around 15 seconds to plot the 200 data points sent in the data stream. After that, the generated plot looks like any other Plotly plot.
With that said, if you have enough computing resources to let a server run indefinitely, why not use `while True:` as the while-loop expression and never close the stream?
Luckily, it turns out that Plotly has access to such computer resources; a simulation generated using the same code as the above has been running since March 2014.
This plot is embedded below:
```
# Embed never-ending time series streaming plot
tls.embed('streaming-demos','12')
# Note that the time point correspond to internal clock of the servers,
# that is UTC time.
```
Anyone can view your streaming graph in real time. All viewers see the same data simultaneously (try it! Open this notebook in two different browser windows and observe that the graphs plot identical data!).
Simply put, Plotly's streaming API is awesome!
In brief: to make a Plotly streaming plot:
1. Make a `stream id object` (`Stream` in the `plotly.graph_objs` module) containing the `stream id` (found in the **settings** of your Plotly account) and, optionally, the maximum number of points to be kept on screen.
2. Provide the `stream id object` as the key value for the `stream` attribute in your trace object.
3. Make a `stream link object` (`py.Stream`) containing the same stream id as the `stream id object` and open the stream with the `.open()` method.
4. Write data to the `stream link object` with the `.write()` method. When done, close the stream with the `.close()` method.
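Steps 1 and 2 can be sketched with plain dicts (as noted earlier, a `dict()` can stand in for the `Stream` graph object; `'my-stream-id'` below is a placeholder token, not a real one):

```python
# Step 1: a stream id object, here as a plain dict (placeholder token)
stream_id = 'my-stream-id'  # in practice, taken from your Plotly settings
stream = dict(token=stream_id, maxpoints=80)

# Step 2: embed it under the 'stream' key of a trace
trace = dict(x=[], y=[], mode='lines+markers', stream=stream)

# Steps 3 and 4 then use the stream link object:
#   s = py.Stream(stream_id); s.open(); s.write(dict(x=..., y=...)); s.close()
```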
Here are the links to the subsections' notebooks:
* [7.0 Streaming API introduction](https://plot.ly/python/intro_streaming)
* [7.1 A first Plotly streaming plot](https://plot.ly/python/streaming_part1)
<div style="float:right; \">
<img src="http://i.imgur.com/4vwuxdJ.png"
align=right style="float:right; margin-left: 5px; margin-top: -10px" />
</div>
<h4>Got Questions or Feedback? </h4>
Reach us here at: <a href="https://community.plot.ly" target="_blank">Plotly Community</a>
<h4> What's going on at Plotly? </h4>
Check out our twitter:
<a href="https://twitter.com/plotlygraphs" target="_blank">@plotlygraphs</a>
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install publisher --upgrade
import publisher
publisher.publish(
's7_streaming_p1-first-stream', 'python/streaming_part1//', 'Getting Started with Plotly Streaming',
'Getting Started with Plotly Streaming',
title = 'Getting Started with Plotly Streaming',
thumbnail='', language='python',
layout='user-guide', has_thumbnail='false')
```
<a href="https://colab.research.google.com/github/maragraziani/interpretAI_DigiPath/blob/main/hands-on-session-2/hands-on-session-2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# <center> Hands-on Session 2</center>
## <center> Explainable Graph Representations in Digital Pathology</center>
**Presented by:**
- Guillaume Jaume
- Pre-doc researcher with EPFL & IBM Research
- gja@zurich.ibm.com
<br/>
- Pushpak Pati
- Pre-doc researcher with ETH & IBM Research
- pus@zurich.ibm.com
#### Content
* [Introduction & Motivation](#Intro)
* [Installation & Data](#Section0)
* [(1) Cell Graph construction](#Section1)
* [(2) Cell Graph classification](#Section2)
* [(3) Cell Graph explanation](#Section3)
* [(4) Nuclei concept analysis](#Section4)
#### Take-away
* Motivation of entity-graph modeling for model explainability
* Getting familiar with the histocartography library and BRACS dataset
* Tools to construct and analyze cell-graphs
* Understand and use post-hoc graph explainability techniques
## Introduction & Motivation:
The first part of this tutorial will guide you to build **interpretable entity-based representations** of tissue regions.
The motivation for shifting from pixel- to entity-based analysis is as follows:
- Cancer diagnosis and prognosis from tissue specimens highly depend on the phenotype and topological distribution of constituting histological entities, *e.g.,* cells, nuclei, tissue regions. To adequately characterize the tissue composition and utilize the tissue structure-to-function relationship, an entity-paradigm is imperative.
### <center> "*Tissue composition matters for analyzing tissue functionality.*" </center>
<figure class="image">
<img src="Figures/fig1_1.png" width="750">
<img src="Figures/fig1_2.png" width="750">
</figure>
- Entity-based processing makes it possible to delineate the diagnostically relevant and irrelevant histopathological entities. The set of entities and the corresponding inter- and intra-entity interactions can be customized using task-specific prior pathological knowledge.
### <center> "*Entity-paradigm enables to incorporate pathological prior during diagnosis.*" </center>
<figure class="image">
<img src="Figures/fig2.png" width="750">
</figure>
- Unlike most deep learning techniques operating at the pixel level, entity-based analysis preserves the notion of histopathological entities, which pathologists can relate to and reason with. Thus, the explanations of entity-graph based methodologies can be interpreted by pathologists, which can potentially build trust in and adoption of AI in clinical practice. Notably, the produced explanations in the entity space are better localized, and therefore better discernible.
### <center> "*Pathologically comprehensible and localized explanations in the entity-space.*" </center>
<figure class="image">
<img src="Figures/fig3.png" width="750">
</figure>
- Further, the light-weight and flexible graph representation scales to large, arbitrary tissue regions by accommodating an arbitrary number of nodes and edges.
### <center> "*Context vs Resolution trade-off.*" </center>
<figure class="image">
<img src="Figures/context.png" width="550">
</figure>
In this tutorial, we will focus on nuclei as entities to build **Cell-graphs**. A similar approach naturally extends to other histopathological entities, such as tissue regions and glands.
**References:**
- [Hierarchical Graph Representations in Digital Pathology.](https://arxiv.org/pdf/2102.11057.pdf) Pati et al., arXiv:2102.11057, 2021.
- [CGC-Net: Cell Graph Convolutional Network for Grading of Colorectal Cancer Histology Images.](https://arxiv.org/pdf/1909.01068.pdf) Zhou et al., IEEE CVPR Workshops, 2019.
<div id="Section0"></div>
## Installation and Data
- Running on **Colab**: this tutorial requires a GPU. Colab allows you to use a K80 GPU for 12h. Please do the following steps:
- Open the tab *Runtime*
- Click on *Change Runtime Type*
- Set the hardware to *GPU* and *Save*
- Installation of **histocartography**, a Python library that facilitates entity-graph analysis and explainability in Computational Pathology. Documentation and examples can be checked [here](https://github.com/histocartography/histocartography).
<figure class="image">
<img src="Figures/hcg_logo.png" width="450">
</figure>
- Downloading samples from the **BRACS** dataset, a large cohort of H&E stained breast carcinoma tissue regions. More information and download link to the dataset can be found [here](https://www.bracs.icar.cnr.it/).
<figure class="image">
<img src="Figures/bracs_logo.png" width="450">
</figure>
```
# installing missing packages
!pip install histocartography
!pip install mpld3
# Required only if you run this code on Colab:
# Get dependent files
!wget https://raw.githubusercontent.com/maragraziani/interpretAI_DigiPath/main/hands-on-session-2/cg_bracs_cggnn_3_classes_gin.yml
!wget https://raw.githubusercontent.com/maragraziani/interpretAI_DigiPath/main/hands-on-session-2/utils.py
# Get images
import os
!mkdir images
os.chdir('images')
!wget --content-disposition https://ibm.box.com/shared/static/6320wnhxsjte9tjlqb02zn0jaxlca5vb.png
!wget --content-disposition https://ibm.box.com/shared/static/d8rdupnzbo9ufcnc4qaluh0s2w7jt8mh.png
!wget --content-disposition https://ibm.box.com/shared/static/yj6kho8j5ovypafnheoju7y18bvtk32h.png
os.chdir('..')
import os
from glob import glob
from PIL import Image
# 1. set up inline show
import matplotlib.pyplot as plt
%matplotlib inline
import mpld3
mpld3.enable_notebook()
# 2. visualize the images: We will work with these 3 samples throughout the tutorial
images = [(Image.open(path), os.path.basename(path).split('.')[0])
for path in glob(os.path.join('images', '*.png'))]
for image, image_name in images:
print('Image:', image_name)
display(image)
```
<div id="Section1"></div>
## 1) Image-to-Graph: Cell-Graph construction
This code builds a cell-graph for an input H&E image. The step-by-step procedure to define a cell-graph is as follows:
- **Nodes**: Detecting nuclei using HoverNet
- **Node features**: Extracting features to characterize the nuclei
- **Edges**: Constructing a k-NN graph to denote the inter-nuclei interactions
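As a rough illustration of the k-NN edge construction (toy centroid values introduced here; in the code below this is handled by `KNNGraphBuilder`):

```python
import numpy as np

# Toy nuclei centroids; connect each node to its k nearest neighbours,
# keeping only edges shorter than a distance threshold
centroids = np.array([[0, 0], [10, 0], [0, 10], [100, 100]], dtype=float)
k, thresh = 2, 50.0

# Pairwise Euclidean distances, with self-distances masked out
dists = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
np.fill_diagonal(dists, np.inf)

edges = [(i, j)
         for i, row in enumerate(dists)
         for j in np.argsort(row)[:k]
         if row[j] <= thresh]
# Node 3 ends up isolated: it is farther than `thresh` from every other node.
```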
**References:**
- [Hierarchical Graph Representations in Digital Pathology.](https://arxiv.org/pdf/2102.11057.pdf) Pati et al., arXiv:2102.11057, 2021.
- [Hover-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images.](https://arxiv.org/pdf/1812.06499.pdf) Graham et al., Medical Image Analysis, 2019.
- [PanNuke Dataset Extension, Insights and Baselines.](https://arxiv.org/abs/2003.10778) Gamper et al., arXiv:2003.10778, 2020.
```
import os
from glob import glob
from PIL import Image
import numpy as np
import torch
from tqdm import tqdm
from dgl.data.utils import save_graphs
from histocartography.preprocessing import NucleiExtractor, DeepFeatureExtractor, KNNGraphBuilder, NucleiConceptExtractor
import warnings
warnings.filterwarnings("ignore")
# Define nuclei extractor: HoverNet pre-trained on the PanNuke dataset.
nuclei_detector = NucleiExtractor()
# Define a deep feature extractor with ResNet34 and patches 72 resized to 224 to match ResNet input
feature_extractor = DeepFeatureExtractor(architecture='resnet34', patch_size=72, resize_size=224)
# Define a graph builder to build a DGLGraph object
graph_builder = KNNGraphBuilder(k=5, thresh=50, add_loc_feats=True)
# Define nuclei concept extractor: extract nuclei-level attributes - will be useful later for understanding the model
nuclei_concept_extractor = NucleiConceptExtractor(
concept_names='area,eccentricity,roundness,roughness,shape_factor,mean_crowdedness,glcm_entropy,glcm_contrast'
)
# Load image fnames to process
image_fnames = glob(os.path.join('images', '*.png'))
# Create output directories
os.makedirs('cell_graphs', exist_ok=True)
os.makedirs('nuclei_concepts', exist_ok=True)
for image_name in tqdm(image_fnames):
    print('Processing...', image_name)
    # 1. load image
    image = np.array(Image.open(image_name))
    # 2. nuclei detection
    nuclei_map, nuclei_centroids = nuclei_detector.process(image)
    # 3. nuclei feature extraction
    features = feature_extractor.process(image, nuclei_map)
    # 4. build the cell graph
    cell_graph = graph_builder.process(
        instance_map=nuclei_map,
        features=features
    )
    # 5. extract the nuclei-level concepts, i.e., properties: shape, size, etc.
    concepts = nuclei_concept_extractor.process(image, nuclei_map)
    # 6. print graph properties
    print('Number of nodes:', cell_graph.number_of_nodes())
    print('Number of edges:', cell_graph.number_of_edges())
    print('Number of features per node:', cell_graph.ndata['feat'].shape[1])
    # 7. save graph with DGL library and concepts
    image_id = os.path.basename(image_name).split('.')[0]
    save_graphs(os.path.join('cell_graphs', image_id + '.bin'), [cell_graph])
    with open(os.path.join('nuclei_concepts', image_id + '.npy'), 'wb') as f:
        np.save(f, concepts)
from histocartography.visualization import OverlayGraphVisualization, InstanceImageVisualization
from utils import *
# Visualize the nuclei detection
visualizer = InstanceImageVisualization()
viz_nuclei = visualizer.process(image, instance_map=nuclei_map)
show_inline(viz_nuclei)
# Visualize the resulting cell graph
visualizer = OverlayGraphVisualization(
instance_visualizer=InstanceImageVisualization(
instance_style="filled+outline"
)
)
viz_cg = visualizer.process(
canvas=image,
graph=cell_graph,
instance_map=nuclei_map
)
show_inline(viz_cg)
```
<div id="Section2"></div>
## 2) Cell-graph classification
Given the set of cell graphs generated for the 4000 H&E images in the BRACS dataset, a Graph Neural Network (GNN) is trained to classify each sample as *Benign*, *Atypical* or *Malignant*.
A GNN is an artificial neural network designed to operate on graph-structured data. GNNs work analogously to Convolutional Neural Networks (CNNs): for each node, a GNN layer aggregates and updates information from its neighbors to contextualize the node's feature representation. More information about GNNs can be found [here](https://github.com/guillaumejaume/graph-neural-networks-roadmap).
<figure class="image">
<img src="Figures/gnn.png" width="650">
</figure>
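To make the aggregate-and-update step concrete, here is a minimal numpy sketch of one GNN layer with mean aggregation over neighbours (toy graph and fixed weights introduced here, not the actual CG-GNN layer):

```python
import numpy as np

# Toy graph: 3 nodes, adjacency matrix with self-loops
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
H = np.array([[1.0, 0.0],   # node features (3 nodes, 2 dims)
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.array([[1.0, -1.0],  # weight matrix (learned in practice, fixed here)
              [0.5,  0.5]])

# One layer: mean-aggregate each node's neighbourhood, then transform + ReLU
deg = A.sum(axis=1, keepdims=True)
H_next = np.maximum(((A @ H) / deg) @ W, 0.0)
```

Stacking such layers lets each node's embedding absorb information from increasingly distant neighbours.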
**References:**
- [Hierarchical Graph Representations in Digital Pathology.](https://arxiv.org/pdf/2102.11057.pdf) Pati et al., arXiv:2102.11057, 2021.
- [Benchmarking Graph Neural Networks.](https://arxiv.org/pdf/2003.00982.pdf) Dwivedi et al., NeurIPS, 2020.
```
import os
import yaml
from histocartography.ml import CellGraphModel
# 1. load CG-GNN config
config_fname = 'cg_bracs_cggnn_3_classes_gin.yml'
with open(config_fname, 'r') as file:
    config = yaml.load(file, Loader=yaml.FullLoader)
# 2. declare cell graph model: A pytorch model for predicting the tumor type given an input cell-graph
model = CellGraphModel(
gnn_params=config['gnn_params'],
classification_params=config['classification_params'],
node_dim=514,
num_classes=3,
pretrained=True
)
# 3. print model
print('PyTorch Model is defined as:', model)
```
<div id="Section3"></div>
### 3) Cell Graph explanation: Apply GraphGradCAM to CG-GNN
As presented in the first hands-on session, GradCAM is a popular post-hoc feature attribution method that highlights the regions of the input activated by the neural network, *i.e.,* the elements of the input that *explain* the prediction. As the input is now a set of *interpretable*, biologically-defined nuclei, the explanation is also biologically *interpretable*.
We use a modified version of GradCAM that can work with GNNs: GraphGradCAM. Specifically, GraphGradCAM follows 2 steps:
- Computation of channel-wise importance score:
<figure class="image">
<img src="Figures/eq1.png" width="180">
</figure>
where, $w_k^{(l)}$ is the importance score of channel $k$ in layer $l$. $|V|$ is the number of nodes in the graph, $H^{(l)}_{n, k}$ are the node embeddings in channel $k$ at layer $l$ and, $y_{\max}$ is the logit value of the predicted class.
- Node-wise importance score computation:
<figure class="image">
<img src="Figures/eq2.png" width="250">
</figure>
where, $L(l, v)$ denotes the importance of node $v \in V$ in layer $l$, and $d(l)$ denotes the number of node attributes at layer $l$.
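The two steps can be sketched in numpy for a single layer, assuming the gradient of the predicted-class logit with respect to the node embeddings has already been obtained by backpropagation (toy values introduced here, not histocartography's implementation):

```python
import numpy as np

# Toy node embeddings H (|V| = 3 nodes, d = 2 channels) and the gradient
# of the predicted-class logit w.r.t. H, as if computed by backprop
H = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.5]])
dy_dH = np.array([[0.2, -0.1],
                  [0.4,  0.3],
                  [0.0,  0.2]])

num_nodes, d = H.shape
# Channel-wise importance w_k: gradient averaged over the nodes
w = dy_dH.mean(axis=0)
# Node-wise importance L(l, v): ReLU of the weighted channel sum,
# averaged over the d channels
node_importance = np.maximum((H * w).sum(axis=1), 0.0) / d
```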
**Note:** GraphGradCAM is one of many feature attribution methods for determining input-level importance scores. There exists a rich literature proposing other approaches, for instance GNNExplainer, GraphGradCAM++, and GraphLRP.
**References:**
- [Grad-CAM : Visual Explanations from Deep Networks.](https://arxiv.org/pdf/1610.02391.pdf) Selvaraju et al., ICCV, 2017.
- [Explainability methods for graph convolutional neural networks.](https://openaccess.thecvf.com/content_CVPR_2019/papers/Pope_Explainability_Methods_for_Graph_Convolutional_Neural_Networks_CVPR_2019_paper.pdf) Pope et al., CVPR, 2019.
- [Quantifying Explainers of Graph Neural Networks in Computational Pathology.](https://arxiv.org/pdf/2011.12646.pdf) Jaume et al., CVPR, 2021.
```
import torch
from glob import glob
import tqdm
import numpy as np
from PIL import Image
from dgl.data.utils import load_graphs
from histocartography.interpretability import GraphGradCAMExplainer
from histocartography.utils.graph import set_graph_on_cuda
is_cuda = torch.cuda.is_available()
INDEX_TO_TUMOR_TYPE = {
0: 'Benign',
1: 'Atypical',
2: 'Malignant'
}
# 1. Define a GraphGradCAM explainer
explainer = GraphGradCAMExplainer(model=model)
# 2. Load preprocessed cell graphs, concepts & images
cg_fnames = glob(os.path.join('cell_graphs', '*.bin'))
image_fnames = glob(os.path.join('images', '*.png'))
concept_fnames = glob(os.path.join('nuclei_concepts', '*.npy'))
cg_fnames.sort()
image_fnames.sort()
concept_fnames.sort()
# 3. Explain all our samples
output = []
for cg_name, image_name, concept_name in zip(cg_fnames, image_fnames, concept_fnames):
    print('Processing...', image_name)
    image = np.array(Image.open(image_name))
    concepts = np.load(concept_name)
    graph, _ = load_graphs(cg_name)
    graph = graph[0]
    graph = set_graph_on_cuda(graph) if is_cuda else graph
    importance_scores, logits = explainer.process(
        graph,
        output_name=cg_name.replace('.bin', '')
    )
    print('logits: ', logits)
    print('prediction: ', INDEX_TO_TUMOR_TYPE[np.argmax(logits)], '\n')
    output.append({
        'image_name': os.path.basename(image_name).split('.')[0],
        'image': image,
        'graph': graph,
        'importance_scores': importance_scores,
        'logits': logits,
        'concepts': concepts
    })
from histocartography.visualization import OverlayGraphVisualization, InstanceImageVisualization
INDEX_TO_TUMOR_TYPE = {
0: 'Benign',
1: 'Atypical',
2: 'Malignant'
}
# Visualize the cell graph along with its relative node importance
visualizer = OverlayGraphVisualization(
instance_visualizer=InstanceImageVisualization(),
colormap='plasma'
)
for i, instance in enumerate(output):
    print(instance['image_name'], instance['logits'])
    node_attributes = {}
    node_attributes["color"] = instance['importance_scores']
    node_attributes["thickness"] = 15
    node_attributes["radius"] = 10
    viz_cg = visualizer.process(
        canvas=instance['image'],
        graph=instance['graph'],
        node_attributes=node_attributes,
    )
    show_inline(viz_cg, title='Sample: {}'.format(INDEX_TO_TUMOR_TYPE[np.argmax(instance['logits'])]))
```
<div id="Section4"></div>
### 4) Nuclei concept analysis: These nodes are important, but why?
We identified the important nuclei, *i.e.,* the discriminative nodes, using GraphGradCAM. We would now like to push our analysis one step further and understand whether the attributes (shape, size, etc.) of the important nuclei match prior pathological knowledge. For instance, it is known that cancerous nuclei are larger than benign ones, and atypical nuclei are expected to have irregular shapes.
To this end, we extract a set of nuclei-level attributes on the most important nuclei.
**Note**: A *quantitative* analysis can be performed by studying nuclei-concept distributions and how they align with prior pathological knowledge. However, this analysis is beyond the scope of this tutorial. The reader can refer to [this work](https://arxiv.org/pdf/2011.12646.pdf) for more details.
**References:**
- [Quantifying Explainers of Graph Neural Networks in Computational Pathology.](https://arxiv.org/pdf/2011.12646.pdf) Jaume et al., CVPR, 2021.
```
for i, out in enumerate(output):
    if 'benign' in out['image_name']:
        benign_data = out
    elif 'atypical' in out['image_name']:
        atypical_data = out
    elif 'malignant' in out['image_name']:
        malignant_data = out
```
#### Nuclei visualization
- Visualizing the 20 most important nuclei
- Visualizing 20 random nuclei for comparison
```
# Top k nuclei
from utils import get_patches, plot_patches
k = 20
nuclei = get_patches(out=benign_data, k=k)
plot_patches(nuclei, ncol=10)
nuclei = get_patches(out=atypical_data, k=k)
plot_patches(nuclei, ncol=10)
nuclei = get_patches(out=malignant_data, k=k)
plot_patches(nuclei, ncol=10)
# k random nuclei (for comparison)
from utils import get_patches, plot_patches
k = 20
nuclei = get_patches(out=benign_data, k=k, random=True)
plot_patches(nuclei, ncol=10)
nuclei = get_patches(out=atypical_data, k=k, random=True)
plot_patches(nuclei, ncol=10)
nuclei = get_patches(out=malignant_data, k=k, random=True)
plot_patches(nuclei, ncol=10)
#area,eccentricity,roundness,roughness,shape_factor,mean_crowdedness,glcm_entropy,glcm_contrast
FEATURE_TO_INDEX = {
'area': 0,
'eccentricity': 1,
'roundness': 2,
'roughness': 3,
'shape_factor': 4,
'mean_crowdedness': 5,
'glcm_entropy': 6,
'glcm_contrast': 7,
}
def compute_concept_ratio(data1, data2, feature, k):
index = FEATURE_TO_INDEX[feature]
important_indices = (-data1['importance_scores']).argsort()[:k]
important_data1 = data1['concepts'][important_indices, index]
important_indices = (-data2['importance_scores']).argsort()[:k]
important_data2 = data2['concepts'][important_indices, index]
return sum(important_data1) / sum(important_data2)
```
#### Pathological fact: "Cancerous nuclei are expected to be larger than benign ones": area(Malignant) > area(Benign)
```
k = 20
ratio = compute_concept_ratio(malignant_data, benign_data, 'area', k)
print('Ratio between the area of important malignant and benign nuclei: ', round(ratio, 4))
```
#### Pathological fact: "Atypical nuclei are hyperchromatic (solid) and Malignant nuclei are vesicular (porous)": contrast(Malignant) > contrast(Atypical)
```
k = 20
ratio = compute_concept_ratio(malignant_data, atypical_data, 'glcm_contrast', k)
print('Ratio between the contrast of important malignant and atypical nuclei: ', round(ratio, 4))
```
#### Pathological fact: "Benign nuclei are more crowded than Atypical ones": crowdedness(Atypical) > crowdedness(Benign)
```
k = 20
ratio = compute_concept_ratio(atypical_data, benign_data, 'mean_crowdedness', k)
print('Ratio between the crowdedness of important atypical and benign nuclei: ', round(ratio, 4))
```
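The ratios above compare only sums of attribute values; the more *quantitative* analysis noted earlier would compare whole attribute distributions between classes. A minimal, self-contained sketch with synthetic stand-ins for the output dicts (`benign_demo` and `malignant_demo` are illustrative, not real pipeline output):

```python
import numpy as np

def concept_distribution(data, feature_index, k):
    """Attribute values of the k most important nuclei."""
    top = (-data['importance_scores']).argsort()[:k]
    return data['concepts'][top, feature_index]

# Synthetic stand-ins: 100 nuclei, 8 attributes each ('area' is index 0)
rng = np.random.default_rng(0)
benign_demo = {'importance_scores': rng.random(100),
               'concepts': rng.normal(10.0, 2.0, size=(100, 8))}
malignant_demo = {'importance_scores': rng.random(100),
                  'concepts': rng.normal(14.0, 2.0, size=(100, 8))}

area_benign = concept_distribution(benign_demo, 0, k=20)
area_malignant = concept_distribution(malignant_demo, 0, k=20)

# Compare the distributions, not just a single ratio
print('mean area (benign):   ', round(float(area_benign.mean()), 2))
print('mean area (malignant):', round(float(area_malignant.mean()), 2))
```

On real data, the two distributions could further be compared with a statistical test (e.g. `scipy.stats.ks_2samp`) to quantify how well the attribute separates the classes.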
## Conclusion:
Considering the adoption of Graph Neural Networks in various domains, such as pathology, radiology, computational biology, and satellite and natural images, graph interpretability and explainability are imperative. The presented algorithms and tools aim to motivate and instruct work in this direction. Though the presented technologies are demonstrated on digital pathology, they can be seamlessly transferred to other domains by building domain-specific graph representations. Potentially, entity-graph modeling and analysis can identify relevant cues for explainable stratification.
<figure class="image">
<img src="Figures/conclusion.png" width="850">
</figure>
<a href="https://colab.research.google.com/github/jonkrohn/ML-foundations/blob/master/notebooks/single-point-regression-gradient.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Gradient of a Single-Point Regression
In this notebook, we calculate the gradient of quadratic cost with respect to a straight-line regression model's parameters. We keep the partial derivatives as simple as possible by limiting the model to handling a single data point.
```
import torch
```
Let's use the same data as we did in the [*Regression in PyTorch* notebook](https://github.com/jonkrohn/ML-foundations/blob/master/notebooks/regression-in-pytorch.ipynb) as well as for demonstrating the Moore-Penrose Pseudoinverse in the [*Linear Algebra II* notebook](https://github.com/jonkrohn/ML-foundations/blob/master/notebooks/2-linear-algebra-ii.ipynb):
```
xs = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7.])
ys = torch.tensor([1.86, 1.31, .62, .33, .09, -.67, -1.23, -1.37])
```
A line is given by $y = mx + b$, where $m$ is the slope and $b$ is the $y$-intercept:
```
def regression(my_x, my_m, my_b):
return my_x*my_m + my_b
```
Let's initialize $m$ and $b$ with the same "random" near-zero values as we did in the *Regression in PyTorch* notebook:
```
m = torch.tensor([0.9]).requires_grad_()
b = torch.tensor([0.1]).requires_grad_()
```
To keep the partial derivatives as simple as possible, let's move forward with a single instance $i$ from the eight possible data points:
```
i = 7
x = xs[i]
y = ys[i]
x
y
```
**Step 1**: Forward pass
We can flow the scalar tensor $x$ through our regression model to produce $\hat{y}$, an estimate of $y$. Prior to any model training, this is an arbitrary estimate:
```
yhat = regression(x, m, b)
yhat
```
**Step 2**: Compare $\hat{y}$ with true $y$ to calculate cost $C$
In the *Regression in PyTorch* notebook, we used mean-squared error, which averages quadratic cost over multiple data points. With a single data point, here we can use quadratic cost alone. It is defined by: $$ C = (\hat{y} - y)^2 $$
```
def squared_error(my_yhat, my_y):
return (my_yhat - my_y)**2
C = squared_error(yhat, y)
C
```
**Step 3**: Use autodiff to calculate gradient of $C$ w.r.t. parameters
```
C.backward()
```
The partial derivative of $C$ with respect to $m$ ($\frac{\partial C}{\partial m}$) is:
```
m.grad
```
And the partial derivative of $C$ with respect to $b$ ($\frac{\partial C}{\partial b}$) is:
```
b.grad
```
**Return to *Calculus II* slides here to derive $\frac{\partial C}{\partial m}$ and $\frac{\partial C}{\partial b}$.**
$$ \frac{\partial C}{\partial m} = 2x(\hat{y} - y) $$
```
2*x*(yhat.item()-y)
```
$$ \frac{\partial C}{\partial b} = 2(\hat{y}-y) $$
```
2*(yhat.item()-y)
```
### The Gradient of Cost, $\nabla C$
The gradient of cost, which is symbolized $\nabla C$ (pronounced "nabla C"), is a vector of all the partial derivatives of $C$ with respect to each of the individual model parameters:
$\nabla C = \nabla_p C = \left[ \frac{\partial{C}}{\partial{p_1}}, \frac{\partial{C}}{\partial{p_2}}, \cdots, \frac{\partial{C}}{\partial{p_n}} \right]^T $
In this case, there are only two parameters, $m$ and $b$:
$\nabla C = \left[ \frac{\partial{C}}{\partial{m}}, \frac{\partial{C}}{\partial{b}} \right]^T $
```
nabla_C = torch.tensor([m.grad.item(), b.grad.item()]).T
nabla_C
```
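As a preview of where this gradient is headed: gradient descent nudges each parameter a small step against its partial derivative. A minimal sketch of a single update, recreating the single-point setup from above (the learning rate here is a hypothetical choice, not one used in the slides):

```python
import torch

x, y = torch.tensor(7.), torch.tensor(-1.37)  # the same data point as above
m = torch.tensor([0.9]).requires_grad_()
b = torch.tensor([0.1]).requires_grad_()

C = ((m * x + b) - y) ** 2  # quadratic cost
C.backward()                # populates m.grad and b.grad

lr = 0.01  # hypothetical learning rate
with torch.no_grad():       # update in place without tracking gradients
    m -= lr * m.grad
    b -= lr * b.grad

# Cost after one step is lower than before the step
print(float(((m * x + b) - y) ** 2) < float(C))
```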
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Automated Machine Learning
_**Text Classification Using Deep Learning**_
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Evaluate](#Evaluate)
## Introduction
This notebook demonstrates classification with text data using deep learning in AutoML.
AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data. Depending on the compute cluster the user provides, AutoML tries out Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used, and a bidirectional long short-term memory network (BiLSTM) when a CPU compute is used, thereby optimizing the choice of DNN for the user's setup.
Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.
An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).
Notebook synopsis:
1. Creating an Experiment in an existing Workspace
2. Configuration and remote run of AutoML for a text dataset (20 Newsgroups dataset from scikit-learn) for classification
3. Registering the best model for future use
4. Evaluating the final model on a test set
## Setup
```
import logging
import os
import shutil
import pandas as pd
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.core.run import Run
from azureml.widgets import RunDetails
from azureml.core.model import Model
from helper import run_inference, get_result_df
from azureml.train.automl import AutoMLConfig
from azureml.automl.core.featurization import FeaturizationConfig
from sklearn.model_selection import train_test_split
```
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
```
print("This notebook was created using version 1.34.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
```
As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
```
subscription_id = '<your subscription_id>'
resource_group = '<your resource_group>'
workspace_name = '<your workspace_name>'
ws = Workspace(subscription_id, resource_group, workspace_name)
experiment_name = 'livedoor-news-classification'
experiment = Experiment(ws, experiment_name)
```
## Set up a compute cluster
This section uses a user-provided compute cluster (named "dnntext-cluster" in this example). If a cluster with this name does not exist in the user's workspace, the below code will create a new cluster. You can choose the parameters of the cluster as mentioned in the comments.
Whether you provide/select a CPU or GPU cluster, AutoML will choose the appropriate DNN for that setup - BiLSTM or BERT text featurizer will be included in the candidate featurizers on CPU and GPU respectively. If your goal is to obtain the most accurate model, we recommend you use GPU clusters since BERT featurizers usually outperform BiLSTM featurizers.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your cluster.
amlcompute_cluster_name = "gpucluster24"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_NC24rs_V3",
# CPU for BiLSTM, such as "STANDARD_D2_V2"
# To use BERT (this is recommended for best performance), select a GPU such as "STANDARD_NC24rs_V3"
# or similar GPU option available in your workspace
min_nodes = 0,
max_nodes = 10,
vm_priority='lowpriority') ## vm_priority='lowpriority' | 'dedicated'
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
```
### Get data
For this notebook we will use the 20 Newsgroups data from scikit-learn. We filter the data to contain four classes and take a sample as training data. Please note that more data is needed to improve accuracy; we provide a small-data example here so that you can use this template with your larger datasets.
```
data_dir = "data" # Local directory to store data
blobstore_datadir = data_dir # Blob store directory to store data in
target_column_name = 'label'
df = Dataset.get_by_name(ws, name='livedoor-news')
df = df.to_pandas_dataframe()
df.drop(labels=['url', 'date'], axis=1, inplace=True)
data_train, data_test = train_test_split(df, test_size=0.25, random_state=0)
# Only checking the loaded dataset
print ('** train - columns:')
print (data_train.columns)
'''
print ('** train - dataset:')
print (data_train)
print ('** test - dataset:')
print (data_test)
'''
```
#### Fetch data and upload to datastore for use in training for Remote Compute (AmlCompute)
```
if not os.path.isdir(data_dir):
os.mkdir(data_dir)
train_data_fname = data_dir + '/train_data.csv'
test_data_fname = data_dir + '/test_data.csv'
data_train.to_csv(train_data_fname, index=False)
data_test.to_csv(test_data_fname, index=False)
datastore = ws.get_default_datastore()
datastore.upload(src_dir=data_dir, target_path=blobstore_datadir,
overwrite=True)
train_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/train_data.csv')])
```
### Prepare AutoML run
This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).
See [Configure your experiment settings](https://docs.microsoft.com/ja-jp/azure/machine-learning/how-to-configure-auto-train#configure-your-experiment-settings) for configuration details.
Blocked model names can be found in [azureml.train.automl.constants.SupportedModels.Classification](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification?view=azure-ml-py)
Supported transformer names can be found in [SupportedTransformers](https://docs.microsoft.com/en-us/python/api/azureml-automl-core/azureml.automl.core.constants.supportedtransformers?view=azure-ml-py)
```
featurization_config = FeaturizationConfig(dataset_language='jpn')
#featurization_config.blocked_transformers = ['TfIdf','CountVectorizer']
automl_settings = {
'experiment_timeout_minutes': 60,
'primary_metric' : 'AUC_weighted', #'AUC_weighted', 'accuracy'
'experiment_exit_score': 0.9,
'max_concurrent_iterations': 4,
'max_cores_per_iteration': -1,
'enable_dnn': True,
'enable_early_stopping': True,
'force_text_dnn': True, # enable BERT featurization
'validation_size': 0.15,
'verbosity': logging.INFO,
'featurization': featurization_config,
'enable_voting_ensemble': False, # disables the final voting-ensemble job
'enable_stack_ensemble': False # disables the final stack-ensemble job
# 'iterations': 2 ## mainly for DEBUG: stop the job earlier to understand AutoMLConfig parameter behavior
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_debug.log',
compute_target = compute_target,
training_data = train_dataset,
label_column_name = target_column_name,
**automl_settings
)
```
#### Submit AutoML Run
If you want a more graphical view of the running AutoML job:
1) Disable `show_output`, like this:
```
automl_run = experiment.submit(automl_config)
```
2) run the following code after experiment.submit(automl_config)
```
RunDetails(automl_run).show()
```
```
#automl_run = experiment.submit(automl_config)
automl_run = experiment.submit(automl_config, show_output=True)
#RunDetails(automl_run).show() # shows a more graphical training status
#automl_run.wait_for_completion()
```
Displaying the run objects gives you links to the visual tools in the Azure Portal. Go try them!
### Retrieve the Best Model
Below we select the best model pipeline from our iterations, use it to test on test data on the same compute cluster.
You can test the model locally to get a feel of the input/output. When the model contains BERT, this step will require pytorch and pytorch-transformers installed in your local environment. The exact versions of these packages can be found in the **automl_env.yml** file located in the local copy of your MachineLearningNotebooks folder here:
MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl_env.yml
```
best_run, fitted_model = automl_run.get_output()
```
You can now see what text transformations are used to convert text data to features for this dataset, including deep learning transformations based on BiLSTM or Transformer (BERT is one implementation of a Transformer) models.
```
text_transformations_used = []
for column_group in fitted_model.named_steps['datatransformer'].get_featurization_summary():
text_transformations_used.extend(column_group['Transformations'])
text_transformations_used
```
### Registering the best model
We now register the best fitted model from the AutoML Run for use in future deployments.
Get results stats, extract the best model from AutoML run, download and register the resultant best model
```
summary_df = get_result_df(automl_run)
best_dnn_run_id = summary_df['run_id'].iloc[0]
best_dnn_run = Run(experiment, best_dnn_run_id)
model_dir = 'Model' # Local folder where the model will be stored temporarily
if not os.path.isdir(model_dir):
os.mkdir(model_dir)
best_dnn_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')
```
Register the model in your Azure Machine Learning Workspace. If you previously registered a model, please make sure to delete it so as to replace it with this new model.
```
# Register the model
model_name = 'textDNN-article'
model = Model.register(model_path = model_dir + '/model.pkl',
model_name = model_name,
tags=None,
workspace=ws)
```
## Evaluate on Test Data
We now use the best fitted model from the AutoML Run to make predictions on the test set.
Test set schema should match that of the training set.
```
test_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/test_data.csv')])
# preview the first 3 rows of the dataset
test_dataset.take(3).to_pandas_dataframe()
test_experiment = Experiment(ws, experiment_name + "_test")
script_folder = os.path.join(os.getcwd(), 'inference')
os.makedirs(script_folder, exist_ok=True)
shutil.copy('infer.py', script_folder)
test_run = run_inference(test_experiment, compute_target, script_folder, best_dnn_run,
train_dataset, test_dataset, target_column_name, model_name)
```
Display computed metrics
```
test_run
RunDetails(test_run).show()
test_run.wait_for_completion()
pd.Series(test_run.get_metrics())
```
# How do I _create_, _start_, & _monitor_ a task?
### Overview
We are getting into advanced techniques here and will need to leverage a few other cookbooks. You will need an app and some files in your project; once you have those, starting a task is easy. The beginning of this notebook **replicates** the <a href="tasks_create.ipynb"> tasks_create </a> notebook.
### Prerequisites
1. You need to be a member (or owner) of _at least one_ project.
2. You need your _authentication token_ and the API needs to know about it. See <a href="Setup_API_environment.ipynb">**Setup_API_environment.ipynb**</a> for details.
3. You understand how to <a href="projects_listAll.ipynb" target="_blank">list</a> projects you are a member of (we will just use that call directly and pick one here).
4. You understand how to <a href="apps_listAll.ipynb" target="_blank">list</a> apps within one of your projects (we will just use that call directly here). This is a **great place** to get the **app_id** you will need in this recipe.
5. You have at least one app in your project, maybe from <a href="apps_copyFromPublicApps.ipynb" target="_blank">copying one</a>
6. You may want to review how to <a href="apps_detailOne.ipynb" target="_blank"> get details </a> of your app (we will assume you do, and pass the appropriate inputs).
### WARNING
This will burn through some processing credits (**about \$0.50**). To just see how it works without spending credits, create a _DRAFT_ task instead: in **Build and run tasks**, swap the commenting so that only the `run=False` line executes:
```python
# task created in DRAFT state
task = api.tasks.create(name=task_name, project=my_project.id, app=single_app.id, inputs=inputs, run=False)
# task created and RUN immediately
task = api.tasks.create(name=task_name, project=my_project.id, app=single_app.id, inputs=inputs, run=True)
```
## Imports
We import the _Api_ class from the official sevenbridges-python bindings below.
```
import sevenbridges as sbg
from time import sleep
```
## Initialize the object
The _Api_ object needs to know your **auth\_token** and the correct path. Here we assume you are using the .sbgrc file in your home directory. For other options see <a href="Setup_API_environment.ipynb">Setup_API_environment.ipynb</a>
```
# User input: specify platform {cgc, sbg}
prof = 'cgc'
config_file = sbg.Config(profile=prof)
api = sbg.Api(config=config_file)
```
## Find your project
First, we identify an **interesting project** (by _name_) by searching through all of our projects<sup>1</sup>
```
# [USER INPUT] Set project name:
project_name = 'Michael Diamond' # project to copy app into
a_name = 'SBG FASTA Indices'
# FIND my project
my_project = [p for p in api.projects.query(limit=100).all() \
if p.name == project_name]
if not my_project:
print('Target project (%s) not found, check spelling' % project_name)
raise KeyboardInterrupt
else:
my_project = my_project[0]
```
## Copy an app to use for the task
Next, we find an **interesting app** (by _name_), again by searching through all of the apps _within_ that project<sup>1</sup>. (Note that we are reusing `my_project` from above.)
<sup>1</sup> A _cleaner_ way to do this would be to identify by project and app **id**. Stay tuned for updates.
```
# Apps already in my project
my_apps = api.apps.query(project = my_project.id, limit=100)
# Apps in the Public Reference
my_app_source = [a for a in api.apps.query(visibility='public', limit=100).all() \
if a.name == a_name]
if not my_app_source:
print('App (%s) does not exist in Public Reference Apps' % (a_name))
raise KeyboardInterrupt
else:
my_app_source = my_app_source[0]
# Make sure NOT to copy the same app 2x, this angers the Platform considerably
duplicate_app = [a for a in my_apps.all() if a.name == my_app_source.name]
if duplicate_app:
print('App already exists in second project, using that one')
my_new_app = [a for a in api.apps.query(limit = 100, \
project = my_project.id).all() \
if a.name == a_name][0]
else:
print('App (%s) does not exist in Project (%s); copying now' % \
(a_name, my_project.name))
my_new_app = my_app_source.copy(project = my_project.id, name = a_name)
# re-list apps in target project to verify the copy worked
my_apps = api.apps.query(project = my_project.id, limit=100)
my_app_names = [a.name for a in my_apps.all()]
if a_name in my_app_names:
print('Successfully copied one app!')
else:
print('Something went wrong...')
```
## Copy a public reference file to use for an input
We will first find our _source\_project_ (the Public Reference Files), then list the files within the source project<sup>2</sup>, and copy a file from **_source\_project_ -> _my\_project_**.
<sup>2</sup> Files are only accessible **within** a project - here the Public Reference project (**warning** we may change this project name in the future).
```
# [USER INPUT] Set file and Public Reference Project names here:
source_project_id = 'admin/sbg-public-data'
f_name = 'ucsc.hg19.fasta' # file to copy
# LIST all file names in target project
my_files = [f.name for f in api.files.query(limit = 100, project = my_project.id).all()]
# Find source file
source_file = [f for f in api.files.query(limit = 100, project = source_project_id).all() \
if f.name == f_name]
# Make sure that file exists in Public Reference
if not source_file:
print("File (%s) does not exist in Public Reference, please check spelling" \
% (f_name))
raise KeyboardInterrupt
else:
source_file = source_file[0]
# Check if first file already exists in the target project
if source_file.name in my_files:
print('File already exists in second project, using that one')
my_new_file = [f for f in api.files.query(limit = 100, \
project = my_project.id).all() \
if f.name == source_file.name][0]
else:
print('File (%s) does not exist in Project (%s); copying now' % \
(source_file.name, my_project.id))
my_new_file = source_file.copy(project = my_project.id, \
name = source_file.name)
# re-list files in target project to verify the copy worked
my_files = [f.name for f in api.files.query(limit = 100, project = my_project.id).all()]
if source_file.name in my_files:
print('Successfully copied one file!')
else:
print('Something went wrong...')
```
## Create & start the task
Here we use the reference file and set one of the 11 optional configuration inputs. Note that input files are passed a _file_ (or a _list_ of _files_) while configuration parameters are passed just the values.
```
# Task description
task_name = 'task created with task_create.ipynb'
inputs = {'reference':my_new_file} # 'reference' is a 'File_Inputs'
# any 'Config_inputs' can be specified by adding
# {'id':value} # value can be str, bool, float, etc. depending on the tool
# Create and RUN a task
my_task = api.tasks.create(name=task_name, project=my_project.id, \
app=my_new_app.id, inputs=inputs, run=True)
```
## Print task status
Here we poll the recently created task.
```
details = my_task.get_execution_details()
print('Your task is in %s status' % (details.status))
```
## Wait for task completion
Simple loop to ping for task completion.
```
# [USER INPUT] Set loop time (seconds):
loop_time = 120
flag = {'taskRunning': True}
while flag['taskRunning']:
print('Pinging CGC for task completion.')
details = my_task.get_execution_details()
if details.status == 'COMPLETED':
flag['taskRunning'] = False
print('Task has completed, life is beautiful')
elif details.status == 'FAILED':
print('Task failed, can not continue')
raise KeyboardInterrupt
else:
sleep(loop_time)
```
## Get task outputs
Here we fetch the outputs of the completed task.
```
my_details = api.tasks.get(id = my_task.id)
print(my_details.outputs)
```
## Additional Information
Detailed documentation of this particular REST architectural style request is available [here](http://docs.cancergenomicscloud.org/docs/create-a-new-task) and [here](http://docs.cancergenomicscloud.org/docs/perform-an-action-on-a-specific-task)
```
import pandas as pd
import sqlalchemy
import multiprocessing
import numpy as np
data = pd.read_excel('data/Budget-2018-19_Corrected.xlsx')
data.head()
data.columns = data.iloc[1]
data.drop([0,1], axis=0, inplace=True)
data.head()
data.columns
data['HEAD OF ACCOUNT'].head()
# Scheme Names
schemes = {'CSS': 'Centrally Sponsored Scheme',
'EAP': 'Externally Aided Projects',
'EAP-SS': 'Externally Aided Projects-State Share',
'EE-CS': 'Establishment Expenditure-Central Share',
'EE-SS': 'Establishment Expenditure-State Share',
'EE': 'Establishment Expenditure',
'Plan': 'Plan',
'RIDF-LS': 'Rural Infrastructure Development fund-Loan Share',
'RIDF-SS': 'Rural Infrastructure Development fund-State Share',
'SOPD EE-SSA': 'Establishment Expenditure-Six Schedule Area',
'SOPD-G': 'State Own Priority Scheme-General',
'SOPD-GSP': 'State Own Priority Scheme-GOI Special Scheme',
'SOPD-ODS': 'State Own Priority Scheme-Other Development Scheme',
'SOPD-SCSP': 'State Own Priority Scheme-SCSP',
'SOPD-SCSP SS': 'State Own Priority Scheme-SCSP State Share',
'SOPD-SS': 'State Own Priority Scheme-State Share',
'SOPD-TSP': 'State Own Priority Scheme-TSP',
'TG-AC': 'Transfer Grants to Autonomous Councils',
'TG-DC': 'Transfer Grants to Development Councils',
'TG-EI': 'Transfer Grants to Educational Institutions',
'TG-FFC': 'Transfer Grants to Finance Commission Grants',
'TG-IB': 'Transfer Grants to Individual Beneficiaries',
'TG-PRI': 'Transfer Grants to Panchayat Raj Institutions',
'TG-SFC': 'Transfer Grants to State Finance Commission Grants',
'TG-SSA': 'Transfer Grants to Sixth Schedule Areas',
'TG-UL': 'Transfer Grants to Urban Local Bodies'}
# Sixth Schedule Area
codes_for_areas = {'GA': 'General Area',
'KN': 'Karbi Anglong Non-entrusted',
'NN': 'North Cachar Non-entrusted',
'BN': 'Bodoland Non-entrusted',
'KE': 'Karbi Anglong Entrusted',
'NE': 'North Cachar Entrusted',
'BE': 'Bodoland Entrusted'}
data['HEAD DESCRIPTION'].str.split('$').iloc[10000]
scheme_names = data['HEAD OF ACCOUNT'].apply(lambda x: '-'.join(x.split('-')[7:][:-2])).unique()
scheme_names
set(scheme_names) - set(schemes.keys())
data[data['HEAD OF ACCOUNT'].apply(lambda x: '-'.join(x.split('-')[7:][:-2]) in ['Plan'])]
area_codes = data['HEAD OF ACCOUNT'].apply(lambda x: x.split('-')[-2]).unique()
area_codes
set(area_codes) - set(codes_for_areas.keys())
data['HEAD DESCRIPTION'].apply(lambda x: x.split('-')).iloc[0]
data['HEAD OF ACCOUNT'].apply(lambda x: x.split('-')[:7])
def decipher_head_of_account(row, scheme_map, area_map, hard_check=True):
head_of_account = row['HEAD OF ACCOUNT']
hod_split = head_of_account.split('-')
desc = row['HEAD DESCRIPTION'].split('$')
row['Major Head'] = desc[0]
row['Sub-Major Head'] = desc[1]
row['Minor Head'] = desc[2]
row['Sub-Minor Head'] = desc[3]
row['Detailed Head'] = desc[4]
row['Object Head'] = desc[5]
row['Voucher Head'] = desc[6]
scheme_code = '-'.join(hod_split[7:-2])
area_code = hod_split[-2]
if hard_check:
row['Scheme'] = scheme_map[scheme_code]
row['Area'] = area_map[area_code]
else:
if scheme_code in scheme_map:
row['Scheme'] = scheme_map[scheme_code]
else:
row['Scheme'] = scheme_code
if area_code in area_map:
row['Area'] = area_map[area_code]
else:
row['Area'] = area_code
row['Voted/Charged'] = 'Charged' if hod_split[-1] == 'C' else 'Voted'
return row
# multiprocessing for some speed boost
def _apply_df(args):
df, func, kwargs = args
return df.apply(func, **kwargs)
def apply_by_multiprocessing(df, func, **kwargs):
workers = kwargs.pop('workers')
pool = multiprocessing.Pool(processes=workers)
result = pool.map(_apply_df, [(d, func, kwargs)
for d in np.array_split(df, workers)])
pool.close()
return pd.concat(list(result))
assam_processed_data = apply_by_multiprocessing(data, decipher_head_of_account, args=[schemes, codes_for_areas], axis=1, workers=4)
# %timeit apply_by_multiprocessing(data, decipher_head_of_account, args=[schemes, codes_for_areas], axis=1, workers=4)
# 30.8 s ± 1.1 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
# %timeit data.apply(decipher_head_of_account, axis=1, args=[schemes, codes_for_areas])
# 1min 52s ± 1.28 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
assam_processed_data
assam_processed_data.columns
cols = ['#', 'GRANT NUMBER', 'BUDGET ENTITY', 'HEAD OF ACCOUNT',
'HEAD DESCRIPTION', 'HEAD DESCRIPTION ASSAMESE', 'Major Head',
'Sub-Major Head', 'Minor Head', 'Sub-Minor Head', 'Detailed Head',
'Object Head', 'Voucher Head', 'Scheme', 'Area', 'Voted/Charged',
'ACTUALS 2016-17', 'BUDGET 2017-18', 'REVISED 2017-18', 'BUDGET 2018-19']
assam_processed_data.to_csv('assam_processed_data.csv', columns=cols, index=False)
def save_to_sqlite(o, combined_data):
'''
Save the combined data to sqlite file.
Args:
o (str): output file name.
Return:
True if sqlite file saved else raise error.
'''
engine = sqlalchemy.create_engine('sqlite:///{}'.format(o))
combined_data.to_sql(name='budget_2018_19', if_exists='replace', con=engine, chunksize=10000)
return True
save_to_sqlite('assam_budget.sqlite', assam_processed_data)
```
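To sanity-check what `save_to_sqlite` wrote, the table can be read back through the same engine. A self-contained sketch using a toy frame in place of `assam_processed_data`:

```python
import pandas as pd
import sqlalchemy

# Toy stand-in for the processed budget frame
toy = pd.DataFrame({'Scheme': ['Plan', 'EAP'],
                    'BUDGET 2018-19': [100.0, 250.0]})

engine = sqlalchemy.create_engine('sqlite:///toy_budget.sqlite')
toy.to_sql(name='budget_2018_19', if_exists='replace', con=engine, index=False)

# Round-trip: read the table back and compare with the original frame
roundtrip = pd.read_sql_table('budget_2018_19', con=engine)
print(roundtrip.equals(toy))  # True
```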
## Logistic Regression classifier with L2 Regularization
### Load the libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from matplotlib import style
```
### Logistic Regression Classifier
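The class below calls three helpers — `sigmoid_function`, `logLiklihood`, and `reg_logLiklihood` — that are not shown in this excerpt. A minimal sketch of plausible definitions, assuming an L2-regularized cost of the form $J(w) = C \cdot LL(w) - \frac{1}{2}\lVert w \rVert^2$:

```python
import numpy as np

def sigmoid_function(z):
    """Logistic sigmoid, clipped to avoid overflow in exp."""
    z = np.clip(z, -500, 500)
    return 1.0 / (1.0 + np.exp(-z))

def logLiklihood(z, y):
    """Bernoulli log-likelihood of labels y given logits z."""
    p = np.clip(sigmoid_function(z), 1e-12, 1 - 1e-12)  # avoid log(0)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def reg_logLiklihood(X, weights, y, C):
    """L2-regularized cost: C * log-likelihood - 0.5 * ||w||^2."""
    z = np.dot(X, weights)
    return C * logLiklihood(z, y) - 0.5 * np.dot(weights, weights)

print(round(float(sigmoid_function(0.0)), 2))  # 0.5
```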
```
class LogisticRegression(object):
def __init__(self, learningRate, numIterations = 10, penalty = None, C = 0.01):
self.learningRate = learningRate
self.numIterations = numIterations
self.penalty = penalty
self.C = C
## END
def train(self, X_train, y_train, tol = 10 ** -4):
# Tolerance on the per-epoch weight change below which gradient descent terminates. Defaults to 10 ** -4.
tolerance = tol * np.ones([1, np.shape(X_train)[1] + 1])
self.weights = np.zeros(np.shape(X_train)[1] + 1)
X_train = np.c_[np.ones([np.shape(X_train)[0], 1]), X_train]
self.costs = []
for i in range(self.numIterations):
z = np.dot(X_train, self.weights)
errors = y_train - sigmoid_function(z)
if self.penalty is not None:
# L2 penalty: gradient of C * log-likelihood minus the weight vector
delta_w = self.learningRate * (self.C * np.dot(errors, X_train) - self.weights)
else:
delta_w = self.learningRate * np.dot(errors, X_train)
self.iterationsPerformed = i
if np.any(abs(delta_w) >= tolerance): # keep updating until every weight change falls below the tolerance
self.weights += delta_w
if self.penalty is not None:
self.costs.append(reg_logLiklihood(X_train, self.weights, y_train, self.C))
else:
self.costs.append(logLiklihood(z, y_train))
else:
break
return self
## END
def predict(self, X_test, pi = 0.5):
z = self.weights[0] + np.dot(X_test, self.weights[1:])
probs = np.array([sigmoid_function(i) for i in z])
predictions = np.where(probs >= pi, 1, 0)
return predictions, probs
## END
def performanceEval(self, predictions, y_test):
TP, TN, FP, FN, P, N = 0, 0, 0, 0, 0, 0
for idx, test_sample in enumerate(y_test):
if predictions[idx] == 1 and test_sample == 1:
TP += 1
P += 1
elif predictions[idx] == 0 and test_sample == 0:
TN += 1
N += 1
elif predictions[idx] == 0 and test_sample == 1:
FN += 1
P += 1
elif predictions[idx] == 1 and test_sample == 0:
FP += 1
N += 1
accuracy = (TP + TN) / (P + N)
sensitivity = TP / P
specificity = TN / N
PPV = TP / (TP + FP)
NPV = TN / (TN + FN)
FNR = 1 - sensitivity
FPR = 1 - specificity
performance = {'Accuracy': accuracy,
'Sensitivity': sensitivity,
'Specificity': specificity,
'Precision': PPV,
'NPV': NPV,
'FNR': FNR,
'FPR': FPR}
return performance
## END
def predictionPlot(self, X_test, y_test):
zs = self.weights[0] + np.dot(X_test, self.weights[1:])
probs = np.array([sigmoid_function(i) for i in zs])
plt.figure()
plt.plot(np.arange(-10, 10, 0.1), sigmoid_function(np.arange(-10, 10, 0.1)))
colors = ['r','b']
probs = np.array(probs)
for idx,cl in enumerate(np.unique(y_test)):
plt.scatter(x = zs[np.where(y_test == cl)[0]],
y = probs[np.where(y_test == cl)[0]],
alpha = 0.8,
c = colors[idx],
marker = 'o',
label = cl,
s = 30)
plt.xlabel('z')
plt.ylim([-0.1, 1.1])
plt.axhline(0.0, ls = 'dotted', color = 'k')
plt.axhline(1.0, ls = 'dotted', color = 'k')
plt.axvline(0.0, ls = 'dotted', color = 'k')
plt.ylabel('$\phi (z)$')
plt.legend(loc = 'upper left')
plt.title('Logistic Regression Prediction Curve')
plt.show()
## END
def plotCost(self):
plt.figure()
plt.plot(np.arange(1, self.iterationsPerformed + 1), self.costs, marker = '.')
plt.xlabel('Iterations')
plt.ylabel('Log-Liklihood J(w)')
## END
def plotDecisionRegions(self, X_test, y_test, pi = 0.5, res = 0.01):
x = np.arange(min(X_test[:,0]) - 1, max(X_test[:,0]) + 1, 0.01)
y = np.arange(min(X_test[:,1]) - 1, max(X_test[:,1]) + 1, 0.01)
xx, yy = np.meshgrid(x, y, indexing = 'xy')
data_points = np.transpose([xx.ravel(), yy.ravel()])
preds, probs = self.predict(data_points, pi)
colors = ['r','b']
probs = np.array(probs)
for idx,cl in enumerate(np.unique(y_test)):
plt.scatter(x = X_test[:,0][np.where(y_test == cl)[0]],
y = X_test[:,1][np.where(y_test == cl)[0]],
alpha = 0.8,
c = colors[idx],
marker = 'o',
label = cl,
s = 30)
preds = preds.reshape(xx.shape)
plt.contourf(xx, yy, preds, alpha = 0.3)
plt.legend(loc = 'best')
plt.xlabel('$x_1$', size = 'x-large')
plt.ylabel('$x_2$', size = 'x-large')
## END
## END
## Utility Functions ##
def sigmoid_function(args):
return (1/(1 + np.exp(-args)))
## END
def log_function(n):
return np.log(n)
## END
def logLiklihood(z, y):
"""Log-liklihood function (cost function to be minimized in logistic regression classification)"""
return -1 * np.sum((y * np.log(sigmoid_function(z))) + ((1 - y) * np.log(1 - sigmoid_function(z))))
## END
def reg_logLiklihood(x, weights, y, C):
"""
Regularizd log-liklihood function
(cost function to minimized in logistic regression classification with L2 regularization)
"""
z = np.dot(x, weights)
reg_term = (1 / (2 * C)) * np.dot(weights.T, weights)
return -1 * np.sum((y * np.log(sigmoid_function(z))) + ((1 - y) * np.log(1 - sigmoid_function(z)))) + reg_term
## END
```
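As a quick numerical sanity check of the cost functions above (a toy example introduced here, not part of the original notebook): at z = 0 the sigmoid returns 0.5, so a single positive sample should cost -log(0.5) = log(2).

```python
import numpy as np

def sigmoid_function(args):
    return 1 / (1 + np.exp(-args))

# One positive sample at z = 0: sigmoid(0) = 0.5, so the cost is -log(0.5) = log(2)
toy_z = np.array([0.0])
toy_y = np.array([1.0])
cost = -np.sum(toy_y * np.log(sigmoid_function(toy_z)) + (1 - toy_y) * np.log(1 - sigmoid_function(toy_z)))
```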
### Breast Cancer Wisconsin Dataset Description
```
dataset_description = open("./datasets/breast-cancer-wisconsin.names")
print(dataset_description.read())
```
### Import Breast Cancer Wisconsin dataset
### Change the column and index names for the dataset.
```
df = pd.read_csv("datasets/breast-cancer-wisconsin.data", header = None)
df.rename(columns = {0:"id",
1:"clump-thickness",
2:"cell-size",
3:"cell-shape",
4:"marginal-adhesion",
5:"epithelial-cell-size",
6:"bare-nuclei",
7:"bland-chromatin",
8:"normal-nucleoli",
9:"mitoses",
10:"class"},
inplace = True)
```
### Print the first 10 rows of the dataset
```
df.head(10)
```
### Count the number of values present within the class column of the dataset
```
df['class'].value_counts()
# 2 is for benign cancer
# 4 is for malignant cancer
```
### Create the feature and label vectors
```
label_vector = df.iloc[:, 10] #class labels: 2 = benign, 4 = malignant
feature_vector = df.iloc[:, 1:10] #features vectors
feature_vector
```
### Encode the label values to the binary values (0 and 1)
`fit_transform(X, y=None, **fit_params)`: fit to data, then transform it. Fits the transformer to `X` and `y` with optional parameters `fit_params`, and returns a transformed version of `X`.
Parameters:
- `X`: {array-like, sparse matrix, dataframe} of shape (n_samples, n_features)
- `y`: ndarray of shape (n_samples,), default=None. Target values.
Returns:
- `X_new`: ndarray of shape (n_samples, n_features_new). Transformed array.
```
# positive class = 1 (malignant(4)), negative class = 0 (benign(2))
encoder = LabelEncoder()
label_vector = encoder.fit_transform(label_vector)
print("label_vector has {} values".format(np.size(label_vector)))
label_vector
```
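The encoding step above can be sketched on a toy array (values chosen here to mirror the 2/4 class labels): `LabelEncoder` sorts the unique values and maps them to consecutive integers starting at 0.

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

enc = LabelEncoder()
encoded = enc.fit_transform(np.array([2, 4, 4, 2]))
# classes_ holds the sorted unique labels; they map to 0..n_classes-1
# encoded -> [0, 1, 1, 0]
```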
### Replace missing feature values with mean feature value
```
feature_vector = feature_vector.replace('?', np.nan)
imr = SimpleImputer(missing_values=np.nan, strategy='mean')
imr = imr.fit(feature_vector)
features = imr.transform(feature_vector.values)
# feature_vector.fillna(value=feature_vector.mean())
```
### Split data into training (70%) and testing (30%) sets
```
X_train, X_test, Y_train, Y_test = train_test_split(features, label_vector, test_size = 0.3, random_state = 2020)
print("shape of input training data:",X_train.shape)
print("shape of output training data:",Y_train.shape)
print("shape of input testing data:",X_test.shape)
print("shape of output testing data:",Y_test.shape)
```
### sklearn.preprocessing.StandardScaler()
It removes the mean and scales each feature to unit variance. This operation is performed feature-wise in an independent way. Because it estimates the empirical mean and standard deviation of each feature, it can be influenced by outliers if they exist in the dataset.
- `transform(X[, copy])`: perform standardization by centering and scaling.
- `fit_transform(X[, y])`: fit to data, then transform it.
```
#Z-score normalization
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
```
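A minimal sketch (toy data introduced here) confirming what the scaler guarantees: each standardized column has mean 0 and unit (population) standard deviation.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0]])
scaled = StandardScaler().fit_transform(X)
# Each column now has mean 0 and (population) standard deviation 1
```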
### Principal component analysis (dimensionality reduction)
```
pca = PCA(n_components = 2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
```
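A small sketch of what PCA produces (random toy data, names introduced here): the transform yields the requested number of components, and the principal axes stored in `components_` are orthonormal.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
toy_X = rng.randn(100, 5)
toy_pca = PCA(n_components=2)
toy_X2 = toy_pca.fit_transform(toy_X)
# The two principal axes are orthonormal rows of components_
```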
### Training logistic regression classifier with L2 penalty
```
LR = LogisticRegression(learningRate = 0.01, numIterations = 20, penalty = "L2", C = 0.01)
LR.train(X_train_pca, Y_train, tol = 10 ** -3) # tol=0.001
```
### Testing fitted model on test data with cutoff probability 50%
```
predictions, probs = LR.predict(X_test_pca, 0.5)
performance = LR.performanceEval(predictions, Y_test)
LR.plotDecisionRegions(X_test_pca, Y_test)
LR.predictionPlot(X_test_pca, Y_test)
```
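The 50% cutoff used by `predict` amounts to thresholding the predicted probabilities, as in this sketch with made-up probability values:

```python
import numpy as np

toy_probs = np.array([0.20, 0.50, 0.81])
pi = 0.5
toy_preds = np.where(toy_probs >= pi, 1, 0)
# toy_preds -> [0, 1, 1]: probabilities at or above the cutoff map to the positive class
```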
### Print out performance values
```
for key, value in performance.items():
    print('%s : %.2f' % (key, value))
```
### Training logistic regression classifier with No penalty
```
LR = LogisticRegression(learningRate = 0.01, numIterations = 20, penalty = None, C = 0.01)
LR.train(X_train_pca, Y_train, tol = 10 ** -3) # tol=0.001
predictions, probs = LR.predict(X_test_pca, 0.5)
performance = LR.performanceEval(predictions, Y_test)
LR.plotDecisionRegions(X_test_pca, Y_test)
LR.predictionPlot(X_test_pca, Y_test)
for key, value in performance.items():
    print('%s : %.2f' % (key, value))
```
```
import torch
import random
import torchvision
from torch.utils.data import Dataset, DataLoader
import torch.nn as nn
import torch.nn.functional as F
import argparse
import os
import time
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
num_gpus=4
data=pd.read_csv("./Titanic_data/train.csv")
data
train_data=data.iloc[:,[1,2,4,5,6,7]]
train_data
train_data=train_data.replace("male",0)
train_data=train_data.replace("female",1)
train_data=train_data.dropna(axis=0)
train_data
y_data=train_data.loc[:,"Survived"].values
x_data=train_data.loc[:,"Pclass":].values
mean_x=np.mean(x_data, axis=0)
std_x=np.std(x_data, axis=0)
x_train=(x_data - mean_x) / std_x
class CustomDataset(Dataset):
    def __init__(self, x_dat, y_dat):
        x = x_dat.astype('float32')
        y = y_dat.astype('int')
        self.len = x.shape[0]
        self.x_data = torch.tensor(x)
        self.y_data = torch.tensor(y)
    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]
    def __len__(self):
        return self.len
# Split the standardized features computed above (x_train), not the raw x_data
train_data_x, val_data_x, train_data_label, val_data_label = train_test_split(x_train, y_data,
                                                                              test_size=0.2)
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.fc = nn.Sequential(
            nn.Linear(5, 1000),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1000, 1000),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1000, 1000),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1000, 1000),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1000, 1000),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1000, 1000),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1000, 1000),
            nn.LeakyReLU(0.2, inplace=True),
            # No activation on the output: CrossEntropyLoss expects raw logits
            nn.Linear(1000, 2)
        )
    def forward(self, x):
        # x.size(0) is robust to the per-GPU batch size under DataParallel
        x = x.view(x.size(0), -1)
        out = self.fc(x)
        return out
batch_size=32
train_dataset = CustomDataset(train_data_x,train_data_label)
train_loader = DataLoader(dataset=train_dataset,pin_memory=True,
batch_size=batch_size,
shuffle=True,
num_workers=60,drop_last=True)
val_dataset = CustomDataset(val_data_x,val_data_label)
val_loader = DataLoader(dataset=val_dataset,pin_memory=True,
batch_size=batch_size,
shuffle=True,
num_workers=60,drop_last=True)
model=nn.DataParallel(Model().cuda())
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(),weight_decay=0.005)
trn_loss_list = []
val_loss_list = []
total_epoch=100
model_char="minloss"
model_name=""
patience=5
start_early_stop_check=0
saving_start_epoch=10
val_loss_min = float('inf')  # initialized so the first model-saving comparison is well defined
for epoch in range(total_epoch):
    trn_loss = 0.0
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data
        if torch.cuda.is_available():
            inputs = inputs.cuda()
            labels = labels.cuda()
        # grad init
        optimizer.zero_grad()
        # forward propagation
        output = model(inputs)
        # calculate loss
        loss = criterion(output, labels)
        # back propagation
        loss.backward()
        # weight update
        optimizer.step()
        # trn_loss summary
        trn_loss += loss.item()
        # del (memory issue)
        del loss
        del output
    with torch.no_grad():
        val_loss = 0.0
        cor_match = 0
        for j, val in enumerate(val_loader):
            val_x, val_label = val
            if torch.cuda.is_available():
                val_x = val_x.cuda()
                val_label = val_label.cuda()
            val_output = model(val_x)
            v_loss = criterion(val_output, val_label)
            val_loss += v_loss
            _, predicted = torch.max(val_output, 1)
            cor_match += np.count_nonzero(predicted.cpu().detach() == val_label.cpu().detach())
            del val_output
            del v_loss
            del predicted
    trn_loss_list.append(trn_loss / len(train_loader))
    val_loss_list.append(val_loss / len(val_loader))
    val_acc = cor_match / (len(val_loader) * batch_size)
    now = time.localtime()
    print("%04d/%02d/%02d %02d:%02d:%02d" % (now.tm_year, now.tm_mon, now.tm_mday, now.tm_hour, now.tm_min, now.tm_sec))
    print("epoch: {}/{} | trn loss: {:.4f} | val loss: {:.4f} | val accuracy: {:.4f}% \n".format(
        epoch + 1, total_epoch, trn_loss / len(train_loader), val_loss / len(val_loader), val_acc * 100
    ))
    if epoch + 1 > 2:
        if val_loss_list[-1] > val_loss_list[-2]:
            start_early_stop_check = 1
        else:
            val_loss_min = val_loss_list[-1]
    if start_early_stop_check:
        early_stop_temp = val_loss_list[-patience:]
        if all(early_stop_temp[i] < early_stop_temp[i + 1] for i in range(len(early_stop_temp) - 1)):
            print("Early stop!")
            break
    if epoch + 1 > saving_start_epoch:
        if val_loss_list[-1] < val_loss_min:
            if os.path.isfile(model_name):
                os.remove(model_name)
            val_loss_min = val_loss_list[-1]
            model_name = "Custom_model_" + model_char + "_{:.3f}".format(val_loss_min)
            torch.save(model, model_name)
            print("Model replaced and saved as ", model_name)
```
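The patience-based early-stopping check inside the loop above can be isolated into a small helper (`should_early_stop` is a name introduced here, not part of the original script): training stops once the last `patience` validation losses are strictly increasing.

```python
def should_early_stop(val_losses, patience=5):
    """Stop when the last `patience` validation losses are strictly increasing."""
    if len(val_losses) < patience:
        return False
    tail = val_losses[-patience:]
    return all(tail[i] < tail[i + 1] for i in range(len(tail) - 1))
```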
<a href="https://colab.research.google.com/github/kevincong95/cs231n-emotiw/blob/master/notebooks/audio/1.0-la-audio-error-analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!git clone 'https://github.com/kevincong95/cs231n-emotiw.git'
```
## Retrieve and Preprocess the Raw Data
```
# Switch to TF 1.x and navigate to the directory
%tensorflow_version 1.x
!pwd
import os
os.chdir('cs231n-emotiw')
!pwd
# Install required packages
!pip install -r 'requirements-predictions.txt'
!wget 'https://storage.googleapis.com/cs231n-emotiw/data/Train_labels.txt'
!wget 'https://storage.googleapis.com/cs231n-emotiw/data/Val_labels.txt'
!wget 'https://storage.googleapis.com/cs231n-emotiw/data/train-full.zip'
!wget 'https://storage.googleapis.com/cs231n-emotiw/data/val-full.zip'
!unzip '/content/val-full.zip'
%tensorflow_version 1.x
import os
os.chdir('/content/cs231n-emotiw')
from src.preprocessors.audio_preprocessor import AudioPreprocessor
audio_preprocessor_val = AudioPreprocessor(video_folder='Val/' , output_folder='val-full-4/' , label_path='./Val_labels.txt')
audio_preprocessor_val.preprocess(batch_size=32)
```
## Retrieve Preprocessed Data
```
!cp '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/val-final-audio.zip' '/content/'
!unzip val-final-audio.zip
%tensorflow_version 2.x
!wget 'https://storage.googleapis.com/cs231n-emotiw/data/val-final-audio.zip'
!unzip val-final-audio.zip
import numpy as np
import glob
#X_train = np.load('/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/notebooks/audio-final/audio-pickle-all-X-openl3-train.pkl', allow_pickle=True)
#Y_train = np.load('/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/notebooks/audio-final/audio-pickle-all-Y-openl3-train.pkl' , allow_pickle=True)
X_val = np.load('/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/audio-pickle-all-X-openl3.pkl' , allow_pickle=True)
Y_val = np.load('/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw/audio-pickle-all-Y-openl3.pkl' , allow_pickle=True)
s = np.arange(X_val.shape[0])
np.random.shuffle(s)
X_val = X_val[s]
Y_val = Y_val[s]
video_path = '/content/Val/'
videos = glob.glob(video_path + '/*.mp4')
videos = np.asarray(videos)
videos = videos[s]
```
## Retrieve Model and Predict/Evaluate
```
import tensorflow as tf
!wget 'https://storage.googleapis.com/cs231n-emotiw/models/openl3-cnn-lstm-tuned-lr.h5'
model = tf.keras.models.load_model('openl3-cnn-lstm-tuned-lr.h5')
model.summary()
print(X_val.shape)
```
## Evaluate Model Performance
```
predictions = model.predict(X_val)
model.evaluate(X_val , Y_val)
```
#### F1 Score
```
from sklearn.metrics import f1_score
y_pred = model.predict(X_val)
Y_class = y_pred.argmax(axis=-1)
f1_score(Y_val, Y_class, average='micro')
from sklearn import metrics
import pandas as pd
import seaborn as sn
import matplotlib.pyplot as plt
classes=['Pos' , 'Neu' , 'Neg']
con_mat = tf.math.confusion_matrix(labels=Y_val, predictions=Y_class).numpy()
con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)
con_mat_df = pd.DataFrame(con_mat_norm,
index = classes,
columns = classes)
figure = plt.figure(figsize=(11, 9))
plt.title("Audio Model Confusion Matrix")
sn.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
from sklearn.metrics import classification_report
target_names = ["Positive" , "Neutral" , "Negative"]
print(classification_report(Y_val, Y_class, target_names=target_names))
```
## Visualize some of the Embeddings on a per frame basis
```
import matplotlib.pyplot as plt
import cv2
class_names = ["Positive" , "Neutral" , "Negative"]
def plot_embeddings(i, time_point, predictions_array, true_label, embeddings):
    predictions_array, true_label = predictions_array, true_label[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    embed_sqr = np.concatenate([embeddings[i][time_point]] * embeddings.shape[2])
    embed_sqr = cv2.resize(embed_sqr, (28, 28))
    plt.imshow(embed_sqr, cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array[i])
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    # Report the confidence of this sample's prediction, not the max over the whole array
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100 * np.max(predictions_array[i]),
                                         class_names[true_label]),
               color=color)
def plot_value_array(i, predictions_array, true_label):
    predictions_array, true_label = predictions_array, true_label[i]
    plt.grid(False)
    plt.xticks(range(3))
    plt.yticks([])
    thisplot = plt.bar(range(3), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_embeddings(i , 1 , predictions, Y_val , X_val)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], Y_val)
plt.show()
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_embeddings(i, 1, predictions, Y_val, X_val)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], Y_val)
plt.tight_layout()
plt.show()
```
## Visualize the input spectrograms and how they affect predictions
```
# TO DO
import librosa
import librosa.display
from librosa.feature import melspectrogram
import sys
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_audio
import os
!mkdir '/content/Val/tmp'
files = sorted(glob.glob('/content/cs231n-emotiw/Val/*.mp4'))
def plot_spectrogram(i, video_path, video_name, predictions_array, true_label):
    predictions_array, true_label = predictions_array, true_label[i]
    output_wav_file = video_name[:-3] + 'extracted_audio.wav'
    ffmpeg_extract_audio(video_path + video_name, video_path + "/tmp/" + output_wav_file)
    y, sr = librosa.load(video_path + "/tmp/" + output_wav_file)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128,
                                       fmax=8000)
    S_dB = librosa.power_to_db(S, ref=np.max)
    librosa.display.specshow(S_dB, x_axis='time',
                             y_axis='mel', sr=sr,
                             fmax=8000)
    # plt.colorbar(format='%+2.0f dB')
    # plt.title('Mel-frequency spectrogram')
    plt.tight_layout()
    plt.yticks([])
    plt.xticks([])
    plt.ylabel("")
    predicted_label = np.argmax(predictions_array[i])
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    # Report the confidence of this sample's prediction, not the max over the whole array
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100 * np.max(predictions_array[i]),
                                         class_names[true_label]),
               color=color)
video_name = '/content/Val/100_1.mp4'
video_path = '/content/Val/'
video_name = os.path.basename(video_name)
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_spectrogram(0 , video_path , video_name , predictions, Y_val)
plt.subplot(1,2,2)
plot_value_array(0, predictions[0], Y_val)
plt.show()
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
import glob
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    video_name = os.path.basename(videos[i])
    plot_spectrogram(i, video_path, video_name, predictions, Y_val)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], Y_val)
plt.tight_layout()
plt.show()
```
# Interpret Models
You can use Azure Machine Learning to interpret a model by using an *explainer* that quantifies the amount of influence each feature contributes to the predicted label. There are many common explainers, each suitable for different kinds of modeling algorithms; but the basic approach to using them is the same.
## Before You Start
Before you start this lab, ensure that you have completed the *Create an Azure Machine Learning Workspace* and *Create a Compute Instance* tasks in [Lab 1: Getting Started with Azure Machine Learning](./labdocs/Lab01.md). Then open this notebook in Jupyter on your Compute Instance.
## Install SDK packages
Let's start by ensuring that you have the latest version of the Azure ML SDK installed, including the *explain* optional package. In addition, you'll install the Azure ML Interpretability library. You can use this to interpret many typical kinds of model, even if they haven't been trained in an Azure ML experiment or registered in an Azure ML workspace.
```
!pip install --upgrade azureml-sdk[notebooks,explain]
!pip install --upgrade azureml-interpret
```
## Explain a model
Let's start with a model that is trained outside of Azure Machine Learning - Run the cell below to train a decision tree classification model.
```
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# load the diabetes dataset
print("Loading Data...")
data = pd.read_csv('data/diabetes.csv')
# Separate features and labels
features = ['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']
labels = ['not-diabetic', 'diabetic']
X, y = data[features].values, data['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
print('Model trained.')
```
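The accuracy line above relies on the fact that averaging a boolean match vector gives the fraction of correct predictions. A toy check (values introduced here):

```python
import numpy as np

toy_hat = np.array([1, 0, 1, 1])
toy_true = np.array([1, 0, 0, 1])
toy_acc = np.average(toy_hat == toy_true)  # mean of the boolean matches: 3 of 4 correct
```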
The training process generated some model evaluation metrics based on a hold-back validation dataset, so you have an idea of how accurately it predicts; but how do the features in the data influence the prediction?
### Get an explainer for the model
Let's get a suitable explainer for the model from the Azure ML interpretability library you installed earlier. There are many kinds of explainer. In this example you'll use a *Tabular Explainer*, which is a "black box" explainer that can be used to explain many kinds of model by invoking an appropriate [SHAP](https://github.com/slundberg/shap) model explainer.
```
from interpret.ext.blackbox import TabularExplainer
# "features" and "classes" fields are optional
tab_explainer = TabularExplainer(model,
X_train,
features=features,
classes=labels)
print(tab_explainer, "ready!")
```
### Get *global* feature importance
The first thing to do is try to explain the model by evaluating the overall *feature importance* - in other words, quantifying the extent to which each feature influences the prediction based on the whole training dataset.
```
# you can use the training data or the test data here
global_tab_explanation = tab_explainer.explain_global(X_train)
# Get the top features by importance
global_tab_feature_importance = global_tab_explanation.get_feature_importance_dict()
for feature, importance in global_tab_feature_importance.items():
    print(feature, ":", importance)
```
The feature importance is ranked, with the most important feature listed first.
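That ranking can be reproduced with an ordinary sort; the importance values below are hypothetical, used only to illustrate the descending order that `get_feature_importance_dict` returns:

```python
# Hypothetical importance values, sorted the way the explainer ranks them
importances = {'BMI': 0.32, 'Age': 0.11, 'PlasmaGlucose': 0.45}
ranked = dict(sorted(importances.items(), key=lambda kv: kv[1], reverse=True))
# ranked iterates as: PlasmaGlucose, BMI, Age
```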
### Get *local* feature importance
So you have an overall view, but what about explaining individual observations? Let's generate *local* explanations for individual predictions, quantifying the extent to which each feature influenced the decision to predict each of the possible label values. In this case, it's a binary model, so there are two possible labels (non-diabetic and diabetic); and you can quantify the influence of each feature for each of these label values for individual observations in a dataset. You'll just evaluate the first two cases in the test dataset.
```
# Get the observations we want to explain (the first two)
X_explain = X_test[0:2]
# Get predictions
predictions = model.predict(X_explain)
# Get local explanations
local_tab_explanation = tab_explainer.explain_local(X_explain)
# Get feature names and importance for each possible label
local_tab_features = local_tab_explanation.get_ranked_local_names()
local_tab_importance = local_tab_explanation.get_ranked_local_values()
for l in range(len(local_tab_features)):
    print('Support for', labels[l])
    label = local_tab_features[l]
    for o in range(len(label)):
        print("\tObservation", o + 1)
        feature_list = label[o]
        total_support = 0
        for f in range(len(feature_list)):
            print("\t\t", feature_list[f], ':', local_tab_importance[l][o][f])
            total_support += local_tab_importance[l][o][f]
        print("\t\t ----------\n\t\t Total:", total_support, "Prediction:", labels[predictions[o]])
```
## Adding explainability to a model training experiment
As you've seen, you can generate explanations for models trained outside of Azure Machine Learning; but when you use experiments to train and register models in your Azure Machine Learning workspace, you can generate model explanations and log them.
Run the code in the following cell to connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
```
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
```
### Train and explain a model using an experiment
OK, let's create an experiment and put the files it needs in a local folder - in this case we'll just use the same CSV file of diabetes data to train the model.
```
import os, shutil
from azureml.core import Experiment
# Create a folder for the experiment files
experiment_folder = 'diabetes_train_and_explain'
os.makedirs(experiment_folder, exist_ok=True)
# Copy the data file into the experiment folder
shutil.copy('data/diabetes.csv', os.path.join(experiment_folder, "diabetes.csv"))
```
Now we'll create a training script that looks similar to any other Azure ML training script except that it includes the following features:
- The same libraries to generate model explanations we used before are imported and used to generate a global explanation
- The **ExplanationClient** library is used to upload the explanation to the experiment output
```
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Import Azure ML run library
from azureml.core.run import Run
# Import libraries for model explanation
from azureml.interpret import ExplanationClient
from interpret.ext.blackbox import TabularExplainer
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
print("Loading Data...")
data = pd.read_csv('diabetes.csv')
features = ['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']
labels = ['not-diabetic', 'diabetic']
# Separate features and labels
X, y = data[features].values, data['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
run.log('Accuracy', float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
run.log('AUC', float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes.pkl')
# Get explanation
explainer = TabularExplainer(model, X_train, features=features, classes=labels)
explanation = explainer.explain_global(X_test)
# Get an Explanation Client and upload the explanation
explain_client = ExplanationClient.from_run(run)
explain_client.upload_model_explanation(explanation, comment='Tabular Explanation')
# Complete the run
run.complete()
```
Now you can run the experiment. Note that the **azureml-interpret** library is included in the training environment so the script can create a **TabularExplainer** and use the **ExplanationClient** class.
```
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Create a Python environment for the experiment
explain_env = Environment("explain-env")
# Create a set of package dependencies (including the azureml-interpret package)
packages = CondaDependencies.create(conda_packages=['scikit-learn','pandas','pip'],
pip_packages=['azureml-defaults','azureml-interpret'])
explain_env.python.conda_dependencies = packages
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
environment=explain_env)
# submit the experiment
experiment_name = 'diabetes_train_and_explain'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
```
## Retrieve the feature importance values
With the experiment run completed, you can use the **ExplanationClient** class to retrieve the feature importance from the explanation registered for the run.
```
from azureml.interpret import ExplanationClient
# Get the feature explanations
client = ExplanationClient.from_run(run)
engineered_explanations = client.download_model_explanation()
feature_importances = engineered_explanations.get_feature_importance_dict()
# Overall feature importance
print('Feature\tImportance')
for key, value in feature_importances.items():
    print(key, '\t', value)
```
## View the model explanation in Azure Machine Learning studio
You can also click the **View run details** link in the Run Details widget to see the run in Azure Machine Learning studio, and view the **Explanations** tab. Then:
1. Select the **Tabular Explanation** explainer.
2. View the **Global Importance** chart, which shows the overall global feature importance.
3. View the **Summary Importance** chart, which shows each data point from the test data in a *swarm*, *violin*, or *box* plot.
4. Select an individual point to see the **Local Feature Importance** for the individual prediction for the selected data point.
**More Information**: For more information about using explainers in Azure ML, see [the documentation](https://docs.microsoft.com/azure/machine-learning/how-to-machine-learning-interpretability).
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import xgboost as xgb
import math
from sklearn.model_selection import train_test_split
from datetime import datetime
import matplotlib.pyplot as plt
%matplotlib inline
df_train = pd.read_csv("../data/train.csv", parse_dates=['timestamp'])
df_test = pd.read_csv("../data/test.csv", parse_dates=['timestamp'])
df_macro = pd.read_csv("../data/macro.csv", parse_dates=['timestamp'])
# df_train.head()
print(df_train.shape)
print(df_test.shape)
print(df_macro.shape)
df_train['year_month'] = (df_train.timestamp.dt.month + df_train.timestamp.dt.year * 100)
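# Sanity check (toy dates introduced here, not from the dataset): month + year*100
# packs a timestamp into a sortable YYYYMM integer, e.g. 2014-07 -> 201407
import pandas as pd
_ym_check = pd.to_datetime(pd.Series(['2014-07-15', '2015-01-02']))
assert list(_ym_check.dt.month + _ym_check.dt.year * 100) == [201407, 201501]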
grouped_df = df_train.groupby('year_month')['price_doc'].aggregate(np.mean).reset_index()
grouped_df.columns = ['year_month','price_doc_mean']
grouped_df2 = df_train.groupby('year_month')['price_doc'].aggregate(np.std).reset_index()
grouped_df2.columns = ['year_month','price_doc_std']
grouped_df_train = grouped_df.merge(grouped_df2,on='year_month')
print(df_train['year_month'].min(),df_train['year_month'].max())
df_train = df_train.merge(grouped_df_train, on='year_month')
df_train['price_doc_zScore'] = (df_train['price_doc'] - df_train['price_doc_mean'])/df_train['price_doc_std']
df_train.boxplot('price_doc_zScore','sub_area',figsize=(20,10))
grouped_df = df_train.groupby('sub_area')['price_doc_zScore'].aggregate(np.mean).reset_index()
df_train.drop(['price_doc_zScore'], axis=1, inplace=True)
grouped_df.head()
df_test['year_month'] = (df_test.timestamp.dt.month + df_test.timestamp.dt.year * 100)
print(df_test['year_month'].min(),df_test['year_month'].max())
Y_train = df_train['price_doc'].values
Y_train = np.log1p(Y_train)
id_test = df_test['id']
df_train.drop(['id', 'price_doc'], axis=1, inplace=True)
df_test.drop(['id'], axis=1, inplace=True)
# Build df_all = (df_train+df_test).join(df_macro)
num_train = len(df_train)
df_all = pd.concat([df_train, df_test])
# df_macro must be indexed by timestamp so join() aligns on dates, not on the default integer index
df_all = df_all.join(df_macro.set_index('timestamp'), on='timestamp', rsuffix='_macro')
df_all['timestamp_macro'] = df_all['timestamp']  # kept so the later drop of this column still succeeds
# df_all = df_all.merge(grouped_df, on='sub_area')
print(df_all.shape)
# Add month-year
month_year = (df_all.timestamp.dt.month + df_all.timestamp.dt.year * 100)
month_year_cnt_map = month_year.value_counts().to_dict()
df_all['month_year_cnt'] = month_year.map(month_year_cnt_map)
# Add week-year count
# dt.weekofyear is deprecated in recent pandas; isocalendar().week is the replacement
week_year = (df_all.timestamp.dt.isocalendar().week + df_all.timestamp.dt.year * 100)
week_year_cnt_map = week_year.value_counts().to_dict()
df_all['week_year_cnt'] = week_year.map(week_year_cnt_map)
# Add month and day-of-week
# df_all['year'] = df_all.timestamp.dt.year
df_all['month'] = df_all.timestamp.dt.month
df_all['dow'] = df_all.timestamp.dt.dayofweek
# df_all['year_month_dow'] = df_all['year_month']*10 + df_all['dow']
# Other feature engineering
df_all['rel_floor'] = df_all['floor'] / df_all['max_floor'].astype(float)
df_all['rel_kitch_sq'] = df_all['kitch_sq'] / df_all['full_sq'].astype(float)
# Remove timestamp column (may overfit the model in train)
df_all.drop(['timestamp', 'timestamp_macro'], axis=1, inplace=True)
df_all.drop(['price_doc_mean', 'price_doc_std'], axis=1, inplace=True)
dtype_df = df_all.dtypes.reset_index()
dtype_df.columns = ["Count", "Column Type"]
dtype_df.groupby("Column Type").aggregate('count').reset_index()
# Deal with categorical values
df_numeric = df_all.select_dtypes(exclude=['object'])
df_obj = df_all.select_dtypes(include=['object']).copy()
for c in df_obj:
df_obj[c] = pd.factorize(df_obj[c])[0]
df_values = pd.concat([df_numeric, df_obj], axis=1)
# df_values=df_values.replace([np.inf, -np.inf], np.nan)
# df_values=df_values.fillna(0)
# Convert to numpy values
X_all = df_values.values
print(X_all.shape)
X_train = X_all[:num_train]
X_test = X_all[num_train:]
df_columns = df_values.columns
print('X_train shape:', X_train.shape)
print('Y_train shape',Y_train.shape)
print(len(X_train), 'train samples')
print(len(X_test), 'test samples')
test_train_ratio = float(len(X_test))/(len(X_train)+len(X_test))
test_train_ratio
# def rmsle(preds, dtrain):
# labels = dtrain.get_label()
# assert len(preds) == len(labels)
# labels = labels.tolist()
# preds = preds.tolist()
# terms_to_sum = [(math.log(labels[i] + 1) - math.log(max(0, preds[i]) + 1)) ** 2.0 for i, pred in enumerate(labels)]
# return 'rmsle', (sum(terms_to_sum) * (1.0 / len(preds))) ** 0.5
# We take all float/int columns except for ID, timestamp, and the target value
# train_columns = list(
# set(df_train.select_dtypes(include=['float64', 'int64']).columns) - set(['id', 'timestamp', 'price_doc']))
# y_train = df_train['price_doc'].values
# x_train = df_train[train_columns].values
# x_test = df_test[train_columns].values
# Train/Valid split
# x_train, y_train, x_valid, y_valid = x_train[:split], y_train[:split], x_train[split:], y_train[split:]
x_train, x_valid, y_train, y_valid = train_test_split(X_train, Y_train, test_size=test_train_ratio, random_state=0)
d_train = xgb.DMatrix(x_train, label=y_train, feature_names=df_columns)
d_valid = xgb.DMatrix(x_valid, label=y_valid, feature_names=df_columns)
dtest = xgb.DMatrix(X_test, feature_names=df_columns)
modelName = 'xgb_v2'
xgb_params = {
'eta': 0.05,
'max_depth': 5,
'subsample': 0.7,
'colsample_bytree': 0.7,
    'objective': 'reg:squarederror',  # renamed from 'reg:linear' in xgboost 1.0
'eval_metric': 'rmse',
# 'feval': rmsle,
'silent': 1
}
# params = {}
# params['objective'] = 'reg:linear'
# params['eta'] = 0.02
# params['silent'] = 1
watchlist = [(d_train, 'train'), (d_valid, 'valid')]
num_boost_round = 383
clf = xgb.train(dict(xgb_params, silent=0), d_train, num_boost_round, watchlist, early_stopping_rounds=100)
# model = xgb.train(, dtrain, num_boost_round=num_boost_round)
num_boost_round = 262
clf = xgb.train(dict(xgb_params, silent=0), d_train, num_boost_round)
# dtrain = xgb.DMatrix(x_train, y_train, feature_names=df_columns)
# dtest = xgb.DMatrix(x_valid, feature_names=df_columns)
# params = {}
# params['objective'] = 'reg:linear'
# params['eta'] = 0.02
# params['silent'] = 1
# watchlist = [(d_train, 'train'), (d_valid, 'valid')]
# clf = xgb.train(params, d_train, 800, watchlist, feval=rmsle, early_stopping_rounds=100)
y_pred = clf.predict(dtest)
y_pred = np.expm1(y_pred)
date = datetime.now().strftime("%Y%m%d_%H%M")
df_sub = pd.DataFrame({'id': id_test, 'price_doc': y_pred})
fout = '../submission/'+date+"_"+modelName+'.csv'
df_sub.to_csv(fout, index=False)
fig, ax = plt.subplots(1, 1, figsize=(8, 16))
xgb.plot_importance(clf, max_num_features=50, height=0.5, ax=ax)
fig.savefig('../submission/'+date+"_"+modelName+".png")
```
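The competition metric implied by the `np.log1p`/`np.expm1` pair above is RMSLE; the commented-out `rmsle` helper can be lifted into a standalone function. A minimal pure-Python sketch, independent of xgboost's custom-metric signature:

```python
import math

def rmsle(preds, labels):
    """Root Mean Squared Logarithmic Error; negative predictions are clipped to 0."""
    assert len(preds) == len(labels)
    terms = [(math.log(max(0.0, p) + 1.0) - math.log(y + 1.0)) ** 2.0
             for p, y in zip(preds, labels)]
    return math.sqrt(sum(terms) / len(terms))
```

Because the model is trained on `log1p(price_doc)`, minimizing plain RMSE on the transformed target is equivalent to minimizing RMSLE on the raw prices, which is why `eval_metric: 'rmse'` suffices here.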
***
```
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
%matplotlib inline
```
### Device configuration
```
# device configuration
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
```
### Hyper parameters
```
# hyper parameters
num_epochs = 5
num_classes = 10
batch_size = 128
learning_rate = 0.001
```
### MNIST dataset and loader
```
# MNIST dataset and loader
train_dataset = torchvision.datasets.MNIST(root='./mnist', download=True, train=True, transform=torchvision.transforms.ToTensor())
test_dataset = torchvision.datasets.MNIST(root='./mnist', download=True, train=False, transform=torchvision.transforms.ToTensor())
train_dataloader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_dataloader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
```
### Convnet definition
```
# cnn definition
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, padding=2),
nn.BatchNorm2d(num_features=16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(in_features=32 * 7 * 7, out_features=num_classes)
#self.softmax = nn.Softmax(dim=1)
def forward(self, x):
x = self.layer1(x)
x = self.layer2(x)
x = x.reshape(x.size(0), -1)
x = self.fc(x)
        # x = self.softmax(x)  -- not needed: nn.CrossEntropyLoss applies log-softmax internally
return x
model = ConvNet(num_classes).to(device)
```
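The `in_features=32 * 7 * 7` above is not arbitrary: each `Conv2d(kernel_size=5, padding=2, stride=1)` preserves the spatial size, and each `MaxPool2d(2, 2)` halves it, so 28 → 14 → 7. A quick sketch of the arithmetic:

```python
# Spatial sizes through the network: conv preserves height/width here,
# and each 2x2 max-pool halves them.
def conv_pool_out(size, kernel=5, padding=2, stride=1, pool=2):
    conv_out = (size + 2 * padding - kernel) // stride + 1  # standard conv output-size formula
    return conv_out // pool                                 # after 2x2 max-pooling

size = 28                  # MNIST images are 28x28
for _ in range(2):         # layer1, then layer2
    size = conv_pool_out(size)
print(size * size * 32)    # matches in_features of the fully connected layer
```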
### Loss and optimizer
```
# loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params=model.parameters(), lr=learning_rate)
```
### Train the model
```
# train the model
losses = []
for epoch in range(num_epochs):
print('epoch: ', epoch)
for i, (images, labels) in enumerate(train_dataloader):
images = images.to(device)
labels = labels.to(device)
# forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# backward pass
model.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0:
print(' step: {}, loss: {}'.format(i, loss.item()))
            losses.append(loss.item())  # store a plain float; appending the tensor keeps autograd graphs alive
```
### Train loss curve
```
plt.figure(figsize=(20, 10))
plt.plot(losses)
plt.title('training loss')
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.grid(True)
```
### Test the model
```
# test the model
model.eval()
with torch.no_grad():
total = 0
correct = 0
for images, labels in test_dataloader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, prediction = torch.max(outputs.data, 1)
        total += labels.size(0)  # use the actual batch size; the last batch may be smaller
correct += (prediction == labels).sum().item()
print('test accuracy: {}'.format(correct / total))
```
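The accuracy bookkeeping in the evaluation loop can be sketched framework-free; counting the actual number of labels per batch keeps the tally correct even when the final batch is smaller than `batch_size`:

```python
# Framework-free accuracy tally over (predictions, labels) batches.
def accuracy(batches):
    total = correct = 0
    for predictions, labels in batches:
        total += len(labels)  # actual number of samples in this batch
        correct += sum(p == l for p, l in zip(predictions, labels))
    return correct / total
```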
***
(pandas_intro)=
# Introduction
```{index} Pandas: basics
```
[Pandas](https://pandas.pydata.org/docs/) is an open source library for Python that can be used for data manipulation and analysis. If your data can be put into a spreadsheet, Pandas is exactly what you need!
Pandas is a very powerful tool with highly optimised performance. The full documentation can be found [here](https://pandas.pydata.org/docs/index.html).
To start working with pandas you can simply import it. A standard alias used for pandas is "pd":
```
import pandas as pd
```
(pandas_objects)=
## Pandas objects
**DataFrame** is a 2D data structure with columns and rows, much like a table or Excel spreadsheet. To manually create a DataFrame, we can create a dictionary of lists. The keys are used as a table header and values in each list as rows:
```
df = pd.DataFrame({
"Rock_type": ["granite",
"andesite",
"limestone"],
"Density": [2.6, 2.8, 2.3],
"Main_mineral": ["quartz",
"feldspar",
"calcite"]})
# Use df.head() to display
# top rows
df.head()
```
To extract data from a specific column, we can call the column name in two ways:
```
# First method
# Similar to calling dictionary keys
# Works for all column names
print(df["Rock_type"])
# Second method
# This will only work if the column name
# has no spaces and is not named like
# any pandas attribute, e.g. T will mean
# transpose and it won't extract a column
print(df.Rock_type)
# A single column is itself a pandas Series
pd.Series(["granite", "andesite", "limestone"])
```
(reading_files)=
## Reading files
``` {index} Pandas: reading and writing files
```
Most of the time we will want to load data from files in different formats, rather than manually creating a DataFrame. Pandas has very simple syntax for reading files:
pd.read_*
where * can be [csv](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html), [excel](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html), [html](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_html.html), [sql](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_sql.html) and so on. For .txt file extensions we can use
pd.read_csv(file_name, delimiter=" ") or pd.read_fwf(file_name)
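For intuition, the `delimiter` argument just tells the parser how fields are separated. A stdlib-only illustration (not pandas itself) with a small in-memory table, whose column names are made up for this example:

```python
import csv
import io

# A small space-delimited "file" held in memory for illustration.
text = "mag lat lon\n7.8 -41.7 174.1\n7.1 -37.5 179.4\n"
rows = list(csv.reader(io.StringIO(text), delimiter=" "))
header, data = rows[0], rows[1:]  # first row is the header, the rest are records
print(header)
print(data)
```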
In this example we will look at New Zealand earthquake data in .csv format from [IRIS](https://ds.iris.edu/ieb/). With .head() we can specify how many rows to display; in this case we want the first four rows:
```
nz_eqs = pd.read_csv("../../geosciences/data/nz_largest_eq_since_1970.csv")
nz_eqs.head(4)
```
We can check DataFrame shape by using:
```
nz_eqs.shape
```
We have 25,000 rows and 11 columns, that's a lot of data!
(writing_files)=
## Writing files
Let's say we want to export this data as an Excel spreadsheet, keeping only the magnitude, latitude, longitude and depth columns:
```
nz_eqs.to_excel("../../geosciences/data/nz_eqs.xlsx",
sheet_name="Earthquakes",
columns=["mag", "lat", "lon",
"depth_km"])
```
The ExcelWriter object allows us to export more than one sheet into the same file:
```
with pd.ExcelWriter("../../geosciences/data/nz_eqs.xlsx") as writer:
nz_eqs.to_excel(writer, sheet_name="Earthquakes",
columns=["mag", "lat", "lon",
"depth_km"])
nz_eqs.to_excel(writer, sheet_name="Extra info",
columns=["region", "iris_id",
"timestamp"])
```
# References
The notebook was compiled based on these tutorials:
* [Pandas official Getting Started tutorials](https://pandas.pydata.org/docs/getting_started/index.html#getting-started)
* [Kaggle tutorial](https://www.kaggle.com/learn/pandas)
***
### train
```
import multiprocessing
import threading
import tensorflow as tf
from agent.access import Access
from agent.main import Agent
NUMS_CPU = multiprocessing.cpu_count()
state_size = 58
batch_size = 50
action_size = 3
max_episodes = 1
GD = {}
class Worker(Agent):
def __init__(self, name, access, batch_size, state_size, action_size):
super().__init__(name, access, batch_size, state_size, action_size)
def run(self, sess, max_episodes, t_max=8):
episode_score_list = []
episode = 0
while episode < max_episodes:
episode += 1
            episode_score, _ = self.run_episode(sess, t_max)
            episode_score_list.append(episode_score)
            GD[str(self.name)] = episode_score_list
            if self.name == 'W0':
                print('Episode: %d, score: %f' % (episode, episode_score))
print('\n')
# config = tf.ConfigProto()
# config.gpu_options.allow_growth = True
# with tf.Session(config=config) as sess:
with tf.Session() as sess:
with tf.device("/cpu:0"):
A = Access(batch_size, state_size, action_size)
F_list = []
for i in range(NUMS_CPU):
F_list.append(Worker('W%i' % i, A, batch_size, state_size, action_size))
COORD = tf.train.Coordinator()
sess.run(tf.global_variables_initializer())
sess.graph.finalize()
threads_list = []
for ac in F_list:
            job = lambda ac=ac: ac.run(sess, max_episodes)  # bind ac now; a plain lambda would capture the loop variable late
t = threading.Thread(target=job)
t.start()
threads_list.append(t)
COORD.join(threads_list)
A.save(sess, 'model/saver_1.ckpt')
```
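The worker/coordinator pattern above (one worker per CPU, each writing its episode scores into the shared `GD` dict) can be sketched with stdlib threading alone; the names and fake scores below are illustrative, not part of the original agent code:

```python
import threading

scores = {}  # shared result dict, playing the role of GD above

def worker(name, episodes):
    # each worker records its own list of (fake) episode scores under its own key
    scores[name] = [episode * 10 for episode in range(1, episodes + 1)]

threads = []
for i in range(4):
    t = threading.Thread(target=worker, args=('W%i' % i, 3))
    t.start()
    threads.append(t)
for t in threads:  # stand-in for COORD.join(threads_list)
    t.join()
```

Because each worker writes to a distinct key, no explicit lock is needed for this particular access pattern.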
### test
```
tf.reset_default_graph()
import tensorflow as tf
from agent.access import Access
from agent.framework import Framework
from emulator.main import Account
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
state_size = 58
batch_size = 50
action_size = 3
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:
with tf.device("/cpu:0"):
A = Access(batch_size, state_size, action_size)
W = Framework('W0', A, batch_size, state_size, action_size)
A.restore(sess,'model/saver_1.ckpt')
W.init_or_update_local(sess)
env = Account()
state = env.reset()
for _ in range(200):
action = W.get_deterministic_policy_action(sess, state)
state, reward, done = env.step(action)
value, reward = env.plot_data()
pd.Series(value).plot(figsize=(16,6))
pd.Series(reward).plot(figsize=(16,6))
pd.Series(np.zeros_like(reward)).plot(figsize=(16,6), color='r')
```
***
# String Formatting
String formatting lets you inject items into a string rather than trying to chain items together using commas or string concatenation. As a quick comparison, consider:
player = 'Thomas'
points = 33
'Last night, '+player+' scored '+str(points)+' points.' # concatenation
f'Last night, {player} scored {points} points.' # string formatting
There are three ways to perform string formatting.
* The oldest method involves placeholders using the modulo `%` character.
* An improved technique uses the `.format()` string method.
* The newest method, introduced with Python 3.6, uses formatted string literals, called *f-strings*.
Since you will likely encounter all three versions in someone else's code, we describe each of them here.
## Formatting with placeholders
You can use <code>%s</code> to inject strings into your print statements. The modulo `%` is referred to as a "string formatting operator".
```
a = "something"
print("I'm going to inject %s here." %"some")
```
You can pass multiple items by placing them inside a tuple after the `%` operator.
```
print("I'm going to inject %s text here, and %s text here." %('some','more'))
```
You can also pass variable names:
```
x, y = 'some', 'more'
print("I'm going to inject %s text here, and %s text here."%(x,y))
```
### Format conversion methods.
It should be noted that two placeholders, <code>%s</code> and <code>%r</code>, convert any python object to a string using two separate functions: `str()` and `repr()`. We will learn more about these functions later on in the course, but you should note that `%r` and `repr()` deliver the *string representation* of the object, including quotation marks and any escape characters.
```
print('He said his name was %s.' %'Fred')
print('He said his name was %r.' %'Fred')
```
As another example, `\t` inserts a tab into a string.
```
print('I once caught a fish %s.' %'this \tbig')
print('I once caught a fish %r.' %'this \tbig')
```
The `%s` operator converts whatever it sees into a string, including integers and floats. The `%d` operator converts numbers to integers first, without rounding. Note the difference below:
```
print('I wrote %s programs today.' %3.75)
print('I wrote %d programs today.' %3.75)
```
### Padding and Precision of Floating Point Numbers
Floating point numbers use the format <code>%5.2f</code>. Here, <code>5</code> would be the minimum number of characters the string should contain; these may be padded with whitespace if the entire number does not have this many digits. Next to this, <code>.2f</code> stands for how many numbers to show past the decimal point. Let's see some examples:
```
print('Floating point numbers: %5.2f' %(13.144))
print('Floating point numbers: %1.0f' %(13.144))
print('Floating point numbers: %6.5f' %(13.144))
print('Floating point numbers: %10.2f' %(13.144))
print('Floating point numbers: %25.2f' %(13.144))
```
For more information on string formatting with placeholders visit https://docs.python.org/3/library/stdtypes.html#old-string-formatting
### Multiple Formatting
Nothing prohibits using more than one conversion tool in the same print statement:
```
print('First: %s, Second: %5.2f, Third: %r' %('hi!', 3.1415, 'bye!'))
```
## Formatting with the `.format()` method
A better way to format objects into your strings for print statements is with the string `.format()` method. The syntax is:
'String here {} then also {}'.format('something1','something2')
For example:
```
print('This is a string with an {}'.format('insert'))
```
### The .format() method has several advantages over the %s placeholder method:
#### 1. Inserted objects can be called by index position:
```
print('The {2} {1} {0}'.format(50,'brown','quick'))
```
#### 2. Inserted objects can be assigned keywords:
```
print('First Object: {a}, Second Object: {b}, Third Object: {c}'.format(a=1,b='Two',c=12.3))
```
#### 3. Inserted objects can be reused, avoiding duplication:
```
print('A %s saved is a %s earned.' %('penny','penny'))
# vs.
print('A {p} saved is a {p} earned.'.format(p='penny'))
```
### Alignment, padding and precision with `.format()`
Within the curly braces you can assign field lengths, left/right alignments, rounding parameters and more:
```
print('{0:8} | {1:9}'.format('Fruit', 'Quantity'))
print('{0:8} | {1:9}'.format('Apples', 3.))
print('{0:8} | {1:9}'.format('Oranges', 10))
```
By default, `.format()` aligns text to the left, numbers to the right. You can pass an optional `<`,`^`, or `>` to set a left, center or right alignment:
```
print('{0:<8} | {1:^8} | {2:>8}'.format('Left','Center','Right'))
print('{0:<8} | {1:^8} | {2:>8}'.format(11,22,33))
```
You can precede the alignment operator with a padding character:
```
print('{0:=<8} | {1:-^8} | {2:.>8}'.format('Left','Center','Right'))
print('{0:=<8} | {1:-^8} | {2:.>8}'.format(11,22,33))
```
Field widths and float precision are handled in a way similar to placeholders. The following two print statements are equivalent:
```
print('This is my ten-character, two-decimal number:%10.2f' %13.579)
print('This is my ten-character, two-decimal number:{0:10.2f}'.format(13.579))
```
Note that there are 5 spaces following the colon, and 5 characters taken up by 13.58, for a total of ten characters.
For more information on the string `.format()` method visit https://docs.python.org/3/library/string.html#formatstrings
## Formatted String Literals (f-strings)
Introduced in Python 3.6, f-strings offer several benefits over the older `.format()` string method described above. For one, you can bring outside variables directly into the string rather than passing them as arguments through `.format(var)`.
```
name = 'Fred'
print(f"He said his name is {name}.")
```
Pass `!r` to get the string representation:
```
print(f"He said his name is {name!r}")
```
#### Float formatting follows `"result: {value:{width}.{precision}}"`
Where with the `.format()` method you might see `{value:10.4f}`, with f-strings this can become `{value:{10}.{6}}`
```
num = 23.45678
print("My 10 character, four decimal number is:{0:10.4f}".format(num))
print(f"My 10 character, four decimal number is:{num:{10}.{6}}")
```
Note that with f-strings, *precision* refers to the total number of digits, not just those following the decimal. This fits more closely with scientific notation and statistical analysis. Unfortunately, f-strings do not pad to the right of the decimal, even if precision allows it:
```
num = 23.45
print("My 10 character, four decimal number is:{0:10.4f}".format(num))
print(f"My 10 character, four decimal number is:{num:{10}.{6}}")
```
If this becomes important, you can always use `.format()` method syntax inside an f-string:
```
num = 23.45
print("My 10 character, four decimal number is:{0:10.4f}".format(num))
print(f"My 10 character, four decimal number is:{num:10.4f}")
```
For more info on formatted string literals visit https://docs.python.org/3/reference/lexical_analysis.html#f-strings
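To tie the three techniques together, the three styles below all build the identical string:

```python
player = 'Thomas'
points = 33

s1 = 'Last night, %s scored %d points.' % (player, points)      # placeholder style
s2 = 'Last night, {} scored {} points.'.format(player, points)  # .format() style
s3 = f'Last night, {player} scored {points} points.'            # f-string style
print(s3)
```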
That is the basics of string formatting!
***
# The Exponential Distribution and the Poisson Process
## Introduction
One simplifying assumption that is often made is that certain $r.v.\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\plim}{plim}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\asim}{\overset{\text{a}}{\sim}}
\newcommand{\space}{\text{ }}
\newcommand{\bspace}{\;\;\;\;}
\newcommand{\QQQ}{\boxed{?\:}}
\newcommand{\void}{\left.\right.}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\CB}[1]{\left\{ #1 \right\}}
\newcommand{\SB}[1]{\left[ #1 \right]}
\newcommand{\P}[1]{\left( #1 \right)}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\norm}[1]{\left\| #1 \right\|}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\Exp}{\mathrm{E}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\EE}{\mathbb{E}}
\newcommand{\II}{\mathbb{I}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\ZZ}{\mathbb{Z}}
\newcommand{\QQ}{\mathbb{Q}}
\newcommand{\PP}{\mathbb{P}}
\newcommand{\AcA}{\mathcal{A}}
\newcommand{\FcF}{\mathcal{F}}
\newcommand{\AsA}{\mathscr{A}}
\newcommand{\FsF}{\mathscr{F}}
\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]}
\newcommand{\Avar}[2][\,\!]{\mathrm{Avar}_{#1}\left[#2\right]}
\newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)}
\newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathcal{N} \left( #1 \right)}
\newcommand{\ow}{\text{otherwise}}$ are exponentially distributed.
- easy to work with
- good approximation to the actual distribution
## The Exponential Distribution
### Definition
A continuous $r.v.$ $X$ is said to have an exponential distribution with parameter $\lambda, \lambda>0$, if its probability density function is given by
$\bspace f\P{x} = \begin{cases}
\lambda e^{-\lambda x}, &\text{if } x \geq 0\\
0 , &\text{if }x<0
\end{cases}$
or *equivalently*, if its cumulative density function is given by
$\bspace F\P{x} = \d{\int_{-\infty}^{x} f\P{s}\;\dd s} = \begin{cases}
1-e^{-\lambda x}, &\text{if } x \geq 0\\
0 , &\text{if }x<0
\end{cases}$
Its mean, $\Exp\SB{X}$ is given by
$\bspace\begin{align}
\Exp\SB{X} &= \int_{-\infty}^{\infty} xf\P{x}\;\dd x\\
&= \int_{0}^{\infty} x\cdot \lambda e^{-\lambda x}\;\dd x\\
&= \left.-\P{x + \ffrac{1}{\lambda}}e^{-\lambda x}\right|_{0}^\infty = \ffrac{1}{\lambda}
\end{align}$
Its moment generating function $\phi\P{t}$ is given by
$\bspace\begin{align}
\phi\P{t} &= \Exp\SB{e^{tX}}\\
&= \int_{0}^{\infty} e^{tx}\cdot \lambda e^{-\lambda x}\;\dd x\\
&= \left. \ffrac{\lambda}{t-\lambda}e^{\P{t-\lambda} x} \right|_{0}^\infty = \ffrac{-\lambda}{t-\lambda}, \bspace t<\lambda
\end{align}$
Other useful moments and variance, from $\text{mgf}$, we have $\Exp\SB{X^2} = 2/\lambda^2$ and $\Var{X} = 1/\lambda^2$
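These moments are easy to sanity-check numerically; here a crude left Riemann sum of $\int_0^\infty x^k \lambda e^{-\lambda x}\,\dd x$, truncated at a large upper limit, against the closed forms above:

```python
import math

def exp_moment(lam, k, upper=50.0, steps=200000):
    """Riemann-sum approximation of E[X^k] for X ~ Exponential(lam)."""
    dx = upper / steps
    return sum((i * dx) ** k * lam * math.exp(-lam * i * dx) * dx
               for i in range(steps))

lam = 2.0
mean = exp_moment(lam, 1)        # should be close to 1/lam = 0.5
second = exp_moment(lam, 2)      # should be close to 2/lam**2 = 0.5
var = second - mean ** 2         # should be close to 1/lam**2 = 0.25
```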
**e.g.1** Exponential Random Variables and Expected Discounted Returns
Here's the short version. $R\P{x}$ is the random rate at which you are receiving rewards at time $x$. For a discount rate $\alpha$ we have the total discounted reward
$\bspace R = \d{\int_0^\infty e^{-\alpha x} R\P{x}\;\dd x} \Rightarrow \Exp\SB{R} = \int_0^\infty e^{-\alpha x}\Exp\SB{R\P{x}}\;\dd x$
If we let $T$ be an exponential random variable with parameter $\alpha$, which is independent of all other $r.v.$. Prove $\Exp\SB{R} = \Exp\SB{\d \int_0^T R\P{x}\;\dd x}$
> First define, for each $x \geq 0$, a $r.v.$ $I\P{x}$ by
>
>$\bspace I\P{x} = \begin{cases}
1,&\text{if } x\leq T\\
0,&\text{if } x > T
\end{cases} \Rightarrow \d \int_0^T R\P{x}\;\dd x = \int_{0}^\infty R\P{x}I\P{x} \;\dd x$
>
>Thus,
>
>$\bspace\begin{align}
\Exp\SB{\d \int_0^T R\P{x}\;\dd x} &= \Exp\SB{\d \int_0^\infty R\P{x}I\P{x}\;\dd x}\\
&= \d \int_0^\infty \Exp\SB{R\P{x}}\Exp\SB{I\P{x}}\;\dd x\\
&= \int_0^\infty \Exp\SB{R\P{x}}P\CB{T\geq x}\;\dd x\\
&= \int_0^\infty \Exp\SB{R\P{x}}\cdot e^{-\alpha x}\;\dd x = \Exp\SB{R}
\end{align}$
***
### Properties of the Exponential Distribution
A $r.v.$ $X$ is said to be *without memory*, or ***memoryless***, if
$\bspace P\CB{X>s+t \mid X>t} = P\CB{X>s} \iff \ffrac{P\CB{X > s+t,X>t}}{P\CB{X>t}} = P\CB{X>s},\bspace \forall s,t \geq 0$
The exponential distribution certainly satisfies this condition, since
$\bspace P\CB{X>s+t} = e^{-\lambda\P{s+t}} = e^{-\lambda t}\cdot e^{-\lambda s} = P\CB{X>t}\,P\CB{X>s}$
To illustrate this see the following examples.
**e.g.2**
Suppose that the amount of time one spends in a bank is exponentially distributed with mean ten minutes, that is $\lambda = 0.1$. What is the probability that a customer will spend more than fifteen minutes in the bank given that she is still in the bank after ten minutes?
>Let $X$ represents the amount of time that the customer spends in the bank.
>
>$P\CB{X > 15 \mid X>10} = P\CB{X>5} = e^{-5\lambda} = e^{-1/2} \approx 0.607$
***
**e.g.4**
The dollar amount of damage of an accident is exponentially distributed with mean $1000$. However the insurance company only pays that amount exceeding $400$. Find the mean and the variance of the amount the insurance company pays per accident.
>Let $X$ denote the dollar amount of damage resulting from an accident; then the payment from the insurance company is $\P{X-400}^+$. Define an indicator $r.v.$ $I = \begin{cases}
1,&\text{if } X>400\\
0,&\text{if } X \leq 400
\end{cases}$
>Then letting $Y = \P{X-400}^+$, we have $\Exp\SB{Y\mid I} = 1000\cdot I$, $\Var{Y\mid I} = 10^6\cdot I$. Then
>
>$\bspace \Exp\SB{Y} = \Exp\SB{\Exp\SB{Y\mid I}} = 10^3\Exp\SB{I} = 10^3\cdot P\CB{X>400} = 10^3\cdot \exp\CB{-\ffrac{1}{1000}\cdot 400} \approx 670.32$
>
>$\bspace \Var{Y} = \Exp\SB{\Var{Y\mid I}} + \Var{\Exp\SB{Y \mid I}} = 10^6 \cdot e^{-0.4} + 10^6\cdot e^{-0.4}\P{1-e^{-0.4}}$
***
It turns out that the exponential distribution is the only memoryless one. To see this, suppose $X$ is memoryless and let $\bar F\P{x} = P\CB{X>x} = 1 - F\P{x}$. Then by the definition above, $\bar F\P{s+t} = \bar F\P{s} \bar F\P{t}$, so $\bar F$ satisfies the functional equation $g\P{s+t} = g\P{s}g\P{t}$. It turns out that the only *right continuous* solution of this functional equation is $g\P{x} = e^{-\lambda x}$.
> Given the equation we have $g\P{m/n} = g^m\P{1/n}$. Especially, we have $g\P{1} = g^n\P{1/n}$. Hence
>
>$\bspace g\P{m/n} = g^{m/n}\P{1}$
>
>which implies, since $g$ is *right continuous*, that $g\P{x} = \P{g\P{1}}^x$. Since $g\P{1} = g^2\P{0.5}\geq0$, we obtain $g\P{x} = e^{-\lambda x}$ where $\lambda = -\log \P{g\P{1}}$.
So we must have $\bar F\P{x} = e^{-\lambda x}$, and consequently, $F\P{x} = 1-e^{-\lambda x}$, which shows that $X$ is exponentially distributed.
Then we define the ***failure rate function***: $r\P{t} = \ffrac{f\P{t}}{1-F\P{t}}$. To interpret it, suppose an item with lifetime $X$ has survived for $t$ hours; what is the probability that it does not survive for an additional time $\dd t$?
$\bspace \begin{align}
P\CB{X \in\P{t,t+\dd t}\mid X>t} &= \ffrac{P\CB{X \in\P{t,t+\dd t},X>t}}{P\CB{X>t}}\\
&= \ffrac{P\CB{X \in\P{t,t+\dd t}}}{P\CB{X>t}}\approx \ffrac{f\P{t}\dd t}{1-F\P{t}} = r\P{t}\dd t
\end{align}$
$\bspace r\P{t}$ *represents the conditional probability density that a* $t$*-year-old item will fail.*
And by the memoryless property, for the exponential distribution, we have
$\bspace r\P{t} = \ffrac{f\P{t}}{1-F\P{t}} = \ffrac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda$
Here $\lambda$ is often referred to as the ***rate*** of the distribution (it is the reciprocal of the mean $1/\lambda$). Moreover, the failure rate function uniquely determines the cdf $F$:
$\bspace r\P{t} = \ffrac{\ffrac{\dd}{\dd t}F\P{t}}{1-F\P{t}}\Rightarrow \log\P{1-F\P{t}}= -\d{\int_0^t r\P s\;\dd s} + k \Rightarrow 1-F\P t = e^k \exp\CB{-\d \int_0^t r\P s\;\dd s}$
And $k$ can be determined by letting $t=0$, so that the final solution is
$\bspace F\P t = 1-\exp\CB{-\d\int_0^t r\P s\;\dd s}$
More than that, by letting $r\P t = \lambda$, we have $F\P t = 1 - e^{-\lambda t}$ showing that this $r.v.$ is exponential.
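The reconstruction of $F$ from the failure rate can be checked numerically; a minimal sketch using trapezoidal quadrature for a constant rate, which should reproduce the exponential cdf $1-e^{-\lambda t}$:

```python
import math

def cdf_from_rate(rate, t, steps=10000):
    """Rebuild F(t) = 1 - exp(-integral_0^t r(s) ds) by trapezoidal quadrature."""
    ds = t / steps
    integral = sum((rate(i * ds) + rate((i + 1) * ds)) / 2.0 * ds
                   for i in range(steps))
    return 1.0 - math.exp(-integral)

lam = 0.5
F = cdf_from_rate(lambda s: lam, 3.0)  # constant rate -> exponential CDF at t = 3
```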
**e.g.6** Hyperexponential $r.v.$
Let $X_1,\dots,X_n$ be *independent* exponential random variables with respective rates $\lambda_1,\dots,\lambda_n$, where $\lambda_i \neq \lambda_j$ when $i\neq j$. Then let $T$ be independent of all of them and suppose that
$\bspace \d\sum_{j=1}^n P_j = 1$, where $P_j = P\CB{T = j}$
Then $X = X_T$ is said to be a ***hyperexponential*** $r.v.$ What is its distribution? Conditioning on $T$ yields
$\bspace \begin{align}
1-F\P t &= P\CB{X>t}\\
&= \sum_{i=1}^{n} P\CB{X>t \mid T = i}\cdot P\CB{T = i}\\
&= \sum_{i=1}^{n} P_i \cdot e^{-\lambda_i t}
\end{align}$
Differentiating with respect to $t$ then gives the pdf and the failure rate function
$\bspace f\P t = \d\sum_{i=1}^n \lambda_i P_i e^{-\lambda_i t},\bspace r\P t = \ffrac{\d{\sum_{i=1}^n \lambda_i P_i e^{-\lambda_i t}}}{\d{\sum_{i=1}^n P_i e^{-\lambda_i t}}}$
Also note that
$\bspace\begin{align}
P\CB{T = j \mid X>t}&= \ffrac{P\CB{X>t \mid T=j}\cdot P\CB{T=j}}{P\CB{X>t}}\\
&= \ffrac{P_j e^{-\lambda_j t}}{{\sum_{i=1}^n P_i e^{-\lambda_i t}}}\\
\Longrightarrow r\P{t} &= \d{\sum_{j=1}^n\lambda_j} P\CB{T=j\mid X>t}
\end{align}$
And one last conclusion: if $\lambda_1<\lambda_i$ for all $i>1$, then
$\bspace\begin{align}
P\CB{T=1\mid X>t}&= \ffrac{P_1 e^{-\lambda_1 t}}{\sum_{i=1}^n P_i e^{-\lambda_i t}}\\
&= \ffrac{P_1}{P_1 + \sum_{i=2}^n P_i e^{-\P{\lambda_i -\lambda_1} t}} \to 1,\bspace t\to \infty
\end{align}$
So that actually $\d{\lim_{t\to\infty} r\P t = \min_i \lambda_i}$
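This limit is easy to check numerically with the hyperexponential formulas above; the rates and mixture weights below are arbitrary illustrative values:

```python
import math

def hyper_rate(t, rates, probs):
    """Failure rate r(t) of a hyperexponential mixture at time t."""
    num = sum(lam * p * math.exp(-lam * t) for lam, p in zip(rates, probs))
    den = sum(p * math.exp(-lam * t) for lam, p in zip(rates, probs))
    return num / den

rates, probs = [1.0, 3.0, 5.0], [0.2, 0.3, 0.5]
early = hyper_rate(0.0, rates, probs)   # at t=0: mixture-weighted average of the rates
late = hyper_rate(20.0, rates, probs)   # for large t: approaches min(rates)
```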
***
### Further Properties of the Exponential Distribution
Let $X_1,\dots,X_n$ be *independent* exponential random variables with common mean $1/\lambda$. Following the discussion in Chapter 2, we know that their sum has a *gamma* distribution with parameters $n$ and $\lambda$. Here is another proof, by *mathematical induction*.
The conclusion holds for $n=1$, so now assume that $X_1 + X_2+\cdots+X_{n-1}$ has density given by
$\bspace\d{f_{X_1+ X_2+\cdots+X_{n-1}}\P t = \lambda e^{-\lambda t}\ffrac{ \P{\lambda t}^{n-2}}{\P{n-2}!}}$
Hence, by independence, the density of the full sum is
$\bspace\begin{align}
f_{X_1+ X_2+\cdots+X_{n-1} \:+ X_n}\P t &= \int_0^\infty f_{X_n}\P{t-s} f_{X_1+ X_2+\cdots+X_{n-1}}\P s \;\dd s\\
&= \int_0^t \lambda e^{-\lambda\P{t-s}} \lambda e^{-\lambda s} \ffrac{\P{\lambda s}^{n-2}}{\P{n-2}!} \;\dd s = \lambda e^{-\lambda t} \ffrac{\P{\lambda t}^{n-1}}{\P{n-1}!}
\end{align}$
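As a quick numerical spot-check of the gamma claim (a simulation sketch, not part of the proof): the sum of $n$ i.i.d. exponentials with rate $\lambda$ should have mean $n/\lambda$ and variance $n/\lambda^2$.

```python
import random

random.seed(1)
n, lam, trials = 5, 2.0, 100000
# each sample is a sum of n independent Exponential(lam) draws
sums = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(trials)]
mean = sum(sums) / trials                          # expect n/lam = 2.5
var = sum((s - mean) ** 2 for s in sums) / trials  # expect n/lam**2 = 1.25
```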
***
Another useful result is the probability that one exponential $r.v.$ is smaller than another. Suppose that $X_1$ and $X_2$ are independent exponential $r.v.$ with respective rates $\lambda_1$ and $\lambda_2$; what is $P\CB{X_1<X_2}$? Again, we condition, this time on $X_1$.
$\bspace\begin{align}
P\CB{X_1<X_2}&= \int_0^\infty P\CB{X_1<X_2\mid X_1 = x}\, f_{X_1}\P{x}\;\dd x\\
&= \int_0^\infty P\CB{x<X_2}\cdot \lambda_1 e^{-\lambda_{\void_1} x}\;\dd x\\
&= \int_0^\infty e^{-\lambda_{\void_2} x}\cdot \lambda_1 e^{-\lambda_{\void_1} x}\;\dd x\\
&= \ffrac{\lambda_1}{\lambda_1 + \lambda_2}
\end{align}$
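A quick Monte Carlo check of this probability (a sketch; the rates are arbitrary choices):

```
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 1.0, 3.0      # arbitrary rates
x1 = rng.exponential(1 / lam1, 200000)
x2 = rng.exponential(1 / lam2, 200000)

est = (x1 < x2).mean()
print(est)                 # close to lam1 / (lam1 + lam2) = 0.25
```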
More than this, the minimum of them is an exponential $r.v.$ with rate equal to the sum of all individual rates.
$\bspace\begin{align}
P\CB{\min_i\P{X_i}>x} &= P\CB{X_1>x,X_2>x,\dots,X_n>x}\\
&= \prod_{i=1}^n P\CB{X_i>x} = \prod_{i=1}^n e^{-\lambda_{\void_i}x} = \exp\CB{-\P{\sum_i \lambda_i} x}
\end{align}$
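A simulation sketch of the minimum's rate (arbitrary rates assumed): the sample mean of $\min_i X_i$ should be near $1/\sum_i \lambda_i$.

```
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([0.5, 1.0, 1.5])                  # arbitrary rates
X = rng.exponential(1 / lam, size=(200000, 3))   # columns: X_1, X_2, X_3
m = X.min(axis=1)

print(m.mean())   # close to 1 / sum(lam) = 1/3
```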
**e.g.7** Analyzing Greedy Algorithms for the Assignment Problem
Attractive! Skipped, though
***
Combining the two preceding results, we have
$\bspace \d{P\CB{X_i = \min_j X_j} = P\CB{X_i <\min_{j\neq i}X_j} = \ffrac{\lambda_i}{\lambda_i + \sum_{j\neq i} \lambda_j} = \ffrac{\lambda_i}{\sum_j \lambda_j}}$
$Proposition$
If $X_1,\dots,X_n$ are independent exponential $r.v.$ with respective rates $\lambda_1,\dots,\lambda_n$, then $\d{\min_i X_i}$ is exponential with rate $\d{\sum_{i=1}^{n}\lambda_i}$. Further, $\d{\min_i X_i}$ and the rank order of the variables $X_1,\dots,X_n$ are independent.
>The only thing we need to prove here is the last conclusion, the independence. And that can be easily seen using the property that the exponential is **memoryless**
>
>$\bspace \d{P\CB{X_{i_1}<\cdots<X_{i_n}\mid \min_i X_i >t} = P\CB{X_{i_1}-t<\cdots<X_{i_n}-t\mid \min_i X_i >t} = P\CB{X_{i_1}<\cdots<X_{i_n}}}$
**e.g.9** and **e.g.10** are also very interesting, and comprehensive. The strategies basically, are taking conditional probability, using recursions, or breaking one $r.v.$ to the sum of $k$ $r.v.$s.
***
### Convolutions of Exponential $r.v.$ (skipped)
The sum of $n$ independent exponential $r.v.$ with different respective rates $\lambda_i$, is said to be a ***hypoexponential*** $r.v.$. To calculate its pdf, let's start with case $n=2$. Now,
$\bspace\begin{align}
f_{X_1+X_2}\P t &= \int_0^t f_{X_1} \P s f_{X_2} \P{t-s}\;\dd s\\
&= \int_0^t \lambda_1 e^{-\lambda_{\void_1} s} \lambda_2 e^{-\lambda_{\void_2} \P{t-s}}\;\dd s\\
&= \lambda_1\lambda_2e^{-\lambda_{\void_2} t} \int_0^t e^{-\P{\lambda_{\void_1}-\lambda_{\void_2}}s}\;\dd s\\
&= \ffrac{\lambda_1\lambda_2e^{-\lambda_{\void_2} t}}{\lambda_1-\lambda_2} \P{1-e^{-\P{\lambda_{\void_1}-\lambda_{\void_2}}t}} = \ffrac{\lambda_1}{\lambda_1-\lambda_2}\lambda_2 e^{-\lambda_{\void_2}t} + \ffrac{\lambda_2}{\lambda_2-\lambda_1}\lambda_1 e^{-\lambda_{\void_1}t}
\end{align}$
And similarly, we have when $n=3$,
$\bspace\d{f_{X_1+X_2+X_3}\P t = \sum_{i=1}^3 \P{\lambda_i e^{-\lambda_{\void_i} t} \prod_{j\neq i} \ffrac{\lambda_j}{\lambda_j-\lambda_i} }}$
which suggests the general result, letting $S = X_1 + \cdots+X_n$,
$\bspace \d{f_{S}\P t = \sum_{i=1}^n \P{\lambda_i e^{-\lambda_{\void_i}t} \prod_{j\neq i} \ffrac{\lambda_j}{\lambda_j - \lambda_i}}}$
This can be proved by induction on $n$, though the argument is rather involved; skipped. Now, integrating both sides of the expression for $f_{S}$ from $t$ to $\infty$, the ***tail distribution function*** of $S$ is given by
$\d{\bspace P\CB{S>t} = \sum_{i=1}^n C_{i,n} e^{-\lambda_{\void_i}t},\bspace C_{i,n} = \prod_{j\neq i} \ffrac{\lambda_j}{\lambda_j - \lambda_i}}$
And the following **failure rate function** of $S$, is given by
$\bspace r_S\P t = \ffrac{\sum_{i=1}^n C_{i,n}\lambda_i e^{-\lambda_{\void_i}t}}{\sum_{i=1}^n C_{i,n} e^{-\lambda_{\void_i}t}}$
In addition, if we let $\lambda_m = \d{\min_j \lambda_j}$, then it follows that $\d{\lim_{t\to\infty}r_S\P t = \lambda_m}$
$Remark$
>Although
>
>$\bspace \d{1 = \int_0^\infty f_S\P t\;\dd t = \sum_{i=1}^n C_{i,n} = \sum_{i=1}^n \prod_{j\neq i} \ffrac{\lambda_j}{\lambda_j - \lambda_i}}$
>
>the $C_{i,n}$ are NOT probabilities; some of them may be negative. Also, this is not the same as the **hyperexponential** density.
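A numerical sketch of this remark, with arbitrary distinct rates: the coefficients $C_{i,n}$ sum to $1$ even though some of them are negative.

```
import numpy as np

lam = np.array([1.0, 2.0, 3.5, 5.0])   # distinct rates (arbitrary choices)

def C(i):
    # C_{i,n} = prod_{j != i} lam_j / (lam_j - lam_i)
    others = np.delete(lam, i)
    return np.prod(others / (others - lam[i]))

coeffs = np.array([C(i) for i in range(len(lam))])
print(coeffs)          # some entries are negative
print(coeffs.sum())    # sums to 1, so P{S > 0} = 1
```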
***
**e.g.11** Coxian $r.v.$
Whatever, skipped
***
## The Poisson Process
### Counting Processes
A stochastic process $\CB{N\P t , t\geq 0}$ is said to be a ***counting process*** if $N\P t$ represents the total number of "events" that occur by time $t$. And we can see this process must satisfy
1. $N\P{t}\geq 0$
2. $N\P t$ is integer valued
3. If $s<t$, then $N\P s \leq N\P t$
4. For $s<t$, $N\P t - N\P s$ equals the number of events that occur in $(s,t]$
A counting process is said to possess ***independent increments*** if the numbers of events that occur in disjoint time intervals are independent, like $N\P{10}$ and $\P{N\P{15} - N\P{10}}$. And a counting process is said to possess ***stationary increments*** if the distribution of the number of events that occur in any interval of time depends only on the *length* of the time interval, say for all $s$ the number of events in the interval $\P{s,s+t}$ has the same distribution.
### Definition of the Poisson Process
$Def.1$ $o\P h$
The function $f\P \cdot$ is said to be $o\P h$ if $\d{\lim_{h\to0} \ffrac{f\P h}{h} = 0}$
And with this we could now write
$\bspace\begin{align}
P\P{t < X <t+h} &= f\P t \cdot h + o\P h\\
P\P{t < X <t+h\mid X>t} &= r\P t \cdot h + o\P h
\end{align}$
$Def.2$
The counting process $\CB{N\P t, t\geq 0}$ is said to be a ***Poisson process*** with rate $\lambda >0$ if the following *axioms* hold:
1. $N\P 0 = 0$
2. $\CB{N\P t, t\geq 0}$ has independent increments
3. $P\CB{N\P{t+h} - N\P t = 1} = \lambda \cdot h + o\P h$
4. $P\CB{N\P{t+h} - N\P t \geq 2} = o\P h$
$Theorem.1$
If $\CB{N\P t, t\geq 0}$ is a **Poisson process** with rate $\lambda>0$, then for all $s>0$, $t>0$, $N\P{s+t} - N\P{s}$ is a Poisson $r.v.$ with mean $\lambda t$. That is to say, that the number of events in *any* interval of length $t$ is a Poisson $r.v.$ with mean $\lambda t$.
$Proof$
>We begin by deriving $\Exp\SB{e^{-uN\P t}}$, the ***Laplace transform*** of $N\P t$. First fix $u>0$ and define $g\P t = \Exp\SB{e^{-uN\P t}}$. Then a differential equation
>
>$\bspace\begin{align}
g\P{t+h} &= \Exp\SB{e^{-uN\P{t+h}}}\\
&= \Exp\SB{e^{-u\P{N\P{t+h} +N\P t - N\P t }}}\\
&= \Exp\SB{e^{-uN\P{t}} \cdot e^{-u\P{N\P{t+h} - N\P t}}}\\
&= \Exp\SB{e^{-uN\P{t}}} \cdot \Exp\SB{e^{-u\P{N\P{t+h} - N\P t}}} = g\P t \cdot \Exp\SB{e^{-u\P{N\P{t+h} - N\P t}}}
\end{align}$
>
>Then from the last two axioms, we have
>
>$\bspace\begin{align}P\CB{N\P{t+h} - N\P t = 0} &= 1 -P\CB{N\P{t+h} - N\P t = 1} -P\CB{N\P{t+h} - N\P t \geq 2}\\
&= 1 - \lambda\cdot h + o\P h\\
\Longrightarrow \Exp\SB{e^{-u\P{N\P{t+h} - N\P t}}} &= 1-\lambda \cdot h + o\P h + e^{-u}\P{\lambda \cdot h + o\P h} + o\P h\\
&= 1-\lambda \cdot h + \lambda e^{-u} \cdot h + o\P h
\end{align}$
>Therefore, $g\P{t+h} = g\P t \big(1+\lambda\P{e^{-u}-1}h + o\P h\big)$. Then solve this by first rewriting
>
>$\bspace \ffrac{g\P{t+h} - g\P t}{h} = g\P t \lambda \P{e^{-u}-1} + \ffrac{o\P h}{h}$
>
>Letting $h\to0$ yields the differential equation
>
>$\bspace g'\P t = g\P t \lambda \P{e^{-u} - 1 } \Longrightarrow \log\big(g\P t\big) = \lambda \P{e^{-u} - 1}t + C$
>
>Noting that $g\P 0 = \Exp\SB{e^{-uN\P 0}} = \Exp\SB{e^0} = 1$, it follows that $C=0$, and so the **Laplace transform** of $N\P t$ is
>
>$\bspace\Exp\SB{e^{-uN\P t}} = g\P t = e^{\lambda \P{e^{-u} \;-1}t}$
>
>Comparing this to the **Laplace transform** of a typical Poisson $r.v.$ $X$, with mean $\lambda t$,
>
>$\bspace\Exp\SB{e^{-uX}} = \d{\sum_i e^{-u i} \cdot\ffrac{e^{-\lambda t} \P{\lambda t}^i}{i!} = e^{-\lambda t}\sum_i \ffrac{\P{\lambda t e^{-u}}^i}{i!} } = e^{-\lambda t} \cdot e^{\lambda t e^{-u}} = e^{\lambda t \P{e^{-u}\; -1}}$
>
>Like the $\text{mgf}$, the **Laplace transform** uniquely determines the distribution; we can thus conclude that $N\P t$ is a **Poisson** $r.v.$ with mean $\lambda t$.
>
>Then, to show that $N_s\P{t} =N\P{t+s} - N\P s$ is also Poisson with mean $\lambda t$, fix $s$ and then $N_s\P t$ is the number of events in the first $t$ time units when we start our count at time $s$. Then we can verify that $\CB{N_s\P t, t\geq 0}$ satisfies all the axioms for being a Poisson process with rate $\lambda$. Consequently, using the preceding result, we can conclude that $N_s\P t$ is a Poisson $r.v.$ with mean $\lambda t$.
$Remark$
>Since the distribution of $N\P{t+s} - N\P t$ is the same for all $s$, it follows that the **Poisson process** has **stationary increments**.
>
>Also, this conclusion is a consequence of the *Poisson approximation to the **binomial** distribution*. To see this we can subdivide the interval $\SB{0,t}$ into $k$ equal parts. Using $Axiom.4$, as $k$ increases to $\infty$, the probability of having two or more events in any of the $k$ subintervals goes to $0$. Hence, $N\P t$ will equal the number of subintervals in which an event occurs.
>
>Then by $Axiom.2$, about the independent increments, and the preceding **stationary increments** conclusion, we know that this number will have a binomial distribution with parameters $k$ and $p = \lambda t/k +o\P{t/k}$.
>
>Hence, by the Poisson approximation to the binomial, we see that when $k$ goes to $\infty$, $N\P t$ will have a **Poisson distribution** with mean:
>
>$\bspace\d{\lim_{k\to\infty} k\cdot\SB{\ffrac{\lambda t}{k} + o\P{\ffrac{t}{k}}} =\lambda t + \lim_{k\to\infty} \ffrac{t\cdot o\P{t/k}}{t/k} = \lambda t }$
>
>The last step can be seen by: $k\to \infty\Rightarrow t/k \to 0 \Rightarrow \ffrac{o\P{t/k}}{t/k}\to 0$
***
### Interarrival and Waiting Time Distributions
Consider a Poisson process, and let us denote the time when the first event happened by $T_1$. Further, for $n>1$, let $T_n$ denote the *elapsed time* between the $\P{n-1}$st and the $n$th event. Then the sequence $\CB{T_n, n=1,2,\dots}$ is called the ***sequence of interarrival times***. What's the distribution of $T_n$?
First note that $P\CB{T_1>t} = P\CB{N\P t = 0} = e^{-\lambda t}$. Hence, $T_1$ has an **exponential** distribution with mean $1/\lambda$. Now
$\bspace
P\CB{T_2 > t} = \Exp\big[P\CB{T_2>t\mid T_1}\big] = \d\int_0^\infty P\CB{T_2>t\mid T_1 = s} \cdot P\CB{T_1 = s} \;\dd s$
However, $P\CB{T_2>t\mid T_1 = s} = P\CB{0\text{ events in }(s,s+t]\mid T_1 = s}$ and because of the independent and stationary increments, we have
$\bspace P\CB{T_2>t\mid T_1 = s} = P\CB{0\text{ events in }(s,s+t]} = e^{-\lambda t}\Rightarrow P\CB{T_2 > t} = \d\int_0^\infty e^{-\lambda t} \cdot \lambda e^{-\lambda s} \;\dd s = e^{-\lambda t}$
Repeating the same argument yields the following
$Proposition.1$
$T_n$, $n=1,2,\dots$ are $i.i.d.$ **exponential** $r.v.$s with mean $1/\lambda$.
$Remark$
>Define $S_n$ as the ***waiting time*** until the $n$th events. Of course, $S_n = \sum\limits_{i=1}^n T_i$, for $n\geq 1$. Then as the sum of $i.i.d.$ **exponential** $r.v.$s with common rate $\lambda$, we have its density:
>
>$\bspace f_{S_n}\P t = \lambda e^{-\lambda t}\ffrac{\P{\lambda t}^{n-1}}{ \P{n-1}! },\bspace t\geq 0$
>
>Another way to derive this is to first notice that $N\P t \geq n \iff S_n \leq t$, hence
>
>$\bspace\begin{align}
F_{S_n}\P t &= P\CB{S_n \leq t} = P\CB{N\P t \geq n} = \sum_{j=n}^\infty e^{-\lambda t}\ffrac{\P{\lambda t}^j}{j!}\\
f_{S_n}\P t &= -\sum_{j=n}^\infty \lambda e^{-\lambda t}\ffrac{\P{\lambda t}^j}{j!} + \sum_{j=n}^\infty \lambda e^{-\lambda t}\ffrac{\P{\lambda t}^\P{j-1}}{\P{j-1}!} \\
&= -\sum_{j=n}^\infty \lambda e^{-\lambda t}\ffrac{\P{\lambda t}^j}{j!} + \sum_{j=n+1}^\infty \lambda e^{-\lambda t}\ffrac{\P{\lambda t}^\P{j-1}}{\P{j-1}!} + \lambda e^{-\lambda t}\ffrac{\P{\lambda t}^\P{n-1}}{\P{n-1}!} = \lambda e^{-\lambda t}\ffrac{\P{\lambda t}^\P{n-1}}{\P{n-1}!}
\end{align}$
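A simulation sketch of the waiting time $S_n$ as a sum of interarrival times (the rate and $n$ below are arbitrary choices): its mean and variance should match the gamma values $n/\lambda$ and $n/\lambda^2$.

```
import numpy as np

rng = np.random.default_rng(2)
lam, n = 2.0, 5                                   # arbitrary rate and event index
T = rng.exponential(1 / lam, size=(100000, n))    # i.i.d. interarrival times
S_n = T.sum(axis=1)                               # waiting time until the n-th event

print(S_n.mean())   # close to n/lam = 2.5
print(S_n.var())    # close to n/lam**2 = 1.25
```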
**e.g.13**
Suppose that people immigrate into a territory at a Poisson rate $\lambda = 1$ per day.
$\P{\text{a}}$ What is the expected time until the tenth immigrant arrives?
> $\Exp\SB{S_{10}} = 10\cdot \ffrac{1}{\lambda} = 10$ days
$\P{\text{b}}$ What is the probability that the elapsed time between the tenth and the eleventh arrival exceeds two days?
>$P\CB{T_{11}>2} = e^{-\lambda \cdot 2} = e^{-2} \approx 0.135$
***
And this newly defined $T_n$ leads to a new definition of a counting process. Given a sequence $\CB{T_n,n\geq1}$, the $n$th event of this process occurs at time $S_n \equiv T_1+T_2+\cdots+T_n$. So we define
$\bspace N\P t \equiv \max\CB{n:S_n\leq t}$
and the resulting counting process $\CB{N\P t, t\geq 0}$ will be a Poisson process with rate $\lambda$.
$Remark$
>This new definition leads to a new method to find the pdf of $S_n$.
>
>$\begin{align}
P\CB{t<S_n<t+h} &= P\CB{N\P t = n-1, \text{one more event in }(t,t+h]} + o\P h\\
&= P\CB{N\P t = n-1}\cdot P\CB{\text{one more event in }(t,t+h]} + o\P h\\
&= e^{-\lambda t}\ffrac{\P{\lambda t}^{n-1}}{\P{n-1}!} \cdot\P{\lambda h+o\P h} + o\P h\\
&= \lambda e^{-\lambda t}\ffrac{\P{\lambda t}^{n-1}}{\P{n-1}!}\cdot h + o\P h\\
\Rightarrow f_{S_n}\P t &= \lambda e^{-\lambda t}\ffrac{\P{\lambda t}^{n-1}}{\P{n-1}!}
\end{align}$
***
### Further Properties of Poisson Processes
Consider a Poisson process $\CB{N\P t,t\geq 0}$ having rate $\lambda$, and suppose each event can be classified as a type **I** or a type **II** event, with probability $p$ and $1-p$ respectively. Let $N_1\P t$ and $N_2\P t$ denote respectively the number of type I and type II events occurring in $\SB{0,t}$. Then we have
$Proposition.2$
$N\P t = N_1\P t + N_2\P t$ and actually, $\CB{N_1\P t,t\geq 0}$ and $\CB{N_2\P t,t\geq 0}$ are both Poisson processes having rate $\lambda p$ and $\lambda\P{1-p}$, respectively. Also, the two processes are independent.
$Proof$
>First we prove that $\CB{N_i\P t,t\geq 0}$ are Poisson process, by checking the definition
>
>- $N_i\P{0} = 0$ follows from the fact that $N\P0 = 0$
>- Stationary and independent increments also hold. Whatever, just believe this.
>- $\begin{align}
P\CB{N_1\P h = 1} &= P\CB{N_1\P h = 1\mid N\P h = 1} \cdot P\CB{N\P h = 1} \\
&\bspace+ P\CB{N_1\P h = 1\mid N\P h \geq 2} \cdot P\CB{N\P h \geq 2} \\
&= p\P{\lambda h + o\P h} + o\P{h}\\
&= \lambda p h + o\P h
\end{align}$, and same for $P\CB{N_2\P h = 1}$
>- $P\CB{N_1\P h \geq 2}\leq P\CB{N\P h \geq 2} = o\P{h}$, similar for $P\CB{N_2\P h \geq 2}$
>
>Since the probability of a type I event in $(t,t+h]$ is independent of those in intervals that do not overlap $\P{t,t+h}$, therefore, it's independent of knowledge of when type II events occur.
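A quick simulation sketch of this proposition (all parameters are arbitrary choices): given $N\P t$, the type-I count is binomial, and the resulting marginals match the means $\lambda p t$ and $\lambda\P{1-p}t$, with near-zero covariance between the two counts.

```
import numpy as np

rng = np.random.default_rng(3)
lam, p, t, reps = 4.0, 0.3, 2.0, 100000   # arbitrary parameters

N = rng.poisson(lam * t, reps)            # N(t)
N1 = rng.binomial(N, p)                   # type-I count: each event kept w.p. p
N2 = N - N1                               # type-II count

print(N1.mean(), N1.var())    # both close to lam*p*t = 2.4
print(N2.mean())              # close to lam*(1-p)*t = 5.6
print(np.cov(N1, N2)[0, 1])   # near 0: the two processes are independent
```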
**e.g.14**
Easy one.
**e.g.15**
Interesting, skipped though.
***
**e.g.16** About dividing into $r$ groups, $r>2$
Consider a system in which individuals at any time are classified as being in one of $r$ possible states, and assume that an individual changes states in accordance with a Markov chain having transition probabilities $P_{ij}$, $i , j = 1,\dots, r$. The individuals move independently. Suppose that the numbers of people initially in states $1,\dots, r$ are independent Poisson $r.v.$ with respective means $\lambda_1,\dots, \lambda_r$. We are interested in determining the *joint distribution* of the numbers of individuals in states $1,\dots, r$ at some time $n$.
>For fixed $i$, let $N_j\P i$, $j=1,2,\dots,r$ denote the number of those individuals initially in state $i$, that are in state $j$ at time $n$. And now each of the Poisson distributed number of people in state $i$ will, independently, be in state $j$ at time $n$ with probability $P_{ij}^n$.
>
>Hence, $N_j\P i$, $j=1,2,\dots,r$ will be independent Poisson random variables with respective means $\lambda_iP_{ij}^n$, $j=1,2,\dots,r$. Because *the sum of independent Poisson $r.v.$s is also a Poisson $r.v.$*, it follows that the number of individuals in state $j$ at time $n$, namely, $\sum\limits_{i=1}^r N_j\P i$, will be independent Poisson $r.v.$s with respective means $\sum\limits_i \lambda_i P_{ij}^n$, for $j=1,2,\dots,r$.
***
**e.g.17** Coupon collecting problem
Interesting, easy to understand, skipped though.
***
We now try to find the probability that $n$ events occur in one Poisson process before $m$ events have occurred in a second, independent Poisson process. Let $\CB{N_1\P t,t\geq 0}$ and $\CB{N_2\P t,t\geq 0}$ be two independent Poisson processes having respective rates $\lambda_1$ and $\lambda_2$. Let $S_n^1$ denote the time of the $n$th event of the first process and $S_m^2$ the time of the $m$th event of the second.
What's $P\CB{S_n^1 < S_m^2}$?
The answer is easy to obtain when $n=m=1$, which is $P\CB{S_1^1 < S_1^2} = \ffrac{\lambda_1}{\lambda_1 + \lambda_2}$. Following this intuition, we then consider $P\CB{S_2^1 < S_1^2}$, which gives us (the **memoryless property** removes the effect of conditioning on $S_1^1 < S_1^2$)
$\bspace\d{P\CB{S_2^1 < S_1^2} = \P{P\CB{S_1^1 < S_1^2}}^2 = \P{\ffrac{\lambda_1}{\lambda_1 + \lambda_2}}^2}$
In fact, this reasoning shows that, given *all* that has previously occurred (which is independent of the future), each event that occurs is an event of the $N_1\P t$ process with probability $\ffrac{\lambda_1}{\lambda_1 + \lambda_2}$ or an event of the $N_2\P t$ process $w.p.$ $\ffrac{\lambda_2}{\lambda_1 + \lambda_2}$. Thus
$\bspace\d{P\CB{S_n^1 < S_m^2} = \sum_{k=n}^{n+m-1} \binom{n+m-1}{k} \P{\ffrac{\lambda_1}{\lambda_1 + \lambda_2}}^k\P{\ffrac{\lambda_2}{\lambda_1 + \lambda_2}}^{n+m-1-k}}$
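A sketch checking this binomial formula against direct simulation of the two gamma-distributed event times (the rates, $n$, and $m$ are arbitrary choices):

```
import numpy as np
from math import comb

lam1, lam2, n, m = 1.0, 2.0, 2, 3      # arbitrary parameters
p = lam1 / (lam1 + lam2)

# P{S_n^1 < S_m^2}: at least n of the first n+m-1 events belong to process 1
prob = sum(comb(n + m - 1, k) * p**k * (1 - p)**(n + m - 1 - k)
           for k in range(n, n + m))

rng = np.random.default_rng(4)
S1 = rng.exponential(1 / lam1, size=(100000, n)).sum(axis=1)  # gamma(n, lam1)
S2 = rng.exponential(1 / lam2, size=(100000, m)).sum(axis=1)  # gamma(m, lam2)
print(prob, (S1 < S2).mean())   # the two estimates agree
```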
### Conditional Distribution of the Arrival Times
Given the fact that *exactly one* event has occurred by time $t$, what is the distribution of the time at which the event occurred? Owing to the stationary and independent increments, we guess it is uniformly distributed over $\SB{0,t}$. We now check that, for $s\leq t$,
$\bspace\begin{align}
P\CB{T_1< s\mid N\P t = 1} &= \ffrac{P\CB{T_1< s, N\P t = 1}}{P\CB{ N\P t = 1}}\\
&= \ffrac{P\CB{\text{one event in }[0,s),\text{zero event in }\SB{s,t}}}{P\CB{ N\P t = 1}}\\
&= \ffrac{P\CB{\text{one event in }[0,s)}\cdot P\CB{\text{zero event in }\SB{s,t}}}{P\CB{ N\P t = 1}}\\
&= \ffrac{e^{-\lambda s}\ffrac{\lambda s}{1!} \cdot e^{-\lambda \P{t-s}}\ffrac{\P{\lambda \P{t-s}}^0}{0!}}{e^{-\lambda t}\ffrac{\lambda t}{1!}} = \ffrac{s}{t}
\end{align}$
$Remark$
>About the ***order statistics***: Let $Y_1,\dots,Y_n$ be $n$ $r.v.$. We say $Y_{\P{1}},\dots,Y_{\P n}$ are the **order statistics** corresponding to $Y_1,\dots,Y_n$ if $Y_{\P k}$ is the $k$th smallest value among $Y_1,\dots,Y_n$. If we suppose further that the $Y_i$ are $i.i.d.$ continuous $r.v.$ with pdf $f$, we now have the joint density of the **order statistics**.
>
>$\d{f\P{y_1,\dots,y_n} = n! \prod_{i=1}^n f\P{y_i},\bspace y_1<y_2<\cdots<y_n}$
>
>Especially, when all $Y_i$ are uniformly distributed over $\P{0,t}$, we have
>
>$\d{f\P{y_1,\dots,y_n} = \ffrac{n!}{t^n},\bspace 0< y_1<y_2<\cdots<y_n<t}$
With this we have the following Theorem
$Theorem.2$
Given that $N\P t = n$, the **arrival times** $S_1,S_2,\dots,S_n$ have the same distribution as the **order statistics** corresponding to $n$ independent $r.v.$ uniformly distributed on the interval $\P{0,t}$.
$Proof$
>Note that event $\P{s_1,s_2,\dots,s_n, N\P t = n}$ where $s_n$ marks the arrival time, is equivalent to $\P{T_1 = s_1,T_2 = s_2,\dots,T_n = s_n,T_{n+1}>t-s_n}$ where $T_n$ marks the interarrival time. *Since $T_n$ are $i.i.d.$ exponential $r.v.$ with rate $\lambda$*, we have
>
>$\begin{align}
f\P{s_1,s_2,\dots,s_n\mid N\P t = n} &= \ffrac{f\P{s_1,s_2,\dots,s_n,n}}{P\CB{N\P t = n}}\\
&= \ffrac{\lambda e^{-\lambda s_{\void_1}}\cdot\lambda e^{-\lambda \P{s_{\void_2} - s_{\void_1}}} \cdots \lambda e^{-\lambda \P{s_{\void_n} - s_{\void_{n-1}}\,}} \cdot e^{-\lambda \P{t-s_{\void_n}}}}{ e^{-\lambda t } \ffrac{ \P{\lambda t}^n}{n!}}\\
&= \ffrac{n!}{t^n},\bspace 0<s_1<s_2<\cdots<s_n < t
\end{align}$
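A simulation sketch of the $n=1$ case of this theorem: conditioned on $N\P t = 1$, the single arrival time should be uniform on $\P{0,t}$ (the rate and horizon below are arbitrary choices).

```
import numpy as np

rng = np.random.default_rng(5)
lam, t, reps = 0.2, 5.0, 100000        # arbitrary rate and horizon

T = rng.exponential(1 / lam, size=(reps, 30))
S = np.cumsum(T, axis=1)               # arrival times S_1, S_2, ...
N_t = (S <= t).sum(axis=1)             # N(t) for each run

# Given N(t) = 1, the single arrival time should be Uniform(0, t)
cond = S[N_t == 1, 0]
print(cond.mean())   # close to t/2 = 2.5
print(cond.var())    # close to t**2/12 ≈ 2.083
```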
$Proposition.3$
Assume a Poisson process $\CB{N\P t,t\geq 0}$ with rate $\lambda$ and $k$ possible sub-types. Further, suppose that if an event occurs at time $y$ then, independently of everything else, it is classified as a type $i$ event with probability $P_i\P y$, $i=1,\dots,k$, where $\sum_{i=1}^k P_i\P y = 1$.
If $N_i\P t$, $i=1,\dots,k$ represents the number of type $i$ events occurring by time $t$, then the $N_i\P t$ are independent Poisson $r.v.$ with means
$\bspace\d{ \Exp\SB{N_i\P t} = \lambda \int_0^t P_i\P s \;\dd s}$
Whatever, skipped. **e.g.21** and **e.g.22** are very interesting, though.
***
$Proposition.1$
Given that $S_n = t$, the set $S_1, S_2, \dots, S_{n-1}$ has the distribution of a set of $n-1$ independent uniform $\P{0,t}$ $r.v.$s.
$Proof$
> Other than using the same reasoning as in the proof of $Theorem.2$, here's a loose proof
>
>$\begin{align}
&\P{S_1,S_2\dots,S_{n-1} \mid S_n = t}\\
\sim\; & \P{S_1,S_2\dots,S_{n-1} \mid S_n = t, N\P{t^-} = n-1} \\
\sim\; & \P{S_1,S_2\dots,S_{n-1} \mid N\P{t^-} = n-1}
\end{align}$
>
>where $t^-$ is infinitesimally smaller than $t$ and $\sim$ means "has the same distribution as".
### Estimating Software Reliability
skipped
## Generalizations of the Poisson Process
### Nonhomogeneous Poisson Process
Also known as the ***nonstationary Poisson process***; we obtain it by allowing the arrival rate at time $t$ to be a function of $t$, namely $\lambda \P t$
$Def.3$
The counting process $\CB{N\P t, t \geq 0}$ is said to be a ***Nonhomogeneous Poisson Process*** with *intensity function* $\lambda \P t$, $t\geq 0$, if
1. $N\P 0 = 0$
2. $\CB{N\P t, t \geq 0}$ has independent increments
3. $P\CB{N\P{t+h} - N\P t \geq 2} = o\P h$
4. $P\CB{N\P{t+h} - N\P t = 1} = \lambda\P t \cdot h + o\P h$
We also define the ***mean value function*** of a **nonhomogeneous poisson process** as
$\bspace m\P t = \d\int_0^t \lambda \P y \;\dd y$
$Theorem.3$
If $\CB{N\P t, t \geq 0}$ is a **nonstationary Poisson process** with **intensity function** $\lambda \P t$, $t\geq 0$, then $N\P{t+s} - N\P{s}$ is a Poisson $r.v.$ with *mean* $m\P{t+s} - m\P s = \d\int_{s}^{t+s} \lambda \P y\;\dd y$.
$Proof$
>Mimicking the proof of $Theorem.1$, the stationary one, we have, by letting $g\P t=\Exp\SB{e^{-uN\P{t}}}$,
>
>$\bspace g\P{t+h}=g\P{t} \Exp\SB{e^{-uN_t\P{h}}}$
>where $N_t\P{h} = N\P{t+h} - N\P{t}$. Since $P\CB{N\P{t+h} - N\P t = 0} = 1- \lambda\P t \cdot h + o\P h$, the preceding is
>
>$\bspace\begin{align}
g\P{t+h} &= g\P{t} \big(e^0\cdot \P{1- \lambda\P t \cdot h + o\P h} + e^{-u}\cdot \P{\lambda\P t \cdot h + o\P h} + e^{-2u}\cdot o\P h\big)\\
&= g\P t\P{1-\lambda \P t\cdot h + e^{-u} \lambda \P t\cdot h + o\P h}\\
\Rightarrow g\P{t+h} - g\P t &= g\P{t}\lambda \P t\P{e^{-u} -1}h + o\P h
\end{align}$
>
>Dividing by $h$ and letting $h\to0$ yields the differential equation $g'\P t = g\P t \lambda \P t \P{e^{-u} - 1}$. Then we write $\ffrac{g'\P t}{g\P t} = \lambda\P t\P{e^{-u} -1}$. Solve this we have
>
>$\bspace g\P t = \exp\CB{m\P t \P{e^{-u}-1}}$
>
>However, $\exp\CB{m\P t \P{e^{-u}-1}}$ is also the **Laplace transform** of a Poisson $r.v.$ with mean $m\P t$. We assert that $N\P t $ is also Poisson with mean $m\P t$. Then $N\P{t+s} - N\P{s} = N_s\P t$ is a nonstationary Poisson process with **intensity function** $\lambda_s \P t = \lambda \P{s+t}$, $t>0$. Consequently, $N_s\P t$ is Poisson with mean
>
>$\bspace\d{\int_0^t \lambda_s\P y \;\dd y = \int_0^t \lambda\P{s+y} \;\dd y = \int_s^{s+t} \lambda\P x \;\dd x}$
One special case is the time sampling of an ordinary Poisson process. Consider an ordinary Poisson process $\CB{N\P t, t\geq0}$ with rate $\lambda$, and suppose that an event occurring at time $t$ is, independently of what has occurred prior to $t$, counted with probability $p\P t$. Letting $N_c\P t$ denote the number of counted events by time $t$, the counting process $\CB{N_c\P t, t\geq 0}$ is a nonhomogeneous Poisson process with **intensity function** $\lambda \P t = \lambda \cdot p\P t$. We verify this by noting that the process satisfies the four nonhomogeneous Poisson process axioms.
1. $N_c\P 0 = 0$
2. The number of *counted events* in $\P{s,s+t}$ depends solely on the number of *events* of the Poisson process in $\P{s,s+t}$, which is independent of what has occurred prior to time $s$. Consequently, the number of *counted events* in $\P{s,s+t}$ is independent of the process of *counted events* prior to $s$.
3. Let $N_c\P{t,t+h} = N_c\P{t+h} - N_c\P{t}$, with a similar definition of $N\P{t,t+h}$.$\\[0.6em]$
$\bspace P\CB{N_c\P{t,t+h} \geq 2} \leq P\CB{N\P{t,t+h} \geq 2} = o\P h\\[0.6em]$
4. Compute $P\CB{N_c\P{t,t+h} =1}$ by conditioning on $N\P{t,t+h}$.$\\[0.6em]$
$\bspace\begin{align}
P\CB{N_c\P{t,t+h} =1} &= P\CB{N_c\P{t,t+h} =1\mid N\P{t,t+h} = 1}\cdot P\CB{N\P{t,t+h} =1}\\
&\bspace + P\CB{N_c\P{t,t+h} =1\mid N\P{t,t+h} \geq 2}\cdot P\CB{N\P{t,t+h} \geq 2} \\
&= P\CB{N_c\P{t,t+h} =1\mid N\P{t,t+h} = 1} \cdot \lambda h + o\P h\\
&= p\P t \cdot \lambda h+ o\P h
\end{align}$
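This time-sampling construction doubles as a standard way to *simulate* a nonhomogeneous Poisson process: thin a homogeneous process of rate $\lambda_{\max} \geq \lambda\P t$, keeping an event at time $t$ with probability $\lambda\P t / \lambda_{\max}$. A minimal sketch (the intensity function and parameters are arbitrary choices):

```
import numpy as np

rng = np.random.default_rng(6)

# Sample a nonhomogeneous Poisson process on [0, T] by thinning a rate-lam_max process
lam_max, T = 2.0, 10.0
lam_fn = lambda t: 2.0 * t / T          # intensity, bounded by lam_max (arbitrary choice)

def sample_nhpp():
    t, events = 0.0, []
    while True:
        t += rng.exponential(1 / lam_max)
        if t > T:
            return events
        if rng.random() < lam_fn(t) / lam_max:   # keep with prob lambda(t)/lam_max
            events.append(t)

counts = [len(sample_nhpp()) for _ in range(20000)]
print(np.mean(counts))   # close to m(T) = ∫0^T 2t/10 dt = 10
```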
**e.g.24**
Arrivals at a hot dog stand from $8$ A.M. constitute a nonhomogeneous Poisson process with **intensity function** $\lambda\P t$ given by
$\bspace\lambda\P t = \begin{cases}
0, & 0 \leq t < 8\\
5+5\P{t-8},& 8\leq t \leq 11\\
20,& 11\leq t \leq 13\\
20-2\P{t-13},& 13\leq t \leq 17\\
0, & 17 < t \leq 24\\
\lambda\P{t-24},& t>24
\end{cases}$
>The number of arrivals between $8:30$ A.M. and $9:30$ A.M. will be a Poisson with mean $m\P{9.5} - m\P{8.5}$, so that the probability of this number being $0$ is
>
>$\bspace \d{ \exp\CB{-\int_{8.5}^{9.5} 5+5\P{t-8} \;\dd t} = e^{-10}}$
>
>And the mean number of arrivals is $ \d{\int_{8.5}^{9.5} 5+5\P{t-8} \;\dd t = 10}$
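The integral in this example can also be checked numerically; a midpoint-rule sketch (exact here, since the integrand is linear on the interval):

```
import numpy as np

# Check e.g.24 numerically: m(9.5) - m(8.5) for lambda(t) = 5 + 5*(t - 8)
edges = np.linspace(8.5, 9.5, 100001)
mid = (edges[:-1] + edges[1:]) / 2
m = np.sum((5 + 5 * (mid - 8)) * np.diff(edges))
print(m)                 # 10.0
print(np.exp(-m))        # P{no arrivals between 8:30 and 9:30} = e^{-10}
```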
***
We can further suppose that an event at time $s$ is a type $1$ event $w.p.$ $P_1\P s$ or a type $2$ event $w.p.$ $P_2\P{s} = 1 - P_1\P s$. Let $N_i\P t$, $t\geq0$ denote the number of type $i$ events by time $t$; then we have results similar to those for the stationary Poisson process:
- $\CB{N_1\P t, t\geq0}$ and $\CB{N_2\P t, t\geq0}$ are independent nonhomogeneous Poisson process with respective **intensity function** $\lambda_i\P t = \lambda\P t P_i\P t = \lambda P_i\P t$, $i=1,2$.
- $N_1\P t$ and $N_2\P t$ are independent Poisson $r.v.$ with $\Exp\SB{N_i\P t} = \lambda \d{\int_0^t P_i\P s \;\dd s}$, $i=1,2$
If we let $S_n$ denote the time of the $n$th event of the nonhomogeneous Poisson process, then we can obtain its density as follows:
$\bspace\begin{align}
P\CB{t<S_n < t+h} &= P\CB{N\P t = n-1,\text{ one event in }\P{t,t+h}} + o\P h\\
&= e^{-m\P t} \ffrac{\P{m\P t}^{n-1}}{\P{n-1}!} \cdot P\CB{N\P{t+h} - N\P t = 1}+ o\P h\\
&= \lambda \P t e^{-m\P t} \ffrac{\P{m\P t}^{n-1}}{\P{n-1}!}+ o\P h
\end{align}$
which implies that $f_{S_n}\P t = \lambda \P t e^{-m\P t} \ffrac{\P{m\P t}^{n-1}}{\P{n-1}!}$, where $m\P t = \d\int_0^t \lambda \P s\;\dd s$
### Compound Poisson Process
A stochastic process $\CB{X\P t, t\geq 0}$ is said to be a ***compound Poisson process*** if it can be represented as $X\P t = \sum_{i=1}^{N\P t} Y_i$, $t\geq0$ where
- $\CB{N\P t,t\geq0}$ is a Poisson process
- $\CB{Y_i,i\geq1}$ is a family of $i.i.d.$ $r.v.$s that is also independent of $\CB{N\P t,t\geq0}$
And the result in Chap_03 tells us that $\Exp\SB{X\P t} = \lambda t \Exp\SB{Y_1}$, $\Var{X\P t} = \lambda t\Exp\SB{Y_1^2}$.
Skipping ahead, another result: if $\CB{X\P t, t\geq0}$ and $\CB{Y\P t, t\geq0}$ are independent compound Poisson processes with respective Poisson parameters and distributions $\lambda_1$, $F_1$ and $\lambda_2$, $F_2$, then $\CB{X\P t + Y\P t, t\geq0}$ is also a compound Poisson process, with Poisson parameter $\lambda_1 + \lambda_2$ and distribution function $F$ given by
$\bspace F\P x = \ffrac{\lambda_1}{\lambda_1 + \lambda_2}F_1\P x +\ffrac{\lambda_2}{\lambda_1 + \lambda_2}F_2\P x$
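The two moment formulas above are easy to check by simulation; a sketch with $Y_i \sim \text{Uniform}(0,1)$ (an arbitrary choice, so $\Exp\SB{Y}=1/2$ and $\Exp\SB{Y^2}=1/3$):

```
import numpy as np

rng = np.random.default_rng(7)
lam, t, reps = 3.0, 2.0, 100000       # arbitrary parameters

N = rng.poisson(lam * t, reps)                            # number of jumps N(t)
X = np.array([rng.uniform(0, 1, n).sum() for n in N])     # X(t) = sum of N(t) jumps

print(X.mean())   # close to lam*t*E[Y] = 3.0
print(X.var())    # close to lam*t*E[Y^2] = 2.0
```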
### Conditional or Mixed Poisson Processes
skipped
## Random Intensity Functions and Hawkes Processes
skipped
# Nonlinear Equations
We want to find a root of the nonlinear function $f$ using different methods.
1. Bisection method
2. Newton method
3. Chord method
4. Secant method
5. Fixed point iterations
```
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
import sympy as sym
t = sym.symbols('t')
f_sym = t/8. * (63.*t**4 - 70.*t**2. +15.) # Legendre polynomial of order 5
f_prime_sym = sym.diff(f_sym,t)
f = sym.lambdify(t, f_sym, 'numpy')
f_prime = sym.lambdify(t,f_prime_sym, 'numpy')
phi = lambda x : 63./70.*x**3 + 15./(70.*x)
#phi = lambda x : 70.0/15.0*x**3 - 63.0/15.0*x**5
#phi = lambda x : sqrt((63.*x**4 + 15.0)/70.)
# Let's plot
n = 1025
x = linspace(-1,1,n)
c = zeros_like(x)
_ = plot(x,f(x))
_ = plot(x,c)
_ = grid()
# Initial data for the various algorithms
# interval in which we seek the solution
a = 0.7
b = 1.
# initial points
x0 = (a+b)/2.0
x00 = b
# stopping criteria
eps = 1e-10
n_max = 1000
```
## Bisection method
$$
x^k = \frac{a^k+b^k}{2}
$$
```
if (f(a_k) * f(x_k)) < 0:
b_k1 = x_k
a_k1 = a_k
else:
a_k1 = x_k
b_k1 = b_k
```
```
def bisect(f,a,b,eps,n_max):
assert f(a)*f(b) < 0
a_new = a
b_new = b
x = mean([a,b])
err = eps + 1.
errors = [err]
it = 0
while (err > eps and it < n_max):
if ( f(a_new) * f(x) < 0 ):
# root in (a_new,x)
b_new = x
else:
# root in (x,b_new)
a_new = x
x_new = mean([a_new,b_new])
#err = 0.5 *(b_new -a_new)
err = abs(f(x_new))
#err = abs(x-x_new)
errors.append(err)
x = x_new
it += 1
semilogy(errors)
print(it)
print(x)
print(err)
return errors
%time errors_bisect = bisect(f,a,b,eps,n_max)
# is the number of iterations coherent with the theoretical estimation?
```
In order to derive other methods for solving non-linear equations, let's expand $f$ around $x^k$ in a Taylor series up to first order
$$
f(x) \simeq f(x^k) + (x-x^k)f^{\prime}(x^k)
$$
which suggests the following iterative scheme
$$
x^{k+1} = x^k - \frac{f(x^k)}{f^{\prime}(x^k)}
$$
The following methods are obtained applying the above scheme where
$$
f^{\prime}(x^k) \approx q^k
$$
## Newton's method
$$
q^k = f^{\prime}(x^k)
$$
$$
x^{k+1} = x^k - \frac{f(x^k)}{q^k}
$$
```
def newton(f,f_prime,x0,eps,n_max):
it = 0
err = eps + 1.
errors = [err]
x = x0
q = f_prime(x)
err = abs(f(x))
errors.append(err)
while (it < n_max and abs(f(x)) > eps):
assert f_prime(x) != 0
q_next = f_prime(x)
x_next = x - f(x)/q
err = abs(f(x_next))
errors.append(err)
x = x_next
q = q_next
it += 1
semilogy(errors)
print(it)
print(x)
print(err)
return errors
%time errors_newton = newton(f,f_prime,1.1,eps,n_max)
```
## Chord method
$$
q^k \equiv q = \frac{f(b)-f(a)}{b-a}
$$
$$
x^{k+1} = x^k - \frac{f(x^k)}{q}
$$
```
def chord(f,a,b,x0,eps,n_max):
pass # TODO
errors_chord = chord (f,a,b,x0,eps,n_max)
```
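If you want to check your work, here is one possible implementation of the TODO above (a sketch; it mirrors `bisect`'s style of printing `it`, `x`, `err` and returning the error history):

```
def chord(f, a, b, x0, eps, n_max):
    # Fixed slope q = (f(b) - f(a)) / (b - a), reused at every iteration
    q = (f(b) - f(a)) / (b - a)
    x, it = x0, 0
    err = abs(f(x))
    errors = [err]
    while err > eps and it < n_max:
        x = x - f(x) / q
        err = abs(f(x))
        errors.append(err)
        it += 1
    print(it)
    print(x)
    print(err)
    return errors

# quick sanity check on a simple function with a known root (sqrt(2))
demo_errors = chord(lambda x: x * x - 2.0, 1.0, 2.0, 1.5, 1e-12, 1000)
```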
## Secant method
$$
q^k = \frac{f(x^k)-f(x^{k-1})}{x^k - x^{k-1}}
$$
$$
x^{k+1} = x^k - \frac{f(x^k)}{q^k}
$$
Note that this algorithm requires **two** initial points
```
def secant(f,x0,x00,eps,n_max):
pass # TODO
errors_secant = secant(f,x0,x00,eps,n_max)
```
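One possible implementation of the secant TODO (a sketch; no safeguard against a zero slope, and it follows the same print-and-return-errors convention as the other functions):

```
def secant(f, x0, x00, eps, n_max):
    # The slope q is rebuilt from the last two iterates at every step
    x_prev, x = x00, x0
    err = abs(f(x))
    errors = [err]
    it = 0
    while err > eps and it < n_max:
        q = (f(x) - f(x_prev)) / (x - x_prev)
        x_prev, x = x, x - f(x) / q
        err = abs(f(x))
        errors.append(err)
        it += 1
    print(it)
    print(x)
    print(err)
    return errors

# quick sanity check: root of x^2 - 2 from two nearby starting points
demo_errors = secant(lambda x: x * x - 2.0, 1.5, 1.0, 1e-12, 1000)
```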
## Fixed point iterations
$$
f(x)=0 \to x-\phi(x)=0
$$
$$
x^{k+1} = \phi(x^k)
$$
```
def fixed_point(phi,x0,eps,n_max):
pass # TODO
errors_fixed = fixed_point(phi,0.3,eps,n_max)
```
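One possible implementation of the fixed-point TODO (a sketch; it stops when successive iterates are within `eps`, which only works when $\phi$ is a contraction near the root):

```
from math import cos

def fixed_point(phi, x0, eps, n_max):
    # Iterate x_{k+1} = phi(x_k) until successive iterates are close
    x, it = x0, 0
    err = eps + 1.0
    errors = []
    while err > eps and it < n_max:
        x_new = phi(x)
        err = abs(x_new - x)
        errors.append(err)
        x = x_new
        it += 1
    print(it)
    print(x)
    print(err)
    return errors

# quick sanity check: x -> cos(x) converges to the fixed point x ≈ 0.739085
demo_errors = fixed_point(cos, 1.0, 1e-10, 10000)
```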
## Comparison
```
# plot the error convergence for the methods
loglog(errors_bisect, label='bisect')
loglog(errors_chord, label='chord')
loglog(errors_secant, label='secant')
loglog(errors_newton, label ='newton')
loglog(errors_fixed, label ='fixed')
_ = legend()
# Let's compare the scipy implementation of Newton's method with ours.
import scipy.optimize as opt
%time opt.newton(f, 1.0, f_prime, tol = eps)
```
# Example: Human segmentation with TransUnet and transfer learning from ImageNet-trained VGG16 model
```
import numpy as np
from glob import glob
import tensorflow as tf
from PIL import Image
import matplotlib.pyplot as plt
from tensorflow import keras
from keras.models import load_model
from keras_unet_collection import models, utils
from sklearn.metrics import f1_score, jaccard_score
# !pip install tensorflow-gpu==2.5.0
#!pip install keras-unet-collection
```
This example requires `keras-unet-collection`:
```
pip install keras-unet-collection
```
```
print(tf.__version__)
# the indicator of a fresh run
FIRST_TIME_RUNNING = False
# user-specified working directory
FILE_PATH = 'dataset/'
FILE_PATH_LABEL = 'dataset/'
```
## The Monuseg dataset
The dataset for this challenge was obtained by carefully annotating tissue images of several patients with tumors of different organs and who were diagnosed at multiple hospitals. This dataset was created by downloading H&E stained tissue images captured at 40x magnification from TCGA archive.
H&E staining is a routine protocol to enhance the contrast of a tissue section and is commonly used for tumor assessment (grading, staging, etc.).
Given the diversity of nuclei appearances across multiple organs and patients, and the richness of staining protocols adopted at multiple hospitals, the training dataset will enable the development of robust and generalizable nuclei segmentation techniques that will work right out of the box.
```
# train files path
path_train_img = FILE_PATH + 'train folder/img/'
path_train_mask = FILE_PATH_LABEL + 'train folder/mask/'
# validation file path
path_valid_img = FILE_PATH + 'validation folder/img/'
path_valid_mask = FILE_PATH_LABEL + 'validation folder/mask/'
# test files path
path_test_img = FILE_PATH + 'test folder/img/'
path_test_mask = FILE_PATH_LABEL + 'test folder/mask/'
# predict files path
path_predict_mask = FILE_PATH + 'predict folder/'
train_input_names = np.array(sorted(glob(path_train_img+'*.png')))
train_label_names = np.array(sorted(glob(path_train_mask+'*.png')))
test_input_names = np.array(sorted(glob(path_test_img+'*.png')))
test_label_names = np.array(sorted(glob(path_test_mask+'*.png')))
# get valid files name
valid_input_names = np.array(sorted(glob(path_valid_img+'*.png')))
valid_label_names = np.array(sorted(glob(path_valid_mask+'*.png')))
# get predict file names
predict_label_names = np.array(sorted(glob(path_predict_mask+'*.png')))
```
### Exploratory data analysis
```
def ax_decorate_box(ax):
[j.set_linewidth(0) for j in ax.spines.values()]
ax.tick_params(axis="both", which="both", bottom=False, top=False, labelbottom=False, left=False, right=False, labelleft=False)
return ax
I_MAX = 10 # explore 10 images
input_example = utils.image_to_array(train_input_names[:I_MAX], size=128, channel=3)
label_example = utils.image_to_array(train_label_names[:I_MAX], size=128, channel=1)
i_example = 2
fig, AX = plt.subplots(1, 2, figsize=(7, 3))
plt.subplots_adjust(0, 0, 1, 1, hspace=0, wspace=0.1)
for ax in AX:
ax = ax_decorate_box(ax)
AX[0].pcolormesh(np.mean(input_example[i_example, ...], axis=-1), cmap=plt.cm.gray)
AX[1].pcolormesh(label_example[i_example, ..., 0] > 0, cmap=plt.cm.gray)
AX[0].set_title("Original", fontsize=14)
AX[1].set_title("Segmentation mask", fontsize=14)
```
## Attention U-net with an ImageNet-trained backbone
Attention U-net is applied for this segmentation task. This architecture is modified from the conventionally used U-net by assigning attention gates on each upsampling level.
Attention gates take upsampled (i.e., decoded) and downsampled (i.e., encoded) tensors as queries and keys, respectively. These queries and keys are mapped to intermediate channel sizes and fed into the additive attention learning. The resulting vector is rescaled by a sigmoid function and multiplied with the downsampled tensor (keys, but here treated as "values" of self-attention). The attention gate output replaces the downsampled tensor and is concatenated with the upsampled tensor.
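The gate described above can be sketched in plain numpy (a minimal illustration under assumed shapes; the real `keras-unet-collection` implementation uses 1×1 convolution layers, and `attention_gate`, `w_q`, `w_k`, `w_psi` are hypothetical names):

```python
import numpy as np

def attention_gate(x_up, x_skip, w_q, w_k, w_psi):
    """Illustrative additive attention gate (not the library's code).
    x_up: (H, W, C) upsampled decoder tensor (query).
    x_skip: (H, W, C) encoder skip tensor (key, reused as value).
    w_q, w_k: (C, C_int) weights of the 1x1 mapping convs; w_psi: (C_int,) scoring weights."""
    q = x_up @ w_q                        # map query to the intermediate channel size
    k = x_skip @ w_k                      # map key to the intermediate channel size
    f = np.maximum(q + k, 0.0)            # additive attention followed by ReLU
    score = f @ w_psi                     # (H, W) attention logits
    alpha = 1.0 / (1.0 + np.exp(-score))  # sigmoid rescaling to (0, 1)
    return x_skip * alpha[..., None]      # gate (multiply) the skip tensor
```

Because the sigmoid output lies in (0, 1), the gated skip tensor is always attenuated elementwise, never amplified.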
Based on the amount and complexity of COCO samples, ImageNet-trained VGG16 is applied as an encoder backbone. This transfer learning strategy is expected to improve the segmentation performance based on two reasons:
* ImageNet and COCO contain (somewhat) similar kinds of natural images with a high overlap in data distribution;
* The VGG16 architecture is a combination of same-padding convolution and max-pooling kernels, capable of extracting hierarchical features that can be processed by attention gates (ResNet backbone contains zero padding layers and is suboptimal in this case).
The code cell below configures the attention U-net with an ImageNet-trained VGG16 backbone. Hyper-parameters are explained through the Python helper function:
```python
from keras_unet_collection import models
help(models.att_unet_2d)
```
```
'''
model = models.transunet_2d((128, 128, 3), filter_num=[64, 128, 256, 512], n_labels=2, stack_num_down=2, stack_num_up=2,
embed_dim=768, num_mlp=3072, num_heads=12, num_transformer=12,
activation='ReLU', mlp_activation='GELU', output_activation='Softmax',
batch_norm=True, pool=True, unpool='bilinear', name='transunet')
'''
model = models.swin_unet_2d((128, 128, 3), filter_num_begin=64, n_labels=2, depth=4, stack_num_down=2, stack_num_up=2,
patch_size=(2, 2), num_heads=[4, 8, 8, 8], window_size=[4, 2, 2, 2], num_mlp=512,
output_activation='Softmax', shift_window=True, name='swin_unet')
```
If the VGG16-backbone attention U-net is used, the second layer of the configured model, i.e., right after the input layer, is expected to be the VGG16 backbone:
```
model.layers[1].name
```
For simplicity, this segmentation model is trained with cross-entropy loss and an SGD optimizer with a learning rate of 1e-2.
```
model.compile(loss=keras.losses.binary_crossentropy, optimizer=keras.optimizers.SGD(lr=1e-2), metrics=[tf.keras.metrics.MeanIoU(num_classes=2)])
```
## Create Keras metric
## Training
The segmentation model is trained for up to 400 epochs with early stopping. Each epoch contains 100 batches, and each batch contains 32 samples.
*The training process here is far from systematic, and is provided for illustration purposes only.*
```
L = len(train_input_names)
ind_all = utils.shuffle_ind(L)
L_train = int(0.9*L)
L_valid = L - L_train
ind_train = ind_all[:L_train]; ind_valid = ind_all[L_train:]
def input_data_process(input_array):
'''converting pixel values to [0, 1]'''
return input_array/255.
def target_data_process(target_array):
'''Converting human, non-human pixels into two categories.'''
target_array[target_array > 0] = 1 # grouping all other non-human categories
return keras.utils.to_categorical(target_array, num_classes=2)
valid_input = input_data_process(utils.image_to_array(valid_input_names, size=128, channel=3))
valid_label = target_data_process(utils.image_to_array(valid_label_names, size=128, channel=1))
N_EPOCH = 400 # max number of epochs
N_BATCH = 100 # number of batches per epoch
N_SAMPLE = 32 # number of samples per batch
tol = 0 # current early stopping patience
max_tol = 2 # the max-allowed early stopping patience
min_del = 0 # the lowest acceptable loss value reduction
# loop over epoches
for epoch in range(N_EPOCH):
# initial loss record
if epoch == 0:
y_pred = model.predict([valid_input])
record = np.mean(keras.losses.binary_crossentropy(valid_label, y_pred))
print('\tInitial loss = {}'.format(record))
# loop over batches
for step in range(N_BATCH):
# selecting samples for the current batch
ind_train_shuffle = utils.shuffle_ind(L_train)[:N_SAMPLE]
# batch data formation
# augmentation is not applied
train_input = input_data_process(utils.image_to_array(train_input_names[ind_train_shuffle], size=128, channel=3))
train_label = target_data_process(utils.image_to_array(train_label_names[ind_train_shuffle], size=128, channel=1))
# train on batch
loss_ = model.train_on_batch([train_input,], [train_label,])
# ** training loss is not stored ** #
# epoch-end validation
print('Epoch = ', epoch)
y_pred = model.predict([valid_input])
record_temp = np.mean(keras.losses.binary_crossentropy(valid_label, y_pred))
# ** validation loss is not stored ** #
# if loss is reduced
if record - record_temp > min_del:
print('Validation performance is improved from {} to {}'.format(record, record_temp))
record = record_temp # update the loss record
tol = 0 # refresh early stopping patience
# ** model checkpoint is not stored ** #
# if loss not reduced
else:
print('Validation performance {} is NOT improved'.format(record_temp))
tol += 1
if tol >= max_tol:
print('Early stopping')
break
else:
# Pass to the next epoch
continue
model.save('swin-unet')
```
## Evaluation
The testing set performance is evaluated with cross-entropy and example outputs.
```
# Define IoU metric
# Note: this is legacy TF1-style code — `tf.to_int32`, `tf.metrics.mean_iou` and
# `K.get_session()` were removed in TF2, and it also needs `from keras import backend as K`.
def mean_iou(y_true, y_pred):
    prec = []
    for t in np.arange(0.5, 1.0, 0.05):
        y_pred_ = tf.to_int32(y_pred > t)
        score, up_opt = tf.metrics.mean_iou(y_true, y_pred_, 2)
        K.get_session().run(tf.local_variables_initializer())
        with tf.control_dependencies([up_opt]):
            score = tf.identity(score)
        prec.append(score)
    return K.mean(K.stack(prec), axis=0)
model = load_model('swin-unet')
test_input = input_data_process(utils.image_to_array(test_input_names, size=128, channel=3))
test_label = target_data_process(utils.image_to_array(test_label_names, size=128, channel=1))
# prediction on test_input without threshold
# y_pred = model.predict([test_input], verbose=1)
y_pred = model.predict(test_input)
# prediction on test_input with threshold
y_pred_t = (y_pred > 0.5).astype(np.uint8)
print('Testing set cross-entropy = {}'.format(np.mean(keras.losses.binary_crossentropy(test_label, y_pred))))
```
**Example of outputs**
As is common practice in computer vision projects, only nice-looking samples are plotted:
```
print(y_pred_t[0, ..., 0].shape)
Image.fromarray(255*np.array(y_pred_t[11, ..., 0])).resize((128, 128)).save('mask.png')
for i in range(len(y_pred_t)):
if (i + 1 < 10):
image_name = '000' + str(i + 1) + '.png'
Image.fromarray(255*np.array(y_pred_t[i, ..., 0])).save(image_name)
else:
image_name = '00' + str(i + 1) + '.png'
Image.fromarray(255*np.array(y_pred_t[i, ..., 0])).save(image_name)
f1, iou = [], []
for i in range(len(test_input_names)):
INPUT = np.array(Image.open(test_label_names[i]))
INPUT = np.where(INPUT >= 127, 1, 0)
LABEL = np.array(Image.open(predict_label_names[i]))
LABEL = np.where(LABEL >= 127, 1, 0)
y_true, y_pred = INPUT.flatten(), LABEL.flatten()
f1.append(f1_score(y_true, y_pred))
iou.append(jaccard_score(y_true, y_pred))
np.mean(f1), np.mean(iou)
```
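The sklearn metrics used above reduce to simple confusion counts; a minimal numpy sketch (hypothetical helper, for intuition only) showing both scores, and the identity IoU = F1 / (2 − F1) for binary masks:

```python
import numpy as np

def f1_and_iou(y_true, y_pred):
    """F1 (Dice) and IoU (Jaccard) for flat binary masks, from confusion counts."""
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return f1, iou
```

This makes it easy to see why F1 is always at least as large as IoU: they share a numerator term but IoU counts each error at full weight.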
# Introduction to Modeling Libraries
```
import numpy as np
import pandas as pd
np.random.seed(12345)
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
PREVIOUS_MAX_ROWS = pd.options.display.max_rows
pd.options.display.max_rows = 20
np.set_printoptions(precision=4, suppress=True)
```
## Interfacing Between pandas and Model Code
```
import pandas as pd
import numpy as np
data = pd.DataFrame({
'x0': [1, 2, 3, 4, 5],
'x1': [0.01, -0.01, 0.25, -4.1, 0.],
'y': [-1.5, 0., 3.6, 1.3, -2.]})
data
data.columns
data.values
df2 = pd.DataFrame(data.values, columns=['one', 'two', 'three'])
df2
model_cols = ['x0', 'x1']
data.loc[:, model_cols].values
data['category'] = pd.Categorical(['a', 'b', 'a', 'a', 'b'],
categories=['a', 'b'])
data
dummies = pd.get_dummies(data.category, prefix='category')
data_with_dummies = data.drop('category', axis=1).join(dummies)
data_with_dummies
```
## Creating Model Descriptions with Patsy
y ~ x0 + x1
```
data = pd.DataFrame({
'x0': [1, 2, 3, 4, 5],
'x1': [0.01, -0.01, 0.25, -4.1, 0.],
'y': [-1.5, 0., 3.6, 1.3, -2.]})
data
import patsy
y, X = patsy.dmatrices('y ~ x0 + x1', data)
y
X
np.asarray(y)
np.asarray(X)
patsy.dmatrices('y ~ x0 + x1 + 0', data)[1]
coef, resid, _, _ = np.linalg.lstsq(X, y)
coef
coef = pd.Series(coef.squeeze(), index=X.design_info.column_names)
coef
```
### Data Transformations in Patsy Formulas
```
y, X = patsy.dmatrices('y ~ x0 + np.log(np.abs(x1) + 1)', data)
X
y, X = patsy.dmatrices('y ~ standardize(x0) + center(x1)', data)
X
new_data = pd.DataFrame({
'x0': [6, 7, 8, 9],
'x1': [3.1, -0.5, 0, 2.3],
'y': [1, 2, 3, 4]})
new_X = patsy.build_design_matrices([X.design_info], new_data)
new_X
y, X = patsy.dmatrices('y ~ I(x0 + x1)', data)
X
```
### Categorical Data and Patsy
```
data = pd.DataFrame({
'key1': ['a', 'a', 'b', 'b', 'a', 'b', 'a', 'b'],
'key2': [0, 1, 0, 1, 0, 1, 0, 0],
'v1': [1, 2, 3, 4, 5, 6, 7, 8],
'v2': [-1, 0, 2.5, -0.5, 4.0, -1.2, 0.2, -1.7]
})
y, X = patsy.dmatrices('v2 ~ key1', data)
X
y, X = patsy.dmatrices('v2 ~ key1 + 0', data)
X
y, X = patsy.dmatrices('v2 ~ C(key2)', data)
X
data['key2'] = data['key2'].map({0: 'zero', 1: 'one'})
data
y, X = patsy.dmatrices('v2 ~ key1 + key2', data)
X
y, X = patsy.dmatrices('v2 ~ key1 + key2 + key1:key2', data)
X
```
## Introduction to statsmodels
### Estimating Linear Models
```
import statsmodels.api as sm
import statsmodels.formula.api as smf
def dnorm(mean, variance, size=1):
if isinstance(size, int):
size = size,
return mean + np.sqrt(variance) * np.random.randn(*size)
# For reproducibility
np.random.seed(12345)
N = 100
X = np.c_[dnorm(0, 0.4, size=N),
dnorm(0, 0.6, size=N),
dnorm(0, 0.2, size=N)]
eps = dnorm(0, 0.1, size=N)
beta = [0.1, 0.3, 0.5]
y = np.dot(X, beta) + eps
X[:5]
y[:5]
X_model = sm.add_constant(X)
X_model[:5]
model = sm.OLS(y, X)
results = model.fit()
results.params
print(results.summary())
data = pd.DataFrame(X, columns=['col0', 'col1', 'col2'])
data['y'] = y
data[:5]
results = smf.ols('y ~ col0 + col1 + col2', data=data).fit()
results.params
results.tvalues
results.predict(data[:5])
```
### Estimating Time Series Processes
```
init_x = 4
import random
values = [init_x, init_x]
N = 1000
b0 = 0.8
b1 = -0.4
noise = dnorm(0, 0.1, N)
for i in range(N):
new_x = values[-1] * b0 + values[-2] * b1 + noise[i]
values.append(new_x)
MAXLAGS = 5
model = sm.tsa.AR(values)
results = model.fit(MAXLAGS)
results.params
```
## Introduction to scikit-learn
```
train = pd.read_csv('datasets/titanic/train.csv')
test = pd.read_csv('datasets/titanic/test.csv')
train[:4]
train.isnull().sum()
test.isnull().sum()
impute_value = train['Age'].median()
train['Age'] = train['Age'].fillna(impute_value)
test['Age'] = test['Age'].fillna(impute_value)
train['IsFemale'] = (train['Sex'] == 'female').astype(int)
test['IsFemale'] = (test['Sex'] == 'female').astype(int)
predictors = ['Pclass', 'IsFemale', 'Age']
X_train = train[predictors].values
X_test = test[predictors].values
y_train = train['Survived'].values
X_train[:5]
y_train[:5]
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
y_predict = model.predict(X_test)
y_predict[:10]
```
If the true test-set labels were available in `y_true`, the model's accuracy could be computed as `(y_true == y_predict).mean()`.
```
from sklearn.linear_model import LogisticRegressionCV
model_cv = LogisticRegressionCV(10)
model_cv.fit(X_train, y_train)
from sklearn.model_selection import cross_val_score
model = LogisticRegression(C=10)
scores = cross_val_score(model, X_train, y_train, cv=4)
scores
```
## Continuing Your Education
```
pd.options.display.max_rows = PREVIOUS_MAX_ROWS
```
# Neural Network
**Learning Objectives:**
* Use the `DNNRegressor` class in TensorFlow to predict median housing price
The data is based on 1990 census data from California. It is at the city-block level, so features such as the total number of rooms or the total population reflect totals for the entire block.
Let's use a set of features to predict house value.
## Set Up
In this first cell, we'll load the necessary libraries.
```
import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
```
Next, we'll load our data set.
```
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
```
## Examine the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles.
```
df.head()
df.describe()
```
This data is at the city-block level, so features like `total_rooms` and `population` are totals for an entire block. Let's create different, more appropriate features: because we are predicting the price of a single house, we should make all our features correspond to a single household as well.
```
df['num_rooms'] = df['total_rooms'] / df['households']
df['num_bedrooms'] = df['total_bedrooms'] / df['households']
df['persons_per_house'] = df['population'] / df['households']
df.describe()
df.drop(['total_rooms', 'total_bedrooms', 'population', 'households'], axis = 1, inplace = True)
df.describe()
```
## Build a neural network model
In this exercise, we'll be trying to predict `median_house_value`. It will be our label (sometimes also called a target). We'll use the remaining columns as our input features.
To train our model, we'll first use the [LinearRegressor](https://www.tensorflow.org/api_docs/python/tf/contrib/learn/LinearRegressor) interface. Then, we'll change to DNNRegressor
```
featcols = {
colname : tf.feature_column.numeric_column(colname) \
for colname in 'housing_median_age,median_income,num_rooms,num_bedrooms,persons_per_house'.split(',')
}
# Bucketize lat, lon so it's not so high-res; California is mostly N-S, so more lats than lons
featcols['longitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('longitude'),
np.linspace(-124.3, -114.3, 5).tolist())
featcols['latitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'),
np.linspace(32.5, 42, 10).tolist())
featcols.keys()
# Split into train and eval
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
SCALE = 100000
BATCH_SIZE= 100
OUTDIR = './housing_trained'
train_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[featcols.keys()],
y = traindf["median_house_value"] / SCALE,
num_epochs = None,
batch_size = BATCH_SIZE,
shuffle = True)
eval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[featcols.keys()],
y = evaldf["median_house_value"] / SCALE, # note the scaling
num_epochs = 1,
batch_size = len(evaldf),
shuffle=False)
# Linear Regressor
def train_and_evaluate(output_dir, num_train_steps):
myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = featcols.values(),
optimizer = myopt)
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = eval_input_fn,
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE)
# DNN Regressor
def train_and_evaluate(output_dir, num_train_steps):
myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate
estimator = # TODO: Implement DNN Regressor model
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = eval_input_fn,
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE)
from google.datalab.ml import TensorBoard
pid = TensorBoard().start(OUTDIR)
TensorBoard().stop(pid)
```
# Heat equation
Numerical resolution of the one-dimensional heat equation:
$$
\alpha \frac{\partial^2 p}{\partial x^2} = \frac{\partial p}{\partial t}
$$
with given boundary conditions in the ending points of a line.
```
#We'll need these libraries
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
```
The most straightforward numerical scheme is that of central differences in space and a forward difference in time (the explicit FTCS scheme).
$$
p_i^{n+1} = p_i^n + \frac{\alpha \Delta t}{\Delta x^2} \left( p_{i+1}^n + p_{i-1}^n - 2 p_i^n \right)
$$
```
# Numerical parameters
a = 1
nx = 20
nt = 1000
dx = 1/(nx - 1)
dt = 1/(nt - 1)
# Grid
xs = np.linspace(0, 1, nx)
ts = np.linspace(0, 200, nt)
# Initial guess
p = np.zeros((nx, nt))
# Boundary conditions
p[0, :] = 0
p[-1, :] = 1 + 0*np.sin(2 * np.pi * ts / 100)
p[:, 0] = 0
p[:, -1] = 0
c = a * dt / dx**2
for n in range(1, nt):
    for i in range(1, nx-1):
        p[i, n] = p[i, n-1] + c * (p[i-1, n-1] + p[i+1, n-1] - 2*p[i, n-1])
```
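One caveat worth adding: this explicit (FTCS) scheme is only conditionally stable, requiring the diffusion number $c = \alpha\,\Delta t/\Delta x^2$ to stay at or below $1/2$. A quick check with the grid parameters exactly as defined in the code above:

```python
# Stability check for the explicit scheme: the update only damps errors
# when the diffusion number c = a*dt/dx**2 does not exceed 1/2.
a, nx, nt = 1, 20, 1000
dx, dt = 1/(nx - 1), 1/(nt - 1)
c = a * dt / dx**2
print(c)  # ≈ 0.361, comfortably below the 0.5 stability limit
```

Halving `dx` without shrinking `dt` by a factor of four would push `c` past the limit and make the solution blow up.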
## Visualization
The Laplace equation often models stationary physical phenomena such as:
- Electrostatic fields
- Steady fluid flows
- Steady heat flows
Where the flow is recovered from $p(x, y)$ via a gradient operator:
$\vec f(x, y) = - \vec{\nabla p}$
```
plt.plot(xs, p[:, 1:-2:50]);
```
## Two dimensions
Numerical resolution of the two-dimensional heat equation:
$$
\alpha \frac{\partial^2 p}{\partial x^2} + \alpha \frac{\partial^2 p}{\partial y^2} = \frac{\partial p}{\partial t}
$$
with given boundary conditions in the boundary of a rectangular region.
The most straightforward numerical scheme is that of central differences in space and a forward difference in time.
$$
p_{ij}^{n+1} = p_{ij}^n + \frac{\alpha \Delta t}{\Delta x^2} \left( p_{i+1,j}^n + p_{i-1,j}^n - 2 p_{ij}^n \right) + \frac{\alpha \Delta t}{\Delta y^2} \left( p_{i,j+1}^n + p_{i,j-1}^n - 2 p_{ij}^n \right)
$$
```
# Numerical parameters
a = 1
nx = 20
ny = 20
nt = 1000
dx = 1/(nx - 1)
dy = 1/(ny - 1)
dt = 1/(nt - 1)
# Grid
xs = np.linspace(0, 1, nx)
ys = np.linspace(0, 1, ny)
ts = np.linspace(0, 200, nt)
xm, ym = np.meshgrid(xs, ys)
# Initial guess
p = np.zeros((nx, ny, nt))
# Boundary conditions
p[+0, :, :] = 0
p[-1, :, :] = 1
p[ :, 0, :] = 0
p[ :,-1, :] = 0
cx = a * dt / dx**2
cy = a * dt / dy**2
for n in range(1, nt):
    for i in range(1, nx-1):
        for j in range(1, ny-1):
            p[i, j, n] = p[i, j, n-1] + cx * (p[i-1, j, n-1] + p[i+1, j, n-1] - 2*p[i, j, n-1]) + cy * (p[i, j-1, n-1] + p[i, j+1, n-1] - 2*p[i, j, n-1])
plt.contourf(xs, ys, p[:, :, 5].T, cmap='coolwarm', alpha = 0.7);
```
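The triple loop above can be vectorized with numpy slicing; `step_2d` below is a hypothetical helper that applies the same five-point stencil for one time step:

```python
import numpy as np

def step_2d(p_prev, cx, cy):
    """One explicit time step of the 2D heat equation on a (nx, ny) grid.
    Interior points are updated with the five-point stencil; boundaries are untouched."""
    p_new = p_prev.copy()
    p_new[1:-1, 1:-1] = (p_prev[1:-1, 1:-1]
                         + cx * (p_prev[:-2, 1:-1] + p_prev[2:, 1:-1] - 2*p_prev[1:-1, 1:-1])
                         + cy * (p_prev[1:-1, :-2] + p_prev[1:-1, 2:] - 2*p_prev[1:-1, 1:-1]))
    return p_new
```

Besides being much faster, the slicing form makes it harder to mistype a neighbor index than the nested loops do.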
# End-to-End Machine Learning Project
In this chapter you will work through an example project end to end, pretending to be a recently hired data scientist at a real estate company. Here are the main steps you will go through:
1. Look at the big picture
2. Get the data
3. Discover and visualize the data to gain insights.
4. Prepare the data for Machine learning algorithms.
5. Select a model and train it
6. Fine-tune your model.
7. Present your solution
8. Launch, monitor, and maintain your system.
## Working with Real Data
When you are learning about Machine Learning, it is best to experiment with real-world data, not artificial datasets.
Fortunately, there are thousands of open datasets to choose from, ranging across all sorts of domains. Here are a few places you can look to get data:
* Popular open data repositories:
- [UC Irvine Machine Learning Repository](http://archive.ics.uci.edu/ml/)
- [Kaggle](https://www.kaggle.com/datasets) datasets
- Amazon's [AWS](https://registry.opendata.aws/) datasets
* Meta Portals:
- [Data Portals](http://dataportals.org/)
- [OpenDataMonitor](http://opendatamonitor.eu/)
- [Quandl](http://quandl.com)
## Frame the Problem
The problem is that your model's output (a prediction of a district's median housing price) will be fed to another ML system along with many other signals. This downstream system will determine whether it is worth investing in a given area or not. Getting this right is critical, as it directly affects revenue.
```
Other Signals
|
Upstream Components --> (District Data) --> [District Pricing prediction model](your component) --> (District prices) --> [Investment Analysis] --> Investments
```
### Pipelines
A sequence of data processing components is called a **data pipeline**. Pipelines are very common in Machine Learning systems, since a lot of data needs to be manipulated before the model can use it — algorithms understand only numbers.
## Download the Data:
You could use your web browser to download the data, but it is preferable to write a function that does it for you.
```
import os
import tarfile
import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
"""
Function to download the housing_data
"""
os.makedirs(housing_path, exist_ok=True)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
import numpy as np
```
# Relativistic Kinematics Tutorial
## Brief Intro to Special Relativity
Before talking about relativistic kinematics, let's briefly run through Einstein's theories of Special and General Relativity. Einstein's theory of special relativity was derived from the following two postulates:
* Postulate 1: The laws of physics are the same in all inertial reference frames.
* Postulate 2: The speed of light in a vacuum is equal to the value $c$, independent of the motion of the source
To summarize, the laws of physics do not change regardless of what your perspective is. Nothing can go faster than the speed of light in the universe. Everyone *always* measures the speed of light to be $c$ ($\approx 3\times 10^8$ m/s) regardless of how fast or slow one is moving.
One consequence of this is that we find that space and time are interwoven into a 4-dimensional fabric known as space-time. This fabric can be bent and stretched which causes changes in how we perceive time.
***Keep these ideas in mind***
## What is Relativistic Kinematics?
Relativistic kinematics sounds like a complicated term that the average person would be intimidated by. However, it is basically just a precise way of calculating things like velocity, momentum, and energy. Usually, we only need to worry about this level of precision when we are dealing with objects moving at incredibly fast speeds.
But what is considered a fast speed?
You might have heard before that the speed of light is the speed limit of the universe. This means that no object that has mass can reach or exceed that speed (**Sidenote**: This law only says that anything *inside* space cannot exceed this speed. Space itself is actually expanding at an accelerated rate that can be faster than the speed of light!). What can reach the speed of light? Only particles that have a mass of zero can reach the speed of light, therefore only photons, the particles of light, are able to reach this speed.
Before we go any further, we need to define a term known as the Lorentz factor which plays a key role in understanding relativistic kinematics. The Lorentz factor tells us how time, length, and relativistic mass change for an object in motion.
$$\gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}} = \frac{1}{\sqrt{1-\beta^2}}$$ $\beta = \frac{v}{c}$ tells you the fraction of light speed that the object of interest is moving at.
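As a quick sketch, the Lorentz factor as a small Python function (the guard on $\beta$ is an added assumption, since the formula diverges as $\beta \to 1$):

```python
import math

def lorentz_factor(beta):
    """gamma = 1 / sqrt(1 - beta^2), where beta = v/c is the fraction of light speed."""
    if not 0 <= beta < 1:
        raise ValueError("beta must satisfy 0 <= beta < 1 for a massive object")
    return 1.0 / math.sqrt(1.0 - beta**2)
```

For example, at 60% of light speed the factor is exactly 1.25, and it grows without bound as $\beta$ approaches 1.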
## Time Dilation
What is time dilation? To start off, time isn't the same for everyone as you might have thought. Time changes depending on how fast you are travelling as well as the gravitational bodies you are near. The formula to calculate time dilation (considering only speed and not gravitational effects) is: $$\Delta t = \frac{\Delta t'}{\sqrt{1-\frac{v^2}{c^2}}} = \gamma \Delta t' = \gamma \tau$$ $\Delta t' = \tau$ is known as the proper time, the time measured in the reference frame in which the clock is at rest. $\Delta t$ is the dilated time measured by the outside observer.
From this formula, we can tell that the time an observer measures for a moving clock will always be longer than the proper time unless the clock is at rest. This is because we are multiplying $\tau$ by a number larger than one! As you travel at higher speeds, the Lorentz factor increases, which makes sense because $v/c$ is getting larger and the denominator shrinks. The smallest the Lorentz factor can be is 1, which occurs when the velocity is 0. The factor has no upper bound: it approaches infinity as the velocity approaches the speed of light. Here is a video below that might help you understand this mind-boggling concept better!
<a href="https://www.youtube.com/watch?v=n2s1-RHuljo" target="_blank">Time Dilation and the Twin "Paradox"</a>
<a href="https://www.youtube.com/watch?v=n2s1-RHuljo" target="_blank"><img src="http://i3.ytimg.com/vi/n2s1-RHuljo/hqdefault.jpg"
alt="IMAGE ALT TEXT HERE" width="480" border="10" /></a>
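A worked example of the dilation formula (the speed here is an illustrative assumption): a muon's proper lifetime is about 2.2 microseconds, so a muon moving at $\beta = 0.995$ lives roughly ten times longer in the lab frame:

```python
# Time dilation example with illustrative numbers: a fast muon.
tau = 2.2e-6                           # proper lifetime Δt' in seconds
beta = 0.995                           # assumed v/c
gamma = 1.0 / (1.0 - beta**2) ** 0.5   # Lorentz factor
dt = gamma * tau                       # dilated lifetime Δt = γτ
print(gamma, dt)                       # γ ≈ 10.01, Δt ≈ 2.2e-5 s
```

This is exactly why muons created in the upper atmosphere survive long enough to reach detectors at ground level.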
## A brief aside on General Relativity
General relativity is the theory that Einstein developed during the years 1905-1915 and explains the motion of objects due to gravitational effects. It can be summarized as such:
*"Matter tells space how to curve and space tells matter how to move"* - John Archibald Wheeler
This theory tells us that time is also dilated by massive objects. If you have seen the movie *Interstellar* then you have witnessed the effects of general relativity. Being close to a massive object like a stellar black hole (which can be anywhere from a few times to a few million times the mass of the sun) causes space-time to be stretched massively! Because of this, time happens a lot slower than it would if you were on Earth. The more massive the object, the more space-time is stretched. The more space-time is stretched, the longer it is going to take for time to occur for you.
Fun Fact: Since sea level is closer to the center of the Earth where gravity is the strongest, time is technically happening slower there than it is on a mountain. The time difference is infinitesimal, but still not the same!
Check out this great visual below of the effect mass has on space-time!
```
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "http://sci.esa.int/science-e-media/img/72/ESA_LISA-Pathfinder_spacetime_curvature_above_orig.jpg")
```
# Length Contraction
As objects approach the speed of light, they contract in the direction of motion! In other words, when a stationary observer measures the length of a rod moving at near the speed of light, they see it as smaller than it is in its actual reference frame. In the reference frame of the rod, the rod is its actual length and time runs normally. It's only when we observe moving reference frames that we notice these differences. The equation to calculate length contraction is:
$$L=\frac{1}{\gamma}L_p = \sqrt{1-\frac{v^2}{c^2}} L_p$$
where $L_p$ is the proper length, or the length of the object at rest in a reference frame. It makes sense mathematically that the length is smaller in a moving reference frame because we are dividing by gamma, which is always greater than 1 unless v=c. The effect speed has on length contraction is shown with the baseball figure below.
```
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "http://www.patana.ac.th/secondary/science/anrophysics/relativity_option/images/length_cont1.JPG")
```
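A one-line sketch of the contraction formula (hypothetical helper name):

```python
def contracted_length(L_p, beta):
    """L = L_p / gamma = L_p * sqrt(1 - beta^2), with beta = v/c and L_p the proper length."""
    return L_p * (1.0 - beta**2) ** 0.5
```

A 1-meter rod moving past at 80% of light speed would therefore be measured as only 0.6 meters long.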
# Relativistic Energy and Momentum
Many of you have probably heard of Albert Einstein's famous equation $E=mc^2$. It turns out that this isn't always true: $E=mc^2$ holds only for objects at rest! The equation for objects in motion is $E=\gamma m c^2$, where the Lorentz factor comes into play again. When we multiply $mc^2$ by the Lorentz factor, the energy increases. Therefore, as objects approach the speed of light, their relativistic mass-energy grows.
Our Newtonian understanding of momentum is also incorrect. In general physics classes, we learn that $p=mv$, however the relativistic momentum changes at near light speeds. Therefore our new formula for momentum is: $$p =\gamma mv$$
In particle physics, we often don't know the masses of particles, but we do know their energies and their momenta in the x, y, and z directions. At particle accelerators like CERN, machines accelerate particles to near the speed of light and then collide them -- the Large Hadron Collider can reach collision energies of up to 13 TeV! We then study the byproducts of these collisions. Since some particles have shorter lifetimes than others, some decay before they reach our detectors. We then trace the decay products back to something called the displaced vertex (the point in space where the original particle decayed). We can calculate the invariant mass (the mass of a particle at rest in its own reference frame) using the following formula:
$$E^2=m^2c^4+p^2c^2$$
This equation allows us to calculate the masses of particles that arise from these collisions.
This equation also shows that although photons have no mass, they still carry momentum! The energy of a photon is: $$E = pc$$ Rearranging the general equation and solving for the rest mass (in natural units, where $c = 1$) gives:
$$m = \sqrt{E^2 - (p_x^2 + p_y^2 + p_z^2)}$$
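A minimal Python sketch of this calculation, using made-up (not real detector) energy and momentum values in natural units:

```python
import math

def invariant_mass(E, px, py, pz):
    """Invariant mass in natural units (c = 1): m = sqrt(E^2 - |p|^2).
    With E in GeV and p in GeV/c, m comes out in GeV/c^2."""
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

# Illustrative values only:
print(invariant_mass(5.0, 3.0, 0.0, 0.0))  # 4.0 -> a massive particle
print(invariant_mass(5.0, 3.0, 0.0, 4.0))  # 0.0 -> massless (photon-like)
```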
Now we can apply our knowledge of time dilation! Since these particles are moving at near the speed of light, time is happening slower for them relative to us. Essentially, particles will have a longer lifetime when they travel at higher speeds. Because of this, they are able to travel greater distances! We can utilize time dilation to observe some particles before they decay and therefore learn more about them.
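As a concrete example, consider the muon, whose proper (rest-frame) mean lifetime is about 2.2 microseconds. The sketch below estimates the mean distance a muon travels before decaying, $d = \gamma \beta c \tau$; the numbers show why time dilation lets cosmic-ray muons reach the ground.

```python
import math

C = 2.998e8          # speed of light in m/s
TAU_MUON = 2.197e-6  # muon proper mean lifetime in seconds

def decay_length(gamma):
    """Mean distance traveled before decay: d = gamma * beta * c * tau."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma * beta * C * TAU_MUON

print(decay_length(1.05))  # a slow muon: a few hundred meters
print(decay_length(10.0))  # a fast muon: several kilometers
```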
##### Fun Fact: There are other ways of measuring masses of particles
In some cases, particles can move faster than light travels in a given material (light slows down inside a medium)! When this happens, a cone of light is given off in the form of Cherenkov radiation. The angle at which this light is emitted is directly related to the particle's velocity. Combining that velocity with the measured momentum, we can compute the mass.
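A rough numerical sketch of this idea (all numbers are illustrative, and we work in natural units): from the Cherenkov relation $\cos\theta_c = 1/(n\beta)$ we recover $\beta$, and combining it with an independently measured momentum gives the mass.

```python
import math

def beta_from_cherenkov(theta_rad, n):
    """Solve the Cherenkov relation cos(theta_c) = 1 / (n * beta) for beta."""
    return 1.0 / (n * math.cos(theta_rad))

def mass_from_p_and_beta(p, beta):
    """From p = gamma * m * beta (c = 1): m = p * sqrt(1 - beta^2) / beta.
    With p in MeV/c, m comes out in MeV/c^2."""
    return p * math.sqrt(1.0 - beta**2) / beta

# Illustrative: a Cherenkov cone at 40 degrees in water (n ~ 1.33)
beta = beta_from_cherenkov(math.radians(40.0), 1.33)
print(beta)  # just below 1
print(mass_from_p_and_beta(1000.0, beta))
```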
<a href="https://cds.cern.ch/record/1125472" target="_blank">Here is a great video explaining how the Large Hadron Collider works at CERN in Geneva, Switzerland</a>
<a href="https://cds.cern.ch/record/1125472" target="_blank"><img src="https://c1.staticflickr.com/3/2326/2046228644_05507000b3_z.jpg?zz=1"
alt="Photo credit: CERN (https://www.flickr.com/photos/11304375@N07/2046228644)" width="480" border="10" /></a>
Photo credit: CERN (https://www.flickr.com/photos/11304375@N07/2046228644)
#### Some things to keep in mind
* When we calculate masses of particles, we write them in terms of $MeV/c^2$ instead of multiplying by $c^2$
### Summary
* The speed of light is the same for all observers.
* Massive objects bend the fabric of space-time.
* Crazy things begin to happen when you approach the speed of light!
* Time happens slower
* Objects contract in length
* Mass increases
* We can measure masses of particles given energies and momenta
This is a lot to take in all at once, which is why we have provided some great resources below that you can check out to make more sense of these mind-blowing laws of physics! To learn more about the math behind some of these formulas check out Paul Avery's University of Florida lecture notes below.
<a href="https://www.youtube.com/watch?v=NnMIhxWRGNw" target="_blank">Here's a cool link to a video about Special Relativity. The image is also a link.</a>
<a href="https://www.youtube.com/watch?v=NnMIhxWRGNw" target="_blank"><img src="https://i.ytimg.com/vi/NnMIhxWRGNw/hqdefault.jpg?custom=true&w=196&h=110&stc=true&jpg444=true&jpgq=90&sp=68&sigh=_LCQE3ecxvENaJHs45OT55-b8dI"
alt="IMAGE ALT TEXT HERE" width="480" border="10" /></a>
## Other resources
* [Great source explaining Einstein's theories of special and general relativity as well as E=mc^2](http://www.emc2-explained.info/)
* [Relativistic Kinematics I (Paul Avery - University of Florida)](http://www.phys.ufl.edu/~avery/course/4390/f2015/lectures/relativistic_kinematics_1.pdf)
* [Relativistic Kinematics II (Paul Avery - University of Florida)](http://www.phys.ufl.edu/~avery/course/4390/f2015/lectures/relativistic_kinematics_2.pdf)
* [Relativistic Mechanics (Wikipedia)](https://en.wikipedia.org/wiki/Relativistic_mechanics)
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex SDK: Custom training tabular regression model for batch prediction with explainability
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch_explain.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
## Overview
This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom tabular regression model for batch prediction with explanation.
### Dataset
The dataset used for this tutorial is the [Boston Housing Prices dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html). The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD.
### Objective
In this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a batch prediction with explanations on the uploaded model. You can alternatively create custom models using `gcloud` command-line tool or online using Cloud Console.
The steps performed include:
- Create a Vertex custom job for training a model.
- Train the TensorFlow model.
- Retrieve and load the model artifacts.
- View the model evaluation.
- Set explanation parameters.
- Upload the model as a Vertex `Model` resource.
- Make a batch prediction with explanations.
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
### Set up your local development environment
If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
- The Cloud Storage SDK
- Git
- Python 3
- virtualenv
- Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).
2. [Install Python 3](https://cloud.google.com/python/setup#installing_python).
3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3.
4. Activate that environment and run `pip3 install Jupyter` in a terminal shell to install Jupyter.
5. Run `jupyter notebook` on the command line in a terminal shell to launch Jupyter.
6. Open this notebook in the Jupyter Notebook Dashboard.
## Installation
Install the latest version of Vertex SDK for Python.
```
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
# Use os.getenv so this doesn't raise a KeyError when IS_TESTING is unset
if os.getenv("IS_TESTING"):
    ! pip3 install --upgrade tensorflow $USER_FLAG
```
### Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
```
import os
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
This tutorial does not require a GPU runtime.
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)
4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
**Click Create service account**.
In the **Service account name** field, enter a name, and click **Create**.
In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()
    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
```
import google.cloud.aiplatform as aip
```
## Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
```
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
```
#### Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify `(None, None)` to use a container image to run on a CPU.
Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).
*Note*: GPU builds of TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, fixed in TF 2.3, caused by static graph ops generated in the serving function. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
```
if os.getenv("IS_TESTING_TRAIN_GPU"):
    TRAIN_GPU, TRAIN_NGPU = (
        aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
        int(os.getenv("IS_TESTING_TRAIN_GPU")),
    )
else:
    TRAIN_GPU, TRAIN_NGPU = (None, None)

if os.getenv("IS_TESTING_DEPLOY_GPU"):
    DEPLOY_GPU, DEPLOY_NGPU = (
        aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
        int(os.getenv("IS_TESTING_DEPLOY_GPU")),
    )
else:
    DEPLOY_GPU, DEPLOY_NGPU = (None, None)
```
#### Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).
For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
```
if os.getenv("IS_TESTING_TF"):
    TF = os.getenv("IS_TESTING_TF")
else:
    TF = "2-1"

if TF[0] == "2":
    if TRAIN_GPU:
        TRAIN_VERSION = "tf-gpu.{}".format(TF)
    else:
        TRAIN_VERSION = "tf-cpu.{}".format(TF)
    if DEPLOY_GPU:
        DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
    else:
        DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
    if TRAIN_GPU:
        TRAIN_VERSION = "tf-gpu.{}".format(TF)
    else:
        TRAIN_VERSION = "tf-cpu.{}".format(TF)
    if DEPLOY_GPU:
        DEPLOY_VERSION = "tf-gpu.{}".format(TF)
    else:
        DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
```
#### Set machine type
Next, set the machine type to use for training and prediction.
- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction.
- `machine type`
- `n1-standard`: 3.75 GB of memory per vCPU
- `n1-highmem`: 6.5 GB of memory per vCPU
- `n1-highcpu`: 0.9 GB of memory per vCPU
- `vCPUs`: number of vCPUs; one of \[2, 4, 8, 16, 32, 64, 96 \]
*Note: The following is not supported for training:*
- `standard`: 2 vCPUs
- `highcpu`: 2, 4 and 8 vCPUs
*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
```
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
    MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
    MACHINE_TYPE = "n1-standard"

VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)

if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
    MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
    MACHINE_TYPE = "n1-standard"

VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
print("Deploy machine type", DEPLOY_COMPUTE)
```
# Tutorial
Now you are ready to start creating and training your own custom model for Boston Housing.
### Examine the training package
#### Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
- PKG-INFO
- README.md
- setup.cfg
- setup.py
- trainer
- \_\_init\_\_.py
- task.py
The files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.
The file `trainer/task.py` is the Python script for executing the custom training job. *Note*: when referring to it in the worker pool specification, you replace the directory slash with a dot (`trainer.task`) and drop the file suffix (`.py`).
#### Package Assembly
In the following cells, you will assemble the training package.
```
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
```
#### Task.py contents
In the next cell, you write the contents of the training script, task.py. We won't go into detail; it's there for you to browse. In summary, the script:
- Gets the directory for saving the model artifacts from the command line (`--model_dir`), falling back to the environment variable `AIP_MODEL_DIR` if not specified.
- Loads the Boston Housing dataset from the TF.Keras built-in datasets.
- Builds a simple deep neural network model using TF.Keras model API.
- Compiles the model (`compile()`).
- Sets a training distribution strategy according to the argument `args.distribute`.
- Trains the model (`fit()`) with epochs specified by `args.epochs`.
- Saves the trained model (`save(args.model_dir)`) to the specified model directory.
- Saves the maximum value for each feature `f.write(str(params))` to the specified parameters file.
```
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing

import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()

parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
                    default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
                    default=0.001, type=float,
                    help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
                    default=20, type=int,
                    help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
                    default=100, type=int,
                    help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
                    help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
                    default='/tmp/param.txt', type=str,
                    help='Output file for parameters')
args = parser.parse_args()

print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))

# Single Machine, single compute device
if args.distribute == 'single':
    if tf.test.is_gpu_available():
        strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
    else:
        strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
    strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))

def make_dataset():
    # Scale each Boston Housing feature to [0, 1] by its maximum value
    def scale(feature):
        max = np.max(feature)
        feature = (feature / max).astype(np.float32)
        return feature, max

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
        path="boston_housing.npz", test_split=0.2, seed=113
    )
    params = []
    for i in range(13):
        # Scale column i (feature i), not row i
        x_train[:, i], max = scale(x_train[:, i])
        x_test[:, i], _ = scale(x_test[:, i])
        params.append(max)

    # store the normalization (max) value for each feature
    with tf.io.gfile.GFile(args.param_file, 'w') as f:
        f.write(str(params))
    return (x_train, y_train), (x_test, y_test)

# Build the Keras model
def build_and_compile_dnn_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(1, activation='linear')
    ])
    model.compile(
        loss='mse',
        optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
    return model

NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS

with strategy.scope():
    # Creation of dataset, and model building/compiling need to be within
    # `strategy.scope()`.
    model = build_and_compile_dnn_model()

# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
```
#### Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
```
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
```
### Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
#### Create custom training job
A custom training job is created with the `CustomTrainingJob` class, with the following parameters:
- `display_name`: The human readable name for the custom training job.
- `container_uri`: The training container image.
- `requirements`: Package requirements for the training container image (e.g., pandas).
- `script_path`: The relative path to the training script.
```
job = aip.CustomTrainingJob(
    display_name="boston_" + TIMESTAMP,
    script_path="custom/trainer/task.py",
    container_uri=TRAIN_IMAGE,
    requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)

print(job)
```
### Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
- `args`: The command-line arguments to pass to the executable that is set as the entry point into the container.
- `--model-dir` : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or
- indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification.
- `"--epochs=" + EPOCHS`: The number of epochs for training.
- `"--steps=" + STEPS`: The number of steps per epoch.
```
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
    CMDARGS = [
        "--model-dir=" + MODEL_DIR,
        "--epochs=" + str(EPOCHS),
        "--steps=" + str(STEPS),
    ]
else:
    CMDARGS = [
        "--epochs=" + str(EPOCHS),
        "--steps=" + str(STEPS),
    ]
```
#### Run the custom training job
Next, you run the custom job to start the training job by invoking the method `run`, with the following parameters:
- `args`: The command-line arguments to pass to the training script.
- `replica_count`: The number of compute instances for training (replica_count = 1 is single node training).
- `machine_type`: The machine type for the compute instances.
- `accelerator_type`: The hardware accelerator type.
- `accelerator_count`: The number of accelerators to attach to a worker replica.
- `base_output_dir`: The Cloud Storage location to write the model artifacts to.
- `sync`: Whether to block until completion of the job.
```
if TRAIN_GPU:
    job.run(
        args=CMDARGS,
        replica_count=1,
        machine_type=TRAIN_COMPUTE,
        accelerator_type=TRAIN_GPU.name,
        accelerator_count=TRAIN_NGPU,
        base_output_dir=MODEL_DIR,
        sync=True,
    )
else:
    job.run(
        args=CMDARGS,
        replica_count=1,
        machine_type=TRAIN_COMPUTE,
        base_output_dir=MODEL_DIR,
        sync=True,
    )
model_path_to_deploy = MODEL_DIR
```
## Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the `tf.keras.models.load_model()` method, passing it the Cloud Storage path where the model is saved -- specified by `MODEL_DIR`.
```
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
```
## Evaluate the model
Now let's find out how good the model is.
### Load evaluation data
You will load the Boston Housing test (holdout) data from `tf.keras.datasets`, using the method `load_data()`. This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).
You don't need the training data, which is why you load it as `(_, _)`.
Before you can run the data through evaluation, you need to preprocess it:
`x_test`:
1. Normalize (rescale) the data in each column by dividing each value by the maximum value of that column. This replaces each single value with a 32-bit floating point number between 0 and 1.
```
import numpy as np
from tensorflow.keras.datasets import boston_housing

(_, _), (x_test, y_test) = boston_housing.load_data(
    path="boston_housing.npz", test_split=0.2, seed=113
)

def scale(feature):
    max = np.max(feature)
    feature = (feature / max).astype(np.float32)
    return feature

# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()

for i in range(13):
    # Scale column i (feature i), not row i
    x_test[:, i] = scale(x_test[:, i])

x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
```
### Perform the model evaluation
Now evaluate how well the model in the custom job did.
```
local_model.evaluate(x_test, y_test)
```
## Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
```
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(loaded.signatures["serving_default"].structured_outputs.keys())[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
```
### Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set the corresponding settings when you upload your custom model to a Vertex `Model` resource. These settings are referred to as the explanation metadata, which consists of:
- `parameters`: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:
- Shapley - *Note*, not recommended for image data -- can be very long running
- XRAI
- Integrated Gradients
- `metadata`: This is the specification for how the algorithm is applied on your custom model.
#### Explanation Parameters
Let's first dive deeper into the settings for the explainability algorithm.
#### Shapley
Assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.
Use Cases:
- Classification and regression on tabular data.
Parameters:
- `path_count`: This is the number of paths over the features that will be processed by the algorithm. An exact computation of the Shapley values requires M! paths, where M is the number of features. (For a 28x28 image dataset such as MNIST, M would already be 784.)
For any non-trivial number of features, this is too computationally expensive. You can reduce the number of paths over the features to M * `path_count`.
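To make the scale of the savings concrete, here is a quick sketch comparing exact Shapley (M! paths) with the sampled approximation (M * `path_count` paths) for M = 13 tabular features, as in the Boston Housing data:

```python
import math

M = 13           # number of features (Boston Housing)
PATH_COUNT = 10  # the sampled-Shapley `path_count` parameter

exact_paths = math.factorial(M)  # permutations needed for exact Shapley values
sampled_paths = M * PATH_COUNT   # paths processed by the sampled approximation

print(exact_paths)    # 6227020800
print(sampled_paths)  # 130
```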
#### Integrated Gradients
A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.
Use Cases:
- Classification and regression on tabular data.
- Classification on image data.
Parameters:
- `step_count`: This is the number of steps used to approximate the path integral. The more steps, the more accurate the approximation; the general rule of thumb is 50 steps, but increasing the step count also increases compute time.
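The step-count trade-off can be illustrated with a toy Riemann-sum sketch of integrated gradients (again, Vertex AI performs this server-side; the gradient function and inputs below are made up):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, step_count=50):
    """Approximate IG = (x - baseline) * integral over a in [0, 1] of
    grad_f(baseline + a * (x - baseline)), via a midpoint Riemann sum."""
    alphas = (np.arange(step_count) + 0.5) / step_count
    grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy model f(x) = sum(x**2), so grad_f(x) = 2x; with a zero baseline the
# exact attribution for each feature is x**2.
grad_f = lambda v: 2.0 * v
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
print(integrated_gradients(grad_f, x, baseline, step_count=50))  # [1. 4. 9.]
```

A handful of steps already suffices here because the integrand is linear; models with highly curved gradients are what push the step count toward 50 and beyond.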
#### XRAI
Based on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.
Use Cases:
- Classification on image data.
Parameters:
- `step_count`: This is the number of steps used to approximate the path integral. The more steps, the more accurate the approximation; the general rule of thumb is 50 steps, but increasing the step count also increases compute time.
In the next code cell, set the variable `XAI` to the explainability algorithm you will use on your custom model.
```
XAI = "ig" # [ shapley, ig, xrai ]
if XAI == "shapley":
PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}}
elif XAI == "ig":
PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}}
elif XAI == "xrai":
PARAMETERS = {"xrai_attribution": {"step_count": 50}}
parameters = aip.explain.ExplanationParameters(PARAMETERS)
```
#### Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of:
- `outputs`: A scalar value in the output to attribute -- what to explain. For example, in a probability output \[0.1, 0.2, 0.7\] for classification, one wants an explanation for 0.7. Consider the following formula, where the output is `y` and that is what we want to explain.
y = f(x)
Now consider the following formula, where the outputs are `y` and `z`. Since attribution can only be done for one scalar value, you have to pick whether to explain the output `y` or `z`. Assume in this example that the model is an object detector and `y` and `z` are the bounding box and the object classification; you would pick which of the two outputs to explain.
y, z = f(x)
The dictionary format for `outputs` is:
{ "outputs": { "[your_display_name]":
"output_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human readable name you assign to the output to explain. A common example is "probability".<br/>
- "output_tensor_name": The key/value field to identify the output layer to explain. <br/>
- [layer]: The output layer to explain. In a single task model, like a tabular regressor, it is the last (topmost) layer in the model.
</blockquote>
- `inputs`: The features for attribution -- how they contributed to the output. Consider the following formula, where `a` and `b` are the features. You have to pick which features to explain. Assume this model is deployed for A/B testing, where `a` holds the data items for the prediction and `b` identifies whether the model instance is A or B. You would pick `a` (or some subset of it) as the features, and not `b`, since `b` does not contribute to the prediction.
y = f(a,b)
The minimum dictionary format for `inputs` is:
{ "inputs": { "[your_display_name]":
"input_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human readable name you assign to the input to explain. A common example is "features".<br/>
- "input_tensor_name": The key/value field to identify the input layer for the feature attribution. <br/>
- [layer]: The input layer for feature attribution. In a single input tensor model, it is the first (bottom-most) layer in the model.
</blockquote>
Since the inputs to the model are tabular, you can specify the following two additional fields as reporting/visualization aids:
<blockquote>
- "encoding": "BAG_OF_FEATURES" : Indicates that the inputs are set of tabular features.<br/>
- "index_feature_mapping": [ feature-names ] : A list of human readable names for each feature. For this example, we use the feature names specified in the dataset.<br/>
- "modality": "numeric": Indicates the field values are numeric.
</blockquote>
```
INPUT_METADATA = {
"input_tensor_name": serving_input,
"encoding": "BAG_OF_FEATURES",
"modality": "numeric",
"index_feature_mapping": [
"crim",
"zn",
"indus",
"chas",
"nox",
"rm",
"age",
"dis",
"rad",
"tax",
"ptratio",
"b",
"lstat",
],
}
OUTPUT_METADATA = {"output_tensor_name": serving_output}
input_metadata = aip.explain.ExplanationMetadata.InputMetadata(INPUT_METADATA)
output_metadata = aip.explain.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)
metadata = aip.explain.ExplanationMetadata(
inputs={"features": input_metadata}, outputs={"medv": output_metadata}
)
```
## Upload the model
Next, upload your model to a `Model` resource using `Model.upload()` method, with the following parameters:
- `display_name`: The human readable name for the `Model` resource.
- `artifact_uri`: The Cloud Storage location of the trained model artifacts.
- `serving_container_image_uri`: The serving container image.
- `sync`: Whether to execute the upload asynchronously or synchronously.
- `explanation_parameters`: Parameters to configure explanations for the `Model`'s predictions.
- `explanation_metadata`: Metadata describing the `Model`'s input and output for explanation.
If the `upload()` method is run asynchronously, you can subsequently block until completion with the `wait()` method.
```
model = aip.Model.upload(
display_name="boston_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
```
## Send a batch prediction request
Send a batch prediction to your deployed model.
### Make test items
You will use synthetic data as test data items. Don't be concerned that the data is synthetic -- the goal is simply to demonstrate how to make a prediction.
### Make the batch input file
Now make a batch input file, which you will store in your Cloud Storage bucket. Unlike image, video, and text, the batch input file for tabular data is supported only in CSV format. For a CSV file, you make:
- The first line is the heading with the feature (fields) heading names.
- Each remaining line is a separate prediction request with the corresponding feature values.
For example:
"feature_1", "feature_2", ...
value_1, value_2, ...
```
! gsutil cat $IMPORT_FILE | head -n 1 > tmp.csv
! gsutil cat $IMPORT_FILE | tail -n 10 >> tmp.csv
! cut -d, -f1-16 tmp.csv > batch.csv
gcs_input_uri = BUCKET_NAME + "/test.csv"
! gsutil cp batch.csv $gcs_input_uri
```
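The same two-line format can also be assembled locally with Python's standard `csv` module. The feature names and values below are hypothetical, chosen only to show the file layout:

```python
import csv

# Hypothetical feature names and two prediction instances.
header = ["crim", "zn", "indus"]
rows = [[0.1, 18.0, 2.3], [0.2, 0.0, 8.1]]

with open("batch.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)   # first line: feature headings
    writer.writerows(rows)    # one prediction request per remaining line

print(open("batch.csv").read())
```

The resulting file would then be copied to Cloud Storage (e.g., with `gsutil cp`) just as in the cell above.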
### Make the batch explanation request
Now that your `Model` resource is trained, you can make a batch prediction by invoking the `batch_predict()` method, with the following parameters:
- `job_display_name`: The human readable name for the batch prediction job.
- `gcs_source`: A list of one or more batch request input files.
- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.
- `instances_format`: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
- `predictions_format`: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
- `generate_explanation`: Set to `True` to generate explanations.
- `sync`: If set to True, the call will block while waiting for the asynchronous batch job to complete.
```
MIN_NODES = 1
MAX_NODES = 1
batch_predict_job = model.batch_predict(
job_display_name="boston_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="csv",
predictions_format="jsonl",
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
generate_explanation=True,
sync=False,
)
print(batch_predict_job)
```
### Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter `sync` to `True` in the `batch_predict()` method to block until the batch prediction job is completed.
```
if not os.environ["IS_TESTING"]:
batch_predict_job.wait()
```
### Get the explanations
Next, get the explanation results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. Call the `iter_outputs()` method to get a list of each Cloud Storage file generated with the results. Each file contains one or more explanation results in CSV format:
- CSV header + predicted_label
- CSV row + explanation, per prediction request
```
if not os.environ["IS_TESTING"]:
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
explanation_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("explanation"):
explanation_results.append(blob.name)
tags = list()
for explanation_result in explanation_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{explanation_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
print(line)
```
# Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- AutoML Training Job
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
```
```
top_directory = '/Users/iaincarmichael/Dropbox/Research/law/law-net/'
from __future__ import division
import os
import sys
import time
from math import *
import copy
import cPickle as pickle
# data
import numpy as np
import pandas as pd
# viz
import matplotlib.pyplot as plt
# graph
import igraph as ig
# NLP
from nltk.corpus import stopwords
# our code
sys.path.append(top_directory + 'code/')
from load_data import load_and_clean_graph, case_info
from pipeline.download_data import download_bulk_resource
from pipeline.make_clean_data import *
from viz import print_describe
sys.path.append(top_directory + 'explore/vertex_metrics_experiment/code/')
from make_snapshots import *
from make_edge_df import *
from attachment_model_inference import *
from compute_ranking_metrics import *
from pipeline_helper_functions import *
from make_case_text_files import *
from bag_of_words import *
from similarity_matrix import *
# directory set up
data_dir = top_directory + 'data/'
experiment_data_dir = data_dir + 'vertex_metrics_experiment/'
court_name = 'scotus'
# jupyter notebook settings
%load_ext autoreload
%autoreload 2
%matplotlib inline
G = load_and_clean_graph(data_dir, court_name)
active_years = range(1900, 2015 + 1)
seed_ranking = 4343
R = 1000
results = pd.DataFrame(columns = ['mean_score', 'similarity', 'method'])
```
# just similarity
```
columns_to_use = ['similarity']
LogReg = fit_logistic_regression(experiment_data_dir, columns_to_use)
test_case_rank_scores_sim = compute_ranking_metrics_LR(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking, print_progress=True)
results.loc['sim', :] = [np.mean(test_case_rank_scores_sim), True, 'similarity']
print_describe(test_case_rank_scores_sim)
plt.hist(test_case_rank_scores_sim)
plt.xlabel('rank scores')
plt.title('only similarity')
```
# all metrics (no similarity)
```
columns_to_use = ['indegree', 'decayed_indegree', 's_pagerank', 'hubs', 'age']
LogReg = fit_logistic_regression(experiment_data_dir, columns_to_use)
test_case_rank_scores_all = compute_ranking_metrics_LR(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking, print_progress=True)
results.loc['all', :] = [np.mean(test_case_rank_scores_all), False, 'combined']
print_describe(test_case_rank_scores_all)
plt.hist(test_case_rank_scores_all)
plt.xlabel('rank scores')
plt.title('all metrics no similarity')
```
# indegree (no similarity)
```
columns_to_use = ['indegree']
LogReg = fit_logistic_regression(experiment_data_dir, columns_to_use)
test_case_rank_scores_indeg = compute_ranking_metrics_LR(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking, print_progress=True)
results.loc['indeg', :] = [np.mean(test_case_rank_scores_indeg), False, 'indegree']
print_describe(test_case_rank_scores_indeg)
plt.hist(test_case_rank_scores_indeg)
plt.xlabel('rank scores')
plt.title('indegree, no similarity')
```
# s_pagerank (no similarity)
```
columns_to_use = ['s_pagerank']
LogReg = fit_logistic_regression(experiment_data_dir, columns_to_use)
test_case_rank_scores_pr = compute_ranking_metrics_LR(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking, print_progress=True)
results.loc['s_pagerank', :] = [np.mean(test_case_rank_scores_pr), False, 'pagerank']
print_describe(test_case_rank_scores_pr)
plt.hist(test_case_rank_scores_pr)
plt.xlabel('rank scores')
plt.title('s_pagerank, no similarity')
```
# hubs (no similarity)
```
columns_to_use = ['hubs']
LogReg = fit_logistic_regression(experiment_data_dir, columns_to_use)
test_case_rank_scores_hubs = compute_ranking_metrics_LR(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking, print_progress=True)
results.loc['hubs', :] = [np.mean(test_case_rank_scores_hubs), False, 'hubs']
print_describe(test_case_rank_scores_hubs)
plt.hist(test_case_rank_scores_hubs)
plt.xlabel('rank scores')
plt.title('hubs, no similarity')
```
# all metrics with similarity
```
columns_to_use = ['indegree', 'decayed_indegree', 's_pagerank', 'hubs', 'age', 'similarity']
LogReg = fit_logistic_regression(experiment_data_dir, columns_to_use)
test_case_rank_scores_allsim = compute_ranking_metrics_LR(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking, print_progress=True)
results.loc['all_sim', :] = [np.mean(test_case_rank_scores_allsim), True, 'combined']
print_describe(test_case_rank_scores_allsim)
plt.hist(test_case_rank_scores_allsim)
plt.xlabel('rank scores')
plt.title('all metrics with similarity')
```
# indegree with similarity
```
columns_to_use = ['indegree', 'similarity']
LogReg = fit_logistic_regression(experiment_data_dir, columns_to_use)
test_case_rank_scores_indegsim = compute_ranking_metrics_LR(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking,print_progress=True)
results.loc['indeg_sim', :] = [np.mean(test_case_rank_scores_indegsim), True, 'indegree']
print_describe(test_case_rank_scores_indegsim)
plt.hist(test_case_rank_scores_indegsim)
plt.xlabel('rank scores')
plt.title('indegree with similarity')
```
# s_pagerank with similarity
```
columns_to_use = ['s_pagerank', 'similarity']
LogReg = fit_logistic_regression(experiment_data_dir, columns_to_use)
test_case_rank_scores_prsim = compute_ranking_metrics_LR(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking)
results.loc['s_pagerank_sim', :] = [np.mean(test_case_rank_scores_prsim), True, 'pagerank']
print_describe(test_case_rank_scores_prsim)
plt.hist(test_case_rank_scores_prsim)
plt.xlabel('rank scores')
plt.title('s_pagerank with similarity')
```
# hubs with similarity
```
columns_to_use = ['hubs', 'similarity']
LogReg = fit_logistic_regression(experiment_data_dir, columns_to_use)
test_case_rank_scores_hubssim = compute_ranking_metrics_LR(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking)
results.loc['hubs_sim', :] = [np.mean(test_case_rank_scores_hubssim), True, 'hubs']
print_describe(test_case_rank_scores_hubssim)
plt.hist(test_case_rank_scores_hubssim)
plt.xlabel('rank scores')
plt.title('hubs with similarity')
```
# results
```
# cast the similarity flag to bool for boolean indexing
results['similarity'] = results['similarity'].astype(bool)
results.sort_values(by='mean_score', ascending=False)
# without similarity
results[~results['similarity']].sort_values(by='mean_score', ascending=False)
# with similarity
results[results['similarity']].sort_values(by='mean_score', ascending=False)
```
# surgery
# Determining the difference in variant calling in human-only samples `004` and `005`
**Gregory Way 2018**
Samples `004` and `005` are human tumors.
They were previously included in the entire `disambiguate` pipeline, where the WES reads were aligned to both human and mouse genomes.
In the pipeline, all WES reads are aligned to both genomes.
Reads are then "disambiguated" to either species in which the species with the highest alignment score per read is assigned that given read.
There were some interesting variants observed in the human-only samples that appeared to belong only to mouse.
We hypothesized that the reason we are observing these variants is because of an error in the disambiguation step.
To determine if this was the case, I also aligned the two samples to the human genome only, and called variants in these files.
The following script compares the variants called in `004` and `005` between the `disambiguate` pipeline and the `human-only` pipeline.
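The disambiguation rule itself can be sketched as a toy function. This is only an illustration of the idea (assign each read to the species with the higher alignment score), not the actual `disambiguate` implementation:

```python
def disambiguate_reads(scores):
    """Assign each read to the species with the higher alignment score.

    `scores` maps read id -> (human_score, mouse_score); ties go to
    human here purely for illustration.
    """
    assignment = {}
    for read_id, (human, mouse) in scores.items():
        assignment[read_id] = "human" if human >= mouse else "mouse"
    return assignment

# Made-up alignment scores for three reads.
scores = {"read1": (60, 42), "read2": (10, 55), "read3": (30, 30)}
print(disambiguate_reads(scores))
# {'read1': 'human', 'read2': 'mouse', 'read3': 'human'}
```

An error at this step would misroute genuinely human reads to the mouse genome, which is exactly the failure mode being investigated here.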
## Note - we use the human-only pipeline reads for all downstream analyses for these two samples
Specifically, this means that the entire analysis pipeline is performed using variants called from the human-only pipeline for samples `004` and `005`.
```
import os
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib_venn import venn2
%matplotlib inline
human_dir = os.path.join('results', 'annotated_vcfs_humanonly')
disambig_dir = os.path.join('results', 'annotated_merged_vcfs')
human_samples = ['004-primary', '005-primary']
human_sample_dict = {}
for pdx_dir, pdx_type in [[human_dir, 'human'], [disambig_dir, 'disambig']]:
for human_sample in human_samples:
file = os.path.join(pdx_dir,
'{}.annotated.hg19_multianno.csv'.format(human_sample))
pdx_df = pd.read_csv(file)
pdx_key = '{}_{}'.format(pdx_type, human_sample)
human_sample_dict[pdx_key] = pdx_df
human_004 = set(human_sample_dict['human_004-primary'].cosmic70)
disambig_004 = set(human_sample_dict['disambig_004-primary'].cosmic70)
human_005 = set(human_sample_dict['human_005-primary'].cosmic70)
disambig_005 = set(human_sample_dict['disambig_005-primary'].cosmic70)
```
## Visualize the difference in called variants
```
venn2([human_004, disambig_004], set_labels = ('human', 'disambig'))
plt.title("COSMIC Variants in Sample 004")
file = os.path.join('figures', 'human_only_venn_004.png')
plt.savefig(file)
venn2([human_005, disambig_005], set_labels = ('human', 'disambig'))
plt.title("COSMIC Variants in Sample 005")
file = os.path.join('figures', 'human_only_venn_005.png')
plt.savefig(file)
```
## What are the variants themselves?
```
mouse_only_004 = disambig_004 - human_004
human_sample_dict['disambig_004-primary'].query('cosmic70 in @mouse_only_004')
human_only_004 = human_004 - disambig_004
human_sample_dict['human_004-primary'].query('cosmic70 in @human_only_004')
mouse_only_005 = disambig_005 - human_005
human_sample_dict['disambig_005-primary'].query('cosmic70 in @mouse_only_005')
human_only_005 = human_005 - disambig_005
human_sample_dict['human_005-primary'].query('cosmic70 in @human_only_005')
```
<a href="https://colab.research.google.com/github/jwkanggist/EverybodyTensorflow2.0/blob/master/lab24_basic_bilstm_timepredict_tf2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# LAB24: Basic BiLSTM to Predict Time Series
- Train a basic BiLSTM to predict a time series (seq2seq)
```
# preprocessor parts
from __future__ import absolute_import, division, print_function, unicode_literals
# Install TensorFlow
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import numpy as np
from tensorflow.keras.callbacks import TensorBoard
import matplotlib.pyplot as plt
# for Tensorboard use
LOG_DIR = 'drive/data/tb_logs'
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
import os
if not os.path.exists(LOG_DIR):
os.makedirs(LOG_DIR)
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR))
get_ipython().system_raw('./ngrok http 6006 &')
!curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
model_config = \
{
'n_input' : 1,
'n_units' : 200,
'n_output':1,
'num_steps' : 30
}
# dataset loading part
# data pipeline section
def gen_seq_data(datanum,shift_sample,sqe_sample_length):
data_step = 0.1
start_n = np.random.randint(low=0, high=31, size=datanum)  # inclusive upper bound of 30
tx = np.zeros(shape=(datanum,sqe_sample_length))
ty = np.zeros(shape=(datanum,sqe_sample_length))
x_batch = np.zeros(shape=(datanum,sqe_sample_length))
y_batch = np.zeros(shape=(datanum,sqe_sample_length))
for i in range(datanum):
n = start_n[i]
tx[i,:] = np.arange(start=n, stop=n + sqe_sample_length*data_step, step=data_step)
ty[i,:] = tx[i,:] + shift_sample * data_step
x_batch[i,:] = tx[i,:] * np.sin(tx[i,:]) / 3 + 2 * np.sin(5 * tx[i,:])
y_batch[i,:] = ty[i,:] * np.sin(ty[i,:]) / 3 + 2 * np.sin(5 * ty[i,:])
return x_batch, y_batch, tx, ty
shift_sample =2
datanum = 20000
x_train,y_train,tx_train,ty_train =gen_seq_data(datanum,shift_sample=shift_sample,
sqe_sample_length = model_config['num_steps'])
x_test, y_test, tx_test, ty_test = gen_seq_data(1000,shift_sample=shift_sample,
sqe_sample_length=model_config['num_steps'])
x_train = x_train.reshape((-1,\
model_config['num_steps'],
model_config['n_input']))
y_train = y_train.reshape((-1, \
model_config['num_steps'],
model_config['n_output']))
x_test = x_test.reshape((-1,\
model_config['num_steps'],
model_config['n_input']))
y_test = y_test.reshape((-1, \
model_config['num_steps'],
model_config['n_output']))
print("x_train.shape = %s" % str(x_train.shape))
print("y_train.shape = %s" % str(y_train.shape))
print("x_test.shape = %s" % str(x_test.shape))
print("y_test.shape = %s" % str(y_test.shape))
# model building and training setting part
# Y = f(X ; W)
dropout_rate=0.1
net_in = tf.keras.layers.Input(shape=(model_config['num_steps'],1))
net = tf.keras.layers.Bidirectional(
tf.keras.layers.LSTM(units=model_config['n_units'],
activation='tanh',
return_sequences=True,
return_state=False),
merge_mode='sum')(net_in)
net = tf.keras.layers.Conv1D(filters=1,kernel_size=1,activation='relu')(net)
net = tf.reshape(net,shape=[-1,model_config['num_steps']])
net = tf.keras.layers.Dense(units=model_config['num_steps'], activation='relu')(net)
net = tf.keras.layers.Dropout(dropout_rate)(net)
net = tf.keras.layers.Dense(units=model_config['num_steps'], activation=None)(net)
model = tf.keras.models.Model(inputs=net_in,outputs=net)
opt_fn = tf.keras.optimizers.Adam(learning_rate=1e-3,
beta_1=0.9,
beta_2=0.999)
model.compile(optimizer=opt_fn,
loss='mse',
metrics=['mse'])
tensorboard_callback = TensorBoard(log_dir=LOG_DIR,
histogram_freq=1,
write_graph=True,
write_images=True)
model.summary()
# model training and evaluation part
training_epochs = 30
batch_size = 128
model.fit(x_train, y_train,
epochs=training_epochs,
validation_data=(x_test, y_test),
batch_size=batch_size,
callbacks=[tensorboard_callback])
model.evaluate(x_test, y_test, verbose=2)
# prediction
test_index = 12
pred_y_train = model.predict(x_train[test_index,:,:].reshape((1,model_config['num_steps'],1)))
pred_y_test = model.predict(x_test[test_index,:,:].reshape((1,model_config['num_steps'],1)))
x_train_reshaped = x_train[test_index,:,:].reshape((model_config['num_steps']))
y_train_reshaped = y_train[test_index,:,:].reshape((model_config['num_steps']))
pred_y_train = pred_y_train.reshape((model_config['num_steps']))
x_test_reshaped = x_test[test_index,:,:].reshape((model_config['num_steps']))
y_test_reshaped = y_test[test_index,:,:].reshape((model_config['num_steps']))
pred_y_test = pred_y_test.reshape((model_config['num_steps']))
print(tx_train.shape)
plt.figure(1)
plt.plot(tx_train[test_index,:],x_train_reshaped ,color='b',marker='o',label='train_input')
plt.plot(ty_train[test_index,:],y_train_reshaped ,color='r',marker='x',label='train_output')
plt.plot(ty_train[test_index,:],pred_y_train,color='m',marker='x',label='pred_output')
plt.legend()
plt.figure(2)
plt.plot(tx_test[test_index,:],x_test_reshaped ,color='b',marker='o',label='test_input')
plt.plot(ty_test[test_index,:],y_test_reshaped ,color='r',marker='x',label='test_output')
plt.plot(ty_test[test_index,:],pred_y_test,color='m',marker='x',label='pred_output')
plt.legend()
plt.show()
```
```
import open3d as o3d
import numpy as np
import os
import sys
# monkey patches visualization and provides helpers to load geometries
sys.path.append('..')
import open3d_tutorial as o3dtut
# change to True if you want to interact with the visualization windows
o3dtut.interactive = "CI" not in os.environ
```
# Multiway registration
Multiway registration is the process of aligning multiple pieces of geometry in a global space. Typically, the input is a set of geometries (e.g., point clouds or RGBD images) $\{\mathbf{P}_{i}\}$. The output is a set of rigid transformations $\{\mathbf{T}_{i}\}$, so that the transformed point clouds $\{\mathbf{T}_{i}\mathbf{P}_{i}\}$ are aligned in the global space.
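Concretely, each $\mathbf{T}_{i}$ is a 4x4 rigid transformation applied to points in homogeneous coordinates. A minimal NumPy sketch (the pose and point below are made up):

```python
import numpy as np

def transform_points(T, points):
    """Apply a 4x4 rigid transformation to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return (homogeneous @ T.T)[:, :3]

# A pose that rotates 90 degrees about z and translates by (1, 0, 0).
T = np.array([[0.0, -1.0, 0.0, 1.0],
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0,  0.0, 0.0, 1.0]])
points = np.array([[1.0, 0.0, 0.0]])
print(transform_points(T, points))  # [[1. 1. 0.]]
```

This is exactly what `PointCloud.transform(T)` does to every point of a cloud later in this tutorial.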
Open3D implements multiway registration via pose graph optimization. The backend implements the technique presented in [\[Choi2015\]](../reference.html#choi2015).
## Input
The first part of the tutorial code reads three point clouds from files. The point clouds are downsampled and visualized together. They are misaligned.
```
def load_point_clouds(voxel_size=0.0):
pcds = []
for i in range(3):
pcd = o3d.io.read_point_cloud("../../test_data/ICP/cloud_bin_%d.pcd" %
i)
pcd_down = pcd.voxel_down_sample(voxel_size=voxel_size)
pcds.append(pcd_down)
return pcds
voxel_size = 0.02
pcds_down = load_point_clouds(voxel_size)
o3d.visualization.draw_geometries(pcds_down,
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
```
## Pose graph
A pose graph has two key elements: nodes and edges. A node is a piece of geometry $\mathbf{P}_{i}$ associated with a pose matrix $\mathbf{T}_{i}$ which transforms $\mathbf{P}_{i}$ into the global space. The set $\{\mathbf{T}_{i}\}$ are the unknown variables to be optimized. `PoseGraph.nodes` is a list of `PoseGraphNode`. We set the global space to be the space of $\mathbf{P}_{0}$. Thus $\mathbf{T}_{0}$ is the identity matrix. The other pose matrices are initialized by accumulating transformation between neighboring nodes. The neighboring nodes usually have large overlap and can be registered with [Point-to-plane ICP](../Basic/icp_registration.ipynb#point-to-plane-ICP).
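The pose initialization by accumulating neighbor transforms can be sketched in plain NumPy, mirroring the accumulation and inversion used later in `full_registration` (the toy transforms below are made up):

```python
import numpy as np

def init_poses(pairwise):
    """Initialize global poses from transforms between neighboring clouds.

    `pairwise[i]` maps cloud i into the frame of cloud i+1; cloud 0
    defines the global frame, so its pose is the identity.
    """
    poses = [np.identity(4)]
    odometry = np.identity(4)
    for T in pairwise:
        odometry = T @ odometry              # now maps cloud 0 -> cloud i+1
        poses.append(np.linalg.inv(odometry))  # pose maps cloud i+1 -> global
    return poses

# Toy example: each neighbor transform shifts points by 1 unit along x.
shift = np.identity(4)
shift[0, 3] = 1.0
poses = init_poses([shift, shift])
print(poses[2][:3, 3])  # cloud 2's origin lands at x = -2 in the global frame
```

These initial poses are only a starting point; the pose graph optimization below refines them using all edges, including loop closures.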
A pose graph edge connects two nodes (pieces of geometry) that overlap. Each edge contains a transformation matrix $\mathbf{T}_{i,j}$ that aligns the source geometry $\mathbf{P}_{i}$ to the target geometry $\mathbf{P}_{j}$. This tutorial uses [Point-to-plane ICP](../Basic/icp_registration.ipynb#point-to-plane-ICP) to estimate the transformation. In more complicated cases, this pairwise registration problem should be solved via [Global registration](global_registration.ipynb).
[\[Choi2015\]](../reference.html#choi2015) has observed that pairwise registration is error-prone. False pairwise alignments can outnumber correctly aligned pairs. Thus, they partition pose graph edges into two classes. **Odometry edges** connect temporally close, neighboring nodes. A local registration algorithm such as ICP can reliably align them. **Loop closure edges** connect any non-neighboring nodes. The alignment is found by global registration and is less reliable. In Open3D, these two classes of edges are distinguished by the `uncertain` parameter in the initializer of `PoseGraphEdge`.
In addition to the transformation matrix $\mathbf{T}_{i}$, the user can set an information matrix $\mathbf{\Lambda}_{i}$ for each edge. If $\mathbf{\Lambda}_{i}$ is set using function `get_information_matrix_from_point_clouds`, the loss on this pose graph edge approximates the RMSE of the corresponding sets between the two nodes, with a line process weight. Refer to Eq (3) to (9) in [\[Choi2015\]](../reference.html#choi2015) and [the Redwood registration benchmark](http://redwood-data.org/indoor/registration.html) for details.
The script creates a pose graph with three nodes and three edges. Among the edges, two of them are odometry edges (`uncertain = False`) and one is a loop closure edge (`uncertain = True`).
```
def pairwise_registration(source, target):
print("Apply point-to-plane ICP")
icp_coarse = o3d.pipelines.registration.registration_icp(
source, target, max_correspondence_distance_coarse, np.identity(4),
o3d.pipelines.registration.TransformationEstimationPointToPlane())
icp_fine = o3d.pipelines.registration.registration_icp(
source, target, max_correspondence_distance_fine,
icp_coarse.transformation,
o3d.pipelines.registration.TransformationEstimationPointToPlane())
transformation_icp = icp_fine.transformation
information_icp = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
source, target, max_correspondence_distance_fine,
icp_fine.transformation)
return transformation_icp, information_icp
def full_registration(pcds, max_correspondence_distance_coarse,
max_correspondence_distance_fine):
pose_graph = o3d.pipelines.registration.PoseGraph()
odometry = np.identity(4)
pose_graph.nodes.append(o3d.pipelines.registration.PoseGraphNode(odometry))
n_pcds = len(pcds)
for source_id in range(n_pcds):
for target_id in range(source_id + 1, n_pcds):
transformation_icp, information_icp = pairwise_registration(
pcds[source_id], pcds[target_id])
print("Build o3d.pipelines.registration.PoseGraph")
if target_id == source_id + 1: # odometry case
odometry = np.dot(transformation_icp, odometry)
pose_graph.nodes.append(
o3d.pipelines.registration.PoseGraphNode(
np.linalg.inv(odometry)))
pose_graph.edges.append(
o3d.pipelines.registration.PoseGraphEdge(source_id,
target_id,
transformation_icp,
information_icp,
uncertain=False))
else: # loop closure case
pose_graph.edges.append(
o3d.pipelines.registration.PoseGraphEdge(source_id,
target_id,
transformation_icp,
information_icp,
uncertain=True))
return pose_graph
print("Full registration ...")
max_correspondence_distance_coarse = voxel_size * 15
max_correspondence_distance_fine = voxel_size * 1.5
with o3d.utility.VerbosityContextManager(
o3d.utility.VerbosityLevel.Debug) as cm:
pose_graph = full_registration(pcds_down,
max_correspondence_distance_coarse,
max_correspondence_distance_fine)
```
Open3D uses the function `global_optimization` to perform pose graph optimization. Two types of optimization methods can be chosen: `GlobalOptimizationGaussNewton` or `GlobalOptimizationLevenbergMarquardt`. The latter is recommended since it has better convergence property. Class `GlobalOptimizationConvergenceCriteria` can be used to set the maximum number of iterations and various optimization parameters.
Class `GlobalOptimizationOption` defines a couple of options. `max_correspondence_distance` decides the correspondence threshold. `edge_prune_threshold` is a threshold for pruning outlier edges. `reference_node` is the node id that is considered to be the global space.
```
print("Optimizing PoseGraph ...")
option = o3d.pipelines.registration.GlobalOptimizationOption(
max_correspondence_distance=max_correspondence_distance_fine,
edge_prune_threshold=0.25,
reference_node=0)
with o3d.utility.VerbosityContextManager(
o3d.utility.VerbosityLevel.Debug) as cm:
o3d.pipelines.registration.global_optimization(
pose_graph,
o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt(),
o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria(),
option)
```
The global optimization is run twice on the pose graph. The first pass optimizes poses for the original pose graph taking all edges into account and does its best to distinguish false alignments among uncertain edges. These false alignments have small line process weights, and they are pruned after the first pass. The second pass runs without them and produces a tight global alignment. In this example, all the edges are considered as true alignments, hence the second pass terminates immediately.
## Visualize optimization
The transformed point clouds are listed and visualized using `draw_geometries`.
```
print("Transform points and display")
for point_id in range(len(pcds_down)):
print(pose_graph.nodes[point_id].pose)
pcds_down[point_id].transform(pose_graph.nodes[point_id].pose)
o3d.visualization.draw_geometries(pcds_down,
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
```
## Make a combined point cloud
`PointCloud` has a convenience operator `+` that can merge two point clouds into a single one. In the code below, the points are uniformly resampled using `voxel_down_sample` after merging. This is recommended post-processing after merging point clouds since it can relieve duplicated or over-densified points.
```
pcds = load_point_clouds(voxel_size)
pcd_combined = o3d.geometry.PointCloud()
for point_id in range(len(pcds)):
pcds[point_id].transform(pose_graph.nodes[point_id].pose)
pcd_combined += pcds[point_id]
pcd_combined_down = pcd_combined.voxel_down_sample(voxel_size=voxel_size)
o3d.io.write_point_cloud("multiway_registration.pcd", pcd_combined_down)
o3d.visualization.draw_geometries([pcd_combined_down],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
```
<div class="alert alert-info">
**Note:**
Although this tutorial demonstrates multiway registration for point clouds, the same procedure can be applied to RGBD images. See [Make fragments](../ReconstructionSystem/make_fragments.rst) for an example.
</div>
# Coupling a Landlab groundwater with a Mesa agent-based model
This notebook shows a toy example of how one might couple a simple groundwater model (Landlab's `GroundwaterDupuitPercolator`, by [Litwin et al. (2020)](https://joss.theoj.org/papers/10.21105/joss.01935)) with an agent-based model (ABM) written using the [Mesa](https://mesa.readthedocs.io/en/latest/) ABM package.
The purpose of this tutorial is to demonstrate the technical aspects of creating an integrated Landlab-Mesa model. The example is deliberately very simple in terms of the processes and interactions represented, and not meant to be a realistic portrayal of water-resources decision making. But the example does show how one might build a more sophisticated and interesting model using these basic ingredients.
(Greg Tucker, November 2021; created from earlier notebook example used in May 2020
workshop)
## Running the groundwater model
The following section simply illustrates how to create a groundwater model using the `GroundwaterDupuitPercolator` component.
Imports:
```
from landlab import RasterModelGrid, imshow_grid
from landlab.components import GroundwaterDupuitPercolator
import matplotlib.pyplot as plt
```
Set parameters:
```
base_depth = 22.0 # depth of aquifer base below ground level, m
initial_water_table_depth = 2.0 # starting depth to water table, m
dx = 100.0 # cell width, m
pumping_rate = 0.001 # pumping rate, m3/s
well_locations = [800, 1200]
K = 0.001 # hydraulic conductivity, (m/s)
n = 0.2 # porosity, (-)
dt = 3600.0 # time-step duration, s
background_recharge = 0.1 / (3600 * 24 * 365.25) # recharge rate from infiltration, m/s
```
Create a grid and add fields:
```
# Raster grid with closed boundaries
# boundaries = {'top': 'closed','bottom': 'closed','right':'closed','left':'closed'}
grid = RasterModelGrid((41, 41), xy_spacing=dx) # , bc=boundaries)
# Topographic elevation field (meters)
elev = grid.add_zeros("topographic__elevation", at="node")
# Field for the elevation of the top of an impermeable geologic unit that forms
# the base of the aquifer (meters)
base = grid.add_zeros("aquifer_base__elevation", at="node")
base[:] = elev - base_depth
# Field for the elevation of the water table (meters)
wt = grid.add_zeros("water_table__elevation", at="node")
wt[:] = elev - initial_water_table_depth
# Field for the groundwater recharge rate (meters per second)
recharge = grid.add_zeros("recharge__rate", at="node")
recharge[:] = background_recharge
recharge[well_locations] -= pumping_rate / (
dx * dx
) # pumping rate, in terms of recharge
```
Instantiate the component (note use of an array/field instead of a scalar constant for `recharge_rate`):
```
gdp = GroundwaterDupuitPercolator(
grid,
hydraulic_conductivity=K,
porosity=n,
recharge_rate=recharge,
regularization_f=0.01,
)
```
Define a couple of handy functions to run the model for a day or a year:
```
def run_for_one_day(gdp, dt):
num_iter = int(3600.0 * 24 / dt)
for _ in range(num_iter):
gdp.run_one_step(dt)
def run_for_one_year(gdp, dt):
num_iter = int(365.25 * 3600.0 * 24 / dt)
for _ in range(num_iter):
gdp.run_one_step(dt)
```
Run for a year and plot the water table:
```
run_for_one_year(gdp, dt)
imshow_grid(grid, wt, colorbar_label="Water table elevation (m)")
```
### Aside: calculating a pumping rate in terms of recharge
The pumping rate at a particular grid cell (in volume per time, representing pumping from a well at that location) needs to be given in terms of a recharge rate (depth of water equivalent per time) in a given grid cell. Suppose for example you're pumping 16 gallons/minute (horrible units of course). That equates to:
16 gal/min x 0.00378541 m3/gal x (1/60) min/sec =
```
Qp = 16.0 * 0.00378541 / 60.0
print(Qp)
```
...equals about 0.001 m$^3$/s. That's $Q_p$. The corresponding negative recharge in a cell of dimensions $\Delta x$ by $\Delta x$ would be
$R_p = Q_p / \Delta x^2$
```
Rp = Qp / (dx * dx)
print(Rp)
```
## A very simple ABM with farmers who drill wells into the aquifer
For the sake of illustration, our ABM will be extremely simple. There are $N$ farmers, at random locations, who each pump at a rate $Q_p$ as long as the water table lies above the depth of their well, $d_w$. Once the water table drops below their well, the well runs dry and they switch from crops to pasture.
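Before wiring this into Mesa, the decision rule and its effect on recharge can be sketched in a few lines of NumPy (the depths below are arbitrary illustrative values):

```python
import numpy as np

dx = 100.0     # cell width, m
Qp = 0.001     # pumping rate per active well, m3/s
d_well = 3.0   # depth of each well, m

# Assumed depths to the water table at four farmer locations (m)
depth_to_wt = np.array([2.0, 2.5, 3.5, 4.0])

# A farmer pumps only while the water table sits above the well bottom
pumping = depth_to_wt < d_well
print(pumping.sum())  # 2 wells still active

# Each active well appears as negative recharge over its dx-by-dx cell
recharge_contribution = -(Qp / (dx * dx)) * pumping.astype(float)
print(recharge_contribution.min())
```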
### Check that Mesa is installed
For the next step, we must verify that Mesa is available. If it is not, use one of the installation commands below to install, then re-start the kernel (Kernel => Restart) and continue.
```
try:
from mesa import Model
except ModuleNotFoundError:
print(
"""
Mesa needs to be installed in order to run this notebook.
Normally Mesa should be pre-installed alongside the Landlab notebook collection.
But it appears that Mesa is not already installed on the system on which you are
running this notebook. You can install Mesa from a command prompt using either:
`conda install -c conda-forge mesa`
or
`pip install mesa`
"""
)
raise
```
### Defining the ABM
In Mesa, an ABM is created using a class for each Agent and a class for the Model. Here's the Agent class (a Farmer). Farmers have a grid location and an attribute: whether they are actively pumping their well or not. They also have a well depth: the depth to the bottom of their well. Their action consists of checking whether their well is wet or dry; if wet, they will pump, and if dry, they will not.
```
from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation
class FarmerAgent(Agent):
"""An agent who pumps from a well if it's not dry."""
def __init__(self, unique_id, model, well_depth=5.0):
super().__init__(unique_id, model)
self.pumping = True
self.well_depth = well_depth
def step(self):
x, y = self.pos
print(f"Farmer {self.unique_id}, ({x}, {y})")
print(f" Depth to the water table: {self.model.wt_depth_2d[x,y]}")
print(f" Depth to the bottom of the well: {self.well_depth}")
if self.model.wt_depth_2d[x, y] >= self.well_depth: # well is dry
print(" Well is dry.")
self.pumping = False
else:
print(" Well is pumping.")
self.pumping = True
```
Next, define the model class. The model will take as a parameter a reference to a 2D array (with the same dimensions as the grid) that contains the depth to water table at each grid location. This allows the Farmer agents to check whether their well has run dry.
```
class FarmerModel(Model):
"""A model with several agents on a grid."""
def __init__(self, N, width, height, well_depth, depth_to_water_table):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
self.depth_to_water_table = depth_to_water_table
self.schedule = RandomActivation(self)
# Create agents
for i in range(self.num_agents):
a = FarmerAgent(i, self, well_depth)
self.schedule.add(a)
# Add the agent to a random grid cell (excluding the perimeter)
x = self.random.randrange(self.grid.width - 2) + 1
y = self.random.randrange(self.grid.height - 2) + 1
self.grid.place_agent(a, (x, y))
def step(self):
self.wt_depth_2d = self.depth_to_water_table.reshape(
(self.grid.width, self.grid.height)
)
self.schedule.step()
```
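One detail that makes this coupling work is worth calling out: `reshape` on a contiguous NumPy array returns a *view*, so the 2-D depth array handed to the `FarmerModel` shares memory with Landlab's flat node field, and in-place updates to the field (via `depth_to_wt[:] = ...`) are visible to the agents without any explicit hand-off. A minimal sketch:

```python
import numpy as np

field = np.zeros(9)               # stands in for a flat Landlab node field
view_2d = field.reshape((3, 3))   # what the ABM holds onto

field[:] = 5.0                    # in-place update, like depth_to_wt[:] = elev - wt
print(view_2d[1, 1])              # 5.0 -- the view sees the change

field = field - 1.0               # rebinding the name does NOT propagate
print(view_2d[1, 1])              # still 5.0
```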
### Setting up the Landlab grid, fields, and groundwater simulator
```
base_depth = 22.0 # depth of aquifer base below ground level, m
initial_water_table_depth = 2.8 # starting depth to water table, m
dx = 100.0 # cell width, m
pumping_rate = 0.004 # pumping rate, m3/s
well_depth = 3 # well depth, m
background_recharge = 0.002 / (365.25 * 24 * 3600) # recharge rate, m/s
K = 0.001 # hydraulic conductivity, (m/s)
n = 0.2 # porosity, (-)
dt = 3600.0 # time-step duration, s
num_agents = 12 # number of farmer agents
run_duration_yrs = 15 # run duration in years
grid = RasterModelGrid((41, 41), xy_spacing=dx)
elev = grid.add_zeros("topographic__elevation", at="node")
base = grid.add_zeros("aquifer_base__elevation", at="node")
base[:] = elev - base_depth
wt = grid.add_zeros("water_table__elevation", at="node")
wt[:] = elev - initial_water_table_depth
depth_to_wt = grid.add_zeros("water_table__depth_below_ground", at="node")
depth_to_wt[:] = elev - wt
recharge = grid.add_zeros("recharge__rate", at="node")
recharge[:] = background_recharge
recharge[well_locations] -= pumping_rate / (
dx * dx
) # pumping rate, in terms of recharge
gdp = GroundwaterDupuitPercolator(
grid,
hydraulic_conductivity=K,
porosity=n,
recharge_rate=recharge,
regularization_f=0.01,
)
```
### Set up the Farmer model
```
nc = grid.number_of_node_columns
nr = grid.number_of_node_rows
farmer_model = FarmerModel(
num_agents, nc, nr, well_depth, depth_to_wt.reshape((nr, nc))
)
```
Check the spatial distribution of wells:
```
import numpy as np
def get_well_count(model):
well_count = np.zeros((nr, nc), dtype=int)
pumping_well_count = np.zeros((nr, nc), dtype=int)
for cell in model.grid.coord_iter():
cell_content, x, y = cell
well_count[x][y] = len(cell_content)
for agent in cell_content:
if agent.pumping:
pumping_well_count[x][y] += 1
return well_count, pumping_well_count
well_count, p_well_count = get_well_count(farmer_model)
imshow_grid(grid, well_count.flatten())
```
#### Set the initial recharge field
```
recharge[:] = -(pumping_rate / (dx * dx)) * p_well_count.flatten()
imshow_grid(grid, -recharge * 3600 * 24, colorbar_label="Pumping rate (m/day)")
```
### Run the model
```
for i in range(run_duration_yrs):
# Run the groundwater simulator for one year
run_for_one_year(gdp, dt)
# Update the depth to water table
depth_to_wt[:] = elev - wt
# Run the farmer model
farmer_model.step()
# Count the number of pumping wells
well_count, pumping_well_count = get_well_count(farmer_model)
total_pumping_wells = np.sum(pumping_well_count)
print(f"In year {i + 1} there are {total_pumping_wells} pumping wells")
print(f" and the greatest depth to water table is {np.amax(depth_to_wt)} meters.")
# Update the recharge field according to current pumping rate
recharge[:] = (
background_recharge - (pumping_rate / (dx * dx)) * pumping_well_count.flatten()
)
print(f"Total recharge: {np.sum(recharge)}")
print("")
plt.figure()
imshow_grid(grid, wt)
# Display the area of water table that lies below the well depth
depth_to_wt[:] = elev - wt
too_deep = depth_to_wt > well_depth
imshow_grid(grid, too_deep)
```
The foregoing example is very simple, and leaves out many aspects of the complex problem of water extraction as a "tragedy of the commons". But it does illustrate how one can build a model that integrates agent-based dynamics with continuum dynamics by combining Landlab grid-based model code with Mesa ABM code.
```
import numpy as np
import torch
import random
device = 'cuda' if torch.cuda.is_available() else 'cpu'
import os,sys
opj = os.path.join
from tqdm import tqdm
# import acd
from random import randint
from copy import deepcopy
import pickle as pkl
import argparse
sys.path.append('../../lib/disentangling-vae')
import main
sys.path.append('../../src/vae')
sys.path.append('../../src/vae/models')
sys.path.append('../../src/dsets/images')
from dset import get_dataloaders
from model import init_specific_model
from losses import get_loss_f
from training import Trainer
sys.path.append('../../lib/trim')
# trim modules
from trim import DecoderEncoder
```
### Train model
```
args = main.parse_arguments()
args.dataset = "dsprites"
args.model_type = "Burgess"
args.latent_dim = 10
args.img_size = (1, 64, 64)
args.rec_dist = "bernoulli"
args.reg_anneal = 0
args.beta = 0
args.lamPT = 1
args.lamNN = 0.1
args.lamH = 0
args.lamSP = 1
class p:
'''Parameters for Gaussian mixture simulation
'''
# parameters for generating data
seed = 13
dataset = "dsprites"
# parameters for model architecture
model_type = "Burgess"
latent_dim = 10
img_size = (1, 64, 64)
# parameters for training
train_batch_size = 64
test_batch_size = 100
lr = 1e-4
rec_dist = "bernoulli"
reg_anneal = 0
num_epochs = 100
# hyperparameters for loss
beta = 0.0
lamPT = 0.0
lamNN = 0.0
lamH = 0.0
lamSP = 0.0
# parameters for exp
warm_start = None # which parameter to warm start with respect to
seq_init = 1 # value of warm_start parameter to start with respect to
# SAVE MODEL
out_dir = "/home/ubuntu/local-vae/notebooks/ex_dsprites/results" # wooseok's setup
# out_dir = '/scratch/users/vision/chandan/local-vae' # chandan's setup
dirname = "vary"
pid = ''.join(["%s" % randint(0, 9) for num in range(0, 10)])
def _str(self):
vals = vars(p)
return 'beta=' + str(vals['beta']) + '_lamPT=' + str(vals['lamPT']) + '_lamNN=' + str(vals['lamNN']) + '_lamSP=' + str(vals['lamSP']) \
+ '_seed=' + str(vals['seed']) + '_pid=' + vals['pid']
def _dict(self):
return {attr: val for (attr, val) in vars(self).items()
if not attr.startswith('_')}
class s:
'''Parameters to save
'''
def _dict(self):
return {attr: val for (attr, val) in vars(self).items()
if not attr.startswith('_')}
# calculate losses
def calc_losses(model, data_loader, loss_f):
"""
Tests the model for one epoch.
Parameters
----------
data_loader: torch.utils.data.DataLoader
loss_f: loss object
Return
------
"""
model.eval()
rec_loss = 0
kl_loss = 0
pt_loss = 0
nn_loss = 0
h_loss = 0
sp_loss = 0
for batch_idx, (data, _) in enumerate(data_loader):
data = data.to(device)
recon_data, latent_dist, latent_sample = model(data)
latent_map = DecoderEncoder(model, use_residuals=True)
latent_output = latent_map(latent_sample, data)
_ = loss_f(data, recon_data, latent_dist, model.training, storer=None,
latent_sample=latent_sample, latent_output=latent_output, n_data=None)
rec_loss += loss_f.rec_loss.item()
kl_loss += loss_f.kl_loss.item()
pt_loss += loss_f.pt_loss.item() if type(loss_f.pt_loss) == torch.Tensor else 0
nn_loss += loss_f.nearest_neighbor_loss.item() if type(loss_f.nearest_neighbor_loss) == torch.Tensor else 0
h_loss += loss_f.hessian_loss.item() if type(loss_f.hessian_loss) == torch.Tensor else 0
sp_loss += loss_f.sp_loss.item() if type(loss_f.sp_loss) == torch.Tensor else 0
n_batch = batch_idx + 1
rec_loss /= n_batch
kl_loss /= n_batch
pt_loss /= n_batch
nn_loss /= n_batch
h_loss /= n_batch
sp_loss /= n_batch
return (rec_loss, kl_loss, pt_loss, nn_loss, h_loss, sp_loss)
for arg in vars(args):
setattr(p, arg, getattr(args, arg))
# create dir
out_dir = opj(p.out_dir, p.dirname)
os.makedirs(out_dir, exist_ok=True)
# seed
random.seed(p.seed)
np.random.seed(p.seed)
torch.manual_seed(p.seed)
# get dataloaders
train_loader = get_dataloaders(p.dataset,
batch_size=p.train_batch_size,
logger=None)
# prepare model
model = init_specific_model(model_type=p.model_type,
img_size=p.img_size,
latent_dim=p.latent_dim,
hidden_dim=None).to(device)
# train
optimizer = torch.optim.Adam(model.parameters(), lr=p.lr)
loss_f = get_loss_f(decoder=model.decoder, **vars(p))
trainer = Trainer(model, optimizer, loss_f, device=device)
# trainer(train_loader, epochs=p.num_epochs)
# calculate losses
print('calculating losses and metric...')
rec_loss, kl_loss, pt_loss, nn_loss, h_loss, sp_loss = calc_losses(model, train_loader, loss_f)
s.reconstruction_loss = rec_loss
s.kl_normal_loss = kl_loss
s.pt_local_independence_loss = pt_loss
s.nearest_neighbor_loss = nn_loss
s.hessian_loss = h_loss
s.sparsity_loss = sp_loss
# s.disentanglement_metric = calc_disentangle_metric(model, test_loader).mean().item()
s.net = model
print(s.reconstruction_loss, s.kl_normal_loss, s.pt_local_independence_loss, s.nearest_neighbor_loss, s.hessian_loss, s.sparsity_loss)
```
## Compressing Word Embeddings
This notebook downloads a version of the GloVe embedding (with a fallback source).
It then has two main sections:
* Lloyd embedding generation
* Sparsified embedding generation
followed by saving the created embeddings to ```.hkl``` files.
### Download Source Embedding(s)
The following needs to be Pythonized :
```
RCL_BASE=('http://redcatlabs.com/downloads/'+
'deep-learning-workshop/notebooks/data/'+
'research/ICONIP-2016/')
"""
# http://redcatlabs.com/downloads/deep-learning-workshop/LICENSE
# Files in : ${RCL_BASE} :
# :: These are either as downloaded from GloVe site, or generated by Levy code
# The downloadable pretrained GloVe is much larger, since it is 400k words,
# whereas the 'home-grown' GloVe (based on the 1-billion word Corpus, and a wikipedia
# snapshot has a vocabulary of 2^17 words, i.e. a ~131k vocab size)
# 507,206,240 Oct 25 2015 2-pretrained-vectors_glove.6B.300d.hkl
# 160,569,440 May 14 14:57 1-glove-1-billion-and-wiki_window11-lc-36_vectors.2-17.hkl
"""
import os, requests
def get_embedding_file( hkl ):
if os.path.isfile( hkl ):
print("%s already available locally" % (hkl,))
else:
# ... requests.get( RCL_BASE + basename(hkl))
print("Downloading : %s" % (hkl,))
response = requests.get(RCL_BASE + (hkl.replace('../data/','')), stream=True)
if not response.ok:
# Something went wrong - bail out rather than writing a truncated file
print("Failed to download %s" % (hkl,))
return
with open(hkl, 'wb') as handle:
for block in response.iter_content(64*1024):
handle.write(block)
print("Downloading : %s :: DONE" % (hkl,))
```
### Load the embedding file
```
default_embedding_file = '../data/1-glove-1-billion-and-wiki_window11-lc-36_vectors.2-17.hkl'
#default_embedding_file = '../data/2-pretrained-vectors_glove.6B.300d.hkl'
get_embedding_file( default_embedding_file )
import time
import numpy as np
import theano
import lasagne
# http://blog.mdda.net/oss/2016/04/07/nvidia-on-fedora-23
#theano.config.nvcc.flags = '-D_GLIBCXX_USE_CXX11_ABI=0'
import sklearn.preprocessing
import hickle
d = hickle.load(default_embedding_file)
vocab, embedding = d['vocab'], d['embedding']
vocab_np = np.array(vocab, dtype=str)
vocab_orig=vocab_np.copy()
#dictionary = dict( (word.lower(), i) for i,word in enumerate(vocab) )
dictionary = dict( (word, i) for i,word in enumerate(vocab) if i<len(embedding) )
print("Embedding loaded :", embedding.shape) # (vocab_size, embedding_dimension)=(rows, columns)
def NO_NEED_save_to_txt(embedding_save, save_filename_txt):
with open(save_filename_txt, 'w') as f:
for l in range(0, embedding_save.shape[0]):
f.write("%s %s\n" % (
vocab[l],
' '.join([ ('0' if x==0. else ("%.6f" % (x,))) for x in embedding_save[l, :].tolist() ]), )
)
def save_embedding_to_hickle(vocab, embedding_save, save_filename_hkl, vocab_orig=None):
print("About to save to %s" % (save_filename_hkl,))
d=dict(
vocab=vocab,
vocab_orig=vocab if vocab_orig is None else vocab_orig,
embedding=embedding_save,
)
hickle.dump(d, save_filename_hkl, mode='w', compression='gzip')
print("Saved to %s" % (save_filename_hkl,))
```
# Lloyd's Method : 32->3 bits
```
quantisation_levels = 8
def np_int_list(n, mult=100., size=3): # size includes the +/-
return "[ " + (', '.join([ ('% +*d') % (size,x,) for x in (n * mult).astype(int).tolist()])) + " ]"
## Quantise each entry into 'pct' (as an integer) level (optimised per vector location)
# Suppose that v is a vector of levels
# and c is a list of numbers that needs to be quantised,
# each c becomes c' where c' is the closest value in v
# :: update v so that (c - c')^2 is as low as possible
c_length = embedding.shape[0]
embedding_quantised = np.zeros_like(embedding)
t0 = time.time()
for d in range(embedding.shape[1]): # Quantise each dimension separately
levels = quantisation_levels
i_step = int(c_length/levels)
i_start = int(i_step/2)
v_indices = np.arange(start=i_start, stop=c_length, step=i_step, dtype='int')
#if d != 9: continue # Weird distribution
#if d != 1: continue # Very standard example
# Initialise v by sorting c, and placing them evenly through the list
e_column = embedding[:,d].astype('float32')
c_sorted = np.sort( e_column )
v_init = c_sorted[ v_indices ]
# the v_init are the initial centers
v=v_init
t1 = time.time()
epochs=0
for epoch in range(0, 1000):
#print(" Dimension:%3d, Epoch:%3d, %s" % (d, epoch, np_int_list(v),))
# works out the values in their middles
mids_np = (v[:-1] + v[1:])/2.
mids = mids_np.tolist()
mids.insert( 0, c_sorted[0] )
mids.append( c_sorted[-1] +1 )
centroids=[]
for i in range( 0, len(mids)-1 ):
pattern = np.where( (mids[i] <= c_sorted) & (c_sorted < mids[i+1]) )
centroids.append( c_sorted[ pattern ].mean() )
centroids_np = np.array(centroids)
if np.allclose(v, centroids_np):
if epochs>200: # This only prints out for 'long convergence cases'
print(" NB : long running convergence : embedding[%3d] - took %d epochs" % (d, epochs,))
break
v = centroids_np
epochs += 1
if d % 10 ==0:
print("Ran embedding[%3d] - average time for convergence : %6.2fms" % (d, (time.time() - t1)/epochs*1000.,))
#print("Check col updated: before ", np_int_list(embedding[0:20,d]))
# Ok, so now we have the centers in v, and the mids in 'mids'
for i in range( 0, len(mids)-1 ):
pattern = np.where( (mids[i] <= e_column) & (e_column < mids[i+1]) )
embedding_quantised[pattern, d] = v[i]
#print("Check col updated: after ", np_int_list(embedding_quantised[0:20,d]))
if False:
offset=101010 # Check rare-ish words
for d in range(5, embedding_quantised.shape[1], 25):
print("Col %3d updated: " % (d,), np_int_list(embedding_quantised[(offset+0):(offset+20),d]))
embedding_normed = sklearn.preprocessing.normalize(embedding_quantised, norm='l2', axis=1, copy=True)
print("Quantisation finished : results in embedding_quantised and (same, but normalised) in embedding_normed")
```
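The loop above is 1-D k-means (the Lloyd–Max quantiser), applied independently to each embedding dimension. A compact, self-contained version of the same algorithm, run here on synthetic Gaussian data rather than the embedding, might look like:

```python
import numpy as np

def lloyd_1d(c, levels=8, max_iter=1000):
    # Quantise the values in c to `levels` centroids, minimising sum (c - c')^2
    c_sorted = np.sort(c.astype('float64'))
    step = len(c_sorted) // levels
    v = c_sorted[step // 2 :: step][:levels].copy()  # spread initial centroids evenly
    for _ in range(max_iter):
        # Cell boundaries are the midpoints between adjacent centroids
        edges = np.concatenate(([-np.inf], (v[:-1] + v[1:]) / 2.0, [np.inf]))
        assign = np.searchsorted(edges, c_sorted, side='right') - 1
        new_v = np.array([c_sorted[assign == i].mean() for i in range(levels)])
        if np.allclose(v, new_v):
            break
        v = new_v
    return v

rng = np.random.default_rng(0)
centroids = lloyd_1d(rng.standard_normal(10_000), levels=8)
print(len(centroids))  # 8
```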
To save the created embedding, execute the following :
```
# Save the embedding_normed as a hickle file (easy to reload into the 'explore' workbook)
save_embedding_to_hickle(vocab, embedding_normed, '../data/lloyds_normed_%d.hkl' % (quantisation_levels, ) )
```
# Non-Negative Sparse Embeddings
```
python sparsify_lasagne.py \
--mode=train \
--version=21 \
--save='./sparse.6B.300d_S-21_2n-shuf-noise-after-norm_.2.01_6-75_%04d.hkl' \
--sparsity=0.0675 \
--random=1 \
--iters=4000 | tee sparse.6B.300d_S-21_2n-shuf-noise-after-norm_.2.01_6-75.log
#sparse_dim = 1024, pre-num_units=embedding_dim*8,
```
```
# -> 4.0 l2 in 4.0k epochs (sigma=39) # sparsity_std_:, 0.4742,
python sparsify_lasagne.py \
--mode=predict \
--version=21 \
--load='./sparse.6B.300d_S-21_2n-shuf-noise-after-norm_.2.01_6-75_4000.hkl' \
--sparsity=0.0675 \
--random=1 \
--output=sparse.6B.300d_S-21_2n-shuf-noise-after-norm_.2.01_6-75_4000_GPU-sparsity_recreate.hkl \
--direct=sparse.6B.300d_S-21_2n-shuf-noise-after-norm_.2.01_6-75_4000_GPU-sparse_matrix.hkl
```
```
sparse_dim,sparsity_goal = 1024, 0.0675
#sparse_dim,sparsity_goal = 4096, 0.0150
shuffle_vocab = True
batchsize = 16384 # (GTX760 requires <20000)
pre_normalize = False
default_save_file_fmt = './data/sparse.6B.300d_jupyter_%%04d.hkl'
"""
parser = argparse.ArgumentParser(description='')
parser.add_argument('-m','--mode', help='(train|predict)', type=str, default=None)
parser.add_argument('-i','--iters', help='Number of iterations', type=int, default=10000)
parser.add_argument('-o','--output', help='hickle to *create* embedding for testing', type=str, default=None)
parser.add_argument('-d','--direct', help='hickle to *create* *binary* embedding for testing', type=str, default=None)
parser.add_argument('-p','--param', help='Set param value initially', type=float, default=None)
args = parser.parse_args()
print("Mode : %s" % (args.mode,))
"""
if shuffle_vocab:
np.random.seed(1) # No need to get fancy - just want to mix up the word frequencies into different batches
perm = np.random.permutation(len(embedding))
embedding = embedding[perm]
vocab = vocab_np[perm].tolist()
dictionary = dict( (word, i) for i,word in enumerate(vocab) )
print("Embedding loaded :", embedding.shape) # (vocab_size, embedding_dimension)=(rows, columns)
print("Device=%s, OpenMP=%s" % (theano.config.device, ("True" if theano.config.openmp else "False"), ))
def np_int_list(n, mult=100., size=3): # size includes the +/-
return "[ " + (', '.join([ ('% +*d') % (size,x,) for x in (n * mult).astype(int).tolist()])) + " ]"
embedding_dim = embedding.shape[1]
mode='train'
#mode='predict'
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
class SparseWinnerTakeAllLayer(lasagne.layers.Layer):
def __init__(self, incoming, sparsity=0.05, **kwargs):
super(SparseWinnerTakeAllLayer, self).__init__(incoming, **kwargs)
self.sparsity = sparsity
def get_output_for(self, input, **kwargs):
"""
Parameters
----------
input : tensor
output from the previous layer
"""
# Sort within batch (Very likely on the CPU)
# theano.tensor.sort(self, axis, kind, order)
sort_input = input.sort( axis=0, kind='quicksort' )
# Find kth value
hurdles_raw = sort_input[ int( batchsize * (1.0 - self.sparsity) ), : ]
hurdles = theano.tensor.maximum(hurdles_raw, 0.0) # rectification...
# switch based on >kth value (or create mask), all other entries are zero
masked = theano.tensor.switch( theano.tensor.ge(input, hurdles), input, 0.0)
return masked
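# For reference, the exact winner-take-all masking above can be mirrored in
# plain NumPy (illustration only; `frac` plays the role of self.sparsity and
# each column is thresholded at its k-th largest, rectified, entry):
def np_winner_take_all(x, frac):
    k = int(x.shape[0] * (1.0 - frac))
    hurdles = np.maximum(np.sort(x, axis=0)[k, :], 0.0)
    return np.where(x >= hurdles, x, 0.0)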
class SparseWinnerTakeAllLayerApprox(lasagne.layers.Layer):
def __init__(self, incoming, approx_sparsity=0.12, **kwargs):
super(SparseWinnerTakeAllLayerApprox, self).__init__(incoming, **kwargs)
self.sparsity = approx_sparsity
def get_output_for(self, input, **kwargs):
"""
Parameters
----------
input : tensor
output from the previous layer
"""
# input_shape is [ #in_batch, #vector_entries ] ~ [ 20k, 1024 ]
current_sparsity = self.sparsity
#print(current_sparsity) # A theano variable
if False:
# This is an 'advanced' tail-aware hurdle-level prediction.
# In the end, it works less well than the binary-search version below
# Find the max value in each column - this is the k=1 (top-most) entry
hurdles_max = input.max( axis=0 )
input = lasagne.layers.get_output(embedding_batch_middle)
# Find the max value in each column - this is the k=1 (top-most) entry
hurdles_max = input.max( axis=0 )
# Find the min value in each column - this is the k=all (bottom-most) entry
#hurdles_min = input.min( axis=0 )
# Let's guess (poorly) that the sparsity hurdle is (0... sparsity ...100%) within these bounds
#hurdles_guess = hurdles_max * (1.0 - current_sparsity) + hurdles_min * current_sparsity
#hurdles_guess = (hurdles_min + hurdles_max)/2.0
# New approach : We know that the mean() is zero and the std() is 1
# simulations suggest that the more stable indicators are at fractions of the max()
hurdles_hi = hurdles_max * 0.5
hurdles_lo = hurdles_max * 0.3
# Now, let's find the actual sparsity that this creates
sparsity_flag_hi = theano.tensor.switch( theano.tensor.ge(input, hurdles_hi), 1.0, 0.0)
sparsity_real_hi = sparsity_flag_hi.mean(axis=0) # Should be ~ sparsity (likely to be lower, though)
sparsity_flag_lo = theano.tensor.switch( theano.tensor.ge(input, hurdles_lo), 1.0, 0.0)
sparsity_real_lo = sparsity_flag_lo.mean(axis=0) # Should be ~ sparsity (likely to be higher, though)
# But this is wrong! Let's do another estimate (will be much closer, hopefully) using this knowledge
# For each column, the new hurdle guess
#hurdles_better = hurdles_max - (hurdles_max - hurdles_guess) * (
# current_sparsity / (sparsity_guess_real + 0.00001) ) )
if False: # This assumes that the distribution tails are linear (which is not true)
hurdles_interp = hurdles_hi + (hurdles_lo-hurdles_hi) * (
(current_sparsity - sparsity_real_hi) / ((sparsity_real_lo - sparsity_real_hi)+0.00001) )
else: # Assume that the areas under the tails are ~ exp(-x*x)
# See (2) in : https://math.uc.edu/~brycw/preprint/z-tail/z-tail.pdf
# *** See (Remark 15) in : http://m-hikari.com/ams/ams-2014/ams-85-88-2014/epureAMS85-88-2014.pdf
def tail_transform(z):
return theano.tensor.sqrt( -theano.tensor.log( z ) )
tail_target = tail_transform(current_sparsity)
tail_hi = tail_transform(sparsity_real_hi)
tail_lo = tail_transform(sparsity_real_lo)
hurdles_interp = hurdles_hi + (hurdles_lo-hurdles_hi) * (
(tail_target - tail_hi) / ((tail_lo - tail_hi)+0.00001) )
#hurdles = theano.tensor.maximum(hurdles_better, 0.0) # rectification... at mininim...
# (also solves everything-blowing-up problem)
hurdles = hurdles_interp.clip(hurdles_max*0.2, hurdles_max*0.9)
if True: # Simple, but effective : Binary search
hurdles_hi, hurdles_lo = [], []
hurdles_guess = []
sparsity_flag = []
sparsity_real = []
sparsity_hi, sparsity_lo = [], []
# Find the max value in each column - this is the k=1 (top-most) entry
hurdles_max = input.max( axis=0 )
hurdles_hi.append(hurdles_max)
sparsity_hi.append( 1./batchsize )  # only the max entry itself passes the max hurdle
hurdles_lo_temp = input.mean( axis=0 ) # Different estimate idea...
hurdles_lo.append(hurdles_lo_temp)
sparsity_lo_temp = theano.tensor.switch( theano.tensor.ge(input, hurdles_lo_temp), 1.0, 0.0)
sparsity_lo.append( sparsity_lo_temp.mean(axis=0) )
for i in range(10):
if True: # WINS THE DAY!
hurdles_guess.append(
(
(hurdles_lo[-1] + hurdles_hi[-1]) * 0.5
)
)
if False: # A 'better approximation' that is actually worse
hurdles_guess.append(
(
hurdles_hi[-1] + (hurdles_lo[-1] - hurdles_hi[-1]) *
(current_sparsity - sparsity_hi[-1]) / ((sparsity_lo[-1] - sparsity_hi[-1])+0.000001)
).clip(hurdles_lo[-1], hurdles_hi[-1])
)
if False: # Another 'better approximation' that is actually worse
# switch on closeness to getting it correct
hurdles_guess.append(
theano.tensor.switch( theano.tensor.lt( sparsity_lo[-1], current_sparsity * 2.0 ),
(
hurdles_hi[-1] + (hurdles_lo[-1] - hurdles_hi[-1]) *
(current_sparsity - sparsity_hi[-1]) / ((sparsity_lo[-1] - sparsity_hi[-1])+0.000001)
).clip(hurdles_lo[-1], hurdles_hi[-1]),
(
(hurdles_lo[-1] + hurdles_hi[-1]) * 0.5
)
)
)
sparsity_flag.append( theano.tensor.switch( theano.tensor.ge(input, hurdles_guess[-1] ), 1.0, 0.0) )
sparsity_real.append( sparsity_flag[-1].mean(axis=0) )
# So, based on whether the real sparsity is greater or less than the real value, change the hi or lo values
hurdles_lo.append(
theano.tensor.switch( theano.tensor.gt(current_sparsity, sparsity_real[-1]),
hurdles_lo[-1], hurdles_guess[-1])
)
hurdles_hi.append(
theano.tensor.switch( theano.tensor.le(current_sparsity, sparsity_real[-1]),
hurdles_hi[-1], hurdles_guess[-1])
)
hurdles = hurdles_guess[-1]
#hurdles = hurdles_lo[-1] # Better to bound this at the highest relevant sparsity...
masked = theano.tensor.switch( theano.tensor.ge(input, hurdles), input, 0.0)
return masked
embedding_N = (embedding) # No Normalization by default
if pre_normalize:
embedding_std = np.std(embedding, axis=1)
embedding_N = embedding / embedding_std[:, np.newaxis] # Try Normalizing std(row) == 1, making sure shapes are right
embedding_shared = theano.shared(embedding_N.astype('float32')) # 400000, 300
embedding_shared.name = "embedding_shared"
batch_start_index = theano.tensor.scalar('batch_start_index', dtype='int32')
embedding_batch = embedding_shared[ batch_start_index:(batch_start_index+batchsize) ]
network = lasagne.layers.InputLayer(
( batchsize, embedding_dim ),
input_var=embedding_batch,
)
pre_hidden_dim=embedding_dim*8 ## For sparse_dim=1024 and below
if sparse_dim>1024*1.5:
pre_hidden_dim=sparse_dim*2 ## Larger sparse_dim
network = lasagne.layers.DenseLayer(
network,
num_units=pre_hidden_dim,
nonlinearity=lasagne.nonlinearities.rectify,
W=lasagne.init.GlorotUniform(),
b=lasagne.init.Constant(0.)
)
network = lasagne.layers.DenseLayer(
network,
num_units=sparse_dim,
nonlinearity=lasagne.nonlinearities.identity,
W=lasagne.init.GlorotUniform(),
b=lasagne.init.Constant(0.)
)
sparse_embedding_batch_linear=network
#def hard01(x):
# # http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.switch
# #return theano.tensor.switch( theano.tensor.gt(x, 0.), 0.95, 0.05)
# return theano.tensor.switch( theano.tensor.gt(x, 0.), 1.0, 0.0)
if mode == 'train':
# This adds some 'fuzziness' to smooth out the training process
sigma = theano.tensor.scalar(name='sigma', dtype='float32')
embedding_batch_middle = lasagne.layers.batch_norm(
lasagne.layers.NonlinearityLayer( network, nonlinearity=lasagne.nonlinearities.rectify )
)
embedding_batch_middle = lasagne.layers.GaussianNoiseLayer(
embedding_batch_middle,
sigma=0.2 * theano.tensor.exp((-0.01) * sigma ) # Noise should die down over time...
)
sparsity_blend = theano.tensor.exp((-10.) * sigma ) # Goes from 1 to epsilon
current_sparsity = 0.50*(sparsity_blend) + sparsity_goal*(1. - sparsity_blend)
sparse_embedding_batch_squashed = SparseWinnerTakeAllLayerApprox(
embedding_batch_middle,
approx_sparsity=current_sparsity
)
elif mode == 'predict':
embedding_batch_middle = lasagne.layers.batch_norm(
lasagne.layers.NonlinearityLayer( network, nonlinearity=lasagne.nonlinearities.rectify )
)
#sparse_embedding_batch_squashed = SparseWinnerTakeAllLayer(
# embedding_batch_middle,
# sparsity=sparsity_goal,
# )
sparse_embedding_batch_squashed = SparseWinnerTakeAllLayerApprox(
embedding_batch_middle,
approx_sparsity=sparsity_goal, # Jam the actual (final) value in...
)
sparse_embedding_batch_probs = sparse_embedding_batch_squashed
network = sparse_embedding_batch_squashed
network = lasagne.layers.DenseLayer(
network,
num_units=embedding_dim,
nonlinearity=lasagne.nonlinearities.linear,
W=lasagne.init.GlorotUniform(),
b=lasagne.init.Constant(0.)
)
prediction = lasagne.layers.get_output(network)
l2_error = lasagne.objectives.squared_error( prediction, embedding_batch )
l2_error_mean = l2_error.mean() # This is a per-element error term
interim_output = lasagne.layers.get_output(sparse_embedding_batch_probs)
# Count the number of positive entries
sparse_flag = theano.tensor.switch( theano.tensor.ge(interim_output, 0.0001), 1.0, 0.0)
#sparsity_mean = sparse_flag.mean() / sparsity_goal # This is a number 0..1, where 1.0 = perfect = on-target
sparsity_mean = sparse_flag.mean() * 100. # This is realised sparsity
sparsity_std = (sparse_flag.mean(axis=1) / sparsity_goal).std() # assess the 'quality' of the sparsity per-row
# This is to monitor learning (not direct it)
sparsity_probe = sparse_flag.mean(axis=1) / sparsity_goal # sparsity across rows may not be ===1.0
#sparsity_probe = sparse_flag.mean(axis=0) / sparsity_goal # sparsity across columns should be ===1.0 (if approx works)
sparsity_cost=0.0
if mode == 'train':
mix = theano.tensor.scalar(name='mix', dtype='float32')
sparsity_cost = -mix*sparsity_mean/1000. # The 1000 factor is because '10' l2 is Ok, and 1 sparsity_mean is Great
if version==20 or version==21:
sparsity_cost = mix*0.
cost = l2_error_mean + sparsity_cost
params = lasagne.layers.get_all_params(network, trainable=True)
epoch_base=0
if args.load:
load_vars = hickle.load(args.load)
print("Saved file had : Epoch:%4d, sigma:%5.2f" % (load_vars['epoch'], load_vars['sigma'], ) )
#fraction_of_vocab=fraction_of_vocab
epoch_base = load_vars['epoch']
if 'layer_names' in load_vars:
layer_names = load_vars['layer_names']
else:
i=0
layer_names=[]
while "Lasagne%d" % (i,) in load_vars:
layer_names.append( "Lasagne%d" % (i,) )
i=i+1
layers = [ load_vars[ ln ] for ln in layer_names ]
lasagne.layers.set_all_param_values(network, layers)
if mode == 'train':
updates = lasagne.updates.adam( cost, params )
iterate_net = theano.function(
[batch_start_index,sigma,mix],
[l2_error_mean,sparsity_mean,sparsity_std,sparsity_probe],
updates=updates,
allow_input_downcast=True,
on_unused_input='warn',
)
print("Built Theano op graph")
sigma_ = 0.0
mix_ = 0.0
if args.param:
mix_=args.param
t0 = time.time()
for epoch in range(epoch_base, epoch_base+args.iters):
t1 = time.time()
fraction_of_vocab = 1.0
max_l2_error_mean=-1000.0
batch_list = np.array( range(0, int(embedding.shape[0]*fraction_of_vocab), batchsize) )
batch_list = np.random.permutation( batch_list )
for b_start in batch_list.astype(int).tolist():
#l2_error_mean_,sparsity_mean_ = iterate_net(b_start)
l2_error_mean_,sparsity_mean_,sparsity_std_,sparsity_probe_ = iterate_net(b_start, sigma_, mix_)
print(" epoch:,%4d, b:,%7d, l2:,%9.2f, sparsity_mean_:,%9.4f, sparsity_std_:,%9.4f, sigma:,%5.2f, mix:,%5.2f, " %
(epoch, b_start, 1000*l2_error_mean_, sparsity_mean_, sparsity_std_, sigma_, mix_, ))
if b_start==0:
#print("Hurdles : " + np_int_list( sparsity_probe_[0:100] ))
print(" Row-wise sparsity : " + np_int_list( sparsity_probe_[0:30] ))
#print(" %d, vector_probe : %s" % (epoch, np_int_list( np.sort(sparsity_probe_[0:100]) ), ))
#print(" %d, vector_probe : %s" % (epoch, np_int_list( sparsity_probe_[0:100] ), ))
#print(" vector_probe : " + np_int_list( sparsity_probe_[0:1000] ))
if max_l2_error_mean<l2_error_mean_:
max_l2_error_mean=l2_error_mean_
print("Time per 100k words %6.2fs" % ((time.time() - t1)/embedding.shape[0]/fraction_of_vocab*1000.*100., ))
#exit()
boil_limit=10.
if pre_normalize:
boil_limit=40.
if max_l2_error_mean*1000.<boil_limit:
print("max_l2_error_mean<%6.2f - increasing sparseness emphasis" % (boil_limit,))
sigma_ += 0.01
mix_ += 0.1
if (epoch +1) % 10 == 0:
save_vars = dict(
version=version,
epoch=epoch,
sigma=sigma_,
mix=mix_,
fraction_of_vocab=fraction_of_vocab
)
layer_names = []
for i,p in enumerate(lasagne.layers.get_all_param_values(network)):
if len(p)>0:
name = "Lasagne%d" % (i,)
save_vars[ name ] = p
layer_names.append( name )
save_vars[ 'layer_names' ] = layer_names
epoch_thinned = int(epoch/100)*100
hickle.dump(save_vars, args.save % (epoch_thinned,), mode='w', compression='gzip')
if args.load and mode == 'predict':
print("Parameters : ", lasagne.layers.get_all_params(network))
get_sparse_linear = theano.function( [batch_start_index], [ lasagne.layers.get_output(sparse_embedding_batch_linear), ]) # allow_input_downcast=True
predict_net = theano.function( [batch_start_index], [l2_error_mean,sparsity_mean], allow_input_downcast=True )
predict_emb = theano.function( [batch_start_index], [prediction], allow_input_downcast=True )
predict_bin = theano.function( [batch_start_index], [ lasagne.layers.get_output(sparse_embedding_batch_squashed),])
print("Built Theano op graph")
if True: # Shows the error predictions with hard01 sigmoid
for b_start in range(0, int(embedding.shape[0]), batchsize):
l2_error_mean_,sparsity_mean_ = predict_net(b_start)
print(" epoch:%4d, b:%7d, l2:%12.4f, sparsity:%6.4f - hard01" %
(epoch_base, b_start, 1000*l2_error_mean_, sparsity_mean_, ))
if False: # Shows the linear range of the sparse layer (pre-squashing)
for b_start in range(0, int(embedding.shape[0]), batchsize * 5):
sparse_embedding_batch_linear_, = get_sparse_linear(b_start)
for row in range(0,100,5):
print(np_int_list( sparse_embedding_batch_linear_[row][0:1000:50], mult=10, size=4 ))
if args.output:
predictions=[]
for b_start in range(0, int(embedding.shape[0]), batchsize):
prediction_, = predict_emb(b_start)
predictions.append( np.array( prediction_ ) )
print(" epoch:%3d, b:%7d, Downloading - reconstructed array" %
(epoch_base, b_start, ))
embedding_prediction = np.concatenate(predictions, axis=0)
predictions=None
print("About to save to %s" % (args.output,))
d=dict(
vocab=vocab,
vocab_orig=vocab_orig,
embedding=embedding_prediction,
)
hickle.dump(d, args.output, mode='w', compression='gzip')
if args.direct:
predictions=[]
for b_start in range(0, int(embedding.shape[0]), batchsize):
binarised_, = predict_bin(b_start)
#predictions.append( np.where( binarised_>0.5, 1., 0. ).astype('float32') )
predictions.append( binarised_.astype('float32') )
#print(" epoch:%3d, b:%7d, Downloading - hard01 to binary" %
print(" epoch:%3d, b:%7d, Downloading - sparse data" %
(epoch_base, b_start, ))
embedding_prediction = np.concatenate(predictions, axis=0)
predictions=None
print("About to save sparse version to %s" % (args.direct,))
d=dict(
vocab=vocab,
vocab_orig=vocab_orig,
embedding=embedding_prediction,
)
hickle.dump(d, args.direct, mode='w', compression='gzip')
```
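The hi/lo update loop above is a per-column bisection: starting from the column max (exceeded by only ~1/batch of entries) and the column mean (exceeded by many more), it halves the interval toward a threshold whose exceedance rate matches the target sparsity. A NumPy sketch of the same idea (illustrative only, not the Theano graph):

```python
import numpy as np

def sparsity_hurdles(x, target, iters=10):
    # Per-column bisection for a threshold whose exceedance rate ~= target.
    # Mirrors the loop above: a too-dense guess raises lo, a too-sparse one lowers hi.
    hi = x.max(axis=0)    # exceeded by only ~1/batch of the entries
    lo = x.mean(axis=0)   # exceeded by far more entries
    guess = 0.5 * (hi + lo)
    for _ in range(iters):
        guess = 0.5 * (hi + lo)
        realised = (x >= guess).mean(axis=0)
        too_dense = realised > target
        lo = np.where(too_dense, guess, lo)   # too many survivors: raise the floor
        hi = np.where(too_dense, hi, guess)   # too few survivors: lower the ceiling
    return guess

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 4))
hurdles = sparsity_hurdles(x, target=0.05)
rates = (x >= hurdles).mean(axis=0)  # each column lands close to 0.05
```

After ten halvings the threshold interval has shrunk by ~1000x, which is why the simple midpoint guess "wins the day" over the interpolated variants tried above.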
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
from metrics import CorefEvaluator
from document import Document
import json
import os
from ClEval import ClEval, print_clusters
from datetime import datetime
def get_timestamp():
return str(datetime.timestamp(datetime.now())).split('.')[0]
get_timestamp()
```
# Dummy text
```
txt = """
Microsoft (NASDAQ:MSFT) was once considered a mature tech stock that was owned for stability and income instead of growth. But over the past five years, Microsoft stock rallied roughly 300% as a visionary CEO turned its business upside down.
Satya Nadella, who succeeded Steve Ballmer in 2014, reduced Microsoft's dependence on sales of Windows and Office licenses and expanded its ecosystem with a "mobile first, cloud first" mantra. Nadella ditched the company's Windows Phone and smartphone ambitions, launched mobile versions of its apps on iOS and Android, and aggressively expanded its cloud services.
That transformation initially throttled earnings growth, but it paid off as the commercial cloud business -- which included Office 365, Dynamics 365, and Azure -- became its new growth engine. Microsoft also expanded its Surface and Xbox businesses to maintain a healthy presence in the PC and gaming markets, respectively.
Those strengths buoyed Microsoft's results throughout the COVID-19 crisis, and its stock has risen nearly 11% year to date even as the S&P 500 slipped over 12%. But looking further ahead, will Microsoft continue to outperform the market?
"""
```
# Visualization setup
```
from VISUALIZATION import highlighter as viz
from IPython.core.display import display, HTML
display(HTML(open('VISUALIZATION/highlighter/highlight.css').read()))
display(HTML(open('VISUALIZATION/highlighter/highlight.js').read()))
```
# Model setup
- Stanford CoreNLP Deterministic
- NeuralCoref
- SpanBERT Large
```
from MODEL_WRAPPERS.Corenlp import CoreNLP
corenlp = CoreNLP(ram="8G", viz=viz)
from MODEL_WRAPPERS.Neuralcoref import Coref
params = {
"greed": 0.50,
"max_dist": 100,
"max_dist_match": 500
}
neuralcoref = Coref(params, spacy_size="lg", viz=viz)
from MODEL_WRAPPERS.Spanbert import SpanBert
spanbert = SpanBert(viz=viz)
models = [corenlp, neuralcoref, spanbert]
```
# Batch prediction, testing iterating models
```
import timeit
import json
for model in models:
print("Prediction with {}".format(str(model.__class__)))
start = timeit.default_timer()
clusters = predict(model, txt)
#show(model)
stop = timeit.default_timer()
print('Time: ', stop - start)
print()
```
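The start/stop timing pattern in the loop above can be factored into a small helper (a sketch; `predict` and the model wrappers are this notebook's own, so a stand-in function is used below):

```python
import timeit

def time_call(fn, *args, **kwargs):
    # Run fn once and return (result, elapsed_seconds) using a monotonic timer.
    start = timeit.default_timer()
    result = fn(*args, **kwargs)
    return result, timeit.default_timer() - start

# Example with a stand-in function in place of a model's predict:
out, secs = time_call(sum, [1, 2, 3])
```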
# Dataset Loader
```
data_path = os.path.join(os.path.dirname(os.getcwd()), "coreference_data")
dev_path = os.path.join(data_path, "dev_data")
datasets = [os.path.join(dev_path, f) for f in os.listdir(dev_path)]
datasets
```
## News datasets
```
news_path = os.path.join(data_path, "news_data")
datasets = [os.path.join(news_path, f) for f in os.listdir(news_path)]
datasets
```
# Out of domain
```
outdomain = os.path.join(data_path, "out_of_domain")
datasets = [os.path.join(outdomain, f) for f in os.listdir(outdomain)]
datasets
```
# Preco large
```
preco_large = os.path.join(data_path, "big_files", "preco.coreflite")
```
## Generalized API
### Temp data
```
gum, lit, onto, prec = datasets
#model = corenlp # corenlp, neuralcoref, spanbert
onto_test = os.path.join(data_path, "ontonotes_test.coreflite")
gum_news = os.path.join(data_path, "gum_news.coreflite")
gum_no_news = os.path.join(data_path, "gum_no_news.coreflite")
from tqdm import tqdm
#dataset = onto_test
#model = corenlp
#outliers = []
dataset = preco_large
model = spanbert
#GUM_VERSION_2 = ["GUM_interview_ants.conll", "GUM_interview_brotherhood.conll", "GUM_interview_cocktail.conll", "GUM_interview_cyclone.conll", "GUM_interview_daly.conll", "GUM_interview_dungeon.conll", "GUM_interview_gaming.conll", "GUM_interview_herrick.conll", "GUM_interview_hill.conll", "GUM_interview_libertarian.conll", "GUM_interview_licen.conll", "GUM_interview_mckenzie.conll", "GUM_interview_messina.conll", "GUM_interview_peres.conll", "GUM_news_asylum.conll", "GUM_news_crane.conll", "GUM_news_defector.conll", "GUM_news_flag.conll", "GUM_news_hackers.conll", "GUM_news_ie9.conll", "GUM_news_imprisoned.conll", "GUM_news_korea.conll", "GUM_news_nasa.conll", "GUM_news_sensitive.conll", "GUM_news_stampede.conll", "GUM_news_taxes.conll", "GUM_news_warhol.conll", "GUM_news_warming.conll", "GUM_news_worship.conll", "GUM_voyage_athens.conll", "GUM_voyage_chatham.conll", "GUM_voyage_cleveland.conll", "GUM_voyage_coron.conll", "GUM_voyage_cuba.conll", "GUM_voyage_fortlee.conll", "GUM_voyage_merida.conll", "GUM_voyage_oakland.conll", "GUM_voyage_thailand.conll", "GUM_voyage_vavau.conll", "GUM_voyage_york.conll", "GUM_whow_arrogant.conll", "GUM_whow_basil.conll", "GUM_whow_cactus.conll", "GUM_whow_chicken.conll", "GUM_whow_cupcakes.conll", "GUM_whow_flirt.conll", "GUM_whow_glowstick.conll", "GUM_whow_joke.conll", "GUM_whow_languages.conll", "GUM_whow_overalls.conll", "GUM_whow_packing.conll", "GUM_whow_parachute.conll", "GUM_whow_quidditch.conll", "GUM_whow_skittles.conll"]
with open(dataset, "r", encoding="utf8") as data:
modelstr = str(model.__class__).split(".")[1]
datastr = os.path.basename(dataset).split(".")[0]
filename = "{}_{}_{}.txt".format(modelstr, datastr, get_timestamp())
filepath = os.path.join(os.getcwd(), "logs", filename)
all_docs = data.readlines()
#print("Total of {} docs".format(len(all_docs)))
FILES_TO_COUNT = len(all_docs)
dataset_scorer = CorefEvaluator()
#for i, doc in tqdm(enumerate(all_docs)):
for i in tqdm(range(len(all_docs))):
doc = all_docs[i]
doc = json.loads(doc)
docname = doc["doc_key"]
cleval = ClEval(model=model)
cleval.pipeline(doc,
tokens=False,
adjust_wrong_offsets=True,
no_singletons=True)
#cleval.compare()
conll_score, lea_score = cleval.show_score(verbose=False)
print("{} - {}. {}/{}\t\tconll: {}\t lea: {}".format(
datastr, docname, i+1, len(all_docs),
conll_score, lea_score
))
#cleval.show_score(verbose=True)
#print("p/r/f", cleval.scorer.get_prf_conll())
dataset_scorer.update(cleval.doc)
#if i == 19:
# break
dataset_scorer.detailed_score(modelstr, datastr)
print(dataset_scorer.get_conll())
#cleval.write_scores_to_file(modelstr, datastr)
#cleval.show()
g = cleval.gold_clusters
gs = cleval.gold_clusters_no_singletons
viz.raw_render(cleval.tokens, g)
viz.raw_render(cleval.tokens, gs)
from utils import flatten
combined_tokens = zip(cleval.tokens, cleval.pred_tokens())
for c in combined_tokens:
print(c)
pflat = flatten(cleval.pred_clusters)
cl = sorted(pflat, key=lambda x: x[0])
for c in cl:
print(c, cleval.tokens[c[0]-1:c[1]+1])
print(c, cleval.pred_tokens()[c[0]:c[1]+1])
pflat = flatten(cleval.gold_clusters)
cl = sorted(pflat, key=lambda x: x[0])
for c in cl:
print(c, cleval.tokens[c[0]:c[1]+1])
tokens = cleval.tokens
clusters = cleval.pred_clusters
for clust in clusters:
for mention in clust:
print(mention)
viz.raw_render(cleval.tokens, cleval.gold_clusters)
spandoc = cleval.model.doc
spandoc.keys()
len(cleval.tokens)
len(spandoc["document"])
spandoc["document"][0:20]
spandoc["antecedent_indices"][0:20]
```
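`flatten` plus the start-offset sort used above reduce nested clusters to a single ordered span list; a standalone sketch, assuming `utils.flatten` behaves like this and clusters are lists of `(start, end)` token spans as in the cells above:

```python
def flatten(clusters):
    # Collapse a list of clusters (each a list of (start, end) spans) into one list.
    return [span for cluster in clusters for span in cluster]

clusters = [[(3, 4), (10, 12)], [(0, 1)]]
spans = sorted(flatten(clusters), key=lambda s: s[0])
# spans == [(0, 1), (3, 4), (10, 12)]
```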
| github_jupyter |
```
from google.colab import drive
drive.mount('/content/drive')
#run
import os
import re
import gc
import json
import glob
import math
import time
import torch
import os,sys
import random
import string
import pickle
import logging
import itertools
import unicodedata
import torch.nn as nn
from fastai.imports import *
import torch.nn.functional as F
from torch import optim
from collections import Counter
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from numpy.random import choice as random_choice,randint as random_randint, shuffle as random_shuffle, seed as random_seed, rand
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
#run
# Set a logger for the module
LOGGER = logging.getLogger(__name__) # Every log will use the module name
LOGGER.addHandler(logging.StreamHandler())
LOGGER.setLevel(logging.DEBUG)
random_seed(123)
#run
class Configuration(object):
"""Dump stuff here"""
CONFIG = Configuration()
#pylint:disable=attribute-defined-outside-init
# Parameters for the model:
CONFIG.input_layers = 2
CONFIG.output_layers = 2
CONFIG.amount_of_dropout = 0.2
CONFIG.hidden_size = 500
CONFIG.number_of_chars = 100
CONFIG.max_input_len = 60
CONFIG.inverted = False
# Parameters for the dataset
MIN_INPUT_LEN = 5
AMOUNT_OF_NOISE = 0.2 / CONFIG.max_input_len
CHARS = list("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ .")
PADDING = "☕"
# RUN ONCE
#file_list = glob.glob("/content/drive/My Drive/1-billion/training/*.txt")
#os.listdir("/content/drive/My Drive/1-billion/heldout")
```
Run only once
> Reads all the `.txt` files listed in `file_list` and concatenates them into `result.txt`.
```
'''with open("/content/drive/My Drive/1-billion/result.txt", "wb") as outfile:
for f in file_list:
with open(f, "rb") as infile:
outfile.write(infile.read())'''
#run
DATA_FILES_FULL_PATH = '/content/drive/My Drive/1-billion/'
#NEWS_FILE_NAME = 'del_v2.txt'
NEWS_FILE_NAME = '/content/drive/My Drive/1-billion/result.txt'
NEWS_FILE_NAME_CLEAN = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.clean")
NEWS_FILE_NAME_FILTERED = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.filtered")
NEWS_FILE_NAME_SPLIT = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.split")
NEWS_FILE_NAME_TRAIN = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.train")
NEWS_FILE_NAME_VALIDATE = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.validate")
NEWS_FILE_NAME_TEST = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.test")
NEWS_FILE_NAME_TRAIN_20 = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.train_20")
NEWS_FILE_NAME_TRAIN_40 = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.train_40")
NEWS_FILE_NAME_TRAIN_60 = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.train_60")
NEWS_FILE_NAME_TRAIN_80 = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.train_80")
NEWS_FILE_NAME_TRAIN_90 = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.train_90")
NEWS_FILE_NAME_TRAIN_100 = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.train_100")
#NEWS_FILE_NAME_VALIDATE_20 = os.path.join(DATA_FILES_FULL_PATH, "news.2013.en.validate_20")
CHAR_FREQUENCY_FILE_NAME = os.path.join(DATA_FILES_FULL_PATH, "char_frequency.json")
SAVED_MODEL_FILE_NAME = os.path.join(DATA_FILES_FULL_PATH, "keras_spell_e{}.h5") # an HDF5 file
all_letters = string.ascii_letters + " .,);'(-"
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in all_letters
)
def normalizeString(s):
s = unicodeToAscii(s.strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
s = re.sub(r"\s+", r" ", s).strip()
return s
def cleaned_text(text):
result = re.compile(r'[^\S\n]+', re.UNICODE).sub(' ', text.strip())
result = re.compile(r'[\-\˗\֊\‐\‑\‒\–\—\⁻\₋\−\﹣\-]', re.UNICODE).sub('-', result)
result = re.compile(r''|[ʼ՚'‘’‛❛❜ߴߵ`‵´ˊˋ{}{}{}{}{}{}{}{}{}]'.format(chr(768), chr(769), chr(832), chr(833), chr(2387), chr(5151),
chr(5152), chr(65344), chr(8242)), re.UNICODE).sub("'", result)
result = re.compile(r'[\(\[\{\⁽\₍\❨\❪\﹙\(]', re.UNICODE).sub("(", result)
result = re.compile(r'[\)\]\}\⁾\₎\❩\❫\﹚\)]', re.UNICODE).sub(")", result)
result = re.compile(r'[^\w\s{}{}]'.format(re.escape("""¥£₪$€฿₨"""), re.escape("""-!?/;"'%&<>.()[]{}@#:,|=*""")), re.UNICODE).sub('', result)
result = unicodeToAscii(result)
result = normalizeString(result)
return result
def preprocesses_data_clean():
with open(NEWS_FILE_NAME_CLEAN, "w") as clean_data:
for line in open(NEWS_FILE_NAME, 'r', encoding="utf-8"):
cleaned_line = cleaned_text(line)
clean_data.write(cleaned_line + "\n")
```
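The `unicodeToAscii` step above strips accents by NFD-decomposing each character and dropping the combining marks (Unicode category `Mn`); a minimal standalone check of that behaviour:

```python
import string
import unicodedata

ALLOWED = string.ascii_letters + " .,);'(-"

def unicode_to_ascii(s):
    # Drop combining marks (category 'Mn') and any character outside ALLOWED.
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn' and c in ALLOWED
    )

result = unicode_to_ascii("café déjà vu")  # -> "cafe deja vu"
```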
```
# RUN ONCE
def preprocesses_data_analyze_chars():
"""Pre-process the data - step 2 - analyze the characters"""
counter = Counter()
LOGGER.info("Reading data:")
for line in open(NEWS_FILE_NAME_CLEAN, 'r', encoding="utf-8"):
decoded_line = line#.decode('utf-8')
counter.update(decoded_line)
LOGGER.info("Done.\nWriting to file:")
with open(CHAR_FREQUENCY_FILE_NAME, 'w', encoding="utf-8") as output_file:
output_file.write(json.dumps(counter))
most_popular_chars = {key for key, _value in counter.most_common(CONFIG.number_of_chars)}
LOGGER.info("The top %s chars are:", CONFIG.number_of_chars)
LOGGER.info("".join(sorted(most_popular_chars)))
# RUN ONCE
def read_top_chars():
"""Read the top chars we saved to file"""
chars = json.loads(open(CHAR_FREQUENCY_FILE_NAME).read())
counter = Counter(chars)
most_popular_chars = {key for key, _value in counter.most_common(CONFIG.number_of_chars)}
return most_popular_chars
def preprocesses_data_filter():
"""Pre-process the data - step 3 - filter only sentences with the right chars"""
most_popular_chars = read_top_chars()
LOGGER.info("Reading and filtering data:")
with open(NEWS_FILE_NAME_FILTERED, "w") as output_file:
for line in open(NEWS_FILE_NAME_CLEAN, 'r', encoding='utf-8'):
decoded_line = line#.decode('utf-8')
if decoded_line and not bool(set(decoded_line) - most_popular_chars):
output_file.write(line)
LOGGER.info("Done.")
# RUN ONCE
def read_filtered_data():
"""Read the filtered data corpus"""
LOGGER.info("Reading filtered data:")
lines = open(NEWS_FILE_NAME_FILTERED, encoding="utf-8").read().split("\n")
LOGGER.info("Read filtered data - %s lines", len(lines))
return lines
def preprocesses_split_lines():
LOGGER.info("Reading filtered data:")
answers = set()
with open(NEWS_FILE_NAME_SPLIT, "wb") as output_file:
for line in open(NEWS_FILE_NAME_FILTERED,'r'):
#line = _line.decode('utf-8')
while len(line) > MIN_INPUT_LEN:
if len(line) <= CONFIG.max_input_len:
answer = line
line = ""
else:
space_location = line.rfind(" ", MIN_INPUT_LEN, CONFIG.max_input_len - 1)
if space_location > -1:
answer = line[:space_location]
line = line[len(answer) + 1:]
else:
space_location = line.rfind(" ") # no limits this time
if space_location == -1:
break # we are done with this line
else:
line = line[space_location + 1:]
continue
answers.add(answer)
output_file.write(answer.encode('utf-8') + b"\n")
```
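The while-loop in `preprocesses_split_lines` greedily cuts each long line at the last space before the length limit, discarding a leading fragment only when no space falls inside the window. The same policy as a compact standalone function (limits shortened here for illustration):

```python
def split_line(line, min_len=5, max_len=20):
    # Greedily cut `line` into chunks of at most max_len characters,
    # breaking at the last space before the limit.
    answers = []
    while len(line) > min_len:
        if len(line) <= max_len:
            answers.append(line)
            break
        cut = line.rfind(" ", min_len, max_len - 1)
        if cut > -1:
            answers.append(line[:cut])
            line = line[cut + 1:]
        else:
            cut = line.rfind(" ")   # no window limit this time
            if cut == -1:
                break               # done with this line
            line = line[cut + 1:]   # drop the over-long fragment
    return answers

chunks = split_line("the quick brown fox jumps over the lazy dog")
# chunks == ['the quick brown', 'fox jumps over the', 'lazy dog']
```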
Processing the whole dataset consumed almost all of the available RAM on Colab (~25 GB), so I split the data into multiple parts and processed them separately.
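Splitting before processing keeps peak memory proportional to one part rather than the whole corpus; the partitioning itself can be sketched as (part count illustrative):

```python
def partition(items, n_parts):
    # Split a list into n_parts chunks of near-equal size (the last may be shorter).
    size = (len(items) + n_parts - 1) // n_parts  # ceiling division
    return [items[i:i + size] for i in range(0, len(items), size)]

parts = partition(["a", "b", "c", "d", "e"], 2)
# parts == [['a', 'b', 'c'], ['d', 'e']]
```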
```
# RUN ONCE
def preprocess_partition_data():
"""Set asside data for validation"""
answers = open(NEWS_FILE_NAME_SPLIT, encoding="utf-8").read().split("\n")
print('shuffle', end=" ")
random_shuffle(answers)
split_at = len(answers) // 5
with open(NEWS_FILE_NAME_TRAIN_20, "wb") as output_file:
output_file.write("\n".join(answers[:split_at]).encode('utf-8'))
with open(NEWS_FILE_NAME_TRAIN_40, "wb") as output_file:
output_file.write("\n".join(answers[split_at:2*split_at]).encode('utf-8'))
with open(NEWS_FILE_NAME_TRAIN_60, "wb") as output_file:
output_file.write("\n".join(answers[2*split_at:3*split_at]).encode('utf-8'))
with open(NEWS_FILE_NAME_TRAIN_80, "wb") as output_file:
output_file.write("\n".join(answers[3*split_at:4*split_at]).encode('utf-8'))
with open(NEWS_FILE_NAME_TRAIN_90, "wb") as output_file:
output_file.write("\n".join(answers[4*split_at:math.floor(4.5*split_at)]).encode('utf-8'))
with open(NEWS_FILE_NAME_VALIDATE, "wb") as output_file:
output_file.write("\n".join(answers[math.ceil(4.5*split_at):]).encode('utf-8'))
#run
def add_noise_to_string(a_string, amount_of_noise):
"""Add some artificial spelling mistakes to the string"""
if rand() < amount_of_noise * len(a_string):
# Replace a character with a random character
random_char_position = random_randint(len(a_string))
a_string = a_string[:random_char_position] + random_choice(CHARS[:-1]) + a_string[random_char_position + 1:]
if rand() < amount_of_noise * len(a_string):
# Delete a character
random_char_position = random_randint(len(a_string))
a_string = a_string[:random_char_position] + a_string[random_char_position + 1:]
if len(a_string) < CONFIG.max_input_len and rand() < amount_of_noise * len(a_string):
# Add a random character
random_char_position = random_randint(len(a_string))
a_string = a_string[:random_char_position] + random_choice(CHARS[:-1]) + a_string[random_char_position:]
if rand() < amount_of_noise * len(a_string):
# Transpose 2 characters
random_char_position = random_randint(len(a_string) - 1)
a_string = (a_string[:random_char_position] + a_string[random_char_position + 1] + a_string[random_char_position] +
a_string[random_char_position + 2:])
return a_string
def generate_question(answer):
"""Generate a question by adding noise"""
question = add_noise_to_string(answer, AMOUNT_OF_NOISE)
return question, answer
def generate_news_data():
"""Generate some news data"""
print ("Generating Data")
with open(NEWS_FILE_NAME_TRAIN_60,'r') as f:
answers = f.read().split("\n")
print('shuffle', end=" ")
random_shuffle(answers)
print("Done")
pairs = []
lolz = len(answers)
nn=0
start = time.time()
for answer in answers:
if answer != "":
question = add_noise_to_string(answer, AMOUNT_OF_NOISE)
nn+=1
if random_randint(10000) == 8: # Show some progress
print (lolz)
print(nn)
pairs.append((question,answer))
end = time.time()
print(end - start)
tuple_pairs = tuple(pairs)
del pairs, answers
gc.collect()
return tuple_pairs
```
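`add_noise_to_string` applies up to four independent corruptions (substitute, delete, insert, transpose), each gated by a probability scaled to the string length. A seedable, self-contained variant for experimenting — the `rng` parameter and the fixed `chars` alphabet are additions here for reproducibility, not part of the notebook's version:

```python
import random

def add_noise(s, amount, rng=None):
    # Simplified sketch of the noise injector above, with an injectable RNG.
    rng = rng or random.Random(0)
    chars = "abcdefghijklmnopqrstuvwxyz "
    if s and rng.random() < amount * len(s):        # substitute one character
        i = rng.randrange(len(s))
        s = s[:i] + rng.choice(chars) + s[i + 1:]
    if s and rng.random() < amount * len(s):        # delete one character
        i = rng.randrange(len(s))
        s = s[:i] + s[i + 1:]
    if s and rng.random() < amount * len(s):        # insert one character
        i = rng.randrange(len(s))
        s = s[:i] + rng.choice(chars) + s[i:]
    if len(s) > 1 and rng.random() < amount * len(s):  # transpose two neighbours
        i = rng.randrange(len(s) - 1)
        s = s[:i] + s[i + 1] + s[i] + s[i + 2:]
    return s

clean = "the quick brown fox"
noisy = add_noise(clean, 0.2 / 60)
```

Since only the delete and insert steps change the length, the output differs from the input by at most one character in length.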
Uncomment and run the following cell just once. It performs all the preprocessing and saves the preprocessed files to Drive, so the preprocessing does not have to be repeated on every run.
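One way to make "run once" mechanical instead of manual is to guard each step on the existence of its output file. A hypothetical helper (not part of the notebook; the marker-file demo below is illustrative):

```python
import os
import tempfile

def run_once(step, output_path):
    # Call step() only if output_path does not exist yet; report whether it ran.
    if os.path.exists(output_path):
        return False
    step()
    return True

# Demonstration with a throwaway marker file:
marker = os.path.join(tempfile.mkdtemp(), "clean.done")
first = run_once(lambda: open(marker, "w").close(), marker)
second = run_once(lambda: open(marker, "w").close(), marker)
# first is True (the step ran), second is False (output already present)
```

In the notebook this would pair each `preprocesses_*` function with the file it writes, e.g. `run_once(preprocesses_data_clean, NEWS_FILE_NAME_CLEAN)`.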
```
#preprocesses_data_clean()
#preprocesses_data_analyze_chars()
#preprocesses_data_filter()
#preprocesses_split_lines()
#preprocess_partition_data()
pairs = generate_news_data()
#Run everything after this
PAD_token = 0 # Used for padding short sentences
SOS_token = 1 # Start-of-sentence token
EOS_token = 2 # End-of-sentence token
all_letters = string.ascii_letters + " .,);'(-"
alpha2index = {'☕': 0, "SOS":SOS_token, "EOS":EOS_token}
index2alpha = {PAD_token: "☕", SOS_token: "SOS", EOS_token: "EOS"}
for num,let in enumerate(all_letters):
alpha2index[let] = num+3
index2alpha[num+3] = let
num_words = len(alpha2index)
import torch.nn as nn
import torch.nn.functional as F
class EncoderRNN(nn.Module):
def __init__(self, hidden_size, embedding, n_layers=1, dropout=0):
super(EncoderRNN, self).__init__()
self.n_layers = n_layers
self.hidden_size = hidden_size
self.embedding = embedding
self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=dropout, bidirectional=True)
def forward(self, input_seq, input_lengths, hidden=None):
embedded = self.embedding(input_seq)
packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
outputs,hidden = self.gru(packed, hidden)
outputs,_ = nn.utils.rnn.pad_packed_sequence(outputs)
outputs = outputs[:, :, :self.hidden_size] + outputs[:, : ,self.hidden_size:]
return outputs, hidden
def binaryMatrix(L, value=alpha2index[PADDING]):
m =[]
for i,seq in enumerate(L):
m.append([])
for token in seq:
if token==alpha2index[PADDING]:
m[i].append(0)
else:
m[i].append(1)
return m
def indexesFromSentence(sentence):
return [alpha2index[alpha] for alpha in sentence] + [EOS_token]
def zeroPadding(l, fillvalue=PAD_token):
return list(itertools.zip_longest(*l, fillvalue=fillvalue))
def inputVar(L):
indexes_batch = [indexesFromSentence(sentt) for sentt in L]
lengths = torch.tensor([len(index) for index in indexes_batch])
padList = zeroPadding(indexes_batch)
padVar = torch.LongTensor(padList)
return padVar, lengths
def outputVar(L):
indexes_batch = [indexesFromSentence(sentt) for sentt in L]
max_target_len = max([len(index) for index in indexes_batch])
padList = zeroPadding(indexes_batch)
mask = binaryMatrix(padList)
mask = torch.BoolTensor(mask)
padVar = torch.LongTensor(padList)
return padVar, mask, max_target_len
def batch2TrainData(pair_batch):
pair_batch.sort(key=lambda x: len(x[0]), reverse=True)
input_batch, output_batch = [],[]
for pair in pair_batch:
input_batch.append(pair[0])
output_batch.append(pair[1])
inp, lengths = inputVar(input_batch)
output, mask, max_target_len = outputVar(output_batch)
return inp, lengths, output, mask, max_target_len
small_batch_size = 3
batches = batch2TrainData([random.choice(pairs) for _ in range(small_batch_size)])
input_variable, lengths, target_variable, mask, max_target_len = batches
print("input_variable:", input_variable)
print("lengths:", lengths)
print("target_variable:", target_variable)
print("mask:", mask)
print("max_target_len:", max_target_len)
# Luong attention layer
class Attn(nn.Module):
def __init__(self, method, hidden_size):
super(Attn, self).__init__()
self.method = method
if self.method not in ['dot', 'general', 'concat']:
raise ValueError(self.method, "is not an appropriate attention method.")
self.hidden_size = hidden_size
if self.method == 'general':
self.attn = nn.Linear(self.hidden_size, hidden_size)
elif self.method == 'concat':
self.attn = nn.Linear(self.hidden_size * 2, hidden_size)
self.v = nn.Parameter(torch.FloatTensor(hidden_size))
def dot_score(self, hidden, encoder_output):
return torch.sum(hidden * encoder_output, dim=2)
def general_score(self, hidden, encoder_output):
energy = self.attn(encoder_output)
return torch.sum(hidden * energy, dim=2)
def concat_score(self, hidden, encoder_output):
energy = self.attn(torch.cat((hidden.expand(encoder_output.size(0), -1, -1), encoder_output), 2)).tanh()
return torch.sum(self.v * energy, dim=2)
def forward(self, hidden, encoder_outputs):
# Calculate the attention weights (energies) based on the given method
if self.method == 'general':
attn_energies = self.general_score(hidden, encoder_outputs)
elif self.method == 'concat':
attn_energies = self.concat_score(hidden, encoder_outputs)
elif self.method == 'dot':
attn_energies = self.dot_score(hidden, encoder_outputs)
# Transpose max_length and batch_size dimensions
attn_energies = attn_energies.t()
# Return the softmax normalized probability scores (with added dimension)
return F.softmax(attn_energies, dim=1).unsqueeze(1)
class LuongAttnDecoderRNN(nn.Module):
def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers, dropout):
super(LuongAttnDecoderRNN, self).__init__()
self.attn_model = attn_model
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout=dropout
self.embedding = embedding
self.embedding_dropout = nn.Dropout(dropout)
self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=dropout, bidirectional=False)
self.concat = nn.Linear(hidden_size * 2, hidden_size)
self.out = nn.Linear(self.hidden_size, self.output_size)
self.attn = Attn(attn_model, hidden_size)
def forward(self, input_seq, hidden, encoder_outputs):
#input_seq = input_seq.view(1,-1)
embedded = self.embedding(input_seq)
embedded = self.embedding_dropout(embedded)
rnn_output, hidden = self.gru(embedded, hidden)
attn_weights = self.attn(rnn_output, encoder_outputs)
context = attn_weights.bmm(encoder_outputs.transpose(0, 1))
rnn_output = rnn_output.squeeze(0)
context = context.squeeze(1)
concat_input = torch.cat((rnn_output, context), 1)
concat_output = torch.tanh(self.concat(concat_input))
output = self.out(concat_output)
output = F.softmax(output, dim=1)
return output, hidden
def maskNLLLoss(inp, target, mask):
nTotal = mask.sum()
crossEntropy = -torch.log(torch.gather(inp, 1, target.view(-1, 1)).squeeze(1))
loss = crossEntropy.masked_select(mask).mean()
loss = loss.to(device)
return loss, nTotal.item()
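# --- Hedged sanity check (added for clarity; mirrors the arithmetic of maskNLLLoss
# above on toy tensors, without the device handling) ---
# maskNLLLoss averages -log p(target) over unmasked (non-PAD) positions only.
# With probs 0.7 and 0.25 at the target indices and the third position masked
# out, the loss is (-ln 0.7 - ln 0.25) / 2 ~ 0.8715.
import torch
_p = torch.tensor([[0.70, 0.10, 0.10, 0.10],
                   [0.25, 0.25, 0.25, 0.25],
                   [0.10, 0.10, 0.70, 0.10]])
_t = torch.tensor([0, 1, 2])
_m = torch.tensor([True, True, False])
_ce = -torch.log(torch.gather(_p, 1, _t.view(-1, 1)).squeeze(1))
_loss = _ce.masked_select(_m).mean()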
def train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder, embedding, encoder_optimizer,
decoder_optimizer, batch_size, clip, max_length=CONFIG.max_input_len):
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
#Setting device options
input_variable = input_variable.to(device)
target_variable = target_variable.to(device)
lengths = lengths.to(device)
mask = mask.to(device)
#Initialize variables
loss = 0
print_losses = []
n_totals = 0
encoder_outputs, encoder_hidden = encoder(input_variable, lengths)
#print("ENCODER Input - ", input_variable)
decoder_input = torch.LongTensor([[SOS_token for _ in range(batch_size)]])
decoder_input = decoder_input.to(device)
decoder_hidden = encoder_hidden[:decoder.n_layers]
    use_teacher_forcing = random.random() < teacher_forcing_ratio
temp_list = []
nqueries=0 #F
total_fscore = 0 #F
total_accuracy = 0
if use_teacher_forcing:
for t in range(max_target_len):
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_outputs)
decoder_input = target_variable[t].view(1,-1)
mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t])
loss += mask_loss
print_losses.append(mask_loss.item()*nTotal)
n_totals += nTotal
            assert not math.isnan(mask_loss.item()), \
                "mask_loss is NaN; decoder_output={}, target_variable[t]={}".format(decoder_output, target_variable[t])
_, topi = decoder_output.topk(1)
y_true = target_variable[t].cpu()
y_pred = torch.squeeze(topi).cpu()
fscore = f1_score(y_true, y_pred, average='micro')
total_fscore += fscore
accuracy = accuracy_score(y_true, y_pred)
total_accuracy += accuracy
nqueries +=1
else:
for t in range(max_target_len):
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_outputs)#########Check
_, topi = decoder_output.topk(1)
temp_list.append(topi)
decoder_input = torch.LongTensor([[topi[i][0] for i in range(batch_size)]])
decoder_input = decoder_input.to(device)
mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t])
loss += mask_loss
print_losses.append(mask_loss.item()*nTotal)
n_totals+=nTotal
y_true = target_variable[t].cpu()
y_pred = torch.squeeze(topi).cpu()
fscore = f1_score(y_true, y_pred, average='micro')
total_fscore += fscore
accuracy = accuracy_score(y_true, y_pred)
total_accuracy += accuracy
nqueries +=1
F1_score = total_fscore/nqueries
acc_score = total_accuracy/nqueries
loss.backward()
_ = nn.utils.clip_grad_norm_(encoder.parameters(), clip)
_ = nn.utils.clip_grad_norm_(decoder.parameters(), clip)
encoder_optimizer.step()
decoder_optimizer.step()
return sum(print_losses)/n_totals, F1_score,acc_score
def trainIters(model_name, pairs, encoder,decoder, encoder_optimizer, decoder_optimizer, embedding, encoder_n_layers,
decoder_n_layers, save_dir, n_iteration, batch_size, print_every, save_every, clip, corpus_name, loadFilename):
training_batches = [batch2TrainData([random.choice(pairs) for _ in range(batch_size)]) for _ in range(n_iteration)]
print("initializing...")
start_iteration = 1
print_loss = 0
net_f1score = 0 #F
f1batch_count = 0 #F
if loadFilename:
start_iteration = checkpoint['iteration']+1
print("Training...")
for iteration in range(start_iteration,n_iteration+1):
training_batch = training_batches[iteration-1]
input_variable, lengths, target_variable, mask, max_target_len = training_batch
loss, F1_score, acc_score = train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder,
embedding, encoder_optimizer, decoder_optimizer, batch_size, clip)
print_loss+= loss
        # Very occasionally (p ~ 1e-4) reset the running averages so that old
        # history does not dominate the reported statistics
        if random.random() > 0.9999:
            net_f1score = 0
            f1batch_count = 0
            print_loss = 0
            print('---------------------------------------------------zeroing---------------------------------------------------')
net_f1score += F1_score #F
f1batch_count += 1 #F
F1_net = net_f1score/f1batch_count
print_loss_avg = print_loss/f1batch_count
        if iteration % print_every == 0:
            print("Iteration: {}; complete: {:0.1f}%; avg loss: {:.4f}; avg F1: {:.3f}%; accuracy: {:.3f}%".format(
                iteration, iteration/n_iteration*100, print_loss_avg, F1_net*100, acc_score*100))
#print_loss=0
        if iteration % save_every == 0:
            directory = Path(save_dir + model_name + '/' + corpus_name + '/' + "{}-{}_{}".format(encoder_n_layers, decoder_n_layers, hidden_size))
            # Create the checkpoint directory if it does not exist yet, then save
            if not os.path.exists(directory):
                os.makedirs(directory)
            torch.save({
                'iteration' : iteration,
                'en' : encoder.state_dict(),
                'de' : decoder.state_dict(),
                'en_opt' : encoder_optimizer.state_dict(),
                'de_opt' : decoder_optimizer.state_dict(),
                'loss' : loss,
                'embedding' : embedding.state_dict()
            }, '/content/drive/My Drive/1-billion/deep_spell_attn/' + '{}_{}.tar'.format(iteration, 'checkpoint'))
class GreedySearchDecoder(nn.Module):
def __init__(self, encoder, decoder):
super(GreedySearchDecoder, self).__init__()
self.encoder = encoder
self.decoder = decoder
def forward(self, input_seq, input_length, max_length):
encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length)
decoder_hidden = encoder_hidden[:decoder.n_layers]
decoder_input = torch.ones(1, 1, device=device, dtype=torch.long)*SOS_token
all_tokens = torch.zeros([0], device=device, dtype=torch.long)
all_scores = torch.zeros([0], device=device)
for _ in range(max_length):
decoder_output, decoder_hidden = self.decoder(decoder_input, decoder_hidden, encoder_outputs)
decoder_scores, decoder_input = torch.max(decoder_output, dim=1)
all_tokens = torch.cat((all_tokens, decoder_input), dim=0)
all_scores = torch.cat((all_scores, decoder_scores), dim=0)
decoder_input = torch.unsqueeze(decoder_input, 0)
return all_tokens, all_scores
def evaluate(encoder, decoder, searcher, sentence, max_length=CONFIG.max_input_len):
### Format input sentence as a batch
# words -> indexes
indexes_batch = [indexesFromSentence(sentence)]
# Create lengths tensor
lengths = torch.tensor([len(indexes) for indexes in indexes_batch])
# Transpose dimensions of batch to match models' expectations
input_batch = torch.LongTensor(indexes_batch).transpose(0, 1)
# Use appropriate device
input_batch = input_batch.to(device)
lengths = lengths.to(device)
# Decode sentence with searcher
tokens, scores = searcher(input_batch, lengths, max_length)
# indexes -> words
decoded_words = [index2alpha[token.item()] for token in tokens]
return decoded_words
def evaluateInput(encoder, decoder, searcher):
input_sentence = ''
while(1):
try:
# Get input sentence
input_sentence = input('> ')
# Check if it is quit case
if input_sentence == 'q' or input_sentence == 'quit': break
# Normalize sentence
input_sentence = cleaned_text(input_sentence)
# Evaluate sentence
output_words = evaluate(encoder, decoder, searcher, input_sentence)
# Format and print response sentence
output_words[:] = [x for x in output_words if not (x == 'PAD')] #x == 'EOS' or
print('Bot:', ''.join(output_words))
except KeyError:
print("Error: Encountered unknown word.")
model_name = 'seq2seq_model'
attn_model = 'dot'
hidden_size = 500
encoder_n_layers = 2
decoder_n_layers = 2
dropout = 0.2
batch_size = 64
# Set checkpoint to load from; set to None if starting from scratch
loadFilename = "/content/drive/My Drive/1-billion/deep_spell_attn/240000_checkpoint.tar"
checkpoint_iter = 4000
if loadFilename:
# If loading on same machine the model was trained on
checkpoint = torch.load(loadFilename)
# If loading a model trained on GPU to CPU
#checkpoint = torch.load(loadFilename, map_location=torch.device('cpu'))
encoder_sd = checkpoint['en']
decoder_sd = checkpoint['de']
encoder_optimizer_sd = checkpoint['en_opt']
decoder_optimizer_sd = checkpoint['de_opt']
embedding_sd = checkpoint['embedding']
print('Building encoder and decoder ...')
# Initialize word embeddings
embedding = nn.Embedding(num_words, hidden_size)
if loadFilename:
embedding.load_state_dict(embedding_sd)
# Initialize encoder & decoder models
encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout)
decoder = LuongAttnDecoderRNN(attn_model, embedding, hidden_size, num_words, decoder_n_layers, dropout)
if loadFilename:
encoder.load_state_dict(encoder_sd)
decoder.load_state_dict(decoder_sd)
# Use appropriate device
encoder = encoder.to(device)
decoder = decoder.to(device)
print('Models built and ready to go!')
save_dir = "/content/drive/My Drive/1-billion/deep_spell_attn/"
corpus_name="model"
# Configure training/optimization
clip = 50.0
teacher_forcing_ratio = 1
learning_rate = 0.0001
decoder_learning_ratio = 5.0
n_iteration = 300000
print_every = 10
save_every = 2500
# Ensure dropout layers are in train mode
encoder.train()
decoder.train()
# Initialize optimizers
print('Building optimizers ...')
encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate * decoder_learning_ratio)
if loadFilename:
encoder_optimizer.load_state_dict(encoder_optimizer_sd)
decoder_optimizer.load_state_dict(decoder_optimizer_sd)
# If you have CUDA, move the optimizer states to the GPU
for state in encoder_optimizer.state.values():
for k, v in state.items():
if isinstance(v, torch.Tensor):
state[k] = v.cuda()
for state in decoder_optimizer.state.values():
for k, v in state.items():
if isinstance(v, torch.Tensor):
state[k] = v.cuda()
# Run training iterations
print("Starting Training!")
trainIters(model_name, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer,
embedding, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size,
print_every, save_every, clip, corpus_name, loadFilename)
```
Uncomment the cell below if you want to do live interactive testing of the trained model:
```
'''encoder.eval()
decoder.eval()
# Initialize search module
searcher = GreedySearchDecoder(encoder, decoder)
# Begin chatting (uncomment and run the following line to begin)
evaluateInput(encoder, decoder, searcher)'''
```
# Joint TV for multi-contrast MR
This demonstration shows how to do a synergistic reconstruction of two MR images with different contrast. Both MR images show the same underlying anatomy but of course with different contrast. In order to make use of this similarity a joint total variation (TV) operator is used as a regularisation in an iterative image reconstruction approach.
This demo is a Jupyter notebook, i.e. intended to be run step by step.
You could export it as a Python file and run it in one go, but that might
make little sense as the figures are not labelled.
Author: Christoph Kolbitsch, Evangelos Papoutsellis, Edoardo Pasca
First version: 16th of June 2021
CCP PETMR Synergistic Image Reconstruction Framework (SIRF).
Copyright 2021 Rutherford Appleton Laboratory STFC.
Copyright 2021 Physikalisch-Technische Bundesanstalt.
This is software developed for the Collaborative Computational
Project in Positron Emission Tomography and Magnetic Resonance imaging
(http://www.ccppetmr.ac.uk/).
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Initial set-up
```
# Make sure figures appears inline and animations works
%matplotlib notebook
# Make sure everything is installed that we need
!pip install brainweb nibabel --user
# Initial imports etc
import numpy
from numpy.linalg import norm
import matplotlib.pyplot as plt
import random
import os
import sys
import shutil
import brainweb
from tqdm.auto import tqdm
# Import SIRF functionality
import notebook_setup
import sirf.Gadgetron as mr
from sirf_exercises import exercises_data_path
# Import CIL functionality
from cil.framework import AcquisitionGeometry, BlockDataContainer, BlockGeometry, ImageGeometry
from cil.optimisation.functions import Function, OperatorCompositionFunction, SmoothMixedL21Norm, L1Norm, L2NormSquared, BlockFunction, MixedL21Norm, IndicatorBox, TotalVariation, LeastSquares, ZeroFunction
from cil.optimisation.operators import GradientOperator, BlockOperator, ZeroOperator, CompositionOperator, LinearOperator, FiniteDifferenceOperator
from cil.optimisation.algorithms import PDHG, FISTA, GD
from cil.plugins.ccpi_regularisation.functions import FGP_TV
```
# Utilities
```
# First define some handy function definitions
# To make subsequent code cleaner, we have a few functions here. You can ignore
# ignore them when you first see this demo.
def plot_2d_image(idx,vol,title,clims=None,cmap="viridis"):
"""Customized version of subplot to plot 2D image"""
plt.subplot(*idx)
plt.imshow(vol,cmap=cmap)
    if clims is not None:
plt.clim(clims)
plt.colorbar()
plt.title(title)
plt.axis("off")
def crop_and_fill(templ_im, vol):
"""Crop volumetric image data and replace image content in template image object"""
# Get size of template image and crop
idim_orig = templ_im.as_array().shape
idim = (1,)*(3-len(idim_orig)) + idim_orig
offset = (numpy.array(vol.shape) - numpy.array(idim)) // 2
vol = vol[offset[0]:offset[0]+idim[0], offset[1]:offset[1]+idim[1], offset[2]:offset[2]+idim[2]]
# Make a copy of the template to ensure we do not overwrite it
templ_im_out = templ_im.copy()
# Fill image content
templ_im_out.fill(numpy.reshape(vol, idim_orig))
return(templ_im_out)
# This functions creates a regular (pattern='regular') or random (pattern='random') undersampled k-space data
# with an undersampling factor us_factor and num_ctr_lines fully sampled k-space lines in the k-space centre.
# For more information on this function please see the notebook f_create_undersampled_kspace
def create_undersampled_kspace(acq_orig, us_factor, num_ctr_lines, pattern='regular'):
"""Create a regular (pattern='regular') or random (pattern='random') undersampled k-space data"""
# Get ky indices
ky_index = acq_orig.parameter_info('kspace_encode_step_1')
# K-space centre in the middle of ky_index
ky0_index = len(ky_index)//2
# Fully sampled k-space centre
ky_index_subset = numpy.arange(ky0_index-num_ctr_lines//2, ky0_index+num_ctr_lines//2)
if pattern == 'regular':
ky_index_outside = numpy.arange(start=0, stop=len(ky_index), step=us_factor)
elif pattern == 'random':
ky_index_outside = numpy.asarray(random.sample(list(ky_index), len(ky_index)//us_factor))
else:
        raise ValueError('pattern should be "regular" or "random"')
# Combine fully sampled centre and outer undersampled region
ky_index_subset = numpy.concatenate((ky_index_subset, ky_index_outside), axis=0)
    # Ensure k-space points are not repeated
ky_index_subset = numpy.unique(ky_index_subset)
    # Create new k-space data from the function's own input argument
    acq_new = acq_orig.new_acquisition_data(empty=True)
    # Select raw data
    for jnd in range(len(ky_index_subset)):
        cacq = acq_orig.acquisition(ky_index_subset[jnd])
acq_new.append_acquisition(cacq)
acq_new.sort()
return(acq_new)
```
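The ky-index bookkeeping inside `create_undersampled_kspace` can be sketched in plain NumPy. This is a hedged toy (assuming 32 phase-encoding lines, factor-2 regular undersampling and 8 fully sampled centre lines; the `'random'` branch samples the outer indices instead):

```
import numpy

num_lines, us_factor, num_ctr_lines = 32, 2, 8
ky_index = numpy.arange(num_lines)
ky0 = num_lines // 2
# Fully sampled centre block
centre = numpy.arange(ky0 - num_ctr_lines//2, ky0 + num_ctr_lines//2)
# Regularly undersampled outer region
outside = numpy.arange(0, num_lines, us_factor)
# Combine and remove duplicates, as in create_undersampled_kspace
subset = numpy.unique(numpy.concatenate((centre, outside)))
print(len(subset))  # -> 20: all 16 even lines plus the 4 odd centre lines
```

Only the acquisitions whose ky index lands in `subset` are copied into the new `AcquisitionData`, which is what produces the undersampling artefacts we reconstruct away later.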
### Joint TV reconstruction of two MR images
Assume we want to reconstruct two MR images $u$ and $v$ and utilise the similarity between both images using a joint TV ($JTV$) operator. We can then formulate the reconstruction problem as:
$$
\begin{equation}
(u^{*}, v^{*}) \in \underset{u,v}{\operatorname{argmin}} \frac{1}{2} \| A_{1} u - g_{1}\|^{2}_{2} + \frac{1}{2} \| A_{2} v - g_{2}\|^{2}_{2} + \alpha\,\mathrm{JTV}_{\eta, \lambda}(u, v)
\end{equation}
$$
* $JTV_{\eta, \lambda}(u, v) = \sum \sqrt{ \lambda|\nabla u|^{2} + (1-\lambda)|\nabla v|^{2} + \eta^{2}}$
* $A_{1}$, $A_{2}$: __MR__ `AcquisitionModel`
* $g_{1}$, $g_{2}$: __MR__ `AcquisitionData`
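As a hedged numerical sketch of the $JTV$ term (plain NumPy with `numpy.gradient`, not the CIL finite-difference operators used below), the smoothed joint TV of two small images could be evaluated as:

```
import numpy

def smooth_jtv(u, v, lam=0.5, eta=1e-12):
    """Sum of sqrt(lam*|grad u|^2 + (1-lam)*|grad v|^2 + eta^2) over all pixels."""
    gu = numpy.gradient(u)   # [du/dy, du/dx]
    gv = numpy.gradient(v)
    mag2 = lam * (gu[0]**2 + gu[1]**2) + (1 - lam) * (gv[0]**2 + gv[1]**2)
    return numpy.sum(numpy.sqrt(mag2 + eta**2))

u = numpy.ones((4, 4)); v = numpy.ones((4, 4))
print(smooth_jtv(u, v))  # ~ 0 for constant images (only the eta term survives)
```

Note how edges in either image contribute to the same square root, which is what couples $u$ and $v$ and rewards spatially aligned structures.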
### Solving this problem
In order to solve the above minimization problem, we will use an alternating minimisation approach, where one variable is fixed and we solve wrt to the other variable:
$$
\begin{align*}
u^{k+1} & = \underset{u}{\operatorname{argmin}} \frac{1}{2} \| A_{1} u - g_{1}\|^{2}_{2} + \alpha_{1}\,\mathrm{JTV}_{\eta, \lambda}(u, v^{k}) \quad \text{subproblem 1}\\
v^{k+1} & = \underset{v}{\operatorname{argmin}} \frac{1}{2} \| A_{2} v - g_{2}\|^{2}_{2} + \alpha_{2}\,\mathrm{JTV}_{\eta, 1-\lambda}(u^{k+1}, v) \quad \text{subproblem 2}\\
\end{align*}$$
We are going to use a gradient descent approach to solve each of these subproblems alternatingly.
The *regularisation parameter* `alpha` should be different for each subproblem: we use $\alpha_{1}$ and $\alpha_{2}$ in front of the two JTVs, and weights $\lambda$, $1-\lambda$ in the first JTV and $1-\lambda$, $\lambda$ in the second, with $0<\lambda<1$.
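The alternating scheme above can be sketched generically. The following is a hedged toy example on a coupled quadratic objective with plain gradient descent (not the SIRF/CIL operators used later in this notebook): fix $v$ and take a few descent steps in $u$, then fix $u$ and descend in $v$, and repeat.

```
# Toy coupled objective: f(u, v) = (u - 1)^2 + (v - 2)^2 + 0.1*(u - v)^2
def grad_u(u, v): return 2*(u - 1) + 0.2*(u - v)
def grad_v(u, v): return 2*(v - 2) - 0.2*(u - v)

u, v, step = 0.0, 0.0, 0.1
for outer in range(50):
    for _ in range(4):          # subproblem 1: v fixed, descend in u
        u -= step * grad_u(u, v)
    for _ in range(4):          # subproblem 2: u fixed, descend in v
        v -= step * grad_v(u, v)
# u, v approach the joint minimiser (13/12, 23/12) ~ (1.083, 1.917)
```

The coupling term plays the role of the joint TV: each subproblem sees the other variable only as a fixed input, yet the iterates still converge to the joint minimiser.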
This notebook builds on several other notebooks and hence certain steps will be carried out with minimal documentation. If you want more explanations, please refer to the corresponding notebooks mentioned in the following list. The steps we are going to carry out are
- (A) Get a T1 and T2 map from brainweb which we are going to use as ground truth $u_{gt}$ and $v_{gt}$ for our reconstruction (further information: `introduction` notebook)
- (B) Create __MR__ `AcquisitionModel` $A_{1}$ and $A_{2}$ and simulate undersampled __MR__ `AcquisitionData` $g_{1}$ and $g_{2}$ (further information: `acquisition_model_mr_pet_ct` notebook)
- (C) Set up the joint TV reconstruction problem
- (D) Solve the joint TV reconstruction problem (further information on gradient descent: `gradient_descent_mr_pet_ct` notebook)
# (A) Get brainweb data
We will download and use data from the brainweb.
```
fname, url= sorted(brainweb.utils.LINKS.items())[0]
files = brainweb.get_file(fname, url, ".")
data = brainweb.load_file(fname)
brainweb.seed(1337)
for f in tqdm([fname], desc="mMR ground truths", unit="subject"):
vol = brainweb.get_mmr_fromfile(f, petNoise=1, t1Noise=0.75, t2Noise=0.75, petSigma=1, t1Sigma=1, t2Sigma=1)
T2_arr = vol['T2']
T1_arr = vol['T1']
# Normalise image data
T2_arr /= numpy.max(T2_arr)
T1_arr /= numpy.max(T1_arr)
# Display it
plt.figure();
slice_show = T1_arr.shape[0]//2
plot_2d_image([1,2,1], T1_arr[slice_show, :, :], 'T1', cmap="Greys_r")
plot_2d_image([1,2,2], T2_arr[slice_show, :, :], 'T2', cmap="Greys_r")
```
Ok, we have two images with T1 and T2 contrast BUT the brain looks a bit small. Spoiler alert: we are going to reconstruct MR images with a FOV of 256 x 256 voxels. As the above image covers 344 x 344 voxels, the brain would only cover a small part of our MR FOV. In order to ensure the brain fits well into our MR FOV, we are going to scale the images.
In order to do this we are going to use the `rescale` function from the skimage package, scale the image by a factor of 2 and then crop it. To speed things up, we are going to select a single slice already, because our MR scan is also going to be 2D.
```
from skimage.transform import rescale
# Select central slice
central_slice = T1_arr.shape[0]//2
T1_arr = T1_arr[central_slice, :, :]
T2_arr = T2_arr[central_slice, :, :]
# Rescale by a factor 2.0
T1_arr = rescale(T1_arr, 2.0)
T2_arr = rescale(T2_arr, 2.0)
# Select a central ROI of 256 x 256
# We could also skip this because it is automatically done by crop_and_fill(),
# but we would like to check that we did the right thing
idim = [256, 256]
offset = (numpy.array(T1_arr.shape) - numpy.array(idim)) // 2
T1_arr = T1_arr[offset[0]:offset[0]+idim[0], offset[1]:offset[1]+idim[1]]
T2_arr = T2_arr[offset[0]:offset[0]+idim[0], offset[1]:offset[1]+idim[1]]
# Now we make sure our image is of shape (1, 256, 256) again because in __SIRF__ even 2D images
# are expected to have 3 dimensions.
T1_arr = T1_arr[numpy.newaxis,...]
T2_arr = T2_arr[numpy.newaxis,...]
# Display it
plt.figure();
slice_show = T1_arr.shape[0]//2
plot_2d_image([1,2,1], T1_arr[slice_show, :, :], 'T1', cmap="Greys_r")
plot_2d_image([1,2,2], T2_arr[slice_show, :, :], 'T2', cmap="Greys_r")
```
Now, that looks better. Now we have got images we can use for our MR simulation.
# (B) Simulate undersampled MR AcquisitionData
```
# Create MR AcquisitionData
mr_acq = mr.AcquisitionData(exercises_data_path('MR', 'PTB_ACRPhantom_GRAPPA')
+ '/ptb_resolutionphantom_fully_ismrmrd.h5' )
# Calculate CSM
preprocessed_data = mr.preprocess_acquisition_data(mr_acq)
csm = mr.CoilSensitivityData()
csm.smoothness = 200
csm.calculate(preprocessed_data)
# Calculate image template
recon = mr.FullySampledReconstructor()
recon.set_input(preprocessed_data)
recon.process()
im_mr = recon.get_output()
# Display the coil maps
plt.figure();
csm_arr = numpy.abs(csm.as_array())
plot_2d_image([1,2,1], csm_arr[0, 0, :, :], 'Coil 0', cmap="Greys_r")
plot_2d_image([1,2,2], csm_arr[2, 0, :, :], 'Coil 2', cmap="Greys_r")
```
We want to use these coilmaps to simulate our MR raw data. Nevertheless, they are obtained from a phantom scan which unfortunately has got some signal voids inside. If we used these coil maps directly, then these signal voids would cause artefacts. We are therefore going to interpolate the coil maps first.
We are going to calculate a mask from the `ImageData` `im_mr`:
```
im_mr_arr = numpy.squeeze(numpy.abs(im_mr.as_array()))
im_mr_arr /= numpy.max(im_mr_arr)
mask = numpy.zeros_like(im_mr_arr)
mask[im_mr_arr > 0.2] = 1
plt.figure();
plot_2d_image([1,1,1], mask, 'Mask', cmap="Greys_r")
```
Now we are going to interpolate between the values defined by the mask:
```
from scipy.interpolate import griddata
# Target grid for a square image
xi = yi = numpy.arange(0, im_mr_arr.shape[0])
xi, yi = numpy.meshgrid(xi, yi)
# Define grid points in mask
idx = numpy.where(mask == 1)
x = xi[idx[0], idx[1]]
y = yi[idx[0], idx[1]]
# Go through each coil and interpolate linearly
csm_arr = csm.as_array()
for cnd in range(csm_arr.shape[0]):
cdat = csm_arr[cnd, 0, idx[0], idx[1]]
cdat_intp = griddata((x,y), cdat, (xi,yi), method='linear')
csm_arr[cnd, 0, :, :] = cdat_intp
# No extrapolation was done by griddata, so we set these values to 0
csm_arr[numpy.isnan(csm_arr)] = 0
# Display the coil maps
plt.figure();
plot_2d_image([1,2,1], numpy.abs(csm_arr[0, 0, :, :]), 'Coil 0', cmap="Greys_r")
plot_2d_image([1,2,2], numpy.abs(csm_arr[2, 0, :, :]), 'Coil 2', cmap="Greys_r")
```
This is not the world's best interpolation but it will do for the moment. Let's replace the data in the coil maps with the new interpolation:
```
csm.fill(csm_arr);
```
Next we are going to create the two __MR__ `AcquisitionModel` $A_{1}$ and $A_{2}$
```
# Create undersampled acquisition data
us_factor = 2
num_ctr_lines = 30
pattern = 'random'
acq_us = create_undersampled_kspace(preprocessed_data, us_factor, num_ctr_lines, pattern)
# Create two MR acquisition models
A1 = mr.AcquisitionModel(acq_us, im_mr)
A1.set_coil_sensitivity_maps(csm)
A2 = mr.AcquisitionModel(acq_us, im_mr)
A2.set_coil_sensitivity_maps(csm)
```
and simulate undersampled __MR__ `AcquisitionData` $g_{1}$ and $g_{2}$
```
# MR
u_gt = crop_and_fill(im_mr, T1_arr)
g1 = A1.forward(u_gt)
v_gt = crop_and_fill(im_mr, T2_arr)
g2 = A2.forward(v_gt)
```
Lastly we are going to add some noise
```
g1_arr = g1.as_array()
g1_max = numpy.max(numpy.abs(g1_arr))
g1_arr += (numpy.random.random(g1_arr.shape) - 0.5 + 1j*(numpy.random.random(g1_arr.shape) - 0.5)) * g1_max * 0.01
g1.fill(g1_arr)
g2_arr = g2.as_array()
g2_max = numpy.max(numpy.abs(g2_arr))
g2_arr += (numpy.random.random(g2_arr.shape) - 0.5 + 1j*(numpy.random.random(g2_arr.shape) - 0.5)) * g2_max * 0.01
g2.fill(g2_arr)
```
Just to check, we are going to apply the backward/adjoint operation to do a simple image reconstruction.
```
# Simple reconstruction
u_simple = A1.backward(g1)
v_simple = A2.backward(g2)
# Display it
plt.figure();
plot_2d_image([1,2,1], numpy.abs(u_simple.as_array())[0, :, :], '$u_{simple}$', cmap="Greys_r")
plot_2d_image([1,2,2], numpy.abs(v_simple.as_array())[0, :, :], '$v_{simple}$', cmap="Greys_r")
```
These images look quite poor compared to the ground truth input images, because they are reconstructed from an undersampled k-space. In addition, you can see a strange "structure" going through the centre of the brain. This has to do with the coil maps. As mentioned above, our coil maps have these two "holes" in the centre, and this creates these artefacts. Nevertheless, this is not going to be a problem for our reconstruction, as we will see later on.
# (C) Set up the joint TV reconstruction problem
So far we have used mainly __SIRF__ functionality; now we are going to use __CIL__ in order to set up the reconstruction problem and then solve it. In order to be able to reconstruct both $u$ and $v$ at the same time, we will make use of `BlockDataContainer`. In the following we will define an operator which allows us to project a $(u,v)$ `BlockDataContainer` object onto either $u$ or $v$. In the literature, this operator is called **[Projection Map (or Canonical Projection)](https://proofwiki.org/wiki/Definition:Projection_(Mapping_Theory))** and is defined as:
$$ \pi_{i}: X_{1}\times\cdots\times X_{n}\rightarrow X_{i}$$
with
$$\pi_{i}(x_{0},\dots,x_{i},\dots,x_{n}) = x_{i},$$
mapping an element $x$ from a Cartesian Product $X =\prod_{k=1}^{n}X_{k}$ to the corresponding element $x_{i}$ determined by the index $i$.
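A hedged, CIL-free illustration on plain NumPy arrays of what the projection map and its adjoint do (the adjoint embeds an element back into the product space, with zeros in every other component):

```
import numpy

def pi_direct(x_block, i):
    """Project (x_0, ..., x_n) onto component i."""
    return x_block[i]

def pi_adjoint(y, i, n, shape):
    """Embed y into component i of an n-component block; zeros elsewhere."""
    out = [numpy.zeros(shape) for _ in range(n)]
    out[i] = y.copy()
    return out

u = numpy.arange(4.0); v = -numpy.arange(4.0)
assert numpy.array_equal(pi_direct((u, v), 1), v)
back = pi_adjoint(v, 1, 2, v.shape)
assert numpy.array_equal(back[0], numpy.zeros(4))
```

The `ProjectionMap` class defined next does the same thing on `BlockDataContainer` objects, so that each acquisition model $A_i$ can act on just one component of $(u, v)$.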
```
class ProjectionMap(LinearOperator):
def __init__(self, domain_geometry, index, range_geometry=None):
self.index = index
if range_geometry is None:
range_geometry = domain_geometry.geometries[self.index]
super(ProjectionMap, self).__init__(domain_geometry=domain_geometry,
range_geometry=range_geometry)
def direct(self,x,out=None):
if out is None:
return x[self.index]
else:
out.fill(x[self.index])
def adjoint(self,x, out=None):
if out is None:
tmp = self.domain_geometry().allocate()
tmp[self.index].fill(x)
return tmp
else:
out[self.index].fill(x)
```
In the following we define the `SmoothJointTV` class. Our plan is to use the Gradient descent (`GD`) algorithm to solve the above problems. This implements the `__call__` method required to monitor the objective value and the `gradient` method that evaluates the gradient of `JTV`.
For the two subproblems, the first variations with respect to $u$ and $v$ variables are:
$$
\begin{equation}
\begin{aligned}
& A_{1}^{T}*(A_{1}u - g_{1}) - \alpha_{1} \mathrm{div}\bigg( \frac{\nabla u}{|\nabla(u, v)|_{2,\eta,\lambda}}\bigg)\\
& A_{2}^{T}*(A_{2}v - g_{2}) - \alpha_{2} \mathrm{div}\bigg( \frac{\nabla v}{|\nabla(u, v)|_{2,\eta,1-\lambda}}\bigg)
\end{aligned}
\end{equation}
$$
where $$|\nabla(u, v)|_{2,\eta,\lambda} = \sqrt{ \lambda|\nabla u|^{2} + (1-\lambda)|\nabla v|^{2} + \eta^{2}}.$$
```
class SmoothJointTV(Function):
def __init__(self, eta, axis, lambda_par):
r'''
:param eta: smoothing parameter making SmoothJointTV differentiable
'''
super(SmoothJointTV, self).__init__(L=8)
# smoothing parameter
self.eta = eta
# GradientOperator
FDy = FiniteDifferenceOperator(u_simple, direction=1)
FDx = FiniteDifferenceOperator(u_simple, direction=2)
self.grad = BlockOperator(FDy, FDx)
# Which variable to differentiate
self.axis = axis
if self.eta==0:
raise ValueError('Need positive value for eta')
self.lambda_par=lambda_par
def __call__(self, x):
        r"""x is a BlockDataContainer containing the two images (u, v)."""
if not isinstance(x, BlockDataContainer):
raise ValueError('__call__ expected BlockDataContainer, got {}'.format(type(x)))
tmp = numpy.abs((self.lambda_par*self.grad.direct(x[0]).pnorm(2).power(2) + (1-self.lambda_par)*self.grad.direct(x[1]).pnorm(2).power(2)+\
self.eta**2).sqrt().sum())
return tmp
def gradient(self, x, out=None):
denom = (self.lambda_par*self.grad.direct(x[0]).pnorm(2).power(2) + (1-self.lambda_par)*self.grad.direct(x[1]).pnorm(2).power(2)+\
self.eta**2).sqrt()
if self.axis==0:
num = self.lambda_par*self.grad.direct(x[0])
else:
num = (1-self.lambda_par)*self.grad.direct(x[1])
if out is None:
tmp = self.grad.range.allocate()
tmp[self.axis].fill(self.grad.adjoint(num.divide(denom)))
return tmp
else:
self.grad.adjoint(num.divide(denom), out=out[self.axis])
```
Now we are going to put everything together and define our two objective functions which solve the two subproblems which we defined at the beginning
```
alpha1 = 0.05
alpha2 = 0.05
lambda_par = 0.5
eta = 1e-12
# BlockGeometry for the two modalities
bg = BlockGeometry(u_simple, v_simple)
# Projection map, depending on the unknown variable
L1 = ProjectionMap(bg, index=0)
L2 = ProjectionMap(bg, index=1)
# Fidelity terms based on the acquisition data
f1 = 0.5*L2NormSquared(b=g1)
f2 = 0.5*L2NormSquared(b=g2)
# JTV for each of the subproblems
JTV1 = alpha1*SmoothJointTV(eta=eta, axis=0, lambda_par = lambda_par )
JTV2 = alpha2*SmoothJointTV(eta=eta, axis=1, lambda_par = 1-lambda_par)
# Compose the two objective functions
objective1 = OperatorCompositionFunction(f1, CompositionOperator(A1, L1)) + JTV1
objective2 = OperatorCompositionFunction(f2, CompositionOperator(A2, L2)) + JTV2
```
# (D) Solve the joint TV reconstruction problem
```
# We start with zero-filled images
x0 = bg.allocate(0.0)
# We use a fixed step-size for the gradient descent approach
step_size = 0.1
# We are also going to log the value of the objective functions
obj1_val_it = []
obj2_val_it = []
for i in range(10):
gd1 = GD(x0, objective1, step_size=step_size, \
max_iteration = 4, update_objective_interval = 1)
gd1.run(verbose=1)
# We skip the first one because it gets repeated
obj1_val_it.extend(gd1.objective[1:])
    # Here we use a little "trick" so that we can see more easily when each subproblem
    # is being optimised: we append NaNs to the objective log of the subproblem which is
    # currently *not* optimised. The NaNs do not show up in the final plot and hence
    # each subproblem can be seen clearly.
obj2_val_it.extend(numpy.ones_like(gd1.objective[1:])*numpy.nan)
gd2 = GD(gd1.solution, objective2, step_size=step_size, \
max_iteration = 4, update_objective_interval = 1)
gd2.run(verbose=1)
obj2_val_it.extend(gd2.objective[1:])
obj1_val_it.extend(numpy.ones_like(gd2.objective[1:])*numpy.nan)
x0.fill(gd2.solution)
print('* * * * * * Outer Iteration ', i, ' * * * * * *\n')
```
Finally we can look at the images $u_{jtv}$ and $v_{jtv}$ and compare them to the simple reconstruction $u_{simple}$ and $v_{simple}$ and the original ground truth images.
```
u_jtv = numpy.squeeze(numpy.abs(x0[0].as_array()))
v_jtv = numpy.squeeze(numpy.abs(x0[1].as_array()))
plt.figure()
plot_2d_image([2,3,1], numpy.squeeze(numpy.abs(u_simple.as_array()[0, :, :])), '$u_{simple}$', cmap="Greys_r")
plot_2d_image([2,3,2], u_jtv, '$u_{JTV}$', cmap="Greys_r")
plot_2d_image([2,3,3], numpy.squeeze(numpy.abs(u_gt.as_array()[0, :, :])), '$u_{gt}$', cmap="Greys_r")
plot_2d_image([2,3,4], numpy.squeeze(numpy.abs(v_simple.as_array()[0, :, :])), '$v_{simple}$', cmap="Greys_r")
plot_2d_image([2,3,5], v_jtv, '$v_{JTV}$', cmap="Greys_r")
plot_2d_image([2,3,6], numpy.squeeze(numpy.abs(v_gt.as_array()[0, :, :])), '$v_{gt}$', cmap="Greys_r")
```
And let's look at the objective functions
```
plt.figure()
plt.plot(obj1_val_it, 'o-', label='subproblem 1')
plt.plot(obj2_val_it, '+-', label='subproblem 2')
plt.xlabel('Number of iterations')
plt.ylabel('Value of objective function')
plt.title('Objective functions')
plt.legend()
# Logarithmic y-axis
plt.yscale('log')
```
# Next steps
The above is a good demonstration for a synergistic image reconstruction of two different images. The following gives a few suggestions of what to do next and also how to extend this notebook to other applications.
## Number of iterations
In our problem we have several regularisation parameters such as $\alpha_{1}$, $\alpha_{2}$ and $\lambda$. In addition, the number of inner iterations for each subproblem (currently set to 4) and the number of outer iterations (currently set to 10) also determine the final solution. For an infinite number of total iterations the split shouldn't matter, but usually we don't have that much time.
__TODO__: Change the number of iterations and see what happens to the objective functions. For a given number of total iterations, do you think it is better to have a high number of inner or high number of outer iterations? Why? Does this also depend on the undersampling factor?
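One way to explore this trade-off is to fix the total iteration budget and sweep the inner/outer split. A minimal sketch (the budget of 80 is an arbitrary assumption, not a value from this notebook):
```
total_budget = 80  # fixed total number of inner iterations, summed over both subproblems

# Each outer iteration runs both subproblems for `inner` iterations each,
# hence the factor of 2 in the budget.
splits = {inner: total_budget // (2 * inner) for inner in (2, 4, 8, 10)}
for inner, outer in splits.items():
    print(f"inner={inner:2d} -> outer={outer:2d} (total = {2 * inner * outer})")
```
Each `(inner, outer)` pair could then be plugged into the reconstruction loop above to compare the resulting objective curves.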
## Spatial misalignment
In the above example we simulated our data such that there is a perfect spatial match between $u$ and $v$. For real world applications this usually cannot be assumed.
__TODO__: Add spatial misalignment between $u$ and $v$. This can be achieved e.g. by calling `numpy.roll` on `T2_arr` before calling `v_gt = crop_and_fill(im_mr, T2_arr)`. What is the effect on the reconstructed images? For a more "advanced" misalignment, have a look at notebook `BrainWeb`.
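As a minimal illustration of what `numpy.roll` does here (the 4×4 array is a placeholder for the real `T2_arr`):
```
import numpy as np

# Placeholder stand-in for T2_arr; in the notebook this is the loaded T2 volume
T2_arr = np.arange(16, dtype=float).reshape(4, 4)

# Shift by one pixel along each in-plane axis to simulate misalignment.
# numpy.roll wraps around, so nothing is cropped, only displaced.
T2_shifted = np.roll(T2_arr, shift=(1, 1), axis=(0, 1))

print(T2_shifted[0, 0])  # the former bottom-right pixel, 15.0, wrapped to the top-left
```
`T2_shifted` would then be passed to `crop_and_fill` in place of `T2_arr`.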
__TODO__: One way to minimize spatial misalignment is to use image registration to ensure both $u$ and $v$ are well aligned. In the notebook `sirf_registration` you find information about how to register two images and also how to resample one image based on the spatial transformation estimated from the registration. Try to use this to correct for the misalignment you introduced above. For a real world example, at which point in the code would you have to carry out the registration+resampling? (some more information can also be found at the end of notebook `de_Pierro_MAPEM`)
## Pathologies
The images $u$ and $v$ show the same anatomy, just with a different contrast. Clinically, of course, images showing complementary information are more useful.
__TODO__: Add a pathology to either $u$ or $v$ and see how this affects the reconstruction. For something more advanced, have a look at the notebook `BrainWeb`.
## Single anatomical prior
So far we have alternated between two reconstruction problems. Another option is to do a single regularised reconstruction and simply use a previously reconstructed image for regularisation.
__TODO__: Adapt the above code such that $u$ is reconstructed first without regularisation and is then used for a regularised reconstruction of $v$ without any further updates of $u$.
## Complementary k-space trajectories
We used the same k-space trajectory for $u$ and $v$. This is not ideal for such a joint optimisation, because the same k-space trajectory also produces the same pattern of undersampling artefacts. The artefacts in each image will still differ because of the different image content, but it would nevertheless be better if $u$ and $v$ were acquired with different k-space trajectories.
__TODO__: Create two different k-space trajectories and compare the results to a reconstruction using the same k-space trajectories.
__TODO__: Try different undersampling factors and compare results for _regular_ and _random_ undersampling patterns.
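A minimal sketch of how _regular_ and _random_ 1D undersampling masks along the phase-encoding direction might be generated (the number of lines and the acceleration factor are assumptions for illustration):
```
import numpy as np

n_pe = 128   # number of phase-encoding lines (assumed)
R = 4        # acceleration / undersampling factor (assumed)
rng = np.random.default_rng(0)

# Regular undersampling: keep every R-th line
mask_regular = np.zeros(n_pe, dtype=bool)
mask_regular[::R] = True

# Random undersampling: keep the same number of lines, drawn at random
keep = rng.choice(n_pe, size=n_pe // R, replace=False)
mask_random = np.zeros(n_pe, dtype=bool)
mask_random[keep] = True
```
Regular masks produce coherent (aliasing-like) artefacts, random masks produce noise-like artefacts; comparing reconstructions from the two is instructive.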
## Other regularisation options
In this example we used a TV-based regularisation, but of course other regularisers could also be used, such as directional TV.
__TODO__: Have a look at the __CIL__ notebook `02_Dynamic_CT` and adapt the `SmoothJointTV` class above to use directional TV.
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/get_image_resolution.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/get_image_resolution.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/get_image_resolution.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except ImportError:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
naip = ee.Image('USDA/NAIP/DOQQ/m_3712213_sw_10_1_20140613')
Map.setCenter(-122.466123, 37.769833, 17)
Map.addLayer(naip, {'bands': ['N', 'R','G']}, 'NAIP')
naip_resolution = naip.select('N').projection().nominalScale()
print("NAIP resolution: ", naip_resolution.getInfo())
landsat = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
landsat_resolution = landsat.select('B1').projection().nominalScale()
print("Landsat resolution: ", landsat_resolution.getInfo())
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
%matplotlib inline
import importlib
import RooFitMP_analysis
importlib.reload(RooFitMP_analysis)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import glob
from RooFitMP_analysis import *
df_split_timings_1538069 = build_comb_df_split_timing_info('../rootbench/1538069.burrell.nikhef.nl.out')
dfs_1538069 = load_result_file('../rootbench/1538069.burrell.nikhef.nl/RoofitMPworkspace_1549966927.json', match_y_axes=True)
df_total_timings_1538069 = dfs_1538069['BM_RooFit_MP_GradMinimizer_workspace_file']
df_meta_1538069 = df_total_timings_1538069.drop(['real_time', 'real or ideal'], axis=1).dropna().set_index('benchmark_number', drop=True)
df_baseline_timings_1538069 = dfs_1538069['BM_RooFit_RooMinimizer_workspace_file']
vanilla_t, vanilla_t_std = df_baseline_timings_1538069['real_time'].mean() / 1000, df_baseline_timings_1538069['real_time'].std() / 1000
_df = combine_detailed_with_gbench_timings_by_name(df_total_timings_1538069,
df_split_timings_1538069,
{'update': 'update state',
'gradient': 'gradient work',
'terminate': 'terminate'},
add_ideal=['gradient'])
_g = sns.relplot(data=_df,
x='NumCPU', y='time [s]', style="real or ideal",
hue='timing_type',
markers=True, err_style="bars", legend='full', kind="line")
linestyle = {'color': 'black', 'lw': 0.7}
_g.axes[0,0].axhline(vanilla_t, **linestyle)
_g.axes[0,0].axhline(vanilla_t - vanilla_t_std, alpha=0.5, **linestyle)
_g.axes[0,0].axhline(vanilla_t + vanilla_t_std, alpha=0.5, **linestyle)
_timing_types = {
'update': 'update state',
'gradient': 'gradient work',
'terminate': 'terminate',
'partial derivatives': 'partial derivative'
}
_df = combine_detailed_with_gbench_timings_by_name(df_total_timings_1538069,
df_split_timings_1538069,
timing_types=_timing_types,
add_ideal=['gradient'],
exclude_from_rest=['partial derivatives'])
_g = sns.relplot(data=_df,
x='NumCPU', y='time [s]', style="real or ideal",
hue='timing_type',
markers=True, err_style="bars", legend='full', kind="line")
linestyle = {'color': 'black', 'lw': 0.7}
_g.axes[0,0].axhline(vanilla_t, **linestyle)
_g.axes[0,0].axhline(vanilla_t - vanilla_t_std, alpha=0.5, **linestyle)
_g.axes[0,0].axhline(vanilla_t + vanilla_t_std, alpha=0.5, **linestyle)
```
# Run with optConst = 1
```
df_split_timings_1604382 = build_comb_df_split_timing_info('../rootbench/1604382.burrell.nikhef.nl.out')
dfs_1604382 = load_result_file('../rootbench/1604382.burrell.nikhef.nl/RoofitMPworkspaceNoOptConst_1551699016.json', match_y_axes=True)
df_total_timings_1604382 = dfs_1604382['BM_RooFit_MP_GradMinimizer_workspace_file_noOptConst']
df_meta_1604382 = df_total_timings_1604382.drop(['real_time', 'real or ideal'], axis=1).dropna().set_index('benchmark_number', drop=True)
df_baseline_timings_1604382 = dfs_1604382['BM_RooFit_RooMinimizer_workspace_file_noOptConst']
vanilla_t, vanilla_t_std = df_baseline_timings_1604382['real_time'].mean() / 1000, df_baseline_timings_1604382['real_time'].std() / 1000
_timing_types = {
'update': 'update state',
'gradient': 'gradient work',
'terminate': 'terminate',
'partial derivatives': 'partial derivative'
}
_df = combine_detailed_with_gbench_timings_by_name(df_total_timings_1604382,
df_split_timings_1604382,
timing_types=_timing_types,
add_ideal=['gradient'],
exclude_from_rest=['partial derivatives'])
_g = sns.relplot(data=_df,
x='NumCPU', y='time [s]', style="real or ideal",
hue='timing_type',
markers=True, err_style="bars", legend='full', kind="line")
linestyle = {'color': 'black', 'lw': 0.7}
_g.axes[0,0].axhline(vanilla_t, **linestyle)
_g.axes[0,0].axhline(vanilla_t - vanilla_t_std, alpha=0.5, **linestyle)
_g.axes[0,0].axhline(vanilla_t + vanilla_t_std, alpha=0.5, **linestyle)
```
# Large workspace run
```
df_split_timings_1604381 = build_comb_df_split_timing_info('../rootbench/1604381.burrell.nikhef.nl.out')
dfs_1604381 = load_result_file('../rootbench/1604381.burrell.nikhef.nl/RoofitMPworkspace_1551694135.json', match_y_axes=True)
```
Ok, that one failed after 4 runs, but let's see how that went anyway to get a better feeling for it:
```
_df = df_split_timings_1604381[df_split_timings_1604381['timing_type'] == 'gradient work']
_x = np.arange(len(_df))
plt.bar(_x, _df['time [s]'])
_df['time [s]'].describe()
```
# More timings
In the next benchmarks, we added a lot more timing output that needs to be incorporated into the analysis:
- Line search timings: single lines starting with `line_search: `
- update_real timings on queue process
- update_real timings on worker processes
- absolute time stamps (in nanoseconds since epoch) for:
+ start migrad (this line has changed!)
+ end migrad (same)
+ for each worker: lines that contain either two or three stamps:
- time of ask for task and time of rejection
- time of ask for task, time of start, time of end of task
+ maybe the update_real/update_state ones as well, if useful
As an additional bookkeeping complication, we need to run the large workspaces separately for different NumCPU parameters. This both speeds up the runs (they can run on the cluster in parallel) and works around crashes we currently get when running everything in one go: with 10 repeats per NumCPU the whole thing just stops after 4 repeats of the single-worker run, and with 1 repeat it crashes right after the single-worker run (though it does write out the benchmark data to JSON, so that's promising for running the tasks separately).
## Try-out: 32 core large workspace run
```
df_split_timings_1604381 = build_comb_df_split_timing_info('../rootbench/1633603.burrell.nikhef.nl.out')
```
Yeah, so, we're going to have to make adjustments, due to the new timings having been added.
Let's paste the necessary functions back in here and adjust them.
```
def extract_split_timing_info(fn):
"""
Group lines by benchmark iteration, starting from migrad until
after the forks have been terminated.
"""
with open(fn, 'r') as fh:
lines = fh.read().splitlines()
bm_iterations = []
start_indices = []
end_indices = []
for ix, line in enumerate(lines):
if 'start migrad' in line:
if lines[ix - 1] == 'start migrad': # sometimes 'start migrad' appears twice
start_indices.pop()
start_indices.append(ix)
elif 'terminate' in line:
end_indices.append(ix)
if len(start_indices) != len(end_indices):
raise Exception(f"Number of start and end indices unequal (resp. {len(start_indices)} and {len(end_indices)})!")
for ix in range(len(start_indices)):
bm_iterations.append(lines[start_indices[ix] + 1:end_indices[ix] + 1])
return bm_iterations
def separate_partderi_job_time_lines(lines):
partial_derivatives = []
job_times = []
for line in lines:
# if line[:18] == '[#1] DEBUG: -- job':
if line[:9] == 'worker_id' or line[:24] == '[#0] DEBUG: -- worker_id':
partial_derivatives.append(line)
else:
job_times.append(line)
return partial_derivatives, job_times
def group_timing_lines(bm_iteration_lines):
"""
Group lines (from one benchmark iteration) by gradient call,
specifying:
- Update time
- Gradient work time
- For all partial derivatives a sublist of all lines
Finally, the terminate time for the entire bm_iteration is also
returned (last line in the list).
"""
gradient_calls = []
start_indices = []
end_indices = []
# flag to check whether we are still in the same gradient call
in_block = False
for ix, line in enumerate(bm_iteration_lines[:-1]): # -1: leave out terminate line
if not in_block and (line[:9] == 'worker_id' or line[:24] == '[#0] DEBUG: -- worker_id'):
start_indices.append(ix)
in_block = True
elif line[:12] == 'update_state' or line[:27] == '[#0] DEBUG: -- update_state':
end_indices.append(ix)
in_block = False
if len(start_indices) != len(end_indices):
raise Exception(f"Number of start and end indices unequal (resp. {len(start_indices)} and {len(end_indices)})!")
for ix in range(len(start_indices)):
partial_derivatives, job_times = separate_partderi_job_time_lines(bm_iteration_lines[start_indices[ix]:end_indices[ix]])
gradient_calls.append({
'gradient_total': bm_iteration_lines[end_indices[ix]],
'partial_derivatives': partial_derivatives,
'job_times': job_times
})
try:
terminate_line = bm_iteration_lines[-1]
except IndexError:
terminate_line = None
return gradient_calls, terminate_line
def build_df_split_timing_run(timing_grouped_lines_list, terminate_line):
data = {'time [s]': [], 'timing_type': [], 'worker_id': [], 'task': []}
for gradient_call_timings in timing_grouped_lines_list:
words = gradient_call_timings['gradient_total'].split()
data['time [s]'].append(float(words[4][:-2]))
data['timing_type'].append('update state')
data['worker_id'].append(None)
data['task'].append(None)
data['time [s]'].append(float(words[11][:-1]))
data['timing_type'].append('gradient work')
data['worker_id'].append(None)
data['task'].append(None)
for partial_derivative_line in gradient_call_timings['partial_derivatives']:
words = partial_derivative_line.split()
try:
data['worker_id'].append(words[4][:-1])
data['task'].append(words[6][:-1])
data['time [s]'].append(float(words[10][:-1]))
data['timing_type'].append('partial derivative')
except ValueError as e:
print(words)
raise e
words = terminate_line.split()
data['time [s]'].append(float(words[4][:-1]))
data['timing_type'].append('terminate')
data['worker_id'].append(None)
data['task'].append(None)
return pd.DataFrame(data)
def build_dflist_split_timing_info(fn):
bm_iterations = extract_split_timing_info(fn)
dflist = []
for bm in bm_iterations:
grouped_lines, terminate_line = group_timing_lines(bm)
if terminate_line is not None:
dflist.append(build_df_split_timing_run(grouped_lines, terminate_line))
return dflist
def build_comb_df_split_timing_info(fn):
dflist = build_dflist_split_timing_info(fn)
ix = 0
for df in dflist:
df_pardiff = df[df["timing_type"] == "partial derivative"]
N_tasks = len(df_pardiff["task"].unique())
N_gradients = len(df_pardiff) // N_tasks
        gradient_indices = np.hstack([i * np.ones(N_tasks, dtype='int') for i in range(N_gradients)])
df["gradient number"] = pd.Series(dtype='Int64')
df.loc[df["timing_type"] == "partial derivative", "gradient number"] = gradient_indices
df["benchmark_number"] = ix
ix += 1
return pd.concat(dflist)
def combine_split_total_timings(df_total_timings, df_split_timings,
calculate_rest=True, exclude_from_rest=[],
add_ideal=[]):
df_meta = df_total_timings.drop(['real_time', 'real or ideal'], axis=1).dropna().set_index('benchmark_number', drop=True)
df_all_timings = df_total_timings.rename(columns={'real_time': 'time [s]'})
df_all_timings['time [s]'] /= 1000 # convert to seconds
df_all_timings['timing_type'] = 'total'
df_split_sum = {}
for name, df in df_split_timings.items():
df_split_sum[name] = df.groupby('benchmark_number').sum().join(df_meta, on='benchmark_number').reset_index()
df_split_sum[name]['real or ideal'] = 'real'
if name in add_ideal:
df_split_sum[name] = add_ideal_timings(df_split_sum[name], time_col='time [s]')
df_split_sum[name]['timing_type'] = name
# note: sort sorts the *columns* if they are not aligned, nothing happens with the column data itself
df_all_timings = pd.concat([df_all_timings, ] + list(df_split_sum.values()), sort=True)
if calculate_rest:
rest_time = df_all_timings[(df_all_timings['timing_type'] == 'total') & (df_all_timings['real or ideal'] == 'real')].set_index('benchmark_number')['time [s]']
for name, df in df_split_sum.items():
if name not in exclude_from_rest:
rest_time = rest_time - df.set_index('benchmark_number')['time [s]']
df_rest_time = rest_time.to_frame().join(df_meta, on='benchmark_number').reset_index()
df_rest_time['timing_type'] = 'rest'
df_rest_time['real or ideal'] = 'real'
# note: sort sorts the *columns* if they are not aligned, nothing happens with the column data itself
        df_all_timings = pd.concat([df_all_timings, df_rest_time], sort=True)
return df_all_timings
def combine_detailed_with_gbench_timings_by_name(df_gbench, df_detailed, timing_types={}, **kwargs):
detailed_selection = {}
if len(timing_types) == 0:
raise Exception("Please give some timing_types, otherwise this function is pointless.")
for name, timing_type in timing_types.items():
detailed_selection[name] = df_detailed[df_detailed['timing_type'] == timing_type].drop('timing_type', axis=1)
return combine_split_total_timings(df_gbench, detailed_selection, **kwargs)
df_split_timings_1633603 = build_comb_df_split_timing_info('../rootbench/1633603.burrell.nikhef.nl.out')
```
Ok, that works, now to combine them with the json timings.
```
dfs_split_timings_1633594__603 = {}
for i in range(1633594, 1633604):
dfs_split_timings_1633594__603[i] = build_comb_df_split_timing_info(f'../rootbench/{i}.burrell.nikhef.nl.out')
```
Then the json timings:
```
dfs_1633594__603 = {}
for i in range(1633594, 1633604):
json_file = glob.glob(f'../rootbench/{i}.burrell.nikhef.nl/RoofitMPworkspaceNumCPUInConfigFile_*.json')
if len(json_file) == 1:
dfs_1633594__603[i] = RooFitMP_analysis.load_result_file(json_file[0], plot_results=False)
else:
print(json_file)
raise Exception("whups")
dfs_total_timings_1633594__603 = {}
nums = list(range(1, 9)) + [16, 32]
num_from_run = {}
for ix, run in enumerate(range(1633594, 1633604)):
num_from_run[run] = nums[ix]
for run, df in dfs_1633594__603.items():
dfc = df['BM_RooFit_MP_GradMinimizer_workspace_file_NumCPUInConfigFile'].copy()
dfc['NumCPU'] = num_from_run[run]
dfs_total_timings_1633594__603[run] = dfc
```
Combine them per run, because otherwise the benchmark_numbers won't match (they only count within a run).
```
# df_baseline_timings_1633594__603 = pd.concat([df['BM_RooFit_RooMinimizer_workspace_file_NumCPUInConfigFile'] for df in dfs_1633594__603])
# vanilla_t, vanilla_t_std = df_baseline_timings_1633594__603['real_time'].mean() / 1000, df_baseline_timings_1633594__603['real_time'].std() / 1000
_timing_types = {
'update': 'update state',
'gradient': 'gradient work',
'terminate': 'terminate',
# 'partial derivatives': 'partial derivative'
}
_comb = {}
for i in range(1633594, 1633604):
_comb[i] = combine_detailed_with_gbench_timings_by_name(dfs_total_timings_1633594__603[i],
dfs_split_timings_1633594__603[i],
timing_types=_timing_types,
add_ideal=['gradient'],
exclude_from_rest=['partial derivatives'])
_df = pd.concat(_comb.values())
_g = sns.relplot(data=_df,
x='NumCPU', y='time [s]', style="real or ideal",
hue='timing_type',
markers=True, err_style="bars", legend='full', kind="line")
_g.fig.set_size_inches(10,6)
_g.axes[0,0].set_xlim((0,33))
# linestyle = {'color': 'black', 'lw': 0.7}
# _g.axes[0,0].axhline(vanilla_t, **linestyle)
# _g.axes[0,0].axhline(vanilla_t - vanilla_t_std, alpha=0.5, **linestyle)
# _g.axes[0,0].axhline(vanilla_t + vanilla_t_std, alpha=0.5, **linestyle)
```
A rather big rest term now! It's linear though...
Could this be the cost of the extra prints we now introduced all over the place? Especially the update_state ones are a lot.
Only one way to really find out: run again without all those added prints. We're not using them yet anyway.
... But we can do a quick estimate as well. A flush costs about 10 microseconds according to the interwebs. The output files of the runs (which each contain 3 repeats) have the following number of lines:
- 32 workers: 5309080
- 16 workers: 2647773
- 8 workers: 1419130
- 1 worker: 345466
This means that the flushing overhead per minimization is:
```
# lines per output file / 3 repeats per file * 10 microseconds per flush -> seconds
{'32': 5309080/3/100000,
 '16': 2647773/3/100000,
 '8': 1419130/3/100000,
 '1': 345466/3/100000}
```
Ok, so flushing is not the dominant contribution to the rest term, although it may explain the slight upward trend at the end; we could hardly call that significant, though.
Let's look at the line search timings as well. There is one line_search print after each gradient in these runs, which is fine. There is also a whole bunch of other line_search prints at the end of the output file which belong to the RooMinimizer run, so we shouldn't include those. We will therefore modify the function above to only pick up the line_search lines directly below a gradient output block.
Also, we'll add the queue and worker process update_real timings here.
```
def group_timing_lines(bm_iteration_lines):
"""
Group lines (from one benchmark iteration) by gradient call,
specifying:
- Update time on master
- update_real times on queue and workers
- Gradient work time
- For all partial derivatives a sublist of all lines
- After each gradient block, there may be line_search times,
these are also included as a sublist
Finally, the terminate time for the entire bm_iteration is also
returned (last line in the list).
"""
gradient_calls = []
start_indices = []
end_indices = []
line_search_lines = []
update_queue = []
update_worker = []
# flag to check whether we are still in the same gradient call
in_block = False
for ix, line in enumerate(bm_iteration_lines[:-1]): # -1: leave out terminate line
if not in_block and (line[:9] == 'worker_id' or line[:24] == '[#0] DEBUG: -- worker_id'):
start_indices.append(ix)
in_block = True
elif 'update_state' in line:
end_indices.append(ix)
in_block = False
# the rest has nothing to do with the gradient call block, so we don't touch in_block there:
elif 'line_search' in line:
line_search_lines.append(line)
elif 'update_real on queue' in line:
update_queue.append(line)
elif 'update_real on worker' in line:
update_worker.append(line)
if len(start_indices) != len(end_indices):
raise Exception(f"Number of start and end indices unequal (resp. {len(start_indices)} and {len(end_indices)})!")
for ix in range(len(start_indices)):
partial_derivatives, job_times = separate_partderi_job_time_lines(bm_iteration_lines[start_indices[ix]:end_indices[ix]])
gradient_calls.append({
'gradient_total': bm_iteration_lines[end_indices[ix]],
'partial_derivatives': partial_derivatives,
'job_times': job_times
})
try:
terminate_line = bm_iteration_lines[-1]
except IndexError:
terminate_line = None
return gradient_calls, line_search_lines, update_queue, update_worker, terminate_line
def build_df_split_timing_run(timing_grouped_lines_list, line_search_lines, update_queue, update_worker, terminate_line):
data = {'time [s]': [], 'timing_type': [], 'worker_id': [], 'task': []}
for gradient_call_timings in timing_grouped_lines_list:
words = gradient_call_timings['gradient_total'].split()
data['time [s]'].append(float(words[4][:-2]))
data['timing_type'].append('update state')
data['worker_id'].append(None)
data['task'].append(None)
data['time [s]'].append(float(words[11][:-1]))
data['timing_type'].append('gradient work')
data['worker_id'].append(None)
data['task'].append(None)
for partial_derivative_line in gradient_call_timings['partial_derivatives']:
words = partial_derivative_line.split()
try:
data['worker_id'].append(words[4][:-1])
data['task'].append(words[6][:-1])
data['time [s]'].append(float(words[10][:-1]))
data['timing_type'].append('partial derivative')
except ValueError as e:
print(words)
raise e
for line_search_line in line_search_lines:
words = line_search_line.split()
data['time [s]'].append(float(words[1][:-1]))
data['timing_type'].append('line search')
data['worker_id'].append(None)
data['task'].append(None)
for line in update_queue:
words = line.split()
data['time [s]'].append(float(words[6][:-1]))
data['timing_type'].append('update_real on queue')
data['worker_id'].append(None)
data['task'].append(None)
for line in update_worker:
words = line.split()
data['time [s]'].append(float(words[7][:-1]))
data['timing_type'].append('update_real on worker')
data['worker_id'].append(int(words[6][:-1]))
data['task'].append(None)
words = terminate_line.split()
data['time [s]'].append(float(words[4][:-1]))
data['timing_type'].append('terminate')
data['worker_id'].append(None)
data['task'].append(None)
return pd.DataFrame(data)
def build_dflist_split_timing_info(fn):
bm_iterations = extract_split_timing_info(fn)
dflist = []
for bm in bm_iterations:
grouped_lines, line_search_lines, update_queue, update_worker, terminate_line = group_timing_lines(bm)
if terminate_line is not None:
dflist.append(build_df_split_timing_run(grouped_lines, line_search_lines, update_queue, update_worker, terminate_line))
return dflist
dfs_split_timings_1633594__603_v2 = {}
for i in range(1633594, 1633604):
dfs_split_timings_1633594__603_v2[i] = build_comb_df_split_timing_info(f'../rootbench/{i}.burrell.nikhef.nl.out')
_timing_types = {
'update': 'update state',
'gradient': 'gradient work',
'terminate': 'terminate',
'line search': 'line search',
'update_real on queue': 'update_real on queue',
'update_real on worker': 'update_real on worker'
# 'partial derivatives': 'partial derivative'
}
add_ideal_cols = ['gradient']
_comb = {}
for i in range(1633594, 1633604):
_comb[i] = combine_detailed_with_gbench_timings_by_name(dfs_total_timings_1633594__603[i],
dfs_split_timings_1633594__603_v2[i],
timing_types=_timing_types,
exclude_from_rest=['partial derivatives',
'update_real on queue',
'update_real on worker'])
_df = pd.concat(_comb.values())
for col in add_ideal_cols:
df_ideal = RooFitMP_analysis.add_ideal_timings(_df[_df['timing_type'] == col],
time_col='time [s]', return_ideal=True, ideal_stat='mean')
df_ideal['timing_type'] = col
    _df = pd.concat([_df, df_ideal], sort=True)
_g = sns.relplot(data=_df,
x='NumCPU', y='time [s]', style="real or ideal",
hue='timing_type',
markers=True, err_style="bars", legend='full', kind="line")
_g.fig.set_size_inches(16,8)
_g.axes[0,0].set_xlim((0,33))
# linestyle = {'color': 'black', 'lw': 0.7}
# _g.axes[0,0].axhline(vanilla_t, **linestyle)
# _g.axes[0,0].axhline(vanilla_t - vanilla_t_std, alpha=0.5, **linestyle)
# _g.axes[0,0].axhline(vanilla_t + vanilla_t_std, alpha=0.5, **linestyle)
```
Ok, so the line search is not insignificant, but also not the whole story.
The update_real on the queue also grows a bit, but nothing too shocking.
The update_real on the workers is not really useful as a total aggregate, since we're really only interested in the time that each worker individually "wastes", i.e. the time between when the master thinks it's done updating (but then the queue and workers still have to process it) and when the worker starts to do actual work.
Ah, another possibility for the rest term is the constant term optimization. We should measure this as well, because if it is indeed that big, it might become interesting to turn it off, or at least set it to 1 instead of 2. The latter option may make single-core runs a factor of 2 slower, but if that part scales down due to parallelism, it might still be faster in the end!
That still doesn't explain the discrepancy in real vs ideal of the gradient timing though. For that we really need to dive into the absolute timings to find out whether there are significant delays between jobs on the workers.
A simple first tally of some of the output may already tell us some things. For instance, let's look at how much workers spend asking for work when there's nothing left:
```sh
for i in {594..603}; do no_work=$(grep "no work" 1633${i}.burrell.nikhef.nl.out | wc -l); job_done=$(grep "job done" 1633${i}.burrell.nikhef.nl.out | wc -l); echo "$i: $job_done / $no_work"; done
```
The result for these runs is:
```
594: 34512 / 178
595: 34512 / 586
596: 34512 / 6083
597: 34512 / 33391
598: 34512 / 32171
599: 34512 / 63563
600: 34512 / 87326
601: 34512 / 107505
602: 34512 / 231764
603: 34512 / 684300
```
That's quite the rise! To see what kind of impact this has exactly, we'd have to dig deeper, though. It may be pretty much harmless, since all it should do is cause a delay in the queue loop for processing actually useful worker messages, but the useful workers are probably not producing useful results every nanosecond, so small delays in their processing may not be that important.
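A quick way to quantify the rise is the ratio of rejected polls to completed jobs. Assuming the same run-to-NumCPU mapping as the `nums` list above (runs ...594 through ...603 correspond to 1 to 8, then 16 and 32 workers):
```
job_done = 34512  # identical across all runs, from the grep output above
no_work = {1: 178, 2: 586, 3: 6083, 4: 33391, 5: 32171,
           6: 63563, 7: 87326, 8: 107505, 16: 231764, 32: 684300}

for n_cpu, polls in no_work.items():
    print(f"{n_cpu:2d} workers: {polls / job_done:6.2f} rejected polls per completed job")
```
At 32 workers we are already at roughly 20 rejected polls per completed job, which supports the idea that the queue loop spends a growing fraction of its time on useless messages.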
# Timestamps
Let's do one more modification round to also read in the absolute timestamps. We'll first leave out the timestamps of update_real, since we're more interested in the delays between start and work, between work and end and especially between jobs.
```
def extract_split_timing_info(fn):
"""
Group lines by benchmark iteration, starting from migrad until
after the forks have been terminated.
"""
with open(fn, 'r') as fh:
lines = fh.read().splitlines()
bm_iterations = []
start_indices = []
end_indices = []
for ix, line in enumerate(lines):
if 'start migrad' in line:
if 'start migrad' in lines[ix - 1]: # sometimes 'start migrad' appears twice
start_indices.pop()
start_indices.append(ix)
elif 'terminate: ' in line:
end_indices.append(ix)
if len(start_indices) != len(end_indices):
raise Exception(f"Number of start and end indices unequal (resp. {len(start_indices)} and {len(end_indices)})!")
for ix in range(len(start_indices)):
bm_iterations.append(lines[start_indices[ix]:end_indices[ix] + 1])
return bm_iterations
def group_timing_lines(bm_iteration_lines):
"""
Group lines (from one benchmark iteration) by gradient call,
specifying:
- Update time on master
- update_real times on queue and workers
- Gradient work time
- For all partial derivatives a sublist of all lines
- After each gradient block, there may be line_search times,
these are also included as a sublist
Finally, the terminate time for the entire bm_iteration is also
returned (last line in the list).
Timestamps are also printed within each gradient. These are not
further subdivided by this function, but are output as part of
the `gradient_calls` list for further processing elsewhere.
"""
gradient_calls = []
start_indices = []
end_indices = []
line_search_lines = []
update_queue = []
update_worker = []
# flag to check whether we are still in the same gradient call
in_block = False
for ix, line in enumerate(bm_iteration_lines[:-1]): # -1: leave out terminate line
if not in_block and (line[:9] == 'worker_id' or line[:24] == '[#0] DEBUG: -- worker_id'):
start_indices.append(ix)
in_block = True
elif 'update_state' in line:
end_indices.append(ix)
in_block = False
# the rest has nothing to do with the gradient call block, so we don't touch in_block there:
elif 'line_search' in line:
line_search_lines.append(line)
elif 'update_real on queue' in line:
update_queue.append(line)
elif 'update_real on worker' in line:
update_worker.append(line)
elif 'start migrad' in line:
start_migrad_line = line
elif 'end migrad' in line:
end_migrad_line = line
if len(start_indices) != len(end_indices):
raise Exception(f"Number of start and end indices unequal (resp. {len(start_indices)} and {len(end_indices)})!")
for ix in range(len(start_indices)):
partial_derivatives, timestamps = separate_partderi_job_time_lines(bm_iteration_lines[start_indices[ix]:end_indices[ix]])
gradient_calls.append({
'gradient_total': bm_iteration_lines[end_indices[ix]],
'partial_derivatives': partial_derivatives,
'timestamps': timestamps
})
try:
terminate_line = bm_iteration_lines[-1]
except IndexError:
terminate_line = None
return gradient_calls, line_search_lines, update_queue, update_worker, terminate_line, start_migrad_line, end_migrad_line
def build_df_stamps(grouped_lines, start_migrad_line, end_migrad_line):
data = {'timestamp': [], 'stamp_type': [], 'worker_id': []}
words = start_migrad_line.split()
data['timestamp'].append(int(words[6]))
data['stamp_type'].append('start migrad')
data['worker_id'].append(None)
NumCPU = int(words[10])
words = end_migrad_line.split()
data['timestamp'].append(int(words[6]))
data['stamp_type'].append('end migrad')
data['worker_id'].append(None)
for gradient_group in grouped_lines:
for line in gradient_group['timestamps']:
if 'no work' in line:
words = line.split()
data['worker_id'].append(int(words[6]))
data['timestamp'].append(int(words[9]))
data['stamp_type'].append('no job - asked')
data['worker_id'].append(int(words[6]))
data['timestamp'].append(int(words[14]))
data['stamp_type'].append('no job - denied')
elif 'job done' in line:
words = line.split()
data['worker_id'].append(int(words[6]))
data['timestamp'].append(int(words[9][:-1]))
data['stamp_type'].append('job done - asked')
data['worker_id'].append(int(words[6]))
data['timestamp'].append(int(words[12]))
data['stamp_type'].append('job done - started')
data['worker_id'].append(int(words[6]))
data['timestamp'].append(int(words[16]))
data['stamp_type'].append('job done - finished')
elif 'update_real' in line:
# discard for now
pass
else:
raise Exception("got a weird line:\n" + line)
return pd.DataFrame(data), NumCPU
def build_dflist_split_timing_info(fn, extract_fcn=extract_split_timing_info):
bm_iterations = extract_fcn(fn)
dflist = []
dflist_stamps = []
NumCPU_list = []
for bm in bm_iterations:
grouped_lines, line_search_lines, update_queue, update_worker, terminate_line, start_migrad_line, end_migrad_line = group_timing_lines(bm)
if terminate_line is not None:
dflist.append(build_df_split_timing_run(grouped_lines, line_search_lines, update_queue, update_worker, terminate_line))
df_stamps, NumCPU = build_df_stamps(grouped_lines, start_migrad_line, end_migrad_line)
dflist_stamps.append(df_stamps)
NumCPU_list.append(NumCPU)
return dflist, dflist_stamps, NumCPU_list
def build_comb_df_split_timing_info(fn, extract_fcn=extract_split_timing_info):
dflist, dflist_stamps, NumCPU_list = build_dflist_split_timing_info(fn, extract_fcn=extract_fcn)
for ix, df in enumerate(dflist):
df_pardiff = df[df["timing_type"] == "partial derivative"]
N_tasks = len(df_pardiff["task"].unique())
N_gradients = len(df_pardiff) // N_tasks
gradient_indices = np.hstack([i * np.ones(N_tasks, dtype='int') for i in range(N_gradients)])  # list, not generator: np.hstack needs a sequence
df["gradient number"] = pd.Series(dtype='Int64')
df.loc[df["timing_type"] == "partial derivative", "gradient number"] = gradient_indices
dflist_stamps[ix]["gradient number"] = pd.Series(dtype='Int64')
dflist_stamps[ix].loc[dflist_stamps[ix]["stamp_type"] == "job done - asked", "gradient number"] = gradient_indices
dflist_stamps[ix].loc[dflist_stamps[ix]["stamp_type"] == "job done - started", "gradient number"] = gradient_indices
dflist_stamps[ix].loc[dflist_stamps[ix]["stamp_type"] == "job done - finished", "gradient number"] = gradient_indices
df["benchmark_number"] = ix
dflist_stamps[ix]["benchmark_number"] = ix
# assuming the stamps are ordered properly, which I'm pretty sure is correct,
# we can do ffill:
df_stamps = pd.concat(dflist_stamps)
df_stamps.loc[~df_stamps['stamp_type'].str.contains('migrad'), 'gradient number'] = df_stamps.loc[~df_stamps['stamp_type'].str.contains('migrad'), 'gradient number'].fillna(method='ffill')
return pd.concat(dflist), df_stamps
df_split_timings_1633594_v3, df_stamps_1633594 = build_comb_df_split_timing_info('../rootbench/1633594.burrell.nikhef.nl.out')
df_stamps_1633594[df_stamps_1633594['stamp_type'].str.contains('no job')]
```
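The forward-fill at the end of `build_comb_df_split_timing_info` assigns each non-migrad stamp the most recent gradient number, relying on the stamps being in order. A toy illustration of that pattern on a minimal frame:

```python
import pandas as pd

# Toy frame mimicking the stamp structure: only 'job done - asked' rows get a
# gradient number assigned directly; the started/finished stamps that follow
# inherit it via forward fill.
df = pd.DataFrame({
    'stamp_type': ['job done - asked', 'job done - started',
                   'job done - finished', 'job done - asked',
                   'job done - started'],
    'gradient number': [0, None, None, 1, None],
})
df['gradient number'] = df['gradient number'].ffill()
print(df['gradient number'].tolist())  # → [0.0, 0.0, 0.0, 1.0, 1.0]
```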
# ACAT 2019
Some quick runs for ACAT 2019 that require custom analysis.
```
def load_acat19_out(fn):
"""
Just single migrad runs, so no need for further splitting by run.
Adds a dummy terminate line, because the other functions expect
one, as well as dummy start and end migrad lines.
"""
with open(fn, 'r') as fh:
lines = fh.read().splitlines()
print(lines[-1])
lines = ['[#0] DEBUG: -- start migrad at 0 with NumCPU = 0'] + lines
lines.append('[#0] DEBUG: -- end migrad at 0')
lines.append('[#0] DEBUG: -- terminate: 0.0s')
return [lines]
fn_acat19_1552415151 = '/Users/pbos/projects/apcocsm/code/acat19_1552415151.out'
df_split_timings_acat19_1552415151, df_stamps_acat19_1552415151 = build_comb_df_split_timing_info(fn_acat19_1552415151,
extract_fcn=load_acat19_out)
df_split_timings_acat19_1552415151.groupby('timing_type')['time [s]'].sum()
579.161-536.84-15.586-1.225684
```
That's good: at least the rest term is constant. This is important, because in reality this fit will take about an hour, so 25 seconds of overhead is acceptable.
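The rest-term arithmetic from the cell above, made explicit (the numbers are the total, gradient, line search and update times in seconds from the groupby output):

```python
# Rest term = total minus the explicitly timed components (seconds),
# using the numbers from the cell above.
total = 579.161
gradient = 536.84
line_search = 15.586
update = 1.225684

rest = total - gradient - line_search - update
print(f"rest term: {rest:.1f} s")
# As a fraction of a one-hour fit, this overhead is well under 1%:
print(f"fraction of 1h: {rest / 3600:.2%}")
```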
```
df_split_timings_1633594_v3.groupby(['benchmark_number', 'timing_type'])['time [s]'].sum()
536.840400/216.742200, 15.586356/5.496111
```
Indeed, the other terms scale by a factor of about 2.5 to 3, while the rest term remains constant.
# RooMinimizer timings for comparison
```
dfs_baseline_timings_1633594__603 = {}
for run, df in dfs_1633594__603.items():
dfc = df['BM_RooFit_RooMinimizer_workspace_file_NumCPUInConfigFile'].copy()
dfc['NumCPU'] = 1
dfs_baseline_timings_1633594__603[run] = dfc
df_baseline_timings_1633594__603 = pd.concat(dfs_baseline_timings_1633594__603.values())
vanilla_t, vanilla_t_std = df_baseline_timings_1633594__603['real_time'].mean() / 1000, df_baseline_timings_1633594__603['real_time'].std() / 1000
_timing_types = {
'update': 'update state',
'gradient': 'gradient work',
'terminate': 'terminate',
'line search': 'line search',
# 'update_real on queue': 'update_real on queue',
# 'update_real on worker': 'update_real on worker'
# 'partial derivatives': 'partial derivative'
}
add_ideal_cols = ['gradient']
_comb = {}
for i in range(1633594, 1633604):
_comb[i] = combine_detailed_with_gbench_timings_by_name(dfs_total_timings_1633594__603[i],
dfs_split_timings_1633594__603_v2[i],
timing_types=_timing_types,
exclude_from_rest=['partial derivatives',
'update_real on queue',
'update_real on worker'])
_df = pd.concat(_comb.values())
# for col in add_ideal_cols:
# df_ideal = RooFitMP_analysis.add_ideal_timings(_df[_df['timing_type'] == col],
# time_col='time [s]', return_ideal=True, ideal_stat='mean')
# df_ideal['timing_type'] = col
# _df = _df.append(df_ideal, sort=True)
# _g = sns.relplot(data=_df,
# x='NumCPU', y='time [s]', style="real or ideal",
# hue='timing_type',
# markers=True,
# err_style="bars",
# legend='full', kind="line")
_g = sns.catplot(data=_df,
x='NumCPU', y='time [s]', style="real or ideal",
hue='timing_type',
# markers=True,
# err_style="bars",
legend='full', kind="point")
_g.fig.set_size_inches(16,8)
# _g.axes[0,0].set_xlim((0,33))
linestyle = {'color': 'black', 'lw': 0.7}
_g.axes[0,0].axhline(vanilla_t, **linestyle)
_g.axes[0,0].axhline(vanilla_t - vanilla_t_std, alpha=0.5, **linestyle)
_g.axes[0,0].axhline(vanilla_t + vanilla_t_std, alpha=0.5, **linestyle)
(dfs_total_timings_1633594__603[1633594]['real_time'] / 1000).mean(), vanilla_t
fig, ax = plt.subplots(1, 1, sharey=True, figsize=(14, 8))
_timing_types = ['gradient', 'line search', 'update', 'terminate', 'rest']
colors = plt.cm.get_cmap('tab10', 10)
first_nc = True
for nc in _df['NumCPU'].unique():
prev_time = 0
for ix_ttype, ttype in enumerate(_timing_types):
time_mu = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == ttype)]['time [s]'].mean()
time_std = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == ttype)]['time [s]'].std()
if first_nc:
label = ttype
else:
label = ""
ax.bar(str(nc), time_mu, bottom=prev_time, color=colors(ix_ttype), label=label)
prev_time += float(time_mu)
first_nc = False
plt.legend()
```
## calculate speed-up
Speed-up is computed w.r.t. the mean time of the single-core run. **Not w.r.t. the old RooMinimizer timing**, because for that we only measure the total time, not the specific timing types.
```
_df_speedup = {'NumCPU': [], 'timing_type': [], 'speedup': []}
for ttype in _df['timing_type'].unique():
mean_single_core_time = _df[(_df['NumCPU'] == 1) & (_df['timing_type'] == ttype)]['time [s]'].mean()
for nc in _df['NumCPU'].unique():
mean_time = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == ttype)]['time [s]'].mean()
_df_speedup['NumCPU'].append(nc)
_df_speedup['timing_type'].append(ttype)
_df_speedup['speedup'].append(mean_single_core_time / mean_time)
_df_speedup = pd.DataFrame(_df_speedup)
with sns.axes_style('whitegrid'):
_g = sns.catplot(data=_df_speedup, x='NumCPU', y='speedup', hue='timing_type', kind='bar')
_g.fig.set_size_inches(14,8)
```
# Back to timestamps
```
_st = df_stamps_1633594
_st_b0 = _st[_st['benchmark_number'] == 0]
start_stamps = _st[_st['stamp_type'] == 'start migrad']
start_stamps
# _st[~_st['stamp_type'].str.contains('migrad')]
end_stamps = _st[_st['stamp_type'] == 'end migrad']
end_stamps
start_st0 = _st_b0[_st_b0['stamp_type'] == 'start migrad']['timestamp'].iloc[0]
end_st0 = _st_b0[_st_b0['stamp_type'] == 'end migrad']['timestamp'].iloc[0]
(end_st0 - start_st0)/1.e9
_st_b0[~_st_b0['stamp_type'].str.contains('migrad')].head()
```
We can extract two kinds of worker-loop overhead from the "job done" stamps: "explicit" overhead, the time between a job being asked for and started, and "implicit" overhead, the time between finishing one job and asking for the next.
```
_st_b0_jd = _st_b0[_st_b0['stamp_type'].str.contains('job done')]
_st_b0_jd_ask = _st_b0_jd[_st_b0_jd['stamp_type'].str.contains('asked')]
_st_b0_jd_sta = _st_b0_jd[_st_b0_jd['stamp_type'].str.contains('started')]
_st_b0_jd_fin = _st_b0_jd[_st_b0_jd['stamp_type'].str.contains('finished')]
```
Explicit:
```
(_st_b0_jd_sta.reset_index()['timestamp'] - _st_b0_jd_ask.reset_index()['timestamp']).sum()/1.e9
```
Implicit:
```
(_st_b0_jd_ask.iloc[1:].reset_index()['timestamp'] - _st_b0_jd_fin.iloc[:-1].reset_index()['timestamp']).sum()/1.e9
```
Does this sum together with the actual partial derivatives to the total gradient time?
```
_df_split = dfs_split_timings_1633594__603_v2[1633594]
_pd_time = _df_split[(_df_split["timing_type"] == "partial derivative")
& (_df_split["benchmark_number"] == 0)]['time [s]'].sum()
_grad_time = _df_split[(_df_split["timing_type"] == "gradient work")
& (_df_split["benchmark_number"] == 0)]['time [s]'].sum()
_grad_time, _pd_time
```
Oh, it's too much... wait, probably the times between gradients are longer, so they should be filtered out first.
```
impl_overhead = 0
for g in _st_b0_jd_ask['gradient number'].unique():
_ask_g = _st_b0_jd_ask[_st_b0_jd_ask['gradient number'] == g]
_fin_g = _st_b0_jd_fin[_st_b0_jd_fin['gradient number'] == g]
impl_overhead += (_ask_g.iloc[1:].reset_index()['timestamp'] - _fin_g.iloc[:-1].reset_index()['timestamp']).sum()/1.e9
impl_overhead
216.74220000000003 - (0.779148892 + 1.015019575 + 212.982649869)
```
Ok, that makes more sense. The remaining ~2 seconds could be any number of things, like communication delays with the queue, or delays caused by update_real.
# TODO
- Also check the wait time from job rejections.
- Include delays from update_real
# Distribution of parallel task times
```
_df_split = dfs_split_timings_1633594__603_v2[1633594]
_pdtime = _df_split[_df_split['timing_type'] == 'partial derivative']['time [s]']
# fig, ax = plt.subplots(1, 1, figsize=(14,8))
_g = sns.distplot(_pdtime, kde=False)#, ax=ax)
_g.set_yscale('log')
```
# ACAT 2019 talk plots
```
sns.reset_orig()
sns.set_context('talk')
sns.set_style('whitegrid')
_df = combine_detailed_with_gbench_timings_by_name(df_total_timings_1538069,
df_split_timings_1538069,
{'update': 'update state',
'gradient': 'gradient work',
'terminate': 'terminate'})
fig, ax = plt.subplots(1, 1, sharey=True, figsize=(14, 8), dpi=200)
_timing_types = ['gradient', 'update', 'terminate', 'rest']
colors = plt.cm.get_cmap('tab10', 10)
lighten = lambda c: min(0.3*c, 1)
first_nc = True
for nc in _df['NumCPU'].unique():
prev_time = 0
for ix_ttype, ttype in enumerate(_timing_types):
time_mu = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == ttype)]['time [s]'].mean()
time_std = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == ttype)]['time [s]'].std()
if first_nc:
label = ttype
else:
label = ""
color = colors(ix_ttype)
ecolor = (lighten(color[0]), lighten(color[1]), lighten(color[2]), 0.8)
ax.bar(str(nc), time_mu, bottom=prev_time, color=color, label=label,
yerr=time_std, ecolor=ecolor, capsize=5)
prev_time += float(time_mu)
first_nc = False
plt.legend()
ax.set_xlabel('workers')
ax.set_ylabel('time [s]')
_vanilla_t, _vanilla_t_std = df_baseline_timings_1538069['real_time'].mean() / 1000, df_baseline_timings_1538069['real_time'].std() / 1000
linestyle = {'color': 'black', 'lw': 0.7}
ax.axhline(_vanilla_t, **linestyle)
ax.axhline(_vanilla_t - _vanilla_t_std, alpha=0.5, **linestyle)
ax.axhline(_vanilla_t + _vanilla_t_std, alpha=0.5, **linestyle)
_df_speedup = {'NumCPU': [], 'timing type': [], 'mean speedup': []}
ttypes = _df['timing_type'].unique()
for ttype in ttypes:
mean_single_core_time = _df[(_df['NumCPU'] == 1) & (_df['timing_type'] == ttype)]['time [s]'].mean()
for nc in _df['NumCPU'].unique():
mean_time = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == ttype)]['time [s]'].mean()
_df_speedup['NumCPU'].append(nc)
_df_speedup['timing type'].append(ttype)
_df_speedup['mean speedup'].append(mean_single_core_time / mean_time)
_df_speedup = pd.DataFrame(_df_speedup)
_g = sns.catplot(data=_df_speedup, x='NumCPU', y='mean speedup', hue='timing type', kind='point', palette='tab10', hue_order=_timing_types + ['total'])
_g.fig.set_size_inches(11,8)
_g.fig.set_dpi(200)
_g.axes[0,0].set_xlabel('workers')
_timing_types = {
'update': 'update state',
'gradient': 'gradient work',
'terminate': 'terminate',
'line search': 'line search',
}
add_ideal_cols = ['gradient']
_comb = {}
for i in range(1633594, 1633604):
_comb[i] = combine_detailed_with_gbench_timings_by_name(dfs_total_timings_1633594__603[i],
dfs_split_timings_1633594__603_v2[i],
timing_types=_timing_types,
exclude_from_rest=['partial derivatives',
'update_real on queue',
'update_real on worker'])
_df = pd.concat(_comb.values())
fig, ax = plt.subplots(1, 1, sharey=True, figsize=(14, 8), dpi=200)
_timing_types = ['gradient', 'line search', 'update', 'terminate', 'rest']
colors = plt.cm.get_cmap('tab10', 10)
lighten = lambda c: min(0.3*c, 1)
first_nc = True
for nc in _df['NumCPU'].unique():
prev_time = 0
for ix_ttype, ttype in enumerate(_timing_types):
time_mu = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == ttype)]['time [s]'].mean()
time_std = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == ttype)]['time [s]'].std()
if first_nc:
label = ttype
else:
label = ""
color = colors(ix_ttype)
ecolor = (lighten(color[0]), lighten(color[1]), lighten(color[2]), 0.8)
ax.bar(str(nc), time_mu, bottom=prev_time, color=color, label=label,
yerr=time_std, ecolor=ecolor, capsize=5)
prev_time += float(time_mu)
first_nc = False
plt.legend()
ax.set_xlabel('workers')
ax.set_ylabel('time [s]')
# _vanilla_t, _vanilla_t_std = df_baseline_timings_1633594__603['real_time'].mean() / 1000, df_baseline_timings_1633594__603['real_time'].std() / 1000
# linestyle = {'color': 'black', 'lw': 0.7}
# ax.axhline(_vanilla_t, **linestyle)
# ax.axhline(_vanilla_t - _vanilla_t_std, alpha=0.5, **linestyle)
# ax.axhline(_vanilla_t + _vanilla_t_std, alpha=0.5, **linestyle)
_df_speedup = {'NumCPU': [], 'timing type': [], 'speedup': []}
ttypes = _df['timing_type'].unique()
for ttype in ttypes:
mean_single_core_time = _df[(_df['NumCPU'] == 1) & (_df['timing_type'] == ttype)]['time [s]'].mean()
for nc in _df['NumCPU'].unique():
# mean_time = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == ttype)]['time [s]'].mean()
times = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == ttype)]['time [s]']
for time in times:
_df_speedup['NumCPU'].append(nc)
_df_speedup['timing type'].append(ttype)
_df_speedup['speedup'].append(mean_single_core_time / float(time))
_df_speedup = pd.DataFrame(_df_speedup)
_g = sns.catplot(data=_df_speedup, x='NumCPU', y='speedup', hue='timing type',
kind='point', palette='tab10', hue_order=_timing_types + ['total'])
_g.fig.set_size_inches(11,8)
_g.fig.set_dpi(200)
_g.axes[0,0].set_xlabel('workers')
```
Extrapolate total time to longer run times: 10 minutes, 1 hour, 2 hours.
```
# in the actual run
total_runtime = _df[(_df['NumCPU'] == 1) & (_df['timing_type'] == 'total')]['time [s]'].mean() # seconds
gradient = _df[(_df['NumCPU'] == 1) & (_df['timing_type'] == 'gradient')]['time [s]'].mean()
total_overhead = total_runtime - gradient
_df_total_times = _df[(_df['timing_type'] == 'total')].copy()  # .copy() avoids SettingWithCopyWarning on the next line
_df_total_times['timing_type'] = 'total (~4min)'
# extrapolate gradient timings
_df_extrapolate_10min = {'NumCPU': [], 'timing_type': [], 'time [s]': []}
extrap_factor_10min = (10 * 60 - total_overhead) / gradient
for nc in _df['NumCPU'].unique():
times = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == 'gradient')]['time [s]']
for time in times:
_df_extrapolate_10min['NumCPU'].append(nc)
_df_extrapolate_10min['timing_type'].append('total (10min)')
_df_extrapolate_10min['time [s]'].append(float(time) * extrap_factor_10min + total_overhead)
_df_extrapolate_10min = pd.DataFrame(_df_extrapolate_10min)
_df_extrapolate_1h = {'NumCPU': [], 'timing_type': [], 'time [s]': []}
extrap_factor_1h = (60 * 60 - total_overhead) / gradient
for nc in _df['NumCPU'].unique():
times = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == 'gradient')]['time [s]']
for time in times:
_df_extrapolate_1h['NumCPU'].append(nc)
_df_extrapolate_1h['timing_type'].append('total (1h)')
_df_extrapolate_1h['time [s]'].append(float(time) * extrap_factor_1h + total_overhead)
_df_extrapolate_1h = pd.DataFrame(_df_extrapolate_1h)
_df_extrapolate_2h = {'NumCPU': [], 'timing_type': [], 'time [s]': []}
extrap_factor_2h = (2 * 60 * 60 - total_overhead) / gradient
for nc in _df['NumCPU'].unique():
times = _df[(_df['NumCPU'] == nc) & (_df['timing_type'] == 'gradient')]['time [s]']
for time in times:
_df_extrapolate_2h['NumCPU'].append(nc)
_df_extrapolate_2h['timing_type'].append('total (2h)')
_df_extrapolate_2h['time [s]'].append(float(time) * extrap_factor_2h + total_overhead)
_df_extrapolate_2h = pd.DataFrame(_df_extrapolate_2h)
_df_extr_speedup = {'NumCPU': [], 'timing type': [], 'speedup': []}
for extr_time, _df_extr in {'true run': _df_total_times,
'10min': _df_extrapolate_10min,
'1h': _df_extrapolate_1h,
'2h': _df_extrapolate_2h,
}.items():
ttypes = _df_extr['timing_type'].unique()
for ttype in ttypes:
mean_single_core_time = _df_extr[(_df_extr['NumCPU'] == 1) & (_df_extr['timing_type'] == ttype)]['time [s]'].mean()
for nc in _df_extr['NumCPU'].unique():
times = _df_extr[(_df_extr['NumCPU'] == nc) & (_df_extr['timing_type'] == ttype)]['time [s]']
for time in times:
_df_extr_speedup['NumCPU'].append(nc)
_df_extr_speedup['timing type'].append(ttype)
_df_extr_speedup['speedup'].append(mean_single_core_time / float(time))
_df_extr_speedup = pd.DataFrame(_df_extr_speedup)
_g = sns.catplot(data=_df_extr_speedup, x='NumCPU', y='speedup', hue='timing type',
kind='point', palette='tab10')
_g.fig.set_size_inches(11,8)
_g.fig.set_dpi(200)
_g.axes[0,0].set_xlabel('workers')
```
Ok, talk done. Back to:
# Back to back to timestamps
```
_st = df_stamps_1633594
```
Let's try to just plot the timeline to visualize the load on the worker during the whole run.
```
_sta = _st_b0_jd_sta['timestamp'] - start_st0
_fin = _st_b0_jd_fin['timestamp'] - start_st0
fig, ax = plt.subplots(1, 1, figsize=(20, 3))
for ix in range(len(_sta)):
# for ix in range(2000):
ax.barh('worker 0', _fin.iloc[ix] - _sta.iloc[ix], left=_sta.iloc[ix],
color='red', linewidth=0)
ax.set_xlim((0, end_st0 - start_st0))
len(_fin), len(_sta)
```
Goddamn, that takes forever to complete. It seems matplotlib cannot handle too many bars at the same time.
But ok, no shockers here.
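A faster alternative to one `barh` call per job is `broken_barh`, which draws all of a worker's job intervals as a single collection. A sketch with synthetic start/width arrays (the real `_sta`/`_fin` series would slot in the same way):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

# Synthetic job intervals for one worker: 2000 (start, width) pairs in seconds.
rng = np.random.default_rng(0)
starts = np.sort(rng.uniform(0, 100, 2000))
widths = rng.uniform(0.001, 0.04, 2000)

fig, ax = plt.subplots(1, 1, figsize=(20, 3))
# One collection holding all 2000 intervals; far cheaper than 2000 barh calls.
ax.broken_barh(list(zip(starts, widths)), (0, 1), facecolors='red', linewidth=0)
ax.set_yticks([0.5])
ax.set_yticklabels(['worker 0'])
print(len(ax.collections))  # → 1
```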
Now for a multi-worker run:
```
_, df_stamps_1633603 = build_comb_df_split_timing_info('../rootbench/1633603.burrell.nikhef.nl.out')
_, df_stamps_1633602 = build_comb_df_split_timing_info('../rootbench/1633602.burrell.nikhef.nl.out')
_, df_stamps_1633601 = build_comb_df_split_timing_info('../rootbench/1633601.burrell.nikhef.nl.out')
_bench_nr = (df_stamps_1633601['benchmark_number'] == 0)
_start_st = df_stamps_1633601[_bench_nr
& (df_stamps_1633601['stamp_type'] == 'start migrad')
]['timestamp'].iloc[0]
_end_st = df_stamps_1633601[_bench_nr
& (df_stamps_1633601['stamp_type'] == 'end migrad')
]['timestamp'].iloc[0]
_sta = df_stamps_1633601[_bench_nr
& (df_stamps_1633601['stamp_type'].str.contains('started'))
]#['timestamp'] - _start_st
_fin = df_stamps_1633601[_bench_nr
& (df_stamps_1633601['stamp_type'].str.contains('finished'))
]#['timestamp'] - _start_st
len(_sta), len(_fin), _end_st, _start_st
fig, ax = plt.subplots(1, 1, figsize=(20, 5))
ax.set_ylabel('worker')
ax.set_xlim((0, _end_st - _start_st))
batches = int(len(_sta)/1000) + 1  # get around matplotlib limits?
for batch in range(batches):
    for ix in range(batch * 1000, min((batch + 1) * 1000, len(_sta))):  # clamp to avoid IndexError on the last partial batch
        worker = _fin.iloc[ix]["worker_id"]
        ax.barh(worker,
                _fin.iloc[ix]["timestamp"] - _sta.iloc[ix]["timestamp"],
                left=_sta.iloc[ix]["timestamp"] - _start_st,
                color='red', linewidth=0)
import matplotlib as mpl
mpl.use('qt5Agg')
%matplotlib inline
batch_size = 2000
batches = int(len(_sta)/batch_size) + 1  # get around matplotlib limits?
for batch in range(batches):
    fig, ax = plt.subplots(1, 1, figsize=(20, 5))
    ax.set_ylabel('worker')
    ax.set_xlim((0, _end_st - _start_st))
    for ix in range(min(batch * batch_size, len(_sta)),
                    min((batch + 1) * batch_size, len(_sta))):
        worker = _fin.iloc[ix]["worker_id"]
        ax.barh(worker,
                _fin.iloc[ix]["timestamp"] - _sta.iloc[ix]["timestamp"],
                left=_sta.iloc[ix]["timestamp"] - _start_st,
                color='red', linewidth=0)
for grad_nr in df_stamps_1633601['gradient number'].unique():
_grad_nr = (df_stamps_1633601['gradient number'] == grad_nr)
_sta = df_stamps_1633601[_bench_nr & _grad_nr
& (df_stamps_1633601['stamp_type'].str.contains('started'))
]#['timestamp'] - _start_st
_fin = df_stamps_1633601[_bench_nr & _grad_nr
& (df_stamps_1633601['stamp_type'].str.contains('finished'))
]#['timestamp'] - _start_st
fig, ax = plt.subplots(1, 1, figsize=(20, 5))
ax.set_ylabel('worker')
# ax.set_xlim((0, _end_st - _start_st))
for ix in range(len(_sta)):
worker = _fin.iloc[ix]["worker_id"]
ax.barh(worker,
_fin.iloc[ix]["timestamp"] - _sta.iloc[ix]["timestamp"],
left=_sta.iloc[ix]["timestamp"] - _start_st,
color='red', linewidth=0)
_st = df_stamps_1633601
_st_b0 = _st[_st['benchmark_number'] == 0]
_jd = _st_b0['stamp_type'].str.contains('job done')
_st_b0_jd_ask = _st_b0[_jd & _st_b0['stamp_type'].str.contains('asked')]
_st_b0_jd_sta = _st_b0[_jd & _st_b0['stamp_type'].str.contains('started')]
_st_b0_jd_fin = _st_b0[_jd & _st_b0['stamp_type'].str.contains('finished')]
explicit_delay = 0
implicit_delay = 0
for g in _st_b0['gradient number'].unique():
for w in _st_b0['worker_id'].unique():
_sta_g = _st_b0_jd_sta[(_st_b0_jd_sta['gradient number'] == g) & (_st_b0_jd_sta['worker_id'] == w)]
_ask_g = _st_b0_jd_ask[(_st_b0_jd_ask['gradient number'] == g) & (_st_b0_jd_ask['worker_id'] == w)]
_fin_g = _st_b0_jd_fin[(_st_b0_jd_fin['gradient number'] == g) & (_st_b0_jd_fin['worker_id'] == w)]
explicit_delay += (_sta_g.reset_index()['timestamp'] - _ask_g.reset_index()['timestamp']).sum()/1.e9
implicit_delay += (_ask_g.iloc[1:].reset_index()['timestamp'] - _fin_g.iloc[:-1].reset_index()['timestamp']).sum()/1.e9
explicit_delay, implicit_delay
_st = df_stamps_1633602
_st_b0 = _st[_st['benchmark_number'] == 0]
_jd = _st_b0['stamp_type'].str.contains('job done')
_st_b0_jd_ask = _st_b0[_jd & _st_b0['stamp_type'].str.contains('asked')]
_st_b0_jd_sta = _st_b0[_jd & _st_b0['stamp_type'].str.contains('started')]
_st_b0_jd_fin = _st_b0[_jd & _st_b0['stamp_type'].str.contains('finished')]
explicit_delay = 0
implicit_delay = 0
for g in _st_b0['gradient number'].unique():
for w in _st_b0['worker_id'].unique():
_sta_g = _st_b0_jd_sta[(_st_b0_jd_sta['gradient number'] == g) & (_st_b0_jd_sta['worker_id'] == w)]
_ask_g = _st_b0_jd_ask[(_st_b0_jd_ask['gradient number'] == g) & (_st_b0_jd_ask['worker_id'] == w)]
_fin_g = _st_b0_jd_fin[(_st_b0_jd_fin['gradient number'] == g) & (_st_b0_jd_fin['worker_id'] == w)]
explicit_delay += (_sta_g.reset_index()['timestamp'] - _ask_g.reset_index()['timestamp']).sum()/1.e9
implicit_delay += (_ask_g.iloc[1:].reset_index()['timestamp'] - _fin_g.iloc[:-1].reset_index()['timestamp']).sum()/1.e9
explicit_delay, implicit_delay
_st = df_stamps_1633603
_st_b0 = _st[_st['benchmark_number'] == 0]
_jd = _st_b0['stamp_type'].str.contains('job done')
_st_b0_jd_ask = _st_b0[_jd & _st_b0['stamp_type'].str.contains('asked')]
_st_b0_jd_sta = _st_b0[_jd & _st_b0['stamp_type'].str.contains('started')]
_st_b0_jd_fin = _st_b0[_jd & _st_b0['stamp_type'].str.contains('finished')]
explicit_delay = 0
implicit_delay = 0
for g in _st_b0['gradient number'].unique():
for w in _st_b0['worker_id'].unique():
_sta_g = _st_b0_jd_sta[(_st_b0_jd_sta['gradient number'] == g) & (_st_b0_jd_sta['worker_id'] == w)]
_ask_g = _st_b0_jd_ask[(_st_b0_jd_ask['gradient number'] == g) & (_st_b0_jd_ask['worker_id'] == w)]
_fin_g = _st_b0_jd_fin[(_st_b0_jd_fin['gradient number'] == g) & (_st_b0_jd_fin['worker_id'] == w)]
explicit_delay += (_sta_g.reset_index()['timestamp'] - _ask_g.reset_index()['timestamp']).sum()/1.e9
implicit_delay += (_ask_g.iloc[1:].reset_index()['timestamp'] - _fin_g.iloc[:-1].reset_index()['timestamp']).sum()/1.e9
explicit_delay, implicit_delay
```
That's all not that spectacular, except for the 32-worker run, but that might also be due to there actually being 34 processes to run (including the queue and master).
Let's now sum up the delays caused by having to wait for the last worker at the end, plus the corresponding effect at the beginning (the latter is much smaller, since the workers start more synchronously than they finish).
There are several ways you can measure the impact of these worker idle times.
- You can sum the total time that all workers wait, which gives you a sum total of all the potential calculation time you could have used while the slowest worker was still working.
- But this doesn't tell you how fast it could have actually been if load balancing were perfect. To get the time you could have saved if you had perfect load balancing, divide the above number by the number of workers. **This is the actual time lost due to imperfect load balancing.**
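A toy numeric example of the two measures, with synthetic finish times for three workers (the code below is purely illustrative; the real analysis follows in the next cell):

```python
# Synthetic per-gradient finish timestamps (seconds) for three workers.
finish_times = [10.0, 10.5, 12.0]
n_workers = len(finish_times)

slowest = max(finish_times)
# Measure 1: total idle time summed over workers while the slowest still works.
total_wait = sum(slowest - t for t in finish_times)
# Measure 2: wall-clock time lost vs. perfect load balancing
# (the total wait spread back over all workers).
time_lost = total_wait / n_workers
print(total_wait, time_lost)  # 3.5 s of summed idle time, ~1.17 s actually lost
```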
```
_st = df_stamps_1633601
for b in _st['benchmark_number'].dropna().unique():
_st_b0 = _st[_st['benchmark_number'] == b]
_jd = _st_b0['stamp_type'].str.contains('job done')
_st_b0_jd_ask = _st_b0[_jd & _st_b0['stamp_type'].str.contains('asked')]
_st_b0_jd_sta = _st_b0[_jd & _st_b0['stamp_type'].str.contains('started')]
_st_b0_jd_fin = _st_b0[_jd & _st_b0['stamp_type'].str.contains('finished')]
start_delay = 0
finish_delay = 0
for g in _st_b0['gradient number'].dropna().unique():
min_start = _st_b0_jd_sta[(_st_b0_jd_sta['gradient number'] == g)]['timestamp'].min()
max_finish = _st_b0_jd_fin[(_st_b0_jd_fin['gradient number'] == g)]['timestamp'].max()
for w in _st_b0['worker_id'].dropna().unique():
start_delay += (_st_b0_jd_sta[(_st_b0_jd_sta['gradient number'] == g)
& (_st_b0_jd_sta['worker_id'] == w)
]['timestamp'].min() - min_start) / 1.e9
finish_delay += (max_finish - _st_b0_jd_fin[(_st_b0_jd_fin['gradient number'] == g)
& (_st_b0_jd_fin['worker_id'] == w)
]['timestamp'].max()) / 1.e9
print('benchmark', b, 'with', len(_st_b0['worker_id'].dropna().unique()), 'workers')
print('start delay: ', start_delay)
print('finish delay:', finish_delay)
print('total delay: ', start_delay + finish_delay)
print('time lost due to imperfect load balancing: ', (start_delay + finish_delay) / len(_st_b0['worker_id'].dropna().unique()))
```
Ok, so that is negligible.
# CPU time
Suggestion by Wouter: compare the partial derivative wall time with the CPU time. This way we can find out whether something is really being delayed in the calculation itself, or whether we should look elsewhere.
Did another acat19.cpp run including CPU timing, now with 8 workers and without the `mu = 1.5` line.
We have to modify the parsing functions again for this.
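The wall-vs-CPU distinction Wouter suggests can be illustrated with the Python stdlib timers (a sketch, unrelated to the RooFit instrumentation itself):

```python
import time

# perf_counter measures wall-clock time; process_time measures CPU time
# actually spent by this process. A process that is delayed (scheduling,
# I/O, waiting) accrues wall time but little CPU time; a process that is
# genuinely computing accrues both at the same rate.
wall_start, cpu_start = time.perf_counter(), time.process_time()
time.sleep(0.2)  # idle: wall time passes, CPU time barely moves
wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start
print(f"wall: {wall:.3f} s, cpu: {cpu:.3f} s")
# If wall >> CPU for the partial derivatives, the workers are being delayed
# externally; if they are equal, the calculation itself got slower.
```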
```
def build_df_split_timing_run(timing_grouped_lines_list, line_search_lines, update_queue, update_worker, terminate_line):
data = {'time [s]': [], 'timing_type': [], 'worker_id': [], 'task': []}
for gradient_call_timings in timing_grouped_lines_list:
words = gradient_call_timings['gradient_total'].split()
data['time [s]'].append(float(words[4][:-2]))
data['timing_type'].append('update state')
data['worker_id'].append(None)
data['task'].append(None)
data['time [s]'].append(float(words[11][:-1]))
data['timing_type'].append('gradient work')
data['worker_id'].append(None)
data['task'].append(None)
for partial_derivative_line in gradient_call_timings['partial_derivatives']:
words = partial_derivative_line.split()
try:
data['worker_id'].append(words[4][:-1])
data['task'].append(words[6][:-1])
data['time [s]'].append(float(words[10][:-1]))
data['timing_type'].append('partial derivative')
if len(words) > 13:
data['worker_id'].append(words[4][:-1])
data['task'].append(words[6][:-1])
data['time [s]'].append(float(words[13][:-1]))
data['timing_type'].append('partial derivative CPU time')
except ValueError as e:
print(words)
raise e
for line_search_line in line_search_lines:
words = line_search_line.split()
data['time [s]'].append(float(words[1][:-1]))
data['timing_type'].append('line search')
data['worker_id'].append(None)
data['task'].append(None)
for line in update_queue:
words = line.split()
data['time [s]'].append(float(words[6][:-1]))
data['timing_type'].append('update_real on queue')
data['worker_id'].append(None)
data['task'].append(None)
for line in update_worker:
words = line.split()
data['time [s]'].append(float(words[7][:-1]))
data['timing_type'].append('update_real on worker')
data['worker_id'].append(int(words[6][:-1]))
data['task'].append(None)
words = terminate_line.split()
data['time [s]'].append(float(words[4][:-1]))
data['timing_type'].append('terminate')
data['worker_id'].append(None)
data['task'].append(None)
return pd.DataFrame(data)
fn_acat19_1552664372 = '/Users/pbos/projects/apcocsm/code/acat19_1552664372.out'
df_split_timings_acat19_1552664372, df_stamps_acat19_1552664372 = build_comb_df_split_timing_info(fn_acat19_1552664372,
extract_fcn=load_acat19_out)
df_split_timings_acat19_1552664372[df_split_timings_acat19_1552664372['timing_type'] == 'partial derivative']['time [s]'].sum(),\
df_split_timings_acat19_1552664372[df_split_timings_acat19_1552664372['timing_type'] == 'partial derivative CPU time']['time [s]'].sum()
```
Those are almost completely equal, which is good news: we don't have to look anywhere else, like in scheduling or something. There is really something strange going on in the partial derivatives themselves that makes their total time grow.
Note that these numbers are not comparable in an absolute sense to the earlier big fit runs, because those were on the scheduled nodes, while this was run on the stoomboot headnode! Indeed, that seems to have 2400 MHz cores, while google benchmark reports 1996 MHz (weird, but ok) for the compute nodes.
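As an aside, the wall-versus-CPU comparison Wouter suggested can be sketched in plain Python. A compute-bound task accumulates CPU time at the same rate as wall time, while a task that is waiting (I/O, scheduling, locks) accumulates wall time only:

```python
import time

def timed(fn):
    # measure wall time and process CPU time around a call
    w0, c0 = time.perf_counter(), time.process_time()
    fn()
    return time.perf_counter() - w0, time.process_time() - c0

wall_sleep, cpu_sleep = timed(lambda: time.sleep(0.2))
wall_busy, cpu_busy = timed(lambda: sum(i * i for i in range(10**6)))
print(f"sleep: wall={wall_sleep:.3f}s cpu={cpu_sleep:.3f}s")  # cpu near zero
print(f"busy:  wall={wall_busy:.3f}s cpu={cpu_busy:.3f}s")    # roughly equal
```

If the two sums below come out almost equal, as they do here, the partial derivatives really are burning CPU and nothing is waiting.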
# What about the big REST block?
We may be onto the culprit. Put timing around the SetInitialGradient function...
Shit, nope. See the `acat19_1552671011.out` file. Three times ~0.0003 seconds. That's nothing. In hindsight, it makes sense: SetInitialGradient never calls the function (the likelihood), so it's not at all expensive.
Wouter suspected that somehow **Simplex** gets run somewhere before Migrad, but I haven't been able to find it anywhere... at least not starting from `RooMinimizer::migrad`. Maybe it is somewhere else, perhaps somewhere in the Migrad/Minuit setup.
We could time the Migrad Seed creation to be sure...
But maybe it's easier to just throw in some timestamps, then at least we can pin down where in migrad things are happening.
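The timestamps in question are nanosecond epoch values (hence all the `/ 1.e9` conversions in this notebook). A minimal Python sketch of emitting and parsing one such stamp; the log wording here is made up for illustration, not the actual format the C++ side writes:

```python
import time

def emit_stamp(worker_id):
    # hypothetical log wording, for illustration only
    return f"worker_id: {worker_id}, timestamp: {time.time_ns()}"

def parse_stamp(line):
    words = line.split()
    # strip the trailing comma from the worker id, as done throughout this notebook
    return int(words[1][:-1]), int(words[3])

wid, ts = parse_stamp(emit_stamp(3))
print(wid, ts / 1.e9)  # worker id, and the stamp converted to seconds
```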
------
In the meantime, why not take a look at those...
# update_state timestamps
```
def group_timing_lines(bm_iteration_lines):
"""
Group lines (from one benchmark iteration) by gradient call,
specifying:
- Update time on master
- update_real times on queue and workers
- Gradient work time
- For all partial derivatives a sublist of all lines
- After each gradient block, there may be line_search times,
these are also included as a sublist
Finally, the terminate time for the entire bm_iteration is also
returned (last line in the list).
In each gradient, also timestamps are printed. These are not
further subdivided in this function, but are output as part of
the `gradient_calls` list for further processing elsewhere.
"""
gradient_calls = []
start_indices = []
end_indices = []
line_search_lines = []
update_queue = []
update_worker = []
# flag to check whether we are still in the same gradient call
in_block = False
for ix, line in enumerate(bm_iteration_lines[:-1]): # -1: leave out terminate line
if not in_block and (line[:9] == 'worker_id' or line[:24] == '[#0] DEBUG: -- worker_id'):
start_indices.append(ix)
in_block = True
elif 'update_state' in line:
end_indices.append(ix)
in_block = False
# the rest has nothing to do with the gradient call block, so we don't touch in_block there:
elif 'line_search' in line:
line_search_lines.append(line)
elif 'update_real on queue' in line:
update_queue.append(line)
elif 'update_real on worker' in line:
update_worker.append(line)
elif 'start migrad' in line:
start_migrad_line = line
elif 'end migrad' in line:
end_migrad_line = line
if len(start_indices) != len(end_indices):
raise Exception(f"Number of start and end indices unequal (resp. {len(start_indices)} and {len(end_indices)})!")
for ix in range(len(start_indices)):
partial_derivatives, timestamps = separate_partderi_job_time_lines(bm_iteration_lines[start_indices[ix]:end_indices[ix]])
gradient_calls.append({
'gradient_total': bm_iteration_lines[end_indices[ix]],
'partial_derivatives': partial_derivatives,
'timestamps': timestamps
})
try:
terminate_line = bm_iteration_lines[-1]
except IndexError:
terminate_line = None
special_lines = dict(terminate_line=terminate_line,
start_migrad_line=start_migrad_line,
end_migrad_line=end_migrad_line)
return gradient_calls, line_search_lines, update_queue, update_worker, special_lines
def build_df_stamps(grouped_lines, special_lines, update_queue, update_worker):
data = {'timestamp': [], 'stamp_type': [], 'worker_id': []}
words = special_lines["start_migrad_line"].split()
shift = 3 if '[#' in words[0] else 0
data['timestamp'].append(int(words[3 + shift]))
data['stamp_type'].append('start migrad')
data['worker_id'].append(None)
if len(words) > 10:
NumCPU = int(words[10])
else:
NumCPU = 0
words = special_lines["end_migrad_line"].split()
shift = 3 if '[#' in words[0] else 0
data['timestamp'].append(int(words[3 + shift]))
data['stamp_type'].append('end migrad')
data['worker_id'].append(None)
for gradient_group in grouped_lines:
for line in gradient_group['timestamps']:
if 'no work' in line:
words = line.split()
shift = 3 if '[#' in words[0] else 0
data['worker_id'].append(int(words[3 + shift]))
data['timestamp'].append(int(words[6 + shift]))
data['stamp_type'].append('no job - asked')
data['worker_id'].append(int(words[3 + shift]))
data['timestamp'].append(int(words[11 + shift]))
data['stamp_type'].append('no job - denied')
elif 'job done' in line:
words = line.split()
shift = 3 if '[#' in words[0] else 0
data['worker_id'].append(int(words[3 + shift]))
data['timestamp'].append(int(words[6 + shift][:-1]))
data['stamp_type'].append('job done - asked')
data['worker_id'].append(int(words[3 + shift]))
data['timestamp'].append(int(words[9 + shift]))
data['stamp_type'].append('job done - started')
data['worker_id'].append(int(words[3 + shift]))
data['timestamp'].append(int(words[13 + shift]))
data['stamp_type'].append('job done - finished')
else:
raise Exception("got a weird line:\n" + line)
for line in update_queue:
words = line.split()
shift = 3 if '[#' in words[0] else 0
data['worker_id'].append(None)
data['timestamp'].append(int(words[5 + shift]))
data['stamp_type'].append('update_real queue start')
data['worker_id'].append(None)
data['timestamp'].append(int(words[7 + shift][:-3]))
data['stamp_type'].append('update_real queue end')
for line in update_worker:
words = line.split()
shift = 3 if '[#' in words[0] else 0
data['worker_id'].append(int(words[3 + shift][:-1]))
data['timestamp'].append(int(words[6 + shift]))
data['stamp_type'].append('update_real worker start')
data['worker_id'].append(int(words[3 + shift][:-1]))
data['timestamp'].append(int(words[8 + shift][:-3]))
data['stamp_type'].append('update_real worker end')
return pd.DataFrame(data), NumCPU
def build_dflist_split_timing_info(fn, extract_fcn=extract_split_timing_info):
bm_iterations = extract_fcn(fn)
dflist = []
dflist_stamps = []
NumCPU_list = []
for bm in bm_iterations:
grouped_lines, line_search_lines, update_queue, update_worker, special_lines = group_timing_lines(bm)
if special_lines["terminate_line"] is not None:
dflist.append(build_df_split_timing_run(grouped_lines, line_search_lines, update_queue, update_worker, special_lines["terminate_line"]))
df_stamps, NumCPU = build_df_stamps(grouped_lines, special_lines, update_queue, update_worker)
dflist_stamps.append(df_stamps)
NumCPU_list.append(NumCPU)
return dflist, dflist_stamps, NumCPU_list
def build_comb_df_split_timing_info(fn, extract_fcn=extract_split_timing_info):
dflist, dflist_stamps, NumCPU_list = build_dflist_split_timing_info(fn, extract_fcn=extract_fcn)
for ix, df in enumerate(dflist):
df_pardiff = df[df["timing_type"] == "partial derivative"]
N_tasks = len(df_pardiff["task"].unique())
N_gradients = len(df_pardiff) // N_tasks
        gradient_indices = np.hstack([i * np.ones(N_tasks, dtype='int') for i in range(N_gradients)])
df["gradient number"] = pd.Series(dtype='Int64')
df.loc[df["timing_type"] == "partial derivative", "gradient number"] = gradient_indices
dflist_stamps[ix]["gradient number"] = pd.Series(dtype='Int64')
dflist_stamps[ix].loc[dflist_stamps[ix]["stamp_type"] == "job done - asked", "gradient number"] = gradient_indices
dflist_stamps[ix].loc[dflist_stamps[ix]["stamp_type"] == "job done - started", "gradient number"] = gradient_indices
dflist_stamps[ix].loc[dflist_stamps[ix]["stamp_type"] == "job done - finished", "gradient number"] = gradient_indices
df["benchmark_number"] = ix
dflist_stamps[ix]["benchmark_number"] = ix
# assuming the stamps are ordered properly, which I'm pretty sure is correct,
# we can do ffill:
df_stamps = pd.concat(dflist_stamps)
df_stamps.loc[~df_stamps['stamp_type'].str.contains('migrad'), 'gradient number'] = df_stamps.loc[~df_stamps['stamp_type'].str.contains('migrad'), 'gradient number'].fillna(method='ffill')
return pd.concat(dflist), df_stamps
# _, df_stamps_1633603 = build_comb_df_split_timing_info('../rootbench/1633603.burrell.nikhef.nl.out')
_, df_stamps_1633602 = build_comb_df_split_timing_info('../rootbench/1633602.burrell.nikhef.nl.out')
_, df_stamps_1633601 = build_comb_df_split_timing_info('../rootbench/1633601.burrell.nikhef.nl.out')
_st = df_stamps_1633601
_st['stamp_type'].unique()
```
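The `in_block` pairing logic in `group_timing_lines` can be illustrated in isolation with a toy grouper (the marker strings below are simplified stand-ins for the real log lines):

```python
def pair_blocks(lines):
    # a start marker opens a block, an 'update_state' line closes it
    starts, ends, in_block = [], [], False
    for ix, line in enumerate(lines):
        if not in_block and line.startswith('worker_id'):
            starts.append(ix)
            in_block = True
        elif 'update_state' in line:
            ends.append(ix)
            in_block = False
    if len(starts) != len(ends):
        raise ValueError('unbalanced blocks')
    return list(zip(starts, ends))

demo = ['worker_id 0 ...', 'partial derivative ...', 'update_state: 0.1s',
        'worker_id 1 ...', 'update_state: 0.2s']
print(pair_blocks(demo))  # → [(0, 2), (3, 4)]
```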
### update_real queue timeline
```
_st = df_stamps_1633601
_bench_nr = (_st['benchmark_number'] == 0)
_start_st = _st[_bench_nr
& (_st['stamp_type'] == 'start migrad')
]['timestamp'].iloc[0]
_end_st = _st[_bench_nr
& (_st['stamp_type'] == 'end migrad')
]['timestamp'].iloc[0]
_sta = _st[_bench_nr
& (_st['stamp_type'] == 'update_real queue start')
]#['timestamp'] - _start_st
_fin = _st[_bench_nr
& (_st['stamp_type'] == 'update_real queue end')
]#['timestamp'] - _start_st
fig, ax = plt.subplots(1, 1, figsize=(20, 5))
# ax.set_ylabel('worker')
ax.set_xlim((0, _end_st - _start_st))
# worker = _fin.iloc[ix]["worker_id"]
ax.plot(_sta["timestamp"] - _start_st,
np.zeros_like(_sta["timestamp"]),
color='blue', marker='|', linestyle='', alpha=0.5)
ax.plot(_fin["timestamp"] - _start_st,
np.zeros_like(_sta["timestamp"]),
color='red', marker='|', linestyle='', alpha=0.5)
```
Cool, this works a lot better than the bar plots, at least we can show everything like this.
Let's also try for the job stamps:
```
_st = df_stamps_1633601
_bench_nr = (_st['benchmark_number'] == 0)
_start_st = _st[_bench_nr
& (_st['stamp_type'] == 'start migrad')
]['timestamp'].iloc[0]
_end_st = _st[_bench_nr
& (_st['stamp_type'] == 'end migrad')
]['timestamp'].iloc[0]
_sta = _st[_bench_nr
& (_st['stamp_type'].str.contains('started'))
]#['timestamp'] - _start_st
_fin = _st[_bench_nr
& (_st['stamp_type'].str.contains('finished'))
]#['timestamp'] - _start_st
fig, ax = plt.subplots(1, 1, figsize=(20, 5))
ax.set_ylabel('worker')
ax.set_xlim((0, _end_st - _start_st))
ax.plot(_sta["timestamp"] - _start_st, _sta["worker_id"],
color='green', marker='|', linestyle='', alpha=0.5)
ax.plot(_fin["timestamp"] - _start_st, _fin["worker_id"],
color='blue', marker='|', linestyle='', alpha=0.5)
```
Nice! Let's combine:
```
def plot_timestamps(_st, bench_nr=0, figsize=(20, 5)):
vert_offset = 0.1
_bench_nr = (_st['benchmark_number'] == bench_nr)
_start_st = _st[_bench_nr & (_st['stamp_type'] == 'start migrad')]['timestamp'].iloc[0]
_end_st = _st[_bench_nr & (_st['stamp_type'] == 'end migrad')]['timestamp'].iloc[0]
fig, ax = plt.subplots(1, 1, figsize=figsize)
ax.set_ylabel('worker')
ax.set_xlim((0, _end_st - _start_st))
# update @ queue
_sta = _st[_bench_nr & (_st['stamp_type'].str.contains('update_real queue start'))]
_fin = _st[_bench_nr & (_st['stamp_type'].str.contains('update_real queue end'))]
ax.plot(_sta["timestamp"] - _start_st, np.zeros_like(_sta["timestamp"]) - 1 - vert_offset,
color='red', marker='|', linestyle='', alpha=0.5)
ax.plot(_fin["timestamp"] - _start_st, np.zeros_like(_fin["timestamp"]) - 1 + vert_offset,
color='red', marker='|', linestyle='', alpha=0.5)
    # update @ worker
_sta = _st[_bench_nr & (_st['stamp_type'].str.contains('update_real worker start'))]
_fin = _st[_bench_nr & (_st['stamp_type'].str.contains('update_real worker end'))]
ax.plot(_sta["timestamp"] - _start_st, _sta["worker_id"] - vert_offset,
color='red', marker='|', linestyle='', alpha=0.5)
ax.plot(_fin["timestamp"] - _start_st, _fin["worker_id"] + vert_offset,
color='red', marker='|', linestyle='', alpha=0.5)
# jobs
_sta = _st[_bench_nr & (_st['stamp_type'].str.contains('started'))]
_fin = _st[_bench_nr & (_st['stamp_type'].str.contains('finished'))]
ax.plot(_sta["timestamp"] - _start_st, _sta["worker_id"] - vert_offset,
color='blue', marker='|', linestyle='', alpha=0.8)
ax.plot(_fin["timestamp"] - _start_st, _fin["worker_id"] + vert_offset,
color='blue', marker='|', linestyle='', alpha=0.8)
# out of jobs!
_sta = _st[_bench_nr & (_st['stamp_type'].str.contains('no job - asked'))]
_fin = _st[_bench_nr & (_st['stamp_type'].str.contains('no job - denied'))]
ax.plot(_sta["timestamp"] - _start_st, _sta["worker_id"] - vert_offset,
color='grey', marker='|', linestyle='', alpha=0.3)
ax.plot(_fin["timestamp"] - _start_st, _fin["worker_id"] + vert_offset,
color='grey', marker='|', linestyle='', alpha=0.3)
# migrad timestamps
migrad = _st[_bench_nr & (_st['stamp_type'].str.contains('migrad timestamp'))]
ax.plot(migrad["timestamp"] - _start_st, np.zeros_like(migrad["timestamp"]) - 2,
color='black', marker='|', linestyle='')
# setfcn timestamps
_sta = _st[_bench_nr & (_st['stamp_type'].str.contains('start SetFCN'))]
_fin = _st[_bench_nr & (_st['stamp_type'].str.contains('end SetFCN'))]
ax.plot(_sta["timestamp"] - _start_st, np.zeros_like(_sta["timestamp"]) - 2 - vert_offset,
color='violet', marker='|', linestyle='', alpha=0.9)
ax.plot(_fin["timestamp"] - _start_st, np.zeros_like(_fin["timestamp"]) - 2 + vert_offset,
color='violet', marker='|', linestyle='', alpha=0.9)
# GradFcnSynchronize timestamps
GradFcnSynchronize = _st[_bench_nr & (_st['stamp_type'].str.contains('GradFcnSynchronize timestamp'))]
ax.plot(GradFcnSynchronize["timestamp"] - _start_st, np.zeros_like(GradFcnSynchronize["timestamp"]) - 2,
color='tab:green', marker='|', linestyle='', alpha=0.9)
# setup_differentiate timestamps
_this = _st[_bench_nr & (_st['stamp_type'].str.contains('setup_differentiate timestamps'))]
ax.plot(_this["timestamp"] - _start_st, _this['worker_id'],
color='tab:orange', marker='|', linestyle='', alpha=0.9)
return fig, ax
plot_timestamps(df_stamps_1633601)
```
Ok, interesting observation: there is actually a long job at the start of every gradient calculation iteration **on every worker**. This hints quite strongly at either recalculation of the cache or perhaps something to do with the sync_parameters stuff... We should measure this first task of each gradient in more detail, because this might very well explain (a big part of) the non-scaling of the gradient!
After the blue but before the red parts the line search probably takes place. Don't have timestamps for them, but they take between about 0.7 and 0.9 seconds, that fits nicely with those gaps.
### Does ZeroMQ performance explain the update state durations?
Did some measurements of the ZeroMQ performance using the provided performance measurement tools. At least on my Macbook, the red parts should take about 0.1s for 8 workers, so this ~1 second is a bit mysterious. It could be that the performance is different on Stoomboot though...
No, actually it turns out the performance of the Stoomboot headnode (where these runs were done) is a bit higher than my Macbook! So it should be even more than a factor 10 faster!
### 16 cores
Is it the case that by coincidence the first 8 tasks are longer than the rest?
```
plot_timestamps(df_stamps_1633602)
```
### Todo
Discussed with Wouter, we can make two quick tests to see what's going on here:
- For the longer initial tasks: let's measure this on an N-dim Gaussian (128 or something) so that we can exclude computational causes, because all components should then take exactly the same amount of time.
- For the communication overhead, as a quick test for the ZMQ throughput: send twice the amount of data to see if the time indeed increases.
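The second test, sending twice the amount of data, can be sketched with a stdlib socket pair standing in for the ZMQ sockets (real ZMQ throughput should of course be measured with its own perf tools; this only shows the shape of the experiment):

```python
import socket, threading, time

def transfer(n_bytes):
    # time a one-way transfer of n_bytes over a local socket pair;
    # if it is bandwidth-bound, doubling the payload should roughly double the time
    a, b = socket.socketpair()
    received = bytearray()

    def drain():
        while len(received) < n_bytes:
            chunk = b.recv(65536)
            if not chunk:
                break
            received.extend(chunk)

    t = threading.Thread(target=drain)
    t0 = time.perf_counter()
    t.start()
    a.sendall(b'x' * n_bytes)
    t.join()
    dt = time.perf_counter() - t0
    a.close(); b.close()
    return dt, len(received)

dt1, n1 = transfer(1_000_000)
dt2, n2 = transfer(2_000_000)
print(n1, dt1, n2, dt2)
```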
# N-D gaussian, run on stoomboot head node
```
# _, df_stamps_acat_ND_gauss_1553700233 = build_comb_df_split_timing_info('../acat19_ND_gauss_1553700233.out', extract_fcn=load_acat19_out)
```
Oh, that one doesn't have migrad start and end timestamps... fixed that in the next one, plus a new loading function that doesn't add dummy lines:
```
def load_acat19_out_v2(fn):
"""
Just single migrad runs, so no need for further splitting by run.
Does add dummy terminate line, because the other functions expect
this. No more dummy start and end migrad lines.
"""
with open(fn, 'r') as fh:
lines = fh.read().splitlines()
print(lines[-1])
lines.append('[#0] DEBUG: -- terminate: 0.0s')
return [lines]
_, df_stamps_acat_ND_gauss_1553701454 = build_comb_df_split_timing_info('../acat19_ND_gauss_1553701454.out', extract_fcn=load_acat19_out_v2)
fig, ax = plot_timestamps(df_stamps_acat_ND_gauss_1553701454)
ax.set_xlim(0, 0.2e10)
```
Let's try without randomized parameters, hopefully the run will be a bit shorter then.
```
_, df_stamps_acat_ND_gauss_1553754554 = build_comb_df_split_timing_info('../acat19_ND_gauss_1553754554.out', extract_fcn=load_acat19_out_v2)
fig, ax = plot_timestamps(df_stamps_acat_ND_gauss_1553754554)
ax.set_xlim(0, 0.2e10)
```
Nope, still pretty long. Same picture though, indeed also here there's a slight increase in the runtime for the first tasks of the run.
Could this scale with the number of parameters? The above runs were with 128 parameters. Let's try a run with 1024.
```
_, df_stamps_acat_ND_gauss_1553755013 = build_comb_df_split_timing_info('../acat19_ND_gauss_1553755013.out', extract_fcn=load_acat19_out_v2)
fig, ax = plot_timestamps(df_stamps_acat_ND_gauss_1553755013)
# ax.set_xlim(0, 0.2e10)
```
Ok, bit weird, only one iteration in this case... maybe reactivate parameter randomization here.
Oh, one other thing, I added the "migrad timestamps", should read those in!
```
def group_timing_lines(bm_iteration_lines):
"""
Group lines (from one benchmark iteration) by gradient call,
specifying:
- Update time on master
- update_real times on queue and workers
- Gradient work time
- For all partial derivatives a sublist of all lines
- After each gradient block, there may be line_search times,
these are also included as a sublist
Finally, the terminate time for the entire bm_iteration is also
returned (last line in the list).
In each gradient, also timestamps are printed. These are not
further subdivided in this function, but are output as part of
the `gradient_calls` list for further processing elsewhere.
"""
gradient_calls = []
start_indices = []
end_indices = []
line_search_lines = []
update_queue = []
update_worker = []
setfcn_timestamps = []
setup_differentiate_timestamps = []
# to prevent UnboundLocalErrors in older data files:
GradFcnSynchronize_timestamps = None
# flag to check whether we are still in the same gradient call
in_block = False
for ix, line in enumerate(bm_iteration_lines[:-1]): # -1: leave out terminate line
if not in_block and (line[:9] == 'worker_id' or line[:24] == '[#0] DEBUG: -- worker_id'):
start_indices.append(ix)
in_block = True
elif 'update_state' in line:
end_indices.append(ix)
in_block = False
# the rest has nothing to do with the gradient call block, so we don't touch in_block there:
elif 'line_search' in line:
line_search_lines.append(line)
elif 'update_real on queue' in line:
update_queue.append(line)
elif 'update_real on worker' in line:
update_worker.append(line)
elif 'start migrad' in line:
start_migrad_line = line
elif 'end migrad' in line:
end_migrad_line = line
elif 'migrad timestamps' in line:
migrad_timestamps = line
elif 'Fitter::SetFCN timestamps' in line:
setfcn_timestamps.append(line)
elif 'RooGradientFunction::synchronize_parameter_settings timestamps' in line:
GradFcnSynchronize_timestamps = line
elif 'NumericalDerivatorMinuit2::setup_differentiate' in line:
setup_differentiate_timestamps.append(line)
if len(start_indices) != len(end_indices):
raise Exception(f"Number of start and end indices unequal (resp. {len(start_indices)} and {len(end_indices)})!")
for ix in range(len(start_indices)):
partial_derivatives, timestamps = separate_partderi_job_time_lines(bm_iteration_lines[start_indices[ix]:end_indices[ix]])
gradient_calls.append({
'gradient_total': bm_iteration_lines[end_indices[ix]],
'partial_derivatives': partial_derivatives,
'timestamps': timestamps
})
try:
terminate_line = bm_iteration_lines[-1]
except IndexError:
terminate_line = None
special_lines = dict(terminate_line=terminate_line,
start_migrad_line=start_migrad_line,
end_migrad_line=end_migrad_line,
migrad_timestamps=migrad_timestamps,
setfcn_timestamps=setfcn_timestamps,
GradFcnSynchronize_timestamps=GradFcnSynchronize_timestamps,
setup_differentiate_timestamps=setup_differentiate_timestamps)
return gradient_calls, line_search_lines, update_queue, update_worker, special_lines
def build_df_stamps(grouped_lines, special_lines, update_queue, update_worker):
data = {'timestamp': [], 'stamp_type': [], 'worker_id': []}
words = special_lines["start_migrad_line"].split()
shift = 3 if '[#' in words[0] else 0
data['timestamp'].append(int(words[3 + shift]))
data['stamp_type'].append('start migrad')
data['worker_id'].append(None)
if len(words) > 10:
NumCPU = int(words[10])
else:
NumCPU = 0
words = special_lines["end_migrad_line"].split()
shift = 3 if '[#' in words[0] else 0
data['timestamp'].append(int(words[3 + shift]))
data['stamp_type'].append('end migrad')
data['worker_id'].append(None)
words = special_lines["migrad_timestamps"].split()
shift = 3 if '[#' in words[0] else 0
for word in words[shift + 2:]:
data['timestamp'].append(int(word))
data['stamp_type'].append('migrad timestamp')
data['worker_id'].append(None)
if special_lines["GradFcnSynchronize_timestamps"] is not None:
words = special_lines["GradFcnSynchronize_timestamps"].split()
shift = 3 if '[#' in words[0] else 0
for word in words[shift + 2:]:
data['timestamp'].append(int(word))
data['stamp_type'].append('GradFcnSynchronize timestamp')
data['worker_id'].append(None)
for line in special_lines["setfcn_timestamps"]:
words = line.split()
shift = 3 if '[#' in words[0] else 0
data['timestamp'].append(int(words[shift + 2]))
data['stamp_type'].append('start SetFCN')
data['worker_id'].append(None)
data['timestamp'].append(int(words[shift + 3]))
data['stamp_type'].append('end SetFCN')
data['worker_id'].append(None)
for line in special_lines["setup_differentiate_timestamps"]:
words = line.split()
shift = 3 if '[#' in words[0] else 0
worker = int(words[shift + 3][:-1])
for word in words[shift + 5:]:
data['timestamp'].append(int(word))
data['stamp_type'].append('setup_differentiate timestamps')
data['worker_id'].append(worker)
for gradient_group in grouped_lines:
for line in gradient_group['timestamps']:
if 'no work' in line:
words = line.split()
shift = 3 if '[#' in words[0] else 0
data['worker_id'].append(int(words[3 + shift]))
data['timestamp'].append(int(words[6 + shift]))
data['stamp_type'].append('no job - asked')
data['worker_id'].append(int(words[3 + shift]))
data['timestamp'].append(int(words[11 + shift]))
data['stamp_type'].append('no job - denied')
elif 'job done' in line:
words = line.split()
shift = 3 if '[#' in words[0] else 0
data['worker_id'].append(int(words[3 + shift]))
data['timestamp'].append(int(words[6 + shift][:-1]))
data['stamp_type'].append('job done - asked')
data['worker_id'].append(int(words[3 + shift]))
data['timestamp'].append(int(words[9 + shift]))
data['stamp_type'].append('job done - started')
data['worker_id'].append(int(words[3 + shift]))
data['timestamp'].append(int(words[13 + shift]))
data['stamp_type'].append('job done - finished')
elif 'NumericalDerivatorMinuit2::setup_differentiate' in line:
pass
elif 'fVal on worker' in line or 'fVal after line search' in line:
pass
else:
raise Exception("got a weird line:\n" + line)
for line in update_queue:
words = line.split()
shift = 3 if '[#' in words[0] else 0
data['worker_id'].append(None)
data['timestamp'].append(int(words[5 + shift]))
data['stamp_type'].append('update_real queue start')
data['worker_id'].append(None)
data['timestamp'].append(int(words[7 + shift][:-3]))
data['stamp_type'].append('update_real queue end')
for line in update_worker:
words = line.split()
shift = 3 if '[#' in words[0] else 0
data['worker_id'].append(int(words[3 + shift][:-1]))
data['timestamp'].append(int(words[6 + shift]))
data['stamp_type'].append('update_real worker start')
data['worker_id'].append(int(words[3 + shift][:-1]))
data['timestamp'].append(int(words[8 + shift][:-3]))
data['stamp_type'].append('update_real worker end')
return pd.DataFrame(data), NumCPU
_, df_stamps_acat_ND_gauss_1553754554 = build_comb_df_split_timing_info('../acat19_ND_gauss_1553754554.out', extract_fcn=load_acat19_out_v2)
fig, ax = plot_timestamps(df_stamps_acat_ND_gauss_1553754554)
ax.set_xlim(-1e9, 3e10)
_start_st = df_stamps_acat_ND_gauss_1553754554[df_stamps_acat_ND_gauss_1553754554['stamp_type'] == "start migrad"]
df_stamps_acat_ND_gauss_1553754554[df_stamps_acat_ND_gauss_1553754554['stamp_type'] == "migrad timestamp"]['timestamp'] - int(_start_st["timestamp"])
```
Ok, clearly the dominant part of migrad is FitFCN. New run with timestamps inside there to measure SetFCN...
```
_, df_stamps_acat_1553758901 = build_comb_df_split_timing_info('../acat19_1553758901.out', extract_fcn=load_acat19_out_v2)
fig, ax = plot_timestamps(df_stamps_acat_1553758901)
```
Ok, so SetFCN is not the main problem here, though it might be the cause of the two ZMQ threads problem.
But, wait a minute there, that's a pretty significant gap over there in the migrad timestamps all of a sudden! `Synchronize` is the major rest-term culprit!
```
_start_st = df_stamps_acat_1553758901[df_stamps_acat_1553758901['stamp_type'] == "start migrad"]
df_stamps_acat_1553758901[df_stamps_acat_1553758901['stamp_type'] == "migrad timestamp"]['timestamp'] - int(_start_st["timestamp"])
```
Ok, added some timestamps there as well...
```
_, df_stamps_acat_1553763665 = build_comb_df_split_timing_info('../acat19_1553763665.out', extract_fcn=load_acat19_out_v2)
fig, ax = plot_timestamps(df_stamps_acat_1553763665)
_start_st = int(df_stamps_acat_1553763665[df_stamps_acat_1553763665['stamp_type'] == "start migrad"]["timestamp"])
df_stamps_acat_1553763665[df_stamps_acat_1553763665['stamp_type'].str.contains('Grad')]["timestamp"] - _start_st
```
Ok, interesting, the first part is the first big loop in `synchronize_parameter_settings`, but the second part is not the second big loop, but the copying of parameters from one list to the other! Both take about the same time and the rest is insignificant.
```
# _, df_stamps_acat_1553778980 = build_comb_df_split_timing_info('../acat19_1553778980.out', extract_fcn=load_acat19_out_v2)
```
Oh, forgot to add worker_id there, again:
```
_, df_stamps_acat_1553779882 = build_comb_df_split_timing_info('../acat19_1553779882.out', extract_fcn=load_acat19_out_v2)
fig, ax = plot_timestamps(df_stamps_acat_1553779882)
ax.set_xlim(2.01e10, 2.1e10)
```
Ok, there's definitely something there that's taking a while... But which part is it?
```
def setup_differentiate_timestamps_duration_per_task(df, task_ix, worker_id=0):
worker = (df["worker_id"] == worker_id)
_df_w0 = df[worker]
first_task_start = int(_df_w0[_df_w0["stamp_type"].str.contains('job done - started')]['timestamp'].iloc[task_ix])
first_task_end = int(_df_w0[_df_w0["stamp_type"].str.contains('job done - finished')]['timestamp'].iloc[task_ix])
first_task = (_df_w0['timestamp'] > first_task_start) & (_df_w0['timestamp'] < first_task_end)
_st = _df_w0[_df_w0["stamp_type"].str.contains('setup_differentiate') & first_task]['timestamp']
_st = _st - _st.iloc[0]
return _st
# worker 0, first task:
setup_differentiate_timestamps_duration_per_task(df_stamps_acat_1553779882, 0, worker_id=0)
# worker 0, second task:
setup_differentiate_timestamps_duration_per_task(df_stamps_acat_1553779882, 1, worker_id=0)
```
Ok, so it's rather clear that most time in the first task goes to the stuff between t6 and t7, which is... as we guessed: the function call! `fVal` has to be calculated once at the start for that set of parameters. But wait, doesn't that hold for all tasks? I think it does.
To be sure, let's check out a few more tasks:
```
def setup_differentiate_fVal_duration(df, task_ix, worker_id=0):
_ts = setup_differentiate_timestamps_duration_per_task(df, task_ix, worker_id=worker_id)
return _ts.iloc[6] - _ts.iloc[5]
# all first tasks:
print([setup_differentiate_fVal_duration(df_stamps_acat_1553779882, 0, worker_id=w) for w in range(8)])
# all 2-10th tasks:
print([setup_differentiate_fVal_duration(df_stamps_acat_1553779882, t, worker_id=w) for w in range(8) for t in range(1, 10)])
# all 11-100th tasks:
print([setup_differentiate_fVal_duration(df_stamps_acat_1553779882, t, worker_id=w) for w in range(8) for t in range(10, 100)])
# all 101-250th tasks:
print([setup_differentiate_fVal_duration(df_stamps_acat_1553779882, t, worker_id=w) for w in range(8) for t in range(100, 250)])
```
Yeah, seems pretty solid.
Now, what exactly causes this first fVal calculation to be so slow? Recalculation of all cached elements? If so, then you won't get out of this.
So then the only option left to make things scale better is to make communication times shorter.
## Shorter communication times
- The most obvious option to do this is to simply send around less stuff. We could send only the updated gradients (and hessians and stepsizes) that are necessary for a certain task. This could be sent along with the task, for instance. This would save on average `5 * 3 * N_tasks * (N_workers-1)/N_workers` sends, i.e., for our example here, about:
```
5 * 3 * 1600 * 7/8
```
Or, more simply put, it reduces the sending workload by a factor `N_workers` for the gradient updates. Unfortunately, there's still the parameter updates which have to be sent fully.
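As a small helper, the savings estimate from the text (the `5 * 3` factor and the worker fraction are taken directly from the formula above):

```python
def saved_sends(n_tasks, n_workers):
    # 5 * 3 values per task, minus the fraction each worker computed itself
    return 5 * 3 * n_tasks * (n_workers - 1) / n_workers

print(saved_sends(1600, 8))  # → 21000.0
```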
# 8 April
## Benchmark with added `none_have_been_calculated` parameter
```
_, df_stamps_acat_1554710451 = build_comb_df_split_timing_info('../acat19_1554710451.out', extract_fcn=load_acat19_out_v2)
fig, ax = plot_timestamps(df_stamps_acat_1554710451)
```
Hm, doesn't really seem to help that much, unfortunately... putting it next to an earlier run, it's more or less similar in communication time:
```
fig, ax = plot_timestamps(df_stamps_acat_1553763665)
```
## Importing UK Postcodes into Amazon Lex to create a custom slot
This is a sample notebook that shows how to use pandas together with the AWS Python SDK, boto3, to process a publicly available postcode file, sample it, and create/update a custom slot type in Amazon Lex, using the sample to train for slot recognition.
I am using the postcode file from https://www.doogal.co.uk/ukpostcodes.php, but this should work with other CSV format postcode downloads as long as you set the field header correctly.
This is the header row from the file that I used.
```
Postcode,In Use?,Latitude,Longitude,Easting,Northing,Grid Ref,County,District,Ward,District Code,Ward Code,Country,County Code,Constituency,Introduced,Terminated,Parish,National Park,Population,Households,Built up area,Built up sub-division,Lower layer super output area,Rural/urban,Region,Altitude,London zone,LSOA Code,Local authority,MSOA Code,Middle layer super output area,Parish Code,Census output area,Constituency Code,Index of Multiple Deprivation
```
```
working_dir = '/Users/ianmas/aws/ai/'
filename = "postcodes.csv"
field_name = 'Postcode'
sample_size = 1000
import pandas as pd
```
Load the CSV into a pandas dataframe. This might take a few seconds. You will get a confirmation message with the shape of the dataframe once it's completed.
```
df = pd.read_csv(working_dir + filename,index_col=False, header=0,low_memory=False)
print("Data loaded into pandas dataframe with shape: {}".format(df.shape))
```
Extract the column with the postcodes using the ```field_name``` column defined earlier. Then extract a sample of size ```sample_size```.
```
postcode_column = df[field_name]
postcode_sample = postcode_column.sample(n=sample_size)
print("Postcode list created with {} samples".format(len(postcode_sample.tolist())))
```
The next step is to build the list of dict objects required for the Amazon Lex put_slot_type request document. You'll see a confirmation and a few samples printed when this is complete.
```
postcodes_values_list = []
for postcode_value in postcode_sample:
new_dict = dict()
new_dict['value'] = postcode_value
postcodes_values_list.append(new_dict)
print("The first 5 entries that will be added to your model are:")
print(postcodes_values_list[0:5])
```
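The same sample-and-build flow can be sketched end-to-end with only the standard library (the in-memory CSV and postcode values here are made up for illustration):

```python
import csv
import io
import random

# Toy stand-in for the downloaded postcode CSV
csv_text = "Postcode,In Use?\nAB1 0AA,Yes\nAB1 0AB,Yes\nAB1 0AD,No\nAB1 0AE,Yes\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

random.seed(0)  # deterministic sample for the example
sample = random.sample([row["Postcode"] for row in rows], k=2)

# Amazon Lex expects enumerationValues as a list of {'value': ...} dicts
postcodes_values_list = [{"value": p} for p in sample]
print(postcodes_values_list)
```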
Import the AWS SDK
```
import boto3
lex_client = boto3.client('lex-models', region_name='us-east-1')
response = lex_client.get_slot_type(
name='UKPostcodes',
version='$LATEST'
)
# grab the checksum attribute from the response; we need it to update an existing slot type
latest_checksum = response['checksum']
response = lex_client.put_slot_type(
name='UKPostcodes',
description='UK Postcodes',
enumerationValues=postcodes_values_list,
valueSelectionStrategy='ORIGINAL_VALUE',
checksum=latest_checksum
)
if response['ResponseMetadata']['HTTPStatusCode'] == 200:
    print('Succeeded: Updated slot \'{}\' with {} values'.format(response['name'],len(response['enumerationValues'])))
else:
print('Failed with response code: {}'.format(response['ResponseMetadata']['HTTPStatusCode']))
```
#### Only use this cell to create the slot. Update with the cell above.
Once this has run, you need to use the version above, which posts the checksum for the current version of the slot.
```
# put slot first version of slot type
# you can only use this once because subsequent requests require a checksum attribute
response = lex_client.put_slot_type(
name='UKPostcodes',
description='UK Postcodes',
enumerationValues=postcodes_values_list,
valueSelectionStrategy='ORIGINAL_VALUE'
)
if response['ResponseMetadata']['HTTPStatusCode'] == 200:
    print('Succeeded: Updated slot \'{}\' with {} values'.format(response['name'],len(response['enumerationValues'])))
else:
print('Failed with response code: {}'.format(response['ResponseMetadata']['HTTPStatusCode']))
```
| github_jupyter |
<a href="https://colab.research.google.com/github/Het-Shah/Meme-Classification/blob/master/nnfl_proj_v1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!nvidia-smi
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
import torch  # used immediately below; the remaining torch imports appear later in this cell
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count()
torch.cuda.get_device_name(0)
# !pip install pytorch-transformers
!pip install langdetect
!pip install pillow
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
import os
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image
import cv2
import re
import string
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, SnowballStemmer
from nltk.tokenize import TweetTokenizer
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')
import keras
from keras.models import Model, Sequential
from keras.layers import Dense, GlobalAveragePooling2D, Input, Embedding, Bidirectional, LSTM, Flatten, concatenate, Dropout, Conv2D, MaxPool2D, BatchNormalization, LeakyReLU, GRU
from keras import optimizers
from keras.callbacks import LearningRateScheduler, ModelCheckpoint
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.applications.vgg19 import VGG19
from keras.applications.vgg16 import VGG16
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
from sklearn.feature_extraction.text import CountVectorizer
from keras.preprocessing.text import Tokenizer
from keras.applications.resnet import ResNet50
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
# from pytorch_transformers import XLNetModel, XLNetTokenizer, XLNetForSequenceClassification
# from pytorch_transformers import AdamW
from tqdm import tqdm, trange
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score, accuracy_score
import copy
import warnings
warnings.filterwarnings('ignore')
from PIL import Image
%matplotlib inline
from google.colab import drive
drive.mount('/content/drive')
```
# Pickle imports
```
import pickle
with open('/content/drive/My Drive/nnfl_pkls/img.pkl', 'rb') as f:
X_img = pickle.load(f)
with open('/content/drive/My Drive/nnfl_pkls/taskA.pkl','rb') as f:
y1 = pickle.load(f)
with open('/content/drive/My Drive/nnfl_pkls/taskB.pkl','rb') as f:
y2 = pickle.load(f)
with open('/content/drive/My Drive/nnfl_pkls/taskC.pkl','rb') as f:
y3 = pickle.load(f)
with open('/content/drive/My Drive/nnfl_pkls/dataframe.pkl','rb') as f:
df = pickle.load(f)
with open('/content/drive/My Drive/nnfl_pkls/text.pkl','rb') as f:
X_text = pickle.load(f)
```
# Pre processing and pytorch
```
text = "10 YEAR CHALLENGE WITH NO FILTER 47 Hilarious 10 Year Challenge Memes | What is #10 Year Challenge? "
text = re.sub(r'[%s]' % re.escape(string.punctuation), '', text)
text = re.sub(r'^\w+'," ", text)
text = re.sub(r'\d+', " ", text)
print(text)
df = pd.read_csv("/content/drive/My Drive/data_7000_new.csv",header=None,names=["imgname","imgpath","imgtext1","imgtext2","funniness","sarcasm","offense","motivation","sentiment"])
# df.head()
df = df.sample(frac=1).reset_index(drop=True)
import string
def preprocessing(text):
text = text.lower()
    text = re.sub(r'[a-z]+\.com'," ",text)  # escape the dot to match a literal '.'
    text = re.sub(r'[a-z]+\.net', " ", text)
    text = re.sub(r'www\.[a-z]+', " ", text)
text = re.sub(r'[%s]' % re.escape(string.punctuation), '', text)
text = re.sub(r'^\w+'," ", text)
# text = text.translate(None, string.punctuation)
# text = re.sub(r'\d+', " ", text)
text = text.strip()
return text
def clean_tweets(tweet):
    tweet = re.sub(r'@(\w{1,15})\b', '', str(tweet))  # raw string, so \b is a word boundary rather than a backspace character
tweet = tweet.replace("via ", "")
tweet = tweet.replace("RT ", "")
tweet = tweet.lower()
return tweet
def clean_url(tweet):
tweet = re.sub('http\\S+', '', tweet, flags=re.MULTILINE)
    tweet = re.sub(r'[a-z]+\.com', '', tweet)  # escape the dot to match a literal '.'
    tweet = re.sub(r'[a-z]+\.net', '', tweet)
    tweet = re.sub(r'www\.[a-z]+', '', tweet)
return tweet
def remove_stop_words(tweet):
stops = set(stopwords.words("english"))
stops.update(['.',',','"',"'",'?',':',';','(',')','[',']','{','}'])
toks = [tok for tok in tweet if not tok in stops and len(tok) >= 3]
return toks
def stemming_tweets(tweet):
stemmer = SnowballStemmer('english')
stemmed_words = [stemmer.stem(word) for word in tweet]
return stemmed_words
def remove_number(tweet):
newTweet = re.sub('\\d+', '', tweet)
return newTweet
def remove_hashtags(tweet):
result = ''
for word in tweet.split():
if word.startswith('#') or word.startswith('@'):
result += word[1:]
result += ' '
else:
result += word
result += ' '
return result
def preprocessing(tweet, swords = True, url = True, stemming = False, ctweets = True, number = True, hashtag = True):
if ctweets:
tweet = clean_tweets(tweet)
if url:
tweet = clean_url(tweet)
if hashtag:
tweet = remove_hashtags(tweet)
twtk = TweetTokenizer(strip_handles=True, reduce_len=True)
if number:
tweet = remove_number(tweet)
tokens = [w.lower() for w in twtk.tokenize(tweet) if w != "" and w is not None]
if swords:
tokens = remove_stop_words(tokens)
if stemming:
tokens = stemming_tweets(tokens)
text = " ".join(tokens)
return text
# for i in range(len(df)):
# if df.offense[i] == "not_motivational":
# df.sentiment[i] = df.motivation[i]
# df.motivation[i] = "not_motivational"
# df.offense[i] = df.sarcasm[i]
# if df.offense[i] == "motivational":
# df.sentiment[i] = df.motivation[i]
# df.motivation[i] = "motivational"
# df.offense[i] = df.sarcasm[i]
# if df.sarcasm[i] == "motivational":
# df.motivation[i] = "motivational"
# df.sentiment[i] = df.offense[i]
# df.offense[i] = df.funniness[i]
# if df.sarcasm[i] == "not_motivational":
# df.motivation[i] = "not_motivational"
# df.sentiment[i] = df.offense[i]
# df.offense[i] = df.funniness[i]
# if df.sentiment[i] == "positivechandler_Friday-Mood-AF.-meme-Friends-ChandlerBing.jpg":
# df.sentiment[i] = "positive"
# if df.imgtext2[i] == "not_funny":
# df.imgtext2[i] = df.imgtext1[i]
# if df.imgtext2[i] == "funny":
# df.imgtext2[i] = df.imgtext1[i]
# if df.imgtext2[i] == "very_funny":
# df.imgtext2[i] = df.imgtext1[i]
# if df.imgtext2[i] == "hilarious":
# df.imgtext2[i] = df.imgtext1[i]
# if df.imgtext2[i] == "<html><head><meta content=\"text/html; charset=UTF-8\" http-equiv=\"content-type\"><style type=\"text/css\">ol{margin:0;padding:0}table td" :
# df.imgtext2[i] = " "
# df = df.drop_duplicates(subset='imgname', keep='last').reset_index(drop=True)
# df = df.drop(labels=["imgpath", "imgtext1"],axis=1).reset_index(drop=True)
df['new_text'] = df.imgtext2.astype('str').apply(preprocessing)
# moti = {"not_motivational": 0, "motivational": 1}
# off = {"not_offensive": 0 , "slight" : 1, "very_offensive": 2, "hateful_offensive": 3}
# sent = {"very_positive": 0, "positive": 1, "neutral": 2, "negative": 3, "very_negative": 4}
# df['taskA'] = [moti[i] for i in df.motivation]
# df['taskB'] = [off[i] for i in df.offense]
# df['taskC'] = [sent[i] for i in df.sentiment]
from langdetect import detect
# print(df.new_text[0])
# # detect(df.new_text[0])
temp = []
for i in df.new_text:
try:
temp.append(detect(i))
except:
temp.append('en')
# df['language'] = [detect(i) for i in df.new_text]
# df['language'] = temp
# df.language.value_counts()
df.head()
df_temp = df[df.language=='en']
```
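The preprocessing pipeline above depends on NLTK downloads. As a minimal, self-contained sketch of the same cleaning steps (the tiny hardcoded stopword set stands in for NLTK's English list, and the function name and sample string are illustrative):

```python
import re
import string

# Simplified stand-in for preprocessing(): lowercase, strip URLs, handles,
# hashtag markers, digits and punctuation, then drop stopwords/short tokens.
STOPS = {"the", "is", "with", "and", "what"}  # placeholder stopword set

def quick_clean(text):
    text = text.lower()
    text = re.sub(r"http\S+", " ", text)    # URLs
    text = re.sub(r"@\w{1,15}", " ", text)  # handles
    text = text.replace("#", "")            # keep hashtag words, drop '#'
    text = re.sub(r"\d+", " ", text)        # numbers
    text = re.sub(r"[%s]" % re.escape(string.punctuation), " ", text)
    toks = [t for t in text.split() if t not in STOPS and len(t) >= 3]
    return " ".join(toks)

print(quick_clean("RT @user 10 Year Challenge with #NoFilter http://x.com"))
# -> year challenge nofilter
```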
# Pytorch
```
class MemeDataset(Dataset):
def __init__(self,root_path,train=False,transform=None):
self.root_path = root_path
self.transform = transform
self.df = df
self.listIndex = list(df.index.values)
self.train = train
def __len__(self):
if self.train:
return int(len(self.listIndex[:5800]))
else:
return int(len(self.listIndex[5800:]))
def __getitem__(self,idx):
if not self.train:
idx = int(len(self.listIndex[:5800])) + idx
imgname = self.df.imgname[self.listIndex[idx]]
img = Image.open(os.path.join(self.root_path,imgname)).convert('RGB')
if self.transform:
img = self.transform(img)
text = self.df.new_text[self.listIndex[idx]]
y1 = self.df.taskA[self.listIndex[idx]]
y2 = self.df.taskB[self.listIndex[idx]]
y3 = self.df.taskC[self.listIndex[idx]]
return img,text,y1,y2,y3
# mean/std computation and sampler setup, etc.
# pop_mean = []
# pop_std0 = []
# pop_std1 = []
# for i, data in enumerate(dataloader["train"], 0):
# # shape (batch_size, 3, height, width)
# img,text,_,_,_ = data
# numpy_image = img.numpy()
# # shape (3,)
# batch_mean = np.mean(numpy_image, axis=(0,2,3))
# batch_std0 = np.std(numpy_image, axis=(0,2,3))
# batch_std1 = np.std(numpy_image, axis=(0,2,3), ddof=1)
# pop_mean.append(batch_mean)
# pop_std0.append(batch_std0)
# pop_std1.append(batch_std1)
# # shape (num_iterations, 3) -> (mean across 0th axis) -> shape (3,)
# pop_mean = np.array(pop_mean).mean(axis=0)
# pop_std0 = np.array(pop_std0).mean(axis=0)
# pop_std1 = np.array(pop_std1).mean(axis=0)
batch_size = 32
class_count = [4280, 2320]
weights = 1 / torch.Tensor(class_count)
sampler = torch.utils.data.sampler.WeightedRandomSampler(weights, batch_size)
transform = transforms.Compose([transforms.Resize((299,299)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5091738,0.47823206,0.46088094],std = [0.3463122,0.34370992,0.34629512])])
dataset = {"train" : MemeDataset("/content/drive/My Drive/data_7000/",train = True,transform=transform), "val" : MemeDataset("/content/drive/My Drive/data_7000/",transform=transform)}
dataloader = {"train": DataLoader(dataset=dataset["train"],batch_size=32), "val" : DataLoader(dataset=dataset["val"],batch_size=10)}
def imshow(img): # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
img,text,y1,y2,y3 = next(iter(dataloader["val"]))
imshow(img[0])
print(text[0])
print(y1[0])
print(y2[0])
print(y3[0])
def set_parameter_requires_grad(model, feature_extracting):
if feature_extracting:
for param in model.parameters():
param.requires_grad = False
class MemeModel(nn.Module):
def __init__(self, vocab_size, embed_dim, num_class):
super().__init__()
self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
self.fc = nn.Linear(embed_dim, num_class)
self.init_weights()
def init_weights(self):
initrange = 0.5
self.embedding.weight.data.uniform_(-initrange, initrange)
self.fc.weight.data.uniform_(-initrange, initrange)
self.fc.bias.data.zero_()
def forward(self, text, offsets):
embedded = self.embedding(text, offsets)
return self.fc(embedded)
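# What nn.EmbeddingBag (default mode='mean') computes, in miniature:
# offsets split the flat id list into bags, and each bag is mean-pooled
# over its embedding rows (toy table and ids, for illustration only).
_table = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
_flat_ids, _offsets = [0, 1, 2], [0, 2]          # bags: ids [0, 1] and [2]
_ends = _offsets[1:] + [len(_flat_ids)]
_bags = []
for _s, _e in zip(_offsets, _ends):
    _rows = [_table[i] for i in _flat_ids[_s:_e]]
    _bags.append([sum(c) / len(_rows) for c in zip(*_rows)])
assert _bags == [[0.5, 0.5], [2.0, 2.0]]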
num_classes = 2
model_ft = models.inception_v3(pretrained=True)
set_parameter_requires_grad(model_ft, True)
# Handle the auxiliary net
num_ftrs = model_ft.AuxLogits.fc.in_features
model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes)
# Handle the primary net
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, num_classes)
model_ft = model_ft.cuda()
# model_ft
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model_ft.parameters(), lr=0.01)
steps = 10
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, steps)
def train_model(model, dataloaders, criterion, optimizer , num_epochs=25, is_inception=True):
since = time.time()
val_acc_history = []
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for imgs, texts, c1, c2, c3 in dataloaders[phase]:
imgs = imgs.cuda()
# texts = texts.cuda()
c1 = c1.cuda()
c2 = c2.cuda()
c3 = c3.cuda()
optimizer.zero_grad()
with torch.set_grad_enabled(phase == 'train'):
if is_inception and phase == 'train':
# From https://discuss.pytorch.org/t/how-to-optimize-inception-model-with-auxiliary-classifiers/7958
outputs, aux_outputs = model(imgs)
# print(outputs)
# print(c1)
loss1 = criterion(outputs, c1)
loss2 = criterion(aux_outputs, c1)
loss = loss1 + 0.4*loss2
else:
outputs = model(imgs)
loss = criterion(outputs, c1)
_, preds = torch.max(outputs, 1)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# scheduler.step()
# print(scheduler.get_lr())
# statistics
running_loss += loss.item() * imgs.size(0)
running_corrects += torch.sum(preds == c1.data)
epoch_loss = running_loss / len(dataloaders[phase].dataset)
epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)
print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
if phase == 'val':
val_acc_history.append(epoch_acc)
# print('Reset scheduler')
# scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, steps)
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model, val_acc_history
model, val_acc_history = train_model(model_ft,dataloaders=dataloader,criterion=criterion,optimizer=optimizer,num_epochs=10)
imgs,text,c1,c2,c3 = next(iter(dataloader["val"]))
model_ft.eval()
out = model_ft(imgs.cuda())
_, preds = torch.max(out, 1)
preds = np.array(preds.detach().cpu())
c1 = np.array(c1.detach().cpu())
acc = sum(preds == c1)
print(acc/len(imgs))
print(out)
print(preds)
print(c1)
```
# Keras
```
num_words =10000
tokenizer = Tokenizer(num_words=num_words, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
lower=True,split=' ')
tokenizer.fit_on_texts(df['new_text'].values)
X_text = tokenizer.texts_to_sequences(df['new_text'].values)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
max_length_of_text = 200
X_text = pad_sequences(X_text, maxlen=max_length_of_text)
print(X_text[0])
print(len(word_index))
with open("tokenizer_new.pickle","wb") as handle:
pickle.dump(tokenizer,handle,protocol=pickle.HIGHEST_PROTOCOL)
!cp /content/tokenizer_new.pickle /content/drive/My\ Drive/
print(len(X_img))
print(len(X_text))
# y1 = df_temp.taskA
print(len(y1))
print(len(y2))
print(len(y3))
!wget https://github.com/kmr0877/IMDB-Sentiment-Classification-CBOW-Model/raw/master/glove.6B.50d.txt.gz
! gunzip glove.6B.50d.txt.gz
embeddings_index = {}
f = open('glove.6B.50d.txt')
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors in pretrained word vector model.' % len(embeddings_index))
print('Dimensions of the vector space : ', len(embeddings_index['the']))
EMBEDDING_DIM = 50
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
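# The loop above in miniature: embedding-matrix rows stay all-zero for
# words missing from the GloVe index (toy word index/vectors, illustrative).
_toy_index = {"meme": 1, "zzz": 2}
_toy_vectors = {"meme": [0.1, 0.2]}
_toy_matrix = [[0.0, 0.0] for _ in range(len(_toy_index) + 1)]
for _w, _i in _toy_index.items():
    if _w in _toy_vectors:
        _toy_matrix[_i] = _toy_vectors[_w]
assert _toy_matrix[1] == [0.1, 0.2] and _toy_matrix[2] == [0.0, 0.0]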
base_model = VGG19(weights='imagenet', include_top=False)
for layer in base_model.layers:
layer.trainable = False
model_input = Input(shape = (299,299,3))
x = base_model(model_input)
x = GlobalAveragePooling2D()(x)
length_of_text = 200
text_input = Input((length_of_text, ))
embedding_layer = Embedding(len(word_index) + 1,
EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=max_length_of_text,
trainable=True)
y = embedding_layer(text_input)
# y = LSTM(10,dropout=0.5,recurrent_dropout = 0.5)(y)
y = Flatten()(y)
# y = Dense(32,kernel_initializer="random_uniform")(y)
# y = LeakyReLU(alpha=0.1)(y)
# y = Dropout(rate=0.5)(y)
# y = Dense(32,kernel_initializer="random_uniform")(y)
# y = LeakyReLU(alpha=0.2)(y)
# y = Dropout(rate=0.5)(y)
middle = concatenate([x,y])
middle = Dense(32,kernel_initializer="random_uniform")(middle)
middle = LeakyReLU(alpha=0.1)(middle)
middle = Dropout(rate=0.5)(middle)
# upper = Dense(16)(x)
# upper = LeakyReLU(alpha=0.2)(upper)
# upper = Dropout(rate=0.5)(upper)
# lower = Dense(16)(y)
# lower = LeakyReLU(alpha=0.2)(lower)
# lower = Dropout(rate=0.5)(lower)
# concat = concatenate([ middle])
# x1 = Dense(16)(middle)
# x1 = LeakyReLU(alpha=0.2)(x1)
pred1 = Dense(4,activation="softmax")(middle)
# x2 = Dense(1024, activation='relu')(concat)
# pred2 = Dense(4, activation='softmax')(x2)
# x3 = Dense(64, activation='relu')(concat)
# pred3 = Dense(5, activation='softmax')(x3)
model = Model(inputs=[model_input,text_input], outputs =[pred1])
model.compile(optimizer='adam', loss='categorical_crossentropy',metrics = ['accuracy'])
X_img = np.array(X_img)
X_text = np.array(X_text)
y1 = np.array(y1)
y2 = np.array(y2)
y3 = np.array(y3)
# X_img = X_img.astype('float') / 255
label2 = []
for i in y2:
if i == 0:
label2.append(0.0)
elif i == 1:
label2.append(0.25)
elif i == 2:
label2.append(0.5)
else:
label2.append(0.75)
label2 = np.array(label2)
y3_new = []
for i in y3:
if i ==0 or i == 1:
y3_new.append(0)
elif i == 2:
y3_new.append(1)
else:
y3_new.append(2)
label3 = []
for i in y3:
if i == 0:
label3.append(0.0)
elif i == 1:
label3.append(0.2)
elif i == 2:
label3.append(0.4)
elif i == 3:
label3.append(0.6)
else:
label3.append(0.8)
label3 = np.array(label3)
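# The if/elif binning above can be expressed as dict lookups; a quick
# equivalence check on sample label values (illustrative only):
_off_to_score = {0: 0.0, 1: 0.25, 2: 0.5, 3: 0.75}
_sent_to_score = {0: 0.0, 1: 0.2, 2: 0.4, 3: 0.6, 4: 0.8}
assert [_off_to_score[i] for i in [0, 1, 2, 3]] == [0.0, 0.25, 0.5, 0.75]
assert [_sent_to_score[i] for i in [0, 1, 2, 3, 4]] == [0.0, 0.2, 0.4, 0.6, 0.8]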
from keras.utils import to_categorical
# y1_one_hot = to_categorical(y1)
y2_one_hot = to_categorical(y2)
y3_one_hot = to_categorical(y3)
X_img_train, X_img_test, X_text_train, X_text_test, y1_train, y1_test = train_test_split(X_img,X_text,y2_one_hot,test_size=0.1,random_state=121,stratify=y2)
model.summary()
# checkpoint = ModelCheckpoint("model2.h5", monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
history = model.fit(x=[X_img,X_text],y=y2_one_hot, batch_size= 128,epochs=23,class_weight=None)
pred = model.predict(x=[X_img_test,X_text_test])
y_pred = [np.round(i) if i < 3.0 else 3.0 for i in pred]
y_pred = [i if i > 0.0 else 0.0 for i in y_pred]
model.save_weights("/content/model_offensive_overfit2.h5")
!cp model_offensive_overfit2.h5 /content/drive/My\ Drive/
pred = model.predict(x=[X_img_test,X_text_test])
print(pred[:10])
print(y1_test[:10])
# y_pred = np.argmax(pred,axis=1)
y_pred = []
for i in pred:
  if i < 0.25:  # bin edges follow the 0.0/0.25/0.5/0.75 label encoding
y_pred.append(0)
elif i >= 0.25 and i < 0.5:
y_pred.append(1)
elif i >= 0.5 and i < 0.75:
y_pred.append(2)
# elif i >= 0.6 and i < 0.8:
# y_pred.append(3)
else:
y_pred.append(3)
# y1_test1 = np.argmax(y1_test,axis=1)
y1_test1 = []
for i in y1_test:
if i == 0:
y1_test1.append(0)
elif i == 0.25:
y1_test1.append(1)
elif i == 0.5:
y1_test1.append(2)
elif i == 0.75:
y1_test1.append(3)
# else:
# y1_test1.append(4)
# y_pred = []
# for i in pred:
# if i >=0.5:
# y_pred.append(1)
# else:
# y_pred.append(0)
y1_test1 = y1_test
confusion_matrix(y1_test1,y_pred,labels=[0,1,2,3])
print("F1.........: %f" %(f1_score(y1_test1, y_pred, average="macro")))
print("Precision..: %f" %(precision_score(y1_test1, y_pred, average="macro")))
print("Recall.....: %f" %(recall_score(y1_test1, y_pred, average="macro")))
print("Accuracy...: %f" %(accuracy_score(y1_test1, y_pred)))
plt.plot(history.history['loss'])
# no validation_data was passed to fit(), so there is no 'val_loss' to plot
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
from sklearn.metrics import mean_absolute_error
print("Mean Absolute Error is : ",mean_absolute_error(y1_test1,y_pred))
p = []
y2_old = []
for i in pred:
if i<0.25:
p.append(0)
y2_old.append(0)
elif i>=0.25 and i<0.50:
p.append(1)
y2_old.append(1)
elif i>=0.5 and i<0.75:
p.append(2)
y2_old.append(2)
else:
p.append(3)
y2_old.append(3)
print(pred)
print(y2[:10])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_text,y2,test_size=0.2 ,random_state=121)
from sklearn.svm import SVC
clf = SVC(gamma='auto')
clf.fit(X_train,y_train)
score_train = clf.score(X_train,y_train)
score_test = clf.score(X_test, y_test)
print(score_train)
print(score_test)
model.save_weights('model3.h5')
img = cv2.imread("/content/drive/My Drive/data_7000/"+df.imgname[0],cv2.IMREAD_GRAYSCALE)
bilFilter = cv2.bilateralFilter(img,9,75,75)
gaus = cv2.GaussianBlur(img,(3, 3), 0)
thres = cv2.adaptiveThreshold(gaus,255,cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY_INV,5,4)
plt.imshow(thres)
```
# New Section
```
import re
import string
import pickle
import numpy as np
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, SnowballStemmer
from nltk.tokenize import TweetTokenizer
from sklearn.ensemble import VotingClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import f1_score, precision_score, accuracy_score, recall_score
from sklearn.metrics import confusion_matrix
from nltk.tokenize import TweetTokenizer
from sklearn.model_selection import train_test_split
from keras.preprocessing.text import Tokenizer
from keras.preprocessing import sequence
from keras.models import Model
from keras.layers import Dense, Dropout, Embedding
from keras.layers import LSTM, CuDNNLSTM, Flatten
from keras.layers import GRU, Activation, Input, concatenate
from keras.layers import Conv2D, MaxPool2D, Reshape
from keras.optimizers import Adam, SGD, RMSprop
from keras import optimizers
from keras import regularizers
from xgboost import XGBClassifier
import nltk
nltk.download('stopwords')
trial_data = pd.read_csv("/content/drive/My Drive/data1.csv", sep=',')
train_data = pd.read_csv("/content/drive/My Drive/data_7000_new.csv", sep=',', names=['image_name', 'Image_URL', 'OCR_extracted_text', 'corrected_text', 'Humour', 'Sarcasm', 'offensive', 'Motivational', 'Overall_Sentiment', 'Basis_of_classification'])
train_data.Overall_Sentiment.value_counts()
# train_data = train_data[train_data.Overall_Sentiment != 'neutral']
train_data = train_data[train_data.Overall_Sentiment != 'positivechandler_Friday-Mood-AF.-meme-Friends-ChandlerBing.jpg']
train_data = train_data[~train_data.Overall_Sentiment.isnull()]
# trial_data = trial_data[trial_data.Overall_Sentiment != 'neutral']
trial_data = trial_data[~trial_data.Overall_Sentiment.isnull()]
train_data.Overall_Sentiment.value_counts()
print(train_data.shape)
print(trial_data.shape)
def clean_tweets(tweet):
    tweet = re.sub(r'@(\w{1,15})\b', '', str(tweet))  # raw string, so \b is a word boundary rather than a backspace character
tweet = tweet.replace("via ", "")
tweet = tweet.replace("RT ", "")
tweet = tweet.lower()
return tweet
def clean_url(tweet):
tweet = re.sub('http\\S+', '', tweet, flags=re.MULTILINE)
return tweet
def remove_stop_words(tweet):
stops = set(stopwords.words("english"))
stops.update(['.',',','"',"'",'?',':',';','(',')','[',']','{','}'])
toks = [tok for tok in tweet if not tok in stops and len(tok) >= 3]
return toks
def stemming_tweets(tweet):
stemmer = SnowballStemmer('english')
stemmed_words = [stemmer.stem(word) for word in tweet]
return stemmed_words
def remove_number(tweet):
newTweet = re.sub('\\d+', '', tweet)
return newTweet
def remove_hashtags(tweet):
result = ''
for word in tweet.split():
if word.startswith('#') or word.startswith('@'):
result += word[1:]
result += ' '
else:
result += word
result += ' '
return result
def preprocessing(tweet, swords = True, url = True, stemming = True, ctweets = True, number = True, hashtag = True):
if ctweets:
tweet = clean_tweets(tweet)
if url:
tweet = clean_url(tweet)
if hashtag:
tweet = remove_hashtags(tweet)
twtk = TweetTokenizer(strip_handles=True, reduce_len=True)
if number:
tweet = remove_number(tweet)
tokens = [w.lower() for w in twtk.tokenize(tweet) if w != "" and w is not None]
if swords:
tokens = remove_stop_words(tokens)
if stemming:
tokens = stemming_tweets(tokens)
text = " ".join(tokens)
return text
train_text = train_data['corrected_text'].map(lambda x: preprocessing(x, swords = True, url = True, stemming = True, ctweets = True, number = True, hashtag = True))
s_train = train_data['Overall_Sentiment']
trial_text = trial_data['corrected_text'].map(lambda x: preprocessing(x, swords = True, url = True, stemming = True, ctweets = True, number = True, hashtag = True))
s_trial = trial_data['Overall_Sentiment']
print(len(train_text), len(s_train))
print(len(trial_text), len(s_trial))
def bag_of_words(train, test):
vec = CountVectorizer()
train = vec.fit_transform(train).toarray()
test = vec.transform(test).toarray()
print(vec.vocabulary_)
return train, test
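# A pure-Python miniature of what CountVectorizer does above: build a
# vocabulary from the training docs, then map each document to token
# counts (illustrative sketch only, not a replacement for the sklearn call):
def _toy_bow(train_docs):
    vocab = sorted({w for d in train_docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    def vectorize(doc):
        counts = [0] * len(vocab)
        for w in doc.split():
            if w in index:
                counts[index[w]] += 1
        return counts
    return vocab, vectorize
_v, _vec = _toy_bow(["good meme", "bad meme"])
assert _v == ["bad", "good", "meme"] and _vec("meme meme") == [0, 0, 2]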
x_train, x_test = bag_of_words(train_text, trial_text)
xgb = XGBClassifier()
xgb.fit(x_train,s_train)
xgb.score(x_train,s_train)
y_pred = xgb.predict(x_test)
print("F1.........: %f" %(f1_score(s_trial, y_pred, average="macro")))
print("Precision..: %f" %(precision_score(s_trial, y_pred, average="macro")))
print("Recall.....: %f" %(recall_score(s_trial, y_pred, average="macro")))
print("Accuracy...: %f" %(accuracy_score(s_trial, y_pred)))
confusion_matrix(s_trial, y_pred, labels=["positive", "very_positive", "neutral", "negative", "very_negative"])
print(y_pred)
```
# XLNet
```
sentences = df.new_text.values
sentences = [sentence + " [SEP] [CLS]" for sentence in sentences]
labels = df.taskA.values
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased', do_lower_case=True)
tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
print ("Tokenize the first sentence:")
print (tokenized_texts[0])
MAX_LEN = 128
input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
attention_masks = []
# Create a mask of 1s for each token followed by 0s for padding
for seq in input_ids:
seq_mask = [float(i>0) for i in seq]
attention_masks.append(seq_mask)
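# The mask logic above on a concrete row: token id 0 is padding, so the
# mask is 1.0 over real tokens and 0.0 over the padded tail.
_demo_ids = [17, 42, 9, 0, 0]
assert [float(i > 0) for i in _demo_ids] == [1.0, 1.0, 1.0, 0.0, 0.0]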
train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(input_ids, labels,
random_state=2018, test_size=0.1)
train_masks, validation_masks, _, _ = train_test_split(attention_masks, input_ids,
random_state=2018, test_size=0.1)
train_inputs = torch.tensor(train_inputs)
validation_inputs = torch.tensor(validation_inputs)
train_labels = torch.tensor(train_labels)
validation_labels = torch.tensor(validation_labels)
train_masks = torch.tensor(train_masks)
validation_masks = torch.tensor(validation_masks)
batch_size = 32
# Create an iterator of our data with torch DataLoader. This helps save on memory during training because, unlike a for loop,
# with an iterator the entire dataset does not need to be loaded into memory
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
validation_sampler = SequentialSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)
model.cuda()
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters,
lr=2e-5)
def flat_accuracy(preds, labels):
pred_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat)
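# flat_accuracy above on a concrete batch: the argmax of each logit row
# gives predictions [1, 0, 1] against labels [1, 1, 1], i.e. 2/3 correct
# (pure-Python check, illustrative only).
_demo_logits = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
_demo_preds = [max(range(len(r)), key=r.__getitem__) for r in _demo_logits]
_demo_labels = [1, 1, 1]
assert sum(p == l for p, l in zip(_demo_preds, _demo_labels)) / len(_demo_labels) == 2 / 3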
train_loss_set = []
# Number of training epochs (authors recommend between 2 and 4)
epochs = 4
# trange is a tqdm wrapper around the normal python range
for _ in trange(epochs, desc="Epoch"):
# Training
# Set our model to training mode (as opposed to evaluation mode)
model.train()
# Tracking variables
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
# Train the data for one epoch
for step, batch in enumerate(train_dataloader):
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Clear out the gradients (by default they accumulate)
optimizer.zero_grad()
# Forward pass
outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
loss = outputs[0]
logits = outputs[1]
train_loss_set.append(loss.item())
# Backward pass
loss.backward()
# Update parameters and take a step using the computed gradient
optimizer.step()
# Update tracking variables
tr_loss += loss.item()
nb_tr_examples += b_input_ids.size(0)
nb_tr_steps += 1
print("Train loss: {}".format(tr_loss/nb_tr_steps))
# Validation
# Put model in evaluation mode to evaluate loss on the validation set
model.eval()
# Tracking variables
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
# Evaluate data for one epoch
for batch in validation_dataloader:
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Telling the model not to compute or store gradients, saving memory and speeding up validation
with torch.no_grad():
# Forward pass, calculate logit predictions
output = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
logits = output[0]
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_accuracy += tmp_eval_accuracy
nb_eval_steps += 1
print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
plt.figure(figsize=(15,8))
plt.title("Training loss")
plt.xlabel("Batch")
plt.ylabel("Loss")
plt.plot(train_loss_set)
plt.show()
```
| github_jupyter |
# Malware Classification
**Data taken from**<br>
https://github.com/Te-k/malware-classification
Here is the plan that we will follow :
- Extract as many features as we can from binaries to have a good training data set. The features have to be integers or floats to be usable by the algorithms
- Identify the best features for the algorithm : we should select the information that best allows to differenciate legitimate files from malware.
- Choose a classification algorithm
- Test the efficiency of the algorithm and identify the False Positive/False negative rate
### How was the data generated
- Some features were extracted using Manalyzer. The PE features extracted are almost used directly as they are integers (field size, addresses, parameters…)
The author of data quotes:
```
So I extracted all the PE parameters I could by using pefile, and considered especially the one that are relevant for identifying malware, like the entropy of section for packer detection. As we can only have a fix list of feature (and not one per section), I extracted the Mean, Minimum and Maximum of entropy for sections and resources.
```
For legitimate file, I gathered all the Windows binaries (exe + dll) from Windows 2008, Windows XP and Windows 7 32 and 64 bits, so exactly 41323 binaries. It is not a perfect dataset as there is only Microsoft binaries and not binaries from application which could have different properties, but I did not find any easy way to gather easily a lot of legitimate binaries, so it will be enough for playing.<br>
Regarding malware, I used a part of [Virus Share](https://virusshare.com) collection by downloading one archive (the 134th) and kept only PE files (96724 different files).<br>
I used [pefile](https://github.com/erocarrera/pefile) to extract all these features from the binaries and store them in a csv file (ugly code is here, data are here).
### Services used:
- https://manalyzer.org : Manalyzer is a free service which performs static analysis on PE executables to detect undesirable behavior. Try it online, or check out the underlying software on https://github.com/JusticeRage/Manalyze
- https://virusshare.com : VirusShare.com is a repository of malware samples to provide security researchers, incident responders, forensic analysts, and the morbidly curious access to samples of live malicious code.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
## load the dataset
```
df = pd.read_csv("../dataset/malware.csv", sep="|")
df.head(10)
```
### Scripts used to generate above data
- https://github.com/Te-k/malware-classification/blob/master/checkmanalyzer.py
- https://github.com/Te-k/malware-classification/blob/master/checkpe.py
- https://github.com/Te-k/malware-classification/blob/master/generatedata.py
## Analysis
```
df.shape
df.columns
```
## Distribution of legitimate and malwares
```
df["_type"] = "legit"
df.loc[df['legitimate'] == 0, '_type'] = "malware"
df['_type'].value_counts()
df['_type'].value_counts(normalize=True) * 100
pd.crosstab(df['_type'], 'count').plot(kind='bar', color=['green', 'red'])
plt.title("Distribution of legitimite and malware")
## todo add more visualizations
```
## Feature Selection
The idea of feature selection is to reduce the 54 features extracted to a smaller set of feature which are the most relevant for differentiating legitimate binaries from malware.
```
legit_binaries = df[0:41323].drop(['legitimate'], axis=1)
malicious_binaries = df[41323::].drop(['legitimate'], axis=1)
```
## Manual data cleaning & feature selection
So a first way of doing it manually could be to check the different values and see if there is a difference between the two groups. For instance, we can take the parameter FileAlignment (which defines the alignment of sections and is by default 0x200 bytes) and check the values :
```
legit_binaries['FileAlignment'].value_counts(normalize=True) * 100
malicious_binaries['FileAlignment'].value_counts(normalize=True) * 100
```
So if we remove the 20 malware having weird values here, there is not much difference on this value between the two groups, this parameter would not make a good feature for us.
On the other side, some values are clearly interesting like the max entropy of the sections which can be represented with an histogram:
```
plt.figure(figsize=(15,10))
plt.hist([legit_binaries['SectionsMaxEntropy'], malicious_binaries['SectionsMaxEntropy']], range=[0,8], normed=True, color=["green", "red"],label=["legitimate", "malicious"])
plt.legend()
plt.title("distribution of Section Max Entropy for legit vs malaicious binaries")
plt.show()
xlabel = 'Machine'
ylabel = 'ImageBase'
plt.figure(figsize=(10,6))
plt.scatter(_df[xlabel], _df[ylabel], label=_df['legitimate'], color=['green', 'red'])
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.title("Distribution by %s & %s" % (xlabel, ylabel))
def plotDistributionForFeature(colName, figsize=(7,5)):
global legit_binaries
global malicious_binaries
plt.figure(figsize=figsize)
plt.hist(
[legit_binaries[colName], malicious_binaries[colName]],
range=[0,8],
normed=True,
color=["green", "red"],
label=["legitimate", "malicious"])
plt.legend()
plt.title("distribution of %s for legit vs malaicious binaries" % colName)
plt.show()
avoidCols = ['_type', 'Name', 'md5', 'legitimate']
for col in df.columns:
if col not in avoidCols:
plotDistributionForFeature(col)
```
## Automatic Feature Selection
some algorithms have been developed to identify the most interesting features and reduce the dimensionality of the data set (see the Scikit page for Feature Selection).
In our case, we will use the Tree-based feature selection:
```
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
X = df.drop(['Name', 'md5', 'legitimate', '_type'], axis=1).values
y = df['legitimate'].values
fsel = ExtraTreesClassifier().fit(X, y)
model = SelectFromModel(fsel, prefit=True)
X_new = model.transform(X)
print ("Shape before feaure selection: ", X.shape)
print ("Shape after feaure selection: ", X_new.shape)
```
So in this case, the algorithm selected 13 important features among the 54, and we can notice that indeed the SectionsMaxEntropy is selected but other features (like the Machine value) are surprisingly also good parameters for this classification :
```
nb_features = X_new.shape[1]
indices = np.argsort(fsel.feature_importances_)[::-1][:nb_features]
for f in range(nb_features):
print("%d. feature %s (%f)" % (f + 1, df.columns[2+indices[f]], fsel.feature_importances_[indices[f]]))
```
## Test train split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_new, y ,test_size=0.2)
```
# Classification: selecting Models
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
algorithms = {
"DecisionTree": DecisionTreeClassifier(max_depth=10),
"RandomForest": RandomForestClassifier(n_estimators=50),
"GradientBoosting": GradientBoostingClassifier(n_estimators=50),
"AdaBoost": AdaBoostClassifier(n_estimators=100),
"LogisticRegression": LogisticRegression()
}
results = {}
print("\nNow testing algorithms")
for algo in algorithms:
clf = algorithms[algo]
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
print("%s : %f %%" % (algo, score*100))
results[algo] = score
winner = max(results, key=results.get)
print('\nWinner algorithm is %s with a %f %% success' % (winner, results[winner]*100))
```
## Evaluation
```
from sklearn.metrics import confusion_matrix
# Identify false and true positive rates
clf = algorithms[winner]
res = clf.predict(X_test)
mt = confusion_matrix(y_test, res)
print("False positive rate : %f %%" % ((mt[0][1] / float(sum(mt[0])))*100))
print('False negative rate : %f %%' % ( (mt[1][0] / float(sum(mt[1]))*100)))
```
## Results
### 99.38% Accuracy
### False positive rate : 0.383678 %
### False negative rate : 0.937162 %
### Why it’s not enough?
First, a bit of vocabulary for measuring IDS accuracy (taken from Wikipedia):
- *Sensitivity* : the proportion of positives identified as such (or true positive rate)
- *Specificity* : the proportion of negatives correctly identified as such (or true negative)
- *False Positive Rate (FPR)* : the proportion of events badly identified as positive over the total number of negatives
- *False Negative Rate (FNR)* : the proportion of events badly identified as negative over the total number of positives
### So why 99.38% is not enough?
- Because you can’t just consider the sensitivity/specificity of the algorithm, you have to consider the malicious over legitimate traffic ratio to understand how many alerts will be generated by the IDS. And this ratio is extremely low.
- Let’s consider the you have 1 malicious event every 10 000 event (it’s a really high ratio) and 1 000 000 events per day, you will have :
- 100 malicious events, 99 identified by the tool and 1 false negative (0.93% FNR but let’s consider 1% here)
- 999 900 legitimate events, around *3835* identified as malicious (0.38% FPR)
- So in the end, the *analyst would received 3934 alerts per day* with only *99 true positive in it (2.52%)*. Your IDS is useless here.
## Visualization
To be done
| github_jupyter |
# Welcome to PySyft
The goal of this notebook is to provide step by step explanation of the internal workings of PySyft for developers and have working examples of the API to play with.
**Note:** You should be able to run these without any issues. This notebook will be automatically run by CI and flagged if it fails. If your commit breaks this notebook, either fix the issue or add some information here for others.
```
assert True is True
import sys
import pytest
import syft as sy
from syft.core.node.common.service.auth import AuthorizationException
from syft.util import key_emoji
sy.LOG_FILE = "syft_do.log"
sy.logger.remove()
_ = sy.logger.add(sys.stdout, level="DEBUG")
```
Bob decides has some data he wants to share with Alice using PySyft.
The first thing Bob needs to do is to create a Node to handle all the PySyft services he will need to safely and securely share his data with Alice.
```
somedevice = sy.Device()
```
This is a device, it is a Node and it has a name, id and address.
```
print(somedevice.name, somedevice.id, somedevice.address)
```
The ID is a class called UID which is essentially a uuid. The address is a combination of up to four different locations, identifying the path to resolve the final target of the address.
```
print(somedevice.address.vm, somedevice.address.device, somedevice.address.domain, somedevice.address.network)
print(somedevice.address.target_id)
```
UIDs are hard to read and compare so we have a function which converts them to 2 x emoji.
Just like the "name", Emoji uniqueness is not guaranteed but is very useful during debugging.
```
print(somedevice.address.target_id.pprint)
print(somedevice.address.pprint)
```
Most things that are "pretty printed" include an emoji, a name and a class, additionally with a UID and "visual UID" emoji.
📌 somedevice.address.target_id is a "Location" which is pointing to this specific location with a name and address.
💠 [📱] somedevice.address is the "Address" of somedevice (Think up to 4 x locations) in this case the contents of the List [] show it only has the Location of a device currently.
**note:** Sometimes Emoji's look like this: 🙓. The dynamically generated code point doesnt have an Emoji. A PR fix would be welcome. 🙂
```
print(somedevice.id, somedevice.address.target_id.id)
print(somedevice.id.emoji(), "==", somedevice.address.target_id.id.emoji())
print(somedevice.pprint)
assert somedevice.id == somedevice.address.target_id.id
```
Interaction with a Node like a device is always done through a client. Clients can "send" messages and Nodes can "receive" them. Bob needs to get a client for his device. But first it might be a good idea to name is device so that its easier to follow.
```
bob_device = sy.Device(name="Bob's iPhone")
assert bob_device.name == "Bob's iPhone"
bob_device_client = bob_device.get_client()
```
When you ask a node for a client you get a "Client" which is named after the device and has the same "UID" and "Address" (4 x Locations) as the device it was created from, and it will have a "Route" that connects it to the "Device"
```
assert bob_device_client.name == "Bob's iPhone Client"
print(bob_device_client.pprint, bob_device.pprint)
print(bob_device.id.emoji(), "==", bob_device_client.id.emoji())
assert bob_device.id == bob_device_client.device.id
assert bob_device.address == bob_device_client.address
```
📡 [📱] Bob's iPhone Client is a "DeviceClient" and it has the same UID and "Address" as Bob's Device
Now we have something that can send and receive lets take it for a spin.
Since everything is handled with a layer of abstraction the smallest unit of work is a "SyftMessage". Very little can be done without sending a message from a Client to a Node. There are many types of "SyftMessage" which boil down to whether or not they are Sync or Async, and whether or not they expect a response.
Lets make a ReprMessage which simply gets a message and prints it at its destination Node.
SyftMessage's all have an "address" field, without this they would never get delivered. They also generally have a msg_id which can be used to keep track of them.
```
msg = sy.ReprMessage(address=bob_device_client.address)
print(msg.pprint)
print(bob_device_client.address.pprint)
assert msg.address == bob_device_client.address
```
What type of Message is ReprMessage you ask?
```
print(sy.ReprMessage.mro())
```
Its an "Immediate" "WithoutReply" okay so Sync and no response.
Now lets send it, remember we need a Client not a Node for sending.
```
with pytest.raises(AuthorizationException):
bob_device_client.send_immediate_msg_without_reply(
msg=sy.ReprMessage(address=bob_device_client.address)
)
```
Oh oh! Why did Auth fail? We'll we can see from the debug that the 🔑 (VerifyKey) of the sender was matched to the 🗝 (Root VerifyKey) of the destination and they don't match. This client does not have sufficient permission to send a ReprMessage.
First lets take a look at the keys involved.
```
print(bob_device_client.keys)
print(bob_device.keys)
assert bob_device_client.verify_key != bob_device.root_verify_key
```
Not to worry we have a solution, lets get a client which does have this permission.
```
bob_device_client = bob_device.get_root_client()
```
Lets take a look again.
```
print(bob_device_client.keys)
print(bob_device.keys)
assert bob_device_client.verify_key == bob_device.root_verify_key
```
Lets try sending the message again.
```
bob_device_client.send_immediate_msg_without_reply(
msg=sy.ReprMessage(address=bob_device_client.address)
)
```
Woohoo! 🎉
Okay so theres a lot going on but lets step through it.
The ReprMessage is created by the Client, and then signed with the Client's SigningKey.
The SigningKey and VerifyKey are a pair and the VerifyKey is public and derived from the SigningKey.
When we call get_root_client() we update the Node with the newly generated key on the Client so that the Client will now have permission to execute root actions.
Behind every message type is a service which executes the message on a Node.
To run ReprMessage on our Device Node, we can see that during startup it adds a service to handle these kinds of messages:
```python
# common/node.py
self.immediate_services_without_reply.append(ReprService)
````
Not all actions / services require "root". To enable this a decorator is added like so:
```python
# repr_service.py
class ReprService(ImmediateNodeServiceWithoutReply):
@staticmethod
@service_auth(root_only=True)
```
Okay so Bob has root access to his own device, but he wants to share some data and compute resources of this device to someone else. So to do that he needs to create a "Sub Node" which will be a "VirtualMachine". Think of this as a partition or slice of his device which can be allocated memory, storage and compute.
```
bob_vm = sy.VirtualMachine(name="Bob's VM")
```
Since VirtualMachine is a Node (Server) it will need a Client to issue commands to it.
Lets make one.
**note:** Why do we need a root client? The registration process is two way and the Registeree will need to update its address in response to a successful registration.
```
bob_vm_client = bob_vm.get_root_client()
```
Okay so now Bob has two Nodes and their respective clients, but they know nothing of each other.
They both have addresses that only point to themselves.
```
print(bob_device_client.address.pprint)
print(bob_vm_client.address.pprint)
```
Lets register Bob's vm with its device since the Device is higher up in the level of scope.
```
bob_device_client.register(client=bob_vm_client)
```
Whoa.. Okay lots happening. As you can see there are two messages and two Authentications. The first one is the `RegisterChildNodeMessage` which is dispatched to the address of the Device. Once it is received it stores the address of the registering Node and then dispatches a new `HeritageUpdateMessage` back to the sender of the first message.
**note:** This is not a reply message, this is a completely independent message that happens to be sent back to the sender's address.
```python
issubclass(RegisterChildNodeMessage, SignedImmediateSyftMessageWithoutReply)
issubclass(HeritageUpdateService, SignedImmediateSyftMessageWithoutReply)
```
You will also notice that the Messages turned into Protobufs and then Signed.
Lets see how this works for `ReprMessage`.
1) ✉️ -> Proto 🔢
Every message that is sent requires and the following method signatures:
```python
def _object2proto(self) -> ReprMessage_PB:
def _proto2object(proto: ReprMessage_PB) -> "ReprMessage":
def get_protobuf_schema() -> GeneratedProtocolMessageType:
```
The get_protobuf_schema method will tell the caller what Protobuf class to use, and then the _object2proto method will be called to turn normal python into a protobuf message.
```python
# repr_service.py
def _object2proto(self) -> ReprMessage_PB:
return ReprMessage_PB(
msg_id=self.id.serialize(), address=self.address.serialize(),
)
```
Any type which isnt a normal Protobuf primitive must be converted to a proto or serialized before being stored.
**note:** self.id and self.address also need to be serialized so this will call their `_object2proto` methods.
At this point we are using code auto generated by `protoc` as per the build script:
```bash
$ ./scripts/build_proto.sh
```
Here is the .proto definition for `ReprMessage`
```c++
// repr_service.proto
syntax = "proto3";
package syft.core.node.common.service;
import "proto/core/common/common_object.proto";
import "proto/core/io/address.proto";
message ReprMessage {
syft.core.common.UID msg_id = 1;
syft.core.io.Address address = 2;
}
```
Once the message needs to be Deserialized the `_proto2object` method will be called.
```python
# repr_service.py
return ReprMessage(
msg_id=_deserialize(blob=proto.msg_id),
address=_deserialize(blob=proto.address),
)
```
Two things to pay attention to:
1) `RegisterChildNodeMessage` has caused Bob's Device Store has been updated with an entry representing Bob's VM Address
2) `HeritageUpdateService` has caused Bob's VM to update its address to now include the `SpecificLocation` of Bob's Device.
**note:** The Address for Bob's VM Client inside the Store does not include the "Device" part of the "Address" (4 x Locations) since it isn't updated until after the HeritageUpdateService message is sent.
```
print(bob_device.store.pprint)
assert bob_vm_client.address.target_id.id in bob_device.store
print(bob_vm_client.address.pprint, bob_vm_client.address.target_id.id.emoji())
```
What about `SignedMessage`? If you read `message.py` you will see that all messages inherit from `SignedMessage` and as such contain the following fields.
```c++
message SignedMessage {
syft.core.common.UID msg_id = 1;
string obj_type = 2;
bytes signature = 3;
bytes verify_key = 4;
bytes message = 5;
}
```
The actual message is serialized and stored in the `message` field and a hash of its bytes is calculated using the Client's `SigningKey`.
The contents of `message` are not encrypted and can be read at any time by simply deserializing them.
```
def get_signed_message_bytes() -> bytes:
# return a signed message fixture containing the uid from get_uid
blob = (
b'\n?syft.core.common.message.SignedImmediateSyftMessageWithoutReply\x12\xad\x02\n\x12\n\x10\x8c3\x19,'
+ b'\xcd\xd3\xf3N\xe2\xb0\xc6\tU\xdf\x02u\x126syft.core.node.common.service.repr_service.ReprMessage'
+ b'\x1a@@\x82\x13\xfaC\xfb=\x01H\x853\x1e\xceE+\xc6\xb5\rX\x16Z\xb8l\x02\x10\x8algj\xd6U\x11]\xe9R\x0ei'
+ b'\xd8\xca\xb9\x00=\xa1\xeeoEa\xe2C\xa0\x960\xf7A\xfad<(9\xe1\x8c\x93\xf1\x0b" \x81\xff\xcc\xfc7\xc4U.'
+ b'\x8a*\x1f"=0\x10\xc4\xef\x88\xc80\x01\xf0}3\x0b\xd4\x97\xad/P\x8f\x0f*{\n6'
+ b'syft.core.node.common.service.repr_service.ReprMessage\x12A\n\x12\n\x10\x8c3\x19,'
+ b'\xcd\xd3\xf3N\xe2\xb0\xc6\tU\xdf\x02u\x12+\n\x0bGoofy KirchH\x01R\x1a\n\x12\n\x10\xfb\x1b\xb0g[\xb7LI'
+ b'\xbe\xce\xe7\x00\xab\n\x15\x14\x12\x04Test'
)
return blob
sig_msg = sy.deserialize(blob=get_signed_message_bytes(), from_bytes=True)
```
We can get the nested message with a property called `message`
```
repr_msg = sig_msg.message
print(repr_msg.pprint, sig_msg.pprint)
print(repr_msg.address.pprint, sig_msg.address.pprint)
print(repr_msg.address.target_id.id.emoji(), sig_msg.address.target_id.id.emoji())
assert sig_msg.id == repr_msg.id
assert sig_msg.address == repr_msg.address
```
Notice the UID's of `ReprMessage` and `SignedImmediateSyftMessageWithoutReply` are the same. So are the delivery address's.
But the original bytes are still available and serialization / deserialization or serde (ser/de) is bi-directional and reversible
```
assert repr_msg.serialize(to_bytes=True) == sig_msg.serialized_message
print(repr_msg.pprint, " ⬅️ ", sig_msg.pprint)
from nacl.signing import SigningKey, VerifyKey
def get_signing_key() -> SigningKey:
# return a the signing key used to sign the get_signed_message_bytes fixture
key = "e89ff2e651b42393b6ecb5956419088781309d953d72bd73a0968525a3a6a951"
return SigningKey(bytes.fromhex(key))
```
Lets try re-signing it with the same key it was signed with.
```
sig_msg_comp = repr_msg.sign(signing_key=get_signing_key())
signing_key = get_signing_key()
verify_key = signing_key.verify_key
print(f"SigningKey: {key_emoji(key=signing_key)}")
print(f"VerifyKey: {key_emoji(key=verify_key)}")
print(type(signing_key), type(verify_key))
print(f"🔑 {key_emoji(key=sig_msg.verify_key)} == {key_emoji(key=verify_key)} 🔑")
print(bytes(verify_key))
assert sig_msg_comp == sig_msg
assert sig_msg.verify_key == verify_key
assert VerifyKey(bytes(verify_key)) == verify_key
```
The message is signed with the `SigningKey`, a consistent `VerifyKey` is derived from the `SigningKey`. Both keys can be transformed to bytes and back easily.
Okay now Bob wants to protect his Device(s) and its / their VM(s). To do that he needs to add them to a higher level Node called a `Domain`.
```
bob_domain = sy.Domain(name="Bob's Domain")
bob_domain_client = bob_domain.get_root_client()
```
Okay lets follow the same proceedure and link up these nodes.
```
print(bob_domain.address.pprint)
bob_domain_client.register(client=bob_device_client)
```
Thats interesting, we see theres two `HeritageUpdateMessage` that get sent. The address update is "Flowing" upward to the leaf VM nodes.
```
print(bob_vm.address.pprint)
print(bob_device.address.pprint)
print(bob_domain.address.pprint)
assert bob_domain_client.id == bob_device.address.domain.id
assert bob_device.id == bob_vm.address.device.id
```
Now that the Nodes are aware of each other, we can send a message to any child node by dispatching a message on a parent Client and addressing the Child node.
**note:** We are changing the bob_vm root_verify_key because ReprMessage is a root message. We should change this example.
Note, the repr service has the service_auth decorator.
`@service_auth(root_only=True)`
```python
class ReprService(ImmediateNodeServiceWithoutReply):
@staticmethod
@service_auth(root_only=True)
def process(node: AbstractNode, msg: ReprMessage, verify_key: VerifyKey) -> None:
print(node.__repr__())
@staticmethod
def message_handler_types() -> List[Type[ReprMessage]]:
return [ReprMessage]
```
For the purpose of the demonstration we will override the destination nodes root_verify_key with the one of our new domain client and the secure ReprMessage will be executed. Normally the remote node is not running in the same python REPL as the client.
```
# Just to bypass the auth we will set the root verify key on destination so that it will accept this message
bob_vm.root_verify_key = bob_domain_client.verify_key # inject 📡🔑 as 📍🗝
bob_domain_client.send_immediate_msg_without_reply(
msg=sy.ReprMessage(address=bob_vm.address)
)
```
| github_jupyter |
```
# Uncomment and run this cell if you're on Colab or Kaggle
# !git clone https://github.com/nlp-with-transformers/notebooks.git
# %cd notebooks
# from install import *
# install_requirements()
#hide
from utils import *
setup_chapter()
```
# Multilingual Named Entity Recognition
## The Dataset
```
#id jeff-dean-ner
#caption An example of a sequence annotated with named entities
#hide_input
import pandas as pd
toks = "Jeff Dean is a computer scientist at Google in California".split()
lbls = ["B-PER", "I-PER", "O", "O", "O", "O", "O", "B-ORG", "O", "B-LOC"]
df = pd.DataFrame(data=[toks, lbls], index=['Tokens', 'Tags'])
df
from datasets import get_dataset_config_names
xtreme_subsets = get_dataset_config_names("xtreme")
print(f"XTREME has {len(xtreme_subsets)} configurations")
panx_subsets = [s for s in xtreme_subsets if s.startswith("PAN")]
panx_subsets[:3]
# hide_output
from datasets import load_dataset
load_dataset("xtreme", name="PAN-X.de")
# hide_output
from collections import defaultdict
from datasets import DatasetDict
langs = ["de", "fr", "it", "en"]
fracs = [0.629, 0.229, 0.084, 0.059]
# Return a DatasetDict if a key doesn't exist
panx_ch = defaultdict(DatasetDict)
for lang, frac in zip(langs, fracs):
# Load monolingual corpus
ds = load_dataset("xtreme", name=f"PAN-X.{lang}")
# Shuffle and downsample each split according to spoken proportion
for split in ds:
panx_ch[lang][split] = (
ds[split]
.shuffle(seed=0)
.select(range(int(frac * ds[split].num_rows))))
import pandas as pd
pd.DataFrame({lang: [panx_ch[lang]["train"].num_rows] for lang in langs},
index=["Number of training examples"])
element = panx_ch["de"]["train"][0]
for key, value in element.items():
print(f"{key}: {value}")
for key, value in panx_ch["de"]["train"].features.items():
print(f"{key}: {value}")
tags = panx_ch["de"]["train"].features["ner_tags"].feature
print(tags)
# hide_output
def create_tag_names(batch):
return {"ner_tags_str": [tags.int2str(idx) for idx in batch["ner_tags"]]}
panx_de = panx_ch["de"].map(create_tag_names)
# hide_output
de_example = panx_de["train"][0]
pd.DataFrame([de_example["tokens"], de_example["ner_tags_str"]],
['Tokens', 'Tags'])
from collections import Counter
split2freqs = defaultdict(Counter)
for split, dataset in panx_de.items():
for row in dataset["ner_tags_str"]:
for tag in row:
if tag.startswith("B"):
tag_type = tag.split("-")[1]
split2freqs[split][tag_type] += 1
pd.DataFrame.from_dict(split2freqs, orient="index")
```
## Multilingual Transformers
## A Closer Look at Tokenization
```
# hide_output
from transformers import AutoTokenizer
bert_model_name = "bert-base-cased"
xlmr_model_name = "xlm-roberta-base"
bert_tokenizer = AutoTokenizer.from_pretrained(bert_model_name)
xlmr_tokenizer = AutoTokenizer.from_pretrained(xlmr_model_name)
text = "Jack Sparrow loves New York!"
bert_tokens = bert_tokenizer(text).tokens()
xlmr_tokens = xlmr_tokenizer(text).tokens()
#hide_input
df = pd.DataFrame([bert_tokens, xlmr_tokens], index=["BERT", "XLM-R"])
df
```
### The Tokenizer Pipeline
<img alt="Tokenizer pipeline" caption="The steps in the tokenization pipeline" src="images/chapter04_tokenizer-pipeline.png" id="toknizer-pipeline"/>
### The SentencePiece Tokenizer
```
"".join(xlmr_tokens).replace(u"\u2581", " ")
```
## Transformers for Named Entity Recognition
<img alt="Architecture of a transformer encoder for classification." caption="Fine-tuning an encoder-based transformer for sequence classification" src="images/chapter04_clf-architecture.png" id="clf-arch"/>
<img alt="Architecture of a transformer encoder for named entity recognition. The wide linear layer shows that the same linear layer is applied to all hidden states." caption="Fine-tuning an encoder-based transformer for named entity recognition" src="images/chapter04_ner-architecture.png" id="ner-arch"/>
## The Anatomy of the Transformers Model Class
### Bodies and Heads
<img alt="bert-body-head" caption="The `BertModel` class only contains the body of the model, while the `BertFor<Task>` classes combine the body with a dedicated head for a given task" src="images/chapter04_bert-body-head.png" id="bert-body-head"/>
### Creating a Custom Model for Token Classification
```
import torch.nn as nn
from transformers import XLMRobertaConfig
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel
from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel
class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
config_class = XLMRobertaConfig
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
# Load model body
self.roberta = RobertaModel(config, add_pooling_layer=False)
# Set up token classification head
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
# Load and initialize weights
self.init_weights()
def forward(self, input_ids=None, attention_mask=None, token_type_ids=None,
labels=None, **kwargs):
# Use model body to get encoder representations
outputs = self.roberta(input_ids, attention_mask=attention_mask,
token_type_ids=token_type_ids, **kwargs)
# Apply classifier to encoder representation
sequence_output = self.dropout(outputs[0])
logits = self.classifier(sequence_output)
# Calculate losses
loss = None
if labels is not None:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
# Return model output object
return TokenClassifierOutput(loss=loss, logits=logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions)
```
### Loading a Custom Model
```
index2tag = {idx: tag for idx, tag in enumerate(tags.names)}
tag2index = {tag: idx for idx, tag in enumerate(tags.names)}
# hide_output
from transformers import AutoConfig
xlmr_config = AutoConfig.from_pretrained(xlmr_model_name,
num_labels=tags.num_classes,
id2label=index2tag, label2id=tag2index)
# hide_output
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
xlmr_model = (XLMRobertaForTokenClassification
.from_pretrained(xlmr_model_name, config=xlmr_config)
.to(device))
# hide_output
input_ids = xlmr_tokenizer.encode(text, return_tensors="pt")
pd.DataFrame([xlmr_tokens, input_ids[0].numpy()], index=["Tokens", "Input IDs"])
outputs = xlmr_model(input_ids.to(device)).logits
predictions = torch.argmax(outputs, dim=-1)
print(f"Number of tokens in sequence: {len(xlmr_tokens)}")
print(f"Shape of outputs: {outputs.shape}")
preds = [tags.names[p] for p in predictions[0].cpu().numpy()]
pd.DataFrame([xlmr_tokens, preds], index=["Tokens", "Tags"])
def tag_text(text, tags, model, tokenizer):
# Get tokens with special characters
tokens = tokenizer(text).tokens()
# Encode the sequence into IDs
input_ids = xlmr_tokenizer(text, return_tensors="pt").input_ids.to(device)
# Get predictions as distribution over 7 possible classes
outputs = model(input_ids)[0]
# Take argmax to get most likely class per token
predictions = torch.argmax(outputs, dim=2)
# Convert to DataFrame
preds = [tags.names[p] for p in predictions[0].cpu().numpy()]
return pd.DataFrame([tokens, preds], index=["Tokens", "Tags"])
```
## Tokenizing Texts for NER
```
words, labels = de_example["tokens"], de_example["ner_tags"]
tokenized_input = xlmr_tokenizer(de_example["tokens"], is_split_into_words=True)
tokens = xlmr_tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
#hide_output
pd.DataFrame([tokens], index=["Tokens"])
# hide_output
word_ids = tokenized_input.word_ids()
pd.DataFrame([tokens, word_ids], index=["Tokens", "Word IDs"])
#hide_output
previous_word_idx = None
label_ids = []
for word_idx in word_ids:
if word_idx is None or word_idx == previous_word_idx:
label_ids.append(-100)
        else:
label_ids.append(labels[word_idx])
previous_word_idx = word_idx
labels = [index2tag[l] if l != -100 else "IGN" for l in label_ids]
index = ["Tokens", "Word IDs", "Label IDs", "Labels"]
pd.DataFrame([tokens, word_ids, label_ids, labels], index=index)
def tokenize_and_align_labels(examples):
tokenized_inputs = xlmr_tokenizer(examples["tokens"], truncation=True,
is_split_into_words=True)
labels = []
for idx, label in enumerate(examples["ner_tags"]):
word_ids = tokenized_inputs.word_ids(batch_index=idx)
previous_word_idx = None
label_ids = []
for word_idx in word_ids:
if word_idx is None or word_idx == previous_word_idx:
label_ids.append(-100)
else:
label_ids.append(label[word_idx])
previous_word_idx = word_idx
labels.append(label_ids)
tokenized_inputs["labels"] = labels
return tokenized_inputs
def encode_panx_dataset(corpus):
return corpus.map(tokenize_and_align_labels, batched=True,
remove_columns=['langs', 'ner_tags', 'tokens'])
# hide_output
panx_de_encoded = encode_panx_dataset(panx_ch["de"])
```
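The `-100` label ID used above works because PyTorch's `nn.CrossEntropyLoss` ignores index `-100` by default (`ignore_index=-100`), so masked positions contribute nothing to the loss. A pure-Python sketch of the idea (the label IDs and per-token losses below are made-up illustrative values):

```python
# Sketch: why -100 works as an "ignore" label. Positions labeled -100
# (special tokens and subword continuations) are dropped before averaging.
label_ids = [-100, 2, -100, -100, 5, 0, -100]          # -100 marks ignored tokens
per_token_loss = [0.9, 0.3, 1.2, 0.8, 0.5, 0.1, 2.0]   # hypothetical losses

# Keep only positions with a real label, then average, exactly as
# CrossEntropyLoss does internally for the kept positions
kept = [l for l, y in zip(per_token_loss, label_ids) if y != -100]
masked_loss = sum(kept) / len(kept)
print(f"{masked_loss:.2f}")  # → 0.30
```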
## Performance Measures
```
from seqeval.metrics import classification_report
y_true = [["O", "O", "O", "B-MISC", "I-MISC", "I-MISC", "O"],
["B-PER", "I-PER", "O"]]
y_pred = [["O", "O", "B-MISC", "I-MISC", "I-MISC", "I-MISC", "O"],
["B-PER", "I-PER", "O"]]
print(classification_report(y_true, y_pred))
import numpy as np
def align_predictions(predictions, label_ids):
preds = np.argmax(predictions, axis=2)
batch_size, seq_len = preds.shape
labels_list, preds_list = [], []
for batch_idx in range(batch_size):
example_labels, example_preds = [], []
for seq_idx in range(seq_len):
# Ignore label IDs = -100
if label_ids[batch_idx, seq_idx] != -100:
example_labels.append(index2tag[label_ids[batch_idx][seq_idx]])
example_preds.append(index2tag[preds[batch_idx][seq_idx]])
labels_list.append(example_labels)
preds_list.append(example_preds)
return preds_list, labels_list
```
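*seqeval* differs from token-level metrics in that it scores complete entities: a prediction only counts as correct if both the entity type and the full span match. A pure-Python sketch of the span extraction this implies (a simplified version of what seqeval does internally, using the example tags above):

```python
# Sketch (illustrative, not seqeval's actual implementation): extract
# (type, start, end) entity spans from a list of IOB2 tags.
def extract_entities(tags):
    entities, start, etype = set(), None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last entity
        boundary = (tag == "O" or tag.startswith("B-")
                    or (etype is not None and tag[2:] != etype))
        if boundary:
            if etype is not None:
                entities.add((etype, start, i))  # end index is exclusive
            start, etype = (None, None) if tag == "O" else (i, tag[2:])
        elif etype is None:  # a stray I- tag starts a new entity
            start, etype = i, tag[2:]
    return entities

y_true = ["O", "O", "O", "B-MISC", "I-MISC", "I-MISC", "O"]
y_pred = ["O", "O", "B-MISC", "I-MISC", "I-MISC", "I-MISC", "O"]
print(extract_entities(y_true))  # {('MISC', 3, 6)}
print(extract_entities(y_pred))  # {('MISC', 2, 6)}
# The spans differ, so at the entity level this prediction counts as both
# a false positive and a false negative, as the report above shows.
```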
## Fine-Tuning XLM-RoBERTa
```
# hide_output
from transformers import TrainingArguments
num_epochs = 3
batch_size = 24
logging_steps = len(panx_de_encoded["train"]) // batch_size
model_name = f"{xlmr_model_name}-finetuned-panx-de"
training_args = TrainingArguments(
output_dir=model_name, log_level="error", num_train_epochs=num_epochs,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size, evaluation_strategy="epoch",
save_steps=1e6, weight_decay=0.01, disable_tqdm=False,
logging_steps=logging_steps, push_to_hub=True)
#hide_output
from huggingface_hub import notebook_login
notebook_login()
from seqeval.metrics import f1_score
def compute_metrics(eval_pred):
y_pred, y_true = align_predictions(eval_pred.predictions,
eval_pred.label_ids)
return {"f1": f1_score(y_true, y_pred)}
from transformers import DataCollatorForTokenClassification
data_collator = DataCollatorForTokenClassification(xlmr_tokenizer)
def model_init():
return (XLMRobertaForTokenClassification
.from_pretrained(xlmr_model_name, config=xlmr_config)
.to(device))
#hide
%env TOKENIZERS_PARALLELISM=false
# hide_output
from transformers import Trainer
trainer = Trainer(model_init=model_init, args=training_args,
data_collator=data_collator, compute_metrics=compute_metrics,
train_dataset=panx_de_encoded["train"],
eval_dataset=panx_de_encoded["validation"],
tokenizer=xlmr_tokenizer)
#hide_input
trainer.train()
trainer.push_to_hub(commit_message="Training completed!")
# hide_input
df = pd.DataFrame(trainer.state.log_history)[['epoch','loss' ,'eval_loss', 'eval_f1']]
df = df.rename(columns={"epoch":"Epoch","loss": "Training Loss", "eval_loss": "Validation Loss", "eval_f1":"F1"})
df['Epoch'] = df["Epoch"].apply(lambda x: round(x))
df['Training Loss'] = df["Training Loss"].ffill()
df[['Validation Loss', 'F1']] = df[['Validation Loss', 'F1']].bfill().ffill()
df.drop_duplicates()
# hide_output
text_de = "Jeff Dean ist ein Informatiker bei Google in Kalifornien"
tag_text(text_de, tags, trainer.model, xlmr_tokenizer)
```
## Error Analysis
```
from torch.nn.functional import cross_entropy
def forward_pass_with_label(batch):
# Convert dict of lists to list of dicts suitable for data collator
features = [dict(zip(batch, t)) for t in zip(*batch.values())]
# Pad inputs and labels and put all tensors on device
batch = data_collator(features)
input_ids = batch["input_ids"].to(device)
attention_mask = batch["attention_mask"].to(device)
labels = batch["labels"].to(device)
with torch.no_grad():
# Pass data through model
output = trainer.model(input_ids, attention_mask)
# Logit.size: [batch_size, sequence_length, classes]
# Predict class with largest logit value on classes axis
predicted_label = torch.argmax(output.logits, axis=-1).cpu().numpy()
# Calculate loss per token after flattening batch dimension with view
loss = cross_entropy(output.logits.view(-1, 7),
labels.view(-1), reduction="none")
# Unflatten batch dimension and convert to numpy array
loss = loss.view(len(input_ids), -1).cpu().numpy()
return {"loss":loss, "predicted_label": predicted_label}
# hide_output
valid_set = panx_de_encoded["validation"]
valid_set = valid_set.map(forward_pass_with_label, batched=True, batch_size=32)
df = valid_set.to_pandas()
# hide_output
index2tag[-100] = "IGN"
df["input_tokens"] = df["input_ids"].apply(
lambda x: xlmr_tokenizer.convert_ids_to_tokens(x))
df["predicted_label"] = df["predicted_label"].apply(
lambda x: [index2tag[i] for i in x])
df["labels"] = df["labels"].apply(
lambda x: [index2tag[i] for i in x])
df['loss'] = df.apply(
lambda x: x['loss'][:len(x['input_ids'])], axis=1)
df['predicted_label'] = df.apply(
lambda x: x['predicted_label'][:len(x['input_ids'])], axis=1)
df.head(1)
# hide_output
df_tokens = df.apply(pd.Series.explode)
df_tokens = df_tokens.query("labels != 'IGN'")
df_tokens["loss"] = df_tokens["loss"].astype(float).round(2)
df_tokens.head(7)
(
df_tokens.groupby("input_tokens")[["loss"]]
.agg(["count", "mean", "sum"])
.droplevel(level=0, axis=1) # Get rid of multi-level columns
.sort_values(by="sum", ascending=False)
.reset_index()
.round(2)
.head(10)
.T
)
(
df_tokens.groupby("labels")[["loss"]]
.agg(["count", "mean", "sum"])
.droplevel(level=0, axis=1)
.sort_values(by="mean", ascending=False)
.reset_index()
.round(2)
.T
)
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix
def plot_confusion_matrix(y_preds, y_true, labels):
cm = confusion_matrix(y_true, y_preds, normalize="true")
fig, ax = plt.subplots(figsize=(6, 6))
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
disp.plot(cmap="Blues", values_format=".2f", ax=ax, colorbar=False)
plt.title("Normalized confusion matrix")
plt.show()
plot_confusion_matrix(df_tokens["labels"], df_tokens["predicted_label"],
tags.names)
# hide_output
def get_samples(df):
for _, row in df.iterrows():
labels, preds, tokens, losses = [], [], [], []
        for i, mask in enumerate(row["attention_mask"]):
            # skip the first and last special tokens
            if i not in {0, len(row["attention_mask"]) - 1}:
labels.append(row["labels"][i])
preds.append(row["predicted_label"][i])
tokens.append(row["input_tokens"][i])
losses.append(f"{row['loss'][i]:.2f}")
df_tmp = pd.DataFrame({"tokens": tokens, "labels": labels,
"preds": preds, "losses": losses}).T
yield df_tmp
df["total_loss"] = df["loss"].apply(sum)
df_tmp = df.sort_values(by="total_loss", ascending=False).head(3)
for sample in get_samples(df_tmp):
display(sample)
# hide_output
df_tmp = df.loc[df["input_tokens"].apply(lambda x: u"\u2581(" in x)].head(2)
for sample in get_samples(df_tmp):
display(sample)
```
## Cross-Lingual Transfer
```
def get_f1_score(trainer, dataset):
return trainer.predict(dataset).metrics["test_f1"]
f1_scores = defaultdict(dict)
f1_scores["de"]["de"] = get_f1_score(trainer, panx_de_encoded["test"])
print(f"F1-score of [de] model on [de] dataset: {f1_scores['de']['de']:.3f}")
text_fr = "Jeff Dean est informaticien chez Google en Californie"
tag_text(text_fr, tags, trainer.model, xlmr_tokenizer)
def evaluate_lang_performance(lang, trainer):
panx_ds = encode_panx_dataset(panx_ch[lang])
return get_f1_score(trainer, panx_ds["test"])
# hide_output
f1_scores["de"]["fr"] = evaluate_lang_performance("fr", trainer)
print(f"F1-score of [de] model on [fr] dataset: {f1_scores['de']['fr']:.3f}")
# hide_input
print(f"F1-score of [de] model on [fr] dataset: {f1_scores['de']['fr']:.3f}")
# hide_output
f1_scores["de"]["it"] = evaluate_lang_performance("it", trainer)
print(f"F1-score of [de] model on [it] dataset: {f1_scores['de']['it']:.3f}")
# hide_input
print(f"F1-score of [de] model on [it] dataset: {f1_scores['de']['it']:.3f}")
#hide_output
f1_scores["de"]["en"] = evaluate_lang_performance("en", trainer)
print(f"F1-score of [de] model on [en] dataset: {f1_scores['de']['en']:.3f}")
#hide_input
print(f"F1-score of [de] model on [en] dataset: {f1_scores['de']['en']:.3f}")
```
### When Does Zero-Shot Transfer Make Sense?
```
def train_on_subset(dataset, num_samples):
train_ds = dataset["train"].shuffle(seed=42).select(range(num_samples))
valid_ds = dataset["validation"]
test_ds = dataset["test"]
training_args.logging_steps = len(train_ds) // batch_size
trainer = Trainer(model_init=model_init, args=training_args,
data_collator=data_collator, compute_metrics=compute_metrics,
train_dataset=train_ds, eval_dataset=valid_ds, tokenizer=xlmr_tokenizer)
trainer.train()
if training_args.push_to_hub:
trainer.push_to_hub(commit_message="Training completed!")
f1_score = get_f1_score(trainer, test_ds)
return pd.DataFrame.from_dict(
{"num_samples": [len(train_ds)], "f1_score": [f1_score]})
# hide_output
panx_fr_encoded = encode_panx_dataset(panx_ch["fr"])
# hide_output
training_args.push_to_hub = False
metrics_df = train_on_subset(panx_fr_encoded, 250)
metrics_df
#hide_input
# Hack needed to exclude the progress bars in the above cell
metrics_df
# hide_output
for num_samples in [500, 1000, 2000, 4000]:
    # DataFrame.append was removed in pandas 2.0; use pd.concat instead
    metrics_df = pd.concat(
        [metrics_df, train_on_subset(panx_fr_encoded, num_samples)],
        ignore_index=True)
fig, ax = plt.subplots()
ax.axhline(f1_scores["de"]["fr"], ls="--", color="r")
metrics_df.set_index("num_samples").plot(ax=ax)
plt.legend(["Zero-shot from de", "Fine-tuned on fr"], loc="lower right")
plt.ylim((0, 1))
plt.xlabel("Number of Training Samples")
plt.ylabel("F1 Score")
plt.show()
```
### Fine-Tuning on Multiple Languages at Once
```
from datasets import concatenate_datasets
def concatenate_splits(corpora):
multi_corpus = DatasetDict()
for split in corpora[0].keys():
multi_corpus[split] = concatenate_datasets(
[corpus[split] for corpus in corpora]).shuffle(seed=42)
return multi_corpus
panx_de_fr_encoded = concatenate_splits([panx_de_encoded, panx_fr_encoded])
# hide_output
training_args.logging_steps = len(panx_de_fr_encoded["train"]) // batch_size
training_args.push_to_hub = True
training_args.output_dir = "xlm-roberta-base-finetuned-panx-de-fr"
trainer = Trainer(model_init=model_init, args=training_args,
data_collator=data_collator, compute_metrics=compute_metrics,
tokenizer=xlmr_tokenizer, train_dataset=panx_de_fr_encoded["train"],
eval_dataset=panx_de_fr_encoded["validation"])
trainer.train()
trainer.push_to_hub(commit_message="Training completed!")
#hide_output
for lang in langs:
f1 = evaluate_lang_performance(lang, trainer)
print(f"F1-score of [de-fr] model on [{lang}] dataset: {f1:.3f}")
#hide_input
for lang in langs:
f1 = evaluate_lang_performance(lang, trainer)
print(f"F1-score of [de-fr] model on [{lang}] dataset: {f1:.3f}")
# hide_output
corpora = [panx_de_encoded]
# Exclude German from iteration
for lang in langs[1:]:
training_args.output_dir = f"xlm-roberta-base-finetuned-panx-{lang}"
# Fine-tune on monolingual corpus
ds_encoded = encode_panx_dataset(panx_ch[lang])
metrics = train_on_subset(ds_encoded, ds_encoded["train"].num_rows)
# Collect F1-scores in common dict
f1_scores[lang][lang] = metrics["f1_score"][0]
# Add monolingual corpus to list of corpora to concatenate
corpora.append(ds_encoded)
corpora_encoded = concatenate_splits(corpora)
# hide_output
training_args.logging_steps = len(corpora_encoded["train"]) // batch_size
training_args.output_dir = "xlm-roberta-base-finetuned-panx-all"
trainer = Trainer(model_init=model_init, args=training_args,
data_collator=data_collator, compute_metrics=compute_metrics,
tokenizer=xlmr_tokenizer, train_dataset=corpora_encoded["train"],
eval_dataset=corpora_encoded["validation"])
trainer.train()
trainer.push_to_hub(commit_message="Training completed!")
# hide_output
for idx, lang in enumerate(langs):
f1_scores["all"][lang] = get_f1_score(trainer, corpora[idx]["test"])
scores_data = {"de": f1_scores["de"],
"each": {lang: f1_scores[lang][lang] for lang in langs},
"all": f1_scores["all"]}
f1_scores_df = pd.DataFrame(scores_data).T.round(4)
f1_scores_df.rename_axis(index="Fine-tune on", columns="Evaluated on",
inplace=True)
f1_scores_df
```
## Interacting with Model Widgets
<img alt="A Hub widget" caption="Example of a widget on the Hugging Face Hub" src="images/chapter04_ner-widget.png" id="ner-widget"/>
## Conclusion
| github_jupyter |
# Lesson 1 - FastAI
## New to ML? Don't know where to start?
Machine learning may seem complex at first, given the math, background knowledge, and code involved. However, if you truly want to learn, the best place to start is by building and experimenting with a model. fastai makes it super easy to create and modify models to best solve your problem! Don't worry too much if you don't understand everything yet; we will get there.
## Our First Model
As I said above, the best way to learn is by actually creating your first model.
```
from fastai.vision.all import *  # import the fastai vision library
path = untar_data(URLs.PETS)/'images'  # download and extract the pet dataset
def is_cat(x): return x[0].isupper()  # label function: in this dataset, cat filenames begin with an uppercase letter
# Create the DataLoaders (training and validation split) and pair images with labels
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)  # create the learner (architecture + data)
learn.fine_tune(1)  # train (fine-tune the pretrained weights)
```
>Look at that, we created our first model, all with a few lines of code.
### Why don't we test our cat classifier?
```
img = PILImage.create(image_cat())
img.to_thumb(192)
import ipywidgets as widgets  # widget for uploading your own image
uploader = widgets.FileUpload()
uploader
img = PILImage.create(uploader.data[0])
img.to_thumb(192)
is_cat,_,probs = learn.predict(img)
print(f"Is this a cat?: {is_cat}.")
print(f"Probability it's a cat: {probs[1].item():.6f}")
```
>Fantastic, you can now classify cats!
## Deep Learning Is Not Just for Image Classification
Often people think machine learning models are used only for images; this is **not** true at all!
Below you will see numerous other types of models, each with its own benefits!
### A segmentation model
```
path = untar_data(URLs.CAMVID_TINY)
dls = SegmentationDataLoaders.from_label_func( #Segmentation
path, bs=8, fnames = get_image_files(path/"images"),
label_func = lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',
codes = np.loadtxt(path/'codes.txt', dtype=str)
)
learn = unet_learner(dls, resnet34)
learn.fine_tune(8)
learn.show_results(max_n=2, figsize=(10,12))
```
### A natural language model
```
from fastai.text.all import *
dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(2, 1e-2)
learn.predict("I really liked that movie!")
```
### A salary prediction model (Regression)
```
from fastai.tabular.all import *
path = untar_data(URLs.ADULT_SAMPLE)
dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names="salary",
cat_names = ['workclass', 'education', 'marital-status', 'occupation',
'relationship', 'race'],
cont_names = ['age', 'fnlwgt', 'education-num'],
procs = [Categorify, FillMissing, Normalize])
learn = tabular_learner(dls, metrics=accuracy)
learn.fit_one_cycle(3)
```
### A recommendation model (collaborative filtering, framed as regression)
```
from fastai.collab import *
path = untar_data(URLs.ML_SAMPLE)
dls = CollabDataLoaders.from_csv(path/'ratings.csv')
learn = collab_learner(dls, y_range=(0.5,5.5))
learn.fine_tune(10)
learn.show_results()
```
# Conclusion
I hope you feel more comfortable with machine learning and recognize the many ways it can benefit you :)
## Questionnaire
1. **Do you need these for deep learning?**
- Lots of math T / **F**
- Lots of data T / **F**
- Lots of expensive computers T / **F**
- A PhD T / **F**
1. **Name five areas where deep learning is now the best in the world.**
Vision, Natural language processing, Medicine, Robotics, and Games
1. **What was the name of the first device that was based on the principle of the artificial neuron?**
Mark I Perceptron
1. **Based on the book of the same name, what are the requirements for parallel distributed processing (PDP)?**
Processing units, State of activation, Output function, Pattern of connectivity, Propagation rule, Activation rule, Learning rule, Environment
1. **What were the two theoretical misunderstandings that held back the field of neural networks?**
A single-layer network is unable to learn simple mathematical functions.
Adding more layers was thought to make networks too big and slow to be useful.
1. **What is a GPU?**
A graphics processing unit (GPU) is a processor that can handle thousands of tasks at the same time, which makes it particularly great for deep learning.
1. **Open a notebook and execute a cell containing: `1+1`. What happens?**
2
1. **Follow through each cell of the stripped version of the notebook for this chapter. Before executing each cell, guess what will happen.**
1. **Complete the Jupyter Notebook online appendix.**
1. **Why is it hard to use a traditional computer program to recognize images in a photo?**
They are missing the weight assignment needed to recognize patterns within images to accomplish the task.
1. **What did Samuel mean by "weight assignment"?**
The weight is another form of input that has direct influence on the model's performance.
1. **What term do we normally use in deep learning for what Samuel called "weights"?**
Parameters
1. **Draw a picture that summarizes Samuel's view of a machine learning model.**
https://vikramriyer.github.io/assets/images/machine_learning/fastai/model.jpeg
1. **Why is it hard to understand why a deep learning model makes a particular prediction?**
There are many layers, each with numerous neurons, so it quickly becomes complex to work out what each neuron is looking for when viewing an image and how that impacts the prediction.
1. **What is the name of the theorem that shows that a neural network can solve any mathematical problem to any level of accuracy?**
Universal approximation theorem
1. **What do you need in order to train a model?**
Data with labels
1. **How could a feedback loop impact the rollout of a predictive policing model?**
The more the model is used, the more biased the data becomes, and therefore the more biased the model becomes.
1. **Do we always have to use 224×224-pixel images with the cat recognition model?**
No.
1. **What is the difference between classification and regression?**
Classification is about categorizing/labeling objects.
Regression is about predicting numerical quantities, such as temperature.
1. **What is a validation set? What is a test set? Why do we need them?**
The validation set measures the accuracy of the model during training.
The test set is used during the final evaluation to test the accuracy of the model.
We need both of them because the validation set could introduce some bias: we fit the model toward it during training. The test set removes this bias by evaluating the model on completely unseen data, thereby giving an accurate measure of accuracy.
1. **What will fastai do if you don't provide a validation set?**
Fastai will automatically create a validation dataset for us.
1. **Can we always use a random sample for a validation set? Why or why not?**
No. A random sample is not recommended where order is necessary, for example data ordered by time.
1. **What is overfitting? Provide an example.**
This is when the model begins to fit the training data itself rather than generalizing to similar unseen data. For example, a model that does amazingly on the training data but performs poorly on the test data is a good indication that it may have overfitted.
1. **What is a metric? How does it differ from "loss"?**
The loss is the value the training process uses to update the model's parameters, so it must be something the optimizer can measure and minimize. The metric gives us, humans, an overall value of how accurate the model is: a value we use to understand the model's performance.
1. **How can pretrained models help?**
A pretrained model already has the fundamentals. Therefore, it can use this prior knowledge to learn faster and perform better on similar datasets.
1. **What is the "head" of a model?**
The final layers from the pretrained model that have been replaced with new layers (w/ randomized weights) to better align with our dataset. These final layers are often the only thing trained while the rest of the model is frozen.
1. **What kinds of features do the early layers of a CNN find? How about the later layers?**
The early layers often extract simple features like edges.
The later layers are more complex and can identify advanced features like faces.
1. **Are image models only useful for photos?**
No. Lots of other forms of data can be converted into images that can be used to solve such non-photo data problems.
1. **What is an "architecture"?**
This is the structure of the model we use to solve the problem.
1. **What is segmentation?**
A method of labeling every pixel in an image according to the object it belongs to, producing a mask.
1. **What is `y_range` used for? When do we need it?**
Specifies the range of values that can be predicted by the model. For example, movie ratings from 0 to 5.
1. **What are "hyperparameters"?**
These are the parameters that control the training process and that we can adjust to help the model perform better (e.g., number of epochs, learning rate).
1. **What's the best way to avoid failures when using AI in an organization?**
Begin with the simplest model and then slowly build up to more complexity. This way you always have something working and don't get lost as you add onto the model.
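The validation/test-set answers above can be sketched in a few lines of plain Python (the 60/20/20 ratios are just an illustrative choice):

```python
import random

random.seed(42)            # reproducible shuffle
data = list(range(100))    # stand-in for 100 labeled examples
random.shuffle(data)

# Illustrative 60/20/20 split: train / validation / test
n_train, n_valid = int(0.6 * len(data)), int(0.2 * len(data))
train = data[:n_train]                    # used to fit the model
valid = data[n_train:n_train + n_valid]   # used to tune it during training
test = data[n_train + n_valid:]           # touched only once, at the end

print(len(train), len(valid), len(test))  # → 60 20 20
```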
### Further Research
Each chapter also has a "Further Research" section that poses questions that aren't fully answered in the text, or gives more advanced assignments. Answers to these questions aren't on the book's website; you'll need to do your own research!
1. **Why is a GPU useful for deep learning? How is a CPU different, and why is it less effective for deep learning?** <br/>
Modern GPUs provide far greater parallel processing power, memory bandwidth, and efficiency than CPUs for the matrix operations deep learning relies on; a CPU is optimized for fast sequential, low-latency work, which makes it less effective for these massively parallel workloads.
1. **Try to think of three areas where feedback loops might impact the use of machine learning. See if you can find documented examples of that happening in practice.** <br/>
I believe feedback loops matter most for recommendation models, because the loop creates a biased model. For example, if a viewer likes a movie, he or she will likely enjoy similar movies; being biased toward particular types of movies is the best way to keep the viewer engaged.
# Optical Flow
Optical flow tracks objects by looking at where the *same* points have moved from one image frame to the next. Let's load in a few example frames of a pacman-like face moving to the right and down and see how optical flow finds **motion vectors** that describe the motion of the face!
As usual, let's first import our resources and read in the images.
```
import numpy as np
import matplotlib.image as mpimg # for reading in images
import matplotlib.pyplot as plt
import cv2 # computer vision library
%matplotlib inline
# Read in the image frames
frame_1 = cv2.imread('images/pacman_1.png')
frame_2 = cv2.imread('images/pacman_2.png')
frame_3 = cv2.imread('images/pacman_3.png')
# convert to RGB
frame_1 = cv2.cvtColor(frame_1, cv2.COLOR_BGR2RGB)
frame_2 = cv2.cvtColor(frame_2, cv2.COLOR_BGR2RGB)
frame_3 = cv2.cvtColor(frame_3, cv2.COLOR_BGR2RGB)
# Visualize the three image frames
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))
ax1.set_title('frame 1')
ax1.imshow(frame_1)
ax2.set_title('frame 2')
ax2.imshow(frame_2)
ax3.set_title('frame 3')
ax3.imshow(frame_3)
```
## Finding Points to Track
Before optical flow can work, we have to give it a set of *keypoints* to track between two image frames!
In the below example, we use a **Shi-Tomasi corner detector**, which uses the same process as a Harris corner detector to find patterns of intensity that make up a "corner" in an image, only it adds an additional parameter that helps select the most prominent corners. You can read more about this detection algorithm in [the documentation](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_shi_tomasi/py_shi_tomasi.html).
Alternatively, you could choose to use Harris or even ORB to find feature points. I just found that this works well.
**You should see that the detected points appear at the corners of the face.**
```
# parameters for ShiTomasi corner detection
feature_params = dict( maxCorners = 10,
qualityLevel = 0.2,
minDistance = 5,
blockSize = 5 )
# convert all frames to grayscale
gray_1 = cv2.cvtColor(frame_1, cv2.COLOR_RGB2GRAY)
gray_2 = cv2.cvtColor(frame_2, cv2.COLOR_RGB2GRAY)
gray_3 = cv2.cvtColor(frame_3, cv2.COLOR_RGB2GRAY)
# Take first frame and find corner points in it
pts_1 = cv2.goodFeaturesToTrack(gray_1, mask = None, **feature_params)
# display the detected points
plt.imshow(frame_1)
for p in pts_1:
# plot x and y detected points
plt.plot(p[0][0], p[0][1], 'r.', markersize=15)
# print out the x-y locations of the detected points
print(pts_1)
```
## Perform Optical Flow
Once we've detected keypoints on our initial image of interest, we can calculate the optical flow between this image frame (frame 1) and the next frame (frame 2) using OpenCV's `calcOpticalFlowPyrLK`, which is [documented here](https://docs.opencv.org/trunk/dc/d6b/group__video__track.html#ga473e4b886d0bcc6b65831eb88ed93323). It takes in an initial image frame, the next image, and the first set of points, and it returns the detected points in the next frame along with a status value that indicates how good the matches are between points from one frame to the next.
The parameters also include a window size and a maxLevel that indicate the size of the search window and the number of pyramid levels that will be used to scale the given images; this version performs an iterative search for matching points, and the matching criteria are reflected in the last parameter (you may need to change these values if you are working with a different image, but these should work for the provided example).
```
# parameters for lucas kanade optical flow
lk_params = dict( winSize = (5,5),
maxLevel = 5,
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
# calculate optical flow between first and second frame
pts_2, match, err = cv2.calcOpticalFlowPyrLK(gray_1, gray_2, pts_1, None, **lk_params)
# Select good matching points between the two image frames
good_new = pts_2[match==1]
good_old = pts_1[match==1]
```
Next, let's display the resulting motion vectors! You should see the first image with motion vectors drawn on it that indicate the direction of motion from the first frame to the next.
```
# create a mask image for drawing (u,v) vectors on top of the second frame
mask = np.zeros_like(frame_2)
# draw the lines between the matching points (these lines indicate motion vectors)
for i, (new, old) in enumerate(zip(good_new, good_old)):
    # cast to int: OpenCV drawing functions require integer pixel coordinates
    a, b = new.ravel().astype(int)
    c, d = old.ravel().astype(int)
    # draw points on the mask image
    mask = cv2.circle(mask, (a, b), 8, (200), -1)
    # draw motion vector as lines on the mask image
    mask = cv2.line(mask, (a, b), (c, d), (200), 3)
# add the line image and second frame together
composite_im = np.copy(frame_2)
composite_im[mask!=0] = [0]
plt.imshow(composite_im)
```
### Perform Optical Flow between image frames 2 and 3
Repeat this process but for the last two image frames; see what the resulting motion vectors look like. Imagine doing this for a series of image frames and plotting the entire-motion-path of a given object.
```
# Take the second frame and find corner points in it
pts_1 = cv2.goodFeaturesToTrack(gray_2, mask = None, **feature_params)
# display the detected points
plt.imshow(frame_2)
for p in pts_1:
# plot x and y detected points
plt.plot(p[0][0], p[0][1], 'r.', markersize=15)
# parameters for lucas kanade optical flow
lk_params = dict( winSize = (5,5),
maxLevel = 5,
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
# calculate optical flow between the second and third frames
pts_2, match, err = cv2.calcOpticalFlowPyrLK(gray_2, gray_3, pts_1, None, **lk_params)
print(match)
# Select good matching points between the two image frames
good_new = pts_2[match==1]
good_old = pts_1[match==1]
## Perform optical flow between image frames 2 and 3
# create a mask image for drawing (u,v) vectors on top of the second frame
mask = np.zeros_like(frame_3)
# draw the lines between the matching points (these lines indicate motion vectors)
for i, (new, old) in enumerate(zip(good_new, good_old)):
    # cast to int: OpenCV drawing functions require integer pixel coordinates
    a, b = new.ravel().astype(int)
    c, d = old.ravel().astype(int)
    # draw points on the mask image
    mask = cv2.circle(mask, (a, b), 8, (200), -1)
    # draw motion vector as lines on the mask image
    mask = cv2.line(mask, (a, b), (c, d), (200), 3)
# add the line image and second frame together
composite_im = np.copy(frame_3)
composite_im[mask!=0] = [0]
plt.imshow(composite_im)
```
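The "entire motion path" idea can be sketched in plain Python: each optical-flow step yields a displacement vector, and chaining the displacements recovers the object's path across the sequence (the starting point and `(dx, dy)` values below are made up for illustration):

```python
# Sketch: chaining per-frame motion vectors to recover an object's path.
# Each (dx, dy) is the kind of displacement optical flow gives us
# between two consecutive frames.
displacements = [(5, 2), (6, 3), (4, 4)]  # hypothetical per-frame-pair motion

path = [(10, 10)]  # hypothetical keypoint location in frame 1
for dx, dy in displacements:
    x, y = path[-1]
    path.append((x + dx, y + dy))  # accumulate motion frame by frame

print(path)  # → [(10, 10), (15, 12), (21, 15), (25, 19)]
```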
# 1. Import Libraries
```
from keras.datasets import cifar10
import numpy as np
np.random.seed(10)
```
# Data Preparation
```
(x_img_train,y_label_train),(x_img_test,y_label_test)=cifar10.load_data()
print("train data:",'images:',x_img_train.shape,
" labels:",y_label_train.shape)
print("test data:",'images:',x_img_test.shape ,
" labels:",y_label_test.shape)
x_img_train_normalize = x_img_train.astype('float32') / 255.0
x_img_test_normalize = x_img_test.astype('float32') / 255.0
from keras.utils import np_utils
y_label_train_OneHot = np_utils.to_categorical(y_label_train)
y_label_test_OneHot = np_utils.to_categorical(y_label_test)
y_label_test_OneHot.shape
```
# Build the Model
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D
model = Sequential()
# Convolution layer 1
model.add(Conv2D(filters=32,kernel_size=(3,3),
input_shape=(32, 32,3),
activation='relu',
padding='same'))
model.add(Dropout(rate=0.25))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Convolution layer 2 and pooling layer 2
model.add(Conv2D(filters=64, kernel_size=(3, 3),
activation='relu', padding='same'))
model.add(Dropout(0.25))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Step 3: build the classifier (flatten, hidden, and output layers)
model.add(Flatten())
model.add(Dropout(rate=0.25))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(rate=0.25))
model.add(Dense(10, activation='softmax'))
print(model.summary())
```
# Load a Previously Trained Model
```
try:
    model.load_weights("SaveModel/CifarModelCnn_v1.h5")
    print("Model loaded successfully! Continuing training.")
except Exception:
    print("Failed to load a saved model! Training a new model from scratch.")
```
# Train the Model
```
model.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['accuracy'])
train_history=model.fit(x_img_train_normalize, y_label_train_OneHot,
validation_split=0.2,
epochs=10, batch_size=128, verbose=1)
import matplotlib.pyplot as plt
def show_train_history(train_metric, val_metric):
    plt.plot(train_history.history[train_metric])
    plt.plot(train_history.history[val_metric])
    plt.title('Train History')
    plt.ylabel(train_metric)  # label follows the plotted metric (the original always said 'Accuracy', even for loss)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()
show_train_history('acc','val_acc')
show_train_history('loss','val_loss')
```
# Evaluate Model Accuracy
```
scores = model.evaluate(x_img_test_normalize,
y_label_test_OneHot, verbose=0)
scores[1]
```
# Make Predictions
```
prediction=model.predict_classes(x_img_test_normalize)
prediction[:10]
```
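Note that `predict_classes` was removed from `Sequential` in later Keras releases; the equivalent using `predict` plus `argmax` is sketched below with a stand-in model object (not a real Keras model) just to show the shape of the call:

```python
import numpy as np

def predict_classes(model, x):
    # equivalent of the removed Sequential.predict_classes for a
    # softmax output layer: index of the largest predicted probability
    return np.argmax(model.predict(x), axis=-1)

# tiny stand-in model for illustration only
class FakeModel:
    def predict(self, x):
        return np.array([[0.1, 0.7, 0.2],
                         [0.8, 0.1, 0.1]])

preds = predict_classes(FakeModel(), None)
print(preds)  # [1 0]
```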
# View Prediction Results
```
label_dict={0:"airplane",1:"automobile",2:"bird",3:"cat",4:"deer",
5:"dog",6:"frog",7:"horse",8:"ship",9:"truck"}
import matplotlib.pyplot as plt
def plot_images_labels_prediction(images,labels,prediction,
idx,num=10):
fig = plt.gcf()
fig.set_size_inches(12, 14)
if num>25: num=25
    for i in range(0, num):
        ax = plt.subplot(5, 5, 1+i)
        ax.imshow(images[idx], cmap='binary')
        # index with idx (not i) so the label and prediction match the displayed image
        title = str(idx) + ',' + label_dict[labels[idx][0]]
        if len(prediction) > 0:
            title += '=>' + label_dict[prediction[idx]]
        ax.set_title(title, fontsize=10)
        ax.set_xticks([]); ax.set_yticks([])
        idx += 1
plt.show()
plot_images_labels_prediction(x_img_test,y_label_test,
prediction,0,10)
```
# View Prediction Probabilities
```
Predicted_Probability=model.predict(x_img_test_normalize)
def show_Predicted_Probability(y,prediction,
x_img,Predicted_Probability,i):
print('label:',label_dict[y[i][0]],
'predict:',label_dict[prediction[i]])
plt.figure(figsize=(2,2))
plt.imshow(np.reshape(x_img_test[i],(32, 32,3)))
plt.show()
for j in range(10):
print(label_dict[j]+
' Probability:%1.9f'%(Predicted_Probability[i][j]))
show_Predicted_Probability(y_label_test,prediction,
x_img_test,Predicted_Probability,0)
show_Predicted_Probability(y_label_test,prediction,
x_img_test,Predicted_Probability,3)
```
# Confusion Matrix
```
prediction.shape
y_label_test.shape
y_label_test
y_label_test.reshape(-1)
import pandas as pd
print(label_dict)
pd.crosstab(y_label_test.reshape(-1),prediction,
rownames=['label'],colnames=['predict'])
print(label_dict)
```
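The crosstab above can also be turned into a per-class accuracy (diagonal over row sums). A small NumPy sketch, assuming integer label/prediction arrays like those used here:

```python
import numpy as np

def per_class_accuracy(labels, preds, n_classes=10):
    # build the confusion matrix by counting (true, predicted) pairs,
    # then divide its diagonal by each row's total
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(labels, preds):
        cm[t, p] += 1
    return np.diag(cm) / cm.sum(axis=1)

labels = np.array([0, 0, 1, 1, 1])
preds = np.array([0, 1, 1, 1, 0])
acc = per_class_accuracy(labels, preds, n_classes=2)
print(acc)  # [0.5        0.66666667]
```

On the real data this would take `y_label_test.reshape(-1)` and `prediction`.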
# Save model to JSON
```
model_json = model.to_json()
with open("SaveModel/CifarModelCnn_v1.json", "w") as json_file:
json_file.write(model_json)
```
# Save Model to YAML
```
model_yaml = model.to_yaml()
with open("SaveModel/CifarModelCnn_v1.yaml", "w") as yaml_file:
yaml_file.write(model_yaml)
```
# Save Weights to h5
```
model.save_weights("SaveModel/CifarModelCnn_v1.h5")
print("Saved model to disk")
for layer in model.layers:
lay_config = layer.get_config()
lay_weights = layer.get_weights()
print('*** layer config ***')
print(lay_config)
print('*** layer weights ***')
print(lay_weights)
layer = model.layers[0]
lay_config = layer.get_config()
lay_weights = layer.get_weights()
print('*** cifar-10 layer config ***')
print(lay_config)
print('*** cifar-10 layer weights ***')
print(lay_weights)
layer = model.layers[3]
lay_config = layer.get_config()
lay_weights = layer.get_weights()
print('*** cifar-10 layer-2 config ***')
print(lay_config)
print('*** cifar-10 layer-2 weights ***')
print(lay_weights)
layidx = 0
params_list = model.layers[0].get_weights()
weights_array = params_list[0]
biases_array = params_list[1]
images = weights_array.transpose(2, 3, 0, 1)  # (in_channels, filters, kh, kw); a raw reshape would scramble the kernels
plotpos = 1
for idx in range(32):
plt.subplot(1, 32, plotpos)
plt.imshow(images[0][idx])
plt.gray()
plt.axis('off')
plotpos += 1
plt.show()
layidx = 3
params_list = model.layers[layidx].get_weights()
weights_array = params_list[0]
images = weights_array.transpose(2, 3, 0, 1)  # (in_channels, filters, kh, kw); a raw reshape would scramble the kernels
plotpos = 1
for idx1 in range(32):
for idx2 in range(64):
        # display each filter kernel as an image
plt.subplot(32, 64, plotpos)
plt.imshow(images[idx1][idx2])
plt.gray()
plt.axis('off')
plotpos += 1
plt.show()
layidx = 8
params_list = model.layers[layidx].get_weights()
weights_array = params_list[0]
biases_array = params_list[1]
image = weights_array.T  # (1024, 4096): transpose keeps each row as one unit's incoming weights
plt.imshow(image)
plt.gray()
plt.axis('off')
plt.show()
layidx = 10
params_list = model.layers[layidx].get_weights()
weights_array = params_list[0]
biases_array = params_list[1]
image = weights_array.T  # (10, 1024): one row per output class
plt.imshow(image)
plt.gray()
plt.axis('off')
plt.show()
import codecs, json
print("output weights as JSON")
filename = "SaveModel/CifarModelCnnParamsCnn_v1_layer%02d.json"
# dump the four parameterised layers in one loop instead of four repeated stanzas
# (also avoids shadowing the built-in `dict` and closes the file handle)
for layidx in [0, 3, 8, 10]:
    params_list = model.layers[layidx].get_weights()
    weights_array = params_list[0]
    biases_array = params_list[1]
    params = {'weights': weights_array.tolist(),
              'biases': biases_array.tolist()}
    file_path = filename % layidx
    with codecs.open(file_path, 'w', encoding='utf-8') as f:
        json.dump(params, f,
                  separators=(',', ':'),
                  sort_keys=False,
                  indent=4)
print("done")
#from keras.models import load_model
#model2 = load_model('./SaveModel/cifarCnnModelnew.h5')
# retrain the model
train_history=model.fit(x_img_train_normalize, y_label_train_OneHot,
validation_split=0.2,
epochs=100, batch_size=128, verbose=1)
show_train_history('acc','val_acc')
scores = model.evaluate(x_img_test_normalize,
y_label_test_OneHot, verbose=0)
scores[1]
Predicted_Probability=model.predict(x_img_test_normalize)
def show_Predicted_Probability(y,prediction,
x_img,Predicted_Probability,i):
print('label:',label_dict[y[i][0]],
'predict:',label_dict[prediction[i]])
plt.figure(figsize=(2,2))
plt.imshow(np.reshape(x_img_test[i],(32, 32,3)))
plt.show()
for j in range(10):
print(label_dict[j]+
' Probability:%1.9f'%(Predicted_Probability[i][j]))
show_Predicted_Probability(y_label_test,prediction,
x_img_test,Predicted_Probability,100)
import pandas as pd
print(label_dict)
pd.crosstab(y_label_test.reshape(-1),prediction,
rownames=['label'],colnames=['predict'])
model_json = model.to_json()
with open("SaveModel/CifarModelCnn_v2.json", "w") as json_file:
json_file.write(model_json)
model_yaml = model.to_yaml()
with open("SaveModel/CifarModelCnn_v2.yaml", "w") as yaml_file:
yaml_file.write(model_yaml)
model.save_weights("SaveModel/CifarModelCnn_v2.h5")
print("Saved model to disk")
for layer in model.layers:
lay_config = layer.get_config()
lay_weights = layer.get_weights()
print('*** layer config ***')
print(lay_config)
print('*** layer weights ***')
print(lay_weights)
import codecs, json
print("output weights as JSON")
filename = "SaveModel/CifarModelCnnParamsCnn_v2_layer%02d.json"
# dump the four parameterised layers in one loop instead of four repeated stanzas
for layidx in [0, 3, 8, 10]:
    params_list = model.layers[layidx].get_weights()
    weights_array = params_list[0]
    biases_array = params_list[1]
    params = {'weights': weights_array.tolist(),
              'biases': biases_array.tolist()}
    file_path = filename % layidx
    with codecs.open(file_path, 'w', encoding='utf-8') as f:
        json.dump(params, f,
                  separators=(',', ':'),
                  sort_keys=False,
                  indent=4)
print("done")
x_img_train[0]
x_img_train_normalize_test = x_img_train[0].astype('float32') / 255.0
x_img_train_normalize_test.shape
layidx = 0
params_list = model.layers[layidx].get_weights()
weights_array = params_list[0]
weights_array[0].shape
weights_array[0][0].shape
weights_array[0][0][0].shape
layidx = 0
params_list = model.layers[layidx].get_weights()
lay_config = model.layers[layidx].get_config()
weights_array = params_list[0]
biases_array = params_list[1]
print(weights_array)
weights_array.shape
print(lay_config)
layidx = 3
params_list = model.layers[layidx].get_weights()
lay_config = model.layers[layidx].get_config()
weights_array = params_list[0]
biases_array = params_list[1]
weights_array.shape
```
# Detecting and examining gender bias in the MIND dataset
The primary goal of this project is to build metrics of bias (here focusing on gender bias).
Author: <b>Jamell Dacon</b> (daconjam@msu.edu)
```
import pandas as pd
import sys
import matplotlib.pyplot as plt
%matplotlib inline
import nltk
from nltk.tokenize import sent_tokenize
#nltk.downloader.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
import warnings
warnings.filterwarnings('ignore')
```
## News
The news.tsv file contains the details of the news articles referenced in the behaviors.tsv file. It has 8 columns, separated by tabs.
```
df = pd.read_csv('news.tsv', header=None, sep='\t')
df.columns = ['News_ID','Category','SubCategory','Title','Abstract','URL','Title Entities','Abstract Entities']
df['Gender'] = 'N/A'
cols = df.columns.tolist()
cols = cols[-1:] + cols[:-1]
df = df[cols]
df = df.drop(['News_ID','SubCategory','Title','URL','Title Entities','Abstract Entities'], axis=1)
female = ['goddesses', 'niece', 'baroness', 'mother', 'duchesses', 'mom', 'belle', 'belles', 'mummies', 'policewoman',
'grandmother', 'landlady', 'landladies', 'nuns', 'stepdaughter', 'milkmaids', 'chairwomen', 'stewardesses',
'women', 'masseuses', 'daughter-in-law', 'priestesses', 'stewardess', 'empress', 'daughter', 'queens',
'proprietress', 'brides', 'lady', 'queen', 'matron', 'waitresses', 'mummy', 'empresses', 'madam',
'witches', 'sorceress', 'lass', 'milkmaid', 'granddaughter', 'grand-daughter', 'congresswomen','moms', 'manageress',
'princess', 'stepmothers', 'stepdaughters', 'girlfriend', 'shepherdess', 'females', 'grand-mothers', 'grandmothers',
'step-daughter', 'nieces', 'priestess', 'wife', 'mothers', 'usherette', 'postwoman', 'hinds', 'wives',
'murderess', 'hostess', 'girl', 'waitress', 'spinster', 'shepherdess', 'businesswomen', 'duchess', 'madams', 'mamas',
'nun', 'heiress', 'aunt', 'princesses', 'fiancee', 'mrs', 'ladies', 'mother-in-law', 'actress', 'actresses',
'postmistress', 'headmistress', 'heroines', 'bride', 'businesswoman', 'baronesses', 'sows', 'witch',
'daughters-in-law','aunts', 'huntress', 'lasses', 'mistress', 'mistresses', 'sister', 'hostesses', 'poetess',
'masseuse', 'heroine', 'goddess','grandma', 'grandmas', 'maidservant', 'heiresses', 'patroness',
'female', 'governesses', 'millionairess', 'congresswoman', 'dam', 'widow', 'granddaughters', 'grand-daughters', 'headmistresses',
'girls', 'she', 'policewomen', 'step-mother','stepmother', 'widows', 'abbess', 'mrs.', 'chairwoman', 'sisters',
'mama', 'woman','daughters', 'girlfriends', 'she’s', 'her', 'maid', 'countess', 'giantess', 'poetess', 'jewess',
'mayoress', 'peeress', 'negress', 'abbess', 'traitress', 'benefactress', 'instructress', 'conductress', 'founder',
'huntress', 'temptress', 'enchantress', 'songstress', 'murderess', 'murderesses', 'patronesses', 'authoress', 'czarina',
'spokeswoman', 'spokeswomen', 'ma', 'councilwoman', 'council-woman', 'councilwomen', 'council-women', 'mum', 'lesbian', 'lesbians', 'breast', 'breasts',
'maiden', 'maidens', 'sorority', 'sororities', 'saleswoman', 'dudette', 'maternal', 'feminist', 'feminists', 'sisterhood',
'housewife', 'housewives', 'stateswoman', 'stateswomen', 'countrywoman', 'countrywomen', 'chick', 'chicks', 'mommy',
'strongwoman', 'strongwomen', 'babe', 'babes', 'diva', 'divas', 'feminine', 'feminism', 'gal', 'gals', 'sistren', 'schoolgirl',
'schoolgirls', 'matriarch', 'matriarchy', 'motherhood', 'wifey', 'sis', 'femininity', 'ballerina', 'ballerinas', 'granny',
'grannies', 'mami', 'momma', "ma'am", 'gf', 'gfs', 'damsel', 'damsels', 'vixen', 'vixens', 'nan', 'nanny', 'nannies',
'auntie', 'womenfolk', 'sisterly', 'motherly', 'homegirl', 'homegirls', 'grand-neice', 'grand-neices',
'grandneice', 'grandneices', 'jane doe', 'noblewoman', 'noblewomen', 'dream girl', 'madame', 'herself', 'hers']
male = ['god', 'gods', 'nephew', 'nephews', 'him', 'baron', 'father', 'fathers', 'dukes', 'dad', 'beau', 'beaus', 'daddies',
'policeman', 'policemen', 'grandfather', 'landlord', 'landlords', 'monk', 'monks', 'step-son', 'step-sons',
'milkmen', 'chairmen', 'chairman', 'steward', 'men', 'masseurs', 'son-in-law', 'priest', 'king', 'governor',
'waiter', 'daddy', 'steward', 'emperor', 'son', 'proprietor', 'groom', 'grooms', 'gentleman', 'gentlemen', 'sir',
'wizards', 'sorcerer', 'lad','milk-man', 'grandson', 'grand-son','congressmen','dads', 'manager', 'prince', 'stepfathers',
'boyfriend', 'shepherd', 'shepherds', 'males', 'grandfathers', 'grand-fathers', 'husband', 'usher', 'postman','stags',
'husbands', 'host', 'boy', 'waiter', 'bachelor', 'bachelors', 'businessmen', 'duke', 'sirs', 'papas', 'heir', 'uncle',
'princes', 'fiance', 'mr', 'count', 'lords', 'father-in-law', 'actor', 'actors', 'postmaster', 'headmaster', 'heroes',
'businessman', 'boars','wizard', 'sons-in-law', 'fiances', 'uncles', 'hunter', 'lads', 'masters', 'brother',
'hosts', 'poet', 'hero', 'grandpa', 'grandpas','manservant', 'heirs', 'male', 'tutors', 'millionaire',
'congressman', 'sire', 'sires', 'widower','grandsons', 'grand-sons','boys', 'he', 'step-father', 'jew', 'bridegroom', 'bridegrooms',
'stepfather', 'widowers', 'abbot', 'mr.', 'brothers', 'man', 'sons', 'boyfriends', 'he’s', 'his', 'earl',
'giant', 'stepson', 'stepsons', 'poet', 'mayor', 'peer', 'negro', 'abbot', 'traitor', 'benefactor',
'instructor', 'conductor', 'founder', 'founders', 'hunters', 'huntresses', 'tempt', 'enchanter', 'enchanters', 'songster',
'songsters', 'murderer', 'murderers', 'patron', 'patrons', 'author', 'czar', 'guy', 'spokesman', 'spokesmen',
'pa', 'councilman', 'council-man', 'councilmen', 'council-men', 'gay', 'gays', 'prostate cancer', 'fraternity', 'fraternities', 'salesman', 'dude', 'dudes', 'paternal',
'brotherhood', 'statesman', 'statesmen', 'countryman', 'countrymen', 'suitor', 'macho', 'papa', 'strongman', 'strongmen',
'boyhood', 'manhood', 'masculine', 'macho', 'horsemen', 'brethren', 'chap','chaps', 'schoolboy', 'schoolboys', 'bloke',
'blokes', 'patriarch', 'patriachy', 'fatherhood', 'hubby', 'hubbies', 'fella', 'fellas', 'handyman', 'fraternal',
'bro', 'masculinity', 'ballerino', 'pappy', 'papi', 'pappies', 'dada', 'bf', 'bfs', 'knights', 'knight',
'menfolk', 'brotherly', 'manly', 'pimp', 'pimps', 'homeboy', 'homeboys','grandnephew', 'grand-nephew',
'grand-nephew', 'grand-nephews', 'john doe', 'nobleman', 'noblemen', 'dream boy', 'himself', 'gramps']
gender_words = len(female) + len(male)
print('The total no. of gender words are', gender_words)
print('There are {0} male words, and {1} female words'.format(len(male), len(female)))
len(df['Abstract'])
df.head(10)
df['Abstract'] = df['Abstract'].apply(str) # ensures the sentences are strings
df['Abstract'] = df['Abstract'].str.lower() # make the abstract lowercase
len(df['Abstract'])
df.shape
# Here we count the male and female gender words in each abstract and
# determine a specific gender tag per abstract
m_dict, f_dict = {}, {}
for index, line in enumerate(df['Abstract']):
words = line.split()
m, f = 0, 0 #counters for male and female words
for w in words:
if w in male: #checks into list of male words
m += 1
if w in m_dict: #counts the male words in the abstracts
m_dict[w] += 1
else:
m_dict[w] = 1
elif w in female: #checks into list of female words
f += 1
            if w in f_dict: # counts the female words in the abstracts
f_dict[w] += 1
else:
f_dict[w] = 1
# Here we determine the tags of the abstracts
if m > f:
df.loc[index, 'Gender'] = 'M'
elif m == f:
df.loc[index, 'Gender'] = 'Neutral'
elif f > m:
df.loc[index, 'Gender'] = 'F'
sorted(m_dict.items(), key=lambda x: x[1], reverse=True)
len(m_dict)
len(f_dict)
sorted(f_dict.items(), key=lambda x: x[1], reverse=True)
# Here we create sentiment columns
df['Neg'] = int(0)
df['Neu'] = int(0)
df['Pos'] = int(0)
analyzer = SentimentIntensityAnalyzer()
df['Abstract'] = df['Abstract'].apply(str) # ensures the sentences are strings
for index, line in enumerate(df['Abstract']):
# Tokenize each sentence
sent_text = nltk.sent_tokenize(line)
ps,ng,ne = 0,0,0 # positive, negative and neutrals counters
# Determine the sentiment of each sentence
for i, sentence in enumerate(sent_text):
comp = analyzer.polarity_scores(sentence)['compound']
if comp >= 0.05:
ps +=1
elif comp > -0.05 and comp < 0.05:
ne +=1
elif comp <= -0.05:
ng += 1
# Here we determine the sentiments of the abstracts
if (len(sent_text)==1):
if (ps >= ne) and (ps > ng):
df.loc[index, 'Pos'] = 1
elif (ps <= ne) and (ne >= ng):
df.loc[index, 'Neu'] = 1
elif (ps < ng) and (ne <= ng):
df.loc[index, 'Neg'] = 1
if (ps == ng) and (len(sent_text)!=1):
df.loc[index, 'Neu'] = 1
elif (ps >= ne) and (ps > ng):# and (len(sent_text)!=1):
df.loc[index, 'Pos'] = 1
elif (ps <= ne) and (ne >= ng):# and (len(sent_text)!=1):
df.loc[index, 'Neu'] = 1
elif (ps < ng) and (ne <= ng):# and (len(sent_text)!=1):
df.loc[index, 'Neg'] = 1
df.head(20)
df = df[df.Abstract != 'nan']
df['Gender'].value_counts()
df.Gender.unique()
df['Gender'].value_counts().plot(kind='bar')
# Here we will count the number of news in each category for each gender
df['Count'] = 1
df_obs = df.groupby(['Gender', 'Category']).agg({'Count':sum})
df_obs
df['Category'].value_counts()
df['Category'].value_counts().plot(kind='bar')
df['Neg'] = df['Neg'].astype(float)
df['Neu'] = df['Neu'].astype(float)
df['Pos'] = df['Pos'].astype(float)
# Here we create gender-specific dfs
male_df = df[df['Gender'] == 'M']
female_df = df[df['Gender'] == 'F']
# Here we create a female df and analyze the sentiments
mneg_avg = (male_df['Neg'].sum())/(len(male_df))
mpos_avg = (male_df['Pos'].sum())/(len(male_df))
mneu_avg = (male_df['Neu'].sum())/(len(male_df))
print((male_df['Neg'].sum()))
print((male_df['Neu'].sum()))
print((male_df['Pos'].sum()))
print(len(male_df))
print('The avg. negative sentiment for "M" is :', round(mneg_avg, 4),
'\nThe avg. neutral sentiment for "M" is :', round(mneu_avg, 4),
'\nThe avg. positive sentiment for "M" is :', round(mpos_avg, 4))
# Here we create a female df and analyze the sentiments
fneg_avg = (female_df['Neg'].sum())/(len(female_df))
fpos_avg = (female_df['Pos'].sum())/(len(female_df))
fneu_avg = (female_df['Neu'].sum())/(len(female_df))
print((female_df['Neg'].sum()))
print((female_df['Neu'].sum()))
print((female_df['Pos'].sum()))
print(len(female_df))
print('The avg. negative sentiment for "F" is :', round(fneg_avg, 4),
'\nThe avg. neutral sentiment for "F" is :', round(fneu_avg, 4),
'\nThe avg. positive sentiment for "F" is :', round(fpos_avg, 4))
fneg_obs = female_df.groupby(['Gender', 'Category']).agg({'Neg':sum})
fneg_obs
mneg_obs = male_df.groupby(['Gender', 'Category']).agg({'Neg':sum})
mneg_obs
fpos_obs = female_df.groupby(['Gender', 'Category']).agg({'Pos':sum})
fpos_obs
mpos_obs = male_df.groupby(['Gender', 'Category']).agg({'Pos':sum})
mpos_obs
df.to_csv('MIND.csv')
```
# Using Interrupts and asyncio for Buttons and Switches
This notebook provides a simple example of using asyncio to interact asynchronously with multiple input devices. A task is created for each input device and coroutines are used to process the results. To demonstrate, we recreate the flashing-LEDs example from the getting-started notebook, but use interrupts to avoid polling the GPIO devices. The aim is to have holding a button cause the corresponding LED to flash.
## Initialising the Environment
First we import and instantiate all required classes to interact with the buttons, switches and LEDs, and ensure the base overlay is loaded.
```
from pynq import PL
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
```
## Define the flash LED task
The next step is to create a task that waits for the button to be pressed and flashes the LED until the button is released. The `while True` loop ensures that the coroutine keeps running until cancelled, so that multiple presses of the same button can be handled.
```
import asyncio
@asyncio.coroutine
def flash_led(num):
while True:
yield from base.buttons[num].wait_for_value_async(1)
while base.buttons[num].read():
base.leds[num].toggle()
yield from asyncio.sleep(0.1)
base.leds[num].off()
```
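On recent Python the same task is written with `async`/`await` (Python 3.11 removed `@asyncio.coroutine` and `yield from` coroutines). The sketch below handles a single press, dropping the outer `while True` for brevity, and uses an `asyncio.Event` as a stand-in for `base.buttons[num]` so it can run off-board; on real hardware the awaits would be `wait_for_value_async(1)` and `read()` exactly as above:

```python
import asyncio

async def flash_led(num, button, log):
    # "button" is an asyncio.Event standing in for base.buttons[num];
    # on hardware: await base.buttons[num].wait_for_value_async(1)
    await button.wait()
    while button.is_set():              # hardware: base.buttons[num].read()
        log.append(f"toggle led {num}") # hardware: base.leds[num].toggle()
        await asyncio.sleep(0.01)
    log.append(f"led {num} off")        # hardware: base.leds[num].off()

async def demo():
    log = []
    button = asyncio.Event()
    task = asyncio.create_task(flash_led(0, button, log))
    button.set()                        # simulate a button press
    await asyncio.sleep(0.05)
    button.clear()                      # simulate the release
    await asyncio.sleep(0.03)
    task.cancel()
    return log

log = asyncio.run(demo())
print(log[-1])  # "led 0 off"
```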
## Create the task
As there are four buttons we want to check, we create four tasks. The function `asyncio.ensure_future` is used to convert the coroutine to a task and schedule it in the event loop. The tasks are stored in an array so they can be referred to later when we want to cancel them.
```
tasks = [asyncio.ensure_future(flash_led(i)) for i in range(4)]
```
## Monitoring the CPU Usage
One of the advantages of interrupt-based I/O is minimised CPU usage while waiting for events. To see how CPU usage is impacted by the flashing-LED tasks, we create another task that prints out the current CPU utilisation every 3 seconds.
```
import psutil
@asyncio.coroutine
def print_cpu_usage():
# Calculate the CPU utilisation by the amount of idle time
# each CPU has had in three second intervals
last_idle = [c.idle for c in psutil.cpu_times(percpu=True)]
while True:
yield from asyncio.sleep(3)
next_idle = [c.idle for c in psutil.cpu_times(percpu=True)]
usage = [(1-(c2-c1)/3) * 100 for c1,c2 in zip(last_idle, next_idle)]
print("CPU Usage: {0:3.2f}%, {1:3.2f}%".format(*usage))
last_idle = next_idle
tasks.append(asyncio.ensure_future(print_cpu_usage()))
```
## Run the event loop
All of the blocking wait_for commands will run the event loop until the condition is met. All that is needed is to call the blocking `wait_for_value` method on the switch we are using as the termination condition.
While waiting for switch 0 to get high, users can press any push button on the board to flash the corresponding LED. While this loop is running, try opening a terminal and running `top` to see that python is consuming no CPU cycles while waiting for peripherals.
As this code runs until the switch 0 is high, make sure it is low before running the example.
```
if base.switches[0].read():
print("Please set switch 0 low before running")
else:
base.switches[0].wait_for_value(1)
```
## Clean up
Even though the event loop has stopped running, the tasks are still active and will run again when the event loop is next used. To avoid this, the tasks should be cancelled when they are no longer needed.
```
[t.cancel() for t in tasks]
```
Now if we re-run the event loop, nothing will happen when we press the buttons. The process will block until the switch is set back down to the low position.
```
base.switches[0].wait_for_value(0)
```
# The number of cats
You are working on a natural language processing project to determine what makes great writers so great. Your current hypothesis is that great writers talk about cats a lot. To prove it, you want to count the number of times the word "cat" appears in "Alice's Adventures in Wonderland" by Lewis Carroll. You have already downloaded a text file, alice.txt, with the entire contents of this great book.
```
# Open "alice.txt" and assign the file to "file"
with open('alice.txt', encoding='utf-8') as file:
text = file.read()
n = 0
for word in text.split():
if word.lower() in ['cat', 'cats']:
n += 1
print('Lewis Carroll uses the word "cat" {} times'.format(n))
```
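One caveat with `text.split()`: punctuation stays attached, so `"cat,"` and `"cat."` are never counted. A variant that tokenises with a regex first catches those too (the same counting logic; the regex is one of several reasonable tokenisations):

```python
import re
from collections import Counter

def count_cats(text):
    # tokenise on letters/apostrophes so "cat," and "cat." also match,
    # which the plain split() loop above misses
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return counts['cat'] + counts['cats']

print(count_cats("The Cat sat. Cats, cats everywhere -- one cat!"))  # 4
```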
# The speed of cats
You're working on a new web service that processes Instagram feeds to identify which pictures contain cats (don't ask why -- it's the internet). The code that processes the data is slower than you would like it to be, so you are working on tuning it up to run faster. Given an image, image, you have two functions that can process it:
- process_with_numpy(image)
- process_with_pytorch(image)
Your colleague wrote a context manager, timer(), that will print out how long the code inside the context block takes to run. She is suggesting you use it to see which of the two options is faster. Time each function to determine which one to use in your web service.
```
import numpy as np
import torch
def get_image_from_instagram():
return np.random.rand(84, 84)
import time
import contextlib
@contextlib.contextmanager
def timer():
"""Time how long code in the context block takes to run."""
t0 = time.time()
try:
yield
except:
raise
finally:
t1 = time.time()
print('Elapsed: {:.2f} seconds'.format(t1 - t0))
def _process_pic(n_sec):
print('Processing', end='', flush=True)
for i in range(10):
print('.', end='' if i < 9 else 'done!\n', flush=True)
time.sleep(n_sec)
def process_with_numpy(p):
_process_pic(0.1521)
def process_with_pytorch(p):
_process_pic(0.0328)
image = get_image_from_instagram()
# Time how long process_with_numpy(image) takes to run
with timer():
print('Numpy version')
process_with_numpy(image)
# Time how long process_with_pytorch(image) takes to run
with timer():
print('Pytorch version')
process_with_pytorch(image)
```
# The timer() context manager
A colleague of yours is working on a web service that processes Instagram photos. Customers are complaining that the service takes too long to identify whether or not an image has a cat in it, so your colleague has come to you for help. You decide to write a context manager that they can use to time how long their functions take to run.
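One way such a `timer()` could look — essentially the same shape as the colleague's version used earlier, with the `try`/`finally` guaranteeing the elapsed time prints even if the timed block raises:

```python
import contextlib
import time

@contextlib.contextmanager
def timer():
    """Print how long the code inside the with-block takes to run."""
    start = time.time()
    try:
        yield
    finally:
        print('Elapsed: {:.2f} seconds'.format(time.time() - start))

with timer():
    time.sleep(0.1)  # prints roughly "Elapsed: 0.10 seconds"
```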