# Generating C code for the right-hand sides of Maxwell's equations, in ***curvilinear*** coordinates, using a reference metric formalism
## Author: Ian Ruchlin
### Formatting improvements courtesy Brandon Clark
[comment]: <> (Abstract: TODO)
### The following formulations of Maxwell's equations, called System I and System II, are described in [Illustrating Stability Properties of Numerical Relativity in Electrodynamics](https://arxiv.org/abs/gr-qc/0201051) by Knapp et al.
**Notebook Status:** <font color='red'><b> In progress </b></font>
**Validation Notes:** This module has not yet undergone validation testing. Do ***not*** use it until after appropriate validation testing has been performed.
## Introduction:
[Maxwell's equations](https://en.wikipedia.org/wiki/Maxwell%27s_equations) are subject to the Gauss' law constraint
$$\mathcal{C} \equiv \hat{D}_{i} E^{i} - 4 \pi \rho = 0 \; ,$$
where $E^{i}$ is the electric vector field, $\hat{D}_{i}$ is the [covariant derivative](https://en.wikipedia.org/wiki/Covariant_derivative) associated with the reference metric $\hat{\gamma}_{i j}$ (which is taken to represent flat space), and $\rho$ is the electric charge density. We use $\mathcal{C}$ as a measure of numerical error. Maxwell's equations are also required to satisfy $\hat{D}_{i} B^{i} = 0$, where $B^{i}$ is the magnetic vector field. The magnetic constraint implies that the magnetic field can be expressed as
$$B_{i} = \epsilon_{i j k} \hat{D}^{j} A^{k} \; ,$$
where $\epsilon_{i j k}$ is the totally antisymmetric [Levi-Civita tensor](https://en.wikipedia.org/wiki/Levi-Civita_symbol) and $A^{i}$ is the vector potential field. Together with the scalar potential $\psi$, the electric field can be expressed in terms of the potential fields as
$$E_{i} = -\hat{D}_{i} \psi - \partial_{t} A_{i} \; .$$
For now, we work in vacuum, where the electric charge density and the electric current density vector both vanish ($\rho = 0$ and $j_{i} = 0$).
In addition to the Gauss constraints, the electric and magnetic fields obey two independent [electromagnetic invariants](https://en.wikipedia.org/wiki/Classification_of_electromagnetic_fields#Invariants)
\begin{align}
\mathcal{P} &\equiv B_{i} B^{i} - E_{i} E^{i} \; , \\
\mathcal{Q} &\equiv E_{i} B^{i} \; .
\end{align}
In vacuum, these satisfy $\mathcal{P} = \mathcal{Q} = 0$.
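As a quick sanity check on these definitions (separate from the code generation below), both invariants vanish for a vacuum plane wave, whose electric and magnetic fields are perpendicular and equal in magnitude. Here is a minimal NumPy sketch; the amplitude and phase are illustrative choices:

```python
import numpy as np

# A plane electromagnetic wave propagating along z (units with c = 1);
# E0 and phase are hypothetical values chosen for illustration.
E0, phase = 1.5, 0.3
E = np.array([E0 * np.cos(phase), 0.0, 0.0])  # electric field
B = np.array([0.0, E0 * np.cos(phase), 0.0])  # magnetic field: perpendicular, equal magnitude

P = np.dot(B, B) - np.dot(E, E)  # first invariant
Q = np.dot(E, B)                 # second invariant
print(P, Q)  # both vanish for a vacuum radiation field
```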
<a id='toc'></a>
# Table of Contents:
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#sys1): System I
1. [Step 2](#sys2): System II
1. [Step 3](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='sys1'></a>
# Step 1: System I \[Back to [top](#toc)\]
$$\label{sys1}$$
In terms of the above definitions, Maxwell's evolution equations take the form
\begin{align}
\partial_{t} A_{i} &= -E_{i} - \hat{D}_{i} \psi \; , \\
\partial_{t} E_{i} &= -\hat{D}_{j} \hat{D}^{j} A_{i} + \hat{D}_{i} \hat{D}_{j} A^{j}\; , \\
\partial_{t} \psi &= -\hat{D}_{i} A^{i} \; .
\end{align}
Note that this coupled system contains mixed second derivatives in the second term on the right hand side of the $E^{i}$ evolution equation. We will revisit this fact when building System II.
It can be shown that the Gauss constraint satisfies the evolution equation
$$\partial_{t} \mathcal{C} = 0 \; .$$
This implies that any constraint-violating numerical error remains fixed in place during the evolution. This becomes problematic when the violations grow large and spoil the physics of the simulation.
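To see why, take a time derivative of the constraint (with $\rho = 0$) and substitute the $E^{i}$ evolution equation; because the reference metric is flat, its covariant derivatives commute, so the two terms cancel:

$$\partial_{t} \mathcal{C} = \hat{D}_{i} \partial_{t} E^{i} = \hat{D}_{i} \left( -\hat{D}_{j} \hat{D}^{j} A^{i} + \hat{D}^{i} \hat{D}_{j} A^{j} \right) = -\hat{D}_{j} \hat{D}^{j} \left( \hat{D}_{i} A^{i} \right) + \hat{D}_{i} \hat{D}^{i} \left( \hat{D}_{j} A^{j} \right) = 0 \; .$$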
```
import NRPy_param_funcs as par    # NRPy+: parameter interface
import indexedexp as ixp          # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import grid as gri                # NRPy+: Functions having to do with numerical grids
import finite_difference as fin   # NRPy+: Finite difference C code generation module
import reference_metric as rfm    # NRPy+: Reference metric support
from outputC import lhrh          # NRPy+: Core C code output module

par.set_parval_from_str("reference_metric::CoordSystem", "Spherical")
par.set_parval_from_str("grid::DIM", 3)
rfm.reference_metric()

# The name of this module ("maxwell") is given by __name__:
thismodule = __name__

# Step 0: Read the spatial dimension parameter as DIM.
DIM = par.parval_from_str("grid::DIM")

# Step 1: Set the finite differencing order to 4.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", 4)

# Step 2: Register gridfunctions that are needed as input.
psi = gri.register_gridfunctions("EVOL", ["psi"])

# Step 3a: Declare the rank-1 indexed expressions E_{i}, A_{i},
#          and \partial_{i} \psi. Derivative variables like these
#          must have an underscore in them, so the finite
#          difference module can parse the variable name properly.
ED = ixp.register_gridfunctions_for_single_rank1("EVOL", "ED")
AD = ixp.register_gridfunctions_for_single_rank1("EVOL", "AD")
psi_dD = ixp.declarerank1("psi_dD")

# Step 3b: Declare the rank-2 indexed expression \partial_{j} A_{i},
#          which is not symmetric in its indices.
#          Derivative variables like these must have an underscore
#          in them, so the finite difference module can parse the
#          variable name properly.
AD_dD = ixp.declarerank2("AD_dD", "nosym")

# Step 3c: Declare the rank-3 indexed expression \partial_{jk} A_{i},
#          which is symmetric in the two {jk} indices.
AD_dDD = ixp.declarerank3("AD_dDD", "sym12")

# Step 4: Calculate first and second covariant derivatives, and the
#         necessary contractions.

# First covariant derivative
# D_{j} A_{i} = A_{i,j} - \Gamma^{k}_{ij} A_{k}
AD_dHatD = ixp.zerorank2()
for i in range(DIM):
    for j in range(DIM):
        AD_dHatD[i][j] = AD_dD[i][j]
        for k in range(DIM):
            AD_dHatD[i][j] -= rfm.GammahatUDD[k][i][j] * AD[k]

# Second covariant derivative
# D_{k} D_{j} A_{i} = \partial_{k} D_{j} A_{i} - \Gamma^{l}_{jk} D_{l} A_{i}
#                     - \Gamma^{l}_{ik} D_{j} A_{l}
#                   = A_{i,jk}
#                     - \Gamma^{l}_{ij,k} A_{l}
#                     - \Gamma^{l}_{ij} A_{l,k}
#                     - \Gamma^{l}_{jk} A_{i;\hat{l}}
#                     - \Gamma^{l}_{ik} A_{l;\hat{j}}
AD_dHatDD = ixp.zerorank3()
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            AD_dHatDD[i][j][k] = AD_dDD[i][j][k]
            for l in range(DIM):
                AD_dHatDD[i][j][k] += - rfm.GammahatUDDdD[l][i][j][k] * AD[l] \
                                      - rfm.GammahatUDD[l][i][j] * AD_dD[l][k] \
                                      - rfm.GammahatUDD[l][j][k] * AD_dHatD[i][l] \
                                      - rfm.GammahatUDD[l][i][k] * AD_dHatD[l][j]

# Covariant divergence
# D_{i} A^{i} = ghat^{ij} D_{j} A_{i}
DivA = 0
# Gradient of covariant divergence
# DivA_dD_{i} = ghat^{jk} A_{k;\hat{j}\hat{i}}
DivA_dD = ixp.zerorank1()
# Covariant Laplacian
# LapAD_{i} = ghat^{jk} A_{i;\hat{j}\hat{k}}
LapAD = ixp.zerorank1()
for i in range(DIM):
    for j in range(DIM):
        DivA += rfm.ghatUU[i][j] * AD_dHatD[i][j]
        for k in range(DIM):
            DivA_dD[i] += rfm.ghatUU[j][k] * AD_dHatDD[k][j][i]
            LapAD[i] += rfm.ghatUU[j][k] * AD_dHatDD[i][j][k]

# Step 5: Define right-hand sides for the evolution.
AD_rhs = ixp.zerorank1()
ED_rhs = ixp.zerorank1()
for i in range(DIM):
    AD_rhs[i] = -ED[i] - psi_dD[i]
    ED_rhs[i] = -LapAD[i] + DivA_dD[i]
psi_rhs = -DivA

# Step 6: Generate C code for System I Maxwell's evolution equations,
#         print output to the screen (standard out, or stdout).
lhrh_list = []
for i in range(DIM):
    lhrh_list.append(lhrh(lhs=gri.gfaccess("rhs_gfs", "AD" + str(i)), rhs=AD_rhs[i]))
    lhrh_list.append(lhrh(lhs=gri.gfaccess("rhs_gfs", "ED" + str(i)), rhs=ED_rhs[i]))
lhrh_list.append(lhrh(lhs=gri.gfaccess("rhs_gfs", "psi"), rhs=psi_rhs))
fin.FD_outputC("stdout", lhrh_list)
```
<a id='sys2'></a>
# Step 2: System II \[Back to [top](#toc)\]
$$\label{sys2}$$
Define the auxiliary variable
$$\Gamma \equiv \hat{D}_{i} A^{i} \; .$$
Substituting this into Maxwell's equations yields the system
\begin{align}
\partial_{t} A_{i} &= -E_{i} - \hat{D}_{i} \psi \; , \\
\partial_{t} E_{i} &= -\hat{D}_{j} \hat{D}^{j} A_{i} + \hat{D}_{i} \Gamma \; , \\
\partial_{t} \psi &= -\Gamma \; , \\
\partial_{t} \Gamma &= -\hat{D}_{i} \hat{D}^{i} \psi \; .
\end{align}
It can be shown that the Gauss constraint now satisfies the wave equation
$$\partial_{t}^{2} \mathcal{C} = \hat{D}_{i} \hat{D}^{i} \mathcal{C} \; .$$
Thus, any constraint violation introduced by numerical error propagates away at the speed of light. This property increases the stability of the simulation, compared to System I above. A similar trick is used in the [BSSN formulation](Tutorial-BSSNCurvilinear.ipynb) of Einstein's equations.
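To verify the wave equation, differentiate the constraint twice in time using the System II evolution equations (again with $\rho = 0$ and flat, commuting $\hat{D}_{i}$):

\begin{align}
\partial_{t} \mathcal{C} &= \hat{D}_{i} \partial_{t} E^{i} = -\hat{D}_{j} \hat{D}^{j} \left( \hat{D}_{i} A^{i} \right) + \hat{D}_{i} \hat{D}^{i} \Gamma \; , \\
\partial_{t}^{2} \mathcal{C} &= -\hat{D}_{j} \hat{D}^{j} \hat{D}_{i} \partial_{t} A^{i} + \hat{D}_{i} \hat{D}^{i} \partial_{t} \Gamma \\
&= \hat{D}_{j} \hat{D}^{j} \hat{D}_{i} \left( E^{i} + \hat{D}^{i} \psi \right) - \hat{D}_{i} \hat{D}^{i} \hat{D}_{j} \hat{D}^{j} \psi = \hat{D}_{j} \hat{D}^{j} \mathcal{C} \; .
\end{align}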
```
# We inherit here all of the definitions from System I, above.

# Step 7a: Register the scalar auxiliary variable \Gamma.
Gamma = gri.register_gridfunctions("EVOL", ["Gamma"])

# Step 7b: Declare the ordinary gradient \partial_{i} \Gamma.
Gamma_dD = ixp.declarerank1("Gamma_dD")

# Step 8a: Construct the second covariant derivative of the scalar \psi
#          \psi_{;\hat{i}\hat{j}} = \psi_{,i;\hat{j}}
#                                 = \psi_{,ij} - \Gamma^{k}_{ij} \psi_{,k}
psi_dDD = ixp.declarerank2("psi_dDD", "sym01")
psi_dHatDD = ixp.zerorank2()
for i in range(DIM):
    for j in range(DIM):
        psi_dHatDD[i][j] = psi_dDD[i][j]
        for k in range(DIM):
            psi_dHatDD[i][j] += - rfm.GammahatUDD[k][i][j] * psi_dD[k]

# Step 8b: Construct the covariant Laplacian of \psi
#          Lappsi = ghat^{ij} D_{j} D_{i} \psi
Lappsi = 0
for i in range(DIM):
    for j in range(DIM):
        Lappsi += rfm.ghatUU[i][j] * psi_dHatDD[i][j]

# Step 9: Define right-hand sides for the evolution.
AD_rhs = ixp.zerorank1()
ED_rhs = ixp.zerorank1()
for i in range(DIM):
    AD_rhs[i] = -ED[i] - psi_dD[i]
    ED_rhs[i] = -LapAD[i] + Gamma_dD[i]
psi_rhs = -Gamma
Gamma_rhs = -Lappsi

# Step 10: Generate C code for System II Maxwell's evolution equations,
#          print output to the screen (standard out, or stdout).
lhrh_list = []
for i in range(DIM):
    lhrh_list.append(lhrh(lhs=gri.gfaccess("rhs_gfs", "AD" + str(i)), rhs=AD_rhs[i]))
    lhrh_list.append(lhrh(lhs=gri.gfaccess("rhs_gfs", "ED" + str(i)), rhs=ED_rhs[i]))
lhrh_list.append(lhrh(lhs=gri.gfaccess("rhs_gfs", "psi"), rhs=psi_rhs))
lhrh_list.append(lhrh(lhs=gri.gfaccess("rhs_gfs", "Gamma"), rhs=Gamma_rhs))
fin.FD_outputC("stdout", lhrh_list)
```
<a id='latex_pdf_output'></a>
# Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-MaxwellCurvilinear.pdf](Tutorial-MaxwellCurvilinear.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-MaxwellCurvilinear")
```
# Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2016 and has seen impressive results in generating new images; you can read the [original paper, here](https://arxiv.org/pdf/1511.06434.pdf).
You'll be training DCGAN on the [Street View House Numbers](http://ufldl.stanford.edu/housenumbers/) (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
<img src='assets/svhn_dcgan.png' width=80% />
So, our goal is to create a DCGAN that can generate new, realistic-looking images of house numbers. We'll go through the following steps to do this:
* Load in and pre-process the house numbers dataset
* Define discriminator and generator networks
* Train these adversarial networks
* Visualize the loss over time and some sample, generated images
#### Deeper Convolutional Networks
Since this dataset is more complex than our MNIST data, we'll need a deeper network to accurately identify patterns in these images and be able to generate new ones. Specifically, we'll use a series of convolutional or transpose convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get these convolutional networks to train.
Besides these changes in network structure, training the discriminator and generator networks should be the same as before. That is, the discriminator will alternate training on real and fake (generated) images, and the generator will aim to trick the discriminator into thinking that its generated images are real!
```
# import libraries
import matplotlib.pyplot as plt
import numpy as np
import pickle as pkl
%matplotlib inline
```
## Getting the data
Here you can download the SVHN dataset. It's built into the PyTorch `torchvision.datasets` library. We can load in the training data, transform it into Tensor datatypes, then create dataloaders to batch our data into a desired size.
```
import torch
from torchvision import datasets
from torchvision import transforms
# Tensor transform
transform = transforms.ToTensor()
# SVHN training datasets
svhn_train = datasets.SVHN(root='data/', split='train', download=True, transform=transform)
batch_size = 128
num_workers = 0
# build DataLoaders for SVHN dataset
train_loader = torch.utils.data.DataLoader(dataset=svhn_train,
batch_size=batch_size,
shuffle=True,
num_workers=num_workers)
```
### Visualize the Data
Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real, training images that we'll pass to the discriminator. Notice that each image has _one_ associated, numerical label.
```
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # next(dataiter), since dataiter.next() was removed in recent PyTorch

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
plot_size = 20
for idx in np.arange(plot_size):
    ax = fig.add_subplot(2, plot_size // 2, idx + 1, xticks=[], yticks=[])
    ax.imshow(np.transpose(images[idx], (1, 2, 0)))
    # print out the correct label for each image
    # .item() gets the value contained in a Tensor
    ax.set_title(str(labels[idx].item()))
plt.show()
```
### Pre-processing: scaling from -1 to 1
We need to do a bit of pre-processing; we know that the output of our `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
```
# current range
img = images[0]
print('Min: ', img.min())
print('Max: ', img.max())

# helper scale function
def scale(x, feature_range=(-1, 1)):
    ''' Scale takes in an image x and returns that image, scaled
        with a feature_range of pixel values from -1 to 1.
        This function assumes that the input x is already scaled from 0-1.'''
    # assume x is scaled to (0, 1); scale to feature_range and return scaled x
    min, max = feature_range
    return x * (max - min) + min

# scaled range
scaled_img = scale(img)
print('Scaled min: ', scaled_img.min())
print('Scaled max: ', scaled_img.max())
```
---
# Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
## Discriminator
Here you'll build the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers.
* The inputs to the discriminator are 32x32x3 tensor images
* You'll want a few convolutional, hidden layers
* Then a fully connected layer for the output; as before, we want a sigmoid output, but we'll add that in the loss function, [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss), later
<img src='assets/conv_discriminator.png' width=80%/>
For the depths of the convolutional layers I suggest starting with 32 filters in the first layer, then double that depth as you add layers (to 64, 128, etc.). Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpooling layers.
You'll also want to use batch normalization with [nn.BatchNorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) on each layer **except** the first convolutional layer and final, linear output layer.
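For reference, the spatial size after each strided convolution follows the standard Conv2d output-size formula; a small sketch using the suggested kernel size 4, stride 2, and padding 1 confirms the 32 → 16 → 8 → 4 progression used in the discriminator below:

```python
def conv2d_out(size, kernel=4, stride=2, padding=1):
    # standard Conv2d output-size formula: floor((W - K + 2P) / S) + 1
    return (size - kernel + 2 * padding) // stride + 1

sizes = [32]
for _ in range(3):
    sizes.append(conv2d_out(sizes[-1]))
print(sizes)  # [32, 16, 8, 4]
```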
#### Helper `conv` function
In general, each layer should look something like convolution > batch norm > leaky ReLU, and so we'll define a function to put these layers together. This function will create a sequential series of a convolutional + an optional batch norm layer. We'll create these using PyTorch's [Sequential container](https://pytorch.org/docs/stable/nn.html#sequential), which takes in a list of layers and creates layers according to the order that they are passed in to the Sequential constructor.
Note: It is also suggested that you use a **kernel_size of 4** and a **stride of 2** for strided convolutions.
```
import torch.nn as nn
import torch.nn.functional as F

# helper conv function
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    """Creates a convolutional layer, with optional batch normalization."""
    layers = []
    conv_layer = nn.Conv2d(in_channels, out_channels,
                           kernel_size, stride, padding, bias=False)
    # append conv layer
    layers.append(conv_layer)
    if batch_norm:
        # append batchnorm layer
        layers.append(nn.BatchNorm2d(out_channels))
    # using Sequential container
    return nn.Sequential(*layers)


class Discriminator(nn.Module):

    def __init__(self, conv_dim=32):
        super(Discriminator, self).__init__()
        # input 32x32x3 -> output 16x16x32
        self.conv1 = conv(3, conv_dim, 4, batch_norm=False)  # no batch norm in the first conv layer
        # input 16x16x32 -> output 8x8x64
        self.conv2 = conv(conv_dim, conv_dim * 2, 4)
        # input 8x8x64 -> output 4x4x128
        self.conv3 = conv(conv_dim * 2, conv_dim * 4, 4)
        # define a FC layer producing a single real/fake logit
        self.fc = nn.Linear(4 * 4 * 128, 1)
        self.dropout = nn.Dropout(0.3)

    def forward(self, x):
        x = F.leaky_relu(self.conv1(x), 0.2)
        x = self.dropout(x)
        x = F.leaky_relu(self.conv2(x), 0.2)
        x = self.dropout(x)
        x = F.leaky_relu(self.conv3(x), 0.2)
        x = self.dropout(x)
        # flatten the image
        x = x.view(-1, 4 * 4 * 128)
        x = self.fc(x)
        return x
```
## Generator
Next, you'll build the generator network. The input will be our noise vector `z`, as before. And, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
<img src='assets/conv_generator.png' width=80% />
What's new here is we'll use transpose convolutional layers to create our new images.
* The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x512.
* Then, we use batch normalization and a leaky ReLU activation.
* Next is a series of [transpose convolutional layers](https://pytorch.org/docs/stable/nn.html#convtranspose2d), where you typically halve the depth and double the width and height of the previous layer.
* And, we'll apply batch normalization and ReLU to all but the last of these hidden layers, where we will instead apply only a `tanh` activation.
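Each kernel-4, stride-2 transpose convolution exactly doubles the spatial size, provided `padding=1` is actually passed to `nn.ConvTranspose2d`. A small sketch of the ConvTranspose2d output-size formula (with dilation 1 and no output padding) confirms the 4 → 8 → 16 → 32 progression:

```python
def convtranspose2d_out(size, kernel=4, stride=2, padding=1):
    # ConvTranspose2d output-size formula (dilation = 1, output_padding = 0):
    # (W - 1) * S - 2P + K
    return (size - 1) * stride - 2 * padding + kernel

sizes = [4]
for _ in range(3):
    sizes.append(convtranspose2d_out(sizes[-1]))
print(sizes)  # [4, 8, 16, 32]
```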
#### Helper `deconv` function
For each of these layers, the general scheme is transpose convolution > batch norm > ReLU, and so we'll define a function to put these layers together. This function will create a sequential series of a transpose convolutional + an optional batch norm layer. We'll create these using PyTorch's Sequential container, which takes in a list of layers and creates layers according to the order that they are passed in to the Sequential constructor.
Note: It is also suggested that you use a **kernel_size of 4** and a **stride of 2** for transpose convolutions.
```
# helper deconv function
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    """Creates a transposed-convolutional layer, with optional batch normalization."""
    ## create a sequence of transpose + optional batch norm layers
    layers = []
    # pass padding through so each layer exactly doubles H and W (4 -> 8 -> 16 -> 32)
    conv_transpose = nn.ConvTranspose2d(in_channels, out_channels, kernel_size,
                                        stride, padding, bias=False)
    # append conv transpose layer
    layers.append(conv_transpose)
    if batch_norm:
        layers.append(nn.BatchNorm2d(out_channels))
    return nn.Sequential(*layers)


class Generator(nn.Module):

    def __init__(self, z_size, conv_dim=32):
        super(Generator, self).__init__()
        self.fc = nn.Linear(z_size, conv_dim * 4 * 4 * 4)
        self.conv_trans1 = deconv(conv_dim * 4, conv_dim * 2, 4)
        self.conv_trans2 = deconv(conv_dim * 2, conv_dim, 4)
        self.conv_trans3 = deconv(conv_dim, 3, 4, batch_norm=False)
        self.dropout = nn.Dropout(0.3)

    def forward(self, x):
        x = self.fc(x)
        # reshape into a deep, narrow 4x4 feature map (depth = conv_dim*4 = 128)
        x = x.view(-1, 4 * 32, 4, 4)
        x = F.relu(self.conv_trans1(x))
        x = F.relu(self.conv_trans2(x))
        x = self.conv_trans3(x)
        out = torch.tanh(x)  # torch.tanh, since F.tanh is deprecated
        return out
```
## Build complete network
Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
```
# define hyperparams
conv_dim = 32
z_size = 100
# define discriminator and generator
D = Discriminator(conv_dim)
G = Generator(z_size=z_size, conv_dim=conv_dim)
print(D)
print()
print(G)
```
### Training on GPU
Check if you can train on GPU. If you can, set this as a variable and move your models to GPU.
> Later, we'll also move any inputs our models and loss functions see (real_images, z, and ground truth labels) to GPU as well.
```
train_on_gpu = torch.cuda.is_available()

if train_on_gpu:
    # move models to GPU
    G.cuda()
    D.cuda()
    print('GPU available for training. Models moved to GPU')
else:
    print('Training on CPU.')
```
---
## Discriminator and Generator Losses
Now we need to calculate the losses. And this will be exactly the same as before.
### Discriminator Losses
> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`.
* Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
The losses will be binary cross entropy loss with logits, which we can get with [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss). This combines a `sigmoid` activation function **and** binary cross entropy loss in one function.
For the real images, we want `D(real_images) = 1`. That is, we want the discriminator to classify the real images with a label = 1, indicating that these are real. The discriminator loss for the fake data is similar. We want `D(fake_images) = 0`, where the fake images are the _generator output_, `fake_images = G(z)`.
### Generator Loss
The generator loss will look similar only with flipped labels. The generator's goal is to get `D(fake_images) = 1`. In this case, the labels are **flipped** to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real!
```
def real_loss(D_out, smooth=False):
    batch_size = D_out.size(0)
    # label smoothing
    if smooth:
        # smooth, real labels = 0.9
        labels = torch.ones(batch_size) * 0.9
    else:
        labels = torch.ones(batch_size)  # real labels = 1
    # move labels to GPU if available
    if train_on_gpu:
        labels = labels.cuda()
    # binary cross entropy with logits loss
    criterion = nn.BCEWithLogitsLoss()
    # calculate loss
    loss = criterion(D_out.squeeze(), labels)
    return loss

def fake_loss(D_out):
    batch_size = D_out.size(0)
    labels = torch.zeros(batch_size)  # fake labels = 0
    if train_on_gpu:
        labels = labels.cuda()
    criterion = nn.BCEWithLogitsLoss()
    # calculate loss
    loss = criterion(D_out.squeeze(), labels)
    return loss
```
## Optimizers
Not much new here, but notice how I am using a small learning rate and custom parameters for the Adam optimizers. This is based on research into DCGAN model convergence.
### Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read [the DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf) to see what worked for them.
```
import torch.optim as optim

# params
lr = 0.0002
beta1 = 0.5
beta2 = 0.999  # the DCGAN paper pairs beta1 = 0.5 with the default beta2 = 0.999

# Create optimizers for the discriminator and generator
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2])
```
---
## Training
Training will involve alternating between training the discriminator and the generator. We'll use our functions `real_loss` and `fake_loss` to help us calculate the discriminator losses in all of the following cases.
### Discriminator training
1. Compute the discriminator loss on real, training images
2. Generate fake images
3. Compute the discriminator loss on fake, generated images
4. Add up real and fake loss
5. Perform backpropagation + an optimization step to update the discriminator's weights
### Generator training
1. Generate fake images
2. Compute the discriminator loss on fake images, using **flipped** labels!
3. Perform backpropagation + an optimization step to update the generator's weights
#### Saving Samples
As we train, we'll also print out some loss statistics and save some generated "fake" samples.
**Evaluation mode**
Notice that, when we call our generator to create the samples to display, we set our model to evaluation mode: `G.eval()`. That's so the batch normalization layers will use the population statistics rather than the batch statistics (as they do during training), *and* so dropout layers will operate in eval() mode; not turning off any nodes for generating samples.
```
import pickle as pkl

# training hyperparams
num_epochs = 30

# keep track of loss and generated, "fake" samples
samples = []
losses = []

print_every = 300

# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance.
sample_size = 16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()

# train the network
for epoch in range(num_epochs):

    for batch_i, (real_images, _) in enumerate(train_loader):

        batch_size = real_images.size(0)

        # ============================================
        #            TRAIN THE DISCRIMINATOR
        # ============================================

        d_optimizer.zero_grad()

        # 1. Train with real images

        # Compute the discriminator losses on real images
        if train_on_gpu:
            real_images = real_images.cuda()
        # important rescaling step: map pixel values from (0, 1) to (-1, 1) exactly once
        real_images = scale(real_images)

        D_real = D(real_images)
        d_real_loss = real_loss(D_real)

        # 2. Train with fake images

        # Generate fake images
        z = np.random.uniform(-1, 1, size=(batch_size, z_size))
        z = torch.from_numpy(z).float()
        # move z to GPU, if available
        if train_on_gpu:
            z = z.cuda()
        fake_images = G(z)

        # Compute the discriminator losses on fake images
        D_fake = D(fake_images)
        d_fake_loss = fake_loss(D_fake)

        # add up loss and perform backprop
        d_loss = d_real_loss + d_fake_loss
        d_loss.backward()
        d_optimizer.step()

        # =========================================
        #            TRAIN THE GENERATOR
        # =========================================
        g_optimizer.zero_grad()

        # 1. Train with fake images and flipped labels

        # Generate fake images
        z = np.random.uniform(-1, 1, size=(batch_size, z_size))
        z = torch.from_numpy(z).float()
        if train_on_gpu:
            z = z.cuda()
        fake_images = G(z)

        # Compute the discriminator losses on fake images
        # using flipped labels!
        D_fake = D(fake_images)
        g_loss = real_loss(D_fake)  # use real loss to flip labels

        # perform backprop
        g_loss.backward()
        g_optimizer.step()

        # Print some loss stats
        if batch_i % print_every == 0:
            # append discriminator loss and generator loss
            losses.append((d_loss.item(), g_loss.item()))
            # print discriminator and generator loss
            print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
                epoch + 1, num_epochs, d_loss.item(), g_loss.item()))

    ## AFTER EACH EPOCH ##
    # generate and save sample, fake images
    G.eval()  # for generating samples
    if train_on_gpu:
        fixed_z = fixed_z.cuda()
    samples_z = G(fixed_z)
    samples.append(samples_z)
    G.train()  # back to training mode

# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
    pkl.dump(samples, f)
```
## Training loss
Here we'll plot the training losses for the generator and discriminator, recorded every `print_every` batches during training.
```
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
```
## Generator samples from training
Here we can view samples of images from the generator. We'll look at the images we saved during training.
```
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
    fig, axes = plt.subplots(figsize=(16, 4), nrows=2, ncols=8, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        img = img.detach().cpu().numpy()
        img = np.transpose(img, (1, 2, 0))
        img = ((img + 1) * 255 / 2).astype(np.uint8)  # rescale from (-1, 1) to pixel range (0-255)
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((32, 32, 3)))

_ = view_samples(-1, samples)
```
```
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import matplotlib.pyplot as plt
%matplotlib inline
data_raw = pd.read_csv('../input/sign_mnist_train.csv', sep=",")
test_data_raw = pd.read_csv('../input/sign_mnist_test.csv', sep=",")
labels = data_raw['label']
data_raw.drop('label', axis=1, inplace=True)
labels_test = test_data_raw['label']
test_data_raw.drop('label', axis=1, inplace=True)
data = data_raw.values
labels = labels.values
test_data = test_data_raw.values
labels_test = labels_test.values
pixels = data[10].reshape(28, 28)
plt.subplot(221)
sns.heatmap(data=pixels)
pixels = data[12].reshape(28, 28)
plt.subplot(222)
sns.heatmap(data=pixels)
pixels = data[20].reshape(28, 28)
plt.subplot(223)
sns.heatmap(data=pixels)
pixels = data[32].reshape(28, 28)
plt.subplot(224)
sns.heatmap(data=pixels)
reshaped = []
for i in data:
reshaped.append(i.reshape(1, 28, 28))
data = np.array(reshaped)
reshaped_test = []
for i in test_data:
reshaped_test.append(i.reshape(1,28,28))
test_data = np.array(reshaped_test)
x = torch.FloatTensor(data)
y = torch.LongTensor(labels.tolist())
test_x = torch.FloatTensor(test_data)
test_y = torch.LongTensor(labels_test.tolist())
class Network(nn.Module):
def __init__(self):
super(Network, self).__init__()
self.conv1 = nn.Conv2d(1, 10, 3)
self.pool1 = nn.MaxPool2d(2)
self.conv2 = nn.Conv2d(10, 20, 3)
self.pool2 = nn.MaxPool2d(2)
self.conv3 = nn.Conv2d(20, 30, 3)
self.dropout1 = nn.Dropout2d()
self.fc3 = nn.Linear(30 * 3 * 3, 270)
self.fc4 = nn.Linear(270, 26)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.pool1(x)
x = self.conv2(x)
x = F.relu(x)
x = self.pool2(x)
x = self.conv3(x)
x = F.relu(x)
x = self.dropout1(x)
x = x.view(-1, 30 * 3 * 3)
x = F.relu(self.fc3(x))
x = F.relu(self.fc4(x))
return self.softmax(x)
def test(self, predictions, labels):
self.eval()
correct = 0
for p, l in zip(predictions, labels):
if p == l:
correct += 1
acc = correct / len(predictions)
print("Correct predictions: %5d / %5d (%5f)" % (correct, len(predictions), acc))
def evaluate(self, predictions, labels):
correct = 0
for p, l in zip(predictions, labels):
if p == l:
correct += 1
acc = correct / len(predictions)
return(acc)
!pip install torchsummary
from torchsummary import summary
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Network().to(device)
summary(model, (1, 28, 28))
net = Network()
optimizer = optim.SGD(net.parameters(),0.001, momentum=0.7)
loss_func = nn.CrossEntropyLoss()
loss_log = []
acc_log = []
for e in range(50):
for i in range(0, x.shape[0], 100):
x_mini = x[i:i + 100]
y_mini = y[i:i + 100]
optimizer.zero_grad()
net_out = net(Variable(x_mini))
loss = loss_func(net_out, Variable(y_mini))
loss.backward()
optimizer.step()
if i % 1000 == 0:
loss_log.append(loss.item())
acc_log.append(net.evaluate(torch.max(net(Variable(test_x[:500])).data, 1)[1], test_y[:500]))
print('Epoch: {} - Loss: {:.6f}'.format(e + 1, loss.item()))
plt.figure(figsize=(10,8))
plt.plot(loss_log[2:])
plt.plot(acc_log)
plt.plot(np.ones(len(acc_log)), linestyle='dashed')
plt.show()
predictions = net(Variable(test_x))
net.test(torch.max(predictions.data, 1)[1], test_y)
```
# Bootstrap Resampling
Resampling techniques are among the inferential methods that let us quantify the degree of confidence we can place in a statistic, and assess how accurate our conclusions about the population parameters are.
These techniques have the advantage of not requiring normally distributed data, very large samples, or complicated formulas. They also often yield more accurate results than other methods.
The bootstrap is a mechanism based on resampling the data within a random sample, designed to approximate the precision of an estimator.
The method works as follows: given a random sample with $n$ observations, we build $B$ "bootstrap samples" of the same size by drawing with replacement (that is, values may repeat).
For each of the $B$ new samples, we compute an estimate of the parameter of interest $\theta$.
The $B$ bootstrap estimates are then used to approximate the distribution of the estimator of the parameter.
This distribution is used to make further statistical inferences, such as estimating the standard error of $\theta$ or a confidence interval for it.
The confidence interval, computed from the sample data, is an interval estimated to contain some unknown value, such as the population parameter, with a given level of confidence. Here $\alpha$ denotes the confidence level: the probability that the interval contains the population parameter.
In this exercise we want to design a function that uses bootstrap resampling to estimate the variance of a random variable from a data sample. The 'sample' consists of the magnitudes of stars belonging to globular clusters, found in column number 6 (counting from zero) of the file 'cumulos_globulares.dat'.
First, to estimate the variance, we compute the sample variance.
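The recipe above can be sketched in a few lines of NumPy. This is a generic illustration on synthetic data (the sample, `B`, and the parameter of interest — here the mean — are placeholders, not the notebook's actual data), estimating the standard error of the sample mean:

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=5.0, scale=2.0, size=200)  # the original random sample (n observations)

B = 1000  # number of bootstrap samples
boot_means = np.empty(B)
for b in range(B):
    # draw a sample of the same size, with replacement
    resample = rng.choice(sample, size=sample.size, replace=True)
    boot_means[b] = resample.mean()  # estimate of the parameter of interest

# the spread of the B estimates approximates the standard error of the estimator
se_boot = boot_means.std(ddof=1)
```

For a normal sample the analytic standard error of the mean is $\sigma/\sqrt{n} = 2/\sqrt{200} \approx 0.14$, so `se_boot` should land near that value.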
```
from math import *
import numpy as np
import matplotlib.pyplot as plt
import random
import seaborn as sns
sns.set()
muestra = np.genfromtxt('cumulos_globulares.dat', usecols=6) # load the data file
muestra = muestra[~np.isnan(muestra)] # it contains NaNs, so keep only the numeric entries
n = len(muestra) # n is the sample size
xm = sum(muestra)/n # sample mean
s2 = sum((muestra - xm)**2)/(n - 1) # sample variance
print('Sample variance:', s2)
```
Next, we draw new samples to apply the bootstrap method and compute the confidence interval.
The function 'boot' draws 'B' new random samples of the same size as the original using 'np.random.choice'. For each sample it computes the sample variance and stores it in a list.
Below, the resulting distribution of the variance is plotted for visual inspection.
```
def boot(muestra, n, B=1000): # B = number of bootstrap samples
    var_mues = []
    for i in range(B):
        muestra_nueva = np.random.choice(muestra, size=n) # draw a random sample of size n, with replacement
        xm = sum(muestra_nueva)/n # sample mean
        s2 = sum((muestra_nueva - xm)**2)/(n - 1) # sample variance
        var_mues.append(s2)
    return var_mues
# Plot the histogram of the computed variances
var = boot(muestra, n) # sample variances of the bootstrap samples
plt.hist(var, color='gold')
plt.title('Sampling distribution of the variance')
plt.xlabel('$S^2$')
plt.ylabel('Absolute frequency')
plt.show()
```
Next, we want to compute the confidence interval of the variance estimator at a given confidence level $\alpha$. The confidence interval is defined by the values $(q_1, q_2)$ such that the area under the distribution curve enclosed between them equals $\alpha$.
Since the histogram of the variance shows a symmetric distribution, we require the confidence interval to be symmetric as well. The tails of the distribution (i.e., $S^2<q_1$ and $S^2>q_2$) therefore each hold an area under the curve of $\frac{1-\alpha}{2}$.
We then look for the values $q_1$ and $q_2$ that satisfy:
$$\frac{N(S^2<q_1)}{B}=\frac{1-\alpha}{2}$$
$$\frac{N(S^2>q_2)}{B}=\frac{1-\alpha}{2}$$
where $N()$ denotes the number of values of $S^2$ satisfying that condition.
Code to compute $q_1$:
```
def IC_q1(var, a): # a is the confidence level alpha
    var.sort() # sort the values from smallest to largest
    y = (1 - a) / 2 # target fraction in the lower tail
    for i in range(len(var)):
        t = (i + 1) / len(var) # fraction of values at or below var[i]
        if t > y:
            q1 = var[i]
            break
    return q1
```
Code to compute $q_2$:
```
def IC_q2(var, a):
    var.sort(reverse=True) # sort the values from largest to smallest
    y = (1 - a) / 2 # target fraction in the upper tail
    for i in range(len(var)):
        t = (i + 1) / len(var) # fraction of values at or above var[i]
        if t > y:
            q2 = var[i]
            break
    return q2
```
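For reference, the same symmetric (percentile) interval can be obtained directly with NumPy's quantile function — a minimal sketch, assuming `var` is the list of bootstrap variances and `a` the confidence level as above (the synthetic data below is only for a quick self-check):

```python
import numpy as np

def IC_quantile(var, a):
    """Symmetric percentile interval at confidence level a: tails of (1 - a)/2 on each side."""
    tail = (1 - a) / 2
    q1 = np.quantile(var, tail)      # lower bound
    q2 = np.quantile(var, 1 - tail)  # upper bound
    return q1, q2

# quick check on synthetic bootstrap variances
rng = np.random.default_rng(0)
fake_var = rng.normal(loc=2.0, scale=0.1, size=1000)
q1, q2 = IC_quantile(fake_var, a=0.95)
```

By construction, roughly 2.5% of the bootstrap values fall below `q1` and 2.5% above `q2`, matching the counting conditions above.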
As an example, we take $\alpha$ = 0.95 and 0.9 and report the final value obtained for the variance together with its confidence interval.
```
q1 = IC_q1(var, a=0.95)
print('Value of q1 =', q1)
q2 = IC_q2(var, a=0.95)
print('Value of q2 =', q2)
print('The estimated variance is', s2, 'with a confidence interval of (', q1, ',', q2, ').')
q1 = IC_q1(var, a=0.9)
print('Value of q1 =', q1)
q2 = IC_q2(var, a=0.9)
print('Value of q2 =', q2)
print('The estimated variance is', s2, 'with a confidence interval of (', q1, ',', q2, ').')
```
## Conclusions
Bootstrap resampling lets us estimate the variance of a random variable, and the uncertainty of that estimate, without any knowledge of the underlying distribution. It also yields a confidence interval for a given value of $\alpha$ by computing the lower and upper limits of the interval.
The distribution of the variance is bell-shaped and centered on the estimated sample variance, so the confidence interval is symmetric.
The last examples also show that as $\alpha$ decreases, the confidence interval shrinks as well.
# Table of Contents
<p><div class="lev1 toc-item"><a href="#Initialization" data-toc-modified-id="Initialization-1"><span class="toc-item-num">1 </span>Initialization</a></div><div class="lev1 toc-item"><a href="#Load-Data" data-toc-modified-id="Load-Data-2"><span class="toc-item-num">2 </span>Load Data</a></div><div class="lev1 toc-item"><a href="#Raw-KMeans" data-toc-modified-id="Raw-KMeans-3"><span class="toc-item-num">3 </span>Raw KMeans</a></div><div class="lev2 toc-item"><a href="#Averaging" data-toc-modified-id="Averaging-31"><span class="toc-item-num">3.1 </span>Averaging</a></div><div class="lev1 toc-item"><a href="#Testing-All-Columns-Combinations" data-toc-modified-id="Testing-All-Columns-Combinations-4"><span class="toc-item-num">4 </span>Testing All Columns Combinations</a></div><div class="lev1 toc-item"><a href="#Manual" data-toc-modified-id="Manual-5"><span class="toc-item-num">5 </span>Manual</a></div><div class="lev2 toc-item"><a href="#Creating-Dataset" data-toc-modified-id="Creating-Dataset-51"><span class="toc-item-num">5.1 </span>Creating Dataset</a></div><div class="lev2 toc-item"><a href="#Final-Result" data-toc-modified-id="Final-Result-52"><span class="toc-item-num">5.2 </span>Final Result</a></div><div class="lev2 toc-item"><a href="#Evolution-of-Centroids" data-toc-modified-id="Evolution-of-Centroids-53"><span class="toc-item-num">5.3 </span>Evolution of Centroids</a></div><div class="lev1 toc-item"><a href="#Compare-Manual-with-Sci-Kit-Learn" data-toc-modified-id="Compare-Manual-with-Sci-Kit-Learn-6"><span class="toc-item-num">6 </span>Compare Manual with Sci Kit Learn</a></div><div class="lev2 toc-item"><a href="#Comparing-Speed" data-toc-modified-id="Comparing-Speed-61"><span class="toc-item-num">6.1 </span>Comparing Speed</a></div><div class="lev3 toc-item"><a href="#Sci-Kit-Learn" data-toc-modified-id="Sci-Kit-Learn-611"><span class="toc-item-num">6.1.1 </span>Sci-Kit Learn</a></div><div class="lev3 toc-item"><a href="#Manual" 
data-toc-modified-id="Manual-612"><span class="toc-item-num">6.1.2 </span>Manual</a></div><div class="lev2 toc-item"><a href="#Comparing-Accuracy" data-toc-modified-id="Comparing-Accuracy-62"><span class="toc-item-num">6.2 </span>Comparing Accuracy</a></div>
# Initialization
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn import model_selection, preprocessing
from itertools import combinations, chain
from matplotlib import animation, rc
import matplotlib as mpl
from IPython.display import HTML
from collections import defaultdict
from random import shuffle
%matplotlib inline
mpl.rcParams['figure.figsize'] = (9,9)
rc('animation', html='html5')
```
# Load Data
```
df = pd.read_excel('../data/titanic.xls')
df = df.fillna(0)
df.tail()
```
# Raw KMeans
```
dropped_columns = ['name', 'survived', 'body', 'ticket', 'home.dest', 'cabin']
text_columns = [col for col in df.columns if df[col].dtype not in [np.int64, np.float64]]
dummy_columns = [col for col in text_columns if col not in dropped_columns]
X_df = df.drop(dropped_columns, 1)
X_df = pd.get_dummies(X_df, columns=dummy_columns, drop_first=True)
X = np.array(X_df).astype(float)
X = preprocessing.scale(X)
y = np.array(df['survived']).reshape(-1, 1)
def KMeans_process(X, y, call=KMeans):
clf = call(n_clusters=2)
clf.fit(X)
correct = 0
for x_i, y_i in zip(X, y):
predict_me = np.array(x_i).reshape(1, -1)
prediction = clf.predict(predict_me)
if prediction == y_i:
correct += 1
acc = correct / len(X)
    return acc if acc > 0.5 else 1 - acc
acc = KMeans_process(X, y)
print(f'One Shot Accuracy: {round(acc * 100, 2)}%')
```
## Averaging
```
total = 0
n = 10
for i in range(n):
    total += KMeans_process(X, y)
print(f'Average Accuracy: {round(total / n * 100, 2)}%')
```
# Testing All Columns Combinations
```
a = (combinations(df.columns, i) for i in range(len(df.columns)))
all_posibilities = chain.from_iterable(a)
acc_max = 0
best_columns = []
for dropped_columns in all_posibilities:
text_columns = [col for col in df.columns if df[col].dtype not in [np.int64, np.float64]]
dummy_columns = [col for col in text_columns if col not in dropped_columns]
X_df = df.drop(list(dropped_columns), 1)
X_df = pd.get_dummies(X_df, columns=dummy_columns, drop_first=True)
X = np.array(X_df).astype(float)
X = preprocessing.scale(X)
acc = KMeans_process(X, y)
if acc > acc_max:
acc_max = acc
best_columns = dropped_columns
print(acc_max, best_columns, len(X_df.columns))
acc_max, best_columns
```
# Manual
## Creating Dataset
```
X = np.array([[1, 2],
[1.5, 1.8],
[5, 8 ],
[8, 8],
[1, 0.6],
[9,11]])
plt.scatter(X[:,0], X[:,1], s=150)
plt.show()
colors = 10*["g","r","c","b","k"]
class K_Means:
def __init__(self, n_clusters=2, tol=0.001, max_iter=300):
self.k = n_clusters
self.tol = tol
self.max_iter = max_iter
self.steps = []
def fit(self, data):
self.centroids = {key:data[i] for key, i in enumerate(np.random.randint(0, len(data), size=self.k))}
for i in range(self.max_iter):
self.classification = defaultdict(list)
for featureset in data:
distances = {np.linalg.norm(featureset - coord):centroid
for centroid, coord in self.centroids.items()}
classification = distances[min(distances)]
self.classification[classification].append(featureset)
prev_centroids = dict(self.centroids)
for class_, points in self.classification.items():
self.centroids[class_] = np.average(points, axis=0)
optimized = True
for old, new in zip(prev_centroids.values(), self.centroids.values()):
if np.sum(abs(new - old) / old * 100) > self.tol:
optimized = False
break
self.steps.append((prev_centroids, self.classification))
if optimized:
break
def predict(self, data):
distances = {np.linalg.norm(data - centroid):class_ for class_, centroid in self.centroids.items()}
return distances[min(distances)]
def visualize(self):
if len(list(self.classification.values())[0][0]) != 2:
print('Your data have to be 2 Dimensional to be plotted')
return
fig, ax = plt.subplots()
for centroid in self.centroids.values():
x, y = centroid
ax.scatter(x, y, marker='x', color='k', s=200)
for class_, points in self.classification.items():
color = colors[class_]
points = np.array(points)
x, y = points.T
ax.scatter(x, y, marker='o', color=color, s=150)
```
## Final Result
```
clf = K_Means()
clf.fit(X)
clf.visualize()
clf.predict([8,9])
```
## Evolution of Centroids
```
def animate(i, steps):
ax.cla()
centroids, classification = steps[i]
ax.set_xlim(x_min - x_margin, x_max + x_margin)
ax.set_ylim(y_min - y_margin, y_max + y_margin)
for centroid in centroids.values():
x, y = centroid
ax.scatter(x, y, marker='x', color='k', s=200)
for class_, points in classification.items():
color = colors[class_]
points = np.array(points)
x, y = points.T
ax.scatter(x, y, marker='o', color=color, s=150)
return []
fig, ax = plt.subplots()
clf = K_Means()
clf.fit(X)
xs = X[:, 0]
ys = X[:, 1]
x_max, x_min = max(xs), min(xs)
x_margin = 0.1 * x_max
y_max, y_min = max(ys), min(ys)
y_margin = 0.1 * y_max
steps = clf.steps
anim = animation.FuncAnimation(fig, animate, frames=len(steps), interval=750, blit=True, fargs=(steps,))
plt.close()
HTML(anim.to_html5_video())
```
# Compare Manual with Sci Kit Learn
```
df = pd.read_excel('../data/titanic.xls')
df = df.fillna(0)
dropped_columns = ['name', 'survived', 'body', 'ticket', 'home.dest', 'cabin']
text_columns = [col for col in df.columns if df[col].dtype not in [np.int64, np.float64]]
dummy_columns = [col for col in text_columns if col not in dropped_columns]
X_df = df.drop(dropped_columns, 1)
X_df = pd.get_dummies(X_df, columns=dummy_columns, drop_first=True)
X = np.array(X_df).astype(float)
X = preprocessing.scale(X)
y = np.array(df['survived']).reshape(-1, 1)
```
## Comparing Speed
### Sci-Kit Learn
```
%timeit KMeans_process(X, y)
```
### Manual
```
%timeit KMeans_process(X, y, call=K_Means )
```
## Comparing Accuracy
```
KMeans_process(X, y)
KMeans_process(X, y, call=K_Means)
acc_sklearn = KMeans_process(X, y)
acc_manual = KMeans_process(X, y, call=K_Means)
print(f'One Shot Accuracy Manual KMeans: {round(acc_manual * 100, 2)}%')
print(f'One Shot Accuracy SKlearn KMeans: {round(acc_sklearn * 100, 2)}%')
better = 0
n = 20
for i in range(n):
acc_sklearn = KMeans_process(X, y)
acc_manual = KMeans_process(X, y, call=K_Means)
better += (acc_manual - acc_sklearn) / acc_sklearn
print(f'The manual KMeans was {round(better*100/n, 2)}% better than SKLearn')
```
<a href="https://colab.research.google.com/github/mella30/Deep-Learning-with-Tensorflow-2/blob/main/Course2-Customising_your_models_with_Tensorflow_2/week4_subclassing_custom_training.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import tensorflow as tf
print(tf.__version__)
```
# Model subclassing and custom training loops
## Coding tutorials
#### [1. Model subclassing](#coding_tutorial_1)
#### [2. Custom layers](#coding_tutorial_2)
#### [3. Automatic differentiation](#coding_tutorial_3)
#### [4. Custom training loops](#coding_tutorial_4)
#### [5. tf.function decorator](#coding_tutorial_5)
***
<a id="coding_tutorial_1"></a>
## Model subclassing
```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Dropout, Softmax, concatenate
```
#### Create a simple model using the model subclassing API
```
# Build the model
# One branch applies only dense_1; the other applies dense_2 followed by dense_3.
# The outputs of the two branches are then joined with concatenate.
class MyModel(Model):
def __init__(self):
super(MyModel, self).__init__()
self.dense_1 = Dense(64, activation='relu')
self.dense_2 = Dense(10)
self.dense_3 = Dense(5)
self.softmax = Softmax()
def call(self, inputs, training=True):
x = self.dense_1(inputs)
y1 = self.dense_2(inputs)
y2 = self.dense_3(y1)
concat = concatenate([x,y2])
return self.softmax(concat)
# Print the model summary
model = MyModel()
model(tf.random.uniform([1,10]))
model.summary()
```
***
<a id="coding_tutorial_2"></a>
## Custom layers
```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Layer, Softmax
```
#### Create custom layers
```
# Create a custom layer
class MyLayer(Layer):
def __init__(self, units, input_dim):
super(MyLayer, self).__init__()
self.w = self.add_weight(shape=(input_dim, units),
initializer='random_normal')
self.b = self.add_weight(shape=(units,),
initializer='zeros')
def call(self, inputs):
return tf.matmul(inputs, self.w)+self.b
dense_layer = MyLayer(3,5)
x = tf.ones((1,5))
print(dense_layer(x))
print(dense_layer.weights)
# Specify trainable weights (to freeze parts of the layers weights)
class MyLayerNontrainable(Layer):
def __init__(self, units, input_dim):
super(MyLayerNontrainable, self).__init__()
self.w = self.add_weight(shape=(input_dim, units),
initializer='random_normal',
trainable=False)
self.b = self.add_weight(shape=(units,),
initializer='zeros',
trainable=False)
def call(self, inputs):
return tf.matmul(inputs, self.w)+self.b
dense_layer_nontrainable = MyLayerNontrainable(3,5)
print('trainable weights:', len(dense_layer_nontrainable.trainable_weights))
print('non-trainable weights:', len(dense_layer_nontrainable.non_trainable_weights))
# Create a custom layer to accumulate means of output values
#
class MyLayerMean(Layer):
def __init__(self, units, input_dim):
super(MyLayerMean, self).__init__()
self.w = self.add_weight(shape=(input_dim, units),
initializer='random_normal')
self.b = self.add_weight(shape=(units,),
initializer='zeros')
# accumulate means of output values everytime it is called
self.sum_activation = tf.Variable(initial_value=tf.zeros((units,)),
trainable=False)
# counts the number of times the layer has been called
self.number_call = tf.Variable(initial_value=0,
trainable=False)
def call(self, inputs):
# activation of the outputs
activations = tf.matmul(inputs, self.w)+self.b
# update values (sum and number of calls, will keep their values across calls)
self.sum_activation.assign_add(tf.reduce_sum(activations, axis=0))
self.number_call.assign_add(inputs.shape[0])
return activations, self.sum_activation / tf.cast(self.number_call, tf.float32)
dense_layer_mean = MyLayerMean(3,5)
# Test the layer
y, activation_means = dense_layer_mean(tf.ones((1, 5)))
print(activation_means.numpy())
# nothing changes because weights and bias are not updated
y, activation_means = dense_layer_mean(tf.ones((1, 5)))
print(activation_means.numpy())
# Accumulating the mean or variance of the activations can be really useful e.g. for analyzing the propagation of signals in the network.
# Create a Dropout layer as a custom layer
class MyDropout(Layer):
def __init__(self, rate):
super(MyDropout, self).__init__()
self.rate = rate
def call(self, inputs):
# Define forward pass for dropout layer
return tf.nn.dropout(inputs, rate=self.rate)
```
#### Implement the custom layers into a model
```
# Build the model using custom layers with the model subclassing API
class MyModel(Model):
def __init__(self, units_1, input_dim_1, units_2, units_3):
super(MyModel, self).__init__()
# Define layers
self.layer_1 = MyLayer(units_1, input_dim_1)
self.dropout_1 = MyDropout(0.5)
self.layer_2 = MyLayer(units_2, units_1)
self.dropout_2 = MyDropout(0.5)
self.layer_3 = MyLayer(units_3, units_2)
self.softmax = Softmax()
def call(self, inputs):
# Define forward pass
x = self.layer_1(inputs)
x = tf.nn.relu(x) # call activations here, otherwise the layers will be linear
x = self.dropout_1(x)
x = self.layer_2(x)
x = tf.nn.relu(x)
x = self.dropout_2(x)
x = self.layer_3(x)
return self.softmax(x)
# Instantiate a model object
model = MyModel(64,10000,64,46)
print(model(tf.ones((1, 10000))))
model.summary()
```
***
<a id="coding_tutorial_3"></a>
## Automatic differentiation
```
import numpy as np
import matplotlib.pyplot as plt
```
#### Create synthetic data
```
# Create data from a noise contaminated linear model
def MakeNoisyData(m, b, n=20):
x = tf.random.uniform(shape=(n,))
noise = tf.random.normal(shape=(len(x),), stddev=0.1)
y = m * x + b + noise
return x, y
m=1
b=2
x_train, y_train = MakeNoisyData(m,b)
plt.plot(x_train, y_train, 'b.')
```
#### Define a linear regression model
```
from tensorflow.keras.layers import Layer
# Build a custom layer for the linear regression model
class LinearLayer(Layer):
def __init__(self):
super(LinearLayer, self).__init__()
self.m = self.add_weight(shape=(1,),
initializer='random_normal')
self.b = self.add_weight(shape=(1,),
initializer='zeros')
def call(self, inputs):
return self.m*inputs+self.b
linear_regression = LinearLayer()
print(linear_regression(x_train))
print(linear_regression.weights)
```
#### Define the loss function
```
# Define the mean squared error loss function
def SquaredError(y_pred, y_true):
return tf.reduce_mean(tf.square(y_pred - y_true))
starting_loss = SquaredError(linear_regression(x_train), y_train)
print("Starting loss", starting_loss.numpy())
```
#### Train and plot the model
```
# Implement a gradient descent training loop for the linear regression model
learning_rate = 0.05
steps = 25
for i in range(steps):
with tf.GradientTape() as tape:
predictions = linear_regression(x_train)
loss = SquaredError(predictions, y_train)
gradients = tape.gradient(loss, linear_regression.trainable_variables)
linear_regression.m.assign_sub(learning_rate * gradients[0])
    linear_regression.b.assign_sub(learning_rate * gradients[1])
print("Step %d, Loss %f" % (i, loss.numpy()))
# Plot the learned regression model
print("m:{}, trained m:{}".format(m,linear_regression.m.numpy()))
print("b:{}, trained b:{}".format(b,linear_regression.b.numpy()))
plt.plot(x_train, y_train, 'b.')
x_linear_regression=np.linspace(min(x_train), max(x_train),50)
plt.plot(x_linear_regression, linear_regression.m*x_linear_regression+linear_regression.b, 'r.')
```
***
<a id="coding_tutorial_4"></a>
## Custom training loops
```
import numpy as np
import matplotlib.pyplot as plt
import time
```
#### Build the model
```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Layer, Softmax
# Define the custom layers and model
class MyLayer(Layer):
def __init__(self, units):
super(MyLayer, self).__init__()
self.units = units # number of layer units (weights)
# add build method to the MyLayer class to avoid setting the input shape until the layer is called (flexible shapes)
# name the layer parts for simplify usage
def build(self, input_shape):
# layer weights
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
name='kernel')
# layer bias
self.b = self.add_weight(shape=(self.units,),
initializer='zeros',
name='bias')
def call(self, inputs):
return tf.matmul(inputs, self.w)+self.b
class MyDropout(Layer):
def __init__(self, rate):
super(MyDropout, self).__init__()
self.rate = rate
def call(self, inputs):
# Define forward pass for dropout layer
return tf.nn.dropout(inputs, rate=self.rate)
class MyModel(Model):
def __init__(self, units_1, units_2, units_3):
super(MyModel, self).__init__()
# Define layers
self.layer_1 = MyLayer(units_1)
self.dropout_1 = MyDropout(0.5)
self.layer_2 = MyLayer(units_2)
self.dropout_2 = MyDropout(0.5)
self.layer_3 = MyLayer(units_3)
self.softmax = Softmax()
def call(self, inputs):
# Define forward pass
x = self.layer_1(inputs)
x = tf.nn.relu(x) # call activations here, otherwise the layers will be linear
x = self.dropout_1(x)
x = self.layer_2(x)
x = tf.nn.relu(x)
x = self.dropout_2(x)
x = self.layer_3(x)
return self.softmax(x)
# Instantiate a model object
model = MyModel(64,64,46)
print(model(tf.ones((1, 10000))))
model.summary() # is the exact same as above
```
#### Load the reuters dataset and define the class_names
```
# Load the dataset
from tensorflow.keras.datasets import reuters
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
class_names = ['cocoa','grain','veg-oil','earn','acq','wheat','copper','housing','money-supply',
'coffee','sugar','trade','reserves','ship','cotton','carcass','crude','nat-gas',
'cpi','money-fx','interest','gnp','meal-feed','alum','oilseed','gold','tin',
'strategic-metal','livestock','retail','ipi','iron-steel','rubber','heat','jobs',
'lei','bop','zinc','orange','pet-chem','dlr','gas','silver','wpi','hog','lead']
# Print the class of the first sample
print("Label: {}".format(class_names[train_labels[0]]))
```
#### Get the dataset word index
```
# Load the Reuters word index
word_to_index = reuters.get_word_index()
invert_word_index = dict([(value, key) for (key, value) in word_to_index.items()])
text_news = ' '.join([invert_word_index.get(i - 3, '?') for i in train_data[0]])
# Print the first data example sentence
print(text_news)
```
#### Preprocess the data
```
# Define a function that encodes the data into a 'bag of words' representation
def bag_of_words(text_samples, elements=10000):
output = np.zeros((len(text_samples), elements))
for i, word in enumerate(text_samples):
output[i, word] = 1.
return output
x_train = bag_of_words(train_data)
x_test = bag_of_words(test_data)
print("Shape of x_train:", x_train.shape)
print("Shape of x_test:", x_test.shape)
```
#### Define the loss function and optimizer
```
# Define the categorical cross entropy loss and Adam optimizer
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
def loss(model, x, y, wd):
kernel_variables = []
for l in model.layers:
for w in l.weights:
if 'kernel' in w.name:
kernel_variables.append(w)
# weight decay penalty as regulizer
wd_penalty = wd * tf.reduce_sum([tf.reduce_sum(tf.square(k)) for k in kernel_variables])
y_ = model(x)
return loss_object(y_true=y, y_pred=y_) + wd_penalty
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
```
#### Train the model
```
# Define a function to compute the forward and backward pass
def grad(model, inputs, targets, wd):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets, wd)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
# Implement the training loop
from tensorflow.keras.utils import to_categorical
start_time = time.time()
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, train_labels))
train_dataset = train_dataset.batch(32)
# keep results for plotting
train_loss_results = []
train_accuracy_results = []
num_epochs = 10
weight_decay = 0.005
for epoch in range(num_epochs):
epoch_avg_loss = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.CategoricalAccuracy()
# training loop
for x, y in train_dataset:
# optimize model
loss_value, grads = grad(model, x, y, weight_decay)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# compute current loss
epoch_avg_loss(loss_value)
epoch_accuracy(to_categorical(y), model(x))
# end epoch
train_loss_results.append(epoch_avg_loss.result())
train_accuracy_results.append(epoch_accuracy.result())
print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_avg_loss.result(), epoch_accuracy.result()))
print("Duration :{:.3f}".format(time.time() - start_time))
```
#### Evaluate the model
```
# Create a Dataset object for the test set
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, test_labels))
test_dataset = test_dataset.batch(32)
# Collect average loss and accuracy
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.CategoricalAccuracy()
# Loop over the test set and print scores
from tensorflow.keras.utils import to_categorical
for x, y in test_dataset:
# Optimize the model
loss_value = loss(model, x, y, weight_decay)
# Compute current loss
epoch_loss_avg(loss_value)
# Compare predicted label to actual label
epoch_accuracy(to_categorical(y), model(x))
print("Test loss: {:.3f}".format(epoch_loss_avg.result().numpy()))
print("Test accuracy: {:.3%}".format(epoch_accuracy.result().numpy()))
```
#### Plot the learning curves
```
# Plot the training loss and accuracy
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('Training Metrics')
axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
```
#### Predict from the model
```
# Get the model prediction for an example input
predicted_label = np.argmax(model(x_train[np.newaxis,0]),axis=1)[0]
print("Prediction: {}".format(class_names[predicted_label]))
print(" Label: {}".format(class_names[train_labels[0]]))
```
***
<a id="coding_tutorial_5"></a>
## tf.function decorator
```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Layer, Softmax
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import reuters
import numpy as np
import matplotlib.pyplot as plt
import time
```
#### Build the model
```
# Initialize a new model
model = MyModel(64,64,46)
print(model(tf.ones((1, 10000))))
model.summary() # is the exact same as above
```
#### Redefine the grad function using the @tf.function decorator
```
# Use the @tf.function decorator
@tf.function
def grad(model, inputs, targets, wd):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets, wd)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
```
#### Train the model
```
# Re-run the training loop
from tensorflow.keras.utils import to_categorical
start_time = time.time()
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, train_labels))
train_dataset = train_dataset.batch(32)
# keep results for plotting
train_loss_results = []
train_accuracy_results = []
num_epochs = 10
weight_decay = 0.005
for epoch in range(num_epochs):
epoch_avg_loss = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.CategoricalAccuracy()
# training loop
for x, y in train_dataset:
# optimize model
loss_value, grads = grad(model, x, y, weight_decay)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# compute current loss
epoch_avg_loss(loss_value)
epoch_accuracy(to_categorical(y), model(x))
# end epoch
train_loss_results.append(epoch_avg_loss.result())
train_accuracy_results.append(epoch_accuracy.result())
print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_avg_loss.result(), epoch_accuracy.result()))
print("Duration :{:.3f}".format(time.time() - start_time))
```
#### Print the autograph code
```
# Use tf.autograph.to_code to see the generated code
print(tf.autograph.to_code(grad.python_function))
```
# Data Bootcamp: Demography
We love demography, specifically the dynamics of population growth and decline. You can drill down seemingly without end, as this [terrific graphic](http://www.bloomberg.com/graphics/dataview/how-americans-die/) about causes of death suggests.
We take a look here at the UN's [population data](http://esa.un.org/unpd/wpp/Download/Standard/Population/): the age distribution of the population, life expectancy, fertility (the word we use for births), and mortality (deaths). Explore the website, it's filled with interesting data. There are other sources that cover longer time periods, and for some countries you can get detailed data on specific things (causes of death, for example).
We use a number of countries as examples, but Japan and China are the most striking. The code is written so that the country is easily changed.
This IPython notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course [Data Bootcamp](http://databootcamp.nyuecon.com/).
## Preliminaries
Import statements and a date check for future reference.
```
# import packages
import pandas as pd # data management
import matplotlib.pyplot as plt # graphics
import matplotlib as mpl # graphics parameters
import numpy as np # numerical calculations
# IPython command, puts plots in notebook
%matplotlib inline
# check Python version
import datetime as dt
import sys
print('Today is', dt.date.today())
print('What version of Python are we running? \n', sys.version, sep='')
```
## Population by age
We have both "estimates" of the past (1950-2015) and "projections" of the future (out to 2100). Here we focus on the latter, specifically what the UN refers to as the medium variant: their middle-of-the-road projection. It gives us a sense of how Japan's population might change over the next century.
It takes a few seconds to read the data.
What are the numbers? Thousands of people in various 5-year age categories.
```
url1 = 'http://esa.un.org/unpd/wpp/DVD/Files/'
url2 = '1_Indicators%20(Standard)/EXCEL_FILES/1_Population/'
url3 = 'WPP2015_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.XLS'
url = url1 + url2 + url3
cols = [2, 5] + list(range(6,28))
#est = pd.read_excel(url, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])
prj = pd.read_excel(url, sheetname=1, skiprows=16, parse_cols=cols, na_values=['…'])
prj.head(3)[list(range(6))]
# rename some variables
pop = prj
names = list(pop)
pop = pop.rename(columns={names[0]: 'Country',
names[1]: 'Year'})
# select country and years
country = ['Japan']
years = [2015, 2055, 2095]
pop = pop[pop['Country'].isin(country) & pop['Year'].isin(years)]
pop = pop.drop(['Country'], axis=1)
# set index = Year
# divide by 1000 to convert numbers from thousands to millions
pop = pop.set_index('Year')/1000
pop.head()[list(range(8))]
# transpose (T) so that index = age
pop = pop.T
pop.head(3)
ax = pop.plot(kind='bar',
color='blue',
alpha=0.5, subplots=True, sharey=True, figsize=(8,6))
for axnum in range(len(ax)):
    ax[axnum].set_title('')
    ax[axnum].set_ylabel('Millions')
ax[0].set_title('Population by age', fontsize=14, loc='left')
```
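The set-index, rescale, and transpose steps above can be sketched on a toy frame (made-up numbers, not the UN data), assuming pandas:

```python
import pandas as pd

# Toy frame shaped like the UN sheet: rows are years, columns are age groups,
# values in thousands of people (made-up numbers).
toy = pd.DataFrame({'Year': [2015, 2055], '0-4': [5000, 4000], '5-9': [5200, 4100]})

# Index by Year and convert thousands to millions in one step.
toy = toy.set_index('Year') / 1000

# Transpose so age groups become the index, ready for a bar chart by age.
toy = toy.T
print(toy)
```

The `.T` at the end is what lets `plot(kind='bar')` put age groups on the x-axis with one panel per year.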
**Exercise.** What do you see here? What else would you like to know?
**Exercise.** Adapt the preceding code to do the same thing for China, or some other country that sparks your interest.
## Fertility: aka birth rates
We might wonder, why is the population falling in Japan? Other countries? Well, one reason is that birth rates are falling. Demographers call this fertility. Here we look at the fertility using the same [UN source](http://esa.un.org/unpd/wpp/Download/Standard/Fertility/) as the previous example. We look at two variables: total fertility and fertility by age of mother. In both cases we explore the numbers to date, but the same files contain projections of future fertility.
```
# fertility overall
uft = 'http://esa.un.org/unpd/wpp/DVD/Files/'
uft += '1_Indicators%20(Standard)/EXCEL_FILES/'
uft += '2_Fertility/WPP2015_FERT_F04_TOTAL_FERTILITY.XLS'
cols = [2] + list(range(5,18))
ftot = pd.read_excel(uft, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])
ftot.head(3)[list(range(6))]
# rename some variables
names = list(ftot)
f = ftot.rename(columns={names[0]: 'Country'})
# select countries
countries = ['China', 'Japan', 'Germany', 'United States of America']
f = f[f['Country'].isin(countries)]
# shape
f = f.set_index('Country').T
f = f.rename(columns={'United States of America': 'United States'})
f.tail(3)
fig, ax = plt.subplots()
f.plot(ax=ax, kind='line', alpha=0.5, lw=3, figsize=(6.5, 4))
ax.set_title('Fertility (births per woman, lifetime)', fontsize=14, loc='left')
ax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)
ax.set_ylim(ymin=0)
ax.hlines(2.1, -1, 13, linestyles='dashed')
ax.text(8.5, 2.4, 'Replacement = 2.1')
```
**Exercise.** What do you see here? What else would you like to know?
**Exercise.** Add Canada to the figure. How does it compare to the others? What other countries would you be interested in?
## Life expectancy
One of the bottom-line summary numbers for mortality is life expectancy: if mortality rates fall, people live longer, on average. Here we look at life expectancy at birth. There are also numbers for life expectancy given that you live to some specific age; for example, life expectancy given that you survive to age 60.
```
# life expectancy at birth, both sexes
ule = 'http://esa.un.org/unpd/wpp/DVD/Files/1_Indicators%20(Standard)/EXCEL_FILES/3_Mortality/'
ule += 'WPP2015_MORT_F07_1_LIFE_EXPECTANCY_0_BOTH_SEXES.XLS'
cols = [2] + list(range(5,34))
le = pd.read_excel(ule, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])
le.head(3)[list(range(10))]
# rename some variables
oldname = list(le)[0]
l = le.rename(columns={oldname: 'Country'})
l.head(3)[list(range(8))]
# select countries
countries = ['China', 'Japan', 'Germany', 'United States of America']
l = l[l['Country'].isin(countries)]
# shape
l = l.set_index('Country').T
l = l.rename(columns={'United States of America': 'United States'})
l.tail()
fig, ax = plt.subplots()
l.plot(ax=ax, kind='line', alpha=0.5, lw=3, figsize=(6, 8), grid=True)
ax.set_title('Life expectancy at birth', fontsize=14, loc='left')
ax.set_ylabel('Life expectancy in years')
ax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)
ax.set_ylim(ymin=0)
```
**Exercise.** What other countries would you like to see? Can you add them? The code below generates a list.
```
countries = le.rename(columns={oldname: 'Country'})['Country']
```
**Exercise.** Why do you think the US is falling behind? What would you look at to verify your conjecture?
## Mortality: aka death rates
Another thing that affects the age distribution of the population is the mortality rate: if mortality rates fall, people live longer, on average. Here we look at how mortality rates have changed over the past 60+ years. Roughly speaking, people live an extra five years every generation, which is a lot. Some of you will live to be a hundred. (Look at the 100+ age category over time for Japan.)
The experts look at mortality rates by age. The UN has a [whole page](http://esa.un.org/unpd/wpp/Download/Standard/Mortality/) devoted to mortality numbers. We take 5-year mortality rates from the Abridged Life Table.
The numbers are the proportion of people in a given age group who die over a 5-year period: 0.1 means that 90 percent of an age group is still alive in five years.
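To make the interpretation concrete, here is a small sketch (plain Python, hypothetical rates) that chains 5-year survival probabilities:

```python
# Hypothetical 5-year mortality rates; 0.1 means 90% of the group survives the interval.
def survival_after(rates):
    """Fraction of a cohort still alive after successive 5-year intervals."""
    alive = 1.0
    for rate in rates:
        alive *= 1.0 - rate
    return alive

print(survival_after([0.1]))            # one interval: 90% survive
print(survival_after([0.1, 0.1, 0.1]))  # three intervals: 0.9**3 ≈ 0.729
```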
```
# mortality overall
url = 'http://esa.un.org/unpd/wpp/DVD/Files/'
url += '1_Indicators%20(Standard)/EXCEL_FILES/3_Mortality/'
url += 'WPP2015_MORT_F17_1_ABRIDGED_LIFE_TABLE_BOTH_SEXES.XLS'
cols = [2, 5, 6, 7, 9]
mort = pd.read_excel(url, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])
mort.tail(3)
# change names
names = list(mort)
m = mort.rename(columns={names[0]: 'Country', names[2]: 'Age', names[3]: 'Interval', names[4]: 'Mortality'})
m.head(3)
```
**Comment.** At this point, we need to pivot the data. That's not something we've done before, so take it as simply something we can do easily if we have to. We're going to do this twice to produce different graphs:
* Compare countries for the same period.
* Compare different periods for the same country.
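As a minimal sketch of what `pivot` does here (toy numbers, not the UN data): long-format rows become a wide table with one column per group.

```python
import pandas as pd

# Toy long-format table shaped like the mortality data (made-up numbers).
toy = pd.DataFrame({'Country': ['Japan', 'Japan', 'China', 'China'],
                    'Age': [0, 5, 0, 5],
                    'Mortality': [0.002, 0.001, 0.015, 0.004]})

# pivot: Age values become the row index, Country values become the columns.
wide = toy.pivot(index='Age', columns='Country', values='Mortality')
print(wide)
```

The same call with `columns='Period'` gives the second graph's layout.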
```
# compare countries for most recent period
countries = ['China', 'Japan', 'Germany', 'United States of America']
mt = m[m['Country'].isin(countries) & m['Interval'].isin([5]) & m['Period'].isin(['2010-2015'])]
print('Dimensions:', mt.shape)
mp = mt.pivot(index='Age', columns='Country', values='Mortality')
mp.head(3)
fig, ax = plt.subplots()
mp.plot(ax=ax, kind='line', alpha=0.5, linewidth=3,
# logy=True,
figsize=(6, 4))
ax.set_title('Mortality by age', fontsize=14, loc='left')
ax.set_ylabel('Mortality Rate (log scale)')
ax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)
```
**Exercises.**
* What country's old people have the lowest mortality?
* What do you see here for the US? Why is our life expectancy shorter?
* What other countries would you like to see? Can you adapt the code to show them?
* Anything else cross your mind?
```
# compare periods for the one country -- countries[0] is China
mt = m[m['Country'].isin([countries[0]]) & m['Interval'].isin([5])]
print('Dimensions:', mt.shape)
mp = mt.pivot(index='Age', columns='Period', values='Mortality')
mp = mp.iloc[:, [0, 6, 12]]  # first, middle, and most recent periods
mp.head(3)
fig, ax = plt.subplots()
mp.plot(ax=ax, kind='line', alpha=0.5, linewidth=3,
# logy=True,
figsize=(6, 4))
ax.set_title('Mortality over time', fontsize=14, loc='left')
ax.set_ylabel('Mortality Rate (log scale)')
ax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)
```
**Exercise.** What do you see? What else would you like to know?
**Exercise.** Repeat this graph for the United States. How does it compare?
# Model understanding and interpretability
In this colab, we will:
- Learn how to interpret model results and reason about the features
- Visualize the model results
```
import time
# We will use some np and pandas for dealing with input data.
import numpy as np
import pandas as pd
# And of course, we need tensorflow.
import tensorflow as tf
from matplotlib import pyplot as plt
from IPython.display import clear_output
tf.__version__
```
Below we demonstrate both *local* and *global* model interpretability for gradient boosted trees.
Local interpretability refers to an understanding of a model’s predictions at the individual example level, while global interpretability refers to an understanding of the model as a whole.
For local interpretability, we show how to create and visualize per-instance contributions using the technique outlined in [Palczewska et al](https://arxiv.org/pdf/1312.1121.pdf) and by Saabas in [Interpreting Random Forests](http://blog.datadive.net/interpreting-random-forests/) (this method is also available in scikit-learn for Random Forests in the [`treeinterpreter`](https://github.com/andosa/treeinterpreter) package). To distinguish this from feature importances, we refer to these values as directional feature contributions (DFCs).
For global interpretability we show how to retrieve and visualize gain-based feature importances, [permutation feature importances](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) and also show aggregated DFCs.
# Setup
## Load dataset
We will be using the Titanic dataset, where the goal is to predict passenger survival given characteristics such as gender, age, class, etc.
```
tf.logging.set_verbosity(tf.logging.ERROR)
tf.set_random_seed(123)
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
# Feature columns.
fcol = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
    return fcol.indicator_column(
        fcol.categorical_column_with_vocabulary_list(feature_name, vocab))
fc = []
for feature_name in CATEGORICAL_COLUMNS:
    # Need to one-hot encode categorical features.
    vocabulary = dftrain[feature_name].unique()
    fc.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
    fc.append(fcol.numeric_column(feature_name, dtype=tf.float32))
# Input functions.
def make_input_fn(X, y, n_epochs=None):
    def input_fn():
        dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
        # For training, cycle through the dataset as many times as needed (n_epochs=None).
        dataset = (dataset
                   .repeat(n_epochs)
                   .batch(len(y)))  # Use the entire dataset, since it is so small.
        return dataset
    return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1)
```
# Interpret model
## Train and evaluate the model
```
params = {
'n_trees': 50,
'max_depth': 3,
'n_batches_per_layer': 1,
# You must enable center_bias = True to get DFCs. This will force the model to
# make an initial prediction before using any features (e.g. use the mean of
# the training labels for regression or log odds for classification when
# using cross entropy loss).
'center_bias': True
}
est = tf.estimator.BoostedTreesClassifier(fc, **params)
# Train model.
est.train(train_input_fn)
# Evaluation.
results = est.evaluate(eval_input_fn)
clear_output()
pd.Series(results).to_frame()
```
## Local interpretability
Next you will output the directional feature contributions (DFCs) to explain individual predictions using the approach outlined in [Palczewska et al](https://arxiv.org/pdf/1312.1121.pdf) and by Saabas in [Interpreting Random Forests](http://blog.datadive.net/interpreting-random-forests/) (this method is also available in scikit-learn for Random Forests in the [`treeinterpreter`](https://github.com/andosa/treeinterpreter) package). The DFCs are generated with:
`pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))`
(Note: The method is named experimental as we may modify the API before dropping the experimental prefix.)
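The defining property of DFCs is that they decompose a prediction additively: the model's bias (its prediction before seeing any feature) plus the per-feature contributions recovers the final output. A toy numeric sketch (made-up bias and contributions, assuming numpy):

```python
import numpy as np

# Made-up bias and per-feature contributions for one prediction.
bias = 0.32                            # model output before seeing any features
dfcs = np.array([0.25, -0.10, 0.08])   # directional feature contributions
prediction = bias + dfcs.sum()         # contributions add up to the final probability
print(prediction)  # ≈ 0.55
```

This additivity is exactly what the `np.testing.assert_almost_equal` check in the next cell verifies on the real model.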
```
import matplotlib.pyplot as plt
import seaborn as sns
sns_colors = sns.color_palette('colorblind')
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))
def clean_feature_names(df):
    """Boilerplate code to clean up feature names -- this is unneeded in TF 2.0."""
    df.columns = [v.split(':')[0].split('_indi')[0] for v in df.columns.tolist()]
    df = df.T.groupby(level=0).sum().T
    return df
# Create DFC Pandas dataframe.
labels = y_eval.values
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
df_dfc.columns = est._names_for_feature_id
df_dfc = clean_feature_names(df_dfc)
df_dfc.describe()
# Sum of DFCs + bias == probability.
bias = pred_dicts[0]['bias']
dfc_prob = df_dfc.sum(axis=1) + bias
np.testing.assert_almost_equal(dfc_prob.values,
probs.values)
```
Plot results
```
import seaborn as sns # Make plotting nicer.
sns_colors = sns.color_palette('colorblind')
def plot_dfcs(example_id):
    label, prob = labels[example_id], probs[example_id]
    example = df_dfc.iloc[example_id]  # Choose the example from the evaluation set.
    TOP_N = 8  # View top 8 features.
    sorted_ix = example.abs().sort_values()[-TOP_N:].index
    ax = example[sorted_ix].plot(kind='barh', color='g', figsize=(10,5))
    ax.grid(False, axis='y')
    plt.title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(example_id, prob, label))
    plt.xlabel('Contribution to predicted probability')
ID = 102
plot_dfcs(ID)
```
**???** How would you explain the above plot in plain English?
### Prettier plotting
The following version color-codes the bars based on directionality and adds the feature values to the figure. Please do not worry about the details of the plotting code :)
```
def plot_example_pretty(example):
    """Boilerplate code for better plotting :)"""
    def _get_color(value):
        """To make positive DFCs plot green, negative DFCs plot red."""
        green, red = sns.color_palette()[2:4]
        if value >= 0: return green
        return red
    def _add_feature_values(feature_values, ax):
        """Display feature's values on left of plot."""
        x_coord = ax.get_xlim()[0]
        OFFSET = 0.15
        for y_coord, (feat_name, feat_val) in enumerate(feature_values.items()):
            t = plt.text(x_coord, y_coord - OFFSET, '{}'.format(feat_val), size=12)
            t.set_bbox(dict(facecolor='white', alpha=0.5))
        from matplotlib.font_manager import FontProperties
        font = FontProperties()
        font.set_weight('bold')
        t = plt.text(x_coord, y_coord + 1 - OFFSET, 'feature\nvalue',
                     fontproperties=font, size=12)
    TOP_N = 8  # View top 8 features.
    sorted_ix = example.abs().sort_values()[-TOP_N:].index  # Sort by magnitude.
    example = example[sorted_ix]
    colors = example.map(_get_color).tolist()
    ax = example.to_frame().plot(kind='barh',
                                 color=[colors],
                                 legend=None,
                                 alpha=0.75,
                                 figsize=(10,6))
    ax.grid(False, axis='y')
    ax.set_yticklabels(ax.get_yticklabels(), size=14)
    _add_feature_values(dfeval.iloc[ID].loc[sorted_ix], ax)
    ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
    ax.set_xlabel('Contribution to predicted probability', size=14)
    plt.show()
    return ax
# Plot results.
ID = 102
example = df_dfc.iloc[ID] # Choose ith example from evaluation set.
ax = plot_example_pretty(example)
```
## Global feature importances
1. Gain-based feature importances using `est.experimental_feature_importances`
2. Aggregate DFCs using `est.experimental_predict_with_explanations`
3. Permutation importances
Gain-based feature importances measure the loss change when splitting on a particular feature, while permutation feature importances are computed by evaluating model performance on the evaluation set by shuffling each feature one-by-one and attributing the change in model performance to the shuffled feature.
In general, permutation feature importances are preferred to gain-based feature importances, though both methods can be unreliable in situations where potential predictor variables vary in their scale of measurement or their number of categories and when features are correlated ([source](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-9-307)). Check out [this article](http://explained.ai/rf-importance/index.html) for an in-depth overview and great discussion on different feature importance types.
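A minimal sketch of the permutation idea (a toy model and made-up data, not the estimator API used in this notebook): shuffle one column, measure how much performance degrades, and attribute that degradation to the shuffled feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": the prediction uses feature 0 heavily and ignores feature 1.
def model_predict(X):
    return 3.0 * X[:, 0]

X = rng.normal(size=(500, 2))
y = model_predict(X)  # labels generated by the model itself, so base error is zero

def permutation_importance(predict, X, y, col, rng):
    """Increase in mean squared error after shuffling one column."""
    base_mse = np.mean((predict(X) - y) ** 2)
    X_perm = X.copy()
    X_perm[:, col] = rng.permutation(X_perm[:, col])
    return np.mean((predict(X_perm) - y) ** 2) - base_mse

imp_used = permutation_importance(model_predict, X, y, 0, rng)
imp_unused = permutation_importance(model_predict, X, y, 1, rng)
print(imp_used, imp_unused)  # shuffling the used feature hurts; the unused one does not
```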
### 1. Gain-based feature importances
```
features, importances = est.experimental_feature_importances(normalize=True)
df_imp = pd.DataFrame(importances, columns=['importances'], index=features)
# For plotting purposes. This is not needed in TF 2.0.
df_imp = clean_feature_names(df_imp.T).T.sort_values('importances', ascending=False)
# Visualize importances.
N = 8
ax = df_imp.iloc[0:N][::-1]\
.plot(kind='barh',
color=sns_colors[0],
title='Gain feature importances',
figsize=(10, 6))
ax.grid(False, axis='y')
plt.tight_layout()
```
**???** What does the x axis represent? -- A. It represents relative importance. Specifically, the average reduction in loss that occurs when a split occurs on that feature.
**???** Can we completely trust these results and the magnitudes? -- A. The results can be misleading because variables are correlated.
### 2. Average absolute DFCs
We can also average the absolute values of DFCs to understand impact at a global level.
```
# Plot.
dfc_mean = df_dfc.abs().mean()
sorted_ix = dfc_mean.abs().sort_values()[-8:].index # Average and sort by absolute.
ax = dfc_mean[sorted_ix].plot(kind='barh',
color=sns_colors[1],
title='Mean |directional feature contributions|',
figsize=(10, 6))
ax.grid(False, axis='y')
```
We can also see how DFCs vary as a feature value varies.
```
age = pd.Series(df_dfc.age.values, index=dfeval.age.values).sort_index()
sns.jointplot(age.index.values, age.values);
```
# Visualizing the model's prediction surface
Let's first simulate/create training data using the following formula:
$z = x e^{-x^2 - y^2}$
where $z$ is the dependent variable we are trying to predict and $x$ and $y$ are the features.
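Assuming numpy, the formula can be evaluated directly; the surface is zero at the origin and odd in $x$:

```python
import numpy as np

def z_surface(x, y):
    """z = x * exp(-x**2 - y**2)"""
    return x * np.exp(-x**2 - y**2)

print(z_surface(0.0, 0.0))   # zero at the origin
print(z_surface(1.0, 0.0))   # e**-1 ≈ 0.3679
```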
```
from numpy.random import uniform, seed
from matplotlib.mlab import griddata  # note: removed in Matplotlib >= 3.1; use scipy.interpolate.griddata on newer versions
# Create fake data
seed(0)
npts = 5000
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
# Prep data for training.
df = pd.DataFrame({'x': x, 'y': y, 'z': z})
xi = np.linspace(-2.0, 2.0, 200)
yi = np.linspace(-2.1, 2.1, 210)
xi, yi = np.meshgrid(xi, yi)
df_predict = pd.DataFrame({
'x' : xi.flatten(),
'y' : yi.flatten(),
})
predict_shape = xi.shape
def plot_contour(x, y, z, **kwargs):
    # Grid the data.
    plt.figure(figsize=(10, 8))
    # Contour the gridded data, plotting dots at the nonuniform data points.
    CS = plt.contour(x, y, z, 15, linewidths=0.5, colors='k')
    CS = plt.contourf(x, y, z, 15,
                      vmax=abs(zi).max(), vmin=-abs(zi).max(), cmap='RdBu_r')
    plt.colorbar()  # Draw colorbar.
    # Plot data points.
    plt.xlim(-2, 2)
    plt.ylim(-2, 2)
```
We can visualize our function:
```
zi = griddata(x, y, z, xi, yi, interp='linear')
plot_contour(xi, yi, zi)
plt.scatter(df.x, df.y, marker='.')
plt.title('Contour on training data')
plt.show()
def predict(est):
    """Predictions from a given estimator."""
    predict_input_fn = lambda: tf.data.Dataset.from_tensors(dict(df_predict))
    preds = np.array([p['predictions'][0] for p in est.predict(predict_input_fn)])
    return preds.reshape(predict_shape)
```
First let's try to fit a linear model to the data.
```
fc = [tf.feature_column.numeric_column('x'),
tf.feature_column.numeric_column('y')]
train_input_fn = make_input_fn(df, df.z)
est = tf.estimator.LinearRegressor(fc)
est.train(train_input_fn, max_steps=500);
plot_contour(xi, yi, predict(est))
```
Not very good at all...
**???** Why is the linear model not performing well for this problem? Can you think of how to improve it just using a linear model?
Next, let's fit a GBDT model to the data and try to understand what the model does.
```
for n_trees in [1, 2, 3, 10, 30, 50, 100, 200]:
    est = tf.estimator.BoostedTreesRegressor(fc,
                                             n_batches_per_layer=1,
                                             max_depth=4,
                                             n_trees=n_trees)
    est.train(train_input_fn)
    plot_contour(xi, yi, predict(est))
    plt.text(-1.8, 2.1, '# trees: {}'.format(n_trees), color='w', backgroundcolor='black', size=20)
```
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# **Exploratory Data Analysis**
### Setting Up Environment
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.style as style
import seaborn as sns
from scipy.stats import pointbiserialr
from scipy.stats import pearsonr
from scipy.stats import chi2_contingency
from sklearn.impute import SimpleImputer
plt.rcParams["figure.figsize"] = (15,8)
application_data_raw = pd.read_csv('application_data.csv', encoding = 'unicode_escape')
application_data_raw.info()
#application_data_raw.describe()
df = application_data_raw.copy()
```
### Data Cleaning
```
# drop the customer id column
df = df.drop(columns=['SK_ID_CURR'])
# remove invalid "XNA" placeholder values in gender column
df['CODE_GENDER'] = df['CODE_GENDER'].replace("XNA", np.nan)  # use np.nan: Series.replace with value=None is read as a fill method, not a null
# drop columns filled >25% with null values
num_missing_values = df.isnull().sum()
nulldf = round(num_missing_values/len(df)*100, 2)  # percent missing per column (0-100 scale)
cols_to_keep = nulldf[nulldf<=25].index.to_list()  # cutoff is 25 on the percent scale, not 0.25
df = df.loc[:, cols_to_keep] # 61 of 121 attributes were removed due to null values.
# impute remaining columns with null values
num_missing_values = df.isnull().sum()
missing_cols = num_missing_values[num_missing_values>0].index.tolist()
for col in missing_cols:
    imp_mean = SimpleImputer(strategy='most_frequent')
    imp_mean.fit(df[[col]])
    df[col] = imp_mean.transform(df[[col]]).ravel()
df.info()
```
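The null-percentage threshold logic above can be checked on a toy frame (made-up data, assuming pandas); note that because the percentages are on a 0-100 scale, the 25% cutoff is `25`:

```python
import numpy as np
import pandas as pd

# Toy frame (made-up data): column 'b' is 50% null, column 'a' has no nulls.
toy = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [1.0, np.nan, np.nan, 2.0]})

# Percent of missing values per column (0-100 scale).
null_pct = toy.isnull().sum() / len(toy) * 100

# Keep columns with at most 25% nulls; the cutoff is 25 on this scale, not 0.25.
cols_to_keep = null_pct[null_pct <= 25].index.to_list()
print(cols_to_keep)  # column 'b' (50% null) is dropped
```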
### Data Preprocessing
```
continuous_vars = ['CNT_CHILDREN', 'AMT_INCOME_TOTAL', 'AMT_CREDIT', 'AMT_ANNUITY', 'AMT_GOODS_PRICE', 'REGION_POPULATION_RELATIVE',
'DAYS_REGISTRATION', 'DAYS_ID_PUBLISH', 'CNT_FAM_MEMBERS', 'REGION_RATING_CLIENT', 'REGION_RATING_CLIENT_W_CITY',
'HOUR_APPR_PROCESS_START', 'EXT_SOURCE_2', 'DAYS_LAST_PHONE_CHANGE', 'YEARS_BIRTH', 'YEARS_EMPLOYED']
#categorical_variables = df.select_dtypes(include=["category"]).columns.tolist()
#len(categorical_variables)
categorical_vars = ['NAME_CONTRACT_TYPE', 'CODE_GENDER', 'FLAG_OWN_CAR', 'FLAG_OWN_REALTY', 'NAME_INCOME_TYPE','NAME_EDUCATION_TYPE',
'NAME_FAMILY_STATUS', 'NAME_HOUSING_TYPE', 'FLAG_MOBIL', 'FLAG_EMP_PHONE', 'FLAG_WORK_PHONE', 'FLAG_CONT_MOBILE', 'FLAG_PHONE',
'FLAG_EMAIL', 'WEEKDAY_APPR_PROCESS_START', 'REG_REGION_NOT_LIVE_REGION','REG_REGION_NOT_WORK_REGION',
'LIVE_REGION_NOT_WORK_REGION', 'REG_CITY_NOT_LIVE_CITY', 'REG_CITY_NOT_WORK_CITY', 'LIVE_CITY_NOT_WORK_CITY',
'ORGANIZATION_TYPE', 'FLAG_DOCUMENT_2', 'FLAG_DOCUMENT_3', 'FLAG_DOCUMENT_4', 'FLAG_DOCUMENT_5', 'FLAG_DOCUMENT_6',
'FLAG_DOCUMENT_7', 'FLAG_DOCUMENT_8', 'FLAG_DOCUMENT_9', 'FLAG_DOCUMENT_10', 'FLAG_DOCUMENT_11', 'FLAG_DOCUMENT_12',
'FLAG_DOCUMENT_13', 'FLAG_DOCUMENT_14', 'FLAG_DOCUMENT_15', 'FLAG_DOCUMENT_16', 'FLAG_DOCUMENT_17', 'FLAG_DOCUMENT_18',
'FLAG_DOCUMENT_19', 'FLAG_DOCUMENT_20', 'FLAG_DOCUMENT_21']
# plot to see distribution of categorical variables
n_cols = 4
fig, axes = plt.subplots(nrows=int(np.ceil(len(categorical_vars)/n_cols)),
ncols=n_cols,
figsize=(15,45))
for i in range(len(categorical_vars)):
    var = categorical_vars[i]
    dist = df[var].value_counts()
    labels = dist.index
    counts = dist.values
    ax = axes.flatten()[i]
    ax.bar(labels, counts)
    ax.tick_params(axis='x', labelrotation=90)
    ax.title.set_text(var)
plt.tight_layout()
plt.show()
# This gives us an idea about which features may already be more useful.
# Remove all FLAG_DOCUMENT features except FLAG_DOCUMENT_3: most applicants did not submit
# the other documents, so those flags are insignificant to the model.
vars_to_drop = ["FLAG_DOCUMENT_2"]
vars_to_drop += ["FLAG_DOCUMENT_{}".format(i) for i in range(4,22)]
# Unit conversions
df['AMT_INCOME_TOTAL'] = df['AMT_INCOME_TOTAL']/100000 # yearly income to be expressed in hundred thousands
df['YEARS_BIRTH'] = round((df['DAYS_BIRTH']*-1)/365).astype('int64') # days of birth changed to years of birth
df['YEARS_EMPLOYED'] = round((df['DAYS_EMPLOYED']*-1)/365).astype('int64') # days employed change to years employed
df.loc[df['YEARS_EMPLOYED']<0, 'YEARS_EMPLOYED'] = 0
df = df.drop(columns=['DAYS_BIRTH', 'DAYS_EMPLOYED'])
# Encoding categorical variables
def encode_cat(df, var_list):
    for var in var_list:
        df[var] = df[var].astype('category')
        d = dict(zip(df[var], df[var].cat.codes))
        df[var] = df[var].map(d)
        print(var + " Category Codes")
        print(d)
    return df
already_coded = ['FLAG_MOBIL', 'FLAG_EMP_PHONE', 'FLAG_WORK_PHONE', 'FLAG_CONT_MOBILE', 'FLAG_PHONE', 'FLAG_EMAIL', 'REG_REGION_NOT_LIVE_REGION',
'REG_REGION_NOT_WORK_REGION', 'LIVE_REGION_NOT_WORK_REGION', 'REG_CITY_NOT_LIVE_CITY', 'REG_CITY_NOT_WORK_CITY',
'LIVE_CITY_NOT_WORK_CITY', 'FLAG_DOCUMENT_2', 'FLAG_DOCUMENT_3', 'FLAG_DOCUMENT_4', 'FLAG_DOCUMENT_5', 'FLAG_DOCUMENT_6',
'FLAG_DOCUMENT_7', 'FLAG_DOCUMENT_8', 'FLAG_DOCUMENT_9', 'FLAG_DOCUMENT_10', 'FLAG_DOCUMENT_11', 'FLAG_DOCUMENT_12',
'FLAG_DOCUMENT_13', 'FLAG_DOCUMENT_14', 'FLAG_DOCUMENT_15', 'FLAG_DOCUMENT_16', 'FLAG_DOCUMENT_17', 'FLAG_DOCUMENT_18',
'FLAG_DOCUMENT_19', 'FLAG_DOCUMENT_20', 'FLAG_DOCUMENT_21']
vars_to_encode = ['NAME_CONTRACT_TYPE', 'CODE_GENDER', 'FLAG_OWN_CAR', 'FLAG_OWN_REALTY', 'NAME_INCOME_TYPE','NAME_EDUCATION_TYPE',
'NAME_FAMILY_STATUS', 'NAME_HOUSING_TYPE', 'WEEKDAY_APPR_PROCESS_START', 'ORGANIZATION_TYPE']
for var in already_coded:
    df[var] = df[var].astype('category')
df = encode_cat(df, vars_to_encode)
# removing rows with all 0
df = df[df.T.any()]
df.describe()
```
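The category-to-code mapping built by `encode_cat` above can be sketched on a toy column (hypothetical values, assuming pandas):

```python
import pandas as pd

# Hypothetical contract-type column; casting to 'category' assigns integer codes,
# and zipping values with their codes yields the mapping that encode_cat prints.
s = pd.Series(['Cash loans', 'Revolving loans', 'Cash loans']).astype('category')
mapping = dict(zip(s, s.cat.codes))
print(mapping)                   # each category paired with its integer code
print(s.map(mapping).tolist())   # the encoded column
```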
### Checking for correlations between variables
```
X = df.iloc[:, 1:]
# getting correlation matrix of continuous and categorical variables
cont = ['TARGET'] + continuous_vars
cat = ['TARGET'] + categorical_vars
cont_df = df.loc[:, cont]
cat_df = df.loc[:, cat]
cont_corr = cont_df.corr()
cat_corr = cat_df.corr()
plt.figure(figsize=(10,10));
sns.heatmap(cont_corr,
xticklabels = cont_corr.columns,
yticklabels = cont_corr.columns,
cmap="PiYG",
linewidth = 1);
# Find Point biserial correlation
for cat_var in categorical_vars:
    for cont_var in continuous_vars:
        data_cat = df[cat_var].to_numpy()
        data_cont = df[cont_var].to_numpy()
        corr, p_val = pointbiserialr(x=data_cat, y=data_cont)
        if np.abs(corr) >= 0.8:
            print(f'Categorical variable: {cat_var}, Continuous variable: {cont_var}, correlation: {corr}')
# Find Pearson correlation
total_len = len(continuous_vars)
for idx1 in range(total_len-1):
    for idx2 in range(idx1+1, total_len):
        cont_var1 = continuous_vars[idx1]
        cont_var2 = continuous_vars[idx2]
        data_cont1 = X[cont_var1].to_numpy()
        data_cont2 = X[cont_var2].to_numpy()
        corr, p_val = pearsonr(x=data_cont1, y=data_cont2)
        if np.abs(corr) >= 0.8:
            print(f'Continuous var 1: {cont_var1}, Continuous var 2: {cont_var2}, correlation: {corr}')
sns.scatterplot(data=X, x='CNT_CHILDREN',y='CNT_FAM_MEMBERS');
# Find Cramer's V correlation
total_len = len(categorical_vars)
for idx1 in range(total_len-1):
    for idx2 in range(idx1+1, total_len):
        cat_var1 = categorical_vars[idx1]
        cat_var2 = categorical_vars[idx2]
        c_matrix = pd.crosstab(X[cat_var1], X[cat_var2])
        # Calculate Cramer's V statistic for categorical-categorical association,
        # using the bias correction from Bergsma and Wicher,
        # Journal of the Korean Statistical Society 42 (2013): 323-328.
        chi2 = chi2_contingency(c_matrix)[0]
        n = c_matrix.sum().sum()
        phi2 = chi2/n
        r, k = c_matrix.shape
        phi2corr = max(0, phi2 - ((k-1)*(r-1))/(n-1))
        rcorr = r - ((r-1)**2)/(n-1)
        kcorr = k - ((k-1)**2)/(n-1)
        corr = np.sqrt(phi2corr/min((kcorr-1), (rcorr-1)))
        if corr >= 0.8:
            print(f'Categorical variable 1: {cat_var1}, categorical variable 2: {cat_var2}, correlation: {corr}')
corr, p_val = pearsonr(x=df['REGION_RATING_CLIENT_W_CITY'], y=df['REGION_RATING_CLIENT'])
print(corr)
# High collinearity of 0.95 between these variables suggests that one of them should be removed; we drop REGION_RATING_CLIENT_W_CITY.
# Drop highly correlated variables
vars_to_drop += ['CNT_FAM_MEMBERS', 'REG_REGION_NOT_WORK_REGION', 'REG_CITY_NOT_WORK_CITY', 'AMT_GOODS_PRICE', 'REGION_RATING_CLIENT_W_CITY']
features_to_keep = [x for x in df.columns if x not in vars_to_drop]
features_to_keep
new_df = df.loc[:, features_to_keep]
new_df
# Checking correlation of X continuous columns vs TARGET column
plt.figure(figsize=(10,10))
df_corr = new_df.corr()
ax = sns.heatmap(df_corr,
xticklabels=df_corr.columns,
yticklabels=df_corr.columns,
annot = True,
cmap ="RdYlGn")
# No particular feature found to be significantly correlated with the target
# REGION_RATING_CLIENT and REGION_POPULATION_RELATIVE have multicollinearity
features_to_keep.remove('REGION_POPULATION_RELATIVE')
features_to_keep
# These are our final list of features
```
### Plots
```
ax1 = sns.boxplot(y='AMT_CREDIT', x= 'TARGET', data=new_df)
ax1.set_title("Target by amount credit of the loan", fontsize=20);
```
The credit amount of the loan does not seem to have a significant effect on whether a person finds it difficult to pay. But it is crucial for our business recommendations, so we keep it.
```
ax2 = sns.barplot(x='CNT_CHILDREN', y= 'TARGET', data=new_df)
ax2.set_title("Target by number of children", fontsize=20);
```
From these plots, we can see that the number of children has quite a significant effect on whether one defaults, with a larger number of children making it more difficult to repay the loan.
```
ax3 = sns.barplot(x='NAME_FAMILY_STATUS', y= 'TARGET', data=new_df);
ax3.set_title("Target by family status", fontsize=20);
plt.xticks(np.arange(6), ['Civil marriage', 'Married', 'Separated', 'Single / not married',
'Unknown', 'Widow'], rotation=20);
```
Widows have the lowest likelihood of finding it difficult to pay, a possible target for our recommendation strategy.
```
new_df['YEARS_BIRTH_CAT'] = pd.cut(df.YEARS_BIRTH, bins= [21, 25, 35, 45, 55, 69], labels= ["25 and below", "26-35", "36-45", "46-55", "Above 55"])
ax4 = sns.barplot(x='YEARS_BIRTH_CAT', y= 'TARGET', data=new_df);
ax4.set_title("Target by age", fontsize=20);
```
Analysis of age groups shows a clear trend: the older you are, the better able you are to repay your loans. We will use this to craft our recommendations.
```
ax5 = sns.barplot(y='TARGET', x= 'NAME_INCOME_TYPE', data=new_df);
ax5.set_title("Target by income type", fontsize=20);
plt.xticks(np.arange(0, 8),['Businessman', 'Commercial associate', 'Maternity leave', 'Pensioner',
'State servant', 'Student', 'Unemployed', 'Working'], rotation=20);
ax6 = sns.barplot(x='NAME_EDUCATION_TYPE', y= 'TARGET', data=new_df);
ax6.set_title("Target by education type", fontsize=20);
plt.xticks(np.arange(5), ['Academic Degree', 'Higher education', 'Incomplete higher', 'Lower secondary', 'Secondary / secondary special'], rotation=20);
ax7 = sns.barplot(x='ORGANIZATION_TYPE', y= 'TARGET', data=new_df);
ax7.set_title("Target by organization type", fontsize=20);
plt.xticks(np.arange(58), ['Unknown','Advertising','Agriculture', 'Bank', 'Business Entity Type 1', 'Business Entity Type 2',
'Business Entity Type 3', 'Cleaning', 'Construction', 'Culture', 'Electricity', 'Emergency', 'Government', 'Hotel', 'Housing', 'Industry: type 1', 'Industry: type 10', 'Industry: type 11', 'Industry: type 12', 'Industry: type 13', 'Industry: type 2', 'Industry: type 3', 'Industry: type 4', 'Industry: type 5', 'Industry: type 6', 'Industry: type 7', 'Industry: type 8', 'Industry: type 9', 'Insurance', 'Kindergarten', 'Legal Services', 'Medicine', 'Military', 'Mobile', 'Other', 'Police', 'Postal', 'Realtor', 'Religion', 'Restaurant', 'School', 'Security', 'Security Ministries', 'Self-employed', 'Services', 'Telecom', 'Trade: type 1', 'Trade: type 2', 'Trade: type 3', 'Trade: type 4', 'Trade: type 5', 'Trade: type 6', 'Trade: type 7', 'Transport: type 1', 'Transport: type 2', 'Transport: type 3', 'Transport: type 4','University'], rotation=90);
ax8 = sns.barplot(x='NAME_CONTRACT_TYPE', y= 'TARGET', data=new_df);
ax8.set_title("Target by contract type", fontsize=20);
plt.xticks(np.arange(2), ['Cash Loan', 'Revolving Loan']);
```
People who get revolving loans are more likely to pay them back than people with cash loans, perhaps because revolving loans tend to be smaller, carry higher interest rates, and are recurring in nature.
```
ax9 = sns.barplot(x='CODE_GENDER', y= 'TARGET', data=new_df);
ax9.set_title("Target by gender", fontsize=20);
plt.xticks(np.arange(2), ['Female', 'Male']);
```
Males find it harder to pay back their loans than females in general.
```
# Splitting Credit into bins of 10k
new_df['Credit_Category'] = pd.cut(new_df.AMT_CREDIT, bins= [0, 100000, 200000, 300000, 400000, 500000, 600000, 700000, 800000, 900000, 1000000, 4.050000e+06], labels= ["0-100k", "100-200k", "200-300k", "300-400k", "400-500k", "500-600k", "600-700k", "700-800k", "800-900k","900-1 million", "Above 1 million"])
setorder= new_df.groupby('Credit_Category')['TARGET'].mean().sort_values(ascending=False)
ax10 = sns.barplot(x='Credit_Category', y= 'TARGET', data=new_df, order = setorder.index);
ax10.set_title("Target by Credit Category", fontsize=20);
plt.show()
# No. of people who repaid (TARGET == 0)
print(new_df.loc[new_df["TARGET"]==0, 'Credit_Category'].value_counts().sort_index())
# No. of people who defaulted (TARGET == 1)
print(new_df.loc[new_df["TARGET"]==1, 'Credit_Category'].value_counts().sort_index())
new_df['Credit_Category'].value_counts().sort_index()
# This will be useful for our first recommendation
#temp = new_df["Credit_Category"].value_counts()
#df1 = pd.DataFrame({"Credit_Category": temp.index,'Number of contracts': temp.values})
## Calculate the percentage of target=1 per category value
#cat_perc = new_df[["Credit_Category", 'TARGET']].groupby(["Credit_Category"],as_index=False).mean()
#cat_perc["TARGET"] = cat_perc["TARGET"]*100
#cat_perc.sort_values(by='TARGET', ascending=False, inplace=True)
#fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12,6))
#s = sns.countplot(ax=ax1,
# x = "Credit_Category",
# data=new_df,
# hue ="TARGET",
# order=cat_perc["Credit_Category"],
# palette=['g','r'])
#ax1.set_title("Credit Category", fontdict={'fontsize' : 10, 'fontweight' : 3, 'color' : 'Blue'})
#ax1.legend(['Repayer','Defaulter'])
## If the plot is not readable, use the log scale.
##if ylog:
## ax1.set_yscale('log')
## ax1.set_ylabel("Count (log)",fontdict={'fontsize' : 10, 'fontweight' : 3, 'color' : 'Blue'})
#s.set_xticklabels(s.get_xticklabels(),rotation=90)
#s = sns.barplot(ax=ax2, x = "Credit_Category", y='TARGET', order=cat_perc["Credit_Category"], data=cat_perc, palette='Set2')
#s.set_xticklabels(s.get_xticklabels(),rotation=90)
#plt.ylabel('Percent of Defaulters [%]', fontsize=10)
#plt.tick_params(axis='both', which='major', labelsize=10)
#ax2.set_title("Credit Category" + " Defaulter %", fontdict={'fontsize' : 15, 'fontweight' : 5, 'color' : 'Blue'})
#plt.show();
new_df.info()
```
### PROBLEM DEFINITION
### Given a training set of samples listing passengers who survived or did not survive the Titanic disaster, can our model determine, based on a test dataset that does not contain survival information, whether each passenger in that dataset survived?
## Background
### - On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew, which translates to a survival rate of roughly 32%.
### - One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew.
### - Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper class.
# Import Libraries
```
#Data Wrangling and Visualization
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import numpy as np
import warnings
%matplotlib inline
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', 100)
#machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
```
#### Acquire The DataSets
```
test_df=pd.read_csv('./kaggle-titanic-master/input/test.csv')
train_df=pd.read_csv('./kaggle-titanic-master/input/train.csv')
```
## DATA ANALYSIS
```
train_df.head()
```
### Data Dictionary
* Survived: 0 = No, 1 = Yes
* pclass: Ticket class 1 = 1st, 2 = 2nd, 3 = 3rd
* sibsp: # of siblings / spouses aboard the Titanic
* parch: # of parents / children aboard the Titanic
* ticket: Ticket number
* cabin: Cabin number
* embarked: Port of Embarkation C = Cherbourg, Q = Queenstown, S = Southampton
```
train_df.shape
test_df.shape
test_df.head()
train_df.info()
```
##### Age, Cabin, and Embarked have missing values
```
test_df.info()
```
##### Age, Fare, and Cabin have missing values
```
train_df.isnull()
train_df.describe()
print(train_df.describe(include=['O']))
test_df.describe(include=['O'])
```
### Visualization
```
def bar_chart(feature):
survived = train_df[train_df['Survived']==1][feature].value_counts()
dead = train_df[train_df['Survived']==0][feature].value_counts()
df = pd.DataFrame([survived,dead])
df.index = ['Survived','Dead']
df.plot(kind='bar', figsize=(10,5))
bar_chart('Sex')
bar_chart('Pclass')
bar_chart('SibSp')
bar_chart('Parch')
bar_chart('Embarked')
```
### DATA WRANGLING
Correlation Analysis
```
sns.heatmap(train_df.corr(), annot=True, cmap='RdYlGn', linewidth=0.4)
def impute(feature):
    # Use .loc so the assignment modifies the frame in place
    # (chained indexing like df[feature][mask] = ... may silently fail)
    for df in [train_df, test_df]:
        df.loc[df[feature].isnull(), feature] = np.random.normal(df[feature].mean(), df[feature].std())
def Drop_feature(feature):
    for df in [train_df, test_df]:
        df.drop(feature, axis=1, inplace=True)
```
Utility Functions to impute and drop features
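As a quick sanity check of the imputation idea, here is a sketch on hypothetical toy data (indexing with `.loc` sidesteps pandas' chained-assignment warning):

```python
import numpy as np
import pandas as pd

# Hypothetical toy frame with one missing Age value
toy = pd.DataFrame({'Age': [22.0, np.nan, 38.0, 26.0]})

# Fill missing entries with one draw from a normal distribution
# matching the observed mean and standard deviation
fill = np.random.normal(toy['Age'].mean(), toy['Age'].std())
toy.loc[toy['Age'].isnull(), 'Age'] = fill

print(toy['Age'].isnull().sum())  # 0 -- no missing values remain
```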
```
train_df.describe()
```
Description of the numerical features
```
train_df.describe(include ='O')
```
Description of the object features
```
impute('Fare')
impute('Age')
Drop_feature('Name')
Drop_feature('Cabin')
Drop_feature('Ticket')
dfs=[train_df,test_df]
Gender={"male":0 ,"female":1}
Towns={"S":1, "Q":2 ,"C":3}
for df in dfs:
df.Age=df.Age.astype(int)
df["Sex"]=df["Sex"].map(Gender)
df["Embarked"]=df["Embarked"].map(Towns)
df.loc[df['Age']<=15,'Age']=0
df.loc[(df['Age']>15) & (df.Age<=30),'Age']=1
df.loc[(df['Age']>30) & (df.Age<=45),'Age']=2
df.loc[(df['Age']>45) & (df.Age<=60),'Age']=3
    df.loc[df['Age']>60,'Age']=4
```
Transform categorical features into discrete numeric codes
```
impute('Embarked')
sns.heatmap(train_df.corr(), annot=True, cmap='RdYlGn', linewidth=0.4)
corr_ds=train_df.corr()
```
```
import warnings
import sys
sys.path.insert(0, '../src')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from felix_ml_tools import macgyver as mg
from preprocess import *
from sklearn.linear_model import LassoCV, Lasso
warnings.filterwarnings('ignore')
pd.set_option("max_columns", None)
pd.set_option("max_rows", None)
pd.set_option('display.float_format', lambda x: '%.3f' % x)
%matplotlib inline
# all states (UFs) in Brazil's Northeast region
UFs_NORDESTE = ['AL', 'BA', 'CE', 'MA', 'PB', 'PE', 'PI', 'RN', 'SE']
UFs_NORDESTE_NAMES = ['Alagoas', 'Bahia', 'Ceará', 'Maranhão', 'Paraíba', 'Pernambuco', 'Piauí', 'Rio Grande do Norte', 'Sergipe']
```
# Intro
Ingesting some base data
-----------
## Understanding the CRAS indices
```
idcras = pd.read_excel(
'../../dataset/GAMMAChallenge21Dados/6. Assistencia Social/IDCRAS.xlsx',
sheet_name = 'dados'
)
inspect(idcras);
idcras = generates_normalized_column(idcras,
column_to_normalize='nome_municipio',
column_normalized='nome_municipio_norm')
idcras = keep_last_date(idcras, column_dates='ano')
columns_to_keep = ['uf', 'nome_municipio', 'nome_municipio_norm', 'cod_ibge', 'ind_estru_fisic', 'ind_servi', 'ind_rh', 'id_cras.1']
idcras = idcras[columns_to_keep]
# some cities have more than one CRAS unit
idcras = (idcras[idcras['uf'].isin(UFs_NORDESTE)]
.groupby(['uf', 'nome_municipio', 'nome_municipio_norm', 'cod_ibge'])[
['ind_estru_fisic', 'ind_servi', 'ind_rh', 'id_cras.1']
]
.mean())
```
What are the average indices of public CRAS units for states and municipalities?
```
idcras_cities = (
idcras.groupby(['nome_municipio',
'nome_municipio_norm',
'cod_ibge'])[['ind_estru_fisic',
'ind_servi', 'ind_rh',
'id_cras.1']]
.mean()
.reset_index()
)
inspect(idcras_cities);
idcras_states = (
idcras.groupby(['uf'])[['ind_estru_fisic',
'ind_servi',
'ind_rh',
'id_cras.1']]
.mean()
.sort_values('uf')
.reset_index()
)
inspect(idcras_states);
```
-----------
### CRAS RMA records
```
cras_rma = pd.read_excel(
'../../dataset/GAMMAChallenge21Dados/6. Assistencia Social/RMA.xlsx',
sheet_name = 'dados_CRAS'
)
cras_rma = generates_normalized_column(cras_rma,
column_to_normalize='nome_municipio',
column_normalized='nome_municipio_norm')
cras_rma = keep_last_date(cras_rma, column_dates='ano')
inspect(cras_rma);
cras_rma_states = cras_rma[cras_rma['uf'].isin(UFs_NORDESTE)]
cras_rma_states_grouped = (
cras_rma_states
.groupby(['uf', 'nome_municipio_norm'])
[['cod_ibge', 'a1', 'a2', 'b1', 'b2', 'b3', 'b5', 'b6', 'c1', 'c6', 'd1', 'd4']]
.mean()
.fillna(0)
.astype(int)
)
registry_cras_rma_states_grouped = (
cras_rma_states_grouped.groupby(['uf'])[['a1', 'a2', 'b1', 'b2', 'b3', 'b5', 'b6', 'c1', 'c6', 'd1', 'd4']]
.mean()
.sort_values('uf')
)
registry_cras_rma_states_grouped = (
registry_cras_rma_states_grouped
.rename(columns = {
'a1': 'a1 Total de famílias em acompanhamento pelo PAIF',
'a2': 'a2 Novas famílias inseridas no acompanhamento do PAIF no mês de referência',
'b1': 'b1 PAIF Famílias em situação de extrema pobreza',
'b2': 'b2 PAIF Famílias beneficiárias do Programa Bolsa Família',
'b3': 'b3 PAIF Famílias beneficiárias do Programa Bolsa Família, em descumprimento de condicionalidades',
'b5': 'b5 PAIF Famílias com crianças/adolescentes em situação de trabalho infantil',
'b6': 'b6 PAIF Famílias com crianças e adolescentes em Serviço de Acolhimento',
'c1': 'c1 Total de atendimentos individualizados realizados, no mês',
'c6': 'c6 Visitas domiciliares',
'd1': 'd1 Famílias participando regularmente de grupos no âmbito do PAIF',
'd4': 'd4 Adolescentes de 15 a 17 anos em Serviços de Convivência e Fortalecimentos de Vínculos'
},
inplace = False)
)
registry_cras_rma_states_grouped
```
## Target Population
```
sim_pf_homcidios = pd.read_parquet('../../dataset/handled/sim_pf_homcidios.parquet')
sim_pf_homcidios = sim_pf_homcidios[sim_pf_homcidios['uf'].isin(UFs_NORDESTE)]
ibge_municipio_pib = pd.read_parquet('../../dataset/handled/ibge_municipio_pib.parquet')
ibge_estados = pd.read_excel(
'../../dataset/GAMMAChallenge21Dados/3. Atlas de Desenvolvimento Humano/Atlas 2013_municipal, estadual e Brasil.xlsx',
sheet_name = 'UF 91-00-10'
)
ibge_estados = keep_last_date(ibge_estados, 'ANO')
ibge_estados = ibge_estados[ibge_estados.UFN.isin(UFs_NORDESTE_NAMES)]
inspect(ibge_estados);
mortes_estados_norm = dict()
for estado in ibge_estados.UFN.sort_values().to_list():
mortes_estados_norm[estado] = (
(sim_pf_homcidios[sim_pf_homcidios.nomeUf == estado]
.groupby('uf')['uf']
.count()
.values[0]) / (ibge_estados[ibge_estados.UFN == estado]
.POP
.values[0])*1000
)
mortes_estados_norm = pd.DataFrame.from_dict(mortes_estados_norm, orient='index', columns=['mortes_norm'])
mortes_estados_norm['mortes_norm']
homicidios_municipios = sim_pf_homcidios.groupby(['uf','municipioCodigo','nomeMunicipioNorm'])[['contador']].count().astype(int).reset_index()
inspect(homicidios_municipios);
ibge_municipio_populacao_estimada = pd.read_parquet('../../dataset/handled/ibge_municipio_populacao_estimada.parquet')
ibge_municipio_populacao_estimada = generates_normalized_column(ibge_municipio_populacao_estimada, 'nomeMunicipio', 'nome_municipio_norm')
inspect(ibge_municipio_populacao_estimada);
cras_rma_municipios = cras_rma.groupby(['uf','cod_ibge','nome_municipio_norm'])[[
'a1', 'a2',
'b1', 'b2', 'b3', 'b4', 'b5', 'b6',
'c1', 'c2', 'c3', 'c4', 'c5', 'c6',
'd1', 'd2', 'd3', 'd4', 'd5', 'd6', 'd7'
]].mean().fillna(0).astype(int).reset_index()
inspect(cras_rma_municipios);
list_pop = list()
list_mortes_norm = list()
for cod in cras_rma_municipios.cod_ibge:
populacao = ibge_municipio_populacao_estimada[ibge_municipio_populacao_estimada['municipioCodigo'].str.contains(str(cod))]['populacaoEstimada'].astype(int).values[0]
list_pop.append(populacao)
if homicidios_municipios[(homicidios_municipios['municipioCodigo'].str.contains(str(cod)))&(ibge_municipio_populacao_estimada['faixaPopulacaoEstimada'] == '0:10000')].any().any():
mortes = homicidios_municipios[(homicidios_municipios['municipioCodigo'].str.contains(str(cod)))]['contador'].astype(int).values[0]
mortes /= populacao
mortes *= 10000
else:
mortes = None
list_mortes_norm.append(mortes)
```
## Hypothesis
### Hypothesis 2
```
plot_grouped_df(sim_pf_homcidios.groupby('uf')['uf'],
xlabel='Quantidade de Homicidios',
ylabel='UF',
figsize=(17,3))
mortes_estados_norm.plot(kind='bar',
figsize=(17,3),
rot=0,
grid=True).set_ylabel("Quantidade de Homicidios Normalizado")
registry_cras_rma_states_grouped['mortes'] = mortes_estados_norm['mortes_norm'].values
registry_cras_rma_states_grouped
```
#### (2.1) States where CRAS supports residents in difficulty
```
registry_cras_rma_states_grouped.plot(kind='bar',
title='Registro de Atividades - CRAS',
figsize=(17,5),
ylabel='Registros',
xlabel='UF',
rot=0,
grid=True)
```
```
X = registry_cras_rma_states_grouped.iloc[:,:-1]
y = registry_cras_rma_states_grouped.iloc[:,-1:]
reg = LassoCV()
reg.fit(X, y)
print("Melhor valor de alpha usando LassoCV: %f" % reg.alpha_)
print("Melhor score usando LassoCV: %f" % reg.score(X,y))
coef = pd.Series(reg.coef_, index = X.columns)
print("LASSO manteve " + str(sum(coef != 0)) + " variaveis, e elimina " + str(sum(coef == 0)) + " variaveis")
imp_coef = coef.sort_values()
import matplotlib
matplotlib.rcParams['figure.figsize'] = (5, 5)
imp_coef.plot(kind = "barh")
plt.title("Feature importance usando LASSO")
trabalhos_cras_estados_totfam = (
registry_cras_rma_states_grouped[['a1 Total de famílias em acompanhamento pelo PAIF',
'c1 Total de atendimentos individualizados realizados, no mês']]
)
trabalhos_cras_estados_extpobres = (
registry_cras_rma_states_grouped[['b1 PAIF Famílias em situação de extrema pobreza']]
)
trabalhos_cras_estados_acompanha = (
registry_cras_rma_states_grouped[['b5 PAIF Famílias com crianças/adolescentes em situação de trabalho infantil',
'b6 PAIF Famílias com crianças e adolescentes em Serviço de Acolhimento']]
)
trabalhos_cras_estados_atendimento = (
registry_cras_rma_states_grouped[['d1 Famílias participando regularmente de grupos no âmbito do PAIF',
'd4 Adolescentes de 15 a 17 anos em Serviços de Convivência e Fortalecimentos de Vínculos']]
)
def plot_cras_registries(df):
df.plot(kind='bar',
title='Registro de Atividades - CRAS',
figsize=(10,5),
ylabel='Registros',
xlabel='UF',
fontsize=12,
colormap='Pastel2',
rot=0,
grid=True)
plot_cras_registries(trabalhos_cras_estados_totfam)
plot_cras_registries(trabalhos_cras_estados_extpobres)
plot_cras_registries(trabalhos_cras_estados_acompanha)
plot_cras_registries(trabalhos_cras_estados_atendimento)
```
## Recommendation
The CRAS data show that, if we rank the states from most to least violent, the states whose public CRAS units register a higher average level of activity tend to keep their homicide numbers under control, in particular:
- indices for identifying families in extreme poverty
- indices for identifying child labor and providing shelter services
- indices for family participation in groups within the scope of PAIF
---------------
#### (2.2) States where CRAS supports residents in difficulty
```
#cras_municipios['populacao'] = list_pop
cras_rma_municipios['Mortes/Populacao'] = list_mortes_norm
inspect(cras_rma_municipios);
cras_rma_municipios = cras_rma_municipios.dropna()
cras_rma_municipios_mortes = (
cras_rma_municipios[['uf',
'nome_municipio_norm',
'a1', 'a2',
'b1', 'b2', 'b3', 'b4', 'b5', 'b6',
'c1', 'c2', 'c3', 'c4', 'c5', 'c6', 'd1',
'd2', 'd3', 'd4', 'd5', 'd6', 'd7',
'Mortes/Populacao']]
.reset_index(drop=True)
)
inspect(cras_rma_municipios_mortes);
corr = cras_rma_municipios_mortes.corr()
corr.style.background_gradient(cmap='hot')
```
### Correlation analysis does not identify any CRAS-registered action as influencing city-level violence
```
cor_target = abs(corr["Mortes/Populacao"])
relevant_features = cor_target[cor_target>=0.07]
relevant_features
X = cras_rma_municipios_mortes.drop(['uf','nome_municipio_norm','Mortes/Populacao'], axis=1)
y = cras_rma_municipios_mortes['Mortes/Populacao']
reg = LassoCV()
reg.fit(X, y)
print("Melhor valor de alpha usando LassoCV: %f" % reg.alpha_)
print("Melhor score usando LassoCV: %f" % reg.score(X,y))
coef = pd.Series(reg.coef_, index = X.columns)
print("LASSO manteve " + str(sum(coef != 0)) + " variaveis, e elimina " + str(sum(coef == 0)) + " variaveis")
imp_coef = coef.sort_values()
import matplotlib
matplotlib.rcParams['figure.figsize'] = (5, 5)
imp_coef.plot(kind = "barh")
plt.title("Feature importance usando LASSO")
```
```
cras_municipios_feat_select = cras_rma_municipios_mortes[['uf', 'nome_municipio_norm', 'Mortes/Populacao', 'c1', 'c3', 'd6']].reset_index(drop=True)
corr = cras_municipios_feat_select.sort_values(by='Mortes/Populacao', ascending=False).corr()
corr.style.background_gradient(cmap='hot')
```
## Recommendation
At this point, it is not possible to recommend a CRAS action at the municipal level.
```
import pandas as pd
from pandas import DataFrame
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime, timedelta
from statsmodels.tsa.arima_model import ARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose
from scipy import stats
import statsmodels.api as sm
from itertools import product
from math import sqrt
from sklearn.metrics import mean_squared_error
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
colors = ["windows blue", "amber", "faded green", "dusty purple"]
sns.set(rc={"figure.figsize": (20,10), "axes.titlesize" : 18, "axes.labelsize" : 12,
"xtick.labelsize" : 14, "ytick.labelsize" : 14 })
dateparse = lambda dates: datetime.strptime(dates, '%m/%d/%Y')  # pd.datetime is deprecated
df = pd.read_csv('BTCUSDTEST.csv', parse_dates=['Date'], index_col='Date', date_parser=dateparse)
df = df.loc[:, ~df.columns.str.contains('^Unnamed')]
df.sample(5)
# Extract the bitcoin data only
btc=df[df['Symbol']=='BTCUSD']
# Drop some columns
btc.drop(['Volume', 'Market Cap'],axis=1,inplace=True)
# Resampling to monthly frequency
btc_month = btc.resample('M').mean()
#seasonal_decompose(btc_month.close, freq=12).plot()
seasonal_decompose(btc_month.Close, model='additive').plot()
print("Dickey–Fuller test: p=%f" % adfuller(btc_month.Close)[1])
# Box-Cox Transformations
btc_month['close_box'], lmbda = stats.boxcox(btc_month.Close)
print("Dickey–Fuller test: p=%f" % adfuller(btc_month.close_box)[1])
# Seasonal differentiation (12 months)
btc_month['box_diff_seasonal_12'] = btc_month.close_box - btc_month.close_box.shift(12)
print("Dickey–Fuller test: p=%f" % adfuller(btc_month.box_diff_seasonal_12[12:])[1])
# Seasonal differentiation (3 months)
btc_month['box_diff_seasonal_3'] = btc_month.close_box - btc_month.close_box.shift(3)
print("Dickey–Fuller test: p=%f" % adfuller(btc_month.box_diff_seasonal_3[3:])[1])
# Regular differentiation
btc_month['box_diff2'] = btc_month.box_diff_seasonal_12 - btc_month.box_diff_seasonal_12.shift(1)
# STL-decomposition
seasonal_decompose(btc_month.box_diff2[13:]).plot()
print("Dickey–Fuller test: p=%f" % adfuller(btc_month.box_diff2[13:])[1])
#autocorrelation_plot(btc_month.close)
plot_acf(btc_month.Close[13:].values.squeeze(), lags=12)
plt.tight_layout()
# Initial approximation of parameters using Autocorrelation and Partial Autocorrelation Plots
ax = plt.subplot(211)
# Plot the autocorrelation function
#sm.graphics.tsa.plot_acf(btc_month.box_diff2[13:].values.squeeze(), lags=48, ax=ax)
plot_acf(btc_month.box_diff2[13:].values.squeeze(), lags=12, ax=ax)
ax = plt.subplot(212)
#sm.graphics.tsa.plot_pacf(btc_month.box_diff2[13:].values.squeeze(), lags=48, ax=ax)
plot_pacf(btc_month.box_diff2[13:].values.squeeze(), lags=12, ax=ax)
plt.tight_layout()
# Initial approximation of parameters
qs = range(0, 3)
ps = range(0, 3)
d=1
parameters = product(ps, qs)
parameters_list = list(parameters)
len(parameters_list)
# Model Selection
results = []
best_aic = float("inf")
warnings.filterwarnings('ignore')
for param in parameters_list:
try:
model = SARIMAX(btc_month.close_box, order=(param[0], d, param[1])).fit(disp=-1)
except ValueError:
print('bad parameter combination:', param)
continue
aic = model.aic
if aic < best_aic:
best_model = model
best_aic = aic
best_param = param
results.append([param, model.aic])
# Best Models
result_table = pd.DataFrame(results)
result_table.columns = ['parameters', 'aic']
print(result_table.sort_values(by = 'aic', ascending=True).head())
print(best_model.summary())
print("Dickey–Fuller test:: p=%f" % adfuller(best_model.resid[13:])[1])
best_model.plot_diagnostics(figsize=(15, 12))
plt.show()
# Inverse Box-Cox Transformation Function
def invboxcox(y,lmbda):
if lmbda == 0:
return(np.exp(y))
else:
return(np.exp(np.log(lmbda*y+1)/lmbda))
# Prediction
btc_month_pred = btc_month[['Close']]
date_list = [datetime(2019, 10, 31), datetime(2019, 11, 30), datetime(2020, 7, 31)]
future = pd.DataFrame(index=date_list, columns= btc_month.columns)
btc_month_pred = pd.concat([btc_month_pred, future])
#btc_month_pred['forecast'] = invboxcox(best_model.predict(start=0, end=75), lmbda)
btc_month_pred['forecast'] = invboxcox(best_model.predict(start=datetime(2015, 10, 31), end=datetime(2020, 7, 31)), lmbda)
btc_month_pred.Close.plot(linewidth=3)
btc_month_pred.forecast.plot(color='r', ls='--', label='Predicted Close', linewidth=3)
plt.legend()
plt.grid()
plt.title('Bitcoin monthly forecast')
plt.ylabel('USD')
#from google.colab import files
#uploaded = files.upload()
```
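As a sanity check on the inverse transform, the forward Box-Cox formula for lambda != 0 is y = (x**lmbda - 1) / lmbda, and the notebook's inverse should recover the original value. A minimal stand-alone sketch (pure Python, independent of scipy; the price value is hypothetical):

```python
import math

def boxcox_forward(x, lmbda):
    # Forward Box-Cox transform for lmbda != 0: y = (x**lmbda - 1) / lmbda
    return (x ** lmbda - 1) / lmbda

def invboxcox(y, lmbda):
    # Same inverse used in the notebook above
    if lmbda == 0:
        return math.exp(y)
    return math.exp(math.log(lmbda * y + 1) / lmbda)

x, lmbda = 9350.25, 0.3            # hypothetical price and lambda
y = boxcox_forward(x, lmbda)
print(abs(invboxcox(y, lmbda) - x) < 1e-6)  # True -- round trip recovers x
```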
## Intro to Python
### Learning tips.
* Practice, practice, practice.
* Get used to making mistakes! It’s OK.
* Don’t memorize. There are thousands of packages in Python. Learn to read documentation.
## Switching between python versions
### Anaconda Install
[https://docs.continuum.io/anaconda/install](https://docs.continuum.io/anaconda/install)
### Anaconda Docs
[https://conda.io/docs/](https://conda.io/docs/)
[https://conda.io/docs/py2or3.html](https://conda.io/docs/py2or3.html)
### Create a Python 3.5 environment
conda create -n py35 python=3.5 anaconda
To activate this environment, use:
> source activate py35
To deactivate this environment, use:
> source deactivate py35
### Create a Python 2.7 environment
conda create -n py27 python=2.7 anaconda
To activate this environment, use:
> source activate py27
To deactivate this environment, use:
> source deactivate py27
### Update or Upgrade Python
conda update python
### Simple calculator
```
from __future__ import print_function
print(33+5)
print(55*11)
print(2**5)
print(3/5)
print(3.0/5)
# To discard the fractional part and do integer division, you can use a double slash.
print(5.5//3.3)
print(5.5/3.3)
print(2^5) # XOR (exclusive OR).
# 0010 # 2 (binary)
# 0101 # 5 (binary)
# ---- # APPLY XOR ('vertically')
# 0111 # result = 7 (binary)
print(3%2) # Modulo
print(0xAD) # Hexadecimal
print(0o111) # Octal
print(0b0111) # Binary
value = (3.55 / 0.11)**3
print(value)
```
## Data types
```
months = [
1.0,
'January',
'February',
'March',
'April',
'May',
'June',
'July',
'August',
'September',
'October',
'November',
'December'
]
print (type(months))
print (type(months[0]))
print (len(months))
# print (class(months))
print(months[0])
print(months[-1])
print(months[len(months)-1])
print(months)
del months[0]
print(months)
print(months[9:11])
print(months[9:])
print(months[:]) # Print everything
print(months[::2]) # Print every other - i.e. skip by 2
print(months[-1])
print(months[-3])
print(months[-3:-1]) # Print third-to-last and second-to-last
print(months[10:1:-3]) # Print from index 10 down toward 1 (exclusive), stepping back by 3
print(months[10:1:-1]) # Print from index 10 down toward 1 (exclusive), one step at a time
print([1, 2, 3] + [4, 5, 6]) # join two lists
# print([1, 2, 3] - [4, 5, 6])
print('Hello, ' + 'world!') # concatenate two strings
name='Bear' # Create string 'Bear'
print(name*5) # concatenate 5 times
print('B' in name) # Test if B in Bear
print('b' in name) # Test if b in Bear
s=range(1,6) # Range doesn't create lists in python 3
print(s)
s=list(range(1,9)) # Using range to create lists in python 3
print(s)
s[2]=5
print(s)
del s[2] # Delete an item
print(s)
del s[2:4] # Delete more than one item
print(s)
t=[1, 2, 3, 4, 5] # Create a list
print(t)
t=(1, 2, 3, 4, 5) # Create a tuple
print(t)
# del t[2]
print(t)
print(2 ** 5) # Power
print(pow(2, 5)) # Power using function
import math # Some common math functions
print(math.ceil(33.3))
print(math.sqrt(9))
from math import sqrt
print(sqrt(9))
```
## List Methods
* list.append(elem) -- adds a single element to the end of the list. Common error: does not return the new list, just modifies the original.
* list.insert(index, elem) -- inserts the element at the given index, shifting elements to the right.
* list.extend(list2) adds the elements in list2 to the end of the list. Using + or += on a list is similar to using extend().
* list.index(elem) -- searches for the given element from the start of the list and returns its index. Throws a ValueError if the element does not appear (use "in" to check without a ValueError).
* list.remove(elem) -- searches for the first instance of the given element and removes it (throws ValueError if not present)
* list.sort() -- sorts the list in place (does not return it). (The sorted() function shown below is preferred.)
* list.reverse() -- reverses the list in place (does not return it)
* list.pop(index) -- removes and returns the element at the given index. Returns the rightmost element if index is omitted (roughly the opposite of append()).
```
l = ['Rick Sanchez', 'Morty Smith', 'Mr. Meeseeks']
print (l)
l.append('Doofus Rick') ## append item at end
print (l)
l.insert(0, 'Scary Terry') ## insert item at index 0
print (l)
l.extend(['Squanchy', 'Mr. Poopybutthole']) ## add list of items at end
print (l)
print (l.index('Morty Smith'))
print (l)
l.remove('Morty Smith') ## search and remove an item
print (l)
print(l.pop(1)) ## removes and returns 'Rick Sanchez' second item
print (l) ## ['Scary Terry', 'Mr. Meeseeks', 'Doofus Rick', 'Squanchy', 'Mr. Poopybutthole']
```
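The in-place `sort()`/`reverse()` methods and the preferred `sorted()` function were not demonstrated above; a small sketch:

```python
nums = [3, 1, 2]

print(sorted(nums))  # [1, 2, 3] -- returns a new sorted list
print(nums)          # [3, 1, 2] -- original untouched

nums.sort()          # sorts in place, returns None
print(nums)          # [1, 2, 3]

nums.reverse()       # reverses in place
print(nums)          # [3, 2, 1]
```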
## Dictionaries
```
d={"python": 333, "R": 222, "C++": 111} # Create a dictionary
print(d.keys())
print(d["python"]) # List value with key "python"
print("java" in d) # Check if "java" in dictionary
print("python" in d) # Check if "python" in dictionary
d2={"java": 33, "C#": 22, "Scala": 11} # Create a dictionary
print(d2)
d.update(d2) # add dictionary to another dictionary
print(d)
for key in d: # List keys in dictionary
print(key)
print(d.keys())
for key in d: # List values in dictionary
print(d[key])
print(d.values())
d_backup = d.copy() # Create dictionary copy
print(d_backup)
d['C++']=55
print(d)
print(d_backup)
```
### Note:
* A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in the original.
* A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original.
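A minimal illustration of that difference using the standard `copy` module:

```python
import copy

original = {'langs': ['python', 'R']}

shallow = copy.copy(original)    # new dict, but the inner list is shared
deep = copy.deepcopy(original)   # new dict AND a new inner list

original['langs'].append('C++')

print(shallow['langs'])  # ['python', 'R', 'C++'] -- sees the change
print(deep['langs'])     # ['python', 'R'] -- unaffected
```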
```
print(len(d)-len(d_backup))
del d_backup["java"]
print(d_backup)
print(len(d)-len(d_backup)) # Note original isn't changed
```
## Dictionary Methods
* d.fromkeys() - Create a new dictionary with keys from seq and values set to value.
* d.get(key, default=None) - For any key, returns value or default if key not in dictionary
* d.has_key(key) - Removed; use the in operation instead.
* d.items() - Returns a view of d's (key, value) pairs (a list in Python 2)
* d.keys() - Returns a view of dictionary d's keys (a list in Python 2)
* d.setdefault(key, default=None) - Similar to get(), but will set d[key] = default if key is not already in dict
* d.update(d2) - Adds dictionary d2's key-value pairs to d
* d.values() - Returns a view of dictionary d's values (a list in Python 2)
* d.clear() - Removes all elements of dictionary d
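A quick sketch of a few of these methods in action:

```python
d = {"python": 333, "R": 222}

print(d.get("java"))          # None -- missing key, default returned
print(d.get("java", 0))       # 0 -- explicit default
d.setdefault("C++", 111)      # inserts "C++" since it is not present
print(d["C++"])               # 111

for key, value in d.items():  # iterate over (key, value) pairs
    print(key, value)

blank = dict.fromkeys(["a", "b"], 0)
print(blank)                  # {'a': 0, 'b': 0}
```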
## Exercise
Try the above methods on a dictionary that you create.
## Indentation
One unusual Python feature is that the whitespace indentation of a piece of code affects its meaning. A logical block of statements such as the ones that make up a function should all have the same indentation, set in from the indentation of their parent function or "if" or whatever. If one of the lines in a group has a different indentation, it is flagged as a syntax error.
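For example, a consistently indented block runs, while an inconsistently indented one fails to compile:

```python
def greet(name):
    message = 'Hello, ' + name   # both body lines share one indentation level
    return message

print(greet('world'))            # Hello, world

# Compiling a block whose second line is indented differently raises an error
bad = "if True:\n    x = 1\n      y = 2\n"
try:
    compile(bad, '<string>', 'exec')
except IndentationError as err:
    print('IndentationError:', err.msg)
```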
## Conditionals
```
l=['Scary Terry', 'Rick Sanchez', 'Morty Smith', 'Mr. Meeseeks', 'Doofus Rick', 'Squanchy', 'Mr. Poopybutthole']
if 'Scary Terry' in l:
print ("Oh no!!!")
if 'Scary Tarry' in l:
print ("Oh no!!!")
else:
print ("Whew!!!")
if 'Scary Tarry' in l:
print ("Oh no!!!")
elif 'Mr. Meeseeks' in l:
print ("Oooo yeah, caaan doo!!!!")
else:
print ("Whew!!!")
```
## Comparison Operators
* x == y x equals y.
* x < y x is less than y.
* x > y x is greater than y.
* x >= y x is greater than or equal to y.
* x <= y x is less than or equal to y.
* x != y x is not equal to y.
* x is y x and y are the same object.
* x is not y x and y are different objects.
* x in y x is a member of the container (e.g., sequence) y.
* x not in y x is not a member of the container (e.g., sequence) y.
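Note in particular the difference between `==` (equal values) and `is` (same object):

```python
a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)      # True  -- same contents
print(a is b)      # False -- two distinct list objects
print(a is c)      # True  -- c is another name for the same object
print(2 in a)      # True
print(5 not in a)  # True
```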
## Loops
### For loops
```
for name in l:
    print(name)
```
### While loops
```
x = 1
while x <= 5:
    print(x)
    x += 1

x = 1
while x <= 5:
    x += 1
    if (x % 2 == 0):
        print(x)

x = 1
while x <= 5:
    x += 1
    print(x)
    if (x == 3): break
```
## help(), and dir()
There are a variety of ways to get help for Python.
Do a Google search, starting with the word "python", like "python list" or "python string lowercase". The first hit is often the answer. This technique seems to work better for Python than it does for other languages for some reason.
The official Python docs site — docs.python.org — has high quality docs. Nonetheless, I often find a Google search of a couple words to be quicker.
There is also an official Tutor mailing list specifically designed for those who are new to Python and/or programming!
Many questions (and answers) can be found on StackOverflow and Quora.
Use the help() and dir() functions (see below).
Inside the Python interpreter, the help() function pulls up documentation strings for various modules, functions, and methods. These doc strings are similar to Java's javadoc. The dir() function tells you what the attributes of an object are. Below are some ways to call help() and dir() from the interpreter:
help(len) — help string for the built-in len() function; note that it's "len" not "len()", which is a call to the function, which we don't want
help(sys) — help string for the sys module (must do an import sys first)
dir(sys) — dir() is like help() but just gives a quick list of its defined symbols, or "attributes"
help(sys.exit) — help string for the exit() function in the sys module
help('xyz'.split) — help string for the split() method for string objects. You can call help() with that object itself or an example of that object, plus its attribute. For example, calling help('xyz'.split) is the same as calling help(str.split).
help(list) — help string for list objects
dir(list) — displays list object attributes, including its methods
help(list.append) — help string for the append() method for list objects
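For instance, `dir()` makes it easy to list a type's public methods:

```python
# Filter out the dunder attributes to see only the ordinary methods
methods = [name for name in dir(list) if not name.startswith('_')]
print(methods)
# ['append', 'clear', 'copy', 'count', 'extend', 'index', 'insert',
#  'pop', 'remove', 'reverse', 'sort']
```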
## Functions
```
def fibonacci_seq(n):  # Function syntax
    result = [0, 1]
    for i in range(n-2):
        result.append(result[-2] + result[-1])
    return result

print(fibonacci_seq(9))

def fibA(n):
    a, b = 1, 1
    for i in range(n-1):
        a, b = b, a + b
    return a

print([fibA(i) for i in range(9)])  # Repeatedly calling a function
```
## Python Tutorials
* [“Dive into Python” (Chapters 2 to 4)](http://diveintopython.org/)
* [Python 101 – Beginning Python](http://www.rexx.com/~dkuhlman/python_101/python_101.html)
* [Nice free CS/python book](https://www.cs.hmc.edu/csforall/index.html)
### Things to refer to
* [The Official Python Tutorial](http://www.python.org/doc/current/tut/tut.html)
* [The Python Quick Reference](http://rgruet.free.fr/PQR2.3.html)
### YouTube Python Tutorials
* [Python Fundamentals Training – Classes](http://www.youtube.com/watch?v=rKzZEtxIX14)
* [Python 2.7 Tutorial Derek Banas](http://www.youtube.com/watch?v=UQi-L-_chcc)
* [Python Programming Tutorial - thenewboston](http://www.youtube.com/watch?v=4Mf0h3HphEA)
* [Google Python Class](http://www.youtube.com/watch?v=tKTZoB2Vjuk)
### datacamp.com
* [datacamp.com](https://www.datacamp.com/tracks/python-developer)
Last update September 5, 2017
***
<center>
<img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# **Exploratory Data Analysis Lab**
Estimated time needed: **30** minutes
In this module you get to work with the cleaned dataset from the previous module.
In this assignment you will perform the task of exploratory data analysis.
You will find out the distribution of data, presence of outliers and also determine the correlation between different columns in the dataset.
## Objectives
In this lab you will perform the following:
* Identify the distribution of data in the dataset.
* Identify outliers in the dataset.
* Remove outliers from the dataset.
* Identify correlation between features in the dataset.
***
## Hands on Lab
Import the pandas module.
```
import pandas as pd
```
Load the dataset into a dataframe.
```
df = pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m2_survey_data.csv")
df
df['Age'].median()
df[df['Gender'] == 'Woman']['ConvertedComp'].median()
df['Age'].hist()
df['ConvertedComp'].median()
df.boxplot(column=['ConvertedComp'])
Q1 = df['ConvertedComp'].quantile(0.25)
Q3 = df['ConvertedComp'].quantile(0.75)
IQR = Q3 - Q1 #IQR is interquartile range.
filtered = (df['ConvertedComp'] >= Q1 - 1.5 * IQR) & (df['ConvertedComp'] <= Q3 + 1.5 *IQR)
df_remove_outliers = df.loc[filtered]
df_remove_outliers['ConvertedComp'].median()
df_remove_outliers['ConvertedComp'].mean()
df.boxplot(column=['Age'])
df.corr(method ='pearson')
import matplotlib.pyplot as plt
plt.plot(df['Age'],df['WorkWeekHrs'],'o')
```
## Distribution
### Determine how the data is distributed
The column `ConvertedComp` contains salaries converted to annual USD using the exchange rate on 2019-02-01.
This assumes 12 working months and 50 working weeks.
Plot the distribution curve for the column `ConvertedComp`.
```
# your code goes here
```
Plot the histogram for the column `ConvertedComp`.
```
# your code goes here
```
What is the median of the column `ConvertedComp`?
```
# your code goes here
```
How many responders identified themselves only as a **Man**?
```
# your code goes here
```
Find the median `ConvertedComp` of responders who identified themselves only as a **Woman**.
```
# your code goes here
```
Give the five-number summary for the column `Age`.
**Double click here for hint**.
<!--
min,q1,median,q3,max of a column are its five number summary.
-->
```
# your code goes here
```
Plot a histogram of the column `Age`.
```
# your code goes here
```
## Outliers
### Finding outliers
Find out if outliers exist in the column `ConvertedComp` using a box plot.
```
# your code goes here
```
Find the interquartile range (IQR) for the column `ConvertedComp`.
```
# your code goes here
```
Find out the upper and lower bounds.
```
# your code goes here
```
Identify how many outliers there are in the `ConvertedComp` column.
```
# your code goes here
```
Create a new dataframe by removing the outliers from the `ConvertedComp` column.
```
# your code goes here
```
## Correlation
### Finding correlation
Find the correlation between `Age` and all other numerical columns.
```
# your code goes here
```
## Authors
Ramesh Sannareddy
### Other Contributors
Rav Ahuja
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ----------------- | ---------------------------------- |
| 2020-10-17 | 0.1 | Ramesh Sannareddy | Created initial version of the lab |
Copyright © 2020 IBM Corporation. This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDA0321ENSkillsNetwork21426264-2021-01-01\&cm_mmc=Email_Newsletter-\_-Developer_Ed%2BTech-\_-WW_WW-\_-SkillsNetwork-Courses-IBM-DA0321EN-SkillsNetwork-21426264\&cm_mmca1=000026UJ\&cm_mmca2=10006555\&cm_mmca3=M12345678\&cvosrc=email.Newsletter.M12345678\&cvo_campaign=000026UJ).
***
```
#Q1
#(a)
def odd():
    sum = 0
    n = int(input("Print sum of odd numbers till: "))
    for i in range(0, n+1):
        if i % 2 == 1:
            sum += i
            print("term = ", i, ", sum till this step = ", sum)
        else:
            continue

odd()
odd()
#(b)
def even():
    sum = 0
    n = int(input("Print sum of even numbers till: "))
    for i in range(0, n+1):
        if i % 2 == 0:
            sum += i
            print("term = ", i, ", sum till this step = ", sum)
        else:
            continue

even()
even()
#(c)
n = input("Enter Number to calculate sum: ")
n = int(n)
sum = 0
for num in range(0, n+1, 1):
    sum = sum + num

print("SUM of first ", n, "numbers is: ", sum)
#Q2
#(A)
t1=(1,2,3,4,5,6,7,8,9,10)
t1a=t1[:5]
t1b=t1[5:]
print(t1a)
print(t1b)
print()
#(B)
t1=(1,2,3,4,5,6,7,8,9,10)
n=list(t1)
list1=list()
for i in n:
    if i % 2 == 0:  # keep only the even values
        list1.append(i)
p = tuple(list1)
print('tuple:', p)
print()
#(C)
t1=(1,2,3,4,5,6,7,8,9,10)
t2=(11,13,15)
a=list(t1)
b=list(t2)
c=(a+b)
print(c)
#(D)
t1=(1,2,3,4,5,6,7,8,9,10)
print('max no. is', max(t1))
print('min no. is', min(t1))
#Q3
def string():
    choice = int(input('''Enter your choice
    1 = length of the string
    2 = maximum of three strings
    3 = Replace every successive character with #
    4 = number of words in the string
    '''))
    if choice == 1:
        n = input("Enter a string : ")
        l = len(n)
        print("The length of the string is =", l)
    elif choice == 2:
        n1 = input("Enter 1st string : ")
        n2 = input("Enter 2nd string : ")
        n3 = input("Enter 3rd string : ")
        print("The maximum of the three strings is =", max(n1, n2, n3))
    elif choice == 3:
        n = input("Enter a string : ")
        s = ""
        m = list(n)
        for i in range(len(n)):
            if (i % 2 != 0):
                if m[i] == " ":
                    m[i] = " "
                else:
                    m[i] = "#"
        for x in m:
            s = s + x
        print("Every successive character in the string is replaced with #:", s)
    elif choice == 4:
        n = input("Enter a string : ")
        c = 0
        for y in n:
            if y == " ":
                c += 1
        w = c + 1
        print("The number of words = ", w)
    else:
        print("Please Choose from the given option only")

string()
```
***
```
# Install a pip package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install python-slugify
!{sys.executable} -m pip install bs4
!{sys.executable} -m pip install lxml
import requests, random, logging, urllib.request, json
from bs4 import BeautifulSoup
from tqdm import tqdm
from slugify import slugify
logging.basicConfig(filename='app.log', filemode='w', format='%(name)s - %(levelname)s - %(message)s')
url = 'https://www.investopedia.com/financial-term-dictionary-4769738'
master_links = []
page = urllib.request.urlopen(url).read().decode('utf8','ignore')
soup = BeautifulSoup(page,"lxml")
for link in soup.find_all('a', {'class': 'terms-bar__link mntl-text-link'}, href=True):
    master_links.append(link.get('href'))

master_links = master_links[0:27]
with open('URL_INDEX_BY_ALPHA.txt', 'w') as f:
    for item in master_links:
        f.write("%s\n" % item)

list_alpha = []
for articleIdx in master_links:
    page = urllib.request.urlopen(articleIdx).read().decode('utf8', 'ignore')
    soup = BeautifulSoup(page, "lxml")
    for link in soup.find_all('a', {'class': 'dictionary-top300-list__list mntl-text-link'}, href=True):
        list_alpha.append(link.get('href'))

with open('FULL_URL_INDEX.txt', 'w') as f:
    for item in list_alpha:
        f.write("%s\n" % item)
logf = open("error.log", "w")
# for article in tqdm(random.sample(list_alpha, 10)):
data = {} #json file
for article in tqdm(list_alpha):
    list_related = []
    body = []
    try:
        page = urllib.request.urlopen(article, timeout=3).read().decode('utf8', 'ignore')
        soup = BeautifulSoup(page, "lxml")
        myTags = soup.find_all('p', {'class': 'comp mntl-sc-block finance-sc-block-html mntl-sc-block-html'})
        for link in soup.find_all('a', {'class': 'related-terms__title mntl-text-link'}, href=True):
            list_related.append(link.get('href'))
        title = slugify(soup.find('title').get_text(strip=True)) + '.json'
        data['name'] = soup.find('title').get_text(strip=True)
        data['@id'] = article
        data['related'] = list_related
        post = ''
        for tag in myTags:
            # body.append(str(tag.get_text(strip=True).encode('utf8', errors='replace')))  # get text content
            body.append(tag.decode_contents())  # get html content
        f = 'data/' + title
        data['body'] = body
        w = open(f, 'w')
        w.write(json.dumps(data))
        w.close()
    except:
        logf.write("Failed to extract: {0}\n".format(str(article)))
        logging.error("Exception occurred", exc_info=True)
    finally:
        pass
```
# create RDF from JSON
```
# Install a pip package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install rdflib
import os, json, rdflib
from rdflib import URIRef, BNode, Literal, Namespace, Graph
from rdflib.namespace import CSVW, DC, DCAT, DCTERMS, DOAP, FOAF, ODRL2, ORG, OWL, \
PROF, PROV, RDF, RDFS, SDO, SH, SKOS, SOSA, SSN, TIME, \
VOID, XMLNS, XSD
path_to_json = 'data/'
json_files = [pos_json for pos_json in os.listdir(path_to_json) if pos_json.endswith('.json')]
for link in json_files:
    with open('data/' + link) as f:
        data = json.load(f)

    # create RDF graph
    g = Graph()
    INVP = Namespace("https://www.investopedia.com/vocab/")
    g.bind("rdfs", RDFS)
    g.bind("schema", SDO)
    g.bind("invp", INVP)
    rdf_content = ''
    termS = URIRef(data['@id'])
    g.add((termS, RDF.type, INVP.Term))
    g.add((termS, SDO.url, termS))
    g.add((termS, RDFS.label, Literal(data['name'])))
    for rel in data['related']:
        g.add((termS, INVP.relates_to, URIRef(rel)))
    separator = ''
    content = separator.join(data['body'])
    g.add((termS, INVP.description, Literal(content)))
    output = '# ' + data['name'] + '\n' + g.serialize(format="turtle").decode("utf-8")
    w = open('rdf/' + link.replace('.json', '') + '.ttl', 'wb')
    w.write(output.encode("utf8"))
    w.close()
```
***
```
from datetime import datetime
from IPython.display import display, HTML, clear_output
import ipywidgets as widgets
from ipywidgets import Checkbox, Box, Dropdown, fixed, interact, interactive, interact_manual, Label, Layout, Textarea
# Override ipywidgets styles
display(HTML('''
<style>
textarea {min-height: 110px !important;}
/* Allow extra-long labels */
.widget-label {min-width: 25ex !important; font-weight: bold;}
.widget-checkbox {padding-right: 300px !important;}
/* Classes for toggling widget visibility */
.hidden {display: none;}
.visible {display: flex;}
</style>
'''))
class ConfigForm(object):
# Define widgets
path_caption_msg = '''
<h4>The path should be entire path after the <code>collection</code> name, including node type,
sub-branches, and <code>_id</code>.<br>For example, <code>,Metadata,sub-branch,node_id,</code>.</h4>
'''
path_caption = widgets.HTML(value=path_caption_msg)
caption = widgets.HTML(value='<h3>Configure Your Action</h3>')
button = widgets.Button(description='Submit')
action = widgets.Dropdown(options=['Insert', 'Delete', 'Display', 'Update'], value='Insert')
collection = widgets.Text(value='')
datatype = widgets.RadioButtons(options=['Metadata', 'Outputs', 'Related', 'Generic'], value='Metadata')
path = widgets.Text(value='')
title = widgets.Text(value='')
altTitle = widgets.Text(value='')
description = widgets.Textarea(value='')
date = widgets.Textarea(placeholder='List each date on a separate line.\nDate ranges should separate start and end dates with a comma.\nValid formats are YYYY-MM-DD and YYYY-MM-DDTHH:MM:SS.')
label = widgets.Text(value='')
precise = widgets.Checkbox(value=False)
notes = widgets.Textarea(placeholder='List each note on a separate line.')
# Configure widget layout
flex = Layout(display='flex', flex_flow='row', justify_content='space-between')
# Assemble widgets in Boxes
form_items = [
Box([Label(value='Action:'), action], layout=flex),
Box([Label(value='Collection: (required)'), collection], layout=flex),
Box([Label(value='Node Type:'), datatype], layout=flex),
Box([Label(value='Path: (required)'), path], layout=flex),
Box([Label(value='title (optional):'), title], layout=flex),
Box([Label(value='altTitle (optional):'), altTitle], layout=flex),
Box([Label(value='description (optional):'), description], layout=flex),
Box([Label(value='date (optional):'), date], layout=flex),
Box([Label(value='label (optional):'), label], layout=flex),
Box([Label(value='notes (optional):'), notes], layout=flex)
]
# Initialise the class object
def __init__(self, object):
self.path_caption = ConfigForm.path_caption
self.caption = ConfigForm.caption
self.button = ConfigForm.button
self.action = ConfigForm.form_items[0]
self.collection = ConfigForm.form_items[1]
self.datatype = ConfigForm.form_items[2]
self.path = ConfigForm.form_items[3]
self.title = ConfigForm.form_items[4]
self.altTitle = ConfigForm.form_items[5]
self.description = ConfigForm.form_items[6]
self.date = ConfigForm.form_items[7]
self.label = ConfigForm.form_items[8]
self.notes = ConfigForm.form_items[9]
# Helper method
def show(widgets, all_widgets):
for widget in all_widgets:
if widget in widgets:
widget.add_class('visible').remove_class('hidden')
else:
widget.remove_class('visible').add_class('hidden')
# Modify widget visibility when the form is changed
def change_visibility(change):
box = self.form
collection_widget = box.children[1]
datatype_widget = box.children[2]
path_widget = box.children[3]
global_widgets = [box.children[4], box.children[5], box.children[6], box.children[7], box.children[8], box.children[9]]
all_widgets = [collection_widget, datatype_widget, path_widget] + global_widgets
generic_widgets = {
'Insert': [collection_widget, datatype_widget, path_widget] + global_widgets,
'Update': [collection_widget, datatype_widget, path_widget] + global_widgets,
'Display': [collection_widget, datatype_widget, path_widget],
'Delete': [collection_widget, datatype_widget, path_widget]
}
nongeneric_widgets = {
'Insert': [collection_widget, datatype_widget] + global_widgets,
'Update': [collection_widget, datatype_widget] + global_widgets,
'Display': [collection_widget, datatype_widget],
'Delete': [collection_widget, datatype_widget]
}
if str(ConfigForm.datatype.value) == 'Generic':
widget_list = generic_widgets
self.path_caption.remove_class('hidden')
else:
widget_list = nongeneric_widgets
self.path_caption.add_class('hidden')
active = str(ConfigForm.action.value)
if active == 'Insert':
show(widget_list['Insert'], all_widgets)
elif active == 'Delete':
show(widget_list['Delete'], all_widgets)
elif active == 'Update':
show(widget_list['Update'], all_widgets)
else:
show(widget_list['Display'], all_widgets)
def split_list(str):
seq = str.split('\n')
for key, value in enumerate(seq):
seq[key] = value.strip()
return seq
# Ensure that a date is correctly formatted and start dates precede end dates
def check_date_format(date):
items = date.replace(' ', '').split(',')
for item in items:
if len(item) > 10:
format = '%Y-%m-%dT%H:%M:%S'
else:
format = '%Y-%m-%d'
try:
if item != datetime.strptime(item, format).strftime(format):
raise ValueError
return True
except ValueError:
print('The date value ' + item + ' is in an incorrect format. Use YYYY-MM-DD or YYYY-MM-DDTHH:MM:SS.')
if len(items) == 2:
try:
assert items[1] > items[0]
except:
print('Your end date ' + items[1] + ' must be after your start date ' + items[0] + '.')
# Transform textarea with dates to a valid schema array
def process_dates(datefield):
date = datefield.strip().split('\n')
new_date = []
d = {}
# Validate all the date formats individually
for item in date:
check_date_format(item)
# Determine if the datelist is a mix of normal and precise dates
contains_precise = []
for item in date:
if ',' in item:
start, end = item.replace(' ', '').split(',')
if len(start) > 10 or len(end) > 10:
contains_precise.append('precise')
else:
contains_precise.append('normal')
elif len(item) > 10:
contains_precise.append('precise')
else:
contains_precise.append('normal')
# Handle a mix of normal and precise dates
if 'normal' in contains_precise and 'precise' in contains_precise:
d['normal'] = []
d['precise'] = []
for item in date:
if ',' in item:
start, end = item.replace(' ', '').split(',')
if len(start) > 10 or len(end) > 10:
d['precise'].append({'start': start, 'end': end})
else:
d['normal'].append({'start': start, 'end': end})
else:
if len(item) > 10:
d['precise'].append(item)
else:
d['normal'].append(item)
new_date.append(d)
# Handle only precise dates
elif 'precise' in contains_precise:
d['precise'] = []
for item in date:
if ',' in item:
start, end = item.replace(' ', '').split(',')
d['precise'].append({'start': start, 'end': end})
else:
d['precise'].append(item)
new_date.append(d)
# Handle only normal dates
else:
for item in date:
if ',' in item:
start, end = item.replace(' ', '').split(',')
new_date.append({'start': start, 'end': end})
else:
new_date.append(item)
return new_date
def handle_submit(values):
# Save the form values in a dict
self.values = {'action': ConfigForm.action.value}
self.values['datatype'] = ConfigForm.datatype.value
self.values['collection'] = ConfigForm.collection.value.strip()
self.values['title'] = ConfigForm.title.value
self.values['altTitle'] = ConfigForm.altTitle.value
self.values['date'] = process_dates(ConfigForm.date.value)
self.values['description'] = ConfigForm.description.value
self.values['label'] = ConfigForm.label.value
self.values['notes'] = split_list(ConfigForm.notes.value)
if ConfigForm.action.value == 'Insert' or ConfigForm.action.value == 'Update':
if ConfigForm.datatype.value == 'Generic':
self.values['path'] = ConfigForm.path.value.strip()
self.values['path'] = ',' + self.values['path'].strip(',') + ','
clear_output()
print('Configuration saved. Values will be available in the cells below.')
print(self.values)
# Initialise widgets in the container Box
self.form = Box([self.action, self.datatype, self.collection, self.path, self.title,
self.altTitle, self.date, self.description, self.label, self.notes],
layout=Layout(
display='flex',
flex_flow='column',
border='solid 2px',
align_items='stretch',
width='50%'))
# Modify the CSS and set up some helper variables
box = self.form
box.children[2].add_class('hidden')
action_field = box.children[0].children[1]
nodetype_field = box.children[1].children[1]
# Display the form and watch for changes
display(self.caption)
display(self.path_caption)
display(box)
display(self.button)
self.path_caption.add_class('hidden')
nodetype_field.observe(change_visibility)
action_field.observe(change_visibility)
self.button.on_click(handle_submit)
# Instantiate the form - values accessible with e.g. config.values['action']
config = ConfigForm(object)
```
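The round-trip validation idea used in `check_date_format` above can be sketched in isolation: a string is accepted only if parsing it with `datetime.strptime` and re-formatting it reproduces the input exactly (the helper name below is illustrative):

```python
from datetime import datetime

def is_valid_date(item):
    """Return True if item is a valid YYYY-MM-DD or YYYY-MM-DDTHH:MM:SS string."""
    # Strings longer than 10 characters must carry a time component
    fmt = '%Y-%m-%dT%H:%M:%S' if len(item) > 10 else '%Y-%m-%d'
    try:
        # The round trip fails for out-of-range or mis-formatted values
        return item == datetime.strptime(item, fmt).strftime(fmt)
    except ValueError:
        return False

print(is_valid_date('2020-10-17'))           # True
print(is_valid_date('2020-10-17T09:30:00'))  # True
print(is_valid_date('17/10/2020'))           # False
```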
***
```
import numpy as np
class Regularizations:
class L2:
@staticmethod
def reg(a):
return np.mean(a**2)
@staticmethod
def reg_prime(a):
return 2 * a / a.size
class Losses:
class MSE:
@staticmethod
def loss(y_true, y_pred):
return np.mean(np.power(y_true-y_pred, 2))
@staticmethod
def loss_prime(y_true, y_pred):
# print(y_true.shape, y_pred.shape)
return 2*(y_pred-y_true) / y_true.size
class Activations: # alphabet order
class activation:
@staticmethod
def activation(a):
return a
@staticmethod
def activation_prime(a):
return 1
class ReLU(activation):
@staticmethod
def activation(a):
return np.maximum(a, 0)
@staticmethod
def activation_prime(a):
ret = np.array(1 * (a > 0))
return ret
class sigmoid(activation):
@staticmethod
def activation(a):
return 1 / (1 + np.exp(-a))
@staticmethod
def activation_prime(a):
    s = 1 / (1 + np.exp(-a))  # the derivative is evaluated at the layer input
    return s * (1 - s)
class softmax(activation):
@staticmethod
def activation(a):
exp = np.exp(a)
return exp / (0.0001 + np.sum(exp, axis=0))  # small constant guards against division by zero
@staticmethod
def activation_prime(a):
t = np.eye(N=a.shape[0], M=a.shape[1])
return t * a * (1 - a) - (1 - t) * a * a
class stable_softmax(activation):
@staticmethod
def activation(a):
    a = a - np.max(a, axis=1, keepdims=True)  # subtract the row max for numerical stability
    exp = np.exp(a)
    return exp / np.sum(exp, axis=1, keepdims=True)
# derivative not implemented
class tanh(activation):
@staticmethod
def activation(a):
return np.tanh(a)
@staticmethod
def activation_prime(a):
return 1 - np.tanh(a)**2
class Layers:
class DummyLayer:
def __init__(self):
self.input_shape = None
self.output_shape = None
def forward_pass(self, input):
raise NotImplementedError
def backward_pass(self, output):
raise NotImplementedError
class Dense(DummyLayer):
def __init__(self, input_shape=None, output_shape=None, learning_rate=None, reg_const=None, reg_type=None):
super().__init__()
self.input_shape = input_shape
self.output_shape = output_shape
self.input = None
self.output = None
self.learning_rate = learning_rate
self.reg_const = reg_const
self.reg_type = reg_type
if self.reg_type is None:
self.reg_function = None
self.reg_prime = None
else:
self.reg_function = reg_type.reg
self.reg_prime = reg_type.reg_prime
self.features_weights = np.random.rand(input_shape, output_shape) - 0.5
self.bias_weights = np.random.rand(1, output_shape) - 0.5
self.learnable = True
def forward_pass(self, input):
self.input = input
self.output = input @ self.features_weights + self.bias_weights
return self.output
def backward_pass(self, output_error):
input_error = output_error @ self.features_weights.T
weights_error = self.input.T @ output_error + self.reg_const * self.reg_prime(self.features_weights)
bias_error = np.sum(output_error, axis=0)
self.features_weights -= self.learning_rate * weights_error
self.bias_weights -= self.learning_rate * bias_error
return input_error
class Activation(DummyLayer):
def __init__(self, activation_type=Activations.tanh):
super().__init__()
self.activation = activation_type.activation
self.activation_prime = activation_type.activation_prime
self.input = None
self.output = None
self.learnable = False
def forward_pass(self, input):
self.input = input
self.output = self.activation(self.input)
return self.output
def backward_pass(self, output):
return self.activation_prime(self.input) * output
class NeuralNetwork:
def __init__(self, layers, default_learning_rate=0.01, default_reg_const=0.01, reg_type=Regularizations.L2, loss_class=Losses.MSE):
self.layers = []
for layer in layers:
if layer.learnable:
if layer.learning_rate is None:
layer.learning_rate = default_learning_rate
if layer.reg_const is None:
layer.reg_const = default_reg_const
if layer.reg_type is None:
layer.reg_function = reg_type.reg
layer.reg_prime = reg_type.reg_prime
self.layers.append(layer)
self.loss = loss_class.loss
self.loss_prime = loss_class.loss_prime
def fit(self, X, y, cnt_epochs=10, cnt_it=10000): # add optimizer
it_for_epoch = cnt_it // cnt_epochs
for i in range(cnt_epochs):
for j in range(it_for_epoch):
output = X
for layer in self.layers:
output = layer.forward_pass(output)
error = self.loss_prime(y, output)
for layer in reversed(self.layers):
error = layer.backward_pass(error)
print('epoch %d/%d error=%f' % (i+1, cnt_epochs, self.loss(y, self.predict(X))))
def predict(self, X):
output = X
for layer in self.layers:
output = layer.forward_pass(output)
return output
X = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
y = np.array([[0],
[1],
[1],
[0]])
print(X.shape, y.shape)
nn = NeuralNetwork([
Layers.Dense(2, 2),
Layers.Activation(activation_type=Activations.tanh),
Layers.Dense(2, 1),
Layers.Activation(activation_type=Activations.tanh)
], default_learning_rate=0.01, default_reg_const=0)
nn.fit(X, y, cnt_epochs=10, cnt_it=30000)
nn.predict(X)
# for mnist in colab
!wget https://raw.githubusercontent.com/yandexdataschool/Practical_DL/35c067adcc1ab364c8803830cdb34d0d50eea37e/week01_backprop/util.py -O util.py
!wget https://raw.githubusercontent.com/yandexdataschool/Practical_DL/35c067adcc1ab364c8803830cdb34d0d50eea37e/week01_backprop/mnist.py -O mnist.py
import mnist
from mnist import load_dataset
X_train, y_train, X_val, y_val, X_test, y_test = load_dataset(flatten=True)
a = y_train
b = np.zeros((a.size, a.max()+1))
b[np.arange(a.size),a] = 1
b.shape
y_train_prepr = b
nn = NeuralNetwork([
Layers.Dense(28*28, 100),
Layers.Activation(activation_type=Activations.ReLU),
Layers.Dense(100, 200),
Layers.Activation(activation_type=Activations.ReLU),
Layers.Dense(200, 10),
], default_learning_rate=0.01)
nn.fit(X_train, y_train_prepr, cnt_epochs=100, cnt_it=1000)
val = np.argmax(nn.predict(X_train), axis=1)
print(y_train[0], val[0])
cnt = 0
for i in range(val.shape[0]):
if val[i] == y_train[i]:
cnt += 1
print(cnt / val.shape[0])
```
***
# Introduction to Python
> Defining Functions with Python
Kuo, Yao-Jen
## TL; DR
> In this lecture, we will talk about defining functions with Python.
## Encapsulations
## What is encapsulation?
> Encapsulation refers to one of two related but distinct notions, and sometimes to the combination thereof:
> 1. A language mechanism for restricting direct access to some of the object's components.
> 2. A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data.
Source: <https://en.wikipedia.org/wiki/Encapsulation_(computer_programming)>
## Why encapsulation?
As our codes piled up, we need a mechanism making them:
- more structured
- more reusable
- more scalable
## Python provides several tools for programmers organizing their codes
- Functions
- Classes
- Modules
- Libraries
## How do we decide which tool to adopt?
Simply put, that depends on **scale** and project spec.
## These components are mixed and matched with great flexibility
- A couple lines of code assembles a function
- A couple of functions assembles a class
- A couple of classes assembles a module
- A couple of modules assembles a library
- A couple of libraries assembles a larger library
## Codes, assemble!

Source: <https://giphy.com/>
## Functions
## What is a function
> A function is a named sequence of statements that performs a computation, either mathematical, symbolic, or graphical. When we define a function, we specify the name and the sequence of statements. Later, we can call the function by name.
## Besides built-in functions or library-powered functions, we sometimes need to self-define our own functions
- `def` the name of our function
- `return` the output of our function
```python
def function_name(INPUTS, ARGUMENTS, ...):
    """
    docstring: print documentation when help() is called
    """
    # sequence of statements
    return OUTPUTS
```
## The principle of designing of a function is about mapping the relationship of inputs and outputs
- The one-on-one relationship
- The many-on-one relationship
- The one-on-many relationship
- The many-on-many relationship
## The one-on-one relationship
Using scalar as input and output.
```
def absolute(x):
    """
    Return the absolute value of x.
    """
    if x >= 0:
        return x
    else:
        return -x
```
## Once the function is defined, call as if it is a built-in function
```
help(absolute)
print(absolute(-5566))
print(absolute(5566))
print(absolute(0))
```
## The many-on-one relationship
- Using scalars or structures for fixed inputs
- Using `*args` or `**kwargs` for flexible inputs
## Using scalars for fixed inputs
```
def product(x, y):
    """
    Return the product of x and y.
    """
    return x*y
print(product(5, 6))
```
## Using structures for fixed inputs
```
def product(x):
    """
    x: an iterable.
    Return the product of the values of x.
    """
    prod = 1
    for i in x:
        prod *= i
    return prod
print(product([5, 5, 6, 6]))
```
## Using `*args` for flexible inputs
- As in flexible arguments
- Getting flexible `*args` as a `tuple`
```
def plain_return(*args):
    """
    Return args.
    """
    return args
print(plain_return(5, 5, 6, 6))
```
## Using `**kwargs` for flexible inputs
- As in keyword arguments
- Getting flexible `**kwargs` as a `dict`
```
def plain_return(**kwargs):
    """
    Return kwargs.
    """
    return kwargs
print(plain_return(TW='Taiwan', JP='Japan', CN='China', KR='South Korea'))
```
## The one-on-many relationship
- Using default `tuple` with comma
- Using preferred data structure
## Using default `tuple` with comma
```
def as_integer_ratio(x):
    """
    Return x as an integer ratio.
    """
    x_str = str(x)
    int_part = int(x_str.split(".")[0])
    decimal_part = x_str.split(".")[1]
    n_decimal = len(decimal_part)
    denominator = 10**(n_decimal)
    numerator = int(decimal_part)
    while numerator % 2 == 0 and denominator % 2 == 0:
        denominator /= 2
        numerator /= 2
    while numerator % 5 == 0 and denominator % 5 == 0:
        denominator /= 5
        numerator /= 5
    final_numerator = int(int_part*denominator + numerator)
    final_denominator = int(denominator)
    return final_numerator, final_denominator
print(as_integer_ratio(3.14))
print(as_integer_ratio(0.56))
```
## Using preferred data structure
```
def as_integer_ratio(x):
    """
    Return x as an integer ratio.
    """
    x_str = str(x)
    int_part = int(x_str.split(".")[0])
    decimal_part = x_str.split(".")[1]
    n_decimal = len(decimal_part)
    denominator = 10**(n_decimal)
    numerator = int(decimal_part)
    while numerator % 2 == 0 and denominator % 2 == 0:
        denominator /= 2
        numerator /= 2
    while numerator % 5 == 0 and denominator % 5 == 0:
        denominator /= 5
        numerator /= 5
    final_numerator = int(int_part*denominator + numerator)
    final_denominator = int(denominator)
    integer_ratio = {
        'numerator': final_numerator,
        'denominator': final_denominator
    }
    return integer_ratio
print(as_integer_ratio(3.14))
print(as_integer_ratio(0.56))
```
## The many-on-many relationship
A mix-and-match of the one-on-many and many-on-one relationships.
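A minimal sketch of such a function (the names are illustrative): it takes flexible inputs via `*args` and returns several outputs at once as a `dict`.

```python
def summarize(*args):
    """
    Accept any number of values (many inputs) and return
    several summary statistics at once (many outputs).
    """
    total = sum(args)
    return {
        'count': len(args),
        'sum': total,
        'mean': total / len(args)
    }

print(summarize(5, 5, 6, 6))  # {'count': 4, 'sum': 22, 'mean': 5.5}
```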
## Handling errors
## Coding mistakes are common; they happen all the time

## How does a function designer handle errors?
Python mistakes come in three basic flavors:
- Syntax errors
- Runtime errors
- Semantic errors
## Syntax errors
Errors where the code is not valid Python (generally easy to fix).
```
# Python does not need curly braces to create a code block
for (i in range(10)) {print(i)}
```
## Runtime errors
Errors where syntactically valid code fails to execute, perhaps due to invalid user input (sometimes easy to fix)
- `NameError`
- `TypeError`
- `ZeroDivisionError`
- `IndexError`
- ...etc.
```
print('5566'[4])
```
## Semantic errors
Errors in logic: code executes without a problem, but the result is not what you expect (often very difficult to identify and fix)
```
def product(x):
    """
    x: an iterable.
    Return the product of the values in x.
    """
    prod = 0  # semantic error: the accumulator should start at 1, not 0
    for i in x:
        prod *= i
    return prod

print(product([5, 5, 6, 6]))  # expecting 900, but this prints 0
```
## Using `try` and `except` to catch exceptions
```python
try:
    # sequence of statements if everything is fine
except TYPE_OF_ERROR:
    # sequence of statements if something goes wrong
```
```
try:
    exec("""for (i in range(10)) {print(i)}""")
except SyntaxError:
    print("Encountering a SyntaxError.")

try:
    print('5566'[4])
except IndexError:
    print("Encountering an IndexError.")

try:
    print(5566 / 0)
except ZeroDivisionError:
    print("Encountering a ZeroDivisionError.")

# it is optional to specify the type of error
try:
    print(5566 / 0)
except:
    print("Encountering some unspecified error.")
```
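When the error message itself matters, the caught exception can be bound to a name with `as`, a common pattern worth knowing (a small sketch):

```python
try:
    print(5566 / 0)
except ZeroDivisionError as e:
    # e carries the message Python attached to the error
    print("Encountering a ZeroDivisionError: {}".format(e))
```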
## Scope
## When it comes to defining functions, it is vital to understand the scope of a variable
## What is scope?
> In computer programming, the scope of a name binding, an association of a name to an entity, such as a variable, is the region of a computer program where the binding is valid.
Source: <https://en.wikipedia.org/wiki/Scope_(computer_science)>
## Simply put, once we define our own function, the programming environment is split into two scopes:
- Global
- Local
## A variable declared within the indented block of a function is a local variable; it is only valid inside the `def` block
```
def check_odd_even(x):
    mod = x % 2  # local variable, declared inside the def block
    if mod == 0:
        return '{} is an even number.'.format(x)
    else:
        return '{} is an odd number.'.format(x)

print(check_odd_even(0))
print(x)    # NameError: x only exists inside the function
print(mod)  # NameError: mod is a local variable
```
## A variable declared outside of the indented block of a function is a global variable; it is valid everywhere
```
x = 0
mod = x % 2  # global variables, declared outside any def block

def check_odd_even():
    if mod == 0:
        return '{} is an even number.'.format(x)
    else:
        return '{} is an odd number.'.format(x)

print(check_odd_even())
print(x)
print(mod)
```
## Although global variables look quite convenient, it is HIGHLY recommended NOT to use them directly inside an indented function block.
# DATA512 A1: Data Curation
How does English Wikipedia page traffic trend over time? To answer this question, we plot Wikimedia traffic data spanning from 1 January 2008 to 30 September 2018. We combine data from two Wikimedia APIs, each covering a different period of time, so that we can show the trend over 10 years.
The first API is [Legacy Pagecounts](https://wikitech.wikimedia.org/wiki/Analytics/AQS/Legacy_Pagecounts), with aggregated data from January 2008 to July 2016. The second API is [Pageviews](https://wikitech.wikimedia.org/wiki/Analytics/AQS/Pageviews), with aggregated data starting from May 2015. Besides the difference in periods for which data is available, there is a quality difference between the two APIs: Pagecounts did not differentiate between traffic generated by a person or by automated robots and web crawlers. The new Pageviews API does make this differentiation and lets us count traffic from agents not identified as web spiders.
The data acquired from these two API endpoints are available in the public domain under the CC0 1.0 license, according to Wikimedia's [RESTBase](https://wikimedia.org/api/rest_v1/) documentation. Use of these APIs is subject to [Wikimedia's terms and conditions](https://www.mediawiki.org/wiki/REST_API#Terms_and_conditions).
## Data dictionary
Cleaned data from the Wikimedia API are saved to `en-wikipedia_traffic_200712-201809.csv` with the following fields:
| Column Name             | Description                    | Format  |
|-------------------------|--------------------------------|---------|
| year                    | 4-digit year of the period     | YYYY    |
| month                   | 2-digit month of the period    | MM      |
| pagecount_all_views     | Total views (Pagecounts API)   | integer |
| pagecount_desktop_views | Desktop views (Pagecounts API) | integer |
| pagecount_mobile_views  | Mobile views (Pagecounts API)  | integer |
| pageview_all_views      | Total views (Pageviews API)    | integer |
| pageview_desktop_views  | Desktop views (Pageviews API)  | integer |
| pageview_mobile_views   | Mobile views (Pageviews API)   | integer |
## Data acquisition
In this section, we download data from the Wikimedia APIs and save the raw response JSON object to the raw data folder.
```
import os, errno
import json

import requests

# API rules require setting a unique User-Agent or Api-User-Agent
# that makes it easy for the API caller to be contacted.
HEADERS = {
    'Api-User-Agent': 'https://github.com/EdmundTse/data-512-a1'
}

def api_call(endpoint, parameters):
    """Makes a web request to the supplied Wikimedia API endpoint."""
    call = requests.get(endpoint.format(**parameters), headers=HEADERS)
    response = call.json()
    return response

# Create the raw data folder, if it doesn't already exist.
# https://stackoverflow.com/questions/273192/
RAW_DATA_DIR = "data_raw"
try:
    os.makedirs(RAW_DATA_DIR)
    print("Created a folder named '%s'." % RAW_DATA_DIR)
except OSError as e:
    if e.errno != errno.EEXIST:
        raise
```
First, collect data from the legacy API for all of the months where data is available. The API call requests data for the entire period of interest, and the API should return the months where data is available.
```
ENDPOINT_LEGACY = 'https://wikimedia.org/api/rest_v1/metrics/legacy/' + \
    'pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'

access_types = ["all-sites", "desktop-site", "mobile-site"]
pagecounts_data_files = []
for access in access_types:
    filename = 'pagecounts_%s_200801-201809.json' % access
    path = os.path.join(RAW_DATA_DIR, filename)
    pagecounts_data_files.append(path)

    # Don't re-download if we already have the data.
    if os.path.exists(path):
        print("Skipped download of pagecounts '%s' data since %s already exists." % (access, filename))
        continue

    params = {
        "project": "en.wikipedia",
        "access-site": access,
        "granularity": "monthly",
        "start": "2008010100",
        # In monthly granularity, the end value is exclusive, so we use
        # the first day of the month after the final month of data.
        "end": "2018100100"
    }
    pageviews = api_call(ENDPOINT_LEGACY, params)
    with open(path, 'w') as f:
        json.dump(pageviews, f)
    print("Pagecounts '%s' data saved to %s." % (access, filename))
```
Similarly, collect data from the new Pageviews API for all of the months where data is available.
```
ENDPOINT_PAGEVIEWS = 'https://wikimedia.org/api/rest_v1/metrics/' + \
    'pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'

access_types = ["all-access", "desktop", "mobile-app", "mobile-web"]
pageviews_data_files = []
for access in access_types:
    filename = 'pageviews_%s_200801-201809.json' % access
    path = os.path.join(RAW_DATA_DIR, filename)
    pageviews_data_files.append(path)

    # Don't re-download if we already have the data.
    if os.path.exists(path):
        print("Skipping download of pageviews '%s' data since %s already exists." % (access, filename))
        continue

    params = {
        "project": "en.wikipedia",
        "access": access,
        "agent": "user",  # Ignore traffic generated by spiders
        "granularity": "monthly",
        "start": "2008010100",
        # Unlike the legacy API, the end value here is inclusive,
        # so we use the last day of the final month of data we need.
        "end": "2018093000"
    }
    pageviews = api_call(ENDPOINT_PAGEVIEWS, params)
    with open(path, 'w') as f:
        json.dump(pageviews, f)
    print("Pageviews '%s' data downloaded to %s." % (access, filename))
```
## Data processing
In this step, we combine the data from both APIs
```
import pandas as pd
from pandas.io.json import json_normalize

# Create the cleaned data folder, if it doesn't already exist.
CLEAN_DATA_DIR = "data_clean"
try:
    os.makedirs(CLEAN_DATA_DIR)
    print("Created a folder named '%s'." % CLEAN_DATA_DIR)
except OSError as e:
    if e.errno != errno.EEXIST:
        raise
```
First, process the data from the legacy pagecounts API.
```
# Read and combine all of the downloaded pagecounts data into one dataframe
dataframes = []
for file in pagecounts_data_files:
    with open(file) as f:
        data = json.load(f)
    df = json_normalize(data, record_path="items")
    dataframes.append(df)

# Concatenate the dataframes and adapt to the desired format
df_pagecounts = pd.concat(dataframes)
df_pagecounts.loc[:, "source"] = df_pagecounts['access-site'] \
    .replace({
        "all-sites": "pagecount_all_views",
        "desktop-site": "pagecount_desktop_views",
        "mobile-site": "pagecount_mobile_views"
    })
df_pagecounts.loc[:, 'date'] = pd.to_datetime(df_pagecounts.timestamp, format='%Y%m%d00')
df_pagecounts = df_pagecounts.drop(['access-site', 'granularity', 'project', 'timestamp'], axis=1)
df_pagecounts = df_pagecounts.rename(index=str, columns={'count': 'views'})
df_pagecounts.tail()
```
We see that there is an entry for the month of August 2016, and it is about an order of magnitude smaller than previous months. The API documentation states that data will never be updated after July 2016, so the value we see for August 2016 might be from the first few days of the month, while data processing was being shut down. For our purposes, we will remove all entries after July 2016.
```
df_pagecounts = df_pagecounts[df_pagecounts.date < pd.Timestamp(2016, 8, 1)]
```
Now, process the data from the Pageviews API. Not only does this API allow us to request only organic user traffic, it also splits mobile views into mobile app and mobile web. We shall add these two values together to produce a single mobile views number.
```
# Read and combine all of the downloaded pageviews data into one dataframe
dataframes = []
for file in pageviews_data_files:
    with open(file) as f:
        data = json.load(f)
    df = json_normalize(data, record_path="items")
    dataframes.append(df)

# Concatenate the dataframes and adapt to the desired format
df_pageviews = pd.concat(dataframes)
df_pageviews.loc[:, "source"] = df_pageviews['access'] \
    .replace({
        "all-access": "pageview_all_views",
        "desktop": "pageview_desktop_views",
        "mobile-app": "pageview_mobile_views",  # Combine mobile-app and mobile-web
        "mobile-web": "pageview_mobile_views"   # views into mobile views
    })
df_pageviews.loc[:, 'date'] = pd.to_datetime(df_pageviews.timestamp, format='%Y%m%d00')
df_pageviews = df_pageviews.drop(['access', 'agent', 'granularity', 'project', 'timestamp'], axis=1)
df_pageviews.head()
```
With the data from both APIs in tables with common columns, we can now concatenate them and use a pivot table to reshape the result into the required format for outputting to CSV. With a pivot table, we specify that values for the same month and source are combined by taking their sum. This is needed for pageview_mobile_views, since we converted both mobile-app and mobile-web access types into the same 'mobile' type.
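As a toy illustration of why `aggfunc='sum'` matters here (the numbers are made up): two rows that map to the same month and source, like mobile-app and mobile-web views relabelled as mobile, collapse into one summed cell.

```python
import pandas as pd

# Made-up views for one month: two mobile rows (app + web) and one desktop row
toy = pd.DataFrame({
    'date': ['2018-09', '2018-09', '2018-09'],
    'source': ['pageview_mobile_views', 'pageview_mobile_views', 'pageview_desktop_views'],
    'views': [10, 15, 30],
})

# Rows sharing the same (index, column) pair are combined by aggfunc='sum'
pivoted = toy.pivot_table(values='views', index='date', columns='source', aggfunc='sum')
print(pivoted)  # mobile becomes 25 (10 + 15); desktop stays 30
```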
```
df = pd.concat([df_pagecounts, df_pageviews], sort=False)
new_index = pd.DatetimeIndex(df.date).to_period(freq='M')
df = df.pivot_table(values='views', index=new_index, columns='source', aggfunc='sum', fill_value=0)
df.loc[:, 'year'] = df.index.year
df.loc[:, 'month'] = df.index.month
df.head()

# Output the cleaned data in the required format
CLEANED_FILENAME = 'en-wikipedia_traffic_200712-201809.csv'
cleaned_filepath = os.path.join(CLEAN_DATA_DIR, CLEANED_FILENAME)

OUTPUT_COLUMNS = [
    'year',
    'month',
    'pagecount_all_views',
    'pagecount_desktop_views',
    'pagecount_mobile_views',
    'pageview_all_views',
    'pageview_desktop_views',
    'pageview_mobile_views'
]
df.to_csv(cleaned_filepath, columns=OUTPUT_COLUMNS, index=False)
```
## Analysis
To understand how English Wikipedia page traffic trends over time, we will plot each series of data from the old and new APIs, for mobile, desktop, and the total traffic (mobile plus desktop). Because the pageview counts range into the billions, we divide the counts by 1 million to make the numbers shown on the y-axis more readable.
```
# Create the results data folder, if it doesn't already exist.
RESULTS_DIR = "results"
try:
    os.makedirs(RESULTS_DIR)
    print("Created a folder named '%s'." % RESULTS_DIR)
except OSError as e:
    if e.errno != errno.EEXIST:
        raise

# Re-create the pivot table, without filling NA values with zeros, for a cleaner plot
df = pd.concat([df_pagecounts, df_pageviews], sort=False)
df = df.pivot_table(values='views', index=new_index, columns='source', aggfunc='sum')

# Rescale the values into "millions of views" for easier to read numbers
df = df / 1e6
df.head()

%matplotlib inline
import matplotlib.lines as mlines
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

plt.style.use('seaborn-bright')

pagecount_series = ['pagecount_all_views', 'pagecount_desktop_views', 'pagecount_mobile_views']
pageview_series = ['pageview_all_views', 'pageview_desktop_views', 'pageview_mobile_views']

fig, ax = plt.subplots(figsize=(16, 8))
df.loc[:, pageview_series].plot(ax=ax, style='-')
ax.set_prop_cycle(None)  # Reset colour cycling
df.loc[:, pagecount_series].plot(ax=ax, style='--')

linestyle_legend = [
    mlines.Line2D([], [], color='k', linestyle='--', label="'Pagecounts' data from legacy API"),
    mlines.Line2D([], [], color='k', linestyle='-', label="New 'Pageviews' data ignore web crawlers")
]
ax.add_artist(ax.legend(handles=linestyle_legend))
ax.legend(["Total", "Desktop site", "Mobile site"], loc=2)
ax.set_title("English Wikipedia's monthly page views stabilised since 2014")
ax.set_xlabel("Year")
ax.set_ylabel("Monthly page views (millions)")
ax.set_ylim((0, None))  # Make the value axis for page views start from 0

# Save the plot to file
PLOT_FILENAME = 'en-wikipedia_traffic_200712-201809.png'
plot_filepath = os.path.join(RESULTS_DIR, PLOT_FILENAME)
fig.savefig(plot_filepath)
```
## Findings
From the plot, we can see that page traffic for English Wikipedia steadily rose from 2008 until about 2014, when the page traffic became stable at around 8 billion views per month.
Before mid-2014, Wikimedia reported only a single desktop pagecounts number. From then on it separated page views into desktop and mobile, and also began reporting a total for the combination of the two.
The new Pageviews API reported substantially fewer total views than the legacy API. Comparing the two lines during the overlapping period, roughly one tenth of all traffic appears to have been due to web crawlers.
<br><br><font color="gray">DOING COMPUTATIONAL SOCIAL SCIENCE<br>MODULE 10 <strong>PROBLEM SETS</strong></font>
# <font color="#49699E" size=40>MODULE 10 </font>
# What You Need to Know Before Getting Started
- **Every notebook assignment has an accompanying quiz**. Your work in each notebook assignment will serve as the basis for your quiz answers.
- **You can consult any resources you want when completing these exercises and problems**. Just as it is in the "real world:" if you can't figure out how to do something, look it up. My recommendation is that you check the relevant parts of the assigned reading or search for inspiration on [https://stackoverflow.com](https://stackoverflow.com).
- **Each problem is worth 1 point**. All problems are equally weighted.
- **The information you need for each problem set is provided in the blue and green cells.** General instructions / the problem set preamble are in the blue cells, and instructions for specific problems are in the green cells. **You have to execute all of the code in the problem set, but you are only responsible for entering code into the code cells that immediately follow a green cell**. You will also recognize those cells because they will be incomplete. You need to replace each blank `▰▰#▰▰` with the code that will make the cell execute properly (where # is a sequentially-increasing integer, one for each blank).
- Most modules will contain at least one question that requires you to load data from disk; **it is up to you to locate the data, place it in an appropriate directory on your local machine, and replace any instances of the `PATH_TO_DATA` variable with a path to the directory containing the relevant data**.
- **The comments in the problem cells contain clues indicating what the following line of code is supposed to do.** Use these comments as a guide when filling in the blanks.
- **You can ask for help**. If you run into problems, you can reach out to John (john.mclevey@uwaterloo.ca) or Pierson (pbrowne@uwaterloo.ca) for help. You can ask a friend for help if you like, regardless of whether they are enrolled in the course.
Finally, remember that you do not need to "master" this content before moving on to other course materials, as what is introduced here is reinforced throughout the rest of the course. You will have plenty of time to practice and cement your new knowledge and skills.
<div class='alert alert-block alert-danger'>As you complete this assignment, you may encounter variables that can be assigned a wide variety of different names. Rather than forcing you to employ a particular convention, we leave the naming of these variables up to you. During the quiz, submit an answer of 'USER_DEFINED' (without the quotation marks) to fill in any blank that you assigned an arbitrary name to. In most circumstances, this will occur due to the presence of a local iterator in a for-loop.</div>
## Package Imports
```
import pandas as pd
import numpy as np
from numpy.random import seed as np_seed
import graphviz
from graphviz import Source
from pyprojroot import here
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
from sklearn import preprocessing
from sklearn.model_selection import train_test_split, cross_val_score, ShuffleSplit
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.preprocessing import LabelEncoder, LabelBinarizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
import tensorflow as tf
from tensorflow import keras
from tensorflow.random import set_seed
import spacy
from time import time
set_seed(42)
np_seed(42)
```
## Defaults
```
x_columns = [
    # Religion and Morale
    'v54',  # Religious services? - 1=More than Once Per Week, 7=Never
    'v149', # Do you justify: claiming state benefits? - 1=Never, 10=Always
    'v150', # Do you justify: cheating on tax? - 1=Never, 10=Always
    'v151', # Do you justify: taking soft drugs? - 1=Never, 10=Always
    'v152', # Do you justify: taking a bribe? - 1=Never, 10=Always
    'v153', # Do you justify: homosexuality? - 1=Never, 10=Always
    'v154', # Do you justify: abortion? - 1=Never, 10=Always
    'v155', # Do you justify: divorce? - 1=Never, 10=Always
    'v156', # Do you justify: euthanasia? - 1=Never, 10=Always
    'v157', # Do you justify: suicide? - 1=Never, 10=Always
    'v158', # Do you justify: having casual sex? - 1=Never, 10=Always
    'v159', # Do you justify: public transit fare evasion? - 1=Never, 10=Always
    'v160', # Do you justify: prostitution? - 1=Never, 10=Always
    'v161', # Do you justify: artificial insemination? - 1=Never, 10=Always
    'v162', # Do you justify: political violence? - 1=Never, 10=Always
    'v163', # Do you justify: death penalty? - 1=Never, 10=Always

    # Politics and Society
    'v97',  # Interested in Politics? - 1=Interested, 4=Not Interested
    'v121', # How much confidence in Parliament? - 1=High, 4=Low
    'v126', # How much confidence in Health Care System? - 1=High, 4=Low
    'v142', # Importance of Democracy - 1=Unimportant, 10=Important
    'v143', # Democracy in own country - 1=Undemocratic, 10=Democratic
    'v145', # Political System: Strong Leader - 1=Good, 4=Bad
    # 'v208', # How often follow politics on TV? - 1=Daily, 5=Never
    # 'v211', # How often follow politics on Social Media? - 1=Daily, 5=Never

    # National Identity
    'v170', # How proud are you of being a citizen? - 1=Proud, 4=Not Proud
    'v184', # Immigrants: impact on development of country - 1=Bad, 5=Good
    'v185', # Immigrants: take away jobs from Nation - 1=Take, 10=Do Not Take
    'v198', # European Union Enlargement - 1=Should Go Further, 10=Too Far Already
]

y_columns = [
    # Overview
    'country',

    # Socio-demographics
    'v226',     # Year of Birth by respondent
    'v261_ppp', # Household Monthly Net Income, PPP-Corrected
]
```
## Problem 1:
<div class="alert alert-block alert-info">
In this assignment, we're going to continue our exploration of the European Values Survey dataset. By wielding the considerable power of Artificial Neural Networks, we'll aim to create a model capable of predicting an individual survey respondent's country of residence. As with all machine/deep learning projects, our first task will involve loading and preparing the data.
</div>
<div class="alert alert-block alert-success">
Load the EVS dataset and use it to create a feature matrix (using all columns from x_columns) and (with the assistance of Scikit Learn's LabelBinarizer) a target array (representing each respondent's country of residence).
</div>
```
# Load EVS Dataset
df = pd.read_csv(PATH_TO_DATA/"evs_module_08.csv")
# Create Feature Matrix (using all columns from x_columns)
X = df[x_columns]
# Initialize LabelBinarizer
country_encoder = ▰▰1▰▰()
# Fit the LabelBinarizer instance to the data's 'country' column and store transformed array as target
y = country_encoder.▰▰2▰▰(np.array(▰▰3▰▰))
```
## Problem 2:
<div class="alert alert-block alert-info">
As part of your work in the previous module, you were introduced to the concept of the train-validate-test split. Up until now, we had made extensive use of Scikit Learn's preprocessing and cross-validation suites in order to easily get the most out of our data. Since we're using TensorFlow for our Artificial Neural Networks, we're going to have to change course a little: we can still use the <code>train_test_split</code> function, but we must now use it twice: the first iteration will produce our test set and a 'temporary' dataset; the second iteration will split the 'temporary' data into training and validation sets. Throughout this process, we must take pains to ensure that each of the data splits are shuffled and stratified.
</div>
<div class="alert alert-block alert-success">
Create shuffled, stratified splits for testing (10% of original dataset), validation (10% of data remaining from test split), and training (90% of data remaining from test split) sets. Submit the number of observations in the <code>X_valid</code> set, as an integer.
</div>
```
# Split into temporary and test sets
X_t, X_test, y_t, y_test = ▰▰1▰▰(
    ▰▰2▰▰,
    ▰▰3▰▰,
    test_size = ▰▰4▰▰,
    shuffle = ▰▰5▰▰,
    stratify = y,
    random_state = 42
)

# Split into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(
    ▰▰6▰▰,
    ▰▰7▰▰,
    test_size = ▰▰8▰▰,
    shuffle = ▰▰9▰▰,
    stratify = ▰▰10▰▰,
    random_state = 42,
)
len(X_valid)
```
## Problem 3:
<div class="alert alert-block alert-info">
As you work with Keras and Tensorflow, you'll rapidly discover that both packages are very picky about the 'shape' of the data you're using. What's more, you can't always rely on them to correctly infer your data's shape. As such, it's usually a good idea to store the two most important shapes -- number of variables in the feature matrix and number of unique categories in the target -- as explicit, named variables; doing so will save you the trouble of trying to retrieve them later (or as part of your model specification, which can get messy). We'll start with the number of variables in the feature matrix.
</div>
<div class="alert alert-block alert-success">
Store the number of variables in the feature matrix, as an integer, in the <code>num_vars</code> variable. Submit the resulting number as an integer.
</div>
```
# The code we've provided here is just a suggestion; feel free to use any approach you like
num_vars = np.▰▰1▰▰(▰▰2▰▰).▰▰3▰▰[1]
print(num_vars)
```
## Problem 4:
<div class="alert alert-block alert-info">
Now, for the number of categories (a.k.a. labels) in the target.
</div>
<div class="alert alert-block alert-success">
Store the number of categories in the target, as an integer, in the <code>num_labels</code> variable. Submit the resulting number as an integer.
</div>
```
# The code we've provided here is just a suggestion; feel free to use any approach you like
num_labels = ▰▰1▰▰.▰▰2▰▰[1]
print(num_labels)
```
## Problem 5:
<div class="alert alert-block alert-info">
Everything is now ready for us to begin building an Artificial Neural Network! Aside from specifying that the ANN must be built using Keras's <code>Sequential</code> API, we're going to give you the freedom to tackle the creation of your ANN in whichever manner you like. Feel free to use the 'add' method to build each layer one at a time, or pass all of the layers to your model at instantiation as a list, or any other approach you may be familiar with. Kindly ensure that your model matches the specifications below <b>exactly</b>!
</div>
<div class="alert alert-block alert-success">
Using Keras's <code>Sequential</code> API, create a new ANN. Your ANN should have the following layers, in this order:
<ol>
<li> Input layer with one argument: number of variables in the feature matrix
<li> Dense layer with 400 neurons and the "relu" activation function
<li> Dense layer with 10 neurons and the "relu" activation function
<li> Dense layer with neurons equal to the number of labels in the target and the "softmax" activation function
</ol>
Submit the number of hidden layers in your model.
</div>
```
# Create your ANN!
nn_model = keras.models.Sequential()
```
## Problem 6:
<div class="alert alert-block alert-info">
Even though we've specified all of the layers in our model, it isn't yet ready to go. We must first 'compile' the model, during which time we'll specify a number of high-level arguments. Just as in the textbook, we'll go with a fairly standard set of arguments: we'll use Stochastic Gradient Descent as our optimizer, and our only metric will be Accuracy (an imperfect but indispensably simple measure). It'll be up to you to figure out what loss function we should use: you might have to go digging in the textbook to find it!
</div>
<div class="alert alert-block alert-success">
Compile the model according to the specifications outlined in the blue text above. Submit the name of the loss function <b>exactly</b> as it appears in your code (you should only need to include a single underscore -- no other punctuation, numbers, or special characters).
</div>
```
nn_model.▰▰1▰▰(
    loss=keras.losses.▰▰2▰▰,
    optimizer=▰▰3▰▰,
    metrics=[▰▰4▰▰]
)
```
## Problem 7:
<div class="alert alert-block alert-info">
Everything is prepared. All that remains is to train the model!
</div>
<div class="alert alert-block alert-success">
Train your neural network for 100 epochs. Be sure to include the validation data variables.
</div>
```
np_seed(42)
tf.random.set_seed(42)
history = nn_model.▰▰1▰▰(▰▰2▰▰, ▰▰3▰▰, epochs=▰▰4▰▰, validation_data = (▰▰5▰▰, ▰▰6▰▰))
```
## Problem 8:
<div class="alert alert-block alert-info">
For some Neural Networks, 100 epochs is more than ample time to reach a best solution. For others, 100 epochs isn't enough time for the learning process to even get underway. One good method for assessing the progress of your model at a glance involves visualizing how your loss scores and metric(s) -- for both your training and validation sets -- changed during training.
</div>
<div class="alert alert-block alert-success">
After 100 epochs of training, is the model still appreciably improving? (If it is still improving, you shouldn't see much evidence of overfitting). Submit your answer as a boolean value (True = still improving, False = not still improving).
</div>
```
pd.DataFrame(history.history).plot(figsize = (8, 8))
plt.grid(True)
plt.show()
```
## Problem 9:
<div class="alert alert-block alert-info">
Regardless of whether this model is done or not, it's time to dig into what our model has done. Here, we'll continue re-tracing the steps taken in the textbook, producing a (considerably more involved) confusion matrix, visualizing it as a heatmap, and peering into our model's soul. The first step in this process involves creating the confusion matrix.
</div>
<div class="alert alert-block alert-success">
Using the held-back test data, create a confusion matrix.
</div>
```
y_pred = np.argmax(nn_model.predict(▰▰1▰▰), axis=1)
y_true = np.argmax(▰▰2▰▰, axis=1)
conf_mat = tf.math.confusion_matrix(▰▰3▰▰, ▰▰4▰▰)
```
## Problem 10:
<div class="alert alert-block alert-info">
Finally, we're ready to visualize the matrix we created above. Rather than asking you to recreate the baroque visualization code, we're going to skip straight to interpretation.
</div>
<div class="alert alert-block alert-success">
Plot the confusion matrix heatmap and examine it. Based on what you know about the dataset, should the sum of the values in a column (representing the number of observations from a country) be the same for each country? If so, submit the integer that each column adds up to. If not, submit 0.
</div>
```
sns.set(rc={'figure.figsize': (12, 12)})
plt.figure()
sns.heatmap(
    np.array(conf_mat).T,
    xticklabels=country_encoder.classes_,
    yticklabels=country_encoder.classes_,
    square=True,
    annot=True,
    fmt='g',
)
plt.xlabel("Observed")
plt.ylabel("Predicted")
plt.show()
```
## Problem 11:
<div class="alert alert-block alert-success">
Based on what you know about the dataset, should the sum of the values in a row (representing the number of observations your model <b>predicted</b> as being from a country) be the same for each country? If so, submit the integer that each row adds up to. If not, submit 0.
</div>
```
```
## Problem 12:
<div class="alert alert-block alert-success">
If your model was built and run to the specifications outlined in the assignment, your results should include at least three countries whose observations the model struggled to identify (fewer than 7 accurate predictions each). Submit the name of one such country.<br><br>As a result of the randomness inherent to these models, it is possible that your interpretation will be correct, but will be graded as incorrect. If you feel that your interpretation was erroneously graded, please email a screenshot of your confusion matrix heatmap to Pierson along with an explanation of how you arrived at the answer you did.
</div>
```
```
# A Câmara de Vereadores e o COVID-19
Você também fica curioso(a) para saber o que a Câmara de Vereadores
de Feira de Santana fez em relação ao COVID-19? O que discutiram?
O quão levaram a sério o vírus? Vamos responder essas perguntas e
também te mostrar como fazer essa análise. Vem com a gente!
Since the beginning of the year the world has been talking about COVID-19.
For that reason, we will base our analysis on the period from January 1
to March 27, 2020. To understand what happened in the Council
we will use two data sources:
* [Official Gazette](diariooficial.feiradesantana.ba.gov.br/)
* [Session minutes](https://www.feiradesantana.ba.leg.br/atas)
In the minutes we have access to what was said in the speeches, and in the
Official Gazette we have access to what actually became (or did not become) law.
You can download the data [here]()
and [here](https://www.kaggle.com/anapaulagomes/dirios-oficiais-de-feira-de-santana).
We searched but found no mention of the virus in the
[City Council agenda](https://www.feiradesantana.ba.leg.br/agenda).
Keep in mind that the minutes were only made available on the Council's
website after a meeting we had with acting president José Carneiro.
A great victory for open data and transparency in the city.
The data is collected by us, and all the code is [open and available
on GitHub](https://github.com/DadosAbertosDeFeira/maria-quiteria/).
## Let's start with the minutes
The minutes provide a description of what was said during the sessions.
If you want to follow your councilor's positions, this is a good way
to do it. Let's see if we can find anyone talking about *coronavirus*
or *virus*.
If you are not a technical person, don't worry. The text continues
along with the code.
```
import pandas as pd
# this file can be downloaded here: https://www.kaggle.com/anapaulagomes/atas-da-cmara-de-vereadores
atas = pd.read_csv('atas-28.03.2020.csv')
atas.describe()
```
Explaining the data (columns) a bit:
* `crawled_at`: collection date
* `crawled_from`: collection source (site the information was retrieved from)
* `date`: session date
* `event_type`: event type: ordinary session, order of the day, solemn session, etc.
* `file_content`: content of the minutes file
* `file_urls`: URL(s) of the minutes file
* `title`: title of the minutes
### Filtering by date
Now let's filter the data and keep only the minutes between January 1
and March 28:
```
atas["date"] = pd.to_datetime(atas["date"])
atas["date"]
atas = atas[atas["date"].isin(pd.date_range("2020-01-01", "2020-03-28"))]
atas = atas.sort_values('date', ascending=True)  # sorted by date in ascending order
atas.head()
```
So, how many sets of minutes do we have between January and March?
```
len(atas)
```
Only 21 sets of minutes; after all, the Council's work only began in early
February and was paused for a week because of the coronavirus.
Sources:
https://www.feiradesantana.ba.leg.br/agenda
https://www.feiradesantana.ba.leg.br/noticia/2029/c-mara-municipal-suspende-sess-es-ordin-rias-da-pr-xima-semana
### Filtering content related to COVID-19
Now that we have the data filtered by date, let's look at the content.
The `file_content` column holds the content of the minutes. In it we
will search for the words:
- COVID, COVID-19
- coronavírus, corona vírus
```
termos_covid = "COVID-19|coronavírus"
atas = atas[atas['file_content'].str.contains(termos_covid, case=False)]
atas
len(atas)
```
Twelve sets of minutes mentioning terms related to COVID-19 were found.
Shall we see what they say?
Note: the minutes are too long to show in full here, so we will
highlight the passages that contain the matched terms.
```
import re
padrao = r'[A-Z][^\\.;]*(coronavírus|covid)[^\\.;]*'
def trecho_encontrado(conteudo_do_arquivo):
frases_encontradas = []
for encontrado in re.finditer(padrao, conteudo_do_arquivo, re.IGNORECASE):
frases_encontradas.append(encontrado.group().strip().replace('\n', ''))
return '\n'.join(frases_encontradas)
atas['trecho'] = atas['file_content'].apply(trecho_encontrado)
pd.set_option('display.max_colwidth', 100)
atas[['date', 'event_type', 'title', 'file_urls', 'trecho']]
```
Since we don't have that much data (only 12 sets of minutes), we can do part of
this analysis manually to make sure no councilor is left out. We will use
the next command to export the data to a new CSV. This CSV will contain the
data filtered by date and the matched excerpt, in addition to the full
content of the minutes.
```
def converte_para_arquivo(df, nome_do_arquivo):
    conteudo_do_csv = df.to_csv(index=False)  # convert the content to CSV
    arquivo = open(nome_do_arquivo, 'w')  # create a file
    arquivo.write(conteudo_do_csv)
    arquivo.close()
converte_para_arquivo(atas, 'analise-covid19-atas-camara.csv')
```
### Who raised the COVID-19 issue in the Council?
A spreadsheet with the who-said-what analysis can be found [here](https://docs.google.com/spreadsheets/d/1h7ioFnHH8sGSxglThTpQX8W_rK9cgI_QRB3u5aAcNMI/edit?usp=sharing).
## And what do the official gazettes say?
In the municipal official gazette you find information about what became
reality: the decrees, measures, appointments, vetoes.
In Feira de Santana the gazettes of the executive and legislative branches
are published together. We will filter the legislative gazettes, filter by
date as we did with the minutes, and see what was done.
```
# this file can be downloaded here: https://www.kaggle.com/anapaulagomes/dirios-oficiais-de-feira-de-santana
diarios = pd.read_csv('gazettes-28.03.2020.csv')
diarios = diarios[diarios['power'] == 'legislativo']
diarios["date"] = pd.to_datetime(diarios["date"])
diarios = diarios[diarios["date"].isin(pd.date_range("2020-01-01", "2020-03-28"))]
diarios = diarios.sort_values('date', ascending=True)  # sorted by date in ascending order
diarios.head()
```
What you need to know about the columns of this dataset:
* `date`: when the gazette was published
* `power`: executive or legislative branch (yes, the gazettes are unified)
* `year_and_edition`: year and edition of the gazette
* `events`:
* `crawled_at`: when this collection was made
* `crawled_from`: the source of the data
* `file_urls`: URL(s) of the files
* `file_content`: the content of the gazette file
Let's filter the file contents that contain the COVID-19-related terms
(the same ones we used with the minutes).
```
diarios = diarios[diarios['file_content'].str.contains(termos_covid, case=False)]
diarios['trecho'] = diarios['file_content'].apply(trecho_encontrado)
diarios[['date', 'power', 'year_and_edition', 'file_urls', 'trecho']]
```
Only 4 gazettes mentioning the terms between January 1 and March 28, 2020
were found. What do they say? Let's export the results to a new CSV and
continue in Google Sheets.
```
converte_para_arquivo(diarios, 'analise-covid19-diarios-camara.csv')
```
### What we found in the gazettes
All 4 gazettes deal with suspending procurement processes because of the situation in the city.
Only one of them, the gazette of March 17, 2020, [Year VI - Edition No. 733](http://www.diariooficial.feiradesantana.ba.gov.br/atos/legislativo/2CI2L71632020.pdf),
contains instructions about what will be done in the Council. Here we quote the
passage about the measures:
> Art. 1 **Any civil servant, intern, contractor, or councilor who presents fever or respiratory symptoms (dry cough, sore throat, myalgia, headache and prostration, difficulty breathing, and nasal flaring) shall be considered a suspected case and must immediately, within 24 hours, notify the Epidemiological Sanitary Surveillance/Municipal Health Department**.
§ 1 Any civil servant, intern, or contractor aged 60 or over, with a chronic disease, or pregnant is released from work activities at the Feira de Santana City Council without loss of pay.
§ 2 Councilors aged 60 or over, with a chronic disease, or pregnant may choose whether to attend work activities at the Feira de Santana City Council, without loss of pay.
§ 3 A period of absence due to medical advice shall be considered a justified absence from public service or private work activity.
> Art. 2 **The elevator in the annex building is closed for an indefinite period**, until the spread of the coronavirus in the municipality is effectively contained and the situation has stabilized. _Sole paragraph_ During this period the use of the elevator will only be authorized for transporting people with special needs, such as wheelchair users and people with reduced mobility.
> Art. 3 The turnstile for access to the annex building will be released.
> Art. 4 The side gate (street access to the annex building) will be closed, with entry only through the main entrance.
> Art. 5 Dispensers of hand sanitizer gel will be made available in the common areas.
> Art. 6 Cleaning will be intensified in bathrooms, elevators, handrails, door handles, and common areas with a large circulation of people.
> Art. 7 Only one person at a time will be allowed in the kitchen and pantry.
> Art. 8 **Solemn Sessions, Special Sessions, and Public Hearings are suspended for an indefinite period**, until the spread of the coronavirus in the municipality is effectively contained and the situation has stabilized.
> Art. 9 During this containment period the public/visitors are advised to watch the Ordinary Session online via TV Câmara. Paragraph - At the reception of the main building **a staff member will direct people to follow the legislative proceedings on TV Câmara**, available at https://www.feiradesantana.ba.leg.br/, while distributing informational leaflets about symptoms and prevention methods.
> Art. 10 Within the offices of the respective councilors, **it is up to each one to adopt restrictions on in-person service to the external public or visits to their respective area**.
> Art. 11 The Administrative Management is authorized to adopt any other administrative measures necessary to prevent the internal spread of the COVID-19 virus, and such measures must be submitted to the Presidency for its knowledge.
> Art. 12 This Ordinance shall remain valid for as long as the state of public health emergency declared by the Municipal Health Department and the Committee for Coronavirus Response Actions in the Municipality of Feira de Santana is in effect.
> Art. 13 This Ordinance enters into force on the date of its publication, with any provisions to the contrary revoked.
## Conclusion
If the councilors did anything more, we would have no way of knowing other
than through the minutes, the official gazette, or news on the website. Ideally
there would be a page with each councilor's projects and initiatives. That way
it would be easier to stay up to date on each one's work in the city.
The manual analyses and what we found can be seen [here](https://docs.google.com/spreadsheets/d/1h7ioFnHH8sGSxglThTpQX8W_rK9cgI_QRB3u5aAcNMI/edit?usp=sharing).
A post about the analysis and the findings can be found on the [Dados Abertos de Feira](https://medium.com/@dadosabertosdefeira) blog.
Did you like what you saw? Found it interesting? Share it with other people,
and don't forget to mention the project.
# Titanic
In this notebook we will experiment with tabular data and fastai v2 and other machine learning techniques
```
#collapse-hide
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from fastai.tabular.all import *
from pathlib import Path
import seaborn as sns
sns.set_palette('Set1')
sns.set_style("white")
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
#collapse-hide
path = Path('/home/jupyter/kaggle/titanic/')
train = pd.read_csv(path/'train.csv')
test = pd.read_csv(path/'test.csv')
```
Let's take a look at the data that makes up the dataset
```
train.head()
```
Notes about the columns:
| Column name | Description |
|---|---|
| PassengerId | Unique identifier for each passenger |
| Survived | 1 = survived, 0 = did not survive |
| Pclass | From the problem description: a proxy for socio-economic status (SES); 1 = Upper, 2 = Middle, 3 = Lower |
| Name | Name of the passenger |
| Sex | Sex of the passenger |
| Age | Age of the passenger; fractional if younger than 1 |
| SibSp | Number of siblings (brothers, sisters, stepbrothers, stepsisters) and spouses of the given passenger aboard |
| Parch | Number of parents or children of the passenger aboard |
| Ticket | Ticket number |
| Cabin | Cabin number |
| Embarked | Port of embarkation -> C = Cherbourg, Q = Queenstown, S = Southampton |
Before preparing the data for our model, let's take a look at how each column influences survivability
```
#collapse-hide
sns.displot(data=train, x='Sex', hue='Survived', multiple = 'stack',alpha=0.6)
```
There are almost twice as many male as female passengers, but there are many more female survivors. Next, let's look at how the passengers were distributed among the classes and ages.
```
#collapse-hide
g = sns.FacetGrid(train, col='Pclass',row='Sex',hue='Survived', )
g.map(sns.histplot,'Age', alpha=0.6)
g.add_legend()
```
1st and 2nd class female passengers were much more likely to survive than male passengers from 2nd and 3rd class.
```
#collapse-hide
train['Family']=(train['SibSp']+train['Parch']).astype(int)
order=[]
for i in range(10):
order.append(str(i))
a=sns.countplot(data=train, x='Family', hue='Survived',dodge=True, alpha=0.6)
```
Many passengers are travelling alone, and they are much less likely to survive. People who had between 2 and 3 relatives aboard were more likely to survive than the rest of the passengers.
```
#hide
g = sns.FacetGrid(train, col='Family',row='Sex',hue='Survived', )
g.map(sns.histplot,'Age', alpha=0.6)
g.add_legend()
#hide
plt.figure(figsize=(10,10))
a=sns.stripplot(data=train, x='Family', y='Age', hue='Survived', alpha=0.5)
```
Some columns are not very useful for predicting the survival of a given passenger: 'PassengerId', 'Name', 'Ticket' and 'Fare'. Let's remove these columns.
```
#hide
def remove_cols(df, cols, debug=False):
df.drop(columns = cols, inplace=True);
if debug: print('columns_droped')
return(df)
#hide
cols = ['PassengerId', 'Ticket', 'Fare','Name']
print(f'This features:{cols} will be removed from the data set')
```
We can check the unique values the dataset takes for each column
```
#hide_input
print('Number of unique classes in each feature: ')
print(train.nunique())
```
Let's look at the missing values in the columns; we can see there is missing data in 'Age' and 'Cabin'.
```
#hide_input
print('Number of passengers missing each feature:')
train.isna().sum()
```
We will create a new column indicating that the data is missing, impute the **mean age of the training set**, and fill cabin with the text 'Missing'.
Cabin might look irrelevant, but the letter leading the number indicates the deck, which might be relevant to survivability. So instead of keeping the whole cabin number, we will just keep the cabin letter. There are also 2 passengers missing their port of embarkation; this can be very relevant, so we will create a new column for the missing embarkation port. This might matter because it could have been easy to retrieve this information from the survivors.
```
MEAN_AGE = train['Age'].mean()
def fill_na(df,debug=False):
df.loc[:,'AgeMissing'] = df['Age'].isna() # New column for missing data
if debug: print('fillna complete: 1/4')
df['Age'].fillna(MEAN_AGE, inplace=True) #filling 'Age' with the mean
if debug: print('fillna complete: 2/4')
df['Cabin'].fillna('Missing', inplace=True)
if debug: print('fillna complete: 3/4')
df['Cabin'] = df['Cabin'].apply(lambda x: x[0])
if debug: print('fillna complete')
return df
```
Next we convert all categorical feature columns to multiple binary columns using the [one-hot encoding](https://en.wikipedia.org/wiki/One-hot) technique. Each category in a given feature will become a new column.
```
#hide_input
OHE_COLS = ['Pclass', 'Sex','Cabin']
def ohe_data(df, columns):
    # Note: pd.get_dummies with no column list encodes *all* non-numeric
    # columns (including 'Embarked'), so the `columns` argument is unused here.
    new_pd = pd.get_dummies(df)
    return new_pd
#hide_input
def prepare_data(df, cols, ohe_cols, debug=False):
df = df.copy()
df = remove_cols(df, cols,debug)
df = fill_na(df, debug)
df['Pclass'] = df['Pclass'].astype(str)
df = ohe_data(df, ohe_cols)
return df
#hide_input
df_test = pd.read_csv(path/'test.csv')
full = pd.concat([train,df_test])
#hide
full_clean = prepare_data(full, cols, OHE_COLS,True)
train_clean = full_clean.iloc[:len(train)].copy()
train_clean1 = train_clean.copy()
test_clean = full_clean.iloc[-len(test):].copy()
y = train_clean['Survived']
train_clean.drop(columns='Survived', inplace=True)
```
With the data cleaned, we can train the classifier. First we need a validation set to check the performance of the classifier on data not used in training.
```
x_train, x_val, y_train, y_val = train_test_split(train_clean,y, test_size=0.2, random_state=42)
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=4, random_state=42)
clf.fit(x_train,y_train)
#hide_input
score = clf.score(x_val,y_val)
print(f'With a Gradient Boosting Classifier we get a validation accuracy of {score*100:.2f}%')
```
With this classifier it is easy to analyse which features are most important for predicting survivability
```
clf.feature_importances_
chart=sns.barplot(x=train_clean.columns, y=clf.feature_importances_,);
chart.set_xticklabels(train_clean.columns,rotation=90);
submission = pd.read_csv('gender_submission.csv')
test_clean.drop(columns='Survived', inplace=True)
test_clean['Family']=(test_clean['SibSp']+test_clean['Parch']).astype(int)
y_sub = clf.predict(test_clean)
submission['Survived']=y_sub.astype(int)
submission
submission.to_csv('sub.csv',index=False)
#!kaggle competitions submit titanic -f sub.csv -m "first submision"
```
Let's try a more sophisticated method using the fastai library
```
path = Path('/home/jupyter/kaggle/titanic/')
train = pd.read_csv(path/'train.csv')
test = pd.read_csv(path/'test.csv')
train.drop(columns=['PassengerId','Ticket','Name'],inplace = True)
test.drop(columns=['PassengerId','Ticket','Name'],inplace = True)
train['Cabin'] = train['Cabin'].fillna(value='Missing')
test['Cabin'] = test['Cabin'].fillna(value='Missing')  # bug fix: was filling from `train`
train['Cabin'] = train['Cabin'].apply(lambda x:x[0])
test['Cabin'] = test['Cabin'].apply(lambda x:x[0])  # bug fix: was applying to `train`
train['Cabin']
y_names = 'Survived'
cat_names = ['Pclass','Sex','Cabin', 'Embarked']
cont_names = ['Age', 'SibSp','Parch',]
procs = [ Categorify, FillMissing, Normalize]
splits = RandomSplitter(valid_pct=0.2)(range_of(train_clean1))
to = TabularPandas(df=train,
cat_names=cat_names,
cont_names=cont_names,
procs=procs,
y_names=y_names,
splits=splits,
y_block= CategoryBlock,)
dls = to.dataloaders(bs=128)
dls.show_batch()
learn = tabular_learner(dls, metrics=[ accuracy],layers=[50,25])
learn.model
learn.lr_find()
learn.save('empty')
learn = learn.load('empty')
learn.fit_one_cycle(16,0.005)
learn.recorder.plot_loss()
learn.save('16_epoch')
learn.show_results()
dl_test=learn.dls.test_dl(test)
probs,_ =learn.get_preds(dl=dl_test)
y = np.argmax(probs,1)
y = np.array(y)
submission['Survived']=y.astype(int)
submission.to_csv('sub.csv',index=False)
!kaggle competitions submit titanic -f sub.csv -m "nn mid"
learn.model
learn.recorder.plot_sched()
```
# Closed Loop Kinematics
We want to simulate a closed-loop system, just because. As a starting point, we will tackle the **Stewart platform**.
While this seems like overkill, applying the closure conditions in pybullet is quite easy.
First, we import the needed modules.
```
import pybullet as pb
import pybullet_data as pbd
```
And start a server. Additionally, we supply the open loop model of the Stewart platform.
```
client = pb.connect(pb.GUI)
robot_path = '../../models/urdf/Stewart.urdf'
```
We will start by simulating the open loop kinematics under the influence of gravity.
```
pb.resetSimulation()
pb.removeAllUserDebugItems()
pb.setGravity(0, 0, -9.81, client)
robot = pb.loadURDF(robot_path, useFixedBase=True, flags = pb.URDF_USE_INERTIA_FROM_FILE|pb.URDF_USE_SELF_COLLISION_INCLUDE_PARENT)
pb.setRealTimeSimulation(1)
pb.resetSimulation()
pb.removeAllUserDebugItems()
robot = pb.loadURDF(robot_path, useFixedBase=True)
pb.setPhysicsEngineParameter(
fixedTimeStep = 1e-4,
numSolverIterations = 200,
constraintSolverType = pb.CONSTRAINT_SOLVER_LCP_DANTZIG,
globalCFM = 0.000001
)
pb.setRealTimeSimulation(1)
# Gather all joints and links
joint_dict = {}
for i in range(pb.getNumJoints(robot, client)):
joint_info = pb.getJointInfo(robot, i)
name = (joint_info[1]).decode('UTF-8')
if not name in joint_dict.keys():
joint_dict.update(
{name : i}
)
# Make some new constraints
parents = ['top_11', 'top_12', 'top_13']
children = {'top_11' : ['ITF_31'], 'top_12' : ['ITF_22', 'ITF_12'], 'top_13' : ['ITF_23', 'ITF_33']}
constraint_dict = {}
for parent in parents:
parent_id = joint_dict[parent]
for child in children[parent]:
constraint_name = parent + '_2_' + child
child_id = joint_dict[child]
# Create a p2p connection
constraint_id = pb.createConstraint(
parentBodyUniqueId = robot,
parentLinkIndex = parent_id,
childBodyUniqueId = robot,
childLinkIndex = child_id,
jointType = pb.JOINT_POINT2POINT,
jointAxis = (0,0,0),
parentFramePosition = (0,0,0),
childFramePosition = (0,0,0),
)
# Store the constraint information
constraint_dict.update(
{constraint_name : constraint_id}
)
# Change the constraint erp
pb.changeConstraint(
constraint_id,
erp = 1e-4,
)
# Create Actuation
control = ['P_11', 'P_33', 'P_55', 'P_22', 'P_44', 'P_66']  # note: the original listed 'P_55' twice; 'P_66' is assumed here
debug_dict = {}
for actor in control:
actor_id = joint_dict[actor]
joint_info = pb.getJointInfo(robot, actor_id)
min_val = joint_info[8]
max_val = joint_info[9]
current_val = pb.getJointState(robot, actor_id, client)[0]
debug_id = pb.addUserDebugParameter(actor, min_val, max_val, current_val, client)
debug_dict.update(
{debug_id : actor_id}
)
# Enable Position control
while True:
for debug, actor in debug_dict.items():
current_val = pb.readUserDebugParameter(debug, client)
pb.setJointMotorControl2(
robot,
actor,
controlMode = pb.POSITION_CONTROL,
targetPosition = current_val,
)
```
```
from readability import Readability
text = """
# Introduction to PyMC3 - Concept
In this lecture we're going to talk about the theories, advantages and application of using Bayesian
statistics.
Bayesian inference is a method of making statistical decisions as more information or evidence becomes
available. It is a powerful tool to help us understand uncertain events in the form of probabilities.
Central to the application of Bayesian methods is the idea of a distribution. In general, a distribution
describes how likely a certain outcome will occur given a series of inputs. For instance, we might have a
distribution which describes how likely a given kicker on a football team will make a field goal (the
outcome).
# We can use the line magic %run to load all variables from the previous Overview lecture
# and revisit the histogram
%run PyMC3_Code_Training_Overview.ipynb
Here is an example of what the distribution of National Football League kicker's field goal probability
looks like. It shows that most kickers score roughly 85 percent of the field goals and the number drops
gradually on both sides as field goal percentage goes up and down. So the graph looks pretty much a bell
curve, a shape reminiscent of a bell.
In Bayesian statistics, we consider our initial understanding of the outcome of an event before data is
collected. We do this by creating a **prior distribution**. Similar to most data science methods, we will
then collect the observed data to create a model using some probability distribution. We rely on that model
called **likelihood** to update our initial understanding (**prior distribution**) using the **Bayes'
Theorem**.
We won't go into the maths of Bayes' theorem, but the theorem suggests that by using the observed data as
new evidence, the updated belief incorporates our initial understanding and the evidence related to the
problem being examined, just like we accumulate knowledge by studying problems or doing experiments.
**Bayes's theorem** also suggests a nice property that the updated belief is proportional to the product of
the **prior distribution** and the **likelihood**. We call the updated belief **posterior distribution**.

Bayesian inference is carried out in four steps.
First, specify prior distribution. We can start with forming some prior distribution for our initial
understanding of a subject matter, before observing data. This **prior distribution** can come from personal
experience, literature, or even thoughts from experts.
During the second and third step, we move on to **collect the data** from observations or experiments and
then **construct a model** using probability distribution(s) to represent how likely the outcomes occur
given the data input. In our example, we can collect evidence by recording American football players' field
goal performance for a few NFL seasons in a database. Then based on the data we collect, we can use
visualizations to understand how to construct a suitable probabilistic model (**likelihood**) to represent
the distribution of the NFL data.
Finally, apply the Bayes' rules. In this step, we incorporate the **prior distribution** and the
**likelihood** using Bayes' rules to update our understanding and return the **posterior distribution**.
Let's first focus on the first three steps.
For the NFL example, as we observe kicks from a player, we can record the success and failures for each kick
and then assign binomial distribution to represent the how likely a field goal is made for different
players. We usually use **beta distribution** to set up a prior distribution involving success and failure
trials, where you can specify the parameters to decide how likely a player will make a field goal based on
your knowledge.
# We can randomly generate binomial distributed samples by
# setting the sample size as 100 and success probability of making a field goal as 0.8
n = 100
p = 0.8
size = 100
# Next we can set up binom as a random binomial variable that will give us 100 draws of binomial distributions
binom = np.random.binomial(n, p, size)
# Now we use plt.figure function to define the figure size and plt.hist to draw a histogram, so you
# can visualize what's going on here
plt.figure(figsize=(8,8))
plt.hist(binom/n)
# Let's set the x and y axis and the title as well.
plt.xlabel("Success Probability")
plt.ylabel("Frequency")
plt.title("Binomial Distribution")
# and show the plot
plt.show()
Once again, you can see from this graph that most samples made roughly 80% of the field goals, with only a few
making less than 72.5% or more than 87.5%. We can again say the histogram shows a binomial
distribution with mean close to 0.8.
## Posterior Distribution
The question of how to obtain the posterior distribution by Bayes' rules represents the coolest part of
Bayesian analysis.
Traditional methods include integrating the random variables and determining the resulting distribution in
closed form. However, it is not always the case that posterior distribution is obtainable through
integration.
**Probabilistic programming languages**, a renowned programming tool to perform probabilistic models
automatically, can help update the belief iteratively to approximate the posterior distribution even when a
model is complex or hierarchical. For example, the NUTS algorithm under Monte Carlo Markov Chain (MCMC),
which we will discuss in the next video, is found to be effective in computing and representing the
posterior distribution.
## Bayesian Inference Advantages

So why is Bayesian inference useful? A salient advantage of the Bayesian approach in statistics lies in its
capability of easily accumulating knowledge. Every time you have new evidence, the Bayesian
approach can effectively update our prior knowledge.
Another strength of Bayesian statistics is that it allows every parameter, such as the probability of
making the field goal, to be summarized by a probability distribution, regardless of whether it is the prior,
likelihood, or posterior. With that, we can always look at the distributions to quantify the degree of
uncertainty.
Furthermore, the Bayesian approach generates more robust posterior statistics by utilizing flexible
distributions in the prior and the observed data.
In the next video, we will introduce PyMC3, a Python package to perform MCMC conveniently and obtain easily
interpretable Bayesian estimations and plots, with a worked example. Bye for now!
"""
r = Readability(text)
r.flesch_kincaid()
# r.flesch()
# r.gunning_fog()
# r.coleman_liau()
# r.dale_chall()
# r.ari()
# r.linsear_write()
# r.smog()
# r.spache()
```
# 12 - Beginner Exercises
* Conditional Statements
## 🍼 🍼 🍼
1.Create a Python program that receives a number from the user and determines whether the given integer is even or odd.
```
# Write your own code in this cell
n =
```
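One possible solution sketch (the `input()` call from the exercise is replaced by a fixed value so the example is self-contained):

```python
n = int("7")  # in the exercise: n = int(input("Enter an integer: "))

# An integer is even exactly when it is divisible by 2.
parity = "even" if n % 2 == 0 else "odd"
print(f"{n} is {parity}")
```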
## 🍼🍼
2.Write a Python program that reads an integer day number and shows the corresponding day name in words
```
# Write your own code in this cell
```
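One possible solution sketch, assuming the convention that day 1 is Monday (the exercise does not fix the numbering, and the user input is replaced with a fixed value):

```python
days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]

day_number = 3  # in the exercise: day_number = int(input("Enter a day number (1-7): "))

if 1 <= day_number <= 7:
    day_name = days[day_number - 1]  # lists are 0-indexed, so shift by 1
else:
    day_name = "Invalid day number"
print(day_name)
```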
## 🍼🍼
3.Create a Python program that receives three angles from the user and determines whether the supplied angle values can be used to construct a triangle.
```
# Write your own code in this cell
a =
b =
c =
```
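One possible solution sketch: three angles form a triangle exactly when each is positive and they sum to 180 degrees (fixed example values stand in for the user input):

```python
a, b, c = 60, 70, 50  # in the exercise these come from input()

# All angles must be positive and the sum must be exactly 180 degrees.
can_form = a > 0 and b > 0 and c > 0 and (a + b + c == 180)
print("Valid triangle" if can_form else "Not a valid triangle")
```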
## 🍼
4.Write a Python program that receives three numbers from the user and calculates the roots of a quadratic equation.
$$ax^2+bx+c=0$$
As you know, the roots of the quadratic equation can be obtained using the following formula:
$$x= \frac{-b \pm \sqrt{ \Delta } }{2a} $$
* If the delta is a number greater than zero ($\Delta > 0$), our quadratic equation has two roots as follows:
$$x_{1}= \frac{-b+ \sqrt{ \Delta } }{2a} $$
$$x_{2}= \frac{-b- \sqrt{ \Delta } }{2a} $$
* If the delta is zero, then there is exactly one real root:
$$x= \frac{-b}{2a} $$
* If the delta is negative, then there are no real roots
We also know that Delta equals:
$$ \Delta = b^{2} - 4ac$$
```
# Write your own code in this cell
a =
b =
c =
```
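A possible solution sketch following the delta cases above (example coefficients stand in for the user input):

```python
import math

a, b, c = 1, -3, 2  # in the exercise these come from input()

delta = b**2 - 4*a*c  # the discriminant

if delta > 0:
    # two distinct real roots
    x1 = (-b + math.sqrt(delta)) / (2*a)
    x2 = (-b - math.sqrt(delta)) / (2*a)
    roots = (x1, x2)
elif delta == 0:
    # exactly one real root
    roots = (-b / (2*a),)
else:
    # no real roots
    roots = ()

print(roots)
```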
## 🍼
5.Create an application that takes the user's systolic and diastolic blood pressures and tells them the category of their current condition.
|BLOOD PRESSURE CATEGORY | SYSTOLIC mm Hg |and/or| DIASTOLIC mm Hg |
|---|---|---|---|
|**NORMAL**| LESS THAN 120 | and | LESS THAN 80|
|**ELEVATED**| 120 – 129 | and | LESS THAN 80 |
|**HIGH BLOOD PRESSURE (HYPERTENSION) STAGE 1**| 130 – 139 | or | 80 – 89 |
|**HIGH BLOOD PRESSURE (HYPERTENSION) STAGE 2**| 140 OR HIGHER | or | 90 OR HIGHER |
|**HYPERTENSIVE CRISIS *(consult your doctor immediately)***| HIGHER THAN 180 | and/or | HIGHER THAN 120|
```
# Write your own code in this cell
```
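A possible solution sketch following the table above, checking the categories from most to least severe so the first match wins (fixed readings stand in for the user input, and an `UNCLASSIFIED` fallback is an assumption for readings the table does not cover):

```python
def bp_category(systolic, diastolic):
    # Check from most severe to least severe.
    if systolic > 180 or diastolic > 120:
        return "HYPERTENSIVE CRISIS"
    if systolic >= 140 or diastolic >= 90:
        return "HIGH BLOOD PRESSURE (HYPERTENSION) STAGE 2"
    if 130 <= systolic <= 139 or 80 <= diastolic <= 89:
        return "HIGH BLOOD PRESSURE (HYPERTENSION) STAGE 1"
    if 120 <= systolic <= 129 and diastolic < 80:
        return "ELEVATED"
    if systolic < 120 and diastolic < 80:
        return "NORMAL"
    return "UNCLASSIFIED"  # hypothetical fallback, not in the table

print(bp_category(118, 75))  # NORMAL
print(bp_category(135, 70))  # HIGH BLOOD PRESSURE (HYPERTENSION) STAGE 1
```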
## 🍼
6.Create an application that initially displays a list of geometric shapes to the user.
By entering a number, the user can pick one of those shapes.
Then, based on the selected geometric shape, obtain the shape's parameters from the user.
Finally, display the shape's area as the output.
```
# Write your own code in this cell
```
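A possible solution sketch. The menu numbering, the shapes offered, and the example dimensions are all assumptions; the exercise reads the choice and the parameters from the user, while fixed values are used here so the sketch is self-contained:

```python
import math

# Hypothetical menu: 1 = circle, 2 = rectangle, 3 = triangle.
choice = 1  # in the exercise: choice = int(input("Pick a shape (1-3): "))

if choice == 1:
    radius = 2.0
    area = math.pi * radius ** 2
elif choice == 2:
    width, height = 3.0, 4.0
    area = width * height
else:
    base, height = 6.0, 2.0
    area = base * height / 2

print(f"The area of the selected shape is {area:.2f}")
```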
## 🌶️🌶️
7.Create a lambda function that accepts a number and returns a string based on this logic:
* If the given value is between 3 and 8, return "Severe"
* else if it is between 9 and 15, return "Mild to Moderate"
* else return "The number is not valid"
```The above numbers are derived from the Glasgow Coma Scale.```
```
# Write your own code in this cell
```
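One possible solution sketch using a conditional expression inside the lambda:

```python
# Maps a Glasgow Coma Scale score to the requested label.
gcs_label = lambda n: (
    "Severe" if 3 <= n <= 8
    else "Mild to Moderate" if 9 <= n <= 15
    else "The number is not valid"
)

print(gcs_label(5))   # Severe
print(gcs_label(12))  # Mild to Moderate
print(gcs_label(20))  # The number is not valid
```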
## 🌶️
**```You can nest expressions to evaluate inside expressions in an f-string```**
8.You are given a set of prime numbers less than 1000.
Create a program that receives a number and then uses a single print() function with an f-string to state whether or not it is a prime number.
For example:
"997 is a prime number,"
"998 is not a prime number."
```
# Write your own code in this cell
P = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97,
101, 103, 107, 109, 113, 127, 131, 137, 139, 149,
151, 157, 163, 167, 173, 179, 181, 191, 193, 197,
199, 211, 223, 227, 229, 233, 239, 241, 251, 257,
263, 269, 271, 277, 281, 283, 293, 307, 311, 313,
317, 331, 337, 347, 349, 353, 359, 367, 373, 379,
383, 389, 397, 401, 409, 419, 421, 431, 433, 439,
443, 449, 457, 461, 463, 467, 479, 487, 491, 499,
503, 509, 521, 523, 541, 547, 557, 563, 569, 571,
577, 587, 593, 599, 601, 607, 613, 617, 619, 631,
641, 643, 647, 653, 659, 661, 673, 677, 683, 691,
701, 709, 719, 727, 733, 739, 743, 751, 757, 761,
769, 773, 787, 797, 809, 811, 821, 823, 827, 829,
839, 853, 857, 859, 863, 877, 881, 883, 887, 907,
911, 919, 929, 937, 941, 947, 953, 967, 971, 977,
983, 991, 997}
```
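A possible solution; rebuilding the prime set with a small trial-division helper (instead of typing it out, as in the cell above) is an implementation choice of ours:

```
def is_prime(n):
    # Trial division is enough for n < 1000
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

P = {n for n in range(1000) if is_prime(n)}

for n in (997, 998):
    # A single print() with a conditional expression nested in the f-string
    print(f"{n} is {'a prime' if n in P else 'not a prime'} number.")
```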
# Semi-Monocoque Theory: corrective solutions
```
from pint import UnitRegistry
import sympy
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import sys
%matplotlib inline
from IPython.display import display
```
Import the **Section** class, which contains all the calculations
```
from Section import Section
```
Initialize the **sympy** symbolic tool and **pint** for dimensional analysis (not fully used yet, as pint units are not directly compatible with sympy)
```
ureg = UnitRegistry()
sympy.init_printing()
```
Define **sympy** parameters used for geometric description of sections
```
A, A0, t, t0, a, b, h, L, E, G = sympy.symbols('A A_0 t t_0 a b h L E G', positive=True)
```
We also define numerical values for each **symbol** in order to plot the scaled section and perform numerical calculations
```
values = [(A, 150 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 80 * ureg.millimeter), \
(b, 20 * ureg.millimeter),(h, 35 * ureg.millimeter),(L, 2000 * ureg.millimeter), \
(t, 0.8 *ureg.millimeter),(E, 72e3 * ureg.MPa), (G, 27e3 * ureg.MPa)]
datav = [(v[0],v[1].magnitude) for v in values]
```
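As a quick check that `datav` strips the units correctly, any symbolic expression can then be evaluated numerically with `subs` (only a few symbols and abbreviated magnitudes are shown here):

```
import sympy

A, h, t = sympy.symbols('A h t', positive=True)
# Magnitudes only, as produced by the datav comprehension above
datav = [(A, 150), (h, 35), (t, 0.8)]

expr = A * h / t          # some symbolic quantity
print(expr.subs(datav))   # 6562.5 (a sympy Float)
```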
# Second example: Rectangular section with 6 nodes and 2 loops
Define graph describing the section:
1) **stringers** are **nodes** with parameters:
- **x** coordinate
- **y** coordinate
- **Area**
2) **panels** are **oriented edges** with parameters:
 - **thickness**
 - **length**, which is computed automatically
```
stringers = {1:[(2*a,h),A],
2:[(a,h),A],
3:[(sympy.Integer(0),h),A],
4:[(sympy.Integer(0),sympy.Integer(0)),A],
5:[(a,sympy.Integer(0)),A],
6:[(2*a,sympy.Integer(0)),A]}
panels = {(1,2):t,
(2,3):t,
(3,4):t,
(4,5):t,
(5,6):t,
(6,1):t,
(5,2):t}
```
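The automatic panel-length computation mentioned above can be sketched independently of the `Section` class (which we treat as a black box here) using only **sympy** and **networkx**:

```
import sympy
import networkx as nx

a, h, t = sympy.symbols('a h t', positive=True)

# Stringer positions as in the section above (areas omitted for brevity)
stringers = {1: (2*a, h), 2: (a, h), 3: (0, h),
             4: (0, 0), 5: (a, 0), 6: (2*a, 0)}
panels = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1), (5, 2)]

g = nx.DiGraph()
for n, (x, y) in stringers.items():
    g.add_node(n, pos=(x, y))
for i, j in panels:
    (xi, yi), (xj, yj) = stringers[i], stringers[j]
    # Panel length follows directly from the stringer coordinates
    length = sympy.sqrt((xj - xi)**2 + (yj - yi)**2)
    g.add_edge(i, j, t=t, length=length)

print(g[1][2]['length'], g[3][4]['length'])  # a h
```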
Define section and perform first calculations
```
S1 = Section(stringers, panels)
S1.cycles
```
## Plot of **S1** section in original reference frame
Define a dictionary of coordinates used by **Networkx** to plot section as a Directed graph.
Note that arrows are actually just thicker stubs
```
start_pos={ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }
plt.figure(figsize=(12,8),dpi=300)
nx.draw(S1.g,with_labels=True, arrows= True, pos=start_pos)
plt.arrow(0,0,20,0)
plt.arrow(0,0,0,20)
#plt.text(0,0, 'CG', fontsize=24)
plt.axis('equal')
plt.title("Section in starting reference Frame",fontsize=16);
```
## Plot of **S1** section in inertial reference Frame
The section is plotted with respect to the **center of gravity** and rotated (if necessary) so that *x* and *y* are the principal axes.
The **Center of Gravity** and **Shear Center** are also drawn
```
positions={ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }
x_ct, y_ct = S1.ct.subs(datav)
plt.figure(figsize=(12,8),dpi=300)
nx.draw(S1.g,with_labels=True, pos=positions)
plt.plot([0],[0],'o',ms=12,label='CG')
plt.plot([x_ct],[y_ct],'^',ms=12, label='SC')
#plt.text(0,0, 'CG', fontsize=24)
#plt.text(x_ct,y_ct, 'SC', fontsize=24)
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
plt.title("Section in principal reference Frame",fontsize=16);
```
Expression of **inertial properties** in *principal reference frame*
```
sympy.simplify(S1.Ixx), sympy.simplify(S1.Iyy), sympy.simplify(S1.Ixy), sympy.simplify(S1.θ)
S1.symmetry
```
Compute **L** matrix: with 6 nodes we expect 3 **dofs**, two with _symmetric load_ and one with _antisymmetric load_
```
S1.compute_L()
S1.L
```
Compute **H** matrix
```
S1.compute_H()
S1.H.subs(datav)
```
Compute $\tilde{K}$ and $\tilde{M}$
```
S1.compute_KM(A,h,t)
S1.Ktilde
S1.Mtilde.subs(datav)
```
Compute **eigenvalues** and **eigenvectors**: results are in the form:
- eigenvalue
- multiplicity
- eigenvector
```
sol_data = (S1.Ktilde.inv()*(S1.Mtilde.subs(datav))).eigenvects()
sol_data
```
Extract eigenvalues
```
β2 = [sol[0] for sol in sol_data]
β2
```
Extract and normalize eigenvectors
```
X = [sol[2][0]/sol[2][0].norm() for sol in sol_data]
X
```
Compute numerical value of $\lambda$
```
λ = [sympy.N(sympy.sqrt(E*A*h/(G*t)*βi).subs(datav)) for βi in β2]
λ
```
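As a sanity check on the magnitude of $\lambda$, we can plug the numerical values defined earlier into the prefactor, using a hypothetical eigenvalue $\beta^2 = 1$ (our placeholder, for illustration only):

```
import math

# Numerical values from the `values` list above (units stripped)
E, A, h, G, t = 72e3, 150.0, 35.0, 27e3, 0.8
beta2 = 1.0  # hypothetical eigenvalue, for illustration only

lam = math.sqrt(E * A * h / (G * t) * beta2)
print(lam)  # ≈ 132.29 (in mm, given the unit choices above)
```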
```
# Pre requisites for this notebook
!pip install Pillow
!pip install nb_black
%load_ext nb_black
import os
from requests import get
from pathlib import Path
import gc
import tensorflow as tf
from concurrent.futures import ProcessPoolExecutor as PoolExecutor
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow_addons.optimizers import RectifiedAdam, Lookahead
from tensorflow_addons.activations import mish
import pandas as pd
import numpy as np
from sklearn.metrics import roc_auc_score
np.random.seed(0)
from PIL import Image, ImageDraw, ImageFont
from matplotlib.pyplot import imshow
import matplotlib.pyplot as plt
%matplotlib inline
def download(url, out, force=False, verify=True):
out.parent.mkdir(parents=True, exist_ok=True)
if force:
print(f"Removing file at {str(out)}")
out.unlink()
if out.exists():
print("File already exists.")
return
print(f"Downloading {url} at {str(out)} ...")
# open in binary mode
with out.open(mode="wb") as file:
# get request
response = get(url, verify=verify)
for chunk in response.iter_content(100000):
# write to file
file.write(chunk)
```
## Download the dataset and supporting font
```
# url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
# url_test = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test"
font_url = "https://ff.static.1001fonts.net/r/o/roboto-condensed.regular.ttf"
dataset_name = "bank-marketing"
out = Path(os.getcwd()) / f"data/{dataset_name}/train_bench.csv"
# out_test = Path(os.getcwd()) / f"data/{dataset_name}_test.csv"
out_font = Path(os.getcwd()) / f"RobotoCondensed-Regular.ttf"
# download(url, out)
# download(url_test, out_test)
download(font_url, out_font)
```
### Load bank-marketing as a dataframe
```
target = "y"
train = pd.read_csv(out, low_memory=False)
print(train.shape)
```
### Prepare split
```
train_indices = train[train.Set == "train"].index.values
np.random.shuffle(train_indices)
valid_indices = train[train.Set == "valid"].index.values
test_indices = train[train.Set == "test"].index.values
```
### Encode target
```
from sklearn import preprocessing
# LabelEncoder maps string class labels to integers
label_encoder = preprocessing.LabelEncoder()
# Encode the labels in the target column
Y = (
label_encoder.fit_transform(train[[target]].values.reshape(-1))
.reshape(-1, 1)
.astype("uint8")
)
print(Y.shape)
X = train.drop(columns=[target, "Set"]).values.astype("str")
print(X.shape)
del train
gc.collect()
# https://he-arc.github.io/livre-python/pillow/index.html#methodes-de-dessin
# https://stackoverflow.com/questions/26649716/how-to-show-pil-image-in-ipython-notebook
# https://stackoverflow.com/questions/384759/how-to-convert-a-pil-image-into-a-numpy-array
# line = np.array(pic, dtypes="uint8")
# from https://arxiv.org/pdf/1902.02160.pdf page 2
WHITE = (255, 255, 255)
BLACK = (0, 0, 0)
def word_to_square_image(text, size, cut_length=None):
truncated = text[:cut_length] if cut_length is not None else text
max_x = np.ceil(np.sqrt(len(truncated))).astype("int")
character_size = np.floor(size / max_x).astype("int")
padding = np.floor((size - (max_x * character_size)) / 2).astype("int")
# Do we need pt to px conversion ? Seems like not
# font_size = int(np.floor(character_size*0.75))
font_size = character_size
fnt = ImageFont.truetype(out_font.as_posix(), font_size)
image = Image.new("RGB", (size, size), BLACK)
# Get the drawing context
draw = ImageDraw.Draw(image)
x = 0
y = 0
for letter in truncated:
draw.text(
(padding + x * character_size, padding + y * character_size),
letter,
font=fnt,
fill=WHITE,
)
if x + 1 < max_x:
x += 1
else:
y += 1
x = 0
return np.array(image)
def text_to_square_image(features, image_size=299, cut_length=None):
square_nb = np.ceil(np.sqrt(len(features))).astype("int")
word_size = np.floor(image_size / square_nb).astype("int")
max_features = len(features)
padding = np.floor((image_size - square_nb * word_size) / 2).astype("int")
result_image = np.zeros((image_size, image_size, 3), dtype="uint8")
results = []
i_feature = 0
for x in range(0, square_nb):
if i_feature is None:
break
for y in range(0, square_nb):
i_feature = x * (square_nb) + y
if i_feature >= max_features:
i_feature = None
break
x_pos = x * word_size + padding
y_pos = y * word_size + padding
result_image[
x_pos : x_pos + word_size, y_pos : y_pos + word_size
] = word_to_square_image(
features[i_feature], size=word_size, cut_length=cut_length
)
return result_image
def preprocess_data(data, map_func):
preprocessed_df = None
with PoolExecutor() as executor:
preprocessed_df = np.stack(list(executor.map(map_func, data)), axis=0)
print(preprocessed_df.shape)
print(preprocessed_df.nbytes / (1024 * 1024)) # Memory size in RAM
gc.collect()
return preprocessed_df
def fixed_text_to_square_image(values):
return text_to_square_image(values, image_size=96, cut_length=None)
from tensorflow.keras.utils import to_categorical
from concurrent.futures import ProcessPoolExecutor as PoolExecutor
class TabularToImagesDataset:
def __init__(self, values, target, func, prefetch=1024):
self.values = values
self.target = to_categorical(target.reshape(-1))
assert target.shape[0] == self.values.shape[0]
self.current = -1
self.max_prefetch = -1
self.prefetch_nb = prefetch
self.func = func
self.ready = None
def __iter__(self):
# print("Calling __iter__")
self.current = -1
self.max_prefetch = -1
return self
def __call__(self):
# print("Calling __call__")
self.current = -1
self.max_prefetch = -1
return self
def __next__(self):
return self.next()
def __len__(self):
return self.values.shape[0]
def prefetch(self):
if self.ready is not None and self.ready.shape[0] == self.values.shape[0]:
return
# print("HERE")
self.max_prefetch = min(self.current + self.prefetch_nb, self.values.shape[0])
# if self.current == self.max_prefetch:
with PoolExecutor() as executor:
self.ready = np.stack(
list(
executor.map(
self.func, self.values[self.current : self.max_prefetch]
)
),
axis=0,
)
return
def next(self):
self.current += 1
# print(self.current)
if self.current >= self.values.shape[0]:
raise StopIteration()
# self.current = 0
# self.max_prefetch = -1
if self.current >= self.max_prefetch:
# print("Will prefetch")
self.prefetch()
# print(self.ready.shape)
return (
self.ready[self.current % self.prefetch_nb],
# text_to_square_image(self.values[self.current]),
self.target[self.current],
)
def build_tf_dataset(X_values, Y_values, image_size, fix_func, prefetch, batch_size):
gen = TabularToImagesDataset(
X_values, Y_values, fix_func, prefetch=prefetch * batch_size,
)
if prefetch * batch_size > X_values.shape[0]:
prefetch = np.ceil(X_values.shape[0] / batch_size).astype("int")
dataset = tf.data.Dataset.from_generator(
gen,
(tf.uint8, tf.uint8),
(tf.TensorShape([image_size, image_size, 3]), tf.TensorShape([2])),
)
dataset = dataset.repeat().batch(batch_size)
return dataset.prefetch(prefetch)
epochs_1 = 2
epochs_2 = 200
image_size = 96
cut_length = None
BATCH_SIZE = 64
PREFETCH = 10000 # 40
# SHUFFLE_BUFFER_SIZE = 10000
patience = 5
def fixed_text_to_square_image(values):
return text_to_square_image(values, image_size=image_size, cut_length=cut_length)
steps_per_epoch = np.ceil(train_indices.shape[0] / BATCH_SIZE)
steps_per_epoch_val = np.ceil(valid_indices.shape[0] / BATCH_SIZE)
dataset = build_tf_dataset(
X[train_indices],
Y[train_indices],
image_size,
fixed_text_to_square_image,
PREFETCH,
BATCH_SIZE,
)
dataset_valid = build_tf_dataset(
X[valid_indices],
Y[valid_indices],
image_size,
fixed_text_to_square_image,
PREFETCH,
BATCH_SIZE,
)
```
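The grid arithmetic inside `word_to_square_image` can be checked in isolation with a small helper of our own that mirrors the `max_x` / `character_size` / `padding` computation:

```
import numpy as np

def grid_layout(n_chars, size):
    # Characters are laid out on a max_x x max_x grid inside a size x size image
    max_x = int(np.ceil(np.sqrt(n_chars)))
    character_size = size // max_x
    padding = (size - max_x * character_size) // 2
    return max_x, character_size, padding

print(grid_layout(10, 96))  # (4, 24, 0)
print(grid_layout(5, 50))   # (3, 16, 1)
```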
```
activation = mish
optimizer = Lookahead(RectifiedAdam(), sync_period=6, slow_step_size=0.5)
```
```
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (
Dense,
GlobalAveragePooling2D,
BatchNormalization,
Dropout,
)
from tensorflow.keras import backend as K
# create the base pre-trained model
base_model = InceptionV3(weights="imagenet", include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation=activation, kernel_initializer="he_normal")(x)
x = Dropout(0.2)(x)
x = Dense(128, activation=activation, kernel_initializer="he_normal")(x)
x = Dropout(0.2)(x)
# and a softmax output layer for the 2 classes
predictions = Dense(2, activation="softmax")(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
layer.trainable = False
# es.set_model(model)
# compile the model (should be done *after* setting layers to non-trainable)
model.compile(optimizer=optimizer, loss="binary_crossentropy") # , metrics=["AUC"])
%%time
# train the model on the new data for a few epochs
history_1 = model.fit(
dataset,
#callbacks=[es],
epochs=epochs_1,
steps_per_epoch=steps_per_epoch,
validation_data=dataset_valid,
validation_steps=steps_per_epoch_val
)
# at this point, the top layers are well trained and we can start fine-tuning
# convolutional layers from inception V3. We will freeze the bottom N layers
# and train the remaining top layers.
def plot_metric(history, metric):
# Plot training & validation loss values
plt.plot(history.history[metric])
plt.plot(history.history[f"val_{metric}"])
plt.title(f"Model {metric}")
plt.ylabel(f"{metric}")
plt.xlabel("Epoch")
plt.legend(["Train", "Test"], loc="upper left")
plt.show()
plot_metric(history_1, "loss")
# plot_metric(history_1, "AUC")
# let's visualize layer names and layer indices to see how many layers
# we should freeze:
model.summary()
# for i, layer in enumerate(model.layers):
# print(i, layer.name)
# Let's unfreeze the whole model
for layer in model.layers:
layer.trainable = True
# Let's build an optimizer
optimizer = Lookahead(RectifiedAdam(), sync_period=6, slow_step_size=0.5)
es = EarlyStopping(
monitor="val_loss",
verbose=1,
mode="min",  # val_loss should be minimized; "max" would restore the worst weights
patience=patience,
restore_best_weights=True,
)
# We need to recompile the model for these modifications to take effect
es.set_model(model)
model.compile(optimizer=optimizer, loss="binary_crossentropy") # , metrics=["AUC"])
%%time
# we train our model again (this time fine-tuning the whole unfrozen network
# alongside the top Dense layers)
history_2 = model.fit(
dataset,
callbacks=[es],
epochs=epochs_2,
steps_per_epoch=steps_per_epoch,
validation_data=dataset_valid,
validation_steps=steps_per_epoch_val
)
plot_metric(history_2, "loss")
# plot_metric(history_2, "AUC")
%%time
valid_data = preprocess_data(X[valid_indices], fixed_text_to_square_image)
%%time
preds = model.predict(valid_data)
print(roc_auc_score(Y[valid_indices], preds[:, 1]))
del valid_data
gc.collect()
%%time
test_data = preprocess_data(X[test_indices], fixed_text_to_square_image)
preds = model.predict(test_data)
print(roc_auc_score(Y[test_indices], preds[:, 1]))
del test_data
gc.collect()
# https://medium.com/google-developer-experts/interpreting-deep-learning-models-for-computer-vision-f95683e23c1d
```
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%config InlineBackend.figure_format='retina'
dir_cat = './'
#vit_df = pd.read_csv(dir_cat+'gz2_vit_09172021_0000_predictions.csv')
#resnet_df = pd.read_csv(dir_cat+'gz2_resnet50_A_predictions.csv')
df = pd.read_csv(dir_cat+'gz2_predictions.csv')
df_vTrT = df[df.vitTresT == 1]
df_vTrF = df[df.vitTresF == 1]
df_vFrT = df[df.vitFresT == 1]
df_vFrF = df[df.vitFresF == 1]
print(f'Number of galaxies in test set : {len(df)}\n')
print(f'ViT True , resnet True galaxies: {len(df_vTrT)}')
print(f'ViT True , resnet False galaxies: {len(df_vTrF)}')
print(f'ViT False, resnet True galaxies: {len(df_vFrT)}')
print(f'ViT False, resnet False galaxies: {len(df_vFrF)}')
df.head()
df_stats = df.groupby(['class'])['class'].agg('count').to_frame('count').reset_index()
df_stats['test_set'] = df_stats['count']/df_stats['count'].sum()
df_stats['vitT_resT'] = df_vTrT.groupby('class').size() #/ df_stats['count'].sum()
df_stats['vitT_resF'] = df_vTrF.groupby('class').size() #/ df_stats['count'].sum()
df_stats['vitF_resT'] = df_vFrT.groupby('class').size() #/ df_stats['count'].sum()
df_stats['vitF_resF'] = df_vFrF.groupby('class').size() #/ df_stats['count'].sum()
print(df_stats)
###### plot ######
#ax = df_stats.plot.bar(x='class', y=['test_set', 'vitT_resT', 'vitT_resF', 'vitF_resT', 'vitF_resF'], rot=20, color=['gray', 'orange', 'red', 'blue', 'skyblue'])
#ax = df_stats.plot.bar(x='class', y=['test_set', 'vitT_resT'], rot=20, color=['gray', 'orange', 'red', 'blue', 'skyblue'])
ax = df_stats.plot.bar(x='class', y=['vitT_resF', 'vitF_resT'], rot=20, color=['red', 'blue', 'skyblue'])
#ax = df_stats.plot.bar(x='class', y=['vitT_resT', 'vitF_resF'], rot=20, color=['orange', 'skyblue'])
ax.set_xticklabels(['Round','In-between','Cigar-shaped','Edge-on','Barred','UnBarred','Irregular','Merger'])
ax.set_ylabel('class fraction')
ax.set_xlabel('galaxy morphology class')
df_vFrT.groupby('class').size()
df_stats = df.groupby(['class'])['class'].agg('count').to_frame('count').reset_index()
df_stats['test_set'] = df_stats['count']/df_stats['count'].sum()
df_stats['vitT_resT'] = df_vTrT.groupby('class').size() / df_vTrT.groupby('class').size().sum()
df_stats['vitT_resF'] = df_vTrF.groupby('class').size() / df_vTrF.groupby('class').size().sum()
df_stats['vitF_resT'] = df_vFrT.groupby('class').size() / df_vFrT.groupby('class').size().sum()
df_stats['vitF_resF'] = df_vFrF.groupby('class').size() / df_vFrF.groupby('class').size().sum()
print(df_stats)
###### plot ######
ax = df_stats.plot.bar(x='class', y=['test_set', 'vitT_resT', 'vitT_resF', 'vitF_resT', 'vitF_resF'], rot=20,
color=['gray', 'orange', 'red', 'blue', 'skyblue'])
ax.set_xticklabels(['Round','In-between','Cigar-shaped','Edge-on','Barred','UnBarred','Irregular','Merger'])
ax.set_ylabel('class fraction')
ax.set_xlabel('galaxy morphology class')
```
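The four agreement columns used above (`vitTresT`, `vitTresF`, `vitFresT`, `vitFresF`) presumably encode per-galaxy correctness of each model; on hypothetical correctness flags they could be built like this (the column names `vit_correct` / `res_correct` are our assumption, not from the catalog):

```
import pandas as pd

# Hypothetical per-galaxy correctness flags for the two models
df = pd.DataFrame({"vit_correct": [1, 1, 0, 0],
                   "res_correct": [1, 0, 1, 0]})

df["vitTresT"] = ((df.vit_correct == 1) & (df.res_correct == 1)).astype(int)
df["vitTresF"] = ((df.vit_correct == 1) & (df.res_correct == 0)).astype(int)
df["vitFresT"] = ((df.vit_correct == 0) & (df.res_correct == 1)).astype(int)
df["vitFresF"] = ((df.vit_correct == 0) & (df.res_correct == 0)).astype(int)

# The four categories partition the test set: each galaxy falls in exactly one
print(df[["vitTresT", "vitTresF", "vitFresT", "vitFresF"]].sum(axis=1).tolist())  # [1, 1, 1, 1]
```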
# color, size distributions
```
fig, ax = plt.subplots(figsize=(7.2, 5.5))
plt.rc('font', size=16)
#tag = 'model_g_r'
tag = 'dered_g_r'
bins = np.linspace(df[tag].min(), df[tag].max(),80)
ax.hist(df[tag] , bins=bins, color='lightgray' , label='full test set', density=True)
ax.hist(df_vTrT[tag], bins=bins, color='royalblue' , label=r'ViT $\bf{T}$ CNN $\bf{T}$', histtype='step' , lw=2, ls='-.', density=True)
ax.hist(df_vTrF[tag], bins=bins, color='firebrick' , label=r'ViT $\bf{T}$ CNN $\bf{F}$', histtype='step', lw=3, density=True)
ax.hist(df_vFrT[tag], bins=bins, color='orange' , label=r'ViT $\bf{F}$ CNN $\bf{T}$', histtype='step' , lw=3, ls='--', density=True)
ax.hist(df_vFrF[tag], bins=bins, color='forestgreen', label=r'ViT $\bf{F}$ CNN $\bf{F}$', histtype='step', lw=2, ls=':', density=True)
#r"$\bf{" + str(number) + "}$"
ax.set_xlabel('g-r')
ax.set_ylabel('pdf')
ax.set_xlim(-0.25, 2.2)
ax.legend(fontsize=14.5)
from scipy.stats import ks_2samp
ks_2samp(df_vTrF['dered_g_r'], df_vFrT['dered_g_r'])
fig, ax = plt.subplots(figsize=(7.2, 5.5))
plt.rc('font', size=16)
tag = 'petroR50_r'
bins = np.linspace(df[tag].min(), df[tag].max(),50)
ax.hist(df[tag] , bins=bins, color='lightgray' , label='full test set', density=True)
ax.hist(df_vTrT[tag], bins=bins, color='royalblue' , label=r'ViT $\bf{T}$ CNN $\bf{T}$', histtype='step' , lw=2, ls='-.', density=True)
ax.hist(df_vTrF[tag], bins=bins, color='firebrick' , label=r'ViT $\bf{T}$ CNN $\bf{F}$', histtype='step', lw=3, density=True)
ax.hist(df_vFrT[tag], bins=bins, color='orange' , label=r'ViT $\bf{F}$ CNN $\bf{T}$', histtype='step' , lw=3, ls='--', density=True)
ax.hist(df_vFrF[tag], bins=bins, color='forestgreen', label=r'ViT $\bf{F}$ CNN $\bf{F}$', histtype='step', lw=2, ls=':', density=True)
#r"$\bf{" + str(number) + "}$"
ax.set_xlabel('50% light radius')
ax.set_ylabel('pdf')
ax.set_xlim(0.5, 10)
ax.legend(fontsize=14.5)
fig, ax = plt.subplots(figsize=(7.2, 5.5))
plt.rc('font', size=16)
tag = 'petroR90_r'
bins = np.linspace(df[tag].min(), df[tag].max(),50)
ax.hist(df[tag] , bins=bins, color='lightgray' , label='full test set', density=True)
ax.hist(df_vTrT[tag], bins=bins, color='royalblue' , label=r'ViT $\bf{T}$ CNN $\bf{T}$', histtype='step' , lw=2, ls='-.', density=True)
ax.hist(df_vTrF[tag], bins=bins, color='firebrick' , label=r'ViT $\bf{T}$ CNN $\bf{F}$', histtype='step', lw=3, density=True)
ax.hist(df_vFrT[tag], bins=bins, color='orange' , label=r'ViT $\bf{F}$ CNN $\bf{T}$', histtype='step' , lw=3, ls='--', density=True)
ax.hist(df_vFrF[tag], bins=bins, color='forestgreen', label=r'ViT $\bf{F}$ CNN $\bf{F}$', histtype='step', lw=2, ls=':', density=True)
#r"$\bf{" + str(number) + "}$"
ax.set_xlabel('90% light radius')
ax.set_ylabel('pdf')
ax.set_xlim(2.5, 25)
ax.legend(fontsize=14.5)
ks_2samp(df_vTrF['petroR90_r'], df_vFrT['petroR90_r'])
fig, ax = plt.subplots(figsize=(7.2, 5.5))
plt.rc('font', size=16)
tag = 'dered_r'
bins = np.linspace(df[tag].min(), df[tag].max(),20)
ax.hist(df[tag] , bins=bins, color='lightgray' , label='full test set', density=True)
ax.hist(df_vTrT[tag], bins=bins, color='royalblue' , label=r'ViT $\bf{T}$ CNN $\bf{T}$', histtype='step' , lw=2, ls='-.', density=True)
ax.hist(df_vTrF[tag], bins=bins, color='firebrick' , label=r'ViT $\bf{T}$ CNN $\bf{F}$', histtype='step' , lw=3, density=True)
ax.hist(df_vFrT[tag], bins=bins, color='orange' , label=r'ViT $\bf{F}$ CNN $\bf{T}$', histtype='step' , lw=3, ls='--', density=True)
ax.hist(df_vFrF[tag], bins=bins, color='forestgreen', label=r'ViT $\bf{F}$ CNN $\bf{F}$', histtype='step' , lw=2, ls=':', density=True)
#r"$\bf{" + str(number) + "}$"
ax.set_xlabel('r-band (apparent) magnitude')
ax.set_ylabel('pdf')
#ax.set_xlim(2.5, 25)
ax.legend(fontsize=14.5)
```
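`ks_2samp`, used above, returns the KS statistic and a p-value; a small p-value indicates the two samples are unlikely to be drawn from the same distribution. A synthetic check with shifted normals (our own toy data, not the galaxy catalog):

```
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.RandomState(0)
a = rng.normal(0.0, 1.0, 500)
shifted = rng.normal(0.5, 1.0, 500)  # same width, mean shifted by 0.5

stat, p = ks_2samp(a, shifted)
print(f"KS statistic {stat:.3f}, p-value {p:.2e}")
```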
### check galaxy image
```
dir_image = '/home/hhg/Research/galaxyClassify/catalog/galaxyZoo_kaggle/gz2_images/images'
galaxyID = 241961
current_IMG = plt.imread(dir_image+f'/{galaxyID}.jpg')
plt.imshow(current_IMG)
plt.axis('off')
```
```
from openpyxl import load_workbook
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# plotly.plotly moved to the chart_studio package in plotly 4 and is unused below, so it is omitted
import seaborn as sns
import jinja2
from bokeh.io import output_notebook
import bokeh.palettes
from bokeh.plotting import figure, show, output_file
from bokeh.models import HoverTool, ColumnDataSource, Range1d, LabelSet, Label
from collections import OrderedDict
from bokeh.sampledata.periodic_table import elements
from bokeh.resources import CDN
from bokeh.embed import file_html
from sklearn import preprocessing
from sklearn import metrics
# sklearn.cross_validation was removed in scikit-learn 0.20; model_selection provides these helpers
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
import scipy
sns.set(style='white')
sns.set(style='whitegrid', color_codes=True)
wb = load_workbook(r'C:\Users\diego\Downloads\worldcup.xlsx')
#['Hoja1', 'Mundial1994-2014']
work_sheet = wb['Mundial1994-2014']
df = pd.DataFrame(work_sheet.values)
df.columns = ['Team1','Goal1','Goal2','Team2','Year','Stage','Goal_Dif','Team1_Res','Team2_Res',
'Rank1','Rank2','Dif_Rank','Value1','Value2','Value_Dif','Age1','Age2','Age_Dif','Color1','Color2']
df = df.drop(df.index[[0]])
df['Match_Info'] = df['Team1'] + ' '+ df['Goal1'].astype(str)+'-'+df['Goal2'].astype(str)+ ' '+ df['Team2']
df['y'] = np.where(df['Team1_Res'] == -1, 0, 1)
df_win_lose = df[(df['Team1_Res'] > 0) | (df['Team1_Res'] < 0)]
df_value = df[df['Year']>= 2010]
df_value_win_lose = df_value[(df_value['Team1_Res'] > 0) | (df_value['Team1_Res'] < 0)]
# Comparative plot: Goal_Dif vs Dif_Rank
df_Team1Res_DifRank = df[['Team1_Res','Dif_Rank','Team1','Color1']]
df_GoalDif_DifRank = df[['Goal_Dif','Dif_Rank','Team1','Color1']]
colors1 = list(df['Color1'])
plt.scatter(df_Team1Res_DifRank['Dif_Rank'],df_Team1Res_DifRank['Team1_Res'],
s=250, alpha=0.5, c=colors1,edgecolor='#6b0c08',linewidth = 0.75)
plt.axvline(0, color='black',linestyle=':')
plt.axhline(0, color='black',linestyle=':')
plt.suptitle('Win or Lose vs Ranking Difference', weight='bold', fontsize=30)
plt.title('FIFA World Cups (1994-2014)', weight='bold', fontsize=20)
plt.show()
plt.scatter(x='Dif_Rank',y='Goal_Dif', data=df_GoalDif_DifRank, marker='o', c=colors1,
s=250, alpha=0.5)
plt.axvline(0, color='black',linestyle=':')
plt.axhline(0, color='black',linestyle=':')
plt.suptitle('Goal(s) Difference vs Ranking Difference', weight='bold', fontsize=30)
plt.title('FIFA World Cups (1994-2014)', weight='bold', fontsize=20)
plt.show()
# Plot 1
tools = 'pan, wheel_zoom, box_zoom, reset, save'.split(',')
hover = HoverTool(tooltips=[
("Team", "@Team1"),
("Rank Difference:", '@Dif_Rank'),
("Team Result", "@Team1_Res"),
('Year', '@Year'),
('Stage:', '@Stage'),
('Match:', '@Match_Info')
])
tools.append(hover)
p = figure(plot_width = 1300, plot_height = 1100, title='Win(1)-Draw(0)-Lose(-1) vs Ranking Difference',
x_range=Range1d(-100, 100),y_range = Range1d(-1.5,1.5), tools=tools)
source= ColumnDataSource(df)
p.title.text_font_size = '19pt'
p.title_location = 'above'
p.scatter(x='Dif_Rank', y='Team1_Res', size=28,color='Color1' , source=source, fill_alpha=0.6)
p.xaxis[0].axis_label = 'Ranking Difference'
p.xaxis[0].axis_label_text_font_size = '15pt'
p.yaxis[0].axis_label = 'Win(1)-Draw(0)-Lose(-1)'
p.yaxis[0].axis_label_text_font_size = '15pt'
#labels = LabelSet(x='Dif_Rank', y='Team1_Res', text='Team1', level='glyph',
# x_offset=8, y_offset=8, source=source, render_mode='canvas', text_align ='center', text_font_size='10pt')
#p.add_layout(labels)
output_file("label.html", title="label.py example")
show(p)
# Plot 2
tools = 'pan, wheel_zoom, box_zoom, reset, save'.split(',')
hover = HoverTool(tooltips=[
("Team:", "@Team1"),
("Rank Difference:", '@Dif_Rank'),
("Goal(s) Difference:", "@Goal_Dif"),
('Year:', '@Year'),
('Stage:', '@Stage'),
('Match:', '@Match_Info')
])
tools.append(hover)
p = figure(plot_width = 1000, plot_height = 900, title='Goal(s) Difference vs Ranking Difference',
x_range=Range1d(-100, 100),y_range = Range1d(-10,10), tools=tools)
source= ColumnDataSource(df)
p.title.text_font_size = '19pt'
p.title_location = 'above'
p.scatter(x='Dif_Rank', y='Goal_Dif', size=28,color='Color1' , source=source, fill_alpha=0.6)
p.xaxis[0].axis_label = 'Ranking Difference'
p.xaxis[0].axis_label_text_font_size = '15pt'
p.yaxis[0].axis_label = 'Goal(s) Difference'
p.yaxis[0].axis_label_text_font_size = '15pt'
#labels = LabelSet(x='Dif_Rank', y='Team1_Res', text='Team1', level='glyph',
# x_offset=8, y_offset=8, source=source, render_mode='canvas', text_align ='center', text_font_size='10pt')
output_file("label2.html", title="label2.py example")
show(p)
# Plot 3
tools = 'pan, wheel_zoom, box_zoom, reset, save'.split(',')
hover = HoverTool(tooltips=[
("Team:", "@Team1"),
("Market Value Difference:", '@Value_Dif'),
("Goal(s) Difference:", "@Goal_Dif"),
('Year:', '@Year'),
('Stage:', '@Stage'),
('Match:', '@Match_Info')
])
tools.append(hover)
p = figure(plot_width = 1000, plot_height = 900, title='Goal(s) Difference vs Team Market Value Difference',
y_range = Range1d(-10,10), tools=tools)
source= ColumnDataSource(df_value)
p.title.text_font_size = '19pt'
p.title_location = 'above'
p.scatter(x='Value_Dif', y='Goal_Dif', size=28,color='Color1' , source=source, fill_alpha=0.6)
p.xaxis[0].axis_label = 'Team Market Value Difference (Hundred Million USD)'
p.xaxis[0].axis_label_text_font_size = '15pt'
p.yaxis[0].axis_label = 'Goal(s) Difference'
p.yaxis[0].axis_label_text_font_size = '15pt'
#labels = LabelSet(x='Dif_Rank', y='Team1_Res', text='Team1', level='glyph',
# x_offset=8, y_offset=8, source=source, render_mode='canvas', text_align ='center', text_font_size='10pt')
output_file("label3.html", title="label3.py example")
show(p)
# Plot 4
tools = 'pan, wheel_zoom, box_zoom, reset, save'.split(',')
hover = HoverTool(tooltips=[
("Team:", "@Team1"),
("Market Value Difference:", '@Value_Dif'),
("Team Result:", "@Team1_Res"),
('Year:','@Year'),
('Stage:', '@Stage'),
('Match:','@Match_Info')])
tools.append(hover)
p = figure(plot_width = 1000, plot_height = 900, title='Win(1)-Draw(0)-Lose(-1) vs Team Market Value Difference',
y_range = Range1d(-1.5,1.5), tools=tools)
source= ColumnDataSource(df_value)
p.title.text_font_size = '19pt'
p.title_location = 'above'
p.scatter(x='Value_Dif', y='Team1_Res', size=28,color='Color1' , source=source, fill_alpha=0.6)
p.xaxis[0].axis_label = 'Team Market Value Difference (Hundred Million USD)'
p.xaxis[0].axis_label_text_font_size = '15pt'
p.yaxis[0].axis_label = 'Win(1)-Draw(0)-Lose(-1)'
p.yaxis[0].axis_label_text_font_size = '15pt'
#labels = LabelSet(x='Dif_Rank', y='Team1_Res', text='Team1', level='glyph',
# x_offset=8, y_offset=8, source=source, render_mode='canvas', text_align ='center', text_font_size='10pt')
output_file("label4.html",mode='inline', title="label4.py example")
show(p)
# Plot 5
tools = 'pan, wheel_zoom, box_zoom, reset, save'.split(',')
hover = HoverTool(tooltips=[
("Team", "@Team1"),
("Rank Difference:", '@Dif_Rank'),
("Team Result", "@Team1_Res"),
('Year', '@Year'),
('Stage:', '@Stage'),
('Match:', '@Match_Info')
])
tools.append(hover)
p = figure(plot_width = 1000, plot_height = 900, title='Win(1)-Lose(-1) vs Ranking Difference',
x_range=Range1d(-100, 100),y_range = Range1d(-1.5,1.5), tools=tools)
source = ColumnDataSource(df_value_win_lose)
p.title.text_font_size = '19pt'
p.title_location = 'above'
p.scatter(x='Dif_Rank', y='Team1_Res', size=28,color='Color1' , source=source, fill_alpha=0.6)
p.xaxis[0].axis_label = 'Ranking Difference'
p.xaxis[0].axis_label_text_font_size = '15pt'
p.yaxis[0].axis_label = 'Win(1)-Lose(-1)'
p.yaxis[0].axis_label_text_font_size = '15pt'
#labels = LabelSet(x='Dif_Rank', y='Team1_Res', text='Team1', level='glyph',
# x_offset=8, y_offset=8, source=source, render_mode='canvas', text_align ='center', text_font_size='10pt')
#p.add_layout(labels)
output_file("label5.html", title="label5.py example")
show(p)
# Plot 6
tools = 'pan, wheel_zoom, box_zoom, reset, save'.split(',')
hover = HoverTool(tooltips=[
("Team", "@Team1"),
("Market Value Difference:", '@Value_Dif'),
("Team Result", "@Team1_Res"),
('Year', '@Year'),
('Stage:', '@Stage'),
('Match:', '@Match_Info')
])
tools.append(hover)
p = figure(plot_width = 1000, plot_height = 900, title='Win(1)-Lose(-1) vs Team Market Value Difference',
y_range = Range1d(-1.5,1.5), tools=tools)
source = ColumnDataSource(df_value_win_lose)
p.title.text_font_size = '19pt'
p.title_location = 'above'
p.scatter(x='Value_Dif', y='Team1_Res', size=28,color='Color1' , source=source, fill_alpha=0.6)
p.xaxis[0].axis_label = 'Team Market Value Difference (Hundred Million USD)'
p.xaxis[0].axis_label_text_font_size = '15pt'
p.yaxis[0].axis_label = 'Win(1)-Lose(-1)'
p.yaxis[0].axis_label_text_font_size = '15pt'
#labels = LabelSet(x='Dif_Rank', y='Team1_Res', text='Team1', level='glyph',
# x_offset=8, y_offset=8, source=source, render_mode='canvas', text_align ='center', text_font_size='10pt')
#p.add_layout(labels)
output_file("label6.html", title="label6.py example")
show(p)
# Plot 7
tools = 'pan, wheel_zoom, box_zoom, reset, save'.split(',')
hover = HoverTool(tooltips=[
("Team", "@Team1"),
("Market Value Difference:", '@Value_Dif'),
("Team Result", "@Team1_Res"),
('Year', '@Year'),
('Stage:', '@Stage'),
('Match:', '@Match_Info')
])
tools.append(hover)
p = figure(plot_width = 1000, plot_height = 900, title='Goal(s) Difference vs Team Market Value Difference',
y_range = Range1d(-10,10), tools=tools)
source = ColumnDataSource(df_value_win_lose)
p.title.text_font_size = '19pt'
p.title_location = 'above'
p.scatter(x='Value_Dif', y='Goal_Dif', size=28, color='Color1', source=source, fill_alpha=0.6)
p.xaxis[0].axis_label = 'Team Market Value Difference (Hundred Million USD)'
p.xaxis[0].axis_label_text_font_size = '15pt'
p.yaxis[0].axis_label = 'Goal(s) Difference'
p.yaxis[0].axis_label_text_font_size = '15pt'
#labels = LabelSet(x='Dif_Rank', y='Team1_Res', text='Team1', level='glyph',
# x_offset=8, y_offset=8, source=source, render_mode='canvas', text_align ='center', text_font_size='10pt')
#p.add_layout(labels)
output_file("label7.html", title="label7.py example")
show(p)
#Plot 8
tools = 'pan,wheel_zoom,box_zoom,reset,save'.split(',')
hover = HoverTool(tooltips=[
("Team", "@Team1"),
("Market Value Difference:", '@Value_Dif'),
("Team Result", "@Team1_Res"),
('Year', '@Year'),
('Stage:', '@Stage'),
('Match:', '@Match_Info')
])
tools.append(hover)
p = figure(plot_width = 1000, plot_height = 900, title='Goal(s) Difference vs Ranking Difference',
y_range = Range1d(-10,10), tools=tools)
source = ColumnDataSource(df_value_win_lose)
p.title.text_font_size = '19pt'
p.title_location = 'above'
p.scatter(x='Dif_Rank', y='Goal_Dif', size=28, color='Color1', source=source, fill_alpha=0.6)
p.xaxis[0].axis_label = 'Ranking Difference'
p.xaxis[0].axis_label_text_font_size = '15pt'
p.yaxis[0].axis_label = 'Goal(s) Difference'
p.yaxis[0].axis_label_text_font_size = '15pt'
#labels = LabelSet(x='Dif_Rank', y='Team1_Res', text='Team1', level='glyph',
# x_offset=8, y_offset=8, source=source, render_mode='canvas', text_align ='center', text_font_size='10pt')
#p.add_layout(labels)
output_file("label8.html", title="label8.py example")
show(p)
#Linear Regression World Cup-Transfer Market Value
#Target variable to predict: 1 means win, -1 means lose
#Data Exploration
train, test = train_test_split(df_value_win_lose, test_size=0.20, random_state=99)
X_train = train[['Value_Dif']]
y_train = train[['y']]
X_test = test[['Value_Dif']]
y_test = test[['y']]
#Use if there is more than one independent variable
#data_final = df_value_win_lose[['y','Value_Dif']]
#X = data_final['Value_Dif']
#y = data_final['y']
#Choose data - requires more than one independent variable
#logreg = LogisticRegression()
#rfe = RFE(logreg, 1)
#rfe = rfe.fit(data_final['Value_Dif'], data_final['y'])
#Split Data Set
#split = int(0.7*len(data_final))
#X_train, X_test, y_train, y_test = X[:split],X[split:],y[:split], y[split:]
#Fit model - more than one variable
#model = LogisticRegression()
#model = model.fit(X_train, y_train)
model = LogisticRegression()
model = model.fit(X_train, y_train)
#Predict Probabilities
probability = model.predict_proba(X_test)
print(probability)
#Predict class labels
predicted = model.predict(X_test)
print(len(predicted))
print(len(y_test))
#Evaluate the model
#Confusion Matrix
print(metrics.confusion_matrix(y_test,predicted))
print(metrics.classification_report(y_test, predicted))
#Model Accuracy
print(model.score(X_test,y_test))
#Cross Validation
Value_Dif = df_value_win_lose['Value_Dif']
Value_Dif = np.array(Value_Dif).reshape(-1,1)
cross_val = cross_val_score(LogisticRegression(), Value_Dif, df_value_win_lose['y'],
scoring='accuracy', cv=10)
print(cross_val)
print(cross_val.mean())
#Create Strategy Using the model
plt.scatter(X_test, y_test, color='black')
plt.scatter(X_test, predicted, color='blue', linewidth=3, alpha=0.4)
plt.xlabel("Team Value Difference")
plt.ylabel("Team Result Win(1) Lose(0)")
plt.show()
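#The train/predict/evaluate pattern above is repeated for each model
#below. A hedged sketch of a reusable helper (hypothetical, not part of
#the original analysis; column names mirror the ones used here):
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics
def fit_and_evaluate(data, feature_cols, target_col='y',
                     test_size=0.20, random_state=99):
    #Fit a logistic regression and report accuracy on a held-out split
    train, test = train_test_split(data, test_size=test_size,
                                   random_state=random_state)
    model = LogisticRegression().fit(train[feature_cols], train[target_col])
    predicted = model.predict(test[feature_cols])
    accuracy = metrics.accuracy_score(test[target_col], predicted)
    return model, predicted, accuracy
#Example call: fit_and_evaluate(df_value_win_lose, ['Value_Dif'])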
#Second model: ranking difference
train, test = train_test_split(df, test_size=0.20, random_state=99)
X_train = train[['Dif_Rank']]
y_train = train[['y']]
X_test = test[['Dif_Rank']]
y_test = test[['y']]
#Polynomial and Ridge model alternative, in case logistic regression is not used
#pol = make_pipeline(PolynomialFeatures(6), Ridge())
#pol.fit(X_train,y_train)
model = LogisticRegression()
model = model.fit(X_train, y_train)
#Predict Probabilities
probability = model.predict_proba(X_test)
print(probability)
#Predict class labels
predicted = model.predict(X_test)
print(len(predicted))
print(len(y_test))
#Evaluate the model
#Confusion Matrix
print(metrics.confusion_matrix(y_test,predicted))
print(metrics.classification_report(y_test, predicted))
#Model Accuracy
print(model.score(X_test, y_test))
#Cross Validation
Dif_Rank = df_value_win_lose['Dif_Rank']
Dif_Rank = np.array(Dif_Rank).reshape(-1,1)
cross_val = cross_val_score(LogisticRegression(), Dif_Rank, df_value_win_lose['y'],
scoring='accuracy', cv=10)
print(cross_val)
print(cross_val.mean())
plt.scatter(X_test, y_test, color='black')
plt.scatter(X_test, predicted, color='blue', linewidth=3, alpha=0.4)
plt.xlabel("Team Ranking Difference")
plt.ylabel("Team Result Win(1) Lose(0)")
plt.show()
#Polynomial Ridge, in case logistic regression is not used
#y_pol = pol.predict(X_test)
#plt.scatter(X_test, y_test, color='black')
#plt.scatter(X_test, y_pol, color='blue')
#plt.xlabel("Team Ranking Difference")
#plt.ylabel("Team Result Win(1) Lose(0)")
#plt.show()
#Third model: age and ranking differences
#Fit model - more than one variable
#model = LogisticRegression()
#model = model.fit(X_train, y_train)
train, test = train_test_split(df_value_win_lose, test_size=0.20, random_state=99)
X_train = train[['Age_Dif','Dif_Rank']]
y_train = train[['y']]
X_test = test[['Age_Dif','Dif_Rank']]
y_test = test[['y']]
model = LogisticRegression()
model = model.fit(X_train, y_train)
#Predict Probabilities
probability = model.predict_proba(X_test)
print(probability)
#Predict class labels
predicted = model.predict(X_test)
print(len(predicted))
print(len(y_test))
#Evaluate the model
#Confusion Matrix
print(metrics.confusion_matrix(y_test,predicted))
print(metrics.classification_report(y_test, predicted))
#Model Accuracy
print(model.score(X_test,y_test))
#Cross Validation
Indep_Var = df_value_win_lose[['Dif_Rank','Age_Dif']]
cross_val = cross_val_score(LogisticRegression(), Indep_Var, df_value_win_lose['y'],
scoring='accuracy', cv=10)
print(cross_val)
print(cross_val.mean())
#Improving the model
# from sklearn.svm import SVR
#
# regr_more_features = LogisticRegression()
# regr_more_features.fit(X_train, y_train)
# y_pred_more_features = regr_more_features.predict(X_test)
# print("Mean squared error: %.2f" % metrics.mean_squared_error(y_test, y_pred_more_features))
# print('Variance score: %.2f' % metrics.r2_score(y_test, y_pred_more_features))
#
# pol_more_features = make_pipeline(PolynomialFeatures(4), Ridge())
# pol_more_features.fit(X_train, y_train)
# y_pol_more_features = pol_more_features.predict(X_test)
# print("Mean squared error: %.2f" % metrics.mean_squared_error(y_test, y_pol_more_features))
# print('Variance score: %.2f' % metrics.r2_score(y_test, y_pol_more_features))
#
# svr_rbf_more_features = SVR(kernel='rbf', gamma=1e-3, C=100, epsilon=0.1)
# svr_rbf_more_features.fit(X_train, y_train.values.ravel())
# y_rbf_more_features = svr_rbf_more_features.predict(X_test)
# print("Mean squared error: %.2f" % metrics.mean_squared_error(y_test, y_rbf_more_features))
# print('Variance score: %.2f' % metrics.r2_score(y_test, y_rbf_more_features))
#print(test[['Team1','Team2','y', 'Age_Dif', 'Value_Dif', 'Dif_Rank', 'Result_Prediction_RBF','Error_Percentage']].nlargest(4, columns='Error_Percentage'))
#Final model
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_regression
#Generate DataSet
train, test = train_test_split(df_value_win_lose, test_size=0.20, random_state=99)
X_train = train[['Age_Dif','Dif_Rank']]
y_train = train[['y']]
X_test = test[['Age_Dif','Dif_Rank']]
y_test = test[['y']]
#Fit Final Model
model = LogisticRegression()
model = model.fit(X_train,y_train)
#New instances for which we do not know the answer
work_sheet2018 = wb['Mundial2018']
data2018 = pd.DataFrame(work_sheet2018.values)
data2018.columns = ['Team1','Team2','Year','Stage','Rank1','Rank2','Dif_Rank','Value1','Value2','Value_Dif','Age1','Age2','Age_Dif']
data2018 = data2018.drop(data2018.index[[0]])
#Features for the new instances
Xnew = data2018[['Age_Dif','Dif_Rank']]
#Make Predictions
y_predicted_WC2018 = model.predict(Xnew)
probability2018 = model.predict_proba(Xnew)
#show the inputs and predicted outputs
#writer = pd.ExcelWriter('/Users/juancarlos/ProbabilidadMundial2018.xlsx')
#print(pd.DataFrame(y_predicted_WC2018))
#probability2018 = pd.DataFrame(probability2018)
#probability2018.to_excel(writer,'Sheet1')
#writer.save()
#Probability calculator
Calculadora_Prob = wb['Calculadora2']
df_calculadora = pd.DataFrame(Calculadora_Prob.values)
df_calculadora.columns = ['num','Team1','Team2','Year','Rank1','Rank2','Dif_Rank','Age1','Age2','Age_Dif']
df_calculadora = df_calculadora.drop(df_calculadora.index[[0]])
#New Data to predict
xnew_calc = df_calculadora[['Age_Dif','Dif_Rank']]
y_predict_calc = model.predict(xnew_calc)
prob_calc = model.predict_proba(xnew_calc)
print(y_predict_calc)
print(prob_calc)
# #show the inputs and predicted outputs
# writer = pd.ExcelWriter('/Users/juancarlos/ProbabilidadMundial2018-2.xlsx')
# print(pd.DataFrame(y_predict_calc))
# y_predict_calc= pd.DataFrame(y_predict_calc)
#
# y_predict_calc.to_excel(writer,'Sheet1')
# writer.save()
#
# writer = pd.ExcelWriter('/Users/juancarlos/ProbabilidadMundial2018-1.xlsx')
# print(pd.DataFrame(prob_calc))
# prob_calc= pd.DataFrame(prob_calc)
#
# prob_calc.to_excel(writer,'Sheet1')
# writer.save()
```
Predictions by team with their probability: 1 = win, 0 = loss. Probability list below.
# The ColBERT Model
Here we look at the ColBERT model, which proposes ...
```
!pip install --upgrade python-terrier
!pip install --upgrade git+https://github.com/terrierteam/pyterrier_colbert
#initialize PyTerrier
import pyterrier as pt
if not pt.started():
pt.init()
#fetch the covid19 dataset
dataset = pt.datasets.get_dataset('irds:cord19/trec-covid')
topics = dataset.get_topics(variant='description')
qrels = dataset.get_qrels()
#build the index
import os
cord19 = pt.datasets.get_dataset('irds:cord19/trec-covid')
pt_index_path = './terrier_cord19'
if not os.path.exists(pt_index_path + "/data.properties"):
# create the index, using the IterDictIndexer indexer
indexer = pt.index.IterDictIndexer(pt_index_path)
# we give the dataset get_corpus_iter() directly to the indexer
# while specifying the fields to index and the metadata to record
index_ref = indexer.index(cord19.get_corpus_iter(),
fields=('abstract',),
meta=('docno',))
else:
# if you already have the index, use it.
index_ref = pt.IndexRef.of(pt_index_path + "/data.properties")
index = pt.IndexFactory.of(index_ref)
import os
if not os.path.exists("terrier_index.zip"):
!wget http://www.dcs.gla.ac.uk/~craigm/ecir2021-tutorial/terrier_index.zip
!unzip -j terrier_index.zip -d terrier_index
index_ref = pt.IndexRef.of("./terrier_index/data.properties")
index = pt.IndexFactory.of(index_ref)
```
## Loading the ColBERT model
```
import pyterrier_colbert.ranking
colbert_factory = pyterrier_colbert.ranking.ColBERTFactory(
"http://www.dcs.gla.ac.uk/~craigm/colbert.dnn.zip", None, None)
colbert = colbert_factory.text_scorer(doc_attr='abstract')
```
Running the dataset queries and computing the metrics:
```
br = pt.BatchRetrieve(index) % 100  # the % operator keeps the top 100 results per query
pipeline = br >> pt.text.get_text(dataset, 'abstract') >> colbert
pt.Experiment(
[br, pipeline],
topics,
qrels,
names=['DPH', 'DPH >> ColBERT'],
eval_metrics=["map", "ndcg", 'ndcg_cut.10', 'P.10', 'mrt']
)
```
Here we want to visualize the model's results, and in particular the attention computed over the text.
```
res = pipeline(topics.iloc[:1])
res.merge(dataset.get_qrels(), how='left').head()
```
The first document is relevant for query q0. Below we examine the attention map associated with the score computation:
```
text = dataset.irds_ref().docs_store().get('4dtk1kyh').abstract[:300] + '...' # truncate text
colbert_factory.explain_text('what is the origin of covid 19', text)
```
# Intro to Reinforcement Learning
Reinforcement learning requires us to model our problem using the following two constructs:
* An agent, the thing that makes decisions.
* An environment, the world which encodes what decisions can be made, and the impact of those decisions.
The environment contains all the possible states, knows all the actions that can be taken from each state, and knows when rewards should be given and what the magnitude of those rewards should be. An agent gets this information from the environment by exploring and learns from experience which states provide the best rewards. Rewards slowly percolate outward to neighboring states iteratively, which helps the agent make decisions over longer time horizons.
## The Agent/Environment Interface

> Image Source: [Reinforcement Learning:An Introduction](http://incompleteideas.net/book/bookdraft2017nov5.pdf)
Reinforcement learning typically takes place over a number of episodes, which are roughly analogous to epochs. In most RL settings, the environment contains some "terminal" states which indicate the end of an episode. In games, this happens when the game ends: when Mario gets killed or reaches the end of the level, or in chess when someone is put into checkmate or concedes. An episode ends when an agent reaches one of these terminal states.
An episode comprises a series of decisions the agent makes until it reaches one of these terminal states. Engineers sometimes choose to terminate an episode after a maximum number of decisions, especially if there is a strong chance that the agent will never reach a terminal state.
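The episode loop just described can be sketched as follows. The environment and agent interfaces here are hypothetical stand-ins (loosely modeled on the common reset/step convention), not a real library API:

```python
def run_episode(env, agent, max_steps=1000):
    # One episode: act until a terminal state, or until a decision cap is hit.
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent.choose_action(state)
        state, reward, done = env.step(action)
        total_reward += reward
        if done:  # reached a terminal state
            break
    return total_reward

# A toy corridor environment makes the loop runnable: the agent walks
# right from position 0 and the episode ends (with +1 reward) at position 3.
class ToyEnv:
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos += 1
        done = (self.pos == 3)
        return self.pos, (1.0 if done else 0.0), done

class ToyAgent:
    def choose_action(self, state):
        return 'right'

total = run_episode(ToyEnv(), ToyAgent())
```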
## Markov Decision Processes
Formally, the problems that reinforcement learning is best at solving are modeled by Markov Decision Processes (MDP). MDPs are a special kind of graph that are similar to State Machines. MDPs have two kinds of nodes, states and actions. From any given state an agent can select from only the available actions, those actions will take an agent to another state. These transitions from actions to states are frequently stochastic—meaning taking a particular action might lead you to one of several states based on some probabilistic function.
Transitions from state to state can be associated with a reward but they are not required to be. Many MDPs have terminal states, but those are not formally required either.

> Image Source: [Wikimedia commons, public domain](https://commons.wikimedia.org/wiki/File:Markov_Decision_Process_example.png)
This MDP has 3 states (larger green nodes S0, S1, S2), each state has exactly 2 actions available (smaller red nodes, a0, a1), and two transitions have rewards (from S2a1 -> S0 has -1 reward, from S1a0 -> S0 has +5 reward).
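This structure can be encoded directly in code as a mapping from (state, action) to outcome triples. In the sketch below, the 0.7/0.3 splits and the +5/-1 rewards come from the text; the remaining transition probabilities are illustrative placeholders, not read off the figure:

```python
import random

# mdp[state][action] -> list of (probability, next_state, reward) triples.
mdp = {
    'S0': {'a0': [(1.0, 'S0', 0.0)], 'a1': [(1.0, 'S2', 0.0)]},
    'S1': {'a0': [(0.7, 'S0', 5.0), (0.3, 'S1', 0.0)],
           'a1': [(1.0, 'S2', 0.0)]},
    'S2': {'a0': [(1.0, 'S0', 0.0)],
           'a1': [(0.7, 'S1', 0.0), (0.3, 'S0', -1.0)]},
}

def sample_transition(state, action, rng=random):
    # Transitions are stochastic: one action can lead to several states.
    transitions = mdp[state][action]
    probs = [p for p, _, _ in transitions]
    _, next_state, reward = rng.choices(transitions, weights=probs)[0]
    return next_state, reward

# Sanity check: outgoing probabilities sum to 1 for every (state, action).
assert all(abs(sum(p for p, _, _ in acts[a]) - 1.0) < 1e-9
           for acts in mdp.values() for a in acts)
```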
## Finding a Policy
The reinforcement learning algorithm we're going to focus on (Q-Learning) is a "policy based" agent. This means its goal is to discover which decision is the "best" decision to make at any given state. Sometimes this goal is a little naive; for example, if the state-space is evolving in real time, it may not be possible to determine a "best" policy. In the above MDP, though, there IS an optimal policy... What is it?
### Optimal Policy For the Above:
The only way to gain a positive reward is to take the transition S1a0 -> S0. That gives us +5 70% of the time.
Getting to S1 requires a risk though: the only way to get to S1 is by taking a1 from S2, which has a 30% chance of landing us back at S0 with a -1 reward.
We can easily oscillate infinitely between S0 and S2 with zero reward by taking only S0a1, S0a0, and S2a0 repeatedly. So the question is: is the risk worth the reward?
Say we're in S1: we can get +5 70% of the time by taking a0. That's an expected value of 3.5 if our policy is to always take action a0. If we're in S2, then by always taking action a1 we reach S1 (worth 3.5) 70% of the time, and land back at S0 with a -1 reward 30% of the time:
`(.7 * 3.5) + (.3 * -1) = 2.45 - .3 = 2.15`
So intuitively we should go ahead and take the risky action to gain the net-positive reward. **But wait!** Both of our actions are *self-referential* and might lead us back to the original state... how do we account for that?
For the mathematical purists, we can use something called the Bellman Optimality Equation. Intuitively, the Bellman optimality equation expresses the fact that the value of a state under an optimal policy must equal the expected return for the best action from that state:
For the value of states:

For the state-action pairs:

> For a more complete treatment of these equations, see [Chapter 3.6 of this book](http://incompleteideas.net/book/bookdraft2017nov5.pdf)
Several bits of notation were just introduced:
* The Discount Factor (γ) — some value between 0 and 1, which is required for convergence.
* The expected return (Gt) — the value we want to optimize.
* The table of state-action pairs (Q) — these are the values of being in a state and taking a given action.
* The table of state values (V*) — these are based on the Q-values from the Q-table based on taking the best action.
* The policy is represented by π — our policy is what our agent thinks is the best action
* S and S' both represent states, the current state (S) and the next state (S') for any given state-action pair.
* r represents a reward for a given transition.
Solving this series of equations is computationally unrealistic for most problems of any real size. It is an iterative process that will only converge if the discount factor γ is between 0 and 1, and even then it often converges slowly. The most common algorithm for solving the Bellman equations directly is called Value Iteration, and it is much like what we did above, but we'd apply that logic repeatedly, for every state-action pair, and we'd have to apply a discount factor to the expected values we computed.
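To make that iteration concrete, here is a hedged sketch of value iteration on the 3-state example MDP from the figure. The 0.7/0.3 splits and the +5/-1 rewards come from the text; the other transition probabilities are illustrative placeholders:

```python
# mdp[state][action] -> list of (probability, next_state, reward) triples.
mdp = {
    'S0': {'a0': [(1.0, 'S0', 0.0)], 'a1': [(1.0, 'S2', 0.0)]},
    'S1': {'a0': [(0.7, 'S0', 5.0), (0.3, 'S1', 0.0)],
           'a1': [(1.0, 'S2', 0.0)]},
    'S2': {'a0': [(1.0, 'S0', 0.0)],
           'a1': [(0.7, 'S1', 0.0), (0.3, 'S0', -1.0)]},
}

def q_value(V, transitions, gamma):
    # Expected return of one action: sum over its stochastic outcomes.
    return sum(p * (r + gamma * V[s2]) for p, s2, r in transitions)

def value_iteration(mdp, gamma=0.9, tol=1e-8):
    # Repeatedly apply the Bellman optimality update until values settle.
    V = {s: 0.0 for s in mdp}
    while True:
        delta = 0.0
        for s, actions in mdp.items():
            best = max(q_value(V, t, gamma) for t in actions.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(mdp)
# The greedy policy reads the best action off the converged values.
policy = {s: max(actions, key=lambda a: q_value(V, actions[a], 0.9))
          for s, actions in mdp.items()}
```

Under these placeholder probabilities the converged policy takes the risky route: a1 from S2 to reach S1, then a0 from S1 to collect the +5.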
Value iteration is rarely used in practice. Instead we use Q-learning to experimentally explore states; essentially we attempt to partially solve the above Bellman equations. For a more complete treatment of value iteration, see the book linked above.
In my opinion, it is much easier, and much more helpful, to see Q-Learning in action than it is to pore over the dense and confusing mathematical notation above. Q-Learning is actually wonderfully intuitive when you take a step back from the math.
# Homework and bake-off: Word relatedness
```
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2021"
```
## Contents
1. [Overview](#Overview)
1. [Set-up](#Set-up)
1. [Development dataset](#Development-dataset)
1. [Vocabulary](#Vocabulary)
1. [Score distribution](#Score-distribution)
1. [Repeated pairs](#Repeated-pairs)
1. [Evaluation](#Evaluation)
1. [Error analysis](#Error-analysis)
1. [Homework questions](#Homework-questions)
1. [PPMI as a baseline [0.5 points]](#PPMI-as-a-baseline-[0.5-points])
1. [Gigaword with LSA at different dimensions [0.5 points]](#Gigaword-with-LSA-at-different-dimensions-[0.5-points])
1. [t-test reweighting [2 points]](#t-test-reweighting-[2-points])
1. [Pooled BERT representations [1 point]](#Pooled-BERT-representations-[1-point])
1. [Learned distance functions [2 points]](#Learned-distance-functions-[2-points])
1. [Your original system [3 points]](#Your-original-system-[3-points])
1. [Bake-off [1 point]](#Bake-off-[1-point])
1. [Submission Instruction](#Submission-Instruction)
## Overview
Word similarity and relatedness datasets have long been used to evaluate distributed representations. This notebook provides code for conducting such analyses with a new word relatedness dataset. It consists of word pairs, each with an associated human-annotated relatedness score.
The evaluation metric for each dataset is the [Spearman correlation coefficient $\rho$](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient) between the annotated scores and your distances, as is standard in the literature.
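As a quick reminder of how this metric behaves: Spearman's $\rho$ depends only on ranks, so any order-preserving transformation of the predictions leaves it unchanged. A small illustration with made-up numbers:

```python
import numpy as np
from scipy.stats import spearmanr

gold = np.array([0.1, 0.4, 0.2, 0.9, 0.7])     # made-up gold scores
pred = np.array([1.0, 2.0, 1.5, 5.0, 3.0])     # same ordering as gold

rho, _ = spearmanr(gold, pred)
rho_logged, _ = spearmanr(gold, np.log(pred))  # monotone transform
# Both correlations equal 1.0: only the ranks matter.
```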
This homework ([questions at the bottom of this notebook](#Homework-questions)) asks you to write code that uses the count matrices in `data/vsmdata` to create and evaluate some baseline models. The final question asks you to create your own original system for this task, using any data you wish. This accounts for 9 of the 10 points for this assignment.
For the associated bake-off, we will distribute a new dataset, and you will evaluate your original system (no additional training or tuning allowed!) on that dataset and submit your predictions. Systems that enter will receive the additional homework point, and systems that achieve the top score will receive an additional 0.5 points.
## Set-up
```
from collections import defaultdict
import csv
import itertools
import numpy as np
import os
import pandas as pd
import random
from scipy.stats import spearmanr
import vsm
import utils
utils.fix_random_seeds()
VSM_HOME = os.path.join('data', 'vsmdata')
DATA_HOME = os.path.join('data', 'wordrelatedness')
```
## Development dataset
You can use the development dataset freely, since our bake-off evaluations involve a new test set.
```
dev_df = pd.read_csv(
os.path.join(DATA_HOME, "cs224u-wordrelatedness-dev.csv"))
```
The dataset consists of word pairs with scores:
```
dev_df.head()
```
This gives the number of word pairs in the data:
```
dev_df.shape[0]
```
The test set will contain 1500 word pairs with scores of the same type. No word pair in the development set appears in the test set, but some of the individual words are repeated in the test set.
### Vocabulary
The full vocabulary in the dataframe can be extracted as follows:
```
dev_vocab = set(dev_df.word1.values) | set(dev_df.word2.values)
len(dev_vocab)
```
The vocabulary for the bake-off test is different – it is partly overlapping with the above. If you want to be sure ahead of time that your system has a representation for every word in the dev and test sets, then you can check against the vocabularies of any of the VSMs in `data/vsmdata` (which all have the same vocabulary). For example:
```
task_index = pd.read_csv(
os.path.join(VSM_HOME, 'yelp_window5-scaled.csv.gz'),
usecols=[0], index_col=0)
full_task_vocab = list(task_index.index)
len(full_task_vocab)
```
If you can process every one of those words, then you are all set. Alternatively, you can wait to see the test set and make system adjustments to ensure that you can process all those words. This is fine as long as you are not tuning your predictions.
### Score distribution
All the scores fall in $[0, 1]$, and the dataset skews towards words with low scores, meaning low relatedness:
```
ax = dev_df.plot.hist().set_xlabel("Relatedness score")
```
### Repeated pairs
The development data has some word pairs with multiple distinct scores in it. Here we create a `pd.Series` that contains these word pairs:
```
repeats = dev_df.groupby(['word1', 'word2']).apply(lambda x: x.score.var())
repeats = repeats[repeats > 0].sort_values(ascending=False)
repeats.name = 'score variance'
repeats.shape[0]
```
The `pd.Series` is sorted with the highest variance items at the top:
```
repeats.head()
```
Since this is development data, it is up to you how you want to handle these repeats. The test set has no repeated pairs in it.
## Evaluation
Our evaluation function is `vsm.word_relatedness_evaluation`. Its arguments:
1. A relatedness dataset `pd.DataFrame` – e.g., `dev_df` as given above.
1. A VSM `pd.DataFrame` – e.g., `giga5` or some transformation thereof, or a GloVe embedding space, or something you have created on your own. The function checks that you can supply a representation for every word in `dev_df` and raises an exception if you can't.
1. Optionally a `distfunc` argument, which defaults to `vsm.cosine`.
The function returns a tuple:
1. A copy of `dev_df` with a new column giving your predictions.
1. The Spearman $\rho$ value (our primary score).
Important note: Internally, `vsm.word_relatedness_evaluation` uses `-distfunc(x1, x2)` as its score, where `x1` and `x2` are vector representations of words. This is because the scores in our data are _positive_ relatedness scores, whereas we are assuming that `distfunc` is a _distance_ function.
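To make the sign convention concrete: cosine *distance* is small for related vectors, while the gold *scores* are large for related pairs, so negating the distance restores agreement in rank order. A minimal sketch, using scipy's cosine distance as a stand-in for `vsm.cosine`:

```python
import numpy as np
from scipy.spatial.distance import cosine
from scipy.stats import spearmanr

# Toy vectors: the first pair is nearly parallel (related), the second
# pair is orthogonal (unrelated).
pairs = [(np.array([1.0, 0.0]), np.array([0.9, 0.1])),
         (np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
gold_scores = [0.9, 0.1]   # high score = more related

preds = [-cosine(x1, x2) for x1, x2 in pairs]  # negated distances as scores
rho, _ = spearmanr(gold_scores, preds)         # perfect rank agreement
```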
Here's a simple illustration using one of our count matrices:
```
count_df = pd.read_csv(
os.path.join(VSM_HOME, "giga_window5-scaled.csv.gz"), index_col=0)
count_pred_df, count_rho = vsm.word_relatedness_evaluation(dev_df, count_df)
count_rho
count_pred_df.head()
```
It's instructive to compare this against a truly random system, which we can create by simply having a custom distance function that returns a random number in [0, 1] for each example, making no use of the VSM itself:
```
def random_scorer(x1, x2):
"""`x1` and `x2` are vectors, to conform to the requirements
of `vsm.word_relatedness_evaluation`, but this function just
returns a random number in [0, 1]."""
return random.random()
random_pred_df, random_rho = vsm.word_relatedness_evaluation(
dev_df, count_df, distfunc=random_scorer)
random_rho
```
This is truly a baseline system!
## Error analysis
For error analysis, we can look at the words with the largest delta between the gold score and the distance value in our VSM. We do these comparisons based on ranks, just as with our primary metric (Spearman $\rho$), and we normalize both rankings so that they have a comparable number of levels.
```
def error_analysis(pred_df):
pred_df = pred_df.copy()
pred_df['relatedness_rank'] = _normalized_ranking(pred_df.prediction)
pred_df['score_rank'] = _normalized_ranking(pred_df.score)
pred_df['error'] = abs(pred_df['relatedness_rank'] - pred_df['score_rank'])
return pred_df.sort_values('error')
def _normalized_ranking(series):
ranks = series.rank(method='dense')
return ranks / ranks.sum()
```
Best predictions:
```
error_analysis(count_pred_df).head()
```
Worst predictions:
```
error_analysis(count_pred_df).tail()
```
## Homework questions
Please embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)
### PPMI as a baseline [0.5 points]
The insight behind PPMI is a recurring theme in word representation learning, so it is a natural baseline for our task. This question asks you to write code for conducting such experiments.
Your task: write a function called `run_giga_ppmi_baseline` that does the following:
1. Reads the Gigaword count matrix with a window of 20 and a flat scaling function into a `pd.DataFrame`, as is done in the VSM notebooks. The file is `data/vsmdata/giga_window20-flat.csv.gz`, and the VSM notebooks provide examples of the needed code.
1. Reweights this count matrix with PPMI.
1. Evaluates this reweighted matrix using `vsm.word_relatedness_evaluation` on `dev_df` as defined above, with `distfunc` set to the default of `vsm.cosine`.
1. Returns the return value of this call to `vsm.word_relatedness_evaluation`.
The goal of this question is to help you get more familiar with the code in `vsm` and the function `vsm.word_relatedness_evaluation`.
The function `test_run_giga_ppmi_baseline` can be used to test that you've implemented this specification correctly.
```
def run_giga_ppmi_baseline():
pass
##### YOUR CODE HERE
def test_run_giga_ppmi_baseline(func):
"""`func` should be `run_giga_ppmi_baseline"""
pred_df, rho = func()
rho = round(rho, 3)
expected = 0.586
assert rho == expected, \
"Expected rho of {}; got {}".format(expected, rho)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_giga_ppmi_baseline(run_giga_ppmi_baseline)
```
### Gigaword with LSA at different dimensions [0.5 points]
We might expect PPMI and LSA to form a solid pipeline that combines the strengths of PPMI with those of dimensionality reduction. However, LSA has a hyper-parameter $k$ – the dimensionality of the final representations – that will impact performance. This problem asks you to create code that will help you explore this approach.
Your task: write a wrapper function `run_ppmi_lsa_pipeline` that does the following:
1. Takes as input a count `pd.DataFrame` and an LSA parameter `k`.
1. Reweights the count matrix with PPMI.
1. Applies LSA with dimensionality `k`.
1. Evaluates this reweighted matrix using `vsm.word_relatedness_evaluation` with `dev_df` as defined above. The return value of `run_ppmi_lsa_pipeline` should be the return value of this call to `vsm.word_relatedness_evaluation`.
The goal of this question is to help you get a feel for how LSA can contribute to this problem.
The function `test_run_ppmi_lsa_pipeline` will test your function on the count matrix in `data/vsmdata/giga_window20-flat.csv.gz`.
```
def run_ppmi_lsa_pipeline(count_df, k):
pass
##### YOUR CODE HERE
def test_run_ppmi_lsa_pipeline(func):
"""`func` should be `run_ppmi_lsa_pipeline`"""
giga20 = pd.read_csv(
os.path.join(VSM_HOME, "giga_window20-flat.csv.gz"), index_col=0)
pred_df, rho = func(giga20, k=10)
rho = round(rho, 3)
expected = 0.545
assert rho == expected,\
"Expected rho of {}; got {}".format(expected, rho)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_ppmi_lsa_pipeline(run_ppmi_lsa_pipeline)
```
### t-test reweighting [2 points]
The t-test statistic can be thought of as a reweighting scheme. For a count matrix $X$, row index $i$, and column index $j$:
$$\textbf{ttest}(X, i, j) =
\frac{
P(X, i, j) - \big(P(X, i, *)P(X, *, j)\big)
}{
\sqrt{(P(X, i, *)P(X, *, j))}
}$$
where $P(X, i, j)$ is $X_{ij}$ divided by the total values in $X$, $P(X, i, *)$ is the sum of the values in row $i$ of $X$ divided by the total values in $X$, and $P(X, *, j)$ is the sum of the values in column $j$ of $X$ divided by the total values in $X$.
Your task: implement this reweighting scheme. You can use `test_ttest_implementation` below to check that your implementation is correct. You do not need to use this for any evaluations, though we hope you will be curious enough to do so!
```
def ttest(df):
pass
##### YOUR CODE HERE
def test_ttest_implementation(func):
"""`func` should be `ttest`"""
X = pd.DataFrame([
[1., 4., 3., 0.],
[2., 43., 7., 12.],
[5., 6., 19., 0.],
[1., 11., 1., 4.]])
actual = np.array([
[ 0.04655, -0.01337, 0.06346, -0.09507],
[-0.11835, 0.13406, -0.20846, 0.10609],
[ 0.16621, -0.23129, 0.38123, -0.18411],
[-0.0231 , 0.0563 , -0.14549, 0.10394]])
predicted = func(X)
assert np.array_equal(predicted.round(5), actual), \
"Your ttest result is\n{}".format(predicted.round(5))
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_ttest_implementation(ttest)
```
### Pooled BERT representations [1 point]
The notebook [vsm_04_contextualreps.ipynb](vsm_04_contextualreps.ipynb) explores methods for deriving static vector representations of words from the contextual representations given by models like BERT and RoBERTa. The methods are due to [Bommasani et al. 2020](https://www.aclweb.org/anthology/2020.acl-main.431). The simplest of these methods involves processing the words as independent texts and pooling the sub-word representations that result, using a function like mean or max.
Your task: write a function `evaluate_pooled_bert` that will enable exploration of this approach. The function should do the following:
1. Take as its arguments (a) a word relatedness `pd.DataFrame` `rel_df` (e.g., `dev_df`), (b) a `layer` index (see below), and (c) a `pool_func` value (see below).
1. Set up a BERT tokenizer and BERT model based on `'bert-base-uncased'`.
1. Use `vsm.create_subword_pooling_vsm` to create a VSM (a `pd.DataFrame`) with the user's values for `layer` and `pool_func`.
1. Return the return value of `vsm.word_relatedness_evaluation` using this new VSM, evaluated on `rel_df` with `distfunc` set to its default value.
The function `vsm.create_subword_pooling_vsm` does the heavy-lifting. Your task is really just to put these pieces together. The result will be the start of a flexible framework for seeing how these methods do on our task.
The function `test_evaluate_pooled_bert` can help you obtain the design we are seeking.
```
from transformers import BertModel, BertTokenizer
def evaluate_pooled_bert(rel_df, layer, pool_func):
bert_weights_name = 'bert-base-uncased'
# Initialize a BERT tokenizer and BERT model based on
# `bert_weights_name`:
##### YOUR CODE HERE
# Get the vocabulary from `rel_df`:
##### YOUR CODE HERE
# Use `vsm.create_subword_pooling_vsm` with the user's arguments:
##### YOUR CODE HERE
    # Return the results of the relatedness evaluation:
##### YOUR CODE HERE
def test_evaluate_pooled_bert(func):
import torch
rel_df = pd.DataFrame([
{'word1': 'porcupine', 'word2': 'capybara', 'score': 0.6},
{'word1': 'antelope', 'word2': 'springbok', 'score': 0.5},
{'word1': 'llama', 'word2': 'camel', 'score': 0.4},
{'word1': 'movie', 'word2': 'play', 'score': 0.3}])
layer = 2
pool_func = vsm.max_pooling
pred_df, rho = evaluate_pooled_bert(rel_df, layer, pool_func)
rho = round(rho, 2)
expected_rho = 0.40
assert rho == expected_rho, \
"Expected rho={}; got rho={}".format(expected_rho, rho)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_evaluate_pooled_bert(evaluate_pooled_bert)
```
### Learned distance functions [2 points]
The presentation thus far leads one to assume that the `distfunc` argument used in the experiments will be a standard vector distance function like `vsm.cosine` or `vsm.euclidean`. However, the framework itself simply requires that this function map two fixed-dimensional vectors to a real number. This opens up a world of possibilities. This question asks you to dip a toe in these waters.
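For instance, any callable mapping two fixed-dimensional vectors to a real number qualifies; a minimal sketch (the name `neg_dot_distfunc` is illustrative, not part of the framework):

```python
import numpy as np

def neg_dot_distfunc(u, v):
    # A "distance" that is just a negated dot product: larger dot
    # products (more related) yield smaller values (closer).
    return -float(np.dot(u, v))
```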
Your task: write a function `run_knn_score_model` for models in this class. The function should:
1. Take as its arguments (a) a VSM dataframe `vsm_df`, (b) a relatedness dataset (e.g., `dev_df`), and (c) a `test_size` value between 0.0 and 1.0 that can be passed directly to `train_test_split` (see below).
1. Create a feature matrix `X`: each word pair in `dev_df` should be represented by the concatenation of the vectors for word1 and word2 from `vsm_df`.
1. Create a score vector `y`, which is just the `score` column in `dev_df`.
1. Split the dataset `(X, y)` into train and test portions using [sklearn.model_selection.train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).
1. Train an [sklearn.neighbors.KNeighborsRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor) model on the train split from step 4, with default hyperparameters.
1. Return the value of the `score` method of the trained `KNeighborsRegressor` model on the test split from step 4.
The functions `test_knn_feature_matrix` and `test_knn_represent` will help you test the crucial representational aspects of this.
Note: if you decide to apply this approach to our task as part of an original system, recall that `vsm.word_relatedness_evaluation` returns `-d`, where `d` is the value computed by `distfunc`, since it assumes that `distfunc` computes a distance of some kind rather than a relatedness/similarity value. Since most regression models will return positive scores for positive associations, you will probably want to undo this by having your `distfunc` return the negative of its value.
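One way to honor that sign convention is to wrap the trained regressor so its prediction is pre-negated; a sketch under those assumptions (`make_model_distfunc` is an illustrative name, and `model` is any object with an sklearn-style `predict`):

```python
import numpy as np

def make_model_distfunc(model):
    def distfunc(u, v):
        # Represent the pair as one concatenated feature row, then
        # negate the predicted relatedness so the framework's own
        # negation restores the original sign.
        pair = np.concatenate([u, v]).reshape(1, -1)
        return -float(model.predict(pair)[0])
    return distfunc
```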
```
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
def run_knn_score_model(vsm_df, dev_df, test_size=0.20):
pass
# Complete `knn_feature_matrix` for this step.
##### YOUR CODE HERE
# Get the values of the 'score' column in `dev_df`
# and store them in a list or array `y`.
##### YOUR CODE HERE
# Use `train_test_split` to split (X, y) into train and
    # test portions, with `test_size` as the test size.
##### YOUR CODE HERE
# Instantiate a `KNeighborsRegressor` with default arguments:
##### YOUR CODE HERE
# Fit the model on the training data:
##### YOUR CODE HERE
# Return the value of `score` for your model on the test split
# you created above:
##### YOUR CODE HERE
def knn_feature_matrix(vsm_df, rel_df):
pass
# Complete `knn_represent` and use it to create a feature
# matrix `np.array`:
##### YOUR CODE HERE
def knn_represent(word1, word2, vsm_df):
pass
# Use `vsm_df` to get vectors for `word1` and `word2`
# and concatenate them into a single vector:
##### YOUR CODE HERE
def test_knn_feature_matrix(func):
rel_df = pd.DataFrame([
{'word1': 'w1', 'word2': 'w2', 'score': 0.1},
{'word1': 'w1', 'word2': 'w3', 'score': 0.2}])
vsm_df = pd.DataFrame([
[1, 2, 3.],
[4, 5, 6.],
[7, 8, 9.]], index=['w1', 'w2', 'w3'])
expected = np.array([
[1, 2, 3, 4, 5, 6.],
[1, 2, 3, 7, 8, 9.]])
result = func(vsm_df, rel_df)
assert np.array_equal(result, expected), \
"Your `knn_feature_matrix` returns: {}\nWe expect: {}".format(
result, expected)
def test_knn_represent(func):
vsm_df = pd.DataFrame([
[1, 2, 3.],
[4, 5, 6.],
[7, 8, 9.]], index=['w1', 'w2', 'w3'])
result = func('w1', 'w3', vsm_df)
expected = np.array([1, 2, 3, 7, 8, 9.])
assert np.array_equal(result, expected), \
"Your `knn_represent` returns: {}\nWe expect: {}".format(
result, expected)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_knn_represent(knn_represent)
test_knn_feature_matrix(knn_feature_matrix)
```
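Under the design above, the two representational helpers might be completed as follows (a sketch, not the required solution; the `_sketch` names are illustrative):

```python
import numpy as np
import pandas as pd

def knn_represent_sketch(word1, word2, vsm_df):
    # Look up each word's row vector in the VSM and concatenate them.
    return np.concatenate([vsm_df.loc[word1].values, vsm_df.loc[word2].values])

def knn_feature_matrix_sketch(vsm_df, rel_df):
    # One row per word pair: [vec(word1); vec(word2)].
    return np.array([
        knn_represent_sketch(w1, w2, vsm_df)
        for w1, w2 in zip(rel_df["word1"], rel_df["word2"])])
```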
### Your original system [3 points]
This question asks you to design your own model. You can of course include steps made above (ideally, the above questions informed your system design!), but your model should not be literally identical to any of the above models. Other ideas: retrofitting, autoencoders, GloVe, subword modeling, ...
Requirements:
1. Your system must work with `vsm.word_relatedness_evaluation`. You are free to specify the VSM and the value of `distfunc`.
1. Your code must be self-contained, so that we can work with your model directly in your homework submission notebook. If your model depends on external data or other resources, please submit a ZIP archive containing these resources along with your submission.
In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
```
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# STOP COMMENT: Please do not remove this comment.
```
## Bake-off [1 point]
For the bake-off, you simply need to evaluate your original system on the file
`vsmdata/cs224u-wordrelatedness-bakeoff-test-unlabeled.csv`
This contains only word pairs (no scores), so `vsm.word_relatedness_evaluation` will simply make predictions without doing any scoring. Use that function to make predictions with your original system, store the resulting `pred_df` to a file, and then upload the file as your bake-off submission.
The following function should be used to conduct this evaluation:
```
def create_bakeoff_submission(
vsm_df,
distfunc,
output_filename="cs224u-wordrelatedness-bakeoff-entry.csv"):
test_df = pd.read_csv(
os.path.join(DATA_HOME, "cs224u-wordrelatedness-test-unlabeled.csv"))
pred_df, _ = vsm.word_relatedness_evaluation(test_df, vsm_df, distfunc=distfunc)
pred_df.to_csv(output_filename)
```
For example, if `count_df` were the VSM for my system, and I wanted my distance function to be `vsm.euclidean`, I would do
```
create_bakeoff_submission(count_df, vsm.euclidean)
```
This creates a file `cs224u-wordrelatedness-bakeoff-entry.csv` in the current directory. That file should be uploaded as-is. Please do not change its name.
Only one upload per team is permitted, and you should do no tuning of your system based on what you see in `pred_df` – you should not study that file in any way, beyond perhaps checking that it contains what you expected it to contain. The upload function will do some additional checking to ensure that your file is well-formed.
People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.
Late entries will be accepted, but they cannot earn the extra 0.5 points.
## Submission Instructions
Submit the following files to the Gradescope submission:
- Please do not change the file names described below
- `hw_wordrelatedness.ipynb` (this notebook)
- `cs224u-wordrelatedness-bakeoff-entry.csv` (bake-off output)
# Analysis of Microglia data
```
# Setup
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sccoda.util import comp_ana as mod
from sccoda.util import cell_composition_data as dat
from sccoda.model import other_models as om
```
Load and format data:
4 control samples, 2 samples for other conditions each; 8 cell types
```
cell_counts = pd.read_csv("../../sccoda_benchmark_data/cell_count_microglia_AD_WT_location.csv", index_col=0)
# Sort values such that wild type is considered the base category
print(cell_counts)
data_cer = dat.from_pandas(cell_counts.loc[cell_counts["location"] == "cerebellum"],
covariate_columns=["mouse_type", "location", "replicate"])
data_cor = dat.from_pandas(cell_counts.loc[cell_counts["location"] == "cortex"],
covariate_columns=["mouse_type", "location", "replicate"])
```
Plot data:
```
# Count data to ratios
counts = cell_counts.iloc[:, 3:]
rowsum = np.sum(counts, axis=1)
ratios = counts.div(rowsum, axis=0)
ratios["mouse_type"] = cell_counts["mouse_type"]
ratios["location"] = cell_counts["location"]
# Make boxplots
fig, ax = plt.subplots(figsize=(12,5), ncols=3)
df = pd.melt(ratios, id_vars=['location', "mouse_type"], value_vars=ratios.columns[:3])
df = df.sort_values(["location", "mouse_type"], ascending=[True, False])
print(df)
sns.set_context('notebook')
sns.set_style('ticks')
for i in range(3):
d = sns.boxplot(x='location', y = 'value', hue="mouse_type",
data=df.loc[df["variable"]==f"microglia {i+1}"], fliersize=1,
palette='Blues', ax=ax[i])
d.set_ylabel('Proportion')
loc, labels = plt.xticks()
d.set_xlabel('Location')
d.set_title(f"microglia {i+1}")
plt.legend(bbox_to_anchor=(1.33, 1), borderaxespad=0., title="Condition")
# plt.savefig(plot_path + "haber_boxes_blue.svg", format="svg", bbox_inches="tight")
# plt.savefig(plot_path + "haber_boxes_blue.png", format="png", bbox_inches="tight")
plt.show()
```
Analyze cerebellum data:
Apply scCODA for every cell type set as the reference.
We see no credible effects at the 0.2 FDR level.
```
# Use this formula to make a wild type -> treated comparison, not the other way
formula = "C(mouse_type, levels=['WT', 'AD'])"
# cerebellum
res_cer = []
effects_cer = pd.DataFrame(index=data_cer.var.index.copy(),
columns=data_cer.var.index.copy())
effects_cer.index.rename("cell type", inplace=True)
effects_cer.columns.rename("reference", inplace=True)
for ct in data_cer.var.index:
print(f"Reference: {ct}")
model = mod.CompositionalAnalysis(data=data_cer, formula=formula, reference_cell_type=ct)
results = model.sample_hmc()
_, effect_df = results.summary_prepare(est_fdr=0.2)
res_cer.append(results)
effects_cer[ct] = effect_df.loc[:, "Final Parameter"].array
# Column: Reference category
# Row: Effect
print(effects_cer)
for x in res_cer:
print(x.summary_extended())
```
Now with cortex data:
We see credible effects on all cell types, depending on the reference.
Only the effects on microglia 2 and 3 when using the other one as the reference are not considered credible at the 0.2 FDR level.
```
# cortex
res_cor = []
effects_cor = pd.DataFrame(index=data_cor.var.index.copy(),
columns=data_cor.var.index.copy())
effects_cor.index.rename("cell type", inplace=True)
effects_cor.columns.rename("reference", inplace=True)
for ct in data_cor.var.index:
print(f"Reference: {ct}")
model = mod.CompositionalAnalysis(data=data_cor, formula=formula, reference_cell_type=ct)
results = model.sample_hmc()
_, effect_df = results.summary_prepare(est_fdr=0.2)
res_cor.append(results)
effects_cor[ct] = effect_df.loc[:, "Final Parameter"].array
# Column: Reference category
# Row: Effect
effects_cor.index.name = "cell type"
effects_cor.columns.name = "reference"
print(effects_cor)
for x in res_cor:
print(x.summary_extended())
```
For validation, we apply ANCOM to the same dataset.
We again see no changes for the cerebellum data, and changes in all microglia types for the cortex data.
```
cer_ancom = data_cer.copy()
cer_ancom.obs = cer_ancom.obs.rename(columns={"mouse_type": "x_0"})
ancom_cer = om.AncomModel(cer_ancom)
ancom_cer.fit_model(alpha=0.2)
print(ancom_cer.ancom_out)
cor_ancom = data_cor.copy()
cor_ancom.obs = cor_ancom.obs.rename(columns={"mouse_type": "x_0"})
cor_ancom.X = cor_ancom.X + 0.5
ancom_cor = om.AncomModel(cor_ancom)
ancom_cor.fit_model(alpha=0.2)
print(ancom_cor.ancom_out)
```
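The `+ 0.5` added to the cortex counts above is a pseudocount, which keeps zero counts from breaking log-ratio-based tests like ANCOM; a minimal numpy sketch of the idea (`to_proportions` is an illustrative helper, not part of scCODA):

```python
import numpy as np

def to_proportions(counts, pseudocount=0.5):
    # Add a pseudocount so zero entries stay positive, then normalize
    # each sample (row) into proportions that sum to 1.
    counts = np.asarray(counts, dtype=float) + pseudocount
    return counts / counts.sum(axis=1, keepdims=True)
```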
# A project that shows the law of large numbers
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
```
# **Generating a population of random numbers**
```
# Creating 230,000 random numbers in a 1/f distribution
randint = np.logspace(np.log10(0.001),np.log10(100),230000)
fdist = np.zeros(230000)
for i in range(len(randint)):
fdist[i] = 1/randint[i]
if fdist[i]< 0:
print(fdist[i])
fdist[:40]
# Only taking every 1000th entry in fdist
fdist1000 = fdist[0::1000]
fdist1000[:40]
#Sorting the array in descending order
fdist1000[::-1].sort()
fdist1000[1]
#Creating the index
index = np.zeros(len(fdist1000))
for i in range(len(fdist1000)):
index[i] = i+1
index[:40]
# Graphing the random numbers
plt.plot(index,fdist1000, 'go--')
plt.xlabel('Sample')
plt.ylabel('Data Value')
plt.show()
# Shuffling fdist1000
shuffledist = fdist1000.copy()
np.random.shuffle(shuffledist)
plt.plot(index,shuffledist, 'go')
plt.xlabel('Sample')
plt.ylabel('Data Value')
plt.show()
```
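As an aside, the element-wise loop above can be replaced by vectorized numpy operations; a sketch producing the same arrays:

```python
import numpy as np

randint = np.logspace(np.log10(0.001), np.log10(100), 230000)
fdist = 1.0 / randint                      # reciprocal of every entry, no loop
fdist1000 = np.sort(fdist[::1000])[::-1]   # every 1000th entry, sorted descending
```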
# **Monte Carlo sampling**
```
### Randomly selecting 50 of the 230000 data points. Finding the mean. Repeat this 500 times. ###
mean = np.zeros(500)
for i in range(len(mean)):
fifty = np.random.choice(fdist, size=50, replace=False)
mean[i] = np.mean(fifty)
mean[:100]
# Calculating the real average
realmean = sum(fdist)/len(fdist)
print(realmean)
# Creating the index for mean to plot
meanindex = np.zeros(len(mean))
for i in range(len(mean)):
meanindex[i] = i+1
meanindex[:40]
# Plotting the Monte-Carlo sampling
plt.plot(meanindex,mean, 'ko',markerfacecolor='c' , label = 'Sample Means')
plt.axhline(y=realmean, color='r', linewidth='3', linestyle='-', label = 'True Mean')
plt.xlabel('Sample Number')
plt.ylabel('Mean Value')
plt.legend()
plt.show()
```
# **Cumulative averaging**
```
# Cumulative average of all sample means
cumemean = np.zeros(500)
cumesum = np.zeros(500)
length = len(mean)
for i in range(length):
cumesum[i] = np.sum(mean[:i+1])
cumemean[i] = cumesum[i] / (i+1)
# Creating the index for cumulative mean to plot
cumemeanindex = np.zeros(len(cumemean))
for i in range(len(cumemean)):
cumemeanindex[i] = i+1
cumemeanindex[-1]
# Plotting of the Cumulative Average
plt.plot(cumemeanindex,cumemean, 'bo',markerfacecolor='w' , label = 'Cumulative Averages')
plt.axhline(y=realmean, color='r', linewidth='3', linestyle='-', label = 'True Mean')
plt.xlabel('Sample Number')
plt.ylabel('Mean Value')
plt.legend()
plt.show()
### Computing the square divergence for each point. Repeating 100 times ###
dfdivergence = pd.DataFrame(cumemean, columns = ['Original Cumulative Mean Run 1'])
# Finding the square divergence from mean (realmean) for the first run
divergence1 = (dfdivergence["Original Cumulative Mean Run 1"] - realmean)**2
divergence1[:20]
dfdivergence['Square Divergence 1'] = divergence1
dfdivergence
dfmeans = pd.DataFrame()
for i in range(99):
dfmeans[i] = np.zeros(500)
# (Sampling 50 random point 500 times) for 100 runs.
for i in range(99):
for j in range(500):
dffifty = np.random.choice(fdist, size=50, replace=False)
tempmean = np.mean(dffifty)
dfmeans.at[j, i] = tempmean
dfmeans[:5]
dfcumemean = pd.DataFrame()
dfcumesum = pd.DataFrame()
dfcumesum = dfmeans.cumsum(axis=0) # Finding the cumulative sum down each column
dfcumesum
for i in range(99):
for j in range(500):
dfcumemean.at[j,i] = dfcumesum.at[j,i]/(j+1) # Finding the cumulative mean down each column
dfcumemean
for i in range(99):
dfdivergence['Square Divergence' + ' ' + str(i+2)] = (dfcumemean[i] - realmean)**2 # finding the divergence down each column
dfdivergence[:4]
dfdivergence = dfdivergence.drop(['Original Cumulative Mean Run 1'], axis=1)
dfdivergence[:4]
dfdivergence.plot(figsize = (12,7), legend=False) # This plot shows the divergence from the real mean for 100 runs of cumulative averaging.
plt.ylim([-10,500])
plt.ylabel('Square Divergence from True Mean')
plt.xlabel('Sample Number')
plt.show()
```
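The explicit loop computing `cumesum` and `cumemean` can be collapsed into one vectorized line; a sketch (using random stand-ins for the sample means):

```python
import numpy as np

mean = np.random.default_rng(0).random(500)            # stand-in sample means
cumemean = np.cumsum(mean) / np.arange(1, len(mean) + 1)
```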
# GLM: Robust Regression with Outlier Detection
**A minimal reproducible example of Robust Regression with Outlier Detection using the Hogg 2010 Signal vs Noise method.**
+ This is a complementary approach to the Student-T robust regression illustrated in Thomas Wiecki's notebook in the [PyMC3 documentation](http://pymc-devs.github.io/pymc3/GLM-robust/); that approach is also compared here.
+ This model returns a robust estimate of linear coefficients and an indication of which datapoints (if any) are outliers.
+ The likelihood evaluation is essentially a copy of eqn 17 in "Data analysis recipes: Fitting a model to data" - [Hogg 2010](http://arxiv.org/abs/1008.4686).
+ The model is adapted specifically from Jake Vanderplas' [implementation](http://www.astroml.org/book_figures/chapter8/fig_outlier_rejection.html) (3rd model tested).
+ The dataset is tiny and hardcoded into this Notebook. It contains errors in both the x and y, but we will deal here with only errors in y.
**Note:**
+ Python 3.4 project using latest available [PyMC3](https://github.com/pymc-devs/pymc3)
+ Developed using [ContinuumIO Anaconda](https://www.continuum.io/downloads) distribution on a Macbook Pro 3GHz i7, 16GB RAM, OSX 10.10.5.
+ During development I've found that 3 data points are always indicated as outliers, but the remaining ordering of datapoints by decreasing outlier-hood is slightly unstable between runs: the posterior surface appears to have a small number of solutions with similar probability.
+ Finally, if runs become unstable or Theano throws weird errors, try clearing the cache `$> theano-cache clear` and rerunning the notebook.
**Package Requirements (shown as a conda-env YAML):**
```
$> less conda_env_pymc3_examples.yml
name: pymc3_examples
channels:
- defaults
dependencies:
- python=3.4
- ipython
- ipython-notebook
- ipython-qtconsole
- numpy
- scipy
- matplotlib
- pandas
- seaborn
- patsy
- pip
$> conda env create --file conda_env_pymc3_examples.yml
$> source activate pymc3_examples
$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3
```
## Setup
```
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import optimize
import pymc3 as pm
import theano as thno
import theano.tensor as T
# configure some basic options
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.notebook_repr_html', True)
plt.rcParams['figure.figsize'] = 12, 8
np.random.seed(0)
```
### Load and Prepare Data
We'll use the Hogg 2010 data available at https://github.com/astroML/astroML/blob/master/astroML/datasets/hogg2010test.py
It's a very small dataset so for convenience, it's hardcoded below
```
#### cut & pasted directly from the fetch_hogg2010test() function
## identical to the original dataset as hardcoded in the Hogg 2010 paper
dfhogg = pd.DataFrame(np.array([[1, 201, 592, 61, 9, -0.84],
[2, 244, 401, 25, 4, 0.31],
[3, 47, 583, 38, 11, 0.64],
[4, 287, 402, 15, 7, -0.27],
[5, 203, 495, 21, 5, -0.33],
[6, 58, 173, 15, 9, 0.67],
[7, 210, 479, 27, 4, -0.02],
[8, 202, 504, 14, 4, -0.05],
[9, 198, 510, 30, 11, -0.84],
[10, 158, 416, 16, 7, -0.69],
[11, 165, 393, 14, 5, 0.30],
[12, 201, 442, 25, 5, -0.46],
[13, 157, 317, 52, 5, -0.03],
[14, 131, 311, 16, 6, 0.50],
[15, 166, 400, 34, 6, 0.73],
[16, 160, 337, 31, 5, -0.52],
[17, 186, 423, 42, 9, 0.90],
[18, 125, 334, 26, 8, 0.40],
[19, 218, 533, 16, 6, -0.78],
[20, 146, 344, 22, 5, -0.56]]),
columns=['id','x','y','sigma_y','sigma_x','rho_xy'])
## for convenience zero-base the 'id' and use as index
dfhogg['id'] = dfhogg['id'] - 1
dfhogg.set_index('id', inplace=True)
## standardize (mean center and divide by 1 sd)
dfhoggs = (dfhogg[['x','y']] - dfhogg[['x','y']].mean(0)) / dfhogg[['x','y']].std(0)
dfhoggs['sigma_y'] = dfhogg['sigma_y'] / dfhogg['y'].std(0)
dfhoggs['sigma_x'] = dfhogg['sigma_x'] / dfhogg['x'].std(0)
## create xlims ylims for plotting
xlims = (dfhoggs['x'].min() - np.ptp(dfhoggs['x'])/5
,dfhoggs['x'].max() + np.ptp(dfhoggs['x'])/5)
ylims = (dfhoggs['y'].min() - np.ptp(dfhoggs['y'])/5
,dfhoggs['y'].max() + np.ptp(dfhoggs['y'])/5)
## scatterplot the standardized data
g = sns.FacetGrid(dfhoggs, size=8)
_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker="o", ls='')
_ = g.axes[0][0].set_ylim(ylims)
_ = g.axes[0][0].set_xlim(xlims)
plt.subplots_adjust(top=0.92)
_ = g.fig.suptitle('Scatterplot of Hogg 2010 dataset after standardization', fontsize=16)
```
**Observe**:
+ Even judging just by eye, you can see these datapoints mostly fall on / around a straight line with positive gradient
+ It looks like a few of the datapoints may be outliers from such a line
## Create Conventional OLS Model
The *linear model* is really simple and conventional:
$$\bf{y} = \beta^{T} \bf{X} + \bf{\sigma}$$
where:
$\beta$ = coefs = $\{1, \beta_{j \in X_{j}}\}$
$\sigma$ = the measured error in $y$ in the dataset `sigma_y`
### Define model
**NOTE:**
+ We're using a simple linear OLS model with Normally distributed priors so that it behaves like a ridge regression
```
with pm.Model() as mdl_ols:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=100)
b1 = pm.Normal('b1_slope', mu=0, sd=100)
## Define linear model
yest = b0 + b1 * dfhoggs['x']
## Use y error from dataset, convert into theano variable
sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],
dtype=thno.config.floatX), name='sigma_y')
## Define Normal likelihood
likelihood = pm.Normal('likelihood', mu=yest, sd=sigma_y, observed=dfhoggs['y'])
```
### Sample
```
with mdl_ols:
## take samples
traces_ols = pm.sample(2000, tune=1000)
```
### View Traces
**NOTE**: I'll 'burn' the traces to only retain the final 1000 samples
```
_ = pm.traceplot(traces_ols[-1000:], figsize=(12,len(traces_ols.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces_ols[-1000:]).iterrows()})
```
**NOTE:** We'll illustrate this OLS fit and compare to the datapoints in the final plot
---
---
## Create Robust Model: Student-T Method
I've added this brief section in order to directly compare the Student-T based method exampled in Thomas Wiecki's notebook in the [PyMC3 documentation](http://pymc-devs.github.io/pymc3/GLM-robust/)
Instead of using a Normal distribution for the likelihood, we use a Student-T, which has fatter tails. In theory this allows outliers to have a smaller mean square error in the likelihood, and thus have less influence on the regression estimation. This method does not produce inlier / outlier flags but is simpler and faster to run than the Signal Vs Noise model below, so a comparison seems worthwhile.
**Note:** we'll constrain the Student-T 'degrees of freedom' parameter `nu` to be an integer, but otherwise leave it as just another stochastic to be inferred: no need for prior knowledge.
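A quick numerical illustration (not part of the model) of why fatter tails help: compare log-densities at a point 5 sd from the mean for a standard Normal versus a Student-T with `nu = 1` (i.e. a Cauchy), computed directly from the closed-form log-pdfs:

```python
import numpy as np

x = 5.0
logp_normal = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)   # standard Normal log-pdf
logp_cauchy = -np.log(np.pi * (1 + x**2))             # Student-T nu=1 log-pdf
# The Cauchy assigns vastly more log-probability to the far-out point,
# so a Student-T likelihood is penalized much less for leaving an
# outlier off the regression line.
```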
### Define Model
```
with pm.Model() as mdl_studentt:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=100)
b1 = pm.Normal('b1_slope', mu=0, sd=100)
## Define linear model
yest = b0 + b1 * dfhoggs['x']
## Use y error from dataset, convert into theano variable
sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],
dtype=thno.config.floatX), name='sigma_y')
## define prior for Student T degrees of freedom
nu = pm.Uniform('nu', lower=1, upper=100)
## Define Student T likelihood
likelihood = pm.StudentT('likelihood', mu=yest, sd=sigma_y, nu=nu,
observed=dfhoggs['y'])
```
### Sample
```
with mdl_studentt:
## take samples
traces_studentt = pm.sample(2000, tune=1000)
```
#### View Traces
```
_ = pm.traceplot(traces_studentt[-1000:],
figsize=(12,len(traces_studentt.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces_studentt[-1000:]).iterrows()})
```
**Observe:**
+ Both parameters `b0` and `b1` show quite a skew to the right, possibly this is the action of a few samples regressing closer to the OLS estimate which is towards the left
+ The `nu` parameter seems very happy to stick at `nu = 1`, indicating that a fat-tailed Student-T likelihood has a better fit than a thin-tailed (Normal-like) Student-T likelihood.
+ The inference sampling also ran very quickly, almost as quickly as the conventional OLS
**NOTE:** We'll illustrate this Student-T fit and compare to the datapoints in the final plot
---
---
## Create Robust Model with Outliers: Hogg Method
Please read the paper (Hogg 2010) and Jake Vanderplas' code for more complete information about the modelling technique.
The general idea is to create a 'mixture' model whereby datapoints can be described by either the linear model (inliers) or a modified linear model with different mean and larger variance (outliers).
The likelihood is evaluated over a mixture of two likelihoods, one for 'inliers', one for 'outliers'. A Bernoulli distribution is used to randomly assign each of the N datapoints to either the inlier or outlier group, and we sample the model as usual to infer robust model parameters and inlier / outlier flags:
$$
\mathcal{logL} = \sum_{i}^{i=N} log \left[ \frac{(1 - B_{i})}{\sqrt{2 \pi \sigma_{in}^{2}}} exp \left( - \frac{(x_{i} - \mu_{in})^{2}}{2\sigma_{in}^{2}} \right) \right] + \sum_{i}^{i=N} log \left[ \frac{B_{i}}{\sqrt{2 \pi (\sigma_{in}^{2} + \sigma_{out}^{2})}} exp \left( - \frac{(x_{i}- \mu_{out})^{2}}{2(\sigma_{in}^{2} + \sigma_{out}^{2})} \right) \right]
$$
where:
$\bf{B}$ is Bernoulli-distributed $B_{i} \in [0_{(inlier)},1_{(outlier)}]$
### Define model
```
def logp_signoise(yobs, is_outlier, yest_in, sigma_y_in, yest_out, sigma_y_out):
'''
Define custom loglikelihood for inliers vs outliers.
NOTE: in this particular case we don't need to use theano's @as_op
decorator because (as stated by Twiecki in conversation) that's only
required if the likelihood cannot be expressed as a theano expression.
We also now get the gradient computation for free.
'''
# likelihood for inliers
pdfs_in = T.exp(-(yobs - yest_in + 1e-4)**2 / (2 * sigma_y_in**2))
pdfs_in /= T.sqrt(2 * np.pi * sigma_y_in**2)
logL_in = T.sum(T.log(pdfs_in) * (1 - is_outlier))
# likelihood for outliers
pdfs_out = T.exp(-(yobs - yest_out + 1e-4)**2 / (2 * (sigma_y_in**2 + sigma_y_out**2)))
pdfs_out /= T.sqrt(2 * np.pi * (sigma_y_in**2 + sigma_y_out**2))
logL_out = T.sum(T.log(pdfs_out) * is_outlier)
return logL_in + logL_out
with pm.Model() as mdl_signoise:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=10, testval=pm.floatX(0.1))
b1 = pm.Normal('b1_slope', mu=0, sd=10, testval=pm.floatX(1.))
## Define linear model
yest_in = b0 + b1 * dfhoggs['x']
## Define weakly informative priors for the mean and variance of outliers
yest_out = pm.Normal('yest_out', mu=0, sd=100, testval=pm.floatX(1.))
sigma_y_out = pm.HalfNormal('sigma_y_out', sd=100, testval=pm.floatX(1.))
## Define Bernoulli inlier / outlier flags according to a hyperprior
## fraction of outliers, itself constrained to [0,.5] for symmetry
frac_outliers = pm.Uniform('frac_outliers', lower=0., upper=.5)
is_outlier = pm.Bernoulli('is_outlier', p=frac_outliers, shape=dfhoggs.shape[0],
testval=np.random.rand(dfhoggs.shape[0]) < 0.2)
## Extract observed y and sigma_y from dataset, encode as theano objects
yobs = thno.shared(np.asarray(dfhoggs['y'], dtype=thno.config.floatX), name='yobs')
sigma_y_in = thno.shared(np.asarray(dfhoggs['sigma_y'], dtype=thno.config.floatX),
name='sigma_y_in')
## Use custom likelihood using DensityDist
likelihood = pm.DensityDist('likelihood', logp_signoise,
observed={'yobs': yobs, 'is_outlier': is_outlier,
'yest_in': yest_in, 'sigma_y_in': sigma_y_in,
'yest_out': yest_out, 'sigma_y_out': sigma_y_out})
```
### Sample
```
with mdl_signoise:
## two-step sampling to create Bernoulli inlier/outlier flags
step1 = pm.Metropolis([frac_outliers, yest_out, sigma_y_out, b0, b1])
step2 = pm.step_methods.BinaryGibbsMetropolis([is_outlier])
## take samples
traces_signoise = pm.sample(20000, step=[step1, step2], tune=10000, progressbar=True)
```
### View Traces
```
traces_signoise[-10000:]['b0_intercept']
_ = pm.traceplot(traces_signoise[-10000:], figsize=(12,len(traces_signoise.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces_signoise[-1000:]).iterrows()})
```
**NOTE:**
+ During development I've found that 3 datapoints id=[1,2,3] are always indicated as outliers, but the remaining ordering of datapoints by decreasing outlier-hood is unstable between runs: the posterior surface appears to have a small number of solutions with very similar probability.
+ The NUTS sampler seems to work okay, and indeed it's a nice opportunity to demonstrate a custom likelihood which is possible to express as a theano function (thus allowing a gradient-based sampler like NUTS). However, with a more complicated dataset, I would spend time understanding this instability and potentially prefer using more samples under Metropolis-Hastings.
---
---
## Declare Outliers and Compare Plots
### View ranges for inliers / outlier predictions
At each step of the traces, each datapoint may be either an inlier or outlier. We hope that the datapoints spend an unequal time being one state or the other, so let's take a look at the simple count of states for each of the 20 datapoints.
```
outlier_melt = pd.melt(pd.DataFrame(traces_signoise['is_outlier', -1000:],
columns=['[{}]'.format(int(d)) for d in dfhoggs.index]),
var_name='datapoint_id', value_name='is_outlier')
ax0 = sns.pointplot(y='datapoint_id', x='is_outlier', data=outlier_melt,
kind='point', join=False, ci=None, size=4, aspect=2)
_ = ax0.vlines([0,1], 0, 19, ['b','r'], '--')
_ = ax0.set_xlim((-0.1,1.1))
_ = ax0.set_xticks(np.arange(0, 1.1, 0.1))
_ = ax0.set_xticklabels(['{:.0%}'.format(t) for t in np.arange(0,1.1,0.1)])
_ = ax0.yaxis.grid(True, linestyle='-', which='major', color='w', alpha=0.4)
_ = ax0.set_title('Prop. of the trace where datapoint is an outlier')
_ = ax0.set_xlabel('Prop. of the trace where is_outlier == 1')
```
**Observe**:
+ The plot above shows the number of samples in the traces in which each datapoint is marked as an outlier, expressed as a percentage.
+ In particular, 3 points [1, 2, 3] spend >=95% of their time as outliers
+ Contrastingly, points at the other end of the plot close to 0% are our strongest inliers.
+ For comparison, the mean posterior value of `frac_outliers` is ~0.35, corresponding to roughly 7 of the 20 datapoints. You can see these 7 datapoints in the plot above, all those with a value >50% or thereabouts.
+ However, only 3 of these points are outliers >=95% of the time.
+ See note above regarding instability between runs.
The 95% cutoff we choose is subjective and arbitrary, but I prefer it for now, so let's declare these 3 to be outliers and see how it looks compared to Jake Vanderplas' outliers, which were declared in a slightly different way as points with means above 0.68.
### Declare outliers
**Note:**
+ I will declare outliers to be datapoints that have value == 1 at the 5-percentile cutoff, i.e. in the percentiles from 5 up to 100, their values are 1.
+ Try altering `cutoff` to larger values yourself; sweeping it yields an objective ranking of outlier-hood.
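To see why a percentile cutoff acts as a ">=95% of the trace" test on a 0/1 variable, consider a minimal sketch with made-up toy traces:

```python
import numpy as np

# Toy 0/1 traces (values assumed for illustration)
trace_a = np.array([1] * 96 + [0] * 4)    # an outlier in 96% of samples
trace_b = np.array([1] * 90 + [0] * 10)   # an outlier in only 90% of samples

# The 5th percentile of a sorted 0/1 trace is 1 exactly when fewer than
# ~5% of the samples are 0, i.e. the point is an outlier >=95% of the time
print(np.percentile(trace_a, 5))  # 1.0 -> declared an outlier
print(np.percentile(trace_b, 5))  # 0.0 -> not declared
```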
```
cutoff = 5
dfhoggs['outlier'] = np.percentile(traces_signoise[-1000:]['is_outlier'],cutoff, axis=0)
dfhoggs['outlier'].value_counts()
```
### Posterior Prediction Plots for OLS vs StudentT vs SignalNoise
```
g = sns.FacetGrid(dfhoggs, size=8, hue='outlier', hue_order=[True,False],
palette='Set1', legend_out=False)
lm = lambda x, samp: samp['b0_intercept'] + samp['b1_slope'] * x
pm.plot_posterior_predictive_glm(traces_ols[-1000:],
eval=np.linspace(-3, 3, 10), lm=lm, samples=200, color='#22CC00', alpha=.2)
pm.plot_posterior_predictive_glm(traces_studentt[-1000:], lm=lm,
eval=np.linspace(-3, 3, 10), samples=200, color='#FFA500', alpha=.5)
pm.plot_posterior_predictive_glm(traces_signoise[-1000:], lm=lm,
eval=np.linspace(-3, 3, 10), samples=200, color='#357EC7', alpha=.3)
_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker="o", ls='').add_legend()
_ = g.axes[0][0].annotate('OLS Fit: Green\nStudent-T Fit: Orange\nSignal Vs Noise Fit: Blue',
size='x-large', xy=(1,0), xycoords='axes fraction',
xytext=(-160,10), textcoords='offset points')
_ = g.axes[0][0].set_ylim(ylims)
_ = g.axes[0][0].set_xlim(xlims)
```
**Observe**:
+ The posterior predictive fit for:
+ the **OLS model** is shown in **Green** and as expected, it doesn't appear to fit the majority of our datapoints very well, skewed by outliers
+ the **Robust Student-T model** is shown in **Orange** and does appear to fit the 'main axis' of datapoints quite well, ignoring outliers
+ the **Robust Signal vs Noise model** is shown in **Blue** and also appears to fit the 'main axis' of datapoints rather well, ignoring outliers.
+ We see that the **Robust Signal vs Noise model** also yields specific estimates of _which_ datapoints are outliers:
+ 17 'inlier' datapoints, in **Blue** and
+ 3 'outlier' datapoints shown in **Red**.
+ From a simple visual inspection, the classification seems fair, and agrees with Jake Vanderplas' findings.
+ Overall, it seems that:
+ the **Signal vs Noise model** behaves as promised, yielding a robust regression estimate and explicit labelling of inliers / outliers, but
+ the **Signal vs Noise model** is quite complex and whilst the regression seems robust and stable, the actual inlier / outlier labelling seems slightly unstable
+ if you simply want a robust regression without inlier / outlier labelling, the **Student-T model** may be a good compromise, offering a simple model, quick sampling, and a very similar estimate.
---
Example originally contributed by Jonathan Sedar 2015-12-21 [github.com/jonsedar](https://github.com/jonsedar)
| github_jupyter |
```
import pandas as pd
import numpy as np
import gc
import time
df=pd.read_csv("df.csv")
train_df = df[df['TARGET'].notnull()]
test_df = df[df['TARGET'].isnull()]
del df
gc.collect()
feats = [f for f in train_df.columns if f not in ['TARGET','SK_ID_CURR','SK_ID_BUREAU','SK_ID_PREV','index']]
X = train_df[feats]
y = train_df['TARGET']
total = X.isnull().sum().sort_values(ascending = False)
percent = (X.isnull().sum() / X.isnull().count()*100).sort_values(ascending = False)
missing_X = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
L=missing_X[missing_X['Total']==0]
L1=L.reset_index().values
nomiss_feats=[]
for e in range(0, L.shape[0]):
    nomiss_feats.append(L1[e][0])
del missing_X
del L
del L1
gc.collect()
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
for f in X.columns:
    if f in nomiss_feats:
        X[f] = scaler.fit_transform(X[f].values.reshape(-1,1))
y = scaler.fit_transform(y.values.reshape(-1,1))
del nomiss_feats
gc.collect()
X = X.fillna(value = 0)
# split data
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(X, y,test_size=0.2)#, random_state=42)
print (X_train.shape)
print (X_valid.shape)
print (y_train.shape)
print (y_valid.shape)
# Build and run model
import tensorflow as tf
ITERATIONS = 40000
LEARNING_RATE = 1e-4
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
# Let's train the model
feature_count = X_train.shape[1]
x = tf.placeholder('float', shape=[None, feature_count], name='x')
y_ = tf.placeholder('float', shape=[None, 1], name='y_')
print(x.get_shape())
nodes = 20
w1 = weight_variable([feature_count, nodes])
b1 = bias_variable([nodes])
l1 = tf.nn.relu(tf.matmul(x, w1) + b1)
w2 = weight_variable([nodes, 1])
b2 = bias_variable([1])
y = tf.nn.sigmoid(tf.matmul(l1, w2) + b2)
cross_entropy = -tf.reduce_mean(y_*tf.log(tf.maximum(0.00001, y)) + (1.0 - y_)*tf.log(tf.maximum(0.00001, 1.0-y)))
reg = 0.01 * (tf.reduce_mean(tf.square(w1)) + tf.reduce_mean(tf.square(w2)))
predict = (y > 0.5)
correct_prediction = tf.equal(predict, (y_ > 0.5))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
train_step = tf.train.AdamOptimizer(LEARNING_RATE).minimize(cross_entropy + reg)
init = tf.global_variables_initializer()  # tf.initialize_all_variables() is deprecated
sess = tf.Session()
sess.run(init)
for i in range(ITERATIONS):
    feed = {x: X_train, y_: y_train}
    sess.run(train_step, feed_dict=feed)
    if i % 1000 == 0 or i == ITERATIONS-1:
        print('{} {} {:.2f}%'.format(i, sess.run(cross_entropy, feed_dict=feed), sess.run(accuracy, feed_dict=feed)*100.0))
T = test_df[feats]
total = T.isnull().sum().sort_values(ascending = False)
percent = (T.isnull().sum() / T.isnull().count()*100).sort_values(ascending = False)
missing_T = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
L2=missing_T[missing_T['Total']==0]
L3=L2.reset_index().values
nomiss_feats=[]
for e in range(0, L2.shape[0]):
    nomiss_feats.append(L3[e][0])
del missing_T
del L2
del L3
gc.collect()
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
for f in T.columns:
    if f in nomiss_feats:
        T[f] = scaler.fit_transform(T[f].values.reshape(-1,1))
del nomiss_feats
gc.collect()
T
```
# Demystifying Neural Networks
---
# Exercises - ANN Weights
We will generate matrices that can be used as the weights of an ANN.
You can generate matrices with any function from `numpy.random`.
You can provide a tuple to the `size=` parameter to get an array
of that shape. For example, `np.random.normal(0, 1, (3, 6))`
generates a matrix of 3 rows and 6 columns.
```
import numpy as np
```
#### 1. Generate the following matrices with values from the normal distribution
A) $A_{2 \times 3}$
B) $B_{7 \times 5}$
```
A = np.random.normal(0, 1, (2, 3))
B = np.random.normal(0, 1, (7, 5))
print(A)
print(B)
```
#### 2. Generate matrices of the same size as those used in the `pytorch` network
$$
W_{25 \times 8}, W_{B\: 25 \times 1},
W'_{10 \times 25}, W'_{B\: 10 \times 1},
W''_{2 \times 10}, W''_{B\: 2 \times 1}
$$
```
W = np.random.normal(0, 1, (25, 8))
W_B = np.random.normal(0, 1, (25, 1))
Wx = np.random.normal(0, 1, (10, 25))
Wx_B = np.random.normal(0, 1, (10, 1))
Wxx = np.random.normal(0, 1, (2, 10))
Wxx_B = np.random.normal(0, 1, (2, 1))
weights = [W, W_B, Wx, Wx_B, Wxx, Wxx_B]
print([x.shape for x in weights])
```
---
Weight generation is a big topic in ANN research.
We will use one well-accepted way of generating weights, but there are a plethora of others.
The way we will generate weight matrices is as follows:
If we need to generate a matrix of size $p \times n$,
we take all values for the matrix from the normal distribution
with mean and standard deviation as:
$$
\mu = 0 \\
\sigma = \frac{1}{n + p}
$$
In `numpy` the mean argument is `loc=` and standard deviation is called `scale=`
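For example, a hypothetical $10 \times 25$ matrix drawn under this scheme looks like:

```python
import numpy as np

# p rows, n columns; sigma = 1 / (n + p) as described above
p, n = 10, 25
W = np.random.normal(loc=0, scale=1 / (n + p), size=(p, n))
print(W.shape)
```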
#### 3. Generate the same matrices as above but use the distribution described above, then evaluate
$$
X = \left[
\begin{matrix}
102.50781 & 58.88243 & 0.46532 & -0.51509 & 1.67726 & 14.86015 & 10.57649 & 127.39358 \\
142.07812 & 45.28807 & -0.32033 & 0.28395 & 5.37625 & 29.00990 & 6.07627 & 37.83139 \\
138.17969 & 51.52448 & -0.03185 & 0.04680 & 6.33027 & 31.57635 & 5.15594 & 26.14331 \\
\end{matrix}
\right]
$$
(These are the first three rows in the pulsar dataset)
$$
\hat{Y}_{2 \times 3} = tanh(W''_{2 \times 10} \times
tanh(W'_{10 \times 25} \times
tanh(W_{25 \times 8} \times X^T + W_{B\: 25 \times 1})
+ W'_{B\: 10 \times 1})
+ W''_{B\: 2 \times 1})
$$
```
X = np.array([
[102.50781, 58.88243, 0.46532, -0.51509, 1.67726, 14.86015, 10.57649, 127.39358],
[142.07812, 45.28807, -0.32033, 0.28395, 5.37625, 29.00990, 6.07627, 37.83139],
[138.17969, 51.52448, -0.03185, 0.04680, 6.33027, 31.57635, 5.15594, 26.14331],
])
W = np.random.normal(0, 1/(8+25), (25, 8))
W_B = np.random.normal(0, 1/(25+1), (25, 1))
Wx = np.random.normal(0, 1/(10+25), (10, 25))
Wx_B = np.random.normal(0, 1/(10+1), (10, 1))
Wxx = np.random.normal(0, 1/(2+10), (2, 10))
Wxx_B = np.random.normal(0, 1/(2+1), (2, 1));
Y_hat = np.tanh(Wxx @ np.tanh(Wx @ np.tanh(W @ X.T + W_B) + Wx_B) + Wxx_B)
print(Y_hat.T)
```
# Document Processing with AutoML and Vision API
## Problem Statement
Formally the brief for this Open Project could be stated as follows: Given a collection of varying pdf/png documents containing similar information, create a pipeline that will extract relevant entities from the documents and store the entities in a standardized, easily accessible format.
The data for this project is contained in the Cloud Storage bucket [gs://document-processing/patent_dataset.zip](https://storage.googleapis.com/document-processing/patent_dataset.zip). The file [gs://document-processing/ground_truth.csv](https://storage.googleapis.com/document-processing/ground_truth.csv) contains hand-labeled fields extracted from the patents.
The labels in the ground_truth.csv file are filename, category, publication_date, classification_1, classification_2, application_number, filing_date, priority, representative, applicant, inventor, titleFL, titleSL, abstractFL, and publication_number
Here is an example of two different patent formats:
<table><tr>
<td> <img src="eu_patent.png" alt="Drawing" style="width: 600px;"/> </td>
<td> <img src="us_patent.png" alt="Drawing" style="width: 600px;"/> </td>
</tr></table>
### Flexible Solution
There are many possible ways to develop a solution to this task which allows students to touch on various functionality and GCP tools that we discuss during the ASL, including the Vision API, AutoML Vision, BigQuery, Tensorflow, Cloud Composer, PubSub.
Students more interested in modeling with Tensorflow could build a classification model from scratch to recognize the various types of document formats at hand. Knowing the document format (e.g. US or EU patents as in the example above), relevant entities can then be extracted using the Vision API and some basic regex extractors. It might also be possible to train a [conditional random field in Tensorflow](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/crf) to learn how to tag and extract relevant entities from text and the given labels, instead of writing regex-based entity extractors for each document class.
Students more interested in productionization could work to use Cloud Functions to automate the extraction pipeline. Or incorporate PubSub so that when a new document is uploaded to a specific GCS bucket it is parsed and the entities uploaded to a BigQuery table.
Below is a solution outline that uses the Vision API and AutoML, uploading the extracted entities to a table in BigQuery.
## Install AutoML package
**Caution:** Run the following command and **restart the kernel** afterwards.
```
!pip freeze | grep google-cloud-automl==0.1.2 || pip install --upgrade google-cloud-automl==0.1.2
```
## Set the correct environment variables
The following variables should be updated according to your own environment:
```
PROJECT_ID = "asl-open-projects"
SERVICE_ACCOUNT = "entity-extractor"
ZONE = "us-central1"
AUTOML_MODEL_ID = "ICN6705037528556716784"
```
The following variables are computed from the ones you set above, and should
not be modified:
```
import os
PWD = os.path.abspath(os.path.curdir)
SERVICE_KEY_PATH = os.path.join(PWD, "{0}.json".format(SERVICE_ACCOUNT))
SERVICE_ACCOUNT_EMAIL="{0}@{1}.iam.gserviceaccount.com".format(SERVICE_ACCOUNT, PROJECT_ID)
# Exporting the variables into the environment to make them available to all the subsequent cells
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["SERVICE_ACCOUNT"] = SERVICE_ACCOUNT
os.environ["SERVICE_KEY_PATH"] = SERVICE_KEY_PATH
os.environ["SERVICE_ACCOUNT_EMAIL"] = SERVICE_ACCOUNT_EMAIL
os.environ["ZONE"] = ZONE
```
## Switch to the right project and zone
```
%%bash
gcloud config set project $PROJECT_ID
gcloud config set compute/region $ZONE
```
## Create a service account
```
%%bash
gcloud iam service-accounts list | grep $SERVICE_ACCOUNT ||
gcloud iam service-accounts create $SERVICE_ACCOUNT
```
## Grant service account project ownership
TODO: We should ideally restrict the permissions to AutoML and Vision roles only
```
%%bash
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member "serviceAccount:$SERVICE_ACCOUNT_EMAIL" \
--role "roles/owner"
```
## Create service account keys if not existing
```
%%bash
test -f $SERVICE_KEY_PATH ||
gcloud iam service-accounts keys create $SERVICE_KEY_PATH \
--iam-account $SERVICE_ACCOUNT_EMAIL
echo "Service key: $(ls $SERVICE_KEY_PATH)"
```
## Make the key available to google clients for authentication
```
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = SERVICE_KEY_PATH
```
## Implement a document classifier with AutoML
Here is a simple wrapper around an already trained AutoML model trained directly
from the cloud console on the various document types:
```
from google.cloud import automl_v1beta1 as automl
class DocumentClassifier:
    def __init__(self, project_id, model_id, zone):
        self.client = automl.PredictionServiceClient()
        self.model = self.client.model_path(project_id, zone, model_id)

    def __call__(self, filename):
        with open(filename, 'rb') as fp:
            image = fp.read()
        payload = {
            'image': {
                'image_bytes': image
            }
        }
        response = self.client.predict(self.model, payload)
        predicted_class = response.payload[0].display_name
        return predicted_class
```
Let's see how to use that `DocumentClassifier`:
```
classifier = DocumentClassifier(PROJECT_ID, AUTOML_MODEL_ID, ZONE)
eu_image_label = classifier("./eu_patent.png")
us_image_label = classifier("./us_patent.png")
print("EU patent inferred label:", eu_image_label)
print("US patent inferred label:", us_image_label)
```
## Implement a document parser with Vision API
Documentation:
* https://cloud.google.com/vision/docs/base64
* https://stackoverflow.com/questions/49918950/response-400-from-google-vision-api-ocr-with-a-base64-string-of-specified-image
Here is a simple class wrapping calls to the OCR capabilities of Cloud Vision:
```
!pip freeze | grep google-api-python-client==1.7.7 || pip install --upgrade google-api-python-client==1.7.7
import base64
from googleapiclient.discovery import build
class DocumentParser:
    def __init__(self):
        self.client = build('vision', 'v1')

    def __call__(self, filename):
        with open(filename, 'rb') as fp:
            image = fp.read()
        encoded_image = base64.b64encode(image).decode('UTF-8')
        payload = {
            'requests': [{
                'image': {
                    'content': encoded_image
                },
                'features': [{
                    'type': 'TEXT_DETECTION',
                }]
            }],
        }
        request = self.client.images().annotate(body=payload)
        response = request.execute(num_retries=3)
        return response['responses'][0]['textAnnotations'][0]['description']
```
Let's now see how to use our `DocumentParser`:
```
parser = DocumentParser()
eu_patent_text = parser("./eu_patent.png")
us_patent_text = parser("./us_patent.png")
print(eu_patent_text)
```
## Implement rule-based extractors for each document category
For each patent type, we now want to write a simple function that takes the
text extracted by the OCR system above and extracts the name and date of the patent.
We will write two rule-based extractors, one for each type of patent (us or eu), each of
which will yield a `PatentInfo` object collecting the extracted fields into a `namedtuple` instance.
```
from collections import namedtuple
PatentInfo = namedtuple('PatentInfo', ['filename', 'category', 'date', 'number'])
```
Here are two helper functions for text splitting and pattern matching:
```
!pip freeze | grep pandas==0.23.4 || pip install --upgrade pandas==0.23.4
import pandas as pd
import re
def split_text_into_lines(text, sep=r"\(..\)"):
    lines = [line.strip() for line in re.split(sep, text)]
    return lines

def extract_pattern_from_lines(lines, pattern):
    """Extracts the first line from `lines` with a matching `pattern`."""
    lines = pd.Series(lines)
    mask = lines.str.contains(pattern)
    return lines[mask].values[0] if mask.any() else None
```
### European patent extractor
```
def extract_info_from_eu_patent(filename, text):
    lines = split_text_into_lines(text)
    category = "eu"
    number_paragraph = extract_pattern_from_lines(lines, "EP")
    number_lines = number_paragraph.split('\n')
    number = extract_pattern_from_lines(number_lines, 'EP')
    date_paragraph = extract_pattern_from_lines(lines, 'Date of filing:')
    date = date_paragraph.replace("Date of filing:", "").strip()
    return PatentInfo(
        filename=filename,
        category=category,
        date=date,
        number=number
    )
eu_patent_info = extract_info_from_eu_patent("./eu_patent.png", eu_patent_text)
eu_patent_info
```
### US patent extractor
```
def extract_info_from_us_patent(filename, text):
    lines = split_text_into_lines(text)
    category = "us"
    number_paragraph = extract_pattern_from_lines(lines, "Patent No.:")
    number = number_paragraph.replace("Patent No.:", "").strip()
    date_paragraph = extract_pattern_from_lines(lines, "Date of Patent:")
    date = date_paragraph.split('\n')[-1]
    return PatentInfo(
        filename=filename,
        category=category,
        date=date,
        number=number
    )
us_patent_info = extract_info_from_us_patent("./us_patent.png", us_patent_text)
us_patent_info
```
## Tie all together into a DocumentExtractor
```
class DocumentExtractor:
    def __init__(self, classifier, parser):
        self.classifier = classifier
        self.parser = parser

    def __call__(self, filename):
        text = self.parser(filename)
        label = self.classifier(filename)
        if label == 'eu':
            info = extract_info_from_eu_patent(filename, text)
        elif label == 'us':
            info = extract_info_from_us_patent(filename, text)
        else:
            raise ValueError('Unknown document label: {}'.format(label))
        return info
extractor = DocumentExtractor(classifier, parser)
eu_patent_info = extractor("./eu_patent.png")
us_patent_info = extractor("./us_patent.png")
print(eu_patent_info)
print(us_patent_info)
```
## Upload found entities to BigQuery
Start by adding a dataset called patents to the current project
```
!pip freeze | grep google-cloud-bigquery==1.8.1 || pip install google-cloud-bigquery==1.8.1
```
Check to see if the dataset called "patents" exists in the current project. If not, create it.
```
from google.cloud import bigquery
client = bigquery.Client()
# Collect datasets and project information
datasets = list(client.list_datasets())
project = client.project
# Create a list of the datasets. If the 'patents' dataset
# does not exist, then create it.
if datasets:
    all_datasets = []
    for dataset in datasets:
        all_datasets.append(dataset.dataset_id)
else:
    print('{} project does not contain any datasets.'.format(project))

if datasets and 'patents' in all_datasets:
    print('The dataset "patents" already exists in project {}.'.format(project))
else:
    dataset_id = 'patents'
    dataset_ref = client.dataset(dataset_id)
    # Construct a Dataset object.
    dataset = bigquery.Dataset(dataset_ref)
    # Specify the geographic location where the dataset should reside.
    dataset.location = "US"
    # Send the dataset to the API for creation.
    dataset = client.create_dataset(dataset)  # API request
    print('The dataset "patents" was created in project {}.'.format(project))
```
Upload the extracted entities to a table called "found_entities" in the "patents" dataset.
Start by creating an empty table in the patents dataset.
```
# Create an empty table in the patents dataset and define schema
dataset_ref = client.dataset('patents')
schema = [
bigquery.SchemaField('filename', 'STRING', mode='NULLABLE'),
bigquery.SchemaField('category', 'STRING', mode='NULLABLE'),
bigquery.SchemaField('date', 'STRING', mode='NULLABLE'),
bigquery.SchemaField('number', 'STRING', mode='NULLABLE'),
]
table_ref = dataset_ref.table('found_entities')
table = bigquery.Table(table_ref, schema=schema)
table = client.create_table(table) # API request
assert table.table_id == 'found_entities'
def upload_to_bq(patent_info, dataset_id, table_id):
    """Appends the information extracted in `patent_info` to the
    dataset_id:table_id table in BigQuery.

    `patent_info` should be a namedtuple as created above, with fields
    matching the schema set up for the table.
    """
    table_ref = client.dataset(dataset_id).table(table_id)
    table = client.get_table(table_ref)  # API request
    rows_to_insert = [tuple(patent_info._asdict().values())]
    errors = client.insert_rows(table, rows_to_insert)  # API request
    assert errors == []
upload_to_bq(eu_patent_info, 'patents', 'found_entities')
upload_to_bq(us_patent_info, 'patents', 'found_entities')
```
### Examine the results in BigQuery.
We can now query the BigQuery table to see what values have been uploaded.
```
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM `asl-open-projects.patents.found_entities`
```
We can also look at the resulting entities in a dataframe.
```
dataset_id = 'patents'
table_id = 'found_entities'
sql = """
SELECT
*
FROM
`{}.{}.{}`
LIMIT 10
""".format(project, dataset_id, table_id)
df = client.query(sql).to_dataframe()
df.head()
```
## Pipeline Evaluation
TODO: We should include some section on how to evaluate the performance of the extractor. Here we can use the ground_truth table and explore different kinds of string metrics (e.g. Levenshtein distance) to measure accuracy of the entity extraction.
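As a starting point for that TODO, here is a minimal sketch: a pure-Python Levenshtein distance turned into a per-field similarity score that could be averaged over the ground_truth table. The example strings compared at the end are hypothetical.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            insertions = previous[j] + 1
            deletions = current[j - 1] + 1
            substitutions = previous[j - 1] + (ca != cb)
            current.append(min(insertions, deletions, substitutions))
        previous = current
    return previous[-1]

def field_similarity(extracted: str, truth: str) -> float:
    """1.0 for a perfect match, 0.0 for completely different strings."""
    if not extracted and not truth:
        return 1.0
    return 1.0 - levenshtein(extracted, truth) / max(len(extracted), len(truth))

# Hypothetical comparison of one extracted field against ground truth
print(field_similarity('Date of filing: 25.09.1996', 'Date of filing: 25.09.1995'))
```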
## Clean up
To remove the table "found_entities" from the "patents" dataset created above.
```
dataset_id = 'patents'
table_id = 'found_entities'
tables = list(client.list_tables(dataset_id)) # API request(s)
if tables:
    num_tables = len(tables)
    all_tables = []
    for i in range(num_tables):
        all_tables.append(tables[i].table_id)
    print('These tables were found in the {} dataset: {}'.format(dataset_id, all_tables))
    if table_id in all_tables:
        table_ref = client.dataset(dataset_id).table(table_id)
        client.delete_table(table_ref)  # API request
        print('Table {} was deleted from dataset {}.'.format(table_id, dataset_id))
else:
    print('{} dataset does not contain any tables.'.format(dataset_id))
```
### The next cells will remove the patents dataset and all of its tables. Not recommended, as a table of 'ground_truth' entities for the files was recently uploaded.
To remove the "patents" dataset and all of its tables.
```
'''
client = bigquery.Client()
# Collect datasets and project information
datasets = list(client.list_datasets())
project = client.project
if datasets:
    all_datasets = []
    for dataset in datasets:
        all_datasets.append(dataset.dataset_id)
    if 'patents' in all_datasets:
        # Delete the dataset "patents" and its contents
        dataset_id = 'patents'
        dataset_ref = client.dataset(dataset_id)
        client.delete_dataset(dataset_ref, delete_contents=True)
        print('Dataset {} deleted from project {}.'.format(dataset_id, project))
    else:
        print('{} project does not contain the "patents" dataset.'.format(project))
else:
    print('{} project does not contain any datasets.'.format(project))
'''
```
# Try MediaWiki's RecentChanges API
## Setup
### imports
```
import pandas as pd, dateutil.parser as dp
import os, requests, datetime, time, json
from sseclient import SSEClient as EventSource
```
### define function ```get_rc```
```
def get_rc(rc_list: list, params: dict, url: str, sesh) -> str:
    '''
    Inputs:  rc_list: list to be populated with recentchanges jsons
             params: dictionary of parameters for the API request;
                 this fn expects at least these parameters:
                     'rcprop' : 'timestamp|ids', (more is okay)
                     'action' : 'query',
                     'rcdir'  : 'newer',
                     'format' : 'json',
                     'list'   : 'recentchanges',
             url: API url (designates which wiki)
             sesh: requests session
    Outputs: timestamp of the third-newest change, used as the next
             rcstart (the overlap is deduplicated later by rcid)
    '''
    raw_output = sesh.get(url=url, params=params)
    json_data = raw_output.json()
    recent_changes = json_data['query']['recentchanges']
    rc_list.append(recent_changes)
    timestamps = [rc['timestamp'] for rc in recent_changes]
    timestamps = sorted(map(dp.isoparse, timestamps))
    ts = timestamps[-3]
    return ts.strftime('%Y-%m-%dT%H:%M:%SZ')
```
## Collect 500 recent changes
### get ```rc_list```
#### initialize requests session
```
sesh = requests.Session()
```
#### set parameters
```
rc_list=[]
url = 'https://en.wikipedia.org/w/api.php'
params = {
'rcstart' : '2021-10-20T00:30:01Z',
'rcnamespace' : '0',
'rcshow' : '!bot',
'rclimit' : '50',
'rcprop': 'user|userid|timestamp|title|ids|sizes',
'action' : 'query',
'rcdir' : 'newer',
'format' : 'json',
'list' : 'recentchanges',
}
# Dictionary keys that output from these parameters:
['timestamp', 'type', 'title', 'anon', 'rcid', 'ns', 'revid', 'pageid', 'user', 'userid', 'oldlen', 'old_revid', 'newlen'];
```
#### populate rc_list
```
for i in range(10):
    latest_timestamp = get_rc(rc_list, params, url, sesh)
    params['rcstart'] = latest_timestamp
    print(f'{i} {latest_timestamp}')
    time.sleep(.5)
```
### peek at ```rc_list```
#### check dimensions of nested list
```
len(rc_list), len(rc_list[0]), len(rc_list[0][0])
```
#### look at one JSON element of nested list
```
rc_list[0][0]
```
#### timestamp of latest recentchanges record
```
latest_timestamp
```
### flatten rc_list to get unique_jsons
#### flatten
```
# flatten the jsons
all_jsons = [item for sublist in rc_list for item in sublist]
# remove jsons with duplicate rcid's
all_rcids = {j['rcid']:i for i,j in enumerate(all_jsons)}
unique_jsons = [all_jsons[i] for i in all_rcids.values()]
```
#### make dataframe
```
df = pd.DataFrame.from_records(unique_jsons)
```
#### peek at jsons as dataframe
```
df
# df.to_csv('../data/interim/2021-10-20T00:30:01Z_2021-10-20T01:39:40Z.csv')
```
## Build SQL schema for import into database
### This is for PostgreSQL; <mark>(update for MySQL)</mark>
#### make "create table" query
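A minimal sketch of this step, building the `CREATE TABLE` statement from the recentchanges fields seen above; the table name and the type mapping are assumptions, and the statement targets PostgreSQL as noted.

```python
# Assumed mapping from recentchanges fields to PostgreSQL column types
columns = {
    'rcid': 'BIGINT PRIMARY KEY',
    'timestamp': 'TIMESTAMPTZ',
    'type': 'TEXT',
    'title': 'TEXT',
    'user': 'TEXT',  # "user" is a reserved word in SQL, so keep it quoted
    'userid': 'BIGINT',
    'pageid': 'BIGINT',
    'revid': 'BIGINT',
    'old_revid': 'BIGINT',
    'oldlen': 'INTEGER',
    'newlen': 'INTEGER',
}

def make_create_table(table_name: str, columns: dict) -> str:
    cols = ',\n  '.join(f'"{name}" {sqltype}' for name, sqltype in columns.items())
    return f'CREATE TABLE IF NOT EXISTS "{table_name}" (\n  {cols}\n);'

query = make_create_table('recent_changes', columns)
print(query)
```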
#### connect to database
#### create table and import data
#### update table to have date
#### SQLAlchemy
#### More detailed requests from wikipedia
<a href="https://colab.research.google.com/github/ArmandoSep/DS-Unit-2-Applied-Modeling/blob/master/module2-wrangle-ml-datasets/LS_DS_232.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 3, Module 2*
---
# Wrangle ML datasets
- Explore tabular data for supervised machine learning
- Join relational data for supervised machine learning
# Explore tabular data for supervised machine learning 🍌
Wrangling your dataset is often the most challenging and time-consuming part of the modeling process.
In today's lesson, we’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)!
Let’s get set up:
```
# Download data
import requests
def download(url):
    filename = url.split('/')[-1]
    print(f'Downloading {url}')
    r = requests.get(url)
    with open(filename, 'wb') as f:
        f.write(r.content)
    print(f'Downloaded {filename}')
download('https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz')
# Uncompress data
import tarfile
tarfile.open('instacart_online_grocery_shopping_2017_05_01.tar.gz').extractall()
# Change directory to where the data was uncompressed
%cd instacart_2017_05_01
# Print the csv filenames
from glob import glob
for filename in glob('*.csv'):
print(filename)
# For each csv file, look at its shape & head
import pandas as pd
from IPython.display import display
def preview():
    for filename in glob('*.csv'):
        df = pd.read_csv(filename)
        print('\n', filename, df.shape)
        display(df.head())
preview()
```
### The original task was complex ...
[The Kaggle competition said,](https://www.kaggle.com/c/instacart-market-basket-analysis/data):
> The dataset for this competition is a relational set of files describing customers' orders over time. The goal of the competition is to predict which products will be in a user's next order.
> orders.csv: This file tells to which set (prior, train, test) an order belongs. You are predicting reordered items only for the test set orders.
Each row in the submission is an order_id from the test set, followed by product_id(s) predicted to be reordered.
> sample_submission.csv:
```
order_id,products
17,39276 29259
34,39276 29259
137,39276 29259
182,39276 29259
257,39276 29259
```
### ... but we can simplify!
Simplify the question, from "Which products will be reordered?" (Multi-class, [multi-label](https://en.wikipedia.org/wiki/Multi-label_classification) classification) to **"Will customers reorder this one product?"** (Binary classification)
Which product? How about **the most frequently ordered product?**
### Questions:
- What is the most frequently ordered product?
- How often is this product included in a customer's next order?
- Which customers have ordered this product before?
- How can we get a subset of data, just for these customers?
- What features can we engineer? We want to predict, will these customers reorder this product on their next order?
## Follow Along
### What was the most frequently ordered product?
```
order_products__train = pd.read_csv('order_products__train.csv')
order_products__train.head()
order_products__train['product_id'].value_counts()
# Group by example
temp = order_products__train.sort_values(by='product_id',
ascending=False).head()
temp
temp.groupby('product_id').count()
order_products__train.groupby('product_id').order_id.count().sort_values(
ascending=False
)
# Product 24852 was ordered almost 20k times.
# Read the products table to see what product it is
products = pd.read_csv('products.csv')
products.head()
products[products.product_id==24852]
# On -> the common column
train = pd.merge(order_products__train, products, how='inner', on='product_id')
train.head()
train['product_name'].value_counts()
```
### How often are bananas included in a customer's next order?
There are [three sets of data](https://gist.github.com/jeremystan/c3b39d947d9b88b3ccff3147dbcf6c6b):
> "prior": orders prior to that users most recent order (3.2m orders)
"train": training data supplied to participants (131k orders)
"test": test data reserved for machine learning competitions (75k orders)
Customers' next orders are in the "train" and "test" sets. (The "prior" set has the orders prior to the most recent orders.)
We can't use the "test" set here, because we don't have its labels (only Kaggle & Instacart have them), so we don't know what products were bought in the "test" set orders.
So, we'll use the "train" set. It currently has one row per product_id and multiple rows per order_id.
But we don't want that. Instead we want one row per order_id, with a binary column: "Did the order include bananas?"
Let's wrangle!
```
train.head()
# Goal: a boolean column per row: did they order bananas or not?
train['bananas'] = train['product_name'] == 'Banana'
train.head()
# How often are bananas included in the customer's next order?
train['bananas'].value_counts(normalize=True)
# Let's use group by to simplify our data down to bananas only
# .any() -> if there is at least one True boolean, it will return True
train_wrangled = train.groupby('order_id')['bananas'].any().reset_index()
train_wrangled.head()
train_wrangled['bananas'].value_counts(normalize=True)
# What is the most common hour of the day that bananas are ordered?
# What about the common hour for any order?
import numpy as np
train[train['bananas']]
# What other table do we need? Orders:
orders = pd.read_csv('orders.csv')
orders.head()
# What hour of the day is most typical to place an order?
orders['order_hour_of_day'].value_counts(normalize=True)
```
# Join relational data for supervised machine learning
## Overview
Often, you’ll need to join data from multiple relational tables before you’re ready to fit your models.
### Which customers have ordered this product before?
- Customers are identified by `user_id`
- Products are identified by `product_id`
Do we have a table with both of these IDs? (If not, how can we combine this information?)
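For intuition, here's a minimal made-up example of combining two tables through a shared key with `pd.merge` (the IDs below are invented):

```python
import pandas as pd

# neither toy table has both user_id and product_id,
# but they share order_id, so a merge combines the information
toy_orders = pd.DataFrame({'order_id': [10, 11], 'user_id': [1, 2]})
toy_order_products = pd.DataFrame({'order_id': [10, 10, 11],
                                   'product_id': [24852, 555, 24852]})
joined = pd.merge(toy_orders, toy_order_products, on='order_id', how='inner')
print(joined[['user_id', 'product_id']])
```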
```
banana_order_id = train[train['bananas']].order_id
banana_order_id
orders[orders['order_id'].isin(banana_order_id)]
# Most typical hour for people who bought bananas
banana_orders = orders[orders['order_id'].isin(banana_order_id)]
banana_orders['order_hour_of_day'].value_counts(normalize=True)
banana_orders.head()
# Double check that we did things right, and that order 1492625 has a banana
train[train['order_id'] == 1492625]
```
## Follow Along
### How can we get a subset of data, just for these customers?
We want *all* the orders from customers who have *ever* bought bananas.
(And *none* of the orders from customers who have *never* bought bananas.)
```
# We did this above;
# note that it wasn't a direct merge,
# but instead reused past merges and sliced a DataFrame by id
```
### What features can we engineer? We want to predict, will these customers reorder bananas on their next order?
```
# Is there a difference in average order size in banana orders vs. not?
# We know that it's roughly 10 items/order in general
# Number of items/order could be an interesting feature
orders.head()
train.head()
product_order_counts = train.groupby(['order_id']).count()['product_id']
product_order_counts
product_order_counts.describe()
import seaborn as sns
sns.distplot(product_order_counts);
# Number of items in an order where bananas were purchased
banana_orders_counts = train[train['order_id'].isin(banana_order_id)].groupby(
['order_id']).count()['product_id']
banana_orders_counts.mean()
banana_orders_counts.describe()
sns.distplot(banana_orders_counts);
```
## Challenge
**Continue to clean and explore your data.** Can you **engineer features** to help predict your target? For the evaluation metric you chose, what score would you get just by guessing? Can you **make a fast, first model** that beats guessing?
We recommend that you use your portfolio project dataset for all assignments this sprint. But if you aren't ready yet, or you want more practice, then use the New York City property sales dataset today. Follow the instructions in the assignment notebook. [Here's a video walkthrough](https://youtu.be/pPWFw8UtBVg?t=584) you can refer to if you get stuck or want hints!
## HTML Essentials
```
%%html
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>This is a Heading</h1>
<p>This is a paragraph.</p>
<a href="https://www.w3schools.com/">This is a link to tutorials</a>
</body>
</html>
```
## CSS Classes
```
%%html
<p class="center medium">This paragraph refers to two classes.</p>
```
### Simple Beautiful Soup Example
```
html_string = r"""<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>This is a Heading</h1>
<p>This is a paragraph.</p>
<a href="https://www.w3schools.com/">This is a link to tutorials</a>
</body>
</html>"""
from bs4 import BeautifulSoup
html_soup = BeautifulSoup(html_string, 'html.parser')
html_soup.head.text
html_soup.a.get('href')
```
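Note that attribute access like `html_soup.a` returns only the *first* matching tag; `find_all` returns every match. A small self-contained sketch (the HTML below is made up):

```python
from bs4 import BeautifulSoup

snippet = '<body><a href="https://example.com/a">A</a><a href="https://example.com/b">B</a></body>'
soup = BeautifulSoup(snippet, 'html.parser')
# collect the href of every <a> tag, not just the first
links = [a.get('href') for a in soup.find_all('a')]
print(links)  # ['https://example.com/a', 'https://example.com/b']
```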
### Requests & Responses
```
import requests
response = requests.get('https://www.bundes-telefonbuch.de/nuernberg/firma/karosseriefachbetrieb-hofer-tb1150222')
print(response.status_code)
print(response.text)
```
### Beautiful Soup
```
entry_soup = BeautifulSoup(response.text,'html.parser')
name = entry_soup.h1.text.strip()
print(name)
address_div = entry_soup.find('div', {'class':'detail-address'})
print(address_div.text)
address_div.stripped_strings
address_parts = list(address_div.stripped_strings)
print(address_parts)
address = ' '.join(address_parts)
print(address)
email = entry_soup.find('a',{'class':'detail-email'}).get('href')
print(email)
website = entry_soup.find('a',{'class':'detail-homepage'}).get('href')
print(website)
tel = entry_soup.find('span', {'class':'detail-phone'}).text.strip()
fax = entry_soup.find('span', {'class':'detail-fax'}).text.strip()
print(tel, fax)
```
## Create Scrapy Project
```
scrapy startproject yellow_pages
cd yellow_pages
scrapy genspider telefonbuch bundes-telefonbuch.de
```
## Scrapy Shell for debugging
```
scrapy shell 'www.bundes-telefonbuch.de'
```
### Scrapy: Fill Out Forms
```
fetch(scrapy.FormRequest.from_response(response, formid='searchForm', formdata={'what':'auto', 'where':'Muenchen'}))
response.css('div.companyBox')
results_soup = BeautifulSoup(response.text,'html.parser')
results_soup.prettify()
```
### Extract entries
```
entry = results_soup.find('div',{'class':'companyBox'})
entries = results_soup.find_all('div', {'class':'companyBox'})
len(entries)
```
### Paging
```
paging = results_soup.find('div',{'id':'pagination'})
pages = paging.find_all('a')
len(pages)
next_page = None
for page in pages:
    if page.text.strip() == '∨':
        next_page = page.get('href')
        break
start_urls = ['http://bundes-telefonbuch.de/']
if next_page:
    next_page = start_urls[0] + next_page
print(next_page)
```
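One caveat with building `next_page` by string concatenation: it breaks if the href is absolute or already starts with a slash. The standard library's `urljoin` handles both cases (a sketch with a made-up path):

```python
from urllib.parse import urljoin

base = 'http://bundes-telefonbuch.de/'
# urljoin resolves relative hrefs against the base correctly
print(urljoin(base, 'suche?page=2'))   # http://bundes-telefonbuch.de/suche?page=2
print(urljoin(base, '/suche?page=2'))  # http://bundes-telefonbuch.de/suche?page=2
```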
### Get the details of the entry
```
entry_url = entry.a.get('href')
fetch(scrapy.http.Request(start_urls[0] + entry_url))
entry_soup = BeautifulSoup(response.text,'html.parser')
name = entry_soup.h1.text.strip()
print(name)
address_div = entry_soup.find('div', {'class':'detail-address'})
print(address_div.text)
print(address_div.stripped_strings)
address_parts = list(address_div.stripped_strings)
print(address_parts)
address = ' '.join(address_parts)
print(address)
email = entry_soup.find('a',{'class':'detail-email'}).get('href')
print(email)
website = entry_soup.find('a',{'class':'detail-homepage'}).get('href')
print(website)
tel = entry_soup.find('span', {'class':'detail-phone'}).text.strip()
fax = entry_soup.find('span', {'class':'detail-fax'}).text.strip()
print(tel, fax)
```
## Run crawler
```
scrapy crawl telefonbuch -a sector="Auto" -a city="Muenchen" -o "Auto_Muenchen.csv"
```
# Time series forecasting with ARIMA
In this notebook, we demonstrate how to:
- prepare time series data for training an ARIMA time series forecasting model
- implement a simple ARIMA model to forecast the next HORIZON steps ahead (time *t+1* through *t+HORIZON*) in the time series
- evaluate the model
The data in this example is taken from the GEFCom2014 forecasting competition<sup>1</sup>. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014. The task is to forecast future values of electricity load. In this example, we show how to forecast several steps ahead, using historical load data only.
<sup>1</sup>Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016.
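The `mape` helper imported below from `common.utils` isn't shown in this notebook. A typical implementation (an assumption about that helper, not its actual source) looks like:

```python
import numpy as np

def mape(predictions, actuals):
    """Mean absolute percentage error, as a fraction (multiply by 100 for %)."""
    predictions, actuals = np.asarray(predictions), np.asarray(actuals)
    return np.mean(np.abs(predictions - actuals) / np.abs(actuals))

print(mape([110, 90], [100, 100]))  # 0.1
```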
```
import os
import warnings
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
import math
from pandas.plotting import autocorrelation_plot
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.preprocessing import MinMaxScaler
from common.utils import load_data, mape
from IPython.display import Image
%matplotlib inline
pd.options.display.float_format = '{:,.2f}'.format
np.set_printoptions(precision=2)
warnings.filterwarnings("ignore") # specify to ignore warning messages
energy = load_data('./data')[['load']]
energy.head(10)
energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
plt.xlabel('timestamp', fontsize=12)
plt.ylabel('load', fontsize=12)
plt.show()
train_start_dt = '2014-11-01 00:00:00'
test_start_dt = '2014-12-30 00:00:00'
energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
.join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
.plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
plt.xlabel('timestamp', fontsize=12)
plt.ylabel('load', fontsize=12)
plt.show()
energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'})
energy[test_start_dt:][['load']].rename(columns={'load':'test'})
train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
test = energy.copy()[energy.index >= test_start_dt][['load']]
print('Training data shape: ', train.shape)
print('Test data shape: ', test.shape)
scaler = MinMaxScaler()
train['load'] = scaler.fit_transform(train)
train.head(10)
energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
plt.show()
test['load'] = scaler.transform(test)
test.head()
# Specify the number of steps to forecast ahead
HORIZON = 3
print('Forecasting horizon:', HORIZON, 'hours')
order = (4, 1, 0)
seasonal_order = (1, 1, 0, 24)
model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
results = model.fit()
print(results.summary())
test_shifted = test.copy()
for t in range(1, HORIZON):
    test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
test_shifted = test_shifted.dropna(how='any')
test_shifted.head(5)
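# Sketch of what the shift just did, on a self-contained toy series
# (hardcoding HORIZON = 3, as set above): each surviving row pairs the
# value at time t with the values at t+1 and t+2.
import pandas as pd  # already imported; repeated so this sketch stands alone
toy = pd.DataFrame({'load': [1.0, 2.0, 3.0, 4.0, 5.0]})
for k in range(1, 3):
    toy['load+' + str(k)] = toy['load'].shift(-k)
print(toy.dropna(how='any'))  # 3 rows: each t pairs with t+1 and t+2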
%%time
training_window = 720 # dedicate 30 days (720 hours) for training
train_ts = train['load']
test_ts = test_shifted
history = [x for x in train_ts]
history = history[(-training_window):]
predictions = list()
order = (2, 1, 0)
seasonal_order = (1, 1, 0, 24)
for t in range(test_ts.shape[0]):
    model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
    model_fit = model.fit()
    yhat = model_fit.forecast(steps=HORIZON)
    predictions.append(yhat)
    obs = list(test_ts.iloc[t])
    # move the training window
    history.append(obs[0])
    history.pop(0)
    print(test_ts.index[t])
    print(t+1, ': predicted =', yhat, 'expected =', obs)
eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
eval_df.head()
if(HORIZON > 1):
    eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
    print(eval_df.groupby('h')['APE'].mean())
print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
if(HORIZON == 1):
    ## Plotting single step forecast
    eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
else:
    ## Plotting multi step forecast
    plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
    for t in range(1, HORIZON+1):
        plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
    fig = plt.figure(figsize=(15, 8))
    ax = fig.add_subplot(111)
    ax.plot(plot_df['timestamp'], plot_df['actual'], color='r', linewidth=4.0)
    for t in range(1, HORIZON+1):
        x = plot_df['timestamp'][(t-1):]
        y = plot_df['t+'+str(t)][0:len(x)]
        ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t))
    ax.legend(loc='best')
plt.xlabel('timestamp', fontsize=12)
plt.ylabel('load', fontsize=12)
plt.show()
```
# Import Python libraries.
```
#Importing required libraries
!sudo pip install --upgrade xgboost
import numpy as np
import pandas as pd
# import matplotlib.pyplot as plt
# %matplotlib inline
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier
from sklearn.metrics import classification_report
from sklearn.preprocessing import StandardScaler
#importing required objects and instantiating them
# log_reg = LogisticRegression()
lab = LabelEncoder()
# mpld3.enable_notebook()
# plt.rcParams['figure.figsize'] = [15, 10]
xgbc = XGBClassifier(use_label_encoder=False)
scaler = StandardScaler()
```
# Import Training Dataset
```
#Obtaining the data
dataset = pd.read_csv("Training Data.csv")
```
# Check what the train dataset looks like.
```
dataset.tail()
X = dataset.iloc[:,[1,2,3,4,5,6,7,8,10,11]]
y = dataset.risk_flag
X.tail()
y.tail()
```
# Clean the data.
```
# Encoding string variables
X['married'] = lab.fit_transform(X['married'])
X['house_ownership'] = lab.fit_transform(X['house_ownership'])
X['car_ownership'] = lab.fit_transform(X['car_ownership'])
X['profession'] = lab.fit_transform(X['profession'])
X['city'] = lab.fit_transform(X['city'])
X.tail()
X = pd.get_dummies(X, columns=['married', 'house_ownership', 'car_ownership', 'profession', 'city'], drop_first=True)
X.tail()
X = scaler.fit_transform(X)
print(X)
```
# Train the Model
```
# log_reg.fit(X, y)
xgbc.fit(X, y)
```
# Import Testing Dataset
```
#Obtaining the test data
dataset_test = pd.read_csv("Test Data.csv")
dataset_test.tail()
X_test = dataset_test.iloc[:,[1,2,3,4,5,6,7,8,10,11]]
X_test.tail()
```
# Clean the Data.
```
# Encoding string variables (caveat: ideally the encoders fit on the training data would be reused here, not refit)
X_test['married'] = lab.fit_transform(X_test['married'])
X_test['house_ownership'] = lab.fit_transform(X_test['house_ownership'])
X_test['car_ownership'] = lab.fit_transform(X_test['car_ownership'])
X_test['profession'] = lab.fit_transform(X_test['profession'])
X_test['city'] = lab.fit_transform(X_test['city'])
X_test.tail()
X_test = pd.get_dummies(X_test, columns=['married', 'house_ownership', 'car_ownership', 'profession', 'city'], drop_first=True)
X_test.tail()
X_test = scaler.transform(X_test)  # apply the scaler fit on the training data; don't refit on the test set
print(X_test)
```
# Test your Model
```
# y_test = log_reg.predict(X_test)
# print(y_test)
y_test_xgbc = xgbc.predict(X_test)
y_pred = xgbc.predict(X)
# y_pred_lr = log_reg.predict(X)
```
# Generating the Output File
```
output = pd.read_csv("Test Data.csv")
output['risk_flag'] = y_test_xgbc
output.tail()
output = output.iloc[:,[-1]]
output.tail()
output.to_csv("Test Data with Predictions3.csv")
```
# Evaluating the Model
```
# log_reg.score(X, y)
xgbc.score(X, y)
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(y, y_pred)
print('ROC AUC: %f' % auc)
# auc = roc_auc_score(y, y_pred_lr)
# print('ROC AUC: %f' % auc)
print(classification_report(y, y_pred))
# print(classification_report(y, y_pred_lr))
```
### Data of Internet Service Providers in Each Neighborhood of Pittsburgh
The dataset that I chose to interpret contains information on which service providers operate in Pittsburgh, which neighborhoods they are in, and how many are in each neighborhood. Of course, the data does not give us all of that right away, which is where our data interpretation skills come in. Within this notebook, I was able to get rid of data that I would not be using for the final result, sort the data by providers per neighborhood, and then calculate and sort by providers per acre of each neighborhood to find the best overall coverage.
```
import pandas as pd
import numpy as np
import geopandas
ispdata = pd.read_csv("pittsburghispsbyblock.csv",
index_col="Neighborhood") # use the column named _id as the row index
hooddata = pd.read_csv("RAC223Neighborhoods_.csv")
```
The first two codeblocks are simple imports of the Python packages I am using and the actual dataset that I am getting my data from. In this first block below, I first sort the dataset by each neighborhood alphabetically. Then, I use the drop method to remove all of the unnecessary data columns from my set.
```
ispdata = ispdata.sort_values(by=["Neighborhood"])
thislist = ["GEOID", "LogRecNo", "Provider_Id", "FRN", "ProviderName", "DBAName", "HocoNum", "StateAbbr", "BlockCode", "TechCode", "Consumer", "MaxAdDown", "MaxAdUp", "Business", "MaxCIRDown", "MaxCIRUp", "HocoFinal"]
ispdata = ispdata.drop(axis=1,labels=thislist)
ispdata.groupby("Neighborhood").count()
```
For our project, we decided we only wanted to look at the neighborhoods that have Verizon internet. We decided upon this because we all use and trust Verizon internet and, to us, it is the best service provider due to coverage and 5G capabilities. In the code block below, I used code that we practiced in our weekly exercises to isolate all of the rows with "Verizon Communications Inc." and make a new dataset with only the Verizon neighborhoods.
```
query_mask = ispdata['HoldingCompanyName'] == 'Verizon Communications Inc.'
verizon = ispdata[query_mask]
verizon
```
Here I simply take the Verizon dataset and group it to count how many Verizon entries are in each neighborhood
```
verizon = verizon.groupby("Neighborhood").count()
verizon
```
Next, I take the other dataset with the neighborhood data, keep each neighborhood's acreage, and sort it by neighborhood name
```
hooddata = hooddata[['hood', 'acres']]
hooddata = hooddata.sort_values(by=['hood'])
hooddata
```
Next, this code block merges the two datasets of ISPs per neighborhood and neighborhood acreage into one dataset that computes the ISPs per acre for every neighborhood in the set. I used a for loop to calculate the ISPs per acre for every row in the dataset, with each row being an individual neighborhood. This gives us the final dataset with each neighborhood's acreage, ISPs per neighborhood, and ISPs per acre.
```
merged = pd.merge(hooddata, verizon, left_on="hood", right_on="Neighborhood")
merged.at[60, "acres"] = 775.68
newarray = []
for x in range(90):
    newarray.append(merged.iloc[x, 2]/merged.iloc[x, 1])
merged["ISPs-Per-Acre"] = newarray
merged
```
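As an aside, the row-by-row loop above can be replaced by a single vectorized division, which pandas applies element-wise (shown on a made-up mini-frame rather than rerunning it on `merged`):

```python
import pandas as pd

toy = pd.DataFrame({'hood': ['A', 'B'],
                    'acres': [100.0, 50.0],
                    'isp_count': [10, 20]})
# one element-wise division instead of looping over every row
toy['ISPs-Per-Acre'] = toy['isp_count'] / toy['acres']
print(toy['ISPs-Per-Acre'].tolist())  # [0.1, 0.4]
```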
Here, this dataset is made to sort the data specifically for graphing. It is sorted by ISPs per acre from least to greatest and graphed as a bar chart.
```
graphable = merged.drop(columns = ["HoldingCompanyName", "acres"])
graphable = graphable.sort_values(by=["ISPs-Per-Acre"])
graphable.plot(x="hood", y="ISPs-Per-Acre", kind="bar", figsize=(15,15))
```
Finally, I used geopandas to make a map that shows the ISPs per acre for each neighborhood. This lets you see the density of each neighborhood's ISP data based on how big the actual neighborhood is. It is very obvious in both of these graphs that Marshall-Shadeland is the best neighborhood for Verizon internet per acre.
```
neighborhoods = geopandas.read_file("Neighborhood/Neighborhoods_.shp") # read in the shapefile
sign_map = neighborhoods.merge(graphable, how='left', left_on='hood', right_on='hood')
sign_map.plot(column='ISPs-Per-Acre', # set the data to be used for coloring
cmap='Blues', # choose a color palette
edgecolor="white", # outline the districts in white
legend=True, # show the legend
legend_kwds={'label': "ISPs-Per-Acre"}, # label the legend
figsize=(15, 10), # set the size
# missing_kwds={"color": "lightgrey"} # set districts with no data to gray
)
```
This block simply converts the data I interpreted into a form that can be combined with the other two datasets from the project.
```
combiner=[]
for value in graphable["ISPs-Per-Acre"]:
    combiner.append((value/graphable.iloc[89,1])*100)
graphable["Scaled"] = combiner
ISPDrop = graphable.drop(columns=['ISPs-Per-Acre'])
ISPDrop = ISPDrop.sort_values(by=["hood"])
print(ISPDrop)
```
The final answer for my dataset is that Marshall-Shadeland has the most ISPs per acre out of all the neighborhoods I was dealing with. This is shown in the calculations and the two provided graphs.
# Multi-asset option pricing - Monte Carlo - exchange option
```
import sys
sys.path.append('..')
from optionpricer import payoff
from optionpricer import option
from optionpricer import bspde
from optionpricer import analytics
from optionpricer import parameter as prmtr
from optionpricer import path
from optionpricer import generator
from optionpricer import montecarlo
import numpy as np
from scipy import linalg
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
# option and market properties
expiry = 1.0/12.0 # time until expiry (annualized)
r0 = 0.094 # interest rate (as decimal, not percentage)
# properties of the underlyings
sig=np.array([0.05,0.09]) # volatility (annualized) for underlyings [vol stock 1, vol stock 2]
rho = 0.8 # correlation between the two underlyings
correlation_matrix = np.array([[1.0,rho],[rho,1.0]])
# the spot values we will want to price the option at for the two underlyings
spot0 = np.linspace(30.0,70.0,30)
spot1 = np.linspace(40.0,60.0,20)
# create a meshgrid to easily run over all combinations of spots, and for ease of plotting later
SPOT0,SPOT1 = np.meshgrid(spot0,spot1)
# use r0, and the volatilities (elements of sig), to make SimpleParam objects containing these values
r_param = prmtr.SimpleParam(r0)
sig0_p = prmtr.SimpleParam(sig[0])
sig1_p = prmtr.SimpleParam(sig[1])
# we can use the correlation matrix of the underlyings to construct the covariance matrix
covars = correlation_matrix*np.outer(sig,sig)
# We can then Cholesky decompose the covariance matrix
L = linalg.cholesky(covars,lower=True)
# we obtain a lower triangular matrix which can be used to generate movements of the underlying
# which obey the covariance (/correlation) we defined above
print(L)
print(np.dot(L,L.T))
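# Quick standalone sanity sketch (duplicating the values above so it can
# run on its own): draws built as L @ z, with z i.i.d. standard normal,
# have sample covariance close to the covariance matrix we decomposed.
import numpy as np  # already imported above; repeated so this stands alone
_sig = np.array([0.05, 0.09])
_cov = np.array([[1.0, 0.8], [0.8, 1.0]]) * np.outer(_sig, _sig)
_L = np.linalg.cholesky(_cov)  # lower-triangular factor
_draws = _L @ np.random.default_rng(0).standard_normal((2, 200000))
print(np.cov(_draws))  # entries approach _cov as the sample grows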
# create SimpleArrayParam objects (each stores an array which is constant in time) using covars and L
covariance_param = prmtr.SimpleArrayParam(covars)
cholesky_param = prmtr.SimpleArrayParam(L)
# define the Spread option - with strike of 0 to make it an exchange option
exchange_po = payoff.SpreadPayOff(0.0)
# also valid for the payoff of an exchange option: exchange_po = payoff.ExchangePayOff()
exchange_option = option.VanillaOption(exchange_po,expiry)
# define the random generator for the problem - here normally distributed log returns
gen_norm = generator.Normal()
# decorate the generator, making it an antithetic generator for variance reduction
gen_norm_antith = generator.Antithetic(gen_norm)
# Define a multi-asset Monte Carlo pricer
mc_pricer = montecarlo.MAMonteCarlo(exchange_option,gen_norm_antith)
# initialize arrays for prices
mc_prices = np.zeros_like(SPOT0)
# we also initialize an array of option prices using the analytic Margrabe price of the exchange option
magrabe_prices = np.zeros_like(SPOT0)
# loop over spots, and calculate the price of the option
for ind0 in range(SPOT0.shape[0]):
    for ind1 in range(SPOT0.shape[1]):
        s = np.array([SPOT0[ind0,ind1],SPOT1[ind0,ind1]])
        mc_prices[ind0,ind1] = mc_pricer.solve_price(s,r_param,covariance_param,cholesky_param,eps_tol=0.0001)
        magrabe_prices[ind0,ind1] = analytics.margrabe_option_price(s,expiry,covars)
# set up plotting parameters for nice to view plots
sns.set()
mpl.rcParams['lines.linewidth'] = 2.0
mpl.rcParams['font.weight'] = 'bold'
mpl.rcParams['axes.labelweight'] = 'bold'
mpl.rcParams['axes.titlesize'] = 12
mpl.rcParams['axes.titleweight'] = 'bold'
mpl.rcParams['font.size'] = 12
mpl.rcParams['legend.frameon'] = False
mpl.rcParams['figure.figsize'] = [15,10]
# we will plot the Monte Carlo price, the Margrabe price, and the difference between the two (the error)
f, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True)
# calculate values of min and max expected values
# use to set colormap max/min values to ensure they're the same for both plots of price
vmin_=min(np.amin(mc_prices),np.amin(magrabe_prices))
vmax_=max(np.amax(mc_prices),np.amax(magrabe_prices))
# subplot of Monte Carlo
im1 = ax1.pcolormesh(spot0,spot1,mc_prices,vmin=vmin_,vmax=vmax_)
plt.colorbar(im1,ax=ax1)
# subplot of Margrabe price
im2 = ax2.pcolormesh(spot0,spot1,magrabe_prices,vmin=vmin_,vmax=vmax_)
plt.colorbar(im2,ax=ax2)
# subplot of error
im3 = ax3.pcolormesh(spot0,spot1,np.abs(magrabe_prices-mc_prices))
plt.colorbar(im3,ax=ax3)
ax3.set_xlabel('spot 0')
# set titles and y lables of subplots
titles = ['Monte Carlo','Margrabe Price','|Margrabe-MC|']
for i,ax in enumerate([ax1,ax2,ax3]):
    ax.set_ylabel('spot 1')
    ax.set_title(titles[i])
plt.show()
```
# Integrating Functional Data
So far most of our work has been examining anatomical images - the reason being that they provide a nice visual way of exploring the effects of data manipulation, and visualization is easy. In practice, you will most likely not analyze anatomical data using <code>nilearn</code>, since there are other tools that are better suited for that kind of analysis (FreeSurfer, Connectome Workbench, Mindboggle, etc...).
In this notebook we'll finally start working with functional MR data - the modality of interest in this workshop. First we'll cover some basics about how the data is organized (similar to T1s but slightly more complex), and then how we can integrate our anatomical and functional data together using tools provided by <code>nilearn</code>
Functional data consists of full 3D brain volumes that are *sampled* at multiple time points. Therefore you have a sequence of 3D brain volumes; stepping through the sequence is stepping through time, and time is therefore our 4th dimension! Here's a visualization to make this concept more clear:
<img src="./static/images/4D_array.png" alt="Drawing" align="middle" width="500px"/>
Each index along the 4th dimension (called TR, for "Repetition Time", or sample) is a full 3D scan of the brain. Pulling out volumes from 4-dimensional images is similar to doing so for 3-dimensional images, except you're now dealing with:
<code> img.slicer[x,y,z,time] </code>!
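The same idea in plain NumPy, with an arbitrary made-up shape, may make the 4th axis concrete:

```python
import numpy as np

# fake 4D "scan": an 8x8x8 volume sampled at 10 timepoints
data = np.zeros((8, 8, 8, 10))
vol5 = data[:, :, :, 4]        # the 5th volume: one full 3D brain
voxel_ts = data[2, 3, 4, :]    # one voxel's time-series across all TRs
print(vol5.shape, voxel_ts.shape)  # (8, 8, 8) (10,)
```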
Let's try a couple of examples to familiarize ourselves with dealing with 4D images. But first, let's pull some functional data using PyBIDS!
```
# imports assumed from earlier in the workshop: PyBIDS for querying the
# dataset, plus the nilearn aliases used throughout this notebook
from bids import BIDSLayout
from nilearn import image as img
from nilearn import plotting as plot

fmriprep_dir = '../data/ds000030/derivatives/fmriprep/'
layout=BIDSLayout(fmriprep_dir, validate=False)
T1w_files = layout.get(subject='10788', datatype='anat', suffix='preproc')
brainmask_files = layout.get(subject='10788', datatype='anat', suffix='brainmask')
func_files = layout.get(subject='10788', datatype='func', suffix='preproc')
func_mask_files = layout.get(subject='10788', datatype='func', suffix='brainmask')
```
We'll be using functional files in MNI space rather than T1w space. Recall that MNI space data is data that has been warped into standard space. These are the files you would typically use for a group-level functional imaging analysis!
First, take a look at the shape of the functional image:
Notice that the Functional MR scan contains *4 dimensions*. This is in the form of $(x,y,z,t)$, where $t$ is time.
We can use slicer as usual where instead of using 3 dimensions we use 4.
For example:
<code> func.slicer[x,y,z] </code>
vs.
<code> func.slicer[x,y,z,t] </code>
### Exercise
Try pulling out the 5th TR and visualizing it using <code>plot.plot_epi</code>. <code>plot_epi</code> is exactly the same as <code>plot_anat</code> except it displays using colors that make more sense for functional images...
```
#Pull the 5th TR
func_vol5 = func_mni_img.slicer[??,??,??,??]
plot.plot_epi(??)
```
## What fMRI actually represents
We've represented fMRI as a snapshot of MR signal over multiple timepoints. This is a useful way of understanding the organization of fMRI, however it isn't typically how we think about the data when we analyze fMRI data. fMRI is typically thought of as **time-series** data. We can think of each voxel (x,y,z coordinate) as having a time-series of length T. The length T represents the number of volumes/timepoints in the data. Let's pick an example voxel and examine its time-series using <code>func_mni_img.slicer</code>:
```
#Pick one voxel at coordinate (60,45,88)
```
As you can see here, we pulled one voxel that contains 152 timepoints. For plotting purposes a 4-dimensional array is difficult to deal with, so we'll flatten it to 1 dimension (time) for convenience:
Here we've pulled out a voxel at a specific coordinate at every single time-point. This voxel has a single value for each timepoint and is therefore a time-series. We can visualize this time-series signal using a standard Python plotting library. We won't go into too much detail about Python plotting; the intuition about what the data looks like is what matters most:
```
import matplotlib.pyplot as plt
```
## Resampling
Recall from our introductory exploration of neuroimaging data:
- T1 images are typically composed of voxels that are 1x1x1 in dimension
- Functional images are typically composed of voxels that are 4x4x4 in dimension
If we'd like to overlay our functional on top of our T1 (for visualization purposes, or analyses), then we need to match the size of the voxels!
Think of this like trying to overlay a 10x10 JPEG and a 20x20 JPEG on top of each other. To get perfect overlay we need to resize (or more accurately *resample*) our JPEGs to match!
**Note**:
Resampling is a method of interpolating between data points. When we stretch an image we need to figure out what goes in the spaces created by stretching - resampling does just that. In fact, resizing any type of image is actually just resampling it to new dimensions.
Let's resample some MRI data using nilearn.
**Goal**: Match the dimensions of the structural image to that of the functional image
```
#Files we'll be using (Notice that we're using _space-MNI..._ which means they are normalized brains)
```
Let's take a look at the sizes of both our functional and structural files:
Resampling in nilearn is as easy as telling it which image you want to sample and what the target image is.
Structure of function:
img.resample_to_img(source_img,target_img,interpolation)
- source_img = the image you want to sample
- target_img = the image you wish to *resample to*
- interpolation = the method of interpolation
A note on **interpolation**
nilearn supports 3 types of interpolation; the one you'll use depends on the type of data you're resampling!
1. **continuous** - Interpolates while maintaining some edge features. Ideal for structural images, where edges are well-defined. Uses $3^\text{rd}$-order spline interpolation.
2. **linear (default)** - Interpolates using a combination of neighbouring voxels and will blur edges. Uses trilinear interpolation.
3. **nearest** - Matches the value of the closest voxel (majority vote from neighbours). This is ideal for masks, which are binary, since it will preserve the 0's and 1's and will not produce in-between values (ex: 0.342). Also ideal for numeric labels where values are 0,1,2,3... (parcellations). Uses nearest-neighbours interpolation with majority vote.
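A tiny 1D sketch of why the interpolation choice matters for masks (using `scipy.ndimage.zoom` as a stand-in for nilearn's resampling, purely for illustration):

```python
import numpy as np
from scipy import ndimage

# a binary "mask": nearest-neighbour keeps it binary, linear blurs the edge
mask = np.array([0.0, 0.0, 1.0, 1.0])
nearest = ndimage.zoom(mask, 2, order=0)  # only 0s and 1s survive
linear = ndimage.zoom(mask, 2, order=1)   # in-between values appear at the edge
print(nearest)
print(linear)
```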
```
#Try playing around with methods of interpolation
#options: 'linear','continuous','nearest'
import matplotlib.animation
from IPython.display import HTML
%%capture
%matplotlib inline
#Resample the T1 to the size of the functional image!
resamp_t1 = img.resample_to_img(source_img=T1_mni_img, target_img=func_mni_img, interpolation='continuous')
fig, ax = plt.subplots()
def animate(image):
    plot.plot_anat(image, figure=fig, cut_coords=(0,0,0))
    ax.set_facecolor('black')
ani = matplotlib.animation.FuncAnimation(fig, animate, frames=[resamp_t1, T1_mni_img])
#change the frames to look at the functional mask over the resampled T1
# ani = matplotlib.animation.FuncAnimation(fig, animate, frames=[resamp_t1, func])
# Display animation
HTML(ani.to_jshtml())
```
## **Exercise**
Using the **native** T1 and **T1w**-space resting state functional images, do the following:
1. Resample the native T1 image to resting state size
2. Replace the brain in the T1 image with the first frame of the resting state brain
```
#Files we'll need
####STRUCTURAL FILES
#T1 image
ex_t1 = img.load_img(T1w_files[0].path)
#mask file
ex_t1_bm = img.load_img(brainmask_files[0].path)
####FUNCTIONAL FILES
#This is the pre-processed resting state data that hasn't been standardized
ex_func = img.load_img(func_files[1].path)
#This is the associated mask for the resting state image.
ex_func_bm = img.load_img(func_mask_files[1].path)
```
The first step we need to do is to make sure the dimensions for our T1 image and resting state image match each other:
```
#Resample the T1 to the size of the functional image!
resamp_t1 = img.resample_to_img(source_img=??, target_img=??, interpolation='continuous')
plot.plot_anat(??)
print(resamp_t1.shape)
```
Next we want to make sure that the brain mask for the T1 is also the same dimensions as the functional image. This is exactly the same as above, except we use the brain mask as the source.
What kind of interpolation should we use for masks?
```
resamp_bm = img.??(??)
#Plot the image
??
print(resamp_bm.shape)
```
Once we've resampled both our T1 and our brain mask, we want to remove the brain from the T1 image so that we can replace it with the functional image instead. Remember, to do this we need to:
1. Invert the T1 mask
2. Apply the inverted mask to the brain
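The two steps above are just element-wise arithmetic on arrays. Here is a plain-NumPy sketch with made-up toy arrays (in nilearn you would express the same arithmetic through `img.math_img`):

```python
import numpy as np

# Toy 3-D "T1" volume and binary brain mask (1 = brain, 0 = background)
t1 = np.arange(27, dtype=float).reshape(3, 3, 3)
brain_mask = np.zeros((3, 3, 3))
brain_mask[1, 1, 1] = 1  # a single "brain" voxel for illustration

# Step 1: invert the mask, so brain voxels become 0 and everything else 1
inverted_mask = 1 - brain_mask

# Step 2: apply the inverted mask — brain voxels are zeroed out,
# all other voxels keep their original intensity
t1_nobrain = t1 * inverted_mask
```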
```
inverted_bm_t1 = img.math_img(??,a=resamp_bm)
plot.plot_anat(inverted_bm_t1)
```
Now apply the mask:
```
resamp_t1_nobrain = img.??(??)
plot.plot_anat(resamp_t1_nobrain)
```
We now have a skull missing the structural T1 brain. The final step is to stick the brain from the functional image into the now-brainless head. First we need to remove the surrounding signal from the functional image.
Since a functional image is 4-dimensional, we'll need to pull out the first volume to work with. This is because the structural image is 3-dimensional, and operations will fail if we try to mix 3D and 4D data.
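In plain NumPy terms, pulling the first volume is just indexing the last (time) axis. A toy sketch with a made-up array shape:

```python
import numpy as np

# A toy 4-D fMRI array: (x, y, z, time) = (4, 4, 4, 10)
func_4d = np.random.rand(4, 4, 4, 10)

# Grab the first volume: index 0 along the time axis drops that dimension,
# leaving a 3-D volume we can combine with structural data
first_vol = func_4d[:, :, :, 0]

print(func_4d.shape, first_vol.shape)  # (4, 4, 4, 10) (4, 4, 4)
```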
```
#Let's visualize the first volume of the functional image:
first_vol = ex_func.slicer[??,??,??,??]
plot.plot_epi(first_vol)
```
As shown in the figure above, the image has some "signal" outside of the brain. In order to place this within the now brainless head we made earlier, we need to mask out the functional MR data as well!
```
#Mask first_vol using ex_func_bm
masked_func = img.math_img('??', a=??, b=??)
plot.plot_epi(masked_func)
```
The final step is to stick this data into the head of the T1 data. Since the hole in the T1 data is represented as $0$'s, we can add the two images together to place the functional data into the void:
```
#Now overlay the functional image on top of the anatomical
combined_img = img.math_img(??)
plot.plot_anat(combined_img)
```
***
In this section we explored functional MR imaging. Specifically we covered:
1. How the data in an fMRI scan is organized - with the additional dimension of timepoints
2. How we can integrate functional MR images with our structural image using resampling
3. How we can just as easily manipulate functional images using <code>nilearn</code>
Now that we've covered all the basics, it's time to start working on data processing using the tools that we've picked up.
# Spindles detection
This notebook demonstrates how to use YASA to perform **single-channel sleep spindles detection**. It also shows a step-by-step description of the detection algorithm.
Please make sure to install the latest version of YASA first by typing the following line in your terminal or command prompt:
`pip install --upgrade yasa`
```
import yasa
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=1.2)
```
## Single-channel spindles detection
As an example, we load 15 seconds of N2 sleep from a single central EEG channel. The sampling rate is 200 Hz.
```
# Load data
data = np.loadtxt('data_N2_spindles_15sec_200Hz.txt')
# Define sampling frequency and time vector
sf = 200.
times = np.arange(data.size) / sf
# Plot the signal
fig, ax = plt.subplots(1, 1, figsize=(14, 4))
plt.plot(times, data, lw=1.5, color='k')
plt.xlabel('Time (seconds)')
plt.ylabel('Amplitude (uV)')
plt.xlim([times.min(), times.max()])
plt.title('N2 sleep EEG data (2 spindles)')
sns.despine()
```
We can clearly see two clean spindles in this 15-second epoch: the first starts at around 3.5 seconds and the second at around 13 seconds.
Let's try to detect these two spindles using the [yasa.spindles_detect](https://raphaelvallat.com/yasa/build/html/generated/yasa.spindles_detect.html) function. Here we're using a minimal example, but there are many other optional arguments that you can pass to this function.
```
# Apply the detection using yasa.spindles_detect
sp = yasa.spindles_detect(data, sf)
# Display the results using .summary()
sp.summary()
```
Hooray! The algorithm successfully identified the two spindles!
The output of the spindles detection is a [SpindlesResults](https://raphaelvallat.com/yasa/build/html/generated/yasa.SpindlesResults.html#yasa.SpindlesResults) class, which comes with some pre-compiled functions (also called methods). For instance, the [summary](https://raphaelvallat.com/yasa/build/html/generated/yasa.SpindlesResults.html#yasa.SpindlesResults.summary) method returns a [pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe) with all the detected spindles and their properties.
### Plot an overlay of our detected spindles
First we need to create a boolean array the same size as the data, indicating for each sample whether it is part of a spindle or not. This is done using the [get_mask](https://raphaelvallat.com/yasa/build/html/generated/yasa.SpindlesResults.html#yasa.SpindlesResults.get_mask) method:
```
# Let's get a bool vector indicating, for each sample, whether it is part of a detected spindle
mask = sp.get_mask()
mask
# Now let's plot
spindles_highlight = data * mask
spindles_highlight[spindles_highlight == 0] = np.nan
plt.figure(figsize=(14, 4))
plt.plot(times, data, 'k')
plt.plot(times, spindles_highlight, 'indianred')
plt.xlabel('Time (seconds)')
plt.ylabel('Amplitude (uV)')
plt.xlim([0, times[-1]])
plt.title('N2 sleep EEG data (2 spindles detected)')
sns.despine()
# plt.savefig('detection.png', dpi=300, bbox_inches='tight')
```
### Logging
YASA uses the [logging](https://docs.python.org/3/library/logging.html) module to selectively print relevant messages. The default level of the logger is set to "WARNING", which means that a message will only be displayed if a warning occurs. However, you can easily set this parameter to "INFO" to get some relevant infos about the detection pipeline and the data.
This can be useful to debug the detection and/or if you feel that the detection is not working well on your data.
```
# The default verbose is None which corresponds to verbose='warning'
sp = yasa.spindles_detect(data, sf, thresh={'rms': None}, verbose='info')
sp.summary()
```
### Safety check
To make sure that our spindle detection does not detect false positives, let's load a new dataset, this time without any sleep spindles. The data represent 30 seconds of N3 sleep sampled at 100 Hz, acquired on a young, healthy individual.
```
data_no_sp = np.loadtxt('data_N3_no-spindles_30sec_100Hz.txt')
sf_no_sp = 100
times_no_sp = np.arange(data_no_sp.size) / sf_no_sp
plt.figure(figsize=(14, 4))
plt.plot(times_no_sp, data_no_sp, 'k')
plt.xlim(0, times_no_sp.max())
plt.xlabel('Time (seconds)')
plt.ylabel('Voltage')
plt.xlim([times_no_sp.min(), times_no_sp.max()])
plt.title('N3 sleep EEG data (0 spindle)')
sns.despine()
sp = yasa.spindles_detect(data_no_sp, sf_no_sp)
sp
```
As hoped for, no spindles were detected in this window.
### Execution time
The total execution time on a regular laptop is 10-20 ms per 15 seconds of data sampled at 200 Hz. Scaled to a full-night recording, the computation time should not exceed 5-10 seconds per channel on any modern computer. Furthermore, it is possible to disable one or more thresholds and thus speed up the computation. Note that most of the computational cost is dominated by the bandpass filter(s).
```
%timeit -r 3 -n 100 yasa.spindles_detect(data, sf)
%timeit -r 3 -n 100 yasa.spindles_detect(data, sf, thresh={'rms': 3, 'corr': None, 'rel_pow': None})
# Line profiling
# %load_ext line_profiler
# %lprun -f yasa.spindles_detect yasa.spindles_detect(data, sf)
```
****************
## The YASA spindles algorithm: step-by-step
The YASA spindles algorithm is largely inspired by the A7 algorithm described in [Lacourse et al. 2018](https://doi.org/10.1016/j.jneumeth.2018.08.014):
> Lacourse, K., Delfrate, J., Beaudry, J., Peppard, P., Warby, S.C., 2018. A sleep spindle detection algorithm that emulates human expert spindle scoring. *J. Neurosci. Methods*. https://doi.org/10.1016/j.jneumeth.2018.08.014
The main idea of the algorithm is to compute different thresholds from the broadband-filtered signal (1 to 30Hz, $\text{EEG}_{bf}$) and the sigma-filtered signal (11 to 16 Hz, $\text{EEG}_{\sigma}$).
**There are some notable exceptions between YASA and the A7 algorithm:**
1. YASA uses 3 different thresholds (relative $\sigma$ power, [root mean square](https://en.wikipedia.org/wiki/Root_mean_square) and correlation). The A7 algorithm uses 4 thresholds (absolute and relative $\sigma$ power, covariance and correlation). Note that it is possible in YASA to disable one or more thresholds by passing ``None`` instead.
2. The windowed detection signals are resampled to the original time vector of the data using cubic interpolation, thus resulting in a pointwise detection signal (= one value at every sample). The time resolution of YASA is therefore higher than that of the A7 algorithm. This allows for more precision in detecting the beginning, end and duration of the spindles (typically, A7 = 100 ms and YASA = 10 ms).
3. The relative power in the sigma band is computed using a Short-Term Fourier Transform. The relative sigma power is not z-scored.
4. The median frequency and absolute power of each spindle are computed using a Hilbert transform.
5. YASA computes some additional spindle properties, such as the symmetry index and number of oscillations. These metrics are inspired by [Purcell et al. 2017](https://www.nature.com/articles/ncomms15930).
6. Potential sleep spindles are discarded if their duration is below 0.5 seconds or above 2 seconds. These values are respectively 0.3 and 2.5 seconds in the A7 algorithm.
7. YASA incorporates an automatic rejection of pseudo or fake events based on an [Isolation Forest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html) algorithm.
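The cubic upsampling mentioned above can be sketched with SciPy: a detection signal computed once every 100 ms is interpolated back onto the per-sample time grid (the values here are synthetic stand-ins, not YASA's actual internals):

```python
import numpy as np
from scipy.interpolate import interp1d

sf = 200.                           # sampling frequency (Hz)
times = np.arange(0, 15, 1 / sf)    # per-sample time vector (15 s -> 3000 samples)

# Windowed detection signal: one value per 100 ms step (A7-like resolution)
t_win = np.arange(0, 15, 0.1)
vals = np.sin(t_win)                # stand-in for e.g. a moving correlation

# Cubic interpolation back to the per-sample grid (YASA-like resolution)
f = interp1d(t_win, vals, kind='cubic', bounds_error=False, fill_value=0)
pointwise = f(times)

print(vals.size, pointwise.size)    # 150 windowed values -> 3000 pointwise values
```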
### Preprocessing
The raw signal is bandpass filtered to the broadband frequency range defined in the (optional) parameter `freq_broad`. The default is to use a FIR filter from 1 to 30 Hz. The filter is done using the MNE built-in [filter_data](https://martinos.org/mne/stable/generated/mne.filter.filter_data.html) function. The resulting, filtered, signal is $\text{EEG}_{bf}$.
```
from mne.filter import resample, filter_data
# Broadband (1 - 30 Hz) bandpass filter
freq_broad = (1, 30)
data_broad = filter_data(data, sf, freq_broad[0], freq_broad[1], method='fir',verbose=0)
```
### Threshold 1: Relative power in the sigma band
The first detection signal is the power in the sigma frequency range (11-16 Hz) relative to the total power in the broadband frequency (1-30 Hz). This is calculated using a [Short-Term Fourier Transform](https://en.wikipedia.org/wiki/Short-time_Fourier_transform) (STFT) on consecutive epochs of 2 seconds and with an overlap of 200 ms. The first threshold is exceeded whenever a sample has a relative power in the sigma frequency range $\geq 0.2$. In other words, it means that 20% of the signal's total power must be contained within the sigma band. The goal of this threshold is to make sure that the increase in sigma power is actually specific to the sigma frequency range and not just due to a global increase in power (e.g. caused by artefacts).
Importantly, you may want to lower this threshold if you aim to detect spindles in N3 sleep (slow-wave sleep), a sleep stage in which most of the relative spectral power is contained in the delta band (0.5 to 4 Hz).
#### More about the STFT
Because our STFT has a window of 2 seconds, it means that our frequency resolution is $1 / 2 = 0.5$ Hz. In other words, our frequency vector is *[1, 1.5, 2, ..., 29, 29.5, 30]* Hz. The power in the sigma frequency range is simply the sum, at each time point, of the power values at $f_{\sigma}=$*[11, 11.5, 12, 12.5, 13, 13.5, 14, 14.5, 15, 15.5, 16]* Hz.
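Here is a minimal, self-contained sketch of this computation using `scipy.signal.stft` on a synthetic signal (YASA's own `stft_power` helper differs in details such as windowing, normalization and interpolation). A 13 Hz burst is injected between 5 and 7 seconds, and the relative sigma power rises only there:

```python
import numpy as np
from scipy.signal import stft

sf = 200.
t = np.arange(0, 15, 1 / sf)
# Synthetic signal: a 2 Hz background wave plus a 13 Hz "spindle" from 5-7 s
sig = np.sin(2 * np.pi * 2 * t)
burst = (t >= 5) & (t <= 7)
sig[burst] += 2 * np.sin(2 * np.pi * 13 * t[burst])

# 2-s windows with a 200 ms step -> 0.5 Hz frequency resolution, as in the text
f, tt, Zxx = stft(sig, sf, nperseg=int(2 * sf), noverlap=int(2 * sf) - int(0.2 * sf))
Sxx = np.abs(Zxx) ** 2

# Relative power: sigma band (11-16 Hz) over broadband (1-30 Hz)
broad = (f >= 1) & (f <= 30)
sigma = (f >= 11) & (f <= 16)
rel_pow = Sxx[sigma].sum(0) / Sxx[broad].sum(0)
```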
```
# Compute the pointwise relative power using STFT and cubic interpolation
f, t, Sxx = yasa.main.stft_power(data_broad, sf, window=2, step=.2, band=freq_broad, norm=True, interp=True)
# Extract the relative power in the sigma band
idx_sigma = np.logical_and(f >= 11, f <= 16)
rel_pow = Sxx[idx_sigma].sum(0)
# Plot
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(14, 8), sharex=True)
plt.subplots_adjust(hspace=.25)
im = ax1.pcolormesh(t, f, Sxx, cmap='Spectral_r', vmax=0.2)
ax1.set_title('Spectrogram')
ax1.set_ylabel('Frequency (Hz)')
ax2.plot(t, rel_pow)
ax2.set_ylabel('Relative power (% $uV^2$)')
ax2.set_xlim(t[0], t[-1])
ax2.set_xlabel('Time (sec)')
ax2.axhline(0.20, ls=':', lw=2, color='indianred', label='Threshold #1')
plt.legend()
_ = ax2.set_title('Relative power in the sigma band')
```
### Threshold 2: Moving correlation
For the two remaining thresholds, we are going to need the sigma-filtered signal ($\text{EEG}_{\sigma}$). Here again, we use the MNE built-in [FIR filter](https://martinos.org/mne/stable/generated/mne.filter.filter_data.html). Note that we use a FIR filter and not an IIR filter because *"FIR filters are easier to control, are always stable, have a well-defined passband, and can be corrected to zero-phase without additional computations"* ([Widmann et al. 2015](https://doi.org/10.1016/j.jneumeth.2014.08.002)).
The default sigma bandpass filtering in YASA uses a 12 to 15 Hz zero-phase FIR filter with transition bands of 1.5 Hz on each side. The -6 dB cutoffs are therefore at 11.25 Hz and 15.75 Hz.
Please refer to the [MNE documentation](https://martinos.org/mne/stable/auto_tutorials/plot_background_filtering.html#sphx-glr-auto-tutorials-plot-background-filtering-py) for more details on filtering.
```
data_sigma = filter_data(data, sf, 12, 15, l_trans_bandwidth=1.5,
h_trans_bandwidth=1.5, method='fir', verbose=0)
# Plot the filtered signal
plt.figure(figsize=(14, 4))
plt.plot(times, data_sigma, 'k')
plt.xlabel('Time (seconds)')
plt.ylabel('Amplitude (uV)')
plt.title('$EEG_{\sigma}$ (11-16 Hz)')
_ = plt.xlim(0, times[-1])
```
Our second detection signal is calculated by taking, with a moving sliding window of 300 ms and a step of 100 ms, the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) between $\text{EEG}_{bf}$ and $\text{EEG}_{\sigma}$. According to [Lacourse et al. 2018](http://dx.doi.org/10.1016/j.jneumeth.2018.08.014):
> The current spindle detector design is unique because it uses a correlation filter between the EEG signal filtered in the sigma band and the raw EEG signal itself. The proposed design is therefore biased to detect spindles that are visible on the raw EEG signal by requiring a high correlation between raw EEG signal and the filtered sigma burst (the pattern that represents a spindle).
Once again, the values are interpolated using cubic interpolation to obtain one value at each time point. The second threshold is exceeded whenever a sample has a correlation value $r \geq .65$.
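A simple NumPy sketch of such a moving correlation (same window and step as above; this is an illustration, not YASA's `moving_transform`). Correlating a signal with itself gives $r = 1$ in every window, which is a handy sanity check:

```python
import numpy as np

def moving_corr(x, y, sf, window=0.3, step=0.1):
    """Pearson r between x and y in sliding windows; returns window centers and r."""
    n_win = int(window * sf)
    n_step = int(step * sf)
    t_out, r_out = [], []
    for start in range(0, len(x) - n_win + 1, n_step):
        xs = x[start:start + n_win]
        ys = y[start:start + n_win]
        r_out.append(np.corrcoef(xs, ys)[0, 1])
        t_out.append((start + n_win / 2) / sf)
    return np.array(t_out), np.array(r_out)

# Identical (non-constant) signals -> r = 1 in every window
sf = 200.
sig = np.sin(2 * np.pi * 13 * np.arange(0, 2, 1 / sf))
t_c, r_c = moving_corr(sig, sig, sf)
```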
```
t, mcorr = yasa.main.moving_transform(data_sigma, data_broad, sf, window=.3, step=.1, method='corr', interp=True)
plt.figure(figsize=(14, 4))
plt.plot(times, mcorr)
plt.xlabel('Time (seconds)')
plt.ylabel('Pearson correlation')
plt.axhline(0.65, ls=':', lw=2, color='indianred', label='Threshold #2')
plt.legend()
plt.title('Moving correlation between $EEG_{bf}$ and $EEG_{\sigma}$')
_ = plt.xlim(0, times[-1])
```
### Threshold 3: Moving RMS
The third and last threshold is defined by computing a moving [root mean square](https://en.wikipedia.org/wiki/Root_mean_square) (RMS) of $\text{EEG}_{\sigma}$, with a window size of 300 ms and a step of 100 ms. The purpose of this threshold is simply to detect increases in energy in the $\text{EEG}_{\sigma}$ signal. As before, the values are interpolated using cubic interpolation to obtain one value at each time point. The third threshold is exceeded whenever a sample has a $\text{RMS} \geq \text{RMS}_{\text{thresh}}$, the latter being defined as:
$\text{RMS}_{\text{thresh}} = \text{RMS}_{\text{mean}} + 1.5 \times \text{RMS}_{\text{std}}$
Note that the 10% lowest and 10% highest values are removed from the RMS signal before computing the standard deviation ($\text{RMS}_{\text{std}}$). This reduces the bias caused by potential artifacts and/or extreme values.
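A sketch of this threshold on synthetic data, using `scipy.stats.trimboth` to discard the extremes before taking the standard deviation (an illustration only — YASA uses its own `trimbothstd` helper):

```python
import numpy as np
from scipy.stats import trimboth

def moving_rms(x, sf, window=0.3, step=0.1):
    """Moving RMS of x in sliding windows (one value per step)."""
    n_win, n_step = int(window * sf), int(step * sf)
    return np.array([np.sqrt(np.mean(x[i:i + n_win] ** 2))
                     for i in range(0, len(x) - n_win + 1, n_step)])

rng = np.random.default_rng(42)
sig = rng.standard_normal(3000)   # 15 s at 200 Hz, stand-in for EEG_sigma
mrms = moving_rms(sig, sf=200.)

# Trim the 10% lowest and 10% highest values before estimating the spread,
# then build the threshold: mean + 1.5 * trimmed std
trimmed_std = trimboth(mrms, 0.1).std(ddof=1)
thresh_rms = mrms.mean() + 1.5 * trimmed_std
```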
```
t, mrms = yasa.main.moving_transform(data_sigma, data, sf, window=.3, step=.1, method='rms', interp=True)
# Define threshold
trimmed_std = yasa.main.trimbothstd(mrms, cut=0.025)
thresh_rms = mrms.mean() + 1.5 * trimmed_std
plt.figure(figsize=(14, 4))
plt.plot(times, mrms)
plt.xlabel('Time (seconds)')
plt.ylabel('Root mean square')
plt.axhline(thresh_rms, ls=':', lw=2, color='indianred', label='Threshold #3')
plt.legend()
plt.title('Moving RMS of $EEG_{\sigma}$')
_ = plt.xlim(0, times[-1])
```
### Decision function
Every sample of the data that passes all 3 thresholds is considered a potential sleep spindle. However, the detection using the three thresholds tends to underestimate the real duration of the spindle. To overcome this, we compute a soft threshold by smoothing the decision vector with a 100 ms window. We then find indices in the decision vector that are strictly greater than 2. In other words, we find the *true* beginning and *true* end of the events by finding the indices at which at least two of the three thresholds were crossed.
```
# Combine all three thresholds
idx_rel_pow = (rel_pow >= 0.2).astype(int)
idx_mcorr = (mcorr >= 0.65).astype(int)
idx_mrms = (mrms >= thresh_rms).astype(int)
idx_sum = (idx_rel_pow + idx_mcorr + idx_mrms).astype(int)
# Soft threshold
w = int(0.1 * sf)
idx_sum = np.convolve(idx_sum, np.ones(w) / w, mode='same')
plt.figure(figsize=(14, 4))
plt.plot(times, idx_sum, '.-', markersize=5)
plt.fill_between(times, 2, idx_sum, where=idx_sum > 2, color='indianred', alpha=.8)
plt.xlabel('Time (seconds)')
plt.ylabel('Number of passed thresholds')
plt.title('Decision function')
_ = plt.xlim(0, times[-1])
```
### Morphological criteria
Now that we have our potential spindles candidates, we apply two additional steps to optimize the detection:
1. Spindles that are too close to each other (less than 500 ms) are merged together
2. Spindles that are either too short ($<0.5$ sec) or too long ($>2$ sec) are removed.
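These two steps can be sketched with a small stand-alone function (illustrative only; YASA's internal `_merge_close` works on sample indices in a similar spirit):

```python
import numpy as np

def merge_and_filter(events, sf, gap=0.5, min_dur=0.5, max_dur=2.0):
    """events: list of (start, end) sample indices, sorted by start.
    Merge events closer than `gap` seconds, then keep only events whose
    duration lies in (min_dur, max_dur). Returns (start, end) in seconds."""
    merged = [list(events[0])]
    for start, end in events[1:]:
        if (start - merged[-1][1]) / sf < gap:   # closer than 500 ms: merge
            merged[-1][1] = end
        else:
            merged.append([start, end])
    return [(s / sf, e / sf) for s, e in merged
            if min_dur < (e - s) / sf < max_dur]

sf = 200.
# Three candidates: two close together (gap = 0.2 s), plus one that is too short
events = [(100, 250), (290, 450), (1000, 1040)]
print(merge_and_filter(events, sf))  # -> [(0.5, 2.25)]
```

The first two candidates are merged into one 1.75-second spindle, and the 0.2-second candidate is discarded.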
```
where_sp = np.where(idx_sum > 2)[0]
# Merge events that are too close together
where_sp = yasa.main._merge_close(where_sp, 500, sf)
# Extract start, end, and duration of each spindle
sp = np.split(where_sp, np.where(np.diff(where_sp) != 1)[0] + 1)
idx_start_end = np.array([[k[0], k[-1]] for k in sp]) / sf
sp_start, sp_end = idx_start_end.T
sp_dur = sp_end - sp_start
# Find events with good duration
good_dur = np.logical_and(sp_dur > 0.5, sp_dur < 2)
print(sp_dur, good_dur)
```
### Spindles properties
From there, we can be pretty confident that our detected spindles are actually *true* sleep spindles.
The last step of the algorithm is to extract, for each individual spindle, several properties:
- Start and end time in seconds
- Duration (seconds)
- Amplitude ($\mu V$)
- Root mean square ($\mu V$)
- Median absolute power ($\log_{10} \mu V^2$)
- Median relative power (from 0 to 1, % $\mu V^2$)
- Median frequency (Hz, extracted with an Hilbert transform)
- Number of oscillations
- Index of the most prominent peak (in seconds)
- Symmetry (the relative position of the most prominent peak on a 0 to 1 scale, where 0 is the beginning of the spindle and 1 the end; ideally it should be around 0.5)
In the example below, we plot the two detected spindles and compute the peak-to-peak amplitude of the spindles.
To see how the other properties are computed, please refer to the [source code](https://github.com/raphaelvallat/yasa/blob/master/yasa/main.py) of the `spindles_detect` function.
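As a hypothetical illustration of the symmetry index and peak-to-peak amplitude (not YASA's exact implementation), consider a synthetic spindle whose amplitude envelope peaks past the midpoint:

```python
import numpy as np

# Hypothetical spindle: a 13 Hz oscillation under a skewed Gaussian envelope
sf = 200.
t = np.arange(0, 1, 1 / sf)                  # a 1-second spindle
envelope = np.exp(-((t - 0.6) ** 2) / 0.02)  # envelope peaks at 0.6 s
spindle = envelope * np.sin(2 * np.pi * 13 * t)

# Symmetry index: relative position of the most prominent peak (0 = start, 1 = end)
peak_idx = np.argmax(np.abs(spindle))
symmetry = peak_idx / (spindle.size - 1)     # close to 0.6 for this spindle

# Peak-to-peak amplitude, as computed for each detected spindle below
amplitude = np.ptp(spindle)
```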
```
from scipy.signal import detrend
sp_amp = np.zeros(len(sp))
plt.figure(figsize=(8, 4))
for i in np.arange(len(sp))[good_dur]:
    # Important: detrend the spindle signal to avoid a wrong peak-to-peak amplitude
    sp_det = detrend(data[sp[i]], type='linear')
    # Now extract the peak-to-peak amplitude
    sp_amp[i] = np.ptp(sp_det)  # Peak-to-peak amplitude
    # And plot the spindles
    plt.plot(np.arange(sp_det.size) / sf, sp_det,
             lw=2, label='Spindle #' + str(i + 1))
plt.legend()
plt.xlabel('Time (sec)')
plt.ylabel('Amplitude ($uV$)')
print('Peak-to-peak amplitude:\t', sp_amp)
```
First we need to download the dataset. In this case we use a dataset containing poems; by training on it, the model learns to create its own poems.
```
from datasets import load_dataset
dataset = load_dataset("poem_sentiment")
print(dataset)
```
Before training we need to preprocess the dataset. We tokenize the entries in the dataset and remove all columns we don't need to train the adapter.
```
from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-cluecorpussmall")
from transformers import GPT2Tokenizer
def encode_batch(batch):
    """Encodes a batch of input data using the model tokenizer."""
    encoding = tokenizer(batch["verse_text"])
    # For language modeling the labels need to be the input_ids
    # encoding["labels"] = encoding["input_ids"]
    return encoding
#tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
#tokenizer.pad_token = tokenizer.eos_token
# The GPT-2 tokenizer does not have a padding token. In order to process the data
# in batches we set one here
column_names = dataset["train"].column_names
dataset = dataset.map(encode_batch, remove_columns=column_names, batched=True)
```
Next we concatenate the documents in the dataset and create chunks with a length of `block_size`. This is beneficial for language modeling.
```
block_size = 50
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; we could add padding if the model supported it
    # instead of this drop — you can customize this part to your needs.
    total_length = (total_length // block_size) * block_size
    # Split by chunks of max_len.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
dataset = dataset.map(group_texts,batched=True,)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "labels"])
```
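To make the chunking concrete, here is a toy illustration of the same concatenate-then-split logic on plain Python lists (independent of the `datasets` library; column names are illustrative):

```python
block_size = 5

def group_texts(examples):
    # Concatenate all token lists into one long list per column
    concatenated = {k: sum(examples[k], []) for k in examples}
    total_length = len(concatenated[list(examples)[0]])
    # Drop the remainder so every chunk has exactly block_size tokens
    total_length = (total_length // block_size) * block_size
    result = {k: [t[i:i + block_size] for i in range(0, total_length, block_size)]
              for k, t in concatenated.items()}
    # For causal language modeling, labels are a copy of the input ids
    result["labels"] = result["input_ids"].copy()
    return result

batch = {"input_ids": [[1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11, 12]]}
out = group_texts(batch)
print(out["input_ids"])  # [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
```

The three variable-length examples are joined into one stream of 12 tokens, split into two chunks of 5, and the trailing 2 tokens are dropped.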
Next we create the model and add our new adapter. Let's call it `poem`, since it is trained to create new poems. Then we activate it and prepare it for training.
```
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("uer/gpt2-chinese-cluecorpussmall")
# add new adapter
model.add_adapter("poem")
# activate adapter for training
model.train_adapter("poem")
```
The last thing we need to do before we can start training is create the trainer. As training arguments we choose a learning rate of 5e-4 and 3 training epochs. Feel free to play around with the parameters and see how they affect the result.
```
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="./examples",
do_train=True,
remove_unused_columns=False,
learning_rate=5e-4,
num_train_epochs=3,
)
trainer = Trainer(
model=model,
args=training_args,
tokenizer=tokenizer,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
```
Now that we have a trained adapter, let's use it to generate a few sequences from a short prefix.
```
PREFIX = "what a "
encoding = tokenizer(PREFIX, return_tensors="pt")
encoding = encoding.to(model.device)
output_sequence = model.generate(
input_ids=encoding["input_ids"][:,:-1],
attention_mask=encoding["attention_mask"][:,:-1],
do_sample=True,
num_return_sequences=5,
max_length = 50,
)
```
Lastly we want to see what the model actually created. To do this we need to decode the tokens from ids back to words and remove the end-of-sentence tokens. You can easily use this code with another dataset. Don't forget to share your adapters at [AdapterHub](https://adapterhub.ml/).
```
for generated_sequence_idx, generated_sequence in enumerate(output_sequence):
    print("=== GENERATED SEQUENCE {} ===".format(generated_sequence_idx + 1))
    generated_sequence = generated_sequence.tolist()
    # Decode text
    text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)
    # Remove end-of-sentence tokens
    text = text[: text.find(tokenizer.pad_token)]
    print(text)
```
```
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from Utils import load
from Utils import generator
from Utils import metrics
from train import *
from prune import *
from Layers import layers
from torch.nn import functional as F
import torch.nn as nn
def fc(input_shape, nonlinearity=nn.ReLU()):
    size = np.prod(input_shape)
    # Linear feature extractor
    modules = [nn.Flatten()]
    modules.append(layers.Linear(size, 5000))
    modules.append(nonlinearity)
    modules.append(layers.Linear(5000, 900))
    modules.append(nonlinearity)
    modules.append(layers.Linear(900, 400))
    modules.append(nonlinearity)
    modules.append(layers.Linear(400, 100))
    modules.append(nonlinearity)
    modules.append(layers.Linear(100, 30))
    modules.append(nonlinearity)
    modules.append(layers.Linear(30, 1))
    model = nn.Sequential(*modules)
    return model
from data import *
from models import *
from utils import *
from sklearn.model_selection import KFold
import os, shutil, pickle
ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu(0)
loss_before_prune=[]
loss_after_prune=[]
loss_prune_posttrain=[]
NUM_PARA=[]
for datasetindex in range(10):  # [0,1,4,5,6,7,8,9]
    dataset = str(datasetindex) + '.csv'
    X, y = get_data(dataset)
    np.random.seed(0)
    kf = KFold(n_splits=5, random_state=0, shuffle=True)
    kf.get_n_splits(X)
    seed = 0  # [0,1,2,3,4]
    chosenarmsList = []
    for train_index, test_index in kf.split(X):
        X_tr, X_te = X[train_index], X[test_index]
        y_tr, y_te = y[train_index], y[test_index]
        X_test = nd.array(X_te).as_in_context(ctx)  # Fix test data for all seeds
        y_test = nd.array(y_te).as_in_context(ctx)
        factor = np.max(y_te) - np.min(y_te)  # normalize RMSE
        print(factor)
        # X_tr, X_te, y_tr, y_te = get_data(0.2, 0)
        # selected_interaction = detectNID(X_tr, y_tr, X_te, y_te, test_size, seed)
        # index_Subsets = get_interaction_index(selected_interaction)
        N = X_tr.shape[0]
        p = X_tr.shape[1]
        batch_size = 500
        n_epochs = 300
        if N < 250:
            batch_size = 50
        X_train = nd.array(X_tr).as_in_context(ctx)
        y_train = nd.array(y_tr).as_in_context(ctx)
        train_dataset = ArrayDataset(X_train, y_train)
        # num_workers=4
        train_data = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)  # , num_workers=num_workers)
        print('start training FC')
        FCnet = build_FC(train_data, ctx)  # initialize the overparametrized network
        FCnet.load_parameters('Selected_models/FCnet_' + str(datasetindex) + '_seed_' + str(seed), ctx=ctx)
        model = fc(10)
        loss = nn.MSELoss()
        dataset = torch.utils.data.TensorDataset(torch.Tensor(X_tr), torch.Tensor(y_tr))
        # Copy the pretrained MXNet weights into the PyTorch model
        for i in range(6):
            model[int(i * 2 + 1)].weight.data = torch.Tensor(FCnet[i].weight.data().asnumpy())
            model[int(i * 2 + 1)].bias.data = torch.Tensor(FCnet[i].bias.data().asnumpy())
        print("dataset:", datasetindex, "seed", seed)
        print("before prune:", torch.sqrt(torch.mean((model(torch.Tensor(X_te)) - torch.Tensor(y_te)) ** 2)) / factor)
        loss_before_prune.append(torch.sqrt(loss(model(torch.Tensor(X_te)), torch.Tensor(y_te))) / factor)
        print(torch.sqrt(loss(model(torch.Tensor(X_te)), torch.Tensor(y_te))) / factor)
        ## Prune ##
        device = torch.device("cpu")
        prune_loader = load.dataloader(dataset, 64, True, 4, 1)
        prune_epochs = 10
        print('Pruning with {} for {} epochs.'.format('synflow', prune_epochs))
        pruner = load.pruner('synflow')(generator.masked_parameters(model, False, False, False))
        sparsity = 10 ** (-float(2.44715803134))  # 280X; for 100X use 10**(-float(2))
        prune_loop(model, loss, pruner, prune_loader, device, sparsity,
                   'exponential', 'global', prune_epochs, False, False, False, False)
        pruner.apply_mask()
        print("after prune:", torch.sqrt(loss(model(torch.Tensor(X_te)), torch.Tensor(y_te))) / factor)
        loss_after_prune.append(torch.sqrt(loss(model(torch.Tensor(X_te)), torch.Tensor(y_te))) / factor)
        ## Post-train
        train_loader = load.dataloader(dataset, 64, True, 4)
        test_loader = load.dataloader(dataset, 200, False, 4)
        optimizer = torch.optim.Adam(generator.parameters(model), betas=(0.9, 0.99))
        scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)
        post_result = train_eval_loop(model, loss, optimizer, scheduler, train_loader,
                                      test_loader, device, 100, True)
        print("after post_train:", torch.sqrt(loss(model(torch.Tensor(X_te)), torch.Tensor(y_te))) / factor)
        loss_prune_posttrain.append(torch.sqrt(loss(model(torch.Tensor(X_te)), torch.Tensor(y_te))) / factor)
        # Count remaining (unmasked) parameters
        num = 0
        for i in pruner.masked_parameters:
            num = num + sum(sum(i[0]))
        print(num)
        NUM_PARA.append(num)
        seed = seed + 1
import mxnet.gluon.nn as nn
##synflow results 280X
a=0
for i in range(10):
    print(sum(loss_prune_posttrain[5 * i:5 * i + 5]) / 5)
    a = a + sum(loss_prune_posttrain[5 * i:5 * i + 5]) / 5
print("ave:",a/10)
##synflow results 100X
a=0
for i in range(10):
    print(sum(loss_prune_posttrain[5 * i:5 * i + 5]) / 5)
    a = a + sum(loss_prune_posttrain[5 * i:5 * i + 5]) / 5
print("ave:",a/10)
```
# In-Vehicle Security using Pattern Recognition Techniques
---
*ECE 5831 - 08/17/2021 - Kunaal Verma*
The goal of this project is to train a machine learning model to recognize unique signatures of each ECU in order to identify when intrusive messages are being fed to the network.
The dataset for this project consists of several time-series recordings of clock pulses produced by several ECUs on an in-vehicle CAN High network.
The test network has 8 trusted ECUs, with 30 records of 600 samples of the clock signal for each ECU.
# I. Initialization
```
### I. Initialization ###
print('--- Intialize ---')
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sn
import sklearn.metrics as sm
from google.colab import drive
drive.mount('/content/drive')
```
# II. File Pre-conditioning
```
### II. File Pre-conditioning ###
print('\n--- Pre-condition Files ---\n')
# Dataset path
datapath = 'drive/MyDrive/School/UMich Dearborn/SS21/ECE 5831/Project/Dataset'
# datapath = './Dataset'
print('Path to Dataset:')
print(datapath)
# List of files in directory, append to list recursively
file_list = []
for path, folders, files in os.walk(datapath):
    for file in files:
        file_list.append(os.path.join(path, file))
# Sort file list by record number, but use a copy that contains leading zeros
file_list0 = []
for filename in file_list:
    m = re.search('_[0-9]\.', filename)
    if m:
        found = m.group(0)
        filename = re.sub('_[0-9]\.', found[0] + '0' + found[1:-1] + '.', filename)
    file_list0.append(filename)
file_list1 = dict(zip(file_list0,file_list))
file_list2 = dict(sorted(file_list1.items(), key = lambda x:x[0]))
# Produce sorted file lists
file_list = list(file_list2.values()) # Original list, properly sorted
file_list0 = list(file_list2.keys()) # Modified list with leading zeros, sorted
# Remove CAN5 items (CAN1 and CAN5 were made purposely similar, this is not realistic in real-world CAN networks)
file_list = [i for i in file_list if re.search("CAN5_2m_Thick_uns1", i) == None]
file_list0 = [i for i in file_list0 if re.search("CAN5_2m_Thick_uns1", i) == None]
print('\nAbsolute Paths to Records (Subset):')
# for filename in file_list[0:10]:
for filename in file_list:
    print(filename)
```
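The zero-padded copy of the file list above exists only to make lexicographic sorting respect record numbers. An alternative is to sort on a parsed numeric key directly; a small sketch (the `record_key` helper is hypothetical, not part of this notebook's pipeline):

```python
import re

# Sketch of an alternative to the zero-padding approach: sort on
# (non-numeric part, record number) by parsing the trailing "_<n>." piece.
def record_key(path):
    m = re.search(r'_(\d+)\.', path)
    n = int(m.group(1)) if m else -1
    # Sort by the non-numeric part first, then by record number
    return (re.sub(r'_\d+\.', '_.', path), n)

files = ['CAN1_rec_10.csv', 'CAN1_rec_2.csv', 'CAN1_rec_1.csv']
print(sorted(files, key=record_key))
# ['CAN1_rec_1.csv', 'CAN1_rec_2.csv', 'CAN1_rec_10.csv']
```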
# III. Feature Extraction
```
### III. Feature Extraction ###
print('\n--- Extract Features ---\n')
!cp '/content/drive/MyDrive/School/UMich Dearborn/SS21/ECE 5831/Project/github/ecu_fingerprint_lib.py' .
from ecu_fingerprint_lib import Record
pkl_path = './ecu_fingerprint.pkl'
if os.path.exists(pkl_path):
    df = pd.read_pickle(pkl_path)
else:
    # Dict instantiation
    d = {}
    # Iteratively build data dictionary
    for i in np.arange(len(file_list0)):
        filepath = file_list[i]
        filepath0 = file_list0[i]
        print(filepath)
        # Extract folder name of current record
        folder = os.path.basename(os.path.dirname(filepath0))
        filename = os.path.basename(filepath0)
        # Extract record identifiers
        id, pl, pm, did = folder.split('_')
        filename = re.split(r'_|\.', filename)
        # Open file
        with open(filepath) as file_name:
            array = np.loadtxt(file_name, delimiter=",")
        # Extract features
        r = Record(array)
        # Add features and file attributes to dict
        if i == 0:
            # File metadata
            d['Filepath'] = []
            d['CAN_Id'] = []
            d['CAN_PhyLen'] = []
            d['CAN_PhyMat'] = []
            d['CAN_RecId'] = []
            # Feature data
            for feature_name in r.headers:
                d[feature_name] = []
        # Build data table
        for k in np.arange(r.total_rec):
            for j in np.arange(len(r.headers)):
                d[r.headers[j]].append(r.features[j][k])
            d['Filepath'].append(filepath)
            d['CAN_Id'].append(id)
            d['CAN_PhyLen'].append(pl)
            d['CAN_PhyMat'].append(pm)
            d['CAN_RecId'].append(filename[-2])
    df = pd.DataFrame.from_dict(d)
    df.to_pickle(pkl_path)
print('\nDataFrame Object:\n')
print(df.info())
```
# IV. Prepare Training and Test Datasets
```
### IV. Prepare Training and Test Datasets ###
print('\n--- Training and Test Datasets ---\n')
# Attribute and Label Datasets
## Full, Dom, Rec
X = df.iloc[:,5:] # Full feature set 0.0%
y = df.iloc[:,1]
# Change Labels from Categorical to Numerical values
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
Y = le.fit_transform(y)
# Train-Test Split (70/30)
from sklearn.model_selection import train_test_split
# X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=4) # Apples to Apples Debugging
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
print('Training Input:\n')
print(' %s: %s' %(X_train.dtype, X_train.shape))
print('\nTraining Output:\n')
print(' %s: %s' %(Y_train.dtype, Y_train.shape))
print('\nTest Input:\n')
print(' %s: %s' %(X_test.dtype, X_test.shape))
print('\nTest Output:\n')
print(' %s: %s' %(Y_test.dtype, Y_test.shape))
```
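Note that `scaler.fit` runs on `X_train` only, and the same training statistics are then reused to transform `X_test`; fitting on the full dataset would leak test-set information into preprocessing. A plain-numpy sketch of the same pattern (the demo arrays are made up, not this notebook's data):

```python
import numpy as np

# Standardize using training statistics only, then reuse them on test data.
rng = np.random.default_rng(1)
Xtr_demo = rng.normal(5, 2, (100, 3))
Xte_demo = rng.normal(5, 2, (20, 3))
mu, sd = Xtr_demo.mean(axis=0), Xtr_demo.std(axis=0)  # training statistics only
Xtr_scaled = (Xtr_demo - mu) / sd
Xte_scaled = (Xte_demo - mu) / sd                     # reuse, do not refit
```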
# V. Train Neural Network
```
### V. Train Neural Network ###
print('\n--- Neural Network Training ---\n')
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(hidden_layer_sizes=([3*len(df),]), max_iter=2000)
mlp.fit(X_train, Y_train.ravel())
Y_pred = mlp.predict(X_test)
print('Hidden Layer Nodes: ',len(mlp.coefs_[1]))
print('Neural Network Solver: ',mlp.solver)
```
# VI. Produce Confusion Matrix
```
### VI. Produce Confusion Matrix ###
print('\n--- Confusion Matrix ---\n')
cm = sm.confusion_matrix(Y_test, Y_pred)
tr = np.sum(cm) # Total Records
np.sum(cm, axis=0) # Column Sum (Horizontal Result)
np.sum(cm, axis=1) # Row Sum (Vertical Result)
np.trace(cm) # Diagonal Sum (Total Result)
labels = [re.sub('CAN', 'Class', i) for i in y.unique()]
df_cm = pd.DataFrame(cm, labels, labels)
sn.set(font_scale=1.4) # for label size
sn.heatmap(df_cm, annot=True, annot_kws={"size": 12}) # font size
plt.xlabel("Prediction", fontsize = 12)
plt.ylabel("Actual", fontsize = 12)
plt.show()
print('')
print(cm)
```
# VII. Evaluate Performance Metrics
```
### VII. Evaluate Performance Metrics ###
print('\n--- Performance Metrics ---\n')
def perf_metrics(cm, report=False):
    cm_acc = []
    cm_pre = []
    cm_rec = []
    cm_f1s = []
    cm_err = []
    if report:
        print('[ ECU_Id | Accuracy | Precision | Recall | F1 Score | Error ]')
        print('=========|==========|===========|========|==========|========')
    for i in np.arange(len(cm)):
        cm_working = cm.copy()
        idx = np.concatenate((np.arange(0, i), np.arange(-len(cm)+i+1, 0)))
        tp = cm_working[i, i]
        fn = np.sum(cm_working[i, idx])
        fp = np.sum(cm_working[idx, i])
        cm_working[i, i] = 0
        cm_working[i, idx] = 0
        cm_working[idx, i] = 0
        tn = np.sum(cm_working)
        acc = (tp + tn)/(tp + tn + fp + fn)
        err = 1 - acc
        pre = tp/(tp + fp)
        rec = tp/(tp + fn)
        f1s = 2*(pre*rec)/(pre + rec)
        cm_acc.append(acc)
        cm_pre.append(pre)
        cm_rec.append(rec)
        cm_f1s.append(f1s)
        cm_err.append(err)
        if report:
            print('[%7s | %8.3f | %9.3f | %6.3f | %8.3f | %5.3f ]' \
                  % (y.unique()[i], acc, pre, rec, f1s, err))
    return cm_acc, cm_pre, cm_rec, cm_f1s, cm_err
acc, pre, rec, f1s, err = perf_metrics(cm, True)
# Plot results
bw = 0.15 # Box plot bar width
p1 = np.arange(len(acc)) # Category x-axis positions
p2 = [x + bw for x in p1]
p3 = [x + bw for x in p2]
p4 = [x + bw for x in p3]
p5 = [x + bw for x in p4]
print('')
plt.bar(p1, acc, width=bw)
plt.bar(p2, pre, width=bw)
plt.bar(p3, rec, width=bw)
plt.bar(p4, f1s, width=bw)
plt.bar(p5, err, width=bw)
plt.xticks([p + bw for p in range(len(acc))], y.unique(), fontsize = 10)
plt.legend(['Accuracy','Precision','Recall','F1 Score','Error'], fontsize = 10, bbox_to_anchor = (1, 0.67))
```
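The `perf_metrics` function above derives TP, FP, FN, and TN per class by zeroing out rows and columns of a working copy of the confusion matrix. The same quantities can be cross-checked with vectorized row and column sums; a small sketch on a made-up 3x3 confusion matrix (rows = actual, columns = predicted — `cm_demo` is illustrative, not this notebook's result):

```python
import numpy as np

# Vectorized one-vs-rest metrics from a confusion matrix.
cm_demo = np.array([[50, 2, 0],
                    [3, 45, 2],
                    [0, 4, 44]])
tp = np.diag(cm_demo).astype(float)
fp = cm_demo.sum(axis=0) - tp           # column sums minus the diagonal
fn = cm_demo.sum(axis=1) - tp           # row sums minus the diagonal
tn = cm_demo.sum() - (tp + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
accuracy = (tp + tn) / cm_demo.sum()
```

For class 0 this gives precision 50/53, recall 50/52, and accuracy 145/150 — the same values the masking loop produces.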
# Appendix I. Neural Network Hyperparameter Testing
```
### IV. Prepare Training and Test Datasets ###
print('\n--- Training and Test Datasets ---\n')
# Attribute and Label Datasets
## Full, Spectral, Contrl, Dom, Rec Acc %
# X = df.iloc[:,5:] # Full feature set 98.76%
# X = df.iloc[:,[12,13,14,15,16,17,18,19,20,21,22,23,24,32,33,34,35,36,37,38,39]] # Spectral Only 97.26%
# X = df.iloc[:,[5,6,7,8,9,10,11,25,26,27,28,29,30,31]] # Control Only 97.17%
# X = df.iloc[:, 5:24] # Dominant Only 98.14%
# X = df.iloc[:,25:39] # Recessive Only 96.73%
## Spectral Features (sorted by Accuracy)
# X = df.iloc[:,[22,37]] # SNR 95.23%
# X = df.iloc[:,[23,38]] # Mean Freq 94.96%
# X = df.iloc[:,[12,13,14,15,16,17,18,19,20,21,32,33,34,35,36]] # Spectral Density 92.93%
# X = df.iloc[:,[24,39]] # Median Freq 91.34%
## Spectral Features (added by rank)
# X = df.iloc[:,[22,37]] # SNR 95.23% [ ]
# X = df.iloc[:,[22,23,37,38]] # + Mean Freq 97.35% [+] <<<
# X = df.iloc[:,[12,13,14,15,16,17,18,19,20,21,22,23,32,33,34,35,36,37,38]] # + Spectral Density 97.26% [-]
# X = df.iloc[:,[12,13,14,15,16,17,18,19,20,21,22,23,32,33,34,35,36,37,38]] # + Median Freq 97.26% [-]
## Control Features (sorted by Accuracy)
# X = df.iloc[:,[ 8,28]] # Steady State Value 95.23%
# X = df.iloc[:,[ 9,29]] # Steady State Error 95.05%
# X = df.iloc[:,[ 6,26]] # Percent Overshoot 91.52%
# X = df.iloc[:,[ 7,27]] # Settling Time 91.52%
# X = df.iloc[:,[10,30]] # Rise Time 83.48%
# X = df.iloc[:,[11,31]] # Delay Time 81.89%
# X = df.iloc[:,[ 5,25]] # Peak Time 80.12%
## Control Features (added by rank)
# X = df.iloc[:,[8,28]] # SSV 95.23% [ ]
# X = df.iloc[:,[8,9,28,29]] # + SSE 97.70% [+]
# X = df.iloc[:,[6,8,9,26,28,29]] # + %OS 98.32% [+] <<<
# X = df.iloc[:,[6,7,8,9,26,27,28,29]] # + Ts 98.32% [=]
# X = df.iloc[:,[6,7,8,9,10,26,27,28,29,30]] # + Tr 97.26 [-]
# X = df.iloc[:,[6,7,8,9,10,11,26,27,28,29,30,31]] # + Td 97.17% [-]
# X = df.iloc[:,[ 5,25]] # + Tp 97.17% [=]
## All Features (added by rank)
# X = df.iloc[:,[22,37]] # SNR 95.23% [ ]
# X = df.iloc[:,[8,22,28,37]] # + SSV 97.53% [+]
# X = df.iloc[:,[8,9,22,28,29,37]] # + SSE 97.97% [+]
# X = df.iloc[:,[8,9,22,23,28,29,37,38]] # + Mean Freq. 98.59% [+]
# X = df.iloc[:,[6,8,9,22,23,26,28,29,37,38]] # + %OS 98.94% [+] <<<
# X = df.iloc[:,[6,7,8,9,22,23,26,27,28,29,37,38]] # + Ts 98.85% [-]
# X = df.iloc[:,[6,7,8,9,22,23,24,26,27,28,29,37,38,39]] # + Med. Freq. 98.67% [-]
# X = df.iloc[:,[6,8,9,12,13,14,15,16,17,18,19,20,21,22,23,26,28,29,32,33,34,35,36,37,38]] # + SD 98.32% [-]
# X = df.iloc[:,[10,30]] # + Tr ...
# X = df.iloc[:,[11,31]] # + Td ...
# X = df.iloc[:,[ 5,25]] # + Tp ...
## Final Feature Set: (Control) SSV, SSE, %OS (Spectral) SNR, Mean Freq.
X = df.iloc[:,[6,8,9,22,23,26,28,29,37,38]] # 99.36%
# Testing Results
y = df.iloc[:,1]
# Change Labels from Categorical to Numerical values
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
Y = le.fit_transform(y)
# Train-Test Split (70/30)
from sklearn.model_selection import train_test_split
# X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=0) # Apples to Apples Debugging
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Print Training/Test Set Details
print('Training Input:\n')
print(' %s: %s' %(X_train.dtype, X_train.shape))
print('\nTraining Output:\n')
print(' %s: %s' %(Y_train.dtype, Y_train.shape))
print('\nTest Input:\n')
print(' %s: %s' %(X_test.dtype, X_test.shape))
print('\nTest Output:\n')
print(' %s: %s' %(Y_test.dtype, Y_test.shape))
### V. Train Neural Network ###
print('\n--- Neural Network Training ---\n')
from sklearn.neural_network import MLPClassifier
hls = 1000
mit = 3000
mlp = MLPClassifier(hidden_layer_sizes=([hls,]), max_iter=mit)
mlp.fit(X_train, Y_train.ravel())
Y_pred = mlp.predict(X_test)
print('Hidden Layer Nodes: ',len(mlp.coefs_[1]))
print('Neural Network Solver: ',mlp.solver)
### VI. Produce Confusion Matrix ###
print('\n--- Confusion Matrix ---\n')
cm = sm.confusion_matrix(Y_test, Y_pred)
tr = np.sum(cm) # Total Records
np.sum(cm, axis=0) # Column Sum (Horizontal Result)
np.sum(cm, axis=1) # Row Sum (Vertical Result)
np.trace(cm) # Diagonal Sum (Total Result)
# labels = [re.sub('CAN', 'Class', i) for i in y.unique()]
labels = ["Class%i" % i for i in 1 + np.arange(len(y.unique()))]
df_cm = pd.DataFrame(cm, labels, labels)
sn.set(font_scale=1.4) # for label size
sn.heatmap(df_cm, annot=True, annot_kws={"size": 12}) # font size
plt.xlabel("Prediction", fontsize = 12)
plt.ylabel("Actual", fontsize = 12)
plt.show()
print('')
print(cm)
### VII. Evaluate Performance Metrics ###
print('\n--- Performance Metrics ---\n')
def perf_metrics(cm, report=False):
    cm_acc = []
    cm_pre = []
    cm_rec = []
    cm_f1s = []
    cm_err = []
    if report:
        print('[ ECU_Id | Accuracy | Precision | Recall | F1 Score | Error ]')
        print('=========|==========|===========|========|==========|========')
    for i in np.arange(len(cm)):
        cm_working = cm.copy()
        idx = np.concatenate((np.arange(0, i), np.arange(-len(cm)+i+1, 0)))
        tp = cm_working[i, i]
        fn = np.sum(cm_working[i, idx])
        fp = np.sum(cm_working[idx, i])
        cm_working[i, i] = 0
        cm_working[i, idx] = 0
        cm_working[idx, i] = 0
        tn = np.sum(cm_working)
        acc = (tp + tn)/(tp + tn + fp + fn)
        err = 1 - acc
        pre = tp/(tp + fp)
        rec = tp/(tp + fn)
        f1s = 2*(pre*rec)/(pre + rec)
        cm_acc.append(acc)
        cm_pre.append(pre)
        cm_rec.append(rec)
        cm_f1s.append(f1s)
        cm_err.append(err)
        if report:
            print('[%7s | %8.3f | %9.3f | %6.3f | %8.3f | %5.3f ]' \
                  % (y.unique()[i], acc, pre, rec, f1s, err))
    return cm_acc, cm_pre, cm_rec, cm_f1s, cm_err
acc, pre, rec, f1s, err = perf_metrics(cm, True)
# Plot results
bw = 0.15 # Box plot bar width
p1 = np.arange(len(acc)) # Category x-axis positions
p2 = [x + bw for x in p1]
p3 = [x + bw for x in p2]
p4 = [x + bw for x in p3]
p5 = [x + bw for x in p4]
print('')
plt.bar(p1, acc, width=bw)
plt.bar(p2, pre, width=bw)
plt.bar(p3, rec, width=bw)
plt.bar(p4, f1s, width=bw)
plt.bar(p5, err, width=bw)
plt.xticks([p + bw for p in range(len(acc))], y.unique(), fontsize = 10)
plt.legend(['Accuracy','Precision','Recall','F1 Score','Error'], fontsize = 10, bbox_to_anchor = (1, 0.67))
acc_avg = np.average(acc)*100
pre_avg = np.average(pre)*100
rec_avg = np.average(rec)*100
f1s_avg = np.average(f1s)*100
err_avg = np.average(err)*100
print('%2.2f %2.2f %2.2f %2.2f %2.2f' % (acc_avg, pre_avg, rec_avg, f1s_avg, err_avg))
```
# Description
This notebook compares the results of training a conditional normalizing flow model on the synthetic two-moons data. The data have rotation and moon class as conditioning variables.
The comparison is between a base flow with no regularization at all and one that adds clipped Adam and batch normalization.
Many different hyperparameters are tested to see whether batch normalization can be made to work.
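For reference, a minimal sketch of what rotating two-moons conditional data can look like (an assumed generation scheme, not the project's `RotatingTwoMoonsConditionalSampler`):

```python
import numpy as np

# Hypothetical sketch: two half-moons plus a rotation angle, where the
# angle and the moon class act as conditioning variables for the flow.
def rotating_two_moons(n, angle, rng):
    theta = rng.uniform(0, np.pi, n)
    cls = rng.integers(0, 2, n)                      # moon class conditioner
    x = np.where(cls == 0, np.cos(theta), 1 - np.cos(theta))
    y = np.where(cls == 0, np.sin(theta), 0.5 - np.sin(theta))
    pts = np.stack([x, y], axis=1) + rng.normal(0, 0.05, (n, 2))
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])                  # rotation conditioner
    return pts @ R.T, cls

rng = np.random.default_rng(0)
pts, cls = rotating_two_moons(512, np.pi / 4, rng)
```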
# Base loads
```
!pip install pyro-ppl==1.3.0
!nvidia-smi
# "Standard" imports
import numpy as np
from time import time
import itertools
import matplotlib.pyplot as plt
import pickle
import os
import pandas as pd
import folium
import datetime
# Pytorch imports
import torch
import torch.nn.functional as F
from torch import nn
from torch import optim
from torch.utils.data import DataLoader
# Pyro imports
import pyro
from pyro.distributions import ConditionalTransformedDistribution, ConditionalTransformModule, TransformModule
import pyro.distributions as dist
from pyro.distributions.transforms import affine_coupling, affine_autoregressive, permute
# Sklearn imports
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
# Notebooks imports
from IPython.display import Image, display, clear_output
from tqdm import tqdm
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import numpy as np
import folium
from folium import plugins
import matplotlib
import seaborn as sns
# Mount my drive
from google.colab import drive
import sys
import os
drive.mount('/content/drive/')
root_path = 'drive/My Drive/Colab_Notebooks/normalizingflows'
trained_flows_folder = 'drive/My Drive/Colab_Notebooks/normalizingflows/trained_flows'
plot_folder = f'{root_path}/plot_notebooks/generated_plots'
dataset_folder = 'drive/My Drive/Colab_Notebooks/normalizingflows/datasets'
sys.path.append(root_path)
```
## Load Git code
```
%cd drive/'My Drive'/Thesis/code/DEwNF/
!git pull
%cd /content/
git_folder_path = 'drive/My Drive/Thesis/code/DEwNF/'
sys.path.append(git_folder_path)
from DEwNF.flows import ConditionalAffineCoupling, ConditionedAffineCoupling, ConditionalNormalizingFlowWrapper, conditional_affine_coupling, normalizing_flow_factory, conditional_normalizing_flow_factory
from DEwNF.utils import plot_4_contexts_cond_flow, plot_loss, sliding_plot_loss, plot_samples, simple_data_split_conditional, simple_data_split, circle_transform
from DEwNF.samplers import RotatingTwoMoonsConditionalSampler
from DEwNF.regularizers import NoiseRegularizer, rule_of_thumb_noise_schedule, approx_rule_of_thumb_noise_schedule, square_root_noise_schedule, constant_regularization_schedule
```
# Func def
```
def minMax(x):
    return pd.Series(index=['min', 'max'], data=[x.min(), x.max()])

def rescale_samples(plot_flow_dist, scaler, n_samples=256):
    x_s = plot_flow_dist.sample((n_samples,))
    if x_s.is_cuda:
        x_s = x_s.cpu()
    if scaler is not None:
        x_s = scaler.inverse_transform(x_s)
    return x_s

def add_marker_w_prob(coords, obs_scaler, flow_dist, hub_map, name):
    scaled_coords = obs_scaler.transform(np.flip(np.array([coords])))
    log_prob = flow_dist.log_prob(torch.tensor(scaled_coords).float().cuda()).cpu().detach().numpy()
    folium.CircleMarker(
        location=coords,
        popup=folium.Popup(f'{name} log likelihood: {log_prob}', show=False),
        icon=folium.Icon(icon='cloud')
    ).add_to(hub_map)

def create_overlay(shape, bounds, flow_dist, hub_map, cm, flip):
    with torch.no_grad():
        nlats, nlons = shape
        lats_array = torch.linspace(start=bounds[1][0], end=bounds[0][0], steps=nlats)
        lons_array = torch.linspace(start=bounds[0][1], end=bounds[1][1], steps=nlons)
        x, y = torch.meshgrid(lats_array, lons_array)
        points = torch.stack((x.reshape(-1), y.reshape(-1)), axis=1)
        if flip:
            points = points.flip(1)
        scaled_points = torch.tensor(obs_scaler.transform(points), requires_grad=False).float()
        data = flow_dist.log_prob(scaled_points.cuda()).reshape(nlats, nlons).cpu().detach().numpy()
        data = np.exp(data)
        overlay = cm(data)
        folium.raster_layers.ImageOverlay(
            image=overlay,
            bounds=bounds,
            mercator_project=False,
            opacity=0.75
        ).add_to(hub_map)
    return overlay

def plot_samples(samples, hub_map):
    for point in samples:
        folium.CircleMarker(
            location=point,
            radius=2,
            color='#3186cc',
            fill=True,
            fill_color='#3186cc',
            opacity=0.1).add_to(hub_map)
    return hub_map
```
# Load planar conditional model
```
unconditional_path = os.path.join(root_path, "results/nyc_taxi/combi/test_combi_fixed_bn")
os.listdir(unconditional_path)
loaded_dicts = {}
folders = os.listdir(unconditional_path)
for run_name in folders:
    run_path = os.path.join(unconditional_path, run_name)
    model_names = os.listdir(run_path)
    loaded_dicts[run_name] = {}
    for model_name in model_names:
        model_path = os.path.join(run_path, model_name)
        with open(model_path, 'rb') as f:
            loaded_dict = pickle.load(f)
        loaded_dicts[run_name][model_name] = loaded_dict
runs = list(loaded_dicts.keys())
#runs = ['reg_comparisons2']
models = list(loaded_dicts[runs[0]].keys())
results_dict = {}
for model in models:
    results_dict[model] = {}
    runs_arr = []
    for run in runs:
        runs_arr.append(loaded_dicts[run][model]['logs']['test'][-1])
    results_dict[model]['mean'] = np.mean(runs_arr)
    results_dict[model]['max'] = np.max(runs_arr)
    results_dict[model]['min'] = np.min(runs_arr)
combi_conditional_dicts = loaded_dicts
combi_conditional_results = results_dict
combi_conditional_dicts['run1']['nyc_model_combi.pickle']['settings']
```
# Compare results
```
combi_conditional_results
plt.figure(figsize=(15,10))
plt.plot(combi_conditional_dicts['run1']['nyc_model_combi.pickle']['logs']['test'])
plt.plot(combi_conditional_dicts['run1']['nyc_model_combi.pickle']['logs']['no_noise_losses'], '--')
plt.plot(combi_conditional_dicts['run1']['nyc_model_combi.pickle']['logs']['train'])
plt.ylim((0,5))
```
# Model visualization
## planar cond model
```
nyc_data_small_path = f"{dataset_folder}/NYC_yellow_taxi/processed_nyc_taxi_small.csv"
nyc_data_small = pd.read_csv(nyc_data_small_path)
obs_cols = ['dropoff_longitude', 'dropoff_latitude']
context_cols = ['pickup_longitude', 'pickup_latitude', 'pickup_dow_sin', 'pickup_dow_cos',
                'pickup_tod_sin', 'pickup_tod_cos']
train_dataloader, test_dataloader, obs_scaler, context_scaler = simple_data_split_conditional(
    df=nyc_data_small,
    obs_cols=obs_cols,
    context_cols=context_cols,
    batch_size=50000,
    cuda_exp=True)
combi_conditional_dicts[run]['nyc_model_combi.pickle']['settings']
x_bounds = [-74.04, -73.75]
y_bounds = [40.62, 40.86]
x_loc = np.mean(x_bounds)
y_loc = np.mean(y_bounds)
lower_left = [y_bounds[0], x_bounds[0]]
upper_right = [y_bounds[1], x_bounds[1]]
bounds = [lower_left, upper_right]
initial_location = [y_loc, x_loc]
```
# Conditional model
```
combi_cond_flow1 = combi_conditional_dicts['run1']['nyc_model_combi.pickle']['model']
combi_cond_flow2 = combi_conditional_dicts['run2']['nyc_model_combi.pickle']['model']
combi_cond_flow3 = combi_conditional_dicts['run3']['nyc_model_combi.pickle']['model']
combi_cond_flow1.dist.transforms
pickup_tod_cos, pickup_tod_sin = circle_transform(1,23)
pickup_dow_cos, pickup_dow_sin = circle_transform(1,6)
context = np.array([[-7.38950000e+01, 4.07400000e+01, pickup_dow_sin, pickup_dow_cos, pickup_tod_sin, pickup_tod_cos]])
print(context)
scaled_context = torch.tensor(context_scaler.transform(context)).type(torch.FloatTensor).cuda()
#scaled_context = torch.tensor([-0.4450, 0.1878, -1.2765, -0.8224, 1.4222, -0.8806]).float().cuda()
cond_dist = combi_cond_flow1.condition(scaled_context)
def plot_func(hour, dayofweek, pickup_lat, pickup_lon, flow):
    # Made for the context order:
    # ['pickup_longitude', 'pickup_latitude', 'pickup_dow_sin', 'pickup_dow_cos', 'pickup_tod_sin', 'pickup_tod_cos']
    flow.modules.eval()
    pickup_tod_cos, pickup_tod_sin = circle_transform(hour, 23)
    pickup_dow_cos, pickup_dow_sin = circle_transform(dayofweek, 6)
    context = np.array([[pickup_lon, pickup_lat, pickup_dow_sin, pickup_dow_cos, pickup_tod_sin, pickup_tod_cos]])
    print(context)
    scaled_context = torch.tensor(context_scaler.transform(context)).type(torch.FloatTensor).cuda()
    # scaled_context = torch.tensor([-0.4450, 0.1878, -1.2765, -0.8224, 1.4222, -0.8806]).float().cuda()
    cond_dist = flow.condition(scaled_context)
    x_bounds = [-74.04, -73.75]
    y_bounds = [40.62, 40.86]
    x_loc = np.mean(x_bounds)
    y_loc = np.mean(y_bounds)
    lower_left = [y_bounds[0], x_bounds[0]]
    upper_right = [y_bounds[1], x_bounds[1]]
    bounds = [lower_left, upper_right]
    initial_location = [y_loc, x_loc]
    hub_map = folium.Map(width=845, height=920, location=initial_location, zoom_start=12, zoom_control=False)
    cm = sns.cubehelix_palette(reverse=True, dark=0, light=1, gamma=0.8, as_cmap=True)
    create_overlay(shape=(800, 800), bounds=bounds, flow_dist=cond_dist, hub_map=hub_map, cm=cm, flip=True)
    # samples = rescale_samples(cond_dist, obs_scaler, n_samples=1000)
    # plot_samples(samples, hub_map)
    folium.CircleMarker(
        location=[pickup_lat, pickup_lon],
        radius=2,
        color='#3186cc',
        fill=True,
        fill_color='#3186cc',
        opacity=0.1).add_to(hub_map)
    # Add log prob at JFC
    jfc_coords = [40.645587, -73.787536]
    add_marker_w_prob(coords=jfc_coords, obs_scaler=obs_scaler, flow_dist=cond_dist, hub_map=hub_map, name='JFC')
    # Add log prob at LaGuardia
    lg_coords = [40.774277, -73.874258]
    add_marker_w_prob(coords=lg_coords, obs_scaler=obs_scaler, flow_dist=cond_dist, hub_map=hub_map, name='LaGuardia')
    display(hub_map)
    return hub_map
input_hour = 6
input_dayofweek = 1
input_pickup_lon =-73.89500000000001
input_pickup_lat = 40.739999999999995
hub_map = plot_func(hour=input_hour, dayofweek=input_dayofweek, pickup_lat=input_pickup_lat, pickup_lon=input_pickup_lon, flow=combi_cond_flow3)
input_hour = 18
input_dayofweek = 1
input_pickup_lon =-73.983313
input_pickup_lat = 40.733348
hub_map = plot_func(hour=input_hour, dayofweek=input_dayofweek, pickup_lat=input_pickup_lat, pickup_lon=input_pickup_lon, flow=combi_cond_flow3)
hub_map.save(f'{plot_folder}/nyc_combi_cond_manhattan.html')
input_hour = 18
input_dayofweek = 1
input_pickup_lon =-73.983313
input_pickup_lat = 40.733348
hub_map = plot_func(hour=input_hour, dayofweek=input_dayofweek, pickup_lat=input_pickup_lat, pickup_lon=input_pickup_lon, flow=combi_cond_flow2)
input_hour = 6
input_dayofweek = 5
input_pickup_lon =-73.983313
input_pickup_lat = 40.733348
hub_map = plot_func(hour=input_hour, dayofweek=input_dayofweek, pickup_lat=input_pickup_lat, pickup_lon=input_pickup_lon, flow=combi_cond_flow2)
interactive_plot = interactive(plot_func,
                               hour=(0, 23),
                               dayofweek=(0, 6),
                               pickup_lat=(y_bounds[0], y_bounds[1], 0.01),
                               pickup_lon=(x_bounds[0], x_bounds[1], 0.01),
                               flow=fixed(combi_cond_flow1)
                               )
hub_map = interactive_plot
display(hub_map)
input_hour = 17
input_dayofweek = 1
input_pickup_lon = -73.787536
input_pickup_lat = 40.645587
hub_map = plot_func(hour=input_hour, dayofweek=input_dayofweek, pickup_lat=input_pickup_lat, pickup_lon=input_pickup_lon, flow=combi_cond_flow3)
hub_map.save(f'{plot_folder}/nyc_combi_large_jfc.html')
```
# Path stuff 2
```
def plot_func(hour, dayofweek, pickup_lat, pickup_lon, flow):
    # Made for the context order:
    # ['pickup_longitude', 'pickup_latitude', 'pickup_dow_sin', 'pickup_dow_cos', 'pickup_tod_sin', 'pickup_tod_cos']
    flow.modules.eval()
    pickup_tod_cos, pickup_tod_sin = circle_transform(hour, 23)
    pickup_dow_cos, pickup_dow_sin = circle_transform(dayofweek, 6)
    context = np.array([[pickup_lon, pickup_lat, pickup_dow_sin, pickup_dow_cos, pickup_tod_sin, pickup_tod_cos]])
    print(context)
    scaled_context = torch.tensor(context_scaler.transform(context)).type(torch.FloatTensor).cuda()
    # scaled_context = torch.tensor([-0.4450, 0.1878, -1.2765, -0.8224, 1.4222, -0.8806]).float().cuda()
    cond_dist = flow.condition(scaled_context)
    x_bounds = [-73.95, -73.75]
    y_bounds = [40.62, 40.86]
    x_loc = np.mean(x_bounds)
    y_loc = np.mean(y_bounds)
    lower_left = [y_bounds[0], x_bounds[0]]
    upper_right = [y_bounds[1], x_bounds[1]]
    bounds = [lower_left, upper_right]
    # initial_location = [40.739999999999995, -73.85]
    initial_location = [40.720999999999995, -73.855]
    hub_map = folium.Map(width=550, height=650, location=initial_location, zoom_start=12, zoom_control=False)
    cm = sns.cubehelix_palette(reverse=True, dark=0, light=1, gamma=0.8, as_cmap=True)
    create_overlay(shape=(800, 800), bounds=bounds, flow_dist=cond_dist, hub_map=hub_map, cm=cm, flip=True)
    # samples = rescale_samples(cond_dist, obs_scaler, n_samples=1000)
    # plot_samples(samples, hub_map)
    folium.CircleMarker(
        location=[pickup_lat, pickup_lon],
        radius=2,
        color='#3186cc',
        fill=True,
        fill_color='#3186cc',
        opacity=0.1).add_to(hub_map)
    # Add log prob at JFC
    jfc_coords = [40.645587, -73.787536]
    add_marker_w_prob(coords=jfc_coords, obs_scaler=obs_scaler, flow_dist=cond_dist, hub_map=hub_map, name='JFC')
    # Add log prob at LaGuardia
    lg_coords = [40.774277, -73.874258]
    add_marker_w_prob(coords=lg_coords, obs_scaler=obs_scaler, flow_dist=cond_dist, hub_map=hub_map, name='LaGuardia')
    train_station1 = [40.6586078, -73.8052761]
    folium.CircleMarker(
        location=train_station1,
        radius=2,
        color='#3186cc',
        fill=True,
        fill_color='#3186cc',
        opacity=0.1).add_to(hub_map)
    train_station2 = [40.6612707, -73.8294946]
    folium.CircleMarker(
        location=train_station2,
        radius=2,
        color='#3186cc',
        fill=True,
        fill_color='#3186cc',
        opacity=0.1).add_to(hub_map)
    display(hub_map)
    return hub_map
input_hour = 18
input_dayofweek = 1
input_pickup_lon =-73.983313
input_pickup_lat = 40.733348
hub_map = plot_func(hour=input_hour, dayofweek=input_dayofweek, pickup_lat=input_pickup_lat, pickup_lon=input_pickup_lon, flow=combi_cond_flow3)
hub_map.save(f'{plot_folder}/nyc_combi_manhattan_path.html')
```
```
import numpy as np
from matplotlib import pyplot
def flux(psi_l, psi_r, C):
    return .5 * (C + abs(C)) * psi_l + \
           .5 * (C - abs(C)) * psi_r

def upwind(psi, i, C):
    return psi[i] - flux(psi[i], psi[i+one], C[i]) + \
           flux(psi[i-one], psi[i], C[i-one])

def C_corr(C, nx, psi):
    j = slice(0, nx-1)
    return (abs(C[j]) - C[j]**2) * (psi[j+one] - psi[j]) / (psi[j+one] + psi[j] + 1e-10)

class shift:
    def __radd__(self, i):
        return slice(i.start+1, i.stop+1)
    def __rsub__(self, i):
        return slice(i.start-1, i.stop-1)

one = shift()

def psi_0(x):
    a = 25
    return np.where(np.abs(x) < np.pi/2*a, np.cos(x/a)**2, 0)

def plot(x, psi_0, psi_T_a, psi_T_n, n_corr_it):
    pyplot.step(x, psi_0, label='initial', where='mid')
    pyplot.step(x, psi_T_a, label='analytical', where='mid')
    pyplot.step(x, psi_T_n, label='numerical', where='mid')
    pyplot.grid()
    pyplot.legend()
    pyplot.title(f'MPDATA {n_corr_it} corrective iterations')
    pyplot.show()
def solve(nx=75, nt=50, n_corr_it=2, make_plot=False):
    T = 50
    x_min, x_max = -50, 250
    C = .1
    dt = T / nt
    x, dx = np.linspace(x_min, x_max, nx, endpoint=False, retstep=True)
    v = C / dt * dx
    assert C <= 1
    i = slice(1, nx-2)
    C_phys = np.full(nx-1, C)
    psi = psi_0(x)
    for _ in range(nt):
        psi[i] = upwind(psi, i, C_phys)
        C_iter = C_phys
        for it in range(n_corr_it):
            C_iter = C_corr(C_iter, nx, psi)
            psi[i] = upwind(psi, i, C_iter)
    psi_true = psi_0(x - v*nt*dt)
    err = np.sqrt(sum(pow(psi - psi_true, 2)) / nt / nx)
    if make_plot:
        plot(x, psi_0(x), psi_T_a=psi_true, psi_T_n=psi, n_corr_it=n_corr_it)
    return dx, dt, err
solve(nx=150, nt=500, n_corr_it=0, make_plot=True)
for n_it in [0, 1, 2]:
    data = {'x': [], 'y': []}
    for nx in range(600, 1000, 50):
        dx, dt, err = solve(nx=nx, nt=300, n_corr_it=n_it)
        data['x'].append(np.log2(dx))
        data['y'].append(np.log2(err))
    pyplot.scatter(data['x'], data['y'], label=f"{n_it} corrective iters")
    pyplot.plot(data['x'], data['y'])

def line(k, offset, label=False):
    pyplot.plot(
        data['x'],
        k*np.array(data['x']) + offset,
        label=rf'$err \sim dx^{k}$' if label else '',
        linestyle=':',
        color='black'
    )

line(k=2, offset=-10.1, label=True)
line(k=3, offset=-9.1, label=True)
line(k=2, offset=-14)
line(k=3, offset=-13)
pyplot.legend()
pyplot.gca().set_xlabel('$log_2(dx)$')
pyplot.gca().set_ylabel('$log_2(err)$')
pyplot.grid()
```
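The `psi[i+one]` slice arithmetic in the stencil code above works because `one` is an instance of the `shift` helper, whose reflected operators displace a slice by one cell. A quick standalone demonstration:

```python
# The slice-shift trick: `i + one` yields the slice displaced by one
# cell, so finite-difference stencils read like index arithmetic.
class shift:
    def __radd__(self, i):
        return slice(i.start + 1, i.stop + 1)
    def __rsub__(self, i):
        return slice(i.start - 1, i.stop - 1)

one = shift()
i = slice(1, 4)
print(i + one, i - one)  # slice(2, 5, None) slice(0, 3, None)
```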
```
from urllib.request import urlopen
from bs4 import BeautifulSoup
html = urlopen('http://en.wikipedia.org/wiki/Kevin_Bacon')
bs = BeautifulSoup(html, 'html.parser')
for link in bs.find_all('a'):
    if 'href' in link.attrs:
        print(link.attrs['href'])
```
## Retrieving Articles Only
```
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
html = urlopen('http://en.wikipedia.org/wiki/Kevin_Bacon')
bs = BeautifulSoup(html, 'html.parser')
for link in bs.find('div', {'id': 'bodyContent'}).find_all(
        'a', href=re.compile('^(/wiki/)((?!:).)*$')):
    if 'href' in link.attrs:
        print(link.attrs['href'])
```
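The article filter relies on the regex `^(/wiki/)((?!:).)*$`, which keeps plain `/wiki/` paths and rejects namespaced pages (those containing a colon, like category or file pages). A quick offline check:

```python
import re

# Offline check of the article-link pattern: /wiki/ paths only,
# with no colon anywhere after the prefix.
article = re.compile('^(/wiki/)((?!:).)*$')
print(bool(article.match('/wiki/Kevin_Bacon')))      # True
print(bool(article.match('/wiki/Category:Actors')))  # False: colon in path
print(bool(article.match('/w/index.php?title=X')))   # False: not /wiki/
```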
## Random Walk
```
from urllib.request import urlopen
from bs4 import BeautifulSoup
import datetime
import random
import re
random.seed(datetime.datetime.now())
def getLinks(articleUrl):
    html = urlopen('http://en.wikipedia.org{}'.format(articleUrl))
    bs = BeautifulSoup(html, 'html.parser')
    return bs.find('div', {'id': 'bodyContent'}).find_all('a', href=re.compile('^(/wiki/)((?!:).)*$'))

links = getLinks('/wiki/Kevin_Bacon')
while len(links) > 0:
    newArticle = links[random.randint(0, len(links)-1)].attrs['href']
    print(newArticle)
    links = getLinks(newArticle)
```
## Recursively crawling an entire site
```
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
pages = set()
def getLinks(pageUrl):
    global pages
    html = urlopen('http://en.wikipedia.org{}'.format(pageUrl))
    bs = BeautifulSoup(html, 'html.parser')
    for link in bs.find_all('a', href=re.compile('^(/wiki/)')):
        if 'href' in link.attrs:
            if link.attrs['href'] not in pages:
                # We have encountered a new page
                newPage = link.attrs['href']
                print(newPage)
                pages.add(newPage)
                getLinks(newPage)

getLinks('')
```
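Note that this depth-first recursion will hit Python's default recursion limit (about 1000 frames) on a site the size of Wikipedia long before it runs out of pages. A sketch of the same traversal written iteratively with an explicit queue, run here against a made-up in-memory link graph instead of live pages:

```python
from collections import deque

# Hypothetical link graph standing in for urlopen + BeautifulSoup.
link_graph = {
    '': ['/wiki/A', '/wiki/B'],
    '/wiki/A': ['/wiki/B', '/wiki/C'],
    '/wiki/B': ['/wiki/A'],
    '/wiki/C': [],
}

def crawl(start):
    # Breadth-first traversal with an explicit queue: no recursion limit.
    pages, queue = set(), deque([start])
    while queue:
        page = queue.popleft()
        for link in link_graph.get(page, []):
            if link not in pages:
                pages.add(link)
                queue.append(link)
    return pages

print(sorted(crawl('')))  # ['/wiki/A', '/wiki/B', '/wiki/C']
```

Swapping `popleft()` for `pop()` turns this into a depth-first crawl with the same memory-safe behavior.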
## Collecting Data Across an Entire Site
```
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
pages = set()
def getLinks(pageUrl):
    global pages
    html = urlopen('http://en.wikipedia.org{}'.format(pageUrl))
    bs = BeautifulSoup(html, 'html.parser')
    try:
        print(bs.h1.get_text())
        print(bs.find(id='mw-content-text').find_all('p')[0])
        print(bs.find(id='ca-edit').find('span').find('a').attrs['href'])
    except AttributeError:
        print('This page is missing something! Continuing.')
    for link in bs.find_all('a', href=re.compile('^(/wiki/)')):
        if 'href' in link.attrs:
            if link.attrs['href'] not in pages:
                # We have encountered a new page
                newPage = link.attrs['href']
                print('-'*20)
                print(newPage)
                pages.add(newPage)
                getLinks(newPage)

getLinks('')
```
## Crawling across the Internet
```
from urllib.request import urlopen
from urllib.parse import urlparse
from bs4 import BeautifulSoup
import re
import datetime
import random
pages = set()
random.seed(datetime.datetime.now())
# Retrieves a list of all internal links found on a page
def getInternalLinks(bs, includeUrl):
    includeUrl = '{}://{}'.format(urlparse(includeUrl).scheme, urlparse(includeUrl).netloc)
    internalLinks = []
    # Finds all links that begin with a "/"
    for link in bs.find_all('a', href=re.compile('^(/|.*'+includeUrl+')')):
        if link.attrs['href'] is not None:
            if link.attrs['href'] not in internalLinks:
                if link.attrs['href'].startswith('/'):
                    internalLinks.append(includeUrl+link.attrs['href'])
                else:
                    internalLinks.append(link.attrs['href'])
    return internalLinks

# Retrieves a list of all external links found on a page
def getExternalLinks(bs, excludeUrl):
    externalLinks = []
    # Finds all links that start with "http" that do
    # not contain the current URL
    for link in bs.find_all('a', href=re.compile('^(http|www)((?!'+excludeUrl+').)*$')):
        if link.attrs['href'] is not None:
            if link.attrs['href'] not in externalLinks:
                externalLinks.append(link.attrs['href'])
    return externalLinks

def getRandomExternalLink(startingPage):
    html = urlopen(startingPage)
    bs = BeautifulSoup(html, 'html.parser')
    externalLinks = getExternalLinks(bs, urlparse(startingPage).netloc)
    if len(externalLinks) == 0:
        print('No external links, looking around the site for one')
        domain = '{}://{}'.format(urlparse(startingPage).scheme, urlparse(startingPage).netloc)
        internalLinks = getInternalLinks(bs, domain)
        return getRandomExternalLink(internalLinks[random.randint(0, len(internalLinks)-1)])
    else:
        return externalLinks[random.randint(0, len(externalLinks)-1)]

def followExternalOnly(startingSite):
    externalLink = getRandomExternalLink(startingSite)
    print('Random external link is: {}'.format(externalLink))
    followExternalOnly(externalLink)

followExternalOnly('http://oreilly.com')
```
## Collect all External Links from a Site
```
# Collects a list of all external URLs found on the site
allExtLinks = set()
allIntLinks = set()
def getAllExternalLinks(siteUrl):
    html = urlopen(siteUrl)
    domain = '{}://{}'.format(urlparse(siteUrl).scheme, urlparse(siteUrl).netloc)
    bs = BeautifulSoup(html, 'html.parser')
    internalLinks = getInternalLinks(bs, domain)
    externalLinks = getExternalLinks(bs, domain)
    for link in externalLinks:
        if link not in allExtLinks:
            allExtLinks.add(link)
            print(link)
    for link in internalLinks:
        if link not in allIntLinks:
            allIntLinks.add(link)
            getAllExternalLinks(link)

allIntLinks.add('http://oreilly.com')
getAllExternalLinks('http://oreilly.com')
```
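One caveat with the recursive crawlers above: Python's default recursion limit (about 1000 frames) is easy to hit on a large site, since every newly discovered page adds a stack frame. An iterative traversal with an explicit queue avoids this. The sketch below stubs out the fetch step with a hypothetical in-memory link graph so only the traversal logic is shown; in real code the inner loop would fetch and parse each page as above.

```
# Iterative crawl: a deque instead of recursion, so depth is not limited by
# the call stack. link_graph is a made-up stand-in for "links found on a page".
from collections import deque

link_graph = {
    '/wiki/A': ['/wiki/B', '/wiki/C'],
    '/wiki/B': ['/wiki/C', '/wiki/D'],
    '/wiki/C': [],
    '/wiki/D': ['/wiki/A'],
}

def crawl(start):
    pages = set()
    queue = deque([start])
    while queue:
        page = queue.popleft()  # popleft() = breadth-first; pop() would be depth-first
        if page in pages:
            continue
        pages.add(page)
        for link in link_graph.get(page, []):  # real code would fetch + parse here
            if link not in pages:
                queue.append(link)
    return pages

crawl('/wiki/A')  # visits all four pages exactly once
```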
```
import hail as hl
```
# *set up dataset*
```
# read in the dataset Zan produced
# metadata from Alicia + sample QC metadata from Julia + densified mt from Konrad
# no samples or variants removed yet
mt = hl.read_matrix_table('gs://african-seq-data/hgdp_tgp/hgdp_tgp_dense_meta_preQC.mt') # 211358784 snps & 4151 samples
# read in variant QC metadata
var_meta = hl.read_table('gs://gcp-public-data--gnomad/release/3.1.1/ht/genomes/gnomad.genomes.v3.1.1.sites.ht')
# annotate variant QC metadata onto mt
mt = mt.annotate_rows(**var_meta[mt.locus, mt.alleles])
# read in the new dataset (including samples that were removed unknowngly)
mt_post = hl.read_matrix_table('gs://african-seq-data/hgdp_tgp/new_hgdp_tgp_postQC.mt') # (155648020, 4099)
```
# *gnomAD filter QC*
```
# reformat the gnomAD sample-filter names and collect them in a set so that filtering the matrixTable with difference() works cleanly later
# create a set of the gnomAD QC filters (column names under 'sample_filters') - looks like: {'sex_aneuploidy', 'insert_size', ...} (unordered)
all_sample_filters = set(mt['sample_filters'])
import re  # for renaming purposes
# bad_sample_filters are filters that removed whole populations (mostly AFR and OCE popns) despite those samples passing all other gnomAD filters
# strip the 'fail_' prefix from the 9 filter names that start with it
bad_sample_filters = {re.sub('fail_', '', x) for x in all_sample_filters if x.startswith('fail_')}
# keep only samples that passed all gnomAD QC, or failed nothing but filters in bad_sample_filters
# 'qc_metrics_filters' (under 'sample_filters') holds the set of QC filters each sample failed; it is empty for samples that passed everything
# difference() removes the bad_sample_filters entries, so the remaining set has length 0 exactly when a sample failed no other filter - those samples are kept
# any sample that failed a filter outside bad_sample_filters is removed
# same as gs://african-seq-data/hgdp_tgp/hgdp_tgp_dense_meta_filt.mt - 211358784 snps & 4120 samples
mt_filt = mt.filter_cols(mt['sample_filters']['qc_metrics_filters'].difference(bad_sample_filters).length() == 0)
# How many samples were removed by the initial QC?
print('Num of samples before initial QC = ' + str(mt.count()[1])) # 4151
print('Num of samples after initial QC = ' + str(mt_filt.count()[1])) # 4120
print('Samples removed = ' + str(mt.count()[1] - mt_filt.count()[1])) # 31
```
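The set logic behind the filter above can be checked with plain Python sets. The sample IDs and failure sets below are made up for the example ('sex_aneuploidy' is one of the gnomAD filter names mentioned earlier):

```
# Plain-set illustration of the qc_metrics_filters / bad_sample_filters logic.
bad_sample_filters = {'n_snp', 'r_ti_tv'}  # failures we are willing to ignore

failed = {
    's1': set(),                        # passed everything            -> keep
    's2': {'n_snp'},                    # failed only an ignorable one -> keep
    's3': {'n_snp', 'sex_aneuploidy'},  # failed a real filter         -> drop
}

kept = [s for s, f in failed.items()
        if len(f.difference(bad_sample_filters)) == 0]
# kept == ['s1', 's2']
```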
# *remove duplicate sample*
```
# duplicate sample - NA06985
mt_filt = mt_filt.distinct_by_col()
print('Num of samples after removal of duplicate sample = ' + str(mt_filt.count()[1])) # 4119
```
# *keep only PASS variants*
```
# subset to only PASS variants (those which passed variant QC) ~6min to run
mt_filt = mt_filt.filter_rows(hl.len(mt_filt.filters) != 0, keep=False)
print('Num of only PASS variants = ' + str(mt_filt.count()[0])) # 155648020
```
# *variant filter and ld pruning*
```
# run common variant statistics (quality control metrics) - more info https://hail.is/docs/0.2/methods/genetics.html#hail.methods.variant_qc
mt_var = hl.variant_qc(mt_filt)
# trying to get down to ~100-300k SNPs - might need to change values later accordingly
# AF: allele freq and call_rate: fraction of calls neither missing nor filtered
# mt.variant_qc.AF[0] is referring to the first element of the list under that column field
mt_var_filt = mt_var.filter_rows((mt_var.variant_qc.AF[0] > 0.05) & (mt_var.variant_qc.AF[0] < 0.95) & (mt_var.variant_qc.call_rate > 0.999))
# ~20min to run
mt_var_filt.count() # started with 155648020 snps and ended up with 6787034 snps
# LD pruning (~113 min to run)
pruned = hl.ld_prune(mt_var_filt.GT, r2=0.1, bp_window_size=500000)
# subset data even further
mt_var_pru_filt = mt_var_filt.filter_rows(hl.is_defined(pruned[mt_var_filt.row_key]))
# write out the output as a temp file - make sure to save the file on this step b/c the pruning step takes a while to run
# saving took ~23 min
mt_var_pru_filt.write('gs://african-seq-data/hgdp_tgp/filtered_n_pruned_output_updated.mt', overwrite=False)
# after saving the pruned file to the cloud, reading it back in for the next steps
mt_var_pru_filt = hl.read_matrix_table('gs://african-seq-data/hgdp_tgp/filtered_n_pruned_output_updated.mt')
# how many snps are left after filtering and prunning?
mt_var_pru_filt.count() # 248,634 snps
# between ~100-300k so we proceed without any value adjustments
```
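The row filter above keeps variants with 0.05 < AF < 0.95 and call rate > 0.999. The same mask, expressed directly in NumPy on made-up values:

```
import numpy as np

# Toy AF / call-rate values (made up) run through the same thresholds as the
# filter_rows call above.
af        = np.array([0.01, 0.10, 0.50, 0.96, 0.30])
call_rate = np.array([1.0,  1.0,  0.99, 1.0,  0.9995])

keep = (af > 0.05) & (af < 0.95) & (call_rate > 0.999)
# keep == [False, True, False, False, True]
```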
# *run pc_relate*
```
# compute relatedness estimates between individuals using a variant of the PC-Relate method (https://hail.is/docs/0.2/methods/relatedness.html#hail.methods.pc_relate)
# only compute the kinship statistic using:
# a minimum minor allele frequency filter of 0.05,
# excluding sample-pairs with kinship less than 0.05, and
# 20 principal components to control for population structure
# a hail table is produced (~4min to run)
relatedness_ht = hl.pc_relate(mt_var_pru_filt.GT, min_individual_maf=0.05, min_kinship=0.05, statistics='kin', k=20).key_by()
# write out result - for Julia (~2hr 19min to run)
# includes i – first sample, j – second sample, and kin – kinship estimate
relatedness_ht.write('gs://african-seq-data/hgdp_tgp/relatedness.ht')
# read back in
relatedness_ht = hl.read_table('gs://african-seq-data/hgdp_tgp/relatedness.ht')
# identify related individuals in pairs to remove - returns a list of sample IDs (~2hr & 22 min to run) - previous one took ~13min
related_samples_to_remove = hl.maximal_independent_set(relatedness_ht.i, relatedness_ht.j, False)
# unkey table for exporting purposes - for Julia
unkeyed_tbl = related_samples_to_remove.expand_types()
# export sample IDs of related individuals
unkeyed_tbl.node.s.export('gs://african-seq-data/hgdp_tgp/related_sample_ids.txt', header=False)
# import back to see if format is correct
#tbl = hl.import_table('gs://african-seq-data/hgdp_tgp/related_sample_ids.txt', impute=True, no_header=True)
#tbl.show()
# using sample IDs (col_key of the matrixTable), pick out the samples that are not found in 'related_samples_to_remove' (had 'False' values for the comparison)
# subset the mt to those only
mt_unrel = mt_var_pru_filt.filter_cols(hl.is_defined(related_samples_to_remove[mt_var_pru_filt.col_key]), keep=False)
# do the same as above but this time for the samples with 'True' values (found in 'related_samples_to_remove')
mt_rel = mt_var_pru_filt.filter_cols(hl.is_defined(related_samples_to_remove[mt_var_pru_filt.col_key]), keep=True)
# write out mts of unrelated and related samples on to the cloud
# unrelated mt
mt_unrel.write('gs://african-seq-data/hgdp_tgp/unrel_updated.mt', overwrite=False)
# related mt
mt_rel.write('gs://african-seq-data/hgdp_tgp/rel_updated.mt', overwrite=False)
# read saved mts back in
# unrelated mt
mt_unrel = hl.read_matrix_table('gs://african-seq-data/hgdp_tgp/unrel_updated.mt')
# related mt
mt_rel = hl.read_matrix_table('gs://african-seq-data/hgdp_tgp/rel_updated.mt')
```
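`hl.maximal_independent_set` treats related pairs as graph edges and repeatedly drops the most-connected sample until no related pair survives. As a conceptual sketch only (not Hail's exact implementation), a greedy version on a made-up kinship edge list:

```
# Greedy sketch of maximal_independent_set on a toy relatedness edge list
# (sample pairs are made up). The returned set is the samples to remove;
# everything else forms an independent (mutually unrelated) set.
from collections import defaultdict

edges = [('s1', 's2'), ('s2', 's3'), ('s4', 's5')]  # related pairs

def related_to_remove(edges):
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    removed = set()
    # while any related pair survives, drop the most-connected sample
    while any(a not in removed and b not in removed for a, b in edges):
        worst = max((s for s in adj if s not in removed),
                    key=lambda s: len(adj[s] - removed))
        removed.add(worst)
    return removed

related_to_remove(edges)  # s2 goes first (degree 2), then one of s4/s5
```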
# PCA
# *run pca*
```
def run_pca(mt: hl.MatrixTable, reg_name: str, out_prefix: str, overwrite: bool = False):
    """
    Runs PCA on a dataset
    :param mt: dataset to run PCA on
    :param reg_name: region name for saving output purposes
    :param out_prefix: path for where to save the outputs
    :return:
    """
    pca_evals, pca_scores, pca_loadings = hl.hwe_normalized_pca(mt.GT, k=20, compute_loadings=True)
    pca_mt = mt.annotate_rows(pca_af=hl.agg.mean(mt.GT.n_alt_alleles()) / 2)
    pca_loadings = pca_loadings.annotate(pca_af=pca_mt.rows()[pca_loadings.key].pca_af)
    pca_scores = pca_scores.transmute(**{f'PC{i}': pca_scores.scores[i - 1] for i in range(1, 21)})
    pca_scores.export(out_prefix + reg_name + '_scores.txt.bgz')  # save individual-level genetic region PCs
    pca_loadings.write(out_prefix + reg_name + '_loadings.ht', overwrite)  # save PCA loadings
```
# *project related individuals*
```
# if running on GCP, need to add "--packages gnomad" when starting a cluster in order for the import to work
from gnomad.sample_qc.ancestry import *

def project_individuals(pca_loadings, project_mt, reg_name: str, out_prefix: str, overwrite: bool = False):
    """
    Project samples into predefined PCA space
    :param pca_loadings: existing PCA space - unrelated samples
    :param project_mt: matrixTable of data to project - related samples
    :param reg_name: region name for saving output purposes
    :param out_prefix: path for where to save PCA projection outputs
    :return:
    """
    ht_projections = pc_project(project_mt, pca_loadings)
    ht_projections = ht_projections.transmute(**{f'PC{i}': ht_projections.scores[i - 1] for i in range(1, 21)})
    ht_projections.export(out_prefix + reg_name + '_projected_scores.txt.bgz')  # save output
    # return ht_projections  # return to user
```
# *global pca*
```
# run 'run_pca' function for global pca
run_pca(mt_unrel, 'global', 'gs://african-seq-data/hgdp_tgp/pca_preoutlier/', False)
# run the 'project_individuals' function for global pca
loadings = hl.read_table('gs://african-seq-data/hgdp_tgp/pca_preoutlier/global_loadings.ht')  # read in the PCA loadings that were obtained from the 'run_pca' function
project_individuals(loadings, mt_rel, 'global', 'gs://african-seq-data/hgdp_tgp/pca_preoutlier/', False)
```
# *subcontinental pca*
```
# obtain a list of the genetic regions in the dataset - used the unrelated dataset since it had more samples
regions = mt_unrel['hgdp_tgp_meta']['Genetic']['region'].collect()
regions = list(dict.fromkeys(regions)) # 7 regions - ['EUR', 'AFR', 'AMR', 'EAS', 'CSA', 'OCE', 'MID']
# set argument values
subcont_pca_prefix = 'gs://african-seq-data/hgdp_tgp/pca_preoutlier/subcont_pca/subcont_pca_' # path for outputs
overwrite = False
# run the 'run_pca' function for each region - the notebook freezes after printing the log for AMR
# don't restart it; just let it run and follow the progress through the Spark UI
# even after all the outputs are produced and the run is complete, the code cell will still appear to be running (* in the left bracket)
# confirm completion by checking the output files in the Google Cloud bucket or the Spark UI
# once the desired outputs are generated and the run is done, exit the current notebook, open a new session, and proceed to the next step
# ~27min to run
for i in regions:
    subcont_unrel = mt_unrel.filter_cols(mt_unrel['hgdp_tgp_meta']['Genetic']['region'] == i)  # filter the unrelateds per region
    run_pca(subcont_unrel, i, subcont_pca_prefix, overwrite)

# run the 'project_individuals' function for each region (~2min to run)
for i in regions:
    loadings = hl.read_table(subcont_pca_prefix + i + '_loadings.ht')  # for each region, read in the PCA loadings that were obtained from the 'run_pca' function
    subcont_rel = mt_rel.filter_cols(mt_rel['hgdp_tgp_meta']['Genetic']['region'] == i)  # filter the relateds per region
    project_individuals(loadings, subcont_rel, i, subcont_pca_prefix, overwrite)
```
# *outlier removal*
#### After plotting the PCs, 22 outliers that need to be removed were identified (the table below will be completed for the final report)
| s | Genetic region | Population | Note |
| --- | --- | --- | -- |
| NA20314 | AFR | ASW | Clusters with AMR in global PCA |
| NA20299 | - | - | - |
| NA20274 | - | - | - |
| HG01880 | - | - | - |
| HG01881 | - | - | - |
| HG01628 | - | - | - |
| HG01629 | - | - | - |
| HG01630 | - | - | - |
| HG01694 | - | - | - |
| HG01696 | - | - | - |
| HGDP00013 | - | - | - |
| HGDP00150 | - | - | - |
| HGDP00029 | - | - | - |
| HGDP01298 | - | - | - |
| HGDP00130 | CSA | Makrani | Closer to AFR than most CSA |
| HGDP01303 | - | - | - |
| HGDP01300 | - | - | - |
| HGDP00621 | MID | Bedouin | Closer to AFR than most MID |
| HGDP01270 | MID | Mozabite | Closer to AFR than most MID |
| HGDP01271 | MID | Mozabite | Closer to AFR than most MID |
| HGDP00057 | - | - | - |
| LP6005443-DNA_B02 | - | - | - |
```
# read back in the unrelated and related mts to remove outliers and run pca
mt_unrel_unfiltered = hl.read_matrix_table('gs://african-seq-data/hgdp_tgp/unrel_updated.mt') # unrelated mt
mt_rel_unfiltered = hl.read_matrix_table('gs://african-seq-data/hgdp_tgp/rel_updated.mt') # related mt
# read the outliers file into a list
with hl.utils.hadoop_open('gs://african-seq-data/hgdp_tgp/pca_outliers_v2.txt') as file:
    outliers = [line.rstrip('\n') for line in file]
# capture and broadcast the list as an expression
outliers_list = hl.literal(outliers)
# remove 22 outliers
mt_unrel = mt_unrel_unfiltered.filter_cols(~outliers_list.contains(mt_unrel_unfiltered['s']))
mt_rel = mt_rel_unfiltered.filter_cols(~outliers_list.contains(mt_rel_unfiltered['s']))
# sanity check
print('Unrelated: Before filtering ' + str(mt_unrel_unfiltered.count()[1]) + ' | After filtering ' + str(mt_unrel.count()[1]))
print('Related: Before filtering: ' + str(mt_rel_unfiltered.count()[1]) + ' | After filtering ' + str(mt_rel.count()[1]))
num_outliers = (mt_unrel_unfiltered.count()[1] - mt_unrel.count()[1]) + (mt_rel_unfiltered.count()[1] - mt_rel.count()[1])
print('Total samples removed = ' + str(num_outliers))
```
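The `~outliers_list.contains(s)` filter above boils down to simple set membership. A plain-Python sketch (the two outlier IDs come from the table above; the kept IDs are made up):

```
# Keep only samples whose ID is not in the outlier list - the same logic the
# Hail filter_cols call applies at scale.
samples = ['HG00001', 'NA20314', 'HG00002', 'HGDP00130']  # first/third made up
outliers = {'NA20314', 'HGDP00130'}  # two of the 22 outliers listed above

kept = [s for s in samples if s not in outliers]
# kept == ['HG00001', 'HG00002']
```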
# rerun PCA
### The following steps are the same as those run before outlier removal, except they use the updated unrelated & related datasets and a new GCS bucket path for saving the outputs
# *global pca*
```
# run the 'run_pca' function for global pca - make sure the code block for the function (located above) is run prior to running this
run_pca(mt_unrel, 'global', 'gs://african-seq-data/hgdp_tgp/pca_postoutlier/', False)
# run the 'project_individuals' function for global pca - make sure the code block for the function (located above) is run prior to running this
loadings = hl.read_table('gs://african-seq-data/hgdp_tgp/pca_postoutlier/global_loadings.ht')  # read in the PCA loadings that were obtained from the 'run_pca' function
project_individuals(loadings, mt_rel, 'global', 'gs://african-seq-data/hgdp_tgp/pca_postoutlier/', False)
```
# *subcontinental pca*
```
# obtain a list of the genetic regions in the dataset - used the unrelated dataset since it had more samples
regions = mt_unrel['hgdp_tgp_meta']['Genetic']['region'].collect()
regions = list(dict.fromkeys(regions)) # 7 regions - ['EUR', 'AFR', 'AMR', 'EAS', 'CSA', 'OCE', 'MID']
# set argument values
subcont_pca_prefix = 'gs://african-seq-data/hgdp_tgp/pca_postoutlier/subcont_pca/subcont_pca_' # path for outputs
overwrite = False
# run the 'run_pca' function (located above) for each region
# the notebook became slow and got stuck - don't restart it; just let it run and follow the progress through the Spark UI
# after confirming the desired outputs were generated (GCS bucket) and the run is done (Spark UI), exit the current notebook, open a new session, and proceed to the next step
# took roughly 25-27 min
for i in regions:
    subcont_unrel = mt_unrel.filter_cols(mt_unrel['hgdp_tgp_meta']['Genetic']['region'] == i)  # filter the unrelateds per region
    run_pca(subcont_unrel, i, subcont_pca_prefix, overwrite)

# run the 'project_individuals' function (located above) for each region - took ~3min
for i in regions:
    loadings = hl.read_table(subcont_pca_prefix + i + '_loadings.ht')  # for each region, read in the PCA loadings that were obtained from the 'run_pca' function
    subcont_rel = mt_rel.filter_cols(mt_rel['hgdp_tgp_meta']['Genetic']['region'] == i)  # filter the relateds per region
    project_individuals(loadings, subcont_rel, i, subcont_pca_prefix, overwrite)
```
# FST
### For FST, we are using the data we had prior to running pc_relate (*filtered_n_pruned_output_updated.mt*)
```
# read filtered and pruned mt (prior to pc_relate) back in for FST analysis
mt_var_pru_filt = hl.read_matrix_table('gs://african-seq-data/hgdp_tgp/filtered_n_pruned_output_updated.mt')
# num of samples before outlier removal
print('Before filtering: ' + str(mt_var_pru_filt.count()[1]))
# read the outliers file into a list
with hl.utils.hadoop_open('gs://african-seq-data/hgdp_tgp/pca_outliers_v2.txt') as file:
    outliers = [line.rstrip('\n') for line in file]
# capture and broadcast the list as an expression
outliers_list = hl.literal(outliers)
# remove 22 outliers
mt_var_pru_filt = mt_var_pru_filt.filter_cols(~outliers_list.contains(mt_var_pru_filt['s']))
# sanity check
print('After filtering: ' + str(mt_var_pru_filt.count()[1]))
```
## *pair-wise comparison*
Formula for the number of pair-wise comparisons: k(k-1)/2.
In our case, with 78 populations, we expect (78 * 77)/2 = 6006/2 = 3003 pair-wise comparisons.
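As a cross-check, the enumerate-based double loop used below is equivalent to `itertools.combinations`, which yields each unordered pair exactly once:

```
# combinations(pops, 2) produces every unordered pair once, matching the
# i < j condition in the list comprehension below. Population labels here are
# made-up stand-ins for the real 78 population names.
from itertools import combinations

pops = [f'pop{i}' for i in range(78)]
pairs = [list(p) for p in combinations(pops, 2)]

len(pairs)  # 78 * 77 / 2 == 3003
```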
```
pop = mt_var_pru_filt['hgdp_tgp_meta']['Population'].collect()
pop = list(dict.fromkeys(pop))
len(pop) # 78 populations in total
# example
ex = ['a','b','c']
# pair-wise comparison
ex_pair_com = [[x,y] for i, x in enumerate(ex) for j,y in enumerate(ex) if i<j]
ex_pair_com
# pair-wise comparison - creating list of lists
# enumerate gives index values for each population in the 'pop' list (ex. 0 CEU, 1 YRI, 2 LWK ...) and then by
# comparing those index values, we create a pair-wise comparison between the populations
# i < j so that it only does a single comparison among two different populations
# ex. for a comparison between populations CEU and YRI, it only keeps CEU-YRI and discards YRI-CEU, CEU-CEU and YRI-YRI
pair_com = [[x,y] for i, x in enumerate(pop) for j,y in enumerate(pop) if i<j]
# first 5 elements in the list
pair_com[0:5]
# sanity check
len(pair_com)
```
## *subset mt into popns according to the pair-wise comparisons and run common variant statistics*
```
pair_com[0]
## example - pair_com[0] = ['CEU', 'YRI'] and pair_com[0][0] = 'CEU'
CEU_mt = mt_var_pru_filt.filter_cols(mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pair_com[0][0])
YRI_mt = mt_var_pru_filt.filter_cols(mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pair_com[0][1])
CEU_YRI_mt = mt_var_pru_filt.filter_cols((mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pair_com[0][0]) | (mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pair_com[0][1]))
# sanity check
CEU_mt.count()[1] + YRI_mt.count()[1] == CEU_YRI_mt.count()[1] # 175 + 170 = 345
# run common variant statistics for each population and their combined mt
CEU_var = hl.variant_qc(CEU_mt) # individual
YRI_var = hl.variant_qc(YRI_mt) # individual
CEU_YRI_var = hl.variant_qc(CEU_YRI_mt) # total
```
### *Set up mt table for FST calculation - the next code is run for each population and their combos*
##### *population 1*
```
# drop certain fields first to make mt smaller
# drop all entry fields
# everything except for 's' (key) from the column fields
# everything from the row fields except for the keys -'locus' and 'alleles' and row field 'variant_qc'
CEU_interm = CEU_var.drop(*list(CEU_var.entry), *list(CEU_var.col)[1:], *list(CEU_var.row)[2:-1])
# only select the row field keys (locus and allele) and row fields 'AF' & 'AN' which are under 'variant_qc'
CEU_interm2 = CEU_interm.select_rows(CEU_interm['variant_qc']['AF'], CEU_interm['variant_qc']['AN'])
# quick look at the condensed mt
CEU_interm2.describe()
CEU_interm2.rows().show(5)
# only include the second entry of the array from the row field 'AF'
CEU_interm3 = CEU_interm2.transmute_rows(AF = CEU_interm2.AF[1])
# previous code
# key the rows only by 'locus' so that the 'allele' row field can be split into two row fields (one for each allele)
# also, only include the second entry of the array from 'AF' row field
#CEU_interm3 = CEU_interm2.key_rows_by('locus')
#CEU_interm3 = CEU_interm3.transmute_rows(AF = CEU_interm3.AF[1], A1 = CEU_interm3.alleles[0], A2 = CEU_interm3.alleles[1])
# add a row field with population name to keep track of which mt it came from
CEU_final = CEU_interm3.annotate_rows(pop = pair_com[0][0])
CEU_final.rows().show(5)
```
##### *population 2*
```
# drop fields
# drop all entry fields
# everything except for 's' (key) from the column fields
# everything from the row fields except for the keys -'locus' and 'alleles' and row field 'variant_qc'
CEU_YRI_interm = CEU_YRI_var.drop(*list(CEU_YRI_var.entry), *list(CEU_YRI_var.col)[1:], *list(CEU_YRI_var.row)[2:-1])
# only select the row field keys (locus and allele) and row fields 'AF' & 'AN' which are under 'variant_qc'
CEU_YRI_interm2 = CEU_YRI_interm.select_rows(CEU_YRI_interm['variant_qc']['AF'], CEU_YRI_interm['variant_qc']['AN'])
# quick look at the condensed mt
CEU_YRI_interm2.describe()
CEU_YRI_interm2.rows().show(5)
# only include the second entry of the array from the row field 'AF'
CEU_YRI_interm3 = CEU_YRI_interm2.transmute_rows(AF = CEU_YRI_interm2.AF[1])
# previous code
# key the rows only by 'locus' so that the 'allele' row field can be split into two row fields (one for each allele)
# also, only include the second entry of the array from 'AF' row field
#CEU_YRI_interm3 = CEU_YRI_interm2.key_rows_by('locus')
#CEU_YRI_interm3 = CEU_YRI_interm3.transmute_rows(AF = CEU_YRI_interm3.AF[1], A1 = CEU_YRI_interm3.alleles[0], A2 = CEU_YRI_interm3.alleles[1])
# add a row field with population name to keep track of which mt it came from
CEU_YRI_final = CEU_YRI_interm3.annotate_rows(pop = f'{pair_com[0][0]}-{pair_com[0][1]}')
CEU_YRI_final.rows().show(5)
```
### *FST formula pre-setup* - trial run
#### *Variables needed for FST calculation*
```
# convert lists into numpy arrays because they are easier to work with and more readable
import numpy as np
# assign populations to formula variables
pop1 = CEU_final
pop2 = CEU_YRI_final
# number of alleles
n1 = np.array(pop1.AN.collect())
n2 = np.array(pop2.AN.collect())
# allele frequencies
FREQpop1 = np.array(pop1.AF.collect())
FREQpop2 = np.array(pop2.AF.collect())
```
#### *Weighted average allele frequency*
```
FREQ = ((n1*FREQpop1) + (n2*FREQpop2)) / (n1+n2)
# sanity checks
print(((n1[0]*FREQpop1[0]) + (n2[0]*FREQpop2[0])) / (n1[0]+n2[0]) == FREQ[0])
print(len(FREQ) == len(FREQpop1)) # length of output should be equal to the length of arrays we started with
```
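The weighted average can be spot-checked with made-up numbers; each population's allele frequency is weighted by its allele count AN:

```
import numpy as np

# Two toy variants (values made up): AF weighted by allele count AN.
n1, n2 = np.array([100, 100]), np.array([300, 100])
f1, f2 = np.array([0.2, 0.5]), np.array([0.6, 0.5])

freq = ((n1 * f1) + (n2 * f2)) / (n1 + n2)
# first variant: (100*0.2 + 300*0.6) / 400 = 200/400 = 0.5
```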
#### *Filter to only freqs between 0 and 1*
```
INCLUDE=(FREQ>0) & (FREQ<1) # only include ave freq between 0 and 1 - started with FREQ = 248634
print(np.count_nonzero(INCLUDE)) # 246984 ave freq values were between 0 and 1 - returned True to the conditions above; 248634 - 246984 = 1650 were False
# subset allele frequencies
FREQpop1=FREQpop1[INCLUDE]
FREQpop2=FREQpop2[INCLUDE]
FREQ=FREQ[INCLUDE]
# sanity check
print(len(FREQpop1) == np.count_nonzero(INCLUDE)) # TRUE
# subset the number of alleles
n1 = n1[INCLUDE]
n2 = n2[INCLUDE]
# sanity check
print(len(n1) == np.count_nonzero(INCLUDE)) # TRUE
```
#### *FST Estimate - W&C ESTIMATOR*
```
s = 2  # s is the number of populations - always 2 for pair-wise FST
# average sample size that incorporates variance
nc = ((1/(s-1)) * (n1+n2)) - ((np.square(n1) + np.square(n2))/(n1+n2))
msa = (1/(s-1))*((n1*(np.square(FREQpop1-FREQ)))+(n2*(np.square(FREQpop2-FREQ))))
msw = (1/((n1-1)+(n2-1))) * ((n1*(FREQpop1*(1-FREQpop1))) + (n2*(FREQpop2*(1-FREQpop2))))
numer = msa-msw
denom = msa + ((nc-1)*msw)
FST_val = numer/denom
# sanity check using the first element
nc_0 = ((1/(s-1)) * (n1[0]+n2[0])) - ((np.square(n1[0]) + np.square(n2[0]))/(n1[0]+n2[0]))
msa_0 = (1/(s-1))*((n1[0]*(np.square(FREQpop1[0]-FREQ[0])))+(n2[0]*(np.square(FREQpop2[0]-FREQ[0]))))
msw_0 = (1/((n1[0]-1)+(n2[0]-1))) * ((n1[0]*(FREQpop1[0]*(1-FREQpop1[0]))) + (n2[0]*(FREQpop2[0]*(1-FREQpop2[0]))))
numer_0 = msa_0-msw_0
denom_0 = msa_0 + ((nc_0-1)*msw_0)
FST_0 = numer_0/denom_0
print(FST_0 == FST_val[0])  # TRUE
FST_val
```
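The estimator above can be wrapped in a small NumPy function and sanity-checked on made-up inputs; for two fully diverged populations (p1 = 1, p2 = 0) the within-population mean square is zero and the estimate should be exactly 1:

```
import numpy as np

# The same W&C formulas as above, wrapped in a function for testing.
# Inputs are made-up arrays; s=2 populations, as in all the pair-wise runs.
def wc_fst(n1, n2, p1, p2, s=2):
    freq = (n1 * p1 + n2 * p2) / (n1 + n2)        # weighted average frequency
    nc = (1/(s-1)) * (n1 + n2) - (np.square(n1) + np.square(n2)) / (n1 + n2)
    msa = (1/(s-1)) * (n1 * np.square(p1 - freq) + n2 * np.square(p2 - freq))
    msw = (1/((n1-1) + (n2-1))) * (n1 * p1 * (1 - p1) + n2 * p2 * (1 - p2))
    return (msa - msw) / (msa + (nc - 1) * msw)

# fully diverged populations give FST = 1
wc_fst(np.array([100.0]), np.array([100.0]), np.array([1.0]), np.array([0.0]))
```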
## *Which FST value is for which locus-allele?* - actual run
```
# resetting variables for the actual FST run
# assign populations to formula variables
pop1 = CEU_final
pop2 = CEU_YRI_final
# number of alleles
n1 = np.array(pop1.AN.collect())
n2 = np.array(pop2.AN.collect())
# allele frequencies
FREQpop1 = np.array(pop1.AF.collect())
FREQpop2 = np.array(pop2.AF.collect())
# locus + alleles = keys - needed for reference purposes - these values are uniform across all populations
locus = np.array(hl.str(pop1.locus).collect())
alleles = np.array(hl.str(pop1.alleles).collect())
key = np.array([i + ' ' + j for i, j in zip(locus, alleles)])
s=2 # s is the number of populations - since we are calculating pair-wise FSTs, this is always 2
key_FST = {}
for i in range(len(key)):
    FREQ = ((n1[i]*FREQpop1[i]) + (n2[i]*FREQpop2[i])) / (n1[i]+n2[i])
    if (FREQ > 0) & (FREQ < 1):  # only include ave freq between 0 and 1
        # average sample size that incorporates variance
        nc = ((1/(s-1)) * (n1[i]+n2[i])) - ((np.square(n1[i]) + np.square(n2[i]))/(n1[i]+n2[i]))
        msa = (1/(s-1))*((n1[i]*(np.square(FREQpop1[i]-FREQ)))+(n2[i]*(np.square(FREQpop2[i]-FREQ))))
        msw = (1/((n1[i]-1)+(n2[i]-1))) * ((n1[i]*(FREQpop1[i]*(1-FREQpop1[i]))) + (n2[i]*(FREQpop2[i]*(1-FREQpop2[i]))))
        numer = msa-msw
        denom = msa + ((nc-1)*msw)
        FST = numer/denom
        key_FST[key[i]] = FST
key_FST
# sanity checks
print(all(np.array(list(key_FST.values())) == FST_val)) # True
print(len(key_FST) == len(FST_val)) # True
```
## *other pair*
### population 3
```
# population - YRI
# same steps we did to CEU
YRI_interm = YRI_var.drop(*list(YRI_var.entry), *list(YRI_var.col)[1:], *list(YRI_var.row)[2:-1])
# only select the row field keys (locus and allele) and row fields 'AF' & 'AN' which are under 'variant_qc'
YRI_interm2 = YRI_interm.select_rows(YRI_interm['variant_qc']['AF'], YRI_interm['variant_qc']['AN'])
# only include the second entry of the array from the row field 'AF'
YRI_interm3 = YRI_interm2.transmute_rows(AF = YRI_interm2.AF[1])
# add a row field with population name to keep track of which mt it came from
YRI_final = YRI_interm3.annotate_rows(pop = pair_com[0][1])
YRI_final.rows().show(5)
```
### *FST*
```
# resetting variables for the actual FST run
# assign populations to formula variables
pop1 = YRI_final
pop2 = CEU_YRI_final
# number of alleles
n1 = np.array(pop1.AN.collect())
n2 = np.array(pop2.AN.collect())
# allele frequencies
FREQpop1 = np.array(pop1.AF.collect())
FREQpop2 = np.array(pop2.AF.collect())
# locus + alleles = keys - needed for reference purposes - these values are uniform across all populations
locus = np.array(hl.str(pop1.locus).collect())
alleles = np.array(hl.str(pop1.alleles).collect())
key = np.array([i + ' ' + j for i, j in zip(locus, alleles)])
s=2 # s is the number of populations - since we are calculating pair-wise FSTs, this is always 2
key_FST_YRI = {}
for i in range(len(key)):
    FREQ = ((n1[i]*FREQpop1[i]) + (n2[i]*FREQpop2[i])) / (n1[i]+n2[i])
    if (FREQ > 0) & (FREQ < 1):  # only include ave freq between 0 and 1
        # average sample size that incorporates variance
        nc = ((1/(s-1)) * (n1[i]+n2[i])) - ((np.square(n1[i]) + np.square(n2[i]))/(n1[i]+n2[i]))
        msa = (1/(s-1))*((n1[i]*(np.square(FREQpop1[i]-FREQ)))+(n2[i]*(np.square(FREQpop2[i]-FREQ))))
        msw = (1/((n1[i]-1)+(n2[i]-1))) * ((n1[i]*(FREQpop1[i]*(1-FREQpop1[i]))) + (n2[i]*(FREQpop2[i]*(1-FREQpop2[i]))))
        numer = msa-msw
        denom = msa + ((nc-1)*msw)
        FST = numer/denom
        key_FST_YRI[key[i]] = FST
key_FST_YRI
```
## *three popn pairs*
```
## example using three sample pairs ['CEU', 'YRI'], ['CEU', 'LWK'], ['CEU', 'ESN'] and setting up the function
example_pairs = pair_com[0:3]
ex_dict = {} # empty dictionary to hold final outputs
for pairs in example_pairs:
    l = []  # empty list to hold the subsetted datasets
    l.append(mt_var_pru_filt.filter_cols(mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pairs[0]))  # first population
    l.append(mt_var_pru_filt.filter_cols(mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pairs[1]))  # second population
    l.append(mt_var_pru_filt.filter_cols((mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pairs[0]) | (mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pairs[1])))  # first + second = total population
    # sanity check - the sample counts of the first and second subset mts should sum to the total subset mt
    if l[0].count()[1] + l[1].count()[1] == l[2].count()[1]:
        v = []  # empty list to hold output mts from running common variant statistics
        # run common variant statistics for each population and their combined mt
        v.append(hl.variant_qc(l[0]))  # first population
        v.append(hl.variant_qc(l[1]))  # second population
        v.append(hl.variant_qc(l[2]))  # both/total population
        # add to dictionary
        ex_dict["-".join(pairs)] = v

# three mt subsets per comparison pair - set up as a dictionary
ex_dict
# population - YRI
# same steps as for CEU, now pulling the mt out of the dictionary
YRI_var == ex_dict['CEU-YRI'][0]  # sanity check: matches the earlier YRI_var
YRI_interm = ex_dict['CEU-YRI'][0].drop(*list(ex_dict['CEU-YRI'][0].entry), *list(ex_dict['CEU-YRI'][0].col)[1:], *list(ex_dict['CEU-YRI'][0].row)[2:-1])
# only select the row field keys (locus and allele) and row fields 'AF' & 'AN' which are under 'variant_qc'
YRI_interm2 = YRI_interm.select_rows(YRI_interm['variant_qc']['AF'], YRI_interm['variant_qc']['AN'])
# only include the second entry of the array from the row field 'AF'
YRI_interm3 = YRI_interm2.transmute_rows(AF = YRI_interm2.AF[1])
# add a row field with population name to keep track of which mt it came from
YRI_final = YRI_interm3.annotate_rows(pop = pairs[0])
YRI_final.rows().show(5)
# same as CEU_var['variant_qc'].show(5)
ex_dict['CEU-YRI'][0]['variant_qc'].show(5)
len(ex_dict['CEU-YRI'])
# scratch: the dict-merge pattern used in the loop below - collect values from
# two dicts that share keys into lists, one list per key
from collections import defaultdict
dd = defaultdict(list)
for d in (key_FST, key_FST_YRI):
    for key, value in d.items():
        dd[key].append(value)
final_dic = {}
for pair in ex_dict.keys(): # for each population pair
    u = [] # list to hold updated mts
    for i in range(len(ex_dict[pair])): # for each population (each mt)
        # drop certain fields and only keep the ones we need
        interm = ex_dict[pair][i].drop(*list(ex_dict[pair][i].entry), *list(ex_dict[pair][i].col)[1:], *list(ex_dict[pair][i].row)[2:-1])
        interm2 = interm.select_rows(interm['variant_qc']['AF'], interm['variant_qc']['AN'])
        interm3 = interm2.transmute_rows(AF = interm2.AF[1])
        #final = interm3.annotate_rows(pop = pair) # keep track of which mt it came from
        u.append(interm3) # add updated mt to list
    # variables for FST run
    # assign populations to formula variables
    pop1 = u[0]
    pop2 = u[1]
    total = u[2]
    # number of alleles
    n1 = np.array(pop1.AN.collect())
    n2 = np.array(pop2.AN.collect())
    total_n = np.array(total.AN.collect())
    # allele frequencies
    FREQpop1 = np.array(pop1.AF.collect())
    FREQpop2 = np.array(pop2.AF.collect())
    total_FREQ = np.array(total.AF.collect())
    # locus + alleles = keys - needed for reference purposes during FST calculations - these values are uniform across all populations
    locus = np.array(hl.str(pop1.locus).collect())
    alleles = np.array(hl.str(pop1.alleles).collect())
    key = np.array([i + ' ' + j for i, j in zip(locus, alleles)])
    s = 2 # s is the number of populations - since we are calculating pair-wise FSTs, this is always 2
    # FST pop1 and total popn
    key_pop1_total = {}
    for i in range(len(key)):
        FREQ = ((n1[i]*FREQpop1[i]) + (total_n[i]*total_FREQ[i])) / (n1[i]+total_n[i])
        if (FREQ>0) & (FREQ<1): # only include ave freq between 0 and 1
            ## average sample size that incorporates variance (1/(s-1) groups over the whole difference; for s = 2 this equals the original expression)
            nc = (1/(s-1)) * ((n1[i]+total_n[i]) - ((np.square(n1[i]) + np.square(total_n[i]))/(n1[i]+total_n[i])))
            msa = (1/(s-1))*((n1[i]*(np.square(FREQpop1[i]-FREQ)))+(total_n[i]*(np.square(total_FREQ[i]-FREQ))))
            msw = (1/((n1[i]-1)+(total_n[i]-1))) * ((n1[i]*(FREQpop1[i]*(1-FREQpop1[i]))) + (total_n[i]*(total_FREQ[i]*(1-total_FREQ[i]))))
            numer = msa-msw
            denom = msa + ((nc-1)*msw)
            FST = numer/denom
            key_pop1_total[key[i]] = FST
    # FST pop2 and total popn
    key_pop2_total = {}
    for i in range(len(key)):
        FREQ = ((n2[i]*FREQpop2[i]) + (total_n[i]*total_FREQ[i])) / (n2[i]+total_n[i])
        if (FREQ>0) & (FREQ<1): # only include ave freq between 0 and 1
            ## average sample size that incorporates variance
            nc = (1/(s-1)) * ((n2[i]+total_n[i]) - ((np.square(n2[i]) + np.square(total_n[i]))/(n2[i]+total_n[i])))
            msa = (1/(s-1))*((n2[i]*(np.square(FREQpop2[i]-FREQ)))+(total_n[i]*(np.square(total_FREQ[i]-FREQ))))
            msw = (1/((n2[i]-1)+(total_n[i]-1))) * ((n2[i]*(FREQpop2[i]*(1-FREQpop2[i]))) + (total_n[i]*(total_FREQ[i]*(1-total_FREQ[i]))))
            numer = msa-msw
            denom = msa + ((nc-1)*msw)
            FST = numer/denom
            key_pop2_total[key[i]] = FST
    # merge the two FST results together
    from collections import defaultdict
    dd = defaultdict(list)
    for d in (key_pop1_total, key_pop2_total):
        for key, value in d.items():
            dd[key].append(value)
    final_dic[pair] = dd
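# The two per-site loops above can also be written as one vectorized helper.
# This is a sketch of the same Weir-Cockerham FST estimator for s = 2
# populations (the function name wc_fst is ours, not from the notebook);
# sites whose pooled frequency is not strictly between 0 and 1 come back as NaN.
import numpy as np

def wc_fst(n_a, p_a, n_b, p_b):
    n_a, p_a, n_b, p_b = map(np.asarray, (n_a, p_a, n_b, p_b))
    n_tot = n_a + n_b
    p_bar = (n_a * p_a + n_b * p_b) / n_tot                  # pooled allele frequency
    nc = n_tot - (np.square(n_a) + np.square(n_b)) / n_tot   # sample-size correction (1/(s-1) = 1 for s = 2)
    msa = n_a * np.square(p_a - p_bar) + n_b * np.square(p_b - p_bar)          # among-population mean square
    msw = (n_a * p_a * (1 - p_a) + n_b * p_b * (1 - p_b)) / (n_tot - 2)       # within-population mean square
    fst = (msa - msw) / (msa + (nc - 1) * msw)
    return np.where((p_bar > 0) & (p_bar < 1), fst, np.nan)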
# convert to a table
import pandas as pd
df = pd.DataFrame(final_dic)
len(final_dic['CEU-YRI']) # 246984
## example - pair_com[0] = ['CEU', 'YRI'] and pair_com[0][0] = 'CEU'
CEU_mt = mt_var_pru_filt.filter_cols(mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pair_com[1][0])
LWK_mt = mt_var_pru_filt.filter_cols(mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pair_com[1][1])
CEU_LWK_mt = mt_var_pru_filt.filter_cols((mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pair_com[1][0]) | (mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pair_com[1][1]))
# run common variant statistics for each population and their combined mt
CEU_var = hl.variant_qc(CEU_mt) # individual
LWK_var = hl.variant_qc(LWK_mt) # individual
CEU_LWK_var = hl.variant_qc(CEU_LWK_mt) # total
LWK_var.count()
# population - YRI
# same steps we did to CEU
YRI_interm = YRI_var.drop(*list(YRI_var.entry), *list(YRI_var.col)[1:], *list(YRI_var.row)[2:-1])
# only select the row field keys (locus and allele) and row fields 'AF' & 'AN' which are under 'variant_qc'
YRI_interm2 = YRI_interm.select_rows(YRI_interm['variant_qc']['AF'], YRI_interm['variant_qc']['AN'])
# only include the second entry of the array from the row field 'AF'
YRI_interm3 = YRI_interm2.transmute_rows(AF = YRI_interm2.AF[1])
# add a row field with population name to keep track of which mt it came from
YRI_final = YRI_interm3.annotate_rows(pop = pair_com[0][1])
YRI_final.rows().show(5)
for pair in ex_dict.keys(): # for each population pair
    for i in range(len(ex_dict[pair])): # for each population
        interm = ex_dict[pair][i].drop(*list(ex_dict[pair][i].entry), *list(ex_dict[pair][i].col)[1:], *list(ex_dict[pair][i].row)[2:-1])
        # only select the row field keys (locus and allele) and row fields 'AF' & 'AN' which are under 'variant_qc'
        interm2 = interm.select_rows(interm['variant_qc']['AF'], interm['variant_qc']['AN'])
        # only include the second entry of the array from the row field 'AF'
        interm3 = interm2.transmute_rows(AF = interm2.AF[1])
        # add a row field with population name to keep track of which mt it came from
        final = interm3.annotate_rows(pop = pair)
        final.rows().show(5)
%%time
# actual function/run using all population pairs
ex_dict = {} # empty dictionary to hold final outputs (named ex_dict to avoid shadowing the builtin 'dict')
for pairs in pair_com:
    l = [] # empty list to hold the subsetted datasets
    l.append(mt_var_pru_filt.filter_cols(mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pairs[0])) # first population
    l.append(mt_var_pru_filt.filter_cols(mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pairs[1])) # second population
    l.append(mt_var_pru_filt.filter_cols((mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pairs[0]) | (mt_var_pru_filt['hgdp_tgp_meta']['Population'] == pairs[1]))) # first + second = total population
    # sanity check - the sample count of the first and second subset mts should be equal to the total subset mt
    if l[0].count()[1] + l[1].count()[1] == l[2].count()[1]:
        v = [] # empty list to hold output mts from running common variant statistics
        # run common variant statistics for each population and their combined mt
        v.append(hl.variant_qc(l[0])) # first population
        v.append(hl.variant_qc(l[1])) # second population
        v.append(hl.variant_qc(l[2])) # both/total population
        # add to dictionary
        ex_dict["-".join(pairs)] = v
len(ex_dict)
ex_dict
ex_dict['CEU-YRI'][0]['variant_qc'].show(5)
ex_dict['CEU-YRI'][1]['variant_qc'].show(5)
ex_dict['CEU-YRI'][2]['variant_qc'].show(5)
# accessing dictionary element with index
ex_dict[list(ex_dict)[0]][0]['variant_qc'].show(5)
for l in list(ex_dict):
    ex_dict[l][1]['variant_qc']['AF'][1].show(5) # show() prints directly, so no print() wrapper is needed
list(ex_dict)
CEU_af_freq = ex_dict[list(ex_dict)[0]][0]['variant_qc']['AF'][1]
play_mt = hl.utils.range_matrix_table(0, 6)
ex_dict[list(ex_dict)[0]][0].cols().show(5)
mt.select_rows(mt.r1, mt.r2,
r3=hl.coalesce(mt.r1, mt.r2))
mt.select_cols(mt.c2,
sum=mt.c2+mt.c1)
play_mt = ex_dict[list(ex_dict)[0]][0]
row_subsetted_mt.cols().show(5)
CEU_af_freq = CEU_af_freq.annotate_cols(AN=ex_dict[list(ex_dict)[0]][0]['variant_qc']['AN'])
mtA = mtA.annotate_rows(phenos = hl.dict(hl.agg.collect((mtA.pheno, mtA.value))))
mtB = mtB.annotate_cols(
phenos = mtA.rows()[mtB.col_key].phenos)
# additional stuff
CEU_var = hl.variant_qc(CEU_mt)
CEU_YRI_var = hl.variant_qc(CEU_YRI_mt)
a, b, c
a -> ac
B -> cb
a -> ab
CEU_var.row.show()
CEU_mt['variant_qc']['AF'][0]*CEU_mt['variant_qc']['AF'][1]
CEU_YRI_mt['variant_qc']['AF'][0]*CEU_YRI_mt['variant_qc']['AF'][1]
```
# junk code below
```
# this code is if the alleles were split into their separate columns and if we expect a mismatch across popns
# remove indels - only include single letter varients for each allele in both populations
# this is b/c the FST formula is set up for single letter alleles
#pop1 = CEU_final.filter_rows((CEU_final.A1.length() == 1) & (CEU_final.A2.length() == 1))
#pop2 = CEU_YRI_final.filter_rows((CEU_YRI_final.A1.length() == 1) & (CEU_YRI_final.A2.length() == 1))
# sanity check
#A1 = pop1.A1.collect()
#A1 = list(set(A1)) # OR can also do:
### from collections import OrderedDict
### A1 = list(OrderedDict.fromkeys(A1))
#print(A1)
#len(A1) == 4
# total # of snps at the beginning - 255666
# unique snps before removing indels - 2712
# total # of snps after removing indels - 221017 (34649 snps were indels for A1, A2 or both)
# unique snps after removing indels - 4 ['C', 'A', 'T', 'G'] - which is what we expect
## *use the same reference allele - A2 is minor allele here*
# get the minor alleles from both populations
#pop1_A2 = pop1.A2.collect()
#pop2_A2 = pop2.A2.collect()
# find values that are unequal
#import numpy as np
#switch1 = (np.array(pop1_A2) != np.array(pop2_A2))
#print(switch1.all()) # all comparisons returned 'FALSE' which means that all variants that were compared are the same
# sanity check
#print(len(pop1_A2) == len(pop2_A2) == len(switch1)) # True
### *if there is a variant mismatch among the minor alleles of the two populations*
# in case there was a comparison that didn't match correctly among the minor alleles of the two populations, we would adjust the allele frequency(AF) accordingly
#new_frq = pop2.AF.collect()
#new_frq = np.array(new_frq) # convert to numpy array for the next step
# explanation (with an example) for what this does is right below it
#new_frq[switch1] = 1-(new_frq[switch1])
# Example: for pop_1, A1 and A2 are 'T' and 'C' with AF of 0.25
# and for pop_2, A1 and A2 are 'C and 'T' with AF of 0.25
# then since the same reference allele is not used (alleles don't correctly align) in this case,
# we would subtract the AF of pop_2 from 1, to get the correct allele frequency
# the AF of pop_2 with A1 and A2 oriented the same way as pop_1: 'T' and 'C', would be 1-0.25 = 0.75 (w/c is the correct AF)
# if we wanted to convert array back to list
#pop2_frq = new_frq.tolist()
# junk code
#pop2.rows().show(5)
#p = pop2.filter_rows(str(pop2.locus) =='chr10:38960343')
#p.row.show()
# for i in locus:
# if i =='chr1:94607079':
# print ("True")
sum(num == dup for num,dup in zip(locus, d))
# code to check if there are duplicates in a list and print them out
#import collections
#dup = [item for item, count in collections.Counter(key).items() if count > 1]
#print('Num of duplicate loci: ' + str(len(dup)))
#print(dup)
# which FST value is for which locus?
key_freq1 = {key[i]: FREQpop1[i] for i in range(len(key))}
key_freq2 = {key[i]: FREQpop2[i] for i in range(len(key))}
key_n1 = {key[i]: n1[i] for i in range(len(key))}
key_n2 = {key[i]: n2[i] for i in range(len(key))}
# for key,value in zip (locus, FREQpop1):
# print(dict(key, value))
#for v1,v2 in zip(list(locus_freq1.values())[0:5], list(locus_freq2.values())[0:5]):
#lq = ((n1*locus_freq1.values()) + (n2*locus_freq2.values())) / (n1+n2)
#print(key,value)
#locus #220945
#len(set(FREQpop1))
# check if there are duplicates in locus list and print them out - 72 duplicates
# import collections
# d = [item for item, count in collections.Counter(locus).items() if count > 1]
# list.sort(locus)
#locus
# from collections import Counter
# [k for k,v in Counter(locus).items() if v>1]
# where are each of the duplicated loci located?
from collections import defaultdict
D = defaultdict(list)
for i,item in enumerate(locus):
    D[item].append(i)
D = {k:v for k,v in D.items() if len(v)>1}
locus[6202]
bad_locus = locus[~INCLUDE] # loci whose ave freq failed the 0 < FREQ < 1 condition (boolean mask, not the string 'FALSE')
# ave freq values that were not between 0 and 1 - returned FALSE to the conditions in the above chunk of code
print(np.count_nonzero(INCLUDE==0))
DONT_INCLUDE = (FREQ<=0) | (FREQ>=1)
np.count_nonzero(DONT_INCLUDE)
# convert the output from the preimp_qc module (qced.mt) into a vcf file in Hail
import hail as hl
mt = hl.read_matrix_table('gs://nepal-geno/GWASpy/Preimp_QC/Nepal_PTSD_GSA_Updated_May2021_qced.mt')
hl.export_vcf(mt, 'gs://nepal-geno/Nepal_PTSD_GSA_Updated_May2021_qced.vcf.bgz')
```
```
import pandas as pd
from datetime import *
from pandas_datareader.data import DataReader
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
import spacy
import os
import seaborn as sns
from textblob import TextBlob
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from nltk.classify.scikitlearn import SklearnClassifier
import pickle
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC
from nltk.classify import ClassifierI
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from statistics import mode
from nltk.tokenize import word_tokenize
import re
from scipy.sparse import coo_matrix, hstack
nlp = spacy.load("C:/Users/ksjag/Anaconda3/Lib/site-packages/en_core_web_sm/en_core_web_sm-2.2.5")
yahoo_url = "https://finance.yahoo.com/quote/%5EDJI/components/"
djia_table = pd.read_html(yahoo_url, header=0, index_col=0)[0]
djia_table = djia_table.reset_index()
tickers = djia_table.Symbol
len(tickers)
start_date = "2010-01-01"
end_date = "2019-12-31"
```
# Process the dataset function
```
def getDate(x):
    return datetime.strptime(x[0:10], "%Y-%m-%d")
def get_data_for_multiple_stocks(tickers):
    '''
    Obtain stocks information (Date, OHLC, Volume and Adjusted Close).
    Uses Pandas DataReader to make an API Call to Yahoo Finance and download the data directly.
    Computes other values - Log Return and Arithmetic Return.
    Input: List of Stock Tickers
    Output: A dictionary of dataframes for each stock
    '''
    stocks = dict()
    for ticker in tickers:
        s = DataReader(ticker, 'yahoo', start_date, end_date)
        s.insert(0, "Ticker", ticker) # insert ticker column so you can reference better later
        s['Date'] = pd.to_datetime(s.index) # useful for transformation later
        s['Adj Prev Close'] = s['Adj Close'].shift(1)
        s['Log Return'] = np.log(s['Adj Close']/s['Adj Prev Close'])
        s['Return'] = (s['Adj Close']/s['Adj Prev Close']-1)
        s = s.reset_index(drop=True)
        cols = list(s.columns.values) # re-arrange columns
        cols.remove("Date")
        s = s[["Date"] + cols]
        stocks[ticker] = s
    return stocks
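# Quick sanity note on the two return columns computed above: the log return is
# ln(P_t / P_{t-1}) and the arithmetic return is P_t / P_{t-1} - 1, so they are
# related by log_return = ln(1 + return). A minimal check (prices are made up):
import math
prev_close, adj_close = 100.0, 105.0
arith = adj_close / prev_close - 1           # 0.05
log_ret = math.log(adj_close / prev_close)
assert abs(log_ret - math.log(1 + arith)) < 1e-12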
def generate_features(df, ticker):
    ### Make into proper time series like dataframe
    df = pd.read_csv("../../Raw Data/Financial News/" + ticker + ".csv") # note: reloads from the CSV, so the df argument is ignored
    df.drop(df.columns[0], axis=1, inplace=True)
    df["Date"] = df["Date"].apply(getDate)
    df.sort_values(by="Date", inplace=True)
    df.reset_index(inplace=True, drop=True)
    df.drop(columns=["num_hits"], inplace=True)
    # ## Named Entity Recognition to filter out non-company related stuff
    # noun_or_not = [] ## store the pos_
    # for row in range(len(df)):
    #     this_headline = df.loc[row,"main_headline"]
    #     this_doc = nlp(this_headline)
    #     done = False
    #     for token in this_doc:
    #         if str(token)[0:len(company)].lower() == company.lower():
    #             noun_or_not.append(token.pos_)
    #             done = True
    #             break
    #     if done == False:
    #         noun_or_not.append("remove")
    # df = pd.concat([df.reset_index(drop=True), pd.DataFrame(noun_or_not, columns=["noun_or_not"])], axis=1)
    # df = df[df.noun_or_not == "PROPN"]
    # df.drop(["noun_or_not"], axis=1, inplace=True)
    # df.reset_index(drop=True, inplace=True)
    ##### JOIN WITH PRICE HISTORY ######
    start_date = "2010-01-01"
    end_date = "2019-12-31"
    stock_prices = get_data_for_multiple_stocks([ticker])[ticker]
    stock_prices = stock_prices[["Date", "Adj Close", "Adj Prev Close", "Return"]]
    df = pd.merge(df, stock_prices, how='inner', on='Date')
    df["text_label"] = df["main_headline"] + ". " + df["absract"]
    df["Label"] = 1
    df.loc[df["Return"] < 0, "Label"] = -1
    ## LEMMATIZE ###############
    w_tokenizer = nltk.tokenize.WhitespaceTokenizer()
    lemmatizer = nltk.stem.WordNetLemmatizer()
    def lemmatize_text(text):
        return [''.join(lemmatizer.lemmatize(w, 'v')) for w in w_tokenizer.tokenize(text)]
    def lemmatize_text_str(text):
        string = ''
        for w in w_tokenizer.tokenize(text):
            string = string + ' ' + lemmatizer.lemmatize(w, 'v')
        return string
    df_filtered = df[["Date", "word_count", "text_label", "Label", "Return"]].copy() # .copy() avoids a SettingWithCopyWarning below
    df_filtered['text_lem_lst'] = df_filtered['text_label'].apply(lemmatize_text)
    df_filtered['text_lem_str'] = df_filtered['text_label'].apply(lemmatize_text_str)
    ### SENTIMENT SCORE ############
    def detect_sentiment(text):
        blob = TextBlob(text)
        return blob.sentiment.polarity
    df_filtered["sentiment_txtblob"] = df_filtered.text_lem_str.apply(detect_sentiment)
    sid = SentimentIntensityAnalyzer()
    df_filtered["sentiment_nltk"] = df_filtered.text_lem_str.apply(lambda x: sid.polarity_scores(x))
    df_filtered["positivity_sentiment_nltk"] = df_filtered.sentiment_nltk.apply(lambda x: x["pos"])
    df_filtered["compound_sentiment_nltk"] = df_filtered.sentiment_nltk.apply(lambda x: x["compound"])
    df_filtered["negativity_sentiment_nltk"] = df_filtered.sentiment_nltk.apply(lambda x: x["neg"])
    df_filtered["neutral_sentiment_nltk"] = df_filtered.sentiment_nltk.apply(lambda x: x["neu"])
    df_filtered.drop(columns=["sentiment_nltk"], inplace=True)
    return df_filtered
for ticker in tickers:
    continue ## take this out to actually run
    print(ticker)
    this_df = pd.read_csv("../../Raw Data/Financial News/" + ticker + ".csv")
    company = djia_table[djia_table["Symbol"] == ticker]["Company Name"]
    this_features = generate_features(this_df, ticker)
    this_features.to_csv("../../Processed Data/Financial News/" + ticker + ".csv", index = False)
```
## For each company, train a model from 2010 - 2018, and generate predictions for 2019, 2020
```
def generate_train_test_csv(ticker):
    this_df = pd.read_csv("../../Processed Data/Financial News/" + ticker + ".csv")
    this_df.drop_duplicates(subset="Date", inplace=True, keep="first")
    this_df.reset_index(drop=True, inplace=True)
    df_train = this_df[this_df["Date"] < "2018-01-01"]
    df_test = this_df[this_df["Date"] >= "2018-01-01"]
    df_test.reset_index(drop=True, inplace=True)
    if len(df_test) == 0 or len(df_train) == 0: return # skip tickers with no data on one side of the split
    cv = CountVectorizer(ngram_range=(1, 2), stop_words="english", analyzer="word", max_df=0.8)
    y_train = df_train["Label"]
    y_test = df_test["Label"]
    X_train_vect = df_train["text_label"]
    X_test_vect = df_test["text_label"]
    X_train_dtm = cv.fit_transform(X_train_vect)
    X_test_dtm = cv.transform(X_test_vect)
    remaining_feats = np.array(df_train[['word_count', 'sentiment_txtblob', 'positivity_sentiment_nltk',
                                         'compound_sentiment_nltk', 'negativity_sentiment_nltk', 'neutral_sentiment_nltk']])
    remaining_test_feats = np.array(df_test[['word_count', 'sentiment_txtblob', 'positivity_sentiment_nltk',
                                             'compound_sentiment_nltk', 'negativity_sentiment_nltk', 'neutral_sentiment_nltk']])
    X_train_dtm = hstack([X_train_dtm, remaining_feats])
    X_test_dtm = hstack([X_test_dtm, remaining_test_feats])
    BNB = BernoulliNB()
    BNB.fit(X_train_dtm, y_train)
    LogReg = LogisticRegression()
    LogReg.fit(X_train_dtm, y_train)
    SGD = SGDClassifier()
    SGD.fit(X_train_dtm, y_train)
    SVC_c = SVC()
    SVC_c.fit(X_train_dtm, y_train)
    ## TEST PREDICTIONS
    svc_pred = SVC_c.predict(X_test_dtm)
    bnb_pred = BNB.predict(X_test_dtm)
    logreg_pred = LogReg.predict(X_test_dtm)
    sgd_pred = SGD.predict(X_test_dtm)
    ## TRAINING PREDICTIONS
    svc_pred_train = SVC_c.predict(X_train_dtm)
    bnb_pred_train = BNB.predict(X_train_dtm)
    logreg_pred_train = LogReg.predict(X_train_dtm)
    sgd_pred_train = SGD.predict(X_train_dtm)
    ensemble_pred_test = (svc_pred + bnb_pred + logreg_pred + sgd_pred) / 4
    ensemble_pred_train = (svc_pred_train + bnb_pred_train + logreg_pred_train + sgd_pred_train) / 4
    this_pred_test = pd.DataFrame({ticker: list(map(lambda x: 1 if x >= 0 else -1, ensemble_pred_test))})
    this_pred_train = pd.DataFrame({ticker: list(map(lambda x: 1 if x >= 0 else -1, ensemble_pred_train))})
    ## merge this_pred_train with df_train and this_pred_test with df_test (dates only)
    this_pred_train.set_index(df_train["Date"], inplace=True, drop=True)
    this_pred_test.set_index(df_test["Date"], inplace=True, drop=True)
    ## Make it daily
    test_dates = pd.DataFrame(index=pd.date_range(start="2018-01-01", end="2019-12-31", freq="D"))
    train_dates = pd.DataFrame(index=pd.date_range(start="2010-01-01", end="2017-12-31", freq="D"))
    test_df = pd.merge(test_dates, this_pred_test, how='outer', left_index=True, right_index=True)
    test_df.fillna(method="ffill", limit=2, inplace=True)
    test_df.fillna(0, inplace=True)
    train_df = pd.merge(train_dates, this_pred_train, how='outer', left_index=True, right_index=True)
    train_df.fillna(method="ffill", limit=2, inplace=True)
    train_df.fillna(0, inplace=True)
    ## Remove Weekends
    train_df = train_df[train_df.index.dayofweek < 5]
    test_df = test_df[test_df.index.dayofweek < 5]
    train_df.index.rename("Date", inplace=True)
    test_df.index.rename("Date", inplace=True)
    train_df.to_csv("../../Predictions/Financial News/" + ticker + "_train.csv")
    test_df.to_csv("../../Predictions/Financial News/" + ticker + "_test.csv")
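# Note on the ensemble inside generate_train_test_csv: averaging four {-1, +1}
# classifier outputs and thresholding at 0 is a majority vote, with 2-2 ties
# resolved to +1. A self-contained illustration (the votes are made up):
import numpy as np
votes = np.array([[1, 1, -1, 1],    # 3-1 for +1
                  [-1, -1, 1, -1],  # 3-1 for -1
                  [1, -1, 1, -1]])  # 2-2 tie -> +1
majority = np.where(votes.mean(axis=1) >= 0, 1, -1)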
for ticker in tickers:
    if ticker in ["DOW", "TRV", "DIS"]: continue
    print(ticker)
    generate_train_test_csv(ticker)
for ticker in tickers:
    if ticker in ["DOW", "TRV", "DIS"]: continue
    print(ticker)
    train = pd.read_csv("../../Predictions/Financial News/" + ticker + "_train.csv")
    test = pd.read_csv("../../Predictions/Financial News/" + ticker + "_test.csv")
    print(len(train[train.duplicated(subset="Date") == True]))
    print(len(test[test.duplicated(subset="Date") == True]))
ticker = "AAPL"
train = pd.read_csv("../../Predictions/Financial News/" + ticker + "_train.csv")
test = pd.read_csv("../../Predictions/Financial News/" + ticker + "_test.csv")
len(train[train.duplicated(subset="Date") == True])
len(test[test.duplicated(subset="Date") == True])
```
## Week 2: 2.1 OOP : Part 1
**by Sarthak Niwate (Intern at Chistats)**
#### 1,2,3. What are class, object and reference variable in Python?
#### Class
A class bundles together attributes/variables and the methods that operate on them, and defines a type of object. A class can be viewed as a template for its objects. Variables of a class are termed attributes, and functions defined in a class are called methods.
#### Object
An object is an instance of a class with a specific set of attributes. Thus, one can create as many objects as needed from the same class.
#### Reference Variables
A reference variable is a name that refers to an object; assigning an object to another name copies the reference, not the object, so both names point to the same object. Related to this are class and instance variables: class variables are shared across all objects, while instance variables hold data unique to each instance. An instance variable shadows a class variable of the same name, which can produce surprising output.
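As a short illustration of reference semantics (an example of our own, not from the assignment): assigning an object to another name copies the reference, so both names see the same underlying object.

```
class Student:
    def __init__(self, name):
        self.name = name

a = Student("Asha")
b = a            # b now references the same object as a; nothing is copied
b.name = "Binu"  # mutating through b is visible through a as well
print(a.name)    # Binu
print(a is b)    # True - one object, two reference variables
```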
**Class Variables**: Declared inside the class definition (but outside any of the instance methods). They are not tied to any particular object of the class, and hence are shared across all objects of the class. Modifying a class variable affects all object instances at the same time.
**Instance Variables**: Declared inside the constructor method of the class (the `__init__` method). They are tied to a particular object instance of the class, so the contents of an instance variable are completely independent from one object instance to another.
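A minimal sketch (our own example) of the shadowing behaviour described above: assigning through an instance creates an instance variable that hides the class variable for that one object only.

```
class Counter:
    count = 0        # class variable, shared by all objects

c1 = Counter()
c2 = Counter()
c1.count = 10        # creates an instance variable on c1 that shadows the class variable
print(c1.count)      # 10 - the instance variable
print(c2.count)      # 0  - still the shared class variable
print(Counter.count) # 0  - the class variable itself is unchanged
```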
**All the below three questions answered together**
#### 4. How to define a class: define a student class with instance variables StdName, RollNo. and StdGender. Create a constructor (ref. `__init__(self)`)
#### 5. Using Que 4, define a method which takes self as a parameter and prints the StdName, RollNo. and StdGender
```
class student5:
    def __init__(self):
        self.StdName = 'Sarthak'
        self.RollNo = 45
        self.StdGender = 'Male'
    def display_data(self):
        return f"The name of the student is {self.StdName} and Roll No. is {self.RollNo}. The gender is {self.StdGender}."
s1 = student5()
s1.display_data()
```
#### 6. Create object of the student class and call display method
```
class student6:
    def __init__(self, StdName, RollNo, StdGender):
        self.StdName = StdName
        self.RollNo = RollNo
        self.StdGender = StdGender
    def display_data(self):
        return f"The name of the student is {self.StdName} and Roll No. is {self.RollNo}. The gender is {self.StdGender}."
s1 = student6('Sarthak',45,'Male')
s1.display_data()
```
#### 7. What is self variable
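`self` is the conventional name for the first parameter of every instance method. Python automatically passes the object on which the method was called as this argument, and the method uses it to read and write that object's instance variables. It is a convention rather than a keyword, but `self` is the universal choice. A short sketch (our own example):

```
class Greeter:
    def __init__(self, name):
        self.name = name              # self is the object being initialized

    def greet(self):
        return f"Hello, {self.name}"  # self is the object greet() was called on

g = Greeter('Sarthak')
print(g.greet())                      # equivalent to Greeter.greet(g)
```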
#### 8. What is difference between Parameterized and Non Parameterized Constructor
We can use parameterized constructors to set custom values for instance variables.
```
class parametrizedConst:
    def __init__(self,company,model):
        self.company=company
        self.model=model
    def order(self):
        print(f"Thank you for buying from {self.company}. Order successfully placed for {self.model}")
p = parametrizedConst('Apple','Macbook Pro HUINX786KOLS')
p.order()
```
We can use a non-parameterized constructor to set fixed default values.
```
class nonparametrizedConst:
    def __init__(self):
        self.company='Apple'
        self.model='Macbook Pro HUINX786KOLS'
    def order(self):
        print(f"Thank you for buying from {self.company}. Order successfully placed for {self.model}")
np = nonparametrizedConst()
np.order()
```
#### 9. Create a parameterized constructor using the class from Que 4, initialize the instance variables, and print them in the display method.
```
class student9:
    def __init__(self, StdName, RollNo, StdGender):
        self.StdName = StdName
        self.RollNo = RollNo
        self.StdGender = StdGender
    def display_data(self):
        return f"The name of the student is {self.StdName} and Roll No. is {self.RollNo}. The gender is {self.StdGender}."
s1 = student9('Sarthak',45,'Male')
s1.display_data()
```
#### 10. WAP to demonstrate that constructor will execute only once per object
```
def __new__(cls, *args, **kwargs):
    obj = object.__new__(cls)  # object.__new__ takes no extra arguments in Python 3
    obj.__init__(*args, **kwargs)
    return obj
```
```
class MyClass(object):
    def __init__(self, data):
        self.data = data
    def __getitem__(self, index):
        return type(self)(self.data[index])
x = MyClass(range(10))
x2 = x[0:2]
x2
```
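To see directly that the constructor executes exactly once per object, a counter incremented inside `__init__` can be observed (a sketch of our own):

```
class Tracked:
    init_calls = 0                # class variable counting constructor runs

    def __init__(self):
        Tracked.init_calls += 1

a = Tracked()
b = Tracked()
print(Tracked.init_calls)  # 2 - __init__ ran once for each object created
c = Tracked()
print(Tracked.init_calls)  # 3 - one more object, one more constructor call
```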
#### 11. WAP to demonstrate where can we declare the instance variables
1. inside the constructor
2. inside an instance method
3. outside the class by using an object reference variable
##### Use of `__dict__` and ways of declaring instance variables
```
class Television:
    def __init__(self):
        self.company = 'Panasonic' # Declare instance variables in constructor
        self.model = 'PNSCO741U-XBUR'
    def amount(self):
        self.price = 31250 # Declare instance variable in an instance method
t=Television()
print(t.__dict__)
t.amount()
print(t.__dict__)
t.orderid = 7896 # Declare instance variable from outside of the class using object reference
print(t.__dict__)
```
#### 12. Using variables declared in Que 4,
1. Print using self
2. delete using self
3. delete using object reference
4. Update the value of RollNo and print the object
```
class student12:
    def __init__(self, StdName, RollNo, StdGender):
        self.StdName = StdName
        self.RollNo = RollNo
        self.StdGender = StdGender
    def display_data(self):
        return f"The name of the student is {self.StdName} and Roll No. is {self.RollNo}. The gender is {self.StdGender}."
    def delVariableInstance(self):
        del self.RollNo # method for deleting an instance variable inside the class
s = student12('Sarthak',45,'Male')
print(s.display_data())
s.delVariableInstance()
print(s.__dict__)
del s.StdGender # deleting an instance variable outside the class
print(s.__dict__)
```
#### 13. Write short note on static variables
Static (class) variables are shared by all objects of a class and, unlike instance variables, do not vary from object to object.
```
class student13:
    StdName = 'Sarthak'
    RollNo = 45
    def __init__(self):
        self.StdGender = 'Male'
    @classmethod
    def display_data(cls): # cls is the conventional name for the class parameter
        return f"The name of the student is {student13.StdName} and Roll No. is {cls.RollNo}."
s = student13()
print(s.display_data())
print(student13.display_data())
```
In the above code snippet, the student name stays the same for every object of the class: only one copy of a static variable is created and shared by all objects, which also improves memory usage.
Static variables can be accessed using the class name or an object reference.
We can declare static variables:
1. inside the class but outside any method or constructor
2. inside the constructor using the class name
3. inside an instance method using the class name
4. inside a class method using the class name or the cls variable
5. inside a static method using the class name
6. from outside the class using the class name
#### 14. WAP to declare the static variable as school name in above Student class
1. Declare within class and outside the method body
2. inside constructor using class name
3. inside method using class name
4. inside class method using class name or cls variable
5. inside static method using class name
#### 15. WAP to access the static variable
1. inside the constructor using self or the class name
2. inside an instance method using self or the class name
3. inside a class method using the cls variable or the class name
4. inside a static method using the class name
5. from outside the class using an object reference or the class name
```
class student14:
    schoolName = 'Rose Mary School' # Declaring outside any method / constructor
    def __init__(self,name):
        self.name=name # Accessing inside instance method using self
        student14.address = 'Shillong' # Declaring inside a constructor using class name
    def showDetails(self):
        student14.rollno = 45 # Declaring inside an instance method using class name
    @classmethod
    def examCenter(cls):
        student14.center_name = 'Holish Due Wih' # Declaring inside a class method using classname
        cls.center_id = 'X896' # Declaring inside a class method using cls
        print(f"Exam Center location is {student14.center_name}") # Accessing inside a method using classname
    @staticmethod
    def studExamid():
        student14.exam_id = 121212 # Declaring inside a static method using classname
s=student14('Pranay')
s.showDetails()
s.principal='Neha Biswas' # Declaring outside the class using object reference
student14.studExamid() # Calling static method
student14.examCenter() # Calling class method
print("Exam center id is: ", student14.center_id) # Accessing outside class using classname
print("Student's exam id is: ", student14.exam_id) # Accessing outside class using classname
student14.examCenter() # examCenter prints directly and returns None, so no print() wrapper
```
#### 16. Modify the value of School name and observe the behaviour
1. Modify using class name
2. Modify using cls variable
```
class student14:
    schoolName = 'Rose Mary School'
    def __init__(self,name):
        self.name=name
        student14.address = 'Shillong'
    def showDetails(self):
        student14.rollno = 45
    @classmethod
    def examCenter(cls):
        student14.center_name = 'Holish Due Wih'
        cls.center_id = 'X896'
        print(f"School name is: {cls.schoolName}")
        cls.schoolName = 'Jony Fen Holigreek' # modify using the cls variable
        print(f"School name is: {cls.schoolName}")
    @staticmethod
    def studExamid():
        student14.exam_id = 121212
s = student14('Sarthak')
s.examCenter()
student14.schoolName = 'Keliyo Unda Funk' # modify using the class name
s.examCenter()
```
#### 17. WAP to delete the value of School name and observe the behaviour
1. using classname.variable name
2. using cls
```
class student17:
    schoolName = 'Rose Mary School'
    def __init__(self, StdName, RollNo, StdGender):
        self.StdName = StdName
        self.RollNo = RollNo
        self.StdGender = StdGender
    def display_data(self):
        return f"The name of the student is {self.StdName} and Roll No. is {self.RollNo}. The gender is {self.StdGender}."
    @classmethod
    def delSchoolName(cls):
        print(f"Old: {cls.schoolName}")
        del cls.schoolName # delete the static variable using cls (equivalently: del student17.schoolName)
        print("schoolName still defined on the class:", 'schoolName' in cls.__dict__)
s = student17('Sarthak',45,'Male')
print(s.display_data())
print(s.__dict__)
s.delSchoolName()
```
#### 18. Using Instance method: WAP using below details
1. Create a student class having name and marks as parameters
2. Create display method to display name and marks of student
3. Create method grade which will take marks as a parameter and display the grade of the student. Ex. if marks > 60 then First grade, if marks are greater than 50 then Second grade, else the student has failed
4. Use setter and getter to initialize the instance variables
```
class student18:
    def __init__(self, name, marks):
        self.name = name
        self.marks = marks
    @property
    def get_info(self):
        print('Getter method called')
        if self.marks > 60:
            return "First Grade"
        elif self.marks > 50: # no upper bound needed; marks > 60 is already handled above
            return "Second Grade"
        else:
            return 'failed'
    @get_info.setter
    def get_info(self, a):
        print("Setter method called")
        self.marks = a
s1 = student18('Sarthak',56)
print(s1.get_info)
s2 = student18('Sarthak',82)
print(s2.get_info)
s3 = student18('Sarthak',32)
print(s3.get_info)
```
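Besides the `@property` approach above, the setter/getter requirement of the question can also be met with plain `get_`/`set_` methods; a sketch of our own (the class name `studentGS` is hypothetical):

```
class studentGS:
    def __init__(self, name, marks):
        self.set_name(name)    # initialize instance variables via setters
        self.set_marks(marks)
    def set_name(self, name):
        self._name = name
    def get_name(self):
        return self._name
    def set_marks(self, marks):
        self._marks = marks
    def get_marks(self):
        return self._marks
    def grade(self):
        if self._marks > 60:
            return "First Grade"
        elif self._marks > 50:
            return "Second Grade"
        return "failed"

s = studentGS('Sarthak', 56)
print(s.get_name(), s.grade())  # Sarthak Second Grade
```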
#### 19. Using class method: WAP to get number of objectes created for a class
```
class student18:
    counter = 0 # class variable counting how many objects have been created
    def __init__(self, name, marks):
        self.name = name
        self.marks = marks
        student18.counter += 1
    @property
    def get_info(self):
        print('Getter method called')
        if self.marks > 60:
            return "First Grade"
        elif self.marks > 50:
            return "Second Grade"
        else:
            return 'failed'
    @get_info.setter
    def get_info(self, a):
        print("Setter method called")
        self.marks = a
s1 = student18('Sarthak',56)
print(s1.get_info)
s2 = student18('Sarthak',82)
print(s2.get_info)
s3 = student18('Sarthak',32)
print(s3.get_info)
print(student18.counter)
```
#### 20. Using class method: WAP to print the given number if odd or even
```
class oddEven(object):
    @classmethod
    def get_info(cls, number):
        if number % 2 == 0:
            return "Even"
        else:
            return "Odd"
print(oddEven.get_info(2))
print(oddEven.get_info(3))
print(oddEven.get_info(5))
print(oddEven.get_info(8))
```
#### 21. Using Static method: WAP to print the addition, subtraction and product of two numbers
```
class arithmetic(object):
    @staticmethod
    def operations(num1, num2):
        add = num1 + num2
        sub = num1 - num2
        mul = num1 * num2
        return f"The addition: {add}, the subtraction: {sub}, the multiplication: {mul}."
arithmetic.operations(3, 4)
```
#### 22. Pass members of one class to other class: Use below details
1. Create class "Student" with Student name, roll number and marks. Define display method.
2. Create class "StudentModifier" and define a modify method which will update the student marks (marks + 10) and print the updated marks using the display method.
```
class studentModifier:
    def __init__(self, name, roll_no, marks):
        self.name = name
        self.roll_no = roll_no
        self.marks = marks
    def display_method(self):
        return f"The name of the student is {self.name} and Roll No. is {self.roll_no}. The marks are {self.marks}."
    def modify(self):
        self.marks = self.marks + 10  # add 10 to the marks, not multiply
        return f"Updated marks are {self.marks}"
class student22(studentModifier):
    def display(self):
        print("This is the child class")
s = student22('Name', 45, 100)
print(s.display_method())
print(s.modify())
```
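The solution above uses inheritance. The prompt can also be read as passing a `Student` object into a separate modifier class, i.e. composition rather than inheritance. A hedged sketch of that reading (the class and method names here are illustrative, not from the exercise):

```python
class Student:
    def __init__(self, name, roll_no, marks):
        self.name = name
        self.roll_no = roll_no
        self.marks = marks

    def display(self):
        return f"{self.name} (roll no. {self.roll_no}) scored {self.marks} marks."

class StudentModifier:
    @staticmethod
    def modify(student):
        # Update the marks on the Student object that was passed in,
        # then print them via the Student's own display method.
        student.marks += 10
        return student.display()

s = Student('Sarthak', 45, 90)
print(StudentModifier.modify(s))
```

With composition, `StudentModifier` never needs to know how `Student` stores its data beyond the `marks` attribute and the `display` method.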
#### 23. Using Inner class: WAP using below details
1. Create class "Student" with Student name and roll number. Define display method.
2. Inside Student class create inner class as Address which will be having locality, district, state and pin as instance variables
3. Create instance of student class and print the student details
4. Inside Student class create another inner class as Marks which will be having marks of Physics, Chem and Maths as instance variables
5. Create instance of student class and print the student details
```
class student23:
    def __init__(self):
        self.name = 'Sarthak'
        self.roll_no = 45
        self.add = self.Address()
        self.marks = self.Marks()
    def display_method(self):
        return f"The name of the student is {self.name} and Roll No. is {self.roll_no}."
    class Address:
        def __init__(self):
            self.locality = 'Anmol Pride'
            self.district = 'Mumbai Suburban'
            self.state = 'MH'
            self.pin = 400104
        def display_method(self):
            return f"The locality is {self.locality}, district is {self.district}, state is {self.state}, pin is {self.pin}"
    class Marks:
        def __init__(self):
            self.Physics = 80
            self.Chem = 85
            self.Maths = 78
        def display_method(self):
            return f"The marks in physics are {self.Physics}, chemistry are {self.Chem}, maths are {self.Maths}"
s = student23()
print(s.display_method())
s1 = s.add
print(s1.display_method())
s2 = s.marks
print(s2.display_method())
```
#### 24. Garbage Collection: WAP to enable, disable GC and check if GC is enabled or not
```
import gc
obj = 2
x = 1
garbage = gc.collect()
gc.disable() # Disable the collector
print("Automatic garbage collection is on ?",gc.isenabled()) # Check status
gc.enable() # Enabled again
gc.collect()
print("Automatic garbage collection is on ?",gc.isenabled()) # Check status
```
#### 25. Write a short note on Garbage Collection
Garbage Collection System (GCS):
Generally, when an object is no longer in use, the Python interpreter frees the memory it occupies. Python can do this because its developers implemented a special system at the back end known as the garbage collector. This system follows a technique known as 'reference counting', in which objects are deallocated, or 'freed', when there are no longer any references to them in the program.
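A minimal sketch of reference counting in CPython, using `sys.getrefcount` (which always reports one extra reference, for its own temporary argument):

```python
import sys

x = []                       # one reference to the list: the name x
before = sys.getrefcount(x)  # includes the temporary argument reference
y = x                        # a second name now refers to the same list
after = sys.getrefcount(x)
print(before, after)         # the count goes up by exactly one
del y                        # the count drops again; at zero, CPython frees the object
```

The absolute numbers depend on context, but the count always rises by one when a new name is bound to the object and falls when that name is deleted.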
#### 26. WAP to demonstrate use of `__del__` i.e. the destructor method
```
class Robot():
    def __init__(self, name):
        print(name + " has been created!")
    def __del__(self):
        print("Thing has been destroyed")
K = Robot("Kunal")
A = Robot("Ankit")
New = K
print("Deleting K")
del K
print("Deleting A")
del New
del A
```
#### 27. WAP to check number of references a Student object is having. Use Que 23.
```
import gc
class student23:
    def __init__(self):
        self.name = 'Sarthak'
        self.roll_no = 45
        self.add = self.Address()
        self.marks = self.Marks()
    def display_method(self):
        return f"The name of the student is {self.name} and Roll No. is {self.roll_no}."
    class Address:
        def __init__(self):
            self.locality = 'Anmol Pride'
            self.district = 'Mumbai Suburban'
            self.state = 'MH'
            self.pin = 400104
        def display_method(self):
            return f"The locality is {self.locality}, district is {self.district}, state is {self.state}, pin is {self.pin}"
    class Marks:
        def __init__(self):
            self.Physics = 80
            self.Chem = 85
            self.Maths = 78
        def display_method(self):
            return f"The marks in physics are {self.Physics}, chemistry are {self.Chem}, maths are {self.Maths}"
s = student23()
print(s.display_method())
s1 = s.add
print(s1.display_method())
s2 = s.marks
print(s2.display_method())
# gc.get_referents(s) returns the objects that s directly refers to
i = 0
for r in gc.get_referents(s):
    i += 1
print("The no. of references of object s is", i)
```
## Thank You
<a name="top"></a>
<div style="width:1000px">
<div style="float:right; width:98px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Advanced Pythonic Data Analysis</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250px"><img src="http://matplotlib.org/_images/date_demo.png" alt="METAR" style="height: 300px;"></div>
## Overview:
* **Teaching:** 45 minutes
* **Exercises:** 45 minutes
### Questions
1. How can we improve upon the versatility of the plotter developed in the basic time series notebook?
1. How can we iterate over all data files in a directory?
1. How can data processing functions be applied on a variable-by-variable basis?
### Objectives
1. <a href="#basicfunctionality">From Time Series Plotting Episode</a>
1. <a href="#parameterdict">Dictionaries of Parameters</a>
1. <a href="#multipledict">Multiple Dictionaries</a>
1. <a href="#functions">Function Application</a>
1. <a href="#glob">Glob and Multiple Files</a>
<a name="basicfunctionality"></a>
## From Time Series Plotting Episode
Here's the basic set of imports and data reading functionality that we established in the [Basic Time Series Plotting](../Time_Series/Basic%20Time%20Series%20Plotting.ipynb) notebook.
```
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter, DayLocator
from siphon.simplewebservice.ndbc import NDBC
%matplotlib inline
def format_varname(varname):
    """Format the variable name nicely for titles and labels."""
    parts = varname.split('_')
    title = parts[0].title()
    label = varname.replace('_', ' ').title()
    return title, label
def read_buoy_data(buoy, days=7):
    # Read in some data
    df = NDBC.realtime_observations(buoy)
    # Trim to the last `days` days
    df = df[df['time'] > (pd.Timestamp.utcnow() - pd.Timedelta(days=days))]
    return df
```
<a href="#top">Top</a>
<hr style="height:2px;">
<a name="parameterdict"></a>
## Dictionaries of Parameters
When we left off last time, we had created dictionaries that stored line colors and plot properties as key-value pairs. To simplify things further, we can pass a dictionary of arguments directly to the plot call. Enter the dictionary of dictionaries: each key's value is itself a dictionary whose key-value pairs are the arguments to each plot call. Notice that different variables can have different arguments!
```
df = read_buoy_data('42039')
# Dictionary of plotting parameters by variable name
styles = {'wind_speed': dict(color='tab:orange'),
          'wind_gust': dict(color='tab:olive', linestyle='None', marker='o', markersize=2),
          'pressure': dict(color='black')}
plot_variables = [['wind_speed', 'wind_gust'], ['pressure']]
fig, axes = plt.subplots(1, len(plot_variables), sharex=True, figsize=(14, 5))
for col, var_names in enumerate(plot_variables):
    ax = axes[col]
    for var_name in var_names:
        title, label = format_varname(var_name)
        ax.plot(df.time, df[var_name], **styles[var_name])
    ax.set_ylabel(title)
    ax.set_title('Buoy 42039 {}'.format(title))
    ax.grid(True)
    ax.set_xlabel('Time')
    ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
    ax.xaxis.set_major_locator(DayLocator())
```
<a href="#top">Top</a>
<hr style="height:2px;">
<a name="multipledict"></a>
## Multiple Dictionaries
We can even use multiple dictionaries to define styles for types of observations and then specific observation properties such as levels, sources, etc. One common use case of this would be plotting all temperature data as red, but with different linestyles for an isobaric level and the surface.
```
type_styles = {'Temperature': dict(color='red', marker='o'),
               'Relative humidity': dict(color='green', marker='s')}
level_styles = {'isobaric': dict(linestyle='-', linewidth=2),
                'surface': dict(linestyle=':', linewidth=3)}
my_style = type_styles['Temperature']
print(my_style)
my_style.update(level_styles['isobaric'])
print(my_style)
```
If we look back at the original entry in `type_styles`, we see it was updated too! That may not be the expected, or even the desired, behavior.
```
type_styles['Temperature']
```
We can use the `copy` method to make a copy of the element and avoid updating the original.
```
type_styles = {'Temperature': dict(color='red', marker='o'),
               'Relative humidity': dict(color='green', marker='s')}
level_styles = {'isobaric': dict(linestyle='-', linewidth=2),
                'surface': dict(linestyle=':', linewidth=3)}
my_style = type_styles['Temperature'].copy()  # Avoids altering the original entry
my_style.update(level_styles['isobaric'])
print(my_style)
type_styles['Temperature']
```
Since we don't have data from different levels, we'll work with wind measurements and pressure data. Our <code>format_varname</code> function returns a title and full variable name label.
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Create a type styles dictionary of dictionaries with the variable title as the key that has styles for `Wind` and `Pressure` data. The pressure should be a solid black line. Wind should be a solid line.</li>
<li>Create a variable style dictionary of dictionaries with the variable name as the key that specifies an orange line of width 2 for wind speed, olive line of width 0.5 for gusts, and no additional information for pressure.</li>
<li>Update the plotting code below to use the new type and variable styles dictionary.
</ul>
</div>
```
# Your code goes here (modify the skeleton below)
type_styles = {}
variable_styles = {}
fig, axes = plt.subplots(1, len(plot_variables), sharex=True, figsize=(14, 5))
for col, var_names in enumerate(plot_variables):
    ax = axes[col]
    for var_name in var_names:
        title, label = format_varname(var_name)
        ax.plot(df.time, df[var_name], **styles[var_name])
    ax.set_ylabel(title)
    ax.set_title('Buoy 42039 {}'.format(title))
    ax.grid(True)
    ax.set_xlabel('Time')
    ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
    ax.xaxis.set_major_locator(DayLocator())
```
#### Solution
```
# %load solutions/dict_args.py
```
<a href="#top">Top</a>
<hr style="height:2px;">
<a name="functions"></a>
## Function Application
There are times when we might want to apply some pre-processing to the data before they are plotted. Maybe we want to do a unit conversion, scale the data, or filter it. We can create a dictionary in which functions are the values and variable names are the keys.
For example, let's define a function that uses the running median to filter the wind data (effectively a low-pass filter). We'll also make a do-nothing function for data we don't want to alter.
```
from scipy.signal import medfilt
def filter_wind(a):
    return medfilt(a, 7)
def donothing(a):
    return a
converters = {'Wind': filter_wind, 'Pressure': donothing}
type_styles = {'Pressure': dict(color='black'),
               'Wind': dict(linestyle='-')}
variable_styles = {'pressure': dict(),
                   'wind_speed': dict(color='tab:orange', linewidth=2),
                   'wind_gust': dict(color='tab:olive', linewidth=0.5)}
fig, axes = plt.subplots(1, len(plot_variables), sharex=True, figsize=(14, 5))
for col, var_names in enumerate(plot_variables):
    ax = axes[col]
    for var_name in var_names:
        title, label = format_varname(var_name)
        # Apply our pre-processing
        var_data = converters[title](df[var_name])
        style = type_styles[title].copy()  # So the next line doesn't change the original
        style.update(variable_styles[var_name])
        ax.plot(df.time, var_data, **style)
    ax.set_ylabel(title)
    ax.set_title('Buoy 42039 {}'.format(title))
    ax.grid(True)
    ax.set_xlabel('Time')
    ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
    ax.xaxis.set_major_locator(DayLocator())
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Write a function to convert the pressure data to bars. (**Hint**: 1 bar = 100000 Pa)</li>
<li>Apply your converter in the code below and replot the data.</li>
</ul>
</div>
```
# Your code goes here (modify the code below)
converters = {'Wind': filter_wind, 'Pressure': donothing}
type_styles = {'Pressure': dict(color='black'),
               'Wind': dict(linestyle='-')}
variable_styles = {'pressure': dict(),
                   'wind_speed': dict(color='tab:orange', linewidth=2),
                   'wind_gust': dict(color='tab:olive', linewidth=0.5)}
fig, axes = plt.subplots(1, len(plot_variables), sharex=True, figsize=(14, 5))
for col, var_names in enumerate(plot_variables):
    ax = axes[col]
    for var_name in var_names:
        title, label = format_varname(var_name)
        # Apply our pre-processing
        var_data = converters[title](df[var_name])
        style = type_styles[title].copy()  # So the next line doesn't change the original
        style.update(variable_styles[var_name])
        ax.plot(df.time, var_data, **style)
    ax.set_ylabel(title)
    ax.set_title('Buoy 42039 {}'.format(title))
    ax.grid(True)
    ax.set_xlabel('Time')
    ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
    ax.xaxis.set_major_locator(DayLocator())
```
#### Solution
<div class="alert alert-info">
<b>REMINDER</b>:
You should be using a units library to convert between physical units; this is simply for demonstration purposes!
</div>
```
# %load solutions/function_application.py
```
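The conversion in the exercise above is a one-liner once the input unit is known. A sketch of one possible converter, assuming the pressure values arrive in pascals (if the NDBC data are instead in hectopascals, the divisor would be 1000):

```python
def pascals_to_bars(pressure_pa):
    """Convert pressure from pascals to bars (1 bar = 100000 Pa)."""
    return pressure_pa / 100000.0

# Standard atmospheric pressure in pascals
print(pascals_to_bars(101325.0))
```

This would slot into the `converters` dictionary under the `'Pressure'` key in place of `donothing`.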
<a href="#top">Top</a>
<hr style="height:2px;">
<a name="glob"></a>
## Multiple Buoys
We can now use the techniques we've seen before to make a plot of multiple buoys in a single figure.
```
buoys = ['42039', '42022']
type_styles = {'Pressure': dict(color='black'),
               'Wind': dict(linestyle='-')}
variable_styles = {'pressure': dict(),
                   'wind_speed': dict(color='tab:orange', linewidth=2),
                   'wind_gust': dict(color='tab:olive', linewidth=0.5)}
fig, axes = plt.subplots(len(buoys), len(plot_variables), sharex=True, figsize=(14, 10))
for row, buoy in enumerate(buoys):
    df = read_buoy_data(buoy)
    for col, var_names in enumerate(plot_variables):
        ax = axes[row, col]
        for var_name in var_names:
            title, label = format_varname(var_name)
            style = type_styles[title].copy()  # So the next line doesn't change the original
            style.update(variable_styles[var_name])
            ax.plot(df.time, df[var_name], **style)
        ax.set_ylabel(title)
        ax.set_title('Buoy {} {}'.format(buoy, title))
        ax.grid(True)
        ax.set_xlabel('Time')
        ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
        ax.xaxis.set_major_locator(DayLocator())
```
<a href="#top">Top</a>
<hr style="height:2px;">
<div class="alert alert-success">
<b>EXERCISE</b>: As a final exercise, use a dictionary to allow all of the plots to share common y axis limits based on the variable title.
</div>
```
# Your code goes here
```
#### Solution
```
# %load solutions/final.py
```
<a href="#top">Top</a>
<hr style="height:2px;">
```
from urllib import request
import pandas as pd
from matplotlib import pyplot as plt
import json
import mpld3
from datetime import date
%matplotlib inline
mpld3.enable_notebook()
def check_bucks_covid(keyword):
    # Data on deaths and cases
    if keyword == 'daily':
        #response = request.urlopen('https://services3.arcgis.com/SP47Tddf7RK32lBU/arcgis/rest/services/Buck_County_COVID_Case_Dates_VIEW_2/FeatureServer/0/query?f=json&where=1%3D1&returnGeometry=false&spatialRel=esriSpatialRelIntersects&outFields=*&orderByFields=Date%20asc&resultOffset=0&resultRecordCount=32000&resultType=standard&cacheHint=true')
        response = request.urlopen('https://services1.arcgis.com/Nifc7wlHaBPig3Q3/arcgis/rest/services/Covid_Cases_County/FeatureServer/0/query?f=json&where=county%3D%27Bucks%27&returnGeometry=false&spatialRel=esriSpatialRelIntersects&outFields=ObjectId%2Ccases%2Cdate&orderByFields=date%20asc&resultOffset=0&resultRecordCount=32000&resultType=standard&cacheHint=true')
        dat = response.read()
        data = json.loads(dat)
        df = pd.json_normalize(data['features'])
        df['date'] = pd.to_datetime(df['attributes.date'], unit='ms')
    # Data on onset of cases
    if keyword == 'onset':
        response = request.urlopen('https://services3.arcgis.com/SP47Tddf7RK32lBU/arcgis/rest/services/Bucks_County_COVID_Cases_by_Onset_Date_VIEW/FeatureServer/0/query?f=json&where=1%3D1&returnGeometry=false&spatialRel=esriSpatialRelIntersects&outFields=*&orderByFields=Date%20asc&resultOffset=0&resultRecordCount=32000&resultType=standard&cacheHint=true')
        dat = response.read()
        data = json.loads(dat)
        df = pd.json_normalize(data['features'])
        df['date'] = pd.to_datetime(df['attributes.Date'], unit='ms')
    if keyword == 'daily':
        df['cases'] = df['attributes.cases']
        df['ma3'] = df['attributes.cases'].rolling(3).mean()
        df['ma7'] = df['attributes.cases'].rolling(7).mean()
        df['ma11'] = df['attributes.cases'].rolling(11).mean()
    if keyword == 'onset':
        df['cases'] = df['attributes.Onset']
        df['ma3'] = df['attributes.Onset'].rolling(3).mean()
        df['ma7'] = df['attributes.Onset'].rolling(7).mean()
        df['ma11'] = df['attributes.Onset'].rolling(11).mean()
    fig, ax = plt.subplots(figsize=(10, 5))
    bar = ax.bar('date', 'cases', data=df, label='Daily Count', alpha=0.7)
    l1 = ax.plot(df['date'], df['cases'], marker='o', markersize=8, alpha=0.00)
    labels = [str(df['cases'][i]) for i in range(len(df['cases']))]
    tt = mpld3.plugins.PointLabelTooltip(l1[0], labels=labels)
    mpld3.plugins.connect(fig, tt)
    #l2 = ax.plot(df['date'], df['ma3'], label='3 Day Rolling Average', color='C1', linewidth=3)
    l3 = ax.plot(df['date'], df['ma7'], label='7 Day Rolling Average', color='C2', linewidth=3)
    #l4 = ax.plot('date', 'ma11', data=df, label='11 Day Rolling Average', color='C3', linewidth=3)
    plt.title(keyword.capitalize() + ' accessed on ' + str(date.today()))
    plt.ylabel('Daily Cases')
    plt.xlabel('Date')
    plt.legend()
    plt.show()
check_bucks_covid('daily')
check_bucks_covid('onset')
```
```
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex client library: AutoML text sentiment analysis model for batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_sentiment_analysis_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_sentiment_analysis_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
## Overview
This tutorial demonstrates how to use the Vertex client library for Python to create text sentiment analysis models and do batch prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users).
### Dataset
The dataset used for this tutorial is the [Crowdflower Claritin-Twitter dataset](https://data.world/crowdflower/claritin-twitter) from [data.world Datasets](https://data.world). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
### Objective
In this tutorial, you create an AutoML text sentiment analysis model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.
The steps performed include:
- Create a Vertex `Dataset` resource.
- Train the model.
- View the model evaluation.
- Make a batch prediction.
There is one key difference between using batch prediction and using online prediction:
* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.
* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.
### Costs
This tutorial uses billable components of Google Cloud (GCP):
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
## Installation
Install the latest version of Vertex client library.
```
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
```
### Restart the kernel
Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython
    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
```
PROJECT_ID = "[your-project-id]"  # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations)
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.
**Otherwise**, follow these steps:
In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
**Click Create service account**.
In the **Service account name** field, enter a name, and click **Create**.
In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth
        google_auth.authenticate_user()
    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
```
BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
#### Import Vertex client library
Import the Vertex client library into our Python environment.
```
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
```
#### Vertex constants
Setup up the following constants for Vertex:
- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
```
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```
#### AutoML constants
Set constants unique to AutoML datasets and training:
- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.
- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).
- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for.
```
# Text Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
# Text Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_sentiment_io_format_1.0.0.yaml"
# Text Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_sentiment_1.0.0.yaml"
```
#### Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify `(None, None)` to use a container image to run on a CPU.
```
if os.getenv("IS_TESTING_DEPOLY_GPU"):
    DEPLOY_GPU, DEPLOY_NGPU = (
        aip.AcceleratorType.NVIDIA_TESLA_K80,
        int(os.getenv("IS_TESTING_DEPOLY_GPU")),
    )
else:
    DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
```
#### Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
#### Machine Type
Next, set the machine type to use for prediction.
- Set the variable `DEPLOY_COMPUTE` to configure the compute resources for the VM you will use for prediction.
- `machine type`
- `n1-standard`: 3.75GB of memory per vCPU.
- `n1-highmem`: 6.5GB of memory per vCPU
- `n1-highcpu`: 0.9 GB of memory per vCPU
- `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]
*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*
```
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
    MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
    MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
```
# Tutorial
Now you are ready to start creating your own AutoML text sentiment analysis model.
## Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
- Dataset Service for `Dataset` resources.
- Model Service for `Model` resources.
- Pipeline Service for training.
- Job Service for batch prediction and custom training.
```
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client
def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client
def create_pipeline_client():
    client = aip.PipelineServiceClient(client_options=client_options)
    return client
def create_job_client():
    client = aip.JobServiceClient(client_options=client_options)
    return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["job"] = create_job_client()
for client in clients.items():
    print(client)
```
## Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
### Create `Dataset` resource instance
Use the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:
1. Uses the dataset client service.
2. Creates a Vertex `Dataset` resource (`aip.Dataset`), with the following parameters:
 - `display_name`: The human-readable name you choose to give it.
 - `metadata_schema_uri`: The schema for the dataset type.
3. Calls the client dataset service method `create_dataset`, with the following parameters:
 - `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.
- `dataset`: The Vertex dataset object instance you created.
4. The method returns an `operation` object.
An `operation` object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
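The polling pattern these methods support can be sketched as follows. `FakeOperation` is a hypothetical stand-in used only for illustration; it is not part of the Vertex SDK, but it mimics the `running()`, `done()`, and `result()` interface described in the table above:

```python
class FakeOperation:
    """Hypothetical stand-in for a Vertex long-running operation,
    used only to illustrate the polling methods in the table above."""

    def __init__(self, ticks_until_done=3):
        self._ticks = ticks_until_done

    def running(self):
        # True while the operation has not yet completed
        return self._ticks > 0

    def done(self):
        # True once the operation has completed
        return self._ticks <= 0

    def result(self):
        # A real operation blocks here until completion; the fake just counts down.
        while self._ticks > 0:
            self._ticks -= 1
        return {"state": "SUCCEEDED"}


op = FakeOperation()
while not op.done():
    op.result()  # blocks until the operation completes
print(op.done())  # True
```

With a real operation object, `result(timeout=...)` raises if the operation does not finish in time, which is why the helper function below wraps it in a `try`/`except`.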
```
TIMEOUT = 90


def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
    start_time = time.time()
    try:
        dataset = aip.Dataset(
            display_name=name, metadata_schema_uri=schema, labels=labels
        )

        operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
        print("Long running operation:", operation.operation.name)
        result = operation.result(timeout=timeout)  # use the timeout parameter
        print("time:", time.time() - start_time)
        print("response")
        print(" name:", result.name)
        print(" display_name:", result.display_name)
        print(" metadata_schema_uri:", result.metadata_schema_uri)
        print(" metadata:", dict(result.metadata))
        print(" create_time:", result.create_time)
        print(" update_time:", result.update_time)
        print(" etag:", result.etag)
        print(" labels:", dict(result.labels))
        return result
    except Exception as e:
        print("exception:", e)
        return None


result = create_dataset("claritin-" + TIMESTAMP, DATA_SCHEMA)
```
Now save the unique dataset identifier for the `Dataset` resource instance you created.
```
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
```
### Data preparation
The Vertex `Dataset` resource for text has a couple of requirements for your text data.
- Text examples must be stored in a CSV or JSONL file.
#### CSV
For text sentiment analysis, the CSV file has a few requirements:
- No heading.
- First column is the text example or Cloud Storage path to text file.
- Second column is the label (i.e., sentiment).
- Third column is the maximum sentiment value. For example, if the range is 0 to 3, then the maximum value is 3.
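A row satisfying these requirements can be sketched with the standard `csv` module. The text values, labels, and bucket path below are hypothetical, chosen only to show the three-column, headerless layout:

```python
import csv
import io

# Hypothetical rows in the required format: no heading, then
# text (or Cloud Storage path), sentiment label, maximum sentiment value.
rows = [
    ["I love this product", 3, 3],
    ["gs://my-bucket/review2.txt", 0, 3],  # bucket path is illustrative
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(rows)
print(buf.getvalue())
```

Writing to an in-memory buffer here keeps the sketch self-contained; in practice you would write to a file and upload it to Cloud Storage.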
#### Location of Cloud Storage training data.
Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
```
IMPORT_FILE = "gs://cloud-samples-data/language/claritin.csv"
SENTIMENT_MAX = 4
```
#### Quick peek at your data
You will use a version of the Crowdflower Claritin-Twitter dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.
```
if "IMPORT_FILES" in globals():
    FILE = IMPORT_FILES[0]
else:
    FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
```
### Import data
Now, import the data into your Vertex Dataset resource. Use this helper function `import_data` to import the data. The function does the following:
- Uses the `Dataset` client.
- Calls the client method `import_data`, with the following parameters:
- `name`: The Vertex fully qualified identifier for the `Dataset` resource.
- `import_configs`: The import configuration, specified as a Python list containing a dictionary with the key/value entries:
- `gcs_sources`: A list of URIs to the paths of the one or more index files.
- `import_schema_uri`: The schema identifying the labeling type.
The `import_data()` method returns a long running `operation` object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
```
def import_data(dataset, gcs_sources, schema):
    config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
    print("dataset:", dataset)
    start_time = time.time()
    try:
        operation = clients["dataset"].import_data(
            name=dataset, import_configs=config
        )
        print("Long running operation:", operation.operation.name)
        result = operation.result()
        print("result:", result)
        print("time:", int(time.time() - start_time), "secs")
        print("error:", operation.exception())
        print("meta :", operation.metadata)
        print(
            "after: running:",
            operation.running(),
            "done:",
            operation.done(),
            "cancelled:",
            operation.cancelled(),
        )
        return operation
    except Exception as e:
        print("exception:", e)
        return None


import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
```
## Train the model
Now train an AutoML text sentiment analysis model using your Vertex `Dataset` resource. To train the model, do the following steps:
1. Create a Vertex training pipeline for the `Dataset` resource.
2. Execute the pipeline to start the training.
### Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
1. Being reusable for subsequent training jobs.
2. Can be containerized and run as a batch job.
3. Can be distributed.
4. All the steps are associated with the same pipeline job for tracking progress.
Use this helper function `create_pipeline`, which takes the following parameters:
- `pipeline_name`: A human readable name for the pipeline job.
- `model_name`: A human readable name for the model.
- `dataset`: The Vertex fully qualified dataset identifier.
- `schema`: The dataset labeling (annotation) training schema.
- `task`: A dictionary describing the requirements for the training job.
The helper function calls the `Pipeline` client service's method `create_pipeline`, which takes the following parameters:
- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.
- `training_pipeline`: the full specification for the pipeline training job.
Let's now look deeper into the *minimal* requirements for constructing a `training_pipeline` specification:
- `display_name`: A human readable name for the pipeline job.
- `training_task_definition`: The dataset labeling (annotation) training schema.
- `training_task_inputs`: A dictionary describing the requirements for the training job.
- `model_to_upload`: A human readable name for the model.
- `input_data_config`: The dataset specification.
- `dataset_id`: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
- `fraction_split`: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
```
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
    dataset_id = dataset.split("/")[-1]

    input_config = {
        "dataset_id": dataset_id,
        "fraction_split": {
            "training_fraction": 0.8,
            "validation_fraction": 0.1,
            "test_fraction": 0.1,
        },
    }

    training_pipeline = {
        "display_name": pipeline_name,
        "training_task_definition": schema,
        "training_task_inputs": task,
        "input_data_config": input_config,
        "model_to_upload": {"display_name": model_name},
    }

    try:
        pipeline = clients["pipeline"].create_training_pipeline(
            parent=PARENT, training_pipeline=training_pipeline
        )
        print(pipeline)
    except Exception as e:
        print("exception:", e)
        return None
    return pipeline
```
### Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.
The minimal fields we need to specify are:
- `sentiment_max`: The maximum value for the sentiment (e.g., 4).
Finally, create the pipeline by calling the helper function `create_pipeline`, which returns an instance of a training pipeline object.
```
PIPE_NAME = "claritin_pipe-" + TIMESTAMP
MODEL_NAME = "claritin_model-" + TIMESTAMP

task = json_format.ParseDict(
    {
        "sentiment_max": SENTIMENT_MAX,
    },
    Value(),
)

response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
```
Now save the unique identifier of the training pipeline you created.
```
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
```
### Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the pipeline client service's `get_training_pipeline` method, with the following parameter:
- `name`: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`.
```
def get_training_pipeline(name, silent=False):
    response = clients["pipeline"].get_training_pipeline(name=name)
    if silent:
        return response
    print("pipeline")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" state:", response.state)
    print(" training_task_definition:", response.training_task_definition)
    print(" training_task_inputs:", dict(response.training_task_inputs))
    print(" create_time:", response.create_time)
    print(" start_time:", response.start_time)
    print(" end_time:", response.end_time)
    print(" update_time:", response.update_time)
    print(" labels:", dict(response.labels))
    return response


response = get_training_pipeline(pipeline_id)
```
# Deployment
Training the above model may take upwards of 180 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field `model_to_upload.name`.
```
while True:
    response = get_training_pipeline(pipeline_id, True)
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_to_deploy_id = None
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            raise Exception("Training Job Failed")
    else:
        model_to_deploy = response.model_to_upload
        model_to_deploy_id = model_to_deploy.name
        print("Training Time:", response.end_time - response.start_time)
        break
    time.sleep(60)

print("model to deploy:", model_to_deploy_id)
```
## Model information
Now that your model is trained, you can get some information on your model.
## Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
### List evaluations for all slices
Use this helper function `list_model_evaluations`, which takes the following parameter:
- `name`: The Vertex fully qualified model identifier for the `Model` resource.
This helper function uses the model client service's `list_model_evaluations` method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you probably have only one), print all the key names for each metric in the evaluation; for a small set (`meanAbsoluteError` and `precision`), print the result.
```
def list_model_evaluations(name):
    response = clients["model"].list_model_evaluations(parent=name)
    for evaluation in response:
        print("model_evaluation")
        print(" name:", evaluation.name)
        print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
        metrics = json_format.MessageToDict(evaluation._pb.metrics)
        for metric in metrics.keys():
            print(metric)
        print("meanAbsoluteError", metrics["meanAbsoluteError"])
        print("precision", metrics["precision"])
    return evaluation.name


last_evaluation = list_model_evaluations(model_to_deploy_id)
```
## Model deployment for batch prediction
Now deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand prediction.
For online prediction, you:
1. Create an `Endpoint` resource for deploying the `Model` resource to.
2. Deploy the `Model` resource to the `Endpoint` resource.
3. Make online prediction requests to the `Endpoint` resource.
For batch-prediction, you:
1. Create a batch prediction job.
2. The job service will provision resources for the batch prediction request.
3. The results of the batch prediction request are returned to the caller.
4. The job service will unprovision the resources for the batch prediction request.
## Make a batch prediction request
Now do a batch prediction to your deployed model.
### Get test item(s)
Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
```
test_items = ! gsutil cat $IMPORT_FILE | head -n2

# Rows may have three or four columns; check the column count, not the string length.
if len(str(test_items[0]).split(",")) == 4:
    _, test_item_1, test_label_1, _ = str(test_items[0]).split(",")
    _, test_item_2, test_label_2, _ = str(test_items[1]).split(",")
else:
    test_item_1, test_label_1, _ = str(test_items[0]).split(",")
    test_item_2, test_label_2, _ = str(test_items[1]).split(",")
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
```
### Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can only be in JSONL format. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:
- `content`: The Cloud Storage path to the file with the text item.
- `mime_type`: The content type. In our example, it is a plain text file (`text/plain`).
For example:
 {'content': '[your-bucket]/file1.txt', 'mime_type': 'text/plain'}
```
import json

import tensorflow as tf

gcs_test_item_1 = BUCKET_NAME + "/test1.txt"
with tf.io.gfile.GFile(gcs_test_item_1, "w") as f:
    f.write(test_item_1 + "\n")

gcs_test_item_2 = BUCKET_NAME + "/test2.txt"
with tf.io.gfile.GFile(gcs_test_item_2, "w") as f:
    f.write(test_item_2 + "\n")

gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    data = {"content": gcs_test_item_1, "mime_type": "text/plain"}
    f.write(json.dumps(data) + "\n")
    data = {"content": gcs_test_item_2, "mime_type": "text/plain"}
    f.write(json.dumps(data) + "\n")

print(gcs_input_uri)
! gsutil cat $gcs_input_uri
```
### Compute instance scaling
You have several choices on scaling the compute instances for handling your batch prediction requests:
- Single Instance: The batch prediction requests are processed on a single compute instance.
- Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.
- Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified.
- Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.
- Auto Scaling: The batch prediction requests are split across a scaleable number of compute instances.
 - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed, and set the maximum (`MAX_NODES`) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
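The mapping from scaling strategy to replica counts can be sketched with a small helper. This function is purely illustrative (it is not part of the Vertex SDK); it just shows which `(min_replica_count, max_replica_count)` pair each of the three strategies above corresponds to:

```python
def replica_counts(strategy, nodes=1, max_nodes=None):
    """Illustrative helper mapping the scaling strategies above to
    (min_replica_count, max_replica_count) pairs."""
    if strategy == "single":
        return 1, 1  # one instance handles the whole batch
    if strategy == "manual":
        return nodes, nodes  # fixed fleet: min == max
    if strategy == "auto":
        return nodes, max_nodes  # service scales between the two bounds
    raise ValueError(f"unknown strategy: {strategy}")


print(replica_counts("manual", nodes=4))  # (4, 4)
print(replica_counts("auto", nodes=1, max_nodes=8))  # (1, 8)
```

Setting `MIN_NODES = MAX_NODES = 1` in the next cell therefore corresponds to the single-instance strategy.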
```
MIN_NODES = 1
MAX_NODES = 1
```
### Make batch prediction request
Now that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:
- `display_name`: The human readable name for the prediction job.
- `model_name`: The Vertex fully qualified identifier for the `Model` resource.
- `gcs_source_uri`: The Cloud Storage path to the input file -- which you created above.
- `gcs_destination_output_uri_prefix`: The Cloud Storage path that the service will write the predictions to.
- `parameters`: Additional filtering parameters for serving prediction results.
The helper function calls the job client service's `create_batch_prediction_job` method, with the following parameters:
- `parent`: The Vertex location root path for Dataset, Model and Pipeline resources.
- `batch_prediction_job`: The specification for the batch prediction job.
Let's now dive into the specification for the `batch_prediction_job`:
- `display_name`: The human readable name for the prediction batch job.
- `model`: The Vertex fully qualified identifier for the `Model` resource.
- `dedicated_resources`: The compute resources to provision for the batch prediction job.
- `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.
- `starting_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`.
- `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.
- `model_parameters`: Additional filtering parameters for serving prediction results. *Note*, text models do not support additional parameters.
- `input_config`: The input source and format type for the instances to predict.
 - `instances_format`: The format of the batch prediction request file; only `jsonl` is supported.
- `gcs_source`: A list of one or more Cloud Storage paths to your batch prediction requests.
- `output_config`: The output destination and format for the predictions.
 - `predictions_format`: The format of the batch prediction response file; only `jsonl` is supported.
- `gcs_destination`: The output destination for the predictions.
This call is an asynchronous operation. You will print from the response object a few select fields, including:
- `name`: The Vertex fully qualified identifier assigned to the batch prediction job.
- `display_name`: The human readable name for the prediction batch job.
- `model`: The Vertex fully qualified identifier for the Model resource.
- `generate_explanations`: Whether True/False explanations were provided with the predictions (explainability).
- `state`: The state of the prediction job (pending, running, etc).
Since this call will take a few moments to execute, you will likely get `JobState.JOB_STATE_PENDING` for `state`.
```
BATCH_MODEL = "claritin_batch-" + TIMESTAMP


def create_batch_prediction_job(
    display_name,
    model_name,
    gcs_source_uri,
    gcs_destination_output_uri_prefix,
    parameters=None,
):
    if DEPLOY_GPU:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_type": DEPLOY_GPU,
            "accelerator_count": DEPLOY_NGPU,
        }
    else:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_count": 0,
        }

    batch_prediction_job = {
        "display_name": display_name,
        # Format: 'projects/{project}/locations/{location}/models/{model_id}'
        "model": model_name,
        "model_parameters": json_format.ParseDict(parameters, Value()),
        "input_config": {
            "instances_format": IN_FORMAT,
            "gcs_source": {"uris": [gcs_source_uri]},
        },
        "output_config": {
            "predictions_format": OUT_FORMAT,
            "gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
        },
        "dedicated_resources": {
            "machine_spec": machine_spec,
            "starting_replica_count": MIN_NODES,
            "max_replica_count": MAX_NODES,
        },
    }
    response = clients["job"].create_batch_prediction_job(
        parent=PARENT, batch_prediction_job=batch_prediction_job
    )
    print("response")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" model:", response.model)
    try:
        print(" generate_explanation:", response.generate_explanation)
    except:
        pass
    print(" state:", response.state)
    print(" create_time:", response.create_time)
    print(" start_time:", response.start_time)
    print(" end_time:", response.end_time)
    print(" update_time:", response.update_time)
    print(" labels:", response.labels)
    return response


IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl"  # [jsonl]

response = create_batch_prediction_job(
    BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, None
)
```
Now get the unique identifier for the batch prediction job you created.
```
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
```
### Get information on a batch prediction job
Use this helper function `get_batch_prediction_job`, with the following parameter:
- `job_name`: The Vertex fully qualified identifier for the batch prediction job.
The helper function calls the job client service's `get_batch_prediction_job` method, with the following parameter:
- `name`: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- `batch_job_id`.
The helper function will return the Cloud Storage path to where the predictions are stored -- `gcs_destination`.
```
def get_batch_prediction_job(job_name, silent=False):
    response = clients["job"].get_batch_prediction_job(name=job_name)
    if silent:
        return response.output_config.gcs_destination.output_uri_prefix, response.state
    print("response")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" model:", response.model)
    try:  # not all data types support explanations
        print(" generate_explanation:", response.generate_explanation)
    except:
        pass
    print(" state:", response.state)
    print(" error:", response.error)
    gcs_destination = response.output_config.gcs_destination
    print(" gcs_destination")
    print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
    return gcs_destination.output_uri_prefix, response.state


predictions, state = get_batch_prediction_job(batch_job_id)
```
### Get the predictions
When the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.
Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name `prediction`, and under that folder will be a file called `predictions*.jsonl`.
Now display (cat) the contents. You will see multiple JSON objects, one for each prediction.
The first field `text_snippet` is the text file you did the prediction on, and the second field `annotations` is the prediction, which is further broken down into:
- `sentiment`: The predicted sentiment level.
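Reading one of those JSON objects back is plain `json` parsing. The line below is a hypothetical example constructed for illustration; the exact field layout the service emits may differ, so treat the key names as assumptions:

```python
import json

# Hypothetical example of one line of a predictions*.jsonl file.
line = (
    '{"instance": {"content": "gs://my-bucket/test1.txt", '
    '"mime_type": "text/plain"}, "prediction": {"sentiment": 3}}'
)

record = json.loads(line)
print("input file:", record["instance"]["content"])
print("predicted sentiment:", record["prediction"]["sentiment"])
```

In practice you would iterate over each line of the downloaded file and call `json.loads` on it, since JSONL stores one JSON object per line.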
```
def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name."""
    folders = !gsutil ls $gcs_out_dir
    latest = ""
    for folder in folders:
        subfolder = folder.split("/")[-2]
        if subfolder.startswith("prediction-"):
            if subfolder > latest:
                latest = folder[:-1]
    return latest


while True:
    predictions, state = get_batch_prediction_job(batch_job_id, True)
    if state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("The job has not completed:", state)
        if state == aip.JobState.JOB_STATE_FAILED:
            raise Exception("Batch Job Failed")
    else:
        folder = get_latest_predictions(predictions)
        ! gsutil ls $folder/prediction*.jsonl
        ! gsutil cat $folder/prediction*.jsonl
        break
    time.sleep(60)
```
# Cleaning up
To clean up all GCP resources used in this project, you can [delete the GCP
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
    if delete_dataset and "dataset_id" in globals():
        clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
    print(e)

# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
    if delete_pipeline and "pipeline_id" in globals():
        clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
    print(e)

# Delete the model using the Vertex fully qualified identifier for the model
try:
    if delete_model and "model_to_deploy_id" in globals():
        clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
    print(e)

# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
    if delete_endpoint and "endpoint_id" in globals():
        clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
    print(e)

# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
    if delete_batchjob and "batch_job_id" in globals():
        clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
    print(e)

# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
    if delete_customjob and "job_id" in globals():
        clients["job"].delete_custom_job(name=job_id)
except Exception as e:
    print(e)

# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
    if delete_hptjob and "hpt_job_id" in globals():
        clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
    print(e)

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil rm -r $BUCKET_NAME
```
# Syft Duet for Federated Learning - Data Owner (Australian Bank)
## Setup
First we need to install syft 0.3.0, because every other syft project in this repo uses syft 0.2.9, and a recent update removed a lot of the old features and replaced them with this new 'Duet' function. To do this, open your terminal, cd into the repo directory, and run:
> pip uninstall syft
Then confirm with 'y' and hit enter.
> pip install syft==0.3.0
NOTE: Make sure that you uninstall syft 0.3.0 and reinstall syft 0.2.9 if you want to run any of the other projects in this repo. Unfortunately, when PySyft updated from 0.2.9 to 0.3.0, it removed all of the FL, DP, and HE functionality that had previously been implemented.
```
# Double check you are using syft 0.3.0 not 0.2.9
# !pip show syft
import syft as sy
import torch as th
import pandas as pd
```
## Initialising Duet
For each bank there will be this same initialisation step. Ensure that you run the below code. This should produce a Syft logo and some information. The important part is the lines of code:
```python
import syft as sy
duet = sy.duet("xxxxxxxxxxxxxxxxxxxxxxxxxxx")
```
Where the x's are some combination of letters and numbers. You need to take this key and paste it into the respective bank's duet code in the central aggregator. This should be clear and detailed in the central aggregator notebook. In essence, this is similar to the specific banks generating a server and key, and sending the key to the aggregator to give them access to this joint, secure, server process.
Once you have run the key in the code on the aggregator side, it will give you a similar key which it tells you to input on this side. There will be a box within the Syft logo/information output on this notebook to input the key. Once you enter it and hit enter then the connection for this bank should be established.
```
# We now run the initialisation of the duet
# Note: this can be run with a specified network if required
# For example, if you don't trust the network provided by pysyft
# to not look at the data
duet = sy.duet()
sy.logging(file_path="./syft_do.log")
```
>If the connection is established then there should be a green message above saying 'CONNECTED!'. Similarly, there should also be a Live Status indicating the number of objects, requests, and messages on the duet.
# Import Australian Bank Data
```
data = pd.read_csv('datasets/australian-bank-data.csv', sep = ',')
target = pd.read_csv('datasets/australian-bank-target.csv', sep = ',')
data.head()
data = th.tensor(data.values).float()
data
target = th.tensor(target.values).float()
target
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
data = sc_X.fit_transform(data)
data = th.tensor(data).float()
data
```
## Label and Send Data to Server
Here we are tagging and labeling the specific bank's data. Although we are sending the data, this does not mean that it is accessible by the central aggregator. We are sending this data to a trusted network server -- hence the reason we can specify our own when establishing the duet, just in case we don't trust the default one. This specific network should reside in the country of the data, more specifically within the bank's own network, therefore adhering to all regulations where necessary.
```
data = data.tag("data")
data.describe("Australian Bank Training Data")
target = target.tag("target")
target.describe("Australian Bank Training Target")
# Once we have sent the data we are left with a pointer to the data
data_ptr = data.send(duet, searchable=True)
target_ptr = target.send(duet, searchable=True)
# Detail what is stored
duet.store.pandas
```
>NOTE: Although the data has been sent to this 'store', the other end of the connection cannot access/see the data without requesting it from you. However, from this side, because we sent the data, we can retrieve it whenever we want without requesting permission. Simply run the following code;
```python
duet.store["tag"].get()
```
>Where you replace the 'tag' with whatever the tag of the data you wish to get. Once you run this, the data will be removed from the store and brought back locally here.
```
# Detail any requests from client side. As mentioned above
# on the other end they need to request access to data/anything
# on duet server/store. This si where you can list any requests
# outstanding.
duet.requests.pandas
# Because on the other end of the connection they/we plan on
# running a model (with lots of requests) we can set up some
# request handlers that will automatically accept/deny certain
# labeled requests.
duet.requests.add_handler(
name="loss",
action="accept",
timeout_secs=-1, # no timeout
print_local=True # print the result in your notebook
)
duet.requests.add_handler(
name="model_download",
action="accept",
print_local=True # print the result in your notebook
)
duet.requests.handlers
```
# Introductory example to ensemble models
This first notebook aims at emphasizing the benefit of ensemble methods over
simple models (e.g. decision tree, linear model, etc.). Combining simple
models results in more powerful and robust models with less hassle.
We will start by loading the california housing dataset. We recall that the
goal in this dataset is to predict the median house value in some district
in California based on demographic and geographic data.
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">If you want a deeper overview regarding this dataset, you can refer to the
Appendix - Datasets description section at the end of this MOOC.</p>
</div>
```
from sklearn.datasets import fetch_california_housing
data, target = fetch_california_housing(as_frame=True, return_X_y=True)
target *= 100 # rescale the target in k$
```
We will check the generalization performance of a decision tree regressor with
default parameters.
```
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(random_state=0)
cv_results = cross_validate(tree, data, target, n_jobs=2)
scores = cv_results["test_score"]
print(f"R2 score obtained by cross-validation: "
f"{scores.mean():.3f} +/- {scores.std():.3f}")
```
We obtain fair results. However, as we previously presented in the "tree in
depth" notebook, this model needs to be tuned to overcome over- or
under-fitting. Indeed, the default parameters will not necessarily lead to an
optimal decision tree. Instead of using the default value, we should search
via cross-validation the optimal value of the important parameters such as
`max_depth`, `min_samples_split`, or `min_samples_leaf`.
We recall that we need to tune these parameters, as decision trees tend to
overfit the training data if we grow deep trees, but there are no rules on
what each parameter should be set to. Thus, not making a search could lead us
to have an underfitted or overfitted model.
Now, we make a grid-search to tune the hyperparameters that we mentioned
earlier.
```
%%time
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor
param_grid = {
"max_depth": [5, 8, None],
"min_samples_split": [2, 10, 30, 50],
"min_samples_leaf": [0.01, 0.05, 0.1, 1]}
cv = 3
tree = GridSearchCV(DecisionTreeRegressor(random_state=0),
param_grid=param_grid, cv=cv, n_jobs=2)
cv_results = cross_validate(tree, data, target, n_jobs=2,
return_estimator=True)
scores = cv_results["test_score"]
print(f"R2 score obtained by cross-validation: "
f"{scores.mean():.3f} +/- {scores.std():.3f}")
```
We see that optimizing the hyperparameters will have a positive effect
on the generalization performance. However, it comes with a higher computational
cost.
We can create a dataframe storing the important information collected during
the tuning of the parameters and investigate the results.
Now we will use an ensemble method called bagging. More details about this
method will be discussed in the next section. In short, this method will use
a base regressor (i.e. decision tree regressors) and will train several of
them on a slightly modified version of the training set. Then, the
predictions of all these base regressors will be combined by averaging.
Here, we will use 20 decision trees and check the fitting time as well as the
generalization performance on the left-out testing data. It is important to note
that we are not going to tune any parameter of the decision tree.
```
%%time
from sklearn.ensemble import BaggingRegressor
base_estimator = DecisionTreeRegressor(random_state=0)
bagging_regressor = BaggingRegressor(
base_estimator=base_estimator, n_estimators=20, random_state=0)
cv_results = cross_validate(bagging_regressor, data, target, n_jobs=8)
scores = cv_results["test_score"]
print(f"R2 score obtained by cross-validation: "
f"{scores.mean():.3f} +/- {scores.std():.3f}")
```
Without searching for optimal hyperparameters, the overall generalization
performance of the bagging regressor is better than that of a single decision
tree. In addition, the computational cost is lower than that of searching
for the optimal hyperparameters.
This shows the motivation behind the use of an ensemble learner: it gives a
relatively good baseline with decent generalization performance without any
parameter tuning.
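The averaging mechanism described above can also be written out by hand. Below is a minimal sketch, using synthetic data from `make_regression` rather than the housing set, purely for illustration of the bootstrap-and-average idea:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the housing data (illustration only)
X, y = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.RandomState(0)
predictions = []
for _ in range(20):
    # Bootstrap: resample the training set with replacement
    idx = rng.randint(0, len(X_train), size=len(X_train))
    tree = DecisionTreeRegressor(random_state=0)
    tree.fit(X_train[idx], y_train[idx])
    predictions.append(tree.predict(X_test))

# Combine the base regressors by averaging their predictions
bagged_pred = np.mean(predictions, axis=0)
single_pred = DecisionTreeRegressor(random_state=0).fit(X_train, y_train).predict(X_test)
print(f"single tree R2: {r2_score(y_test, single_pred):.3f}")
print(f"bagged trees R2: {r2_score(y_test, bagged_pred):.3f}")
```

`BaggingRegressor` does exactly this internally, with extra conveniences such as parallel fitting and out-of-bag scoring.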
Now, we will discuss in detail two ensemble families: bagging and
boosting:
* ensemble using bootstrap (e.g. bagging and random-forest);
* ensemble using boosting (e.g. adaptive boosting and gradient-boosting
decision tree).
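As a preview, one representative of each family is available off the shelf in scikit-learn. The sketch below (synthetic data, untuned defaults, purely illustrative) instantiates and cross-validates one model per family:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_validate

# Synthetic data standing in for the housing set (illustration only)
X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)

for model in (RandomForestRegressor(n_estimators=50, random_state=0),  # bootstrap family
              GradientBoostingRegressor(random_state=0)):              # boosting family
    scores = cross_validate(model, X, y, n_jobs=2)["test_score"]
    print(f"{model.__class__.__name__}: "
          f"{scores.mean():.3f} +/- {scores.std():.3f}")
```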
# Prepare Dataset for Model Training and Evaluating
# Amazon Customer Reviews Dataset
https://s3.amazonaws.com/amazon-reviews-pds/readme.html
## Schema
- `marketplace`: 2-letter country code (in this case all "US").
- `customer_id`: Random identifier that can be used to aggregate reviews written by a single author.
- `review_id`: A unique ID for the review.
- `product_id`: The Amazon Standard Identification Number (ASIN). `http://www.amazon.com/dp/<ASIN>` links to the product's detail page.
- `product_parent`: The parent of that ASIN. Multiple ASINs (color or format variations of the same product) can roll up into a single parent.
- `product_title`: Title description of the product.
- `product_category`: Broad product category that can be used to group reviews (in this case digital videos).
- `star_rating`: The review's rating (1 to 5 stars).
- `helpful_votes`: Number of helpful votes for the review.
- `total_votes`: Number of total votes the review received.
- `vine`: Was the review written as part of the [Vine](https://www.amazon.com/gp/vine/help) program?
- `verified_purchase`: Was the review from a verified purchase?
- `review_headline`: The title of the review itself.
- `review_body`: The text of the review.
- `review_date`: The date the review was written.
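When loading the TSV, it can help to pin down column dtypes up front. The mapping below is an assumption derived from the schema table above, not part of any official dataset tooling:

```python
import pandas as pd

# Hypothetical explicit dtypes matching the schema above
REVIEW_DTYPES = {
    "marketplace": "category",
    "customer_id": "int64",
    "review_id": "string",
    "product_id": "string",
    "product_parent": "int64",
    "product_title": "string",
    "product_category": "category",
    "star_rating": "int8",
    "helpful_votes": "int32",
    "total_votes": "int32",
    "vine": "category",
    "verified_purchase": "category",
    "review_headline": "string",
    "review_body": "string",
}

# Usage sketch (once the file is downloaded):
# df = pd.read_csv(path, delimiter="\t", quoting=csv.QUOTE_NONE,
#                  compression="gzip", dtype=REVIEW_DTYPES,
#                  parse_dates=["review_date"])
```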
```
import boto3
import sagemaker
import pandas as pd
sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
```
## Download
Let's start by retrieving a subset of the Amazon Customer Reviews dataset.
```
!aws s3 cp 's3://amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz' ./data/
import csv
df = pd.read_csv(
"./data/amazon_reviews_us_Digital_Software_v1_00.tsv.gz",
delimiter="\t",
quoting=csv.QUOTE_NONE,
compression="gzip",
)
df.shape
df.head(5)
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
df[["star_rating", "review_id"]].groupby("star_rating").count().plot(kind="bar", title="Breakdown by Star Rating")
plt.xlabel("Star Rating")
plt.ylabel("Review Count")
```
# Balance the Dataset
```
from sklearn.utils import resample
five_star_df = df.query("star_rating == 5")
four_star_df = df.query("star_rating == 4")
three_star_df = df.query("star_rating == 3")
two_star_df = df.query("star_rating == 2")
one_star_df = df.query("star_rating == 1")
# Check which sentiment has the least number of samples
minority_count = min(
five_star_df.shape[0], four_star_df.shape[0], three_star_df.shape[0], two_star_df.shape[0], one_star_df.shape[0]
)
five_star_df = resample(five_star_df, replace=False, n_samples=minority_count, random_state=27)
four_star_df = resample(four_star_df, replace=False, n_samples=minority_count, random_state=27)
three_star_df = resample(three_star_df, replace=False, n_samples=minority_count, random_state=27)
two_star_df = resample(two_star_df, replace=False, n_samples=minority_count, random_state=27)
one_star_df = resample(one_star_df, replace=False, n_samples=minority_count, random_state=27)
df_balanced = pd.concat([five_star_df, four_star_df, three_star_df, two_star_df, one_star_df])
df_balanced = df_balanced.reset_index(drop=True)
df_balanced.shape
df_balanced[["star_rating", "review_id"]].groupby("star_rating").count().plot(
kind="bar", title="Breakdown by Star Rating"
)
plt.xlabel("Star Rating")
plt.ylabel("Review Count")
df_balanced.head(5)
```
# Split the Data into Train, Validation, and Test Sets
```
from sklearn.model_selection import train_test_split
# Split all data into 90% train and 10% holdout
df_train, df_holdout = train_test_split(df_balanced, test_size=0.10, stratify=df_balanced["star_rating"])
# Split holdout data into 50% validation and 50% test
df_validation, df_test = train_test_split(df_holdout, test_size=0.50, stratify=df_holdout["star_rating"])
# Pie chart, where the slices will be ordered and plotted counter-clockwise:
labels = ["Train", "Validation", "Test"]
sizes = [len(df_train.index), len(df_validation.index), len(df_test.index)]
explode = (0.1, 0, 0)
fig1, ax1 = plt.subplots()
ax1.pie(sizes, explode=explode, labels=labels, autopct="%1.1f%%", startangle=90)
# Equal aspect ratio ensures that pie is drawn as a circle.
ax1.axis("equal")
plt.show()
```
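The two-stage split above can be sanity-checked on toy data. Here is a sketch, independent of the review DataFrame, confirming the 90/5/5 proportions and the stratification:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy frame with a perfectly balanced star_rating column (1000 rows)
toy = pd.DataFrame({"star_rating": np.tile([1, 2, 3, 4, 5], 200)})

toy_train, toy_holdout = train_test_split(
    toy, test_size=0.10, stratify=toy["star_rating"], random_state=0)
toy_val, toy_test = train_test_split(
    toy_holdout, test_size=0.50, stratify=toy_holdout["star_rating"], random_state=0)

print(len(toy_train), len(toy_val), len(toy_test))  # 900 50 50
# Stratification keeps every class at the same share in each split
print(sorted(toy_train["star_rating"].value_counts().unique()))  # [180]
```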
# Show 90% Train Data Split
```
df_train.shape
df_train[["star_rating", "review_id"]].groupby("star_rating").count().plot(
kind="bar", title="90% Train Breakdown by Star Rating"
)
```
# Show 5% Validation Data Split
```
df_validation.shape
df_validation[["star_rating", "review_id"]].groupby("star_rating").count().plot(
kind="bar", title="5% Validation Breakdown by Star Rating"
)
```
# Show 5% Test Data Split
```
df_test.shape
df_test[["star_rating", "review_id"]].groupby("star_rating").count().plot(
kind="bar", title="5% Test Breakdown by Star Rating"
)
```
# Select `star_rating` and `review_body` for Training
```
df_train = df_train[["star_rating", "review_body"]]
df_train.shape
df_train.head(5)
```
# Write a CSV With No Header for Comprehend
```
comprehend_train_path = "./amazon_reviews_us_Digital_Software_v1_00_comprehend.csv"
df_train.to_csv(comprehend_train_path, index=False, header=False)
```
# Upload Train Data to S3 for Comprehend
```
train_s3_prefix = "data"
comprehend_train_s3_uri = sess.upload_data(path=comprehend_train_path, key_prefix=train_s3_prefix)
comprehend_train_s3_uri
!aws s3 ls $comprehend_train_s3_uri
```
# Store the location of our train data in our notebook server to be used next
```
%store comprehend_train_s3_uri
%store
```
# Release Resources
```
%%html
<p><b>Shutting down your kernel for this notebook to release resources.</b></p>
<button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button>
<script>
try {
els = document.getElementsByClassName("sm-command-button");
els[0].click();
}
catch(err) {
// NoOp
}
</script>
%%javascript
try {
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
}
catch(err) {
// NoOp
}
```
#**PySpark on Google Colab**
---
####Setting up Google Colab to enable PySpark
```
# Install Java JDK 8
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
# Download Apache Spark 3.1.2
!wget -q https://downloads.apache.org/spark/spark-3.1.2/spark-3.1.2-bin-hadoop3.2.tgz
# Extract Apache Spark 3.1.2
!tar xf spark-3.1.2-bin-hadoop3.2.tgz
# Remove the Apache Spark 3.1.2 archive
!rm -rf spark-3.1.2-bin-hadoop3.2.tgz
# Install the FindSpark and PySpark modules
!pip install -q findspark
!pip install -q pyspark
```
####Setting up the environment for PySpark
```
# Import the modules
import os
import findspark
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
# Set the JAVA_HOME and SPARK_HOME environment variables
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-3.1.2-bin-hadoop3.2"
# Start FindSpark and create the Spark session instance
findspark.init()
spark = SparkSession.builder.master("local[*]").getOrCreate()
```
####PySpark is ready to use, have fun!
```
dataset = spark.read.format("json") \
.option("multiLine",True) \
.load("sample_data/anscombe.json")
dataset.columns
dataset.show(10)
dataset.printSchema()
dataset_agrupado = dataset \
.groupBy("Series") \
.agg(F.avg("X").alias("X_agrupado")
, F.avg("Y").alias("Y_agrupado")) \
.orderBy("Series")
dataset_agrupado.show()
dataset_agrupado.explain()
df = spark.createDataFrame(
[ (1., 4.)
, (2., 5.)
, (3., 6.)]
, ["A", "B"])
df.show()
df = spark.createDataFrame(
[
('864.754.453-33,565.878.787-43',)
, ('565.878.787-43 864.754.453-33 565.878.787-43',)
, ('333.444.555-66 222.222.222-33',)
]
, ["cpf",])
df.show(10,False)
df.select(
df.cpf
, F.length(F.regexp_replace(df.cpf, r'\d+\.\d+\.\d+\-\d+', '')).alias('reg1')
, F.length(F.regexp_replace(df.cpf, r'\d{3}\.\d{3}\.\d{3}\-\d{2}', '')).alias('reg2')
, df.cpf.rlike(r'\d{3}\.\d{3}\.\d{3}\-\d{2}').alias('reg3')
).show(10, False)
df2 = spark.createDataFrame(
[
('http://site.com:8080/',)
, ('http://localhost.com:8080/?pub=200',)
, ('http://server.com:1234',)
, ('http://server.com',)
]
, ["url",])
df2.show(10,False)
df2.select(
df2.url
, df2.url.rlike(r'https?:[\/]{2}\s+').alias('reg1')
, df2.url.rlike(r'https?:[\/]{2}([a-zA-Z0-9]+\.[a-zA-Z]{2,4})(:[0-9]+)?').alias('reg2')
, df2.url.rlike(r'https?:[\/]{2}([a-zA-Z0-9]+\.[a-zA-Z]{2,4})(:[0-9]+)').alias('reg3')
, df2.url.rlike(r'https?:\/{2}').alias('reg4')
).show(10, False)
```
https://huggingface.co/docs/transformers/main_classes/output

https://www.kaggle.com/code/debarshichanda/bert-multi-label-text-classification/notebook

```
import os
import re
import string
import json
#import emoji
import numpy as np
import pandas as pd
from sklearn import metrics
from bs4 import BeautifulSoup
import transformers
import torch
from torch.utils.data import Dataset, DataLoader, RandomSampler, SequentialSampler
from transformers import BertTokenizer, AutoTokenizer, BertModel, BertConfig, AutoModel, AdamW
import warnings
warnings.filterwarnings('ignore')

pd.set_option("display.max_columns", None)
df_train = pd.read_csv("train.csv")
df_dev = pd.read_csv("val.csv")
#df_train = pd.read_csv("train.csv", header=None, names=['Text', 'Class', 'ID'])
#df_dev = pd.read_csv("val.csv", sep='\t', header=None, names=['Text', 'Class', 'ID'])
df_train, df_dev
#def list_of_classes(row):
#    classes = ["anger", "fear", "joy", "sadness", "surprise"]
#    arr = [1 if emotion=="1" else 0 for emotion in classes]
#    print(arr)
#    return arr
#df_train = df_train.apply(list_of_classes, axis=1, result_type='expand')
#df_train['Len of classes'] = df_train['List of classes'].apply(lambda x: len(x))
#df_dev['List of classes'] = df_dev['Class'].apply(lambda x: x.split(','))
#df_dev['Len of classes'] = df_dev['List of classes'].apply(lambda x: len(x))
#df_train.head(10)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device
MAX_LEN = 200
TRAIN_BATCH_SIZE = 64
VALID_BATCH_SIZE = 64
EPOCHS = 10
LEARNING_RATE = 2e-5
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
target_cols = [col for col in df_train.columns if col not in ['Text', 'ID']]
target_cols
class BERTDataset(Dataset):
    def __init__(self, df, tokenizer, max_len):
        self.df = df
        self.max_len = max_len
        self.text = df.Text
        self.tokenizer = tokenizer
        self.targets = df[target_cols].values

    def __len__(self):
        return len(self.df)

    def __getitem__(self, index):
        text = self.text[index]
        inputs = self.tokenizer.encode_plus(
            text,
            truncation=True,
            add_special_tokens=True,
            max_length=self.max_len,
            padding='max_length',
            return_token_type_ids=True
        )
        ids = inputs['input_ids']
        mask = inputs['attention_mask']
        token_type_ids = inputs["token_type_ids"]

        return {
            'ids': torch.tensor(ids, dtype=torch.long),
            'mask': torch.tensor(mask, dtype=torch.long),
            'token_type_ids': torch.tensor(token_type_ids, dtype=torch.long),
            'targets': torch.tensor(self.targets[index], dtype=torch.float)
        }
train_dataset = BERTDataset(df_train, tokenizer, MAX_LEN)
valid_dataset = BERTDataset(df_dev, tokenizer, MAX_LEN)
train_loader = DataLoader(train_dataset, batch_size=TRAIN_BATCH_SIZE, num_workers=4, shuffle=True, pin_memory=True)
valid_loader = DataLoader(valid_dataset, batch_size=VALID_BATCH_SIZE, num_workers=4, shuffle=False, pin_memory=True)
# Creating the customized model, by adding a drop out and a dense layer on top of distil bert to get the final output for the model.

class BERTClass(torch.nn.Module):
    def __init__(self):
        super(BERTClass, self).__init__()
        self.roberta = AutoModel.from_pretrained('roberta-base')
        # self.l2 = torch.nn.Dropout(0.3)
        self.fc = torch.nn.Linear(768, 5)

    def forward(self, ids, mask, token_type_ids):
        _, features = self.roberta(ids, attention_mask=mask, token_type_ids=token_type_ids, return_dict=False)
        # output_2 = self.l2(output_1)
        output = self.fc(features)
        return output

model = BERTClass()
model.to(device);
def loss_fn(outputs, targets):
    return torch.nn.BCEWithLogitsLoss()(outputs, targets)
optimizer = AdamW(params=model.parameters(), lr=LEARNING_RATE, weight_decay=1e-6)
enumerate(train_loader, 0)
i_break = 2
def train(epoch):
    model.train()
    for i, data in enumerate(train_loader, 0):
        print(f"Epoch {epoch}, step {i}")
        ids = data['ids'].to(device, dtype=torch.long)
        mask = data['mask'].to(device, dtype=torch.long)
        token_type_ids = data['token_type_ids'].to(device, dtype=torch.long)
        targets = data['targets'].to(device, dtype=torch.float)
        outputs = model(ids, mask, token_type_ids)
        loss = loss_fn(outputs, targets)
        if i % i_break == 0:
            print(f'Epoch: {epoch}, Loss: {loss.item()}')
            break
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
for epoch in range(EPOCHS):
    train(epoch)
def validation():
    model.eval()
    fin_targets = []
    fin_outputs = []
    with torch.no_grad():
        for _, data in enumerate(valid_loader, 0):
            ids = data['ids'].to(device, dtype=torch.long)
            mask = data['mask'].to(device, dtype=torch.long)
            token_type_ids = data['token_type_ids'].to(device, dtype=torch.long)
            targets = data['targets'].to(device, dtype=torch.float)
            outputs = model(ids, mask, token_type_ids)
            fin_targets.extend(targets.cpu().detach().numpy().tolist())
            fin_outputs.extend(torch.sigmoid(outputs).cpu().detach().numpy().tolist())
    return fin_outputs, fin_targets
```
# Time Series Cross Validation
```
import pandas as pd
import numpy as np
#suppress ARIMA warnings
import warnings
warnings.filterwarnings('ignore')
```
Up until now we have used a single validation period to select our best model. The weakness of that approach is that it gives you a sample size of 1 (better than nothing, but generally poor statistics!). Time series cross-validation is an approach that provides more data points when comparing models. In the classical time series literature, time series cross-validation is called a **Rolling Forecast Origin**. There may also be benefit in taking a **sliding window** approach to cross-validation. This second approach maintains a fixed-size training set, i.e. it drops older values from the time series during validation.
## Rolling Forecast Origin
The following code and output provide a simplified view of how rolling forecast horizons work in practice.
```
def rolling_forecast_origin(train, min_train_size, horizon):
'''
Rolling forecast origin generator.
'''
for i in range(len(train) - min_train_size - horizon + 1):
split_train = train[:min_train_size+i]
split_val = train[min_train_size+i:min_train_size+i+horizon]
yield split_train, split_val
full_series = [2502, 2414, 2800, 2143, 2708, 1900, 2333, 2222, 1234, 3456]
test = full_series[-2:]
train = full_series[:-2]
print('full training set: {0}'.format(train))
print('hidden test set: {0}'.format(test))
cv_rolling = rolling_forecast_origin(train, min_train_size=4, horizon=2)
cv_rolling
i = 0
for cv_train, cv_val in cv_rolling:
print(f'CV[{i+1}]')
print(f'Train:\t{cv_train}')
print(f'Val:\t{cv_val}')
print('-----')
i += 1
```
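For comparison, scikit-learn ships a similar expanding-window splitter, `TimeSeriesSplit` (its `test_size` argument requires a recent scikit-learn release). The sketch below reproduces the rolling-forecast-origin pattern with a fixed-size validation fold:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

series = np.arange(10)

# Expanding-window splits with a fixed validation fold of 2 observations
tscv = TimeSeriesSplit(n_splits=3, test_size=2)
splits = list(tscv.split(series))
for train_idx, val_idx in splits:
    print(f"Train: {series[train_idx]}  Val: {series[val_idx]}")
```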
## Sliding Window Cross Validation
```
def sliding_window(train, window_size, horizon, step=1):
'''
sliding window generator.
Parameters:
--------
train: array-like
training data for time series method
window_size: int
lookback - how much data to include.
horizon: int
forecast horizon
step: int, optional (default=1)
    step=1 means the window slides forward by a single observation
    between splits. Increase step to run fewer splits.
Returns:
array-like, array-like
split_training, split_validation
'''
for i in range(0, len(train) - window_size - horizon + 1, step):
split_train = train[i:window_size+i]
split_val = train[i+window_size:window_size+i+horizon]
yield split_train, split_val
```
This code tests it with `step=1`:
```
cv_sliding = sliding_window(train, window_size=4, horizon=1)
print('full training set: {0}\n'.format(train))
i = 0
for cv_train, cv_val in cv_sliding:
print(f'CV[{i+1}]')
print(f'Train:\t{cv_train}')
print(f'Val:\t{cv_val}')
print('-----')
i += 1
```
The following code tests it with `step=2`. Note that you get fewer splits, so the procedure is less computationally expensive, at the cost of fewer validation folds. That is often an acceptable trade-off.
```
cv_sliding = sliding_window(train, window_size=4, horizon=1, step=2)
print('full training set: {0}\n'.format(train))
i = 0
for cv_train, cv_val in cv_sliding:
print(f'CV[{i+1}]')
print(f'Train:\t{cv_train}')
print(f'Val:\t{cv_val}')
print('-----')
i += 1
```
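The number of splits seen above follows directly from the generator's loop bound. Assuming the `sliding_window` implementation shown earlier, it is `(len(train) - window_size - horizon) // step + 1`:

```python
def n_sliding_splits(n_obs, window_size, horizon, step=1):
    """Number of folds sliding_window() will yield (sketch of its loop bound)."""
    return max(0, (n_obs - window_size - horizon) // step + 1)

# 8 observations, window of 4, horizon of 1:
print(n_sliding_splits(8, 4, 1, step=1))  # 4 splits
print(n_sliding_splits(8, 4, 1, step=2))  # 2 splits
```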
# Parallel Cross Validation Example using Naive1
```
from forecast_tools.baseline import SNaive, Naive1
from forecast_tools.datasets import load_emergency_dept
#optimised version of the functions above...
from forecast_tools.model_selection import (rolling_forecast_origin,
sliding_window,
cross_validation_score)
from sklearn.metrics import mean_absolute_error
train = load_emergency_dept()
model = Naive1()
#%%timeit runs the code multiple times to get an estimate of runtime.
#comment it out to run the code only once.
```
Run on a single core
```
%%time
cv = sliding_window(train, window_size=14, horizon=7, step=1)
results_1 = cross_validation_score(model, train, cv, mean_absolute_error,
n_jobs=1)
```
Run across multiple cores by setting `n_jobs=-1`
```
%%time
cv = sliding_window(train, window_size=14, horizon=7, step=1)
results_2 = cross_validation_score(model, train, cv, mean_absolute_error,
n_jobs=-1)
results_1.shape
results_2.shape
print(results_1.mean(), results_1.std())
```
Just to illustrate that the results are the same (the only difference is runtime):
```
print(results_2.mean(), results_2.std())
```
# Cross validation with multiple forecast horizons
```
horizons = [7, 14, 21]
cv = sliding_window(train, window_size=14, horizon=max(horizons), step=1)
#note that we now pass in the horizons list to cross_val_score
results_h = cross_validation_score(model, train, cv, mean_absolute_error,
horizons=horizons, n_jobs=-1)
#results are returned as numpy array - easy to cast to dataframe and display
pd.DataFrame(results_h, columns=['7days', '14days', '21days']).head()
```
## Cross validation example using ARIMA - does it speed up when CV is run in parallel?
```
#use ARIMA from pmdarima as that has a similar interface to baseline models.
from pmdarima import ARIMA, auto_arima
#auto_model = auto_arima(train, suppress_warnings=True, n_jobs=-1, m=7)
#auto_model
#create arima model - reasonably complex model
#order=(1, 1, 2), seasonal_order=(2, 0, 2, 7)
args = {'order':(1, 1, 2), 'seasonal_order':(2, 0, 2, 7)}
model = ARIMA(order=args['order'], seasonal_order=args['seasonal_order'],
enforce_stationarity=False, suppress_warnings=True)
%%time
cv = rolling_forecast_origin(train, min_train_size=320, horizon=7)
results_1 = cross_validation_score(model, train, cv, mean_absolute_error,
n_jobs=1)
```
Comment out `%%timeit` to run the code only once!
You should see a big improvement in performance; mine went
from 12.3 seconds to 2.4 seconds.
```
%%time
cv = rolling_forecast_origin(train, min_train_size=320, horizon=7)
results_2 = cross_validation_score(model, train, cv, mean_absolute_error,
n_jobs=-1)
results_1.shape
results_2.shape
results_1.mean()
results_2.mean()
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Text classification with movie reviews
<table class="tfo-notebook-buttons" align="left">
<td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/pt/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
    <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/pt/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This *notebook* classifies movie reviews as **positive** or **negative** using the text of the review. This is an example of *binary*, or two-class, classification, an important and widely applicable kind of machine learning problem.
We will use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), which contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). The dataset is split into 25,000 reviews for training and 25,000 for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.
This notebook uses [tf.keras](https://www.tensorflow.org/r1/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
```
# keras.datasets.imdb is broken in TF 1.13 and 1.14, due to NumPy 1.16.3
!pip install tf_nightly
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
```
## Download the IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
```
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
```
The argument `num_words=10000` keeps the 10,000 most frequently occurring words in the training data. The rarer words are discarded to keep the size of the data manageable.
## Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review and 1 is a positive review.
```
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
```
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
```
print(train_data[0])
```
Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must all be the same length, we'll need to resolve this later.
```
len(train_data[0]), len(train_data[1])
```
### Convert the integers back to words
It may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that contains the integer-to-string mapping:
```
# A dictionary mapping words to integer indices
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
```
Now we can use the `decode_review` function to display the text of the first review:
```
decode_review(train_data[0])
```
## Prepare the data
The reviews (arrays of integers) must be converted to tensors before being fed into the neural network. This conversion can be done in a couple of ways:
* Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network, a Dense layer that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix.
* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.
In this tutorial, we will use the second approach.
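For reference, the first (multi-hot) approach can be sketched in a few lines of NumPy. This is illustrative only; the tutorial proceeds with padding:

```python
import numpy as np

def multi_hot_encode(sequences, num_words=10000):
    """Turn lists of word indices into binary vectors of size num_words."""
    results = np.zeros((len(sequences), num_words), dtype=np.float32)
    for i, seq in enumerate(sequences):
        results[i, seq] = 1.0  # set the positions of the words that occur
    return results

encoded = multi_hot_encode([[3, 5], [1, 3, 8]], num_words=10)
print(encoded[0])  # [0. 0. 0. 1. 0. 1. 0. 0. 0. 0.]
```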
Since the movie reviews must be the same length, we will use the [pad_sequences](https://keras.io/preprocessing/sequence/#pad_sequences) function to standardize the lengths:
```
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
```
Let's look at the length of the examples now:
```
len(train_data[0]), len(train_data[1])
```
And inspect the (now padded) first review:
```
print(train_data[0])
```
## Build the model
The neural network is created by stacking layers. This requires two main architectural decisions:
* How many layers will be used in the model?
* How many *hidden units* are used in each layer?
In this example, the input data is an array of word indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
```
# The input shape is the vocabulary count used by the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
```
The layers are stacked sequentially to build the classifier:
1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.
2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length in the simplest way possible.
3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.
4. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level.
### Hidden units
The above model has two intermediate or "hidden" layers between the input and the output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representational space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns: patterns that improve performance on the training data but not on the test data. This is called *overfitting*, and we'll explore it later.
### Loss function and optimizer
The model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function.
This isn't the only choice of loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities: it measures the "distance" between probability distributions, or, in our case, between the ground-truth distribution and the predictions.
Later, when we explore regression problems (say, predicting the price of a house), we'll see how to use another loss function called *mean squared error*.
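To see what `binary_crossentropy` actually measures, here is a dependency-free sketch of the formula (the helper name is ours; during training Keras uses its own implementation):

```python
import math

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy over a batch of scalar probabilities."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# A confident correct prediction costs little; a confident wrong one costs a lot.
print(binary_crossentropy([1, 0], [0.9, 0.1]))
print(binary_crossentropy([1, 0], [0.1, 0.9]))
```

The asymmetric penalty on confident mistakes is why it suits probability outputs better than a squared error would.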
Now, configure the model to use an optimizer and a loss function:
```
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['acc'])
```
### Create a validation set
When training, we want to check the accuracy of the model on data it has never seen. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the test set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate the accuracy.)
```
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
```
## Train the model
Train the model for 40 epochs in mini-batches of 512 examples. That is 40 iterations over all the examples in the `x_train` and `y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 examples from the validation set:
```
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
```
## Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number representing our error; lower values are better) and accuracy.
```
results = model.evaluate(test_data, test_labels)
print(results)
```
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
## Create a graph of accuracy and loss over time
`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
```
history_dict = history.history
history_dict.keys()
```
There are four entries: one for each metric monitored during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
```
import matplotlib.pyplot as plt
acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()   # clear the figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
In the graphs, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.
Notice the training loss *decreases* with each epoch and the training accuracy *increases*. This is expected when using gradient descent optimization: it should minimize the desired quantity on every iteration.
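That "minimize on every iteration" behaviour can be seen with a toy gradient descent on the loss $x^2$ (purely illustrative, not this tutorial's model):

```python
# Toy loss: loss(x) = x**2, with gradient 2*x.
def gradient_descent_step(x, lr=0.1):
    return x - lr * (2 * x)

x = 5.0
losses = []
for _ in range(10):
    losses.append(x ** 2)       # record the loss before each step
    x = gradient_descent_step(x)

print(losses[:3])  # strictly decreasing from 25.0
```

With a well-chosen learning rate, each step moves against the gradient, so the loss sequence is monotonically decreasing, just like the training curve above.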
This isn't the case for the validation loss and accuracy: they seem to peak after about twenty epochs. This is an example of *overfitting*: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations *specific* to the training data that do not *generalize* to test data.
For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, you'll see how to do this automatically with a *callback*.
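Keras supports this via the `keras.callbacks.EarlyStopping` callback; the logic behind it can be sketched in plain Python (the function name and the toy numbers here are ours):

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the 1-based epoch to stop at: the best epoch so far, once
    `patience` consecutive epochs fail to improve on it."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0  # new best: reset counter
        else:
            waited += 1
            if waited >= patience:
                return best_epoch  # no improvement for `patience` epochs
    return best_epoch

# Validation loss improves, then climbs again: stop at the minimum (epoch 3).
print(early_stop_epoch([0.70, 0.55, 0.48, 0.52, 0.60]))  # 3
```

In Keras you would pass `callbacks=[keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)]` to `model.fit()` to get the same effect.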
```
from resources.workspace import *
```
$
% START OF MACRO DEF
% DO NOT EDIT IN INDIVIDUAL NOTEBOOKS, BUT IN macros.py
%
\newcommand{\Reals}{\mathbb{R}}
\newcommand{\Expect}[0]{\mathbb{E}}
\newcommand{\NormDist}{\mathcal{N}}
%
\newcommand{\DynMod}[0]{\mathscr{M}}
\newcommand{\ObsMod}[0]{\mathscr{H}}
%
\newcommand{\mat}[1]{{\mathbf{{#1}}}}
%\newcommand{\mat}[1]{{\pmb{\mathsf{#1}}}}
\newcommand{\bvec}[1]{{\mathbf{#1}}}
%
\newcommand{\trsign}{{\mathsf{T}}}
\newcommand{\tr}{^{\trsign}}
\newcommand{\tn}[1]{#1}
\newcommand{\ceq}[0]{\mathrel{≔}}
%
\newcommand{\I}[0]{\mat{I}}
\newcommand{\K}[0]{\mat{K}}
\newcommand{\bP}[0]{\mat{P}}
\newcommand{\bH}[0]{\mat{H}}
\newcommand{\bF}[0]{\mat{F}}
\newcommand{\R}[0]{\mat{R}}
\newcommand{\Q}[0]{\mat{Q}}
\newcommand{\B}[0]{\mat{B}}
\newcommand{\C}[0]{\mat{C}}
\newcommand{\Ri}[0]{\R^{-1}}
\newcommand{\Bi}[0]{\B^{-1}}
\newcommand{\X}[0]{\mat{X}}
\newcommand{\A}[0]{\mat{A}}
\newcommand{\Y}[0]{\mat{Y}}
\newcommand{\E}[0]{\mat{E}}
\newcommand{\U}[0]{\mat{U}}
\newcommand{\V}[0]{\mat{V}}
%
\newcommand{\x}[0]{\bvec{x}}
\newcommand{\y}[0]{\bvec{y}}
\newcommand{\z}[0]{\bvec{z}}
\newcommand{\q}[0]{\bvec{q}}
\newcommand{\br}[0]{\bvec{r}}
\newcommand{\bb}[0]{\bvec{b}}
%
\newcommand{\bx}[0]{\bvec{\bar{x}}}
\newcommand{\by}[0]{\bvec{\bar{y}}}
\newcommand{\barB}[0]{\mat{\bar{B}}}
\newcommand{\barP}[0]{\mat{\bar{P}}}
\newcommand{\barC}[0]{\mat{\bar{C}}}
\newcommand{\barK}[0]{\mat{\bar{K}}}
%
\newcommand{\D}[0]{\mat{D}}
\newcommand{\Dobs}[0]{\mat{D}_{\text{obs}}}
\newcommand{\Dmod}[0]{\mat{D}_{\text{mod}}}
%
\newcommand{\ones}[0]{\bvec{1}}
\newcommand{\AN}[0]{\big( \I_N - \ones \ones\tr / N \big)}
%
% END OF MACRO DEF
$
# The ensemble (Monte-Carlo) approach
is an approximate method for doing Bayesian inference. Instead of computing the full (grid values, or parameters, of the) posterior distributions, we instead try to generate ensembles from them.
An ensemble is an *iid* sample, i.e. a set of "members" ("particles", "realizations", or "sample points") that have been drawn ("sampled") independently from the same distribution. With the EnKF, these assumptions are generally tenuous, but pragmatic.
Ensembles can be used to characterize uncertainty: either by reconstructing (estimating) the distribution from which it is assumed drawn, or by computing various *statistics* such as the mean, median, variance, covariance, skewness, confidence intervals, etc (any function of the ensemble can be seen as a "statistic"). This is illustrated by the code below.
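As a dependency-free warm-up to the plotting demo below, a few such statistics computed from a pure-Python sample (the variable names are illustrative):

```python
import random
import statistics

random.seed(0)
# N iid draws from N(0, 25), i.e. mean 0 and standard deviation 5.
ensemble = [random.gauss(0, 5) for _ in range(10_000)]

# Any function of the ensemble is a "statistic"; a few common ones:
mean = statistics.fmean(ensemble)
var = statistics.variance(ensemble)      # unbiased, divides by N-1
median = statistics.median(ensemble)
print(round(mean, 2), round(var, 1), round(median, 2))
```

With 10,000 members the estimates land close to the true moments (0 and 25), which is the sense in which a sample "characterizes" its distribution.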
```
# Parameters
b = 0
B = 25
B12 = sqrt(B)
def true_pdf(x):
return ss.norm.pdf(x,b,sqrt(B))
# Plot true pdf
xx = 3*linspace(-B12,B12,201)
fig, ax = plt.subplots()
ax.plot(xx,true_pdf(xx),label="True");
# Sample and plot ensemble
M = 1 # length of state vector
N = 100 # ensemble size
E = b + B12*randn((N,M))
ax.plot(E, zeros(N), '|k', alpha=0.3, ms=100)
# Plot histogram
nbins = max(10,N//30)
heights, bins, _ = ax.hist(E,density=1,bins=nbins,label="Histogram estimate")
# Plot parametric estimate
x_bar = np.mean(E)
B_bar = np.var(E)
ax.plot(xx,ss.norm.pdf(xx,x_bar,sqrt(B_bar)),label="Parametric estimate")
ax.legend();
# Uncomment AFTER Exc 4:
# dx = bins[1]-bins[0]
# c = 0.5/sqrt(2*pi*B)
# for height, x in zip(heights,bins):
# ax.add_patch(mpl.patches.Rectangle((x,0),dx,c*height/true_pdf(x+dx/2),alpha=0.3))
# Also set
# * N = 10**4
# * nbins = 50
```
The plot demonstrates that the true distribution can be represented by a sample thereof (since we can almost reconstruct the Gaussian distribution by estimating the moments from the sample). However, there are other ways to reconstruct (estimate) a distribution from a sample. For example: a histogram.
**Exc 2:** Which approximation to the true pdf looks better: the histogram or the parametric estimate?
Does one approximation actually start with more information? The EnKF takes advantage of this.
#### Exc 4*:
Use `gaussian_kde` from `scipy.stats` to make a "continuous histogram" and plot it above.
```
#show_answer("KDE")
```
**Exc 5*:** Suppose the histogram bars get normalized (divided) by the value of the pdf at their location.
How do you expect the resulting histogram to look?
Test your answer by uncommenting the block in the above code.
Being able to sample a Gaussian distribution is a building block of the EnKF.
In the previous example, we generated samples from a Gaussian distribution using the `randn` function.
However, that was just for a scalar (univariate) case, i.e. with `M=1`. We need to be able to sample a multivariate Gaussian distribution. That is the objective of the following exercise.
**Exc 6 (Multivariate Gaussian sampling):**
Suppose $\z$ is a standard Gaussian,
i.e. $p(\z) = \mathcal{N}(\z \mid \bvec{0},\I_M)$,
where $\I_M$ is the $M$-dimensional identity matrix.
Let $\x = \mat{L}\z + \bb$.
Recall [Exc 3.7](T3%20-%20Univariate%20Kalman%20filtering.ipynb#Exc-3.7:-The-forecast-step:),
which yields $p(\x) = \mathcal{N}(\x \mid \bb, \mat{L}\mat{L}^T)$.
* (a). $\z$ can be sampled using `randn((M,1))`. How (where) is `randn` defined?
* (b). Consider the above definition of $\x$ and the code below.
Complete it so as to generate a random realization of $\x$.
Hint: matrix-vector multiplication can be done using the symbol `@`.
```
M = 3 # ndim
b = 10*ones(M)
B = diag(1+arange(M))
L = np.linalg.cholesky(B) # B12
print("True mean and cov:")
print(b)
print(B)
### INSERT ANSWER (b) ###
#show_answer('Gaussian sampling a')
#show_answer('Gaussian sampling b')
```
* (c). In the code cell below, sample $N = 100$ realizations of $\x$
and collect them in an $M$-by-$N$ "ensemble matrix" $\E$.
- Try to avoid `for` loops (the main thing to figure out is: how to add a (mean) vector to a matrix).
- Run the cell and inspect the computed mean and covariance to see if they're close to the true values, printed in the cell above.
```
N = 100 # ensemble size
E = ### INSERT ANSWER (c) ###
# Use the code below to assess whether you got it right
x_bar = np.mean(E,axis=1)
B_bar = np.cov(E)
with printoptions(precision=1):
print("Estimated mean:")
print(x_bar)
print("Estimated covariance:")
print(B_bar)
plt.matshow(B_bar,cmap="Blues"); plt.grid('off'); plt.colorbar()
#show_answer('Gaussian sampling c')
```
**Exc 8*:** How erroneous are the ensemble estimates on average?
```
#show_answer('Average sampling error')
```
**Exc 10:** Above, we used numpy's (`np`) functions to compute the sample-estimated mean and covariance matrix,
$\bx$ and $\barB$,
from the ensemble matrix $\E$.
Now, instead, implement these estimators yourself:
$$\begin{align}\bx &\ceq \frac{1}{N} \sum_{n=1}^N \x_n \, , \\
\barB &\ceq \frac{1}{N-1} \sum_{n=1}^N (\x_n - \bx) (\x_n - \bx)^T \, . \end{align}$$
```
# Don't use numpy's mean, cov
def estimate_mean_and_cov(E):
M, N = E.shape
### INSERT ANSWER ###
return x_bar, B_bar
x_bar, B_bar = estimate_mean_and_cov(E)
with printoptions(precision=1):
print(x_bar)
print(B_bar)
#show_answer('ensemble moments')
```
**Exc 12:** Why is the covariance computation normalized by $(N-1)$ rather than $N$?
```
#show_answer('Why (N-1)')
```
**Exc 14:** Like Matlab, Python (numpy) is quicker if you "vectorize" loops.
This is eminently possible with computations of ensemble moments.
Let $\X \ceq
\begin{bmatrix}
\x_1 -\bx, & \ldots & \x_n -\bx, & \ldots & \x_N -\bx
\end{bmatrix} \, .$
* (a). Show that $\X = \E \AN$, where $\ones$ is the column vector of length $N$ with all elements equal to $1$.
Hint: consider column $n$ of $\X$.
* (b). Show that $\barB = \X \X^T /(N-1)$.
* (c). Code up this latest formula for $\barB$ and insert it in `estimate_mean_and_cov(E)`.
```
#show_answer('ensemble moments vectorized')
```
**Exc 16:** The cross-covariance between two random vectors, $\x$ and $\y$, is given by
$$\begin{align}
\barC_{\x,\y}
&\ceq \frac{1}{N-1} \sum_{n=1}^N
(\x_n - \bx) (\y_n - \by)^T \\
&= \X \Y^T /(N-1)
\end{align}$$
where $\Y$ is, similar to $\X$, the matrix whose columns are $\y_n - \by$ for $n=1,\ldots,N$.
Note that this is simply the covariance formula, but for two different variables.
I.e. if $\Y = \X$, then $\barC_{\x,\y} = \barC_{\x}$ (which we have denoted $\barB$ in the above).
Implement the cross-covariance estimator in the code-cell below.
```
def estimate_cross_cov(Ex,Ey):
### INSERT ANSWER ###
#show_answer('estimate cross')
```
**Exc 18 (error notions)*:**
* (a). What's the difference between error and residual?
* (b). What's the difference between error and bias?
* (c). Show `MSE = RMSE^2 = Bias^2 + Var`
```
#show_answer('errors')
```
### Next: [Writing your own EnKF](T8%20-%20Writing%20your%20own%20EnKF.ipynb)
# Exploratory data analysis
Exploratory data analysis is an important part of any data science project. According to [Forbes](https://www.forbes.com/sites/gilpress/2016/03/23/data-preparation-most-time-consuming-least-enjoyable-data-science-task-survey-says/?sh=67e543e86f63), it accounts for about 80% of the work of data scientists. Thus, we are going to pay our attention to that part.
This notebook covers the data description, cleaning, variable preparation, and CTR calculation and visualization.
---
```
import pandas as pd
import random
import seaborn as sns
import matplotlib.pyplot as plt
import gc
%matplotlib inline
```
Given that the file occupies 5.9 GB and has 40 million rows, we are going to read only a few rows to get a glimpse of the data.
```
filename = 'data/train.csv'
!echo 'Number of lines in "train.csv":'
!wc -l {filename}
!echo '"train.csv" file size:'
!du -h {filename}
dataset_5 = pd.read_csv('data/train.csv', nrows=5)
dataset_5.head()
print("Number of columns: {}\n".format(dataset_5.shape[1]))
```
---
## Data preparation
* The `hour` column has the format `YYMMDDHH` and has to be converted.
* It is necessary to load only the `click` and `hour` columns for the `CTR` calculation.
* For data exploration purposes we also extract the hour of day and build distributions of `CTR` by hour and weekday.
---
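The `YYMMDDHH` format can be sanity-checked with the standard library before loading the full file (the timestamp below is a made-up example, not taken from the data):

```python
from datetime import datetime

# The raw `hour` column packs date and hour into YYMMDDHH; the same
# format string used with pandas below works with the standard library.
stamp = datetime.strptime("14102209", "%y%m%d%H")
print(stamp)                  # 2014-10-22 09:00:00
print(stamp.strftime("%A"))   # the weekday name used in the graphs later
```

This is the same `%y%m%d%H` format string passed to `pd.to_datetime` in the cell below.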
```
pd.to_datetime(dataset_5['hour'], format='%y%m%d%H')
# custom_date_parser = lambda x: pd.datetime.strptime(x, '%y%m%d%H')
# The commented part is for preliminary analysis and reads only 10% of data
# row_num = 40428967
# to read 10% of data
# skip = sorted(random.sample(range(1, row_num), round(0.9 * row_num)))
# data_set = pd.read_csv('data/train.csv',
# header=0,
# skiprows=skip,
# usecols=['click', 'hour'])
data_set = pd.read_csv('data/train.csv',
header=0,
usecols=['click', 'hour'])
data_set['hour'] = pd.to_datetime(data_set['hour'], format='%y%m%d%H')
data_set.isna().sum()
data_set.shape
round(100 * data_set.click.value_counts() / data_set.shape[0])
data_set.hour.dt.date.unique()
```
### Data preparation for CTR time series graph
```
df_CTR = data_set.groupby('hour').agg({
'click': ['count', 'sum']
}).reset_index()
df_CTR.columns = ['hour', 'impressions', 'clicks']
df_CTR['CTR'] = df_CTR['clicks'] / df_CTR['impressions']
del data_set; gc.collect();
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
plt.figure(figsize=[16, 8])
sns.lineplot(x='hour', y='CTR', data=df_CTR, linewidth=3)
plt.title('Hourly CTR for period 2014/10/21 and 2014/10/30', fontsize=20)
```
### Data preparation for CTR by hours graph
```
df_CTR['h'] = df_CTR.hour.dt.hour
df_CTR_h = df_CTR[['h', 'impressions',
'clicks']].groupby('h').sum().reset_index()
df_CTR_h['CTR'] = df_CTR_h['clicks'] / df_CTR_h['impressions']
df_CTR_h_melt = pd.melt(df_CTR_h,
id_vars='h',
value_vars=['impressions', 'clicks'],
value_name='count',
var_name='type')
plt.figure(figsize=[16, 8])
sns.set_style("white")
g1 = sns.barplot(x='h',
y='count',
hue='type',
data=df_CTR_h_melt,
palette="deep")
g1.legend(loc=1).set_title(None)
ax2 = plt.twinx()
sns.lineplot(x='h',
y='CTR',
data=df_CTR_h,
palette="deep",
marker='o',
ax=ax2,
label='CTR',
linewidth=5,
color='lightblue')
plt.title('CTR, Number of Impressions and Clicks by hours', fontsize=20)
ax2.legend(loc=5)
plt.tight_layout()
```
### Data preparation for CTR by weekday graph
```
df_CTR['weekday'] = df_CTR.hour.dt.day_name()
df_CTR['weekday_num'] = df_CTR.hour.dt.weekday
df_CTR_w = df_CTR[['weekday', 'impressions',
'clicks']].groupby('weekday').sum().reset_index()
df_CTR_w['CTR'] = df_CTR_w['clicks'] / df_CTR_w['impressions']
df_CTR_w_melt = pd.melt(df_CTR_w,
id_vars='weekday',
value_vars=['impressions', 'clicks'],
value_name='count',
var_name='type')
plt.figure(figsize=[16, 8])
sns.set_style("white")
g1 = sns.barplot(x='weekday',
y='count',
hue='type',
data=df_CTR_w_melt.sort_values('weekday'),
palette="deep",
order=[
'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday',
'Saturday', 'Sunday'
])
g1.legend(loc=1).set_title(None)
ax2 = plt.twinx()
sns.lineplot(x='weekday',
y='CTR',
data=df_CTR.sort_values(by='weekday_num'),
palette="deep",
marker='o',
ax=ax2,
label='CTR',
linewidth=5,
sort=False)
plt.title('CTR, Number of Impressions and Clicks by weekday', fontsize=20)
ax2.legend(loc=5)
plt.tight_layout()
```
### Normality test
```
from scipy.stats import normaltest, shapiro
def test_interpretation(stat, p, alpha=0.05):
"""
Outputs the result of statistical test comparing test-statistic and p-value
"""
print('Statistics=%.3f, p-value=%.3f, alpha=%.2f' % (stat, p, alpha))
if p > alpha:
print('Sample looks like from normal distribution (fail to reject H0)')
else:
print('Sample is not from Normal distribution (reject H0)')
stat, p = shapiro(df_CTR.CTR)
test_interpretation(stat, p)
stat, p = normaltest(df_CTR.CTR)
test_interpretation(stat, p)
```
---
## Summary
* Number of rows: 40428967
* Date duration: 10 days between 2014/10/21 and 2014/10/30. Each day has 24 hours
* No missing values in variables `click` and `hour`
* For simplicity, the analysis is provided for 10% of the data; as soon as the notebook is finalized, it will be re-run on all available data. Once the hourly aggregation takes place, the raw data source is deleted to free memory
* Three graphs are provided:
* CTR time series for the whole data duration
* CTR, impressions, and click counts by hour
* CTR, impressions, and click counts by weekday
* Average `CTR` value is **17%**
* Most of the `Impressions` and `Clicks` appear on Tuesday, Wednesday, and Thursday, but the highest `CTR` values are on Monday and Sunday
* The normality in `CTR` time-series is **rejected** by two tests
---
## Hypothesis:
There is seasonality in `CTR` by hour and weekday. For instance, `CTR` at hour 21 is lower than `CTR` at hour 14, which can be observed from the graphs. Ideally, it is necessary to use a 24-hour lag for anomaly detection. It can be implemented by comparing, for instance, hour 1 of day 10 with the average value of hour 1 over days 3 to 9 (one week), etc. One week is chosen because averaging over a whole week smooths the weekday seasonality: Monday and Sunday differ from Tuesday and Wednesday, but there is little difference between whole weeks. An additional improvement is to use the median instead of a simple average as the measure of central tendency, because the average is biased towards abnormal values.
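A minimal sketch of the proposed baseline, assuming an hourly CTR series and a same-hour-of-day median over the previous week (the function name and the synthetic series are ours, purely for illustration):

```python
from statistics import median

def weekly_median_baseline(hourly_ctr, lag_days=7):
    """For each hour index t, the median CTR at the same hour of day over the
    previous `lag_days` days; None while there is not enough history."""
    baselines = []
    for t in range(len(hourly_ctr)):
        history = [hourly_ctr[t - 24 * d]
                   for d in range(1, lag_days + 1) if t - 24 * d >= 0]
        baselines.append(median(history) if len(history) == lag_days else None)
    return baselines

# Ten days of synthetic hourly CTR with a daily cycle and one spike on day 9.
ctr = [0.15 + 0.03 * ((h % 24) < 12) for h in range(240)]
ctr[9 * 24 + 3] = 0.40  # anomaly at day 9, hour 3
base = weekly_median_baseline(ctr)
deviations = [c - b for c, b in zip(ctr, base) if b is not None]
print(max(deviations))  # the spike stands out against its weekly median
```

Because the median of seven clean same-hour values ignores the daily cycle entirely, only the injected spike produces a non-zero deviation.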
```
# save the final aggregated data frame to use for anomaly detection in the corresponding notebook
df_CTR.to_pickle('./data/CTR_aggregated.pkl')
```
# Looking for a good ratio
Let's start with what is surely the most important simple ratio, 3/2. We need to find numbers which are within ½% of 1.5. That is they have to be between the following two numbers:
```
1.5 * (1 - 0.005)
1.5 * (1 + 0.005)
```
Let's see if we can find values between 1.4925 and about 1.5075 in $2^{i/n}$:
```
ratio = 3/2
bottom = ratio * (1 - 0.005)
top = ratio * (1 + 0.005)
for n in range(4, 17): # we'll check n values from 4 to 16
for i in range(1, n+1): # for each i value up to n
f = 2**(i/n) # calculate the frequency value
if bottom <= f <= top: # if f is in the right range
print('n =', n, 'and i =', i, 'so 2^(', i, '/', n, ')')
```
How many $n$ values contain a ratio of 1.5?
Let's try the next simplest ratio, 4:3.
```
ratio = 4/3
tolerance = 0.005
bottom = ratio * (1 - tolerance)
top = ratio * (1 + tolerance)
for n in range(4, 17):
for i in range(1, n+1):
f = 2**(i/n)
if bottom <= f <= top:
print('n =', n, 'and i =', i, 'so 2^(', i, '/', n, ')')
```
How many results this time?
If we look at the results for the ratio 5/3, there will also only be one note (this time in $n=15$, feel free to change the code above to check that).
However if we increase our tolerance to 1% then:
```
ratio = 5/3
tolerance = 0.01
bottom = ratio * (1 - tolerance)
top = ratio * (1 + tolerance)
for n in range(4, 17):
for i in range(1, n+1):
f = 2**(i/n)
if bottom <= f <= top:
print('n =', n, 'and i =', i, 'so 2^(', i, '/', n, ')')
```
Again we see that there is a desirable ratio in $n=12$.
Let's check the ratio 5/4, again with a 1% tolerance:
```
ratio = 5/4
tolerance = 0.01
bottom = ratio * (1 - tolerance)
top = ratio * (1 + tolerance)
for n in range(4, 17):
for i in range(1, n+1):
f = 2**(i/n)
if bottom <= f <= top:
print('n =', n, 'and i =', i, 'so 2^(', i, '/', n, ')')
```
And that is all of the possible ratios less than 2 (an octave) with integers that are 5 or less (simple).
Just to review, let's calculate the number of simple ratios for each value of $n$:
```
ratioList = [3/2, 4/3, 5/3, 5/4]
tolerance = 0.01
for n in range(4, 17):
iList = []
for ratio in ratioList:
bottom = ratio * (1 - tolerance)
top = ratio * (1 + tolerance)
for i in range(1, n+1):
f = 2**(i/n)
if bottom <= f <= top:
iList.append(i)
#print('n =', n, 'and i =', i)
print('n =', n, 'has', len(iList), 'simple ratio(s) with i =', iList)
```
So it looks like 12 is the winner: it has the most simple ratios of frequencies. So an octave with 12 notes will have the most pleasant-sounding intervals. We might have guessed this, since 12 has more divisors than any smaller number; the next numbers with that property are 24, 36, 48, and 60.
Music with 60 notes in an octave might be a little too complicated.
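Divisor counts up to 60 are easy to tabulate directly (a throwaway sketch, not part of the notebook's ratio search):

```python
def num_divisors(n):
    """Count the positive divisors of n by trial division."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

print(num_divisors(12))                                  # 6
print(max(num_divisors(n) for n in range(1, 12)))        # 4: no smaller number matches 12

# Record-setters ("highly composite" numbers) up to 60:
record, champions = 0, []
for n in range(1, 61):
    d = num_divisors(n)
    if d > record:
        record = d
        champions.append(n)
print(champions)  # [1, 2, 4, 6, 12, 24, 36, 48, 60]
```

Each champion beats every number before it, which is why 12 (and, much later, 60) are natural candidates for dividing an octave.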
Go on to [15.4-Problems.ipynb](./15.4-Problems.ipynb)

[Open in Colab](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/24.Improved_Entity_Resolvers_in_SparkNLP_with_sBert.ipynb)
# 24. Improved Entity Resolvers in Spark NLP with sBert
```
import sys
import json
import os
with open('license.json') as f:
license_keys = json.load(f)
import os
locals().update(license_keys)
os.environ.update(license_keys)
# Installing pyspark and spark-nlp
! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION
# Installing Spark NLP Healthcare
! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET
# Installing Spark NLP Display Library for visualization
! pip install -q spark-nlp-display
import json
import os
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
import sparknlp_jsl
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
from sparknlp.util import *
from sparknlp.pretrained import ResourceDownloader
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
print ("Spark NLP Version :", sparknlp.version())
print ("Spark NLP_JSL Version :", sparknlp_jsl.version())
spark
```
<h1>!!! Warning !!!</h1>
**If you get an error saying the Java gateway port could not be found, it is probably because the Colab memory cannot handle the model and the Spark session died. In that case, try a larger machine, or restart the kernel at the top and then come back here and rerun.**
## ICD10CM pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
icd10_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_code")\
.setDistanceFunction("EUCLIDEAN")
icd_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
icd10_resolver])
icd_lp = LightPipeline(icd_pipelineModel)
```
## ICD10CM-HCC pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
hcc_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented_billable_hcc","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_hcc_code")\
.setDistanceFunction("EUCLIDEAN")
hcc_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
hcc_resolver])
hcc_lp = LightPipeline(hcc_pipelineModel)
```
## RxNorm pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
rxnorm_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_rxnorm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("rxnorm_code")\
.setDistanceFunction("EUCLIDEAN")
rxnorm_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
rxnorm_resolver])
rxnorm_lp = LightPipeline(rxnorm_pipelineModel)
```
## NDC pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
ndc_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_ndc", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("ndc_code")\
.setDistanceFunction("EUCLIDEAN")
ndc_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
ndc_resolver])
ndc_lp = LightPipeline(ndc_pipelineModel)
```
## CPT pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
cpt_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_cpt_procedures_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("cpt_code")\
.setDistanceFunction("EUCLIDEAN")
cpt_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
cpt_resolver])
cpt_lp = LightPipeline(cpt_pipelineModel)
```
## SNOMED pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
snomed_ct_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_snomed_findings_aux_concepts","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("snomed_code")\
.setDistanceFunction("EUCLIDEAN")
snomed_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
snomed_ct_resolver])
snomed_lp = LightPipeline(snomed_pipelineModel)
```
## LOINC Pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
loinc_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_loinc_augmented", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("loinc_code")\
.setDistanceFunction("EUCLIDEAN")
loinc_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
loinc_resolver])
loinc_lp = LightPipeline(loinc_pipelineModel)
```
## HCPCS Pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
hcpcs_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_hcpcs", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("hcpcs_code")\
.setDistanceFunction("EUCLIDEAN")
hcpcs_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
hcpcs_resolver])
hcpcs_lp = LightPipeline(hcpcs_pipelineModel)
```
## UMLS Pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
umls_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_umls_major_concepts", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("umls_code")\
.setDistanceFunction("EUCLIDEAN")
umls_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
umls_resolver])
umls_lp = LightPipeline(umls_pipelineModel)
```
## MeSH pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
mesh_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_mesh","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("mesh_code")\
.setDistanceFunction("EUCLIDEAN")
mesh_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
mesh_resolver])
mesh_lp = LightPipeline(mesh_pipelineModel)
```
## HPO Pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
hpo_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_HPO", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("umls_code")\
.setDistanceFunction("EUCLIDEAN")
hpo_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
hpo_resolver])
hpo_lp = LightPipeline(hpo_pipelineModel)
```
## All the resolvers in the same pipeline
### (shown only to illustrate how it is done; this combined pipeline is not used in the rest of this notebook)
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
snomed_ct_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_snomed_findings","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("snomed_code")\
.setDistanceFunction("EUCLIDEAN")
icd10_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_code")\
.setDistanceFunction("EUCLIDEAN")
rxnorm_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_rxnorm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("rxnorm_code")\
.setDistanceFunction("EUCLIDEAN")
cpt_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_cpt_procedures_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("cpt_code")\
.setDistanceFunction("EUCLIDEAN")
hcc_resolver = SentenceEntityResolverModel.pretrained("sbert_biobertresolve_icd10cm_augmented_billable_hcc","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_hcc_code")\
.setDistanceFunction("EUCLIDEAN")
loinc_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_loinc_augmented", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("loinc_code")\
.setDistanceFunction("EUCLIDEAN")
hcpcs_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_hcpcs", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("hcpcs_code")\
.setDistanceFunction("EUCLIDEAN")
umls_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_umls_major_concepts", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("umls_code")\
.setDistanceFunction("EUCLIDEAN")
hpo_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_HPO", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("hpo_code")\
.setDistanceFunction("EUCLIDEAN")
mesh_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_mesh","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("mesh_code")\
.setDistanceFunction("EUCLIDEAN")
ndc_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_ndc", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("ndc_code")\
.setDistanceFunction("EUCLIDEAN")
resolver_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
icd10_resolver,
hcc_resolver,
rxnorm_resolver,
ndc_resolver,
cpt_resolver,
snomed_ct_resolver,
loinc_resolver,
hcpcs_resolver,
umls_resolver,
mesh_resolver,
hpo_resolver])
resolver_lp = LightPipeline(resolver_pipelineModel)
```
## Utility functions
```
import pandas as pd
pd.set_option('display.max_colwidth', 0)
def get_codes(lp, text, vocab='icd10cm_code', hcc=False, aux_label=False):
    full_light_result = lp.fullAnnotate(text)

    chunks = []
    codes = []
    begin = []
    end = []
    resolutions = []
    all_distances = []
    all_codes = []
    all_cosines = []
    all_k_aux_labels = []

    for chunk, code in zip(full_light_result[0]['ner_chunk'], full_light_result[0][vocab]):
        begin.append(chunk.begin)
        end.append(chunk.end)
        chunks.append(chunk.result)
        codes.append(code.result)
        all_codes.append(code.metadata['all_k_results'].split(':::'))
        resolutions.append(code.metadata['all_k_resolutions'].split(':::'))
        all_distances.append(code.metadata['all_k_distances'].split(':::'))
        all_cosines.append(code.metadata['all_k_cosine_distances'].split(':::'))

        if hcc or aux_label:
            try:
                all_k_aux_labels.append(code.metadata['all_k_aux_labels'].split(':::'))
            except KeyError:
                all_k_aux_labels.append([])
        else:
            all_k_aux_labels.append([])

    df = pd.DataFrame({'chunks': chunks, 'begin': begin, 'end': end, 'code': codes,
                       'all_codes': all_codes, 'resolutions': resolutions,
                       'all_k_aux_labels': all_k_aux_labels,
                       'all_distances': all_distances, 'all_cosines': all_cosines})

    if hcc:
        # aux labels are '||'-separated: billable||hcc_status||hcc_score
        df['billable'] = df['all_k_aux_labels'].apply(lambda x: [i.split('||')[0] for i in x])
        df['hcc_status'] = df['all_k_aux_labels'].apply(lambda x: [i.split('||')[1] for i in x])
        df['hcc_score'] = df['all_k_aux_labels'].apply(lambda x: [i.split('||')[2] for i in x])
    elif aux_label:
        # aux labels are '|'-separated: ground_truth|concept|aux
        df['gt'] = df['all_k_aux_labels'].apply(lambda x: [i.split('|')[0] for i in x])
        df['concept'] = df['all_k_aux_labels'].apply(lambda x: [i.split('|')[1] for i in x])
        df['aux'] = df['all_k_aux_labels'].apply(lambda x: [i.split('|')[2] for i in x])

    df = df.drop(['all_k_aux_labels'], axis=1)
    return df
```
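The resolver packs its top-k metadata into `':::'`-separated strings, and the HCC variant further packs three fields per candidate with `'||'`. A minimal sketch using a hypothetical metadata dict (the field names mirror those used by `get_codes` above; the values are invented for illustration) shows how they unpack:

```python
# Hypothetical metadata dict shaped like the resolver's chunk metadata;
# the codes and scores are invented for illustration.
metadata = {
    "all_k_results": "C67.9:::C79.11",
    "all_k_resolutions": "bladder cancer:::secondary bladder tumor",
    "all_k_aux_labels": "1||1||0.5:::1||0||0.0",
}

# Top-k candidates are ':::'-separated.
codes = metadata["all_k_results"].split(":::")
# Each aux label is further '||'-separated: billable||hcc_status||hcc_score.
aux = [c.split("||") for c in metadata["all_k_aux_labels"].split(":::")]
billable = [a[0] for a in aux]
hcc_score = [a[2] for a in aux]
print(codes, billable, hcc_score)
```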
## Getting some predictions from resolvers
```
text = 'bladder cancer'
%time get_codes (icd_lp, text, vocab='icd10cm_code')
text = 'severe stomach pain'
%time get_codes (icd_lp, text, vocab='icd10cm_code')
text = 'bladder cancer'
%time get_codes (hcc_lp, text, vocab='icd10cm_hcc_code', hcc=True)
text = 'severe stomach pain'
%time get_codes (hcc_lp, text, vocab='icd10cm_hcc_code', hcc=True)
text = 'metformin 100 mg'
%time get_codes (rxnorm_lp, text, vocab='rxnorm_code')
text = 'Advil Allergy Sinus'
%time get_codes (rxnorm_lp, text, vocab='rxnorm_code')
text = 'metformin 500 mg'
%time get_codes (ndc_lp, text, vocab='ndc_code')
text = 'aspirin 81 mg'
%time get_codes (ndc_lp, text, vocab='ndc_code')
text = 'heart surgery'
%time get_codes (cpt_lp, text, vocab='cpt_code')
text = 'ct abdomen'
%time get_codes (cpt_lp, text, vocab='cpt_code')
text = 'Left heart cath'
%time get_codes (cpt_lp, text, vocab='cpt_code')
text = 'bladder cancer'
%time get_codes (snomed_lp, text, vocab='snomed_code', aux_label=True)
text = 'schizophrenia'
%time get_codes (snomed_lp, text, vocab='snomed_code', aux_label=True)
text = 'FLT3 gene mutation analysis'
%time get_codes (loinc_lp, text, vocab='loinc_code')
text = 'Hematocrit'
%time get_codes (loinc_lp, text, vocab='loinc_code')
text = 'urine test'
%time get_codes (loinc_lp, text, vocab='loinc_code')
text = ["Breast prosthesis, mastectomy bra, with integrated breast prosthesis form, unilateral, any size, any type"]
%time get_codes (hcpcs_lp, text, vocab='hcpcs_code')
text= ["Dietary consultation for carbohydrate counting for type I diabetes."]
%time get_codes (hcpcs_lp, text, vocab='hcpcs_code')
text= ["Psychiatric consultation for alcohol withdrawal and dependance."]
%time get_codes (hcpcs_lp, text, vocab='hcpcs_code')
# medical device
text = 'X-Ray'
%time get_codes (umls_lp, text, vocab='umls_code')
# Injuries & poisoning
text = 'out-of-date food poisoning'
%time get_codes (umls_lp, text, vocab='umls_code')
# clinical findings
text = 'type two diabetes mellitus'
%time get_codes (umls_lp, text, vocab='umls_code')
text = 'chest pain '
%time get_codes (mesh_lp, text, vocab='mesh_code')
text = 'pericardectomy'
%time get_codes (mesh_lp, text, vocab='mesh_code')
text = 'bladder cancer'
%time get_codes (hpo_lp, text, vocab='umls_code')
text = 'bipolar disorder'
%time get_codes (hpo_lp, text, vocab='umls_code')
text = 'schizophrenia '
%time get_codes (hpo_lp, text, vocab='umls_code')
icd_chunks = ['advanced liver disease',
'advanced lung disease',
'basal cell carcinoma of skin',
'acute maxillary sinusitis',
'chronic kidney disease stage',
'diabetes mellitus type 2',
'lymph nodes of multiple sites',
'other chronic pain',
'severe abdominal pain',
'squamous cell carcinoma of skin',
'type 2 diabetes mellitus']
snomed_chunks= ['down syndrome', 'adenocarcinoma', 'aortic valve stenosis',
'atherosclerosis', 'atrial fibrillation',
'hypertension', 'lung cancer', 'seizure',
'squamous cell carcinoma', 'stage IIIB', 'mediastinal lymph nodes']
from IPython.display import display
for chunk in icd_chunks:
print ('>> ',chunk)
display(get_codes (icd_lp, chunk, vocab='icd10cm_code'))
for chunk in snomed_chunks:
print ('>> ',chunk)
display(get_codes (snomed_lp, chunk, vocab='snomed_code', aux_label=True))
clinical_chunks = ['bladder cancer',
'anemia in chronic kidney disease',
'castleman disease',
'congestive heart failure',
'diabetes mellitus type 2',
'lymph nodes of multiple sites',
'malignant melanoma of skin',
'malignant neoplasm of lower lobe, bronchus',
'metastatic lung cancer',
'secondary malignant neoplasm of bone',
'type 2 diabetes mellitus',
'type 2 diabetes mellitus/insulin',
'unsp malignant neoplasm of lymph node']
for chunk in clinical_chunks:
print ('>> ',chunk)
print ('icd10cm_code')
display(get_codes (hcc_lp, chunk, vocab='icd10cm_hcc_code', hcc=True))
print ('snomed_code')
display(get_codes (snomed_lp, chunk, vocab='snomed_code', aux_label=True))
```
## How to integrate resolvers with NER models in the same pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_clinical", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['PROBLEM'])
c2doc = Chunk2Doc()\
.setInputCols("ner_chunk")\
.setOutputCol("ner_chunk_doc")
sbert_embedder = BertSentenceEmbeddings\
.pretrained("sbiobert_base_cased_mli",'en','clinical/models')\
.setInputCols(["ner_chunk_doc"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)
icd10_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_code")\
.setDistanceFunction("EUCLIDEAN")
sbert_resolver_pipeline = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
c2doc,
sbert_embedder,
icd10_resolver])
data_ner = spark.createDataFrame([[""]]).toDF("text")
sbert_models = sbert_resolver_pipeline.fit(data_ner)
clinical_note = 'A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years prior to presentation and subsequent type two diabetes mellitus (T2DM), one prior episode of HTG-induced pancreatitis three years prior to presentation, associated with an acute hepatitis, and obesity with a body mass index (BMI) of 33.5 kg/m2, presented with a one-week history of polyuria, polydipsia, poor appetite, and vomiting. Two weeks prior to presentation, she was treated with a five-day course of amoxicillin for a respiratory tract infection. She was on metformin, glipizide, and dapagliflozin for T2DM and atorvastatin and gemfibrozil for HTG. She had been on dapagliflozin for six months at the time of presentation. Physical examination on presentation was significant for dry oral mucosa; significantly, her abdominal examination was benign with no tenderness, guarding, or rigidity. Pertinent laboratory findings on admission were: serum glucose 111 mg/dl, bicarbonate 18 mmol/l, anion gap 20, creatinine 0.4 mg/dL, triglycerides 508 mg/dL, total cholesterol 122 mg/dL, glycated hemoglobin (HbA1c) 10%, and venous pH 7.27. Serum lipase was normal at 43 U/L. Serum acetone levels could not be assessed as blood samples kept hemolyzing due to significant lipemia. The patient was initially admitted for starvation ketosis, as she reported poor oral intake for three days prior to admission. However, serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL, the anion gap was still elevated at 21, serum bicarbonate was 16 mmol/L, triglyceride level peaked at 2050 mg/dL, and lipase was 52 U/L. The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - the original sample was centrifuged and the chylomicron layer removed prior to analysis due to interference from turbidity caused by lipemia again. 
The patient was treated with an insulin drip for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL, within 24 hours. Her euDKA was thought to be precipitated by her respiratory tract infection in the setting of SGLT2 inhibitor use. The patient was seen by the endocrinology service and she was discharged on 40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg two times a day. It was determined that all SGLT2 inhibitors should be discontinued indefinitely. She had close follow-up with endocrinology post discharge.'
print (clinical_note)
clinical_note_df = spark.createDataFrame([[clinical_note]]).toDF("text")
icd10_result = sbert_models.transform(clinical_note_df)
import pandas as pd
import pyspark.sql.functions as F

pd.set_option('display.max_colwidth', 0)
def get_icd_codes(icd10_res):
icd10_df = icd10_res.select(F.explode(F.arrays_zip('ner_chunk.result',
'ner_chunk.metadata',
'icd10cm_code.result',
'icd10cm_code.metadata')).alias("cols")) \
.select(F.expr("cols['1']['sentence']").alias("sent_id"),
F.expr("cols['0']").alias("ner_chunk"),
F.expr("cols['1']['entity']").alias("entity"),
F.expr("cols['2']").alias("icd10_code"),
F.expr("cols['3']['all_k_results']").alias("all_codes"),
F.expr("cols['3']['all_k_resolutions']").alias("resolutions")).toPandas()
codes = []
resolutions = []
for code, resolution in zip(icd10_df['all_codes'], icd10_df['resolutions']):
codes.append(code.split(':::'))
resolutions.append(resolution.split(':::'))
icd10_df['all_codes'] = codes
icd10_df['resolutions'] = resolutions
return icd10_df
%%time
res_pd = get_icd_codes(icd10_result)
res_pd.head(15)
```
Let's apply some HTML formatting using the `sparknlp_display` library to see the results of the pipeline in a nicer layout:
```
from sparknlp_display import EntityResolverVisualizer
# with light pipeline
light_model = LightPipeline(sbert_models)
vis = EntityResolverVisualizer()
# Change color of an entity label
vis.set_label_colors({'PROBLEM':'#008080'})
light_data_icd = light_model.fullAnnotate(clinical_note)
vis.display(light_data_icd[0], 'ner_chunk', 'icd10cm_code')
```
## BertSentenceChunkEmbeddings
This annotator lets users aggregate sentence embeddings and NER chunk embeddings to get more specific and accurate resolution codes. It works by averaging the context (sentence) embedding and the chunk embedding, so the resulting vector carries contextual information. The inputs to this annotator are the context (sentence) and the NER chunks, and the output is an embedding for each chunk that can be fed to the resolver model. The `setChunkWeight` parameter controls how much influence the surrounding context has.
For more information and examples of the `BertSentenceChunkEmbeddings` annotator, see:
[24.1.Improved_Entity_Resolution_with_SentenceChunkEmbeddings.ipynb](https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/24.1.Improved_Entity_Resolution_with_SentenceChunkEmbeddings.ipynb)
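Conceptually, the averaging that `setChunkWeight` controls can be sketched as a weighted mean of two vectors. This is only the conceptual operation, not the annotator's exact internals, and the toy 4-dimensional vectors below stand in for the real 768-dimensional Bert embeddings:

```python
import numpy as np

# Toy embeddings (values made up for illustration).
sentence_emb = np.array([0.2, 0.4, 0.1, 0.3])
chunk_emb = np.array([0.8, 0.0, 0.5, 0.1])

def blend(sentence, chunk, chunk_weight=0.5):
    """Weighted average of sentence and chunk embeddings,
    mirroring what setChunkWeight controls conceptually."""
    return chunk_weight * chunk + (1.0 - chunk_weight) * sentence

balanced = blend(sentence_emb, chunk_emb, 0.5)     # equal influence
chunk_heavy = blend(sentence_emb, chunk_emb, 0.9)  # mostly the chunk
print(balanced, chunk_heavy)
```

With a higher chunk weight the result sits closer to the chunk embedding, which is why raising it makes the resolution less sensitive to the surrounding sentence.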
### ICD10CM with BertSentenceChunkEmbeddings
Let's repeat the process using the `BertSentenceChunkEmbeddings` annotator and compare the results. We will build a new pipeline that combines this annotator with `SentenceEntityResolverModel`.
```
#Get average sentence-chunk Bert embeddings
sentence_chunk_embeddings = BertSentenceChunkEmbeddings.pretrained("sbiobert_base_cased_mli", "en", "clinical/models")\
.setInputCols(["sentence", "ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setCaseSensitive(False)\
.setChunkWeight(0.5) #default : 0.5
icd10_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_code")\
.setDistanceFunction("EUCLIDEAN")
resolver_pipeline_SCE = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
sentence_chunk_embeddings,
icd10_resolver])
empty_data = spark.createDataFrame([['']]).toDF("text")
model_SCE = resolver_pipeline_SCE.fit(empty_data)
icd10_result_SCE = model_SCE.transform(clinical_note_df)
%%time
res_SCE_pd = get_icd_codes(icd10_result_SCE)
res_SCE_pd.head(15)
icd10_SCE_lp = LightPipeline(model_SCE)
light_result = icd10_SCE_lp.fullAnnotate(clinical_note)
visualiser = EntityResolverVisualizer()
# Change color of an entity label
visualiser.set_label_colors({'PROBLEM':'#008080'})
visualiser.display(light_result[0], 'ner_chunk', 'icd10cm_code')
```
**Let's compare the results we got from these two methods.**
```
sentence_df = icd10_result.select(F.explode(F.arrays_zip('sentence.metadata', 'sentence.result')).alias("cols")) \
.select( F.expr("cols['0']['sentence']").alias("sent_id"),
F.expr("cols['1']").alias("sentence_all")).toPandas()
comparison_df = pd.merge(res_pd.loc[:,'sent_id':'resolutions'],res_SCE_pd.loc[:,'sent_id':'resolutions'], on=['sent_id',"ner_chunk", "entity"], how='inner')
comparison_df.columns=['sent_id','ner_chunk', 'entity', 'icd10_code', 'all_codes', 'resolutions', 'icd10_code_SCE', 'all_codes_SCE', 'resolutions_SCE']
comparison_df = pd.merge(sentence_df, comparison_df,on="sent_id").drop('sent_id', axis=1)
comparison_df.head(15)
```
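To quickly spot where the two approaches diverge, one can merge the two result tables and keep only the rows whose codes differ. A minimal sketch with hypothetical miniature result tables (the column names mirror the real output; the chunks and codes below are illustrative):

```python
import pandas as pd

# Hypothetical miniature versions of the two result tables produced above;
# only the columns needed for the comparison are included.
res_standard = pd.DataFrame({
    "ner_chunk": ["gestational diabetes", "obesity", "vomiting"],
    "icd10_code": ["O24.4", "E66.9", "R11.10"],
})
res_sce = pd.DataFrame({
    "ner_chunk": ["gestational diabetes", "obesity", "vomiting"],
    "icd10_code": ["O24.419", "E66.9", "R11.10"],
})

# Merge on the chunk text and keep only rows where the two resolvers disagree.
merged = res_standard.merge(res_sce, on="ner_chunk", suffixes=("_std", "_sce"))
diff = merged[merged["icd10_code_std"] != merged["icd10_code_sce"]]
print(diff)
```

A similar merge can be applied to the full `res_pd` and `res_SCE_pd` tables once the key columns are aligned.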
# **TUTORIAL 1: A FIRST MACHINE LEARNING PROJECT**
# 1. RUN PYTHON AND CHECK VERSIONS
We import the libraries needed to run all of the code.
- SYS - Provides access to functions and objects of the interpreter
- SCIPY - Library of mathematical tools and algorithms for Python
- NUMPY - Adds support for vectors and matrices, along with a library of functions to operate on them
- MATPLOTLIB - Generates plots from data held in lists or arrays
- PANDAS - Provides data structures and operations for manipulating numerical tables and time series
- SKLEARN - Machine learning library providing algorithms for classification, regression, clustering, and more
```
import sys
print('Python: {}'.format(sys.version))
import scipy
print('scipy: {}'.format(scipy.__version__))
import numpy
print('numpy: {}'.format(numpy.__version__))
import matplotlib
print('matplotlib: {}'.format(matplotlib.__version__))
import pandas
print('pandas: {}'.format(pandas.__version__))
import sklearn
print('sklearn: {}'.format(sklearn.__version__))
```
# 2. LOADING DATA AND IMPORTING LIBRARIES
```
# Load libraries
from pandas import read_csv
from pandas.plotting import scatter_matrix
from matplotlib import pyplot
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
# Load the dataset
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)
# Print the number of rows and columns (a tuple)
print(dataset.shape)
# Print the first records
print(dataset.head(3))
```
# 3. STATISTICAL SUMMARY
```
# Summary statistics
print(dataset.describe())
```
# 4. CLASS DISTRIBUTION
```
# Number of records per class, grouped by class
print(dataset.groupby('class').size())
```
# 5. UNIVARIATE PLOTS
```
# Draw 4 box plots, one each for sepal/petal length and width
dataset.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False)
# Histograms of sepal and petal length and width
dataset.hist()
pyplot.show()
```
# 6. MULTIVARIATE PLOTS
```
# Scatter matrix: very useful for seeing relationships between quantitative variables
scatter_matrix(dataset)
pyplot.show()
```
# 7. ALGORITHM EVALUATION
```
# Split the data: 80% is used to train, evaluate, and compare the
# models, while the remaining 20% is held back as a validation set
array = dataset.values
X = array[:,0:4]
y = array[:,4]
X_train, X_validation, Y_train, Y_validation = train_test_split(X, y, test_size=0.20, random_state=1)
```
# 8. BUILDING MODELS
The following example evaluates 6 different algorithms:
- Logistic Regression
- Linear Discriminant Analysis
- K-Nearest Neighbors
- Classification and Regression Trees
- Gaussian Naive Bayes
- Support Vector Machines
```
# Add each of the algorithms to the 'models' list
models = []
models.append(('LR', LogisticRegression(solver='liblinear', multi_class='ovr')))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC(gamma='auto')))
# Evaluate each model in turn
results = []
names = []
for name, model in models:
kfold = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy')
results.append(cv_results)
names.append(name)
print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std()))
```
# 9. SELECTING THE BEST MODEL
Now that we have results for each model, we can compare them and pick the most accurate one to make predictions with.
```
# Compare the models and select the most accurate one
pyplot.boxplot(results, labels=names)
pyplot.title('Algorithm Comparison')
pyplot.show()
```
# 10. MAKING PREDICTIONS
```
# We can make predictions on our validation set. We instantiate a
# Support Vector Classifier, fit it to the training data, and store
# its predictions on the validation set in a variable.
model = SVC(gamma='auto')
model.fit(X_train, Y_train)
predictions = model.predict(X_validation)
```
# 11. EVALUATING PREDICTIONS
We evaluate the predictions by comparing them to the expected results in the validation set: we compute the classification accuracy, a confusion matrix, and a classification report.
- The accuracy is about 96% on the validation set
- The confusion matrix indicates the errors made
- The report breaks down each class by precision, recall, f1-score, and support
```
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))
```
# **TUTORIAL 2: HOW TO MAKE PREDICTIONS WITH SCIKIT-LEARN**
Before making predictions, we need a finalized model. In the following example we build one using the tools scikit-learn provides.
- LogisticRegression: Regression model for binomial (binary) response variables. Useful for modelling the probability of an event given other factors
- make_blobs: Generates clustered synthetic data; this data is used to evaluate the performance of machine learning models
- n_features: The number of columns (features) to generate
- centers: The number of classes (cluster centers) to generate
- n_samples: The total number of points, divided among the clusters
- random_state: Controls whether each run of the program uses the same generated dataset
Finally, note that a final model can make two kinds of predictions: class predictions and probability predictions.
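The contrast between the two kinds of predictions shows up directly in the shapes of the outputs. A minimal sketch, using the same make_blobs/LogisticRegression setup as the examples in this tutorial:

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Same synthetic setup as the surrounding examples
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
model = LogisticRegression().fit(X, y)

labels = model.predict(X[:3])        # class predictions: one label per instance
probs = model.predict_proba(X[:3])   # probability predictions: one column per class
print(labels.shape, probs.shape)     # (3,) and (3, 2); each row of probs sums to 1
```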
# 1. CLASS PREDICTIONS
# PREDICTING MULTIPLE CLASSES
We can predict several instances at once, generating them with make_blobs.
```
# Example of training a final classification model
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs
# Generate a 2d classification dataset
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
# Fit the final model
model = LogisticRegression()
model.fit(X, y)
# New instances where we do not know the answer; '_' as the second name
# indicates that we want to ignore the labels returned by make_blobs
Xnew, _ = make_blobs(n_samples=3, centers=2, n_features=2, random_state=1)
# Make predictions for Xnew with the predict() function
ynew = model.predict(Xnew)
# Show the predictions for the 3 generated instances
for i in range(len(Xnew)):
print("X=%s, Predicted=%s" % (Xnew[i], ynew[i]))
```
# PREDICTING A SINGLE CLASS
If we only have one instance, we can define it in the array 'Xnew' and pass it to the predict() function.
```
# Using scikit-learn
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs
# Generate a 2d classification dataset
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
# Fit the final model
model = LogisticRegression()
model.fit(X, y)
# Define a single instance
Xnew = [[-0.79415228, 2.10495117]]
# Make the prediction and show the result
ynew = model.predict(Xnew)
print("X=%s, Predicted=%s" % (Xnew[0], ynew[0]))
```
# 2. PROBABILITY PREDICTIONS
Unlike a class prediction, a probability prediction returns, for a given instance, the probability of each class as a value between 0 and 1. The following example makes a probability prediction for each instance in the Xnew array.
```
# Using scikit-learn
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs
# Generate a 2d classification dataset
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
# Fit the final model
model = LogisticRegression()
model.fit(X, y)
# New instances where we do not know the answer
Xnew, _ = make_blobs(n_samples=3, centers=2, n_features=2, random_state=1)
# Make predictions with the predict_proba() function
ynew = model.predict_proba(Xnew)
# Show the probability predictions for the instances;
# each prediction is returned as values between 0 and 1.
for i in range(len(Xnew)):
print("X=%s, Predicted=%s" % (Xnew[i], ynew[i]))
```
# HOW TO MAKE PREDICTIONS WITH REGRESSION MODELS
In the following example, we use a LinearRegression model to predict quantities with a finalized regression model by calling predict(). Unlike make_blobs, make_regression takes a "noise" parameter, which controls how much the points scatter around the regression line.
```
# Using scikit-learn
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression
# Generate a regression dataset
X, y = make_regression(n_samples=100, n_features=2, noise=0.1, random_state=1)
# Fit the final model
model = LinearRegression()
model.fit(X, y)
```
# MULTIPLE REGRESSION PREDICTIONS
The following example shows how to make regression predictions for instances whose expected outputs are unknown.
```
# Using scikit-learn
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression
# Generate a regression dataset
X, y = make_regression(n_samples=100, n_features=2, noise=0.1)
# Fit the final model
model = LinearRegression()
model.fit(X, y)
# New instances whose answers are unknown
Xnew, _ = make_regression(n_samples=3, n_features=2, noise=0.1, random_state=1)
# Make a prediction
ynew = model.predict(Xnew)
# Print the results
for i in range(len(Xnew)):
print("X=%s, Predicted=%s" % (Xnew[i], ynew[i]))
```
# SINGLE REGRESSION PREDICTION
As with class predictions, we can also make a regression prediction for a single instance defined in an array.
```
# Using scikit-learn
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression
# Generate a regression dataset
X, y = make_regression(n_samples=100, n_features=2, noise=0.1)
# Fit the final model
model = LinearRegression()
model.fit(X, y)
# Define a single instance in the variable Xnew
Xnew = [[-1.07296862, -0.52817175]]
# Make a prediction
ynew = model.predict(Xnew)
# Print the result
print("X=%s, Predicted=%s" % (Xnew[0], ynew[0]))
```
# **TUTORIAL 3: SAVING AND LOADING MACHINE LEARNING MODELS**
# Save your model with Pickle
Pickle is the standard way to serialize objects in Python. It converts a Python class or object into a serialized byte stream that can be written to a file and restored later.
```
# Save a model using Pickle
# Import pandas, the scikit-learn resources, and pickle
import pandas
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
import pickle
# Declare the URL of the data file and the column names, then load the
# file into a dataframe
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
# Select the columns used as features (X) and target (Y)
X = array[:,0:8]
Y = array[:,8]
# Set the test-set size and a random seed
test_size = 0.33
seed = 7
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=test_size, random_state=seed)
# Fit the model on the training set
model = LogisticRegression()
model.fit(X_train, Y_train)
# Save the model to disk with Pickle using the dump() and open() functions
filename = 'finalized_model.sav'
pickle.dump(model, open(filename, 'wb'))
# Load the model from disk with Pickle
loaded_model = pickle.load(open(filename, 'rb'))
result = loaded_model.score(X_test, Y_test)
print(result)
```
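As a minimal sanity check (separate from the tutorial's dataset workflow above), pickle round-trips any picklable Python object in memory via a binary byte stream:

```python
import pickle

# Hypothetical in-memory object standing in for a trained model's state.
original = {"coef": [0.5, -1.2], "intercept": 3.0}
blob = pickle.dumps(original)   # a bytes object, not a text format
restored = pickle.loads(blob)
print(restored == original)
```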
# Save Your Model with joblib
```
# Import the pandas library, the Scikit-Learn resources, and joblib
import pandas
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
import joblib
# Declare a URL variable pointing to the data file and an array of column names,
# then load the file into a dataframe variable.
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
# Select the columns we want for X and Y
X = array[:,0:8]
Y = array[:,8]
# Set the test-set size and a random seed
test_size = 0.33
seed = 7
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=test_size, random_state=seed)
# Fit the model on the training set
model = LogisticRegression()
model.fit(X_train, Y_train)
# Save the model to disk with joblib.
# Unlike Pickle, we only need to pass the model and the filename
# to the dump() function, without calling open().
filename = 'finalized_model.sav'
joblib.dump(model, filename)
# Load the model from disk with joblib
loaded_model = joblib.load(filename)
result = loaded_model.score(X_test, Y_test)
print(result)
```
# Math - Algebra
[](https://colab.research.google.com/github/rhennig/EMA6938/blob/main/Notebooks/4.Math_Algebra.ipynb)
(Based on https://online.stat.psu.edu/stat462/node/132/ and https://www.geeksforgeeks.org/ml-normal-equation-in-linear-regression)
Linear algebra is the branch of mathematics concerning linear equations,
$$
a_{1}x_{1}+\cdots +a_{n}x_{n}=b,
$$
linear maps,
$$
(x_{1},\ldots ,x_{n})\mapsto a_{1}x_{1}+\cdots +a_{n}x_{n},
$$
and their representations in vector spaces and through matrices. Linear algebra is a key foundation to the field of machine learning, from the notations used to describe the equations and operation of algorithms to the efficient implementation of algorithms in code.
## 1. Motivational Example of Linear Regression
We first derive the linear regression model in matrix form. In linear regression, we fit a linear function to a dataset of $n$ data points $(x_i, y_i)$. The linear model is given by
$$
y(x) = \beta_0 + \beta_1 x.
$$
Linear regression describes the data by minimizing the least-squares deviation between the data and the linear model:
$$
y_i = \beta_0 + \beta_1 x_i + \epsilon _i, \, \text{for }i = 1, \dots , n.
$$
Here the $\epsilon_i$ describe the deviations between the model and the data and are assumed to be Gaussian distributed.
Writing out the set of equations for $i = 1, \dots, n$, we obtain $n$ equations:
$$
y_1 = \beta_0 + \beta_1 x_1 + \epsilon _1 \\
y_2 = \beta_0 + \beta_1 x_2 + \epsilon _2 \\
\vdots \\
y_n = \beta_0 + \beta_1 x_n + \epsilon _n \\
$$
We can formulate the above simple linear regression function in matrix notation:
$$
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} =
\begin{bmatrix}
1 & x_1 \\
1 & x_2 \\
\vdots & \vdots \\
1 & x_n
\end{bmatrix}
\begin{bmatrix}
\beta_0 \\
\beta_1
\end{bmatrix} +
\begin{bmatrix}
\epsilon_1 \\
\epsilon_2 \\
\vdots \\
\epsilon_n
\end{bmatrix}.
$$
We can write this matrix equation in a more compact form
$$
{\bf Y} = {\bf X} {\bf \beta} + {\bf \epsilon},
$$
where
- **X** is an n × 2 matrix,
- **Y** is an n × 1 column vector,
- **β** is a 2 × 1 column vector,
- **ε** is an n × 1 column vector.
The matrix **X** and vector **β** are multiplied together using the techniques of matrix multiplication.
And, the vector **Xβ** is added to the vector **ε** using the techniques of matrix addition.
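As a quick illustration with a hypothetical four-point data set, the design matrix **X** (a column of 1's next to the x values) can be assembled in numpy:

```python
import numpy as np

# Hypothetical observations x_i, n = 4.
x = np.array([1.0, 2.0, 3.0, 4.0])

# The column of 1's multiplies beta_0; the x column multiplies beta_1.
X = np.column_stack([np.ones_like(x), x])
print(X.shape)  # an n x 2 matrix, as in the regression setup above
```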
Let's quickly review matrix algebra, the subject of mathematics that deals with operations of matrices, vectors, and tensors.
## 2. Definition of a matrix
An r × c matrix is a rectangular array of symbols or numbers arranged in r rows and c columns. A matrix is frequently denoted by a capital letter in boldface type.
Here are three examples of simple matrices. The matrix **A** is a 2 × 2 square matrix containing numbers:
$$
{\bf A} = \begin{bmatrix} 23 & 9 \\ 20 & 7 \end{bmatrix}.
$$
The matrix **B** is a 5 × 3 matrix containing numbers:
$$
{\bf B} = \begin{bmatrix}
1 & 40 & 1.9 \\
1 & 65 & 2.5 \\
1 & 71 & 2.8 \\
1 & 80 & 3.4 \\
1 & 92 & 3.1
\end{bmatrix}.
$$
And, the matrix **X** is a 5 × 3 matrix containing a column of 1's and two columns of various x variables:
$$
{\bf X} = \begin{bmatrix}
1 & x_{11} & x_{12} \\
1 & x_{21} & x_{22} \\
1 & x_{31} & x_{32} \\
1 & x_{41} & x_{42} \\
1 & x_{51} & x_{52}
\end{bmatrix}.
$$
## 3. Definition of a Vector and a Scalar
A column vector is an r × 1 matrix, that is, a matrix with only one column. A vector is almost always denoted by a single lowercase letter in boldface type. The following vector **s** is a 3 × 1 column vector containing numbers:
$$
{\bf s} = \begin{bmatrix} 30 \\ 4 \\ 2013 \end{bmatrix}.
$$
A row vector is an 1 × c matrix, that is, a matrix with only one row. The vector **m** is a 1 × 4 row vector containing numbers:
$$
{\bf m} = \begin{bmatrix} 23 & 9 & 20 & 7 \end{bmatrix}.
$$
A 1 × 1 "matrix" is called a scalar, but it's just an ordinary number, such as 24 or $\pi$.
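In numpy these shapes can be made explicit with 2-D arrays (a small sketch using the vectors above):

```python
import numpy as np

s = np.array([[30], [4], [2013]])   # 3 x 1 column vector
m = np.array([[23, 9, 20, 7]])      # 1 x 4 row vector
print(s.shape, m.shape)             # (3, 1) (1, 4)

# A 1 x 1 "matrix" carries a single scalar value.
scalar = np.array([[24]])
print(scalar.item())
```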
## 4. Matrix Multiplication
Recall the term **Xβ**, which appears in the regression function:
$$
{\bf Y} = {\bf X} {\bf \beta} + {\bf \epsilon}.
$$
This is an example of a matrix multiplication. Since matrices have differing numbers of rows and columns, there are some constraints when multiplying matrices together. Two matrices can be multiplied together only if the **number of columns of the first matrix equals the number of rows of the second matrix**.
When you multiply the two matrices:
- the number of rows of the resulting matrix equals the number of rows of the first matrix, and
- the number of columns of the resulting matrix equals the number of columns of the second matrix.
For example, if **A** is a 2 × 3 matrix and **B** is a 3 × 5 matrix, then the matrix multiplication **AB** is possible. The resulting matrix **C** = **AB** has 2 rows and 5 columns. That is, **C** is a 2 × 5 matrix. Note that the matrix multiplication **BA** is not possible.
For another example, if **X** is an n × (k+1) matrix and **β** is a (k+1) × 1 column vector, then the matrix multiplication **Xβ** is possible. The resulting matrix **Xβ** has n rows and 1 column. That is, **Xβ** is an n × 1 column vector.
Now that we know when we can multiply two matrices together, here is the basic rule for multiplying **A** by **B** to get **C** = **AB**:
The entry in the i$^\mathrm{th}$ row and j$^\mathrm{th}$ column of **C** is the inner product — that is, element-by-element products added together — of the i$^\mathrm{th}$ row of **A** with the j$^\mathrm{th}$ column of **B**.
For example:
$$
A = \begin{bmatrix} 1 & 9 & 7 \\ 8 & 1 & 2 \end{bmatrix}
$$
$$
B = \begin{bmatrix} 3 & 2 & 1 & 5 \\ 5 & 4 & 7 & 3 \\ 6 & 9 & 6 & 8 \end{bmatrix}
$$
$$
C = A B =
\begin{bmatrix} 1 & 9 & 7 \\ 8 & 1 & 2 \end{bmatrix}
\begin{bmatrix} 3 & 2 & 1 & 5 \\ 5 & 4 & 7 & 3 \\ 6 & 9 & 6 & 8 \end{bmatrix}
= \begin{bmatrix} 90 & 101 & 106 & 88 \\ 41 & 38 & 27 & 59 \end{bmatrix}
$$
```
# Check the matrix multiplication result in Python using the numpy function matmul
import numpy as np
A = np.array([[1, 9, 7], [8, 1, 2]])
B = np.array([[3, 2, 1, 5], [5, 4, 7, 3], [6, 9, 6, 8]])
print("A = \n", A)
print("B = \n", B)
# matmul multiplies two matrices
# Remember that the "*" operator multiplies two matrices element by element,
# which is not a matrix multiplication
C = np.matmul(A, B)
print("AB = \n", C)
```
That is, the entry in the first row and first column of **C**, denoted $c_{11}$, is obtained by:
$$
c_{11} = 1(3) + 9(5) +7(6) = 90
$$
And, the entry in the first row and second column of **C**, denoted $c_{12}$, is obtained by:
$$
c_{12} = 1(2) + 9(4) + 7(9) = 101
$$
And, the entry in the second row and third column of **C**, denoted $c_{23}$, is obtained by:
$$
c_{23} = 8(1) + 1(7) + 2(6) = 27
$$
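The row-times-column rule can be checked entry by entry in numpy; each $c_{ij}$ below is the inner product of row $i$ of **A** with column $j$ of **B**:

```python
import numpy as np

A = np.array([[1, 9, 7], [8, 1, 2]])
B = np.array([[3, 2, 1, 5], [5, 4, 7, 3], [6, 9, 6, 8]])

# Build C one inner product at a time: c_ij = (row i of A) . (column j of B).
C = np.array([[A[i, :] @ B[:, j] for j in range(B.shape[1])]
              for i in range(A.shape[0])])
print(C)
# The entry-by-entry construction agrees with numpy's matrix multiplication.
print(np.array_equal(C, A @ B))
```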
## 5. Matrix Addition
Remember the expression **Xβ** + **ε** that appears in the regression function:
$$
{\bf Y} = \bf{X \beta} + {\bf \epsilon}
$$
is an example of matrix addition. Again, there are some restrictions — you cannot just add any two matrices together. Two matrices can be added together only if they have the same number of rows and columns. Then, to add two matrices, simply add the corresponding elements of the two matrices.
For example:
$$
{\bf C} = {\bf A} + {\bf B} =
\begin{bmatrix} 2 & 1 & 3 \\ 4 & 8 & 5 \\ -1 & 7 & 6 \end{bmatrix}
+
\begin{bmatrix} 7 & 9 & 2 \\ 5 & -3 & 1 \\ 2 & 1 & 8 \end{bmatrix}
=
\begin{bmatrix} 9 & 10 & 5 \\ 9 & 5 & 6 \\ 1 & 8 & 14\end{bmatrix}
$$
```
# Check the matrix addition result in Python using the numpy addition operation
import numpy as np
A = np.array([[2, 1, 3], [4, 8, 5], [-1, 7, 6]])
B = np.array([[7, 9, 2], [5, -3, 1], [2, 1, 8]])
print("A = \n", A)
print("B = \n", B)
C = A + B
print("A+B = \n", C)
```
## 6. Least Squares Estimates of Linear Regression Coefficients
As we will discuss later, minimizing the mean squared error between the model predictions and the data leads to the following equation for the coefficient vector ${\bf \beta}$:
$$
{\bf \beta} = \begin{bmatrix} \beta_0 \\ \vdots \\ \beta_k \end{bmatrix}
= ({\bf X}' {\bf X})^{-1} {\bf X}' {\bf Y},
$$
where
- $({\bf X}' {\bf X})^{-1}$ is the inverse of the ${\bf X}' {\bf X}$ matrix, and
- ${\bf X}'$ is the transpose of the ${\bf X}$ matrix.
Let's remind ourselves of the transpose and inverse of a matrix.
## 7. Transpose of a Matrix
The transpose of a matrix **A** is a matrix, denoted as $\bf A'$ or ${\bf A}^T$, whose rows are the columns of ${\bf A}$ and whose columns are the rows of ${\bf A}$ — all in the same order.
For example, the transpose of the 3 × 2 matrix A:
$$
{\bf A} = \begin{bmatrix} 1 & 4 \\ 7 & 5 \\ 8 & 9 \end{bmatrix}
$$
is the 2 × 3 matrix $\bf A'$:
$$
{\bf A}' = {\bf A}^T = \begin{bmatrix} 1 & 7 & 8 \\ 4 & 5 & 9 \end{bmatrix}
$$
The ${\bf X}$ matrix in the simple linear regression setting is:
$$
{\bf X} = \begin{bmatrix}
1 & x_1 \\
1 & x_2 \\
\vdots & \vdots \\
1 & x_n
\end{bmatrix}.
$$
Hence, the ${\bf X}'{\bf X}$ matrix in the linear regression is:
$$
{\bf X}'{\bf X} = \begin{bmatrix}
1 & 1 & \dots & 1\\
x_1 & x_2 & \dots & x_n
\end{bmatrix}
\begin{bmatrix}
1 & x_1 \\
1 & x_2 \\
\vdots & \vdots \\
1 & x_n
\end{bmatrix}
= \begin{bmatrix}
1 & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & \sum_{i=1}^n x_i^2
\end{bmatrix}.
$$
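This closed form for ${\bf X}'{\bf X}$ can be verified numerically on a small hypothetical x vector:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical predictor values, n = 5
X = np.column_stack([np.ones_like(x), x])

XtX = X.T @ X
# Closed form derived above: [[n, sum x_i], [sum x_i, sum x_i^2]].
expected = np.array([[len(x), x.sum()], [x.sum(), (x**2).sum()]])
print(np.allclose(XtX, expected))
```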
## 8. The Inverse of a Matrix
The inverse ${\bf A}^{-1}$ of a **square matrix A** is the unique matrix such that:
$$
{\bf A}^{-1} {\bf A} = {\bf I} = {\bf A} {\bf A}^{-1}.
$$
That is, the inverse of ${\bf A}$ is the matrix ${\bf A}^{-1}$ that you multiply ${\bf A}$ by to obtain the identity matrix ${\bf I}$. Note that the inverse only exists for square matrices.
Now, finding inverses, particularly for large matrices, is a complicated task. We will use numpy to calculate the inverses.
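For example, numpy's `np.linalg.inv` recovers the inverse of the 2 × 2 matrix **A** from Section 2, and both products with **A** return the identity (up to floating-point error):

```python
import numpy as np

A = np.array([[23.0, 9.0], [20.0, 7.0]])
Ainv = np.linalg.inv(A)
# Both A^{-1} A and A A^{-1} recover the 2 x 2 identity matrix.
print(np.allclose(Ainv @ A, np.eye(2)))
print(np.allclose(A @ Ainv, np.eye(2)))
```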
## 9. Solution for Linear Regression
We will generate a synthetic data set for linear regression using the Python library scikit-learn.
```
# import required modules
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
# Create data set
x,y = make_regression(n_samples=100,n_features=1,n_informative=1,noise = 10,random_state=10)
# Plot the generated data set
plt.figure(figsize=(8, 6))
plt.rcParams['font.size'] = '16'
plt.scatter(x, y, s = 30, marker = 'o')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Scatter Data', fontsize=20)
plt.show()
# Convert the vector of y variables into a column vector
y=y.reshape(100,1)
# Create matrix X by adding x0=1 to each instance of x and taking the transpose
X = np.array([np.ones(len(x)), x.flatten()]).T
print("Matrix X =\n", X[1:5, :], "\n ...\n")
# Determining the coefficients of linear regression
# by calculating the inverse of (X'X) and multiplying it by X'Y.
XTX = np.matmul(X.T, X)
print("Matrix X'X =\n", XTX, "\n")
XTXinv = np.linalg.inv(XTX)
print("Inverse of (X'X) =\n", XTXinv, "\n")
beta = np.matmul(XTXinv, np.matmul(X.T, y))
# Display best values obtained.
print("Regression coefficients\n β0 = ", beta[0,0], "\n β1 = ", beta[1,0])
# Predict the values for given data instance.
x_sample=np.array([[-2.5],[3]])
x_sample_new=np.array([np.ones(len(x_sample)),x_sample.flatten()]).T
y_predicted = np.matmul(x_sample_new, beta)
# Plot the generated data set
plt.figure(figsize=(8, 6))
plt.rcParams['font.size'] = '16'
plt.scatter(x, y, s = 30, marker = 'o')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Scatter Data', fontsize=20)
plt.plot(x_sample, y_predicted, color='orange')
plt.show()
# Verification using scikit learn function for linear regression
from sklearn.linear_model import LinearRegression
lr = LinearRegression() # Object
lr.fit(x, y) # Fit method.
# Print obtained theta values.
print("β0 = ", lr.intercept_[0], "\nβ1 = ", lr.coef_[0,0])
```
## 10. Practice
The hat matrix converts values from the observed variable $y_i$ into the estimated values $\hat y$ obtained with the least squares method. The hat matrix, $\bf H$, is given by
$$
{\bf H} = {\bf X} ({\bf X}' {\bf X})^{-1} {\bf X}'
$$
Calculate the hat matrix, $\bf H$, and show that you obtain the predicted $y$-values by creating a plot.
```
# Calculate the hat matrix
# Apply the hat matrix to the y-values to generate the y predictions
# Plot the predicted and original y values vs. the x values
```
Knowing the hat matrix, $\bf H$, we can also express the $R^2$ value for the linear regression using a matrix equation:
$$
R^2 = 1 - \frac{{\bf y}'({\bf 1} - {\bf H}){\bf y}}{{\bf y}'({\bf 1} - {\bf M}){\bf y}}
$$
where $\bf 1$ is the identity matrix,
$$
{\bf M} = {\bf l}({\bf l}'{\bf l})^{-1}{\bf l}',
$$
and ${\bf l}$ is a column vector of 1's.
Calculate the $R^2$ value using the above matrix form of the equations.
```
# Create a column vector of 1's with length 100
# Calculate the matrix M
# Calculate R2
```
# TensorFlow Fold Quick Start
TensorFlow Fold is a library for turning complicated Python data structures into TensorFlow Tensors.
```
# boilerplate
import random
import tensorflow as tf
sess = tf.InteractiveSession()
import tensorflow_fold as td
```
The basic elements of Fold are *blocks*. We'll start with some blocks that work on simple data types.
```
scalar_block = td.Scalar()
vector3_block = td.Vector(3)
```
Blocks are functions with associated input and output types.
```
def block_info(block):
    print("%s: %s -> %s" % (block, block.input_type, block.output_type))
block_info(scalar_block)
block_info(vector3_block)
```
We can use `eval()` to see what a block does with its input:
```
scalar_block.eval(42)
vector3_block.eval([1,2,3])
```
Not very exciting. We can compose simple blocks together with `Record`, like so:
```
record_block = td.Record({'foo': scalar_block, 'bar': vector3_block})
block_info(record_block)
```
We can see that Fold's type system is a bit richer than vanilla TF; we have tuple types! Running a record block does what you'd expect:
```
record_block.eval({'foo': 1, 'bar': [5, 7, 9]})
```
One useful thing you can do with blocks is wire them up to create pipelines using the `>>` operator, which performs function composition. For example, we can take our two tuple tensors and compose it with `Concat`, like so:
```
record2vec_block = record_block >> td.Concat()
record2vec_block.eval({'foo': 1, 'bar': [5, 7, 9]})
```
Note that because Python dicts are unordered, Fold always sorts the outputs of a record block by dictionary key. If you want to preserve order you can construct a Record block from an OrderedDict.
The whole point of Fold is to get your data into TensorFlow; the `Function` block lets you convert a TITO (Tensors In, Tensors Out) function to a block:
```
negative_block = record2vec_block >> td.Function(tf.negative)
negative_block.eval({'foo': 1, 'bar': [5, 7, 9]})
```
This is all very cute, but where's the beef? Things start to get interesting when our inputs contain sequences of indeterminate length. The `Map` block comes in handy here:
```
map_scalars_block = td.Map(td.Scalar())
```
There's no TF type for sequences of indeterminate length, but Fold has one:
```
block_info(map_scalars_block)
```
Right, but you've done the TF [RNN Tutorial](https://www.tensorflow.org/tutorials/recurrent/) and even poked at [seq-to-seq](https://www.tensorflow.org/tutorials/seq2seq/). You're a wizard with [dynamic rnns](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). What does Fold offer?
Well, how about jagged arrays?
```
jagged_block = td.Map(td.Map(td.Scalar()))
block_info(jagged_block)
```
The Fold type system is fully compositional; any block you can create can be composed with `Map` to create a sequence, or `Record` to create a tuple, or both to create sequences of tuples or tuples of sequences:
```
seq_of_tuples_block = td.Map(td.Record({'foo': td.Scalar(), 'bar': td.Scalar()}))
seq_of_tuples_block.eval([{'foo': 1, 'bar': 2}, {'foo': 3, 'bar': 4}])
tuple_of_seqs_block = td.Record({'foo': td.Map(td.Scalar()), 'bar': td.Map(td.Scalar())})
tuple_of_seqs_block.eval({'foo': range(3), 'bar': range(7)})
```
Most of the time, you'll eventually want to get one or more tensors out of your sequence, for wiring up to your particular learning task. Fold has a bunch of built-in reduction functions for this that do more or less what you'd expect:
```
((td.Map(td.Scalar()) >> td.Sum()).eval(range(10)),
(td.Map(td.Scalar()) >> td.Min()).eval(range(10)),
(td.Map(td.Scalar()) >> td.Max()).eval(range(10)))
```
The general form of such functions is `Reduce`:
```
(td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.multiply))).eval(range(1,10))
```
If the order of operations is important, you should use `Fold` instead of `Reduce` (but if you can use `Reduce` you should, because it will be faster):
```
((td.Map(td.Scalar()) >> td.Fold(td.Function(tf.divide), tf.ones([]))).eval(range(1,5)),
(td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.divide), tf.ones([]))).eval(range(1,5))) # bad, not associative!
```
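The same order-sensitivity can be seen in plain Python with `functools.reduce` (a sketch independent of Fold): a strict left fold over division gives a different answer from a balanced, tree-shaped pairing like the one `Reduce` uses:

```python
from functools import reduce

xs = [1.0, 2.0, 3.0, 4.0]
# Left fold: (((1.0 / 1) / 2) / 3) / 4
left_fold = reduce(lambda acc, x: acc / x, xs, 1.0)
# Balanced tree pairing: (1 / 2) / (3 / 4)
tree = (xs[0] / xs[1]) / (xs[2] / xs[3])
# The two groupings disagree because division is not associative.
print(left_fold, tree)
```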
Now, let's do some learning! This is the part where "magic" happens; if you want a deeper understanding of what's happening here you might want to jump right to our more formal [blocks tutorial](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/blocks.md) or learn more about [running blocks in TensorFlow](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/running.md)
```
def reduce_net_block():
    net_block = td.Concat() >> td.FC(20) >> td.FC(1, activation=None) >> td.Function(lambda xs: tf.squeeze(xs, axis=1))
    return td.Map(td.Scalar()) >> td.Reduce(net_block)
```
The `reduce_net_block` function creates a block (`net_block`) that contains a two-layer fully connected (FC) network that takes a pair of scalar tensors as input and produces a scalar tensor as output. This network gets applied in a binary tree to reduce a sequence of scalar tensors to a single scalar tensor.
One thing to notice here is that we are calling [`tf.squeeze`](https://www.tensorflow.org/versions/r1.0/api_docs/python/array_ops/shapes_and_shaping#squeeze) with `axis=1`, even though the Fold output type of `td.FC(1, activation=None)` (and hence the input type of the enclosing `Function` block) is a `TensorType` with shape `(1)`. This is because all Fold blocks actually run on TF tensors with an implicit leading batch dimension, which enables execution via [*dynamic batching*](https://arxiv.org/abs/1702.02181). It is important to bear this in mind when creating `Function` blocks that wrap functions that are not applied elementwise.
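The implicit batch dimension can be pictured with plain numpy (an analogy, not Fold's actual machinery): the output of `td.FC(1)` on a batch of 4 examples has shape `(4, 1)`, and squeezing `axis=1` removes the per-example size-1 dimension while leaving the batch axis alone:

```python
import numpy as np

batch_out = np.array([[0.1], [0.2], [0.3], [0.4]])  # (batch=4, 1)
squeezed = np.squeeze(batch_out, axis=1)
print(batch_out.shape, squeezed.shape)  # (4, 1) then (4,)
```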
```
def random_example(fn):
    length = random.randrange(1, 10)
    data = [random.uniform(0,1) for _ in range(length)]
    result = fn(data)
    return data, result
```
The `random_example` function generates training data consisting of `(example, fn(example))` pairs, where `example` is a random list of numbers, e.g.:
```
random_example(sum)
random_example(min)
def train(fn, batch_size=100):
    net_block = reduce_net_block()
    compiler = td.Compiler.create((net_block, td.Scalar()))
    y, y_ = compiler.output_tensors
    loss = tf.nn.l2_loss(y - y_)
    train = tf.train.AdamOptimizer().minimize(loss)
    sess.run(tf.global_variables_initializer())
    validation_fd = compiler.build_feed_dict(random_example(fn) for _ in range(1000))
    for i in range(2000):
        sess.run(train, compiler.build_feed_dict(random_example(fn) for _ in range(batch_size)))
        if i % 100 == 0:
            print(i, sess.run(loss, validation_fd))
    return net_block
```
Now we're going to train a neural network to approximate a reduction function of our choosing. Calling `eval()` repeatedly is super-slow and cannot exploit batch-wise parallelism, so we create a [`Compiler`](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/py/td.md#compiler). See our page on [running blocks in TensorFlow](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/running.md) for more on Compilers and how to use them effectively.
```
sum_block = train(sum)
sum_block.eval([1, 1])
```
Breaking news: deep neural network learns to calculate 1 + 1!!!!
Of course we've done something a little sneaky here by constructing a model that can only represent associative functions and then training it to compute an associative function. The technical term for being sneaky in machine learning is [inductive bias](https://en.wikipedia.org/wiki/Inductive_bias).
```
min_block = train(min)
min_block.eval([2, -1, 4])
```
Oh noes! What went wrong? Note that we trained our network to compute `min` on positive numbers; negative numbers are outside of its input distribution.
```
min_block.eval([0.3, 0.2, 0.9])
```
Well, that's better. What happens if you train the network on negative numbers as well as on positives? What if you only train on short lists and then evaluate the net on long ones? What if you used a `Fold` block instead of a `Reduce`? ... Happy Folding!
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/connected_pixel_count.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/connected_pixel_count.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Image/connected_pixel_count.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/connected_pixel_count.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except:
    import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Image.ConnectedPixelCount example.
# Split pixels of band 01 into "bright" (arbitrarily defined as
# reflectance > 0.3) and "dim". Highlight small (<30 pixels)
# standalone islands of "bright" or "dim" type.
img = ee.Image('MODIS/006/MOD09GA/2012_03_09') \
.select('sur_refl_b01') \
.multiply(0.0001)
# Create a threshold image.
bright = img.gt(0.3)
# Compute connected pixel counts; stop searching for connected pixels
# once the size of the connected neighborhood reaches 30 pixels, and
# use 8-connected rules.
conn = bright.connectedPixelCount(**{
'maxSize': 30,
'eightConnected': True
})
# Make a binary image of small clusters.
smallClusters = conn.lt(30)
Map.setCenter(-107.24304, 35.78663, 8)
Map.addLayer(img, {'min': 0, 'max': 1}, 'original')
Map.addLayer(smallClusters.updateMask(smallClusters),
{'min': 0, 'max': 1, 'palette': 'FF0000'}, 'cc')
```
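The idea behind `connectedPixelCount` can be sketched in plain Python: a toy breadth-first flood fill over a small binary grid using 8-connected rules, capping the search at `max_size` (an illustration only, not Earth Engine's implementation):

```python
from collections import deque

def connected_count(grid, max_size):
    """Per-cell size of the 8-connected component of identical values, capped at max_size."""
    rows, cols = len(grid), len(grid[0])
    counts = [[0] * cols for _ in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    for r0 in range(rows):
        for c0 in range(cols):
            if seen[r0][c0]:
                continue
            # BFS over the component of identical values containing (r0, c0).
            comp, q = [], deque([(r0, c0)])
            seen[r0][c0] = True
            while q:
                r, c = q.popleft()
                comp.append((r, c))
                if len(comp) >= max_size:   # stop searching once the cap is reached
                    break
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and not seen[rr][cc]
                                and grid[rr][cc] == grid[r0][c0]):
                            seen[rr][cc] = True
                            q.append((rr, cc))
            n = len(comp)
            for r, c in comp + list(q):   # queued cells also get the (capped) count
                counts[r][c] = n
    return counts

grid = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 1]]
print(connected_count(grid, max_size=30))
```

Thresholding these counts (e.g. keeping cells whose count is below 30) mirrors the `smallClusters = conn.lt(30)` step above.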
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# Experimental Mathematics: Chronicle of Matlab Code - 2008 - 2015
##### Whereas the discovery of Chaos, Fractal Geometry, and Non-Linear Dynamical Systems falls outside the domain of analytic functions, in mathematical terms the path to discovery is taken as experimental computer programming.
##### Whereas existing discoveries have most delightfully been difference equations, this first effort concentrates on a guided random search for equations and parameters of that type.
### Equation and parameter data were saved in TIFF file headers and extracted via Matlab/Python to:
```
import os
import pandas as pd
spreadsheets_directory = '../data/Matlab_Chronicle_2008-2012/'
images_dataframe_filename = os.path.join(spreadsheets_directory, 'Of_tiff_headers.df')
equations_dataframe_filename = os.path.join(spreadsheets_directory, 'Of_m_files.df')
Images_Chronicle_df = pd.read_csv(images_dataframe_filename, sep='\t', index_col=0)
Equations_Chronicle_df = pd.read_csv(equations_dataframe_filename, sep='\t', index_col=0)
def get_number_of_null_parameters(df, print_out=True):
    """ Usage: good_null_bad_dict = get_number_of_null_parameters(df, print_out=True)
    function to show the number of images with missing or bad parameters,
    because they are only reproducible with both the equation and parameter set
    Args:
        df = dataframe in format of historical images - (not shown in this notebook)
    Returns:
        pars_condition_dict:
            pars_condition_dict['good_pars']: number of good parameters
            pars_condition_dict['null_pars']: number of null parameters
            pars_condition_dict['bad_list']: list of row numbers with bad parameters - for numeric indexing
    """
    null_pars = 0
    good_pars = 0
    bad_list = []
    for n, row in df.iterrows():
        if row['parameters'] == []:
            null_pars += 1
            bad_list.append(n)
        else:
            good_pars += 1
    if print_out:
        print('good_pars', good_pars, '\nnull_pars', null_pars)
    return {'good_pars': good_pars, 'null_pars': null_pars, 'bad_list': bad_list}
def display_images_df_columns_definition():
    """ display an explanation of the images dataframe columns """
    cols = {}
    cols['image_filename'] = 'the file name as found in the header'
    cols['function_name'] = 'm-file and function name'
    cols['parameters'] = 'function parameters used to produce the image'
    cols['max_iter'] = 'escape time algorithm maximum number of iterations'
    cols['max_dist'] = 'escape time algorithm quit distance'
    cols['Colormap'] = 'Name of colormap if logged'
    cols['Center'] = 'center of image location on complex plane'
    cols['bounds_box'] = '(upper left corner) ULC, URC, LLC, LRC'
    cols['Author'] = 'author name if provided'
    cols['location'] = 'file - subdirectory location'
    cols['date'] = 'date the image file was written'
    for k, v in cols.items():
        print('%18s: %s' % (k, v))
def display_equations_df_columns_definition():
    """ display an explanation of the equations dataframe columns """
    cols = {}
    cols['arg_in'] = 'Input signature of the m-file'
    cols['arg_out'] = 'Output signature of the m-file'
    cols['eq_string'] = 'The equation as written in MATLAB'
    cols['while_test'] = 'The loop test code'
    cols['param_iterator'] = 'If parameters are iterated in the while loop'
    cols['internal_vars'] = 'Variables that were set inside the m-file'
    cols['while_lines'] = 'The actual code of the while loop'
    for k, v in cols.items():
        print('%15s: %s' % (k, v))
print('\n\tdisplay_images_df_columns_definition\n')
display_images_df_columns_definition()
print('\n\tdisplay_equations_df_columns_definition\n')
display_equations_df_columns_definition()
print('shape:',
      Images_Chronicle_df.shape,
      'Number of unique files:',
      Images_Chronicle_df['image_filename'].nunique())
print('Number of unique functions used:',
      Images_Chronicle_df['function_name'].nunique())
par_stats_dict = get_number_of_null_parameters(Images_Chronicle_df) # show parameters data
print('\nFirst 5 lines')
Images_Chronicle_df.head() # show top 5 lines
print(Equations_Chronicle_df.shape)
Equations_Chronicle_df.head()
cols = list(Equations_Chronicle_df.columns)
for c in cols:
    print(c)
```
```
import numpy as np
import matplotlib.pyplot as plt
import math
from scipy import interpolate as interp
def linear_translate(x1,x2,X1,X2):
    B=(X1-X2)/(x1-x2)
    A=X1-B*x1
    return [A,B]
def linear_translate_axis(Ax,Bx,arr):
    return Ax+Bx*arr
def log_translate_axis(Ax,Bx,arr):
    return 10**(Ax+Bx*arr)
def log_translate(x1,x2,X1,X2):
    B=np.log10(float(X1)/float(X2))/(x1-x2)
    A=np.log10(X1)-B*x1
    return [A,B]
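# Quick sanity check of the calibration maps above (illustrative, with
# hypothetical calibration points): pixels 0 and 100 pinned to data values
# 1e-2 and 1, so the midpoint pixel 50 must map to 1e-1. The two lines
# below mirror log_translate / log_translate_axis.
import numpy as np
_B = np.log10(1e-2/1.0)/(0.0-100.0)   # slope, as computed in log_translate
_A = np.log10(1e-2) - _B*0.0          # intercept
assert abs(10**(_A + _B*50.0) - 1e-1) < 1e-9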
def format_xml_arr(arr):
    # convert relative offsets to absolute coordinates (in-place cumulative sum)
    for i in range(1,len(arr)):
        arr[i]+=arr[i-1]
def log_translate_arr(Ax,Bx,Ay,By,arr):
    # apply the log calibration to both columns of an Nx2 array at once
    return 10**(np.array([Ax,Ay])+np.array([Bx,By])*arr)
def format_xml_file(file):
    arr=[[0,0]]
    with open(file) as f:
        dat=f.read().splitlines()
    hold=''
    delete_list=[0]
    for i in range(0,len(dat)):
        dat[i]=dat[i].split(',')
        #print(arr[-1],dat[i],hold)
        if dat[i][0].isalpha():
            hold=dat[i][0]
        else:
            if hold=='M':
                arr.append([float(dat[i][0]),float(dat[i][1])])
            elif hold=='m' or hold=='l':
                arr.append([arr[-1][0]+float(dat[i][0]),arr[-1][1]+float(dat[i][1])])
            elif hold=='H':
                arr.append([float(dat[i][0]),arr[-1][1]])
            elif hold=='h':
                arr.append([arr[-1][0]+float(dat[i][0]),arr[-1][1]])
            elif hold=='V':
                arr.append([arr[-1][0],float(dat[i][0])])
            elif hold=='v':
                arr.append([arr[-1][0],arr[-1][1]+float(dat[i][0])])
    del arr[0]
    return arr
def format_xml(file_in,file_out):
    # uses the module-level calibration points x1,x2,X1,X2 and y1,y2,Y1,Y2
    a=format_xml_file(file_in)
    b=np.array(a)
    Ax,Bx=log_translate(x1,x2,X1,X2)
    Ay,By=log_translate(y1,y2,Y1,Y2)
    c=log_translate_arr(Ax,Bx,Ay,By,b)
    np.savetxt(file_out,c)
fp_in="vector_portal_visible_raw/"
fp_out="vector_portal_visible_formatted/"
x1=102.117;x2=403.91;X1=1e-2;X2=1;
y1=211.11;y2=98.4858;Y1=1e-3;Y2=2e-4;
format_xml(fp_in+"BES3_1705.04265.dat",fp_out+"BES3_2017_formatted.dat")
format_xml(fp_in+"APEX1108.2750.dat",fp_out+"APEX2011_formatted.dat")
#1906.00176
x1=275.6694;x2=234.62337;X1=1;X2=1e-1;
y1=59.555832;y2=130.28009;Y1=1e-5;Y2=1e-4;
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
NA64_2019=np.array(format_xml_file("NA64_2019_1906.00176.dat"))
NA64_2019[:,0]=log_translate_axis(Ax,Bx,NA64_2019[:,0])
NA64_2019[:,1]=log_translate_axis(Ay,By,NA64_2019[:,1])
np.savetxt("NA64_2019_formatted.dat",NA64_2019)
#1906.00176
x1=145.30411;x2=234.62337;X1=1e-2;X2=1e-1;
y1=96.63295;y2=67.436142;Y1=1e-13;Y2=1e-14;
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
NA64_2019=np.array(format_xml_file("NA64_2019_1906.00176_2.dat"))
NA64_2019[:,0]=log_translate_axis(Ax,Bx,NA64_2019[:,0])
NA64_2019[:,1]=log_translate_axis(Ay,By,NA64_2019[:,1])
np.savetxt("NA64_2019_aD0.5_formatted.dat",NA64_2019)
#1807.05884.dat
x1=138.435;x2=376.021;X1=1e-2;X2=1e-1;
y1=90.8576;y2=178.355;Y1=1e-14;Y2=1e-12;
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
E137_u=np.array(format_xml_file("E1371807.05884.dat"))
E137_u[:,0]=log_translate_axis(Ax,Bx,E137_u[:,0])
E137_u[:,1]=log_translate_axis(Ay,By,E137_u[:,1])
np.savetxt("E137update_Y3_0.5.dat",E137_u)
#1311.0216
x1=1452.5;x2=4420;X1=0.1;X2=0.5;
y1=2427.5;y2=3237.5;Y1=1e-4;Y2=1e-3;
Ax,Bx=linear_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
hadesa=np.array(format_xml_file(fp_in+"HADES1311.0216.dat"))
hadesb=np.array(format_xml_file(fp_in+"HADES1311.0216b.dat"))
hadesc=np.array(format_xml_file(fp_in+"HADES1311.0216c.dat"))
hadesd=np.array(format_xml_file(fp_in+"HADES1311.0216d.dat"))
hades=np.concatenate((hadesa,hadesb,hadesc,hadesd),axis=0)
hades[:,0]=linear_translate_axis(Ax,Bx,hades[:,0])
hades[:,1]=log_translate_axis(Ay,By,hades[:,1])
hades[:,1]=[math.sqrt(y) for y in hades[:,1]]
np.savetxt(fp_out+"HADES2013_formatted.dat",hades)
#1409.0851
x1=100.501;x2=489.907;X1=10;X2=90;
y1=309.828;y2=91.8798;Y1=1e-5;Y2=1e-6;
Ax,Bx=linear_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
phenix=np.array(format_xml_file(fp_in+"PHENIX1409.0851.dat"))
phenix[:,0]=linear_translate_axis(Ax,Bx,phenix[:,0])/1000
phenix[:,1]=log_translate_axis(Ay,By,phenix[:,1])
phenix[:,1]=[math.sqrt(y) for y in phenix[:,1]]
np.savetxt(fp_out+"PHENIX2014_formatted.dat",phenix)
#1304.0671
x1=2152.5;x2=4772.5;X1=40;X2=100;
y1=2220;y2=3805;Y1=1e-5;Y2=1e-4;
Ax,Bx=linear_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
wasa=np.array(format_xml_file(fp_in+"WASA1304.0671.dat"))
wasa[:,0]=linear_translate_axis(Ax,Bx,wasa[:,0])/1000
wasa[:,1]=log_translate_axis(Ay,By,wasa[:,1])
wasa[:,1]=[math.sqrt(y) for y in wasa[:,1]]
np.savetxt(fp_out+"WASA2013_formatted.dat",wasa)
#1404.5502
x1=906.883;x2=2133.43;X1=100;X2=300;
y1=1421.71;y2=821.906;Y1=1e-5;Y2=1e-6;
Ax,Bx=linear_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
a1=format_xml_file(fp_in+"A1_1404.5502.dat")
a1=np.array(a1)
a1[:,0]=linear_translate_axis(Ax,Bx,a1[:,0])/1000
a1[:,1]=log_translate_axis(Ay,By,a1[:,1])
a1[:,1]=[math.sqrt(y) for y in a1[:,1]]
np.savetxt(fp_out+"A12014_formatted.dat",a1)
#0906.0580v1
x1=154.293;x2=277.429;X1=1e-2;X2=1;
y1=96.251;y2=208.027;Y1=1e-4;Y2=1e-8;
format_xml(fp_in+"E774_0906.0580.dat",fp_out+"E774_formatted.dat")
format_xml(fp_in+"E141_0906.0580.dat",fp_out+"E141_formatted.dat")
format_xml(fp_in+"E137_0906.0580.dat",fp_out+"E137_formatted.dat")
#1504.00607
x1=1375;x2=4242.5;X1=10;X2=100;
y1=4020;y2=2405;Y1=1e-5;Y2=1e-6
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
na48=format_xml_file(fp_in+"NA482_1504.00607.dat")
na48=np.array(na48)
na48=log_translate_arr(Ax,Bx,Ay,By,na48)
na48[:,0]=na48[:,0]/1000
na48[:,1]=[math.sqrt(y) for y in na48[:,1]]
np.savetxt(fp_out+"NA48_2_formatted.dat",na48)
#1406.2980
x1=250.888;x2=400.15;X1=1e-1;X2=1;
y1=211.11;y2=98.4858;Y1=1e-3;Y2=2e-4;
format_xml(fp_in+"babar1406.2980.dat",fp_out+"Babar2014_formatted.dat")
format_xml(fp_in+"babar0905.4539.dat",fp_out+"babar2009_formatted.dat")
#1509.00740
x1=96.3223;x2=151.6556;X1=10;X2=100;
y1=107.91647;y2=35.94388;Y1=1e-5;Y2=1e-7;
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
kloe2015=np.array(format_xml_file(fp_in+"KLOE1509.00740.dat"))
kloe2015=log_translate_arr(Ax,Bx,Ay,By,kloe2015)
kloe2015[:,0]=kloe2015[:,0]/1000
kloe2015[:,1]=[math.sqrt(y) for y in kloe2015[:,1]]
np.savetxt(fp_out+"KLOE2015_formatted.dat",kloe2015)
kloe2013=np.array(format_xml_file(fp_in+"KLOE1110.0411.dat"))
kloe2013=log_translate_arr(Ax,Bx,Ay,By,kloe2013)
kloe2013[:,0]=kloe2013[:,0]/1000
kloe2013[:,1]=[math.sqrt(y) for y in kloe2013[:,1]]
np.savetxt(fp_out+"KLOE2013_formatted.dat",kloe2013)
kloe2014=np.array(format_xml_file(fp_in+"KLOE1404.7772.dat"))
kloe2014=log_translate_arr(Ax,Bx,Ay,By,kloe2014)
kloe2014[:,0]=kloe2014[:,0]/1000
kloe2014[:,1]=[math.sqrt(y) for y in kloe2014[:,1]]
np.savetxt(fp_out+"KLOE2014_formatted.dat",kloe2014)
#1603.06086
x1=0;x2=38.89273;X1=0;X2=200;
y1=-376.57767;y2=-215.18724;Y1=1e-7;Y2=1e-4;
Ax,Bx=linear_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
kloe2016=np.array(format_xml_file(fp_in+"KLOE1603.06086.dat"))
kloe2016[:,0]=linear_translate_axis(Ax,Bx,kloe2016[:,0])/1000
kloe2016[:,1]=log_translate_axis(Ay,By,kloe2016[:,1])
kloe2016[:,1]=[math.sqrt(y) for y in kloe2016[:,1]]
np.savetxt(fp_out+"KLOE2016_formatted.dat",kloe2016)
xenon10e=np.loadtxt("xenon10e.dat",delimiter=',')
format_xml_arr(xenon10e)
x1=159; x2=217; X1=0.010; X2=0.100
y1=36; y2=83; Y1=1e-34; Y2=1e-36
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
xenon10e=log_translate_arr(Ax,Bx,Ay,By,xenon10e)
np.savetxt("xenon10e_formatted.csv",xenon10e,delimiter=',')
#1703.00910
#This is FDM=1 case, Xenon10. It basically beats Xenon100 everywhere.
xenon10e2017=np.loadtxt("1703.00910.xenonelimits.dat",delimiter=',')
format_xml_arr(xenon10e2017)
x1=93.305; x2=195.719; X1=0.010; X2=0.100
y1=86.695; y2=151.848; Y1=1e-38; Y2=1e-37
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
xenon10e2017=log_translate_arr(Ax,Bx,Ay,By,xenon10e2017)
xenon100e2017=np.loadtxt("1703.00910.xenon100elimits.dat",delimiter=',')
format_xml_arr(xenon100e2017)
xenon100e2017=log_translate_arr(Ax,Bx,Ay,By,xenon100e2017)
np.savetxt("xenon10e_2017_formatted.csv",xenon10e2017,delimiter=',')
np.savetxt("xenon100e_2017_formatted.csv",xenon100e2017,delimiter=',')
babar2017=np.loadtxt("babar2017.dat",delimiter=',')
format_xml_arr(babar2017)
y1=211.843; y2=50.1547; Y1=1e-3; Y2=1e-4;
x1=181.417; x2=430.866; X1=1e-2; X2=1.0
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
babar2017_formatted=log_translate_arr(Ax,Bx,Ay,By,babar2017)
np.savetxt("babar2017_formatted.dat",babar2017_formatted,delimiter=' ')
NA64_2016 = np.loadtxt("NA64_2016_data.dat",delimiter=',')
format_xml_arr(NA64_2016)
NA64_2017 = np.loadtxt("NA64_2017_data.dat",delimiter=',')
format_xml_arr(NA64_2017)
NA64_2018 = np.loadtxt("NA64_2018plus_data.dat",delimiter=',')
format_xml_arr(NA64_2018)
y1=186.935;y2=105.657;Y1=1e-4;Y2=2e-5;
x1=202.677;x2=314.646;X1=1e-2;X2=1e-1;
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
NA64_2016_formatted=log_translate_arr(Ax,Bx,Ay,By,NA64_2016)
NA64_2017_formatted=log_translate_arr(Ax,Bx,Ay,By,NA64_2017)
NA64_2018_formatted=log_translate_arr(Ax,Bx,Ay,By,NA64_2018)
np.savetxt("NA64_2016_formatted.dat",NA64_2016_formatted,delimiter=' ')
np.savetxt("NA64_2017_formatted.dat",NA64_2017_formatted,delimiter=' ')
np.savetxt("NA64_2018_formatted.dat",NA64_2018_formatted,delimiter=' ')
anomalon_1705_06726= np.loadtxt("Anomalon.dat",delimiter=',')
format_xml_arr(anomalon_1705_06726)
BtoKX_1705_06726= np.loadtxt("1705.06726.BtoKX.dat",delimiter=',')
format_xml_arr(BtoKX_1705_06726)
ZtogammaX_1705_06726= np.loadtxt("1705.06726.ZtogammaX.dat",delimiter=',')
format_xml_arr(ZtogammaX_1705_06726)
KtopiX_1705_06726= np.loadtxt("1705.06726.KtopiX.dat",delimiter=',')
format_xml_arr(KtopiX_1705_06726)
y1=389.711;y2=188.273;Y1=10**-2;Y2=10**-5;
x1=272.109;x2=478.285;X1=10**-2;X2=1;
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
anomalon_1705_06726_formatted=log_translate_arr(Ax,Bx,Ay,By,anomalon_1705_06726)
BtoKX_1705_06726_formatted=log_translate_arr(Ax,Bx,Ay,By,BtoKX_1705_06726)
ZtogammaX_1705_06726_formatted=log_translate_arr(Ax,Bx,Ay,By,ZtogammaX_1705_06726)
KtopiX_1705_06726_formatted=log_translate_arr(Ax,Bx,Ay,By,KtopiX_1705_06726)
np.savetxt("Anomalon_formatted.dat",anomalon_1705_06726_formatted,delimiter=' ')
np.savetxt("1705.06726.BtoKX_formatted.dat",BtoKX_1705_06726_formatted,delimiter=' ')
np.savetxt("1705.06726.ZtogammaX_formatted.dat",ZtogammaX_1705_06726_formatted,delimiter=' ')
np.savetxt("1705.06726.KtopiX_formatted.dat",KtopiX_1705_06726_formatted,delimiter=' ')
anomalon_1705_06726_formatted[:,1]**2
NA64_2018 = np.loadtxt("NA64_2018.dat",delimiter=',')
format_xml_arr(NA64_2018)
x1=125.29126;x2=200.49438;X1=1e-2;X2=1e-1;
y1=116.07875;y2=193.88962;Y1=1e-4;Y2=1e-3;
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
NA64_2018_formatted = log_translate_arr(Ax,Bx,Ay,By,NA64_2018)
np.savetxt("NA64_2017_formatted.dat",NA64_2018_formatted,delimiter=' ')
CDMSelec = np.loadtxt("1804.10697.SuperCDMS.dat",delimiter=',')
format_xml_arr(CDMSelec)
x1=105.82982;x2=259.22375;X1=1e-3;X2=100e-3
y1=264.07059;y2=80.258824;Y1=1e-27;Y2=1e-41
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
CDMSelec_2018_formatted=log_translate_arr(Ax,Bx,Ay,By,CDMSelec)
np.savetxt("CDMS_electron_2018_formatted.dat",CDMSelec_2018_formatted,delimiter=' ')
SENSEI2018_1=np.loadtxt("SENSEI2018_1.dat",delimiter=',')
SENSEI2018_2=np.loadtxt("SENSEI2018_2.dat",delimiter=',')
SENSEI2018_3=np.loadtxt("SENSEI2018_3.dat",delimiter=',')
SENSEI2018_4=np.loadtxt("SENSEI2018_4.dat",delimiter=',')
SENSEI2018=[SENSEI2018_1,SENSEI2018_2,SENSEI2018_3,SENSEI2018_4]
for arr in SENSEI2018:
    format_xml_arr(arr)
x_set=np.unique(np.append(np.append(np.append(SENSEI2018[0][:,0],SENSEI2018[1][:,0]),
SENSEI2018[2][:,0]),SENSEI2018[3][:,0]))
interp_arr = [interp.interp1d(arr[:,0],arr[:,1],bounds_error=False,fill_value=10000) for arr in SENSEI2018]
x_set=[x for x in x_set]
SENSEI2018f=np.array([[x,min([func(x) for func in interp_arr]).tolist()] for x in x_set])
x1=104.473;x2=192.09;X1=1e-3;X2=10e-3;
y1=347.496;y2=318.992;Y1=1e-28;Y2=1e-29;
Ax,Bx=log_translate(x1,x2,X1,X2)
Ay,By=log_translate(y1,y2,Y1,Y2)
SENSEI2018_formatted=log_translate_arr(Ax,Bx,Ay,By,SENSEI2018f)
np.savetxt("SENSEI2018_formatted.dat",SENSEI2018_formatted,delimiter=' ')
interp_arr[0](1)
SENSEI2018f
x_set
# Abandoned earlier attempts, kept commented out (not runnable as written):
# SENSEI2018f=[[x[0],min([func(x) for func in interp_arr])[0]] for x in x_set]
# x_set=map(np.unique,sorted(np.append(np.append(np.append(SENSEI2018[0][:,0],SENSEI2018[1][:,0]),
#                                                SENSEI2018[2][:,0]),SENSEI2018[3][:,0])))
SENSEI2018
# interp.interp1d(SENSEI2018[0]  # incomplete call, left unfinished
sorted(np.append(np.append(np.append(SENSEI2018[0][:,0],SENSEI2018[1][:,0]),SENSEI2018[2][:,0]),SENSEI2018[3][:,0]))
```
## Tilting Point Data Test (Part 2)
- You are free to use any package that you deem necessary; the basic packages are already imported
- After each question, there will be an answer area, you can add as many cells as you deem necessary between answer and following question
- The accuracy is less important; the path to get there is what's important
```
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
%matplotlib inline
```
### Question: Data Analysis and Machine Learning
Attached is data from a Game of War-type game. Given the data, please build a model to predict players' LTV.
1. What kind of exploratory analysis would you want to conduct?
2. What procedure would you take to build the model?
3. How would you validate the result?
4. Please choose one model, and perform from step 1 to step 3 in order
5. How would you productionalize it?
```
df = pd.read_csv(os.path.join('data', 'ltv_data.csv'))
df.head()
```
##### Answer below
### 1. What kind of exploratory analysis would you want to conduct?
I would like to follow the CRISP-DM approach for exploratory analysis of the given data.
The CRISP-DM process (Cross-Industry Standard Process for Data Mining) mainly consists of the following parts:
* Business Understanding
* Data Understanding
* Data Preparation
* Data Modeling
* Evaluation
* Deployment
### Business Understanding
It looks pretty obvious that we want to be able to predict our users' LTV from available gameplay data, and to be confident in the prediction for better decision making.
### Data Understanding
Let's have a look at the given data. All its columns are numerical; we don't have any categorical data. This means that all our possible factors are quantitative.
```
df.describe()
```
**1.** Let's explore the data shape
```
df.shape
```
**2.** Which columns had no missing values? Provide a set of column names that have no missing values.
```
no_nulls = set(df.dropna(axis='columns').columns.tolist())
print(no_nulls)
```
**3.** Which columns have the most missing values? Provide a set of column names that have more than 75% of their values missing.
```
missing_df = df.isnull().sum(axis=0) / len(df)
missing_df[missing_df > 0.75].index.tolist()
df['vip_level'].unique()
df['core_level'].unique()
vip_levels = df['vip_level'].value_counts()
(vip_levels/df.shape[0]).plot(kind="bar");
plt.title("What kind of vip level do users mostly have?");
core_levels = df['core_level'].value_counts()
(core_levels/df.shape[0]).plot(kind="bar");
plt.title("What kind of core level do users mostly have?");
```
### 2. What procedure would you take to build the model?
#### Supervised ML Process
* Instantiate Model
* Fit Model
* Predict on test set
* Score Model using Metric
### Data Preparation
**4.** Mainly, we want to answer the question: which columns relate to LTV?
For that purpose, let's calculate the Correlation Matrix
```
corr = df.corr()
corr
```
And plot a heatmap of that matrix
```
fig, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(
corr,
xticklabels=corr.columns,
yticklabels=corr.columns,
annot=True,
fmt='.2f',
linewidths=.5,
ax=ax
)
```
As we can see from the Correlation Matrix, the features most relevant to LTV are `day7_rev` and `day30_rev`, as expected. `session_length` and `core_level` also have a weak linear influence on LTV, while the other factors are negligible. Of course, the Correlation Matrix only captures approximately linear relations between our numeric features and the predicted variable. To detect non-linear relations we need to conduct more advanced analysis.
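One simple step beyond the linear (Pearson) correlation is rank correlation, which captures any monotone relation. A self-contained illustration on synthetic data (the column names here are stand-ins, not this notebook's dataset):

```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
toy = pd.DataFrame({'day7_rev': x, 'ltv': np.exp(x)})  # monotone but non-linear

pearson = toy.corr(method='pearson').loc['day7_rev', 'ltv']
spearman = toy.corr(method='spearman').loc['day7_rev', 'ltv']
# Pearson understates the strictly monotone relation; Spearman scores it as 1.
print(round(pearson, 3), round(spearman, 3))
```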
Let's plot some histograms of the selected variables to see their distributions.
```
df[['day7_rev', 'day30_rev', 'session_length', 'core_level', 'ltv']].hist(figsize=(16, 9));
```
### Working with Missing Values
* Remove
* Impute
* Work Around
**1.** What proportion of the data has a reported LTV?
```
df.ltv.notna()
prop_ltv = df.ltv.notna().mean()
prop_ltv
```
**2.** Remove the rows with missing LTV values from the dataset
```
ltv_rm = df.dropna(subset=['ltv'], axis=0)
ltv_rm
df.ltv.describe()
all_rm = df.dropna(axis=0)
all_rm
```
### 3. How would you validate the result?
We will validate the result using the Coefficient of Determination, or [R2 Score](https://en.wikipedia.org/wiki/Coefficient_of_determination).
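For reference, with predictions $\hat{y}_i$ and target mean $\bar{y}$, the score is

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} \; .$$

A model that predicts the mean everywhere scores 0, perfect predictions score 1, and on held-out data the score can even be negative.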
### 4. Please choose one model, and perform from step 1 to step 3 in order
For simplicity, let's consider a **Linear Regression Model** to predict LTV. We will randomly split our dataset into `train` and `test` samples, fit the model on the training part of the data, and validate it on the testing part. We will use the **Coefficient of Determination** as our validation metric.
```
def test_linear_model(data_frame, features):
    X = data_frame[features]  # create X using explanatory variables
    y = data_frame.ltv
    # split data into training and test data, and fit a linear model
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.30, random_state=42)
    lm_model = LinearRegression(normalize=True)
    lm_model.fit(X_train, y_train)
    y_test_preds = lm_model.predict(X_test)  # predictions using X and lm_model
    rsquared_score = r2_score(y_test, y_test_preds)  # rsquared for comparing test and preds from lm_model
    length_y_test = len(y_test)  # num in y_test
    mse = mean_squared_error(y_test, y_test_preds)
    print("The r-squared score for the model was {} on {} values.".format(rsquared_score, length_y_test))
```
Let's check our model on the dataset with all rows containing missing values removed.
```
test_linear_model(all_rm, list(all_rm.columns)[:-1])
test_linear_model(all_rm, ['day7_rev', 'day30_rev', 'session_length', 'core_level'])
```
That's a rather poor result. Let's consider some approaches to improve it.
### Common Imputation Methods
* Mean
* Median
* Mode
* Predict using other columns and a supervised model
* Find similar rows and fill with their values (k-NN approach)
By imputing, we are diluting the importance of that feature. Variability in features is what allows us to use them to predict another variable well.
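The last bullet above (the k-NN fill) can be sketched in plain NumPy. This is an illustration of the idea only, not the method used in the rest of this notebook:

```
import numpy as np

def knn_impute(X, k=2):
    """Fill each NaN with the mean over the k nearest fully observed rows,
    measuring distance only on the columns the incomplete row observes."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for row in X:                      # rows are views, so edits land in X
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.linalg.norm(complete[:, ~miss] - row[~miss], axis=1)
        nearest = complete[np.argsort(d)[:k]]
        row[miss] = nearest[:, miss].mean(axis=0)
    return X

X = np.array([[1.0, 2.0], [1.1, np.nan], [5.0, 9.0]])
filled = knn_impute(X, k=1)            # nearest complete row is [1.0, 2.0]
```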
```
drop_ltv_df = df.dropna(subset=['ltv']) # drop the rows with missing ltv
# test look
drop_ltv_df.head()
fill_df = drop_ltv_df.fillna(drop_ltv_df.mean()) # fill all missing values with the mean of the column.
# test look
fill_df.head()
```
By the way, the imputation procedure in our case has not really changed the Correlation Matrix.
```
fill_df.corr()
```
Now we have managed to improve the quality of the model by applying the mean imputation method to the missing values in the dataset.
```
test_linear_model(all_rm, list(all_rm.columns)[:-1])
test_linear_model(fill_df, list(fill_df.columns)[:-1])
test_linear_model(fill_df, ['day7_rev', 'day30_rev', 'session_length', 'core_level'])
```
### 5. How would you productionalize it?
This model could be productionalized as a Python package or as a cloud micro-service, depending on its intended usage.
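As a concrete first step toward either option, the fitted estimator can be serialized once and loaded by the package or service at startup. Sketched below with the standard-library `pickle` on a placeholder object standing in for the fitted model (for real scikit-learn estimators, `joblib` is the usual recommendation):

```
import pickle

model = {'features': ['day7_rev', 'day30_rev'], 'coef': [0.8, 1.1]}  # placeholder
blob = pickle.dumps(model)        # bytes, ready for disk or a blob store
restored = pickle.loads(blob)
assert restored == model          # round-trips intact
```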
<a href="https://colab.research.google.com/github/DJCordhose/ux-by-tfjs/blob/master/notebooks/click-sequence-model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Training on Sequences of Clicks on the Server
Make sure to run this from top to bottom to export the model; otherwise the names of the layers cause tf.js to bail out.
```
# Gives us a well defined version of tensorflow
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
import tensorflow as tf
print(tf.__version__)
# a small sanity check, does tf seem to work ok?
hello = tf.constant('Hello TF!')
print("This works: {}".format(hello))
# this should return True even on Colab
tf.test.is_gpu_available()
tf.test.is_built_with_cuda()
tf.executing_eagerly()
```
## load data
```
import pandas as pd
print(pd.__version__)
import numpy as np
print(np.__version__)
# local
# URL = '../data/click-sequence.json'
# remote
URL = 'https://raw.githubusercontent.com/DJCordhose/ux-by-tfjs/master//data/click-sequence.json'
df = pd.read_json(URL, typ='series')
len(df)
df.head()
type(df[0])
df[0], df[1]
all_buttons = set()
for seq in df:
    for button in seq:
        all_buttons.add(button)
all_buttons.add('<START>')
all_buttons.add('<EMPTY>')
all_buttons = list(all_buttons)
all_buttons
from sklearn.preprocessing import LabelBinarizer, MultiLabelBinarizer, LabelEncoder
encoder = LabelEncoder()
encoder.fit(all_buttons)
encoder.classes_
transfomed_labels = [encoder.transform(seq) for seq in df if len(seq) != 0]
transfomed_labels
```
## pre-process data into chunks
```
empty = encoder.transform(['<EMPTY>'])
start = encoder.transform(['<START>'])
chunk_size = 5
# [ 1, 11, 7, 11, 6] => [[[0, 0, 0, 0, 1], 11], [[0, 0, 0, 1, 11], 7], [[0, 0, 1, 11, 7], 11], [[0, 1, 11, 7, 11], 6]]
def create_sequences(seq, chunk_size, empty, start):
    # all sequences implicitly start
    seq = np.append(start, seq)
    # if sequence is too short, we pad it to minimum size at the beginning
    seq = np.append(np.full(chunk_size - 1, empty), seq)
    seqs = np.array([])
    for index in range(chunk_size, len(seq)):
        y = seq[index]
        x = seq[index-chunk_size : index]
        seqs = np.append(seqs, [x, y])
    return seqs
# seq = transfomed_labels[0]
# seq = transfomed_labels[9]
seq = transfomed_labels[3]
seq
create_sequences(seq, chunk_size, empty, start)
seqs = np.array([])
for seq in transfomed_labels:
    seqs = np.append(seqs, create_sequences(seq, chunk_size, empty, start))
seqs = seqs.reshape(-1, 2)
seqs.shape
X = seqs[:, 0]
# X = X.reshape(-1, chunk_size)
X = np.vstack(X).astype('int32')
X.dtype, X.shape
y = seqs[:, 1].astype('int32')
y.dtype, y.shape, y
```
## Training
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, LSTM, GRU, SimpleRNN, Embedding, BatchNormalization, Dropout
from tensorflow.keras.models import Sequential, Model
embedding_dim = 2
n_buttons = len(encoder.classes_)
dropout = .6
recurrent_dropout = .6
model = Sequential()
model.add(Embedding(name='embedding',
input_dim=n_buttons,
output_dim=embedding_dim,
input_length=chunk_size))
model.add(SimpleRNN(units=50, activation='relu', name="RNN", recurrent_dropout=recurrent_dropout))
# model.add(GRU(units=25, activation='relu', name="RNN", recurrent_dropout=0.5))
# model.add(LSTM(units=25, activation='relu', name="RNN", recurrent_dropout=0.5))
model.add(BatchNormalization())
model.add(Dropout(dropout))
model.add(Dense(units=n_buttons, name='softmax', activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
%%time
EPOCHS = 1000
BATCH_SIZE = 100
history = model.fit(X, y,
batch_size=BATCH_SIZE,
epochs=EPOCHS, verbose=0, validation_split=0.2)
loss, accuracy = model.evaluate(X, y, batch_size=BATCH_SIZE)
accuracy
%matplotlib inline
import matplotlib.pyplot as plt
# plt.yscale('log')
plt.ylabel('loss')
plt.xlabel('epochs')
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['loss', 'val_loss'])
plt.ylabel('accuracy')
plt.xlabel('epochs')
# TF 2.0
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
# plt.plot(history.history['acc'])
# plt.plot(history.history['val_acc'])
plt.legend(['accuracy', 'val_accuracy'])
model.predict([[X[0]]])
model.predict([[X[0]]]).argmax()
y[0]
y_pred = model.predict([X]).argmax(axis=1)
y_pred
y
# TF 2.0
cm = tf.math.confusion_matrix(labels=tf.constant(y, dtype=tf.int64), predictions=tf.constant(y_pred, dtype=tf.int64))
cm
import seaborn as sns
classes = encoder.classes_
plt.figure(figsize=(8, 8))
sns.heatmap(cm, annot=True, fmt="d", xticklabels=classes, yticklabels=classes);
embedding_layer = model.get_layer('embedding')
embedding_model = Model(inputs=model.input, outputs=embedding_layer.output)
embeddings_2d = embedding_model.predict(X)
embeddings_2d.shape
encoder.classes_
encoded_classes = encoder.transform(encoder.classes_)
same_button_seqs = np.repeat(encoded_classes, 5).reshape(14, 5)
embeddings_2d = embedding_model.predict(same_button_seqs)
embeddings_2d.shape
only_first = embeddings_2d[:, 0, :]
only_first
# for printing only
plt.figure(figsize=(10,10))
# plt.figure(dpi=600)
# plt.figure(dpi=300)
plt.axis('off')
plt.scatter(only_first[:, 0], only_first[:, 1], s=200)
for name, x_pos, y_pos in zip(encoder.classes_, only_first[:, 0], only_first[:, 1]):
    # print(name, (x_pos, y_pos))
    plt.annotate(name, (x_pos, y_pos), rotation=-60, size=25)
from sklearn.decomposition import PCA
import numpy as np
embeddings_1d = PCA(n_components=1).fit_transform(only_first)
# for printing only
plt.figure(figsize=(25,5))
# plt.figure(dpi=300)
plt.axis('off')
plt.scatter(embeddings_1d, np.zeros(len(embeddings_1d)), s=80)
for name, x_pos in zip(encoder.classes_, embeddings_1d):
    plt.annotate(name, (x_pos, 0), rotation=-45)
```
## Convert Model into tfjs format
* https://www.tensorflow.org/js/tutorials/conversion/import_keras
```
!pip install -q tensorflowjs
model.save('ux.h5', save_format='h5')
!ls -l
!tensorflowjs_converter --input_format keras ux.h5 tfjs
!ls -l tfjs
```
Download using the _Files_ menu on the left
### Introduction
This is an instruction set for running SQUID simulations using a handful of packages written in Python 3. Each package is built around a SQUID model and includes a solver and some utilities that use the solver indirectly to produce more complicated output.
This tutorial will walk through using the **noisy_squid.py** package. Included in each SQUID package are:
**basic model** - gives timeseries output of the state of the SQUID
**vj_timeseries()** - gives a plot and csv file of the timeseries state of the SQUID
**iv_curve()** - gives contour plot (discrete values of one parameter) of I-V curve and csv file
**vphi_curve()** - gives contour plot (discrete values of one parameter) of V-PHI curve and csv file
**transfer_fn()** - gives plots of average voltage surface and transfer function surface as well as csv files for each. Returns array of average voltage over the input parameter space in i and phia
Associated with each package:
**dev_package_name.ipynb** - a jupyter notebook used in developing the model and utilities
**package_name.ipynb** - a refined jupyter notebook with helpful explanations of how the model and utilities work
**package_name.py** - a python 3 file containing only code, intended for import into a python session of some sort
### Packages
There are four packages, **noisy_single_junction.py**, **quiet_squid.py**, **noisy_squid.py**, and **noisy_squid_2nd_order.py**. The first is somewhat separate and documentation is provided in the jupyter notebook for that file.
Each package requires simulation inputs, namely the time step size **tau**, the number of steps to simulate **nStep**, and an initial state vector **s**. Also required are a set of input parameters **i** and **phia**, and physical parameters **alpha**, **betaL**, **eta**, and **rho** at a minimum. The two more detailed solvers require a noise parameter **Gamma**, and perhaps two more parameters **betaC** and **kappa**. These are all supplied as one array of values called **par**.
The other three are useful to simulate a two-junction SQUID in different circumstances.
**quiet_squid.py** simulates a two-junction SQUID with no thermal noise and no appreciable shot noise. The model is first order, so it does not account for capacitive effects. Use this model if the dynamics of the system without noise are to be investigated.
**noisy_squid.py** is similar to the above, but includes the effects of thermal Johnson noise in the junctions. This model is also first order, assuming negligible effects from capacitance. It is sometimes necessary or safe to assume negligible, or zero, capacitance; use this first-order model in that case, as setting capacitance to zero in the second-order model will result in divide-by-zero errors.
**noisy_squid_2nd_order.py** is similar to the above, but includes second order effects due to capacitance. This model should be used if capacitance should be considered non-zero.
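As a rough illustration of what a first-order noisy solver of this kind does internally, the sketch below integrates one common dimensionless form of the symmetric two-junction RSJ SQUID equations with an Euler-Maruyama step. It is a self-contained toy, not the package's implementation: the sign conventions are assumptions, and the package's additional parameters (**alpha**, **eta**, **rho**) are omitted.

```
import numpy as np

def noisy_squid_sketch(nStep, tau, s, i, phia, betaL, Gamma, seed=0):
    """Euler-Maruyama integration of a symmetric two-junction SQUID.
    s = (delta_1, delta_2); returns the time array theta and voltage trace v."""
    rng = np.random.default_rng(seed)
    d1, d2 = s
    theta = np.arange(nStep)*tau
    v = np.empty(nStep)
    for n in range(nStep):
        j = (d1 - d2 - 2*np.pi*phia)/(np.pi*betaL)          # circulating current
        nu1, nu2 = rng.normal(0., np.sqrt(2*Gamma/tau), 2)  # Johnson noise terms
        dd1 = i/2 - j - np.sin(d1) + nu1                    # junction 1 phase rate
        dd2 = i/2 + j - np.sin(d2) + nu2                    # junction 2 phase rate
        v[n] = (dd1 + dd2)/2                                # average voltage
        d1 += tau*dd1
        d2 += tau*dd2
    return theta, v

theta, v = noisy_squid_sketch(8000, .1, (0., 0.), i=1.5, phia=.5,
                              betaL=1., Gamma=.05)
```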
#### Prerequisites
The only prerequisite is to have Python 3 installed on your machine. You may prefer to use a Python environment like Jupyter Notebook or Spyder. The easiest way to get everything is by downloading and installing the latest distribution of **Anaconda**, which will install Python 3, Jupyter Notebook, and Spyder, as well as provide convenient utilities for adding community-provided libraries. Anaconda download link provided:
https://www.anaconda.com/distribution/
Working from the console is easy enough, but working in your preferred development environment will be easier.
### Prepare a file for output
This tutorial will work out of the console, but the same commands will work for a development environment.
Create a folder in a convenient location on your computer and place the relevant python file described above in it. Output files, including csv data files and png plots, will be stored in this folder. All file outputs are named like **somethingDatetime.csv** or .png.
### Open a python environment
These packages can be used directly from the console or within your favorite python development environment.
This tutorial will assume the user is working from the console. Open a command prompt. You can do this on Windows by typing "cmd" in the start search bar and launching **Command Prompt**. Change directory to the file folder created in the step above.
***cd "file\tree\folder"***
With the command prompt indicating you are in the folder, type "python" and hit enter. If there are multiple instances (different versions) of python on your machine, this may need to be accounted for in launching the correct version. See the internet for help on this.
If you have a favorite python environment, be sure to launch in the folder or change the working directory of the development environment to the folder you created. If you do not wish to change the working directory, place the package .py file in the working directory.
### Load the relevant package
With python running, at the command prompt in the console, import the python file containing the model needed.
In this tutorial we will use the first order model including noise, **noisy_squid.py**. Type "import noisy_squid". Execute the command by hitting enter on the console. It may be easier to give the package a nickname, as we will have to type it every time we call a function within it. Do this by instead typing "import noisy_squid as nickname", as below.
```
import noisy_squid as ns
```
We need a standard package called **numpy** as well. This library includes some tools we need to create input, namely numpy arrays. Type "import numpy as np" and hit enter.
The code inside the packages also relies on other standard packages. Those are loaded from within the package.
```
import numpy as np
```
#### Getting Help
You can access the short help description of each model and utility by typing:
***?package_name.utility_name()***
Example:
```
?ns.transfer_fn()
```
### noisy_squid.noisySQUID()
The model itself, **noisySQUID()** can be run and gives a timeseries output array for the state of the system.
Included in the output array are **delta_1**, **delta_2**, **j**, **v_1**, **v_2**, and **v** at each simulated moment in time.
To run, first we need to set up some parameters. For detailed explanations of these, see the development log in the jupyter notebook associated with the package.
Parameter definitions can be handled in the function call or defined before the function call.
An example of the former: Define values using parameter names, build a parameter array, and finally call the function. Remember, s and par are numpy arrays.
```
# define values
nStep = 8000
tau = .1
s = np.array([0.,0.])
alpha = 0.
betaL = 1.
eta = 0.
rho = 0.
i = 1.5
phia = .5
Gamma = .05
par = np.array([alpha,betaL,eta,rho,i,phia,Gamma])
```
Now we can call the simulation. Type
***ns.noisySQUID(nStep,tau,s,par)***
to call the function. We may wish to define a new variable to hold the simple array output. We can then show the output by typing the variable again. We do this below by letting **S** be the output. **S** will take on the form of the output, here an array.
```
S = ns.noisySQUID(nStep,tau,s,par)
```
The shortcut method is to call the function only, replacing variables with values. This example includes a line break "\". You can just type it out all in one line if it will fit.
```
S = ns.noisySQUID(8000,.1,np.array([0.,0.]),\
np.array([0.,1.,0.,0.,1.5,.5,.05]))
```
We can check the output. Have a look at **S** by typing it.
```
S
```
On its own this output doesn't mean much. Since the voltage across the circuit, **v**, is stored in **S** as the 7th row (index 6), we can plot the voltage timeseries. The time **theta** is stored as the first row (index 0). We will need the plotting package **matplotlib.pyplot** to do this.
```
import matplotlib.pyplot as plt
plt.plot(S[0],S[6])
plt.xlabel(r'time, $\theta$')
plt.ylabel(r'ave voltage, $v$')
```
This model will be useful if you wish to extend the existing utilities here or develop new utilities. To explore the nature of SQUID parameter configurations, it may be easier to start with the other utilities provided.
### noisy_squid.vj_timeseries()
use "**'package_name'.vj_timeseries()**" or "**'package_nickname'.vj_timeseries()**".
This utility does the same as the function described above, but also plots the output and writes the timeseries output array to a csv file. These are named **timeseries'datetime'.csv** and .png. The plot includes the voltage timeseries and the circulating current timeseries. The csv file contains metadata describing the parameters.
To run this, define parameters as described above, and call the function. This time, we will change the value of **nStep** to be shorter so we can see some detail in the trace.
This utility does not return anything to store. The only output is the plot and the csv file, so don't bother storing the function call output.
```
# define values
nStep = 800
tau = .1
s = np.array([0.,0.])
alpha = 0.
betaL = 1.
eta = 0.
rho = 0.
i = 1.5
phia = .5
Gamma = .05
par = np.array([alpha,betaL,eta,rho,i,phia,Gamma])
ns.vj_timeseries(nStep,tau,s,par)
```
### noisy_squid.iv_curve()
This utility is used to create plots and data of the average voltage output of the SQUID vs the applied bias current. We can sweep any of the other parameters and create contours. The utility outputs a data file and plot, **IV'datetime'.csv** and .png.
Define the parameter array as normal. The parameter to sweep will be passed separately as a list of values at which to draw contours. If a value for the parameter to sweep is passed in **par**, it will simply be ignored in favor of the values in the list.
A list is defined by square brackets and values are separated by a comma. This parameter list must be a list and not an array.
The name of the list should be different from the parameter name it represents. In this case, I wish to look at three contours corresponding to values of the applied flux **phia**. I name a list **Phi** and give it values.
Place the parameter list in the function call by typing the **parameter_name=List_name**. In this case, **phia=Phi**.
This utility has no console output, so don't bother storing it in a variable.
```
Phi = [.2,.8,1.6]
ns.iv_curve(nStep,tau,s,par,phia=Phi)
```
This curve is very noisy. To get finer detail, increase **nStep** and decrease **tau**.
The underlying numerical method is accurate but slow, so computation time becomes a concern from here on. I recommend testing a set of parameters with a small number of large steps first, then adjusting for more detail as needed.
From here on, the utilities work with average voltage. To get an accurate average voltage you need many voltage samples, and thus a large **nStep**. The error in the underlying 4th-order Runge-Kutta method is determined by the size of the time step: smaller means less error. A smaller time step **tau** therefore gives a more accurate timeseries, which converges better to the expected physical output and yields finer detail.
Computation time grows directly with **nStep** but is unaffected by the size of **tau**. If **tau** is larger than one, the method will be unstable and will likely not work. There is a practical lower limit on **tau** as well; something on the order of 0.1 to 0.01 will usually suffice.
These parameters are your tradeoff control in detail vs computation time.
At any rate, the erratic effect of noise is best dampened by using a larger **nStep**.
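As an aside, the payoff from shrinking **tau** can be seen on a toy problem. The sketch below is not part of **noisy_squid**; it integrates dy/dt = -y with a hand-rolled classical RK4 step (an assumption about the package's method, which is only described as "Runge-Kutta 4th order") and compares the global error for two step sizes.

```python
import numpy as np

def rk4_step(f, t, y, tau):
    # one classical 4th-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + tau/2, y + tau*k1/2)
    k3 = f(t + tau/2, y + tau*k2/2)
    k4 = f(t + tau, y + tau*k3)
    return y + (tau/6)*(k1 + 2*k2 + 2*k3 + k4)

def integrate(f, y0, t_end, tau):
    # march from t=0 to t_end in steps of tau
    y, t = y0, 0.0
    for _ in range(int(round(t_end / tau))):
        y = rk4_step(f, t, y, tau)
        t += tau
    return y

f = lambda t, y: -y                     # exact solution: y(t) = exp(-t)
exact = np.exp(-1.0)
err_coarse = abs(integrate(f, 1.0, 1.0, 0.1) - exact)
err_fine = abs(integrate(f, 1.0, 1.0, 0.01) - exact)
print(err_coarse / err_fine)            # roughly 1e4: global error scales as tau**4
```

Cutting **tau** by a factor of 10 shrinks the error by roughly 10^4, which is why a modest decrease in **tau** cleans up the traces so effectively, at the cost of needing a proportionally larger **nStep** to cover the same time span.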
Let's try it with 10 times as many time steps.
```
nStep = 8000
tau = .1
ns.iv_curve(nStep,tau,s,par,phia=Phi)
```
This looks better. To get a usable plot, it will probably be necessary to set **nStep** on the order of 10^4 to 10^6. Start lower if possible.
Let's try **nStep**=8\*10^4.
```
nStep = 80000
ns.iv_curve(nStep,tau,s,par,phia=Phi)
```
We can look at a sweep of a different parameter by resetting **phia** to, say, .2, and creating a list to represent a different parameter. Let's sweep **betaL**.
```
phia = .2
Beta = [.5,1.,2.]
ns.iv_curve(nStep,tau,s,par,betaL=Beta)
```
### noisy_squid.vphi_curve()
This utility is used to create plots and data of the average voltage output of the SQUID vs the applied magnetic flux. We can sweep any of the other parameters and create contours. The utility outputs a data file and plot, **VPhi'datetime'.csv** and .png.
Define the parameter array as normal. The parameter to sweep will be passed separately as a list of values at which to draw contours. If a value for the parameter to sweep is passed in **par**, it will simply be ignored in favor of the values in the list.
A list is defined by square brackets and values are separated by a comma. This parameter list must be a list and not an array.
The name of the list should be different from the parameter name it represents. In this case, I wish to look at three contours corresponding to values of the inductance constant **betaL**. I named a list **Beta** above and gave it values.
Place the parameter list in the function call by typing the **parameter_name=List_name**. In this case, **betaL=Beta**.
This utility has no console output, so don't bother storing it in a variable.
This utility can be computationally expensive. See the notes on this in the **noisy_squid.iv_curve()** section.
```
ns.vphi_curve(nStep,tau,s,par,betaL=Beta)
```
### noisy_squid.transfer_fn()
This utility creates the average voltage surface in bias current / applied flux space. It also calculates the partial derivative of the average voltage surface with respect to applied flux and returns this as the transfer function. These are named **AveVsurface'datetime'.png** and .csv, and **TransferFn'datetime'.png** and .csv. This utility also returns an array of average voltage values over the surface which can be stored for further manipulation.
This utility requires us to define an axis for both **i** and **phia**. We do this by making an array for each. We can define the individual elements of the array, but there is an easier way. We can make an array of values evenly spaced across an interval using **np.arange(start, stop+step, step)** as below.
Pass the other parameters as in the instructions under **noisy_squid()**. You may want to start with a smaller value for **nStep**.
```
nStep = 800
i = np.arange(-3.,3.1,.1)
phia = np.arange(-1.,1.1,.1)
vsurf = ns.transfer_fn(nStep,tau,s,par,i,phia)
```
The average voltage surface looks ok, but not great. Noisy spots in the surface will negatively affect the transfer function determination. The transfer function surface has large derivatives in the corners which oversaturate the plot, hiding detail in most of the surface. To fix this, we need a truer average voltage surface, which means more time steps. If you have some time, try **nStep**=8000.
Note that it may be possible to clean the data from the csv file and recover some detail in plotting. Be careful...
```
nStep = 8000
vsurf = ns.transfer_fn(nStep,tau,s,par,i,phia)
```
## Learn Calculus with Python
#### start
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.arange(0,100,0.1)
y = np.cos(x)
plt.plot(x,y)
plt.show()
```
#### normal function
```
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return x**3 / (25)
f(30)
f(10)
# num controls smoothness (how many sample points)
x = np.linspace(-30,30,num=1000)
y = f(x)
plt.plot(x,y)
```
#### exp
```
def exp(x):
return np.e**x
exp(2)
np.exp(2)
xz = np.linspace(1, 20, num=100)
yz = exp(xz)
plt.plot(xz, yz)
def exp2(x):
# Taylor-series approximation of e**x; math.factorial avoids the removed np.math alias
import math
total = 0.
for k in range(100):
total += x**k / math.factorial(k)
return total
exp2(1)
exp(1)
```
#### log
```
# pick 100 evenly spaced points between 1 and 500
x = np.linspace(1,500,100,endpoint=False)
y1 = np.log2(x)
y2 = np.log(x)
y3 = np.log10(x)
plt.plot(x,y1,'red',x,y2,'green',x,y3,'blue')
```
#### trigonometric
```
pi_val = np.pi
pi_range = np.linspace(-2*pi_val,2*pi_val )
plt.plot(
pi_range,
np.sin(pi_range)
)
plt.plot(
pi_range,
np.cos(pi_range)
)
```
#### f(g(x))
```
import numpy as np
import matplotlib.pyplot as plt
f = lambda x:x+20
g = lambda x:x**2
h = lambda x:f(g(x))
x = np.array(range(-30,30))
plt.plot(x,h(x),'bs')
```
#### f<sup>-1</sup>(x)
```
w = lambda x: x**2
winv = lambda x: np.sqrt(x)
x = np.linspace(0,2,100)
plt.plot(x,w(x),'b',x,winv(x),'r',x,x,'g-.')
```
#### *higher order functions*
```
def horizontal_shift(f,W):
return lambda x: f(x-W)
g = lambda x:x**2
x = np.linspace(-20,20,100)
shifted_g = horizontal_shift(g,5)
plt.plot(x,g(x),'b',x,shifted_g(x),'r')
```
<hr>
#### Euler's Formula
```
# i.e., Euler's formula
rules_of_imaginary_number = '''
i^0 = 1 i^1 = i i^2 = -1 i^3 = -i
i^4 = 1 i^5 = i i^6 = -1 i^7 = -i '''
euler_equation = '''
e^(i*x) = cos(x) + i*sin(x)
e^(i*pi) + 1 = 0 <= (if x=pi) '''
# --- sympy ---
from sympy import Symbol,expand,E,I
z = Symbol('z',real=True)
expand(E**(I*z),complex=True)
# --- numpy ---
x = np.linspace(-np.pi,np.pi)
lhs = np.e**(1j*x)
rhs = np.cos(x) + 1j*np.sin(x)
sum(lhs==rhs)
len(x)
```
#### Higher Derivatives
```
from sympy.abc import x,y
f = x**4
f.diff(x,2)
f.diff(x).diff(x)
```
#### Ordinary Differential Equations
```
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
t = Symbol('t')
c = Symbol('c')
domain = np.linspace(-3,3,100)
v = t**3-3*t-6
a = v.diff()
for p in np.linspace(-2,2,20):
slope = a.subs(t,p)
intercept = solve(slope*p+c-v.subs(t,p),c)[0]
lindomain = np.linspace(p-1,p+1,20)
plt.plot(lindomain,slope*lindomain+intercept,'red',linewidth=1)
plt.plot(domain,[v.subs(t,i) for i in domain],linewidth=3)
```
<hr>
# Imports
```
from bs4 import BeautifulSoup
import numpy as np
import pandas as pd
import requests
```
# Website URL list construction
```
## The 'target_url' is the homepage of the target website
## The 'url_prefix' is the specific URL you use to append with the
## for-loop below.
target_url = 'https://sfbay.craigslist.org'
url_prefix = 'https://sfbay.craigslist.org/d/musical-instruments/search/msa?s='
pages = ['120','240','360','480','600','720','840',
'960','1080','1200','1320','1440','1560','1680',
'1800','1920','2040','2160','2280','2400','2520',
'2640','2760','2880','3000']
## This tests to make sure the URL list compiler is working
## on 3 pages.
# pages = ['120', '240', '360']
url_list = []
## This loop takes the base URL and adds just the string from the
## 'pages' object above so that each 'url' that goes into the
## 'url_list' is in the correct step of 120 results.
for page in pages:
url = url_prefix + page
url_list.append(url)
## This prints the 'url_list' as a QC check.
url_list
```
# Scraping for-loop
* This is what I'm calling a "dynamic" scraping function. It's dynamic in the sense that it collects and defines the html as objects in real time.
* Another method would be what I'm calling "static" scraping, where the output from the 'url in url_list' for-loop is put into a list outside of the function with the entirety of each URL's html. The scraping then happens on a static object.
* Choose **ONE** approach: Dynamic or Static
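As a minimal sketch of the "static" alternative (not used in this notebook): fetch every page's html into a list first, then run the parsing over the stored copies. The helper names `fetch_all` and `parse_prices` are illustrative, not part of the notebook's code.

```python
import requests
from bs4 import BeautifulSoup

def fetch_all(url_list):
    # Network I/O happens once, up front; each element is one page's raw html.
    return [requests.get(url).text for url in url_list]

def parse_prices(pages):
    # Parsing runs over the stored html, so it can be re-run without re-fetching.
    prices = []
    for page in pages:
        soup = BeautifulSoup(page, 'html.parser')
        for res in soup.find_all('li', class_='result-row'):
            tag = res.find('span', class_='result-price')
            if tag is not None:
                prices.append(tag.text)
    return prices

# pages = fetch_all(url_list)     # one pass of requests
# prices = parse_prices(pages)    # parse the static copies
```

The tradeoff: the static method holds all the html in memory at once, but lets you debug and re-run the parsing logic without hammering the site with repeated requests.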
## The "dynamic" method
```
'''
****NOTE****
The two empty lists 'df_list' and 'each_html_output' need to
start out empty. Therefore, make sure to restart the kernel before
running this cell.
'''
df_list = []
each_html_output = []
def attribute_scraping(starting_url):
"""
These are the 5 attributes I am scraping from Craigslist. Any
additional pieces of information to be made into objects will
require
* adding an empty list
* an additional for-loop or if statement depending on the find
target
* adding to the dictionary at the end of the this function
* adding to the print statement set at the end of this function
"""
has_pics_bool = []
price = []
just_titles = []
HOOD_list = []
just_posted_datetimes = []
"""
Parameters
----------
response = requests.get(url)
* This makes a request to the URL and returns a status code
page = response.text
* the html text (str object) from the 'get(url)'
soup = BeautifulSoup(page, 'html.parser')
* makes a BeautifulSoup object called 'page'
* utilizes the parser designated in quotes as the second
input of the method
results = soup.find_all('li', class_='result-row')
* returns an element ResultSet object.
* this is the html text that was isolated from using the
'find()' or 'find_all()' methods.
* 'li' is an html list tag.
* 'class_' is the designator for a class attribute.
- Here this corresponds with the 'result_row' class
"""
for url in url_list:
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page, 'html.parser')
results = soup.find_all('li', class_='result-row')
for res in results:
"""PRICE"""
## Loop for finding PRICE for a single page of 120 results
p = res.find('span', class_='result-price').text
price.append(p)
"""PICS"""
## Loop for finding the boolean HAS PICS of a single page of
## 120 results. This tests whether >=1 picture is an attribute
## of the post.
if res.find('span', class_='pictag') is None:
has_pics_bool.append("False")
else:
has_pics_bool.append('True')
"""NEIGHBORHOOD"""
## Loop for finding NEIGHBORHOOD name for a single page of 120
## results. This includes the drop down menu choices on
## Craigslist as well as the manually entered neighborhoods.
if res.find('span', class_="result-hood") is None:
HOOD_list.append("NONE")
else:
h = res.find('span', class_="result-hood").text
HOOD_list.append(h)
"""TITLE"""
## Loop for finding TITLE for a single page of 120 results
titles=soup.find_all('a', class_="result-title hdrlnk")
for title in titles:
just_titles.append(title.text)
"""DATETIME"""
## Loop for finding DATETIME for a single page of 120 results
posted_datetimes=soup.find_all(class_='result-date')
for posted_datetime in posted_datetimes:
if posted_datetime.has_attr('datetime'):
just_posted_datetimes.append(posted_datetime['datetime'])
# Compilation dictionary of for-loop results
comp_dict = {'price': price,
'pics': has_pics_bool,
'hood': HOOD_list,
'title': just_titles,
'datetimes': just_posted_datetimes}
# Print lengths as a QC check, then return the dictionary
# (these statements must come before the return to ever execute)
print(len(price))
print(len(has_pics_bool))
print(len(HOOD_list))
print(len(just_titles))
print(len(just_posted_datetimes))
return comp_dict
```
Run the function and check the output dictionary.
```
base_dict = attribute_scraping(target_url)
base_dict
```
Construct dataframe using dictionary
```
df_base = pd.DataFrame(base_dict)
df_base
```
Sort the results by the 'datetime' to order them by posting time.
```
df_base.sort_values('datetimes')
```
Convert to csv for import into regression notebook
```
df_base.to_csv('/Users/johnmetzger/Desktop/Coding/Project2/base_scrape.csv', index = False)
```
# Week 2: Tackle Overfitting with Data Augmentation
Welcome to this assignment! As in the previous week, you will be using the famous `cats vs dogs` dataset to train a model that can classify images of dogs from images of cats. For this, you will create your own Convolutional Neural Network in Tensorflow and leverage Keras' image preprocessing utilities, more so this time around since Keras provides excellent support for augmenting image data.
You will also need to create the helper functions to move the images around the filesystem as you did last week, so if you need to refresh your memory with the `os` module be sure to take a look at the [docs](https://docs.python.org/3/library/os.html).
Let's get started!
```
import warnings
warnings.filterwarnings('ignore')
import os
import zipfile
import random
import shutil
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from shutil import copyfile
import matplotlib.pyplot as plt
```
Download the dataset from its original source by running the cell below.
Note that the `zip` file that contains the images is unzipped under the `/tmp` directory.
```
# If the URL doesn't work, visit https://www.microsoft.com/en-us/download/confirmation.aspx?id=54765
# And right click on the 'Download Manually' link to get a new URL to the dataset
# Note: This is a very large dataset and will take some time to download
!wget --no-check-certificate \
"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip" \
-O "/tmp/cats-and-dogs.zip"
local_zip = '/tmp/cats-and-dogs.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
```
Now the images are stored within the `/tmp/PetImages` directory. There is a subdirectory for each class, so one for dogs and one for cats.
```
source_path = '/tmp/PetImages'
source_path_dogs = os.path.join(source_path, 'Dog')
source_path_cats = os.path.join(source_path, 'Cat')
# os.listdir returns a list containing all files under the given path
print(f"There are {len(os.listdir(source_path_dogs))} images of dogs.")
print(f"There are {len(os.listdir(source_path_cats))} images of cats.")
```
**Expected Output:**
```
There are 12501 images of dogs.
There are 12501 images of cats.
```
You will need a directory for cats-v-dogs, and subdirectories for training
and testing. These in turn will need subdirectories for 'cats' and 'dogs'. To accomplish this, complete the `create_train_test_dirs` below:
```
# Define root directory
root_dir = '/tmp/cats-v-dogs'
# Empty directory to prevent FileExistsError if the function is run several times
if os.path.exists(root_dir):
shutil.rmtree(root_dir)
# GRADED FUNCTION: create_train_test_dirs
def create_train_test_dirs(root_path):
### START CODE HERE
# HINT:
# Use os.makedirs to create your directories with intermediate subdirectories
# Don't hardcode the paths. Use os.path.join to append the new directories to the root_path parameter
try:
# os.makedirs creates intermediate directories, so the four leaf paths suffice
os.makedirs(os.path.join(root_path, "training", "cats"))
os.makedirs(os.path.join(root_path, "training", "dogs"))
os.makedirs(os.path.join(root_path, "testing", "cats"))
os.makedirs(os.path.join(root_path, "testing", "dogs"))
except FileExistsError:
pass
### END CODE HERE
try:
create_train_test_dirs(root_path=root_dir)
except FileExistsError:
print("You should not be seeing this since the upper directory is removed beforehand")
# Test your create_train_test_dirs function
for rootdir, dirs, files in os.walk(root_dir):
for subdir in dirs:
print(os.path.join(rootdir, subdir))
```
**Expected Output (directory order might vary):**
``` txt
/tmp/cats-v-dogs/training
/tmp/cats-v-dogs/testing
/tmp/cats-v-dogs/training/cats
/tmp/cats-v-dogs/training/dogs
/tmp/cats-v-dogs/testing/cats
/tmp/cats-v-dogs/testing/dogs
```
Code the `split_data` function which takes in the following arguments:
- SOURCE: directory containing the files
- TRAINING: directory that a portion of the files will be copied to (will be used for training)
- TESTING: directory that a portion of the files will be copied to (will be used for testing)
- SPLIT_SIZE: to determine the portion
The files should be randomized, so that the training set is a random sample of the files, and the test set is made up of the remaining files.
For example, if `SOURCE` is `PetImages/Cat`, and `SPLIT_SIZE` is .9, then 90% of the images in `PetImages/Cat` will be copied to the `TRAINING` dir
and 10% of the images will be copied to the `TESTING` dir.
All images should be checked before the copy, so if they have a zero file length, they will be omitted from the copying process. If this is the case then your function should print out a message such as `"filename is zero length, so ignoring."`. **You should perform this check before the split so that only non-zero images are considered when doing the actual split.**
Hints:
- `os.listdir(DIRECTORY)` returns a list with the contents of that directory.
- `os.path.getsize(PATH)` returns the size of the file
- `copyfile(source, destination)` copies a file from source to destination
- `random.sample(list, len(list))` shuffles a list
```
# GRADED FUNCTION: split_data
def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):
### START CODE HERE
all_files = []
for file_name in os.listdir(SOURCE):
file_path = SOURCE + file_name
if os.path.getsize(file_path):
all_files.append(file_name)
else:
print('{} is zero length, so ignoring.'.format(file_name))
n_files = len(all_files)
split_point = int(n_files * SPLIT_SIZE)
shuffled = random.sample(all_files, n_files)
train_set = shuffled[:split_point]
test_set = shuffled[split_point:]
for file_name in train_set:
copyfile(SOURCE + file_name, TRAINING + file_name)
for file_name in test_set:
copyfile(SOURCE + file_name, TESTING + file_name)
### END CODE HERE
# Test your split_data function
# Define paths
CAT_SOURCE_DIR = "/tmp/PetImages/Cat/"
DOG_SOURCE_DIR = "/tmp/PetImages/Dog/"
TRAINING_DIR = "/tmp/cats-v-dogs/training/"
TESTING_DIR = "/tmp/cats-v-dogs/testing/"
TRAINING_CATS_DIR = os.path.join(TRAINING_DIR, "cats/")
TESTING_CATS_DIR = os.path.join(TESTING_DIR, "cats/")
TRAINING_DOGS_DIR = os.path.join(TRAINING_DIR, "dogs/")
TESTING_DOGS_DIR = os.path.join(TESTING_DIR, "dogs/")
# Empty directories in case you run this cell multiple times
if len(os.listdir(TRAINING_CATS_DIR)) > 0:
for file in os.scandir(TRAINING_CATS_DIR):
os.remove(file.path)
if len(os.listdir(TRAINING_DOGS_DIR)) > 0:
for file in os.scandir(TRAINING_DOGS_DIR):
os.remove(file.path)
if len(os.listdir(TESTING_CATS_DIR)) > 0:
for file in os.scandir(TESTING_CATS_DIR):
os.remove(file.path)
if len(os.listdir(TESTING_DOGS_DIR)) > 0:
for file in os.scandir(TESTING_DOGS_DIR):
os.remove(file.path)
# Define proportion of images used for training
split_size = .9
# Run the function
# NOTE: Messages about zero length images should be printed out
split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)
split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)
# Check that the number of images matches the expected output
print(f"\n\nThere are {len(os.listdir(TRAINING_CATS_DIR))} images of cats for training")
print(f"There are {len(os.listdir(TRAINING_DOGS_DIR))} images of dogs for training")
print(f"There are {len(os.listdir(TESTING_CATS_DIR))} images of cats for testing")
print(f"There are {len(os.listdir(TESTING_DOGS_DIR))} images of dogs for testing")
```
**Expected Output:**
```
666.jpg is zero length, so ignoring.
11702.jpg is zero length, so ignoring.
```
```
There are 11250 images of cats for training
There are 11250 images of dogs for training
There are 1250 images of cats for testing
There are 1250 images of dogs for testing
```
Now that you have successfully organized the data in a way that can be easily fed to Keras' `ImageDataGenerator`, it is time for you to code the generators that will yield batches of images, both for training and validation. For this, complete the `train_val_generators` function below.
Something important to note is that the images in this dataset come in a variety of resolutions. Luckily, the `flow_from_directory` method allows you to standardize this by defining a tuple called `target_size` that will be used to convert each image to this target resolution. **For this exercise use a `target_size` of (150, 150)**.
**Note:** So far, you have seen the term `testing` being used a lot for referring to a subset of images within the dataset. In this exercise, all of the `testing` data is actually being used as `validation` data. This is not very important within the context of the task at hand but it is worth mentioning to avoid confusion.
```
TRAINING_DIR = '/tmp/cats-v-dogs/training'
VALIDATION_DIR = '/tmp/cats-v-dogs/testing'
# GRADED FUNCTION: train_val_generators
def train_val_generators(TRAINING_DIR, VALIDATION_DIR):
### START CODE HERE
# Instantiate the ImageDataGenerator class (don't forget to set the arguments to augment the images)
train_datagen = ImageDataGenerator(rescale=1.0/255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# Pass in the appropriate arguments to the flow_from_directory method
train_generator = train_datagen.flow_from_directory(directory=TRAINING_DIR,
batch_size=64,
class_mode='binary',
target_size=(150, 150))
# Instantiate the ImageDataGenerator class (don't forget to set the rescale argument)
# Validation data should only be rescaled, not augmented,
# so that the reported metrics reflect the real images
validation_datagen = ImageDataGenerator(rescale=1.0/255)
# Pass in the appropriate arguments to the flow_from_directory method
validation_generator = validation_datagen.flow_from_directory(directory=VALIDATION_DIR,
batch_size=64,
class_mode='binary',
target_size=(150, 150))
### END CODE HERE
return train_generator, validation_generator
# Test your generators
train_generator, validation_generator = train_val_generators(TRAINING_DIR, VALIDATION_DIR)
```
**Expected Output:**
```
Found 22498 images belonging to 2 classes.
Found 2500 images belonging to 2 classes.
```
One last step before training is to define the architecture of the model that will be trained.
Complete the `create_model` function below which should return a Keras' `Sequential` model.
Aside from defining the architecture of the model, you should also compile it so make sure to use a `loss` function that is compatible with the `class_mode` you defined in the previous exercise, which should also be compatible with the output of your network. You can tell if they aren't compatible if you get an error during training.
**Note that you should use at least 3 convolution layers to achieve the desired performance.**
```
from tensorflow.keras.optimizers import RMSprop
# GRADED FUNCTION: create_model
def create_model():
# DEFINE A KERAS MODEL TO CLASSIFY CATS V DOGS
# USE AT LEAST 3 CONVOLUTION LAYERS
### START CODE HERE
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), input_shape = (150, 150, 3), activation = tf.nn.relu),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64, (3,3), activation = tf.nn.relu),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3, 3), activation = tf.nn.relu),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation = tf.nn.relu),
tf.keras.layers.Dense(128, activation = tf.nn.relu),
tf.keras.layers.Dense(1, activation = tf.nn.sigmoid)
])
model.compile( optimizer = RMSprop(learning_rate=0.001),
loss = 'binary_crossentropy',
metrics = ['accuracy'])
### END CODE HERE
return model
```
Now it is time to train your model!
Note: You can ignore the `UserWarning: Possibly corrupt EXIF data.` warnings.
```
# Get the untrained model
model = create_model()
# Train the model
# Note that this may take some time.
history = model.fit(train_generator,
epochs=15,
verbose=1,
validation_data=validation_generator)
```
Once training has finished, you can run the following cell to check the training and validation accuracy achieved at the end of each epoch.
**To pass this assignment, your model should achieve a training and validation accuracy of at least 80% and the final testing accuracy should be either higher than the training one or have a 5% difference at maximum**. If your model didn't achieve these thresholds, try training again with a different model architecture, remember to use at least 3 convolutional layers or try tweaking the image augmentation process.
You might wonder why the training threshold to pass this assignment is significantly lower compared to last week's assignment. Image augmentation does help with overfitting but usually this comes at the expense of requiring more training time. To keep the training time reasonable, the same number of epochs as in the previous assignment are kept.
However, as an optional exercise you are encouraged to try training for more epochs and to achieve really good training and validation accuracies.
```
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['accuracy']
val_acc=history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r', "Training Accuracy")
plt.plot(epochs, val_acc, 'b', "Validation Accuracy")
plt.title('Training and validation accuracy')
plt.show()
print("")
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r', "Training Loss")
plt.plot(epochs, val_loss, 'b', "Validation Loss")
plt.show()
```
You will probably find that the model is overfitting, which means that it is doing a great job at classifying the images in the training set but struggles with new data. This is perfectly fine and you will learn how to mitigate this issue in the upcoming week.
Before closing the assignment, be sure to also download the `history.pkl` file which contains the information of the training history of your model. You can download this file by running the cell below:
```
def download_history():
import pickle
from google.colab import files
with open('history_augmented.pkl', 'wb') as f:
pickle.dump(history.history, f)
files.download('history_augmented.pkl')
download_history()
```
You will also need to submit this notebook for grading. To download it, click on the `File` tab in the upper left corner of the screen then click on `Download` -> `Download .ipynb`. You can name it anything you want as long as it is a valid `.ipynb` (jupyter notebook) file.
**Congratulations on finishing this week's assignment!**
You have successfully implemented a convolutional neural network that classifies images of cats and dogs, along with the helper functions needed to pre-process the images!
**Keep it up!**
```
import pandas as pd
import ixmp as ix
import message_ix
from message_ix.utils import make_df
%matplotlib inline
mp = ix.Platform(dbtype='HSQLDB')
base = message_ix.Scenario(mp, model='Westeros Electrified', scenario='baseline')
model = 'Westeros Electrified'
scen = base.clone(model, 'flexible_generation',
'illustration of flexible-generation formulation', keep_sol=False)
scen.check_out()
year_df = scen.vintage_and_active_years()
vintage_years, act_years = year_df['year_vtg'], year_df['year_act']
model_horizon = scen.set('year')
country = 'Westeros'
```
# Add a carbon tax
Note that the example below is not feasible with a strict bound on emissions, hence this tutorial uses a tax.
Below, we repeat the set-up of the emissions formulation from the "emissions bounds" tutorial.
```
# first we introduce the emission species CO2 and the emission category GHG
scen.add_set('emission', 'CO2')
scen.add_cat('emission', 'GHG', 'CO2')
# we now add CO2 emissions to the coal power plant
base_emission_factor = {
'node_loc': country,
'year_vtg': vintage_years,
'year_act': act_years,
'mode': 'standard',
'unit': 'USD/GWa',
}
emission_factor = make_df(base_emission_factor, technology= 'coal_ppl', emission= 'CO2', value = 100)
scen.add_par('emission_factor', emission_factor)
base_tax_emission = {
'node': country,
'type_year': [700,710,720],
'type_tec': 'all',
'unit': 'tCO2',
'type_emission': 'GHG',
'value': [1., 2., 3.]
}
tax_emission = make_df(base_tax_emission)
scen.add_par('tax_emission', tax_emission)
```
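`make_df` (used for `tax_emission` above) behaves much like the pandas dict-of-columns constructor: scalar entries are broadcast against list-valued entries. The same frame can be built with pandas alone, a sketch using the values from the cell above:

```python
import pandas as pd

# Scalars ('node', 'type_tec', ...) are broadcast against the list-valued columns
tax_emission = pd.DataFrame({
    'node': 'Westeros',
    'type_year': [700, 710, 720],
    'type_tec': 'all',
    'unit': 'tCO2',
    'type_emission': 'GHG',
    'value': [1., 2., 3.],
})
print(tax_emission.shape)
```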
## Add Renewable Formulation
## Describing the Flexibility of Renewable Technologies
### Flexibility of demand and supply: ensuring the activity flexibility reserve
The wind power plant has a flexibility demand of 5% of its activity (ACT). The coal power plant can provide 20% of its activity as flexibility.
```
base_flexibility_factor = pd.DataFrame({
'node_loc': country,
'commodity': 'electricity',
'level' : 'secondary',
'mode': 'standard',
'unit': '???',
'time': 'year',
'year_vtg': vintage_years,
'year_act': act_years,
})
base_rating = pd.DataFrame({
'node': country,
'commodity': 'electricity',
'level' : 'secondary',
'unit': '???',
'time': 'year',
'year_act': model_horizon})
# add the ratings as a set
scen.add_set('rating', ['r1', 'r2'])
# For the Load
flexibility_factor = make_df(base_flexibility_factor, technology= 'grid', rating= 'unrated', value = -0.1)
scen.add_par('flexibility_factor',flexibility_factor)
# For the Wind PPL
rating_bin = make_df(base_rating, technology= 'wind_ppl', value = 0.2, rating= 'r1')
scen.add_par('rating_bin', rating_bin)
flexibility_factor = make_df(base_flexibility_factor, technology= 'wind_ppl', rating= 'r1', value = -0.2)
scen.add_par('flexibility_factor',flexibility_factor)
rating_bin = make_df(base_rating, technology= 'wind_ppl', value = 0.8, rating= 'r2')
scen.add_par('rating_bin', rating_bin)
flexibility_factor = make_df(base_flexibility_factor, technology= 'wind_ppl', rating= 'r2', value = -0.7)
scen.add_par('flexibility_factor',flexibility_factor)
# For the Coal PPL
flexibility_factor = make_df(base_flexibility_factor, technology= 'coal_ppl', rating= 'unrated', value = 1)
scen.add_par('flexibility_factor',flexibility_factor)
```
### commit and solve
```
scen.commit(comment='define parameters for flexibile-generation implementation')
scen.set_as_default()
scen.solve()
scen.var('OBJ')['lvl']
```
### plotting
```
from tools import Plots
p = Plots(scen, country, firstyear=700)
p.plot_activity(baseyear=True, subset=['coal_ppl', 'wind_ppl'])
p.plot_capacity(baseyear=True, subset=['coal_ppl', 'wind_ppl'])
p.plot_prices(subset=['light'], baseyear=True)
mp.close_db()
```
**Author**: _Pradip Kumar Das_
**License:** https://github.com/PradipKumarDas/Competitions/blob/main/LICENSE
**Profile & Contact:** [LinkedIn](https://www.linkedin.com/in/daspradipkumar/) | [GitHub](https://github.com/PradipKumarDas) | [Kaggle](https://www.kaggle.com/pradipkumardas) | pradipkumardas@hotmail.com (Email)
# Ugam Sentiment Analysis | MachineHack
**Dec. 22, 2021 - Jan. 10, 2022**
https://machinehack.com/hackathon/uhack_sentiments_20_decode_code_words/overview
**Sections:**
- Dependencies
- Exploratory Data Analysis (EDA) & Preprocessing
- Modeling & Evaluation
- Submission
NOTE: Running this notebook on CPU will be intractable as it uses Transformers; hence it is recommended to use a GPU.
# Dependencies
```
# The following packages may need to be first installed on cloud hosted Data Science platforms such as Google Colab.
!pip install transformers
# Imports required packages
import pandas as pd
import numpy as np
from sklearn.model_selection import StratifiedKFold
import tensorflow as tf
# from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard
import transformers
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer
import matplotlib.pyplot as plt
import seaborn as sns
import datetime, gc
```
# Initialization
```
# Connects drive in Google Colab
from google.colab import drive
drive.mount('/content/drive/')
# Changes working directory to the project directory
cd "/content/drive/MyDrive/Colab/Ugam_Sentiment_Analysis/"
# Configures styles for plotting runtime
plt.style.use("seaborn-whitegrid")
plt.rc(
"figure",
autolayout=True,
figsize=(11, 4),
titlesize=18,
titleweight='bold',
)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=16,
titlepad=10,
)
%config InlineBackend.figure_format = 'retina'
# Sets Transformers' verbosity to ERROR level
transformers.logging.set_verbosity_error()
```
# Exploratory Data Analysis (EDA) & Preprocessing
```
# Loads train data set
train = pd.read_csv("./data/train.csv")
# Checks a few rows from the train data set
display(train)
# Sets the dataframe's `Id` column as its index
train.set_index("Id", drop=True, append=False, inplace=True)
# Loads test data set
test = pd.read_csv("./data/test.csv")
# Checks the top few rows from the test data set
display(test.head(5))
# Sets the dataframe's `Id` column as its index
test.set_index("Id", drop=True, append=False, inplace=True)
# Checks the distribution of review length (number of characters in review)
fig, ax = plt.subplots(1, 2, sharey=True)
fig.suptitle("Review Length")
train.Review.str.len().plot(kind='hist', bins=50, ax=ax[0])
ax[0].set_xlabel("Train Data")
ax[0].set_ylabel("No. of Reviews")
test.Review.str.len().plot(kind='hist', bins=50, ax=ax[1])
ax[1].set_xlabel("Test Data")
```
The above plot shows that lengthy reviews with 1000+ characters are far fewer than reviews with less than 1000 characters. Hence, only the first 512 tokens of each review will be fed to the model.
```
# Finds the distribution of each label
display(train.select_dtypes(["int"]).apply(pd.Series.value_counts))
# Let's check that stratified cross validation on the 'Polarity' label preserves the label distribution
sk_fold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
cv_generator = sk_fold.split(train, train.Polarity)
for fold, (idx_train, idx_val) in enumerate(cv_generator):
    display(train.iloc[idx_train].select_dtypes(["int"]).apply(pd.Series.value_counts))
```
This confirms that each training split preserves the same label distribution.
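The stratification check can also be illustrated on toy data, independently of the competition files: every training split produced by `StratifiedKFold` preserves the class ratio of the full label vector. A minimal sketch:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.arange(20).reshape(-1, 1)
y = np.array([0] * 15 + [1] * 5)  # 75% / 25% class ratio

ratios = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for idx_train, idx_val in skf.split(X, y):
    # fraction of class 1 in each training split
    ratios.append(y[idx_train].mean())

print(ratios)  # every split keeps the 25% minority ratio
```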
# Modeling & Evaluation
The approach is to use a pretrained **Transformer** model and to fine-tune it, if required. As fine-tuning over cross-validation is time consuming even on GPUs, let's avoid it, and instead prepare a single stratified validation set to check the pretrained or fine-tuned model's performance.
```
# Creates data set splitter and gets indexes for train and validation rows
cv_generator = sk_fold.split(train, train.Polarity)
idx_train, idx_val = next(cv_generator)
# Sets parameters for Transformer model fine-tuning
model_config = {
"model_name": "distilbert-base-uncased-finetuned-sst-2-english", # selected pretrained model
    "max_length": 512, # maximum number of tokens per review fed to the model
}
# Creates tokenizer from pre-trained transformer model
tokenizer = AutoTokenizer.from_pretrained(model_config["model_name"])
# Tokenize reviews for train, validation and test data set
train_encodings = tokenizer(
train.iloc[idx_train].Review.to_list(),
max_length=model_config["max_length"],
truncation=True,
padding=True,
return_tensors="tf"
)
val_encodings = tokenizer(
train.iloc[idx_val].Review.to_list(),
max_length=model_config["max_length"],
truncation=True,
padding=True,
return_tensors="tf"
)
test_encodings = tokenizer(
test.Review.to_list(),
max_length=model_config["max_length"],
truncation=True,
padding=True,
return_tensors="tf"
)
# Performs target specific model fine-tuning
"""
NOTE:
1) It was observed that increasing the number of epochs beyond one during model fine-tuning
does not improve model performance, and hence epochs is set to 1.
2) As the pretrained model being used was already trained to predict sentiment polarity, that model
will not be fine-tuned any further, and will be used directly to predict sentiment polarity
against the test data. Fine-tuning was already experimented with and found not to be useful, as it
decreases performance with higher log loss and lower accuracy on validation data.
"""
columns = train.select_dtypes(["int"]).columns.tolist()
columns.remove("Polarity")
# Fine-tunes models except that of Polarity
for column in columns:
    print(f"Fine tuning model for {column.upper()}...")
    print("======================================================\n")
    model = TFAutoModelForSequenceClassification.from_pretrained(model_config["model_name"])
    # Prepares tensorflow datasets for train, validation and test data
    train_encodings_dataset = tf.data.Dataset.from_tensor_slices((
        {"input_ids": train_encodings["input_ids"], "attention_mask": train_encodings["attention_mask"]},
        train.iloc[idx_train][[column]]
    )).batch(16).prefetch(tf.data.AUTOTUNE)
    val_encodings_dataset = tf.data.Dataset.from_tensor_slices((
        {"input_ids": val_encodings["input_ids"], "attention_mask": val_encodings["attention_mask"]},
        train.iloc[idx_val][[column]]
    )).batch(16).prefetch(tf.data.AUTOTUNE)
    test_encodings_dataset = tf.data.Dataset.from_tensor_slices(
        {"input_ids": test_encodings["input_ids"], "attention_mask": test_encodings["attention_mask"]}
    ).batch(16).prefetch(tf.data.AUTOTUNE)
    predictions = tf.nn.softmax(model.predict(val_encodings_dataset).logits)
    print("Pretrained model's performance on validation data before fine-tuning:",
          tf.keras.metrics.binary_crossentropy(train.iloc[idx_val][column], predictions[:, 1], from_logits=False).numpy(), "(log loss)",
          tf.keras.metrics.binary_accuracy(train.iloc[idx_val][column], predictions[:, 1]).numpy(), "(accuracy)\n"
    )
    del predictions
    print("Starting fine tuning...")
    # Compiles the model before fine-tuning
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
        loss=tf.keras.losses.binary_crossentropy,
        metrics=[tf.keras.metrics.binary_crossentropy, tf.keras.metrics.binary_accuracy]
    )
    # Sets model file name to organize storing logs and fine-tuned models against
    # model_filename = f"{column}" + "_" + datetime.datetime.now().strftime("%Y.%m.%d-%H:%M:%S")
    # Fine-tunes the model
    model.fit(
        x=train_encodings_dataset,
        validation_data=val_encodings_dataset,
        batch_size=16,
        epochs=1,
        # callbacks=[
        #     EarlyStopping(monitor="val_loss", mode="min", patience=2, restore_best_weights=True, verbose=1),
        #     ModelCheckpoint(filepath=f"./models/{model_filename}", monitor="val_loss", mode="min", save_best_only=True, save_weights_only=True),
        #     TensorBoard(log_dir=f"./logs/{model_filename}", histogram_freq=1, update_freq='epoch')
        # ],
        use_multiprocessing=True)
    print("\nFine tuning was completed.\n")
    del train_encodings_dataset, val_encodings_dataset
    print("Performing prediction on test data...", end="")
    # Performs predictions on test data
    predictions = tf.nn.softmax(model.predict(test_encodings_dataset).logits)
    test[column] = predictions[:, 1]
    del test_encodings_dataset
    print("done\n")
    del predictions, model
print("Skipping fine-tuning model for POLARITY (as it uses pretrained model) and continuing direct prediction on test data...")
print("======================================================================================================================\n")
print("Performing prediction on test data...", end="")
model = TFAutoModelForSequenceClassification.from_pretrained(model_config["model_name"])
# Prepares tensorflow dataset for test data
test_encodings_dataset = tf.data.Dataset.from_tensor_slices(
{"input_ids": test_encodings["input_ids"], "attention_mask": test_encodings["attention_mask"]}
).batch(16).prefetch(tf.data.AUTOTUNE)
# Performs predictions on test data
predictions = tf.nn.softmax(model.predict(test_encodings_dataset).logits)
test["Polarity"] = predictions[:, 1]
del test_encodings_dataset
del predictions, model
print("done\n")
print("Fine-tuning and test predictions were completed.")
```
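The prediction step in the loop above amounts to a row-wise softmax over the two logits followed by taking the positive-class column. In plain numpy, with made-up logits, the same computation looks like this:

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability, then normalize
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Two hypothetical (negative-class, positive-class) logit rows
logits = np.array([[2.0, 0.5], [-1.0, 3.0]])
probs = softmax(logits)
positive = probs[:, 1]  # probability of the positive class, as stored in test[column]
print(positive)
```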
# Submission
```
# Saves test predictions
test.select_dtypes(["float"]).to_csv("./submission.csv", index=False)
```
***Leaderboard score for this submission was 8.8942 as against highest of that was 2.74 on Jan 06, 2022 at 11:50 PM.***
# 📝 Exercise M1.05
The goal of this exercise is to evaluate the impact of feature preprocessing
on a pipeline that uses a decision-tree-based classifier instead of a logistic
regression.
- The first question is to empirically evaluate whether scaling numerical
features is helpful or not;
- The second question is to evaluate whether it is empirically better (both
from a computational and a statistical perspective) to use integer coded or
one-hot encoded categories.
```
import pandas as pd
adult_census = pd.read_csv("../datasets/adult-census.csv")
target_name = "class"
target = adult_census[target_name]
data = adult_census.drop(columns=[target_name, "education-num"])
```
As in the previous notebooks, we use the utility `make_column_selector`
to select only columns with a specific data type. Besides, we list in
advance all categories for the categorical columns.
```
from sklearn.compose import make_column_selector as selector
numerical_columns_selector = selector(dtype_exclude=object)
categorical_columns_selector = selector(dtype_include=object)
numerical_columns = numerical_columns_selector(data)
categorical_columns = categorical_columns_selector(data)
```
## Reference pipeline (no numerical scaling and integer-coded categories)
First let's time the pipeline we used in the main notebook to serve as a
reference:
```
import time
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder
from sklearn.experimental import enable_hist_gradient_boosting
from sklearn.ensemble import HistGradientBoostingClassifier
categorical_preprocessor = OrdinalEncoder(handle_unknown="use_encoded_value",
unknown_value=-1)
preprocessor = ColumnTransformer([
('categorical', categorical_preprocessor, categorical_columns)],
remainder="passthrough")
model = make_pipeline(preprocessor, HistGradientBoostingClassifier())
start = time.time()
cv_results = cross_validate(model, data, target)
elapsed_time = time.time() - start
scores = cv_results["test_score"]
print("The mean cross-validation accuracy is: "
f"{scores.mean():.3f} +/- {scores.std():.3f} "
f"with a fitting time of {elapsed_time:.3f}")
```
## Scaling numerical features
Let's write a similar pipeline that also scales the numerical features using
`StandardScaler` (or similar):
```
# Write your code here.
```
## One-hot encoding of categorical variables
We observed that integer coding of categorical variables can be very
detrimental for linear models. However, it does not seem to be the case for
`HistGradientBoostingClassifier` models, as the cross-validation score
of the reference pipeline with `OrdinalEncoder` is reasonably good.
Let's see if we can get an even better accuracy with `OneHotEncoder`.
Hint: `HistGradientBoostingClassifier` does not yet support sparse input
data. You might want to use
`OneHotEncoder(handle_unknown="ignore", sparse=False)` to force the use of a
dense representation as a workaround.
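The hint can be checked in isolation on toy data: `OneHotEncoder(handle_unknown="ignore")` maps unseen categories to all-zero rows, and a dense array is what `HistGradientBoostingClassifier` needs. Depending on your scikit-learn version the dense-output keyword is `sparse` or `sparse_output`; the sketch below (with a made-up column) sidesteps it via `.toarray()`:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

train_cats = pd.DataFrame({"workclass": ["Private", "State-gov", "Private"]})
test_cats = pd.DataFrame({"workclass": ["Private", "Never-seen"]})

encoder = OneHotEncoder(handle_unknown="ignore")
encoder.fit(train_cats)

# Dense representation; unseen categories become all-zero rows
dense = encoder.transform(test_cats).toarray()
print(dense)
```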
```
# Write your code here.
```
# SESSIONS ARE ALL YOU NEED
### Workshop on e-commerce personalization
This notebook showcases with working code the main ideas of our ML-in-retail workshop from June 1st, 2021 at MICES (https://mices.co/). Please refer to the README in the repo for a bit of context!
While the code below is (well, should be!) fully functioning, please note we aim for functions which are pedagogically useful, more than terse code per se: it should be fairly easy to take these ideas and refactor the code to achieve more speed, better re-usability etc.
_If you want to use Google Colab, you can uncomment this cell:_
```
# if you need requirements....
# !pip install -r requirements.txt
# from google.colab import drive
# drive.mount('/content/drive', force_remount=True)
# %cd drive/MyDrive/path_to_directory_containing_train_folder
# LOCAL_FOLDER = 'train'
```
## Basic import and some global vars to know where data is!
Here we import the libraries we need and set the working folders - make sure your current python interpreter has all the dependencies installed. If you want to use the same real-world data as I'm using, please download the open dataset you find at: https://github.com/coveooss/SIGIR-ecom-data-challenge.
```
import os
from random import choice
import time
import ast
import json
import numpy as np
import csv
from collections import Counter,defaultdict
# viz stuff
from sklearn.manifold import TSNE
from matplotlib import pyplot as plt
from IPython.display import Image
# gensim stuff for prod2vec
import gensim # gensim > 4
from gensim.similarities.annoy import AnnoyIndexer
# keras stuff for auto-encoder
from keras.layers.core import Dropout
from keras.layers.core import Dense
from keras.layers import Concatenate
from keras.models import Sequential
from keras.layers import Input
from keras.optimizers import SGD, Adam
from keras.models import Model
from keras.callbacks import EarlyStopping
from keras.utils import plot_model
from sklearn.model_selection import train_test_split
from keras import utils
import hashlib
from copy import deepcopy
%matplotlib inline
LOCAL_FOLDER = '/Users/jacopotagliabue/Documents/data_dump/train' # where is the dataset stored?
N_ROWS = 5000000 # how many rows we want to take (to avoid waiting too much for tutorial purposes)?
```
## Step 1: build a prod2vec space
For more information on prod2vec and its use, you can also check our blog post: https://blog.coveo.com/clothes-in-space-real-time-personalization-in-less-than-100-lines-of-code/ or latest NLP paper: https://arxiv.org/abs/2104.02061
```
def read_sessions_from_training_file(training_file: str, K: int = None):
    """
    Read the training file containing product interactions, up to K rows.

    :return: a list of lists, each list being a session (sequence of product IDs)
    """
    user_sessions = []
    current_session_id = None
    current_session = []
    with open(training_file) as csvfile:
        reader = csv.DictReader(csvfile)
        for idx, row in enumerate(reader):
            # if a max number of items is specified, just return at the K-th row with what you have
            if K and idx >= K:
                break
            # just append "detail" events in the order we see them
            # row will contain: session_id_hash, product_action, product_sku_hash
            _session_id_hash = row['session_id_hash']
            # when a new session begins, store the old one and start again
            if current_session_id and current_session and _session_id_hash != current_session_id:
                user_sessions.append(current_session)
                # reset session
                current_session = []
            # check for the right type and append
            if row['product_action'] == 'detail':
                current_session.append(row['product_sku_hash'])
            # update the current session id
            current_session_id = _session_id_hash
    # print how many sessions we have...
    print("# total sessions: {}".format(len(user_sessions)))
    # print first one to check
    print("First session is: {}".format(user_sessions[0]))
    assert user_sessions[0][0] == 'd5157f8bc52965390fa21ad5842a8502bc3eb8b0930f3f8eafbc503f4012f69c'
    assert user_sessions[0][-1] == '63b567f4cef976d1411aecc4240984e46ebe8e08e327f2be786beb7ee83216d0'
    return user_sessions
def train_product_2_vec_model(sessions: list,
                              min_c: int = 3,
                              size: int = 48,
                              window: int = 5,
                              iterations: int = 15,
                              ns_exponent: float = 0.75):
    """
    Train CBOW to get product embeddings. We start with sensible defaults from the literature - please
    check https://arxiv.org/abs/2007.14906 for practical tips on how to optimize prod2vec.

    :param sessions: list of lists, as user sessions are list of interactions
    :param min_c: minimum frequency of an event for it to be calculated for product embeddings
    :param size: output dimension
    :param window: window parameter for gensim word2vec
    :param iterations: number of training iterations
    :param ns_exponent: ns_exponent parameter for gensim word2vec
    :return: trained product embedding model
    """
    model = gensim.models.Word2Vec(sentences=sessions,
                                   min_count=min_c,
                                   vector_size=size,
                                   window=window,
                                   epochs=iterations,
                                   ns_exponent=ns_exponent)
    print("# products in the space: {}".format(len(model.wv.index_to_key)))
    return model.wv
```
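Stripped of file handling, the reader above just groups consecutive `detail` rows by session id. A minimal sketch on in-memory rows (the session ids and SKU names below are hypothetical), which also flushes the final open session:

```python
import csv
import io

# Hypothetical browsing rows, already sorted by session (as in the real file)
raw = """session_id_hash,product_action,product_sku_hash
s1,detail,skuA
s1,pageview,
s1,detail,skuB
s2,detail,skuC
"""

sessions, current_id, current = [], None, []
for row in csv.DictReader(io.StringIO(raw)):
    # a new session id means the previous session is complete
    if current_id and current and row["session_id_hash"] != current_id:
        sessions.append(current)
        current = []
    # keep only "detail" events, in order
    if row["product_action"] == "detail":
        current.append(row["product_sku_hash"])
    current_id = row["session_id_hash"]
if current:
    sessions.append(current)  # flush the last open session

print(sessions)
```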
Get sessions from the training file, and train a prod2vec model with standard hyperparameters
```
# get sessions
sessions = read_sessions_from_training_file(
training_file=os.path.join(LOCAL_FOLDER, 'browsing_train.csv'),
K=N_ROWS)
# get a counter on all items for later use
sku_cnt = Counter([item for s in sessions for item in s])
# print out most common SKUs
sku_cnt.most_common(3)
# leave some sessions aside
idx = int(len(sessions) * 0.8)
train_sessions = sessions[0: idx]
test_sessions = sessions[idx:]
print("Train sessions # {}, test sessions # {}".format(len(train_sessions), len(test_sessions)))
# finally, train the p2vec, leaving all the default hyperparameters
prod2vec_model = train_product_2_vec_model(train_sessions)
```
Show how to get a prediction with knn
```
prod2vec_model.similar_by_word(sku_cnt.most_common(1)[0][0], topn=3)
```
Visualize the prod2vec space, color-coding for categories in the catalog
```
def plot_scatter_by_category_with_lookup(title,
                                         skus,
                                         sku_to_target_cat,
                                         results,
                                         custom_markers=None):
    groups = {}
    for sku, target_cat in sku_to_target_cat.items():
        if sku not in skus:
            continue
        sku_idx = skus.index(sku)
        x = results[sku_idx][0]
        y = results[sku_idx][1]
        if target_cat in groups:
            groups[target_cat]['x'].append(x)
            groups[target_cat]['y'].append(y)
        else:
            groups[target_cat] = {
                'x': [x], 'y': [y]
            }
    # DEBUG print
    print("Total of # groups: {}".format(len(groups)))
    fig, ax = plt.subplots(figsize=(10, 10))
    for group, data in groups.items():
        ax.scatter(data['x'], data['y'],
                   alpha=0.3,
                   edgecolors='none',
                   s=25,
                   marker='o' if not custom_markers else custom_markers,
                   label=group)
    plt.title(title)
    plt.show()
    return

def tsne_analysis(embeddings, perplexity=25, n_iter=1000):
    tsne = TSNE(n_components=2, verbose=1, perplexity=perplexity, n_iter=n_iter)
    return tsne.fit_transform(embeddings)
def get_sku_to_category_map(catalog_file, depth_index=1):
    """
    For each SKU, get the category from the catalog file (if specified)

    :return: dictionary, mapping SKU to a category
    """
    sku_to_cats = dict()
    with open(catalog_file) as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            _sku = row['product_sku_hash']
            category_hash = row['category_hash']
            if not category_hash:
                continue
            # pick only the category at a certain depth in the tree
            # e.g. x/xx/xxx, with depth=1 -> xx
            branches = category_hash.split('/')
            target_branch = branches[depth_index] if depth_index < len(branches) else None
            if not target_branch:
                continue
            # if all good, store the mapping
            sku_to_cats[_sku] = target_branch
    return sku_to_cats
sku_to_category = get_sku_to_category_map(os.path.join(LOCAL_FOLDER, 'sku_to_content.csv'))
print("Total of # {} categories".format(len(set(sku_to_category.values()))))
print("Total of # {} SKU with a category".format(len(sku_to_category)))
# debug with a sample SKU
print(sku_to_category[sku_cnt.most_common(1)[0][0]])
skus = prod2vec_model.index_to_key
print("Total of # {} skus in the model".format(len(skus)))
embeddings = [prod2vec_model[s] for s in skus]
# print out tsne plot with standard params
tsne_results = tsne_analysis(embeddings)
assert len(tsne_results) == len(skus)
plot_scatter_by_category_with_lookup('Prod2vec', skus, sku_to_category, tsne_results)
# do a version with only top K categories
TOP_K = 5
cnt_categories = Counter(list(sku_to_category.values()))
top_categories = [c[0] for c in cnt_categories.most_common(TOP_K)]
# filter out SKUs outside of top categories
top_skus = []
top_tsne_results = []
for _s, _t in zip(skus, tsne_results):
    if sku_to_category.get(_s, None) not in top_categories:
        continue
    top_skus.append(_s)
    top_tsne_results.append(_t)
# re-plot tsne with filtered SKUs
print("Top SKUs # {}".format(len(top_skus)))
plot_scatter_by_category_with_lookup('Prod2vec (top {})'.format(TOP_K),
top_skus, sku_to_category, top_tsne_results)
```
### Bonus: faster inference
Gensim is awesome and supports approximate, faster inference! You need to have ANNOY installed first, e.g. "pip install annoy". Here we re-run on our product space the original word2vec benchmark from gensim!
See: https://radimrehurek.com/gensim/auto_examples/tutorials/run_annoy.html
```
# Set up the model and vector that we are using in the comparison
annoy_index = AnnoyIndexer(prod2vec_model, 100)
test_sku = sku_cnt.most_common(1)[0][0]
# test all is good
print(prod2vec_model.most_similar([test_sku], topn=2, indexer=annoy_index))
print(prod2vec_model.most_similar([test_sku], topn=2))
def avg_query_time(model, annoy_index=None, queries=5000):
    """Average query time of a most_similar method over random queries."""
    total_time = 0
    for _ in range(queries):
        _v = model[choice(model.index_to_key)]
        start_time = time.process_time()
        model.most_similar([_v], topn=5, indexer=annoy_index)
        total_time += time.process_time() - start_time
    return total_time / queries
gensim_time = avg_query_time(prod2vec_model)
annoy_time = avg_query_time(prod2vec_model, annoy_index=annoy_index)
print("Gensim (s/query):\t{0:.5f}".format(gensim_time))
print("Annoy (s/query):\t{0:.5f}".format(annoy_time))
speed_improvement = gensim_time / annoy_time
print ("\nAnnoy is {0:.2f} times faster on average on this particular run".format(speed_improvement))
```
### Bonus: hyper tuning
For more info on hyper tuning in the context of product embeddings, please see our paper: https://arxiv.org/abs/2007.14906 and our data release: https://github.com/coveooss/fantastic-embeddings-sigir-2020.
We use the sessions we left out to simulate a small optimization loop...
```
def calculate_HR_on_NEP(model, sessions, k=10, min_length=3):
    _count = 0
    _hits = 0
    for session in sessions:
        # consider only decently-long sessions
        if len(session) < min_length:
            continue
        # update the counter
        _count += 1
        # get the item to predict
        target_item = session[-1]
        # get the model prediction using the before-last item
        query_item = session[-2]
        # if the model cannot make the prediction, it's a failure
        if query_item not in model:
            continue
        predictions = model.similar_by_word(query_item, topn=k)
        # debug
        # print(target_item, query_item, predictions)
        if target_item in [p[0] for p in predictions]:
            _hits += 1
    # debug
    print("Total test cases: {}".format(_count))
    return _hits / _count
# we simulate a test with 3 values for epochs in prod2ve
iterations_values = [1, 10]
# for each value we train a model, and use Next Event Prediction (NEP) to get a quality assessment
for i in iterations_values:
    print("\n ======> Hyper value: {}".format(i))
    cnt_model = train_product_2_vec_model(train_sessions, iterations=i)
    # use the hold-out set to get NEP performance
    _hr = calculate_HR_on_NEP(cnt_model, test_sessions)
    print("HR: {}\n".format(_hr))
```
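Stripped of the gensim specifics, the metric above is HR@k: the fraction of test sessions whose last item appears in the top-k neighbours of the second-to-last item. A self-contained sketch, with a dummy neighbour table standing in for `model.similar_by_word`:

```python
# Dummy top-k neighbour lists standing in for model.similar_by_word
neighbours = {
    "A": ["B", "C"],
    "B": ["A", "D"],
}

test_sessions = [
    ["X", "A", "B"],  # query "A", target "B" -> hit
    ["X", "B", "C"],  # query "B", target "C" -> miss
    ["X", "A", "C"],  # query "A", target "C" -> hit
]

hits = count = 0
for session in test_sessions:
    if len(session) < 3:  # consider only decently-long sessions
        continue
    count += 1
    query, target = session[-2], session[-1]
    if query not in neighbours:  # cannot predict -> counts as a miss
        continue
    if target in neighbours[query]:
        hits += 1

hr = hits / count
print(hr)
```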
## Step 2: improving low-count vectors
For more information about prod2vec in the cold start scenario, please see our paper: https://dl.acm.org/doi/10.1145/3383313.3411477 and video: https://vimeo.com/455641121
```
def build_mapper(pro2vec_dims=48):
    """
    Build a Keras model for content-based "fake" embeddings.

    :return: a Keras model, mapping BERT-like catalog representations to the prod2vec space
    """
    # input
    description_input = Input(shape=(50,))
    image_input = Input(shape=(50,))
    # model
    x = Dense(25, activation="relu")(description_input)
    y = Dense(25, activation="relu")(image_input)
    combined = Concatenate()([x, y])
    combined = Dropout(0.3)(combined)
    combined = Dense(25)(combined)
    output = Dense(pro2vec_dims)(combined)
    return Model(inputs=[description_input, image_input], outputs=output)
# get vectors representing text and images in the catalog
def get_sku_to_embeddings_map(catalog_file):
    """
    For each SKU, get the text and image embeddings, as provided pre-computed by the dataset

    :return: dictionary, mapping SKU to a tuple of embeddings
    """
    sku_to_embeddings = dict()
    with open(catalog_file) as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            _sku = row['product_sku_hash']
            _description = row['description_vector']
            _image = row['image_vector']
            # skip when either vector is missing
            if not _description or not _image:
                continue
            # if all good, store the mapping
            sku_to_embeddings[_sku] = (json.loads(_description), json.loads(_image))
    return sku_to_embeddings
sku_to_embeddings = get_sku_to_embeddings_map(os.path.join(LOCAL_FOLDER, 'sku_to_content.csv'))
print("Total of # {} SKUs with embeddings".format(len(sku_to_embeddings)))
# print out an example
_d, _i = sku_to_embeddings['438630a8ba0320de5235ee1bedf3103391d4069646d640602df447e1042a61a3']
print(len(_d), len(_i), _d[:5], _i[:5])
# just make sure we have the SKUs in the model and a counter
skus = prod2vec_model.index_to_key
print("Total of # {} skus in the model".format(len(skus)))
print(sku_cnt.most_common(5))
# above which frequency percentile do we consider a SKU popular enough to be in our training set?
FREQUENT_PRODUCTS_PTILE = 80
_counts = [c[1] for c in sku_cnt.most_common()]
_counts[:3]
# make sure we have just SKUS in the prod2vec space for which we have embeddings
popular_threshold = np.percentile(_counts, FREQUENT_PRODUCTS_PTILE)
popular_skus = [s for s in skus if s in sku_to_embeddings and sku_cnt.get(s, 0) > popular_threshold]
product_embeddings = [prod2vec_model[s] for s in popular_skus]
description_embeddings = [sku_to_embeddings[s][0] for s in popular_skus]
image_embeddings = [sku_to_embeddings[s][1] for s in popular_skus]
# debug
print(popular_threshold, len(skus), len(popular_skus))
# print(description_embeddings[:1][:3])
# print(image_embeddings[:1][:3])
# train the mapper now
training_data_X = [np.array(description_embeddings), np.array(image_embeddings)]
training_data_y = np.array(product_embeddings)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=20, restore_best_weights=True)
# build and display model
rare_net = build_mapper()
plot_model(rare_net, show_shapes=True, show_layer_names=True, to_file='rare_net.png')
Image('rare_net.png')
# train!
rare_net.compile(loss='mse', optimizer='rmsprop')
rare_net.fit(training_data_X,
training_data_y,
batch_size=200,
epochs=20000,
validation_split=0.2,
callbacks=[es])
# rarest_skus = [_[0] for _ in sku_cnt.most_common()[-500:]]
# test_skus = [s for s in rarest_skus if s in sku_to_embeddings]
# get to rare vectors
test_skus = [s for s in skus if s in sku_to_embeddings and sku_cnt.get(s, 0) < popular_threshold/2]
print(len(skus), len(test_skus))
# prepare embeddings for prediction
rare_description_embeddings = [sku_to_embeddings[s][0] for s in test_skus]
rare_image_embeddings = [sku_to_embeddings[s][1] for s in test_skus]
# prepare embeddings for prediction
test_data_X = [np.array(rare_description_embeddings), np.array(rare_image_embeddings)]
predicted_embeddings = rare_net.predict(test_data_X)
# debug
# print(len(predicted_embeddings))
# print(predicted_embeddings[0][:10])
def calculate_HR_on_NEP_rare(model, sessions, rare_skus, k=10, min_length=3):
    _count = 0
    _hits = 0
    _rare_hits = 0
    _rare_count = 0
    for session in sessions:
        # consider only decently-long sessions
        if len(session) < min_length:
            continue
        # update the counter
        _count += 1
        # get the item to predict
        target_item = session[-1]
        # get the model prediction using the before-last item
        query_item = session[-2]
        # if the model cannot make the prediction, it's a failure
        if query_item not in model:
            continue
        # increment the counter if the query is a rare sku
        if query_item in rare_skus:
            _rare_count += 1
        predictions = model.similar_by_word(query_item, topn=k)
        # debug
        # print(target_item, query_item, predictions)
        if target_item in [p[0] for p in predictions]:
            _hits += 1
            # track hits if the query is a rare sku
            if query_item in rare_skus:
                _rare_hits += 1
    # debug
    print("Total test cases: {}".format(_count))
    print("Total rare test cases: {}".format(_rare_count))
    return _hits / _count, _rare_hits / _rare_count
# make copy of original prod2vec model
prod2vec_rare_model = deepcopy(prod2vec_model)
# update model with new vectors
prod2vec_rare_model.add_vectors(test_skus, predicted_embeddings, replace=True)
prod2vec_rare_model.fill_norms(force=True)
# check
assert np.array_equal(predicted_embeddings[0], prod2vec_rare_model[test_skus[0]])
# test new model
calculate_HR_on_NEP_rare(prod2vec_rare_model, test_sessions, test_skus)
# test original model
calculate_HR_on_NEP_rare(prod2vec_model, test_sessions, test_skus)
```
## Step 3: query scoping
For more information about query scoping, please see our paper: https://www.aclweb.org/anthology/2020.ecnlp-1.2/ and repository: https://github.com/jacopotagliabue/session-path
```
# get vectors representing text and images in the catalog
def get_query_to_category_dataset(search_file, cat_2_id, sku_to_category):
    """
    For each query, get a label representing the category in items clicked after the query.
    It uses as input a mapping "sku_to_category" to join the search file with catalog meta-data!

    :return: two lists, matching query vectors to a label
    """
    query_X = list()
    query_Y = list()
    with open(search_file) as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            _click_products = row['clicked_skus_hash']
            if not _click_products:  # or _click_product not in sku_to_category:
                continue
            # clean the string and extract SKUs from array
            cleaned_skus = ast.literal_eval(_click_products)
            for s in cleaned_skus:
                if s in sku_to_category:
                    query_X.append(json.loads(row['query_vector']))
                    target_category_as_int = cat_2_id[sku_to_category[s]]
                    query_Y.append(utils.to_categorical(target_category_as_int, num_classes=len(cat_2_id)))
    return query_X, query_Y
sku_to_category = get_sku_to_category_map(os.path.join(LOCAL_FOLDER, 'sku_to_content.csv'))
print("Total of # {} categories".format(len(set(sku_to_category.values()))))
cats = list(set(sku_to_category.values()))
cat_2_id = {c: idx for idx, c in enumerate(cats)}
print(cat_2_id[cats[0]])
query_X, query_Y = get_query_to_category_dataset(os.path.join(LOCAL_FOLDER, 'search_train.csv'),
                                                 cat_2_id,
                                                 sku_to_category)
print(len(query_X))
print(query_Y[0])
x_train, x_test, y_train, y_test = train_test_split(np.array(query_X), np.array(query_Y), test_size=0.2)
def build_query_scoping_model(input_d, target_classes):
    print('Shape tensor {}, target classes {}'.format(input_d, target_classes))
    # define model
    model = Sequential()
    model.add(Dense(64, activation='relu', input_dim=input_d))
    model.add(Dropout(0.5))
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(target_classes, activation='softmax'))
    return model
query_model = build_query_scoping_model(x_train[0].shape[0], y_train[0].shape[0])
# compile model
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
query_model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# train first
query_model.fit(x_train, y_train, epochs=10, batch_size=32)
# compute and print eval score
score = query_model.evaluate(x_test, y_test, batch_size=32)
score
# get vectors representing text and images in the catalog
def get_query_info(search_file):
    """
    For each query, extract relevant query metadata to match with session data.

    :return: list of queries with metadata
    """
    queries = list()
    with open(search_file) as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            _click_products = row['clicked_skus_hash']
            if not _click_products:  # or _click_product not in sku_to_category:
                continue
            # clean the string and extract SKUs from array
            cleaned_skus = ast.literal_eval(_click_products)
            queries.append({'session_id_hash': row['session_id_hash'],
                            'server_timestamp_epoch_ms': int(row['server_timestamp_epoch_ms']),
                            'clicked_skus': cleaned_skus,
                            'query_vector': json.loads(row['query_vector'])})
    print("# total queries: {}".format(len(queries)))
    return queries
def get_session_info_for_queries(training_file: str, query_info: list, K: int = None):
    """
    Read the training file containing product interactions for sessions with query, up to K rows.

    :return: dict of lists with session_id as key, each list being a session (sequence of product events with metadata)
    """
    user_sessions = dict()
    current_session_id = None
    current_session = []
    query_session_ids = set([_['session_id_hash'] for _ in query_info])
    with open(training_file) as csvfile:
        reader = csv.DictReader(csvfile)
        for idx, row in enumerate(reader):
            # if a max number of rows is specified, just stop at K with what you have
            if K and idx >= K:
                break
            # just append "detail" events in the order we see them
            # row will contain: session_id_hash, product_action, product_sku_hash
            _session_id_hash = row['session_id_hash']
            # when a new session begins, store the old one and start again
            if current_session_id and current_session and _session_id_hash != current_session_id:
                user_sessions[current_session_id] = current_session
                # reset session
                current_session = []
            # check for the right type and append event info
            if row['product_action'] == 'detail' and _session_id_hash in query_session_ids:
                current_session.append({'product_sku_hash': row['product_sku_hash'],
                                        'server_timestamp_epoch_ms': int(row['server_timestamp_epoch_ms'])})
            # update the current session id
            current_session_id = _session_id_hash
    # store the last session as well, since there is no further session change to trigger it
    if current_session_id and current_session:
        user_sessions[current_session_id] = current_session
    # print how many sessions we have...
    print("# total sessions: {}".format(len(user_sessions)))
    return dict(user_sessions)
query_info = get_query_info(os.path.join(LOCAL_FOLDER, 'search_train.csv'))
session_info = get_session_info_for_queries(os.path.join(LOCAL_FOLDER, 'browsing_train.csv'), query_info)
def get_contextual_query_to_category_dataset(query_info, session_info, prod2vec_model, cat_2_id, sku_to_category):
    """
    For each query, get a label representing the category in items clicked after the query.
    It uses as input a mapping "sku_to_category" to join the search file with catalog meta-data!
    It also creates a joint embedding for input by concatenating the query vector and the average
    session vector up until the moment the query was made.

    :return: two lists, matching query vectors to a label
    """
    query_X = list()
    query_Y = list()
    for row in query_info:
        query_timestamp = row['server_timestamp_epoch_ms']
        cleaned_skus = row['clicked_skus']
        session_id_hash = row['session_id_hash']
        if session_id_hash not in session_info or not cleaned_skus:  # or _click_product not in sku_to_category:
            continue
        session_skus = session_info[session_id_hash]
        context_skus = [e['product_sku_hash'] for e in session_skus
                        if query_timestamp > e['server_timestamp_epoch_ms']
                        and e['product_sku_hash'] in prod2vec_model]
        if not context_skus:
            continue
        context_vector = np.mean([prod2vec_model[sku] for sku in context_skus], axis=0).tolist()
        for s in cleaned_skus:
            if s in sku_to_category:
                query_X.append(row['query_vector'] + context_vector)
                target_category_as_int = cat_2_id[sku_to_category[s]]
                query_Y.append(utils.to_categorical(target_category_as_int, num_classes=len(cat_2_id)))
    return query_X, query_Y
context_query_X, context_query_Y = get_contextual_query_to_category_dataset(query_info,
                                                                            session_info,
                                                                            prod2vec_model,
                                                                            cat_2_id,
                                                                            sku_to_category)
print(len(context_query_X))
print(context_query_Y[0])
x_train, x_test, y_train, y_test = train_test_split(np.array(context_query_X), np.array(context_query_Y), test_size=0.2)
contextual_query_model = build_query_scoping_model(x_train[0].shape[0], y_train[0].shape[0])
# compile model
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
contextual_query_model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# train first
contextual_query_model.fit(x_train, y_train, epochs=10, batch_size=32)
# compute and print eval score
score = contextual_query_model.evaluate(x_test, y_test, batch_size=32)
score
```
# CrabAgePrediction
__
--
<nav class="table-of-contents"><ol><li><a href="#crabageprediction">CrabAgePrediction</a></li><li><a href="#overview">Overview</a><ol><li><a href="#1.-abstract">1. Abstract</a><ol><li><a href="#paper-summary-%E2%9C%94%EF%B8%8F">Paper Summary ✔️</a></li></ol></li><li><a href="#2.-introduction">2. Introduction</a></li><li><a href="#3.-background">3. Background</a><ol><li><a href="#process-activities-%E2%9C%94%EF%B8%8F">Process Activities ✔️</a></li><li><a href="#models-%E2%9C%94%EF%B8%8F">Models ✔️</a></li><li><a href="#analysis-%E2%9C%94%EF%B8%8F">Analysis ✔️</a></li></ol></li><li><a href="#4.-methods">4. Methods</a><ol><li><a href="#approach-%E2%9C%94%EF%B8%8F">Approach ✔️</a></li><li><a href="#key-contributions-%E2%9C%94%EF%B8%8F">Key Contributions ✔️</a></li></ol></li><li><a href="#5.-experiments">5. Experiments</a><ol><li><a href="#prediction-system-development-workflow-%E2%9C%94%EF%B8%8F">Prediction System Development Workflow ✔️</a></li><li><a href="#predicition-model-workflow-%E2%9C%94%EF%B8%8F">Predicition Model Workflow ✔️</a></li><li><a href="#code-%E2%9C%94%EF%B8%8F">Code ✔️</a></li></ol></li><li><a href="#6.-conclusion">6. Conclusion</a><ol><li><a href="#summary-of-results-%E2%9C%94%EF%B8%8F">Summary of Results ✔️</a></li><li><a href="#future-work-%E2%9C%94%EF%B8%8F">Future work ✔️</a></li></ol></li><li><a href="#7.-references">7. References</a><ol><li><a href="#links-%E2%9C%94%EF%B8%8F">Links ✔️</a></li></ol></li><li><a href="#code">Code</a><ol><li><a href="#proposal-information">Proposal Information</a></li></ol></li><li><a href="#helpful-resources-for-the-project">Helpful Resources for the Project</a></li></ol></li><li><a href="#crabageprediction-1">CrabAgePrediction</a></li></ol></nav>
<h1 id="crabageprediction">CrabAgePrediction</h1>
<p>
<img src="https://img.shields.io/badge/Kaggle-035a7d?style=for-the-badge&logo=kaggle&logoColor=white" alt=""/>
<img src="https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54" alt="" />
<img src="https://img.shields.io/badge/jupyter-%23FA0F00.svg?style=for-the-badge&logo=jupyter&logoColor=white" alt="" />
<img src="https://img.shields.io/badge/pycharm-143?style=for-the-badge&logo=pycharm&logoColor=black&color=black&labelColor=green" alt="" />
</p>
## Project Information:
```py
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
```
**CLASS:** `CPSC-483 Machine Learning Section-02`
**LAST UPDATE:** `May 5, 2022`
**PROJECT NAME:** `Crab Age Prediction`
**PROJECT GROUP:**
| Name | Email | Student |
| ------------ | ------------------------------ | ------------- |
| Brian Lucero | 13rianlucero@csu.fullerton.edu | Undergraduate |
| Justin Heng | justinheng@csu.fullerton.edu | Graduate |
**PROJECT PAPER:** [Here](https://github.com/13rianlucero/CrabAgePrediction/blob/main/FirstDraft/Crab%20Age%20Prediction%20Paper.pdf)
**PROJECT GITHUB REPOSITORY:** [Here](https://github.com/13rianlucero/CrabAgePrediction)
# Overview
> __
>
>
> ## **1. Abstract**
>
> ###### Paper Summary ✔️
>
> Machine learning can be used to predict the age of crabs. It can be more accurate than simply weighing a crab to estimate its age. Several different models can be used, though support vector regression was found to be the most accurate in this experiment.
>
>> __
>>
>>
>> ## **2. Introduction**
>>
>> | The Problem ✔️ | Why it's important? ✔️ | Our Solution Strategy ✔️ |
>> | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
>> | <br /><br />*It is quite difficult to determine a crab's age due to their molting cycles, which happen throughout their whole life. Essentially, failing to harvest at an ideal age increases cost, and crab lives go to waste.* | <br /><br />*Beyond a certain age, there is negligible growth in a crab's physical characteristics, so it is important to time the harvesting to reduce cost and increase profit.* | <br /><br />Prepare crab data and use it to train several machine learning models. Thus, given certain physical characteristics and the corresponding values, the ML models will accurately determine the age of the crabs. |
>>
>>> __
>>> ## **3. Background**
>>>
>>> ###### **Process Activities ✔️**
>>>
>>> - Feature Selection & Representation
>>> - Evaluation on variety of methods
>>> - Method Selection
>>> - Parameter Tuning
>>> - Classifier Evaluation
>>> - Train-Test Split
>>> - Cross Validation
>>> - Eliminating Data
>>> - Handle Categorical Data
>>> - One-hot encoding
>>> - Data Partitioning
>>> - Feature Scaling
>>> - Feature Selection
>>> - Choose ML Models
>>>
>>> ###### Models ✔️
>>>
>>> - K-Nearest Neighbours (KNN)
>>> - Multiple Linear Regression (MLR)
>>> - Support Vector Machine (SVM)
>>>
>>> ###### Analysis ✔️
>>>
>>> - Evaluate Results
>>> - Performance Metrics
>>> - Compare ML Models using Metrics
>>>
>>>> __
>>>> ## **4. Methods**
>>>>
>>>> ###### Approach ✔️
>>>>
>>>> - Prediction System using 3 main ML Models
>>>>
>>>> ###### Key Contributions ✔️
>>>>
>>>> - Justin
>>>> - `KNN`
>>>> - `SVM`
>>>> - Brian
>>>> - `MLR`
>>>>
>>>>> __
>>>>> ## **5. Experiments**
>>>>>
>>>>> ###### Prediction System Development Workflow ✔️
>>>>>
>>>>> ```mermaid
>>>>> graph TD
>>>>>
>>>>> A[Problem Domain: Age Prediction of Crabs]
>>>>> B[Data Representation: Physical Attribute Values]
>>>>> C[Objective Function: Average Error Rate on Training Crab Data]
>>>>> D[Evaluation: Average Error Rate on Test Crab Data]
>>>>> E[Learning Algorithm: KNN, MLR, SVM]
>>>>>
>>>>> F[Predictive Model: KNN Model]
>>>>> G[Predictive Model: MLR Model]
>>>>> H[Predictive Model: SVM Model]
>>>>>
>>>>> I[Prediction System: Aggregate Model Results]
>>>>> J[Useful Predictions: Low Bias, Low Variance]
>>>>> K[Domain Insights: Types of Crabs]
>>>>>
>>>>> A -- Discrete and Continuous --> B
>>>>> A -- Training dataset --> C
>>>>> A -- Test dataset --> D
>>>>>
>>>>> B --> E
>>>>> C --> E
>>>>> D --> E
>>>>>
>>>>> E --> F
>>>>> E --> G
>>>>> E --> H
>>>>>
>>>>> F --> I
>>>>> G --> I
>>>>> H --> I
>>>>>
>>>>> I --> J
>>>>> I --> K
>>>>>
>>>>> ```
>>>>>
>>>>> ###### Prediction Model Workflow ✔️
>>>>>
>>>>> | KNN | MLR | SVM |
>>>>> | --------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
>>>>> | Import Libraries | Import Libraries | Import Libraries |
>>>>> | Import Dataset, create dataframe | Import Dataset, create dataframe | Import Dataset, create dataframe |
>>>>> | Data Preprocessing | Data Preprocessing | Data Preprocessing |
>>>>> | Check for Missing data, Bad Data, Outliers, Data Types, Choose Classifier, Data Organization, Data Scaling, etc | Check for Missing data, Bad Data, Outliers, Data Types, Choose Classifier, Data Organization, Data Scaling, etc | Check for Missing data, Bad Data, Outliers, Data Types, Choose Classifier, Data Organization, Data Scaling, etc |
>>>>> | Feature Selection | Feature Selection | Feature Selection |
>>>>> | Train-Test Split | Train-Test Split | Train-Test Split |
>>>>> | Build Algorithm | Build Algorithm | Build Algorithm |
>>>>> | Train Algorithm | Train Algorithm | Train Algorithm |
>>>>> | Test Algorithm | Test Algorithm | Test Algorithm |
>>>>> | Produce Performance Metrics from Tests | Produce Performance Metrics from Tests | Produce Performance Metrics from Tests |
>>>>> | Evaluate Results | Evaluate Results | Evaluate Results |
>>>>> | Tune Algorithm | Tune Algorithm | Tune Algorithm |
>>>>> | Retest & Re-Analyze | Retest & Re-Analyze | Retest & Re-Analyze |
>>>>> | Prediction Model defined from new train-test-analyze cycle | Prediction Model defined from new train-test-analyze cycle | Prediction Model defined from new train-test-analyze cycle |
>>>>> | Use model to refine the results | Use model to refine the results | Use model to refine the results |
>>>>> | Draw Conclusions | Draw Conclusions | Draw Conclusions |
>>>>>
>>>>> ###### Code ✔️
>>>>>>
>>>>>> __
>>>>>> ## **6. Conclusion**
>>>>>>
>>>>>> ###### Summary of Results ✔️
>>>>>>
>>>>>> Overall, the models were able to predict the age of crabs reasonably well. On average, the predictions were off by about 1.5 months. Although support vector regression performed slightly better than the other two models, it was still close enough that any of the models could be used with satisfactory results.
>>>>>>
>>>>>> Multiple linear regression was found to be slightly better at predicting older crabs while support vector regression was better at predicting younger crabs. K-nearest neighbor was average overall. What is important to note is that the predictions for all three models were more accurate when the age of the crab was less than 12 months. This makes sense because after a crab reaches full maturity around 12 months, its growth comes to a halt and it is harder to predict its age since its features stay roughly the same.
>>>>>>
>>>>>> | Model | Type | Error (months) |
>>>>>> | --------------------------------- | -------- | -------------- |
>>>>>> | Linear Regression (Weight vs Age) | Baseline | 1.939 |
>>>>>> | K-nearest Neighbor | ML | 1.610 |
>>>>>> | Multiple Linear Regression | ML | 1.560 |
>>>>>>
>>>>>> ###### Future work ✔️
>>>>>>
>>>>>> Predicting the age of a crab becomes less accurate the longer a crab has matured. To circumvent this, the dataset could be further preprocessed so that any crab over the age of 12 months will be set to 12 months.
>>>>>>
>>>>>> This would greatly increase the accuracy of the machine learning models though the models would no longer be able to predict any ages over 12 months. Since the purpose is to find which crabs are harvestable, this may be a good compromise.
>>>>>>
>>>>>>> __
>>>>>>> ## **7. References**
>>>>>>>
>>>>>>> <p align="center">
>>>>>>> <img src="https://img.shields.io/badge/Kaggle-035a7d?style=for-the-badge&logo=kaggle&logoColor=white" alt=""/>
>>>>>>> <img src="https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54" alt="" />
>>>>>>> <img src="https://img.shields.io/badge/jupyter-%23FA0F00.svg?style=for-the-badge&logo=jupyter&logoColor=white" alt="" />
>>>>>>> <img src="https://img.shields.io/badge/pycharm-143?style=for-the-badge&logo=pycharm&logoColor=black&color=black&labelColor=green" alt="" />
>>>>>>> </p>
>>>>>>>
>>>>>>> ###### Links ✔️
>>>>>>>
>>>>>>> [1] [https://www.kaggle.com/datasets/sidhus/crab-age-prediction](https://www.kaggle.com/datasets/sidhus/crab-age-prediction)
>>>>>>>
>>>>>>> [2] [https://scikit-learn.org/stable/modules/svm.html](https://scikit-learn.org/stable/modules/svm.html)
>>>>>>>
>>>>>>> [3] [https://repository.library.noaa.gov/view/noaa/16273/noaa_16273_DS4.pdf](https://repository.library.noaa.gov/view/noaa/16273/noaa_16273_DS4.pdf)
>>>>>>>
>>>>>>> [4] [https://faculty.math.illinois.edu/~hildebr/tex/latex-start.html](https://faculty.math.illinois.edu/~hildebr/tex/latex-start.html)
>>>>>>>
>>>>>>> [5] [https://github.com/krishnaik06/Multiple-Linear-Regression](https://github.com/krishnaik06/Multiple-Linear-Regression)
>>>>>>>
>>>>>>> [6] [https://github.com/13rianlucero/CrabAgePrediction](https://github.com/13rianlucero/CrabAgePrediction)
>>>>>>>
>>>>>>> __
>>>>>>>
>>>>>>
>>>>>> __
>>>>>>
>>>>>
>>>>> __
>>>>>
>>>>
>>>> __
>>>>
>>>
>>> __
>>>
>>
>> __
>>
>
> __
---
## Import the Libraries
---
```
import pandas
import numpy
from scipy import stats
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
```
## Import Dataset into Dataframe variable
---
```
data = pandas.read_csv(r"CrabAgePrediction.csv").dropna(axis=0)
print(data.columns)
data["SexValue"] = 0  # create a new column
# convert male or female to a numerical value: Male=1, Female=2, Indeterminate=1.5
# .loc with the column label avoids the iloc position/label mismatch after dropna()
for index, row in data.iterrows():
    if row["Sex"] == "M":
        data.loc[index, "SexValue"] = 1
    elif row["Sex"] == "F":
        data.loc[index, "SexValue"] = 2
    else:
        data.loc[index, "SexValue"] = 1.5
#putting all our data together and dropping Sex for SexValue
data = data[["SexValue", "Length", "Diameter", "Height", "Weight", "Shucked Weight", "Viscera Weight", "Shell Weight", "Age"]]
X = data[["Length", "Diameter", "Height", "Weight", "Shucked Weight", "Viscera Weight", "Shell Weight"]]
y = data[["Age"]]
# Pearson correlation for every feature (pearsonr expects 1-D inputs, so use the "Age" column directly)
col_cor = stats.pearsonr(data["SexValue"], data["Age"])
col1_cor = stats.pearsonr(data["Length"], data["Age"])
col2_cor = stats.pearsonr(data["Diameter"], data["Age"])
col3_cor = stats.pearsonr(data["Height"], data["Age"])
col4_cor = stats.pearsonr(data["Weight"], data["Age"])
col5_cor = stats.pearsonr(data["Shucked Weight"], data["Age"])
col6_cor = stats.pearsonr(data["Viscera Weight"], data["Age"])
col7_cor = stats.pearsonr(data["Shell Weight"], data["Age"])
print(col_cor)
print(col1_cor)
print(col2_cor)
print(col3_cor)
print(col4_cor)
print(col5_cor)
print(col6_cor)
print(col7_cor)
#split the data into test and train set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=132)
#n_neighbors plot
error_rate = []
y_test2 = numpy.ravel(y_test)
for k in range(1, 31):
    neigh = KNeighborsClassifier(n_neighbors=k)
    neigh.fit(X_train, numpy.ravel(y_train))
    knn_predict = neigh.predict(X_test)
    error_knn = 0
    # use the actual test-set size rather than a hard-coded count
    for x in range(len(y_test2)):
        error_knn += abs(knn_predict[x] - y_test2[x])
    error_rate.append(error_knn / len(y_test2))
plt.plot(range(1, 31), error_rate)
plt.xlabel("n_neighbors")
plt.ylabel("error_rate")
plt.title("Average error vs n_neighbors")
plt.show()
#KNN
neigh = KNeighborsClassifier(n_neighbors=20)
neigh.fit(X_train, numpy.ravel(y_train))
knn_predict = neigh.predict(X_test)
#Multiple Linear Regression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
score = r2_score(y_test,y_pred)
#SVR
regr = svm.SVR()
regr.fit(X_train, numpy.ravel(y_train))
regr_predict = regr.predict(X_test)
# plot the predicted age against the actual age for the test set
plt.plot(range(len(knn_predict)), knn_predict)
plt.plot(range(len(knn_predict)), y_pred)
plt.plot(range(len(knn_predict)), regr_predict)
plt.plot(range(len(knn_predict)), numpy.ravel(y_test))
plt.xlim([0, 50])
#plt.xlim([60, 90])
plt.legend(["KNN Predicted Age", "LR Predicted Age", "SVR Predicted Age", "Actual Age"])
plt.ylabel("Age in months")
plt.title("Predicted vs Actual Crab Age")
plt.show()
error_knn = 0
error_mlr = 0
error_svr = 0
y_test2 = numpy.ravel(y_test)
for x in range(len(y_test2)):
    error_knn += abs(knn_predict[x] - y_test2[x])
    error_mlr += abs(y_pred[x] - y_test2[x])
    error_svr += abs(regr_predict[x] - y_test2[x])
print(error_knn / len(y_test2))
print(error_mlr / len(y_test2))
print(error_svr / len(y_test2))
```
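The preprocessing idea raised in the conclusion, capping every age above 12 months at 12, can be sketched with pandas' ```clip``` (the numbers below are made-up toy values, not rows from the dataset):

```python
import pandas as pd

# Toy ages in months; the real notebook would apply this to data["Age"] instead.
ages = pd.Series([5, 9, 12, 14, 20], name="Age")
capped = ages.clip(upper=12)  # every value above 12 becomes 12
print(capped.tolist())  # -> [5, 9, 12, 12, 12]
```

With this cap applied before training, the models would only ever predict ages up to 12 months, matching the compromise discussed above.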
---
layout: post
title: Introduction to Pandas 4
excerpt:
categories: blog
tags: ["Python", "Data Science", "Pandas"]
published: true
comments: true
share: true
---
Let's get our hands dirty with another important topic in pandas. We need to be able to handle missing data, and we will see some methods for handling it. After making our data more suitable for analysis or prediction, we will learn how to plot it with the ```matplotlib``` library.
> I am going to use matplotlib library for this post and other pandas tutorials but I normally prefer using bokeh interactive plotting library.
If you like to follow this small tutorial in notebook format you can check out [here](https://github.com/eneskemalergin/Blog-Notebooks/blob/master/Introduction_to_Pandas-4.ipynb)
## Handling Missing Data
Missing data usually shows up as ```NULL```, ```N/A```, or **```NaN```** (the representation pandas uses).
Other than appearing natively in the source dataset, missing values can be added to a dataset by an operation such as reindexing, or changing frequencies in the case of a time series:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
date_stngs = ['2014-05-01','2014-05-02','2014-05-05','2014-05-06','2014-05-07']
tradeDates = pd.to_datetime(pd.Series(date_stngs))
closingPrices=[531.35,527.93,527.81,515.14,509.96]
googClosingPrices=pd.DataFrame(data=closingPrices, columns=['closingPrice'], index=tradeDates)
googClosingPrices
```
Pandas also provides an easy and fast API to read stock data from various finance data providers.
```
from pandas_datareader import data, wb # pandas data READ API
import datetime
googPrices = data.get_data_yahoo("GOOG", start=datetime.datetime(2014, 5, 1),
                                 end=datetime.datetime(2014, 5, 7))
googFinalPrices=pd.DataFrame(googPrices['Close'], index=tradeDates)
googFinalPrices
```
We now have a time series that depicts the closing price of Google's stock from May 1, 2014 to May 7, 2014, with gaps in the date range since trading only occurs on business days. If we want to change the date range so that it shows calendar days (that is, including the weekend), we can change the frequency of the time series index from business days to calendar days as follows:
```
googClosingPricesCDays = googClosingPrices.asfreq('D')
googClosingPricesCDays
```
Note that we have now introduced NaN values for the closingPrice for the weekend dates of May 3, 2014 and May 4, 2014.
We can check which values are missing by using the ```isnull()``` and ```notnull()``` functions as follows:
```
googClosingPricesCDays.isnull()
googClosingPricesCDays.notnull()
```
A Boolean DataFrame is returned in each case. In datetime and pandas Timestamps, missing values are represented by the NaT value, which is the pandas equivalent of NaN for time-based types:
```
tDates=tradeDates.copy()
tDates[1]=np.NaN
tDates[4]=np.NaN
tDates
FBVolume=[82.34,54.11,45.99,55.86,78.5]
TWTRVolume=[15.74,12.71,10.39,134.62,68.84]
socialTradingVolume = pd.concat([pd.Series(FBVolume), pd.Series(TWTRVolume), tradeDates],
                                axis=1, keys=['FB', 'TWTR', 'TradeDate'])
socialTradingVolume
socialTradingVolTS=socialTradingVolume.set_index('TradeDate')
socialTradingVolTS
socialTradingVolTSCal=socialTradingVolTS.asfreq('D')
socialTradingVolTSCal
```
We can perform arithmetic operations on data containing missing values. For example, we can calculate the total trading volume (in millions of shares) across the two stocks for Facebook and Twitter as follows:
```
socialTradingVolTSCal['FB']+socialTradingVolTSCal['TWTR']
```
By default, any operation performed on an object that contains missing values will return a missing value at that position as shown in the following command:
```
pd.Series([1.0,np.NaN,5.9,6])+pd.Series([3,5,2,5.6])
pd.Series([1.0,25.0,5.5,6])/pd.Series([3,np.NaN,2,5.6])
```
There is a difference, however, in the way NumPy treats aggregate calculations versus what pandas does.
In pandas, the default is to skip missing values when computing an aggregate (for a sum this is equivalent to treating them as 0), whereas for NumPy, NaN is returned if any of the values are missing. Here is an illustration:
```
np.mean([1.0,np.NaN,5.9,6])
np.sum([1.0,np.NaN,5.9,6])
```
However, if this data is in a pandas Series, we will get the following output:
```
pd.Series([1.0,np.NaN,5.9,6]).sum()
pd.Series([1.0,np.NaN,5.9,6]).mean()
```
It is important to be aware of this difference in behavior between pandas and NumPy. However, if we wish to get NumPy to behave the same way as pandas, we can use the np.nanmean and np.nansum functions, which are illustrated as follows:
```
np.nanmean([1.0,np.NaN,5.9,6])
np.nansum([1.0,np.NaN,5.9,6])
```
## Handling Missing Values
There are various ways to handle missing values, which are as follows:
__1. By using the ```fillna()``` function to fill in the NA values:__
```
socialTradingVolTSCal
socialTradingVolTSCal.fillna(100)
```
We can also fill values forward or backward by passing the ```method='ffill'``` or ```method='bfill'``` arguments:
```
socialTradingVolTSCal.fillna(method='ffill')
socialTradingVolTSCal.fillna(method='bfill')
```
__2. By using the ```dropna()``` function to drop/delete rows and columns with missing values.__
```
socialTradingVolTSCal.dropna()
```
__3. We can also interpolate and fill in the missing values by using the ```interpolate()``` function__
```
pd.set_option('display.precision',4)
socialTradingVolTSCal.interpolate()
```
The ```interpolate()``` function also takes a ```method``` argument that denotes the interpolation method to use. These methods include linear, quadratic, cubic spline, and so on.
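As a tiny illustration (toy values, not the stock data above), linear interpolation fills the gaps like this:

```python
import numpy as np
import pandas as pd

# A gappy Series; 'linear' is the default method, written explicitly here.
s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])
filled = s.interpolate(method="linear")
print(filled.tolist())  # -> [1.0, 2.0, 3.0, 4.0, 5.0]
```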
## Plotting using matplotlib
The matplotlib API is imported using the standard convention, as shown in the following command: ```import matplotlib.pyplot as plt```
Series and DataFrame have a plot method, which is simply a wrapper around ```plt.plot```. Here, we will examine how we can do a simple plot of a sine and cosine function. Suppose we wished to plot the following functions over the interval $-\pi$ to $\pi$:
$$
f(x) = \cos(x) + \sin(x) \\
g(x) = \sin(x) - \cos(x)
$$
```
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
f,g = np.cos(X)+np.sin(X), np.sin(X)-np.cos(X)
f_ser=pd.Series(f)
g_ser=pd.Series(g)
plotDF=pd.concat([f_ser,g_ser],axis=1)
plotDF.index=X
plotDF.columns=['sin(x)+cos(x)','sin(x)-cos(x)']
plotDF.head()
plotDF.columns=['f(x)','g(x)']
plotDF.plot(title='Plot of f(x)=sin(x)+cos(x), \n g(x)=sin(x)-cos(x)')
plt.show()
plotDF.plot(subplots=True, figsize=(6,6))
plt.show()
```
There is a lot more to using the plotting functionality of matplotlib within pandas. For more information, take a look at the [documentation](http://pandas.pydata.org/pandas-docs/dev/visualization.html)
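For instance, switching to a bar chart only takes a ```kind``` argument (the numbers here are made up for illustration):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt

df = pd.DataFrame({"FB": [82.3, 54.1, 46.0], "TWTR": [15.7, 12.7, 10.4]})
df.plot(kind="bar", title="Trading volume (toy numbers)")
plt.savefig("toy_volume.png")
```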
### Summary
To summarize, we have discussed how to handle missing data values and manipulate dates in pandas. We also took a brief detour to investigate the plotting functionality in pandas using matplotlib .
# Model Zoo -- Rosenblatt Perceptron
Implementation of the classic Perceptron by Frank Rosenblatt for binary classification (here: 0/1 class labels).
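Before the PyTorch version below, the learning rule itself can be sketched in a few lines of NumPy; the toy data and the strict ```> 0``` threshold are illustrative assumptions, not part of the original notebook:

```python
import numpy as np

def perceptron_train(X, y, epochs=5):
    # Rosenblatt rule: on each sample, add (target - prediction) * input to the weights.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = yi - pred  # in {-1, 0, 1}; zero when the sample is already correct
            w += err * xi
            b += err
    return w, b

# Linearly separable toy data: class is 1 when the first feature is large.
X = np.array([[0.0, 0.2], [0.1, 0.9], [1.0, 0.1], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
w, b = perceptron_train(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # -> [0, 0, 1, 1]
```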
## Imports
```
import numpy as np
import matplotlib.pyplot as plt
import torch
%matplotlib inline
```
## Preparing a toy dataset
```
##########################
### DATASET
##########################
data = np.genfromtxt('../data/perceptron_toydata.txt', delimiter='\t')
X, y = data[:, :2], data[:, 2]
y = y.astype(int)  # np.int is removed in modern NumPy; use the builtin int
print('Class label counts:', np.bincount(y))
print('X.shape:', X.shape)
print('y.shape:', y.shape)
# Shuffling & train/test split
shuffle_idx = np.arange(y.shape[0])
shuffle_rng = np.random.RandomState(123)
shuffle_rng.shuffle(shuffle_idx)
X, y = X[shuffle_idx], y[shuffle_idx]
# X and y are already shuffled, so split them by position
X_train, X_test = X[:70], X[70:]
y_train, y_test = y[:70], y[70:]
# Normalize (mean zero, unit variance)
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mu) / sigma
X_test = (X_test - mu) / sigma
plt.scatter(X_train[y_train==0, 0], X_train[y_train==0, 1], label='class 0', marker='o')
plt.scatter(X_train[y_train==1, 0], X_train[y_train==1, 1], label='class 1', marker='s')
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.legend()
plt.show()
```
## Defining the Perceptron model
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def custom_where(cond, x_1, x_2):
# cond = torch.tensor(cond,dtype=torch.float32)
cond=cond.float()
return (cond * x_1) + ((1-cond) * x_2)
class Perceptron():
def __init__(self, num_features):
self.num_features = num_features
self.weights = torch.zeros(num_features, 1,
dtype=torch.float32, device=device)
self.bias = torch.zeros(1, dtype=torch.float32, device=device)
def forward(self, x):
linear = torch.add(torch.mm(x, self.weights), self.bias)
predictions = custom_where(linear > 0., 1, 0)
return predictions
def backward(self, x, y):
predictions = self.forward(x)
errors = y - predictions
return errors
def train(self, x, y, epochs):
for e in range(epochs):
for i in range(y.size()[0]):
# use view because backward expects a matrix (i.e., 2D tensor)
errors = self.backward(x[i].view(1, self.num_features), y[i]).view(-1)
self.weights += (errors * x[i]).view(self.num_features, 1)
self.bias += errors
def evaluate(self, x, y):
predictions = self.forward(x).view(-1)
accuracy = torch.sum(predictions == y).float() / y.size()[0]
return accuracy
```
## Training the Perceptron
```
ppn = Perceptron(num_features=2)
X_train_tensor = torch.tensor(X_train, dtype=torch.float32, device=device)
y_train_tensor = torch.tensor(y_train, dtype=torch.float32, device=device)
ppn.train(X_train_tensor, y_train_tensor, epochs=5)
print('Model parameters:')
print(' Weights: %s' % ppn.weights)
print(' Bias: %s' % ppn.bias)
```
## Evaluating the model
```
X_test_tensor = torch.tensor(X_test, dtype=torch.float32, device=device)
y_test_tensor = torch.tensor(y_test, dtype=torch.float32, device=device)
test_acc = ppn.evaluate(X_test_tensor, y_test_tensor)
print('Test set accuracy: %.2f%%' % (test_acc*100))
##########################
### 2D Decision Boundary
##########################
w, b = ppn.weights, ppn.bias
x_min = -2
y_min = ( (-(w[0] * x_min) - b[0])
/ w[1] )
x_max = 2
y_max = ( (-(w[0] * x_max) - b[0])
/ w[1] )
fig, ax = plt.subplots(1, 2, sharex=True, figsize=(7, 3))
ax[0].plot([x_min, x_max], [y_min, y_max])
ax[1].plot([x_min, x_max], [y_min, y_max])
ax[0].scatter(X_train[y_train==0, 0], X_train[y_train==0, 1], label='class 0', marker='o')
ax[0].scatter(X_train[y_train==1, 0], X_train[y_train==1, 1], label='class 1', marker='s')
ax[1].scatter(X_test[y_test==0, 0], X_test[y_test==0, 1], label='class 0', marker='o')
ax[1].scatter(X_test[y_test==1, 0], X_test[y_test==1, 1], label='class 1', marker='s')
ax[1].legend(loc='upper left')
plt.show()
```
We sometimes want to know where a value is in an array.
```
import numpy as np
```
By "where" we mean which element contains a particular value.
Here is an array.
```
arr = np.array([2, 99, -1, 4, 99])
arr
```
As you know, we can get elements using their *index* in the array. In
Python, array indices start at zero.
Here's the value at index (position) 0:
```
arr[0]
```
We might also be interested to find which positions hold particular values.
In our array above, by reading, and counting positions, we can see
that the values of 99 are in positions 1 and 4. We can ask for these
elements by passing a list or an array between the square brackets, to
index the array:
```
positions_with_99 = np.array([1, 4])
arr[positions_with_99]
```
Of course, we are already used to finding and then selecting elements according to various conditions, using *Boolean vectors*.
Here we identify the elements that contain 99. There is a `True` at the position where the array contains 99, and `False` otherwise.
```
contains_99 = arr == 99
contains_99
```
We can then get the 99 values with:
```
arr[contains_99]
```
## Enter "where"
Sometimes we really do need to know the index of the values that meet a certain condition.
In that case, you can use the Numpy [where
function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html).
`where` finds the index positions of the `True` values in Boolean
vectors.
```
indices = np.where(arr == 99)
indices
```
We can use the returned `indices` to index into the array, using square brackets.
```
arr[indices]
```
This also works in two or more dimensions. Here is a two-dimensional array, with some values of 99.
```
arr2d = np.array([[4, 99, 3], [8, 8, 99]])
arr2d
```
`where` now returns two index arrays, one for the rows, and one for the columns.
```
indices2d = np.where(arr2d == 99)
indices2d
```
Just as for the one-dimensional case, we can use the returned indices to index into the array, and get the elements.
```
arr2d[indices2d]
```
## Where summary
Numpy `where` returns the indices of `True` values in a Boolean array.
You can use these indices to index into an array, and get the matching elements.
## Argmin
Numpy has various *argmin* functions that are a shortcut for using `where`, for particular cases.
A typical case is where you want to know the index (position) of the minimum value in an array.
Here is our array:
```
arr
```
We can get the minimum value with Numpy `min`:
```
np.min(arr)
```
Sometimes we want to know the *index position* of the minimum value. Numpy `argmin` returns the index of the minimum value:
```
min_pos = np.argmin(arr)
min_pos
```
Therefore, we can get the minimum value again with:
```
arr[min_pos]
```
There is a matching `argmax` function that returns the position of the maximum value:
```
np.max(arr)
max_pos = np.argmax(arr)
max_pos
arr[max_pos]
```
We could also have found the position of the minimum value above, using `np.min` and `where`:
```
min_value = np.min(arr)
min_indices = np.where(arr == min_value)
arr[min_indices]
```
The `argmin` and `argmax` functions are not quite the same as the `where` approach, in that they return only the *first* position of the minimum or maximum when that value occurs more than once.
Compare:
```
np.argmax(arr)
```
to
```
max_value = np.max(arr)
np.where(arr == max_value)
```
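To make the contrast concrete with the same array:

```python
import numpy as np

arr = np.array([2, 99, -1, 4, 99])

# argmax reports only the first position holding the maximum ...
first_max = np.argmax(arr)

# ... while where reports every position holding it
all_max = np.where(arr == np.max(arr))[0]

print(first_max)   # 1
print(all_max)     # [1 4]
```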
```
# from utils import *
import tensorflow as tf
import os
import sklearn.datasets
import numpy as np
import re
import collections
import random
from sklearn import metrics
import jieba
# Load the stop-word list
with open(r'stopwords.txt','r',encoding='utf-8') as f:
english_stopwords = f.read().split('\n')
def separate_dataset(trainset, ratio = 0.5):
datastring = []
datatarget = []
for i in range(len(trainset.data)):
# Extract each line of text and drop empty lines
data_ = trainset.data[i].split('\n')
data_ = list(filter(None, data_))
# Sample len(data_) * ratio examples, shuffling the order within the class
data_ = random.sample(data_, int(len(data_) * ratio))
# Remove stop words
for n in range(len(data_)):
data_[n] = clearstring(data_[n])
# Collect all the cleaned strings
datastring += data_
# Attach the class label to each sample
for n in range(len(data_)):
datatarget.append(trainset.target[i])
return datastring, datatarget
def clearstring(string):
# Clean a sample and remove stop words
# Strip characters that are not Chinese, Latin letters, or digits
string = re.sub(r'[^\u4e00-\u9fa5a-zA-Z0-9]', '', string)
string = list(jieba.cut(string, cut_all=False))
string = filter(None, string)
string = [y.strip() for y in string if y.strip() not in english_stopwords]
string = ' '.join(string)
return string.lower()
def str_idx(corpus, dic, maxlen, UNK = 3):
# Map words to their dictionary indices
X = np.zeros((len(corpus), maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i].split()[:maxlen][::-1]):
X[i, -1 - no] = dic.get(k, UNK)
return X
trainset = sklearn.datasets.load_files(container_path = 'dataset', encoding = 'UTF-8')
trainset.data, trainset.target = separate_dataset(trainset,1.0)
print(trainset.target_names)
print(len(trainset.data))
print(len(trainset.target))
import collections
def build_dataset(words, n_words, atleast=1):
# Four special filler tokens
count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
# Keep only words appearing at least `atleast` times
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
# Build the word index
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
for word in words:
# Words missing from the dictionary are mapped to UNK (3)
index = dictionary.get(word, 3)
data.append(index)
# Invert the dictionary
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, dictionary, reversed_dictionary
split = (' '.join(trainset.data)).split()
# The deduplicated set of all words forms the vocabulary
vocabulary_size = len(list(set(split)))
# data holds the index of every word, plus the dictionary and reverse dictionary
data, dictionary, rev_dictionary = build_dataset(split, vocabulary_size)
len(dictionary)
def build_char_dataset(words):
# No special filler tokens for characters
count = []
dictionary = dict()
# Build the character index
for word in words:
dictionary[word] = len(dictionary)
data = list()
for word in words:
# Characters missing from the dictionary are mapped to 3
index = dictionary.get(word, 3)
data.append(index)
# Invert the dictionary
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, dictionary, reversed_dictionary
# Build a dictionary over all Chinese characters (3912 characters)
char_split = list(set(list(''.join(trainset.data))))
length = len(char_split)
char_data, char_dictionary, char_rev_dictionary = build_char_dataset(char_split)
# Encode the Chinese characters
class Vocabulary:
def __init__(self, dictionary, rev_dictionary):
self._dictionary = dictionary
self._rev_dictionary = rev_dictionary
# Start-of-sequence token
@property
def start_string(self):
return self._dictionary['GO']
# End-of-sequence token
@property
def end_string(self):
return self._dictionary['EOS']
# Unknown-word token
@property
def unk(self):
return self._dictionary['UNK']
@property
def size(self):
return len(self._dictionary)
# Look up the numeric index of a word
def word_to_id(self, word):
return self._dictionary.get(word, self.unk)
# Look up a word by its numeric index
def id_to_word(self, cur_id):
return self._rev_dictionary.get(cur_id, self._rev_dictionary[3])
# Decode numeric indices into words and join them into a string
def decode(self, cur_ids):
return ' '.join([self.id_to_word(cur_id) for cur_id in cur_ids])
# Encode a string into numeric indices
def encode(self, sentence, reverse = False, split = True):
if split:
sentence = sentence.split()
# Convert the text into numeric indices
word_ids = [self.word_to_id(cur_word) for cur_word in sentence]
# Add start and end tokens; both encoding directions are supported
if reverse:
return np.array(
[self.end_string] + word_ids + [self.start_string],
dtype = np.int32,
)
else:
return np.array(
[self.start_string] + word_ids + [self.end_string],
dtype = np.int32,
)
# Letters, digits, and symbols use the custom encoding; Chinese characters are encoded from 0. Chinese phrases rarely exceed 8 characters, so max_word_length = 10 suffices
class UnicodeCharsVocabulary(Vocabulary):
def __init__(self, dictionary, rev_dictionary,char_dictionary, char_rev_dictionary, max_word_length, **kwargs):
super(UnicodeCharsVocabulary, self).__init__(
dictionary, rev_dictionary, **kwargs
)
# Maximum word length
self._max_word_length = max_word_length
self._char_dictionary = char_dictionary
self._char_rev_dictionary = char_rev_dictionary
self.bos_char = 3912
self.eos_char = 3913
self.bow_char = 3914
self.eow_char = 3915
self.pad_char = 3916
self.unk_char = 3917
# Number of words
num_words = self.size
# Character-level lookup table, shape [num_words, max_word_length]
self._word_char_ids = np.zeros(
[num_words, max_word_length], dtype = np.int32
)
# Build the BOS/EOS character patterns: a length-_max_word_length array filled with 3916 (pad),
# with 3914 (begin-of-word) at position 0, 3915 (end-of-word) at position 2, and the given character at position 1
def _make_bos_eos(c):
r = np.zeros([self._max_word_length], dtype = np.int32)
r[:] = self.pad_char
r[0] = self.bow_char
r[1] = c
r[2] = self.eow_char
return r
# Materialize the patterns
self.bos_chars = _make_bos_eos(self.bos_char)
self.eos_chars = _make_bos_eos(self.eos_char)
# Encode every word in the dictionary at the character level
for i, word in enumerate(self._dictionary.keys()):
self._word_char_ids[i] = self._convert_word_to_char_ids(word)
# Encode the start token GO and the end token EOS
self._word_char_ids[self.start_string] = self.bos_chars
self._word_char_ids[self.end_string] = self.eos_chars
@property
def word_char_ids(self):
return self._word_char_ids
@property
def max_word_length(self):
return self._max_word_length
# Convert a word into character-level indices
def _convert_word_to_char_ids(self, word):
# Initialize the code array for the input word
code = np.zeros([self.max_word_length], dtype = np.int32)
code[:] = self.pad_char
# Keep at most max_word_length - 2 characters and map each to its character id
word_encoded = [self._char_dictionary.get(item,self.unk_char) for item in list(word)][:(self.max_word_length - 2)]
# The first slot is the begin-of-word marker (3914)
code[0] = self.bow_char
# Iterate over the word's characters, with k starting at 1
for k, chr_id in enumerate(word_encoded, start = 1):
code[k] = chr_id
# Append the end-of-word marker (3915) after the last character
code[len(word_encoded) + 1] = self.eow_char
return code
# Convert a word into the custom character encoding
def word_to_char_ids(self, word):
if word in self._dictionary:
return self._word_char_ids[self._dictionary[word]]
else:
return self._convert_word_to_char_ids(word)
# Convert a sentence into a matrix of custom character encodings
def encode_chars(self, sentence, reverse = False, split = True):
if split:
sentence = sentence.split()
chars_ids = [self.word_to_char_ids(cur_word) for cur_word in sentence]
if reverse:
return np.vstack([self.eos_chars] + chars_ids + [self.bos_chars])
else:
return np.vstack([self.bos_chars] + chars_ids + [self.eos_chars])
def _get_batch(generator, batch_size, num_steps, max_word_length):
# generator: sentence generator
# batch_size: number of strings per batch
# num_steps: window size
# max_word_length: maximum word length, typically 50
# Initialize batch_size string streams
cur_stream = [None] * batch_size
no_more_data = False
while True:
# Zero-initialize the word input matrix [batch_size, num_steps]
inputs = np.zeros([batch_size, num_steps], np.int32)
# Zero-initialize the character-level input matrix
if max_word_length is not None:
char_inputs = np.zeros(
[batch_size, num_steps, max_word_length], np.int32
)
else:
char_inputs = None
# Zero-initialize the word target matrix [batch_size, num_steps]
targets = np.zeros([batch_size, num_steps], np.int32)
for i in range(batch_size):
cur_pos = 0
while cur_pos < num_steps:
if cur_stream[i] is None or len(cur_stream[i][0]) <= 1:
try:
# At each step fetch the next (word indices, character encodings) pair
cur_stream[i] = list(next(generator))
except StopIteration:
no_more_data = True
break
# how_many: the smaller of the remaining window and the remaining tokens
how_many = min(len(cur_stream[i][0]) - 1, num_steps - cur_pos)
next_pos = cur_pos + how_many
# Fill the word-index and character-index slices of the inputs
inputs[i, cur_pos:next_pos] = cur_stream[i][0][:how_many]
if max_word_length is not None:
char_inputs[i, cur_pos:next_pos] = cur_stream[i][1][
:how_many
]
# targets: ELMo is optimized to predict the next word, so the target is the input shifted right by one word
targets[i, cur_pos:next_pos] = cur_stream[i][0][
1 : how_many + 1
]
cur_pos = next_pos
# Move on to the next segment; each segment's length is how_many, i.e. the window width
cur_stream[i][0] = cur_stream[i][0][how_many:]
if max_word_length is not None:
cur_stream[i][1] = cur_stream[i][1][how_many:]
if no_more_data:
break
X = {
'token_ids': inputs,
'tokens_characters': char_inputs,
'next_token_id': targets,
}
yield X
class LMDataset:
def __init__(self, string, vocab, reverse = False):
self._vocab = vocab
self._string = string
self._reverse = reverse
self._use_char_inputs = hasattr(vocab, 'encode_chars')
self._i = 0
# Total number of texts
self._nids = len(self._string)
def _load_string(self, string):
if self._reverse:
string = string.split()
string.reverse()
string = ' '.join(string)
# Parse a text into word indices, adding start and end markers
ids = self._vocab.encode(string, self._reverse)
# Parse the text into character-level encodings
if self._use_char_inputs:
chars_ids = self._vocab.encode_chars(string, self._reverse)
else:
chars_ids = None
# Return a tuple of word indices and character encodings
return list(zip([ids], [chars_ids]))[0]
# Generator that cycles through the samples, yielding each one's word indices and character encodings
def get_sentence(self):
while True:
if self._i == self._nids:
self._i = 0
ret = self._load_string(self._string[self._i])
self._i += 1
yield ret
@property
def max_word_length(self):
if self._use_char_inputs:
return self._vocab.max_word_length
else:
return None
# Batch generator: fetch and process only batch_size items at a time, on demand
def iter_batches(self, batch_size, num_steps):
for X in _get_batch(
self.get_sentence(), batch_size, num_steps, self.max_word_length
):
yield X
@property
def vocab(self):
return self._vocab
# Bidirectional encoding
class BidirectionalLMDataset:
def __init__(self, string, vocab):
# Forward and reverse encodings
self._data_forward = LMDataset(string, vocab, reverse = False)
self._data_reverse = LMDataset(string, vocab, reverse = True)
def iter_batches(self, batch_size, num_steps):
max_word_length = self._data_forward.max_word_length
for X, Xr in zip(
_get_batch(
self._data_forward.get_sentence(),
batch_size,
num_steps,
max_word_length,
),
_get_batch(
self._data_reverse.get_sentence(),
batch_size,
num_steps,
max_word_length,
),
):
# Merge into a six-item dict: three forward entries followed by three reverse entries
for k, v in Xr.items():
X[k + '_reverse'] = v
yield X
# max_word_length = 10: clearly no word exceeds 8 characters, and two slots are reserved for markers
uni = UnicodeCharsVocabulary(dictionary, rev_dictionary,char_dictionary,char_rev_dictionary, 10)
bi = BidirectionalLMDataset(trainset.data, uni)
# Feed only 16 samples at a time
batch_size = 16
# Vocabulary size used for training
n_train_tokens = len(dictionary)
# Language-model configuration
options = {
# Enable bidirectional encoding
'bidirectional': True,
# Character-level CNN: 128-dim character embeddings, 7 filter types, at most 10 characters per token, 3918 valid character codes, and two highway layers
'char_cnn': {
'activation': 'relu',
'embedding': {'dim': 128},
'filters': [
[1, 32],
[2, 32],
[3, 64],
[4, 128],
[5, 256],
[6, 512],
[7, 1024],
],
'max_characters_per_token': 10,
'n_characters': 3918,
'n_highway': 2,
},
# Dropout rate of 0.1
'dropout': 0.1,
# LSTM cells: two 512-unit layers with a 256-dim projection
'lstm': {
# Cell clipping value
'cell_clip': 3,
'dim': 512,
'n_layers': 2,
'projection_dim': 256,
# Clip projections to [-3, 3]
'proj_clip': 3,
'use_skip_connections': True,
},
# Train for 100 epochs
'n_epochs': 100,
# Size of the training vocabulary
'n_train_tokens': n_train_tokens,
# Size of each batch
'batch_size': batch_size,
# Total number of words
'n_tokens_vocab': uni.size,
# Unroll window of 20 steps
'unroll_steps': 20,
'n_negative_samples_batch': 0.001,
'sample_softmax': True,
'share_embedding_softmax': False,
}
# Build the ELMo language model
class LanguageModel:
def __init__(self, options, is_training):
self.options = options
self.is_training = is_training
self.bidirectional = options.get('bidirectional', False)
self.char_inputs = 'char_cnn' in self.options
self.share_embedding_softmax = options.get(
'share_embedding_softmax', False
)
if self.char_inputs and self.share_embedding_softmax:
raise ValueError(
'Sharing softmax and embedding weights requires ' 'word input'
)
self.sample_softmax = options.get('sample_softmax', False)
# Build the graph
self._build()
# Configure the learning rate
lr = options.get('learning_rate', 0.2)
# Configure the optimizer
self.optimizer = tf.train.AdagradOptimizer(
learning_rate = lr, initial_accumulator_value = 1.0
).minimize(self.total_loss)
def _build_word_embeddings(self):
# Build the word embeddings
# Load the full vocabulary size
n_tokens_vocab = self.options['n_tokens_vocab']
batch_size = self.options['batch_size']
# Context window size: 20 words are related here
unroll_steps = self.options['unroll_steps']
# Word-embedding dimension (projection_dim)
projection_dim = self.options['lstm']['projection_dim']
# Word indices
self.token_ids = tf.placeholder(
tf.int32, shape = (None, unroll_steps), name = 'token_ids'
)
self.batch_size = tf.shape(self.token_ids)[0]
with tf.device('/cpu:0'):
# 256-dim word encodings, initialized uniformly in (-1, 1)
self.embedding_weights = tf.get_variable(
'embedding',
[n_tokens_vocab, projection_dim],
dtype = tf.float32,
initializer = tf.random_uniform_initializer(-1.0, 1.0),
)
# Embeddings for the 20 words in the window
self.embedding = tf.nn.embedding_lookup(
self.embedding_weights, self.token_ids
)
# Enable bidirectional encoding
if self.bidirectional:
self.token_ids_reverse = tf.placeholder(
tf.int32,
shape = (None, unroll_steps),
name = 'token_ids_reverse',
)
with tf.device('/cpu:0'):
self.embedding_reverse = tf.nn.embedding_lookup(
self.embedding_weights, self.token_ids_reverse
)
def _build_word_char_embeddings(self):
batch_size = self.options['batch_size']
unroll_steps = self.options['unroll_steps']
projection_dim = self.options['lstm']['projection_dim']
cnn_options = self.options['char_cnn']
filters = cnn_options['filters']
# Total number of filters
n_filters = sum(f[1] for f in filters)
# Maximum characters per token
max_chars = cnn_options['max_characters_per_token']
# Character-embedding dimension: 128
char_embed_dim = cnn_options['embedding']['dim']
# Number of character types (3918 here)
n_chars = cnn_options['n_characters']
# Configure the activation function
if cnn_options['activation'] == 'tanh':
activation = tf.nn.tanh
elif cnn_options['activation'] == 'relu':
activation = tf.nn.relu
# [batch_size,unroll_steps,max_chars]
self.tokens_characters = tf.placeholder(
tf.int32,
shape = (None, unroll_steps, max_chars),
name = 'tokens_characters',
)
self.batch_size = tf.shape(self.tokens_characters)[0]
with tf.device('/cpu:0'):
# Character-level embeddings, 128-dim
self.embedding_weights = tf.get_variable(
'char_embed',
[n_chars, char_embed_dim],
dtype = tf.float32,
initializer = tf.random_uniform_initializer(-1.0, 1.0),
)
self.char_embedding = tf.nn.embedding_lookup(
self.embedding_weights, self.tokens_characters
)
if self.bidirectional:
self.tokens_characters_reverse = tf.placeholder(
tf.int32,
shape = (None, unroll_steps, max_chars),
name = 'tokens_characters_reverse',
)
self.char_embedding_reverse = tf.nn.embedding_lookup(
self.embedding_weights, self.tokens_characters_reverse
)
# Convolution stack for the character-level CNN
def make_convolutions(inp, reuse):
with tf.variable_scope('CNN', reuse = reuse) as scope:
convolutions = []
# Build the seven convolution branches
for i, (width, num) in enumerate(filters):
if cnn_options['activation'] == 'relu':
w_init = tf.random_uniform_initializer(
minval = -0.05, maxval = 0.05
)
elif cnn_options['activation'] == 'tanh':
w_init = tf.random_normal_initializer(
mean = 0.0,
stddev = np.sqrt(1.0 / (width * char_embed_dim)),
)
w = tf.get_variable(
'W_cnn_%s' % i,
[1, width, char_embed_dim, num],
initializer = w_init,
dtype = tf.float32,
)
b = tf.get_variable(
'b_cnn_%s' % i,
[num],
dtype = tf.float32,
initializer = tf.constant_initializer(0.0),
)
# Convolution over (unroll_steps, max_chars) with 1x1, 1x2, ..., 1x7 kernels and VALID padding:
# the width stays unroll_steps ((unroll_steps - 1)/1 + 1),
# the height becomes (max_chars - width)/1 + 1, capturing character-level correlations within words
conv = (
tf.nn.conv2d(
inp, w, strides = [1, 1, 1, 1], padding = 'VALID'
)
+ b
)
# Max-pool over each word's character positions
conv = tf.nn.max_pool(
conv,
[1, 1, max_chars - width + 1, 1],
[1, 1, 1, 1],
'VALID',
)
conv = activation(conv)
# Squeeze the third dimension: input [batch_size, unroll_steps, 1, num]
# output [batch_size, unroll_steps, num]
conv = tf.squeeze(conv, squeeze_dims = [2])
# Collect each convolution branch for concatenation
convolutions.append(conv)
return tf.concat(convolutions, 2)
reuse = tf.get_variable_scope().reuse
# inp [batch_size,uroll_nums,characters_nums,embedding_size]
embedding = make_convolutions(self.char_embedding, reuse)
# [batch_size, 20, 2048] (verified)
# Keep a list of the embedding layers
self.token_embedding_layers = [embedding]
if self.bidirectional:
embedding_reverse = make_convolutions(
self.char_embedding_reverse, True
)
# Number of highway layers
n_highway = cnn_options.get('n_highway')
use_highway = n_highway is not None and n_highway > 0
# use_proj is True here
use_proj = n_filters != projection_dim
# Flatten to 2-D so the highway/projection layers apply per token
if use_highway or use_proj:
embedding = tf.reshape(embedding, [-1, n_filters])
if self.bidirectional:
embedding_reverse = tf.reshape(
embedding_reverse, [-1, n_filters]
)
if use_proj:
# Project the filter outputs down into a projection_dim-dimensional space
assert n_filters > projection_dim
with tf.variable_scope('CNN_proj') as scope:
W_proj_cnn = tf.get_variable(
'W_proj',
[n_filters, projection_dim],
initializer = tf.random_normal_initializer(
mean = 0.0, stddev = np.sqrt(1.0 / n_filters)
),
dtype = tf.float32,
)
b_proj_cnn = tf.get_variable(
'b_proj',
[projection_dim],
initializer = tf.constant_initializer(0.0),
dtype = tf.float32,
)
def high(x, ww_carry, bb_carry, ww_tr, bb_tr):
carry_gate = tf.nn.sigmoid(tf.matmul(x, ww_carry) + bb_carry)
transform_gate = tf.nn.relu(tf.matmul(x, ww_tr) + bb_tr)
return carry_gate * transform_gate + (1.0 - carry_gate) * x
if use_highway:
# The highway dimension equals n_filters (2048)
highway_dim = n_filters
for i in range(n_highway):
with tf.variable_scope('CNN_high_%s' % i) as scope:
W_carry = tf.get_variable(
'W_carry',
[highway_dim, highway_dim],
initializer = tf.random_normal_initializer(
mean = 0.0, stddev = np.sqrt(1.0 / highway_dim)
),
dtype = tf.float32,
)
b_carry = tf.get_variable(
'b_carry',
[highway_dim],
initializer = tf.constant_initializer(-2.0),
dtype = tf.float32,
)
W_transform = tf.get_variable(
'W_transform',
[highway_dim, highway_dim],
initializer = tf.random_normal_initializer(
mean = 0.0, stddev = np.sqrt(1.0 / highway_dim)
),
dtype = tf.float32,
)
b_transform = tf.get_variable(
'b_transform',
[highway_dim],
initializer = tf.constant_initializer(0.0),
dtype = tf.float32,
)
embedding = high(
embedding, W_carry, b_carry, W_transform, b_transform
)
if self.bidirectional:
embedding_reverse = high(
embedding_reverse,
W_carry,
b_carry,
W_transform,
b_transform,
)
# Record the embedding after the highway layers
self.token_embedding_layers.append(
tf.reshape(
embedding, [self.batch_size, unroll_steps, highway_dim]
)
)
# After a linear transform: [batch_size, unroll_steps, projection_dim]
if use_proj:
embedding = tf.matmul(embedding, W_proj_cnn) + b_proj_cnn
if self.bidirectional:
embedding_reverse = (
tf.matmul(embedding_reverse, W_proj_cnn) + b_proj_cnn
)
# Record the linearly projected embedding
self.token_embedding_layers.append(
tf.reshape(
embedding, [self.batch_size, unroll_steps, projection_dim]
)
)
# Restore the matrices to the same 3-D shape
if use_highway or use_proj:
shp = [self.batch_size, unroll_steps, projection_dim]
embedding = tf.reshape(embedding, shp)
if self.bidirectional:
embedding_reverse = tf.reshape(embedding_reverse, shp)
# Projected embedding: [batch_size, unroll_steps, projection_dim]
# self.token_embedding_layers collects the embedding at each stage:
# [batch_size, unroll_steps, n_filters] raw CNN embedding
# [batch_size, unroll_steps, highway_dim] after the highway layers
# [batch_size, unroll_steps, projection_dim] after the low-dimensional linear projection
# print(embedding)
# print(self.token_embedding_layers)
self.embedding = embedding
if self.bidirectional:
self.embedding_reverse = embedding_reverse
# Build the model
def _build(self):
# Total number of words
n_tokens_vocab = self.options['n_tokens_vocab']
batch_size = self.options['batch_size']
# Window length
unroll_steps = self.options['unroll_steps']
# LSTM encoding size
lstm_dim = self.options['lstm']['dim']
projection_dim = self.options['lstm']['projection_dim']
# Number of LSTM layers
n_lstm_layers = self.options['lstm'].get('n_layers', 1)
dropout = self.options['dropout']
# Keep probability
keep_prob = 1.0 - dropout
# With character inputs, build word + character embeddings; otherwise build plain word embeddings (the former is used here)
if self.char_inputs:
self._build_word_char_embeddings()
else:
self._build_word_embeddings()
# Store the LSTM states
self.init_lstm_state = []
self.final_lstm_state = []
# Bidirectional case:
# lstm_inputs holds [batch_size, unroll_steps, projection_dim] tensors for both directions
if self.bidirectional:
lstm_inputs = [self.embedding, self.embedding_reverse]
else:
lstm_inputs = [self.embedding]
cell_clip = self.options['lstm'].get('cell_clip')
proj_clip = self.options['lstm'].get('proj_clip')
use_skip_connections = self.options['lstm'].get('use_skip_connections')
print(lstm_inputs)
lstm_outputs = []
for lstm_num, lstm_input in enumerate(lstm_inputs):
lstm_cells = []
for i in range(n_lstm_layers):
# After the LSTM encoding, a num_proj fully connected layer follows:
# [batch_size, num_proj]
lstm_cell = tf.nn.rnn_cell.LSTMCell(
# Number of hidden units
lstm_dim,
num_proj = lstm_dim // 2,
cell_clip = cell_clip,
proj_clip = proj_clip,
)
if use_skip_connections:
if i == 0:
pass
else:
# Residual connection: map the previous cell's output plus the current input into the next cell
lstm_cell = tf.nn.rnn_cell.ResidualWrapper(lstm_cell)
# Add a dropout layer
if self.is_training:
lstm_cell = tf.nn.rnn_cell.DropoutWrapper(
lstm_cell, input_keep_prob = keep_prob
)
lstm_cells.append(lstm_cell)
# Stack multiple LSTM layers
if n_lstm_layers > 1:
lstm_cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cells)
else:
lstm_cell = lstm_cells[0]
with tf.control_dependencies([lstm_input]):
# Initialize the state
self.init_lstm_state.append(
lstm_cell.zero_state(self.batch_size, tf.float32)
)
if self.bidirectional:
with tf.variable_scope('RNN_%s' % lstm_num):
# Unroll the RNN to get the per-step outputs and the final hidden state, so the forward and reverse LSTM cells can be paired up
_lstm_output_unpacked, final_state = tf.nn.static_rnn(
lstm_cell,
# Split the per-word tensors to feed the LSTM
tf.unstack(lstm_input, axis = 1),
initial_state = self.init_lstm_state[-1],
)
else:
_lstm_output_unpacked, final_state = tf.nn.static_rnn(
lstm_cell,
tf.unstack(lstm_input, axis = 1),
initial_state = self.init_lstm_state[-1],
)
self.final_lstm_state.append(final_state)
# [batch_size,num_proj]
# print(final_state)
# Stack the per-step outputs of a hidden layer: [batch_size, 20, 256]
lstm_output_flat = tf.reshape(
tf.stack(_lstm_output_unpacked, axis = 1), [-1, projection_dim]
)
print(lstm_output_flat)
tf.add_to_collection(
'lstm_output_embeddings', _lstm_output_unpacked
)
lstm_outputs.append(lstm_output_flat)
self._build_loss(lstm_outputs)
# Build the loss function
def _build_loss(self, lstm_outputs):
batch_size = self.options['batch_size']
unroll_steps = self.options['unroll_steps']
# Total number of words
n_tokens_vocab = self.options['n_tokens_vocab']
def _get_next_token_placeholders(suffix):
name = 'next_token_id' + suffix
id_placeholder = tf.placeholder(
tf.int32, shape = (None, unroll_steps), name = name
)
return id_placeholder
self.next_token_id = _get_next_token_placeholders('')
# [batch_size, unroll_steps] words are drawn each time
print(self.next_token_id)
if self.bidirectional:
self.next_token_id_reverse = _get_next_token_placeholders(
'_reverse'
)
# The softmax input dimension is projection_dim (256)
softmax_dim = self.options['lstm']['projection_dim']
# Share weights with the word embedding
if self.share_embedding_softmax:
self.softmax_W = self.embedding_weights
# Initialize the softmax parameters
with tf.variable_scope('softmax'), tf.device('/cpu:0'):
softmax_init = tf.random_normal_initializer(
0.0, 1.0 / np.sqrt(softmax_dim)
)
# Distribute the softmax over every word
if not self.share_embedding_softmax:
self.softmax_W = tf.get_variable(
'W',
[n_tokens_vocab, softmax_dim],
dtype = tf.float32,
initializer = softmax_init,
)
self.softmax_b = tf.get_variable(
'b',
[n_tokens_vocab],
dtype = tf.float32,
initializer = tf.constant_initializer(0.0),
)
self.individual_losses = []
if self.bidirectional:
next_ids = [self.next_token_id, self.next_token_id_reverse]
else:
next_ids = [self.next_token_id]
print(lstm_outputs)
self.output_scores = tf.identity(lstm_outputs, name = 'softmax_score')
print(self.output_scores)
for id_placeholder, lstm_output_flat in zip(next_ids, lstm_outputs):
next_token_id_flat = tf.reshape(id_placeholder, [-1, 1])
with tf.control_dependencies([lstm_output_flat]):
if self.is_training and self.sample_softmax:
losses = tf.nn.sampled_softmax_loss(
self.softmax_W,
self.softmax_b,
next_token_id_flat,
lstm_output_flat,
int(
self.options['n_negative_samples_batch']
* self.options['n_tokens_vocab']
),
self.options['n_tokens_vocab'],
num_true = 1,
)
else:
output_scores = (
tf.matmul(
lstm_output_flat, tf.transpose(self.softmax_W)
)
+ self.softmax_b
)
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits = self.output_scores,
labels = tf.squeeze(
next_token_id_flat, squeeze_dims = [1]
),
)
self.individual_losses.append(tf.reduce_mean(losses))
if self.bidirectional:
self.total_loss = 0.5 * (
self.individual_losses[0] + self.individual_losses[1]
)
else:
self.total_loss = self.individual_losses[0]
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = LanguageModel(options, True)
sess.run(tf.global_variables_initializer())
from tqdm import tqdm
def _get_feed_dict_from_X(X, model, char_inputs, bidirectional):
feed_dict = {}
if not char_inputs:
token_ids = X['token_ids']
feed_dict[model.token_ids] = token_ids
else:
char_ids = X['tokens_characters']
feed_dict[model.tokens_characters] = char_ids
if bidirectional:
if not char_inputs:
feed_dict[model.token_ids_reverse] = X['token_ids_reverse']
else:
feed_dict[model.tokens_characters_reverse] = X['tokens_characters_reverse']
next_id_placeholders = [[model.next_token_id, '']]
if bidirectional:
next_id_placeholders.append([model.next_token_id_reverse, '_reverse'])
for id_placeholder, suffix in next_id_placeholders:
name = 'next_token_id' + suffix
feed_dict[id_placeholder] = X[name]
return feed_dict
bidirectional = options.get('bidirectional', False)
batch_size = options['batch_size']
unroll_steps = options['unroll_steps']
n_train_tokens = options.get('n_train_tokens')
n_tokens_per_batch = batch_size * unroll_steps
n_batches_per_epoch = int(n_train_tokens / n_tokens_per_batch)
n_batches_total = options['n_epochs'] * n_batches_per_epoch
init_state_tensors = model.init_lstm_state
final_state_tensors = model.final_lstm_state
char_inputs = 'char_cnn' in options
if char_inputs:
max_chars = options['char_cnn']['max_characters_per_token']
feed_dict = {
model.tokens_characters: np.zeros(
[batch_size, unroll_steps, max_chars], dtype = np.int32
)
}
else:
feed_dict = {model.token_ids: np.zeros([batch_size, unroll_steps])}
if bidirectional:
if char_inputs:
feed_dict.update(
{
model.tokens_characters_reverse: np.zeros(
[batch_size, unroll_steps, max_chars], dtype = np.int32
)
}
)
else:
feed_dict.update(
{
model.token_ids_reverse: np.zeros(
[batch_size, unroll_steps], dtype = np.int32
)
}
)
init_state_values = sess.run(init_state_tensors, feed_dict = feed_dict)
data_gen = bi.iter_batches(batch_size, unroll_steps)
pbar = tqdm(range(n_batches_total), desc = 'train minibatch loop')
for p in pbar:
    batch = next(data_gen)
    feed_dict = {t: v for t, v in zip(init_state_tensors, init_state_values)}
    feed_dict.update(_get_feed_dict_from_X(batch, model, char_inputs, bidirectional))
    score, loss, _, init_state_values = sess.run(
        [model.output_scores, model.total_loss, model.optimizer, final_state_tensors],
        feed_dict = feed_dict,
    )
    pbar.set_postfix(cost = loss)
word_embed = model.softmax_W.eval()
from scipy.spatial.distance import cdist
from sklearn.neighbors import NearestNeighbors
word = '金轮'
nn = NearestNeighbors(n_neighbors = 10, metric = 'cosine').fit(word_embed)
distances, idx = nn.kneighbors(word_embed[dictionary[word]].reshape((1, -1)))
word_list = []
for i in range(1, idx.shape[1]):
    word_list.append([rev_dictionary[idx[0, i]], 1 - distances[0, i]])
word_list
```
# Base dos dados
Base dos Dados is a Brazilian project that consolidates datasets into a common repository with easy-to-follow code.
Download *clean, integrated and updated* datasets easily through SQL, Python, R or the CLI (Stata support is in development). With Base dos Dados you are free to:
- download whole tables
- download tables partially
- cross datasets and download the final product
This Jupyter Notebook aims to help Python coders use the main functionalities of the platform. If you have just heard about Base dos Dados, this is the place to start your journey.
This manual is divided into three sections:
- Installation and starting commands
- Using Base dos Dados with Pandas package
- Examples
It is recommended to know the Pandas package before using *basedosdados*, since data will be stored as dataframes.
If you have any feedback about this material, please send me an email and I will gladly help: tomasdanobrega@gmail.com
### 1. Installation and starting commands
Base dos Dados is available on pip; to install it, run the following code:
```
#!pip install basedosdados==1.3.0a3
```
Using an exclamation mark before the command passes it to the shell (not to the Python interpreter). If you prefer, you can type *pip install basedosdados* directly in your command prompt.<br>
Before using *basedosdados* you have to import it:
```
import basedosdados as bd
```
Now that you have installed and imported the package it is time to see what data is available. The following command will list all current datasets.<br>
<br>
Note: the first time you execute a command in *basedosdados* it will prompt you to login with your Google Cloud. This is because results are delivered through BigQuery (which belongs to Google). BigQuery provides a reliable and quick service for managing data. It is free unless you require more than 1TB per month. If this happens to you just follow the on-screen instructions.
```
bd.list_datasets()
```
You can inspect the data you want with *get_dataset_description*. We will use *br_ibge_pib* as an example:
```
bd.get_dataset_description('br_ibge_pib')
```
Each *dataset* contains tables; to see which tables are available, you can use the following command:
```
bd.list_dataset_tables('br_ibge_pib')
```
To see which information is in a given table, you can run the following command:
```
bd.get_table_description('br_ibge_pib', 'municipios')
```
With these three commands you are set and know what data is available, so you can create your dataframe and start your analysis! In this example we will generate a dataframe called *df* with information regarding Brazilian municipalities' GDP. The data is stored as a dataframe object and works with the same commands used in Pandas.
```
df = bd.read_table('br_ibge_pib', 'municipios')
# Note: depending on how you configured your Google Cloud setup, it might be necessary to add a billing project ID to the request,
# e.g. `bd.read_table('br_ibge_pib', 'municipios', billing_project_id=<YOUR_PROJECT_ID>)`
```
### Example
In this section we will do a quick exploratory example on the dataset we have been using. <br>
First, let's take another look at the table:
```
df.head()
```
#### Example 1: São Paulo
The cities are indexed by their ID. You can see the correspondence on the following website: https://www.ibge.gov.br/explica/codigos-dos-municipios.php <br>
<br>
We will take a look at São Paulo city with the following ID: 3550308<br>
Creating a dataframe with São Paulo information:
```
sao_paulo = df[df.id_municipio == 3550308]
```
Looking at the table we created, we can check that everything works as expected.
```
sao_paulo
```
Pandas has some plotting functions built in, and we can use them to visualize the data further:
```
sao_paulo.plot(x='ano', y='PIB')
```
Since the tables are in nominal prices, it might be interesting to see the results on a logarithmic scale. For that we can use the following code:
```
import numpy as np
#Adding new columns:
df['log_pib'] = df.PIB.apply(np.log10)
#Separating Sao Paulo again, now it will come with the new columns:
sao_paulo = df[df.id_municipio == 3550308]
```
Analyzing data the way we did before:
```
sao_paulo
sao_paulo.plot(x='ano',y='log_pib')
```
#### Example 2: 2015
If you want to check all value added to GDP by the services sector in the year 2015 you could use the following code:
```
# First separating 2015 data only:
municipios2015 = df[df.ano == 2015].copy()  # .copy() avoids pandas' SettingWithCopyWarning when adding a column below
#Adding logarithmic column:
municipios2015['log_VA_servicos'] = municipios2015.VA_servicos.apply(np.log10)
#Visualizing dataframe as histogram:
municipios2015[municipios2015['log_VA_servicos'] > 0].hist('log_VA_servicos')
```
In the last piece of code we use the condition '> 0' because a city may have a value of zero in *VA_servicos*, and the logarithm of zero evaluates to -inf.
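As a quick standalone NumPy illustration (toy values, not the dataset itself) of why that filter is needed:

```python
import numpy as np

# A zero in the data becomes -inf under log10 (NumPy emits a "divide by zero"
# warning instead of raising), which would distort the histogram if kept.
with np.errstate(divide='ignore'):
    vals = np.log10(np.array([0.0, 1.0, 1000.0]))
print(vals)   # first entry is -inf, the others are 0.0 and 3.0
print(vals[vals > 0])   # filtering with '> 0' keeps only the finite, positive entries
```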
# Project 3: Implement SLAM
---
## Project Overview
In this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional grid world!
SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem.
Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`.
> `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world
You can implement helper functions as you see fit, but your function must return `mu`. The vector, `mu`, should have (x, y) coordinates interlaced, for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position:
```
mu = matrix([[Px0],
[Py0],
[Px1],
[Py1],
[Lx0],
[Ly0],
[Lx1],
[Ly1]])
```
You can see that `mu` holds the poses first `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider a `nx1` matrix to be a vector.
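For illustration, here is a minimal sketch (plain Python lists rather than the project's matrix type) of unpacking such an interlaced vector into poses and landmarks:

```python
# Sketch: unpack an interlaced vector [Px0, Py0, ..., Lx0, Ly0, ...]
# into (x, y) pairs, assuming N poses followed by num_landmarks landmarks.
def unpack_mu(mu, N, num_landmarks):
    pairs = [(mu[2 * i], mu[2 * i + 1]) for i in range(N + num_landmarks)]
    return pairs[:N], pairs[N:]          # poses, landmarks

mu = [50.0, 50.0, 37.9, 33.9, 82.7, 13.4, 70.4, 74.2]   # 2 poses, 2 landmarks
poses, landmarks = unpack_mu(mu, 2, 2)
print(poses)      # [(50.0, 50.0), (37.9, 33.9)]
print(landmarks)  # [(82.7, 13.4), (70.4, 74.2)]
```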
## Generating an environment
In a real SLAM problem, you may be given a map that contains information about landmark locations. In this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some number of time steps. The `make_data` function relies on a correct implementation of the robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.
---
## Create the world
Use the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds!
`data` holds the sensors measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`.
#### Helper functions
You will be working with the `robot` class, which may look familiar from the first notebook.
In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook.
```
import numpy as np
from helpers import make_data
# your implementation of slam should work with the following inputs
# feel free to change these input values and see how it responds!
# world parameters
num_landmarks = 5 # number of landmarks
N = 20 # time steps
world_size = 100.0 # size of world (square)
# robot parameters
measurement_range = 50.0 # range at which we can sense landmarks
motion_noise = 2.0 # noise in robot motion
measurement_noise = 2.0 # noise in the measurements
distance = 20.0 # distance by which robot (intends to) move each iteration
# world parameters
# num_landmarks = 1 # number of landmarks
# N = 2 # time steps
# world_size = 10.0 # size of world (square)
# # robot parameters
# measurement_range = 5.0 # range at which we can sense landmarks
# motion_noise = 0.0 # noise in robot motion
# measurement_noise = 0.0 # noise in the measurements
# distance = 2.0 # distance by which robot (intends to) move each iteration
# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks
data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)
```
### A note on `make_data`
The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:
1. Instantiating a robot (using the robot class)
2. Creating a grid world with landmarks in it
**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**
The `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` that is collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track the robot over time and determine the locations of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.
In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:
```
measurement = data[i][0]
motion = data[i][1]
```
```
# print out some stats about the data
time_step = 0
print('Example measurements: \n', data[time_step][0])
print('\n')
print('Example motion: \n', data[time_step][1])
print('size of data: \n', len(data[time_step][1]))
```
Try changing the value of `time_step`; you should see that the list of measurements varies based on what the robot sees in the world after it moves. As you know from the first notebook, the robot can only sense so far and with a certain amount of accuracy in the measure of distance between its location and the location of landmarks. The robot's motion is always a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of slam.
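The traversal pattern can be sketched with made-up toy values (each measurement is a `[landmark_index, dx, dy]` triple and each motion a `(dx, dy)` pair, which is the layout indexed later in this notebook):

```python
# Toy sketch: each entry of the data is [measurements, motion].
toy_data = [
    [[[0, 3.0, -1.0], [1, -2.5, 4.0]], [1.0, 2.0]],  # step 0: two landmarks sensed
    [[],                               [1.0, 2.0]],  # step 1: nothing in range
]
for i in range(len(toy_data)):
    measurement = toy_data[i][0]
    motion = toy_data[i][1]
    for lm, dx, dy in measurement:
        print('step %d: landmark %d at offset (%.1f, %.1f)' % (i, lm, dx, dy))
    print('step %d: moved by (%.1f, %.1f)' % (i, motion[0], motion[1]))
```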
## Initialize Constraints
One of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values that define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.
<img src='images/motion_constraint.png' width=50% height=50% />
In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.
<img src='images/constraints2D.png' width=50% height=50% />
You may also choose to create two of each omega and xi (one for x and one for y positions).
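As a refresher on the 1D case (a toy sketch with unit confidences, not the project solution): anchoring pose `x0` at 0 and adding a single motion constraint `x1 - x0 = 5` fills omega and xi like this, and solving recovers both positions.

```python
import numpy as np

omega = np.zeros((2, 2))
xi = np.zeros(2)

# initial-position constraint: x0 = 0
omega[0][0] += 1
xi[0] += 0

# motion constraint: x1 - x0 = 5
omega[0][0] += 1
omega[0][1] -= 1
omega[1][1] += 1
omega[1][0] -= 1
xi[0] -= 5
xi[1] += 5

mu = np.linalg.solve(omega, xi)   # equivalent to inv(omega) @ xi
print(mu)   # [0. 5.]
```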
### TODO: Write a function that initializes omega and xi
Complete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` (time steps), `num_landmarks`, and `world_size` should give you all the information you need to construct initial constraints of the correct size and starting values.
*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*
```
def initialize_constraints(N, num_landmarks, world_size):
    ''' This function takes in a number of time steps N, number of landmarks, and a world_size,
        and returns initialized constraint matrices, omega and xi.'''

    ## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable
    matSize = N + num_landmarks

    ## TODO: Define the constraint matrix, Omega, with two initial "strength" values
    ## for the initial x, y location of our robot
    omegaX = [[0 for i in range(matSize)] for j in range(matSize)]
    omegaY = [[0 for i in range(matSize)] for j in range(matSize)]
    omegaX[0][0] = 1
    omegaY[0][0] = 1

    ## TODO: Define the constraint *vector*, xi
    ## you can assume that the robot starts out in the middle of the world with 100% confidence
    xiX = [0 for i in range(matSize)]
    xiY = [0 for i in range(matSize)]
    xiX[0] = world_size / 2
    xiY[0] = world_size / 2

    return omegaX, omegaY, xiX, xiY
```
### Test as you go
It's good practice to test out your code as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi`, to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.
Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.
**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final `slam` function.
This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly. The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`.
```
# import data viz resources
import matplotlib.pyplot as plt
from pandas import DataFrame
import seaborn as sns
%matplotlib inline
# define a small N and world_size (small for ease of visualization)
N_test = 5
num_landmarks_test = 2
small_world = 10
# initialize the constraints
initial_omegaX, initial_omegaY, initial_xiX, initial_xiY = initialize_constraints(N_test, num_landmarks_test, small_world)
# define figure size
plt.rcParams["figure.figsize"] = (10,7)
# display omega
sns.heatmap(DataFrame(initial_omegaX), cmap='Blues', annot=True, linewidths=.5)
# define figure size
plt.rcParams["figure.figsize"] = (1,7)
# display xi
sns.heatmap(DataFrame(initial_xiX), cmap='Oranges', annot=True, linewidths=.5)
```
---
## SLAM inputs
In addition to `data`, your slam function will also take in:
* N - The number of time steps that a robot will be moving and sensing
* num_landmarks - The number of landmarks in the world
* world_size - The size (w/h) of your world
* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`
* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise`
#### A note on noise
Recall that `omega` holds the relative "strengths" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`.
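One way to picture a single x-direction measurement update (a sketch, not the required implementation; `i` is the pose index, `lm` the landmark index, `dx` the measured x-offset of the landmark from the robot):

```python
import numpy as np

# Sketch: fold one x-measurement into omega/xi, weighted by 1/measurement_noise.
# The pose lives at index i and landmark lm at index N + lm.
def add_measurement_x(omega, xi, i, lm, dx, N, measurement_noise):
    w = 1.0 / measurement_noise
    omega[i][i]           += w
    omega[i][N + lm]      -= w
    omega[N + lm][N + lm] += w
    omega[N + lm][i]      -= w
    xi[i]      -= dx * w
    xi[N + lm] += dx * w

# Toy check: 1 pose, 1 landmark, measurement_noise = 2.0 -> weight 0.5
omega = np.zeros((2, 2))
xi = np.zeros(2)
add_measurement_x(omega, xi, i=0, lm=0, dx=4.0, N=1, measurement_noise=2.0)
print(omega)   # [[ 0.5 -0.5], [-0.5  0.5]]
print(xi)      # [-2.  2.]
```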
### TODO: Implement Graph SLAM
Follow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation!
#### Updating with motion and measurements
With a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\mu = \Omega^{-1}\xi$
**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**
```
## TODO: Complete the code to implement SLAM
## slam takes in 6 arguments and returns mu,
## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations
def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):

    ## TODO: Use your initialization to create constraint matrices, omega and xi
    omegaX, omegaY, xiX, xiY = initialize_constraints(N, num_landmarks, world_size)

    ## TODO: Iterate through each time step in the data
    ## get all the motion and measurement data as you iterate
    motion = []
    measurements = []
    for i in range(len(data)):
        measurements.append(data[i][0])
        motion.append(data[i][1])

    ## TODO: update the constraint matrix/vector to account for all *measurements*
    ## this should be a series of additions that take into account the measurement noise
    for i in range(len(data)):
        for j in range(len(measurements[i])):
            ind = i
            omegaX[ind][ind] += 1 / measurement_noise
            omegaX[ind][N + measurements[i][j][0]] -= 1 / measurement_noise
            omegaX[N + measurements[i][j][0]][N + measurements[i][j][0]] += 1 / measurement_noise
            omegaX[N + measurements[i][j][0]][ind] -= 1 / measurement_noise
            xiX[ind] -= measurements[i][j][1] / measurement_noise
            xiX[N + measurements[i][j][0]] += measurements[i][j][1] / measurement_noise

            omegaY[ind][ind] += 1 / measurement_noise
            omegaY[ind][N + measurements[i][j][0]] -= 1 / measurement_noise
            omegaY[N + measurements[i][j][0]][N + measurements[i][j][0]] += 1 / measurement_noise
            omegaY[N + measurements[i][j][0]][ind] -= 1 / measurement_noise
            xiY[ind] -= measurements[i][j][2] / measurement_noise
            xiY[N + measurements[i][j][0]] += measurements[i][j][2] / measurement_noise

    ## TODO: update the constraint matrix/vector to account for all *motion* and motion noise
    for i in range(len(data)):
        ind = i + 1
        omegaX[ind][ind] += 1 / motion_noise
        omegaX[ind][ind-1] -= 1 / motion_noise
        omegaX[ind-1][ind-1] += 1 / motion_noise
        omegaX[ind-1][ind] -= 1 / motion_noise
        xiX[ind] += motion[i][0] / motion_noise
        xiX[ind-1] -= motion[i][0] / motion_noise

        omegaY[ind][ind] += 1 / motion_noise
        omegaY[ind][ind-1] -= 1 / motion_noise
        omegaY[ind-1][ind-1] += 1 / motion_noise
        omegaY[ind-1][ind] -= 1 / motion_noise
        xiY[ind] += motion[i][1] / motion_noise
        xiY[ind-1] -= motion[i][1] / motion_noise

    ## TODO: After iterating through all the data
    ## Compute the best estimate of poses and landmark positions
    ## using the formula, omega_inverse * Xi
    # print(omegaX)
    # print(xiX)
    # print(omegaY)
    # print(xiY)
    # sns.heatmap(DataFrame(omegaX), cmap='Blues', annot=True, linewidths=.5)
    muX = np.linalg.inv(np.matrix(omegaX)) * np.transpose(np.matrix(xiX))
    muY = np.linalg.inv(np.matrix(omegaY)) * np.transpose(np.matrix(xiY))

    return muX, muY  # return `mu`
```
## Helper functions
To check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. First, given a result `mu` and number of time steps, `N`, we define a function that extracts the poses and landmarks locations and returns those as their own, separate lists.
Then, we define a function that nicely print out these lists; both of these we will call, in the next step.
```
# a helper function that creates a list of poses and of landmarks for ease of printing
# this only works for the suggested constraint architecture of interlaced x,y poses
def get_poses_landmarks(muX, muY, N):
    # create a list of poses
    poses = []
    for i in range(N):
        poses.append((muX[i].item(), muY[i].item()))

    # create a list of landmarks
    landmarks = []
    for i in range(num_landmarks):
        landmarks.append((muX[(N+i)].item(), muY[(N+i)].item()))

    # return completed lists
    return poses, landmarks


def print_all(poses, landmarks):
    print('\n')
    print('Estimated Poses:')
    for i in range(len(poses)):
        print('['+', '.join('%.3f'%p for p in poses[i])+']')
    print('\n')
    print('Estimated Landmarks:')
    for i in range(len(landmarks)):
        print('['+', '.join('%.3f'%l for l in landmarks[i])+']')
```
## Run SLAM
Once you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks!
### What to Expect
The `data` that is generated is random, but you did specify the number, `N`, of time steps that the robot was expected to move and the number of landmarks, `num_landmarks`, in the world (for which your implementation of `slam` should estimate positions). Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.
With these values in mind, you should expect to see a result that displays two lists:
1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.
2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length.
#### Landmark Locations
If you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement).
```
# call your implementation of slam, passing in the necessary parameters
muX, muY = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)
print(muX)
print(muY)
print("\n\n\n", data, "\n\n\n")
# print out the resulting landmarks and poses
if muX is not None:
    # get the lists of poses and landmarks
    # and print them out
    poses, landmarks = get_poses_landmarks(muX, muY, N)
    print_all(poses, landmarks)
```
## Visualize the constructed world
Finally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the position of landmarks, created from only motion and measurement data!
**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**
```
# import the helper function
from helpers import display_world
# Display the final world!
# define figure size
plt.rcParams["figure.figsize"] = (20,20)
# check if poses has been created
if 'poses' in locals():
    # print out the last pose
    print('Last pose: ', poses[-1])
    # display the last position of the robot *and* the landmark positions
    display_world(int(world_size), poses[-1], landmarks)
```
### Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?
You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters?
My final pose was within 1.5 units of the real answer. The difference is caused by the introduction of noise in our measurements. If we were to increase N, the slam algorithm would have more data to use, and the final positions of the landmarks would eventually converge to the true values. Of course, with higher noise parameters it would take much longer to converge, and vice versa with lower ones.
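One way to quantify that gap is the Euclidean distance between the estimated and true final poses (a standalone sketch with made-up numbers; substitute the true pose printed by `make_data` and the last entry of your estimated poses):

```python
import math

# Hypothetical values -- replace with the true final pose printed by make_data
# and the last entry of your estimated poses.
true_pose = (68.4, 42.1)
est_pose = (69.3, 41.0)

# Euclidean distance between the estimated and true final poses
error = math.hypot(est_pose[0] - true_pose[0], est_pose[1] - true_pose[1])
print('final-pose error: %.3f' % error)   # final-pose error: 1.421
```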
## Testing
To confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you, in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases in total); your output should be **close to or exactly** identical to the given results. Any minor discrepancies could be a matter of floating-point accuracy or of the inverse-matrix calculation.
### Submit your project
If you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit!
```
# Here is the data and estimated outputs for test case 1
test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, 
-18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]
## Test Case 1
##
# Estimated Pose(s):
# [50.000, 50.000]
# [37.858, 33.921]
# [25.905, 18.268]
# [13.524, 2.224]
# [27.912, 16.886]
# [42.250, 30.994]
# [55.992, 44.886]
# [70.749, 59.867]
# [85.371, 75.230]
# [73.831, 92.354]
# [53.406, 96.465]
# [34.370, 100.134]
# [48.346, 83.952]
# [60.494, 68.338]
# [73.648, 53.082]
# [86.733, 38.197]
# [79.983, 20.324]
# [72.515, 2.837]
# [54.993, 13.221]
# [37.164, 22.283]
# Estimated Landmarks:
# [82.679, 13.435]
# [70.417, 74.203]
# [36.688, 61.431]
# [18.705, 66.136]
# [20.437, 16.983]
### Uncomment the following three lines for test case 1 and compare the output to the values above ###
muX_1, muY_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(muX_1, muY_1, 20)
print_all(poses, landmarks)
# Here is the data and estimated outputs for test case 2
test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 
10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]]
## Test Case 2
##
# Estimated Pose(s):
# [50.000, 50.000]
# [69.035, 45.061]
# [87.655, 38.971]
# [76.084, 55.541]
# [64.283, 71.684]
# [52.396, 87.887]
# [44.674, 68.948]
# [37.532, 49.680]
# [31.392, 30.893]
# [24.796, 12.012]
# [33.641, 26.440]
# [43.858, 43.560]
# [54.735, 60.659]
# [65.884, 77.791]
# [77.413, 94.554]
# [96.740, 98.020]
# [76.149, 99.586]
# [70.211, 80.580]
# [64.130, 61.270]
# [58.183, 42.175]
# Estimated Landmarks:
# [76.777, 42.415]
# [85.109, 76.850]
# [13.687, 95.386]
# [59.488, 39.149]
# [69.283, 93.654]
### The following three lines run test case 2 — compare to the values above ###
mux_2, muy_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mux_2, muy_2, 20)
print_all(poses, landmarks)
```
### Imagenet
The largest image classification dataset at this point in time.
URL: http://image-net.org/
Our setup: classify images from a set of 1000 classes.
```
#classes' names are stored here
import pickle
classes = pickle.load(open('classes.pkl','rb'))
print (classes[::100])
```
### Using pre-trained model: inception
Keras has a number of models for which you can use pre-trained weights. The interface is super-straightforward:
```
import tensorflow as tf
gpu_options = tf.GPUOptions(allow_growth=True, per_process_gpu_memory_fraction=0.1)
s = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))
import keras
import keras.applications as zoo
model = zoo.InceptionV3(include_top=True, weights='imagenet')
model.summary()
```
### Predict class probabilities
```
import matplotlib.pyplot as plt
from scipy.misc import imresize
%matplotlib inline
img = imresize(plt.imread('sample_images/albatross.jpg'), (299,299))
plt.imshow(img)
plt.show()
img_preprocessed = zoo.inception_v3.preprocess_input(img[None].astype('float32'))
probs = model.predict(img_preprocessed)
labels = probs.ravel().argsort()[-1:-11:-1]
print ('top-10 classes are:')
for l in labels:
    print ('%.4f\t%s' % (probs.ravel()[l], classes[l].split(',')[0]))
```
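As an aside, the `argsort` slicing used above is worth checking on a toy array: a slice like `[-1:-10:-1]` yields only nine indices, so `[-1:-11:-1]` (or the equivalent `[::-1][:10]`) is needed for a true top-10. A minimal numpy sketch with made-up probabilities:

```python
import numpy as np

# Toy "probability" vector to illustrate top-k extraction with argsort.
probs = np.array([0.05, 0.40, 0.10, 0.30, 0.15])

# argsort returns indices in ascending order of value;
# reversing gives descending order, then we keep the first k.
top3 = probs.argsort()[::-1][:3]
print(top3)         # indices of the three largest entries: [1 3 4]
print(probs[top3])  # their values, largest first
```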
### Having fun with pre-trained nets
```
!wget http://cdn.com.do/wp-content/uploads/2017/02/Donal-Trum-Derogar.jpeg -O img.jpg
img = imresize(plt.imread('img.jpg'), (299,299))
plt.imshow(img)
plt.show()
img_preprocessed = zoo.inception_v3.preprocess_input(img[None].astype('float32'))
probs = model.predict(img_preprocessed)
labels = probs.ravel().argsort()[-1:-11:-1]
print ('top-10 classes are:')
for l in labels:
    print ('%.4f\t%s' % (probs.ravel()[l], classes[l].split(',')[0]))
```
### How to reuse layers
Since the model is just a sequence of layers, you can apply it like any other Keras model. You can then build more layers on top of it, train them, and maybe fine-tune the "body" weights a bit.
```
img = keras.layers.Input(shape=(299, 299, 3), dtype='float32')
neck = zoo.InceptionV3(include_top=False, weights='imagenet')(img)
hid = keras.layers.GlobalMaxPool2D()(neck)
hid = keras.layers.Dense(512,activation='relu')(hid)
out = keras.layers.Dense(10,activation='softmax')(hid)
#<...> loss, training, etc.
```
# Grand-quest: Dogs Vs Cats
* original competition
* https://www.kaggle.com/c/dogs-vs-cats
* 25k JPEG images of various size, 2 classes (guess what)
### Your main objective
* In this seminar your goal is to fine-tune a pre-trained model to distinguish between the two rivaling animals
* The first step is to just reuse some network layer as features
```
!wget https://www.dropbox.com/s/ae1lq6dsfanse76/dogs_vs_cats.train.zip?dl=1 -O data.zip
!unzip data.zip
```
# for starters
* Train an sklearn model and evaluate validation accuracy (should be >80%)
```
#extract features from images
from tqdm import tqdm
from scipy.misc import imread, imresize
import numpy as np
import os

IMAGE_W = 299  # assumed input size for InceptionV3

X = []
Y = []
#this may be a tedious process. If so, store the results in some pickle and re-use them.
for fname in tqdm(os.listdir('train/')):
    y = fname.startswith("cat")
    img = imread("train/"+fname)
    img = imresize(img,(IMAGE_W,IMAGE_W))
    img = zoo.inception_v3.preprocess_input(img[None].astype('float32'))
    features = <use network to process the image into features>
    Y.append(y)
    X.append(features)

X = np.concatenate(X) #stack all [1xfeatures] matrices into one.
assert X.ndim==2
#WARNING! the concatenate works for [1xN] matrices. If you have other format, stack them yourself.
#crop if we ended prematurely
Y = Y[:len(X)]
<split data either here or use cross-validation>
```
__load our dakka__
```
from sklearn.ensemble import RandomForestClassifier,ExtraTreesClassifier,GradientBoostingClassifier,AdaBoostClassifier
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
```
# Main quest
* Get the score improved!
* You have to reach __at least 95%__ on the test set. More = better.
No methods are illegal: ensembling, data augmentation, NN hacks.
Just don't let test data slip into training.
### Split the raw image data
* please do train/validation/test instead of just train/test
* reasonable but not optimal split is 20k/2.5k/2.5k or 15k/5k/5k
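One way to sketch such a split — assuming the suggested 20k/2.5k/2.5k proportions and a shuffled index array; the numbers here are illustrative, not prescribed:

```python
import numpy as np

# A minimal sketch of a train/validation/test split over example indices.
# n mimics the 25k-image dataset; 20k/2.5k/2.5k follows the suggestion above.
n = 25000
rng = np.random.RandomState(42)  # fixed seed for reproducibility
idx = rng.permutation(n)         # shuffle before splitting

train_idx = idx[:20000]
val_idx   = idx[20000:22500]
test_idx  = idx[22500:]

print(len(train_idx), len(val_idx), len(test_idx))  # 20000 2500 2500
```

The key point is that the shuffle happens once, before the split, so no test image can leak into training.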
### Choose which vgg layers you are going to use
* Anything but the final prob layer is okay
* Do not forget that vgg16 uses dropout
### Build a few layers on top of chosen "neck" layers.
* a good idea is to just stack more layers inside the same network
* alternative: stack on top of get_output
### Train the newly added layers for some iterations
* you can selectively train some weights by setting var_list in the optimizer
* `opt = tf.train.AdamOptimizer(learning_rate=...)`
* `updates = opt.minimize(loss,var_list=variables_you_wanna_train)`
* it's crucial to monitor the network performance at this and the following steps
### Fine-tune the network body
* probably a good idea to SAVE your new network weights now 'cuz it's easy to mess things up.
* Moreover, saving weights periodically is a no-nonsense idea
* even more crucial to monitor validation performance
* main network body may need a separate, much lower learning rate
* you can create two update operations
* `opt1 = tf.train.AdamOptimizer(learning_rate=lr1)`
* `updates1 = opt1.minimize(loss,var_list=head_weights)`
* `opt2 = tf.train.AdamOptimizer(learning_rate=lr2)`
* `updates2 = opt2.minimize(loss,var_list=body_weights)`
 * `s.run([updates1,updates2],{...})`
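The effect of the two update operations can be sketched framework-agnostically. This toy numpy snippet is only an analogy for what `updates1`/`updates2` do within one training step, with made-up weights and gradients: the freshly added head gets a large step, the pre-trained body a much smaller one.

```python
import numpy as np

# Two parameter groups updated with different learning rates in one step.
lr_head, lr_body = 1e-3, 1e-5   # body rate is much lower, as suggested above

head_w = np.ones(3)             # stand-ins for head weight tensors
body_w = np.ones(3)             # stand-ins for pre-trained body weights
head_grad = np.full(3, 2.0)     # stand-ins for gradients of the loss
body_grad = np.full(3, 2.0)

head_w -= lr_head * head_grad   # analogue of updates1
body_w -= lr_body * body_grad   # analogue of updates2

print(head_w[0], body_w[0])     # 0.998 0.99998 — body barely moves
```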
### Grading
* 95% accuracy on test yields 10 points
* -1 point per 5% less accuracy
### Some ways to get bonus points
* explore other networks from the model zoo
* play with architecture
* 96%/97%/98%/99%/99.5% test score (screen pls).
* data augmentation, prediction-time data augmentation
* use any more advanced fine-tuning technique you know/read anywhere
* ml hacks that benefit the final score
```
#<A whole lot of your code>
```
### <font color = "darkblue">Updates to Assignment</font>
#### If you were working on the older version:
* Please click on the "Coursera" icon in the top right to open up the folder directory.
* Navigate to the folder: Week 3/ Planar data classification with one hidden layer. You can see your prior work in version 6b: "Planar data classification with one hidden layer v6b.ipynb"
#### List of bug fixes and enhancements
* Clarifies that the classifier will learn to classify regions as either red or blue.
* compute_cost function fixes np.squeeze by casting it as a float.
* compute_cost instructions clarify the purpose of np.squeeze.
* compute_cost clarifies that "parameters" parameter is not needed, but is kept in the function definition until the auto-grader is also updated.
* nn_model removes extraction of parameter values, as the entire parameter dictionary is passed to the invoked functions.
# Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
**You will learn how to:**
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
## 1 - Packages ##
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.
- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis.
- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provide various useful functions used in this assignment
```
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
```
## 2 - Dataset ##
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
```
X, Y = load_planar_dataset()
```
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. In other words, we want the classifier to define regions as either red or blue.
```
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
Let's first get a better sense of what our data is like.
**Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?
**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
```
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = shape_X[1] # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**shape of X**</td>
<td> (2, 400) </td>
</tr>
<tr>
<td>**shape of Y**</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>**m**</td>
<td> 400 </td>
</tr>
</table>
## 3 - Simple Logistic Regression
Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
```
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
```
You can now plot the decision boundary of these models. Run the code below.
```
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**Accuracy**</td>
<td> 47% </td>
</tr>
</table>
**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
## 4 - Neural Network model
Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
**Here is our model**:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
**Mathematically**:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
**Reminder**: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data.
### 4.1 - Defining the neural network structure ####
**Exercise**: Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
```
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
```
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width:20%">
<tr>
<td>**n_x**</td>
<td> 5 </td>
</tr>
<tr>
<td>**n_h**</td>
<td> 4 </td>
</tr>
<tr>
<td>**n_y**</td>
<td> 2 </td>
</tr>
</table>
### 4.2 - Initialize the model's parameters ####
**Exercise**: Implement the function `initialize_parameters()`.
**Instructions**:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h,n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y,n_h) * 0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]] </td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.]
[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.]] </td>
</tr>
</table>
### 4.3 - The Loop ####
**Question**: Implement `forward_propagation()`.
**Instructions**:
- Look above at the mathematical representation of your classifier.
- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.
- You can use the function `np.tanh()`. It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`.
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = np.dot(W1, X) + b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td>
</tr>
</table>
Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.
**Instructions**:
- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented
$- \sum\limits_{i=1}^{m} y^{(i)}\log(a^{[2](i)})$:
```python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
```
(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`).
Note that if you use `np.multiply` followed by `np.sum` the end result will be a type `float`, whereas if you use `np.dot`, the result will be a 2D numpy array. We can use `np.squeeze()` to remove redundant dimensions (in the case of single float, this will be reduced to a zero-dimension array). We can cast the array as a type `float` using `float()`.
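To see the difference concretely, here is a quick toy comparison of the two approaches (the values are made up for illustration):

```python
import numpy as np

# Toy check that the two cross-entropy implementations agree, and that
# np.dot returns a 2-D array while np.multiply + np.sum returns a scalar.
A2 = np.array([[0.8, 0.2, 0.6]])   # predicted probabilities, shape (1, m)
Y  = np.array([[1,   0,   1  ]])   # "true" labels, shape (1, m)
m  = Y.shape[1]

cost_mul = -(1/m) * np.sum(np.multiply(Y, np.log(A2)) +
                           np.multiply(1 - Y, np.log(1 - A2)))
cost_dot = -(1/m) * (np.dot(Y, np.log(A2).T) +
                     np.dot(1 - Y, np.log(1 - A2).T))

print(cost_dot.shape)               # (1, 1): still a 2-D array
print(float(np.squeeze(cost_dot)))  # squeeze + float recovers a plain scalar
assert np.isclose(cost_mul, float(np.squeeze(cost_dot)))
```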
```
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
[Note that the parameters argument is not used in this function,
but the auto-grader currently expects this parameter.
Future version of this notebook will fix both the notebook
and the auto-grader so that `parameters` is not needed.
For now, please include `parameters` in the function signature,
and also when invoking this function.]
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1] # number of examples
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = (Y)*(np.log(A2)) + (1-Y)*np.log(1-A2)
cost = (-1/m)*np.sum(logprobs)
### END CODE HERE ###
cost = float(np.squeeze(cost)) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**cost**</td>
<td> 0.693058761... </td>
</tr>
</table>
Using the cache computed during forward propagation, you can now implement backward propagation.
**Question**: Implement the function `backward_propagation()`.
**Instructions**:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
!-->
- Tips:
- To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters["W1"]
W2 = parameters["W2"]
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = cache["A1"]
A2 = cache["A2"]
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = A2 - Y
dW2 = (1/m)*(np.dot(dZ2, A1.T))
db2 = (1/m)*(np.sum(dZ2, axis=1, keepdims=True))
dZ1 = np.dot(W2.T, dZ2) * (1-np.power(A1, 2))
dW1 = (1/m)*(np.dot(dZ1, X.T))
db1 = (1/m)*(np.sum(dZ1, axis=1, keepdims=True))
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td>**dW1**</td>
<td> [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]] </td>
</tr>
<tr>
<td>**db1**</td>
<td> [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]] </td>
</tr>
<tr>
<td>**dW2**</td>
<td> [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] </td>
</tr>
<tr>
<td>**db2**</td>
<td> [[-0.16655712]] </td>
</tr>
</table>
**Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = grads["dW1"]
db1 = grads["db1"]
dW2 = grads["dW2"]
db2 = grads["db2"]
### END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 = W1 - (learning_rate * dW1)
b1 = b1 - (learning_rate * db1)
W2 = W2 - (learning_rate * dW2)
b2 = b2 - (learning_rate * db2)
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:80%">
<tr>
<td>**W1**</td>
<td> [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.00010457]] </td>
</tr>
</table>
### 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() ####
**Question**: Build your neural network model in `nn_model()`.
**Instructions**: The neural network model has to use the previous functions in the right order.
```
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_h=n_h, n_x=n_x, n_y=n_y)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(parameters=parameters, X=X)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = compute_cost(A2=A2, parameters=parameters, Y=Y)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(cache=cache, parameters=parameters, X=X, Y=Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(grads=grads, parameters=parameters)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>
**cost after iteration 0**
</td>
<td>
0.692739
</td>
</tr>
<tr>
<td>
<center> $\vdots$ </center>
</td>
<td>
<center> $\vdots$ </center>
</td>
</tr>
<tr>
<td>**W1**</td>
<td> [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-2.45566237 -3.27042274 2.00784958 3.36773273]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.20459656]] </td>
</tr>
</table>
### 4.5 Predictions
**Question**: Use your model to predict by building predict().
Use forward propagation to predict results.
**Reminder**: predictions = $y_{prediction} = \mathbb{1}\{\text{activation} > 0.5\} = \begin{cases}
1 & \text{if } activation > 0.5 \\
0 & \text{otherwise}
\end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
```
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = forward_propagation(parameters=parameters, X=X)
predictions = (A2 > 0.5)
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**predictions mean**</td>
<td> 0.666666666667 </td>
</tr>
</table>
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
```
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
```
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
```
**Expected Output**:
<table style="width:15%">
<tr>
<td>**Accuracy**</td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression.
Now, let's try out several hidden layer sizes.
### 4.6 - Tuning hidden layer size (optional/ungraded exercise) ###
Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
```
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
    plt.subplot(5, 2, i+1)
    plt.title('Hidden Layer of size %d' % n_h)
    parameters = nn_model(X, Y, n_h, num_iterations = 5000)
    plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
    predictions = predict(parameters, X)
    accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
    print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```
**Interpretation**:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
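As a preview of that idea, here is a hedged sketch (not the course's implementation) of the L2 penalty that regularization adds to the cost, which is what discourages large models such as n_h = 50 from overfitting:

```python
import numpy as np

def l2_penalty(parameters, lambd, m):
    """Illustrative L2 term added to the cross-entropy cost:
    (lambd / (2*m)) * (sum of squared entries of W1 and W2)."""
    W1, W2 = parameters["W1"], parameters["W2"]
    return (lambd / (2 * m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)))

# Toy parameters: all-ones weights; lambd and m are chosen for illustration.
params = {"W1": np.ones((4, 2)), "W2": np.ones((1, 4))}
print(l2_penalty(params, lambd=0.1, m=10))  # approximately 0.06
```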
**Optional questions**:
**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.
Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)
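To explore the first question, the three candidate activations can be sketched side by side (illustrative code, not part of the graded functions):

```python
import numpy as np

def sigmoid(z):
    # Squashes to (0, 1); saturates for large |z|.
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes to (-1, 1); zero-centered, which usually helps hidden layers.
    return np.tanh(z)

def relu(z):
    # 0 for negative inputs, identity for positive ones; does not saturate for z > 0.
    return np.maximum(0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z))
```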
<font color='blue'>
**You've learnt to:**
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting
Nice work!
## 5) Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
```
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
    Y = Y % 2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
Congrats on finishing this Programming Assignment!
Reference:
- http://scs.ryerson.ca/~aharley/neural-networks/
- http://cs231n.github.io/neural-networks-case-study/
# Correcting for multiple comparisons
Geoffrey Brookshire
Here we test how the AR surrogate and robust est. analyses behave when correcting for multiple comparisons using cluster-based permutation tests, Bonferroni corrections, and correcting with the False Discovery Rate (FDR).
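As a quick reference (a minimal stdlib sketch with illustrative p-values, separate from the simulation code below), Bonferroni compares each p-value against alpha divided by the number of tests, while FDR (Benjamini-Hochberg) uses a rank-dependent threshold:

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H0 wherever p < alpha / m."""
    m = len(pvals)
    return [p < alpha / m for p in pvals]

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg: find the largest rank k with p_(k) <= (k/m) * alpha,
    then reject the k smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k_max = rank
    reject = [False] * m
    for i in order[:k_max]:
        reject[i] = True
    return reject

pvals = [0.001, 0.02, 0.03, 0.2]
print(bonferroni(pvals))  # [True, False, False, False]
print(fdr_bh(pvals))      # [True, True, True, False]
```

Note that FDR rejects more hypotheses than Bonferroni here: it controls the expected proportion of false discoveries rather than the probability of any false positive, so it is less conservative.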
```
# Import libraries and set up analyses
%matplotlib inline
import os
os.chdir('..')
import yaml
import copy
import itertools
import numpy as np
from scipy import signal, stats
import matplotlib.pyplot as plt
import analysis
import simulate_behavior as behav
import simulate_experiments as sim_exp
from analysis_methods import shuff_time, alternatives, utils
from generate_plots import remove_topright_axes
from stat_report_helpers import chi_square_report
# Suppress maximum likelihood estimation convergence warnings
import warnings
from statsmodels.tools.sm_exceptions import ConvergenceWarning
warnings.simplefilter('ignore', ConvergenceWarning)
USE_CACHE = True # Whether to use previously-saved simulations
behav_details = yaml.safe_load(open('behav_details.yaml'))
plt.ion()
plot_dir = 'plots/'
n_exp = 1000
behav_kwargs = {'noise_method': 'powerlaw',
'exponent': 2}
osc_parameters = {'Rand walk': {'f_osc': 0, 'osc_amp': 0},
'Rand walk + osc': {'f_osc': 6, 'osc_amp': 0.4}}
method_names = {'Robust est': 'mann_lees',
'AR surr': 'ar'}
colors = {'Rand walk': 'red',
'Rand walk + osc': 'dodgerblue'}
correction_methods = ('Cluster', 'Bonferroni', 'FDR')
exp_functions = {'Robust est': sim_exp.robust_est_experiment,
'AR surr': sim_exp.ar_experiment}
prop_signif = {}
for osc_label, osc_params in osc_parameters.items():
    prop_signif[osc_label] = {}
    for analysis_meth, exp_fnc in exp_functions.items():
        prop_signif[osc_label][analysis_meth] = {}
        for correction in correction_methods:
            # Can't run a cluster test on robust est.
            if analysis_meth == 'Robust est' and correction == 'Cluster':
                continue
            if correction == 'Cluster':  # Re-use main data for cluster
                desc = ''
            else:
                desc = f'-{correction}'
            def analysis_fnc(**behav_kwargs):
                """ Helper function """
                res = exp_fnc(correction=correction.lower(),
                              **behav_kwargs)
                return res
            if USE_CACHE or correction == 'Cluster':
                lit = analysis.load_simulation(method_names[analysis_meth],
                                               desc=desc,
                                               **behav_kwargs,
                                               **osc_params)
            else:
                lit = analysis.simulate_lit(analysis_fnc, n_exp,
                                            desc=desc,
                                            **behav_kwargs,
                                            **osc_params)
                analysis.save_simulation(lit,
                                         method_names[analysis_meth],
                                         desc=desc,
                                         **behav_kwargs,
                                         **osc_params)
            p = analysis.prop_sig(lit)
            prop_signif[osc_label][analysis_meth][correction] = p
def prop_ci(p, n):
    """ 95% CI of a proportion """
    return 1.96 * np.sqrt((p * (1 - p)) / n)
fig, axes = plt.subplots(1, 2,
gridspec_kw={'width_ratios': [1, 1]},
figsize=(4, 3))
for i_plot, analysis_meth in enumerate(exp_functions.keys()):
    plt.subplot(axes[i_plot])
    plt.title(analysis_meth)
    plt.axhline(y=0.05, color='k', linestyle='--')
    for osc_label in osc_parameters.keys():
        psig = prop_signif[osc_label][analysis_meth]
        labels = psig.keys()
        x_pos = np.arange(float(len(psig)))
        psig = np.array(list(psig.values()))
        plt.errorbar(x_pos, psig,
                     yerr=prop_ci(psig, n_exp),
                     fmt='o',
                     color=colors[osc_label],
                     label=osc_label)
    plt.xticks(x_pos, labels, rotation=45)
    plt.xlim([-0.5, len(psig) - 0.5])
    plt.ylim(0, 1.05)
    plt.ylabel('Prop. signif.')
    remove_topright_axes()
plt.tight_layout()
plt.savefig(f"{plot_dir}mult_comp_corrections.eps")
```
These plots show the proportion of significant oscillations identified for each method of multiple comparisons correction. The false positive rate for each method is reflected in the proportion of significant results when the data were simulated as a random walk (in red). The true positive rate (analogous to experimental power, assuming certain characteristics of the signal) is reflected in the proportion of significant results when the data were simulated as a random walk plus an oscillation (in blue).
## Statistical tests
### Differences between methods for multiple comparisons correction
We can test for differences in performance between the different methods of adjusting for multiple comparisons.
First, test whether the choice of multiple comparison influences the rate of positive results for the AR surrogate analysis.
```
analysis_meth = 'AR surr'
for osc_label in osc_parameters.keys():
    print('-', osc_label)
    psig = prop_signif[osc_label][analysis_meth]
    labels = psig.keys()
    tbl = []
    for mult_comp_meth, p in psig.items():
        row = [int(p * n_exp), int((1 - p) * n_exp)]
        tbl.append(row)
    tbl = np.array(tbl)
    msg = chi_square_report(tbl)
    print('  ' + msg)
```
Next, test for pairwise differences between multiple comparisons methods within each analysis method and signal type.
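chi_square_report is a project helper, but the Pearson statistic it reports can be sketched by hand for a 2x2 table of significant vs. non-significant counts (the counts here are illustrative, not simulation results):

```python
# Rows: [significant, not significant]; columns: two correction methods.
tbl = [[90, 60],
       [910, 940]]

row_tot = [sum(r) for r in tbl]
col_tot = [sum(c) for c in zip(*tbl)]
n = sum(row_tot)

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected.
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_tot[i] * col_tot[j] / n
        chi2 += (tbl[i][j] - expected) ** 2 / expected

# With 1 degree of freedom, the critical value at alpha = 0.05 is about 3.841.
print(round(chi2, 3))
```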
```
for analysis_meth in exp_functions.keys():
    print(analysis_meth)
    for osc_label in osc_parameters.keys():
        print('-', osc_label)
        psig = prop_signif[osc_label][analysis_meth]
        labels = psig.keys()
        for comp in itertools.combinations(labels, 2):
            # Make a contingency table
            p0 = psig[comp[0]]
            p1 = psig[comp[1]]
            tbl = [[p0 * n_exp, p1 * n_exp],
                   [(1 - p0) * n_exp, (1 - p1) * n_exp]]
            tbl = np.array(tbl)
            msg = f'  - {comp[0][:3]} vs {comp[1][:3]}: '
            msg += chi_square_report(tbl)
            print(msg)
```
### Comparing false positives against alpha = 0.05
Does each method have a rate of false positives higher than 0.05? If so, that method does not adequately control the rate of false positives.
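The cell below uses stats.binom_test for this; as a sketch of what that computes, the exact one-sided p-value is the upper tail of a Binomial(n, 0.05) distribution, which can be summed directly (the count of 70 significant experiments is illustrative):

```python
from math import comb

def binom_pval_greater(k, n, p=0.05):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided 'greater' p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 70 significant results out of 1000 experiments at a nominal 0.05 rate
print(binom_pval_greater(70, 1000))
```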
```
for analysis_meth in exp_functions.keys():
    print(analysis_meth)
    psig = prop_signif['Rand walk'][analysis_meth]
    labels = psig.keys()
    for mc_meth, prop in psig.items():
        pval = stats.binom_test(prop * n_exp,
                                n_exp,
                                0.05,
                                alternative='greater')
        msg = f'- {mc_meth[:3]}: {prop:.2f}, '
        msg += f'p = {pval:.0e}'
        if prop > 0.05 and pval < 0.05:
            msg += ' *'
        print(msg)
```
```
import importlib
from matplotlib import pyplot as plt
plt.plot([1,2,3])
from IPython.display import clear_output
import matplotlib
import numpy as np
import pandas as pd
import pdb
import time
from collections import deque
import torch
import cv2
from Environment.Env import RealExpEnv
from RL.sac import sac_agent, ReplayMemory
from Environment.data_visualization import plot_graph, show_reset, show_done, show_step
from Environment.episode_memory import Episode_Memory
from Environment.get_atom_coordinate import atom_detection, blob_detection, get_atom_coordinate_nm
from skimage import morphology, measure
from Environment.createc_control import Createc_Controller
import glob
matplotlib.rcParams['image.cmap'] = 'gray'
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
##Define anchor template
createc_controller = Createc_Controller(None, None, None, None)
img_forward = np.array(createc_controller.stm.scandata(1,4))
print(img_forward.shape)
top_left, w, h = (4,4), 14, 14
template = img_forward[top_left[1]:top_left[1]+h, top_left[0]:top_left[0]+w]
plt.imshow(template)
step_nm = 0.4
max_mvolt = 15 #min_mvolt = 0.5*max_mvolt
max_pcurrent_to_mvolt_ratio = 6E3 # min = 0.5*max
goal_nm = 2
current_jump = 4
im_size_nm = 10.027
DactoA = float(createc_controller.stm.getparam('Dacto[A]xy'))
Gain = float(createc_controller.stm.getparam("GainX"))
print('gain:', Gain)
offset_x = -float(createc_controller.stm.getparam('OffsetX'))*DactoA*Gain/10
offset_y = -float(createc_controller.stm.getparam('OffsetY'))*DactoA*Gain/10
print(offset_x, offset_y)
print(float(createc_controller.stm.getparam('PlanDx')), float(createc_controller.stm.getparam('PlanDy')))
offset_x -= float(createc_controller.stm.getparam('PlanDx'))
offset_y -= float(createc_controller.stm.getparam('PlanDy'))
print(offset_x, offset_y)
offset_nm = np.array([190.386,-13.5])
pixel = 128
manip_limit_nm = np.array([189, 195, -11, -5]) #[left, right, up, down]
template_max_y = 25
template_min_x = None
scan_mV = 1000
max_len = 5
env = RealExpEnv(step_nm, max_mvolt, max_pcurrent_to_mvolt_ratio, goal_nm,
template, current_jump, im_size_nm, offset_nm, manip_limit_nm, pixel,
template_max_y, template_min_x, scan_mV, max_len)
batch_size= 64
LEARNING_RATE = 0.0003
replay_size=1000000
#initialize sac_agent
agent = sac_agent(num_inputs = 4, num_actions = 6, action_space = None, device=device, hidden_size=256, lr=LEARNING_RATE,
gamma=0.9, tau=0.005, alpha=0.0973)
#load pretrained parameters
agent.critic.load_state_dict(torch.load('training_4/reward_2_critic_{}.pth'.format(3000)))
agent.policy.load_state_dict(torch.load('training_4/reward_2_policy_{}.pth'.format(3000)))
agent.alpha = torch.load('training_4/reward_2_alpha_{}.pth'.format(3000))
memory = ReplayMemory(replay_size)
#, map_location=torch.device('cpu')
episode_memory = Episode_Memory()
scores_array = []
avg_scores_array = []
alpha = []
temp_nm = []
c_k_min = 2500
eta_0 = 0.996
eta_T = 1.0
n_interactions = 500
max_ep_len = 5
def sac_train(max_steps, num_episodes=50, episode_start=0):
    global added_episode
    for i_episode in range(episode_start, episode_start + num_episodes):
        print('Episode:', i_episode)
        eta_t = np.minimum(eta_0 + (eta_T - eta_0)*(i_episode/n_interactions), eta_T)
        episode_reward = 0
        episode_steps = 0
        done = False
        state, info = env.reset()
        show_reset(env.img_info['img_forward'], env.img_info['offset_nm'], env.img_info['len_nm'],
                   env.atom_start_absolute_nm, env.destination_absolute_nm, env.template_nm, env.template_wh)
        print('old value:', env.old_value)
        #print(env.atom_absolute_nm)
        episode_memory.update_memory_reset(env.img_info, i_episode, info)
        temp_nm.append(env.template_nm)
        for step in range(max_steps):
            print('step:', step)
            action = agent.select_action(state)
            atom_absolute_nm = env.atom_absolute_nm
            next_state, reward, done, info = env.step(action)
            print(reward)
            #print(action, done, reward)
            episode_steps += 1
            episode_reward += reward
            mask = float(not done)
            memory.push(state, action, reward, next_state, mask)
            episode_memory.update_memory_step(state, action, next_state, reward, done, info)
            state = next_state
            show_step(env.img_info['img_forward'], env.img_info['offset_nm'], env.img_info['len_nm'],
                      info['start_nm']+atom_absolute_nm, info['end_nm']+atom_absolute_nm,
                      env.atom_absolute_nm, env.atom_start_absolute_nm, env.destination_absolute_nm,
                      env.template_nm, env.template_wh, action[4]*env.max_mvolt,
                      action[5]*env.max_pcurrent_to_mvolt_ratio*action[4]*env.max_mvolt)
            print('template_nm', env.template_nm)
            temp_nm.append(env.template_nm)
            if done:
                episode_memory.update_memory_done(env.img_info, env.atom_absolute_nm, env.atom_relative_nm)
                episode_memory.save_memory('training_4')
                new_destination_absolute_nm = None
                atom_to_start = env.atom_relative_nm - env.atom_start_relative_nm
                print('atom moved by:', np.linalg.norm(atom_to_start))
                print('Episode reward:', episode_reward)
                show_done(env.img_info['img_forward'], env.img_info['offset_nm'], env.img_info['len_nm'],
                          env.atom_absolute_nm, env.atom_start_absolute_nm, env.destination_absolute_nm,
                          env.template_nm, env.template_wh, reward, new_destination_absolute_nm)
                break
        if len(memory) > batch_size:
            train_pi = True
            episode_K = int(episode_steps)
            for k in range(episode_K):
                c_k = max(int(len(memory)*eta_t**(k*(max_ep_len/episode_K))), c_k_min)
                #c_k = len(memory)
                print('TRAINING!')
                agent.update_parameters(memory, batch_size, c_k, train_pi)
        scores_array.append(episode_reward)
        if len(scores_array) > 100:
            avg_scores_array.append(np.mean(scores_array[-100:]))
        else:
            avg_scores_array.append(np.mean(scores_array))
        print(agent.alpha)
        alpha.append(agent.alpha)
        if (i_episode+1) % 2 == 0:
            plot_graph(scores_array, avg_scores_array)
        if i_episode % 20 == 0:
            torch.save(agent.critic.state_dict(), 'training_4/reward_2_critic_{}.pth'.format(i_episode))
            torch.save(agent.policy.state_dict(), 'training_4/reward_2_policy_{}.pth'.format(i_episode))
            torch.save(agent.alpha, 'training_4/reward_2_alpha_{}.pth'.format(i_episode))
        env.atom_absolute_nm = None

sac_train(max_steps=max_len, episode_start=0, num_episodes=1000)
```
@1700 change max_ep_len from 50 to 5
@1826 change cutoff distance from 0.28 nm to 0.14 nm
@1850 pause for taking benchmark and change cutoff distance to 0.1 nm
@2092 change upper cutoff from self.goal_nm to 1.5*self.goal_nm
@2106 change to 128 pixel
@2138 change to check_similarity with absolute coordinates
@2216 system crash, restart computer and tip
```
def save_buffer(buffer):
    # Unpack the replay buffer passed in (the original read the global
    # memory.buffer instead of its own argument).
    state, action, reward, next_state, done = map(np.stack, zip(*buffer))
    np.save('state.npy', state)
    np.save('action.npy', action)
    np.save('reward.npy', reward)
    np.save('next_state.npy', next_state)
    np.save('done.npy', done)

save_buffer(memory.buffer)
```
# Using the PyTorch JIT Compiler with Pyro
This tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models.
#### Summary:
- You can use compiled functions in Pyro models.
- You cannot use pyro primitives inside compiled functions.
- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g.
```diff
- Trace_ELBO()
+ JitTrace_ELBO()
```
- The [HMC](http://docs.pyro.ai/en/dev/mcmc.html#pyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.html#pyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.
- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.
- Each different value of `**kwargs` triggers a separate compilation.
- Use `**kwargs` to specify all variation in structure (e.g. time series length).
- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.
- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`.
#### Table of contents
- [Introduction](#Introduction)
- [A simple model](#A-simple-model)
- [Varying structure](#Varying-structure)
```
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.contrib.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.3.0')
pyro.enable_validation(True) # <---- This is always a good idea!
```
## Introduction
PyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".
Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.
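As a minimal sketch of the first approach (the function and values here are illustrative, not from this tutorial), a TorchScript-compiled helper can be called from model code as long as it contains only plain tensor operations:

```python
import torch

@torch.jit.script
def clipped_sum(x: torch.Tensor) -> torch.Tensor:
    # Compiled once by TorchScript; safe to call inside a Pyro model
    # because it contains no Pyro primitives (pyro.sample, pyro.plate, ...).
    return torch.clamp(x, min=0.0).sum()

print(clipped_sum(torch.tensor([-1.0, 2.0, 3.0])))
```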
The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.html#pyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.html#pyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.html#pyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.html#pyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.html#pyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.html#pyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode.
## A simple model
Let's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/contrib.autoguide.html).
```
def model(data):
    loc = pyro.sample("loc", dist.Normal(0., 10.))
    scale = pyro.sample("scale", dist.LogNormal(0., 3.))
    with pyro.plate("data", data.size(0)):
        pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
```
First let's run as usual with an SVI object and `Trace_ELBO`.
```
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
    svi.step(data)
```
Next to run with a jit compiled inference, we simply replace
```diff
- elbo = Trace_ELBO()
+ elbo = JitTrace_ELBO()
```
Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
```
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
    svi.step(data)
```
Notice that we have a more than 2x speedup for this small model.
Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
```
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
```
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
```
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
```
We notice a significant increase in sampling throughput when JIT compilation is enabled.
## Varying structure
Time series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$
- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.
- Tensor inputs should be passed as `*args`. These must not determine model structure. However `len(args)` may determine model structure (as is used e.g. in semisupervised models).
To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
```
def model(sequence, num_sequences, length, state_dim=16):
    # This is a Gaussian HMM model.
    with pyro.plate("states", state_dim):
        trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
        emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
        emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
    # We're doing manual data subsampling, so we need to scale to actual data size.
    with poutine.scale(scale=num_sequences):
        # We'll use enumeration inference over the hidden x.
        x = 0
        for t in pyro.markov(range(length)):
            x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
                            infer={"enumerate": "parallel"})
            pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
                        obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
```
Now let's run SVI as usual.
```
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
    for sequence in sequences:
        svi.step(sequence,  # tensor args
                 num_sequences=len(sequences), length=len(sequence))  # non-tensor args
```
Again we'll simply swap in a `Jit*` implementation
```diff
- elbo = TraceEnum_ELBO(max_plate_nesting=1)
+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
```
Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
```
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
    for sequence in sequences:
        svi.step(sequence,  # tensor args
                 num_sequences=len(sequences), length=len(sequence))  # non-tensor args
```
Again we see more than 2x speedup. Note that since there were three different sequence lengths, compilation was triggered three times.
$^\dagger$ Note this section is only valid for SVI, and HMC/NUTS assume fixed model arguments.
<a href="https://colab.research.google.com/github/ParalelaUnsaac/2020-2/blob/main/GACO_Guia_2_Taxonomia_de_Flynn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
The following code makes it possible to measure the execution time of any code run in this Colab notebook.
```
!pip install ipython-autotime
%load_ext autotime
print(sum(range(10)))
```
Question #1: What fraction of 1 second is the printed value?

It is a millisecond, represented as 1*10^(-3) of a second.
---
Next, we have a Python library called **numba** that performs automatic parallelization. With it, we can verify that using prange() gives a better execution time than using range().
```
from numba import njit, prange
import numpy as np
A = np.arange(5, 1600000)
@njit(parallel=True)
def prange_test(A):
    s = 0
    # Without "parallel=True" in the jit-decorator
    # the prange statement is equivalent to range
    for i in prange(A.shape[0]):
        s += A[i]
    return s
print(prange_test(A))
from numba import njit, prange
import numpy as np
A = np.arange(5, 1600000)
#@njit(parallel=True)
def prange_test(A):
    s = 0
    # Without "parallel=True" in the jit-decorator
    # the prange statement is equivalent to range
    for i in range(A.shape[0]):
        s += A[i]
    return s
print(prange_test(A))
```
Question #2: Identify other values of A for which serializing gives a better result than parallelizing.

As can be observed, with a smaller input the best result comes from serial execution, but as the input grows, parallel execution gives the better result.
---
Flynn's taxonomy defines 4 types of architectures for parallel computing: SISD, SIMD, MISD, and MIMD.
---
Question #3: What type is the last code executed?

SISD, since it is serial and sequential.
---
Question #4: According to Flynn's taxonomy, what type is the following parallel code? Comment the code to justify your answer.

SIMD, because there is a single instruction, print_time(name, n), applied to multiple data (t1 and t2), and each thread runs independently.
```
import threading
import time
def print_time(name, n):
    count = 0
    print("For thread %s, at time %s, the value of count is %s" % (name, time.ctime(), count))
    while count < 5:
        time.sleep(n)
        count += 1
        print("%s: %s. count %s" % (name, time.ctime(), count))
t1 = threading.Thread(target=print_time, args=("Thread-1", 0, ) )
t2 = threading.Thread(target=print_time, args=("Thread-2", 0, ) )
t1.start()
t2.start()
```
---
MIMD parallel computers are used mostly in distributed computing, e.g. clusters. The following desktop Python code demonstrates this behavior.
```
#greeting-server.py
import Pyro4
@Pyro4.expose
class GreetingMaker(object):
    def get_fortune(self, name):
        return "Hello, {0}. Here is your fortune message:\n" \
               "Behold the warranty -- the bold print giveth and the fine print taketh away.".format(name)
daemon = Pyro4.Daemon() # make a Pyro daemon
uri = daemon.register(GreetingMaker) # register the greeting maker as a Pyro object
print("Ready. Object uri =", uri) # print the uri so we can use it in the client later
daemon.requestLoop() # start the event loop of the server to wait for calls
#greeting-client.py
import Pyro4
uri = input("What is the Pyro uri of the greeting object? ").strip()
name = input("What is your name? ").strip()
greeting_maker = Pyro4.Proxy(uri) # get a Pyro proxy to the greeting object
print(greeting_maker.get_fortune(name)) # call method normally
```
Question #5: Explain what this MIMD-style code does.

It presents a system with multiple instruction streams operating on multiple data by means of a server: the server registers a greeting object and prints its URI; the client then connects using that URI, asks for the user's name, and calls the remote method, which returns a fortune message to the user.
---
Proposed exercise: Create an example showing MISD-type parallel computation.
```
from numba import njit, prange
import numpy as np
A = np.arange(5, 1000)
#@njit(parallel=True)
def prange_test(A):
    s = 0
    # Without "parallel=True" in the jit-decorator
    # the prange statement is equivalent to range
    for i in range(A.shape[0]):
        s += A[i]
    return s

def ValPremio(A):
    s = int(input("Enter a lucky number: "))
    # Check the sum against the lucky number
    if prange_test(A) <= s or prange_test(A) % s == 0:
        return "You won"
    else:
        return "You lost"

print(prange_test(A))
ValPremio(A)
```
In MISD there are N instruction streams operating on the same data: here A is used by two instructions, in this case prange_test and ValPremio.
**References**
https://wiki.python.org/moin/ParallelProcessing
https://numba.readthedocs.io/en/stable/user/parallel.html
https://ao.gl/how-to-measure-execution-time-in-google-colab/
http://noisymime.org/blogimages/SIMD.pdf