<a id='1'></a>
# Import modules
```
import keras.backend as K
```
<a id='4'></a>
# Model Configuration
```
K.set_learning_phase(0)
# Input/Output resolution
RESOLUTION = 256 # 64x64, 128x128, 256x256
assert (RESOLUTION % 64) == 0, "RESOLUTION should be 64, 128, 256"
# Architecture configuration
arch_config = {}
arch_config['IMAGE_SHAPE'] = (RESOLUTION, RESOLUTION, 3)
arch_config['use_self_attn'] = True
arch_config['norm'] = "instancenorm" # instancenorm, batchnorm, layernorm, groupnorm, none
arch_config['model_capacity'] = "standard" # standard, lite
```
<a id='5'></a>
# Define models
```
from networks.faceswap_gan_model import FaceswapGANModel
model = FaceswapGANModel(**arch_config)
```
<a id='6'></a>
# Load Model Weights
```
model.load_weights(path="./models")
```
<a id='12'></a>
# Video Conversion
```
from converter.video_converter import VideoConverter
from detector.face_detector import MTCNNFaceDetector
mtcnn_weights_dir = "./mtcnn_weights/"
fd = MTCNNFaceDetector(sess=K.get_session(), model_path=mtcnn_weights_dir)
vc = VideoConverter()
vc.set_face_detector(fd)
vc.set_gan_model(model)
```
### Video conversion configuration
- `use_smoothed_bbox`:
- Boolean. Whether to enable smoothed bbox.
- `use_kalman_filter`:
- Boolean. Whether to enable Kalman filter.
- `use_auto_downscaling`:
- Boolean. Whether to enable auto-downscaling in face detection (to prevent OOM error).
- `bbox_moving_avg_coef`:
- Float between 0 and 1. Smoothing coefficient, used when `use_kalman_filter` is set to False.
- `min_face_area`:
- int x int. Minimum face size. Detected faces smaller than `min_face_area` will not be transformed.
- `IMAGE_SHAPE`:
- Input/Output resolution of the GAN model.
- `kf_noise_coef`:
- Float. Increase by 10x if tracking is slow. Decrease by 1/10x if tracking works fine but jitter occurs.
- `use_color_correction`:
- String, one of "adain", "adain_xyz", "hist_match", or "none". The color correction method to be applied.
- `detec_threshold`:
- Float between 0 and 1. Decrease its value if faces are missed; increase it to reduce false positives.
- `roi_coverage`:
- Float between 0 and 1 (exclusive). Center area of input images to be cropped (suggested range: 0.85 ~ 0.95).
- `enhance`:
- Float. Coefficient for contrast enhancement in the region of the alpha mask (suggested range: 0. ~ 0.4).
- `output_type`:
- Layout format of output video: 1. [ result ], 2. [ source | result ], 3. [ source | result | mask ]
- `direction`:
- String of "AtoB" or "BtoA". Direction of face transformation.
```
options = {
# ===== Fixed =====
"use_smoothed_bbox": True,
"use_kalman_filter": True,
"use_auto_downscaling": False,
"bbox_moving_avg_coef": 0.65,
"min_face_area": 35 * 35,
"IMAGE_SHAPE": model.IMAGE_SHAPE,
# ===== Tunable =====
"kf_noise_coef": 3e-3,
"use_color_correction": "hist_match",
"detec_threshold": 0.7,
"roi_coverage": 0.9,
"enhance": 0.,
"output_type": 3,
"direction": "AtoB",
}
```
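As a purely hypothetical illustration (not the converter's actual implementation), the bbox smoothing controlled by `bbox_moving_avg_coef` can be sketched as an exponential moving average; whether the coefficient weights the previous or the new box is an assumption here:

```python
import numpy as np

def smooth_bbox(prev_bbox, new_bbox, coef=0.65):
    # Exponential moving average of bounding-box coordinates.
    # `coef` weights the previous box (an assumption): higher values mean
    # heavier smoothing and a slower response to motion.
    prev_bbox = np.asarray(prev_bbox, dtype=float)
    new_bbox = np.asarray(new_bbox, dtype=float)
    return coef * prev_bbox + (1 - coef) * new_bbox

smoothed = smooth_bbox([0, 0, 100, 100], [10, 10, 110, 110], coef=0.65)
```

With `coef=0.65` the smoothed box moves only 35% of the way toward each new detection, which damps detector jitter at the cost of a little lag.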
# Start video conversion
- `input_fn`:
- String. Input video path.
- `output_fn`:
- String. Output video path.
- `duration`:
- None or a non-negative float tuple: (start_sec, end_sec). Duration of input video to be converted
- e.g., setting `duration = (5, 7.5)` outputs a 2.5-sec-long video clip corresponding to 5s ~ 7.5s of the input video.
```
input_fn = "1.mp4"
output_fn = "OUTPUT_VIDEO.mp4"
duration = None
vc.convert(input_fn=input_fn, output_fn=output_fn, options=options, duration=duration)
```
| github_jupyter |
# Multilayer Perceptron
In the previous chapters, we showed how you could implement multiclass logistic regression (also called softmax regression)
for classifying images of clothing into the 10 possible categories.
To get there, we had to learn how to wrangle data,
how to coerce our outputs into a valid probability distribution (via `softmax`),
how to apply an appropriate loss function,
and how to optimize over our parameters.
Now that we’ve covered these preliminaries,
we are free to focus our attention on
the more exciting enterprise of designing powerful models
using deep neural networks.
## Hidden Layers
Recall that for linear regression and softmax regression,
we mapped our inputs directly to our outputs
via a single linear transformation:
$$
\hat{\mathbf{o}} = \mathrm{softmax}(\mathbf{W} \mathbf{x} + \mathbf{b})
$$
```
from IPython.display import SVG
SVG(filename='../img/singlelayer.svg')
```
If our labels really were related to our input data
by an approximately linear function, then this approach would be perfect.
But linearity is a *strong assumption*.
Linearity implies that for whatever target value we are trying to predict,
increasing the value of each of our inputs
should either drive the value of the output up or drive it down,
irrespective of the value of the other inputs.
Sometimes this makes sense!
Say we are trying to predict whether an individual
will or will not repay a loan.
We might reasonably imagine that all else being equal,
an applicant with a higher income
would be more likely to repay than one with a lower income.
In these cases, linear models might perform well,
and they might even be hard to beat.
But what about classifying images in FashionMNIST?
Should increasing the intensity of the pixel at location (13,17)
always increase the likelihood that the image depicts a pocketbook?
That seems ridiculous because we all know
that you cannot make sense out of an image
without accounting for the interactions among pixels.
### From one to many
As another case, consider trying to classify images
based on whether they depict *cats* or *dogs* given black-and-white images.
If we use a linear model, we'd basically be saying that
for each pixel, increasing its value (making it more white)
must always increase the probability that the image depicts a dog
or must always increase the probability that the image depicts a cat.
We would be making the absurd assumption that the only requirement
for differentiating cats vs. dogs is to assess how bright they are.
That approach is doomed to fail in a world
that contains both black dogs and black cats,
and both white dogs and white cats.
Teasing out what is depicted in an image generally requires
allowing more complex relationships between our inputs and outputs.
Thus we need models capable of discovering patterns
that might be characterized by interactions among the many features.
We can overcome these limitations of linear models
and handle a more general class of functions
by incorporating one or more hidden layers.
The easiest way to do this is to stack
many layers of neurons on top of each other.
Each layer feeds into the layer above it, until we generate an output.
This architecture is commonly called a *multilayer perceptron*,
often abbreviated as *MLP*.
The neural network diagram for an MLP looks like this:
```
SVG(filename='../img/mlp.svg')
```
The multilayer perceptron above has 4 inputs and 3 outputs,
and the hidden layer in the middle contains 5 hidden units.
Since the input layer does not involve any calculations,
building this network would consist of
implementing 2 layers of computation.
The inputs are fully connected
to the neurons in the hidden layer.
Likewise, the neurons in the hidden layer
are fully connected to the neurons in the output layer.
### From linear to nonlinear
We can write out the calculations that define this one-hidden-layer MLP in mathematical notation as follows:
$$
\begin{aligned}
\mathbf{h} & = \mathbf{W}_1 \mathbf{x} + \mathbf{b}_1 \\
\mathbf{o} & = \mathbf{W}_2 \mathbf{h} + \mathbf{b}_2 \\
\hat{\mathbf{y}} & = \mathrm{softmax}(\mathbf{o})
\end{aligned}
$$
By adding another layer, we have added two new sets of parameters,
but what have we gained in exchange?
In the model defined above, we do not achieve anything for our troubles!
That's because our hidden units are just a linear function of the inputs
and the outputs (pre-softmax) are just a linear function of the hidden units.
A linear function of a linear function is itself a linear function.
That means that for any values of the weights,
we could just collapse out the hidden layer
yielding an equivalent single-layer model using
$\mathbf{W} = \mathbf{W}_2 \mathbf{W}_1$ and $\mathbf{b} = \mathbf{W}_2 \mathbf{b}_1 + \mathbf{b}_2$.
$$\mathbf{o} = \mathbf{W}_2 \mathbf{h} + \mathbf{b}_2 = \mathbf{W}_2 (\mathbf{W}_1 \mathbf{x} + \mathbf{b}_1) + \mathbf{b}_2 = (\mathbf{W}_2 \mathbf{W}_1) \mathbf{x} + (\mathbf{W}_2 \mathbf{b}_1 + \mathbf{b}_2) = \mathbf{W} \mathbf{x} + \mathbf{b}$$
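The collapse above is easy to verify numerically; here is a minimal NumPy sketch with arbitrarily chosen layer sizes:

```python
import numpy as np

# Verify that stacking two linear layers is equivalent to a single linear
# layer with W = W2 @ W1 and b = W2 @ b1 + b2.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 4)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((3, 5)), rng.standard_normal(3)
x = rng.standard_normal(4)

two_layer = W2 @ (W1 @ x + b1) + b2
collapsed = (W2 @ W1) @ x + (W2 @ b1 + b2)
assert np.allclose(two_layer, collapsed)
```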
In order to get a benefit from multilayer architectures,
we need another key ingredient—a nonlinearity $\sigma$ to be applied to each of the hidden units after each layer's linear transformation.
The most popular choice for the nonlinearity these days is the rectified linear unit (ReLU) $\mathrm{max}(x,0)$.
After incorporating these non-linearities
it becomes impossible to merge layers.
$$
\begin{aligned}
\mathbf{h} & = \sigma(\mathbf{W}_1 \mathbf{x} + \mathbf{b}_1) \\
\mathbf{o} & = \mathbf{W}_2 \mathbf{h} + \mathbf{b}_2 \\
\hat{\mathbf{y}} & = \mathrm{softmax}(\mathbf{o})
\end{aligned}
$$
Clearly, we could continue stacking such hidden layers,
e.g. $\mathbf{h}_1 = \sigma(\mathbf{W}_1 \mathbf{x} + \mathbf{b}_1)$
and $\mathbf{h}_2 = \sigma(\mathbf{W}_2 \mathbf{h}_1 + \mathbf{b}_2)$
on top of each other to obtain a true multilayer perceptron.
Multilayer perceptrons can account for complex interactions in the inputs
because the hidden neurons depend on the values of each of the inputs.
It’s easy to design a hidden node that does arbitrary computation,
such as, for instance, logical operations on its inputs.
Moreover, for certain choices of the activation function
it’s widely known that multilayer perceptrons are universal approximators.
That means that even for a single-hidden-layer neural network,
with enough nodes, and the right set of weights,
we can model any function at all!
*Actually learning that function is the hard part.*
Moreover, just because a single-layer network *can* learn any function
doesn't mean that you should try to solve all of your problems with single-layer networks.
It turns out that we can approximate many functions
much more compactly if we use deeper (vs wider) neural networks.
We’ll get more into the math in a subsequent chapter,
but for now let’s actually build an MLP.
In this example, we’ll implement a multilayer perceptron
with two hidden layers and one output layer.
### Vectorization and mini-batch
As before, by the matrix $\mathbf{X}$, we denote a mini-batch of inputs.
The calculations to produce outputs from an MLP with two hidden layers
can thus be expressed:
$$
\begin{aligned}
\mathbf{H}_1 & = \sigma(\mathbf{W}_1 \mathbf{X} + \mathbf{b}_1) \\
\mathbf{H}_2 & = \sigma(\mathbf{W}_2 \mathbf{H}_1 + \mathbf{b}_2) \\
\mathbf{O} & = \mathrm{softmax}(\mathbf{W}_3 \mathbf{H}_2 + \mathbf{b}_3)
\end{aligned}
$$
With some abuse of notation, we define the nonlinearity $\sigma$
to apply to its inputs in a row-wise fashion, i.e. one observation at a time.
Note that we are also using the notation for *softmax* in the same way to denote a row-wise operation.
Often, as in this chapter, the activation functions that we apply to hidden layers are not merely row-wise, but component-wise.
That means that after computing the linear portion of the layer,
we can calculate each node's activation without looking at the values taken by the other hidden units.
This is true for most activation functions
(the batch normalization operation, which will be introduced in :numref:`chapter_batch_norm`, is a notable exception to that rule).
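The two-hidden-layer forward pass above can be sketched directly in NumPy. The layer sizes are arbitrary choices, and each column of $\mathbf{X}$ is treated as one observation so that the shapes match $\mathbf{W}_1 \mathbf{X}$:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def softmax(o):
    # Column-wise softmax, with the usual max-subtraction for stability.
    e = np.exp(o - o.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((5, 4)), np.zeros((5, 1))
W2, b2 = 0.1 * rng.standard_normal((5, 5)), np.zeros((5, 1))
W3, b3 = 0.1 * rng.standard_normal((3, 5)), np.zeros((3, 1))

X = rng.standard_normal((4, 8))        # 4 features, mini-batch of 8
H1 = relu(W1 @ X + b1)                 # first hidden layer
H2 = relu(W2 @ H1 + b2)                # second hidden layer
O = softmax(W3 @ H2 + b3)              # each column is a distribution
assert np.allclose(O.sum(axis=0), 1.0)
```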
## Activation Functions
Because they are so fundamental to deep learning, before going further,
let's take a brief look at some common activation functions.
### ReLU Function
As stated above, the most popular choice,
due to its simplicity of implementation
and its efficacy in training is the rectified linear unit (ReLU).
ReLUs provide a very simple nonlinear transformation.
Given the element $z$, the function is defined
as the maximum of that element and 0.
$$\mathrm{ReLU}(z) = \max(z, 0).$$
In other words, the ReLU function retains only positive elements and discards negative elements (setting those nodes to 0).
To get a better idea of what it looks like, we can plot it.
For convenience, we define a plotting function `xyplot`
to take care of the groundwork.
```
import sys
sys.path.insert(0, '..')
import numpy
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from torch.autograd import Variable
def xyplot(x_vals, y_vals, name):
    # We can't call var.numpy() directly because the variables may
    # require grad, so we use var.detach().numpy() instead.
    x_vals = x_vals.detach().numpy()
    y_vals = y_vals.detach().numpy()
    plt.plot(x_vals, y_vals)
    plt.xlabel('x')
    plt.ylabel(name + '(x)')
```
Since ReLU is commonly used as an activation function, PyTorch supports
the `relu` function as a basic native operator.
As the plot below shows, the activation function is piecewise linear.
```
x = torch.arange(-8.0, 8.0, 0.1, dtype=torch.float32).reshape(-1, 1).requires_grad_(True)
y=torch.nn.functional.relu(x)
xyplot(x,y,'relu')
```
When the input is negative, the derivative of ReLU function is 0
and when the input is positive, the derivative of ReLU function is 1.
Note that the ReLU function is not differentiable
when the input takes value precisely equal to 0.
In these cases, we go with the left-hand-side (LHS) derivative
and say that the derivative is 0 when the input is 0.
We can get away with this because the input may never actually be zero.
There's an old adage that if subtle boundary conditions matter,
we are probably doing (*real*) mathematics, not engineering.
That conventional wisdom may apply here.
See the derivative of the ReLU function plotted below.
By default, `.backward()` is `.backward(torch.Tensor([1]))`, which is appropriate when the output is a single scalar. Here we are dealing with a vector output, so we must pass an explicit gradient argument, as in the snippet below.
```
y.backward(torch.ones_like(x),retain_graph=True)
xyplot(x,x.grad,"grad of relu")
```
Note that there are many variants to the ReLU function, such as the parameterized ReLU (pReLU) of [He et al., 2015](https://arxiv.org/abs/1502.01852). This variation adds a linear term to the ReLU, so some information still gets through, even when the argument is negative.
$$\mathrm{pReLU}(x) = \max(0, x) + \alpha \min(0, x)$$
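A minimal NumPy sketch of pReLU (the value of $\alpha$ here is an arbitrary choice; in He et al. it is a learned parameter):

```python
import numpy as np

def prelu(x, alpha=0.25):
    # max(0, x) keeps positive inputs unchanged; alpha * min(0, x) leaks
    # a scaled version of negative inputs instead of zeroing them.
    return np.maximum(0, x) + alpha * np.minimum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
out = prelu(x)   # negative entries are scaled by alpha, positives pass through
```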
The reason for using the ReLU is that its derivatives are particularly well behaved - either they vanish or they just let the argument through. This makes optimization better behaved and it reduces the issue of the vanishing gradient problem (more on this later).
### Sigmoid Function
The sigmoid function transforms its inputs which take values in $\mathbb{R}$ to the interval $(0,1)$.
For that reason, the sigmoid is often called a *squashing* function:
it squashes any input in the range (-inf, inf)
to some value in the range (0,1).
$$\mathrm{sigmoid}(x) = \frac{1}{1 + \exp(-x)}.$$
In the earliest neural networks, scientists
were interested in modeling biological neurons
which either *fire* or *don't fire*.
Thus the pioneers of this field, going all the way back to McCulloch and Pitts in the 1940s, were focused on thresholding units.
A thresholding function takes either value $0$
(if the input is below the threshold)
or value $1$ (if the input exceeds the threshold).
When attention shifted to gradient based learning,
the sigmoid function was a natural choice
because it is a smooth, differentiable approximation to a thresholding unit.
Sigmoids are still common as activation functions on the output units,
when we want to interpret the outputs as probabilities
for binary classification problems
(you can think of the sigmoid as a special case of the softmax)
but the sigmoid has mostly been replaced by the simpler and easier-to-train ReLU for most uses in hidden layers.
In the "Recurrent Neural Network" chapter, we will describe
how sigmoid units can be used to control
the flow of information in a neural network
thanks to their capacity to transform values into the range between 0 and 1.
See the sigmoid function plotted below.
When the input is close to 0, the sigmoid function
approaches a linear transformation.
```
x = torch.arange(-8.0, 8.0, 0.1, dtype=torch.float32).reshape(-1, 1).requires_grad_(True)
y=torch.sigmoid(x)
xyplot(x,y,'sigmoid')
```
The derivative of sigmoid function is given by the following equation:
$$\frac{d}{dx} \mathrm{sigmoid}(x) = \frac{\exp(-x)}{(1 + \exp(-x))^2} = \mathrm{sigmoid}(x)\left(1-\mathrm{sigmoid}(x)\right).$$
The derivative of sigmoid function is plotted below.
Note that when the input is 0, the derivative of the sigmoid function
reaches a maximum of 0.25. As the input diverges from 0 in either direction, the derivative approaches 0.
```
y.backward(torch.ones_like(x),retain_graph=True)
xyplot(x,x.grad,'grad of sigmoid')
```
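The closed-form derivative, and its maximum of 0.25 at the origin, can also be verified with a quick stdlib-only finite-difference check (independent of the PyTorch cells above):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Check d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x)) at a few points.
h = 1e-6
for x in (-3.0, 0.0, 2.0):
    fd = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
    assert abs(fd - sigmoid(x) * (1 - sigmoid(x))) < 1e-6

# The maximum of the derivative is at x = 0, where it equals 0.25.
assert sigmoid(0.0) * (1 - sigmoid(0.0)) == 0.25
```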
### Tanh Function
Like the sigmoid function, the tanh (Hyperbolic Tangent)
function also squashes its inputs,
transforms them into elements on the interval between -1 and 1:
$$\text{tanh}(x) = \frac{1 - \exp(-2x)}{1 + \exp(-2x)}.$$
We plot the tanh function below. Note that as the input nears 0, the tanh function approaches a linear transformation. Although the shape of the function is similar to the sigmoid function, the tanh function exhibits point symmetry about the origin of the coordinate system.
```
x = torch.arange(-8.0, 8.0, 0.1, dtype=torch.float32).reshape(-1, 1).requires_grad_(True)
y=torch.tanh(x)
xyplot(x,y,"tanh")
```
The derivative of the Tanh function is:
$$\frac{d}{dx} \mathrm{tanh}(x) = 1 - \mathrm{tanh}^2(x).$$
The derivative of tanh function is plotted below.
As the input nears 0,
the derivative of the tanh function approaches a maximum of 1.
And as we saw with the sigmoid function,
as the input moves away from 0 in either direction,
the derivative of the tanh function approaches 0.
```
y.backward(torch.ones_like(x),retain_graph=True)
xyplot(x,x.grad,"grad of tanh")
```
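The identity $\frac{d}{dx}\tanh(x) = 1 - \tanh^2(x)$ can likewise be checked with a quick stdlib-only finite-difference sketch:

```python
import math

def tanh_grad(x):
    return 1.0 - math.tanh(x) ** 2

# Compare the closed form against a central finite difference.
h = 1e-6
for x in (-2.0, 0.0, 1.5):
    fd = (math.tanh(x + h) - math.tanh(x - h)) / (2 * h)
    assert abs(fd - tanh_grad(x)) < 1e-6
```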
In summary, we now know how to incorporate nonlinearities
to build expressive multilayer neural network architectures.
As a side note, your knowledge now already
puts you in command of the state of the art in deep learning, circa 1990.
In fact, you have an advantage over anyone working in the 1990s,
because you can leverage powerful open-source deep learning frameworks
to build models rapidly, using only a few lines of code.
Previously, getting these nets training
required researchers to code up thousands of lines of C and Fortran.
## Summary
* The multilayer perceptron adds one or multiple fully-connected hidden layers between the output and input layers and transforms the output of the hidden layer via an activation function.
* Commonly-used activation functions include the ReLU function, the sigmoid function, and the tanh function.
## Exercises
1. Compute the derivative of the tanh and the pReLU activation function.
1. Show that a multilayer perceptron using only ReLU (or pReLU) constructs a continuous piecewise linear function.
1. Show that $\mathrm{tanh}(x) + 1 = 2 \mathrm{sigmoid}(2x)$.
1. Assume we have a multilayer perceptron *without* nonlinearities between the layers. In particular, assume that we have $d$ input dimensions, $d$ output dimensions and that one of the layers had only $d/2$ dimensions. Show that this network is less expressive (powerful) than a single layer perceptron.
1. Assume that we have a nonlinearity that applies to one minibatch at a time. What kinds of problems do you expect this to cause?
# Scoring functions
Despite our reservations about treating our predictions as "yes/no" predictions of crime, we can consider using a [Scoring rule](https://en.wikipedia.org/wiki/Scoring_rule).
## References
1. Roberts, "Assessing the spatial and temporal variation in the skill of precipitation forecasts from an NWP model" [DOI:10.1002/met.57](http://onlinelibrary.wiley.com/doi/10.1002/met.57/abstract)
2. Weijs, "Kullback–Leibler Divergence as a Forecast Skill Score with Classic Reliability–Resolution–Uncertainty Decomposition" [DOI:10.1175/2010MWR3229.1](https://doi.org/10.1175/2010MWR3229.1)
## Discussion
The classical e.g. [Brier score](https://en.wikipedia.org/wiki/Brier_score) is appropriate when we have a sequence of events $i$ which either may occur or not. Let $p_i$ be our predicted probability that event $i$ will occur, and let $o_i$ be the $1$ if the event occurred, and $0$ otherwise. The Brier score is
$$ \frac{1}{N} \sum_{i=1}^N (p_i - o_i)^2. $$
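In code, the Brier score is a one-liner; a small sketch with hand-made values:

```python
def brier_score(p, o):
    # Mean squared difference between predicted probabilities p_i and
    # observed outcomes o_i (1 if the event occurred, 0 otherwise).
    return sum((pi - oi) ** 2 for pi, oi in zip(p, o)) / len(p)

score = brier_score([1.0, 0.0, 0.5], [1, 0, 1])  # perfect on the first two events
```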
The paper [1] considers aggregating this over different (spatial) scales. For the moment, we shall use [1] by analogy only, in order to deal with the problem that we might have repeated events ($o_i$ for us is the number of events to occur in a cell, so may be $>1$). We shall follow [1], vaguely, and let $u_i$ be the _fraction_ of the total number of events which occurred in spatial region (typically, grid cell) $i$. The score is then
$$ S = \frac{1}{N} \sum_{i=1}^N (p_i - u_i)^2 $$
where we sum over all spatial units $i=1,\cdots,N$.
### Normalisation
Notice that this is related to the KDE method. We can think of the values $(u_i)$ as a histogram estimation of the real probability density, and then $S$ is just the mean squared error, estimating the continuous version
$$ \int_{\Omega} (p(x) - f(x))^2 \ dx $$
where $\Omega$ is the study area. If we divide by the area of $\Omega$, then we obtain a measure of difference which is invariant under rescaling of $\Omega$.
The values $(p_i)$, as probabilities, sum to $1$, and the $(u_i)$ by definition sum to $1$. We hence see that an appropriate normalisation factor for $S$ is
$$ S = \frac{1}{NA} \sum_{i=1}^N (p_i - u_i)^2 $$
where $A$ is the area of each grid cell and so $NA$ is the total area.
### Skill scores
A related [Skill score](https://en.wikipedia.org/wiki/Forecast_skill) is
$$ SS = 1 - \frac{S}{S_\text{worst}} = 1 - \frac{\sum_{i=1}^N (p_i - u_i)^2}{\sum_{i=1}^N (p_i^2 + u_i^2)}
= \frac{2\sum_{i=1}^N p_iu_i}{\sum_{i=1}^N (p_i^2 + u_i^2)}. $$
Here
$$ S_\text{worst} = \frac{1}{NA} \sum_{i=1}^N (p_i^2 + u_i^2) $$
is the worst possible value for $S$ if there is no spatial association between the $(p_i)$ and $(u_i)$.
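A NumPy sketch of $S$, $S_\text{worst}$ and the skill score (the grid values below are made up for illustration):

```python
import numpy as np

def skill_score(p, u, cell_area=1.0):
    # SS = 1 - S / S_worst; by the algebra above this equals
    # 2 * sum(p*u) / sum(p**2 + u**2).
    p, u = np.asarray(p, float), np.asarray(u, float)
    n = p.size
    S = np.sum((p - u) ** 2) / (n * cell_area)
    S_worst = np.sum(p ** 2 + u ** 2) / (n * cell_area)
    return 1.0 - S / S_worst

p = np.array([0.5, 0.3, 0.2])   # predicted cell probabilities
u = np.array([0.4, 0.4, 0.2])   # observed event fractions
ss = skill_score(p, u)
assert np.isclose(ss, 2 * np.sum(p * u) / np.sum(p**2 + u**2))
assert np.isclose(skill_score(p, p), 1.0)   # perfect prediction gives SS = 1
```

Note that the cell area cancels in the ratio, so the normalisation only matters when reporting $S$ itself.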
### Multi-scale issues
Finally, [1] considers a multi-scale measure by aggregating the values $(p_i)$ and $(u_i)$ over larger and larger areas.
- Firstly we use $(p_i)$ and $(u_i)$ as is, on a grid of size $n\times m$ say. So $N=nm$.
- Now take the "moving average" or "sliding window" by averaging over each $2\times 2$ block. This gives a grid of size $(n-1) \times (m-1)$
- And so on...
- Ending with just the average of $p_i$ over the whole grid compared to the average of $u_i$ over the whole grid. These will always agree.
- If the grid is not square, then we will stop before this. Similarly, non-rectangular regions will need to be dealt with in an ad-hoc fashion.
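The $2\times 2$ sliding-window step above can be sketched as follows (a rectangular grid is assumed):

```python
import numpy as np

def coarse_grain(grid):
    # Average each overlapping 2x2 block: an (n, m) grid becomes (n-1, m-1).
    return (grid[:-1, :-1] + grid[1:, :-1]
            + grid[:-1, 1:] + grid[1:, 1:]) / 4.0

g = np.arange(9.0).reshape(3, 3)
coarse = coarse_grain(g)        # shape (2, 2)
```

Applying `coarse_grain` repeatedly walks down the scale hierarchy until a single cell remains.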
Finally, we should not forget to normalise correctly: at each stage, the "averaged" values should still sum to $1$ (being probabilities) and we should continue to divide by the total area. Let us think a bit more clearly about this. Suppose we group the original cells into (in general, overlapping) regions $(\Omega_i)$, with values (the _sum_ of the values in each region) $(x_i)$ and $(x_i')$ say. We then want to _normalise_ these values in some way, and compute the appropriate Brier score. If each region $\Omega_i$ has the same area (e.g. we start with a rectangular grid) then there is no issue. For more general grids (which have been clipped to geometry, say) we proceed by a vague _analogy_, pretending that the regions $\Omega_i$ are actually disjoint, cover the whole study area, and that $x_i = \int_{\Omega_i} f$ for some non-normalised function $f$.
- We renormalise $f$ by setting $g = af$ for some constant $a$ with $\int g=1$ so $a^{-1} = \int f = \sum_i x_i$. So $g = y_i$ on $\Omega_i$ where $y_i = \big( \sum_i x_i \big)^{-1} x_i$.
- Do the same for $x_i'$ leading to $y'_i = \big( \sum_i x'_i \big)^{-1} x'_i$.
- Compute $S = \frac{1}{|\Omega|} \int (g - g')^2 = \big(\sum_i |\Omega_i|\big)^{-1} \sum_i |\Omega_i| (y_i - y'_i)^2$ and similarly $S_{\text{worst}} = \big(\sum_i |\Omega_i|\big)^{-1} \sum_i |\Omega_i| (y_i^2 + (y'_i)^2)$.
## Use Kullback-Leibler instead
Following [2] now (and again with an ad hoc change to allow non-binary variables) we could use Kullback-Leibler divergence (discussed in more detail, and more rigorously, in another notebook) to form the score:
$$ S_{KL} = \frac{1}{N} \sum_{i=1}^N \Big( u_i \log\big( u_i / p_i \big)
+ (1-u_i) \log\big( (1-u_i) / (1-p_i) \big) \Big) $$
We use the convention that $0 \cdot \log(0) = 0$, and we should adjust zero values of $p_i$ to some very small value.
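A NumPy sketch of $S_{KL}$, handling the $0 \cdot \log(0) = 0$ convention and clipping $p_i$ away from zero:

```python
import numpy as np

def kl_score(p, u, eps=1e-12):
    # Mean per-cell binary KL divergence between observed fractions u
    # and predictions p; p is clipped away from 0 and 1 as suggested above.
    p = np.clip(np.asarray(p, float), eps, 1 - eps)
    u = np.asarray(u, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        # np.where enforces the 0 * log(0) = 0 convention for u in {0, 1}.
        t1 = np.where(u > 0, u * np.log(u / p), 0.0)
        t2 = np.where(u < 1, (1 - u) * np.log((1 - u) / (1 - p)), 0.0)
    return float(np.mean(t1 + t2))

assert np.isclose(kl_score([0.5, 0.5], [0.5, 0.5]), 0.0)  # perfect match scores 0
```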
```
# from google.colab import drive
# drive.mount('/content/drive')
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from matplotlib import pyplot as plt
import copy
import random
from numpy import linalg as LA
from tabulate import tabulate
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
gamma = 0.01
gamma
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
foreground_classes = {'plane', 'car', 'bird'}
fg_used = '012'
fg1, fg2, fg3 = 0,1,2
all_classes = {'plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'}
background_classes = all_classes - foreground_classes
background_classes
# print(type(foreground_classes))
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)
dataiter = iter(trainloader)
true_train_background_data=[]
true_train_background_label=[]
true_train_foreground_data=[]
true_train_foreground_label=[]
batch_size=10
for i in range(5000):
    images, labels = next(dataiter)   # `.next()` was removed from modern iterators
    for j in range(batch_size):
        if classes[labels[j]] in background_classes:
            img = images[j].tolist()
            true_train_background_data.append(img)
            true_train_background_label.append(labels[j])
        else:
            img = images[j].tolist()
            true_train_foreground_data.append(img)
            true_train_foreground_label.append(labels[j])
true_train_foreground_data = torch.tensor(true_train_foreground_data)
true_train_foreground_label = torch.tensor(true_train_foreground_label)
true_train_background_data = torch.tensor(true_train_background_data)
true_train_background_label = torch.tensor(true_train_background_label)
len(true_train_foreground_data), len(true_train_foreground_label), len(true_train_background_data), len(true_train_background_label)
dataiter = iter(testloader)
true_test_background_data=[]
true_test_background_label=[]
true_test_foreground_data=[]
true_test_foreground_label=[]
batch_size=10
for i in range(1000):
    images, labels = next(dataiter)
    for j in range(batch_size):
        if classes[labels[j]] in background_classes:
            img = images[j].tolist()
            true_test_background_data.append(img)
            true_test_background_label.append(labels[j])
        else:
            img = images[j].tolist()
            true_test_foreground_data.append(img)
            true_test_foreground_label.append(labels[j])
true_test_foreground_data = torch.tensor(true_test_foreground_data)
true_test_foreground_label = torch.tensor(true_test_foreground_label)
true_test_background_data = torch.tensor(true_test_background_data)
true_test_background_label = torch.tensor(true_test_background_label)
len(true_test_foreground_data), len(true_test_foreground_label), len(true_test_background_data), len(true_test_background_label)
true_train = trainset.data
train_label = trainset.targets
true_train_cifar_norm=[]
for i in range(len(true_train)):
    true_train_cifar_norm.append(LA.norm(true_train[i]))
len(true_train_cifar_norm)
def plot_hist(values):
    plt.hist(values, density=True, bins=200)  # `density=False` would make counts
    plt.ylabel('NORM')
    plt.xlabel('Data')
plot_hist(true_train_cifar_norm)
true_train.shape
train = np.reshape(true_train, (50000,3072))
train.shape, true_train.shape
u, s, vh = LA.svd(train, full_matrices= False)
u.shape , s.shape, vh.shape
s
vh
sing_dirs = vh[3062:3072, :]   # the 10 least-significant right-singular vectors
sing_dirs
u1 = sing_dirs[7, :]
u2 = sing_dirs[8, :]
u3 = sing_dirs[9, :]
u1
u2
u3
len(train_label)
def is_equal(x1, x2):
    # Count the number of positions where the two sequences agree.
    cnt = 0
    for i in range(len(x1)):
        if x1[i] == x2[i]:
            cnt += 1
    return cnt
def add_noise_cifar(train, label, gamma, fg1, fg2, fg3):
    # Perturb each foreground image along one of the least-significant
    # singular directions, scaled by gamma times the image's norm.
    cnt = 0
    for i in range(len(label)):
        if label[i] == fg1:
            train[i] = train[i] + gamma * LA.norm(train[i]) * u1
            cnt += 1
        elif label[i] == fg2:
            train[i] = train[i] + gamma * LA.norm(train[i]) * u2
            cnt += 1
        elif label[i] == fg3:
            train[i] = train[i] + gamma * LA.norm(train[i]) * u3
            cnt += 1
    print("total modified", cnt)
    return train
noise_train = np.reshape(true_train, (50000,3072))
noise_train = add_noise_cifar(noise_train, train_label, gamma , fg1,fg2,fg3)
noise_train_cifar_norm=[]
for i in range(len(noise_train)):
    noise_train_cifar_norm.append(LA.norm(noise_train[i]))
plt.hist(noise_train_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts
plt.hist(true_train_cifar_norm, density=True, bins=200,label='true')
plt.ylabel('NORM')
plt.xlabel('Data')
plt.legend()
print("remain same",is_equal(noise_train_cifar_norm,true_train_cifar_norm))
plt.hist(true_train_cifar_norm, density=True, bins=200,label='true')
plt.ylabel('NORM')
plt.xlabel('Data')
plt.legend()
plt.hist(noise_train_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts
# plt.hist(true_train_cifar_norm, density=True, bins=200,label='true')
plt.ylabel('NORM')
plt.xlabel('Data')
plt.legend()
noise_train.shape, trainset.data.shape
noise_train = np.reshape(noise_train, (50000,32, 32, 3))
noise_train.shape
trainset.data = noise_train
true_test = testset.data
test_label = testset.targets
true_test.shape
test = np.reshape(true_test, (10000,3072))
test.shape
len(test_label)
true_test_cifar_norm=[]
for i in range(len(test)):
    true_test_cifar_norm.append(LA.norm(test[i]))
plt.hist(true_test_cifar_norm, density=True, bins=200,label='true')
plt.ylabel('NORM')
plt.xlabel('Data')
plt.legend()
noise_test = np.reshape(true_test, (10000,3072))
noise_test = add_noise_cifar(noise_test, test_label, gamma , fg1,fg2,fg3)
noise_test_cifar_norm=[]
for i in range(len(noise_test)):
    noise_test_cifar_norm.append(LA.norm(noise_test[i]))
plt.hist(noise_test_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts
plt.hist(true_test_cifar_norm, density=True, bins=200,label='true')
plt.ylabel('NORM')
plt.xlabel('Data')
plt.legend()
is_equal(noise_test_cifar_norm,true_test_cifar_norm)
plt.hist(true_test_cifar_norm, density=True, bins=200,label='true')
plt.ylabel('NORM')
plt.xlabel('Data')
plt.legend()
plt.hist(noise_test_cifar_norm, density=True, bins=200,label='gamma='+str(gamma)) # `density=False` would make counts
# plt.hist(true_train_cifar_norm, density=True, bins=200,label='true')
plt.ylabel('NORM')
plt.xlabel('Data')
plt.legend()
noise_test.shape, testset.data.shape
noise_test = np.reshape(noise_test, (10000,32, 32, 3))
noise_test.shape
testset.data = noise_test
fg = [fg1,fg2,fg3]
bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg))
fg,bg
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)
dataiter = iter(trainloader)
train_background_data=[]
train_background_label=[]
train_foreground_data=[]
train_foreground_label=[]
batch_size=10
for i in range(5000):
    images, labels = next(dataiter)
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
train_background_data.append(img)
train_background_label.append(labels[j])
else:
img = images[j].tolist()
train_foreground_data.append(img)
train_foreground_label.append(labels[j])
train_foreground_data = torch.tensor(train_foreground_data)
train_foreground_label = torch.tensor(train_foreground_label)
train_background_data = torch.tensor(train_background_data)
train_background_label = torch.tensor(train_background_label)
dataiter = iter(testloader)
test_background_data=[]
test_background_label=[]
test_foreground_data=[]
test_foreground_label=[]
batch_size=10
for i in range(1000):
    images, labels = next(dataiter)
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
test_background_data.append(img)
test_background_label.append(labels[j])
else:
img = images[j].tolist()
test_foreground_data.append(img)
test_foreground_label.append(labels[j])
test_foreground_data = torch.tensor(test_foreground_data)
test_foreground_label = torch.tensor(test_foreground_label)
test_background_data = torch.tensor(test_background_data)
test_background_label = torch.tensor(test_background_label)
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img#.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
img1 = torch.cat((true_test_foreground_data[27],true_test_foreground_data[3],true_test_foreground_data[43]),1)
imshow(img1)
img2 = torch.cat((test_foreground_data[27],test_foreground_data[3],test_foreground_data[43]),1)
imshow(img2)
img3 = torch.cat((img1,img2),2)
imshow(img3)
print(img2.size())
print(LA.norm(test_foreground_data[27]), LA.norm(true_test_foreground_data[27]))
import random
for i in range(10):
random.seed(i)
a = np.random.randint(0,10000)
img1 = torch.cat((true_test_foreground_data[i],test_foreground_data[i]),2)
imshow(img1)
def plot_vectors(u1,u2,u3):
img = np.reshape(u1,(3,32,32))
img = img / 2 + 0.5 # unnormalize
npimg = img#.numpy()
print("vector u1 norm",LA.norm(img))
plt.figure(1)
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.title("vector u1")
img = np.reshape(u2,(3,32,32))
img = img / 2 + 0.5 # unnormalize
npimg = img#.numpy()
print("vector u2 norm",LA.norm(img))
plt.figure(2)
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.title("vector u2")
img = np.reshape(u3,(3,32,32))
img = img / 2 + 0.5 # unnormalize
npimg = img#.numpy()
print("vector u3 norm",LA.norm(img))
plt.figure(3)
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.title("vector u3")
plt.show()
plot_vectors(u1,u2,u3)
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
def create_mosaic_img(background_data, foreground_data, foreground_label, bg_idx,fg_idx,fg,fg1):
"""
bg_idx : list of indexes of background_data[] to be used as background images in mosaic
fg_idx : index of image to be used as foreground image from foreground data
fg : at what position/index foreground image has to be stored out of 0-8
"""
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor"))
j+=1
else:
image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor"))
    label = foreground_label[fg_idx] - fg1  # shift labels so that foreground classes fg1, fg2, fg3 are stored as 0, 1, 2
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
def init_mosaic_creation(bg_size, fg_size, desired_num, background_data, foreground_data, foreground_label,fg1):
# bg_size = 35000
# fg_size = 15000
# desired_num = 30000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
    fore_idx =[] # list of indexes at which the foreground image is present in a mosaic image, i.e. from 0 to 8
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(desired_num):
np.random.seed(i+ bg_size + desired_num)
bg_idx = np.random.randint(0,bg_size,8)
# print(bg_idx)
np.random.seed(i+ fg_size + desired_num)
fg_idx = np.random.randint(0,fg_size)
# print(fg_idx)
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(background_data, foreground_data, foreground_label ,bg_idx,fg_idx,fg, fg1)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
return mosaic_list_of_images, mosaic_label, fore_idx
train_mosaic_list_of_images, train_mosaic_label, train_fore_idx = init_mosaic_creation(bg_size = 35000,
fg_size = 15000,
desired_num = 30000,
background_data = train_background_data,
foreground_data = train_foreground_data,
foreground_label = train_foreground_label,
fg1 = fg1
)
batch = 250
msd_1 = MosaicDataset(train_mosaic_list_of_images, train_mosaic_label , train_fore_idx)
train_loader_from_noise_train_mosaic_30k = DataLoader( msd_1,batch_size= batch ,shuffle=True)
test_mosaic_list_of_images, test_mosaic_label, test_fore_idx = init_mosaic_creation(bg_size = 35000,
fg_size = 15000,
desired_num = 10000,
background_data = train_background_data,
foreground_data = train_foreground_data,
foreground_label = train_foreground_label,
fg1 = fg1
)
batch = 250
msd_2 = MosaicDataset(test_mosaic_list_of_images, test_mosaic_label , test_fore_idx)
test_loader_from_noise_train_mosaic_30k = DataLoader( msd_2, batch_size= batch ,shuffle=True)
test_mosaic_list_of_images_1, test_mosaic_label_1, test_fore_idx_1 = init_mosaic_creation(bg_size = 7000,
fg_size = 3000,
desired_num = 10000,
background_data = test_background_data,
foreground_data = test_foreground_data,
foreground_label = test_foreground_label,
fg1 = fg1
)
batch = 250
msd_3 = MosaicDataset(test_mosaic_list_of_images_1, test_mosaic_label_1 , test_fore_idx_1)
test_loader_from_noise_test_mosaic_10k = DataLoader( msd_3, batch_size= batch ,shuffle=True)
test_mosaic_list_of_images_2, test_mosaic_label_2, test_fore_idx_2 = init_mosaic_creation(bg_size = 35000,
fg_size = 15000,
desired_num = 10000,
background_data = true_train_background_data,
foreground_data = true_train_foreground_data,
foreground_label = true_train_foreground_label,
fg1 = fg1
)
batch = 250
msd_4 = MosaicDataset(test_mosaic_list_of_images_2, test_mosaic_label_2, test_fore_idx_2)
test_loader_from_true_train_mosaic_30k = DataLoader( msd_4, batch_size= batch , shuffle=True)
test_mosaic_list_of_images_3, test_mosaic_label_3, test_fore_idx_3 = init_mosaic_creation(bg_size = 7000,
fg_size = 3000,
desired_num = 10000,
background_data = true_test_background_data,
foreground_data = true_test_foreground_data,
foreground_label = true_test_foreground_label,
fg1 = fg1
)
batch = 250
msd_5 = MosaicDataset(test_mosaic_list_of_images_3, test_mosaic_label_3, test_fore_idx_3)
test_loader_from_true_train_mosaic_10k = DataLoader( msd_5, batch_size= batch ,shuffle=True)
class Module1(nn.Module):
def __init__(self):
super(Module1, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.fc4 = nn.Linear(10,1)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
class Module2(nn.Module):
def __init__(self):
super(Module2, self).__init__()
self.module1 = Module1().double()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.fc4 = nn.Linear(10,3)
def forward(self,z): #z batch of list of 9 images
y = torch.zeros([batch,3, 32,32], dtype=torch.float64)
x = torch.zeros([batch,9],dtype=torch.float64)
x = x.to("cuda")
y = y.to("cuda")
for i in range(9):
x[:,i] = self.module1.forward(z[:,i])[:,0]
x = F.softmax(x,dim=1)
x1 = x[:,0]
torch.mul(x1[:,None,None,None],z[:,0])
for i in range(9):
x1 = x[:,i]
y = y + torch.mul(x1[:,None,None,None],z[:,i])
y = y.contiguous()
y1 = self.pool(F.relu(self.conv1(y)))
y1 = self.pool(F.relu(self.conv2(y1)))
y1 = y1.contiguous()
y1 = y1.reshape(-1, 16 * 5 * 5)
y1 = F.relu(self.fc1(y1))
y1 = F.relu(self.fc2(y1))
y1 = F.relu(self.fc3(y1))
y1 = self.fc4(y1)
return y1 , x, y
def training(trainloader, fore_net, epochs=600):
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(fore_net.parameters(), lr=0.01, momentum=0.9)
nos_epochs = epochs
for epoch in range(nos_epochs): # loop over the dataset multiple times
running_loss = 0.0
cnt=0
mini_loss = []
iteration = 30000 // batch
for i, data in enumerate(train_loader_from_noise_train_mosaic_30k):
inputs , labels , fore_idx = data
inputs, labels, fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
optimizer.zero_grad()
outputs, alphas, avg_images = fore_net(inputs)
_, predicted = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
mini = 40
if cnt % mini == mini - 1: # print every 40 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
mini_loss.append(running_loss / mini)
running_loss = 0.0
cnt=cnt+1
if(np.average(mini_loss) <= 0.05):
break
print('Finished Training')
return fore_net, epoch
def testing(loader, fore_net):
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
outputs, alphas, avg_images = fore_net(inputs)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
return correct, total, focus_true_pred_true, focus_false_pred_true, focus_true_pred_false, focus_false_pred_false, argmax_more_than_half
def enter_into(table, sno, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half , fg, bg, epoch = "NA"):
entry = []
entry = [sno,'fg = '+ str(fg),'bg = '+str(bg), epoch, total, correct,]
entry.append((100.0*correct/total))
entry.append((100 * ftpt / total))
entry.append( (100 * ffpt / total))
entry.append( ( 100 * ftpf / total))
entry.append( ( 100 * ffpf / total))
entry.append( alpha_more_half)
table.append(entry)
print(" ")
print("="*160)
print(tabulate(table, headers=['S.No.', 'fg_class','bg_class','Epoch used','total_points', 'correct','accuracy','FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5'] ) )
print(" ")
print("="*160)
return table
def add_average_entry(table):
entry =[]
entry = ['Avg', "","" ,"" ,"" , "",]
    entry.append( np.mean(np.array(table)[:,6].astype(float)) )
    entry.append( np.mean(np.array(table)[:,7].astype(float)) )
    entry.append( np.mean(np.array(table)[:,8].astype(float)) )
    entry.append( np.mean(np.array(table)[:,9].astype(float)) )
    entry.append( np.mean(np.array(table)[:,10].astype(float)) )
    entry.append( np.mean(np.array(table)[:,11].astype(float)) )
table.append(entry)
print(" ")
print("="*160)
print(tabulate(table, headers=['S.No.', 'fg_class','bg_class','Epoch used','total_points', 'correct','accuracy','FTPT', 'FFPT', 'FTPF', 'FFPF', 'avg_img > 0.5'] ) )
print(" ")
print("="*160)
return table
train_table=[]
test_table1=[]
test_table2=[]
test_table3=[]
test_table4=[]
fg = [fg1,fg2,fg3]
bg = list(set([0,1,2,3,4,5,6,7,8,9])-set(fg))
number_runs = 10
for i in range(number_runs):
fore_net = Module2().double()
fore_net = fore_net.to("cuda")
fore_net, epoch = training(train_loader_from_noise_train_mosaic_30k, fore_net)
correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(train_loader_from_noise_train_mosaic_30k, fore_net)
train_table = enter_into(train_table, i+1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg, str(epoch) )
correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_noise_train_mosaic_30k, fore_net)
test_table1 = enter_into(test_table1, i+1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half , fg, bg )
correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_noise_test_mosaic_10k, fore_net)
test_table2 = enter_into(test_table2, i+1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg )
correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_true_train_mosaic_30k, fore_net)
test_table3 = enter_into(test_table3, i+1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half , fg, bg)
correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half = testing(test_loader_from_true_train_mosaic_10k, fore_net)
test_table4 = enter_into(test_table4, i+1, correct, total, ftpt, ffpt, ftpf, ffpf, alpha_more_half, fg, bg )
train_table = add_average_entry(train_table)
test_table1 = add_average_entry(test_table1)
test_table2 = add_average_entry(test_table2)
test_table3 = add_average_entry(test_table3)
test_table4 = add_average_entry(test_table4)
# torch.save(fore_net.state_dict(),"/content/drive/My Drive/Research/mosaic_from_CIFAR_involving_bottop_eigen_vectors/fore_net_epoch"+str(epoch)+"_fg_used"+str(fg_used)+".pt")
```
# Numerical evaluation of the deflection of the beam
Number (code) of assignment: 5R4
Description of activity: H2 & H3
Report on behalf of:
name : Pieter van Halem
student number (4597591)
name : Dennis Dane
student number (4592239)
Data of student taking the role of contact person:
name : Pieter van Halem
email address : pietervanhalem@hotmail.com
# Function definition
The first block imports all the packages and defines the constants. The second block defines all the functions for the numerical analysis. The third block contains the two functions for the bisection method, which is used in assignment 2.13.
```
import numpy as np
import scipy.sparse.linalg as sp_lg
import scipy.sparse as sp
import scipy as scp
import numpy.linalg as lg
import matplotlib.pyplot as plt
%matplotlib inline
EI = 2 * 10 ** 11 * (1/12) * 0.04 * 0.2 ** 3
L = 10
s = 2
xleft = 0.0
xright = L
yleft = 0.0
yright = 0.0
g = 9.8
def A(h, N):
d0 = np.ones(N)
d1 = np.ones(N-1)
d2 = np.ones(N-2)
A = (6*np.diag(d0,0) + -4*np.diag(d1,-1) + -4*np.diag(d1,1) + 1*np.diag(d2,-2) + 1*np.diag(d2,2))
A[0,0] = 5
A[N-1,N-1] = 5
return A * EI/(h ** 4)
def beig(h,N,x,yleft,yright, qM):
result = qM*np.ones(N)
return result
def bm(h,N,x,yleft,yright, qm):
result = np.zeros(N)
    if ((L/2 - s/2) / h).is_integer():
for i in range(int((L/2-s/2)/h - 1),int((L/2+s/2)/h)):
if (i==int((L/2-s/2)/h - 1) or i == int((L/2+s/2)/h - 1)):
result[i] = result[i] + qm/2
else:
result[i] = result[i] + qm
return result
def bn(h,N,x):
result = np.zeros(N)
for i in range(int((L/2-s/2)/h -1),int((L/2+s/2)/h -1)):
result[i] = result[i] + 125 * np.pi* g * np.sin(np.pi*((h*(i+1)-4)/2))
return result
def solve(h,N,x,yleft,yright, k, qM, qm):
AA = A(h,N)
if k == 1:
bb = beig(h,N,x,yleft,yright, qM)
elif k == 2:
bb = bm(h,N,x,yleft,yright, qm)
elif k==3:
bb = beig(h,N,x,yleft,yright, qM)
bb = bb + bm(h,N,x,yleft,yright, qm)
elif k == 4:
bb = beig(h,N,x,yleft,yright, qM)
bb = bb + bn(h,N,x)
y = lg.solve(AA,bb)
result = np.concatenate(([yleft],y,[yright]))
return result
def main(N, k, qM = 611.52, qm = 2450.0):
h = (xright - xleft)/(N+1)
x = np.linspace(xleft,xright,N+2)
y = solve(h,N,x,yleft,yright,k, qM, qm)
return x,y
def plot(x,y):
plt.figure("Boundary value problem")
plt.plot(x,y,"k")
plt.xlabel("x")
plt.ylabel("y")
    plt.title("The graph of the function y")
    plt.legend(["y"], loc="best")
def table(x,y,N):
print ("{:>4}{:>11}{:>21}".format("k", "x_k", "y(x_k)"))
for k in range(0, N+2):
print ("{:4.0f}{:11.2f}{:23.7e}".format(k, x[k], y[k]))
def func(qm):
N = 199
x,y = main(N, 3,611.52, qm)
return np.max(y) - 0.03
def bisection(func, x1, x2, tol=0.01, nmax=10):
i = 0
for i in range(nmax):
xm = (1/2)*(x1 + x2)
fm = func(xm)
if func(xm) * func(x2) <= 0:
x1 = xm
else:
x2 = xm
i += 1
if np.abs(func(x1)) < tol:
break
if i == nmax:
        a = 'Warning: the nmax is exceeded'
print(a)
return x1
```
# Assignment 2.11
Choose $h=1.0$ as grid size and make a table of the obtained numerical approximation of $y$. The table must give the deflection in 8-digit floating point format.
```
N = 9
x,y = main(N, 3)
table(x,y,len(y)-2)
```
# Assignment 2.12
Take $h=0.05$. Compute both $y$ and $y_{eig}$. Plot the obtained approximation of $y$ as a function of $x$, and plot $y_{eig}$ in the same figure so that the graphs can be distinguished visually. Where is the maximal deflection attained? Which values do $y$ and $y_{eig}$ take at the midpoint of the beam?
```
N = 199
x,y = main(N, 1)
x2,y2222 = main(N, 3)
plt.figure("Boundary value problem")
plt.plot(x,y,"b", x2,y2222,"r")
plt.plot(x[np.argmax(y)],np.max(y),"xk", x2[np.argmax(y2222)],np.max(y2222),"xk")
plt.xlabel("x")
plt.ylabel("y")
plt.title("The graph of the function y")
plt.legend(["yeig", "y", "maximal deflection"], loc="best")
plt.gca().invert_yaxis()
plt.show()
print("The maximal deflection of yeig occurs at: x=",x[(np.argmax(y))])
print("The maximal deflection of y occurs at: x=",x2[(np.argmax(y2222))])
print()
print("The deflection in the midpoint of the beam is: yeig(5)= {:.7e}".format(np.max(y)))
print("The deflection in the midpoint of the beam is: y(5)= {:.7e}".format(np.max(y2222)))
```
# Assignment 2.13
Determine the maximal mass $m$ allowed for, i.e. the mass leading to a deflection in the midpoint of the beam with a magnitude of 0.03 m (see the original goal, formulated at the beginning of the assignment).
```
qmopt = bisection(func, 1000, 30000, tol = 1e-15, nmax = 100)
x,y = main(N, 3, qm = qmopt)
qmopt = qmopt*2/g
ymaxx = np.max(y)
print("The max value for m is:{:.7e}[kg] the deflection for this m is:{:.7e}".format(qmopt, ymaxx))
print("The truncation error is smaller than: 1e-15")
```
The maximal load $m$ is obtained with the bisection method. We chose a tolerance of $10^{-15}$, so that the error is not visible in this notebook. We chose the bisection method because it always converges. We could not use the Newton-Raphson method because the derivatives of the function are not known.
The defined functions are given in `In [3]`.
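As a rough check on the cost of that tolerance: bisection halves the bracket every iteration, so shrinking the initial bracket $[1000, 30000]$ below a width of $10^{-15}$ takes about $\log_2((x_2 - x_1)/\mathrm{tol})$ steps. This is only an estimate, since the bisection above stops on $|f(x_1)| < \mathrm{tol}$ rather than on the bracket width:

```python
# Estimate of the bisection iterations needed to shrink the bracket below tol.
import math

x1, x2, tol = 1000.0, 30000.0, 1e-15
iterations = math.ceil(math.log2((x2 - x1) / tol))
print(iterations)  # 65
```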
# Assignment 2.14
Determine $A_m$ such that the total additional mass is again 500 kg. Sketch the original load $q_m$, i.e. (6) with $m = 500$, and the adapted load in one figure.
To determine the value of $A_m$ we need to solve the following equation:
$$\int_{L/2-s/2}^{L/2+s/2} A_m \sin\left(\pi\,\frac{x-(L/2-s/2)}{s}\right) dx = 500$$
Solving this equation results in:
$$\frac{4}{\pi}A_m = 500$$
$$A_m = 125\pi$$
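The result can also be checked numerically with a simple midpoint-rule quadrature (a self-contained sketch; it only assumes the constants $L = 10$ and $s = 2$ defined earlier):

```python
# Numerical check that A_m = 125*pi gives a total additional mass of 500 kg.
import math

L, s = 10.0, 2.0
a = L / 2 - s / 2                 # left edge of the distributed load
Am = 125 * math.pi                # amplitude from the derivation above
n = 100000
h = s / n
# Midpoint-rule quadrature of Am * sin(pi * (x - a) / s) over [a, a + s]
total_mass = sum(Am * math.sin(math.pi * ((i + 0.5) * h) / s) for i in range(n)) * h
print(total_mass)  # ~500.0
```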
```
x = np.linspace(4,6,100)
x2 = 4*np.ones(100)
x3 = 6*np.ones(100)
x4 = np.linspace(0,4,100)
x5 = np.linspace(6,10,100)
y1 = (500 * g / s)*np.ones(100)
y2 = 125 * np.pi* g * np.sin(np.pi*((x-4)/2))
y3 = np.linspace(0,500 * g / s,100)
y4 = np.zeros(100)
plt.plot(x,y2, 'r', x,y1, 'b', x2,y3,'b',x3,y3,'b', x4,y4,'b',x5,y4,'b')
plt.legend(["adapted", "original"], loc="best")
plt.xlim(0, 10);
```
# Assignment 2.15
Determine (using $h = 0.05$) the maximal deflection of the beam with the new load. Check whether this value is significantly different from the one obtained in assignment 2.12.
```
N=199
x,y = main(N, 4)
print("y(L/2) = {:.7e}".format(np.max(y)))
print("y2(L/2)-y1(L/2) = {:.7e} [m] = {:.7e} %".format(np.max(y) - np.max(y2222),(np.max(y) - np.max(y2222))/np.max(y) * 100 ) )
```
The deflection increases by approximately 0.14 mm, which is 0.43% and therefore not significant. It is, however, plausible that the deflection increases: because the load is concentrated more towards the centre, a larger bending moment is produced, which increases the deflection of the beam.
```
from pathlib import Path
data_path = Path("../data").resolve()
book_list = {"唐家三少":["斗罗大陆", "斗罗大陆II绝世唐门", "酒神"],
"天蚕土豆":["斗破苍穹", "武动乾坤", "大主宰", "魔兽剑圣异界纵横"],
"猫腻":["庆余年", "间客", "将夜", "朱雀记", "择天记"]
}
```
## Data cleaning
1. Fullwidth --> halfwidth; lowercase all English text
2. Remove HTML tags, chapter titles, bracketed content, and URL links (and their variants)
3. Merge repeated characters and punctuation
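A minimal sketch of these cleaning steps; the actual implementation lives in `prepare_data.clean_text`, so the exact regexes and the run-length threshold here are assumptions:

```python
# Sketch of the cleaning steps: fullwidth -> halfwidth, lowercase,
# strip HTML tags / bracketed content / URLs, collapse repeated characters.
import re

def clean_text_sketch(text: str) -> str:
    # Fullwidth -> halfwidth for the ASCII range, then lowercase English letters
    out = []
    for ch in text:
        code = ord(ch)
        if code == 0x3000:                 # fullwidth (ideographic) space
            code = 0x20
        elif 0xFF01 <= code <= 0xFF5E:     # fullwidth ASCII variants
            code -= 0xFEE0
        out.append(chr(code))
    text = "".join(out).lower()
    # Drop HTML tags, bracketed content, and URL-like tokens
    text = re.sub(r"<[^>]+>", "", text)
    text = re.sub(r"[((][^))]*[))]", "", text)
    text = re.sub(r"(https?://|www\.)\S+", "", text)
    # Collapse runs of three or more identical characters down to one
    text = re.sub(r"(.)\1{2,}", r"\1", text)
    return text

print(clean_text_sketch("Hello<br>World!!!!"))  # -> helloworld!
```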
```
book = data_path/"唐家三少"/"斗罗大陆.txt"
text = book.read_text()[:1000]
text
from prepare_data import clean_text, split_sentence
from hanlp.utils.lang.zh.char_table import CharTable
new_text = clean_text(text)
new_text
```
## Data analysis
1. Sentence and paragraph length analysis
2. Store the tokenized dataset in a standardized format
3. Word-frequency analysis
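Step 3 (word-frequency analysis) amounts to counting tokens over all sentences. A toy sketch of the idea; in the notebook the tokens would come from `book_results[book]["tokens"]`:

```python
# Toy sketch of word-frequency counting over tokenized sentences.
from collections import Counter

tokens = [["he", "said", "hello"], ["she", "said", "hi"]]
freq = Counter(word for sentence in tokens for word in sentence)
print(freq.most_common(1))  # [('said', 2)]
```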
```
import json
def load_book_res():
book_results = {}
for author in book_list:
author_dir = data_path/author
for book in book_list[author]:
book_res_file = author_dir/"{}.json".format(book)
with book_res_file.open(mode="r") as f:
book_res = json.load(f)
book_results[book] = book_res
return book_results
book_results = load_book_res()
import random
random.choice(book_results["酒神"]["tokens"])
```
### Sentence length measured in tokens
```
book_sentences = []
for book, res in book_results.items():
book_sentences.extend(res["tokens"])
sentences_len = [len(x) for x in book_sentences]
sorted_sentences = sorted(zip(book_sentences, sentences_len), key=lambda x: x[1], reverse=True)
print("Total sentences number: {}".format(len(sorted_sentences)))
import seaborn as sns
import numpy as np
sns.distplot(np.clip(sentences_len, 0, 150), kde=True, )
sns.distplot(sentences_len, rug=True, )
random.choice(sorted_sentences[:100])
```
### Sentence length measured in characters
```
book_sentences_char = []
for book, res in book_results.items():
book_sentences_char.extend(res["sentences"])
sentences_char_len = [len(x) for x in book_sentences_char]
sorted_sentences_char = sorted(zip(book_sentences_char, sentences_char_len), key=lambda x: x[1], reverse=True)
sns.distplot(sentences_char_len, rug=True)
sns.distplot(np.clip(sentences_char_len, 0, 300), kde=True)
random.choice(sorted_sentences_char[:1000])
```
### Sentence length with commas as separators
```
book_sentences_char_comma = []
for book, res in book_results.items():
sentences = res["sentences"]
sentences_comma = []
for sentence in sentences:
sentences_comma.extend(split_sentence(sentence, comma=True))
book_sentences_char_comma.extend(sentences_comma)
sentences_char_comma_len = [len(x) for x in book_sentences_char_comma]
sorted_sentences_char_comma = sorted(zip(book_sentences_char_comma, sentences_char_comma_len), key=lambda x: x[1], reverse=True)
print("Total number of comma separated sentences = {}".format(len(sentences_char_comma_len)))
sns.distplot(sentences_char_comma_len, kde=True, rug=True)
sns.distplot(np.clip(sentences_char_comma_len, 0, 110))
random.choice(sorted_sentences_char_comma[:1000])
```
### Comparing sentence-length distributions with and without tokenization
```
import numpy as np
def show_list_stat(lst):
quantile1 = np.quantile(lst, 0.01)
quantile10 = np.quantile(lst, 0.1)
quantile20 = np.quantile(lst, 0.2)
median = np.median(lst)
quantile90 = np.quantile(lst, 0.9)
quantile99 = np.quantile(lst, 0.99)
quantile999 = np.quantile(lst,0.999)
quantile9999 = np.quantile(lst,0.9999)
print("1% Quantile =\t\t {}\n10% Quantile =\t\t {}\n20% Quantile =\t\t {}\n"
"Median =\t\t {}\n90% Quantile =\t\t {}\n99% Quantile =\t\t {}\n99.9% Quantile"
" =\t {}\n99.99% Quantile =\t {}".format(
quantile1, quantile10, quantile20, median, quantile90, quantile99, quantile999, quantile9999))
# word level
show_list_stat(sentences_len)
# char level
show_list_stat(sentences_char_len)
# char level with comma separator
show_list_stat(sentences_char_comma_len)
```
Try to remove `.` inside sentences. This cannot be applied blindly in text cleaning, because there are legitimate words like U.S., V.S., etc.
```
# remove the \. inside sentences
# cannot do this trivially, there's legal words like U.S., V.S., etc.
test_text = "我喜欢你.是我度假的集以。安德森发.asdfasdf"
import re
re.sub(r"([^\.])(\.)([^\.])", r"\1\3", test_text)
# test split sentence with ,
long_sentence = "轻轻杞玩着手\n中卷轴,萧炎!微微皱。着眉,这种具有着改变人体质的奇丹.炼制。起来!困难度极大,失败率也极高,而且最可怕的是,这种阶别的丹药,在出炉之时,有着几率会引起天地能量波动,最后引发出雷劫,这种雷劫,炼药界中又称之为丹劫,威力极大,一个不慎,便是丹毁人亡的下场,因此,即便是一些有能力炼制七品丹药的炼药师,也会尽量避免少炼制这种会引发丹劫的丹药,而这也能猜测出,为什么这么多年,蛇人族从来没有弄出一枚天魂融血丹了。"
list(split_sentence(long_sentence, comma=True))
```
### Store the tokenized dataset in a standardized format
```
book_cws_text = {}
for author in book_list:
author_dir = data_path / author
for book in book_list[author]:
book_res_file = author_dir/"{}.json".format(book)
with book_res_file.open(mode="r") as f:
book_res = json.load(f)
tokens = book_res["tokens"]
text = "\n".join([" ".join(sentence) for sentence in tokens])
book_cws_file = author_dir/"{}.cws.txt".format(book)
book_cws_file.write_text(text, encoding="utf-8")
book_cws_text[book] = text
"\n".join([" ".join(sentence) for sentence in tokens[:10]])
long_sentence = "轻轻杞玩着手中卷轴,萧炎微微皱着眉,这种具有着改变人体质的奇丹,炼制起来困难度极大,失败率也极高,而且最可怕的是,这种阶别的丹药,在出炉之时,有着几率会引起天地能量波动,最后引发出雷劫,这种雷劫,炼药界中又称之为丹劫,威力极大,一个不慎,便是丹毁人亡的下场,因此,即便是一些有能力炼制七品丹药的炼药师,也会尽量避免少炼制这种会引发丹劫的丹药,而这也能猜测出,为什么这么多年,蛇人族从来没有弄出一枚天魂融血丹了。"
long_sentence.split(",")
```
# Model with character recognition - single model
Builds on `RNN-Morse-chars-dual` but tries a single model. In fact the dit and dah senses could be duplicates of the 'E' and 'T' character senses. The ele, chr and wrd separators are kept. Thus we just drop the dit and dah senses from the raw labels.
## Create string
Each character in the alphabet should occur a large enough number of times. As a rule of thumb we will take some multiple of the number of characters in the alphabet. If the multiplier is large enough, the character occurrences will be spread roughly evenly over the alphabet.
This seems to give better results judging by the gated graphs, but the procedural decision step still has to be tuned.
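As a quick sanity check of this rule of thumb, the character counts of a generated string can be compared against the expected uniform count. A sketch using a synthetic random string; in the notebook, `morsestr` from `MorseGen.get_morse_str` would be counted instead:

```python
# Sketch: count character occurrences and compare to the uniform expectation.
from collections import Counter
import random

alphabet = "ABCDEFGHIJKLMN"          # stand-in for morse_gen.alphabet14
random.seed(0)
sample = "".join(random.choice(alphabet) for _ in range(132 * 7))
counts = Counter(sample)
expected = len(sample) / len(alphabet)   # 66.0 occurrences per character
print(expected, min(counts.values()), max(counts.values()))
```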
```
import MorseGen
morse_gen = MorseGen.Morse()
alphabet = morse_gen.alphabet14
print(132/len(alphabet))
morsestr = MorseGen.get_morse_str(nchars=132*7, nwords=27*7, chars=alphabet)
print(alphabet)
print(len(morsestr), morsestr)
```
## Generate dataframe and extract envelope
```
Fs = 8000
samples_per_dit = morse_gen.nb_samples_per_dit(Fs, 13)
n_prev = int((samples_per_dit/128)*12*2) + 1
print(f'Samples per dit at {Fs} Hz is {samples_per_dit}. Decimation is {samples_per_dit/128:.2f}. Look back is {n_prev}.')
label_df = morse_gen.encode_df_decim_str(morsestr, samples_per_dit, 128, alphabet)
env = label_df['env'].to_numpy()
print(type(env), len(env))
import numpy as np
def get_new_data(morse_gen, SNR_dB=-23, nchars=132, nwords=27, phrase=None, alphabet="ABC"):
if not phrase:
phrase = MorseGen.get_morse_str(nchars=nchars, nwords=nwords, chars=alphabet)
print(len(phrase), phrase)
Fs = 8000
samples_per_dit = morse_gen.nb_samples_per_dit(Fs, 13)
    #n_prev = int((samples_per_dit/128)*19) + 1 # look back slightly more than an "O" plus a word space (3*4+7=19)
    n_prev = int((samples_per_dit/128)*27) + 1 # number of samples to look back, slightly more than a "0" plus a word space (5*4+7=27)
print(f'Samples per dit at {Fs} Hz is {samples_per_dit}. Decimation is {samples_per_dit/128:.2f}. Look back is {n_prev}.')
label_df = morse_gen.encode_df_decim_tree(phrase, samples_per_dit, 128, alphabet)
# extract the envelope
envelope = label_df['env'].to_numpy()
# remove the envelope
label_df.drop(columns=['env'], inplace=True)
SNR_linear = 10.0**(SNR_dB/10.0)
SNR_linear *= 256 # Apply original FFT
print(f'Resulting SNR for original {SNR_dB} dB is {(10.0 * np.log10(SNR_linear)):.2f} dB')
t = np.linspace(0, len(envelope)-1, len(envelope))
power = np.sum(envelope**2)/len(envelope)
noise_power = power/SNR_linear
noise = np.sqrt(noise_power)*np.random.normal(0, 1, len(envelope))
# noise = butter_lowpass_filter(raw_noise, 0.9, 3) # Noise is also filtered in the original setup from audio. This empirically simulates it
signal = (envelope + noise)**2
signal[signal > 1.0] = 1.0 # a bit crap ...
return envelope, signal, label_df, n_prev
```
Try it...
```
import matplotlib.pyplot as plt
envelope, signal, label_df, n_prev = get_new_data(morse_gen, SNR_dB=-17, phrase=morsestr, alphabet=alphabet)
# Show
print(n_prev)
print(type(signal), signal.shape)
print(type(label_df), label_df.shape)
x0 = 0
x1 = 1500
plt.figure(figsize=(50,4+0.5*len(morse_gen.alphabet)))
plt.plot(signal[x0:x1]*0.7, label="sig")
plt.plot(envelope[x0:x1]*0.9, label='env')
plt.plot(label_df[x0:x1].dit*0.9 + 1.0, label='dit')
plt.plot(label_df[x0:x1].dah*0.9 + 1.0, label='dah')
plt.plot(label_df[x0:x1].ele*0.9 + 2.0, label='ele')
plt.plot(label_df[x0:x1].chr*0.9 + 2.0, label='chr', color="orange")
plt.plot(label_df[x0:x1].wrd*0.9 + 2.0, label='wrd')
plt.plot(label_df[x0:x1].nul*0.9 + 3.0, label='nul')
for i, a in enumerate(alphabet):
plt.plot(label_df[x0:x1][a]*0.9 + 4.0 + i, label=a)
plt.title("signal and labels")
plt.legend(loc=2)
plt.grid()
```
## Create data loader
### Define dataset
```
import torch
class MorsekeyingDataset(torch.utils.data.Dataset):
def __init__(self, morse_gen, device, SNR_dB=-23, nchars=132, nwords=27, phrase=None, alphabet="ABC"):
self.envelope, self.signal, self.label_df0, self.seq_len = get_new_data(morse_gen, SNR_dB=SNR_dB, phrase=phrase, alphabet=alphabet)
self.label_df = self.label_df0.drop(columns=['dit','dah'])
self.X = torch.FloatTensor(self.signal).to(device)
self.y = torch.FloatTensor(self.label_df.values).to(device)
def __len__(self):
        return len(self.X) - self.seq_len
def __getitem__(self, index):
return (self.X[index:index+self.seq_len], self.y[index+self.seq_len])
def get_envelope(self):
return self.envelope
def get_signal(self):
return self.signal
def get_X(self):
return self.X
def get_labels(self):
return self.label_df
def get_labels0(self):
return self.label_df0
def get_seq_len(self):
        return self.seq_len
```
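The dataset above yields sliding windows: sample `i` is the `seq_len` signal values before position `i + seq_len`, paired with the label row at that position, which is why `__len__` subtracts `seq_len`. A plain-Python sketch of the same indexing on toy data:

```python
# Sketch of the sliding-window indexing used by MorsekeyingDataset.
X = list(range(10))   # toy signal
y = list(range(10))   # toy per-sample labels
seq_len = 3

def getitem(index):
    return X[index:index + seq_len], y[index + seq_len]

print(getitem(0))        # ([0, 1, 2], 3)
print(len(X) - seq_len)  # 7, the dataset's __len__
```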
### Define keying data loader
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_chr_dataset = MorsekeyingDataset(morse_gen, device, -20, 132*5, 27*5, morsestr, alphabet)
train_chr_loader = torch.utils.data.DataLoader(train_chr_dataset, batch_size=1, shuffle=False) # Batch size must be 1
signal = train_chr_dataset.get_signal()
envelope = train_chr_dataset.get_envelope()
label_df = train_chr_dataset.get_labels()
label_df0 = train_chr_dataset.get_labels0()
print(type(signal), signal.shape)
print(type(label_df), label_df.shape)
x0 = 0
x1 = 1500
plt.figure(figsize=(50,3))
plt.plot(signal[x0:x1]*0.5, label="sig")
plt.plot(envelope[x0:x1]*0.9, label='env')
plt.plot(label_df[x0:x1].E*0.9 + 1.0, label='E')
plt.plot(label_df[x0:x1]["T"]*0.9 + 1.0, label='T')
plt.plot(label_df[x0:x1].ele*0.9 + 2.0, label='ele')
plt.plot(label_df[x0:x1].chr*0.9 + 2.0, label='chr')
plt.plot(label_df[x0:x1].wrd*0.9 + 2.0, label='wrd')
plt.title("keying - signal and labels")
plt.legend(loc=2)
plt.grid()
```
## Create model classes
```
import torch
import torch.nn as nn
class MorseLSTM(nn.Module):
"""
Initial implementation
"""
def __init__(self, device, input_size=1, hidden_layer_size=8, output_size=6):
super().__init__()
self.device = device # This is the only way to get things work properly with device
self.hidden_layer_size = hidden_layer_size
self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_layer_size)
self.linear = nn.Linear(hidden_layer_size, output_size)
self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size).to(self.device),
torch.zeros(1, 1, self.hidden_layer_size).to(self.device))
def forward(self, input_seq):
lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
predictions = self.linear(lstm_out.view(len(input_seq), -1))
return predictions[-1]
def zero_hidden_cell(self):
self.hidden_cell = (
            torch.zeros(1, 1, self.hidden_layer_size).to(self.device),
            torch.zeros(1, 1, self.hidden_layer_size).to(self.device)
)
class MorseBatchedLSTM(nn.Module):
"""
Initial implementation
"""
def __init__(self, device, input_size=1, hidden_layer_size=8, output_size=6):
super().__init__()
self.device = device # This is the only way to get things work properly with device
self.input_size = input_size
self.hidden_layer_size = hidden_layer_size
self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_layer_size)
self.linear = nn.Linear(hidden_layer_size, output_size)
self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size).to(self.device),
torch.zeros(1, 1, self.hidden_layer_size).to(self.device))
self.m = nn.Softmax(dim=-1)
def forward(self, input_seq):
#print(len(input_seq), input_seq.shape, input_seq.view(-1, 1, 1).shape)
lstm_out, self.hidden_cell = self.lstm(input_seq.view(-1, 1, self.input_size), self.hidden_cell)
predictions = self.linear(lstm_out.view(len(input_seq), -1))
return self.m(predictions[-1])
def zero_hidden_cell(self):
self.hidden_cell = (
            torch.zeros(1, 1, self.hidden_layer_size).to(self.device),
            torch.zeros(1, 1, self.hidden_layer_size).to(self.device)
)
class MorseLSTM2(nn.Module):
"""
LSTM stack
"""
def __init__(self, device, input_size=1, hidden_layer_size=8, output_size=6, dropout=0.2):
super().__init__()
self.device = device # This is the only way to get things work properly with device
self.hidden_layer_size = hidden_layer_size
self.lstm = nn.LSTM(input_size, hidden_layer_size, num_layers=2, dropout=dropout)
self.linear = nn.Linear(hidden_layer_size, output_size)
self.hidden_cell = (torch.zeros(2, 1, self.hidden_layer_size).to(self.device),
torch.zeros(2, 1, self.hidden_layer_size).to(self.device))
def forward(self, input_seq):
lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
predictions = self.linear(lstm_out.view(len(input_seq), -1))
return predictions[-1]
def zero_hidden_cell(self):
self.hidden_cell = (
            torch.zeros(2, 1, self.hidden_layer_size).to(self.device),
            torch.zeros(2, 1, self.hidden_layer_size).to(self.device)
)
class MorseNoHLSTM(nn.Module):
"""
Do not keep hidden cell
"""
def __init__(self, device, input_size=1, hidden_layer_size=8, output_size=6):
super().__init__()
self.device = device # This is the only way to get things work properly with device
self.hidden_layer_size = hidden_layer_size
self.lstm = nn.LSTM(input_size, hidden_layer_size)
self.linear = nn.Linear(hidden_layer_size, output_size)
def forward(self, input_seq):
h0 = torch.zeros(1, 1, self.hidden_layer_size).to(self.device)
c0 = torch.zeros(1, 1, self.hidden_layer_size).to(self.device)
lstm_out, _ = self.lstm(input_seq.view(len(input_seq), 1, -1), (h0, c0))
predictions = self.linear(lstm_out.view(len(input_seq), -1))
return predictions[-1]
class MorseBiLSTM(nn.Module):
"""
Attempt Bidirectional LSTM: does not work
"""
def __init__(self, device, input_size=1, hidden_size=12, num_layers=1, num_classes=6):
        super().__init__()
self.device = device # This is the only way to get things work properly with device
self.hidden_size = hidden_size
self.num_layers = num_layers
self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True, bidirectional=True)
self.fc = nn.Linear(hidden_size*2, num_classes) # 2 for bidirection
def forward(self, x):
# Set initial states
        h0 = torch.zeros(self.num_layers*2, x.size(0), self.hidden_size).to(self.device) # 2 for bidirection
        c0 = torch.zeros(self.num_layers*2, x.size(0), self.hidden_size).to(self.device)
# Forward propagate LSTM
out, _ = self.lstm(x.view(len(x), 1, -1), (h0, c0)) # out: tensor of shape (batch_size, seq_length, hidden_size*2)
# Decode the hidden state of the last time step
out = self.fc(out[:, -1, :])
return out[-1]
```
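A recurring detail in these classes is the stateful hidden cell: it lives on the module so state can carry across calls, and it must be re-zeroed on the correct device before each independent sequence (that is what `zero_hidden_cell` is for). A minimal standalone sketch of the pattern, separate from the classes above and assuming CPU for simplicity:

```python
import torch
import torch.nn as nn

device = torch.device("cpu")  # assumption: CPU keeps the sketch runnable anywhere
hidden_size = 4
lstm = nn.LSTM(input_size=1, hidden_size=hidden_size)

def fresh_hidden():
    # (num_layers, batch, hidden_size), zeroed and placed on the target device
    return (torch.zeros(1, 1, hidden_size).to(device),
            torch.zeros(1, 1, hidden_size).to(device))

hidden = fresh_hidden()
seq = torch.rand(10).view(-1, 1, 1)   # (seq_len, batch=1, features=1)
out, hidden = lstm(seq, hidden)       # hidden now carries state forward
assert out.shape == (10, 1, hidden_size)

hidden = fresh_hidden()               # reset before the next independent sequence
out2, _ = lstm(torch.rand(5).view(-1, 1, 1), hidden)
assert out2.shape == (5, 1, hidden_size)
```

Without the reset, state (and the autograd graph) from one sequence would leak into the next, which is why the training loop further down calls `zero_hidden_cell()` for the stateful models.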
Create the keying model instance and print the details
```
morse_chr_model = MorseBatchedLSTM(device, hidden_layer_size=len(alphabet)*3, output_size=len(alphabet)+4).to(device) # This is the only way to get things work properly with device
morse_chr_loss_function = nn.MSELoss()
morse_chr_optimizer = torch.optim.Adam(morse_chr_model.parameters(), lr=0.001)
print(morse_chr_model)
print(morse_chr_model.device)
# Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
for m in morse_chr_model.parameters():
print(m.shape, m.device)
X_t = torch.rand(n_prev)
X_t = X_t.to(device)  # use .to(device) rather than .cuda() so this also runs on CPU
print("Input shape", X_t.shape, X_t.view(-1, 1, 1).shape)
print(X_t)
morse_chr_model(X_t)
import torchinfo
torchinfo.summary(morse_chr_model)
```
## Train model
```
it = iter(train_chr_loader)
X, y = next(it)
print(X.reshape(n_prev,1).shape, X[0].shape, y[0].shape)
print(X[0], y[0])
X, y = next(it)
print(X[0], y[0])
%%time
from tqdm.notebook import tqdm
epochs = 4
morse_chr_model.train()
for i in range(epochs):
train_losses = []
loop = tqdm(enumerate(train_chr_loader), total=len(train_chr_loader), leave=True)
for j, train in loop:
X_train = train[0][0]
y_train = train[1][0]
morse_chr_optimizer.zero_grad()
if morse_chr_model.__class__.__name__ in ["MorseLSTM", "MorseLSTM2", "MorseBatchedLSTM", "MorseBatchedLSTM2"]:
morse_chr_model.zero_hidden_cell() # this model needs to reset the hidden cell
y_pred = morse_chr_model(X_train)
single_loss = morse_chr_loss_function(y_pred, y_train)
single_loss.backward()
morse_chr_optimizer.step()
train_losses.append(single_loss.item())
# update progress bar
if j % 1000 == 0:
loop.set_description(f"Epoch [{i+1}/{epochs}]")
loop.set_postfix(loss=np.mean(train_losses))
print(f'final: {i+1:3} epochs loss: {np.mean(train_losses):6.4f}')
save_model = True
if save_model:
torch.save(morse_chr_model.state_dict(), 'models/morse_single_model')
else:
morse_chr_model.load_state_dict(torch.load('models/morse_single_model', map_location=device))
%%time
p_char_train = torch.empty(1,18).to(device)
morse_chr_model.eval()
loop = tqdm(enumerate(train_chr_loader), total=len(train_chr_loader))
for j, train in loop:
with torch.no_grad():
X_chr = train[0][0]
pred_val = morse_chr_model(X_chr)
p_char_train = torch.cat([p_char_train, pred_val.reshape(1,18)])
p_char_train = p_char_train[1:] # Remove the garbage first row
print(p_char_train.shape) # t -> chars(t)
```
### Post process
- Move to CPU to get chars(time)
- Transpose to get times(char)
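In other words, the model output is a (time × class) matrix, and transposing it gives a (class × time) matrix whose rows can each be plotted as one output channel's activation over time. A small numpy illustration with made-up shapes:

```python
import numpy as np

# Hypothetical predictions: 5 time steps, 3 output classes
p_t = np.array([[0.1, 0.8, 0.1],
                [0.2, 0.7, 0.1],
                [0.9, 0.05, 0.05],
                [0.3, 0.3, 0.4],
                [0.1, 0.1, 0.8]])

p_c = p_t.T  # each row is now one class's scores over time
print(p_t.shape, p_c.shape)  # (5, 3) (3, 5)
```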
```
p_char_train_c = p_char_train.cpu() # t -> chars(t) on CPU
p_char_train_t = torch.transpose(p_char_train_c, 0, 1).cpu() # c -> times(c) on CPU
print(p_char_train_c.shape, p_char_train_t.shape)
X_train_chr = train_chr_dataset.X.cpu()
label_df_chr = train_chr_dataset.get_labels()
l_alpha = label_df_chr[n_prev:].reset_index(drop=True)
plt.figure(figsize=(50,4+0.5*len(morse_gen.alphabet)))
plt.plot(l_alpha[x0:x1]["chr"]*(len(alphabet)+1)+2, label="ychr", alpha=0.2, color="black")
plt.plot(X_train_chr[x0+n_prev:x1+n_prev]*1.9, label='sig')
plt.plot(p_char_train_t[1][x0:x1]*0.9 + 2.0, label='c')
plt.plot(p_char_train_t[2][x0:x1]*0.9 + 2.0, label='w')
for i, a in enumerate(alphabet):
plt_a = plt.plot(p_char_train_t[i+4][x0:x1]*0.9 + 3.0 + i, label=a)
plt.plot(l_alpha[a][x0:x1]*0.5 + 3.0 + i, color=plt_a[0].get_color(), alpha=0.5)
plt.title("predictions")
plt.legend(loc=2)
plt.grid()
```
## Test
### Test dataset and data loader
```
teststr = "AAAA USERS ARE USING EGGS AND GRAIN MONGO TEST MADAME WONDER WOMAN GOOD MAMA USSR WAS GREAT AAA"
test_chr_dataset = MorsekeyingDataset(morse_gen, device, -19, 132*5, 27*5, teststr, alphabet)
test_chr_loader = torch.utils.data.DataLoader(test_chr_dataset, batch_size=1, shuffle=False) # Batch size must be 1
```
### Run the model
```
p_chr_test = torch.empty(1,18).to(device)
morse_chr_model.eval()
loop = tqdm(enumerate(test_chr_loader), total=len(test_chr_loader))
for j, test in loop:
with torch.no_grad():
X_test = test[0]
pred_val = morse_chr_model(X_test[0])
p_chr_test = torch.cat([p_chr_test, pred_val.reshape(1,18)])
# drop first garbage sample
p_chr_test = p_chr_test[1:]
print(p_chr_test.shape)
p_chr_test_c = p_chr_test.cpu() # t -> chars(t) on CPU
p_chr_test_t = torch.transpose(p_chr_test_c, 0, 1).cpu() # c -> times(c) on CPU
print(p_chr_test_c.shape, p_chr_test_t.shape)
```
### Show results
```
X_test_chr = test_chr_dataset.X.cpu()
label_df_t = test_chr_dataset.get_labels()
l_alpha_t = label_df_t[n_prev:].reset_index(drop=True)
```
#### Raw results
Envelope reconstruction is obtained by combining the silence senses (env, chr, wrd). To bridge the gap between chr and wrd, a delayed copy of wrd is added. Finally, the envelope is clipped to the [0, 1] interval. It appears on the second line in purple.
```
plt.figure(figsize=(100,4+0.5*len(morse_gen.alphabet)))
plt.plot(l_alpha_t[:]["chr"]*(len(alphabet)+1)+2, label="ychr", alpha=0.2, color="black")
plt.plot(X_test_chr[n_prev:]*0.9, label='sig')
p_wrd_dly = p_chr_test_t[2][6:]
p_wrd_dly = torch.cat([p_wrd_dly, torch.zeros(6)])
p_env = 1.0 - p_chr_test_t[0] - p_chr_test_t[1] - (p_chr_test_t[2] + p_wrd_dly)
p_env[p_env < 0] = 0
p_env[p_env > 1] = 1
plt.plot(p_env*0.9 + 1.0, label='env', color="purple")
plt.plot(p_chr_test_t[1]*0.9 + 2.0, label='c', color="green")
plt.plot(p_chr_test_t[2]*0.9 + 2.0, label='w', color="red")
for i, a in enumerate(alphabet):
plt_a = plt.plot(p_chr_test_t[i+4,:]*0.9 + 3.0 + i, label=a)
plt.plot(l_alpha_t[a]*0.5 + 3.0 + i, color=plt_a[0].get_color(), linestyle="--")
plt.title("predictions")
plt.legend(loc=2)
plt.grid()
plt.savefig('img/predicted.png')
p_chr_test_tn = p_chr_test_t.numpy()
ele_len = round(samples_per_dit*2 / 128)
win = np.ones(ele_len)/ele_len
p_chr_test_tlp = np.apply_along_axis(lambda m: np.convolve(m, win, mode='full'), axis=1, arr=p_chr_test_tn)
plt.figure(figsize=(100,4+0.5*len(morse_gen.alphabet)))
plt.plot(l_alpha_t[:]["chr"]*(len(alphabet)+1)+2, label="ychr", alpha=0.2, color="black")
plt.plot(X_test_chr[n_prev:]*0.9, label='sig')
plt.plot(p_chr_test_tlp[1]*0.9 + 2.0, label='c', color="green")
plt.plot(p_chr_test_tlp[2]*0.9 + 2.0, label='w', color="red")
for i, a in enumerate(alphabet):
plt_a = plt.plot(p_chr_test_tlp[i+4,:]*0.9 + 3.0 + i, label=a)
plt.plot(l_alpha_t[a]*0.5 + 3.0 + i, color=plt_a[0].get_color(), linestyle="--")
plt.title("predictions")
plt.legend(loc=2)
plt.grid()
plt.savefig('img/predicted.png')
```
#### Gated by character prediction
```
plt.figure(figsize=(100,4+0.5*len(morse_gen.alphabet)))
plt.plot(l_alpha_t["chr"]*(len(alphabet)+1)+2, label="ychr", alpha=0.2, color="black")
plt.plot(X_test_chr[n_prev:]*0.9, label='sig')
plt.plot(p_chr_test_tlp[1]*0.9 + 1.0, label="cp", color="green")
plt.plot(p_chr_test_tlp[2]*0.9 + 1.0, label="wp", color="red")
for i, a in enumerate(alphabet):
line_a = p_chr_test_tlp[i+4] * p_chr_test_tlp[1]
plt_a = plt.plot(line_a*0.9 + 2.0 + i, label=a)
plt.plot(l_alpha_t[a]*0.5 + 2.0 + i, color=plt_a[0].get_color(), linestyle="--")
plt.title("predictions")
plt.legend(loc=2)
plt.grid()
plt.savefig('img/predicted_gated.png')
plt.figure(figsize=(100,4+0.5*len(morse_gen.alphabet)))
plt.plot(l_alpha_t["chr"]*(len(alphabet))+2, label="ychr", alpha=0.2, color="black")
plt.plot(X_test_chr[n_prev:]*0.9, label='sig')
plt.plot(p_chr_test_tlp[1]*0.9 + 1.0, label="cp", color="green")
line_wrd = p_chr_test_tlp[2]
plt.plot(line_wrd*0.9 + 1.0, color="red", linestyle="--")
line_wrd[line_wrd < 0.5] = 0
plt.plot(line_wrd*0.9 + 1.0, label="wp", color="red")
for i, a in enumerate(alphabet):
line_a = p_chr_test_tlp[i+4] * p_chr_test_tlp[1]
plt_a = plt.plot(line_a*0.9 + 2.0 + i, linestyle="--")
plt.plot(l_alpha_t[a]*0.3 + 2.0 + i, color=plt_a[0].get_color(), linestyle="--")
line_a[line_a < 0.3] = 0
plt.plot(line_a*0.9 + 2.0 + i, color=plt_a[0].get_color(), label=a)
plt.title("predictions thresholds")
plt.legend(loc=2)
plt.grid()
plt.savefig('img/predicted_thr.png')
```
## Procedural decision making
### take 2
```
class MorseDecoder2:
def __init__(self, alphabet, chr_len, wrd_len):
self.nb_alpha = len(alphabet)
self.alphabet = alphabet
self.chr_len = chr_len
self.wrd_len = wrd_len // 2
self.threshold = 0.25
self.chr_count = 0
self.wrd_count = 0
self.prevs = [0.0 for x in range(self.nb_alpha+3)]
self.res = ""
def new_samples(self, samples):
for i, s in enumerate(samples): # c, w, n, [alpha]
if i > 1:
t = s * samples[0] # gating for alpha characters
else:
t = s
if i == 1: # word separator
if t >= self.threshold and self.prevs[1] < self.threshold and self.wrd_count == 0:
self.wrd_count = self.wrd_len
self.res += " "
elif i > 1: # characters
if t >= self.threshold and self.prevs[i] < self.threshold and self.chr_count == 0:
self.chr_count = self.chr_len
if i > 2:
self.res += self.alphabet[i-3]
self.prevs[i] = t
if self.wrd_count > 0:
self.wrd_count -= 1
if self.chr_count > 0:
self.chr_count -= 1
chr_len = round(samples_per_dit*2 / 128)
wrd_len = round(samples_per_dit*4 / 128)
decoder = MorseDecoder2(alphabet, chr_len, wrd_len)
#p_chr_test_clp = torch.transpose(p_chr_test_tlp, 0, 1)
p_chr_test_clp = p_chr_test_tlp.transpose()
for s in p_chr_test_clp:
decoder.new_samples(s[1:]) # c, w, n, [alpha]
print(decoder.res)
```
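The core mechanism in `MorseDecoder2` is rising-edge detection with a refractory (hold-off) counter: a symbol is emitted only when its score crosses the threshold from below, and further emissions are suppressed for roughly one character length. A stripped-down single-channel sketch of that idea, with illustrative threshold and hold-off values:

```python
def edge_decode(scores, threshold=0.25, hold_off=3):
    """Emit an index at each rising threshold crossing, with a refractory period."""
    events, prev, count = [], 0.0, 0
    for i, s in enumerate(scores):
        if s >= threshold and prev < threshold and count == 0:
            events.append(i)
            count = hold_off
        prev = s
        if count > 0:
            count -= 1
    return events

# One clean pulse, then a jittery pulse that would double-trigger without hold-off
scores = [0.0, 0.1, 0.6, 0.7, 0.2, 0.1, 0.5, 0.2, 0.6, 0.1]
print(edge_decode(scores))  # → [2, 6]
```

Without the hold-off counter, the jittery second pulse triggers twice: `edge_decode(scores, hold_off=0)` returns `[2, 6, 8]`.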
### take 1
```
class MorseDecoder1:
def __init__(self, alphabet, chr_len, wrd_len):
self.nb_alpha = len(alphabet)
self.alphabet = alphabet
self.chr_len = chr_len
self.wrd_len = wrd_len
self.alpha = 0.3
self.threshold = 0.45
self.accum = [0.0 for x in range(self.nb_alpha+2)]
self.sums = [0.0 for x in range(self.nb_alpha+2)]
self.tests = [0.0 for x in range(self.nb_alpha+2)]
self.prevs = [0.0 for x in range(self.nb_alpha+2)]
self.counts = [0 for x in range(self.nb_alpha+2)]
self.res = ""
def new_samples(self, samples):
for i, s in enumerate(samples):
if i > 2:
t = s * samples[0] # gating for alpha characters
else:
t = s
# t = s
if i > 0:
j = i-1
t = self.alpha * t + (1 - self.alpha) * self.accum[j] # Exponential average does the low pass filtering
self.accum[j] = t
if t >= self.threshold and self.prevs[j] < self.threshold:
self.counts[j] = 0
if t > self.threshold:
self.sums[j] = self.sums[j] + t
self.tests[j] = 0.0
else:
blk_len = wrd_len if j == 0 else chr_len
if self.counts[j] > blk_len:
self.tests[j] = self.sums[j]
self.sums[j] = 0.0
self.counts[j] += 1
self.prevs[j] = t
if np.sum(self.tests) > 0.0:
ci = np.argmax(self.tests)
if ci == 0:
self.res += " "
elif ci > 1:
self.res += self.alphabet[ci - 2]
chr_len = round(samples_per_dit*2 / 128)
wrd_len = round(samples_per_dit*4 / 128)
decoder = MorseDecoder1(alphabet, chr_len, wrd_len)
for s in p_chr_test_c:
decoder.new_samples(s[1:]) # c, w, n, [alpha]
print(decoder.res)
```
# Uniform longitudinal beam loading
```
%matplotlib notebook
import sys
sys.path.append('/Users/chall/research/github/rswarp/rswarp/utilities/')
import numpy as np
import h5py as h5
import beam_analysis
import file_utils
from mpl_toolkits.mplot3d import Axes3D
import pickle
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
def svecplot(array):
fig = plt.figure(figsize = (8,8))
Q = plt.quiver(array[:,0],array[:,2],array[:,1],array[:,3])
    plt.quiverkey(Q, 0.0, 0.92, 0.002, r'$2$', labelpos='W')
xmax = np.max(array[:,0])
xmin = np.min(array[:,0])
plt.xlim(1.5*xmin,1.5*xmax)
plt.ylim(1.5*xmin,1.5*xmax)
plt.show()
# Load simulation parameters
simulation_parameters = pickle.load(open("simulation_parameters.p", "rb"))
print(simulation_parameters['timestep'])
```
## Load and View Initial Distribution
```
f0 = file_utils.readparticles('diags/xySlice/hdf5/data00000001.h5')
step0 = beam_analysis.convertunits(f0['Electron'])
beam_analysis.plotphasespace(step0); # semicolon suppresses second plot
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111, projection='3d')
p = ax.scatter(step0[:,1]*1e3,step0[:,4],step0[:,3]*1e3)
ax.set_xlabel('x (mm)')
ax.set_ylabel('z (m)')
ax.set_zlabel('y (mm)')
plt.show()
```
## Load All Steps
```
full = file_utils.loadparticlefiles('diags/xySlice/hdf5/')
allSteps = []
for step in range(100,1400,100):
    scon = beam_analysis.convertunits(full[step]['Electron'])
    allSteps.append(scon)  # append the unit-converted particles
allSteps = np.array(allSteps)
# beam_analysis.plotphasespace(allSteps[10,:,:]):
```
# Magnetostatic Only Check
```
f0 = h5.File('diags/fields/magnetic/bfield00200.h5', 'r')
Bx = f0['data/200/meshes/B/r']
By = f0['data/200/meshes/B/t']
Bz = f0['data/200/meshes/B/z']
fig = plt.figure(figsize=(12,4))
ax = plt.gca()
xpoints = np.linspace(0,0.32,Bx[0,:,8].shape[0])
ax.plot(xpoints,By[0,:,15],label = "Bt from Sim")
ax.plot(xpoints[5:], -2 * 10.0e-6 * 1e-7 / (xpoints[5:]), label="Calc. Bt for r>R")
ax.plot(xpoints[0:16], -2 * 10e-6 * 1e-7 * (xpoints[0:16]) / 0.02**2, label="Calc. Bt for r<R")
ax.legend(loc=4)
plt.show()
fig = plt.figure(figsize=(8,8))
ax = plt.gca()
zslice = 10
ax.set_xlabel("y (m)")
ax.set_ylabel("x (m)")
ax.set_title("$B_%s$ for z=%s" % ('t',zslice))
cax = ax.imshow(By[0,:,:],cmap=plt.cm.viridis)
fig.colorbar(cax)
plt.tight_layout()
plt.show()
```
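The reference curves in the plot above are the textbook magnetostatic field of a uniform cylindrical beam carrying current I with radius R: B_t = mu0·I·r/(2π·R²) inside and mu0·I/(2π·r) outside, with mu0/(2π) = 2e-7 T·m/A, which is where the `2 * 10e-6 * 1e-7` factors come from. A quick numerical check that the two branches join at r = R, using the notebook's I = 10 µA and R = 0.02 m:

```python
import numpy as np

MU0_OVER_2PI = 2e-7      # mu_0 / (2*pi) in T*m/A
I = 10e-6                # beam current (A), as in the notebook
R = 0.02                 # beam radius (m), as in the notebook

def b_theta(r):
    """Azimuthal field of a uniform cylindrical current, piecewise in r."""
    r = np.asarray(r, dtype=float)
    inside = MU0_OVER_2PI * I * r / R**2
    outside = MU0_OVER_2PI * I / np.maximum(r, 1e-30)
    return np.where(r < R, inside, outside)

# The two branches must join continuously at the beam edge
assert np.isclose(b_theta(R), MU0_OVER_2PI * I / R)
print(b_theta([0.01, 0.02, 0.04]))
```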
# Electrostatic Only Check
```
# Calculate charge
I = 10e-6
L = 1.0
e = 1.6e-19
c = 3e8
beta = 0.56823
Ntot = int(I * L / e / (c * beta))
Qtot = Ntot * e
f0 = h5.File('diags/fields/electric/efield00200.h5', 'r')
Ex = f0['data/200/meshes/E/r']
Ey = f0['data/200/meshes/E/t']
Ez = f0['data/200/meshes/E/z']
fig = plt.figure(figsize=(12,4))
ax = plt.gca()
xpoints = np.linspace(0,0.32,Ex[0,:,8].shape[0])
ax.plot(xpoints,Ex[0,:,15],label = "Er from Sim")
ax.plot(xpoints[3:], -1 * Qtot / (2 * np.pi * 8.854e-12 * xpoints[3:]), label="Calc. Er for r>R")
ax.plot(xpoints[0:10], -1 * Qtot * (xpoints[0:10]) / (2 * np.pi * 8.854e-12 * 0.02**2), label="Calc. Er for r<R")
ax.legend(loc=4)
plt.show()
```
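The cell above converts beam current into a line charge: N = I·L/(e·c·β) electrons over L = 1 m, giving total charge Q = N·e, which is then compared against the uniform line-charge field E_r = Q·r/(2π·ε0·R²) inside the beam and Q/(2π·ε0·r) outside. A sketch repeating that bookkeeping and checking branch continuity at r = R, with the same constants as the cell:

```python
import numpy as np

I, L, e, c, beta = 10e-6, 1.0, 1.6e-19, 3e8, 0.56823
Ntot = int(I * L / e / (c * beta))   # number of electrons in the beam
Qtot = Ntot * e                      # total charge (C)
eps0, R = 8.854e-12, 0.02

def e_r(r):
    inside = Qtot * r / (2 * np.pi * eps0 * R**2)
    outside = Qtot / (2 * np.pi * eps0 * r)
    return inside if r < R else outside

# Inside and outside branches agree at the beam edge
assert np.isclose(Qtot * R / (2 * np.pi * eps0 * R**2), e_r(R))
print(f"N = {Ntot:.3g}, Q = {Qtot:.3g} C")
```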
# E and B Static Solvers
```
f0 = h5.File('diags/fields/electric/efield00200.h5', 'r')
Ex = f0['data/200/meshes/E/r']
Ey = f0['data/200/meshes/E/t']
Ez = f0['data/200/meshes/E/z']
f0 = h5.File('diags/fields/magnetic/bfield00200.h5', 'r')
Bx = f0['data/200/meshes/B/r']
By = f0['data/200/meshes/B/t']
Bz = f0['data/200/meshes/B/z']
fig = plt.figure(figsize=(12,8))
xpoints = np.linspace(0,0.32,Bx[0,:,8].shape[0])
plt.subplot(211)
plt.plot(xpoints,By[0,:,15],label = "Bt from Sim")
plt.plot(xpoints[3:], -2 * 10.0e-6 * 1e-7 / (xpoints[3:]), label="Calc. Bt for r>R")
plt.plot(xpoints[0:8], -2 * 10e-6 * 1e-7 * (xpoints[0:8]) / 0.02**2, label="Calc. Bt for r<R")
plt.legend(loc=4)
plt.title("Magnetic Field")
plt.subplot(212)
xpoints = np.linspace(0,0.32,Bx[0,:,8].shape[0])
plt.plot(xpoints,Ex[0,:,15],label = "Er from Sim")
plt.plot(xpoints[3:], -1 * Qtot / (2 * np.pi * 8.854e-12 * xpoints[3:]), label="Calc. Er for r>R")
plt.plot(xpoints[0:8], -1 * Qtot * (xpoints[0:8]) / (2 * np.pi * 8.854e-12 * 0.02**2), label="Calc. Er for r<R")
plt.legend(loc=4)
plt.title("Electric Field")
plt.show()
```
# Modeling Workbook
```
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
```
## Wrangling
```
from preprocessing import spotify_split, scale_data
from preprocessing import modeling_prep
df = modeling_prep()
df.info()
df.head(2)
df.shape
```
---
### Split the Data
```
X_train, y_train, X_validate, y_validate, X_test, y_test, train, validate, test = spotify_split(df, 'popularity')
X_train.head(2)
```
---
### Scale the Data
```
# Using MIN-MAX scaler
X_train_mm, X_validate_mm, X_test_mm = scale_data(train, validate, test, 'popularity', 'MinMax')
X_train_mm.head(3)
```
<div class="alert alert-block alert-info">
<b>Takeaways</b>:
<li>Useable features are added to the data</li>
<li>Data is split into train, validate, test</li>
<li>Data is scaled by minimum and max values (0-1)</li>
</div>
---
## Feature Selection Tools
```
from preprocessing import rfe, select_kbest
# Select K Best Algorithm
skb_features = select_kbest(X_train_mm, y_train, 5)
skb_features
# Recursive Feature Elmination Algorithm
rfe_features = rfe(X_train_mm, y_train, 5)
rfe_features
```
<div class="alert alert-block alert-info">
<b>Takeaways</b>:
<li>Used tools that select important features</li>
<li><i>Select K Best</i>: selects features according to the k highest scores</li>
<li><i>Recursive Feature Elimination</i>: features that perform well on a simple linear regression model</li>
</div>
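For intuition only (the project's actual `select_kbest` wraps a library routine, presumably scikit-learn's `SelectKBest` with an F-statistic score), the idea can be sketched with a simpler univariate score: rank each feature by its absolute correlation with the target and keep the top k:

```python
import numpy as np

def select_k_best_corr(X, y, k):
    """Rank columns of X by |correlation with y| and return the top k column indices."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3 * X[:, 2] + 0.1 * rng.normal(size=200)   # only feature 2 matters
print(select_k_best_corr(X, y, 1))  # → [2]
```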
---
## Feature Groups
```
# Select K Best: Top 5 Features DF
X_tr_skb = X_train_mm[skb_features]
X_v_skb = X_validate_mm[skb_features]
X_te_skb = X_test_mm[skb_features]
# Recursive Feature Elimination: Top 5 Features DF
X_tr_rfe = X_train_mm[rfe_features]
X_v_rfe = X_validate_mm[rfe_features]
X_te_rfe = X_test_mm[rfe_features]
# Combo 3: Top 7 Features
top_feats = ['danceability', 'speechiness', 'explicit', 'is_featured_artist', 'track_number', 'energy', 'single']
X_tr_top = X_train_mm[top_feats]
X_v_top = X_validate_mm[top_feats]
X_te_top = X_test_mm[top_feats]
```
<div class="alert alert-block alert-info">
<b>Takeaways</b>:
<li>Set top 5 features from <i>Select K Best</i> to a dataframe for modeling</li>
<li>Set top 5 features from <i>Recursive Feature Elimination</i> to a dataframe for modeling</li>
<li>Set top 7 features from both <i>RFE and SKB</i> to a dataframe for modeling</li>
</div>
---
## Modeling
### Set the baseline
```
from model import get_baseline_metrics, linear_regression_model, lasso_lars
from model import polynomial_regression, svr_model, glm_model, evaluate_df
bl, bl_train_rmse = get_baseline_metrics(y_train)
```
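Assuming `get_baseline_metrics` follows the usual convention of predicting the mean of `y_train` for every observation (the exact implementation lives in `model.py`), the baseline RMSE reduces to the standard deviation of the target:

```python
import numpy as np

def baseline_rmse(y_train):
    """RMSE of always predicting the training mean (the baseline any model must beat)."""
    bl = y_train.mean()
    return bl, np.sqrt(np.mean((y_train - bl) ** 2))

y = np.array([10.0, 20.0, 30.0, 40.0])
bl, rmse = baseline_rmse(y)
print(bl, rmse)  # 25.0, ~11.18 (the population std of y)
```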
---
### Models using SKB Features
```
# OLS Model
lm_rmse, lm_rmse_v, lm_rmse_t = linear_regression_model(
X_tr_skb, y_train, X_v_skb, y_validate, X_te_skb, y_test)
# LASSO + LARS Model
lars_rmse, lars_rmse_v, lars_rmse_t = lasso_lars(
X_tr_skb, y_train, X_v_skb, y_validate, X_te_skb, y_test)
# Polynomial Features (squared, deg=2) with Linear Regression
lm_sq_rmse, lm_sq_rmse_v, lm_sq_rmse_t, lm_sq_pred_t = polynomial_regression(
X_tr_skb, y_train, X_v_skb, y_validate, X_te_skb, y_test,
'Squared', degree=2)
# Support Vector Regression with RBF Kernel
svr_rmse, svr_rmse_v, svr_rmse_t = svr_model(
X_tr_skb, y_train, X_v_skb, y_validate, X_te_skb, y_test, 'RBF')
# General Linearized Model with Normal Distribution
glm_rmse, glm_rmse_v, glm_rmse_t, glm_pred_t = glm_model(
X_tr_skb, y_train, X_v_skb, y_validate, X_te_skb, y_test,
'Normal', alpha=0, power=1)
# Dataframe Evaluation for Models with SKB features
columns = ['train_rmse', 'validate_rmse', 'test_rmse']
index = ['baseline', 'ols', 'lassolars', 'pf2_lr', 'SVM', 'GLM']
data = [[bl_train_rmse, '', ''],
[lm_rmse, lm_rmse_v, ''],
[lars_rmse, lars_rmse_v, ''],
[lm_sq_rmse, lm_sq_rmse_v, ''],
[svr_rmse, svr_rmse_v, ''],
[glm_rmse, glm_rmse_v, '']]
print('SKB FEATURES')
print(f'Model beat baseline by {abs((lm_sq_rmse_t - bl_train_rmse)/bl_train_rmse)*100:.2f}%')
print(skb_features)
pd.DataFrame(columns=columns, data=data, index=index).sort_values(by='train_rmse')
```
---
### Models using RFE Features
```
# OLS Model
lm_rmse, lm_rmse_v, lm_rmse_t = linear_regression_model(
X_tr_rfe, y_train, X_v_rfe, y_validate, X_te_rfe, y_test)
# LASSO + LARS Model
lars_rmse, lars_rmse_v, lars_rmse_t = lasso_lars(
X_tr_rfe, y_train, X_v_rfe, y_validate, X_te_rfe, y_test)
# Polynomial Features (squared, deg=2) with Linear Regression
lm_sq_rmse, lm_sq_rmse_v, lm_sq_rmse_t, lm_sq_pred_t = polynomial_regression(
X_tr_rfe, y_train, X_v_rfe, y_validate, X_te_rfe, y_test,
'Squared', degree=2)
# Support Vector Regression with RBF Kernel
svr_rmse, svr_rmse_v, svr_rmse_t = svr_model(
X_tr_rfe, y_train, X_v_rfe, y_validate, X_te_rfe, y_test, 'RBF')
# General Linearized Model with Normal Distribution
glm_rmse, glm_rmse_v, glm_rmse_t, glm_pred_t = glm_model(
X_tr_rfe, y_train, X_v_rfe, y_validate, X_te_rfe, y_test,
'Normal', alpha=0, power=1)
# Dataframe Evaluation for Models with RFE features
columns = ['train_rmse', 'validate_rmse', 'test_rmse']
index = ['baseline', 'ols', 'lassolars', 'pf2_lr', 'SVM', 'GLM']
data = [[bl_train_rmse, '', ''],
[lm_rmse, lm_rmse_v, ''],
[lars_rmse, lars_rmse_v, ''],
[lm_sq_rmse, lm_sq_rmse_v, ''],
[svr_rmse, svr_rmse_v, ''],
[glm_rmse, glm_rmse_v, '']]
print('RFE FEATURES')
print(f'Model beat baseline by {abs((lm_sq_rmse_t - bl_train_rmse)/bl_train_rmse)*100:.2f}%')
print(rfe_features)
pd.DataFrame(columns=columns, data=data, index=index).sort_values(by='train_rmse')
```
---
### Models using COMBO: 7 Features
```
# OLS Model
lm_rmse, lm_rmse_v, lm_rmse_t = linear_regression_model(
X_tr_top, y_train, X_v_top, y_validate, X_te_top, y_test)
# LASSO + LARS Model
lars_rmse, lars_rmse_v, lars_rmse_t = lasso_lars(
X_tr_top, y_train, X_v_top, y_validate, X_te_top, y_test)
# Polynomial Features (squared, deg=2) with Linear Regression
lm_sq_rmse, lm_sq_rmse_v, lm_sq_rmse_t, lm_sq_pred_t = polynomial_regression(
X_tr_top, y_train, X_v_top, y_validate, X_te_top, y_test,
'Squared', degree=2)
# Support Vector Regression with RBF Kernel
svr_rmse, svr_rmse_v, svr_rmse_t = svr_model(
X_tr_top, y_train, X_v_top, y_validate, X_te_top, y_test, 'RBF')
# General Linearized Model with Normal Distribution
glm_rmse, glm_rmse_v, glm_rmse_t, glm_pred_t = glm_model(
X_tr_top, y_train, X_v_top, y_validate, X_te_top, y_test,
'Normal', alpha=0, power=1)
# Dataframe Evaluation for Models with COMBO features
columns = ['train_rmse', 'validate_rmse', 'test_rmse']
index = ['Baseline', 'Linear Regression', 'lassolars', 'Polynomial 2nd Degree', 'SVM', 'GLM']
data = [[bl_train_rmse, '', ''],
        [lm_rmse, lm_rmse_v, ''],
        [lars_rmse, lars_rmse_v, ''],
        [lm_sq_rmse, lm_sq_rmse_v, lm_sq_rmse_t],
        [svr_rmse, svr_rmse_v, ''],
        [glm_rmse, glm_rmse_v, '']]
print('COMBO TOP FEATURES')
print(f'Model beat baseline by {abs((lm_sq_rmse_t - bl_train_rmse)/bl_train_rmse)*100:.2f}%')
print(top_feats)
mod_perf = pd.DataFrame(columns=columns, data=data, index=index).sort_values(by='train_rmse')
mod_perf
```
**Conclusion**:
<div class="alert alert-block alert-info">
<b>Takeaways</b>:
<li>The COMBO feature set yields the best model performance; these feature combinations are significant</li>
<li>Second-degree Polynomial Features with Linear Regression is the best-performing model across all feature sets</li>
<li>The final model predicts 6% better than baseline, with an RMSE of 21.52 on unseen test data.</li>
</div>
---
## Model Performance Plot
```
from model import polyreg_predictions, plot_polyreg
lm_sq, y_pred_test, pf = polyreg_predictions(X_tr_top, X_te_top, y_train)
plot_polyreg(y_test, lm_sq_pred_t, y_pred_test, bl)
```
<div class="alert alert-block alert-info">
<b>Takeaways</b>:
Model is plotted against the baseline with predictions
</div>
---
# Top Drivers of the Top Model
```
from model import get_important_feats, plot_top_feats
# extracts the most influential features by
# creating a dataframe of the most important features/combination,
# with their rank
feature_importances = get_important_feats(lm_sq, pf, X_tr_top)
# top 5 positive drivers of popularity
feature_importances.head()
# Bottom 5 negative drivers of popularity
feature_importances.tail()
# plots the dataframes above
plot_top_feats(feature_importances)
```
<div class="alert alert-block alert-info">
<b>Takeaways</b>:
<li>Positive drivers on the left</li>
<li>Negative drivers on the right</li>
<li>Some of the feature combinations are shown that may have a more than linear relationship with song popularity</li>
</div>
```
#hide
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
#hide
from fastbook import *
```
# Image Classification
Now that you understand what deep learning is, what it's for, and how to create and deploy a model, it's time for us to go deeper! In an ideal world deep learning practitioners wouldn't have to know every detail of how things work under the hood… But as yet, we don't live in an ideal world. The truth is, to make your model really work, and work reliably, there are a lot of details you have to get right, and a lot of details that you have to check. This process requires being able to look inside your neural network as it trains, and as it makes predictions, find possible problems, and know how to fix them.
So, from here on in the book we are going to do a deep dive into the mechanics of deep learning. What is the architecture of a computer vision model, an NLP model, a tabular model, and so on? How do you create an architecture that matches the needs of your particular domain? How do you get the best possible results from the training process? How do you make things faster? What do you have to change as your datasets change?
We will start by repeating the same basic applications that we looked at in the first chapter, but we are going to do two things:
- Make them better.
- Apply them to a wider variety of types of data.
In order to do these two things, we will have to learn all of the pieces of the deep learning puzzle. This includes different types of layers, regularization methods, optimizers, how to put layers together into architectures, labeling techniques, and much more. We are not just going to dump all of these things on you, though; we will introduce them progressively as needed, to solve actual problems related to the projects we are working on.
## From Dogs and Cats to Pet Breeds
In our very first model we learned how to classify dogs versus cats. Just a few years ago this was considered a very challenging task—but today, it's far too easy! We will not be able to show you the nuances of training models with this problem, because we get a nearly perfect result without worrying about any of the details. But it turns out that the same dataset also allows us to work on a much more challenging problem: figuring out what breed of pet is shown in each image.
In <<chapter_intro>> we presented the applications as already-solved problems. But this is not how things work in real life. We start with some dataset that we know nothing about. We then have to figure out how it is put together, how to extract the data we need from it, and what that data looks like. For the rest of this book we will be showing you how to solve these problems in practice, including all of the intermediate steps necessary to understand the data that you are working with and test your modeling as you go.
We already downloaded the Pet dataset, and we can get a path to this dataset using the same code as in <<chapter_intro>>:
```
from fastai.vision.all import *
path = untar_data(URLs.PETS)
```
Now if we are going to understand how to extract the breed of each pet from each image we're going to need to understand how this data is laid out. Such details of data layout are a vital piece of the deep learning puzzle. Data is usually provided in one of these two ways:
- Individual files representing items of data, such as text documents or images, possibly organized into folders or with filenames representing information about those items
- A table of data, such as in CSV format, where each row is an item which may include filenames providing a connection between the data in the table and data in other formats, such as text documents and images
There are exceptions to these rules—particularly in domains such as genomics, where there can be binary database formats or even network streams—but overall the vast majority of the datasets you'll work with will use some combination of these two formats.
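As a toy illustration of the two layouts (not the Pets data itself), the same labels can live either in the filename structure or in a table whose rows point at files:

```python
import csv, io, pathlib, tempfile

# Layout 1: individual files, with the label encoded in the filename
root = pathlib.Path(tempfile.mkdtemp())
(root / "images").mkdir()
(root / "images" / "beagle_1.jpg").touch()
(root / "images" / "great_pyrenees_2.jpg").touch()
files = sorted(p.name for p in (root / "images").iterdir())

# Layout 2: a table (CSV) whose rows connect labels to files
table = io.StringIO("fname,label\nbeagle_1.jpg,beagle\ngreat_pyrenees_2.jpg,great_pyrenees\n")
rows = list(csv.DictReader(table))
print(files, rows[0]["label"])
```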
To see what is in our dataset we can use the `ls` method:
```
#hide
Path.BASE_PATH = path
path.ls()
```
We can see that this dataset provides us with *images* and *annotations* directories. The [website](https://www.robots.ox.ac.uk/~vgg/data/pets/) for the dataset tells us that the *annotations* directory contains information about where the pets are rather than what they are. In this chapter, we will be doing classification, not localization, which is to say that we care about what the pets are, not where they are. Therefore, we will ignore the *annotations* directory for now. So, let's have a look inside the *images* directory:
```
(path/"images").ls()
```
Most functions and methods in fastai that return a collection use a class called `L`. `L` can be thought of as an enhanced version of the ordinary Python `list` type, with added conveniences for common operations. For instance, when we display an object of this class in a notebook it appears in the format shown there. The first thing that is shown is the number of items in the collection, prefixed with a `#`. You'll also see in the preceding output that the list is suffixed with an ellipsis. This means that only the first few items are displayed—which is a good thing, because we would not want more than 7,000 filenames on our screen!
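The display convention described here, an item count prefixed with `#` followed by a truncated preview, is easy to mimic. This sketch is not fastai's actual implementation of `L`, just a demonstration of the repr behavior the text describes:

```python
class ShortList(list):
    """A list whose repr shows '#count' and truncates long collections, like fastai's L."""
    def __repr__(self):
        preview = list(self[:3])
        suffix = "..." if len(self) > 3 else ""
        return f"(#{len(self)}) {preview}{suffix}"

print(repr(ShortList(range(7400))))  # → (#7400) [0, 1, 2]...
```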
By examining these filenames, we can see how they appear to be structured. Each filename contains the pet breed, and then an underscore (`_`), a number, and finally the file extension. We need to create a piece of code that extracts the breed from a single `Path`. Jupyter notebooks make this easy, because we can gradually build up something that works, and then use it for the entire dataset. We do have to be careful to not make too many assumptions at this point. For instance, if you look carefully you may notice that some of the pet breeds contain multiple words, so we cannot simply break at the first `_` character that we find. To allow us to test our code, let's pick out one of these filenames:
```
fname = (path/"images").ls()[0]
```
The most powerful and flexible way to extract information from strings like this is to use a *regular expression*, also known as a *regex*. A regular expression is a special string, written in the regular expression language, which specifies a general rule for deciding if another string passes a test (i.e., "matches" the regular expression), and also possibly for plucking a particular part or parts out of that other string.
In this case, we need a regular expression that extracts the pet breed from the filename.
We do not have the space to give you a complete regular expression tutorial here, but there are many excellent ones online and we know that many of you will already be familiar with this wonderful tool. If you're not, that is totally fine—this is a great opportunity for you to rectify that! We find that regular expressions are one of the most useful tools in our programming toolkit, and many of our students tell us that this is one of the things they are most excited to learn about. So head over to Google and search for "regular expressions tutorial" now, and then come back here after you've had a good look around. The [book's website](https://book.fast.ai/) also provides a list of our favorites.
> a: Not only are regular expressions dead handy, but they also have interesting roots. They are "regular" because they were originally examples of a "regular" language, the lowest rung within the Chomsky hierarchy, a grammar classification developed by linguist Noam Chomsky, who also wrote _Syntactic Structures_, the pioneering work searching for the formal grammar underlying human language. This is one of the charms of computing: it may be that the hammer you reach for every day in fact came from a spaceship.
When you are writing a regular expression, the best way to start is just to try it against one example at first. Let's use the `findall` method to try a regular expression against the filename of the `fname` object:
```
re.findall(r'(.+)_\d+.jpg$', fname.name)
```
This regular expression plucks out all the characters leading up to the last underscore character, as long as the subsequent characters are numerical digits followed by the JPEG file extension.
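To see why the pattern anchors on the *last* underscore, here it is applied to a couple of made-up filenames (these names are invented for illustration; only the pattern comes from the cell above). Because `(.+)` is greedy, it keeps as many characters as it can while still leaving an underscore, digits, and the extension for the rest of the pattern to match:

```python
import re

pattern = r'(.+)_\d+.jpg$'

# A multi-word breed: splitting at the *first* underscore would
# wrongly give 'great', but the greedy group captures the whole breed.
print(re.findall(pattern, 'great_pyrenees_173.jpg'))  # → ['great_pyrenees']
print(re.findall(pattern, 'Bengal_5.jpg'))            # → ['Bengal']
```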
Now that we've confirmed the regular expression works for the example, let's use it to label the whole dataset. fastai comes with many classes to help with labeling. For labeling with regular expressions, we can use the `RegexLabeller` class. In this example we use the data block API we saw in <<chapter_production>> (in fact, we nearly always use the data block API—it's so much more flexible than the simple factory methods we saw in <<chapter_intro>>):
```
pets = DataBlock(blocks = (ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(seed=42),
get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
item_tfms=Resize(460),
batch_tfms=aug_transforms(size=224, min_scale=0.75))
dls = pets.dataloaders(path/"images")
```
One important piece of this `DataBlock` call that we haven't seen before is in these two lines:
```python
item_tfms=Resize(460),
batch_tfms=aug_transforms(size=224, min_scale=0.75)
```
These lines implement a fastai data augmentation strategy which we call *presizing*. Presizing is a particular way to do image augmentation that is designed to minimize data destruction while maintaining good performance.
## Presizing
We need our images to have the same dimensions, so that they can collate into tensors to be passed to the GPU. We also want to minimize the number of distinct augmentation computations we perform. The performance requirement suggests that we should, where possible, compose our augmentation transforms into fewer transforms (to reduce the number of computations and the number of lossy operations) and transform the images into uniform sizes (for more efficient processing on the GPU).
The challenge is that, if performed after resizing down to the augmented size, various common data augmentation transforms might introduce spurious empty zones, degrade data, or both. For instance, rotating an image by 45 degrees fills corner regions of the new bounds with emptiness, which will not teach the model anything. Many rotation and zooming operations will require interpolating to create pixels. These interpolated pixels are derived from the original image data but are still of lower quality.
To work around these challenges, presizing adopts two strategies that are shown in <<presizing>>:
1. Resize images to relatively "large" dimensions—that is, dimensions significantly larger than the target training dimensions.
1. Compose all of the common augmentation operations (including a resize to the final target size) into one, and perform the combined operation on the GPU only once at the end of processing, rather than performing the operations individually and interpolating multiple times.
The first step, the resize, creates images large enough that they have spare margin to allow further augmentation transforms on their inner regions without creating empty zones. This transformation works by resizing to a square, using a large crop size. On the training set, the crop area is chosen randomly, and the size of the crop is selected to cover the entire width or height of the image, whichever is smaller.
In the second step, the GPU is used for all data augmentation, and all of the potentially destructive operations are done together, with a single interpolation at the end.
<img alt="Presizing on the training set" width="600" caption="Presizing on the training set" id="presizing" src="images/att_00060.png">
This picture shows the two steps:
1. *Crop full width or height*: This is in `item_tfms`, so it's applied to each individual image before it is copied to the GPU. It's used to ensure all images are the same size. On the training set, the crop area is chosen randomly. On the validation set, the center square of the image is always chosen.
2. *Random crop and augment*: This is in `batch_tfms`, so it's applied to a batch all at once on the GPU, which means it's fast. On the validation set, only the resize to the final size needed for the model is done here. On the training set, the random crop and any other augmentations are done first.
To implement this process in fastai you use `Resize` as an item transform with a large size, and `RandomResizedCrop` as a batch transform with a smaller size. `RandomResizedCrop` will be added for you if you include the `min_scale` parameter in your `aug_transforms` function, as was done in the `DataBlock` call in the previous section. Alternatively, you can use `pad` or `squish` instead of `crop` (the default) for the initial `Resize`.
<<interpolations>> shows the difference between an image that has been zoomed, interpolated, rotated, and then interpolated again (which is the approach used by all other deep learning libraries), shown here on the right, and an image that has been zoomed and rotated as one operation and then interpolated just once (the fastai approach), shown here on the left.
```
#hide_input
#id interpolations
#caption A comparison of fastai's data augmentation strategy (left) and the traditional approach (right).
dblock1 = DataBlock(blocks=(ImageBlock(), CategoryBlock()),
get_y=parent_label,
item_tfms=Resize(460))
dls1 = dblock1.dataloaders([(Path.cwd()/'images'/'grizzly.jpg')]*100, bs=8)
dls1.train.get_idxs = lambda: Inf.ones
x,y = dls1.valid.one_batch()
_,axs = subplots(1, 2)
x1 = TensorImage(x.clone())
x1 = x1.affine_coord(sz=224)
x1 = x1.rotate(draw=30, p=1.)
x1 = x1.zoom(draw=1.2, p=1.)
x1 = x1.warp(draw_x=-0.2, draw_y=0.2, p=1.)
tfms = setup_aug_tfms([Rotate(draw=30, p=1, size=224), Zoom(draw=1.2, p=1., size=224),
Warp(draw_x=-0.2, draw_y=0.2, p=1., size=224)])
x = Pipeline(tfms)(x)
#x.affine_coord(coord_tfm=coord_tfm, sz=size, mode=mode, pad_mode=pad_mode)
TensorImage(x[0]).show(ctx=axs[0])
TensorImage(x1[0]).show(ctx=axs[1]);
```
You can see that the image on the right is less well defined and has reflection padding artifacts in the bottom-left corner; also, the grass at the top left has disappeared entirely. We find that in practice using presizing significantly improves the accuracy of models, and often results in speedups too.
The fastai library also provides simple ways to check your data looks right before training a model, which is an extremely important step. We'll look at those next.
### Checking and Debugging a DataBlock
We can never just assume that our code is working perfectly. Writing a `DataBlock` is just like writing a blueprint. You will get an error message if you have a syntax error somewhere in your code, but you have no guarantee that your template is going to work on your data source as you intend. So, before training a model you should always check your data. You can do this using the `show_batch` method:
```
dls.show_batch(nrows=1, ncols=3)
```
Take a look at each image, and check that each one seems to have the correct label for that breed of pet. Often, data scientists work with data with which they are not as familiar as domain experts may be: for instance, I actually don't know what a lot of these pet breeds are. Since I am not an expert on pet breeds, I would use Google images at this point to search for a few of these breeds, and make sure the images look similar to what I see in this output.
If you made a mistake while building your `DataBlock`, it is very likely you won't see it before this step. To debug this, we encourage you to use the `summary` method. It will attempt to create a batch from the source you give it, with a lot of details. Also, if it fails, you will see exactly at which point the error happens, and the library will try to give you some help. For instance, one common mistake is to forget to use a `Resize` transform, so you end up with pictures of different sizes and are not able to batch them. Here is what the summary would look like in that case (note that the exact text may have changed since the time of writing, but it will give you an idea):
```
#hide_output
pets1 = DataBlock(blocks = (ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(seed=42),
get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'))
pets1.summary(path/"images")
```
```
Setting-up type transforms pipelines
Collecting items from /home/sgugger/.fastai/data/oxford-iiit-pet/images
Found 7390 items
2 datasets of sizes 5912,1478
Setting up Pipeline: PILBase.create
Setting up Pipeline: partial -> Categorize
Building one sample
Pipeline: PILBase.create
starting from
/home/sgugger/.fastai/data/oxford-iiit-pet/images/american_bulldog_83.jpg
applying PILBase.create gives
PILImage mode=RGB size=375x500
Pipeline: partial -> Categorize
starting from
/home/sgugger/.fastai/data/oxford-iiit-pet/images/american_bulldog_83.jpg
applying partial gives
american_bulldog
applying Categorize gives
TensorCategory(12)
Final sample: (PILImage mode=RGB size=375x500, TensorCategory(12))
Setting up after_item: Pipeline: ToTensor
Setting up before_batch: Pipeline:
Setting up after_batch: Pipeline: IntToFloatTensor
Building one batch
Applying item_tfms to the first sample:
Pipeline: ToTensor
starting from
(PILImage mode=RGB size=375x500, TensorCategory(12))
applying ToTensor gives
(TensorImage of size 3x500x375, TensorCategory(12))
Adding the next 3 samples
No before_batch transform to apply
Collating items in a batch
Error! It's not possible to collate your items in a batch
Could not collate the 0-th members of your tuples because got the following
shapes:
torch.Size([3, 500, 375]),torch.Size([3, 375, 500]),torch.Size([3, 333, 500]),
torch.Size([3, 375, 500])
```
You can see exactly how we gathered the data and split it, how we went from a filename to a *sample* (the tuple (image, category)), then what item transforms were applied and how it failed to collate those samples in a batch (because of the different shapes).
Once you think your data looks right, we generally recommend the next step should be training a simple model. We often see people put off the training of an actual model for far too long. As a result, they don't actually find out what their baseline results look like. Perhaps your problem doesn't need lots of fancy domain-specific engineering. Or perhaps the data doesn't seem to train the model at all. These are things that you want to know as soon as possible. For this initial test, we'll use the same simple model that we used in <<chapter_intro>>:
```
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)
```
As we've briefly discussed before, the table shown when we fit a model shows us the results after each epoch of training. Remember, an epoch is one complete pass through all of the images in the data. The columns shown are the average loss over the items of the training set, the loss on the validation set, and any metrics that we requested—in this case, the error rate.
Remember that *loss* is whatever function we've decided to use to optimize the parameters of our model. But we haven't actually told fastai what loss function we want to use. So what is it doing? fastai will generally try to select an appropriate loss function based on what kind of data and model you are using. In this case we have image data and a categorical outcome, so fastai will default to using *cross-entropy loss*.
## Cross-Entropy Loss
*Cross-entropy loss* is a loss function that is similar to the one we used in the previous chapter, but (as we'll see) has two benefits:
- It works even when our dependent variable has more than two categories.
- It results in faster and more reliable training.
In order to understand how cross-entropy loss works for dependent variables with more than two categories, we first have to understand what the actual data and activations that are seen by the loss function look like.
### Viewing Activations and Labels
Let's take a look at the activations of our model. To actually get a batch of real data from our `DataLoaders`, we can use the `one_batch` method:
```
x,y = dls.one_batch()
```
As you see, this returns the dependent and independent variables, as a mini-batch. Let's see what is actually contained in our dependent variable:
```
y
```
Our batch size is 64, so we have 64 rows in this tensor. Each row is a single integer between 0 and 36, representing our 37 possible pet breeds. We can view the predictions (that is, the activations of the final layer of our neural network) using `Learner.get_preds`. This function either takes a dataset index (0 for train and 1 for valid) or an iterator of batches. Thus, we can pass it a simple list with our batch to get our predictions. It returns predictions and targets by default, but since we already have the targets, we can effectively ignore them by assigning to the special variable `_`:
```
preds,_ = learn.get_preds(dl=[(x,y)])
preds[0]
```
The actual predictions are 37 probabilities between 0 and 1, which add up to 1 in total:
```
len(preds[0]),preds[0].sum()
```
To transform the activations of our model into predictions like this, we used something called the *softmax* activation function.
### Softmax
In our classification model, we use the softmax activation function in the final layer to ensure that the activations are all between 0 and 1, and that they sum to 1.
Softmax is similar to the sigmoid function, which we saw earlier. As a reminder sigmoid looks like this:
```
plot_function(torch.sigmoid, min=-4,max=4)
```
We can apply this function to a single column of activations from a neural network, and get back a column of numbers between 0 and 1, so it's a very useful activation function for our final layer.
Now think about what happens if we want to have more categories in our target (such as our 37 pet breeds). That means we'll need more activations than just a single column: we need an activation *per category*. We can create, for instance, a neural net that predicts 3s and 7s that returns two activations, one for each class—this will be a good first step toward creating the more general approach. Let's just use some random numbers with a standard deviation of 2 (so we multiply `randn` by 2) for this example, assuming we have 6 images and 2 possible categories (where the first column represents 3s and the second is 7s):
```
#hide
torch.random.manual_seed(42);
acts = torch.randn((6,2))*2
acts
```
We can't just take the sigmoid of this directly, since we don't get rows that add to 1 (i.e., we want the probability of being a 3 plus the probability of being a 7 to add up to 1):
```
acts.sigmoid()
```
In <<chapter_mnist_basics>>, our neural net created a single activation per image, which we passed through the `sigmoid` function. That single activation represented the model's confidence that the input was a 3. Binary problems are a special case of classification problems, because the target can be treated as a single boolean value, as we did in `mnist_loss`. But binary problems can also be thought of in the context of the more general group of classifiers with any number of categories: in this case, we happen to have two categories. As we saw in the bear classifier, our neural net will return one activation per category.
So in the binary case, what do those activations really indicate? A single pair of activations simply indicates the *relative* confidence of the input being a 3 versus being a 7. The overall values, whether they are both high, or both low, don't matter—all that matters is which is higher, and by how much.
We would expect that since this is just another way of representing the same problem, that we would be able to use `sigmoid` directly on the two-activation version of our neural net. And indeed we can! We can just take the *difference* between the neural net activations, because that reflects how much more sure we are of the input being a 3 than a 7, and then take the sigmoid of that:
```
(acts[:,0]-acts[:,1]).sigmoid()
```
The second column (the probability of it being a 7) will then just be that value subtracted from 1. Now, we need a way to do all this that also works for more than two columns. It turns out that this function, called `softmax`, is exactly that:
``` python
def softmax(x): return exp(x) / exp(x).sum(dim=1, keepdim=True)
```
> jargon: Exponential function (exp): Literally defined as `e**x`, where `e` is a special number approximately equal to 2.718. It is the inverse of the natural logarithm function. Note that `exp` is always positive, and it increases _very_ rapidly!
Let's check that `softmax` returns the same values as `sigmoid` for the first column, and those values subtracted from 1 for the second column:
```
sm_acts = torch.softmax(acts, dim=1)
sm_acts
```
`softmax` is the multi-category equivalent of `sigmoid`—we have to use it any time we have more than two categories and the probabilities of the categories must add to 1, and we often use it even when there are just two categories, just to make things a bit more consistent. We could create other functions that have the properties that all activations are between 0 and 1, and sum to 1; however, no other function has the same relationship to the sigmoid function, which we've seen is smooth and symmetric. Also, we'll see shortly that the softmax function works well hand-in-hand with the loss function we will look at in the next section.
If we have three output activations, such as in our bear classifier, calculating softmax for a single bear image would then look like something like <<bear_softmax>>.
<img alt="Bear softmax example" width="280" id="bear_softmax" caption="Example of softmax on the bear classifier" src="images/att_00062.png">
What does this function do in practice? Taking the exponential ensures all our numbers are positive, and then dividing by the sum ensures we are going to have a bunch of numbers that add up to 1. The exponential also has a nice property: if one of the numbers in our activations `x` is slightly bigger than the others, the exponential will amplify this (since it grows, well... exponentially), which means that in the softmax, that number will be closer to 1.
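A quick numeric check of this amplification effect, using a plain-Python softmax (the activation values are chosen arbitrarily for illustration):

```python
import math

def softmax(xs):
    # Exponentiate, then normalize so the results sum to 1
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print([round(p, 3) for p in probs])  # → [0.09, 0.245, 0.665]
```

The top activation is only 1.5 times the middle one, but its probability comes out `e` (about 2.72) times larger, because the ratio of two softmax outputs is `exp` of the gap between their activations.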
Intuitively, the softmax function *really* wants to pick one class among the others, so it's ideal for training a classifier when we know each picture has a definite label. (Note that it may be less ideal during inference, as you might want your model to sometimes tell you it doesn't recognize any of the classes that it has seen during training, and not pick a class because it has a slightly bigger activation score. In this case, it might be better to train a model using multiple binary output columns, each using a sigmoid activation.)
Softmax is the first part of the cross-entropy loss—the second part is log likelihood.
### Log Likelihood
When we calculated the loss for our MNIST example in the last chapter we used:
```python
def mnist_loss(inputs, targets):
inputs = inputs.sigmoid()
return torch.where(targets==1, 1-inputs, inputs).mean()
```
Just as we moved from sigmoid to softmax, we need to extend the loss function to work with more than just binary classification—it needs to be able to classify any number of categories (in this case, we have 37 categories). Our activations, after softmax, are between 0 and 1, and sum to 1 for each row in the batch of predictions. Our targets are integers between 0 and 36.
In the binary case, we used `torch.where` to select between `inputs` and `1-inputs`. When we treat a binary classification as a general classification problem with two categories, it actually becomes even easier, because (as we saw in the previous section) we now have two columns, containing the equivalent of `inputs` and `1-inputs`. So, all we need to do is select from the appropriate column. Let's try to implement this in PyTorch. For our synthetic 3s and 7s example, let's say these are our labels:
```
targ = tensor([0,1,0,1,1,0])
```
and these are the softmax activations:
```
sm_acts
```
Then for each item of `targ` we can use that to select the appropriate column of `sm_acts` using tensor indexing, like so:
```
idx = range(6)
sm_acts[idx, targ]
```
To see exactly what's happening here, let's put all the columns together in a table. Here, the first two columns are our activations, then we have the targets, the row index, and finally the result shown immediately above:
```
#hide_input
from IPython.display import HTML
df = pd.DataFrame(sm_acts, columns=["3","7"])
df['targ'] = targ
df['idx'] = idx
df['loss'] = sm_acts[range(6), targ]
t = df.style.hide_index()
#To have html code compatible with our script
html = t._repr_html_().split('</style>')[1]
html = re.sub(r'<table id="([^"]+)"\s*>', r'<table >', html)
display(HTML(html))
```
Looking at this table, you can see that the final column can be calculated by taking the `targ` and `idx` columns as indices into the two-column matrix containing the `3` and `7` columns. That's what `sm_acts[idx, targ]` is actually doing.
The really interesting thing here is that this actually works just as well with more than two columns. To see this, consider what would happen if we added an activation column for every digit (0 through 9), and then `targ` contained a number from 0 to 9. As long as the activation columns sum to 1 (as they will, if we use softmax), then we'll have a loss function that shows how well we're predicting each digit.
We're only picking the loss from the column containing the correct label. We don't need to consider the other columns, because by the definition of softmax, they add up to 1 minus the activation corresponding to the correct label. Therefore, making the activation for the correct label as high as possible must mean we're also decreasing the activations of the remaining columns.
PyTorch provides a function that does exactly the same thing as `sm_acts[range(n), targ]` (except it takes the negative, because when applying the log afterward, we will have negative numbers), called `nll_loss` (*NLL* stands for *negative log likelihood*):
```
-sm_acts[idx, targ]
F.nll_loss(sm_acts, targ, reduction='none')
```
Despite its name, this PyTorch function does not take the log. We'll see why in the next section, but first, let's see why taking the logarithm can be useful.
### Taking the Log
The function we saw in the previous section works quite well as a loss function, but we can make it a bit better. The problem is that we are using probabilities, and probabilities cannot be smaller than 0 or greater than 1. That means that our model will not care whether it predicts 0.99 or 0.999. Indeed, those numbers are so close together—but in another sense, 0.999 is 10 times more confident than 0.99. So, we want to transform our numbers between 0 and 1 to instead be between negative infinity and infinity. There is a mathematical function that does exactly this: the *logarithm* (available as `torch.log`). It is not defined for numbers less than 0, and looks like this:
```
plot_function(torch.log, min=0,max=4)
```
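Returning to the 0.99 versus 0.999 example, a quick numeric illustration of what the log scale buys us:

```python
import math

# 0.99 and 0.999 sit almost on top of each other as probabilities...
print(0.999 - 0.99)          # → a gap of only ~0.009

# ...but their logs reflect the 10x difference in confidence:
print(-math.log(0.99))       # → ~0.01005
print(-math.log(0.999))      # → ~0.001
```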
Does "logarithm" ring a bell? The logarithm function has this identity:
```
y = b**a
a = log(y,b)
```
In this case, we're assuming that `log(y,b)` returns *log y base b*. However, PyTorch actually doesn't define `log` this way: `log` in Python uses the special number `e` (2.718...) as the base.
Perhaps a logarithm is something that you have not thought about for the last 20 years or so. But it's a mathematical idea that is going to be really critical for many things in deep learning, so now would be a great time to refresh your memory. The key thing to know about logarithms is this relationship:
log(a*b) = log(a)+log(b)
When we see it in that format, it looks a bit boring; but think about what this really means. It means that logarithms increase linearly when the underlying signal increases exponentially or multiplicatively. This is used, for instance, in the Richter scale of earthquake severity, and the dB scale of noise levels. It's also often used on financial charts, where we want to show compound growth rates more clearly. Computer scientists love using logarithms, because it means that multiplication, which can create really really large and really really small numbers, can be replaced by addition, which is much less likely to result in scales that are difficult for our computers to handle.
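The identity is easy to confirm numerically (the values are arbitrary):

```python
import math

# log(a*b) == log(a) + log(b): multiplication becomes addition
assert math.isclose(math.log(2.0 * 3.0), math.log(2.0) + math.log(3.0))

# This is why logs tame huge products: a product of a thousand small
# probabilities underflows to 0.0, but the sum of their logs is an
# ordinary, usable number.
p = 1e-4
print(p ** 1000)              # → 0.0 (underflows)
print(1000 * math.log(p))     # a finite negative number
```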
> s: It's not just computer scientists that love logs! Until computers came along, engineers and scientists used a special ruler called a "slide rule" that did multiplication by adding logarithms. Logarithms are widely used in physics, for multiplying very big or very small numbers, and many other fields.
Taking the mean of the positive or negative log of our probabilities (depending on whether it's the correct or incorrect class) gives us the *negative log likelihood* loss. In PyTorch, `nll_loss` assumes that you already took the log of the softmax, so it doesn't actually do the logarithm for you.
> warning: Confusing Name, Beware: The nll in `nll_loss` stands for "negative log likelihood," but it doesn't actually take the log at all! It assumes you have _already_ taken the log. PyTorch has a function called `log_softmax` that combines `log` and `softmax` in a fast and accurate way. `nll_loss` is designed to be used after `log_softmax`.
When we first take the softmax, and then the log likelihood of that, that combination is called *cross-entropy loss*. In PyTorch, this is available as `nn.CrossEntropyLoss` (which, in practice, actually does `log_softmax` and then `nll_loss`):
```
loss_func = nn.CrossEntropyLoss()
```
As you see, this is a class. Instantiating it gives you an object which behaves like a function:
```
loss_func(acts, targ)
```
All PyTorch loss functions are provided in two forms, the class just shown above, and also a plain functional form, available in the `F` namespace:
```
F.cross_entropy(acts, targ)
```
Either one works fine and can be used in any situation. We've noticed that most people tend to use the class version, and that's more often used in PyTorch's official docs and examples, so we'll tend to use that too.
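If you want to convince yourself that cross-entropy really is `log_softmax` followed by `nll_loss`, you can check the two computations against each other on some toy activations (the shapes mirror the 6-row, 2-column example above):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(42)
acts = torch.randn(6, 2) * 2              # toy activations
targ = torch.tensor([0, 1, 0, 1, 1, 0])   # toy targets

# Composing the two pieces by hand...
composed = F.nll_loss(F.log_softmax(acts, dim=1), targ)
# ...gives the same result as the one-shot function:
direct = F.cross_entropy(acts, targ)
assert torch.allclose(composed, direct)
```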
By default PyTorch loss functions take the mean of the loss of all items. You can use `reduction='none'` to disable that:
```
nn.CrossEntropyLoss(reduction='none')(acts, targ)
```
> s: An interesting feature about cross-entropy loss appears when we consider its gradient. The gradient of `cross_entropy(a,b)` is just `softmax(a)-b`. Since `softmax(a)` is just the final activation of the model, that means that the gradient is proportional to the difference between the prediction and the target. This is the same as mean squared error in regression (assuming there's no final activation function such as that added by `y_range`), since the gradient of `(a-b)**2` is `2*(a-b)`. Because the gradient is linear, that means we won't see sudden jumps or exponential increases in gradients, which should lead to smoother training of models.
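You can check the sidebar's claim about the gradient numerically, without any autograd, by comparing `softmax(a)-b` against a finite-difference approximation on a hand-rolled cross-entropy (the logits and target here are chosen arbitrarily):

```python
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target):
    # -log of the softmax probability assigned to the correct class
    return -math.log(softmax(logits)[target])

logits, target = [1.0, 2.0, 0.5], 1
onehot = [1.0 if i == target else 0.0 for i in range(3)]

# Analytic gradient claimed in the note: softmax(a) - b
analytic = [p - t for p, t in zip(softmax(logits), onehot)]

# Finite-difference gradient: nudge each logit up and down a little
h = 1e-6
for i in range(3):
    up = logits.copy();   up[i] += h
    down = logits.copy(); down[i] -= h
    numeric = (cross_entropy(up, target) - cross_entropy(down, target)) / (2 * h)
    assert abs(numeric - analytic[i]) < 1e-4
```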
We have now seen all the pieces hidden behind our loss function. But while this puts a number on how well (or badly) our model is doing, it does nothing to help us know if it's actually any good. Let's now see some ways to interpret our model's predictions.
## Model Interpretation
It's very hard to interpret loss functions directly, because they are designed to be things computers can differentiate and optimize, not things that people can understand. That's why we have metrics. These are not used in the optimization process, but just to help us poor humans understand what's going on. In this case, our accuracy is looking pretty good already! So where are we making mistakes?
We saw in <<chapter_intro>> that we can use a confusion matrix to see where our model is doing well, and where it's doing badly:
```
#width 600
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
```
Oh dear—in this case, a confusion matrix is very hard to read. We have 37 different breeds of pet, which means we have 37×37 entries in this giant matrix! Instead, we can use the `most_confused` method, which just shows us the cells of the confusion matrix with the most incorrect predictions (here, with at least 5):
```
interp.most_confused(min_val=5)
```
Since we are not pet breed experts, it is hard for us to know whether these category errors reflect actual difficulties in recognizing breeds. So again, we turn to Google. A little bit of Googling tells us that the most common category errors shown here are actually breed differences that even expert breeders sometimes disagree about. So this gives us some comfort that we are on the right track.
We seem to have a good baseline. What can we do now to make it even better?
## Improving Our Model
We will now look at a range of techniques to improve the training of our model and make it better. While doing so, we will explain a little bit more about transfer learning and how to fine-tune our pretrained model as best as possible, without breaking the pretrained weights.
The first thing we need to set when training a model is the learning rate. We saw in the previous chapter that it needs to be just right to train as efficiently as possible, so how do we pick a good one? fastai provides a tool for this.
### The Learning Rate Finder
One of the most important things we can do when training a model is to make sure that we have the right learning rate. If our learning rate is too low, it can take many, many epochs to train our model. Not only does this waste time, but it also means that we may have problems with overfitting, because every time we do a complete pass through the data, we give our model a chance to memorize it.
So let's just make our learning rate really high, right? Sure, let's try that and see what happens:
```
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1, base_lr=0.1)
```
That doesn't look good. Here's what happened. The optimizer stepped in the correct direction, but it stepped so far that it totally overshot the minimum loss. Repeating that multiple times makes it get further and further away, not closer and closer!
What do we do to find the perfect learning rate—not too high, and not too low? In 2015 the researcher Leslie Smith came up with a brilliant idea, called the *learning rate finder*. His idea was to start with a very, very small learning rate, something so small that we would never expect it to be too big to handle. We use that for one mini-batch, find what the losses are afterwards, and then increase the learning rate by some percentage (e.g., doubling it each time). Then we do another mini-batch, track the loss, and double the learning rate again. We keep doing this until the loss gets worse, instead of better. This is the point where we know we have gone too far. We then select a learning rate a bit lower than this point. Our advice is to pick either:
- One order of magnitude less than where the minimum loss was achieved (i.e., the minimum divided by 10)
- The last point where the loss was clearly decreasing
The learning rate finder computes those points on the curve to help you. Both these rules usually give around the same value. In the first chapter, we didn't specify a learning rate, using the default value from the fastai library (which is 1e-3):
```
learn = cnn_learner(dls, resnet34, metrics=error_rate)
lr_min,lr_steep = learn.lr_find()
print(f"Minimum/10: {lr_min:.2e}, steepest point: {lr_steep:.2e}")
```
We can see on this plot that in the range 1e-6 to 1e-3, nothing really happens and the model doesn't train. Then the loss starts to decrease until it reaches a minimum, and then increases again. We don't want a learning rate greater than 1e-1 as it will give a training that diverges like the one before (you can try for yourself), but 1e-1 is already too high: at this stage we've left the period where the loss was decreasing steadily.
In this learning rate plot it appears that a learning rate around 3e-3 would be appropriate, so let's choose that:
```
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2, base_lr=3e-3)
```
> Note: Logarithmic Scale: The learning rate finder plot has a logarithmic scale, which is why the middle point between 1e-3 and 1e-2 is between 3e-3 and 4e-3. This is because we care mostly about the order of magnitude of the learning rate.
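Concretely, the midpoint on a log scale is the geometric mean of the endpoints, not the arithmetic one:

```python
import math

lo, hi = 1e-3, 1e-2
log_midpoint = math.sqrt(lo * hi)   # geometric mean = midpoint on a log axis
arith_midpoint = (lo + hi) / 2      # arithmetic mean, for comparison
print(f"{log_midpoint:.2e} vs {arith_midpoint:.2e}")  # prints: 3.16e-03 vs 5.50e-03
```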
It's interesting that the learning rate finder was only discovered in 2015, while neural networks have been under development since the 1950s. Throughout that time finding a good learning rate has been, perhaps, the most important and challenging issue for practitioners. The solution does not require any advanced maths, giant computing resources, huge datasets, or anything else that would make it inaccessible to any curious researcher. Furthermore, Leslie Smith was not part of some exclusive Silicon Valley lab, but was working as a naval researcher. All of this is to say: breakthrough work in deep learning absolutely does not require access to vast resources, elite teams, or advanced mathematical ideas. There is lots of work still to be done that requires just a bit of common sense, creativity, and tenacity.
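In fact, the whole procedure fits in a few lines. The sketch below is our own toy illustration, with a made-up quadratic `toy_loss` standing in for a real mini-batch loss; it runs the exponential sweep and stops once the loss clearly diverges:

```python
def toy_loss(w):
    # Quadratic "loss surface" standing in for a real mini-batch loss
    return (w - 3.0) ** 2

def toy_grad(w):
    return 2.0 * (w - 3.0)

def lr_finder_sweep(w=0.0, lr=1e-7, max_steps=60):
    """Double the learning rate after each mini-batch; stop when the loss blows up."""
    history = []
    best = float("inf")
    for _ in range(max_steps):
        loss = toy_loss(w)
        history.append((lr, loss))
        if loss > 4 * best:          # loss clearly getting worse: we've gone too far
            break
        best = min(best, loss)
        w -= lr * toy_grad(w)        # one SGD step at the current learning rate
        lr *= 2.0                    # exponential schedule
    return history

hist = lr_finder_sweep()
# Pick a rate somewhat below the last lr where the loss was still decreasing
```

In the real `lr_find`, each step uses an actual mini-batch and the learning rate grows by a smaller multiplicative factor, but the stopping logic is the same idea.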
Now that we have a good learning rate to train our model, let's look at how we can fine-tune the weights of a pretrained model.
### Unfreezing and Transfer Learning
We discussed briefly in <<chapter_intro>> how transfer learning works. We saw that the basic idea is that a pretrained model, trained potentially on millions of data points (such as ImageNet), is fine-tuned for some other task. But what does this really mean?
We now know that a convolutional neural network consists of many linear layers with a nonlinear activation function between each pair, followed by one or more final linear layers with an activation function such as softmax at the very end. The final linear layer uses a matrix with enough columns such that the output size is the same as the number of classes in our model (assuming that we are doing classification).
This final linear layer is unlikely to be of any use for us when we are fine-tuning in a transfer learning setting, because it is specifically designed to classify the categories in the original pretraining dataset. So when we do transfer learning we remove it, throw it away, and replace it with a new linear layer with the correct number of outputs for our desired task (in this case, there would be 37 activations).
This newly added linear layer will have entirely random weights. Therefore, our model prior to fine-tuning has entirely random outputs. But that does not mean that it is an entirely random model! All of the layers prior to the last one have been carefully trained to be good at image classification tasks in general. As we saw in the images from the [Zeiler and Fergus paper](https://arxiv.org/pdf/1311.2901.pdf) in <<chapter_intro>> (see <<img_layer1>> through <<img_layer4>>), the first few layers encode very general concepts, such as finding gradients and edges, and later layers encode concepts that are still very useful for us, such as finding eyeballs and fur.
We want to train a model in such a way that we allow it to remember all of these generally useful ideas from the pretrained model, use them to solve our particular task (classify pet breeds), and only adjust them as required for the specifics of our particular task.
Our challenge when fine-tuning is to replace the random weights in our added linear layers with weights that correctly achieve our desired task (classifying pet breeds) without breaking the carefully pretrained weights and the other layers. There is actually a very simple trick to allow this to happen: tell the optimizer to only update the weights in those randomly added final layers. Don't change the weights in the rest of the neural network at all. This is called *freezing* those pretrained layers.
When we create a model from a pretrained network fastai automatically freezes all of the pretrained layers for us. When we call the `fine_tune` method fastai does two things:
- Trains the randomly added layers for one epoch, with all other layers frozen
- Unfreezes all of the layers, and trains them all for the number of epochs requested
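The effect of freezing can be illustrated without any deep learning framework at all: the optimizer simply skips frozen parameters during its update. The toy sketch below is our own (not fastai's actual mechanism, which works on PyTorch parameter groups); a "model" is just a list of per-layer weights with a frozen flag:

```python
def sgd_step(layers, grads, lr=0.5):
    """Update each layer's weight unless it is frozen."""
    for layer, grad in zip(layers, grads):
        if not layer["frozen"]:
            layer["w"] -= lr * grad

# Pretrained body (frozen) plus a randomly initialised head (trainable)
layers = [
    {"name": "body_1", "w": 1.0, "frozen": True},
    {"name": "body_2", "w": 2.0, "frozen": True},
    {"name": "head",   "w": 3.0, "frozen": False},
]

sgd_step(layers, grads=[1.0, 1.0, 1.0])
print([l["w"] for l in layers])   # [1.0, 2.0, 2.5] -- only the head moved

for layer in layers:              # "unfreeze": now every layer trains
    layer["frozen"] = False
sgd_step(layers, grads=[1.0, 1.0, 1.0])
print([l["w"] for l in layers])   # [0.5, 1.5, 2.0]
```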
Although this is a reasonable default approach, it is likely that for your particular dataset you may get better results by doing things slightly differently. The `fine_tune` method has a number of parameters you can use to change its behavior, but it might be easiest for you to just call the underlying methods directly if you want to get some custom behavior. Remember that you can see the source code for the method by using the following syntax:
```
learn.fine_tune??
```
So let's try doing this manually ourselves. First of all we will train the randomly added layers for three epochs, using `fit_one_cycle`. As mentioned in <<chapter_intro>>, `fit_one_cycle` is the suggested way to train models without using `fine_tune`. We'll see why later in the book; in short, what `fit_one_cycle` does is to start training at a low learning rate, gradually increase it for the first section of training, and then gradually decrease it again for the last section of training.
```
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fit_one_cycle(3, 3e-3)
```
Then we'll unfreeze the model:
```
learn.unfreeze()
```
and run `lr_find` again, because having more layers to train, and weights that have already been trained for three epochs, means our previously found learning rate isn't appropriate any more:
```
learn.lr_find()
```
Note that the graph is a little different from when we had random weights: we don't have that sharp descent that indicates the model is training. That's because our model has been trained already. Here we have a somewhat flat area before a sharp increase, and we should take a point well before that sharp increase—for instance, 1e-5. The point with the maximum gradient isn't what we look for here and should be ignored.
Let's train at a suitable learning rate:
```
learn.fit_one_cycle(6, lr_max=1e-5)
```
This has improved our model a bit, but there's more we can do. The deepest layers of our pretrained model might not need as high a learning rate as the last ones, so we should probably use different learning rates for those—this is known as using *discriminative learning rates*.
### Discriminative Learning Rates
Even after we unfreeze, we still care a lot about the quality of those pretrained weights. We would not expect that the best learning rate for those pretrained parameters would be as high as for the randomly added parameters, even after we have tuned those randomly added parameters for a few epochs. Remember, the pretrained weights have been trained for hundreds of epochs, on millions of images.
In addition, do you remember the images we saw in <<chapter_intro>>, showing what each layer learns? The first layer learns very simple foundations, like edge and gradient detectors; these are likely to be just as useful for nearly any task. The later layers learn much more complex concepts, like "eye" and "sunset," which might not be useful in your task at all (maybe you're classifying car models, for instance). So it makes sense to let the later layers fine-tune more quickly than earlier layers.
Therefore, fastai's default approach is to use discriminative learning rates. This was originally developed in the ULMFiT approach to NLP transfer learning that we will introduce in <<chapter_nlp>>. Like many good ideas in deep learning, it is extremely simple: use a lower learning rate for the early layers of the neural network, and a higher learning rate for the later layers (and especially the randomly added layers). The idea is based on insights developed by [Jason Yosinski](https://arxiv.org/abs/1411.1792), who showed in 2014 that with transfer learning different layers of a neural network should train at different speeds, as seen in <<yosinski>>.
<img alt="Impact of different layers and training methods on transfer learning (Yosinski)" width="680" caption="Impact of different layers and training methods on transfer learning (courtesy of Jason Yosinski et al.)" id="yosinski" src="images/att_00039.png">
fastai lets you pass a Python `slice` object anywhere that a learning rate is expected. The first value passed will be the learning rate in the earliest layer of the neural network, and the second value will be the learning rate in the final layer. The layers in between will have learning rates that are multiplicatively equidistant throughout that range. Let's use this approach to replicate the previous training, but this time we'll only set the *lowest* layer of our net to a learning rate of 1e-6; the other layers will scale up to 1e-4. Let's train for a while and see what happens:
```
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fit_one_cycle(3, 3e-3)
learn.unfreeze()
learn.fit_one_cycle(12, lr_max=slice(1e-6,1e-4))
```
Now the fine-tuning is working great!
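The "multiplicatively equidistant" spacing from the slice can be computed directly: the per-group learning rates form a geometric sequence between the two endpoints. A small sketch of our own (assuming four layer groups for illustration; this is not fastai's internal code):

```python
def discriminative_lrs(lr_min, lr_max, n_groups):
    """Geometrically spaced learning rates from the earliest to the final layer group."""
    if n_groups == 1:
        return [lr_max]
    ratio = (lr_max / lr_min) ** (1 / (n_groups - 1))
    return [lr_min * ratio**i for i in range(n_groups)]

lrs = discriminative_lrs(1e-6, 1e-4, 4)
# Each consecutive pair of rates differs by the same factor (~4.64x here)
```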
fastai can show us a graph of the training and validation loss:
```
learn.recorder.plot_loss()
```
As you can see, the training loss keeps getting better and better. But notice that eventually the validation loss improvement slows, and sometimes even gets worse! This is the point at which the model is starting to overfit. In particular, the model is becoming overconfident in its predictions. But this does *not* mean that it is getting less accurate, necessarily. Take a look at the table of training results per epoch, and you will often see that the accuracy continues improving, even as the validation loss gets worse. In the end what matters is your accuracy, or more generally your chosen metrics, not the loss. The loss is just the function we've given the computer to help us to optimize.
Another decision you have to make when training the model is for how long to train for. We'll consider that next.
### Selecting the Number of Epochs
Often you will find that you are limited by time, rather than generalization and accuracy, when choosing how many epochs to train for. So your first approach to training should be to simply pick a number of epochs that will train in the amount of time that you are happy to wait for. Then look at the training and validation loss plots, as shown above, and in particular your metrics, and if you see that they are still getting better even in your final epochs, then you know that you have not trained for too long.
On the other hand, you may well see that the metrics you have chosen are really getting worse at the end of training. Remember, it's not just that we're looking for the validation loss to get worse, but the actual metrics. Your validation loss will first get worse during training because the model gets overconfident, and only later will get worse because it is incorrectly memorizing the data. We only care in practice about the latter issue. Remember, our loss function is just something that we use to allow our optimizer to have something it can differentiate and optimize; it's not actually the thing we care about in practice.
Before the days of 1cycle training it was very common to save the model at the end of each epoch, and then select whichever model had the best accuracy out of all of the models saved in each epoch. This is known as *early stopping*. However, this is very unlikely to give you the best answer, because those epochs in the middle occur before the learning rate has had a chance to reach the small values, where it can really find the best result. Therefore, if you find that you have overfit, what you should actually do is retrain your model from scratch, and this time select a total number of epochs based on where your previous best results were found.
If you have the time to train for more epochs, you may want to instead use that time to train more parameters—that is, use a deeper architecture.
### Deeper Architectures
In general, a model with more parameters can model your data more accurately. (There are lots and lots of caveats to this generalization, and it depends on the specifics of the architectures you are using, but it is a reasonable rule of thumb for now.) For most of the architectures that we will be seeing in this book, you can create larger versions of them by simply adding more layers. However, since we want to use pretrained models, we need to make sure that we choose a number of layers that have already been pretrained for us.
This is why, in practice, architectures tend to come in a small number of variants. For instance, the ResNet architecture that we are using in this chapter comes in variants with 18, 34, 50, 101, and 152 layers, pretrained on ImageNet. A larger (more layers and parameters; sometimes described as the "capacity" of a model) version of a ResNet will always be able to give us a better training loss, but it can suffer more from overfitting, because it has more parameters to overfit with.
In general, a bigger model has the ability to better capture the real underlying relationships in your data, and also to capture and memorize the specific details of your individual images.
However, using a deeper model is going to require more GPU RAM, so you may need to lower the size of your batches to avoid an *out-of-memory error*. This happens when you try to fit too much inside your GPU and looks like:
```
Cuda runtime error: out of memory
```
You may have to restart your notebook when this happens. The way to solve it is to use a smaller batch size, which means passing smaller groups of images at any given time through your model. You can pass the batch size you want to the call creating your `DataLoaders` with `bs=`.
The other downside of deeper architectures is that they take quite a bit longer to train. One technique that can speed things up a lot is *mixed-precision training*. This refers to using less-precise numbers (*half-precision floating point*, also called *fp16*) where possible during training. As we are writing these words in early 2020, nearly all current NVIDIA GPUs support a special feature called *tensor cores* that can dramatically speed up neural network training, by 2-3x. They also require a lot less GPU memory. To enable this feature in fastai, just add `to_fp16()` after your `Learner` creation (you also need to import the module).
You can't really know ahead of time what the best architecture for your particular problem is—you need to try training some. So let's try a ResNet-50 now with mixed precision:
```
from fastai.callback.fp16 import *
learn = cnn_learner(dls, resnet50, metrics=error_rate).to_fp16()
learn.fine_tune(6, freeze_epochs=3)
```
You'll see here we've gone back to using `fine_tune`, since it's so handy! We can pass `freeze_epochs` to tell fastai how many epochs to train for while frozen. It will automatically change learning rates appropriately for most datasets.
In this case, we're not seeing a clear win from the deeper model. This is useful to remember—bigger models aren't necessarily better models for your particular case! Make sure you try small models before you start scaling up.
## Conclusion
In this chapter you learned some important practical tips, both for getting your image data ready for modeling (presizing, data block summary) and for fitting the model (learning rate finder, unfreezing, discriminative learning rates, setting the number of epochs, and using deeper architectures). Using these tools will help you to build more accurate image models, more quickly.
We also discussed cross-entropy loss. This part of the book is worth spending plenty of time on. You aren't likely to need to actually implement cross-entropy loss from scratch yourself in practice, but it's really important you understand the inputs to and output from that function, because it (or a variant of it, as we'll see in the next chapter) is used in nearly every classification model. So when you want to debug a model, or put a model in production, or improve the accuracy of a model, you're going to need to be able to look at its activations and loss, and understand what's going on, and why. You can't do that properly if you don't understand your loss function.
If cross-entropy loss hasn't "clicked" for you just yet, don't worry—you'll get there! First, go back to the last chapter and make sure you really understand `mnist_loss`. Then work gradually through the cells of the notebook for this chapter, where we step through each piece of cross-entropy loss. Make sure you understand what each calculation is doing, and why. Try creating some small tensors yourself and pass them into the functions, to see what they return.
Remember: the choices made in the implementation of cross-entropy loss are not the only possible choices that could have been made. Just like when we looked at regression we could choose between mean squared error and mean absolute difference (L1). If you have other ideas for possible functions that you think might work, feel free to give them a try in this chapter's notebook! (Fair warning though: you'll probably find that the model will be slower to train, and less accurate. That's because the gradient of cross-entropy loss is proportional to the difference between the activation and the target, so SGD always gets a nicely scaled step for the weights.)
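That last claim about the gradient is easy to verify numerically: for softmax followed by cross-entropy, the gradient with respect to the input activations is `softmax(z) - y`, which a finite-difference check confirms (the activations and target below are toy numbers of our own choosing):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(z, target):
    return -np.log(softmax(z)[target])

z = np.array([1.0, 2.0, 0.5])
target = 1                           # index of the correct class
y = np.eye(3)[target]                # one-hot target

analytic = softmax(z) - y            # the claimed gradient: activation minus target

# Finite-difference approximation of the same gradient
eps = 1e-6
numeric = np.zeros_like(z)
for i in range(3):
    zp, zm = z.copy(), z.copy()
    zp[i] += eps
    zm[i] -= eps
    numeric[i] = (cross_entropy(zp, target) - cross_entropy(zm, target)) / (2 * eps)
# analytic and numeric agree to several decimal places
```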
## Questionnaire
1. Why do we first resize to a large size on the CPU, and then to a smaller size on the GPU?
1. If you are not familiar with regular expressions, find a regular expression tutorial, and some problem sets, and complete them. Have a look on the book's website for suggestions.
1. What are the two ways in which data is most commonly provided, for most deep learning datasets?
1. Look up the documentation for `L` and try using a few of the new methods that it adds.
1. Look up the documentation for the Python `pathlib` module and try using a few methods of the `Path` class.
1. Give two examples of ways that image transformations can degrade the quality of the data.
1. What method does fastai provide to view the data in a `DataLoaders`?
1. What method does fastai provide to help you debug a `DataBlock`?
1. Should you hold off on training a model until you have thoroughly cleaned your data?
1. What are the two pieces that are combined into cross-entropy loss in PyTorch?
1. What are the two properties of activations that softmax ensures? Why is this important?
1. When might you want your activations to not have these two properties?
1. Calculate the `exp` and `softmax` columns of <<bear_softmax>> yourself (i.e., in a spreadsheet, with a calculator, or in a notebook).
1. Why can't we use `torch.where` to create a loss function for datasets where our label can have more than two categories?
1. What is the value of log(-2)? Why?
1. What are two good rules of thumb for picking a learning rate from the learning rate finder?
1. What two steps does the `fine_tune` method do?
1. In Jupyter Notebook, how do you get the source code for a method or function?
1. What are discriminative learning rates?
1. How is a Python `slice` object interpreted when passed as a learning rate to fastai?
1. Why is early stopping a poor choice when using 1cycle training?
1. What is the difference between `resnet50` and `resnet101`?
1. What does `to_fp16` do?
### Further Research
1. Find the paper by Leslie Smith that introduced the learning rate finder, and read it.
1. See if you can improve the accuracy of the classifier in this chapter. What's the best accuracy you can achieve? Look on the forums and the book's website to see what other students have achieved with this dataset, and how they did it.
```
import numpy as np
import pandas as pd
import seaborn as sns; sns.set()
from sklearn.metrics import confusion_matrix
# initiating random number
np.random.seed(11)
#### Creating the dataset
# mean and standard deviation for the x belonging to the first class
mu_x1, sigma_x1 = 0, 0.1
# constant offset to make the second distribution different from the first
x2_mu_diff = 0.35
# creating the first distribution
d1 = pd.DataFrame({'x1': np.random.normal(mu_x1, sigma_x1 , 1000),
'x2': np.random.normal(mu_x1, sigma_x1 , 1000),
'type': 0})
# creating the second distribution
d2 = pd.DataFrame({'x1': np.random.normal(mu_x1, sigma_x1 , 1000) + x2_mu_diff,
'x2': np.random.normal(mu_x1, sigma_x1 , 1000) + x2_mu_diff,
'type': 1})
data = pd.concat([d1, d2], ignore_index=True)
ax = sns.scatterplot(x="x1", y="x2", hue="type",
data=data)
class Perceptron(object):
    """
    Simple implementation of the perceptron algorithm
    """
    def __init__(self, w0=1, w1=0.1, w2=0.1):
        # weights
        self.w0 = w0  # bias
        self.w1 = w1
        self.w2 = w2

    def step_function(self, z):
        if z >= 0:
            return 1
        else:
            return 0

    def weighted_sum_inputs(self, x1, x2):
        return sum([1 * self.w0, x1 * self.w1, x2 * self.w2])

    def predict(self, x1, x2):
        """
        Uses the step function to determine the output
        """
        z = self.weighted_sum_inputs(x1, x2)
        return self.step_function(z)

    def predict_boundary(self, x):
        """
        Used to predict the boundaries of our classifier
        """
        return -(self.w1 * x + self.w0) / self.w2

    def fit(self, X, y, epochs=1, step=0.1, verbose=True):
        """
        Train the model given the dataset
        """
        errors = []
        for epoch in range(epochs):
            error = 0
            for i in range(0, len(X.index)):
                x1, x2, target = X.values[i][0], X.values[i][1], y.values[i]
                # The update is proportional to the step size and the error
                update = step * (target - self.predict(x1, x2))
                self.w1 += update * x1
                self.w2 += update * x2
                self.w0 += update
                error += int(update != 0.0)
            errors.append(error)
            if verbose:
                print('Epochs: {} - Error: {} - Errors from all epochs: {}'\
                      .format(epoch, error, errors))
# Splitting the dataset in training and test set
msk = np.random.rand(len(data)) < 0.8
# Roughly 80% of data will go in the training set
train_x, train_y = data[['x1','x2']][msk], data.type[msk]
# Everything else will go into the validation set
test_x, test_y = data[['x1','x2']][~msk], data.type[~msk]
my_perceptron = Perceptron(0.1,0.1)
my_perceptron.fit(train_x, train_y, epochs=1, step=0.005)
pred_y = test_x.apply(lambda x: my_perceptron.predict(x.x1, x.x2), axis=1)
cm = confusion_matrix(test_y, pred_y, labels=[0, 1])
print(pd.DataFrame(cm,
index=['True 0', 'True 1'],
columns=['Predicted 0', 'Predicted 1']))
my_perceptron.w0, my_perceptron.w1, my_perceptron.w2
# Adds decision boundary line to the scatterplot
ax = sns.scatterplot(x="x1", y="x2", hue="type",
data=data[~msk])
ax.autoscale(False)
x_vals = np.array(ax.get_xlim())
y_vals = my_perceptron.predict_boundary(x_vals)
ax.plot(x_vals, y_vals, '--', c="red")
import numpy as np
from keras.models import Sequential
from keras.optimizers import SGD
from keras.layers import Dense
import keras.backend as K
import tensorflow as tf
network = Sequential()
network.add(Dense(1, input_dim = 2, activation = 'sigmoid'))
network.compile(loss = "mse", optimizer = SGD(lr = 0.01))
network.fit(train_x, train_y, epochs = 1, batch_size = 1, shuffle = False)
network.get_weights()
pred_y = network.predict(test_x)
from sklearn.metrics import roc_auc_score
roc_auc_score(test_y, pred_y)
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD
from sklearn.metrics import mean_squared_error
my_perceptron = Sequential()
input_layer = Dense(1, input_dim=2, activation="sigmoid", kernel_initializer="zero")
my_perceptron.add(input_layer)
my_perceptron.compile(loss="mse", optimizer=SGD(lr=0.01))
my_perceptron.fit(train_x.values, train_y, epochs=30, batch_size=1, shuffle=True)
pred_y = my_perceptron.predict(test_x)
print('MSE on the test set:', mean_squared_error(pred_y, test_y))
```
# Ingest Image Data
When working on computer vision tasks, you may be using a common library such as OpenCV, matplotlib, or pandas. Once you move to the cloud and start your machine learning journey in Amazon SageMaker, you will encounter new challenges of loading, reading, and writing files from S3 to a SageMaker notebook; we discuss several approaches in this section. Due to the size of the data we are dealing with, copying data into the instance is not recommended, and you do not need to download data to SageMaker to train a model either. But if you want to take a look at a few samples from the image dataset and decide whether any transformation/pre-processing is needed, here are ways to do it.
### Image data: COCO (Common Objects in Context)
**COCO** is a large-scale object detection, segmentation, and captioning dataset. COCO has several features:
* Object segmentation
* Recognition in context
* Superpixel stuff segmentation
* 330K images (>200K labeled)
* 1.5 million object instances
* 80 object categories
* 91 stuff categories
* 5 captions per image
* 250,000 people with keypoints
## Set Up Notebook
```
%pip install -qU 'sagemaker>=2.15.0' 's3fs==0.4.2'
import io
import boto3
import sagemaker
import glob
import tempfile
# Get SageMaker session & default S3 bucket
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = "image_coco/coco_val/val2017"
filename = "000000086956.jpg"
```
## Download image data and write to S3
**Note**: The COCO dataset is large, so this download could take a minute or two. You can download partial files by using the [COCO API](https://github.com/cocodataset/cocoapi). We recommend going with a bigger storage instance when you start your notebook instance if you are experimenting with the full dataset.
```
# helper functions to upload data to s3
def write_to_s3(bucket, prefix, filename):
    key = "{}/{}".format(prefix, filename)
    return boto3.Session().resource("s3").Bucket(bucket).upload_file(filename, key)
# run this cell if you are in SageMaker Studio notebook
#!apt-get install unzip
!wget http://images.cocodataset.org/zips/val2017.zip -O coco_val.zip
# Uncompressing
!unzip -qU -o coco_val.zip -d coco_val
# upload the files to the S3 bucket, we only upload 20 images to S3 bucket to showcase how ingestion works
image_files = glob.glob("coco_val/val2017/*.jpg")
for filename in image_files[:20]:
    write_to_s3(bucket, prefix, filename)
```
## Method 1: Streaming data from S3 to the SageMaker instance-memory
**Use AWS compatible Python Packages with io Module**
The easiest way to access your files in S3 without copying them into your instance storage is to use pre-built packages that already implement options to access data with a specified path string. Streaming means reading the object directly into memory instead of writing it to a file. As an example, the `matplotlib` library has a pre-built function `imread` that usually takes a URL or path to an image, but here we use an S3 object and `io.BytesIO` to read the image. You can also go with the `PIL` package.
```
import matplotlib.image as mpimage
import matplotlib.pyplot as plt
key = "{}/{}".format(prefix, filename)
image_object = boto3.resource("s3").Bucket(bucket).Object(key)
image = mpimage.imread(io.BytesIO(image_object.get()["Body"].read()), "jpg")
plt.figure(0)
plt.imshow(image)
from PIL import Image
im = Image.open(image_object.get()["Body"])
plt.figure(0)
plt.imshow(im)
```
## Method 2: Using temporary files on the SageMaker instance
Another way to work with your usual methods is to create temporary files on your SageMaker instance and feed them into the standard methods as a file path. The `tempfile` module provides automatic cleanup: the temporary files it creates are deleted as soon as they are closed.
```
tmp = tempfile.NamedTemporaryFile()
with open(tmp.name, "wb") as f:
    image_object.download_fileobj(f)
    f.seek(0, 2)  # the file is downloaded lazily, so seek to the end to force the download to complete
img = plt.imread(tmp.name)
print(img.shape)
plt.imshow(img)
```
## Method 3: Use AWS native methods
#### s3fs
[S3Fs](https://s3fs.readthedocs.io/en/latest/) is a Pythonic file interface to S3. It builds on top of botocore. The top-level class S3FileSystem holds connection information and allows typical file-system style operations like cp, mv, ls, du, glob, etc., as well as put/get of local files to/from S3.
```
import s3fs
fs = s3fs.S3FileSystem()
data_s3fs_location = "s3://{}/{}/".format(bucket, prefix)
# To List first file in your accessible bucket
fs.ls(data_s3fs_location)[0]
# open it directly with s3fs
data_s3fs_location = "s3://{}/{}/{}".format(bucket, prefix, filename) # S3 URL
with fs.open(data_s3fs_location) as f:
    display(Image.open(f))
```
### Citation
Lin, Tsung-Yi, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollár. "Microsoft COCO: Common Objects in Context." (2014). arXiv:1405.0312.
# Algorithms: linear classifier

This work by Jephian Lin is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
## Linear classifier
The concept of a linear classifier
is to find a line (or hyperplane)
that separates two groups of data points
(each data point has label either $1$ or $-1$).
The normal vector of this line (or hyperplane)
is then called the **linear classifier**.
The line (or hyperplane)
separates the space into two parts.
When a new data point is given,
we may check which part this point belongs to
and then predict the label.

### Linear classifier and inner product
Let ${\bf v}$ be a vector.
Then ${\bf v}$ separates the space into two parts,
$H_+ = \{{\bf p}\in\mathbb{R}^d: {\bf p}\cdot {\bf v} > 0\}$
and
$H_- = \{{\bf p}\in\mathbb{R}^d: {\bf p}\cdot {\bf v} < 0\}$.
Let $\{{\bf x}_i\}_{i=1}^N$ be a set of $N$ points
with each point labeled as $1$ or $-1$.
Let $y_i$ be the label of ${\bf x}_i$.
We say ${\bf v}$ is a **linear classifier** if
* ${\bf x}_i\cdot{\bf v} > 0 \iff y_i = 1$ for all $i$
* ${\bf x}_i\cdot{\bf v} < 0 \iff y_i = -1$ for all $i$
### `normal` being a linear classifier
Let `normal` be a vector.
Let `X` be a dataset of shape `(N, d)`.
(`X` has `N` samples with `d` features.)
Let `y` be the label of shape `(N, )`.
Through Numpy,
whether `normal` is a linear classifier
can be checked by
`np.all( np.sign(np.dot(X, normal))== y )`.
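For instance, the check can be run end-to-end on a tiny hand-made dataset (a minimal sketch; the points, labels, and candidate vector are made up for illustration):

```Python
import numpy as np

# Four points, two on each side of the line x + y = 0
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [-1.0, -2.0],
              [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])

normal = np.array([1.0, 1.0])  # candidate classifier
print(np.all(np.sign(np.dot(X, normal)) == y))  # True
```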
### `normal` not being a linear classifier
If `normal` is not a linear classifier,
then there must be a data point `X[i]` such that
`np.dot(X[i], normal)` and `y[i]` have opposite signs.
Equivalently, `np.dot(X[i], normal) * y[i] < 0`.
In this case, we say
`normal` is not a linear classifier
witnessed by `X[i]` and `y[i]`.
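Continuing with made-up data, a witness can be found with exactly this condition:

```Python
import numpy as np

X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [-1.0, -2.0]])
y = np.array([1, -1, -1])  # the second label disagrees with normal below

normal = np.array([1.0, 1.0])
# any i with np.dot(X[i], normal) * y[i] < 0 is a witness
witnesses = [i for i in range(len(y)) if np.dot(X[i], normal) * y[i] < 0]
print(witnesses)  # [1]
```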
### Algorithm
Let `X` be a dataset.
Each data point is labeled by `1` or `-1`.
The label is recorded by `y`.
The goal is to find a normal vector `normal`
such that `np.sign(np.dot(X, normal))` is `y` (if possible).
1. Set `normal` as the zero vector (or any vector).
2. If `normal` is not a linear classifier witnessed by `X[i]` and `y[i]`,
then update `normal` by
* `normal -= X[i]` if `np.dot(X[i], normal) > 0` and `y[i] < 0`
* `normal += X[i]` if `np.dot(X[i], normal) < 0` and `y[i] > 0`
3. Repeat Step 2 until `normal` becomes a linear classifier.
Note:
The update process can be simplified into one line below.
```Python
normal = normal + y[i]*X[i]
```
### Pseudocode
**Input**:
a dataset `X` of shape `(N, d)` and
the label `y` of shape `(N,)`
(the label is either `1` or `-1`)
**Output**:
an array `normal = [c1 ... cd]`
such that `np.sign(np.dot(X, normal))` is `y`
```Python
normal = zero vector of dimension d (or any vector)
again = True
while again:
again = False
if `normal` is not a linear classifier
(witnessed by X[i] and y[i]):
update normal
again = True
```
##### Exercise
Given `d = 2`,
create a zero array `normal` of shape `(d, )`.
```
### your answer here
```
##### Exercise
Let `X = np.random.randn(100, 2)`,
`normal = np.random.randn(2)`, and
`y = np.random.choice([-1,1], 100)`.
Write a `for` loop to find every pair `X[i]` and `y[i]`
that witness `normal` not being a linear classifier.
```
### your answer here
```
##### Exercise
Following the setting of the previous exercise,
sometimes you just want to find
one pair of `X[i]` and `y[i]`
that witness `normal` not being a linear classifier.
Use `break` to stop the `for` loop when you find one.
```
### your answer here
```
##### Exercise
Obtain `X` and `y` by the code below.
```Python
X = np.random.randn(100, 2)
y = np.sign(np.dot(X, np.random.randn(2)))
```
```
### your answer here
```
##### Exercise
Write a function `linear_classifier(X, y)`
that returns a linear classifier `normal`.
```
### your answer here
```
##### Exercise
Let `X = np.random.randn(100, 2)`,
`normal = np.random.randn(2)`, and let `y` be labels generated as in the previous exercise.
Compute the accuracy of `normal`.
That is, calculate the number of pairs `X[i]` and `y[i]`
such that `np.dot(X[i], normal)` and `y[i]` have the same sign,
and divide this number by the total number of samples
to get the accuracy.
```
### your answer here
```
##### Exercise
Add a new keyword `acc` to your `linear_classifier` function.
When `acc` is `True`,
print the current accuracy
whenever `normal` is updated.
```
### your answer here
```
##### Exercise
Your `linear_classifier` can be very powerful.
Obtain the required settings below.
```Python
X = np.random.randn(100, 2)
y = np.sign(np.dot(X, np.random.randn(2)) + 0.1)
plt.scatter(X[:,0], X[:,1], c=y, cmap='viridis')
df = pd.DataFrame(X, columns=['x1', 'x2'])
df['1'] = 1
```
It is likely that `linear_classifier(X, y, acc=True)` never stops.
But `linear_classifier(df.values, y, acc=True)` will work.
Why? What is the meaning of the output?
```
### your answer here
```
##### Exercise
Your `linear_classifier` can be very powerful.
Obtain the required settings below.
```Python
X = np.random.randn(100, 2)
y = np.sign(np.sum(X**2, axis=1) - 0.5)
plt.scatter(X[:,0], X[:,1], c=y, cmap='viridis')
df = pd.DataFrame(X, columns=['x1', 'x2'])
df['x1^2'] = df['x1']**2
df['x2^2'] = df['x2']**2
df['1'] = 1
```
It is likely that `linear_classifier(X, y, acc=True)` never stops.
But `linear_classifier(df.values, y, acc=True)` will work.
Why? What is the meaning of the output?
```
### your answer here
```
##### Sample code for a linear classifier passing through the origin
```
def linear_classifier(X, y, acc=False):
"""
Input:
X: array of shape (N, d) with N samples and d features
y: array of shape (N,); labels of points (-1 or 1)
Output:
an array normal = [c1, ..., cd] of shape (d,)
such that np.sign(np.dot(X, ans)) is y.
"""
N,d = X.shape
normal = np.zeros(d, dtype=X.dtype)
again = True
while again:
again = False
for i in range(N):
row = X[i]
label = y[i]
if np.dot(row, normal) * label <= 0:
normal += label * row
again = True
break
if acc:
print((np.sign(np.dot(X, normal)) == y).mean())
return normal
N = 100
d = 2
X = np.random.randn(N, d)
y = np.sign(np.dot(X, np.random.randn(d)))
plt.scatter(X[:,0], X[:,1], c=y, cmap='viridis')
normal = linear_classifier(X, y, acc=True)
def draw_classifier_origin(X, y, normal):
"""
Input:
X, y: the X, y to be used for linear_classifier
normal: a normal vector
Output:
an illustration of the classifier
This function works only when X.shape[1] == 2.
"""
fig = plt.figure(figsize=(5,5))
ax = plt.axes()
### draw data points
ax.scatter(X[:,0], X[:,1], c=y, cmap='viridis')
### set boundary
xleft, xright = X[:,0].min(), X[:,0].max()
yleft, yright = X[:,1].min(), X[:,1].max()
xwidth = xright - xleft
ywidth = yright - yleft
width = max([xwidth, ywidth])
xleft, xright = xleft - (width-xwidth)/2, xright + (width-xwidth)/2
yleft, yright = yleft - (width-ywidth)/2, yright + (width-ywidth)/2
ax.set_xlim(xleft, xright)
ax.set_ylim(yleft, yright)
### draw normal vector and the line
length = np.sqrt(np.sum(normal ** 2))
c1,c2 = normal / length * (0.25*width)
ax.arrow(0, 0, c1, c2, color='red', head_width=0.05*width)
ax.plot([-4*width*c2, 4*width*c2], [4*width*c1, -4*width*c1], color='red')
# fig.savefig('linear_classifier.png')
draw_classifier_origin(X, y, normal)
```
# 1. Installing twint
Installing twint will also install the related packages (numpy, etc.) that twint needs to function properly.
### Upgrading twint
For those who already have twint and wish to upgrade it, because certain functionality is not working or for any other reason, run the uninstall command first.
Otherwise, for a fresh install, you may skip the first cell below and start from the second.
```
!pip3 uninstall twint --yes
!pip3 install --user --upgrade git+https://github.com/twintproject/twint.git@origin/master #egg=twint
!pip3 install nest_asyncio
!pip3 install python-dotenv
!pip freeze > requirements.txt
```
# 2. Import Packages
Importing modules and essential packages for running this notebook :)
```
import os, sys, time, csv, math
# Basic utilities for processing data in python
import pandas as pd
import numpy as np
# Open source Twitter intelligence tool
import twint
# Twitter authentication client
from twitterclient import get_twitter_client
# Twitter API client
import tweepy
from tweepy import Cursor
# Removes runtimeError in jupyterlab -> This event loop is already running
import nest_asyncio
nest_asyncio.apply()
```
# 3. Find network actors using twint
Use the cell below to gather information/data about different search queries, hashtags, etc. to identify which actors and conversations you wish to study.
All data from this section will be stored in the '/search queries' folder.
```
%%capture
# Configure twint query parameters to find all users talking about FSL
qp = twint.Config()
qp.Search = "French as a second language"
qp.Output = "data/search queries/fsl-test.csv"
qp.Store_csv = True
# Run twint on configured qp
twint.run.Search(qp)
print("Running search query")
```
## Other ways to find network actors (Domain knowledge)
A group of researchers with domain knowledge about FSL communities have shared some of their findings. From these I have discovered new users who are quite relevant to our analysis and will manually add them to our list.
### Frequently mentioned users in posts discussing FSL (Research conducted by Liam Bekirsky)
```
liams_userlist = ['tdsb_fsl', 'tdsbvs', 'Natasha_Faroogh', 'scholasticCDA', 'OEEO_OCT', 'OCT_OEEO', 'sarnott_uottawa', 'EducLang', 'mimi_masson', 'MonsieurSteve1', 'CASLT_ACPLS', 'CPFontario', 'sdvlil', 'CecileRobertso5', 'Alex076', 'HDSBFSL', 'aliceF11', 'transformingfsl', 'ginagkozak1', 'edu_chantal', 'MmeCoulson', 'EricKeunne', 'FrenchStreetCa', 'CPFNational', 'stephendavisSK', 'mmecarr', 'Cinefranco', 'broadwayprofe', 'MlleMouland', 'booklamations', 'ClasseMHussain', 'SylviaJudek', 'MmeFagnan1', 'joel_055', 'RECRAE_RSEKN', 'KNAER_RSEKN', 'JoseeLeBouthill', 'cabirees', 'eostaffdevnet', 'DeniseAndre613']
```
# 4. Build network
In this step we identify all the actors to include in the network we are going to study. To do this, we extract the usernames from all the search results saved in the /search queries folder.
```
# variables
MAX_FRIENDS = 15000
client = get_twitter_client()
max_pages = math.ceil(MAX_FRIENDS/ 5000)
#setup paginate function
def paginate(items, n):
"""
Generate n-sized chunks from Items
"""
for i in range(0, len(items), n):
yield items[i:i+n]
#Open csv file to extract column names
with open('data/search queries/fsl-test.csv') as csv_file:
csv_reader = csv.reader(csv_file)
columnNames = []
for row in csv_reader:
columnNames.append(row)
#break after reading first row
break
columnNames = columnNames[0]
# Todo: Add code to extract list of username from multiple csv files
data = pd.read_csv('data/search queries/fsl-test.csv', names=columnNames)
usernames = data.username.tolist()
print(usernames[0])
```
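As a quick standalone check, the `paginate` helper defined above chunks a list like so (no Twitter setup is needed to run this):

```Python
def paginate(items, n):
    """Generate n-sized chunks from items."""
    for i in range(0, len(items), n):
        yield items[i:i+n]

# five ids in chunks of two
print(list(paginate([1, 2, 3, 4, 5], 2)))  # [[1, 2], [3, 4], [5]]
```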
## Username trimming
Because there are simply too many tweets that meet our keyword criteria, we introduce an additional threshold a username must pass before we consider it.
Most usernames, if not trimmed out, are irrelevant to our search and may appear in our dataset only by chance rather than because they regularly participate in the FSL discussion.
Our cutoff is therefore a minimum number of participations in the FSL discussion before we consider expanding into a user's network.
```
# How many users are in our list
print(len(usernames))
# Find duplicates
import collections
dupli = [item for item, count in collections.Counter(usernames).items() if count > 1]
#print(len(dupli))
#print(len(liams_userlist))
# For each duplicate find out how many times it actually occurs in our usernames list - this will identify "active users in the FSL discussion"
shortlist = "data/usershortlist-original.csv"
fieldnames = ['Username', 'Count']
with open(shortlist, 'a') as f:
    csv_writer = csv.DictWriter(f, fieldnames=fieldnames)
    for u in dupli:
        count = usernames.count(u)
        #print(u, count)
        csv_writer.writerow({'Username': u, 'Count': count})
```
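The duplicate-finding and counting above can also be done in a single pass with `collections.Counter`; here is a hedged sketch (the sample usernames and output path are made up, but the `Username`/`Count` columns match the code above):

```Python
import collections
import csv

usernames = ['alice', 'bob', 'alice', 'carol', 'alice', 'bob']

# count every username once, then keep only those appearing more than once
counts = collections.Counter(usernames)
shortlist = {u: c for u, c in counts.items() if c > 1}
print(shortlist)  # {'alice': 3, 'bob': 2}

# write it out in the same CSV format as above (illustrative path)
with open('usershortlist-demo.csv', 'w') as f:
    writer = csv.DictWriter(f, fieldnames=['Username', 'Count'])
    writer.writeheader()
    for u, c in shortlist.items():
        writer.writerow({'Username': u, 'Count': c})
```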
### Semi-automatic trimming of shortlist dataset
Manually trimmed dataset of duplicates
- Kept usernames with 40+ occurrences of the queried phrase
- Kept users whom I recognized as part of the network (regardless of their occurrence count)
Next, read the semi-automated shortlist of users:
```
# read shortlisted users
shortlist_attributes = ['Username', 'Count']
shortlist_file = pd.read_csv('data/usershortlist.csv', names=shortlist_attributes)
shortlisted = shortlist_file.Username.tolist()
# Append Liam's list and shortlisted list
usernames = []
print(len(shortlisted))
usernames.extend(shortlisted)
print(len(liams_userlist))
usernames.extend(liams_userlist)
print(len(usernames))
# Ensure you didn't introduce any further duplicates. We don't want to do double scrape work :)
def checkIfDuplicates_1(listOfElems):
''' Check if given list contains any duplicates '''
if len(listOfElems) == len(set(listOfElems)):
return False
else:
return True
checkIfDuplicates_1(usernames)
```
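If `checkIfDuplicates_1` does report duplicates, an order-preserving dedup (a small generic sketch, not from the original notebook) avoids re-scraping the same user twice:

```Python
def dedup(items):
    """Remove duplicates while keeping the first-seen order."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

print(dedup(['a', 'b', 'a', 'c', 'b']))  # ['a', 'b', 'c']
```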
## Creating Edges
In the next part I will create directed edges for our network of users. The edges are directed because we are going to represent the follower/following relationship for each user in our general group of users.
```
# make sure you are not repeating already scraped users. You will waste time
# for this, check your network-progress list for users you already have scraped and remove them from usernames
# load updated username list before you continue to build your network
progress_params = ['explored_users', 'skipped_users']
network_progress = pd.read_csv('data/network/network-progress2.csv', names=progress_params)
explored_users = network_progress.explored_users.tolist()
skipped_users = network_progress.skipped_users.tolist()
# ignoring the nan's and column names, len(explored) + len(skipped) = len(usernames)
unames = [ x for x in usernames if x not in (explored_users + skipped_users)]
#loop through all users and write their followers to csv
try:
for user in unames:
fname = "data/network/network-edges2.csv"
fieldnames = ['Source', 'Target']
with open(fname, 'a') as f:
csv_writer = csv.DictWriter(f, fieldnames=fieldnames)
# get a user's followers list
for followers in Cursor(client.followers_ids, screen_name=user).pages(max_pages):
for chunk in paginate(followers, 100):
userFollowers = client.lookup_users(user_ids=chunk)
for follower in userFollowers:
#print(len(users))
#print(follower.screen_name)
csv_writer.writerow({'Source': follower.screen_name, 'Target': user})
# f.write(json.dumps(user._json)+"\n")
if len(followers) == 5000:
print("More results available. Sleeping for 60 seconds to avoid rate limit")
time.sleep(60)
# get a user's following list
for friends in Cursor(client.friends_ids, screen_name=user).pages(max_pages):
for chunk in paginate(friends, 100):
userFollowing = client.lookup_users(user_ids=chunk)
for followed in userFollowing:
csv_writer.writerow({'Source': user, 'Target': followed.screen_name})
if len(friends) == 5000:
print("More results available. Sleeping for 60 seconds to avoid rate limit")
time.sleep(60)
f.close()
# Update network-progress
with open('data/network/network-progress2.csv', 'a') as nprog:
nprog_updater = csv.DictWriter(nprog, fieldnames=progress_params)
nprog_updater.writerow({'explored_users': user, 'skipped_users': ' '})
nprog.close()
except tweepy.TweepError as err:
print("Tweepy error occured: {}. Skipped adding user: {}".format(err, user))
with open('data/network/network-progress2.csv', 'a') as nprog:
nprog_updater = csv.DictWriter(nprog, fieldnames=progress_params)
nprog_updater.writerow({'explored_users': ' ', 'skipped_users': user})
nprog.close()
pass
```
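The manual `time.sleep(60)` calls above are one way to dodge rate limits; a more general pattern is exponential backoff around any flaky call. This is a generic sketch, not tied to tweepy (tweepy itself can also be constructed with `wait_on_rate_limit=True` to sleep automatically):

```Python
import time

def with_backoff(fn, retries=3, base_delay=1.0, exceptions=(Exception,)):
    """Call fn(), sleeping base_delay * 2**attempt between failed attempts."""
    for attempt in range(retries):
        try:
            return fn()
        except exceptions:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# demo: a call that fails twice before succeeding
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_backoff(flaky, retries=5, base_delay=0.01))  # ok
```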
## Creating Nodes
In the next part I will create nodes from our network actors.
### What I did
Used Excel to find all unique names from my edge list.
### Optional
You can instead add code to find unique usernames, and you can also include specific node attributes to enhance the network visualization in Gephi.
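The Excel step can also be scripted; a hedged sketch below builds the node list from the edge list (the inline DataFrame stands in for reading `network-edges2.csv`, and the `Source`/`Target` columns match the edge-writing code above):

```Python
import pandas as pd

# in practice: edges = pd.read_csv('data/network/network-edges2.csv')
edges = pd.DataFrame({'Source': ['a', 'b', 'a'],
                      'Target': ['b', 'c', 'c']})

# every name appearing as either endpoint, deduplicated
nodes = pd.unique(pd.concat([edges['Source'], edges['Target']]))
print(sorted(nodes))  # ['a', 'b', 'c']
```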
# 5. Send data to Gephi
In this step, download Gephi, an open-source network analysis software.
Next, watch a few YouTube videos to understand how to use Gephi to upload your network nodes and edges.
Learn how to visualize your network in Gephi.

Learn how to run algorithms such as PageRank, Eigenvector centrality and clustering algorithms on your network
Here's an example of community detection based on PageRank

# 6. Sentiment Analysis
# 7. Infographics
# Machine Learning Engineer Nanodegree
## Supervised Learning
## Project 2: Building a Student Intervention System
Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.
### Question 1 - Classification vs. Regression
*Your goal for this project is to identify students who might need early intervention before they fail to graduate. Which type of supervised learning problem is this, classification or regression? Why?*
**Answer: **
<br>
In my opinion, it is clearly a classification problem, because the answer is yes or no.
+ Classification divides things into discrete labels. For example, given a picture of a person, say whether it shows a man or a woman.
+ Regression fits a function to predict a continuous value. For example, from many examples we may learn that $y = mx + b$ fits the data well; given a new $x_1$, the prediction is $mx_1 + b$.
## Exploring the Data
Run the code cell below to load necessary Python libraries and load the student data. Note that the last column from this dataset, `'passed'`, will be our target label (whether the student graduated or didn't graduate). All other columns are features about each student.
```
# Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
```
### Implementation: Data Exploration
Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following:
- The total number of students, `n_students`.
- The total number of features for each student, `n_features`.
- The number of those students who passed, `n_passed`.
- The number of those students who failed, `n_failed`.
- The graduation rate of the class, `grad_rate`, in percent (%).
```
# TODO: Calculate number of students
n_students = student_data.shape[0]
# TODO: Calculate number of features
n_features = student_data.shape[1] - 1  # all columns except the 'passed' target
# TODO: Calculate passing students
n_passed = np.count_nonzero(student_data['passed']=='yes')
# TODO: Calculate failing students
n_failed = np.count_nonzero(student_data['passed']=='no')
# TODO: Calculate graduation rate
grad_rate = float(n_passed) * 100 / n_students  # use float() to avoid integer division
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
```
## Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
### Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Run the code cell below to separate the student data into feature and target columns to see if any features are non-numeric.
```
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# Show the feature information by printing the first five rows
print "\nFeature values:"
print X_all.head()
```
### Preprocess Feature Columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply `yes`/`no`, e.g. `internet`. These can be reasonably converted into `1`/`0` (binary) values.
Other columns, like `Mjob` and `Fjob`, have more than two values, and are known as _categorical variables_. The recommended way to handle such a column is to create as many columns as possible values (e.g. `Fjob_teacher`, `Fjob_other`, `Fjob_services`, etc.), and assign a `1` to one of them and `0` to all others.
These generated columns are sometimes called _dummy variables_, and we will use the [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummies#pandas.get_dummies) function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section.
```
def preprocess_features(X):
''' Preprocesses the student data and converts non-numeric binary variables into
binary (0/1) variables. Converts categorical variables into dummy variables. '''
# Initialize new output DataFrame
output = pd.DataFrame(index = X.index)
# Investigate each feature column for the data
for col, col_data in X.iteritems():
# If data type is non-numeric, replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# If data type is categorical, convert to dummy variables
if col_data.dtype == object:
# Example: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the revised columns
output = output.join(col_data)
return output
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
```
### Implementation: Training and Testing Data Split
So far, we have converted all _categorical_ features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, you will need to implement the following:
- Randomly shuffle and split the data (`X_all`, `y_all`) into training and testing subsets.
- Use 300 training points (approximately 75%) and 95 testing points (approximately 25%).
- Set a `random_state` for the function(s) you use, if provided.
- Store the results in `X_train`, `X_test`, `y_train`, and `y_test`.
```
# TODO: Import any additional functionality you may need here
from sklearn.cross_validation import train_test_split
# TODO: Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
ts=float(num_test)/(num_train+num_test)
# TODO: Shuffle and split the dataset into the number of training and testing points above
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, test_size=ts, random_state=20)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
#print y_train[100:101]
```
## Training and Evaluating Models
In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in `scikit-learn`. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.
**The following supervised learning models are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression
### Question 2 - Model Application
*List three supervised learning models that are appropriate for this problem. For each model chosen*
- Describe one real-world application in industry where the model can be applied. *(You may need to do a small bit of research for this — give references!)*
- What are the strengths of the model; when does it perform well?
- What are the weaknesses of the model; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?
**Answer: **
### SVM
+ We may use SVM to predict whether a student can pass the test based on many features :)
+ It is effective in high dimensional spaces, so it may perform well when we have many features.
+ When the number of features is much greater than the number of samples, it may perform badly.
+ Since this data has many features, it is worth considering.
<br>
### GaussianNB
+ We may use GaussianNB to classify whether an email is spam.
+ It can perform well when we need to calculate probabilities to classify, and it has the advantage of keeping the model simple.
+ It is truly naive in that it treats every feature as independent. If the features are not independent, it may perform badly.
+ When the problem looks like NLP (e.g. text classification), I may use it.
<br>
### KNN
+ We can use KNN to predict the price of a house.
+ It has low cost if we need to retrain. When the data have many overlapping features it may perform well.
+ It is a lazy learner, and if the data is not balanced it may perform badly.
+ If the sample is large and the features look overlapping, I may use it.
### Setup
Run the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows:
- `train_classifier` - takes as input a classifier and training data and fits the classifier to the data.
- `predict_labels` - takes as input a fit classifier, features, and a target labeling and makes predictions using the F<sub>1</sub> score.
- `train_predict` - takes as input a classifier, and the training and testing data, and performs `train_classifier` and `predict_labels`.
- This function will report the F<sub>1</sub> score for both the training and testing data separately.
```
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifer based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the results of prediction for both training and testing
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
```
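Since the helpers above report the F<sub>1</sub> score, recall what it measures: the harmonic mean of precision and recall on the positive class (`'yes'` here). A quick check on made-up labels:

```Python
from sklearn.metrics import f1_score

y_true = ['yes', 'yes', 'no', 'yes', 'no']
y_pred = ['yes', 'no', 'no', 'yes', 'yes']

# TP = 2, FP = 1, FN = 1 -> precision = recall = 2/3 -> F1 = 2/3
print(f1_score(y_true, y_pred, pos_label='yes'))
```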
### Implementation: Model Performance Metrics
With the predefined functions above, you will now import the three supervised learning models of your choice and run the `train_predict` function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below — 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in `clf_A`, `clf_B`, and `clf_C`.
- Use a `random_state` for each model you use, if provided.
- **Note:** Use the default settings for each model — you will tune one specific model in a later section.
- Create the different training set sizes to be used to train each model.
- *Do not reshuffle and resplit the data! The new training points should be drawn from `X_train` and `y_train`.*
- Fit each model with each training set size and make predictions on the test set (9 in total).
**Note:** Three tables are provided after the following code cell which can be used to store your results.
```
# TODO: Import the three supervised learning models from sklearn
#from sklearn.cross_validation import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm
#import random
clf_A = GaussianNB()
clf_B = KNeighborsClassifier(n_neighbors=5)
clf_C = svm.SVC(random_state=3)
X_train_100=X_train[:100]
y_train_100=y_train[:100]
X_train_200=X_train[:200]
y_train_200=y_train[:200]
X_train_300=X_train[:300]
y_train_300=y_train[:300]
#We should not split again!!!
#X_train_100, X_test, y_train_100, y_test= train_test_split(x100 , y100, test_size=0.2, random_state=20)
#X_train_200, X_test, y_train_200, y_test= train_test_split(x200 , y200, test_size=0.2, random_state=20)
#X_train_300, X_test, y_train_300, y_test= train_test_split(x300 , y300, test_size=0.2, random_state=20)
train_predict(clf_A, X_train_100, y_train_100, X_test, y_test)
train_predict(clf_A, X_train_200, y_train_200, X_test, y_test)
train_predict(clf_A, X_train_300, y_train_300, X_test, y_test)
train_predict(clf_B, X_train_100, y_train_100, X_test, y_test)
train_predict(clf_B, X_train_200, y_train_200, X_test, y_test)
train_predict(clf_B, X_train_300, y_train_300, X_test, y_test)
train_predict(clf_C, X_train_100, y_train_100, X_test, y_test)
train_predict(clf_C, X_train_200, y_train_200, X_test, y_test)
train_predict(clf_C, X_train_300, y_train_300, X_test, y_test)
```
### Tabular Results
Edit the cell below to see how a table can be designed in [Markdown](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#tables). You can record your results from above in the tables provided.
**Classifier 1 - GaussianNB**
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0063 | 0.0019 | 0.5637 | 0.3956 |
| 200 | 0.0039 | 0.0015 | 0.7942 | 0.6875 |
| 300 | 0.0029 | 0.0044 | 0.7677 | 0.7302 |
**Classifier 2 - KNN**
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0012 | 0.0047 | 0.8750 | 0.7755 |
| 200 | 0.0008 | 0.0058 | 0.8534 | 0.8163 |
| 300 | 0.0011 | 0.0162 | 0.8610 | 0.8143 |
**Classifier 3 - SVM**
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0025 | 0.0020 | 0.8889 | 0.8077 |
| 200 | 0.0039 | 0.0036 | 0.8669 | 0.8205 |
| 300 | 0.0088 | 0.0074 | 0.8565 | 0.8205 |
## Choosing the Best Model
In this final section, you will choose from the three supervised learning models the *best* model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score.
### Question 3 - Choosing the Best Model
*Based on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?*
**Answer: **
<br>
In my opinion, I would choose SVM as the best model. As the table shows, it has the highest F1 score on the test set, and we generally prefer the model with the best predictive performance.
### Available Data
I would pick GaussianNB, because its F1 score keeps improving as the amount of training data grows.
### Limited Resources
I would pick SVM, because it already achieves the highest test score with only 100 training samples.
### Cost
I would pick KNN, because it requires the least training time.
### Performance
I would pick SVM, because it has the highest test score overall.
### Question 4 - Model in Layman's Terms
*In one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.*
**Answer: **
<br>
Imagine a farm with chickens and wolves: to keep the chickens from being eaten, you need to build a wall.
<br>
How do you build that wall?
<br>
Obviously, it has to divide the two groups onto separate sides.
<br>
An SVM works just like that wall.
<br>
During training we try out many candidate walls and decide where the final one should stand; once the chickens and wolves are cleanly separated, the wall is correct and the model has been trained successfully.
<br>
At prediction time, if a wolf ends up among the chickens, the wall is not doing its job, and neither is the model.
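The analogy can be sketched in code with a toy two-group dataset; the points below are made up purely for illustration and are not the student data used elsewhere in this notebook:

```python
# A toy illustration of the "wall": a linear SVM separating two 2-D groups.
from sklearn.svm import SVC

chickens = [[0, 0], [1, 0], [0, 1], [1, 1]]   # label 0
wolves   = [[4, 4], [5, 4], [4, 5], [5, 5]]   # label 1
X = chickens + wolves
y = [0] * 4 + [1] * 4

clf = SVC(kernel="linear")  # a straight "wall"
clf.fit(X, y)

# The wall generalizes: new points near each group fall on the right side
print(clf.predict([[0.5, 0.5], [4.5, 4.5]]))  # [0 1]
```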
### Implementation: Model Tuning
Fine tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import [`sklearn.grid_search.GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html).
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: `parameters = {'parameter' : [list of values]}`.
- Initialize the classifier you've chosen and store it in `clf`.
- Create the F<sub>1</sub> scoring function using `make_scorer` and store it in `f1_scorer`.
- Set the `pos_label` parameter to the correct value!
- Perform grid search on the classifier `clf` using `f1_scorer` as the scoring method, and store it in `grid_obj`.
- Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_obj`.
```
# TODO: Import 'GridSearchCV' and 'make_scorer'
from sklearn import svm
from sklearn.metrics import make_scorer, f1_score
# GridSearchCV lives in sklearn.model_selection (sklearn.grid_search in older releases)
from sklearn.model_selection import GridSearchCV
# TODO: Create the parameters list you wish to tune
parameters = {'kernel': ['rbf', 'sigmoid', 'poly'],
              'C': [11, 12, 13, 14, 15, 16],
              'gamma': [0.0003, 0.001, 0.003, 0.01]}
# TODO: Initialize the classifier
clf = svm.SVC()
# TODO: Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(f1_score, pos_label='yes')
# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(clf, param_grid=parameters, scoring=f1_scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj = grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
print(grid_obj.best_estimator_.get_params())
# Report the final F1 score for training and testing after parameter tuning
print("Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train)))
print("Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test)))
```
### Question 5 - Final F<sub>1</sub> Score
*What is the final model's F<sub>1</sub> score for training and testing? How does that score compare to the untuned model?*
**Answer: **
<br>
The final F1 score is 0.8230 for training and 0.8050 for testing.
<br>
Honestly, I spent a lot of time adjusting parameters, and learned first-hand that parameter tuning is where most of the time in machine learning goes. Unfortunately, the tuned score did not beat the untuned model. The resulting parameters are:
+ 'kernel': 'poly',
+ 'C': 13,
+ 'verbose': False,
+ 'probability': False,
+ 'degree': 3,
+ 'shrinking': True,
+ 'max_iter': -1,
+ 'decision_function_shape': None,
+ 'random_state': None,
+ 'tol': 0.001,
+ 'cache_size': 200,
+ 'coef0': 0.0,
+ 'gamma': 0.001,
+ 'class_weight': None
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
# Exp 90 analysis
See `./informercial/Makefile` for experimental
details.
```
import os
import numpy as np
from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from infomercial.exp import meta_bandit
from infomercial.local_gym import bandit
from infomercial.exp.meta_bandit import load_checkpoint
import gym
def plot_meta(env_name, result):
"""Plots!"""
# episodes, actions, scores_E, scores_R, values_E, values_R, ties, policies
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_E = result["scores_E"]
scores_R = result["scores_R"]
values_R = result["values_R"]
values_E = result["values_E"]
ties = result["ties"]
policies = result["policies"]
# -
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best, np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# Policy
policies = np.asarray(policies)
episodes = np.asarray(episodes)
plt.subplot(grid[1, 0])
m = policies == 0
plt.scatter(episodes[m], policies[m], alpha=.4, s=2, label="$\pi_E$", color="purple")
m = policies == 1
plt.scatter(episodes[m], policies[m], alpha=.4, s=2, label="$\pi_R$", color="grey")
plt.ylim(-.1, 1+.1)
plt.ylabel("Controlling\npolicy")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# score
plt.subplot(grid[2, 0])
plt.scatter(episodes, scores_E, color="purple", alpha=0.4, s=2, label="E")
plt.plot(episodes, scores_E, color="purple", alpha=0.4)
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.plot(episodes, scores_R, color="grey", alpha=0.4)
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Score")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[3, 0])
plt.scatter(episodes, values_E, color="purple", alpha=0.4, s=2, label="$Q_E$")
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.plot(episodes, np.repeat(tie_threshold, np.max(episodes)+1),
color="violet", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Ties
plt.subplot(grid[4, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
# Ties
plt.subplot(grid[5, 0])
plt.scatter(episodes, ties, color="black", alpha=.5, s=2, label="$\pi_{tie}$ : 1\n $\pi_\pi$ : 0")
plt.ylim(-.1, 1+.1)
plt.ylabel("Ties index")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
def plot_epsilon(env_name, result):
"""Plots!"""
# episodes, actions, scores_E, scores_R, values_E, values_R, ties, policies
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_R = result["scores_R"]
values_R = result["values_R"]
epsilons = result["epsilons"]
# -
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best, np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# score
plt.subplot(grid[1, 0])
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.ylabel("Score")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[2, 0])
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# best
plt.subplot(grid[3, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
# Decay
plt.subplot(grid[4, 0])
plt.scatter(episodes, epsilons, color="black", alpha=.5, s=2)
plt.ylabel("$\epsilon_R$")
plt.xlabel("Episode")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
def plot_critic(critic_name, env_name, result):
# -
env = gym.make(env_name)
best = env.best
# Data
critic = result[critic_name]
arms = list(critic.keys())
values = list(critic.values())
# Plotz
fig = plt.figure(figsize=(8, 3))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0])
plt.scatter(arms, values, color="black", alpha=.5, s=30)
plt.plot([best]*10, np.linspace(min(values), max(values), 10), color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Arm")
```
# Load and process data
```
data_path ="/Users/qualia/Code/infomercial/data/"
exp_name = "exp90"
sorted_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_sorted.pkl"))
# print(sorted_params.keys())
best_params = sorted_params[0]
tie_threshold=best_params['tie_threshold']
sorted_params
```
# Performance
of best parameters
```
env_name = 'BanditHardAndSparse121-v0'
# env_name = 'BanditOneHigh10-v0'
num_episodes = 60500
# Run w/ best params
result = meta_bandit(
env_name=env_name,
num_episodes=num_episodes,
lr_E=best_params["lr_E"],
lr_R=best_params["lr_R"],
tie_threshold=best_params["tie_threshold"],
seed_value=12,
)
print(best_params)
plot_meta(env_name, result=result)
plot_critic('critic_R', env_name, result)
plot_critic('critic_E', env_name, result)
```
# Sensitivity
to parameter choices
```
total_Rs = []
ties = []
lrs_R = []
lrs_E = []
trials = list(sorted_params.keys())
for t in trials:
total_Rs.append(sorted_params[t]['total_E_R'])
ties.append(sorted_params[t]['tie_threshold'])
lrs_E.append(sorted_params[t]['lr_E'])
lrs_R.append(sorted_params[t]['lr_R'])
# Init plot
fig = plt.figure(figsize=(5, 18))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Do plots:
# Arm
plt.subplot(grid[0, 0])
plt.scatter(trials, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("total E")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.scatter(trials, ties, color="black", alpha=1, s=6, label="total R")
# plt.yscale('log')
plt.xlabel("Sorted params")
plt.ylabel("Tie threshold")
_ = sns.despine()
plt.subplot(grid[2, 0])
plt.scatter(trials, lrs_R, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("lr_R")
_ = sns.despine()
plt.subplot(grid[3, 0])
plt.scatter(trials, lrs_E, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("lr_E")
_ = sns.despine()
plt.subplot(grid[4, 0])
plt.scatter(lrs_R, lrs_E, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("lr_R")
plt.ylabel("lr_E")
_ = sns.despine()
plt.subplot(grid[5, 0])
plt.scatter(ties, lrs_E, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Tie threshold")
plt.ylabel("lr_E")
# plt.xlim(0, 0.002)
_ = sns.despine()
```
# Parameter correlations
```
from scipy.stats import spearmanr
spearmanr(ties, lrs_E)
spearmanr(lrs_R, lrs_E)
```
# Distributions
of parameters
```
# Init plot
fig = plt.figure(figsize=(5, 6))
grid = plt.GridSpec(3, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(ties, color="black")
plt.xlabel("tie threshold")
plt.ylabel("Count")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.hist(lrs_R, color="black")
plt.xlabel("lr_R")
plt.ylabel("Count")
_ = sns.despine()
plt.subplot(grid[2, 0])
plt.hist(lrs_E, color="black")
plt.xlabel("lr_E")
plt.ylabel("Count")
_ = sns.despine()
```
of total reward
```
# Init plot
fig = plt.figure(figsize=(5, 2))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(total_Rs, color="black", bins=50)
plt.xlabel("Total reward")
plt.ylabel("Count")
# plt.xlim(0, 10)
_ = sns.despine()
```
# Node and Link analysis: Centrality measures
Centrality measures are used to appraise the "importance" of the elements of the network. The problem is that "importance"
* Is not well-defined
* Depends on the domain of the network
During this seminar we will consider two node centrality measures: *degree centrality* and *closeness centrality*
## Degree Centrality
In fact, you have already met degree centrality in this course.
Given adjacency matrix $A$ of the **unweighted** and **undirected** graph $G = (V,E)$ degree centrality of the node $v_i$ is computed as:
$$ C_D(i) = \sum_j A_{ji} $$
In order to compare nodes across graphs this measure can be normalized by a factor $\frac{1}{N-1}$
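As a quick illustration of the formula above, degree centrality can be computed directly from an adjacency matrix with NumPy; the 4-node graph below is a toy example, not part of the seminar materials:

```python
import numpy as np

# Adjacency matrix of a small undirected, unweighted graph:
# edges 0-1, 0-2, 1-2, 2-3 (node 2 has the highest degree)
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])

# C_D(i) = sum_j A_ji  -- column sums (equal to row sums by symmetry)
degree = A.sum(axis=0)

# Normalize by 1/(N-1) so values are comparable across graphs
N = A.shape[0]
degree_normalized = degree / (N - 1)

print(degree)  # [2 2 3 1]
print(degree_normalized)
```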
## Closeness Centrality
This measure corresponds most closely to the word "central": closeness centrality identifies nodes that can reach all other nodes quickly.
$$ C_C(i) = \left[ \sum_{j,\ j\neq i} d(v_i, v_j) \right]^{-1}\text{,} $$
where $d(v_i, v_j)$ is a length of the shortest path between $v_i$ and $v_j$. Again, to be normalized it is multiplied by $(N-1)$.
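The definition can be sketched in plain Python with a BFS over an unweighted graph; this is a hypothetical helper for illustration only, since networkx provides `nx.closeness_centrality` for real use:

```python
from collections import deque

def closeness_centrality(adj, normalized=True):
    """Closeness centrality via BFS shortest paths on an unweighted graph.
    adj: dict mapping node -> iterable of neighbours (assumed connected)."""
    n = len(adj)
    result = {}
    for source in adj:
        # BFS distances from `source`
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        # C_C(i) = 1 / sum_j d(v_i, v_j), optionally scaled by (N - 1)
        total = sum(d for node, d in dist.items() if node != source)
        c = 1.0 / total
        if normalized:
            c *= (n - 1)
        result[source] = c
    return result

# Path graph 0-1-2: the middle node scores 1.0, the endpoints 2/3
path = {0: [1], 1: [0, 2], 2: [1]}
print(closeness_centrality(path))
```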
## Why?
Centralities allow us to
* Understand the structure of the graph without looking at it
* Compare nodes of a graph (between graphs) and identify the most "important"
* Compare graphs*
```
import networkx as nx
import random
import numpy as np
import matplotlib.pyplot as plt
import pprint
pp = pprint.PrettyPrinter(indent=4)
%matplotlib inline
```
## Example: Zachary's Karate Club
Let's load Zachary's Karate Club network. This is a fairly small example, so we can both calculate the centralities and map them onto a picture of the graph
```
G = nx.karate_club_graph()
pos = nx.spring_layout(G) # Fix node positions on all pictures
# Original network
plt.figure(1, figsize=(7,7))
nx.draw_networkx(G, pos)
# Degree centrality
dc = nx.degree_centrality(G)
plt.figure(2, figsize=(7,7))
coord = nx.spring_layout(G)
nx.draw(G,
pos,
nodelist=list(dc.keys()),
node_size = [d*7000 for d in list(dc.values())],
node_color=list(dc.values()),
font_size=8,
cmap=plt.cm.Reds,
)
# Closeness centrality
cl = nx.closeness_centrality(G)
plt.figure(1, figsize=(7,7))
coord = nx.spring_layout(G)
nx.draw(G,
pos,
nodelist=list(cl.keys()),
node_size = [d*3000 for d in list(cl.values())],
node_color=list(cl.values()),
font_size=8,
cmap=plt.cm.Reds,
)
# Plot degree-closeness
xdata = list(dc.values())
ydata = list(cl.values())
plt.figure(1, figsize=(7,7))
plt.plot(xdata,ydata, '+')
plt.xlabel('Degree Centrality')
plt.ylabel('Closeness Centrality')
# Not clear. Let's add node IDs:
fig = plt.figure(1, figsize=(14,7))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
for v in range(len(dc)):
ax1.text(x = xdata[v], y = ydata[v], s=str(v))
ax1.set_xlim(0, 0.6)
ax1.set_ylim(0.25, 0.6)
ax1.set_xlabel('Degree Centrality')
ax1.set_ylabel('Closeness Centrality')
ax2 = nx.draw_networkx(G, pos)
```
<a href="https://colab.research.google.com/github/thomascong121/SocialDistance/blob/master/camera_colibration.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
%%capture
!pip install gluoncv
!pip install mxnet-cu101
import gluoncv
from gluoncv import model_zoo, data, utils
from matplotlib import pyplot as plt
import numpy as np
from collections import defaultdict
from mxnet import nd
import mxnet as mx
from skimage import io
import cv2
import os
img_path = '/content/drive/My Drive/social distance/0.png'
img = io.imread(img_path)
io.imshow(img)
io.show()
# keypoint = np.array([[739, 119], [990, 148], [614, 523], [229, 437]]).astype(float)
# keypoint *= 1.59
# keypoint = keypoint.astype(int)
# keypoint
keypoints = [(1175, 189), (1574, 235), (976, 831), (364, 694)]
```
```
import itertools
for i in keypoints:
cv2.circle(img, (i[0], i[1]), 10, (0, 0, 525), -1)
for i in itertools.combinations(keypoints, 2):
print(i)
cv2.line(img, (i[0][0], i[0][1]), (i[1][0], i[1][1]), (0, 255, 0), 2)
plt.imshow(img)
plt.show()
```
```
keypoints_birds_eye_view = [(700, 400), (1200, 400), (1200, 900), (700, 900)]
keypoint = np.float32(keypoints)
keypoints_birds_eye_view = np.float32(keypoints_birds_eye_view)
M = cv2.getPerspectiveTransform(keypoint, keypoints_birds_eye_view)
M
dst_img = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))
plt.imshow(dst_img)
plt.show()
img = io.imread(img_path)
class Bird_eye_view_Transformer:
def __init__(self, keypoints, keypoints_birds_eye_view, actual_length, actual_width):
'''
keypoints input order
0 1
3 2
'''
self.keypoint = np.float32(keypoints)
self.keypoints_birds_eye_view = np.float32(keypoints_birds_eye_view)
self.M = cv2.getPerspectiveTransform(self.keypoint, self.keypoints_birds_eye_view)
self.length_ratio = actual_width/(keypoints_birds_eye_view[3][1] - keypoints_birds_eye_view[0][1])
self.width_ratio = actual_length/(keypoints_birds_eye_view[1][0] - keypoints_birds_eye_view[0][0])
def imshow(self, img):
dst_img = cv2.warpPerspective(img, self.M, (img.shape[1], img.shape[0]))
plt.imshow(dst_img)
plt.show()
keypoints = [(1175, 189), (1574, 235), (976, 831), (364, 694)]
keypoints_birds_eye_view = [(700, 400), (1200, 400), (1200, 900), (700, 900)]
actual_length = 10
actual_width = 5
transformer = Bird_eye_view_Transformer(keypoints, keypoints_birds_eye_view, actual_length, actual_width)
transformer.imshow(img)
```
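As a hedged sketch of how the `length_ratio` and `width_ratio` fields above are presumably meant to be used (the notebook itself never calls them), the real-world distance between two points already warped into the bird's-eye view could be computed as below; the helper name and the sample numbers are illustrative assumptions, not part of the original code:

```python
import math

def real_world_distance(p1, p2, width_ratio, length_ratio):
    """p1, p2: (x, y) points in bird's-eye pixel coordinates.
    The ratios convert pixels to metres along each axis."""
    dx = (p2[0] - p1[0]) * width_ratio    # horizontal metres
    dy = (p2[1] - p1[1]) * length_ratio   # vertical metres
    return math.hypot(dx, dy)

# Ratios implied by the values above: 10 m / 500 px and 5 m / 500 px
width_ratio = 10 / (1200 - 700)   # 0.02 m per pixel
length_ratio = 5 / (900 - 400)    # 0.01 m per pixel

# Distance between the two top reference corners of the warped rectangle
print(real_world_distance((700, 400), (1200, 400), width_ratio, length_ratio))  # 10.0
```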
# Decimal
This floating-point quirk shows up in many programming languages:
```
1.1 + 2.2
0.1 + 0.1 + 0.1 - 0.3
from decimal import Decimal
float(Decimal('1.1') + Decimal('2.2'))
float(Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3'))
```
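To make the quirk explicit, the comparisons below show the drift and how `Decimal` avoids it (note that `Decimal` should be constructed from strings, not floats, or the binary rounding error is carried over):

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so sums drift:
print(0.1 + 0.2 == 0.3)           # False
print(0.1 + 0.1 + 0.1 - 0.3)      # a tiny non-zero residue, not 0.0

# Decimal stores base-10 digits exactly, so the arithmetic is exact:
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True
```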
# Logging
https://habr.com/ru/post/144566/
When a project grows beyond a certain size, you need to keep an event journal, a log. It lets you quickly understand the causes of errors, catch atypical program behavior, find anomalies in incoming data, and so on.
Python has a built-in library that makes event logging convenient. Out of the box it provides 5 logging levels:
- debug: for debugging
- info: a plain informational message
- warning: a warning
- error: an error
- critical: a critical error
```
import logging
logging.debug("Debug message")
logging.info("An ordinary informational message")
logging.warning("A warning")
logging.error("An error")
logging.critical("Total failure")
```
Not all of the messages were printed, because the default output level is warning. We can change it, but this has to be done before the first logging call.
```
import logging
logging.basicConfig(level=logging.DEBUG)
logging.debug("Debug message")
logging.info("An ordinary informational message")
logging.warning("A warning")
logging.error("An error")
logging.critical("Total failure")
```
The library has several built-in attributes that can help make the log more detailed:
<table class="docutils align-default">
<colgroup>
<col style="width: 18%">
<col style="width: 28%">
<col style="width: 53%">
</colgroup>
<thead>
<tr class="row-odd"><th class="head"><p>Attribute name</p></th>
<th class="head"><p>Format</p></th>
<th class="head"><p>Description</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>args</p></td>
<td><p>You shouldn’t need to
format this yourself.</p></td>
<td><p>The tuple of arguments merged into <code class="docutils literal notranslate"><span class="pre">msg</span></code> to
produce <code class="docutils literal notranslate"><span class="pre">message</span></code>, or a dict whose values
are used for the merge (when there is only one
argument, and it is a dictionary).</p></td>
</tr>
<tr class="row-odd"><td><p>asctime</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(asctime)s</span></code></p></td>
<td><p>Human-readable time when the
<a class="reference internal" href="#logging.LogRecord" title="logging.LogRecord"><code class="xref py py-class docutils literal notranslate"><span class="pre">LogRecord</span></code></a> was created. By default
this is of the form ‘2003-07-08 16:49:45,896’
(the numbers after the comma are millisecond
portion of the time).</p></td>
</tr>
<tr class="row-even"><td><p>created</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(created)f</span></code></p></td>
<td><p>Time when the <a class="reference internal" href="#logging.LogRecord" title="logging.LogRecord"><code class="xref py py-class docutils literal notranslate"><span class="pre">LogRecord</span></code></a> was created
(as returned by <a class="reference internal" href="time.html#time.time" title="time.time"><code class="xref py py-func docutils literal notranslate"><span class="pre">time.time()</span></code></a>).</p></td>
</tr>
<tr class="row-odd"><td><p>exc_info</p></td>
<td><p>You shouldn’t need to
format this yourself.</p></td>
<td><p>Exception tuple (à la <code class="docutils literal notranslate"><span class="pre">sys.exc_info</span></code>) or,
if no exception has occurred, <code class="docutils literal notranslate"><span class="pre">None</span></code>.</p></td>
</tr>
<tr class="row-even"><td><p>filename</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(filename)s</span></code></p></td>
<td><p>Filename portion of <code class="docutils literal notranslate"><span class="pre">pathname</span></code>.</p></td>
</tr>
<tr class="row-odd"><td><p>funcName</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(funcName)s</span></code></p></td>
<td><p>Name of function containing the logging call.</p></td>
</tr>
<tr class="row-even"><td><p>levelname</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(levelname)s</span></code></p></td>
<td><p>Text logging level for the message
(<code class="docutils literal notranslate"><span class="pre">'DEBUG'</span></code>, <code class="docutils literal notranslate"><span class="pre">'INFO'</span></code>, <code class="docutils literal notranslate"><span class="pre">'WARNING'</span></code>,
<code class="docutils literal notranslate"><span class="pre">'ERROR'</span></code>, <code class="docutils literal notranslate"><span class="pre">'CRITICAL'</span></code>).</p></td>
</tr>
<tr class="row-odd"><td><p>levelno</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(levelno)s</span></code></p></td>
<td><p>Numeric logging level for the message
(<code class="xref py py-const docutils literal notranslate"><span class="pre">DEBUG</span></code>, <code class="xref py py-const docutils literal notranslate"><span class="pre">INFO</span></code>,
<code class="xref py py-const docutils literal notranslate"><span class="pre">WARNING</span></code>, <code class="xref py py-const docutils literal notranslate"><span class="pre">ERROR</span></code>,
<code class="xref py py-const docutils literal notranslate"><span class="pre">CRITICAL</span></code>).</p></td>
</tr>
<tr class="row-even"><td><p>lineno</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(lineno)d</span></code></p></td>
<td><p>Source line number where the logging call was
issued (if available).</p></td>
</tr>
<tr class="row-odd"><td><p>message</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(message)s</span></code></p></td>
<td><p>The logged message, computed as <code class="docutils literal notranslate"><span class="pre">msg</span> <span class="pre">%</span>
<span class="pre">args</span></code>. This is set when
<a class="reference internal" href="#logging.Formatter.format" title="logging.Formatter.format"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Formatter.format()</span></code></a> is invoked.</p></td>
</tr>
<tr class="row-even"><td><p>module</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(module)s</span></code></p></td>
<td><p>Module (name portion of <code class="docutils literal notranslate"><span class="pre">filename</span></code>).</p></td>
</tr>
<tr class="row-odd"><td><p>msecs</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(msecs)d</span></code></p></td>
<td><p>Millisecond portion of the time when the
<a class="reference internal" href="#logging.LogRecord" title="logging.LogRecord"><code class="xref py py-class docutils literal notranslate"><span class="pre">LogRecord</span></code></a> was created.</p></td>
</tr>
<tr class="row-even"><td><p>msg</p></td>
<td><p>You shouldn’t need to
format this yourself.</p></td>
<td><p>The format string passed in the original
logging call. Merged with <code class="docutils literal notranslate"><span class="pre">args</span></code> to
produce <code class="docutils literal notranslate"><span class="pre">message</span></code>, or an arbitrary object
(see <a class="reference internal" href="../howto/logging.html#arbitrary-object-messages"><span class="std std-ref">Using arbitrary objects as messages</span></a>).</p></td>
</tr>
<tr class="row-odd"><td><p>name</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(name)s</span></code></p></td>
<td><p>Name of the logger used to log the call.</p></td>
</tr>
<tr class="row-even"><td><p>pathname</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(pathname)s</span></code></p></td>
<td><p>Full pathname of the source file where the
logging call was issued (if available).</p></td>
</tr>
<tr class="row-odd"><td><p>process</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(process)d</span></code></p></td>
<td><p>Process ID (if available).</p></td>
</tr>
<tr class="row-even"><td><p>processName</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(processName)s</span></code></p></td>
<td><p>Process name (if available).</p></td>
</tr>
<tr class="row-odd"><td><p>relativeCreated</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(relativeCreated)d</span></code></p></td>
<td><p>Time in milliseconds when the LogRecord was
created, relative to the time the logging
module was loaded.</p></td>
</tr>
<tr class="row-even"><td><p>stack_info</p></td>
<td><p>You shouldn’t need to
format this yourself.</p></td>
<td><p>Stack frame information (where available)
from the bottom of the stack in the current
thread, up to and including the stack frame
of the logging call which resulted in the
creation of this record.</p></td>
</tr>
<tr class="row-odd"><td><p>thread</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(thread)d</span></code></p></td>
<td><p>Thread ID (if available).</p></td>
</tr>
<tr class="row-even"><td><p>threadName</p></td>
<td><p><code class="docutils literal notranslate"><span class="pre">%(threadName)s</span></code></p></td>
<td><p>Thread name (if available).</p></td>
</tr>
</tbody>
</table>
They are applied like this:
```
import logging
logging.basicConfig(
format='%(filename)s[LINE:%(lineno)d]# %(levelname)-8s [%(asctime)s] %(message)s',
level=logging.DEBUG
)
logging.debug("Debug message")
logging.info("An ordinary informational message")
logging.warning("A warning")
logging.error("An error")
logging.critical("Total failure")
```
## Writing the log to a file
Of course, merely printing logs to the screen is a pointless exercise; they need to be saved to a file:
```
import logging
logging.basicConfig(
format='%(filename)s[LINE:%(lineno)d]# %(levelname)-8s [%(asctime)s] %(message)s',
level=logging.DEBUG,
filename="log.txt",
    filemode="w"  # overwrite the log file on each run
)
logging.debug("Debug message")
logging.info("An ordinary informational message")
logging.warning("A warning")
logging.error("An error")
logging.critical("Total failure")
with open("log.txt") as f:
print(f.read())
```
## Multiple loggers
Using one shared logging configuration for the whole project is a bad idea: it also affects the logs of the surrounding environment, and everything blends into one mess. It is better to create a separate logger for each major part of a large application.
```
import logging
# get the logger for our application, or create a new one if it does not exist yet (singleton pattern)
logger = logging.getLogger("our_app_name")
logger.setLevel(logging.DEBUG)
# describe where and how the logs will be stored: set the file and the format
handler = logging.FileHandler('our_app_log.txt', 'a', 'utf-8')
formatter = logging.Formatter("%(filename)s[LINE:%(lineno)d]# %(levelname)-8s [%(asctime)s] %(message)s")
# attach the format to the handler, and the handler to the logger
handler.setFormatter(formatter)
logger.addHandler(handler)
# you can even write to several files at once
handler2 = logging.FileHandler('our_app_log2.txt', 'a', 'utf-8')
handler2.setFormatter(formatter)
logger.addHandler(handler2)
logger.info("Our new logger works")
```
Text classification is the task of assigning a set of predefined categories to open-ended text. Text classifiers can be used to organize, structure, and categorize pretty much any kind of text: documents, medical studies, files, and content from all over the web. We will classify the text into 9 categories:
- computer
- science
- politics
- sport
- automobile
- religion
- medicine
- sales
- alt.atheism
# Import Libraries
Let's first import all the required libraries
```
import os
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.graph_objects as go
import re
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.datasets import fetch_20newsgroups
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import svm
from time import time
from sklearn import linear_model
from sklearn import preprocessing
from sklearn.model_selection import StratifiedKFold
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import model_selection
from sklearn.metrics import accuracy_score, precision_score, recall_score, plot_confusion_matrix, confusion_matrix, f1_score
from statistics import mean
import pickle
from tensorflow import keras
from keras import layers
from keras import losses
from keras import utils
from keras.layers.experimental.preprocessing import TextVectorization
from keras.callbacks import EarlyStopping
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D, Bidirectional, Dropout
from tensorflow.keras.models import load_model
import torch
from tqdm.notebook import tqdm
from transformers import BertTokenizer
from torch.utils.data import TensorDataset
from transformers import BertForSequenceClassification
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
from transformers import AdamW, get_linear_schedule_with_warmup
```
# Load Dataset
We are going to use the 20 Newsgroups dataset. Let's load it into a dataframe.
```
dataset = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'), shuffle=True, random_state=42)
df = pd.DataFrame()
df['text'] = dataset.data
df['source'] = dataset.target
label=[]
for train_index, test_index in skf.split(X_smote_tfidf, Y_smote_tfidf):
df['label']=label
# first few rows of the dataset
df.head()
```
We will later use a label encoder to convert the labels (categorical values) into numeric values, so we can drop the numeric source column now
```
# drop source column
df.drop(['source'],axis=1,inplace=True)
```
Let's see the count of each label
```
# value count
df['label'].value_counts()
```
Our dataset has relatively few samples per label, and 20 categories are too many, so we will merge the sub-categories:
- politics has the mideast, guns, and misc sub-topics; we will replace them all with politics
- the sport sub-categories will likewise be merged into sport
- the two religion sub-categories will be merged into one
- in total we will end up with 9 categories
```
# replace to politics
df['label'].replace({'talk.politics.misc':'politics','talk.politics.guns':'politics',
'talk.politics.mideast':'politics'},inplace=True)
# replace to sport
df['label'].replace({'rec.sport.hockey':'sport','rec.sport.baseball':'sport'},inplace=True)
# replace to religion
df['label'].replace({'soc.religion.christian':'religion','talk.religion.misc':'religion'},inplace=True)
# replace to computer
df['label'].replace({'comp.windows.x':'computer','comp.sys.ibm.pc.hardware':'computer',
'comp.os.ms-windows.misc':'computer','comp.graphics':'computer',
'comp.sys.mac.hardware':'computer'},inplace=True)
# replace to sales
df['label'].replace({'misc.forsale':'sales'},inplace=True)
# replace to automobile
df['label'].replace({'rec.autos':'automobile','rec.motorcycles':'automobile'},inplace=True)
# replace to science
df['label'].replace({'sci.crypt':'science','sci.electronics':'science','sci.space':'science'},inplace=True)
# replace to medicine
df['label'].replace({'sci.med':'medicine'},inplace=True)
```
Let's see the number of unique targets
```
# number of targets
df['label'].nunique()
# value count
df['label'].value_counts()
```
We are going to add a number-of-words column containing the word count of each text
```
df['Number_of_words'] = df['text'].apply(lambda x:len(str(x).split()))
df.head()
```
Check the basic stats of number of words, like maximum, minimum, average number of words
```
# basic stats
df['Number_of_words'].describe()
```
So the maximum number of words in our dataset is 11,765. Let's have a look at that row
```
df[df['Number_of_words']==11765]
```
So the longest text belongs to the electronics category. Our dataset also has some rows with no text at all, i.e. a word count of 0; we will drop those rows
```
no_text = df[df['Number_of_words']==0]
print(len(no_text))
# drop these rows
df.drop(no_text.index,inplace=True)
plt.style.use('ggplot')
plt.figure(figsize=(12,6))
sns.distplot(df['Number_of_words'],kde = False,color="red",bins=200)
plt.title("Frequency distribution of number of words for each text extracted", size=20)
```
# Data Pre-Processing
Now it's time to clean our dataset: we will lowercase the text, remove text in square brackets, remove links, and remove words containing numbers
```
# cleaning the text
def clean_text(text):
    '''Make text lowercase, remove text in square brackets, remove links,
    remove punctuation and remove words containing numbers.'''
    text = text.lower()
    text = re.sub(r'\[.*?\]', '', text)
    text = re.sub(r'https?://\S+|www\.\S+', '', text)
    text = re.sub(r'<.*?>+', '', text)
    text = re.sub('[%s]' % re.escape(string.punctuation), '', text)
    text = re.sub('\n', '', text)
    text = re.sub(r'\w*\d\w*', '', text)
    return text
# Applying the cleaning function to datasets
df['cleaned_text'] = df['text'].apply(lambda x: clean_text(x))
# updated text
df['cleaned_text'].head()
```
Let's convert our cleaned text into tokens
```
tokenizer=nltk.tokenize.RegexpTokenizer(r'\w+')
df['tokens'] = df['cleaned_text'].apply(lambda x:tokenizer.tokenize(x))
df.head()
```
Stopwords are common English words that do not add much meaning to a sentence, so we can remove them
```
# stopwords
stopwords.words('english')[0:5]
```
Let's check the number of stopwords in the nltk library
```
len(stopwords.words('english'))
```
Now we are going to remove the stopwords from the sentences
```
# removing stopwords
# build the stopword set once instead of re-reading the corpus for every word
english_stopwords = set(stopwords.words('english'))
def remove_stopwords(text):
    return [w for w in text if w not in english_stopwords]
df['stopwordremove_tokens'] = df['tokens'].apply(lambda x : remove_stopwords(x))
df.head()
```
It's time to do lemmatization
```
# lemmatization
lem = WordNetLemmatizer()
def lem_word(x):
    return [lem.lemmatize(w) for w in x]
df['lemmatized_text'] = df['stopwordremove_tokens'].apply(lem_word)
df.head()
```
Now we are going to join the tokens back together; this is our final text
```
def combine_text(list_of_text):
    '''Takes a list of text and combines them into one large chunk of text.'''
    return ' '.join(list_of_text)
df['final_text'] = df['lemmatized_text'].apply(lambda x : combine_text(x))
df.head()
```
Now that we have cleaned the dataset and removed stopwords, some rows may have ended up with a text length of 0. We will find and remove those rows
```
df['Final_no_of_words'] = df['final_text'].apply(lambda x:len(str(x).split()))
df.head()
# basic stats
df['Final_no_of_words'].describe()
# number of rows with text lenth = 0
print(len(df[df['Final_no_of_words']==0]))
# drop those rows
df.drop(df[df['Final_no_of_words']==0].index,inplace=True)
```
Now that the text is clean, we will convert the labels into numeric values using LabelEncoder()
```
# label_encoder object knows how to understand word labels.
label_encoder = preprocessing.LabelEncoder()
# Encode labels in column 'species'.
df['target']= label_encoder.fit_transform(df['label'])
df['target'].unique()
```
# Dependent and Independent Variable
```
# dependent and independent variable
X = df['final_text']
y = df['target']
X.shape,y.shape
```
# Bag-of-Words
CountVectorizer transforms a given text into a vector based on the frequency (count) of each word that occurs in the entire text, i.e. it counts the number of occurrences of each word in a document
```
count_vectorizer = CountVectorizer()
count_vector = count_vectorizer.fit_transform(X)
print(count_vector[0].todense())
```
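Conceptually, what CountVectorizer computes is just a shared vocabulary plus per-document token counts. A stdlib sketch of that idea (sklearn's version additionally handles token patterns, n-grams, min_df/max_df filtering, and sparse storage):

```python
from collections import Counter

def bag_of_words(docs):
    """Build a sorted vocabulary and one count vector per document."""
    vocab = sorted({tok for doc in docs for tok in doc.lower().split()})
    vectors = []
    for doc in docs:
        counts = Counter(doc.lower().split())
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors

vocab, vectors = bag_of_words(["the cat sat", "the cat ate the cat"])
print(vocab)    # ['ate', 'cat', 'sat', 'the']
print(vectors)  # [[0, 1, 1, 1], [1, 2, 0, 2]]
```

Each row of `vectors` corresponds to one row of the (dense form of the) matrix printed above.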
# Tf-Idf
Tf-Idf stands for Term Frequency-Inverse Document Frequency. It is a technique to quantify a word in documents: we compute a weight for each word that signifies its importance within the document and the corpus
```
tfidf_vectorizer = TfidfVectorizer(min_df = 2,max_df = 0.5,ngram_range = (1,2))
tfidf = tfidf_vectorizer.fit_transform(X)
print(tfidf[0].todense())
```
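The weight itself is term frequency times inverse document frequency. A hand-rolled sketch using the textbook formula tf x ln(N/df) — note that sklearn's TfidfVectorizer uses a smoothed idf, ln((1+N)/(1+df)) + 1, plus L2 normalisation, so its numbers will differ:

```python
import math
from collections import Counter

def tfidf(docs):
    """Raw term frequency times ln(N/df), the textbook tf-idf weighting."""
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    df = Counter()                      # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({t: tf[t] * math.log(n_docs / df[t]) for t in tf})
    return weights

w = tfidf(["the cat sat", "the dog sat", "the dog barked"])
# "the" appears in every document, so its idf (and weight) is 0.
print(round(w[0]["the"], 3))  # 0.0
```

Words that occur everywhere get weight 0, while rare words are boosted, which is exactly the "importance in the document and corpus" described above.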
# SMOTE technique to balance the dataset
We can clearly see that our dataset is imbalanced. We will use the SMOTE technique to balance it. SMOTE is an oversampling technique in which synthetic samples are generated for the minority class; it helps overcome the overfitting problem posed by random oversampling.
```
# count vector
smote = SMOTE(random_state = 402)
X_smote, Y_smote = smote.fit_resample(count_vector,y)
sns.countplot(Y_smote)
# tfidf
smote = SMOTE(random_state = 402)
X_smote_tfidf, Y_smote_tfidf = smote.fit_resample(tfidf,y)
sns.countplot(Y_smote_tfidf)
```
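The generation step at SMOTE's core is a simple interpolation: for a minority sample x and one of its k nearest minority neighbours x_nn, the synthetic point is x + u·(x_nn − x) with u drawn uniformly from [0, 1]. A stripped-down sketch of just that step (imblearn's SMOTE additionally performs the k-NN search and repeats until the classes are balanced):

```python
import random

def smote_point(x, x_nn, rng=random):
    """Create one synthetic sample on the segment between x and its neighbour x_nn."""
    u = rng.random()  # interpolation factor in [0, 1]
    return [a + u * (b - a) for a, b in zip(x, x_nn)]

random.seed(0)
s = smote_point([0.0, 0.0], [1.0, 2.0])
# The synthetic point lies on the segment joining the two parents.
print(abs(s[1] - 2 * s[0]) < 1e-9)  # True
```

Because synthetic points lie between real minority samples rather than being exact copies, the classifier sees a denser minority region instead of duplicated rows.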
## Train-Test Split
```
# train-test split countvector
X_train, X_test, y_train, y_test = train_test_split(X_smote, Y_smote, test_size = 0.20, random_state = 0)
X_train.shape, X_test.shape,y_train.shape, y_test.shape
# train-test split tfidf
X_train_tfidf, X_test_tfidf, y_train_tfidf, y_test_tfidf = train_test_split(X_smote_tfidf, Y_smote_tfidf , test_size = 0.20, random_state = 0)
training_time_container = {'linear_svm_tfidf':0,'linear_svm':0,'mnb_naive_bayes_tfidf':0,
'mnb_naive_bayes':0,'random_forest_tfidf':0,'random_forest':0,
'logistic_reg':0,'logistic_reg_tfidf':0}
prediction_time_container = {'linear_svm_tfidf':0,'linear_svm':0,'mnb_naive_bayes_tfidf':0,
'mnb_naive_bayes':0,'random_forest_tfidf':0,'random_forest':0,
'logistic_reg':0,'logistic_reg_tfidf':0}
accuracy_container = {'linear_svm_tfidf':0,'linear_svm':0,'mnb_naive_bayes_tfidf':0,
'mnb_naive_bayes':0,'random_forest_tfidf':0,'random_forest':0,
'logistic_reg':0,'logistic_reg_tfidf':0}
```
# Logistic Regression
```
# on countvector
lg = LogisticRegression(C = 1.0)
#Fitting the model
t0=time()
lg.fit(X_train,y_train)
training_time_container['logistic_reg']=time()-t0
# Predicting the Test set results
t0 = time()
y_pred_lg = lg.predict(X_test)
prediction_time_container['logistic_reg']=time()-t0
lg_test_accuracy = accuracy_score(y_test,y_pred_lg)
accuracy_container['logistic_reg'] = lg_test_accuracy
print('Training Accuracy : ', accuracy_score(y_train,lg.predict(X_train)))
print('Testing Accuracy: ',lg_test_accuracy)
print("Training Time: ",training_time_container['logistic_reg'])
print("Prediction Time: ",prediction_time_container['logistic_reg'])
print(confusion_matrix(y_test,y_pred_lg))
# on tfidf
lg = LogisticRegression(C = 1.0)
#Fitting the model
t0=time()
lg.fit(X_train_tfidf,y_train_tfidf)
training_time_container['logistic_reg_tfidf']=time()-t0
# Predicting the Test set results
t0=time()
ypred_lg_tf = lg.predict(X_test_tfidf)
prediction_time_container['logistic_reg_tfidf']=time()-t0
lg_test_accuracy_tf = accuracy_score(y_test_tfidf,ypred_lg_tf)
accuracy_container['logistic_reg_tfidf'] = lg_test_accuracy_tf
print('Training Accuracy: ', accuracy_score(y_train_tfidf,lg.predict(X_train_tfidf)))
print('Testing Accuracy: ', lg_test_accuracy_tf)
print("Training Time: ",training_time_container['logistic_reg_tfidf'])
print("Prediction Time: ",prediction_time_container['logistic_reg_tfidf'])
print(confusion_matrix(y_test_tfidf,ypred_lg_tf))
```
## Multinomial Naive Bayes
```
# on countvector
nb = MultinomialNB()
#Fitting the model
t0=time()
nb.fit(X_train,y_train)
training_time_container['mnb_naive_bayes']=time()-t0
# Predicting the Test set results
t0 = time()
y_pred_nb = nb.predict(X_test)
prediction_time_container['mnb_naive_bayes']=time()-t0
mnb_test_accuracy = accuracy_score(y_test,y_pred_nb)
accuracy_container['mnb_naive_bayes'] = mnb_test_accuracy
print('Training Accuracy : ', accuracy_score(y_train,nb.predict(X_train)))
print('Testing Accuracy: ',mnb_test_accuracy)
print("Training Time: ",training_time_container['mnb_naive_bayes'])
print("Prediction Time: ",prediction_time_container['mnb_naive_bayes'])
print(confusion_matrix(y_test,y_pred_nb))
# on tfidf
nb = MultinomialNB()
#Fitting the model
t0=time()
nb.fit(X_train_tfidf,y_train_tfidf)
training_time_container['mnb_naive_bayes_tfidf']=time()-t0
# Predicting the Test set results
t0=time()
ypred_nb_tf = nb.predict(X_test_tfidf)
prediction_time_container['mnb_naive_bayes_tfidf']=time()-t0
mnb_tfidf_test_accuracy = accuracy_score(y_test_tfidf,ypred_nb_tf)
accuracy_container['mnb_naive_bayes_tfidf'] = mnb_tfidf_test_accuracy
print('Training Accuracy: ', accuracy_score(y_train_tfidf,nb.predict(X_train_tfidf)))
print('Testing Accuracy: ',mnb_tfidf_test_accuracy )
print("Training Time: ",training_time_container['mnb_naive_bayes_tfidf'])
print("Prediction Time: ",prediction_time_container['mnb_naive_bayes_tfidf'])
print(confusion_matrix(y_test_tfidf,ypred_nb_tf))
```
## SVM using Stochastic Gradient Descent
```
# Hinge loss gives a linear Support Vector Machine. alpha=0.0001 (the default) is the constant that
# multiplies the regularization term (the optimal learning rate schedule is derived from it).
# The penalty is L2, the standard regularizer for linear SVMs.
# on countvector
svm_classifier = linear_model.SGDClassifier(loss='hinge',alpha=0.0001)
t0=time()
svm_classifier.fit(X_train,y_train)
training_time_container['linear_svm']=time()-t0
# Predicting the Test set results
t0=time()
y_pred_svm = svm_classifier.predict(X_test)
prediction_time_container['linear_svm']=time()-t0
svm_test_accuracy = accuracy_score(y_test,y_pred_svm)
accuracy_container['linear_svm'] = svm_test_accuracy
print('Training Accuracy : ', accuracy_score(y_train,svm_classifier.predict(X_train)))
print('Testing Accuracy: ',svm_test_accuracy )
print("Training Time: ",training_time_container['linear_svm'])
print("Prediction Time: ",prediction_time_container['linear_svm'])
print(confusion_matrix(y_test,y_pred_svm))
# on tfidf
svm_classifier = linear_model.SGDClassifier(loss='hinge',alpha=0.0001)
#Fitting the model
t0=time()
svm_classifier.fit(X_train_tfidf,y_train_tfidf)
training_time_container['linear_svm_tfidf']=time()-t0
# Predicting the Test set results
t0=time()
ypred_svm_tf = svm_classifier.predict(X_test_tfidf)
prediction_time_container['linear_svm_tfidf']=time()-t0
svm_test_accuracy_tf = accuracy_score(y_test_tfidf,ypred_svm_tf)
accuracy_container['linear_svm_tfidf'] = svm_test_accuracy_tf
print('Training Accuracy: ', accuracy_score(y_train_tfidf,svm_classifier.predict(X_train_tfidf)))
print('Testing Accuracy: ', svm_test_accuracy_tf)
print("Training Time: ",training_time_container['linear_svm_tfidf'])
print("Prediction Time: ",prediction_time_container['linear_svm_tfidf'])
print(confusion_matrix(y_test_tfidf,ypred_svm_tf))
```
## RandomForest
```
# on count vectorizer
rf = RandomForestClassifier(n_estimators=50)
t0=time()
rf.fit(X_train,y_train)
training_time_container['random_forest']=time()-t0
# Predicting the Test set results
t0=time()
y_pred_rf = rf.predict(X_test)
prediction_time_container['random_forest']=time()-t0
rf_test_accuracy = accuracy_score(y_test,y_pred_rf)
accuracy_container['random_forest'] = rf_test_accuracy
print('Training Accuracy : ', accuracy_score(y_train,rf.predict(X_train)))
print('Testing Accuracy: ',rf_test_accuracy )
print("Training Time: ",training_time_container['random_forest'])
print("Prediction Time: ",prediction_time_container['random_forest'])
print(confusion_matrix(y_test,y_pred_rf))
# on tfidf
rf = RandomForestClassifier(n_estimators=50)
#Fitting the model
t0=time()
rf.fit(X_train_tfidf,y_train_tfidf)
training_time_container['random_forest_tfidf']=time()-t0
# Predicting the Test set results
t0=time()
ypred_rf_tf = rf.predict(X_test_tfidf)
prediction_time_container['random_forest_tfidf']=time()-t0
rf_test_accuracy_tf = accuracy_score(y_test_tfidf,ypred_rf_tf)
accuracy_container['random_forest_tfidf'] = rf_test_accuracy_tf
print('Training Accuracy: ', accuracy_score(y_train_tfidf,rf.predict(X_train_tfidf)))
print('Testing Accuracy: ',rf_test_accuracy_tf )
print("Training Time: ",training_time_container['random_forest_tfidf'])
print("Prediction Time: ",prediction_time_container['random_forest_tfidf'])
print(confusion_matrix(y_test_tfidf,ypred_rf_tf))
fig=go.Figure(data=[go.Bar(y=list(training_time_container.values()),x=list(training_time_container.keys()),
marker={'color':np.arange(len(list(training_time_container.values())))}
,text=list(training_time_container.values()), textposition='auto' )])
fig.update_layout(autosize=True, plot_bgcolor='rgb(255, 255, 255)',
title="Comparison of Training Time of different classifiers",
xaxis_title="Machine Learning Models",
yaxis_title="Training time in seconds" )
fig.data[0].marker.line.width = 3
fig.data[0].marker.line.color = "black"
fig
fig=go.Figure(data=[go.Bar(y=list(prediction_time_container.values()),x=list(prediction_time_container.keys()),
marker={'color':np.arange(len(list(prediction_time_container.values())))}
,text=list(prediction_time_container.values()), textposition='auto' )])
fig.update_layout(autosize=True, plot_bgcolor='rgb(255, 255, 255)',
title="Comparison of Prediction Time of different classifiers",
xaxis_title="Machine Learning Models",
yaxis_title="Prediction time in seconds" )
fig.data[0].marker.line.width = 3
fig.data[0].marker.line.color = "black"
fig
fig=go.Figure(data=[go.Bar(y=list(accuracy_container.values()),x=list(accuracy_container.keys()),
marker={'color':np.arange(len(list(accuracy_container.values())))}
,text=list(accuracy_container.values()), textposition='auto' )])
fig.update_layout(autosize=True, plot_bgcolor='rgb(255, 255, 255)',
title="Comparison of Accuracy Scores of different classifiers",
xaxis_title="Machine Learning Models",
yaxis_title="Accuracy Scores" )
fig.data[0].marker.line.width = 3
fig.data[0].marker.line.color = "black"
fig
```
# Stratified K-fold CV
In machine learning, we usually split the dataset into a train set and a test set using the `train_test_split` class from sklearn, train the model on the train set, and test it on the test set. The problem is that whenever we change the `random_state` parameter of `train_test_split()`, we get a different accuracy, so we can't pin down a single accuracy figure for the model.<br>
The solution is K-Fold Cross-Validation, but plain K-Fold still suffers from a second problem: random sampling.<br>
Stratified K-Fold Cross-Validation solves both problems. It works like ordinary k-fold cross-validation, except that it uses stratified sampling instead of random sampling.
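The "stratified" part simply means that each fold preserves the overall label proportions. A toy sketch that deals each class's indices round-robin into k folds (scikit-learn's StratifiedKFold does this more carefully and supports shuffling):

```python
from collections import defaultdict

def stratified_folds(labels, k):
    """Assign each index to a fold so every fold mirrors the label distribution."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for pos, idx in enumerate(indices):   # deal each class round-robin
            folds[pos % k].append(idx)
    return folds

labels = ["a"] * 6 + ["b"] * 3
folds = stratified_folds(labels, 3)
# Every fold gets 2 "a" and 1 "b", matching the overall 2:1 ratio.
print([sorted(labels[i] for i in f) for f in folds])
```

With plain random folds, a small class could easily be missing from some fold entirely; stratification rules that out.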
## SVM
```
svm_skcv = linear_model.SGDClassifier(loss='hinge',alpha=0.0001)
# StratifiedKFold object.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
lst_accu_stratified_svm = []
for train_index, test_index in skf.split(X_smote_tfidf, Y_smote_tfidf):
    x_train_fold, x_test_fold = X_smote_tfidf[train_index], X_smote_tfidf[test_index]
    y_train_fold, y_test_fold = Y_smote_tfidf[train_index], Y_smote_tfidf[test_index]
    svm_skcv.fit(x_train_fold, y_train_fold)
    lst_accu_stratified_svm.append(svm_skcv.score(x_test_fold, y_test_fold))
# Print the output.
print('List of possible accuracy:', lst_accu_stratified_svm)
print('\nMaximum Accuracy That can be obtained from this model is:',max(lst_accu_stratified_svm)*100, '%')
print('\nMinimum Accuracy:', min(lst_accu_stratified_svm)*100, '%')
print('\nOverall Accuracy:',mean(lst_accu_stratified_svm)*100, '%')
```
## RandomForest
```
rf_skcv = RandomForestClassifier(n_estimators=50)
# StratifiedKFold object.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
lst_accu_stratified_rf = []
for train_index, test_index in skf.split(X_smote_tfidf, Y_smote_tfidf):
    x_train_fold, x_test_fold = X_smote_tfidf[train_index], X_smote_tfidf[test_index]
    y_train_fold, y_test_fold = Y_smote_tfidf[train_index], Y_smote_tfidf[test_index]
    rf_skcv.fit(x_train_fold, y_train_fold)
    lst_accu_stratified_rf.append(rf_skcv.score(x_test_fold, y_test_fold))
# Print the output.
print('List of possible accuracy:', lst_accu_stratified_rf)
print('\nMaximum Accuracy That can be obtained from this model is:', max(lst_accu_stratified_rf)*100, '%')
print('\nMinimum Accuracy:', min(lst_accu_stratified_rf)*100, '%')
print('\nOverall Accuracy:', mean(lst_accu_stratified_rf)*100, '%')
```
## Multinomial Naive Bayes
```
nb_skcv = MultinomialNB()
# StratifiedKFold object.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
lst_accu_stratified_nb = []
for train_index, test_index in skf.split(X_smote_tfidf, Y_smote_tfidf):
    x_train_fold, x_test_fold = X_smote_tfidf[train_index], X_smote_tfidf[test_index]
    y_train_fold, y_test_fold = Y_smote_tfidf[train_index], Y_smote_tfidf[test_index]
    nb_skcv.fit(x_train_fold, y_train_fold)
    lst_accu_stratified_nb.append(nb_skcv.score(x_test_fold, y_test_fold))
# Print the output.
print('List of possible accuracy:', lst_accu_stratified_nb)
print('\nMaximum Accuracy That can be obtained from this model is:', max(lst_accu_stratified_nb)*100, '%')
print('\nMinimum Accuracy:', min(lst_accu_stratified_nb)*100, '%')
print('\nOverall Accuracy:', mean(lst_accu_stratified_nb)*100, '%')
```
# Save the models
```
import joblib
# cv and tfidf
joblib.dump(count_vectorizer, 'cv.pkl', compress=8)
joblib.dump(tfidf_vectorizer, 'tfidf.pkl', compress=8)
# mnb
joblib.dump(nb, 'mnb.pkl', compress=8)
# svm
joblib.dump(svm_classifier, 'svm.pkl', compress=8)
# randomforest
joblib.dump(rf, 'rf.pkl', compress=8)
```
# LSTM
We are not going to create a plain RNN model because of its vanishing-gradient problem; instead we will create an LSTM model. LSTMs have an additional state called the 'cell state' through which the network regulates the flow of information. The advantage of this state is that the model can remember or forget its learnings more selectively.
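For reference, in the standard LSTM formulation the cell state $c_t$ is regulated by forget, input, and output gates:

$$f_t = \sigma(W_f [h_{t-1}, x_t] + b_f), \qquad i_t = \sigma(W_i [h_{t-1}, x_t] + b_i), \qquad o_t = \sigma(W_o [h_{t-1}, x_t] + b_o)$$

$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c [h_{t-1}, x_t] + b_c), \qquad h_t = o_t \odot \tanh(c_t)$$

The forget gate $f_t$ decides how much of the previous cell state is kept and the input gate $i_t$ how much new information is written in, which is exactly the selective remember/forget behaviour described above.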
First we tokenize the text, turning each document into a sequence of integer word indices. Then we pad the sequences: padding is required because the sentences have different lengths and we need to make them all the same length. We do this by appending 0s to the end of each sequence with the help of the pad_sequences function from Keras.
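The padding step itself is mechanical; a stdlib sketch of what `pad_sequences(..., padding='post')` produces. One caveat: this sketch truncates overly long sequences from the end, whereas Keras defaults to `truncating='pre'` (dropping tokens from the front):

```python
def pad_post(sequences, maxlen, value=0):
    """Right-pad (and end-truncate) integer sequences to a fixed length."""
    padded = []
    for seq in sequences:
        seq = list(seq)[:maxlen]                      # truncate if too long
        padded.append(seq + [value] * (maxlen - len(seq)))
    return padded

print(pad_post([[5, 3], [7, 1, 4, 9]], maxlen=3))  # [[5, 3, 0], [7, 1, 4]]
```

Fixed-length rows are what allow the whole batch to be stacked into one tensor for the Embedding layer.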
```
max_features = 6433 # the maximum number of words to keep, based on word frequency
tokenizer = Tokenizer(num_words=max_features )
tokenizer.fit_on_texts(df['cleaned_text'].values)
X = tokenizer.texts_to_sequences(df['cleaned_text'].values)
X = pad_sequences(X, padding = 'post', maxlen = 6433 )
X
X.shape[1]
Y = pd.get_dummies(df['label']).values
Y
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size = 0.25, random_state = 42,stratify = Y)
print(X_train.shape,Y_train.shape)
print(X_test.shape,Y_test.shape)
embid_dim = 300
lstm_out = 32
model = keras.Sequential()
model.add(Embedding(max_features, embid_dim, input_length = X.shape[1] ))
model.add(Bidirectional(LSTM(lstm_out)))
model.add(Dropout(0.4))
model.add(Dense(32, activation = 'relu'))
model.add(Dropout(0.4))
model.add(Dense(9,activation = 'softmax'))
model.summary()
```
Our model is created; now it's time to train it. We will use 10 epochs, with early stopping on the training loss.
```
batch_size = 128
earlystop = EarlyStopping(monitor='loss', min_delta=0, patience=3, verbose=0, mode='auto')
model.compile(loss = 'categorical_crossentropy', optimizer='adam',metrics = ['accuracy'])
history = model.fit(X_train, Y_train, epochs = 10, batch_size=batch_size, verbose = 1, validation_data= (X_test, Y_test),callbacks=[earlystop])
```
### Plot Accuracy and Loss
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
### Save LSTM model
```
model.save('lstm.h5')
```
# BERT
Now we will build the BERT model. Our kernel has limited memory, so we will take a 50% sample of the dataset.
```
df_bert = df.sample(frac=0.5)
df_bert.reset_index(inplace=True)
df_bert['target'].value_counts()
```
Since our dataset is imbalanced, we split it in a stratified way
```
X_train, X_val, y_train, y_val = train_test_split(df_bert.index.values,
df_bert.target.values,
test_size=0.15,
random_state=42,
stratify=df_bert.target.values)
df_bert['data_type'] = ['not_set']*df_bert.shape[0]
df_bert.loc[X_train, 'data_type'] = 'train'
df_bert.loc[X_val, 'data_type'] = 'val'
```
Now we will construct the BERT tokenizer, which is based on WordPiece. We instantiate a pre-trained model configuration to encode our data.
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',
do_lower_case=True)
```
- To convert all the texts into encoded form, we use a function called *batch_encode_plus*, processing the train and validation data separately. The first parameter of the function is the text.
- *add_special_tokens=True* means the sequences will be encoded with the special tokens relative to their model
- *return_attention_mask=True* returns the attention mask, which marks real tokens (1) versus the padding added up to *max_length* (0)
```
encoded_data_train = tokenizer.batch_encode_plus(
df_bert[df_bert.data_type=='train'].final_text.values,
add_special_tokens=True,
return_attention_mask=True,
padding='max_length',
max_length=256,
return_tensors='pt'
)
encoded_data_val = tokenizer.batch_encode_plus(
df_bert[df_bert.data_type=='val'].final_text.values,
add_special_tokens=True,
return_attention_mask=True,
padding='max_length',
max_length=256,
return_tensors='pt'
)
input_ids_train = encoded_data_train['input_ids']
attention_masks_train = encoded_data_train['attention_mask']
labels_train = torch.tensor(df_bert[df_bert.data_type=='train'].target.values)
input_ids_val = encoded_data_val['input_ids']
attention_masks_val = encoded_data_val['attention_mask']
labels_val = torch.tensor(df_bert[df_bert.data_type=='val'].target.values)
```
Now that we have the encoded data, we can create the training and validation datasets
```
dataset_train = TensorDataset(input_ids_train, attention_masks_train, labels_train)
dataset_val = TensorDataset(input_ids_val, attention_masks_val, labels_val)
# length of training and validation data
len(dataset_train), len(dataset_val)
```
We treat each text as its own unique sequence, so each sequence will be classified into one of the 9 labels
```
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
num_labels=9,  # the label encoder produced 9 classes
output_attentions=False,
output_hidden_states=False)
```
DataLoader combines a dataset and a sampler and provides an iterable over the given dataset.
```
batch_size = 3
dataloader_train = DataLoader(dataset_train,
sampler=RandomSampler(dataset_train),
batch_size=batch_size)
dataloader_validation = DataLoader(dataset_val,
sampler=SequentialSampler(dataset_val),
batch_size=batch_size)
optimizer = AdamW(model.parameters(),
lr=1e-5,
eps=1e-8)
epochs = 3
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps=0,
num_training_steps=len(dataloader_train)*epochs)
```
We will use the weighted F1 score as our performance metric
```
def f1_score_func(preds, labels):
    preds_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()
    return f1_score(labels_flat, preds_flat, average='weighted')
seed_val = 17
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
print(device)
```
### Training loop
```
def evaluate(dataloader_val):
    model.eval()
    loss_val_total = 0
    predictions, true_vals = [], []
    for batch in dataloader_val:
        batch = tuple(b.to(device) for b in batch)
        inputs = {'input_ids': batch[0],
                  'attention_mask': batch[1],
                  'labels': batch[2],
                  }
        with torch.no_grad():
            outputs = model(**inputs)
        loss = outputs[0]
        logits = outputs[1]
        loss_val_total += loss.item()
        logits = logits.detach().cpu().numpy()
        label_ids = inputs['labels'].cpu().numpy()
        predictions.append(logits)
        true_vals.append(label_ids)
    loss_val_avg = loss_val_total / len(dataloader_val)
    predictions = np.concatenate(predictions, axis=0)
    true_vals = np.concatenate(true_vals, axis=0)
    return loss_val_avg, predictions, true_vals
for epoch in tqdm(range(1, epochs + 1)):
    model.train()  # switch back to training mode (evaluate() puts the model in eval mode)
    loss_train_total = 0
    progress_bar = tqdm(dataloader_train, desc='Epoch {:1d}'.format(epoch), leave=False, disable=False)
    for batch in progress_bar:
        model.zero_grad()
        batch = tuple(b.to(device) for b in batch)
        inputs = {'input_ids': batch[0],
                  'attention_mask': batch[1],
                  'labels': batch[2],
                  }
        outputs = model(**inputs)
        loss = outputs[0]
        loss_train_total += loss.item()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        scheduler.step()
        progress_bar.set_postfix({'training_loss': '{:.3f}'.format(loss.item() / len(batch))})
    tqdm.write(f'\nEpoch {epoch}')
    loss_train_avg = loss_train_total / len(dataloader_train)
    tqdm.write(f'Training loss: {loss_train_avg}')
    val_loss, predictions, true_vals = evaluate(dataloader_validation)
    val_f1 = f1_score_func(predictions, true_vals)
    tqdm.write(f'Validation loss: {val_loss}')
    tqdm.write(f'F1 Score (Weighted): {val_f1}')
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Upload a Fairness Dashboard to Azure Machine Learning Studio
**This notebook shows how to generate and upload a fairness assessment dashboard from Fairlearn to AzureML Studio**
## Table of Contents
1. [Introduction](#Introduction)
1. [Loading the Data](#LoadingData)
1. [Processing the Data](#ProcessingData)
1. [Training Models](#TrainingModels)
1. [Logging in to AzureML](#LoginAzureML)
1. [Registering the Models](#RegisterModels)
1. [Using the Fairlearn Dashboard](#LocalDashboard)
1. [Uploading a Fairness Dashboard to Azure](#AzureUpload)
1. Computing Fairness Metrics
1. Uploading to Azure
1. [Conclusion](#Conclusion)
<a id="Introduction"></a>
## Introduction
In this notebook, we walk through a simple example of using the `azureml-contrib-fairness` package to upload a collection of fairness statistics for a fairness dashboard. It is an example of integrating the [open source Fairlearn package](https://www.github.com/fairlearn/fairlearn) with Azure Machine Learning. This is not an example of fairness analysis or mitigation - this notebook simply shows how to get a fairness dashboard into the Azure Machine Learning portal. We will load the data and train a couple of simple models. We will then use Fairlearn to generate data for a Fairness dashboard, which we can upload to Azure Machine Learning portal and view there.
### Setup
To use this notebook, an Azure Machine Learning workspace is required.
Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.
This notebook also requires the following packages:
* `azureml-contrib-fairness`
* `fairlearn==0.4.6`
* `joblib`
* `shap`
<a id="LoadingData"></a>
## Loading the Data
We use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:
```
from sklearn import svm
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
import pandas as pd
import shap
```
Now we can load the data:
```
X_raw, Y = shap.datasets.adult()
```
We can take a look at some of the data. For example, the next cell shows the counts of the different races identified in the dataset:
```
print(X_raw["Race"].value_counts().to_dict())
```
<a id="ProcessingData"></a>
## Processing the Data
With the data loaded, we process it for our needs. First, we extract the sensitive features of interest into `A` (conventionally used in the literature) and put the rest of the feature data into `X`:
```
A = X_raw[['Sex','Race']]
X = X_raw.drop(labels=['Sex', 'Race'],axis = 1)
X = pd.get_dummies(X)
```
Next, we apply a standard set of scalings:
```
sc = StandardScaler()
X_scaled = sc.fit_transform(X)
X_scaled = pd.DataFrame(X_scaled, columns=X.columns)
le = LabelEncoder()
Y = le.fit_transform(Y)
```
Finally, we can then split our data into training and test sets, and also make the labels on our test portion of `A` human-readable:
```
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(
    X_scaled, Y, A, test_size=0.2, random_state=0, stratify=Y)
# Work around indexing issue
X_train = X_train.reset_index(drop=True)
A_train = A_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
A_test = A_test.reset_index(drop=True)
# Improve labels (use .loc with a column label to avoid chained-assignment pitfalls)
A_test.loc[A_test['Sex'] == 0, 'Sex'] = 'female'
A_test.loc[A_test['Sex'] == 1, 'Sex'] = 'male'
A_test.loc[A_test['Race'] == 0, 'Race'] = 'Amer-Indian-Eskimo'
A_test.loc[A_test['Race'] == 1, 'Race'] = 'Asian-Pac-Islander'
A_test.loc[A_test['Race'] == 2, 'Race'] = 'Black'
A_test.loc[A_test['Race'] == 3, 'Race'] = 'Other'
A_test.loc[A_test['Race'] == 4, 'Race'] = 'White'
```
<a id="TrainingModels"></a>
## Training Models
We now train a couple of different models on our data. The `adult` census dataset is a classification problem - the goal is to predict whether a particular individual exceeds an income threshold. For the purpose of generating a dashboard to upload, it is sufficient to train two basic classifiers. First, a logistic regression classifier:
```
lr_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)
lr_predictor.fit(X_train, Y_train)
```
And for comparison, a support vector classifier:
```
svm_predictor = svm.SVC()
svm_predictor.fit(X_train, Y_train)
```
<a id="LoginAzureML"></a>
## Logging in to AzureML
With our two classifiers trained, we can log into our AzureML workspace:
```
from azureml.core import Workspace, Experiment, Model
ws = Workspace.from_config()
ws.get_details()
```
<a id="RegisterModels"></a>
## Registering the Models
Next, we register our models. By default, the subroutine which uploads the models checks that the names provided correspond to registered models in the workspace. We define a utility routine to do the registering:
```
import joblib
import os

os.makedirs('models', exist_ok=True)

def register_model(name, model):
    print("Registering ", name)
    model_path = "models/{0}.pkl".format(name)
    joblib.dump(value=model, filename=model_path)
    registered_model = Model.register(model_path=model_path,
                                      model_name=name,
                                      workspace=ws)
    print("Registered ", registered_model.id)
    return registered_model.id
```
Now, we register the models. For convenience in subsequent method calls, we store the results in a dictionary, which maps the `id` of the registered model (a string in `name:version` format) to the predictor itself:
```
model_dict = {}
lr_reg_id = register_model("fairness_linear_regression", lr_predictor)
model_dict[lr_reg_id] = lr_predictor
svm_reg_id = register_model("fairness_svm", svm_predictor)
model_dict[svm_reg_id] = svm_predictor
```
<a id="LocalDashboard"></a>
## Using the Fairlearn Dashboard
We can now examine the fairness of the two models we have trained, both as a function of race and (binary) sex. Before uploading the dashboard to the AzureML portal, we will first instantiate a local instance of the Fairlearn dashboard.
Regardless of the viewing location, the dashboard is based on three things - the true values, the model predictions and the sensitive feature values. The dashboard can use predictions from multiple models and multiple sensitive features if desired (as we are doing here).
Our first step is to generate a dictionary mapping the `id` of the registered model to the corresponding array of predictions:
```
ys_pred = {}
for n, p in model_dict.items():
    ys_pred[n] = p.predict(X_test)
```
We can examine these predictions in a locally invoked Fairlearn dashboard. This can be compared to the dashboard uploaded to the portal (in the next section):
```
from fairlearn.widget import FairlearnDashboard
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['Sex', 'Race'],
y_true=Y_test.tolist(),
y_pred=ys_pred)
```
<a id="AzureUpload"></a>
## Uploading a Fairness Dashboard to Azure
Uploading a fairness dashboard to Azure is a two-stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. The required stages are therefore:
1. Precompute all the required metrics
1. Upload to Azure
### Computing Fairness Metrics
We use Fairlearn to create a dictionary which contains all the data required to display a dashboard. This includes both the raw data (true values, predicted values and sensitive features), and also the fairness metrics. The API is similar to that used to invoke the Dashboard locally. However, there are a few minor changes to the API, and the type of problem being examined (binary classification, regression etc.) needs to be specified explicitly:
```
sf = { 'Race': A_test.Race, 'Sex': A_test.Sex }
from fairlearn.metrics._group_metric_set import _create_group_metric_set
dash_dict = _create_group_metric_set(y_true=Y_test,
predictions=ys_pred,
sensitive_features=sf,
prediction_type='binary_classification')
```
The `_create_group_metric_set()` method is currently underscored since its exact design is not yet final in Fairlearn.
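Independent of the upload format, the per-group quantities behind such a dashboard are ordinary conditional statistics. As a rough illustration (using made-up toy predictions, not the notebook's data), a per-group selection rate can be computed directly with pandas:

```python
import pandas as pd

# Toy predictions and a sensitive feature; the values are purely illustrative.
df = pd.DataFrame({
    "y_pred": [1, 0, 1, 1, 0, 0],
    "sex": ["female", "female", "female", "male", "male", "male"],
})

# Selection rate = fraction of positive predictions, computed per group.
rates = df.groupby("sex")["y_pred"].mean()
print(rates.to_dict())
```

The precomputed dictionary uploaded to Azure packages metrics like this (and many others) for every model and sensitive feature combination.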
### Uploading to Azure
We can now import the `azureml.contrib.fairness` package itself. We will round-trip the data, so there are two required subroutines:
```
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
```
Finally, we can upload the generated dictionary to AzureML. The upload method requires a run, so we first create an experiment and a run. The uploaded dashboard can be seen on the corresponding Run Details page in AzureML Studio. For completeness, we also download the dashboard dictionary which we uploaded.
```
exp = Experiment(ws, "notebook-01")
print(exp)
run = exp.start_logging()
try:
    dashboard_title = "Sample notebook upload"
    upload_id = upload_dashboard_dictionary(run,
                                            dash_dict,
                                            dashboard_name=dashboard_title)
    print("\nUploaded to id: {0}\n".format(upload_id))
    downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
    run.complete()
```
Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:
```
print(dash_dict == downloaded_dict)
```
<a id="Conclusion"></a>
## Conclusion
In this notebook we have demonstrated how to generate and upload a fairness dashboard to AzureML Studio. We have not discussed how to analyse the results and apply mitigations. Those topics will be covered elsewhere.
| github_jupyter |
```
import numpy as np
import json
from scipy import optimize
from os import listdir
import re
import os
from os.path import isfile, join
import pandas as pd
from joblib import Parallel, delayed, dump
%matplotlib inline
import matplotlib.pyplot as plt
from skimage import measure
import matplotlib
import seaborn as sns
import csv
from scipy import stats
NJOBS = 35
#sns.set(style="white")
#sns.set_context("paper")
def figsize(scale, ratio):
    fig_width_pt = 468.0                      # Get this from LaTeX using \the\textwidth
    inches_per_pt = 1.0 / 72.27               # Convert pt to inch
    golden_mean = (np.sqrt(5.0) - 1.0) / 2.0  # Aesthetic ratio (you could change this)
    fig_width = fig_width_pt * inches_per_pt * scale  # width in inches
    if ratio == "golden":                     # Golden ratio ...
        fig_height = fig_width * golden_mean  # height in inches
    else:                                     # ... or other ratio
        fig_height = fig_width * ratio
    fig_size = [fig_width, fig_height]
    return fig_size
pgf_with_latex = {'backend': 'pdf',
'axes.labelsize': 8,
'xtick.labelsize': 8,
'ytick.labelsize': 8,
'legend.fontsize': 8,
'lines.markersize': 3,
'font.size': 8,
'font.family': u'sans-serif',
'font.sans-serif': ['Arial'],
'pdf.fonttype': 42,
'ps.fonttype': 42,
'text.usetex': False}
matplotlib.rcParams.update(pgf_with_latex)
from collections import defaultdict
import gzip
from tqdm import tqdm_notebook
from itertools import chain
def collect_distance_file(prefix, directoryname, tileid):
    with gzip.open(directoryname + '{}{}.json.gz'.format(prefix, tileid), 'rt') as f:
        d = json.load(f)
    return d['distances'][0]
def collect_distances(directoryname, df_classes, distance_name='energy', modelname=''):
    dours = {}
    drybski = {}
    prefix_name = '{}_'.format(distance_name)
    if modelname:
        prefix_name = '{}_{}_'.format(distance_name, modelname)
    prefix_name_rybski = '{}_{}_'.format(distance_name, 'rybski')
    for c in [1, 2, 3, 4]:
        tiles = df_classes[df_classes['class'] == c]['tileid'].values
        dours[c] = [r for r in Parallel(n_jobs=NJOBS)(
            delayed(collect_distance_file)(prefix_name, directoryname, t)
            for t in tqdm_notebook(tiles)
            if isfile(directoryname + '{}{}.json.gz'.format(prefix_name, t)))]
        drybski[c] = [r for r in Parallel(n_jobs=NJOBS)(
            delayed(collect_distance_file)(prefix_name_rybski, directoryname, t)
            for t in tqdm_notebook(tiles)
            if isfile(directoryname + '{}{}.json.gz'.format(prefix_name_rybski, t)))]
    return dours, drybski
def create_dataframe_distances(dours, drybski):
    # Seed row so pd.concat has a frame to extend; it is dropped again below.
    df = pd.DataFrame([[10, 'tom', 'c']], columns=['dist', 'Class', 'model'])
    for c in [1, 2, 3, 4]:
        array_distances = np.hstack((np.array(dours[c]), np.array(drybski[c]))).reshape(-1, 1)
        repeated = np.repeat(c, len(array_distances)).reshape(-1, 1)
        repeated_model = np.hstack((np.repeat('multi', len(dours[c])),
                                    np.repeat('rybski', len(drybski[c])))).reshape(-1, 1)
        df = pd.concat((df, pd.DataFrame(np.hstack((array_distances, repeated, repeated_model)),
                                         columns=['dist', 'Class', 'model'])))
    df = df[df['Class'] != 'tom']
    df['dist'] = df['dist'].astype('float32')
    df = df.sort_values(['model', 'Class'])
    return df
def compute_kolmogorov_classes(df):
    for c in ['1', '2', '3', '4']:
        data1 = df[(df['Class'] == c) & (df['model'] == 'multi')]['dist'].values
        data2 = df[(df['Class'] == c) & (df['model'] == 'rybski')]['dist'].values
        d, p = stats.ks_2samp(data1, data2)
        print("KS class {}: {} (p-value {})".format(c, d, p))
df_classes = pd.read_csv('../data/generated_files/quantiles_classes.csv', dtype={'tileid': str})
df_classes.head()
df = pd.read_csv('../data/generated_files/summary_tiles_05x05.csv', dtype={'tileid': 'str'})
df['perc_urban'] = df['urban_area_km2'] / df['original_km2']
df['perc_constraint'] = 1 - (df['tile_km2'] / df['original_km2'])
df = pd.merge(df[['tileid', 'perc_urban', 'perc_constraint']], df_classes, on='tileid')
rootdir = "../data/generated_files/simulations/distances/"
dours, drybski = collect_distances(rootdir, df_classes, distance_name='energy')
df = create_dataframe_distances(dours, drybski)
df.head()
print("Model multi")
data1 = df[(df['model'] == 'multi')]['dist'].values
data2 = df[(df['model'] == 'rybski')]['dist'].values
d, p = stats.ks_2samp(data1, data2)
print("KS all: {} (p-value {})".format(d, p))
compute_kolmogorov_classes(df)
colors = np.array([[109, 110, 113],
[230, 231, 232]])/255
f, axs = plt.subplots(1,1, figsize=figsize(1, 0.8))
g = sns.boxplot(x="Class", y="dist", hue='model', data=df, linewidth=0.5, showfliers=False,
palette=sns.color_palette(colors))
leg = g.axes.get_legend()
leg.set_title('Model')
renames = {'multi': 'Multi parameter', 'rybski': 'Single parameter'}
for t in leg.texts:
    old = t.get_text()
    t.set_text(renames[old])
original = np.median(df[df['model'] == 'rybski']['dist'].values)
decrease = original - np.median(df[df['model'] == 'multi']['dist'].values)
print("Median diff:", decrease/original*100)
plt.savefig('../figures/manuscript/boxplot_distances.pdf', dpi=150, bbox_inches='tight', pad_inches=0.05)
doursKL, drybskiKL = collect_distances(rootdir, df_classes, distance_name='kl')
dfKL = create_dataframe_distances(doursKL, drybskiKL)
dfKL.head()
doursJS, drybskiJS = collect_distances(rootdir, df_classes, distance_name='js')
dfJS = create_dataframe_distances(doursJS, drybskiJS)
dfJS.head()
def edit_distance_legend(g):
    leg = g.axes.get_legend()
    leg.set_title('Model')
    renames = {'multi': 'Multi parameter', 'rybski': 'Single parameter'}
    for t in leg.texts:
        old = t.get_text()
        t.set_text(renames[old])
colors = np.array([[109, 110, 113],
[230, 231, 232]])/255
f, axs = plt.subplots(1,3, figsize=figsize(1.5, 0.4))
g = sns.boxplot(x="Class", y="dist", hue='model', data=df, linewidth=0.5, showfliers=False,
palette=sns.color_palette(colors), ax=axs[0])
edit_distance_legend(g)
g = sns.boxplot(x="Class", y="dist", hue='model', data=dfKL, linewidth=0.5, showfliers=False,
palette=sns.color_palette(colors), ax=axs[1])
edit_distance_legend(g)
g = sns.boxplot(x="Class", y="dist", hue='model', data=dfJS, linewidth=0.5, showfliers=False,
palette=sns.color_palette(colors), ax=axs[2])
edit_distance_legend(g)
#axs[0].set_ylim((0.2, 2.2))
axs[0].set_ylabel("Distance")
axs[1].set_ylabel("Distance")
axs[2].set_ylabel("Distance")
axs[0].set_title("Energy distance")
axs[1].set_title("Kullback Leibler divergence")
axs[2].set_title("Jensen–Shannon divergence")
f.tight_layout()
plt.savefig('../figures/manuscript/appendix_boxplot_distances.pdf', dpi=150, bbox_inches='tight', pad_inches=0.05)
dmultiprob, drybski = collect_distances(rootdir, df_classes, distance_name='energy', modelname='multi')
df_multiprob = create_dataframe_distances(dmultiprob, drybski)
df_multiprob.head()
print("Model multiprob")
data1 = df_multiprob[(df_multiprob['model'] == 'multi')]['dist'].values
data2 = df_multiprob[(df_multiprob['model'] == 'rybski')]['dist'].values
d, p = stats.ks_2samp(data1, data2)
print("KS all: {} (p-value {})".format(d, p))
compute_kolmogorov_classes(df_multiprob)
```
## Classes real vs simulated
```
df_classes = pd.read_csv('../data/generated_files/quantiles_classes.csv', dtype={'tileid': str})
print(len(df_classes))
df_classes.head()
from functools import reduce
dfs = []
for m in ['1000', 'rybski', 'marco']:
    df_temp = pd.read_csv('../data/generated_files/quantiles_classes_{}_all.csv'.format(m), dtype={'tileid': 'str'})
    df_temp = df_temp.rename(columns={'class': 'class_{}'.format(m), 'quantile': 'quantile_{}'.format(m)})
    df_temp['class_{}'.format(m)] = df_temp['class_{}'.format(m)].astype('int')
    dfs.append(df_temp)
df_merged_classes = reduce(lambda left,right: pd.merge(left,right, on='tileid'), dfs)
df_merged_classes = pd.merge(df_merged_classes, df_classes.rename(columns={'class': 'class_truth', 'quantile': 'quantile_truth'}), on='tileid')
print(len(df_merged_classes))
df_merged_classes.head()
df_merged_classes.to_csv('../data/generated_files/quantiles_classes_merged_all.csv', index=False)
df_melted_classes = df_merged_classes.melt(id_vars=['tileid'], value_vars=['class_1000', 'class_rybski', 'class_truth'], #, 'class_marco'
var_name='model', value_name='class')
df_melted_classes.loc[:, 'model'] = df_melted_classes['model'].str.replace('class_', '')
df_melted_classes.head()
df_count_classes = df_melted_classes.groupby(['model', 'class'], as_index=False).count()
df_count_classes = df_count_classes.rename(columns={'tileid': 'num'})
df_count_classes.head()
from matplotlib.ticker import PercentFormatter
g = sns.catplot(x="class", y='num', hue="model",
data=df_count_classes[df_count_classes.model.isin({'rybski', '1000'})].sort_values('model', ascending=False),
height=5, kind="bar", palette="muted")
#plt.axhline(y=500, linestyle='--', color='gray')
plt.ylabel("Number of tiles")
plt.savefig('../figures/manuscript/boxplot_classes.pdf', dpi=150, bbox_inches='tight', pad_inches=0.05)
len(pd.merge(df_merged_classes, df_classes, on='tileid'))
len(df_merged_classes), len(df_classes)
from sklearn.metrics import f1_score
def f1_score_classes(true_classes, sim_classes, class_num):
    # One-vs-rest F1: treat membership in class_num as the positive label.
    n = len(sim_classes)
    y_true = np.zeros(n)
    y_pred = np.zeros(n)
    y_true[true_classes == class_num] = 1
    y_pred[sim_classes == class_num] = 1
    return f1_score(y_true, y_pred)
results = []
df_merged_classes2 = df_merged_classes.copy()
#print(len(df_merged_classes2))
#df_merged_classes2 = df_merged_classes2[~df_merged_classes2.tileid.isin(df_summ[df_summ.perc_constraint > 0.6].tileid.values)]
#print(len(df_merged_classes2))
for m in ['1000', 'rybski', 'marco']:
    print("MODEL", m)
    for class_num in [1, 2, 3, 4]:
        df_sub = df_merged_classes2[(df_merged_classes2['class_truth'] == class_num)]
        true_classes = df_sub['class_truth'].values
        sim_classes = df_sub['class_{}'.format(m)].values
        print(class_num, f1_score_classes(true_classes, sim_classes, class_num))
        results.append((m, class_num, f1_score_classes(true_classes, sim_classes, class_num)))
    print("")
for m in ['1000', 'rybski', 'marco']:
    print("MODEL", m)
    df_sub = df_merged_classes2.copy()
    true_classes = df_sub['class_truth'].values
    sim_classes = df_sub['class_{}'.format(m)].values
    print(f1_score(true_classes, sim_classes, average='weighted'))
    print("")
df = pd.DataFrame(data=results, columns=['model', 'class', 'f1'])
df.head()
original = np.median(df[df['model'] == 'rybski']['f1'].values)
decrease = original - np.median(df[df['model'] == '1000']['f1'].values)
print("Median diff:", decrease/original*100)
flatui = [np.array([151, 152, 155])/255., np.array([230,231,232])/255.]
palette = sns.color_palette(flatui)
g = sns.catplot(x="class", y='f1', hue="model", data=df[df.model.isin({'1000', 'rybski'})],
height=5, kind="bar", palette=palette)
plt.ylabel("F1-score")
plt.savefig('../figures/manuscript/boxplot_classes.pdf', dpi=150, bbox_inches='tight', pad_inches=0.05)
g = sns.catplot(x="class", y='f1', hue="model", data=df,
height=3, kind="bar", palette=palette, aspect=1.5)
plt.ylabel("F1-score")
plt.savefig('../figures/manuscript/multi_boxplot_classes.pdf', dpi=150, bbox_inches='tight', pad_inches=0.05)
```
| github_jupyter |
# Heaps
## Overview
For this assignment you will start by modifying the heap data structure implemented in class to allow it to keep its elements sorted by an arbitrary priority (identified by a `key` function), then use the augmented heap to efficiently compute the running median of a set of numbers.
## 1. Augmenting the Heap with a `key` function
The heap implementation covered in class is for a so-called "max-heap" — i.e., one where elements are organized such that the one with the maximum value can be efficiently extracted.
This limits our usage of the data structure, however. Our heap can currently only accommodate elements that have a natural ordering (i.e., they can be compared using the '`>`' and '`<`' operators as used in the implementation), and there's no way to order elements based on some partial or computed property.
To make our heap more flexible, you'll update it to allow a `key` function to be passed to its initializer. This function will be used to extract a value from each element added to the heap; these values, in turn, will be used to order the elements.
We can now easily create heaps with different semantics, e.g.,
- `Heap(len)` will prioritize elements based on their length (e.g., applicable to strings, sequences, etc.)
- `Heap(lambda x: -x)` can function as a *min-heap* for numbers
- `Heap(lambda x: x.prop)` will prioritize elements based on their `prop` attribute
If no `key` function is provided, the default max-heap behavior should be used — the "`lambda x:x`" default value for the `__init__` method does just that.
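For comparison, the standard library's `heapq` (a plain min-heap with no `key` parameter) achieves the same effect by pushing `(priority, tiebreaker, item)` tuples; this sketch shows the idea our `key` argument packages up:

```python
import heapq

def smallest_first_by_length(strings):
    # Emulate a keyed heap by storing (priority, tiebreaker, item) tuples;
    # the tiebreaker keeps heapq from ever comparing the items themselves.
    heap = []
    for i, s in enumerate(strings):
        heapq.heappush(heap, (len(s), i, s))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

print(smallest_first_by_length(["hello", "hi", "abracadabra", "0"]))
```

Our `Heap(key=...)` design avoids the tuple-wrapping boilerplate by applying the key at comparison time instead.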
You will, at the very least, need to update the `heapify` and `add` methods, below, to complete this assignment. (Note, also, that `pop_max` has been renamed `pop`, while `max` has been renamed `peek`, to reflect their more general nature.)
```
class Heap:
    def __init__(self, key=lambda x: x):
        self.data = []
        self.key = key

    @staticmethod
    def _parent(idx):
        return (idx - 1) // 2

    @staticmethod
    def _left(idx):
        return idx * 2 + 1

    @staticmethod
    def _right(idx):
        return idx * 2 + 2

    def heapify(self, idx=0):
        key = self.key
        current = idx
        left = Heap._left(current)
        right = Heap._right(current)
        limit = len(self.data)
        maxidx = current
        while True:
            if left < limit and key(self.data[current]) < key(self.data[left]):
                maxidx = left
            if right < limit and key(self.data[maxidx]) < key(self.data[right]):
                maxidx = right
            if maxidx != current:
                self.data[current], self.data[maxidx] = self.data[maxidx], self.data[current]
                current = maxidx
                left = Heap._left(current)
                right = Heap._right(current)
            else:
                break

    def add(self, x):
        key = self.key
        self.data.append(x)
        current = len(self.data) - 1
        parent = Heap._parent(current)
        while True:
            if current != 0 and key(self.data[current]) > key(self.data[parent]):
                self.data[current], self.data[parent] = self.data[parent], self.data[current]
                current = parent
                parent = Heap._parent(current)
            else:
                break

    def peek(self):
        return self.data[0]

    def pop(self):
        ret = self.data[0]
        self.data[0] = self.data[len(self.data) - 1]
        del self.data[len(self.data) - 1]
        self.heapify()
        return ret

    def __bool__(self):
        return len(self.data) > 0

    def __len__(self):
        return len(self.data)

    def __repr__(self):
        return repr(self.data)
# (1 point)
from unittest import TestCase
import random
tc = TestCase()
h = Heap()
random.seed(0)
for _ in range(10):
    h.add(random.randrange(100))
tc.assertEqual(h.data, [97, 61, 65, 49, 51, 53, 62, 5, 38, 33])
# (1 point)
from unittest import TestCase
import random
tc = TestCase()
h = Heap(lambda x:-x)
random.seed(0)
for _ in range(10):
    h.add(random.randrange(100))
tc.assertEqual(h.data, [5, 33, 53, 38, 49, 65, 62, 97, 51, 61])
# (2 points)
from unittest import TestCase
import random
tc = TestCase()
h = Heap(lambda s:len(s))
h.add('hello')
h.add('hi')
h.add('abracadabra')
h.add('supercalifragilisticexpialidocious')
h.add('0')
tc.assertEqual(h.data,
['supercalifragilisticexpialidocious', 'abracadabra', 'hello', 'hi', '0'])
# (2 points)
from unittest import TestCase
import random
tc = TestCase()
h = Heap()
random.seed(0)
lst = list(range(-1000, 1000))
random.shuffle(lst)
for x in lst:
    h.add(x)
for x in range(999, -1000, -1):
    tc.assertEqual(x, h.pop())
# (2 points)
from unittest import TestCase
import random
tc = TestCase()
h = Heap(key=lambda x:abs(x))
random.seed(0)
lst = list(range(-1000, 1000, 3))
random.shuffle(lst)
for x in lst:
    h.add(x)
for x in reversed(sorted(range(-1000, 1000, 3), key=lambda x: abs(x))):
    tc.assertEqual(x, h.pop())
```
## 2. Computing the Running Median
The median of a series of numbers is simply the middle term if ordered by magnitude, or, if there is no middle term, the average of the two middle terms. E.g., the median of the series [3, 1, 9, 25, 12] is **9**, and the median of the series [8, 4, 11, 18] is **9.5**.
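Both worked examples can be double-checked against the standard library:

```python
from statistics import median

# median sorts internally; for an even count it averages the two middle terms.
print(median([3, 1, 9, 25, 12]))  # 9
print(median([8, 4, 11, 18]))     # 9.5
```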
If we are in the process of accumulating numerical data, it is useful to be able to compute the *running median* — where, as each new data point is encountered, an updated median is computed. This should be done, of course, as efficiently as possible.
The following function demonstrates a naive way of computing the running medians based on the series passed in as an iterable.
```
def running_medians_naive(iterable):
    values = []
    medians = []
    for i, x in enumerate(iterable):
        values.append(x)
        values.sort()
        if i % 2 == 0:
            medians.append(values[i // 2])
        else:
            medians.append((values[i // 2] + values[i // 2 + 1]) / 2)
    return medians

running_medians_naive([3, 1, 9, 25, 12])
running_medians_naive([8, 4, 11, 18])
```
Note that the function keeps track of all the values encountered during the iteration and uses them to compute the running medians, which are returned at the end as a list. The final running median, naturally, is simply the median of the entire series.
Unfortunately, because the function sorts the list of values during every iteration it is incredibly inefficient. Your job is to implement a version that computes each running median in O(log N) time using, of course, the heap data structure!
### Hints
- You will need to use two heaps for your solution: one min-heap, and one max-heap.
- The min-heap should be used to keep track of all values *greater than* the most recent running median, and the max-heap for all values *less than* the most recent running median — this way, the median will lie between the minimum value on the min-heap and the maximum value on the max-heap (both of which can be efficiently extracted)
- In addition, the difference between the number of values stored in the min-heap and max-heap must never exceed 1 (to ensure the median is being computed). This can be taken care of by intelligently `pop`-ping/`add`-ing elements between the two heaps.
```
def running_medians(iterable):
    min_heap = Heap(lambda x: -x)  # holds values above the current median
    max_heap = Heap()              # holds values below the current median
    medians = []
    middle = None                  # holds the median itself while the count is odd
    for val in iterable:
        if not medians:
            # first value: it is the median by itself
            middle = val
            medians.append(middle)
        elif middle is not None:
            # count was odd; push middle and val onto opposite heaps
            if val >= middle:
                min_heap.add(val)
                max_heap.add(middle)
            else:
                min_heap.add(middle)
                max_heap.add(val)
            middle = None
            medians.append((min_heap.peek() + max_heap.peek()) / 2)
        else:
            # count was even; the new middle comes off whichever heap val lands on
            median = (min_heap.peek() + max_heap.peek()) / 2
            if val >= median:
                min_heap.add(val)
                middle = min_heap.pop()
            else:
                max_heap.add(val)
                middle = max_heap.pop()
            medians.append(middle)
    return medians
# (2 points)
from unittest import TestCase
tc = TestCase()
tc.assertEqual([3, 2.0, 3, 6.0, 9], running_medians([3, 1, 9, 25, 12]))
# (2 points)
import random
from unittest import TestCase
tc = TestCase()
vals = [random.randrange(10000) for _ in range(1000)]
tc.assertEqual(running_medians_naive(vals), running_medians(vals))
# (4 points) MUST COMPLETE IN UNDER 10 seconds!
import random
from unittest import TestCase
tc = TestCase()
vals = [random.randrange(100000) for _ in range(100001)]
m_mid = sorted(vals[:50001])[50001//2]
m_final = sorted(vals)[len(vals)//2]
running = running_medians(vals)
tc.assertEqual(m_mid, running[50000])
tc.assertEqual(m_final, running[-1])
```
| github_jupyter |
<a href="https://colab.research.google.com/github/DingLi23/s2search/blob/pipelining/pipelining/exp-csit/exp-csit_csit_shapley_value.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Experiment Description
> This notebook is for experiment \<exp-csit\> and data sample \<csit\>.
### Initialization
```
%reload_ext autoreload
%autoreload 2
import numpy as np, sys, os
sys.path.insert(1, '../../')
from shapley_value import compute_shapley_value, feature_key_list
sv = compute_shapley_value('exp-csit', 'csit')
```
### Plotting
```
import matplotlib.pyplot as plt
import numpy as np
from s2search_score_pdp import pdp_based_importance
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(12, 5), dpi=200)
# generate some random test data
all_data = []
average_sv = []
sv_global_imp = []
for player_sv in [f'{player}_sv' for player in feature_key_list]:
    all_data.append(sv[player_sv])
    average_sv.append(pdp_based_importance(sv[player_sv]))
    sv_global_imp.append(np.mean(np.abs(list(sv[player_sv]))))
    # average_sv.append(np.std(sv[player_sv]))
    # print(np.max(sv[player_sv]))
# plot violin plot
axs[0].violinplot(all_data,
showmeans=False,
showmedians=True)
axs[0].set_title('Violin plot')
# plot box plot
axs[1].boxplot(all_data,
showfliers=False,
showmeans=True,
)
axs[1].set_title('Box plot')
# adding horizontal grid lines
for ax in axs:
    ax.yaxis.grid(True)
    ax.set_xticks([y + 1 for y in range(len(all_data))],
                  labels=['title', 'abstract', 'venue', 'authors', 'year', 'n_citations'])
    ax.set_xlabel('Features')
    ax.set_ylabel('Shapley Value')
plt.show()
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(12, 4), dpi=200)
# Example data
feature_names = ('title', 'abstract', 'venue', 'authors', 'year', 'n_citations')
y_pos = np.arange(len(feature_names))
# error = np.random.rand(len(feature_names))
# ax.xaxis.grid(True)
ax.barh(y_pos, average_sv, align='center', color='#008bfb')
ax.set_yticks(y_pos, labels=feature_names)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('PDP-based Feature Importance on Shapley Value')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
_, xmax = plt.xlim()
plt.xlim(0, xmax + 1)
for i, v in enumerate(average_sv):
    margin = 0.05
    ax.text(v + margin if v > 0 else margin, i, str(round(v, 4)), color='black', ha='left', va='center')
plt.show()
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(12, 4), dpi=200)
# Example data
feature_names = ('title', 'abstract', 'venue', 'authors', 'year', 'n_citations')
y_pos = np.arange(len(feature_names))
# error = np.random.rand(len(feature_names))
# ax.xaxis.grid(True)
ax.barh(y_pos, sv_global_imp, align='center', color='#008bfb')
ax.set_yticks(y_pos, labels=feature_names)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('SHAP Feature Importance')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
_, xmax = plt.xlim()
plt.xlim(0, xmax + 1)
for i, v in enumerate(sv_global_imp):
    margin = 0.05
    ax.text(v + margin if v > 0 else margin, i, str(round(v, 4)), color='black', ha='left', va='center')
plt.show()
```
| github_jupyter |
# How to avoid "SQL injection"?
### using `%s` token
```
def add_input(self, data):
    connection = self.connect()
    try:
        # Pass the value as a parameter; the driver escapes it for us.
        q = "INSERT INTO crimes (description) VALUES (%s);"
        with connection.cursor() as cur:
            cur.execute(q, (data,))
        connection.commit()
    finally:
        connection.close()
#dbhelper.py
import pymysql
import ch6_dbconfig

class DBHelper:
    def connect(self, database="crimemap"):
        return pymysql.connect(host="54.180.124.182",
                               user=ch6_dbconfig.db_user,
                               passwd=ch6_dbconfig.db_password,
                               db=database)

    def get_all_inputs(self):
        connection = self.connect()
        try:
            q = "SELECT description FROM crimes;"
            with connection.cursor() as cur:
                cur.execute(q)
                return cur.fetchall()
        finally:
            connection.close()

    def add_input(self, data):
        connection = self.connect()
        try:
            # Parameterized query: never splice user input into the SQL string.
            q = "INSERT INTO crimes (description) VALUES (%s)"
            with connection.cursor() as cur:
                cur.execute(q, (data,))
            connection.commit()
        finally:
            connection.close()

    def clear_all(self):
        connection = self.connect()
        try:
            q = "DELETE FROM crimes;"
            with connection.cursor() as cur:
                cur.execute(q)
            connection.commit()
        finally:
            connection.close()
```
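The same idea can be demonstrated end-to-end with the standard library's `sqlite3` driver (which uses `?` rather than `%s` as its placeholder): hostile-looking input is stored verbatim instead of being executed as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crimes (description TEXT)")

# Input that would break out of a naively formatted query string.
payload = "Robbery'); DROP TABLE crimes; --"

# Parameterized insert: the value travels separately from the SQL text.
conn.execute("INSERT INTO crimes (description) VALUES (?)", (payload,))
stored = conn.execute("SELECT description FROM crimes").fetchone()[0]
print(stored == payload)  # True: treated as data, not as SQL
```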
# How to get user input data?
### submit - /add (save) - /home (return)
- 1) click submit
- 2) the browser **request**s with the `userinput` data via the POST method
- 3) .py `/add` receives the data
- 4) .py `add_input()` **saves** the data in the DB
- 5) .py returns `home()`
- 6) .py `get_all_inputs()` **gets** the data
- 7) .py returns the data to the .html, which displays it
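The round trip above can be exercised without a browser or a database by using Flask's built-in test client; in this sketch the DB helper is replaced by a plain list so it runs standalone (requires Flask):

```python
from flask import Flask, request

app = Flask(__name__)
DATA = []  # stands in for the crimes table

@app.route("/")
def home():
    return "<br>".join(DATA)

@app.route("/add", methods=["POST"])
def add():
    DATA.append(request.form.get("userinput", ""))
    return home()

# POST a value, then confirm the home page renders it.
client = app.test_client()
client.post("/add", data={"userinput": "hello"})
print(client.get("/").get_data(as_text=True))
```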
```
#mod1_ch6.py
from dbhelper import DBHelper
from flask import Flask
from flask import render_template
from flask import request

app = Flask(__name__)
DB = DBHelper()

@app.route("/")
def home():
    try:
        data = DB.get_all_inputs()
    except Exception as e:
        print(e)
        data = None
    return render_template("home_ch6.html", data=data)

@app.route("/add", methods=["POST"])
def add():
    try:
        data = request.form.get("userinput")
        DB.add_input(data)
    except Exception as e:
        print(e)
    return home()

@app.route("/clear")
def clear():
    try:
        DB.clear_all()
    except Exception as e:
        print(e)
    return home()

if __name__ == '__main__':
    app.run(port=5000, debug=True)

# sql_setup
import pymysql
import ch6_dbconfig

connection = pymysql.connect(host="54.180.124.182",
                             user=ch6_dbconfig.db_user,
                             passwd=ch6_dbconfig.db_password)
try:
    with connection.cursor() as cur:
        query = "CREATE DATABASE IF NOT EXISTS crimemap"
        cur.execute(query)
        query = """
        CREATE TABLE IF NOT EXISTS crimemap.crimes (
            id int NOT NULL AUTO_INCREMENT,
            latitude FLOAT(10,6),
            longitude FLOAT(10,6),
            date DATETIME,
            category VARCHAR(50),
            description VARCHAR(1000),
            updated_at TIMESTAMP,
            PRIMARY KEY (id)
        )
        """
        cur.execute(query)
    connection.commit()
finally:
    connection.close()
```
| github_jupyter |
# Example charge diagram
```
import os
from qcodes import Station, load_or_create_experiment
from qcodes.dataset.plotting import plot_dataset
from qcodes.dataset.data_set import load_by_run_spec
import nanotune as nt
from nanotune.tests.mock_classifier import MockClassifer
from nanotune.tuningstages.chargediagram import ChargeDiagram
from nanotune.tuningstages.settings import DataSettings, SetpointSettings, Classifiers
from nanotune.device.device import Readout, NormalizationConstants
from sim.data_providers import QcodesDataProvider
from sim.qcodes_mocks import MockDoubleQuantumDotInstrument
nt_path = os.path.dirname(os.path.dirname(os.path.abspath(nt.__file__)))
```
Define databases
```
db_name_original_data2D = "dot_tuning_sequences.db"
db_folder_original_data = os.path.join(nt_path, "data", "tuning")
chargediagram_path = os.path.join(db_folder_original_data, db_name_original_data2D)
db_name_replay = 'qdsim_test.db'
db_folder_replay = os.getcwd()
```
Create qc.Station
```
station = Station()
qd_mock_instrument = MockDoubleQuantumDotInstrument()
station.add_component(qd_mock_instrument, name="qdmock")
qdsim = qd_mock_instrument.mock_device
```
Create the data provider
```
charge_state_data = QcodesDataProvider(
input_providers=[qdsim.left_plunger, qdsim.right_plunger],
db_path=chargediagram_path,
exp_name="GB_Newtown_Dev_1_1",
run_id = 19,
)
qd_mock_instrument.drain.set_data_provider(charge_state_data)
ds = nt.Dataset(19, db_name_original_data2D, db_folder_original_data)
range_to_sweep = [
(min(ds.data.voltage_x.values), max(ds.data.voltage_x.values)),
(min(ds.data.voltage_y.values), max(ds.data.voltage_y.values)),
]
# range_to_sweep
nt.set_database(db_name_replay, db_folder_replay)
exp = load_or_create_experiment("simtest")
chadiag = ChargeDiagram(
data_settings=DataSettings(
db_name=db_name_replay,
db_folder=db_folder_replay,
normalization_constants=NormalizationConstants(**ds.normalization_constants),
segment_size=0.2,
),
setpoint_settings=SetpointSettings(
voltage_precision=0.01,
parameters_to_sweep=[qd_mock_instrument.left_plunger, qd_mock_instrument.right_plunger],
safety_voltage_ranges=[(-3, 0)],
ranges_to_sweep=range_to_sweep,
),
readout=Readout(transport=qd_mock_instrument.drain),
classifiers=Classifiers(
singledot=MockClassifer('singledot'),
doubledot=MockClassifer('doubledot'),
dotregime=MockClassifer('dotregime')),
)
tuning_result = chadiag.run_stage()
tuning_result.success
# Note that a mock classifier was used, which always returns True == good result
tuning_result.ml_result
tuning_result.data_ids
```
<a href="https://colab.research.google.com/github/WarwickAI/natural-selection-sim/blob/main/Lesson1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<p align="center">
<img width="300" height="300" src="https://amplify-waiplatform-dev-222739-deployment.s3.eu-west-2.amazonaws.com/courses/wai118-logo.png" />
</p>
# Part 1 - Variables
The first thing we're going to do is create some variables.
Variables are just containers for storing data values.
These values can be different types. For today, we'll focus on **strings** (words), **integers** (numbers) and **booleans** (true or false).
### Ints & Strings
To declare a string, we have to use quotes (single or double quotes are both fine)
```
myFirstInteger = 5
myFirstString = "Hello World"
```
The equals sign (=) assigns the value on the right to the identifier on the left. We now have the integer 5 stored in myFirstInteger, and "Hello World" stored in myFirstString. Note that the names myFirstInteger and myFirstString can be whatever you like; we've just used these for simplicity.
You'll note that if you run the program now, nothing happens. That's because we haven't actually done anything with these variables. This is where we introduce our first function. Functions take values, and perform some action. In the case of print(), the action is the printing of the value to the terminal.
```
myFirstInteger = 5
myFirstString = "Hello World"
print(myFirstInteger)
print(myFirstString)
```
Note that the above code is the same as:
```
print(5)
print("Hello World")
```
### Booleans
Another type of variable alongside strings and integers that we often use in Python is the Boolean type.
A Boolean is simply a type that can have one of two values - usually denoted as true or false. It's named after George Boole who first defined an algebraic system of logic in the mid 19th century.
```
t = True
f = False
print(t)
print(f)
```
When we print booleans they simply display "True" or "False" in the terminal. Although it might not seem like it, Booleans turn out to be extremely useful, which we'll come on to in Part 2 - Operators.
# Part 2 - Operators
### Arithmetic Operators
Let's move on to some operators. The standard mathematical operators we can use in Python are...
```
+  addition
-  subtraction
*  multiplication
/  division
** power
%  modulo
```
Let's use some of these operators. What do you think the results are going to be? Run the program and find out.
```
x = (2+2) / 4
y = (10 - 5) ** 2
z = 13 % 5
print(x)
print(y)
print(z)
```
Interestingly enough, we can also use one of these operators - the plus operator - for strings...
```
a = "Hello"
b = "World"
print(a + b)
```
**Exercise 1**) Run this code. You'll notice a problem - there's no space between the words.
There are two ways we can fix this: either add a space to the variable a, making it "Hello ", or add the space in the print statement. It's better in this case to do the latter. Type it out here:
```
# enter code here
```
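One possible solution, adding the space in the print statement:

```python
a = "Hello"
b = "World"
print(a + " " + b)  # → Hello World
```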
What if we want to print an integer and a string in the same print call by joining them with +? In this case, we have a problem: the + operator can't combine a string and an integer directly.
Therefore, in order to print both an integer and a string at the same time, we have to convert the integer into a string.
This is done with the str() function. It takes an integer, and returns the string version of that number in its place, as follows:
```
number = 999
print("We're going to print a number here: " + str(number))
```
**Exercise 2**) In our simulation we're going to have some cells. The cells feed on food. We're going to want to add some food to the environment for the cells to eat. Each piece of food will have an energy value.
In the box below, print "Adding 35 food with energy 20!"
```
num_food = input('Enter number of food: ')
energy = input('Enter food energy: ')
<your print statement goes here>
```
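One possible solution; the input() calls are replaced with fixed values here so the snippet runs on its own:

```python
num_food = 35   # stands in for input('Enter number of food: ')
energy = 20     # stands in for input('Enter food energy: ')
print("Adding " + str(num_food) + " food with energy " + str(energy) + "!")
```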
### Assignment Operators
We've already seen one assignment operator; **=**
Equals can be combined with all of the other mathematical operators to make new operators, like **+=**, **/=**, **-=**
```
a = 6
a = a / 2  # this is equivalent to 6 divided by 2
print(a)

b = 6
b /= 2  # this is also equivalent to 6 divided by 2
print(b)
```
**Exercise 3**) Without running it, what does the following code output for the values of a, b and c? Once you have your answers, run it to see if you were correct.
```
a = 12345
a *= 2
print(a)
b = 2
b **= 6
print(b)
c = 10
d = 7
d %= 5
c += 100 / d
print(c)
```
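To check your answers, here are the same operations with the intermediate values written out:

```python
a = 12345
a *= 2          # a is now 24690
print(a)

b = 2
b **= 6         # b is now 64
print(b)

c = 10
d = 7
d %= 5          # d is now 2
c += 100 / d    # c = 10 + 50.0 = 60.0
print(c)
```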
### Comparison Operators
Remember those Booleans we discussed earlier? There are a few fun things we can do with those. To illustrate the concept of these operators, we'll introduce a new function - input(), which takes input from the user.
The first comparison operator we're going to look at is ==, e.g. x == y
You can read this as "x is equal to y" or "x is equivalent to y".
Consider the following code. It takes a number from the user, then checks whether that number is equal to 1. The result is a Boolean value - either True or False - which is then printed.
```
number = input("Enter a number: ")
print(number == 1)
```
**Exercise 4**) Run the above code, and enter the number 1, then run the code again and enter 2. What do you expect to be displayed on each run?
You might be surprised to notice that it returns False both times, even when you enter 1. This is because **==** compares types as well as values: input() returns what the user typed as a string, while 1 is an integer, so the two are never equal.
Convert the number 1 into a string with str() as before, or convert the user input into an int with int(), before comparing.
```
# enter code here
```
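One possible fix, converting the user input to an integer with int() before comparing; the input() call is replaced with a fixed string here so the snippet is self-contained:

```python
user_value = "1"  # stands in for input("Enter a number: ")
number = int(user_value)
print(number == 1)  # → True
```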
Other comparison operators include
```
>  greater than
<  less than
>= greater than or equal to
<= less than or equal to
!= not equal to
```
What four Boolean values would the following code print?
```
x = 10
y = 10
print(x == y)
print(x != y)
print(y > 10)
print(x >= 10)
```
# Part 3 - Functions
We've already used a couple of functions:
* print() to display information
* input() to get it from the user
* int() to convert to integer
* str() to convert to string
However, what if you want to make your own functions? Well we can do just that.
Functions are defined using the `def` keyword, followed by a name for the function. Remember that the indented code below def will NEVER run until you CALL the function using the function name, as shown below.
Functions are called using the function name, followed by brackets, e.g. myFunction()
```
# this is a definition of a function
def myFunction():
    print("Hello, I'm in a function!")

# this is how you run the function
myFunction()
```
You can also add values in between the brackets, and use them as variables in a function. What do you think the following will result in?
```
def myFunction(a, b):
    print("Hello, I'm in a function!")
    print(a)
    print(b)

myFunction("argument 1", "argument 2")
```
Functions can also return values. Thus, we can set the value of a variable to a function.
```
def findAverage(a, b):
    total = a + b
    return total / 2

avg = findAverage(1343465, 2094838)
print(avg)
```
**Exercise 5**) Write a function isGreaterThan() that takes two numbers, a and b, and returns a Boolean value, specifying whether a is greater than b. Print the result.
```
# enter the code here
```
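One possible solution:

```python
def isGreaterThan(a, b):
    return a > b

print(isGreaterThan(10, 3))  # → True
print(isGreaterThan(2, 5))   # → False
```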
**Exercise 6**) Write a function distance(x1, y1, x2, y2) which returns the distance between the points (x1, y1) and (x2, y2).
Use `sqrt(number)` to calculate a square root.
<img src="https://i.imgur.com/v7u3LXc.png" />
```
from math import sqrt
def distance(x1, y1, x2, y2):
    pass  # code here
```
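One possible solution, using the Pythagorean formula shown above:

```python
from math import sqrt

def distance(x1, y1, x2, y2):
    return sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

print(distance(0, 0, 3, 4))  # → 5.0
```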
# Intro to PyTorch
```
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(10)
V = torch.Tensor([1., 2., 3.])
M = torch.Tensor([[1., 2., 3.], [4.,5.,6.]])
torch.randn((2, 3, 4, 5)).view(12, -1)
data = autograd.Variable(torch.randn(2, 2))
print(data)
print(F.relu(data))
print(F.sigmoid(data))
print(data.view(-1))
print(F.softmax(data.view(-1)))
print(F.log_softmax(data.view(-1)))
```
## Logistic Regression with Bag of Words
```
data = [("me gusta comer en la cafeteria".split(), "SPANISH"),
("Give it to me".split(), "ENGLISH"),
("No creo que sea una buena idea".split(), "SPANISH"),
("No it is not a good idea to get lost at sea".split(), "ENGLISH")]
test_data = [("Yo creo que si".split(), "SPANISH"),
("it is lost on me".split(), "ENGLISH")]
# word_to_ix maps each word in the vocab to a unique integer, which will be its
# index into the Bag of words vector
word_to_ix = {}
for sent, _ in data + test_data:
    for word in sent:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)
print(word_to_ix)
VOCAB_SIZE = len(word_to_ix)
NUM_LABELS = 2
class BoWClassifier(nn.Module):  # inheriting from nn.Module!

    def __init__(self, num_labels, vocab_size):
        # calls the init function of nn.Module. Dont get confused by syntax,
        # just always do it in an nn.Module
        super(BoWClassifier, self).__init__()

        # Define the parameters that you will need. In this case, we need A and b,
        # the parameters of the affine mapping.
        # Torch defines nn.Linear(), which provides the affine map.
        # Make sure you understand why the input dimension is vocab_size
        # and the output is num_labels!
        self.linear = nn.Linear(vocab_size, num_labels)

        # NOTE! The non-linearity log softmax does not have parameters! So we don't need
        # to worry about that here

    def forward(self, bow_vec):
        # Pass the input through the linear layer,
        # then pass that through log_softmax.
        # Many non-linearities and other functions are in torch.nn.functional
        return F.log_softmax(self.linear(bow_vec))

def make_bow_vector(sentence, word_to_ix):
    vec = torch.zeros(len(word_to_ix))
    for word in sentence:
        vec[word_to_ix[word]] += 1
    return vec.view(1, -1)

def make_target(label, label_to_ix):
    return torch.LongTensor([label_to_ix[label]])
model = BoWClassifier(NUM_LABELS, VOCAB_SIZE)
# the model knows its parameters. The first output below is A, the second is b.
# Whenever you assign a component to a class variable in the __init__ function
# of a module, which was done with the line
# self.linear = nn.Linear(...)
# Then through some Python magic from the Pytorch devs, your module
# (in this case, BoWClassifier) will store knowledge of the nn.Linear's parameters
for param in model.parameters():
print(param)
# To run the model, pass in a BoW vector, but wrapped in an autograd.Variable
sample = data[0]
bow_vector = make_bow_vector(sample[0], word_to_ix)
log_probs = model(autograd.Variable(bow_vector))
print(log_probs)
label_to_ix = {"SPANISH": 0, "ENGLISH": 1}
# Run on test data before we train, just to see a before-and-after
for instance, label in test_data:
    bow_vec = autograd.Variable(make_bow_vector(instance, word_to_ix))
    log_probs = model(bow_vec)
    print(log_probs)
# Print the matrix column corresponding to "creo"
print(next(model.parameters())[:, word_to_ix["creo"]])
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
# Usually you want to pass over the training data several times.
# 100 is much bigger than on a real data set, but real datasets have more than
# two instances. Usually, somewhere between 5 and 30 epochs is reasonable.
for epoch in range(100):
    for instance, label in data:
        # Step 1. Remember that Pytorch accumulates gradients.
        # We need to clear them out before each instance
        model.zero_grad()

        # Step 2. Make our BOW vector and also we must wrap the target in a
        # Variable as an integer. For example, if the target is SPANISH, then
        # we wrap the integer 0. The loss function then knows that the 0th
        # element of the log probabilities is the log probability
        # corresponding to SPANISH
        bow_vec = autograd.Variable(make_bow_vector(instance, word_to_ix))
        target = autograd.Variable(make_target(label, label_to_ix))

        # Step 3. Run our forward pass.
        log_probs = model(bow_vec)

        # Step 4. Compute the loss, gradients, and update the parameters by
        # calling optimizer.step()
        loss = loss_function(log_probs, target)
        loss.backward()
        optimizer.step()

for instance, label in test_data:
    bow_vec = autograd.Variable(make_bow_vector(instance, word_to_ix))
    log_probs = model(bow_vec)
    print(log_probs)
# Index corresponding to Spanish goes up, English goes down!
print(next(model.parameters())[:, word_to_ix["creo"]])
```
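To see what the bag-of-words counting in make_bow_vector produces, here is the same idea with plain Python lists (no torch needed), using a tiny made-up vocabulary:

```python
def make_bow_counts(sentence, word_to_ix):
    # one slot per vocabulary word, incremented for each occurrence
    vec = [0] * len(word_to_ix)
    for word in sentence:
        vec[word_to_ix[word]] += 1
    return vec

word_to_ix = {"me": 0, "gusta": 1, "comer": 2}
print(make_bow_counts("me gusta comer comer".split(), word_to_ix))  # → [1, 1, 2]
```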
```
import keras
import keras.backend as K
from keras.datasets import mnist
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Conv1D, MaxPooling1D, LSTM, ConvLSTM2D, GRU, BatchNormalization, LocallyConnected2D, Permute, TimeDistributed, Bidirectional
from keras.layers import Concatenate, Reshape, Conv2DTranspose, Embedding, Multiply, Activation
from functools import partial
from collections import defaultdict
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import matplotlib.pyplot as plt
class MySequence:
    def __init__(self):
        self.dummy = 1

keras.utils.Sequence = MySequence
import isolearn.io as isoio
import isolearn.keras as isol
import matplotlib.pyplot as plt
import pandas as pd
from sequence_logo_helper import dna_letter_at, plot_dna_logo
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
def contain_tf_gpu_mem_usage():
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)
    set_session(sess)
contain_tf_gpu_mem_usage()
#Define dataset/experiment name
dataset_name = "optimus5_synth"
def one_hot_encode(df, col='utr', seq_len=50):
    # Dictionary returning one-hot encoding of nucleotides.
    nuc_d = {'a':[1,0,0,0],'c':[0,1,0,0],'g':[0,0,1,0],'t':[0,0,0,1], 'n':[0,0,0,0]}

    # Create an empty matrix.
    vectors = np.empty([len(df), seq_len, 4])

    # Iterate through UTRs and one-hot encode
    for i, seq in enumerate(df[col].str[:seq_len]):
        seq = seq.lower()
        a = np.array([nuc_d[x] for x in seq])
        vectors[i] = a

    return vectors
#Load cached dataframe
e_train = pd.read_csv("otherHalfOfHumanUTRs_UseForBackground.csv")
#one hot encode with optimus encoders
seq_e_train = one_hot_encode(e_train,seq_len=50)
x_train = seq_e_train[:, None, ...]
print("x_train.shape = " + str(x_train.shape))
#Define sequence template
sequence_template = 'N' * 50
sequence_mask = np.array([1 if sequence_template[j] == 'N' else 0 for j in range(len(sequence_template))])
#Visualize background sequence distribution
pseudo_count = 1.0
x_mean = (np.sum(x_train, axis=(0, 1)) + pseudo_count) / (x_train.shape[0] + 4. * pseudo_count)
x_mean_logits = np.log(x_mean / (1. - x_mean))
#Load Predictor
predictor_path = 'optimusRetrainedMain.hdf5'
predictor = load_model(predictor_path)
predictor.trainable = False
predictor.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss='mean_squared_error')
#Gradient saliency/backprop visualization
import matplotlib.collections as collections
import operator
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import matplotlib as mpl
from matplotlib.text import TextPath
from matplotlib.patches import PathPatch, Rectangle
from matplotlib.font_manager import FontProperties
from matplotlib import gridspec
from matplotlib.ticker import FormatStrFormatter
def plot_importance_scores(importance_scores, ref_seq, figsize=(12, 2), score_clip=None, sequence_template='', plot_start=0, plot_end=96):
    end_pos = ref_seq.find("#")
    fig = plt.figure(figsize=figsize)
    ax = plt.gca()

    if score_clip is not None:
        importance_scores = np.clip(np.copy(importance_scores), -score_clip, score_clip)

    max_score = np.max(np.sum(importance_scores[:, :], axis=0)) + 0.01

    for i in range(0, len(ref_seq)):
        mutability_score = np.sum(importance_scores[:, i])
        dna_letter_at(ref_seq[i], i + 0.5, 0, mutability_score, ax)

    plt.sca(ax)
    plt.xlim((0, len(ref_seq)))
    plt.ylim((0, max_score))
    plt.axis('off')
    plt.yticks([0.0, max_score], [0.0, max_score], fontsize=16)

    for axis in fig.axes:
        axis.get_xaxis().set_visible(False)
        axis.get_yaxis().set_visible(False)

    plt.tight_layout()
    plt.show()
import sis
#Run attribution method
encoder = isol.OneHotEncoder(50)
score_clip = None
allFiles = ["optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512.csv",
"optimus5_synthetic_examples_3.csv"]
for csv_to_open in allFiles :
#Load dataset for benchmarking
dataset_name = csv_to_open.replace(".csv", "")
data_df = pd.read_csv(csv_to_open) #open from scores folder
seq_e_test = one_hot_encode(data_df, seq_len=50)
x_test = seq_e_test[:, None, ...]
print(x_test.shape)
#Plot distribution of (MRL)
y_pred_test = predictor.predict(x=[x_test[:, 0, ...]], batch_size=32)[:, 0]
#Run SIS on test set
dynamic_threshold_scale = 0.8
n_seqs_to_test = x_test.shape[0]
importance_scores_test = []
predictor_calls_test = []
for data_ix in range(n_seqs_to_test) :
if data_ix % 100 == 0 :
print("Processing example " + str(data_ix) + "...")
is_pos = True if y_pred_test[data_ix] >= 0.0 else False
threshold = 0.
if is_pos :
threshold = dynamic_threshold_scale * y_pred_test[data_ix]
else :
threshold = dynamic_threshold_scale * (-y_pred_test[data_ix])
x_curr = x_test[data_ix, 0, ...]
bg = x_mean
seq_mask = np.max(x_test[data_ix, 0, ...], axis=-1, keepdims=True)
predictor_counter = { 'acc' : 0 }
def _temp_pred_func(batch, is_pos=is_pos, mask=seq_mask, predictor_counter=predictor_counter) :
temp_data = np.concatenate([np.expand_dims(np.expand_dims(arr * mask, axis=0), axis=0) for arr in batch], axis=0)
predictor_counter['acc'] += temp_data.shape[0]
temp_out = None
if is_pos :
temp_out = predictor.predict(x=[temp_data[:, 0, ...]], batch_size=64)[:, 0]
else :
temp_out = -predictor.predict(x=[temp_data[:, 0, ...]], batch_size=64)[:, 0]
return temp_out
F_PRED = lambda batch: _temp_pred_func(batch)
x_fully_masked = np.copy(bg)
initial_mask = sis.make_empty_boolean_mask_broadcast_over_axis(x_curr.shape, 1)
collection = sis.sis_collection(F_PRED, threshold, x_curr, x_fully_masked, initial_mask=initial_mask)
importance_scores_test_curr = np.expand_dims(np.expand_dims(np.zeros(x_curr.shape), axis=0), axis=0)
if collection[0].sis.shape[0] > 0 :
imp_index = collection[0].sis[:, 0].tolist()
importance_scores_test_curr[0, 0, imp_index, :] = 1.
importance_scores_test_curr[0, 0, ...] = importance_scores_test_curr[0, 0, ...] * x_curr
importance_scores_test.append(importance_scores_test_curr)
predictor_calls_test.append(predictor_counter['acc'])
importance_scores_test = np.concatenate(importance_scores_test, axis=0)
predictor_calls_test = np.array(predictor_calls_test)
#Print predictor call statistics
print("Total number of predictor calls = " + str(np.sum(predictor_calls_test)))
print("Average number of predictor calls = " + str(np.mean(predictor_calls_test)))
for plot_i in range(0, 3) :
print("Test sequence " + str(plot_i) + ":")
plot_dna_logo(x_test[plot_i, 0, :, :], sequence_template=sequence_template, plot_sequence_template=True, figsize=(12, 1), plot_start=0, plot_end=50)
plot_importance_scores(importance_scores_test[plot_i, 0, :, :].T, encoder.decode(x_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template=sequence_template, plot_start=0, plot_end=50)
#Save predicted importance scores
model_name = "sufficient_input_subsets_" + dataset_name + "_dynamic_thresh_08_mean"
np.save(model_name + "_importance_scores_test", importance_scores_test)
```
# Code for Linear Regression
## Importing Data
```
import csv, pandas as pd, numpy as np
csvfile = open('aapl.csv', newline='')
workbook = csv.reader(csvfile, delimiter=' ', quotechar='|')
print(workbook)
dates = []
opening_price = []
high_price = []
low_price = []
closing_price = []
volume = []
for row in workbook:
    try:
        array_row = row[0].split(',')
        try:
            float(array_row[1])
            float(array_row[2])
            float(array_row[3])
            float(array_row[4])
            float(array_row[5])
        except:
            print('Invalid Data: ', array_row)
            continue
        dates.append(array_row[0])
        opening_price.append(float(array_row[1]))
        high_price.append(float(array_row[2]))
        low_price.append(float(array_row[3]))
        closing_price.append(float(array_row[4]))
        volume.append(float(array_row[5]))
    except:
        print(row)
features = {'opening_price':opening_price, 'high_price':high_price, 'low_price':low_price, 'volume':volume}
m = len(dates)
y = {'closing_price' : closing_price}
```
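As an aside, pandas can do this CSV parsing and column selection in a couple of calls. A minimal sketch using an in-memory CSV; the column names and layout here are an assumption about aapl.csv, not taken from the actual file:

```python
import io
import pandas as pd

# two made-up rows in the assumed layout
csv_text = (
    "Date,Open,High,Low,Close,Volume\n"
    "2-Jan-15,111.39,111.44,107.35,109.33,53204626\n"
    "5-Jan-15,108.29,108.65,105.41,106.25,64285491\n"
)
df = pd.read_csv(io.StringIO(csv_text))
features = df[["Open", "High", "Low", "Volume"]]
target = df["Close"]
print(features.shape, target.shape)
```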
## Get X,Y DataFrames
```
df_x = pd.DataFrame(features)
print(df_x[:10])
df_y = pd.DataFrame(y)
print(df_y[:10])
df_x.describe()
df_y.describe()
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older scikit-learn versions
```
## Split Training Data and Test Data
```
reg = LinearRegression()
x_train, x_test, y_train, y_test = train_test_split(df_x, df_y, test_size=0.2,random_state=4)
reg.fit(x_train, y_train)
```
## Coefficients and Linear Regression
```
reg.coef_
y_predicted = reg.predict(x_test)
print(y_predicted[:5])
```
## Plots
```
import matplotlib.pyplot as plt
from datetime import date
new_dates = []
for x in dates:
    series = x.split('-')
    if 'Jan' == series[1]:
        series[1] = 1
    elif 'Feb' == series[1]:
        series[1] = 2
    elif 'Mar' == series[1]:
        series[1] = 3
    elif 'Apr' == series[1]:
        series[1] = 4
    elif 'May' == series[1]:
        series[1] = 5
    elif 'Jun' == series[1]:
        series[1] = 6
    elif 'Jul' == series[1]:
        series[1] = 7
    elif 'Aug' == series[1]:
        series[1] = 8
    elif 'Sep' == series[1]:
        series[1] = 9
    elif 'Oct' == series[1]:
        series[1] = 10
    elif 'Nov' == series[1]:
        series[1] = 11
    elif 'Dec' == series[1]:
        series[1] = 12
    else:
        print(series[1], 'Left')
        continue
    try:
        new_dates.append(date(year=int(series[2]) + 2000, month=series[1], day=int(series[0])))
    except:
        print(series)
plt.figure(figsize=(30,7))
plt.plot(new_dates[:100:1],y_predicted[:100:1], color='blue', linewidth=1, label='Predicted')
plt.plot(new_dates[:100:1], y_test['closing_price'][:100:1], color='red', linewidth=1, label='Actual')
plt.title('Linear Regression Prediction')
plt.ylabel('Stock Price')
plt.xlabel('Dates')
plt.grid(True)
plt.show()
```
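The month-name elif chain above can be replaced with a dictionary lookup, which is shorter and easier to extend. A small sketch assuming the same 'D-Mon-YY' date layout handled by the loop:

```python
from datetime import date

# month-name lookup table replacing the elif chain
MONTHS = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,
          'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}

def parse_date(s):
    # assumes the 'D-Mon-YY' layout, e.g. '7-Jan-15'
    day, mon, yy = s.split('-')
    return date(year=int(yy) + 2000, month=MONTHS[mon], day=int(day))

print(parse_date('7-Jan-15'))  # → 2015-01-07
```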
## Actual Value
```
for x,y1 in zip(dates[:5:1], y_test['closing_price'][:5:1]):
print('Date: ',x, 'Closing Price: ', y1)
```
## Predicted Value
```
for x, y1 in zip(dates[:5:1], y_predicted[:5:1]):
print('Date: ',x, 'Closing Price: ', y1[0])
new_y_test = []
# print(y_test)
for y1 in y_test:
    try:
        new_y_test.append(float(y1[0]))
    except:
        # print(y[0])
        continue
# print(type(new_y_test), new_y_test[:10])

new_y_predicted = []
# print(y_predicted)
for y1 in y_predicted:
    new_y_predicted.append(float(y1))
# print(type(new_y_predicted[0]))
```
## Accuracy
```
errors = abs(new_y_predicted - y_test['closing_price'])
print('Mean Absolute Error: ', round(np.mean(errors), 2), 'degrees.')
# Calculate Mean absolute percentage error
mape = 100 * (errors/y_test['closing_price'])
# Calculate & display Accuracy
accuracy = 100 - np.mean(mape)
print('Accuracy: ', round(accuracy, 2), '%.')
```
# SVR
```
num_test = 3990
data_train = {}
data_test = {}
data_train['high_price'] = features['high_price'][:-num_test:]
data_train['low_price'] = features['low_price'][:-num_test: ]
data_train['volume'] = features['volume'][:-num_test: ]
data_train['closing_price'] = y['closing_price'][:-num_test: ]
#print(len(data_train['high_price']),len(data_train['low_price']), len(data_train['volume']), len(data_train['closing_price']))
x_train = pd.DataFrame(data_train)
y_train = y['closing_price'][:-num_test]
#print(len(y_train))
data_test['high_price'] = features['high_price'][-num_test::]
data_test['low_price'] = features['low_price'][-num_test::]
data_test['volume'] = features['volume'][-num_test::]
data_test['closing_price'] = y['closing_price'][-num_test::]
x_test = pd.DataFrame(data_test)
y_test = y['closing_price'][-num_test::]
#print(len(data_test['high_price']),len(data_test['low_price']), len(data_test['volume']), len(data_test['closing_price']))
#print(len(y_test))
#print('Train: ', x_train[0:10], y_train[:10])
#print('Test: ', x_test[:10], y_test[:10])
from sklearn.svm import SVR
clf = SVR(kernel='rbf', C=1e3, gamma=0.1)
clf.fit(x_train, y_train)
predictions = clf.predict(x_test)
#print(predictions[:10], y_test[:10])
import matplotlib.pyplot as plt
plt.figure(figsize=(30,7))
#plt.scatter(new_dates[:num_test], y_test[:num_test], color='darkorange', label='data')
plt.plot(new_dates[:num_test],predictions[:num_test], color='blue', linewidth=1, label='Predicted')
plt.plot(new_dates[:num_test], y_test[:num_test], color='red', linewidth=1, label='Actual')
```
## Accuracy
```
errors = abs(predictions - y_test)
#print('Mean Absolute Error: ', round(np.mean(errors), 2), 'degrees.')
# Calculate Mean absolute percentage error
mape = 100 * (errors/y_test)
# Calculate & display Accuracy
accuracy = 100 - np.mean(mape)
print('Accuracy: ', round(accuracy, 2), '%.')
```
## Regression Forest
```
num_test = 100
data_train = {}
data_test = {}
data_train['high_price'] = features['high_price'][:-num_test:]
data_train['low_price'] = features['low_price'][:-num_test: ]
data_train['volume'] = features['volume'][:-num_test: ]
data_train['closing_price'] = y['closing_price'][:-num_test: ]
#print(len(data_train['high_price']),len(data_train['low_price']), len(data_train['volume']), len(data_train['closing_price']))
x_train = pd.DataFrame(data_train)
y_train = y['closing_price'][:-num_test]
#print(len(y_train))
data_test['high_price'] = features['high_price'][-num_test::]
data_test['low_price'] = features['low_price'][-num_test::]
data_test['volume'] = features['volume'][-num_test::]
data_test['closing_price'] = y['closing_price'][-num_test::]
x_test = pd.DataFrame(data_test)
y_test = y['closing_price'][-num_test::]
#print(len(data_test['high_price']),len(data_test['low_price']), len(data_test['volume']), len(data_test['closing_price']))
#print(len(y_test))
#print('Train: ', len(x_train[0]), len(y_train))
# print('Test: ', len(x_test[0]), len(y_test))
from sklearn.ensemble import RandomForestRegressor
regressor = RandomForestRegressor(n_estimators=100, max_depth=100, random_state=0)
regressor.fit(x_train, y_train)
predictions = regressor.predict(x_test)
# print(predictions, y_test)
import matplotlib.pyplot as plt
plt.figure(figsize=(30,7))
plt.plot(new_dates[:num_test],predictions[:num_test], color='blue', linewidth=1, label='Predicted')
plt.plot(new_dates[:num_test], y_test[:num_test], color='red', linewidth=1, label='Actual')
```
## Accuracy
```
errors = abs(predictions - y_test)
print('Mean Absolute Error: ', round(np.mean(errors), 2), 'degrees.')
# Calculate Mean absolute percentage error
mape = 100 * (errors/y_test)
# Calculate & display Accuracy
accuracy = 100 - np.mean(mape)
print('Accuracy: ', round(accuracy, 2), '%.')
```
# Decision Tree Regression
```
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor(max_depth=10)
regressor.fit(x_train, y_train)
predictions = regressor.predict(x_test)
import matplotlib.pyplot as plt
plt.figure(figsize=(30,7))
plt.plot(new_dates[:num_test],predictions[:num_test], color='blue', linewidth=1, label='Predicted')
plt.plot(new_dates[:num_test], y_test[:num_test], color='red', linewidth=1, label='Actual')
```
## Accuracy
```
errors = abs(predictions - y_test)
print('Mean Absolute Error: ', round(np.mean(errors), 2), 'degrees.')
# Calculate Mean absolute percentage error
mape = 100 * (errors/y_test)
# Calculate & display Accuracy
accuracy = 100 - np.mean(mape)
print('Accuracy: ', round(accuracy, 2), '%.')
```
# Step 6: Serve new imported input data from OA into Excel then GDX through WaMDaM
#### By Adel M. Abdallah, Dec 2020
Execute the following cells by pressing `Shift-Enter`, or by pressing the play button <img style='display:inline;padding-bottom:15px' src='play-button.png'> on the toolbar above.
### Overview
You'll use the downloaded models (now in a SQLite WaMDaM database) and this Jupyter Notebook to run both the WEAP and Wash models for the new scenarios defined in the OpenAgua GUI. I published the SQLite WaMDaM database on HydroShare so it's easier to query it directly from there in this script.
The script here will also read the models' results and visualize them, or export the results to a CSV or Excel file that can be loaded back into WaMDaM. In the next steps, those results can be uploaded back to OpenAgua and viewed there in a dashboard.
You will need to have both WEAP and GAMS software installed on your machine
# 1. Import python libraries
```
# 1. Import python libraries
# set the notebook mode to embed the figures within the cell
import os
import sys
import csv
import urllib
import getpass
import calendar
import logging
import inspect
import sqlite3
import numpy as np
import pandas as pd
from shutil import copyfile
from collections import OrderedDict
from hs_restclient import HydroShare, HydroShareAuthBasic
from IPython.display import display, Image, SVG, Math, YouTubeVideo
from openpyxl import load_workbook
import plotly
import plotly.offline as offline
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from plotly.graph_objs import *
offline.init_notebook_mode(connected=True)  # initiate notebook for offline plot
# https://github.com/NREL/gdx-pandas
# ! pip install gdxpds
import gdxpds
# import gams
# from gams import *
# from ctypes import c_bool
print 'The needed Python libraries have been imported'
```
# 2. Connect to the WaMDaM SQLite on HydroShare
### Provide the HydroShare ID for your resource
Example
https://www.hydroshare.org/resource/af71ef99a95e47a89101983f5ec6ad8b/
```
# enter your HydroShare username and password here between the quotes
username = ''
password = ''
auth = HydroShareAuthBasic(username=username, password=password)
hs = HydroShare(auth=auth)
# Then we can run queries against it within this notebook :)
resource_url='https://www.hydroshare.org/resource/af71ef99a95e47a89101983f5ec6ad8b/'
resource_id= resource_url.split("https://www.hydroshare.org/resource/",1)[1]
resource_id=resource_id.replace('/','')
print resource_id
print 'Connected to HydroShare'
resource_md = hs.getSystemMetadata(resource_id)
# print resource_md
print 'Resource title'
print(resource_md['resource_title'])
print '----------------------------'
resources=hs.resource(resource_id).files.all()
file = ""
for f in hs.resource(resource_id).files.all():
file += f.decode('utf8')
import json
file_json = json.loads(file)
for f in file_json["results"]:
FileURL= f["url"]
SQLiteFileName=FileURL.split("contents/",1)[1]
cwd = os.getcwd()
print cwd
fpath = hs.getResourceFile(resource_id, SQLiteFileName, destination=cwd)
conn = sqlite3.connect(SQLiteFileName,timeout=10)
print 'done'
# Test if the connection works
conn = sqlite3.connect(SQLiteFileName)
df = pd.read_sql_query("SELECT ResourceTypeAcronym FROM ResourceTypes Limit 1 ", conn)
print df
print '--------------------'
print '\n Connected to the WaMDaM SQLite file called'+': '+ SQLiteFileName
```
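Before running the queries below, it can help to sanity-check which tables the WaMDaM SQLite file actually exposes. `list_tables` is a small helper of our own (not part of WaMDaM or hs_restclient); it only assumes an open SQLite connection like the `conn` created above:

```python
import sqlite3

def list_tables(conn):
    """Return the names of all tables in an open SQLite connection, sorted."""
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
    return [r[0] for r in rows]

# e.g. list_tables(sqlite3.connect(SQLiteFileName)), using the file downloaded above
```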
# Query the seasonal data from WaMDaM into a dataframe
```
# The query has hard coded input parameters
# 2.2Identify_aggregate_TimeSeriesValues.csv
Query_UseCase_URL="""https://raw.githubusercontent.com/WamdamProject/WaMDaM_JupyterNotebooks/master/3_VisualizePublish/SQL_queries/WASH/Query_demand_seasonal.sql
"""
# Read the query text inside the URL
Query_UseCase_text = urllib.urlopen(Query_UseCase_URL).read()
# # return query result in a pandas data frame
df_Seasonal_WaMDaM= pd.read_sql_query(Query_UseCase_text, conn)
display (df_Seasonal_WaMDaM)
```
# Make copies of the original input file for the WASH model
```
# Demand conservation file
copyfile("WASH_1yr_InputData_original.xlsx", "WASH_1yr_InputData_conserve.xlsx")
# Demand Increase file
copyfile("WASH_1yr_InputData_original.xlsx", "WASH_1yr_InputData_increase.xlsx")
```
# Prepare the input data and write it to the WASH Excel input files
```
Instance=df_Seasonal_WaMDaM['InstanceName'][1]
print Instance
column_name = ["ScenarioName"]
subsets = df_Seasonal_WaMDaM.groupby(column_name)
for subset in subsets.groups.keys():
dt = subsets.get_group(name=subset)
print subset
df=dt.loc[:, 'SeasonName':'SeasonNumericValue'].T
if subset=='ConsDemand':
WASH_ExcelFile='WASH_1yr_InputData_conserve.xlsx'
df=df.loc['SeasonNumericValue':, ]
display(df)
elif subset=='IncrDemand':
WASH_ExcelFile='WASH_1yr_InputData_increase.xlsx'
df=df.loc['SeasonNumericValue':, ]
columns = dict(map(reversed, enumerate(df.columns)))
df = df.rename(columns=columns)
df.head()
display(df)
else:
continue
sheetname='demandReq'
#
book = load_workbook(WASH_ExcelFile)
UpdateDemand = book[sheetname]
i =0
for index, row in df.iterrows():
for j, column in row.iteritems():
UpdateDemand.cell(row=i+2, column=j+2, value=float(column))
i += 1
book.save(WASH_ExcelFile)
print 'done writing the value to WASH excel file'
```
# Follow these steps to run the WASH Models in GAMS
#### 1. Download and install [GAMS V.24.2.3](https://www.gams.com/) and have Excel 2007 onward
##### You will need a license to run GAMS solvers.
#### 2. Open GAMS software. Go to File-> Project -> New Project. Navigate to the folder you created and create a new project file with any name you want.
#### 3. Double click on the gams file name: WASH-WaMDaM.gms to open the GAMS file
#### 4. Comment out lines 218-230 to choose one input file at a time. Then comment out lines 590-597 to write the results to the GDX file, one at a time
WASH_1yr_InputData_original.xlsx
WASH_1yr_InputData_conserve.xlsx
WASH_1yr_InputData_increase.xlsx
-----------------------------------------------
WASH-solution-original.gdx
WASH-solution-conserve.gdx
WASH-solution-increase.gdx
# Read the WASH Area result objective function value from GDX
First, read the three result .gdx files.
If you have issues running the GAMS scenarios, I have already posted the solution .gdx files here
https://github.com/WamdamProject/WaMDaM_JupyterNotebooks/tree/master/3_VisualizePublish
```
os.getcwd()
gdx_files = {'Original':'WASH-solution-original.gdx',
'Conserve':'WASH-solution-conserve.gdx',
'Increase':'WASH-solution-increase.gdx'}
for key,gdx_file in gdx_files.iteritems():
with gdxpds.gdx.GdxFile(lazy_load=False) as f:
f.read(gdx_file)
for symbol in f:
symbol_name = symbol.name
if symbol_name=='Z':
df = symbol.dataframe
Zvalue=str(df.iloc[0]['Level'])
ZvalueApprox=float(Zvalue)
print 'WASH Area for ' + key+'='+str(int(ZvalueApprox)) # acres
print '--------------------'
# Conserve_Original=Original-Conserve
# Increase_Original=Original-Increase
print 'Results are replicated'
```
<img src="https://github.com/WamdamProject/WaMDaM-software-ecosystem/blob/master/mkdocs/Edit_MD_Files/images/WASH_result.PNG?raw=true" style="float:center;width:600px;padding:20px">
# The End :) Congratulations!
## The code below is part of a trial to run GAMS from here, but it didn't work for me. Feel free to give it a try.
# Execute GAMS (Before and after update)
https://www.gams.com/latest/docs/API_PY_TUTORIAL.html
```
import os
# os.path.dirname(os.path.abspath(__file__))
command="""start cmd cd C:\GAMS\win64/24.7 & gams.exe WASH-CEE6410"""
var=os.system(command)
print var
from gams import *
! conda install -c goop gams
version = GamsWorkspace.api_version
print version
ws = GamsWorkspace('Test')
# ws = GamsWorkspace(debug=DebugLevel.KeepFiles)
print ws
job = ws.add_job_from_file("C:\Users\Adel\Documents\GitHub\WEAP_WASH_OA\WASH-CEE6410.gms")
job.run()
GamsDatabase = job.out_db
ws = GamsWorkspace()
job = ws.add_job_from_file("C:\Users\Adel\Documents\GitHub\WEAP_WASH_OA\WASH-CEE6410.gms")
job.run()
GamsDatabase = job.out_db
if len(sys.argv) > 1:
ws = GamsWorkspace(system_directory = sys.argv[1])
else:
ws = GamsWorkspace()
GAMSWorkspace.GAMSWorkspace(workingDirectory = null,
systemDirectory = null,
DebugLevel = DebugLevel.Off
)
```
<a href="https://colab.research.google.com/github/teatime77/xbrl-reader/blob/master/notebook/sklearn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Make Matplotlib able to display Japanese text.
#### Install the IPA fonts.
```
!apt-get -y install fonts-ipafont-gothic
```
#### Rebuild Matplotlib's font cache.
```
import matplotlib
matplotlib.font_manager._rebuild()
```
#### <font color="red">Restart the runtime here so that the rebuilt font cache takes effect.</font>
### <font color="red">Uncomment the item you want to predict from the choices below.</font>
```
target = '売上高'  # net sales
# target = '営業利益'  # operating income
# target = '経常利益'  # ordinary income
# target = '税引前純利益'  # net income before taxes
```
### <font color="red">To run a grid search, set the variable below to True.</font>
```
use_grid_search = False
```
### Download the file corresponding to the selected item.
```
if target == '売上高':
! wget http://lkzf.info/xbrl/data/2020-04-08/preprocess-uriage.pickle
elif target == '営業利益':
! wget http://lkzf.info/xbrl/data/2020-04-08/preprocess-eigyo.pickle
elif target == '経常利益':
! wget http://lkzf.info/xbrl/data/2020-04-08/preprocess-keijo.pickle
elif target == '税引前純利益':
! wget http://lkzf.info/xbrl/data/2020-04-08/preprocess-jun.pickle
```
### Install CatBoost.
```
! pip install catboost
```
### Import the required libraries.
```
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from catboost import CatBoostRegressor
sns.set(font='IPAGothic')
```
### Load the data.
```
import pickle
if target == '売上高':
file_name = 'preprocess-uriage.pickle'
elif target == '営業利益':
file_name = 'preprocess-eigyo.pickle'
elif target == '経常利益':
file_name = 'preprocess-keijo.pickle'
elif target == '税引前純利益':
file_name = 'preprocess-jun.pickle'
else:
assert False
with open(file_name, 'rb') as f:
data = pickle.load(f)
df = data['data_frame']
y_column = data['y_column']
```
### Split the data into training and test sets.
```
X_columns = [ x for x in df.columns if x != y_column ]
# Split the data into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(df[X_columns], df[y_column], test_size=0.2, random_state=0)
```
### Run the models.
```
from sklearn.model_selection import GridSearchCV
def run_model(model):
    global y_test, y_pred
    if use_grid_search and type(model) in [ RandomForestRegressor, XGBRegressor, LGBMRegressor, CatBoostRegressor ]:
        # When doing a grid search
        model = GridSearchCV(model, {'max_depth': [2,4,6], 'n_estimators': [50,100,200]}, verbose=2, n_jobs=-1)
    # Fit on the training data.
    result = model.fit(X_train, y_train)
    if hasattr(result, 'best_params_'):
        # If best parameters were found
        print ('best params =', result.best_params_)
    # Predict on the test data.
    y_pred = model.predict(X_test)
    # Compute the mean squared error.
    accu1 = mean_squared_error(y_test, y_pred)
    accu2 = mean_squared_error(y_test, [y_test.mean()] * len(y_test) )
    # Compute the mean absolute error.
    accu3 = mean_absolute_error(y_test, y_pred)
    accu4 = mean_absolute_error(y_test, [y_test.mean()] * len(y_test) )
    print('\nMean squared error: %.4f ( %.4f )  Mean absolute error: %.4f ( %.4f )  (parenthesized values: every prediction replaced by the mean)\n' % (accu1, accu2, accu3, accu4))
    if hasattr(model, 'feature_importances_'):
        # If the model exposes feature importances,
        # sort them in descending order of importance.
        sorted_idx_names = sorted(enumerate(model.feature_importances_), key=lambda x: x[1], reverse=True)
        print('Feature importances')
        for i, (idx, x) in enumerate(sorted_idx_names[:20]):
            print('  %2d %.05f %s' % (i, 100 * x, X_train.columns[idx]))
    # Scatter plot of actual vs. predicted values.
    sns.jointplot(y_test, y_pred, kind="reg").set_axis_labels('Actual', 'Predicted')
```
### Ridge regression
```
model = Ridge(alpha=.5)
run_model(model)
# For the net sales target, you should get roughly this:
# Mean squared error: 0.0645 ( 0.0667 )  Mean absolute error: 0.1784 ( 0.1968 )
```
### Support vector machine
```
model = SVR(kernel='rbf')
run_model(model)
# For the net sales target, you should get roughly this:
# Mean squared error: 0.0517 ( 0.0667 )  Mean absolute error: 0.1670 ( 0.1968 )
```
### Random forest
```
model = RandomForestRegressor(max_depth=6, n_estimators=200)
run_model(model)
# For the net sales target, you should get roughly this:
# Mean squared error: 0.0510 ( 0.0667 )  Mean absolute error: 0.1680 ( 0.1968 )
```
### XGBoost
```
model = XGBRegressor(max_depth=2, n_estimators=200)
run_model(model)
# For the net sales target, you should get roughly this:
# Mean squared error: 0.0496 ( 0.0667 )  Mean absolute error: 0.1642 ( 0.1968 )
```
### LightGBM
```
model = LGBMRegressor(objective='regression', num_leaves = 31, max_depth=4, n_estimators=50)
run_model(model)
# For the net sales target, you should get roughly this:
# Mean squared error: 0.0495 ( 0.0667 )  Mean absolute error: 0.1645 ( 0.1968 )
```
### CatBoost
```
model = CatBoostRegressor(max_depth=2, n_estimators=200, verbose=0)
run_model(model)
# For the net sales target, you should get roughly this:
# Mean squared error: 0.0500 ( 0.0667 )  Mean absolute error: 0.1646 ( 0.1968 )
```
# Import Python Modules
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import copy
```
# Generate data for Regression
In the depth-1 tree example that we looked at, the model we fit had two terminal nodes and a single feature. Here we're going to fit a model with eight terminal nodes and two features.
```
n = 2000 #Number of observations in the training set
theta = [4, 4, 10, 12]
c = [15, 40, 5, 30, 10]
x0 = np.random.uniform(0, 16, n)
x1 = np.random.uniform(0, 16, n)
x = np.array([x0,x1]).reshape((-1,2))
def generateY(x, splitPoints, theta, sd):
if x[0] > theta[0]:
if x[1] > theta[2]:
y = np.random.normal(c[0], sd, 1)
else:
if x[0] <= theta[3]:
y = np.random.normal(c[1], sd, 1)
else:
y = np.random.normal(c[2], sd, 1)
else:
if x[1] <= theta[1]:
y = np.random.normal(c[3], sd, 1)
else:
y = np.random.normal(c[4], sd, 1)
return y[0]
y = [generateY(i, c, theta, 3) for i in x]
#Concatenate features and labels to create dataframe
dataFeatures = pd.DataFrame(x)
dataFeatures.columns = [f'X{i}' for i in range(2)]
dataTarget = pd.DataFrame(y)
dataTarget.columns = ['Y']
data = pd.concat([dataFeatures, dataTarget], axis = 1)
#Split up data into a training and testing set
trainTestRatio = 0.05
trainData, testData = train_test_split(data, test_size=1-trainTestRatio)
trainData.head()
```
# Quickly plot the data so we know what it looks like
```
plt.scatter( data['X0'], data['X1'] , c=data['Y'],
cmap = 'inferno', alpha =0.5)
cbar = plt.colorbar()
cbar.set_label('Target value')
```
The relationship between the features (the x and y axis in this case) and the target (the corresponding colour of each datapoint) is fairly non-linear.
By inspecting the dataset it's hopefully quite clear that a standard linear regression (without feature engineering) would struggle to model this dataset with a high degree of accuracy, and that a single split point (i.e. the previous decision tree example) won't be enough either.
To model this dataset, we're going to require a decision tree with depth > 1
(**Disclaimer**: as far as I'm aware this dataset is not representative of any real-world data generation process, I constructed it specifically to show an example where Decision Trees would perform well!)
# Fitting the Decision Tree
Our solution is going to look slightly different here than it has for the earlier methods we might have seen, where the whole process (fitting, prediction etc.) was contained entirely within a class.
We're still going to define a class, but this time our class will be a representation of a node in a tree. Two of the attributes of the node class will be node.left and node.right, which will ALSO be instances of a node (and they will in turn have their own attributes, node.left.left, node.left.right, and so on). This structure allows us to naturally capture the hierarchical relationship between nodes in a decision tree.
Once we have this structure in place, we'll assign further attributes to each node. One of them will be a dataset - the top-level node (the root) will be assigned the whole training set, but as we train our decision tree by obtaining split points to partition the feature space, lower level nodes will be assigned a subset of the training set corresponding to the split points which take us to that node.
For a couple of nice diagrams which will allow you to visualise a Decision-Tree structure, check out Bishop's Pattern Recognition and Machine Learning (which you can view here: http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf) on pages 663-664
## Simpler example
We can think of the training process as a complicated way of splitting the training set up. Using the method described above we want to split up the dataset into a series of clusters so that we can accurately predict the value of each member of a given cluster.
Let's look at a simpler example: instead of thinking about partitioning a dataset in an optimal fashion, let's just think about how we can use the recursive class structure described above to split up a dataset. We'll take a sorted array and define a tree whose terminal nodes each contain a single element of the array, with the condition that if a node has two children, we send the lower half of the node's array to the left child and the upper half to the right child.
```
class Node:
def __init__(self,arr):
self.left = None #This will be filled by another node, the left child of this node
self.right = None #Right child of this node
self.arr = arr #Array associated with this particular node in the tree
root = Node(np.array(list(range(8)))) #Define the root and assign to it the whole array
#We've defined the root of our tree by defining an instance of a Node. The root comes equipped with an array.
#We want to define root.left to be a node with the bottom half of root.arr and root.right to be a node with the
#top half of root.arr
#Like this:
root.left = Node(root.arr[:int(np.floor(root.arr.shape[0]/2))]) #root.left is now also a node
root.right = Node(root.arr[int(np.floor(root.arr.shape[0]/2)):]) #root.right is now a node
print('Array associated with the root')
print(root.arr)
print('Arrays associated with the children of the root')
print('Left: ', root.left.arr, 'Right: ', root.right.arr)
print(root.left, root.right) #We see that these are also instances of a Node, too.
```
We can then follow the same process on root.left so that it also has two children, root.left.left and root.left.right, each of which is associated with its own smaller array.
To save us writing out the above cell a bunch of times we can use a recursive function which keeps on splitting an array until the splits only have one element
```
def treeSplitting(Tree):
if len(Tree.arr) > 1: #If the array has length > 1 we need to keep splitting
Tree.left = Node(Tree.arr[:int(np.floor(Tree.arr.shape[0]/2))]) #Split the array and assign it to a child node
Tree.right = Node(Tree.arr[int(np.floor(Tree.arr.shape[0]/2)):])
treeSplitting(Tree.left) #Use recursion to keep on splitting until we reach a single value
treeSplitting(Tree.right)
root = Node(np.array(list(range(8))))
treeSplitting(root)
root.right.left.right.arr #Checking one of the terminal nodes
```
## What was the point of all that?
Hopefully it gives us some intuition about a tree structure and how we might go about growing a tree. Whilst in this simple example we split up the data by sending the top half left and the bottom half right, in our decision tree example we'll want to determine an optimal split point for the associated dataset and, based on that split point, send one subset of the dataset left and the other subset right for further splitting.
Once we've split a certain number of times then we use those terminal datasets to make the predictions for the new values which filter down to that particular terminal node.
The key point is that whilst the method we use to determine how to split the dataset might be a bit more complicated than just splitting it in half, in principle it's exactly the same.
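Before filling in the class below, it may help to see the core idea in isolation. This is a minimal sketch (the function and variable names are ours, not part of the exercise) of finding the best split point on a single feature: scan candidate thresholds, predict the group mean on each side, and keep the threshold with the lowest total squared error:

```python
import numpy as np

def best_split_1d(x, y, n_candidates=100):
    """Scan candidate thresholds on one feature; return (best_threshold, best_sse)."""
    best_t, best_sse = None, np.inf
    for t in np.linspace(x.min(), x.max(), n_candidates):
        below, above = y[x <= t], y[x > t]
        if len(below) == 0 or len(above) == 0:
            continue  # a split must leave data on both sides
        sse = ((below - below.mean()) ** 2).sum() + ((above - above.mean()) ** 2).sum()
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t, best_sse
```

On data with a clean step, the recovered threshold sits at the last point before the jump, with zero squared error.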
```
class decisionTreeNode:
def __init__(self,data, target, features, currentDepth):
self.left = None #Left child node
self.right = None #Left child node
self.currentDepth = currentDepth
self.data = data
self.target = target
self.features = features
self.splitPointMesh = {}
for feature in self.features:
#We have a set of features and to determine how we're going to split this dataset
#we'll go through each feature in turn and find an optimal split point for that feature
#Then we'll split using the feature which gives the smallest error for the dataset
#(This is not necessarily an optimal strategy but the combinatorial space is too big to
#investigate every possible tree)
#So the first point of that is defining a mesh for each feature
meshMin = np.min(self.data[feature])
meshMax = np.max(self.data[feature])
self.splitPointMesh[feature] = np.linspace(meshMin, meshMax, 500)
def computeMeansGivenSplitPoint(self, splitPoint, feature):
#Given a split point, we want to split the training set in two
#One containing all the points below the split point and one containing all the points above the split point
#The means are then the means of the targets in those datasets and they are the values we want to return
#Hint: When using pandas datasets, you can use the .loc command to easily extract a subset of the dataframe
#Remember we want to return two values
def computeSquaredError(self, splitPoint, feature):
#Once we have a split point and a set of means, we need to have some way of identifying whether it's
#a good split point or not
#First apply computeMeansGivenSplitPoint to get the mean values below and above the split point
#Then compute the sum of squared errors yielded by assigning the corresponding mean to each point in the training set
#If we add these two sums of squares together then we have a single error number which indicates how good our split point is
#Code goes here...
#To get the value of errorBelow, subset the training set to the points below the split points
#Then calculate the squared difference between target and c0 for each observation in the subset
#Then sum them up (This can all be done in one line)
#Code goes here...
totalError = errorBelow + errorAbove
return totalError
def createSplitDatasetsGivenSplitPointAndFeature(self, splitPoint, feature):
#Given a split point, split the dataset up and return two datasets
pass
```
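If you want to check your work on the three stubs above, here is one possible completion, written as standalone functions so the exercise class stays untouched. They assume `data` is a pandas DataFrame and `target`/`feature` are column names, mirroring the class attributes:

```python
import pandas as pd

def means_given_split(data, target, feature, split_point):
    """cf. computeMeansGivenSplitPoint: mean target below and above the split point."""
    below = data.loc[data[feature] <= split_point, target]
    above = data.loc[data[feature] > split_point, target]
    return below.mean(), above.mean()

def squared_error(data, target, feature, split_point):
    """cf. computeSquaredError: total SSE from predicting each side's mean."""
    c0, c1 = means_given_split(data, target, feature, split_point)
    below = data.loc[data[feature] <= split_point, target]
    above = data.loc[data[feature] > split_point, target]
    errorBelow = ((below - c0) ** 2).sum()
    errorAbove = ((above - c1) ** 2).sum()
    return errorBelow + errorAbove

def split_datasets(data, feature, split_point):
    """cf. createSplitDatasetsGivenSplitPointAndFeature: the two sub-DataFrames."""
    return (data.loc[data[feature] <= split_point],
            data.loc[data[feature] > split_point])
```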
So the decisionTreeNode has all the ingredients we need to fit the model, but we need a function to actually fit it.
```
def fitDT(Node, maxDepth):
if Node.currentDepth < maxDepth:
#If node depth < max depth then we continue to split
#Do splitting here
#We want to find the best error for each of the features, then use that feature to do the splitting
errors = {}
for feature in Node.features:
errors[feature] = #use Node.computeSquaredError for each splitPoint in that feature's mesh - save as list
#Now we want to extract the feature and splitPoint which gave the best overall error
currentBestError = min(errors[Node.features[0]]) + 1 #Initialise
for feature in Node.features:
if min(errors[feature]) < currentBestError:
bestFeature = feature
currentBestError = min(errors[feature])
bestSplitPoint = Node.splitPointMesh[feature][np.argmin(errors[feature])]
#Now we have the best feature to split on and the place where we should split it
#Use Node.createSplitDatasetsGivenSplitPointAndFeature
#Record the splitting process
Node.featureSplitOn = bestFeature
Node.bestSplitPoint = bestSplitPoint
#print(bestFeature, bestSplitPoint)
if Node.data.drop_duplicates().shape[0] > 1:
#i.e. there is more than one distinct example left in the data
#We want the left and right attributes to be decision tree nodes too
Node.left = decisionTreeNode#(...) #Define nodes on the levels below (increment depth by 1)
Node.right = decisionTreeNode#(...)
#Now do the recursive part, which works exactly the same as we saw for the simpler example above
else: #If there is only one example left in this dataset then there's no need to do any splitting
Node.left = copy.deepcopy(Node)
Node.right = copy.deepcopy(Node)
Node.left.currentDepth = Node.currentDepth + 1
Node.right.currentDepth = Node.currentDepth + 1
#Now do the recursive part, which works exactly the same as we saw for the simpler example above
elif Node.currentDepth == maxDepth:
#If we're at a terminal node then we need to return a value to predict
#Don't need to do any splitting or anything like that, just want to return the mean value
Node.prediction = #The mean of the targets
def predictSingleExample(decisionTreeNode, xrow, maxDepth):
#decisionTreeNode should be the root node of a fitted decision tree
#maxDepth needs to be the same maxDepth as the fitted decision tree
#xrow needs to be a row of a pandas dataframe with the same column names as the features
#in the training set
if decisionTreeNode.currentDepth < maxDepth:
if xrow[decisionTreeNode.featureSplitOn] < decisionTreeNode.bestSplitPoint:
#Use recursion to send to the left child
return #...
else:
#Use recursion to send to the right child
return #...
elif decisionTreeNode.currentDepth == maxDepth:
#Now we're at a terminal node, which will have a prediction attribute
return #...
root = decisionTreeNode(trainData, 'Y', ['X0', 'X1'], 0)
fitDT(root, 3)
```
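For comparison, one possible completion of the prediction routine looks like this (renamed `predict_one` so it doesn't clash with your own version); it just walks left or right at each split until it reaches a terminal node:

```python
def predict_one(node, xrow, max_depth):
    """Walk down a fitted tree, going left below the split point, until a terminal node."""
    if node.currentDepth == max_depth:
        return node.prediction  # terminal node: return the stored mean
    if xrow[node.featureSplitOn] < node.bestSplitPoint:
        return predict_one(node.left, xrow, max_depth)
    return predict_one(node.right, xrow, max_depth)
```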
## Now, as always we want to see how our predictions reflect 'reality'
We could do this in lots of different ways but one way of nicely illustrating how we've done in this case is to predict the value assigned to every value in the whole training set and see how the heat map compares to the true heatmap
```
data['YPred'] = [predictSingleExample(root, row, 3) for index, row in data.iterrows()]
plt.scatter( data['X0'], data['X1'] , c=data['Y'],
cmap = 'inferno', alpha =0.5)
cbar = plt.colorbar()
cbar.set_label('Color Intensity')
plt.title('True Y Values')
plt.xlabel('X0')
plt.ylabel('X1')
plt.show()
plt.scatter( data['X0'], data['X1'] , c=data['YPred'],
cmap = 'inferno', alpha =0.5)
cbar = plt.colorbar()
cbar.set_label('Color Intensity')
plt.title('Predicted Y Values')
plt.xlabel('X0')
plt.ylabel('X1')
plt.show()
```
## So how have we done then?
Hopefully we can see that our model has successfully captured at least some of the non-linear structure in the training set (slight differences in colour may reflect different scales on the colour bars). On my implementation the model performed slightly poorly on the values where X0 < 4, partitioning the space with an incorrect value of X1 - that is the price we must be prepared to pay for using a greedy implementation of the algorithm. However hopefully this notebook demonstrates that Decision Trees can be powerful tools for learning non-linear relationships.
# Next Step
An immediate next step would be to enable the model to be pruned after fitting. Pruning means reducing the depth of the tree in some directions and we'd like to prune the nodes which have little or no effect on performance - for example if two sibling terminal nodes tell us to predict an almost identical value for each, then we could simply convert their parent into a terminal node and assign every observation in the amalgamated space the same value.
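The merge rule described above can be sketched as follows. This is only an illustration under assumptions of our own (a `tol` threshold, and nodes carrying `left`/`right`/`prediction` attributes like the tree in this notebook); a pruned tree would then be traversed by checking for children rather than comparing depths:

```python
def prune(node, tol=1e-3):
    """Bottom-up: merge sibling leaves whose predictions differ by less than tol."""
    if node is None or node.left is None:
        return node  # already a leaf
    prune(node.left, tol)
    prune(node.right, tol)
    left_pred = getattr(node.left, "prediction", None)
    right_pred = getattr(node.right, "prediction", None)
    if left_pred is not None and right_pred is not None and abs(left_pred - right_pred) < tol:
        node.prediction = 0.5 * (left_pred + right_pred)  # parent becomes a terminal node
        node.left = node.right = None
    return node
```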
In the real world, Decision Trees are most commonly used a building block for more complex models. Random Forests and Boosting are two examples of widely used algorithms which leverage Decision Trees to make predictions.

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_LAB.ipynb)
# **Detect lab results**
To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens.
## 1. Colab Setup
Import license keys
```
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
```
Install dependencies
```
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
```
Import dependencies into Python and start the Spark session
```
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
spark = sparknlp_jsl.start(secret)
```
## 2. Select the NER model and construct the pipeline
Select the NER model - Lab Results models: **ner_jsl, ner_jsl_enriched**
For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
```
# You can change this to the model you want to use and re-run cells below.
# Lab models: ner_jsl, ner_jsl_enriched
MODEL_NAME = "ner_jsl"
```
Create the pipeline
```
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = NerDLModel.pretrained(MODEL_NAME, 'en', 'clinical/models') \
.setInputCols(['sentence', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
```
## 3. Create example inputs
```
# Enter examples as strings in this array
input_list = [
"""Tumor cells show no reactivity with cytokeratin AE1/AE3. No significant reactivity with CAM5.2 and no reactivity with cytokeratin-20 are seen. Tumor cells show partial reactivity with cytokeratin-7. PAS with diastase demonstrates no convincing intracytoplasmic mucin. No neuroendocrine differentiation is demonstrated with synaptophysin and chromogranin stains. Tumor cells show cytoplasmic and nuclear reactivity with S100 antibody. No significant reactivity is demonstrated with melanoma marker HMB-45 or Melan-A. Tumor cell nuclei (spindle cell and pleomorphic/giant cell carcinoma components) show nuclear reactivity with thyroid transcription factor marker (TTF-1). The immunohistochemical studies are consistent with primary lung sarcomatoid carcinoma with pleomorphic/giant cell carcinoma and spindle cell carcinoma components."""
]
```
## 4. Use the pipeline to create outputs
```
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
```
## 5. Visualize results
Visualize outputs as data frame
```
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
```
Functions to display outputs as HTML
```
from IPython.display import HTML, display
import random
def get_color():
r = lambda: random.randint(128,255)
return "#%02x%02x%02x" % (r(), r(), r())
def annotation_to_html(full_annotation):
ner_chunks = full_annotation[0]['ner_chunk']
text = full_annotation[0]['document'][0].result
label_color = {}
for chunk in ner_chunks:
label_color[chunk.metadata['entity']] = get_color()
html_output = "<div>"
pos = 0
for n in ner_chunks:
if pos < n.begin and pos < len(text):
html_output += f"<span class=\"others\">{text[pos:n.begin]}</span>"
pos = n.end + 1
html_output += f"<span class=\"entity-wrapper\" style=\"color: black; background-color: {label_color[n.metadata['entity']]}\"> <span class=\"entity-name\">{n.result}</span> <span class=\"entity-type\">[{n.metadata['entity']}]</span></span>"
if pos < len(text):
html_output += f"<span class=\"others\">{text[pos:]}</span>"
html_output += "</div>"
display(HTML(html_output))
```
Display example outputs as HTML
```
for example in input_list:
annotation_to_html(light_pipeline.fullAnnotate(example))
```
# 2. Download and preprocess the video
In this notebook, we'll download the preprocess the video that we will be applying style transfer to. The output of the tutorial will be the extracted audio file of the video, which will be reused when stitching the video back together, as well as the video separated into individual frames.
The video that will be used in this tutorial is also of orangutans (just like the provided sample content images). The video is stored in a public blob that we will download. However, for this section of the tutorial, you can choose to switch out the video with something of your own choice. Likewise, feel free to switch out the style image instead of using the provided image of a Renoir painting.
```md
pytorch
├── images/
│ ├── orangutan/ [<-- this folder will contain all individual frames from the video]
│ ├── sample_content_images/
│ ├── sample_output_images/
│ └── style_images/
├── video/ [<-- create this new folder to put video content in]
│ ├── orangutan.mp4 [<-- this is the downloaded video]
│ └── orangutan.mp3 [<-- this is the extracted audio file from the video]
├── style_transfer_script.py
└── style_transfer_script.log
```
---
Import utilities to help us display images and html embeddings:
```
from IPython.display import HTML
import os
%load_ext dotenv
%dotenv
```
First, create the video folder to store your video contents in.
```
%%bash
mkdir pytorch/video
```
Download the video that is stored in a public blob storage, located at https://happypathspublic.blob.core.windows.net/videos/orangutan.mp4
```
%%bash
cd pytorch/video &&
wget https://happypathspublic.blob.core.windows.net/videos/orangutan.mp4
```
Set the environment variable __VIDEO_NAME__ to the name of the video, as this will be used throughout the tutorial for convenience.
```
%%bash
dotenv set VIDEO_NAME orangutan
```
Let's check out the video so we know what it looks like beforehand:
```
%dotenv
HTML('\
<video width="360" height="360" controls> \
<source src="pytorch/video/{0}.mp4" type="video/mp4"> \
</video>'\
.format(os.getenv('VIDEO_NAME'))
)
```
Next, use __ffmpeg__ to extract the audio file and save it as orangutan.mp3 under the video directory.
```
%%bash
cd pytorch/video &&
ffmpeg -i ${VIDEO_NAME}.mp4 ${VIDEO_NAME}.mp3
```
Finally, break up the frames of the video into separate individual images. The images will be saved inside a new folder under the `/images` directory, called `/orangutan`.
```
%%bash
cd pytorch/images/ &&
mkdir ${VIDEO_NAME} && cd ${VIDEO_NAME} &&
ffmpeg -i ../../video/${VIDEO_NAME}.mp4 %05d_${VIDEO_NAME}.jpg -hide_banner
```
To make sure that the frames were successfully extracted, print out the number of images under `/pytorch/images/orangutan`. For the orangutan video, that number should be 823 individual images:
```
!cd pytorch/images/${VIDEO_NAME} && ls -1 | wc -l
```
---
## Conclusion
In this notebook, we downloaded the video that we will be applying neural style transfer to, and processed it so that we have the individual frames and audio track as separate entities. In other scenarios, this can be thought of as preprocessing the data so that it is ready to be scored.
Next, we will use the style transfer script from the previous notebook to batch apply style transfer to all extracted frames using Batch AI in Azure. But first, we need to [set up Azure so that we have the appropriate credentials and storage accounts](./03_setup_azure.ipynb).
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
RESULT_FILE = '../files/results/experimental_results_CECOVEL.csv'
results = pd.read_csv(RESULT_FILE, delimiter=";")
# Add column with architecture type (TCN or LSTM)
results['ARCHITECTURE'] = results['MODEL'].map(lambda x: 'LSTM' if 'LSTM' in x else ('TCN' if 'TCN' in x else 'NONE'))
#Set model characteristic as index
index_rows = results[['MODEL', 'MODEL_DESCRIPTION', 'FORECAST_HORIZON', 'PAST_HISTORY', 'BATCH_SIZE']].copy()
results = results.set_index(['MODEL', 'MODEL_DESCRIPTION', 'FORECAST_HORIZON', 'PAST_HISTORY', 'BATCH_SIZE'])
results.head()
metric_columns = ['mse', 'rmse', 'nrmse', 'mae', 'wape', 'mpe', 'mape',
'mdape', 'smape', 'smdape', 'mase', 'rmspe', 'rmsse', 'mre', 'rae',
'mrae', 'std_ae', 'std_ape']
METRIC = 'wape'
cmap={'TCN':(0.5019607843137255, 0.5019607843137255, 0.9916562369628703), 'LSTM':(0.4519420198186802, 0.16470588235294117, 0.32941176470588235)}
top_by_pasthistory = pd.DataFrame(columns=['PAST_HISTORY','ARCHITECTURE', METRIC])
for ph, _results in results.groupby('PAST_HISTORY'):
    for arch, __results in _results.groupby('ARCHITECTURE'):
        top_by_pasthistory = top_by_pasthistory.append(__results.reset_index().sort_values(METRIC)[['PAST_HISTORY','ARCHITECTURE', METRIC]].head(21), ignore_index=True)
fig, ax = plt.subplots(1,1,figsize=(4.75,3))
sns.boxplot(data=top_by_pasthistory, x='PAST_HISTORY', y=METRIC, hue='ARCHITECTURE', linewidth=0.75, palette=cmap, width=0.5, whis=100, hue_order=['TCN', 'LSTM'], ax=ax)
ax.set_xlabel('Past history')
ax.set_ylabel(METRIC.upper())
fig.savefig('../files/images/EV-ResultsDistribution-pastHistory.eps', format='eps', bbox_inches='tight')
fig.show()
top_tcn = results[results['ARCHITECTURE']=='TCN'].sort_values(METRIC).head(1)
print('TCN')
print('\t'+'\n\t'.join(['{}: {}'.format(k,v) for k,v in zip(top_tcn.index.names,top_tcn.index[0])]))
print('\tEPOCHS: '+str(top_tcn[['EPOCHS']].values[0][0]))
top_lstm = results[results['ARCHITECTURE']=='LSTM'].sort_values(METRIC).head(1)
print('LSTM')
print('\t'+'\n\t'.join(['{}: {}'.format(k,v) for k,v in zip(top_lstm.index.names,top_lstm.index[0])]))
print('\tEPOCHS: '+str(top_lstm[['EPOCHS']].values[0][0]))
top_by_pasthistory = pd.DataFrame(columns=['PAST_HISTORY','ARCHITECTURE', METRIC])
for ph, _results in results.groupby('PAST_HISTORY'):
    for arch, __results in _results.groupby('ARCHITECTURE'):
        top_by_pasthistory = top_by_pasthistory.append(__results.reset_index().sort_values(METRIC)[['PAST_HISTORY','ARCHITECTURE', METRIC]].head(1), ignore_index=True)
sns.barplot(data=top_by_pasthistory, x='PAST_HISTORY', y=METRIC, hue='ARCHITECTURE')
plt.show()
top_by_pasthistory.pivot(index='ARCHITECTURE', columns='PAST_HISTORY', values=METRIC)
# 25 Epochs
fig,ax = plt.subplots(1,3, figsize=(16,4))
for i, (ph, _results) in enumerate(results[results['EPOCHS']==25].groupby('PAST_HISTORY')):
    line_columns = []
    for arch, __results in _results.groupby('ARCHITECTURE'):
        loss = np.fromstring(__results.sort_values(METRIC).head(1)['loss'].iloc[0].replace('[','').replace(']',''), dtype='float32', sep=',')
        val_loss = np.fromstring(__results.sort_values(METRIC).head(1)['val_loss'].iloc[0].replace('[','').replace(']',''), dtype='float32', sep=',')
        loss, val_loss = (loss[1:], val_loss[1:]) if i < 2 else (loss[3:], val_loss[3:])
        loss_line = ax[i].plot(loss, label=('loss',arch), c=cmap[arch])
        valloss_line = ax[i].plot(val_loss, label=('val',arch), c=cmap[arch], linestyle='--')
        line_columns.append(loss_line[0])
        line_columns.append(valloss_line[0])
    ax[i].set_xlabel('Epochs')
    if i==0:
        ax[i].set_ylabel('loss')
    ax[i].set_title('Past history '+str(ph))
    # Create legend
    leg = ax[i].legend(line_columns, ['', '', ' Training', ' Validation'],
                       title='LSTM TCN ', handletextpad=-0.5,
                       ncol=2, numpoints=1)
fig.suptitle('25 epochs', y=1.025, fontsize=15)
fig.show()
# 50 Epochs
fig,ax = plt.subplots(1,3, figsize=(16,4))
for i, (ph, _results) in enumerate(results[results['EPOCHS']==50].groupby('PAST_HISTORY')):
    line_columns = []
    for arch, __results in _results.groupby('ARCHITECTURE'):
        loss = np.fromstring(__results.sort_values(METRIC).head(1)['loss'].iloc[0].replace('[','').replace(']',''), dtype='float32', sep=',')
        val_loss = np.fromstring(__results.sort_values(METRIC).head(1)['val_loss'].iloc[0].replace('[','').replace(']',''), dtype='float32', sep=',')
        loss, val_loss = (loss[1:], val_loss[1:]) if i < 2 else (loss[3:], val_loss[3:])
        loss_line = ax[i].plot(loss, label=('loss',arch), c=cmap[arch])
        valloss_line = ax[i].plot(val_loss, label=('val',arch), c=cmap[arch], linestyle='--')
        line_columns.append(loss_line[0])
        line_columns.append(valloss_line[0])
    ax[i].set_xlabel('Epochs')
    if i==0:
        ax[i].set_ylabel('loss')
    ax[i].set_title('Past history '+str(ph))
    # Create legend
    leg = ax[i].legend(line_columns, ['', '', ' Training', ' Validation'],
                       title='LSTM TCN ', handletextpad=-0.5,
                       ncol=2, numpoints=1)
fig.suptitle('50 epochs', y=1.025, fontsize=15)
fig.show()
# 100 Epochs
fig,ax = plt.subplots(1,3, figsize=(16,4))
for i, (ph, _results) in enumerate(results[results['EPOCHS']==100].groupby('PAST_HISTORY')):
    line_columns = []
    for arch, __results in _results.groupby('ARCHITECTURE'):
        loss = np.fromstring(__results.sort_values(METRIC).head(1)['loss'].iloc[0].replace('[','').replace(']',''), dtype='float32', sep=',')
        val_loss = np.fromstring(__results.sort_values(METRIC).head(1)['val_loss'].iloc[0].replace('[','').replace(']',''), dtype='float32', sep=',')
        loss, val_loss = (loss[1:], val_loss[1:]) if i < 2 else (loss[3:], val_loss[3:])
        loss_line = ax[i].plot(loss, label=('loss',arch), c=cmap[arch])
        valloss_line = ax[i].plot(val_loss, label=('val',arch), c=cmap[arch], linestyle='--')
        line_columns.append(loss_line[0])
        line_columns.append(valloss_line[0])
    ax[i].set_xlabel('Epochs')
    if i==0:
        ax[i].set_ylabel('loss')
    ax[i].set_title('Past history '+str(ph))
    # Create legend
    leg = ax[i].legend(line_columns, ['', '', ' Training', ' Validation'],
                       title='LSTM TCN ', handletextpad=-0.5,
                       ncol=2, numpoints=1)
fig.suptitle('100 epochs', y=1.025, fontsize=15)
fig.show()
fig,ax = plt.subplots(1,3, figsize=(16,4))
for i, (ph, _results) in enumerate(results.groupby('PAST_HISTORY')):
    for arch, __results in _results.groupby('ARCHITECTURE'):
        loss = np.fromstring(__results.sort_values(METRIC).head(1)['loss'].iloc[0].replace('[','').replace(']',''), dtype='float32', sep=',')
        val_loss = np.fromstring(__results.sort_values(METRIC).head(1)['val_loss'].iloc[0].replace('[','').replace(']',''), dtype='float32', sep=',')
        loss, val_loss = (loss[1:], val_loss[1:]) if i < 2 else (loss[3:], val_loss[3:])
        ax[i].plot(loss, label=('loss',arch), c=cmap[arch])
        ax[i].plot(val_loss, label=('val',arch), c=cmap[arch], linestyle='--')
    ax[i].set_xlabel('Epochs')
    if i==0:
        ax[i].set_ylabel('loss')
    ax[i].legend()
ax[1].set_title('100 epochs')
fig.show()
for i, (ph, _results) in enumerate(results[results['EPOCHS']==50].groupby('PAST_HISTORY')):
    fig,ax = plt.subplots(1,1, figsize=(3,2.5))
    line_columns = []
    for arch, __results in _results.groupby('ARCHITECTURE'):
        print(str(i), str(ph), arch)
        top_tcn = __results.sort_values(METRIC).head(1)
        print('\t'+'\n\t'.join(['{}: {}'.format(k,v) for k,v in zip(top_tcn.index.names,top_tcn.index[0])]))
        print('\tEPOCHS: '+str(top_tcn[['EPOCHS']].values[0][0]))
        loss = np.fromstring(__results.sort_values(METRIC).head(1)['loss'].iloc[0].replace('[','').replace(']',''), dtype='float32', sep=',')
        val_loss = np.fromstring(__results.sort_values(METRIC).head(1)['val_loss'].iloc[0].replace('[','').replace(']',''), dtype='float32', sep=',')
        loss, val_loss = (loss[1:], val_loss[1:]) if i < 2 else (loss[3:], val_loss[3:])
        loss_line = ax.plot(loss, label=('loss',arch), c=cmap[arch])
        valloss_line = ax.plot(val_loss, label=('val',arch), c=cmap[arch], linestyle='--')
        line_columns.append(loss_line[0])
        line_columns.append(valloss_line[0])
    ax.set_xlabel('Epochs')
    ax.set_ylabel('Loss (MAE)')
    # Create legend
    leg = ax.legend(line_columns, ['', '', ' Training', ' Validation'],
                    title='LSTM TCN ', handletextpad=-0.5,
                    ncol=2, numpoints=1)
    fig.savefig('../files/images/EV-loss_50epochs_{}PastHistory.eps'.format(ph), format='eps', bbox_inches='tight')
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Post-training dynamic range quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
[TensorFlow Lite](https://www.tensorflow.org/lite/) now supports
converting weights to 8-bit precision as part of model conversion from
TensorFlow graphdefs to TensorFlow Lite's flat buffer format. Dynamic range quantization achieves a 4x reduction in the model size. In addition, TFLite supports on-the-fly quantization and dequantization of activations to allow for:
1. Using quantized kernels for faster implementation when available.
2. Mixing of floating-point kernels with quantized kernels for different parts
of the graph.
The activations are always stored in floating point. For ops that
support quantized kernels, the activations are quantized to 8 bits of precision
dynamically prior to processing and are de-quantized to float precision after
processing. Depending on the model being converted, this can give a speedup over
pure floating point computation.
In contrast to
[quantization aware training](https://github.com/tensorflow/tensorflow/tree/r1.14/tensorflow/contrib/quantize)
, the weights are quantized post training and the activations are quantized dynamically
at inference in this method.
Therefore, the model weights are not retrained to compensate for quantization
induced errors. It is important to check the accuracy of the quantized model to
ensure that the degradation is acceptable.
This tutorial trains an MNIST model from scratch, checks its accuracy in
TensorFlow, and then converts the model into a TensorFlow Lite flatbuffer
with dynamic range quantization. Finally, it checks the
accuracy of the converted model and compares it to the original float model.
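The weight scheme can be illustrated with a small NumPy sketch. This is a simplified, per-tensor symmetric quantization, not the converter's exact implementation (which may also use per-channel scales and zero points), but it shows where the 4x size reduction and the quantization error come from:

```python
import numpy as np

# A hypothetical float32 weight tensor.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

# Per-tensor symmetric 8-bit quantization: map [-max|w|, max|w|] onto [-127, 127].
scale = np.abs(w).max() / 127.0
w_int8 = np.round(w / scale).astype(np.int8)

# Dequantize back to float, as done on the fly for ops without quantized kernels.
w_dequant = w_int8.astype(np.float32) * scale

print(w_int8.nbytes / w.nbytes)            # 0.25 -- the 4x size reduction
print(float(np.abs(w - w_dequant).max()))  # rounding error, bounded by scale / 2
```

Storing `w_int8` plus one float `scale` instead of `w` is what shrinks the model; the per-element error is at most half a quantization step.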
## Build an MNIST model
### Setup
```
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
```
### Train a TensorFlow model
```
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
```
Since you trained the model for just a single epoch, it only trains to ~96% accuracy.
### Convert to a TensorFlow Lite model
Using the Python [TFLiteConverter](https://www.tensorflow.org/lite/convert/python_api), you can now convert the trained model into a TensorFlow Lite model.
Now load the model using the `TFLiteConverter`:
```
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```
Write it out to a tflite file:
```
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
```
To quantize the model on export, set the `optimizations` flag to optimize for size:
```
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
```
Note how the resulting file is approximately `1/4` the size.
```
!ls -lh {tflite_models_dir}
```
## Run the TFLite models
Run the TensorFlow Lite model using the Python TensorFlow Lite
Interpreter.
### Load the model into an interpreter
```
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
```
### Test the model on one image
```
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
```
### Evaluate the models
```
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]

    # Run predictions on every image in the "test" dataset.
    prediction_digits = []
    for test_image in test_images:
        # Pre-processing: add batch dimension and convert to float32 to match with
        # the model's input data format.
        test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
        interpreter.set_tensor(input_index, test_image)

        # Run inference.
        interpreter.invoke()

        # Post-processing: remove batch dimension and find the digit with highest
        # probability.
        output = interpreter.tensor(output_index)
        digit = np.argmax(output()[0])
        prediction_digits.append(digit)

    # Compare prediction results with ground truth labels to calculate accuracy.
    accurate_count = 0
    for index in range(len(prediction_digits)):
        if prediction_digits[index] == test_labels[index]:
            accurate_count += 1
    accuracy = accurate_count * 1.0 / len(prediction_digits)
    return accuracy
print(evaluate_model(interpreter))
```
Repeat the evaluation on the dynamic range quantized model to obtain:
```
print(evaluate_model(interpreter_quant))
```
In this example, the compressed model shows no difference in accuracy.
## Optimizing an existing model
Resnets with pre-activation layers (Resnet-v2) are widely used for vision applications.
Pre-trained frozen graph for resnet-v2-101 is available on
[Tensorflow Hub](https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4).
You can convert the frozen graph to a TensorFlow Lite flatbuffer with quantization by:
```
import tensorflow_hub as hub
resnet_v2_101 = tf.keras.Sequential([
keras.layers.InputLayer(input_shape=(224, 224, 3)),
hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4")
])
converter = tf.lite.TFLiteConverter.from_keras_model(resnet_v2_101)
# Convert to TF Lite without quantization
resnet_tflite_file = tflite_models_dir/"resnet_v2_101.tflite"
resnet_tflite_file.write_bytes(converter.convert())
# Convert to TF Lite with quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
resnet_quantized_tflite_file = tflite_models_dir/"resnet_v2_101_quantized.tflite"
resnet_quantized_tflite_file.write_bytes(converter.convert())
!ls -lh {tflite_models_dir}/*.tflite
```
The model size reduces from 171 MB to 43 MB.
The accuracy of this model on imagenet can be evaluated using the scripts provided for [TFLite accuracy measurement](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification).
The optimized model top-1 accuracy is 76.8, the same as the floating point model.
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/FeatureCollection/reverse_mask.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/reverse_mask.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=FeatureCollection/reverse_mask.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/reverse_mask.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
Map.setCenter(-100, 40, 4)
fc = (ee.FeatureCollection('ft:1Ec8IWsP8asxN-ywSqgXWMuBaxI6pPaeh6hC64lA')
.filter(ee.Filter().eq('ECO_NAME', 'Great Basin shrub steppe')))
# Start with a black image.
empty = ee.Image(0).toByte()
# Fill and outline the polygons in two colors
filled = empty.paint(fc, 2)
both = filled.paint(fc, 1, 5)
# Mask off everything that matches the fill color.
result = both.mask(filled.neq(2))
Map.addLayer(result, {
'palette': '000000,FF0000',
'max': 1,
'opacity': 0.5
}, "Basin")
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
# MedleyDB content analysis and instrumental based remix
This notebook contains the MedleyDB instrument and genre analysis used to select the most representative instrument families in its musical content.
```
import medleydb as mdb
import medleydb.mix
import seaborn as sns
import pandas as pd
import os
```
## 1. Selection of sung pieces
For my studies, I use the following definition of singing voice: (???)
- female and male singers
- vocalists
- choir
```
sing_voice = ["AClassicEducation_NightOwl","AimeeNorwich_Child","AlexanderRoss_GoodbyeBolero",
"AlexanderRoss_VelvetCurtain","Auctioneer_OurFutureFaces","AvaLuna_Waterduct",
"BigTroubles_Phantom","BrandonWebster_DontHearAThing","BrandonWebster_YesSirICanFly",
"CelestialShore_DieForUs","ClaraBerryAndWooldog_AirTraffic","ClaraBerryAndWooldog_Boys",
"ClaraBerryAndWooldog_Stella","ClaraBerryAndWooldog_TheBadGuys","ClaraBerryAndWooldog_WaltzForMyVictims",
"Creepoid_OldTree","Debussy_LenfantProdigue","DreamersOfTheGhetto_HeavyLove","FacesOnFilm_WaitingForGa",
"FamilyBand_Again","Handel_TornamiAVagheggiar","HeladoNegro_MitadDelMundo","HezekiahJones_BorrowedHeart",
"HopAlong_SisterCities","InvisibleFamiliars_DisturbingWildlife","LizNelson_Coldwar",
"LizNelson_ImComingHome"] # this last one has nothing with more than 2 (check what it is... the novocal mix was not generated)
sing= ["LizNelson_Rainfall","MatthewEntwistle_DontYouEver","MatthewEntwistle_Lontano",
"Meaxic_TakeAStep","Meaxic_YouListen","Mozart_BesterJungling","Mozart_DiesBildnis","MusicDelta_80sRock",
"MusicDelta_Beatles","MusicDelta_Britpop","MusicDelta_Country1","MusicDelta_Country2","MusicDelta_Disco",
"MusicDelta_Gospel","MusicDelta_Grunge","MusicDelta_Hendrix","MusicDelta_Punk","MusicDelta_Reggae",
"MusicDelta_Rock","MusicDelta_Rockabilly","NightPanther_Fire","PortStWillow_StayEven","PurlingHiss_Lolita",
"Schubert_Erstarrung","Schumann_Mignon","SecretMountains_HighHorse","Snowmine_Curfews",
"StevenClark_Bounty","StrandOfOaks_Spacestation","SweetLights_YouLetMeDown","TheDistricts_Vermont",
"TheScarletBrand_LesFleursDuMal","TheSoSoGlos_Emergency","Wolf_DieBekherte"]
singed = sing_voice+sing
```
## 2. Generate remixes predefined on MedleyDB API
### Is the detector sensitive to changes in instrumentation?
Evaluate results on pieces with different instrumentations.
_Mix a piece with different instrumentations (if possible, with similar instrumentations across different pieces)._
Suggestions:
- Only melodic instruments
- Only monophonic instruments
- Only monophonic + percussive instruments
- No vocals (all sources)
- Custom sets of instruments
```
def generate_mixes(piece_name):
    # Load all multitracks
    mtrack_generator = mdb.load_all_multitracks()
    mtrack = mdb.MultiTrack(piece_name)
    print ("Stems of the piece:", piece_name)
    for stem in mtrack.stems.values():
        print (stem.instrument)
    outpath = "/home/shayenne/Documents/Mestrado/MedleyDBMIX/"
    dir_name = outpath+piece_name
    try:
        os.mkdir(dir_name)
        print("Directory " , dir_name , " created ")
    except FileExistsError:
        print("Directory " , dir_name , " already exists")
    #filepaths, weights = medleydb.mix.mix_multitrack(mtrack, "test.wav", stem_indices=[1,2,3,5])
    filepath = outpath+piece_name+"/"+piece_name
    filepaths, weights = medleydb.mix.mix_melody_stems(mtrack, filepath+"_melody.wav")
    filepaths, weights = medleydb.mix.mix_mono_stems(mtrack, filepath+"_mono.wav")
    filepaths, weights = medleydb.mix.mix_mono_stems(mtrack, filepath+"_mono_drums.wav", include_percussion=True)
    track_idx = medleydb.mix.mix_no_vocals(mtrack, filepath+"_novocal.wav")
    print ("Generated all mixes defined for", piece_name)
# for piece in sing:
# generate_mixes(piece)
medleydb.multitrack.get_valid_instrument_labels()
# Select a set with the same instruments to evaluate the instrumentation
# instrument = "piano"
# inst_list = medleydb.utils.get_files_for_instrument(instrument,multitrack_list=mtrack_generator)
# # Generators are finite (Use just once, run all cells if you want to recharge it again)
# for file in inst_list:
# print (file)
```
## Dataset statistics
We build a dataframe with information about the dataset to assess which kinds of experiments will be useful using its data.
### Creating the dataframes
```
# Load all multitracks
mtrack_generator = mdb.load_all_multitracks()
piece_stems = {}
df_pieces = pd.DataFrame(columns=['track_id', 'artist', 'title', 'genre', 'has_melody', 'is_instrumental'])
df = pd.DataFrame(columns=['track_id', 'instrument', 'family', 'is_instrumental'])
inst_family = {'percussion': ['drum set', 'tambourine', 'timpani', 'tabla', 'snare drum', 'glockenspiel', 'cymbal', 'drum machine',
'kick drum', 'auxiliary percussion', 'vibraphone', "beatboxing", 'shaker', 'claps', 'bongo'],
'plucked-strings': ['electric bass', 'mandolin', 'distorted electric guitar', 'clean electric guitar',
'acoustic guitar', 'banjo', 'double bass', 'lap steel guitar'],
'electronics': ['synthesizer', 'fx/processed sound','scratches','sampler'],
'singing': ["male singer", "female singer", "vocalists"],
'speak': ["male speaker","female speaker","male rapper","female rapper"],
'strings': ['cello', 'violin section', 'string section', 'viola section','viola','violin'],
'woodwind': ['flute', 'clarinet', 'harmonica', 'tenor saxophone', 'melodica'],
'brass': ['trumpet section', 'horn section', 'brass section', 'accordion','tuba','trombone','bassoon','french horn section'],
'piano': ['piano', 'tack piano']
}
real_family = None
for mtrack in mtrack_generator:
    piece_stems[mtrack.track_id] = []
    #print ("Stems of the piece saved:", mtrack.track_id)
    df_pieces.loc[len(df_pieces)] = [mtrack.track_id, mtrack.artist, mtrack.title, mtrack.genre, mtrack.has_melody, mtrack.is_instrumental]
    for stem in mtrack.stems.values():
        real_family = 'Not Defined'
        piece_stems[mtrack.track_id].append(stem.instrument)
        for family, instruments in inst_family.items():  # for name, age in dictionary.iteritems(): (for Python 2.x)
            if stem.instrument[0] in instruments:
                real_family = family
        df = df.append({'track_id': mtrack.track_id, 'instrument': stem.instrument[0], 'family': real_family, 'is_instrumental': mtrack.is_instrumental}, ignore_index=True)
        #print (stem.instrument)
df_pieces.head()
sns.set(rc={'figure.figsize':(10,15)})
sns.set_color_codes("pastel")
sns.countplot(y='instrument', data=df, color="b")
teste = df.loc[df['family'] == 'Not Defined']
teste
# The piece with vocalists that is not instrumental is Matthew Entwistle _ An Evening with Oliver
## TODO: sort this chart by the number of instruments present in sung voice
sns.set(rc={'figure.figsize':(10,15)})
df2 = df.where(df.is_instrumental == 0)
sns.set_color_codes("pastel")
sns.countplot(y='instrument', data=df, color="b")
sns.set_color_codes("muted")
sns.countplot(y="instrument", data=df2, color="b")
sns.set(rc={'figure.figsize':(10,15)})
df2 = df.where(df.is_instrumental == 0)
sns.set_color_codes("pastel")
sns.countplot(y='family', data=df, color="b")
sns.set_color_codes("muted")
sns.countplot(y="family", data=df2, color="b")
# groups_inst = df.groupby(['track_id', 'is_instrumental'])
# sns.countplot(y='artist', hue='is_instrumental', data=df_pieces)
sns.set(rc={'figure.figsize':(13,8)})
sns.set_color_codes("pastel")
sns.countplot(x='genre', data=df_pieces, palette='pastel')
sns.set(rc={'figure.figsize':(13,8)})
df_pieces2 = df_pieces.where(df_pieces.is_instrumental == 0)
sns.set_color_codes("muted")
sns.countplot(x='genre', data=df_pieces2, palette='muted')
sns.set_color_codes("pastel")
sns.countplot(x='genre', data=df_pieces, palette='pastel')
```
## Mix specific groups of instrumentation
Groups defined by the families.
```
def mix_specifc_sources(mtrack, output_path, sources):
    """Remixes a multitrack with all stems in sources.

    Parameters
    ----------
    mtrack : Multitrack
        Multitrack object
    output_path : str
        Path to save output file.
    sources : list
        List of sources desired in mix.

    Returns
    -------
    stem_indices : list
        List of stem indices used in mix.

    """
    stems = mtrack.stems
    stem_indices = []
    for i in stems.keys():
        present = all([inst in sources for inst in stems[i].instrument])
        if present:
            stem_indices.append(i)
    medleydb.mix.mix_multitrack(mtrack, output_path, stem_indices=stem_indices)
    return stem_indices
mtrack_generator = mdb.load_all_multitracks()
families = ["percussion",
"electronics",
"strings",
"woodwind",
"brass",
"piano"]
outpath = "/home/shayenne/Documents/Mestrado/MedleyDBMIX/"
# Get all sung pieces and remix each with one specific family plus the voiced stems
# If there is no source from that family, do not make a new remix
for piece in singed:
mtrack = mdb.MultiTrack(piece)
for family in families:
print ("Processing family", family)
dir_name = outpath+piece
try:
os.mkdir(dir_name)
# print("Directory " , dir_name , " created ")
except FileExistsError:
# print("Directory " , dir_name , " already exists")
pass
#filepaths, weights = medleydb.mix.mix_multitrack(mtrack, "test.wav", stem_indices=[1,2,3,5])
filepath = outpath+piece+"/"+piece
output_path = filepath+'_'+family+'_vocals.wav'
sources = inst_family[family]+inst_family['singing']
#print (sources)
try:
indices = mix_specifc_sources(mtrack, output_path, sources)
            print (indices)  # Check whether this is None when the family's sources are missing; if so, use that to skip the remix.
        except ValueError:
print(output_path)
print ("Cannot remix with this family :/")
continue
for family in inst_family.keys():
print (family)
```
## 3. Experiment
```
# Load all multitracks
mtrack_generator = mdb.load_all_multitracks()
piece_name = 'CelestialShore_DieForUs'
mtrack = mdb.MultiTrack(piece_name)
for stem in mtrack.stems.values():
print (stem.instrument)
sources = ['male singer','female singer','synthesizer','clean electric guitar']
output_path = "~/shayenne/FirstTrySources.wav"
mix_specifc_sources(mtrack, output_path, sources)
```
# Feature Transformation
Current state of transformations:
- alt: Currently fully transformed via `log(x + 1 - min(x))`. This appears to be acceptable in some circles but doubted in others.
- minimum_lap_time: Normalized by raceId, then used imputation by the median for outliers.
- average_lap_time: Normalized by raceId, then transformed using `log(x)`. Also used imputation by the median for outliers.
- PRCP: Ended up being transformed with `log(x + 1)`. Again, this appears to be acceptable in some circles but doubted in others.
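The two log-based transforms above can be sketched as small helpers (the function names are ours, for illustration; the notebook applies the same formulas inline in the sections below):

```python
import numpy as np

# Sketches of the two log-based transforms summarized above.
def log_shift(x):
    """log(x + 1 - min(x)): shifts the data so the log argument is >= 1 (used for alt)."""
    x = np.asarray(x, dtype=float)
    return np.log(x + 1 - x.min())

def log1p_transform(x):
    """log(x + 1): well-defined at zero (used for PRCP)."""
    return np.log1p(np.asarray(x, dtype=float))
```

The lap-time normalization is built step by step in its own sections below.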
## Set Up
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
%matplotlib inline
# Read in MasterData5.
master_data_url = 'https://raw.githubusercontent.com/georgetown-analytics/Formula1/main/data/processed/MasterData5.csv'
master_data = pd.read_csv(master_data_url, sep = ',', engine = 'python')
# Rename Completion Status to CompletionStatus.
master_data = master_data.rename(columns = {"Completion Status": "CompletionStatus"})
# Only include the final, decided features we'll be using in our models. Do not include identifiable features besides raceId and driverId.
features = master_data[['raceId', 'driverId', 'CompletionStatus', 'alt', 'grid', 'trackType2',
'average_lap_time', 'minimum_lap_time',
'year', 'PRCP', 'TAVG', 'isHistoric', 'oneHot_circuits_1',
'oneHot_circuits_2', 'oneHot_circuits_3', 'oneHot_circuits_4',
'oneHot_circuits_5', 'oneHot_circuits_6']]
# Rename trackType2 to trackType.
features = features.rename(columns = {"trackType2": "trackType"})
# Make trackType a string feature instead of numeric.
features["trackType"] = features.trackType.astype("str")
```
## Transform Features
We know from our `Variable_Distribution.csv` workbook that the features with positive skews that we need to transform are `alt` and `average_lap_time`. Features with outliers are `minimum_lap_time` and `PRCP`.
The code for the log transformations below comes from this Towards Data Science post: https://towardsdatascience.com/feature-transformation-for-multiple-linear-regression-in-python-8648ddf070b8
### alt
The feature `alt`, or the altitude of each track, is positively skewed. You can see how below.
```
# Show the distribution of alt.
alt_dis = sns.displot(features, x = 'alt').set(title = "Distribution of Variable: alt")
# What does alt look like?
features['alt'].describe()
"""
Transform alt below using log(x + 1 - min(x)). These were suggested by the following sites:
https://blogs.sas.com/content/iml/2011/04/27/log-transformations-how-to-handle-negative-data-values.html
https://www.researchgate.net/post/can_we_replace-Inf_values_after_log_transformations_with_zero
"""
features['alt_trans'] = np.log(features['alt'] + 1 - min(features['alt']))
features.describe()
# Show the new distribution of alt.
alt_trans_dis = sns.displot(features, x = 'alt_trans').set(title = "Transformed Distribution of Variable: alt")
```
`alt` is still not exactly a normal distribution, but it is no longer skewed. This should help it perform better in our models.
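That claim can be backed numerically with `scipy.stats.skew`, which should drop substantially after the transform. Shown here on synthetic positively skewed data, since the point is the method rather than the exact values from `features['alt']`:

```python
import numpy as np
from scipy import stats

# Synthetic positively skewed sample standing in for alt.
rng = np.random.default_rng(0)
alt_like = rng.lognormal(mean=5, sigma=1, size=1000)

before = stats.skew(alt_like)
after = stats.skew(np.log(alt_like + 1 - alt_like.min()))  # same log(x + 1 - min(x)) transform
print(f"skew before: {before:.2f}, after: {after:.2f}")
```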
### minimum_lap_time
Before we can deal with any outliers, we need to normalize the feature. Right now `minimum_lap_time` is not normalized across races, which is a problem because different races have different track lengths. In order to normalize this feature, we'll group by `raceId`, aggregate for the mean, and then join this new variable back up with our master data. From there we can find `normalized_minlaptime = minimum_lap_time / mean_minlaptime`.
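The same normalization can also be done in a single step with `groupby().transform` (a sketch on toy data; the notebook builds it via an explicit merge below, which keeps the intermediate per-race mean visible):

```python
import pandas as pd

# transform('mean') broadcasts each race's mean back onto its own rows,
# so the ratio can be computed without a separate merge.
df = pd.DataFrame({'raceId': [1, 1, 2], 'minimum_lap_time': [80.0, 120.0, 60.0]})
df['normalized_minLapTime'] = (
    df['minimum_lap_time'] / df.groupby('raceId')['minimum_lap_time'].transform('mean')
)
# race 1 mean is 100.0 -> 0.8 and 1.2; race 2 -> 1.0
```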
```
# Group by raceId and aggregate for the mean.
raceId_min = master_data.groupby(['raceId'], as_index = False).agg({'minimum_lap_time':'mean'})
raceId_min.describe()
# Rename raceId_min's minimum_lap_time to mean_minlaptime.
raceId_min = raceId_min.rename(columns = {"minimum_lap_time": "mean_minlaptime"})
# Merge master_data with raceId_min on "raceId" to get avglaps_min.
avglaps_min = pd.merge(master_data, raceId_min, on = "raceId")
avglaps_min.head()
# Normalize the minimum lap time by dividing minimum_lap_time by mean_minlaptime.
avglaps_min['normalized_minLapTime'] = avglaps_min['minimum_lap_time'] / avglaps_min['mean_minlaptime']
avglaps_min[['normalized_minLapTime', 'minimum_lap_time', 'mean_minlaptime']].describe()
```
Now that we have minimum lap time normalized, let's take a look at the distribution again. From the description above, with a large gap between the 75% quartile and the max, it looks like the data will still be skewed.
```
# Plot the distribution of our new variable.
norm_minlaptime_dist = sns.displot(avglaps_min, x = 'normalized_minLapTime').set(title = "Distribution of Variable: normalized_minLapTime")
# Plot a boxplot to find our outliers.
normmin_boxplt = sns.boxplot(data = avglaps_min, x = 'normalized_minLapTime').set(title = "Boxplot of Variable: normalized_minLapTime")
```
We have one significant outlier near 5.0, and three in the 2.0 to 3.5 range.
We'll go ahead and use imputation to deal with these outliers, as per this site (https://medium.com/analytics-vidhya/how-to-remove-outliers-for-machine-learning-24620c4657e8). We're essentially replacing our outliers' values with the medians. As Analytics Vidhya states, "median is appropriate because it is not affected by outliers." We also used Analytics Vidhya's code.
```
"""
Use Analytics Vidhya's code to create an if statement within a for loop to replace outliers above the
uppertail or below the lowertail with the median. Although Analytics Vidhya used 1.5 as their modifier,
we'll use 2.0 in order to try and collect less outliers while still allowing the distribution to normalize.
"""
for i in avglaps_min['normalized_minLapTime']:
Q1 = avglaps_min['normalized_minLapTime'].quantile(0.25)
Q3 = avglaps_min['normalized_minLapTime'].quantile(0.75)
IQR = Q3 - Q1
lowertail = Q1 - 2.0 * IQR
uppertail = Q3 + 2.0 * IQR
if i > uppertail or i < lowertail:
avglaps_min['normalized_minLapTime'] = avglaps_min['normalized_minLapTime'].replace(i, np.median(avglaps_min['normalized_minLapTime']))
# Plot a boxplot to find our outliers.
impnormmin_boxplt = sns.boxplot(data = avglaps_min, x = 'normalized_minLapTime').set(title = "Imputed Boxplot of Variable: normalized_minLapTime")
# Plot the new distribution of our new variable.
newnorm_minlaptime_dist = sns.displot(avglaps_min, x = 'normalized_minLapTime').set(title = "Imputed Distribution of Variable:\nnormalized_minLapTime")
```
We have a normalized feature with what appears to be a normal distribution! The one thing to worry about here is how many outliers had to be imputed, which may cause a problem with our distribution.
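One way to quantify that concern is to count how many points the IQR rule flags before imputing them. This helper (the name and the vectorized form are ours) mirrors the 2.0 modifier used above:

```python
import numpy as np

# Flag values outside [Q1 - modifier*IQR, Q3 + modifier*IQR].
def iqr_outlier_mask(values, modifier=2.0):
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - modifier * iqr, q3 + modifier * iqr
    return (values < lower) | (values > upper)

# e.g. iqr_outlier_mask(avglaps_min['normalized_minLapTime']).sum()
```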
### average_lap_time
Right now our `average_lap_time` is not normalized across races, which is a problem because different races have different track lengths. In order to normalize this feature, we'll group by `raceId`, aggregate for the mean, and then join this new variable back up with our master data. From there we can find `normalized_avglaptime = average_lap_time / mean_avglaptime`.
```
# Group by raceId and aggregate for the mean.
raceId_average = master_data.groupby(['raceId'], as_index = False).agg({'average_lap_time':'mean'})
raceId_average.describe()
# Rename raceId_average's average_lap_time to mean_avglaptime.
raceId_average = raceId_average.rename(columns = {"average_lap_time": "mean_avglaptime"})
# Merge master_data with raceId_average by "raceId" to get avglaps_avg.
avglaps_avg = pd.merge(master_data, raceId_average, on = "raceId")
avglaps_avg.head()
# Normalize the average lap time by dividing average_lap_time by mean_avglaptime.
avglaps_avg['normalized_avgLapTime'] = avglaps_avg['average_lap_time'] / avglaps_avg['mean_avglaptime']
avglaps_avg[['normalized_avgLapTime', 'average_lap_time', 'mean_avglaptime']].describe()
```
Now that we have average lap time normalized, let's take a look at the distribution again. From the description above, with a large gap between the 75% quartile and the max, it looks like the data will still be skewed.
```
# Plot the distribution of our new variable.
normalized_avglaptime_dist = sns.displot(avglaps_avg, x = 'normalized_avgLapTime').set(title = "Distribution of Variable: normalized_avgLapTime")
```
We can see that there appears to be a positive skew, so we'll use log(x) to transform the feature.
```
"""
Transform normalized_avgLapTime with log(x). This was suggested by Towards Data Science, linked above.
"""
avglaps_avg['normalized_avgLapTime'] = np.log(avglaps_avg['normalized_avgLapTime'])
avglaps_avg.describe()
# Plot the distribution of our transformed variable.
normalized_avglaptime_dist = sns.displot(avglaps_avg, x = 'normalized_avgLapTime').set(title = "Transformed Distribution of Variable: normalized_avgLapTime")
```
This looks much better, but we can see that there are still some extreme outliers.
```
"""
Use Analytics Vidhya's code to create an if statement within a for loop to replace outliers above the
uppertail or below the lowertail with the median. We'll use 2.5 as our modifier to try and collect less
outliers while still allowing the distribution to normalize.
"""
for i in avglaps_avg['normalized_avgLapTime']:
Q1 = avglaps_avg['normalized_avgLapTime'].quantile(0.25)
Q3 = avglaps_avg['normalized_avgLapTime'].quantile(0.75)
IQR = Q3 - Q1
lowertail = Q1 - 2.5 * IQR
uppertail = Q3 + 2.5 * IQR
if i > uppertail or i < lowertail:
avglaps_avg['normalized_avgLapTime'] = avglaps_avg['normalized_avgLapTime'].replace(i, np.median(avglaps_avg['normalized_avgLapTime']))
# Plot a boxplot to find our outliers.
impnormmin_boxplt = sns.boxplot(data = avglaps_avg, x = 'normalized_avgLapTime').set(title = "Transformed and Imputed Boxplot of Variable:\nnormalized_avgLapTime")
# Plot the new distribution of our new variable.
newnorm_minlaptime_dist = sns.displot(avglaps_avg, x = 'normalized_avgLapTime').set(title = "Transformed and Imputed Distribution of Variable:\nnormalized_avgLapTime")
```
We have a normalized feature with what appears to be a normal distribution! The one thing to worry about here is how many outliers had to be imputed, which may cause a problem with our distribution.
### PRCP
```
# Plot a boxplot to find our outliers.
PRCP_boxplt = sns.boxplot(data = features, x = 'PRCP').set(title = "Distribution of Variable: PRCP")
```
We have one significant outlier.
```
# What does PRCP look like?
features['PRCP'].describe()
```
The high max with low 75% quartile definitely suggests a positive skew.
```
"""
Transform PRCP with log(x + 1). This was suggested by the following site:
https://discuss.analyticsvidhya.com/t/methods-to-deal-with-zero-values-while-performing-log-transformation-of-variable/2431/9
"""
features['PRCP_trans'] = np.log(features['PRCP'] + 1)
features.describe()
# Create a new variable distribution.
new2PRCP_dis = sns.displot(features, x = 'PRCP_trans').set(title = 'Transformed Distribution of Variable: PRCP')
# Plot a boxplot to find our outliers.
PRCPtrans_boxplt = sns.boxplot(data = features, x = 'PRCP_trans').set(title = "Transformed Boxplot of Variable: PRCP")
```
Although there still seem to be some outliers, they aren't as extreme as before.
## Rejoin the Features into One Dataset
Current locations of our features:
- `alt` and `alt_trans` are both in the `features` dataset
- `minimum_lap_time` and `normalized_minLapTime` are both in the `avglaps_min` dataset
- `average_lap_time` and `normalized_avgLapTime` are both in the `avglaps_avg` dataset
- `PRCP` and `PRCP_trans` are both in `features` dataset
```
# What columns are in features?
features.columns
```
We need to bring normalized_minLapTime and normalized_avgLapTime over to features.
```
# What columns are in avglaps_min?
avglaps_min.columns
# Select just the wanted features for a new dataset.
pref_minlapsdata = avglaps_min[["raceId", "driverId", "normalized_minLapTime"]]
pref_minlapsdata.describe()
# What columns are in avglaps_avg?
avglaps_avg.columns
# Select just the wanted features for a new dataset.
pref_avglapsdata = avglaps_avg[["raceId", "driverId", "normalized_avgLapTime"]]
pref_avglapsdata.describe()
# Merge features with pref_minlapsdata by "raceId" and "driverId" to get min_features.
min_features = pd.merge(features, pref_minlapsdata, on = ["raceId", "driverId"])
min_features.head()
# Merge min_features with pref_avglapsdata by "raceId" and "driverId" to get final_features.
final_features = pd.merge(min_features, pref_avglapsdata, on = ["raceId", "driverId"])
final_features.head()
```
### Create a csv file
```
# Use pandas.DataFrame.to_csv to write our final_features dataset to a new CSV file.
final_features.to_csv("./data/processed/final_features.csv", index = False)
```
---
title: "Inference Analysis"
date: 2021-04-25
type: technical_note
draft: false
---
## Check monitoring analysis
Collect statistics, outliers and drift detections from Parquet and Kafka.
```
from hops import hdfs
import pyarrow.parquet as pq
from hops import kafka
from hops import tls
from confluent_kafka import Producer, Consumer
import json
import pandas as pd
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.set_option('display.max_colwidth', None)
```
### Inference Statistics
Read inference statistics from parquet files
```
MONITORING_DIR = "hdfs:///Projects/" + hdfs.project_name() + "/Resources/CardFraudDetection/Monitoring/"
LOGS_STATS_DIR = MONITORING_DIR + "credit_card_activity_stats-parquet/"
credit_card_activity_stats = spark.read.parquet(LOGS_STATS_DIR + "*.parquet")
credit_card_activity_stats.createOrReplaceTempView("credit_card_activity_stats")
desc_stats_df = spark.sql("SELECT window, feature, min, max, mean, stddev FROM credit_card_activity_stats ORDER BY window")
distr_stats_df = spark.sql("SELECT feature, distr FROM credit_card_activity_stats ORDER BY window")
corr_stats_df = spark.sql("SELECT window, feature, corr FROM credit_card_activity_stats ORDER BY window")
cov_stats_df = spark.sql("SELECT feature, cov FROM credit_card_activity_stats ORDER BY window")
```
#### Descriptive statistics
```
desc_stats_df.show(6, truncate=False)
```
#### Distributions
```
distr_stats_df.show(6, truncate=False)
```
#### Correlations
```
corr_stats_df.show(6, truncate=False)
```
#### Covariance
```
cov_stats_df.show(6, truncate=False)
```
## Outliers and Data Drift Detection (kafka)
```
def get_consumer(topic):
config = kafka.get_kafka_default_config()
config['default.topic.config'] = {'auto.offset.reset': 'earliest'}
consumer = Consumer(config)
consumer.subscribe([topic])
return consumer
def poll(consumer, n=2):
df = pd.DataFrame([])
for i in range(0, n):
msg = consumer.poll(timeout=5.0)
if msg is not None:
value = msg.value()
try:
d = json.loads(value.decode('utf-8'))
df_msg = pd.DataFrame(d.items()).transpose()
df_msg.columns = df_msg.iloc[0]
                df = pd.concat([df, df_msg.drop(df_msg.index[[0]])])
except Exception as e:
print("A message was read but there was an error parsing it")
print(e)
return df
```
### Outliers detected
```
outliers_consumer = get_consumer("credit_card_activity_outliers")
outliers = poll(outliers_consumer, 20)
outliers.head(10)
```
### Data drift detected
```
drift_consumer = get_consumer("credit_card_activity_drift")
drift = poll(drift_consumer, 10)
drift.head(5)
```
<b>Section One – Image Captioning with Tensorflow</b>
```
# load essential libraries
import math
import os
import tensorflow as tf
%pylab inline
# load Tensorflow/Google Brain base code
# https://github.com/tensorflow/models/tree/master/research/im2txt
from im2txt import configuration
from im2txt import inference_wrapper
from im2txt.inference_utils import caption_generator
from im2txt.inference_utils import vocabulary
# tell our function where to find the trained model and vocabulary
checkpoint_path = './model'
vocab_file = './model/word_counts.txt'
# this is the function we'll call to produce our captions
# given input file name(s) -- separate file names by a ,
# if more than one
def gen_caption(input_files):
# only print serious log messages
tf.logging.set_verbosity(tf.logging.FATAL)
# load our pretrained model
g = tf.Graph()
with g.as_default():
model = inference_wrapper.InferenceWrapper()
restore_fn = model.build_graph_from_config(configuration.ModelConfig(),
checkpoint_path)
g.finalize()
# Create the vocabulary.
vocab = vocabulary.Vocabulary(vocab_file)
filenames = []
for file_pattern in input_files.split(","):
filenames.extend(tf.gfile.Glob(file_pattern))
tf.logging.info("Running caption generation on %d files matching %s",
len(filenames), input_files)
with tf.Session(graph=g) as sess:
# Load the model from checkpoint.
restore_fn(sess)
# Prepare the caption generator. Here we are implicitly using the default
# beam search parameters. See caption_generator.py for a description of the
# available beam search parameters.
generator = caption_generator.CaptionGenerator(model, vocab)
captionlist = []
for filename in filenames:
with tf.gfile.GFile(filename, "rb") as f:
image = f.read()
captions = generator.beam_search(sess, image)
print("Captions for image %s:" % os.path.basename(filename))
for i, caption in enumerate(captions):
# Ignore begin and end words.
sentence = [vocab.id_to_word(w) for w in caption.sentence[1:-1]]
sentence = " ".join(sentence)
print(" %d) %s (p=%f)" % (i, sentence, math.exp(caption.logprob)))
captionlist.append(sentence)
return captionlist
testfile = 'test_images/ballons.jpeg'
figure()
imshow(imread(testfile))
capts = gen_caption(testfile)
input_files = 'test_images/ballons.jpeg,test_images/bike.jpeg,test_images/dog.jpeg,test_images/fireworks.jpeg,test_images/football.jpeg,test_images/giraffes.jpeg,test_images/headphones.jpeg,test_images/laughing.jpeg,test_images/objects.jpeg,test_images/snowboard.jpeg,test_images/surfing.jpeg'
capts = gen_caption(input_files)
```
<b>Retraining the image captioner</b>
```
# First download pretrained Inception (v3) model
import webbrowser
webbrowser.open("http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz")
# Completely unzip tar.gz file to get inception_v3.ckpt,
# --recommend storing in im2txt/data directory
# Now gather and prepare the mscoco data
# Comment out cd magic command if already in data directory
%cd im2txt/data
# This command will take an hour or more to run typically.
# Note, you will need a lot of HD space (>100 GB)!
%run build_mscoco_data.py
# At this point you have files in im2txt/data/mscoco/raw-data that you can train
# on, or you can substitute your own data
%cd ..
# load needed modules
import tensorflow as tf
from im2txt import configuration
from im2txt import show_and_tell_model
# Define (but don't run yet) our captioning training function
def train():
model_config = configuration.ModelConfig()
model_config.input_file_pattern = input_file_pattern
model_config.inception_checkpoint_file = inception_checkpoint_file
training_config = configuration.TrainingConfig()
    # Create training directory (train_dir is read from the globals defined below;
    # the original self-assignment here would raise UnboundLocalError).
if not tf.gfile.IsDirectory(train_dir):
tf.logging.info("Creating training directory: %s", train_dir)
tf.gfile.MakeDirs(train_dir)
# Build the TensorFlow graph.
g = tf.Graph()
with g.as_default():
# Build the model.
model = show_and_tell_model.ShowAndTellModel(
model_config, mode="train", train_inception=train_inception)
model.build()
# Set up the learning rate.
learning_rate_decay_fn = None
if train_inception:
learning_rate = tf.constant(training_config.train_inception_learning_rate)
else:
learning_rate = tf.constant(training_config.initial_learning_rate)
if training_config.learning_rate_decay_factor > 0:
num_batches_per_epoch = (training_config.num_examples_per_epoch /
model_config.batch_size)
decay_steps = int(num_batches_per_epoch *
training_config.num_epochs_per_decay)
def _learning_rate_decay_fn(learning_rate, global_step):
return tf.train.exponential_decay(
learning_rate,
global_step,
decay_steps=decay_steps,
decay_rate=training_config.learning_rate_decay_factor,
staircase=True)
learning_rate_decay_fn = _learning_rate_decay_fn
# Set up the training ops.
train_op = tf.contrib.layers.optimize_loss(
loss=model.total_loss,
global_step=model.global_step,
learning_rate=learning_rate,
optimizer=training_config.optimizer,
clip_gradients=training_config.clip_gradients,
learning_rate_decay_fn=learning_rate_decay_fn)
# Set up the Saver for saving and restoring model checkpoints.
saver = tf.train.Saver(max_to_keep=training_config.max_checkpoints_to_keep)
# Run training.
tf.contrib.slim.learning.train(
train_op,
train_dir,
log_every_n_steps=log_every_n_steps,
graph=g,
global_step=model.global_step,
number_of_steps=number_of_steps,
init_fn=model.init_fn,
saver=saver)
# Initial training
input_file_pattern = 'im2txt/data/mscoco/train-?????-of-00256'
# change these if you put your stuff somewhere else
inception_checkpoint_file = 'im2txt/data/inception_v3.ckpt'
train_dir = 'im2txt/model'
# Don't train inception for initial run
train_inception = False
number_of_steps = 1000000
log_every_n_steps = 1
# Now run the training (warning: takes days-to-weeks!!!)
train()
# Fine tuning
input_file_pattern = 'im2txt/data/mscoco/train-?????-of-00256'
# change these if you put your stuff somewhere else
inception_checkpoint_file = 'im2txt/data/inception_v3.ckpt'
train_dir = 'im2txt/model'
# This will refine our results
train_inception = True
number_of_steps = 3000000
log_every_n_steps = 1
# Now run the training (warning: takes even longer than initial training!!!)
train()
# If you completed this, you can go back to the start of this notebook and
# point checkpoint_path and vocab_file to your generated files.
```
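The learning-rate schedule configured above can be sanity-checked numerically. This is a plain-Python sketch of the staircase formula that `tf.train.exponential_decay` applies, `lr * decay_rate ** floor(step / decay_steps)`:

```python
# Staircase exponential decay: the rate only drops at multiples of decay_steps.
def staircase_decay(initial_lr, global_step, decay_steps, decay_rate):
    return initial_lr * decay_rate ** (global_step // decay_steps)

lr0 = staircase_decay(2.0, 0, 1000, 0.5)    # 2.0
lr1 = staircase_decay(2.0, 999, 1000, 0.5)  # still 2.0 (staircase)
lr2 = staircase_decay(2.0, 1000, 1000, 0.5) # 1.0
```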
```
%reload_ext autoreload
%autoreload 2
#export
from nb_007a import *
```
# IMDB
## Fine-tuning the LM
The data was prepared as CSV files at the beginning of 007a; we will use it now.
### Loading the data
```
PATH = Path('../data/aclImdb/')
CLAS_PATH = PATH/'clas'
LM_PATH = PATH/'lm'
MODEL_PATH = LM_PATH/'models'
os.makedirs(CLAS_PATH, exist_ok=True)
os.makedirs(LM_PATH, exist_ok=True)
os.makedirs(MODEL_PATH, exist_ok=True)
tokenizer = Tokenizer(rules=default_rules, special_cases=[BOS, FLD, UNK, PAD])
bs,bptt = 50,70
data = data_from_textcsv(LM_PATH, tokenizer, data_func=lm_data, max_vocab=60000, bs=bs, bptt=bptt)
```
### Adapt the pre-trained weights to the new vocabulary
Download the pretrained model and the corresponding itos dictionary [here](http://files.fast.ai/models/wt103_v1/) and put them in the MODEL_PATH folder.
```
itos_wt = pickle.load(open(MODEL_PATH/'itos_wt103.pkl', 'rb'))
stoi_wt = {v:k for k,v in enumerate(itos_wt)}
#export
Weights = Dict[str,Tensor]
def convert_weights(wgts:Weights, stoi_wgts:Dict[str,int], itos_new:Collection[str]) -> Weights:
"Converts the model weights to go with a new vocabulary."
dec_bias, enc_wgts = wgts['1.decoder.bias'], wgts['0.encoder.weight']
bias_m, wgts_m = dec_bias.mean(0), enc_wgts.mean(0)
    new_w = enc_wgts.new_zeros((len(itos_new),enc_wgts.size(1)))
    new_b = dec_bias.new_zeros((len(itos_new),))
for i,w in enumerate(itos_new):
r = stoi_wgts[w] if w in stoi_wgts else -1
new_w[i] = enc_wgts[r] if r>=0 else wgts_m
new_b[i] = dec_bias[r] if r>=0 else bias_m
wgts['0.encoder.weight'] = new_w
wgts['0.encoder_dp.emb.weight'] = new_w.clone()
wgts['1.decoder.weight'] = new_w.clone()
wgts['1.decoder.bias'] = new_b
return wgts
wgts = torch.load(MODEL_PATH/'lstm_wt103.pth', map_location=lambda storage, loc: storage)
wgts['1.decoder.bias'][:10]
itos_wt[:10]
wgts = convert_weights(wgts, stoi_wt, data.train_ds.vocab.itos)
wgts['1.decoder.bias'][:10]
data.train_ds.vocab.itos[:10]
```
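The row-remapping idea in `convert_weights` can be sanity-checked with a toy sketch, a standalone re-implementation of ours using plain lists instead of tensors:

```python
# Rows for words present in the old vocab are copied over;
# unseen words fall back to the column-wise mean row.
def remap_rows(old_rows, stoi_old, itos_new):
    n = len(old_rows)
    mean_row = [sum(col) / n for col in zip(*old_rows)]
    new_rows = []
    for w in itos_new:
        r = stoi_old.get(w, -1)
        new_rows.append(list(old_rows[r]) if r >= 0 else mean_row)
    return new_rows

rows = remap_rows([[1.0, 2.0], [3.0, 4.0]], {'the': 0, 'a': 1}, ['a', 'xxunk'])
# 'a' keeps its old row; 'xxunk' gets the mean row
```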
## Define the model
```
#export
def lm_split(model:Model) -> List[Model]:
"Splits a RNN model in groups for differential learning rates."
groups = [nn.Sequential(rnn, dp) for rnn, dp in zip(model[0].rnns, model[0].hidden_dps)]
groups.append(nn.Sequential(model[0].encoder, model[0].encoder_dp, model[1]))
return groups
SplitFunc = Callable[[Model], List[Model]]
OptSplitFunc = Optional[SplitFunc]
OptStrTuple = Optional[Tuple[str,str]]
class RNNLearner(Learner):
"Basic class for a Learner in RNN"
def __init__(self, data:DataBunch, model:Model, bptt:int=70, split_func:OptSplitFunc=None, clip:float=None,
adjust:bool=False, alpha:float=2., beta:float=1., **kwargs):
super().__init__(data, model)
self.callbacks.append(RNNTrainer(self, bptt, alpha=alpha, beta=beta, adjust=adjust))
if clip: self.callback_fns.append(partial(GradientClipping, clip=clip))
if split_func: self.split(split_func)
self.metrics = [accuracy]
def save_encoder(self, name:str):
"Saves the encoder to the model directory"
torch.save(self.model[0].state_dict(), self.path/self.model_dir/f'{name}.pth')
    def load_encoder(self, name:str):
"Loads the encoder from the model directory"
self.model[0].load_state_dict(torch.load(self.path/self.model_dir/f'{name}.pth'))
def load_pretrained(self, wgts_fname:str, itos_fname:str):
"Loads a pretrained model and adapts it to the data vocabulary."
old_itos = pickle.load(open(self.path/self.model_dir/f'{itos_fname}.pkl', 'rb'))
old_stoi = {v:k for k,v in enumerate(old_itos)}
wgts = torch.load(self.path/self.model_dir/f'{wgts_fname}.pth', map_location=lambda storage, loc: storage)
wgts = convert_weights(wgts, old_stoi, self.data.train_ds.vocab.itos)
self.model.load_state_dict(wgts)
@classmethod
def language_model(cls, data:DataBunch, bptt:int=70, emb_sz:int=400, nh:int=1150, nl:int=3, pad_token:int=1,
drop_mult:float=1., tie_weights:bool=True, bias:bool=True, qrnn:bool=False,
pretrained_fnames:OptStrTuple=None, **kwargs) -> 'RNNLearner':
"Creates a `Learner` with a language model."
dps = np.array([0.25, 0.1, 0.2, 0.02, 0.15]) * drop_mult
vocab_size = len(data.train_ds.vocab.itos)
model = get_language_model(vocab_size, emb_sz, nh, nl, pad_token, input_p=dps[0], output_p=dps[1],
weight_p=dps[2], embed_p=dps[3], hidden_p=dps[4], tie_weights=tie_weights, bias=bias, qrnn=qrnn)
learn = cls(data, model, bptt, split_func=lm_split, **kwargs)
if pretrained_fnames is not None: learn.load_pretrained(*pretrained_fnames)
return learn
data = data_from_textcsv(LM_PATH, Tokenizer(), data_func=lm_data, bs=bs)
learn = RNNLearner.language_model(data, drop_mult=0.3, pretrained_fnames=['lstm_wt103', 'itos_wt103'])
learn.freeze()
lr_find(learn)
learn.recorder.plot()
learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7), wd=0.03)
learn.save('fit_head')
learn.load('fit_head')
learn.unfreeze()
learn.fit_one_cycle(10, 1e-3, moms=(0.8,0.7), wd=0.03, pct_start=0.25)
learn.save('fine_tuned60kb')
learn.save_encoder('fine_tuned_enc60kb')
```
## Classifier
```
#export
from torch.utils.data import Sampler, BatchSampler
NPArrayList = Collection[np.ndarray]
KeyFunc = Callable[[int], int]
class SortSampler(Sampler):
"Go through the text data by order of length"
def __init__(self, data_source:NPArrayList, key:KeyFunc): self.data_source,self.key = data_source,key
def __len__(self) -> int: return len(self.data_source)
def __iter__(self):
return iter(sorted(range(len(self.data_source)), key=self.key, reverse=True))
class SortishSampler(Sampler):
"Go through the text data by order of length with a bit of randomness"
def __init__(self, data_source:NPArrayList, key:KeyFunc, bs:int):
self.data_source,self.key,self.bs = data_source,key,bs
def __len__(self) -> int: return len(self.data_source)
def __iter__(self):
idxs = np.random.permutation(len(self.data_source))
sz = self.bs*50
ck_idx = [idxs[i:i+sz] for i in range(0, len(idxs), sz)]
sort_idx = np.concatenate([sorted(s, key=self.key, reverse=True) for s in ck_idx])
sz = self.bs
ck_idx = [sort_idx[i:i+sz] for i in range(0, len(sort_idx), sz)]
max_ck = np.argmax([self.key(ck[0]) for ck in ck_idx]) # find the chunk with the largest key,
ck_idx[0],ck_idx[max_ck] = ck_idx[max_ck],ck_idx[0] # then make sure it goes first.
sort_idx = np.concatenate(np.random.permutation(ck_idx[1:]))
sort_idx = np.concatenate((ck_idx[0], sort_idx))
return iter(sort_idx)
#export
BatchSamples = Collection[Tuple[Collection[int], int]]
def pad_collate(samples:BatchSamples, pad_idx:int=1, pad_first:bool=True) -> Tuple[LongTensor, LongTensor]:
    "Function that collates samples and adds padding (in front when `pad_first`, else at the end)"
    max_len = max([len(s[0]) for s in samples])
    res = torch.zeros(max_len, len(samples)).long() + pad_idx
    for i,s in enumerate(samples):
        if pad_first: res[-len(s[0]):,i] = LongTensor(s[0])
        else: res[:len(s[0]),i] = LongTensor(s[0])
    return res, LongTensor([s[1] for s in samples]).squeeze()
#export
def classifier_data(datasets:Collection[TextDataset], path:PathOrStr, **kwargs) -> DataBunch:
"Function that transform the `datasets` in a `DataBunch` for classification"
    bs = kwargs.pop('bs', 64)
    pad_idx = kwargs.pop('pad_idx', 1)
train_sampler = SortishSampler(datasets[0].ids, key=lambda x: len(datasets[0].ids[x]), bs=bs//2)
train_dl = DeviceDataLoader.create(datasets[0], bs//2, sampler=train_sampler, collate_fn=pad_collate)
dataloaders = [train_dl]
for ds in datasets[1:]:
sampler = SortSampler(ds.ids, key=lambda x: len(ds.ids[x]))
dataloaders.append(DeviceDataLoader.create(ds, bs, sampler=sampler, collate_fn=pad_collate))
return DataBunch(*dataloaders, path=path)
```
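The padding behavior of `pad_collate` can be illustrated with a tiny standalone sketch of ours using plain Python lists (batch-major here for readability; the real function builds a `(seq_len, batch)` LongTensor and pads in front by default):

```python
# Toy re-implementation of the padding logic: every sample is left-padded
# with pad_idx up to the batch's max sequence length.
def pad_collate_sketch(samples, pad_idx=1):
    max_len = max(len(seq) for seq, _ in samples)
    padded = [[pad_idx] * (max_len - len(seq)) + list(seq) for seq, _ in samples]
    labels = [label for _, label in samples]
    return padded, labels

x, y = pad_collate_sketch([([5, 6, 7], 0), ([8, 9], 1)])
# the shorter sequence is left-padded with pad_idx
```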
We need to use the same vocab as for the LM.
```
vocab = Vocab(LM_PATH/'tmp')
data = data_from_textcsv(CLAS_PATH, Tokenizer(), vocab=vocab, data_func=classifier_data, bs=50)
data.train_ds.vocab.itos[40:60]
vocab.itos[40:60]
x,y = next(iter(data.train_dl))
vocab.textify(x[:,15]), y[2]
#export
class MultiBatchRNNCore(RNNCore):
"Creates a RNNCore module that can process a full sentence."
def __init__(self, bptt:int, max_seq:int, *args, **kwargs):
self.max_seq,self.bptt = max_seq,bptt
super().__init__(*args, **kwargs)
def concat(self, arrs:Collection[Tensor]) -> Tensor:
"Concatenates the arrays along the batch dimension."
return [torch.cat([l[si] for l in arrs]) for si in range(len(arrs[0]))]
def forward(self, input:LongTensor) -> Tuple[Tensor,Tensor]:
sl,bs = input.size()
self.reset()
raw_outputs, outputs = [],[]
for i in range(0, sl, self.bptt):
r, o = super().forward(input[i: min(i+self.bptt, sl)])
if i>(sl-self.max_seq):
raw_outputs.append(r)
outputs.append(o)
return self.concat(raw_outputs), self.concat(outputs)
#export
class PoolingLinearClassifier(nn.Module):
"Creates a linear classifier with pooling."
def __init__(self, layers:Collection[int], drops:Collection[float]):
super().__init__()
mod_layers = []
activs = [nn.ReLU(inplace=True)] * (len(layers) - 2) + [None]
for n_in,n_out,p,actn in zip(layers[:-1],layers[1:], drops, activs):
mod_layers += bn_drop_lin(n_in, n_out, p=p, actn=actn)
self.layers = nn.Sequential(*mod_layers)
def pool(self, x:Tensor, bs:int, is_max:bool):
"Pools the tensor along the seq_len dimension."
f = F.adaptive_max_pool1d if is_max else F.adaptive_avg_pool1d
return f(x.permute(1,2,0), (1,)).view(bs,-1)
def forward(self, input:Tuple[Tensor,Tensor]) -> Tuple[Tensor,Tensor,Tensor]:
raw_outputs, outputs = input
output = outputs[-1]
sl,bs,_ = output.size()
avgpool = self.pool(output, bs, False)
mxpool = self.pool(output, bs, True)
x = torch.cat([output[-1], mxpool, avgpool], 1)
x = self.layers(x)
return x, raw_outputs, outputs
#export
def rnn_classifier_split(model:Model) -> List[Model]:
"Splits a RNN model in groups."
groups = [nn.Sequential(model[0].encoder, model[0].encoder_dp)]
groups += [nn.Sequential(rnn, dp) for rnn, dp in zip(model[0].rnns, model[0].hidden_dps)]
groups.append(model[1])
return groups
#export
def get_rnn_classifier(bptt:int, max_seq:int, n_class:int, vocab_sz:int, emb_sz:int, n_hid:int, n_layers:int,
pad_token:int, layers:Collection[int], drops:Collection[float], bidir:bool=False, qrnn:bool=False,
hidden_p:float=0.2, input_p:float=0.6, embed_p:float=0.1, weight_p:float=0.5) -> Model:
"Creates a RNN classifier model"
rnn_enc = MultiBatchRNNCore(bptt, max_seq, vocab_sz, emb_sz, n_hid, n_layers, pad_token=pad_token, bidir=bidir,
qrnn=qrnn, hidden_p=hidden_p, input_p=input_p, embed_p=embed_p, weight_p=weight_p)
return SequentialRNN(rnn_enc, PoolingLinearClassifier(layers, drops))
#export
SplitFunc = Callable[[Model], List[Model]]
OptSplitFunc = Optional[SplitFunc]
OptStrTuple = Optional[Tuple[str,str]]
class RNNLearner(Learner):
"Basic class for a Learner in RNN"
def __init__(self, data:DataBunch, model:Model, bptt:int=70, split_func:OptSplitFunc=None, clip:float=None,
adjust:bool=False, alpha:float=2., beta:float=1., **kwargs):
super().__init__(data, model)
self.callbacks.append(RNNTrainer(self, bptt, alpha=alpha, beta=beta, adjust=adjust))
if clip: self.callback_fns.append(partial(GradientClipping, clip=clip))
if split_func: self.split(split_func)
self.metrics = [accuracy]
def save_encoder(self, name:str):
"Saves the encoder to the model directory"
torch.save(self.model[0].state_dict(), self.path/self.model_dir/f'{name}.pth')
def load_encoder(self, name:str):
"Loads the encoder from the model directory"
self.model[0].load_state_dict(torch.load(self.path/self.model_dir/f'{name}.pth'))
def load_pretrained(self, wgts_fname:str, itos_fname:str):
"Loads a pretrained model and adapts it to the data vocabulary."
old_itos = pickle.load(open(self.path/self.model_dir/f'{itos_fname}.pkl', 'rb'))
old_stoi = {v:k for k,v in enumerate(old_itos)}
wgts = torch.load(self.path/self.model_dir/f'{wgts_fname}.pth', map_location=lambda storage, loc: storage)
wgts = convert_weights(wgts, old_stoi, self.data.train_ds.vocab.itos)
self.model.load_state_dict(wgts)
@classmethod
def language_model(cls, data:DataBunch, bptt:int=70, emb_sz:int=400, nh:int=1150, nl:int=3, pad_token:int=1,
drop_mult:float=1., tie_weights:bool=True, bias:bool=True, qrnn:bool=False,
pretrained_fnames:OptStrTuple=None, **kwargs) -> 'RNNLearner':
"Creates a `Learner` with a language model."
dps = np.array([0.25, 0.1, 0.2, 0.02, 0.15]) * drop_mult
vocab_size = len(data.train_ds.vocab.itos)
model = get_language_model(vocab_size, emb_sz, nh, nl, pad_token, input_p=dps[0], output_p=dps[1],
weight_p=dps[2], embed_p=dps[3], hidden_p=dps[4], tie_weights=tie_weights, bias=bias, qrnn=qrnn)
learn = cls(data, model, bptt, split_func=lm_split, **kwargs)
if pretrained_fnames is not None: learn.load_pretrained(*pretrained_fnames)
return learn
@classmethod
def classifier(cls, data:DataBunch, bptt:int=70, max_len:int=70*20, emb_sz:int=400, nh:int=1150, nl:int=3,
layers:Collection[int]=None, drops:Collection[float]=None, pad_token:int=1,
drop_mult:float=1., qrnn:bool=False, **kwargs) -> 'RNNLearner':
"Creates a RNN classifier."
dps = np.array([0.4,0.5,0.05,0.3,0.4]) * drop_mult
if layers is None: layers = [50]
if drops is None: drops = [0.1]
vocab_size = len(data.train_ds.vocab.itos)
n_class = len(data.train_ds.classes)
layers = [emb_sz*3] + layers + [n_class]
drops = [dps[4]] + drops
model = get_rnn_classifier(bptt, max_len, n_class, vocab_size, emb_sz, nh, nl, pad_token,
layers, drops, input_p=dps[0], weight_p=dps[1], embed_p=dps[2], hidden_p=dps[3], qrnn=qrnn)
learn = cls(data, model, bptt, split_func=rnn_classifier_split, **kwargs)
return learn
data = data_from_textcsv(CLAS_PATH, Tokenizer(), vocab=Vocab(LM_PATH/'tmp'), data_func=classifier_data, bs=50)
learn = RNNLearner.classifier(data, drop_mult=0.5)
learn.load_encoder('fine_tuned_enc60ka')
learn.freeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(1, 2e-2, moms=(0.8,0.7))
learn.save('first')
learn.load('first')
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2/2.6,1e-2), moms=(0.8,0.7), pct_start=0.1)
learn.save('second')
learn.load('second')
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3/(2.6**2),5e-3), moms=(0.8,0.7), pct_start=0.1)
learn.save('third')
learn.load('third')
learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3/(2.6**4),1e-3), moms=(0.8,0.7), pct_start=0.1)
```
```
import itertools
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
```
# Load dataset
```
# Get data
import pandas as pd
from sklearn.datasets import load_iris
data = load_iris(as_frame=True)
dataset = data.frame
dataset.head()
# Print labels for target values
[print(f'{target}: {label}') for target, label in zip(data.target.unique(), data.target_names)]
# Process feature names (use replace: str.strip removes a set of characters, not the ' (cm)' suffix)
dataset.columns = [colname.replace(' (cm)', '').replace(' ', '_') for colname in dataset.columns.tolist()]
feature_names = dataset.columns.tolist()[:4]
feature_names
```
# Feature engineering
```
dataset['sepal_length_to_sepal_width'] = dataset['sepal_length'] / dataset['sepal_width']
dataset['petal_length_to_petal_width'] = dataset['petal_length'] / dataset['petal_width']
dataset = dataset[[
'sepal_length', 'sepal_width', 'petal_length', 'petal_width',
'sepal_length_to_sepal_width', 'petal_length_to_petal_width',
'target'
]]
dataset.head()
```
# Split train/test dataset
```
test_size=0.2
train_dataset, test_dataset = train_test_split(dataset, test_size=test_size, random_state=42)
train_dataset.shape, test_dataset.shape
```
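Note: `train_test_split` as used above samples rows at random; for a classification target it is often safer to stratify the split so class proportions are preserved on both sides. A minimal sketch of the `stratify` argument on a toy frame (not part of the original notebook):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy frame with an imbalanced binary target (80% class 0, 20% class 1)
df = pd.DataFrame({
    "feature": range(100),
    "target": [0] * 80 + [1] * 20,
})

# Stratify on the target column so both splits keep the 80/20 ratio
train_df, test_df = train_test_split(
    df, test_size=0.2, random_state=42, stratify=df["target"])

print(train_df["target"].mean(), test_df["target"].mean())  # both exactly 0.2
```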
# Train
```
# Get X and Y
y_train = train_dataset.loc[:, 'target'].values.astype('int32')
X_train = train_dataset.drop('target', axis=1).values.astype('float32')
# Create an instance of a Logistic Regression classifier and fit the data
logreg = LogisticRegression(C=0.001, solver='lbfgs', multi_class='multinomial', max_iter=100)
logreg.fit(X_train, y_train)
```
# Evaluate
```
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
"""
given a sklearn confusion matrix (cm), make a nice plot
Arguments
---------
cm: confusion matrix from sklearn.metrics.confusion_matrix
target_names: given classification classes such as [0, 1, 2]
the class names, for example: ['high', 'medium', 'low']
title: the text to display at the top of the matrix
cmap: the gradient of the values displayed from matplotlib.pyplot.cm
see http://matplotlib.org/examples/color/colormaps_reference.html
plt.get_cmap('jet') or plt.cm.Blues
normalize: If False, plot the raw numbers
If True, plot the proportions
Usage
-----
plot_confusion_matrix(cm = cm, # confusion matrix created by
# sklearn.metrics.confusion_matrix
normalize = True, # show proportions
target_names = y_labels_vals, # list of names of the classes
title = best_estimator_name) # title of graph
    Citation
---------
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
"""
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(8, 6))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
# Get X and Y
y_test = test_dataset.loc[:, 'target'].values.astype('int32')
X_test = test_dataset.drop('target', axis=1).values.astype('float32')
prediction = logreg.predict(X_test)
cm = confusion_matrix(y_test, prediction)
f1 = f1_score(y_true = y_test, y_pred = prediction, average='macro')
# f1 score value
f1
plot_confusion_matrix(cm, data.target_names, normalize=False)
```
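Beyond the single macro F1 number, `sklearn.metrics.classification_report` gives a per-class breakdown of precision and recall. A self-contained sketch on the same iris data (refitting a small model rather than reusing the notebook's variables):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Refit a small model so the sketch stands alone
data = load_iris()
X_tr, X_te, y_tr, y_te = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42)

clf = LogisticRegression(max_iter=200).fit(X_tr, y_tr)

# One row per class plus macro/weighted averages
report = classification_report(y_te, clf.predict(X_te),
                               target_names=data.target_names)
print(report)
```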
Last updated: June 29th 2016
# Climate data exploration: a journey through Pandas
Welcome to a demo of Python's data analysis package called `Pandas`. Our goal is to learn about Data Analysis and transformation using Pandas while exploring datasets used to analyze climate change.
## The story
The global goal of this demo is to provide the tools to be able to try and reproduce some of the analysis done in the IPCC global climate reports published in the last decade (see for example https://www.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full.pdf).
We are first going to load a few public datasets containing information about global temperature, global and local sea level information, and global concentration of greenhouse gases like CO2, to see if there are correlations and how the trends are likely to evolve, assuming no fundamental change in the system. For all these datasets, we will download them, visualize them, clean them, search through them, merge them, resample them, transform them and summarize them.
In the process, we will learn about:
Part 1:
1. Loading data
2. Pandas datastructures
3. Cleaning and formatting data
4. Basic visualization
Part 2:
5. Accessing data
6. Working with dates and times
7. Transforming datasets
8. Statistical analysis
9. Data aggregation and summarization
10. Correlations and regressions
11. Predictions from auto regression models
## Some initial setup
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option("display.max_rows", 16)
LARGE_FIGSIZE = (12, 8)
# Change this cell to the demo location on YOUR machine
# %cd ~/Projects/pandas_tutorial/climate_timeseries/
%cd ~/Dropbox/Projects/pandas_tutorial/climate_timeseries
%ls
```
## 1. Loading data
More details, see http://pandas.pydata.org/pandas-docs/stable/io.html
To find all reading functions in pandas, ask ipython's tab completion:
```
# pd.read_table
pd.read_table?
```
### From a local text file
Let's first load some temperature data which covers all latitudes. Since `read_table` is supposed to do the job for a text file, let's just try it:
```
filename = "data/temperatures/annual.land_ocean.90S.90N.df_1901-2000mean.dat"
full_globe_temp = pd.read_table(filename)
full_globe_temp
```
There is only 1 column! Let's try again stating that values are separated by any number of spaces:
```
full_globe_temp = pd.read_table(filename, sep="\s+")
full_globe_temp
```
There are columns but the column names are 1880 and -0.1591!
```
full_globe_temp = pd.read_table(filename, sep="\s+", names=["year", "mean temp"])
full_globe_temp
```
Since we only have 2 columns, and one of them (the year of the record) would be a nicer way to access the data, let's try using the `index_col` option:
```
full_globe_temp = pd.read_table(filename, sep="\s+", names=["year", "mean temp"],
index_col=0)
full_globe_temp
```
Last step: the index is made of dates. Let's make that explicit:
```
full_globe_temp = pd.read_table(filename, sep="\s+", names=["year", "mean temp"],
index_col=0, parse_dates=True)
full_globe_temp
```
### From a chunked file
Since every dataset can contain mistakes, let's load a different file with temperature data. NASA's GISS dataset is written in chunks: look at it in `data/temperatures/GLB.Ts+dSST.txt`
```
giss_temp = pd.read_table("data/temperatures/GLB.Ts+dSST.txt", sep="\s+", skiprows=7,
skip_footer=11, engine="python")
giss_temp
```
**QUIZ:** What happens if you remove the `skiprows`? `skip_footer`? `engine`?
**EXERCISE:** Load some readings of CO2 concentrations in the atmosphere from the `data/greenhouse_gaz/co2_mm_global.txt` data file.
```
# Your code here
co2_concentration = pd.read_table("data/greenhouse_gaz/co2_mm_global.txt", sep="\s+",
parse_dates = [[0, 1]])
# parse columns 0 and 1 together into a single column
co2_concentration
```
### From a remote text file
So far, we have only loaded temperature datasets. Climate change also affects the sea levels on the globe. Let's load some datasets with the sea levels. The University of Colorado posts updated timeseries for mean sea level globally, per hemisphere, or even per ocean, sea, ... Let's download the global one, and the ones for the northern and southern hemispheres.
That will also illustrate that loading a text file that is online requires no more work than replacing the filepath by a URL in `read_table`:
```
# Local backup: data/sea_levels/sl_nh.txt
northern_sea_level = pd.read_table("http://sealevel.colorado.edu/files/current/sl_nh.txt",
sep="\s+")
northern_sea_level
# Local backup: data/sea_levels/sl_sh.txt
southern_sea_level = pd.read_table("http://sealevel.colorado.edu/files/current/sl_sh.txt",
sep="\s+")
southern_sea_level
# The 2015 version of the global dataset:
# Local backup: data/sea_levels/sl_ns_global.txt
url = "http://sealevel.colorado.edu/files/2015_rel2/sl_ns_global.txt"
global_sea_level = pd.read_table(url, sep="\s+")
global_sea_level
```
There is clearly a lot of cleanup to be done on these datasets. See below...
### From a local or remote HTML file
To be able to grab more local data about mean sea levels, we can download and extract data about mean sea level stations around the world from the PSMSL (http://www.psmsl.org/). Again to download and parse all tables in a webpage, just give `read_html` the URL to parse:
```
# Needs `lxml`, `beautifulSoup4` and `html5lib` python packages
# Local backup in data/sea_levels/Obtaining Tide Gauge Data.html
table_list = pd.read_html("http://www.psmsl.org/data/obtaining/")
table_list
# there is 1 table on that page which contains metadata about the stations where
# sea levels are recorded
local_sea_level_stations = table_list[0]
local_sea_level_stations
```
That table can be used to search for a station in a region of the world we choose, extract an ID for it and download the corresponding time series with the URL http://www.psmsl.org/data/obtaining/met.monthly.data/< ID >.metdata
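For example, to locate a station by name and build its download URL (the column names below are assumptions about the PSMSL table; a toy frame stands in for the real `local_sea_level_stations`):

```python
import pandas as pd

# Toy stand-in for the PSMSL station table; the real column names may differ
stations = pd.DataFrame({
    "ID": [1, 10, 43],
    "Station Name": ["BREST", "NEW YORK", "SAN FRANCISCO"],
})

# Case-insensitive substring search over the station names
match = stations[stations["Station Name"].str.contains("york", case=False)]
station_id = int(match["ID"].iloc[0])

url = "http://www.psmsl.org/data/obtaining/met.monthly.data/{}.metdata".format(station_id)
print(url)
```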
## 2. Pandas DataStructures
For more details, see http://pandas.pydata.org/pandas-docs/stable/dsintro.html
Now that we have used `read_**` functions to load datasets, we need to understand better what kind of objects we got from them to learn to work with them.
### DataFrame, the pandas 2D structure
```
# Type of the object?
type(giss_temp)
# Internal nature of the object
print(giss_temp.shape)
print(giss_temp.dtypes)
```
Descriptors for the vertical axis (axis=0)
```
giss_temp.index
```
Descriptors for the horizontal axis (axis=1)
```
giss_temp.columns
```
A lot of information at once including memory usage:
```
giss_temp.info()
```
### Series, the pandas 1D structure
A series can be constructed with the `pd.Series` constructor (passing a list or array of values) or from a `DataFrame`, by extracting one of its columns.
```
# Do we already have a series for the full_globe_temp?
type(full_globe_temp)
full_globe_temp
# Since there is only one column of values, we can make this a Series without
# losing information:
full_globe_temp = full_globe_temp["mean temp"]
```
Core attributes/information:
```
print(type(full_globe_temp))
print(full_globe_temp.dtype)
print(full_globe_temp.shape)
print(full_globe_temp.nbytes)
```
Probably the most important attribute of a `Series` or `DataFrame` is its `index`, since we will use that to, well, index into the structures to access the information:
```
full_globe_temp.index
```
### NumPy arrays as backend of Pandas
It is always possible to fall back to a good old NumPy array to pass on to scientific libraries that need them: SciPy, scikit-learn, ...
```
full_globe_temp.values
type(full_globe_temp.values)
```
### Creating new DataFrames manually
`DataFrame`s can also be created manually, by grouping several Series together. Let's make a new frame from the 3 sea level datasets we downloaded above. They will be displayed along the same index. Wait, does it make sense to do that?
```
# Are they aligned?
southern_sea_level.year == northern_sea_level.year
# So, are they aligned?
np.all(southern_sea_level.year == northern_sea_level.year)
```
So the northern hemisphere and southern hemisphere datasets are aligned. What about the global one?
```
len(global_sea_level.year) == len(northern_sea_level.year)
```
For now, let's just build a DataFrame with the 2 hemisphere datasets then. We will come back to add the global one later...
```
mean_sea_level = pd.DataFrame({"northern_hem": northern_sea_level["msl_ib(mm)"],
"southern_hem": southern_sea_level["msl_ib(mm)"],
"date": northern_sea_level.year})
mean_sea_level
```
Note: there are other ways to create DataFrames manually, for example from a 2D numpy array.
There is still the date in a regular column and a numerical index that is not that meaningful. We can specify the `index` of a `DataFrame` at creation. Let's try:
```
mean_sea_level = pd.DataFrame({"northern_hem": northern_sea_level["msl_ib(mm)"],
"southern_hem": southern_sea_level["msl_ib(mm)"]},
index = northern_sea_level.year)
mean_sea_level
```
Now the fact that this fails shows that Pandas does auto-alignment of values: for each value of the index, it searches each Series for a value carrying the same index label. Since these series have a plain numerical index, no matching labels are found.
Since we know that the order of the values matches the index we chose, we can replace the Series by just their values when creating the DataFrame:
```
mean_sea_level = pd.DataFrame({"northern_hem": northern_sea_level["msl_ib(mm)"].values,
"southern_hem": southern_sea_level["msl_ib(mm)"].values},
index = northern_sea_level.year)
mean_sea_level
```
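The auto-alignment behaviour that made the previous attempt fail can be seen on a toy example: values are matched by index label, not by position, and labels present on only one side produce `NaN`:

```python
import pandas as pd

s1 = pd.Series([1.0, 2.0, 3.0], index=[0, 1, 2])
s2 = pd.Series([10.0, 20.0, 30.0], index=[1, 2, 3])

# Addition aligns on the index: only labels 1 and 2 are shared,
# so labels 0 and 3 come out as NaN
total = s1 + s2
print(total)
```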
## 3. Cleaning and formatting data
The datasets that we obtain straight from the reading functions are pretty raw. A lot of pre-processing can be done during data read but we haven't used all the power of the reading functions. Let's learn to do a lot of cleaning and formatting of the data.
The GISS temperature dataset has a lot of issues too: useless numerical index, redundant columns, useless rows, placeholder (`****`) for missing values, and wrong type for the columns. Let's fix all this:
### Renaming columns
```
# The columns of the local_sea_level_stations aren't clean: they contain spaces and dots.
local_sea_level_stations.columns
# Let's clean them up a bit:
local_sea_level_stations.columns = [name.strip().replace(".", "")
for name in local_sea_level_stations.columns]
local_sea_level_stations.columns
```
We can also rename an index by setting its name. For example, the index of the `mean_sea_level` DataFrame could be called `date` since it contains more than just the year:
```
mean_sea_level.index.name = "date"
mean_sea_level
```
### Setting missing values
In the full globe dataset, -999.00 was used to indicate that there was no value for that year. Let's search for all these values and replace them with the missing value that Pandas understand: `np.nan`
```
full_globe_temp == -999.000
full_globe_temp[full_globe_temp == -999.000] = np.nan
full_globe_temp.tail()
```
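The same replacement can also be done without boolean indexing, using the `Series.replace` method. A one-liner sketch on a toy series with the same -999.0 sentinel:

```python
import numpy as np
import pandas as pd

temps = pd.Series([-0.15, -999.0, 0.12], index=[1880, 1881, 1882])

# replace() returns a new Series; the sentinel becomes a proper missing value
temps = temps.replace(-999.0, np.nan)
print(temps.isna().sum())  # 1
```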
### Choosing what is the index
```
# We didn't set a column number of the index of giss_temp, we can do that afterwards:
giss_temp = giss_temp.set_index("Year")
giss_temp.head()
```
### Dropping rows and columns
```
# 1 column is redundant with the index:
giss_temp.columns
# Let's drop it:
giss_temp = giss_temp.drop("Year.1", axis=1)
giss_temp
# We can also just select the columns we want to keep:
giss_temp = giss_temp[[u'Jan', u'Feb', u'Mar', u'Apr', u'May', u'Jun', u'Jul',
u'Aug', u'Sep', u'Oct', u'Nov', u'Dec']]
giss_temp
# Let's remove all these extra column names (Year Jan ...). They all correspond to the index "Year"
giss_temp = giss_temp.drop("Year")
giss_temp
```
Let's also set `****` to a real missing value (`np.nan`). We can often do it using a boolean mask, but that may trigger pandas warning. Another way to assign based on a boolean condition is to use the `where` method:
```
#giss_temp[giss_temp == "****"] = np.nan
giss_temp = giss_temp.where(giss_temp != "****", np.nan)
giss_temp.tail()
```
### Adding columns
While building the `mean_sea_level` DataFrame earlier, we didn't include the values from `global_sea_level` since the years were not aligned. Adding a column to a DataFrame is as easy as adding an entry to a dictionary. So let's try:
```
mean_sea_level["mean_global"] = global_sea_level["msl_ib_ns(mm)"]
mean_sea_level
```
The column is full of NaNs again because the auto-alignment feature of Pandas is searching for the index values like `1992.9323` in the index of the `global_sea_level["msl_ib_ns(mm)"]` series and not finding them. Let's set its index to these years so that auto-alignment can work for us and figure out which values we have and which we don't:
```
global_sea_level = global_sea_level.set_index("year")
global_sea_level["msl_ib_ns(mm)"]
mean_sea_level["mean_global"] = global_sea_level["msl_ib_ns(mm)"]
mean_sea_level
```
**EXERCISE:** Create a new series containing the average of the 2 hemispheres minus the global value to see if that is close to 0. Work inside the mean_sea_level dataframe first. Then try with the original Series to see what happens with data alignment while doing computations.
```
# Your code here
```
### Changing dtype of series
Now that the sea levels are looking pretty good, let's go back to the GISS temperature dataset. Because of the labels (strings) found in the middle of the timeseries, every column was assumed to contain only strings (the values weren't converted to floating point):
```
giss_temp.dtypes
```
That can be changed after the fact (and after the cleanup) with the `astype` method of a `Series`:
```
giss_temp["Jan"].astype("float32")
for col in giss_temp.columns:
giss_temp.loc[:, col] = giss_temp[col].astype(np.float32)
```
An index has a `dtype` just like any Series and that can be changed after the fact too.
```
giss_temp.index.dtype
```
For now, let's change it to an integer so that values can at least be compared properly. We will learn below to change it to a datetime object.
```
giss_temp.index = giss_temp.index.astype(np.int32)
```
### Removing missing values
Removing missing values - once they have been converted to `np.nan` - is very easy. Entries that contain missing values can be removed (dropped), or filled with many strategies.
```
full_globe_temp
full_globe_temp.dropna()
# This will remove any year that has a missing value. Use how='all' to keep partial years
giss_temp.dropna(how="any").tail()
giss_temp.fillna(value=0).tail()
# This fills them with the value from the previous year. See also giss_temp.interpolate
giss_temp.fillna(method="ffill").tail()
```
Let's also mention the `.interpolate` method on a `Series`:
```
giss_temp.Aug.interpolate().tail()
```
For now, we will leave the missing values in all our datasets, because it wouldn't be meaningful to fill them.
**EXERCISE:** Go back to the reading functions, and learn more about other options that could have allowed us to fold some of these pre-processing steps into the data loading.
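As a hint for the exercise: the -999.00 sentinel handled in section 3 could have been converted at load time with the `na_values` option of the reading functions. A sketch on an in-memory file with the same two-column layout (using the string form of the sentinel so it matches the text in the file exactly):

```python
import io
import pandas as pd

raw = """1880  -0.1591
1881  -999.000
1882   0.1201
"""

# na_values converts the sentinel to NaN during the read itself
df = pd.read_table(io.StringIO(raw), sep=r"\s+", names=["year", "mean temp"],
                   index_col=0, na_values=["-999.000"])
print(df["mean temp"].isna().sum())  # 1
```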
## 4. Basic visualization
Now that they have been formatted, visualizing your datasets is the next logical step and is trivial with Pandas. The first thing to try is to invoke the `.plot` method to generate a basic visualization (it uses matplotlib under the covers).
### Line plots
```
full_globe_temp.plot()
giss_temp.plot(figsize=LARGE_FIGSIZE)
mean_sea_level.plot(subplots=True, figsize=(16, 12));
```
### Showing distributions information
```
# Distributions of mean sea level globally and per hemisphere?
mean_sea_level.plot(kind="kde", figsize=(12, 8))
```
**QUIZ:** How to list the possible kinds of plots that the plot method can allow?
```
# Distributions of temperature in each month since 1880
giss_temp.boxplot();
```
### Correlations
There are more plot options inside `pandas.tools.plotting` (moved to `pandas.plotting` in later pandas versions):
```
# Are there correlations between the northern and southern sea level timeseries we loaded?
from pandas.tools.plotting import scatter_matrix
scatter_matrix(mean_sea_level, figsize=LARGE_FIGSIZE);
```
We will confirm the correlations we think we see further down...
## 5. Storing our work
For each `read_**` function to load data, there is a `to_**` method attached to Series and DataFrames.
**EXERCISE**: explore how the `to_csv` method works using `ipython`'s ? and store the `giss_temp` dataframe. Do the same to store the `full_globe_temp` series to another file.
Another file format that is commonly used is Excel, where multiple datasets can be stored in a single file.
```
writer = pd.ExcelWriter("test.xls")
giss_temp.to_excel(writer, sheet_name="GISS temp data")
full_globe_temp.to_excel(writer, sheet_name="NASA temp data")
writer.close()
with pd.ExcelWriter("test.xls") as writer:
giss_temp.to_excel(writer, sheet_name="GISS temp data")
pd.DataFrame({"Full Globe Temp": full_globe_temp}).to_excel(writer, sheet_name="FullGlobe temp data")
%ls
```
Another, more powerful file format to store binary data, which allows us to store both `Series` and `DataFrame`s without having to cast anything, is HDF5.
```
with pd.HDFStore("all_data.h5") as writer:
giss_temp.to_hdf(writer, "/temperatures/giss")
full_globe_temp.to_hdf(writer, "/temperatures/full_globe")
mean_sea_level.to_hdf(writer, "/sea_level/mean_sea_level")
local_sea_level_stations.to_hdf(writer, "/sea_level/stations")
```
**EXERCISE**: Add the greenhouse gas dataset in this data store. Store it in a separate folder.
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plotly.com/python/getting-started/#initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Vertical and Horizontal Lines Positioned Relative to the Axes
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[2, 3.5, 6],
y=[1, 1.5, 1],
text=['Vertical Line', 'Horizontal Dashed Line', 'Diagonal dotted Line'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 7]
},
'yaxis': {
'range': [0, 2.5]
},
'shapes': [
# Line Vertical
{
'type': 'line',
'x0': 1,
'y0': 0,
'x1': 1,
'y1': 2,
'line': {
'color': 'rgb(55, 128, 191)',
'width': 3,
},
},
# Line Horizontal
{
'type': 'line',
'x0': 2,
'y0': 2,
'x1': 5,
'y1': 2,
'line': {
'color': 'rgb(50, 171, 96)',
'width': 4,
'dash': 'dashdot',
},
},
# Line Diagonal
{
'type': 'line',
'x0': 4,
'y0': 0,
'x1': 6,
'y1': 2,
'line': {
'color': 'rgb(128, 0, 128)',
'width': 4,
'dash': 'dot',
},
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-lines')
```
#### Lines Positioned Relative to the Plot & to the Axes
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[2, 6],
y=[1, 1],
text=['Line positioned relative to the plot',
'Line positioned relative to the axes'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 8]
},
'yaxis': {
'range': [0, 2]
},
'shapes': [
# Line reference to the axes
{
'type': 'line',
'xref': 'x',
'yref': 'y',
'x0': 4,
'y0': 0,
'x1': 8,
'y1': 1,
'line': {
'color': 'rgb(55, 128, 191)',
'width': 3,
},
},
# Line reference to the plot
{
'type': 'line',
'xref': 'paper',
'yref': 'paper',
'x0': 0,
'y0': 0,
'x1': 0.5,
'y1': 0.5,
'line': {
'color': 'rgb(50, 171, 96)',
'width': 3,
},
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-line-ref')
```
#### Creating Tangent Lines with Shapes
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.linspace(1, 3, 200)
y0 = x0 * np.sin(np.power(x0, 2)) + 1
trace0 = go.Scatter(
x=x0,
y=y0,
)
data = [trace0]
layout = {
'title': "$f(x)=x\\sin(x^2)+1\\\\ f\'(x)=\\sin(x^2)+2x^2\\cos(x^2)$",
'shapes': [
{
'type': 'line',
'x0': 1,
'y0': 2.30756,
'x1': 1.75,
'y1': 2.30756,
'opacity': 0.7,
'line': {
'color': 'red',
'width': 2.5,
},
},
{
'type': 'line',
'x0': 2.5,
'y0': 3.80796,
'x1': 3.05,
'y1': 3.80796,
'opacity': 0.7,
'line': {
'color': 'red',
'width': 2.5,
},
},
{
'type': 'line',
'x0': 1.90,
'y0': -1.1827,
'x1': 2.50,
'y1': -1.1827,
'opacity': 0.7,
'line': {
'color': 'red',
'width': 2.5,
},
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='tangent-line')
```
#### Rectangles Positioned Relative to the Axes
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1.5, 4.5],
y=[0.75, 0.75],
text=['Unfilled Rectangle', 'Filled Rectangle'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 7],
'showgrid': False,
},
'yaxis': {
'range': [0, 3.5]
},
'shapes': [
# unfilled Rectangle
{
'type': 'rect',
'x0': 1,
'y0': 1,
'x1': 2,
'y1': 3,
'line': {
'color': 'rgba(128, 0, 128, 1)',
},
},
# filled Rectangle
{
'type': 'rect',
'x0': 3,
'y0': 1,
'x1': 6,
'y1': 2,
'line': {
'color': 'rgba(128, 0, 128, 1)',
'width': 2,
},
'fillcolor': 'rgba(128, 0, 128, 0.7)',
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-rectangle')
```
#### Rectangle Positioned Relative to the Plot & to the Axes
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1.5, 3],
y=[2.5, 2.5],
text=['Rectangle reference to the plot',
'Rectangle reference to the axes'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 4],
'showgrid': False,
},
'yaxis': {
'range': [0, 4]
},
'shapes': [
# Rectangle reference to the axes
{
'type': 'rect',
'xref': 'x',
'yref': 'y',
'x0': 2.5,
'y0': 0,
'x1': 3.5,
'y1': 2,
'line': {
'color': 'rgb(55, 128, 191)',
'width': 3,
},
'fillcolor': 'rgba(55, 128, 191, 0.6)',
},
# Rectangle reference to the plot
{
'type': 'rect',
'xref': 'paper',
'yref': 'paper',
'x0': 0.25,
'y0': 0,
'x1': 0.5,
'y1': 0.5,
'line': {
'color': 'rgb(50, 171, 96)',
'width': 3,
},
'fillcolor': 'rgba(50, 171, 96, 0.6)',
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-rectangle-ref')
```
#### Highlighting Time Series Regions with Rectangle Shapes
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=['2015-02-01', '2015-02-02', '2015-02-03', '2015-02-04', '2015-02-05',
'2015-02-06', '2015-02-07', '2015-02-08', '2015-02-09', '2015-02-10',
'2015-02-11', '2015-02-12', '2015-02-13', '2015-02-14', '2015-02-15',
'2015-02-16', '2015-02-17', '2015-02-18', '2015-02-19', '2015-02-20',
'2015-02-21', '2015-02-22', '2015-02-23', '2015-02-24', '2015-02-25',
'2015-02-26', '2015-02-27', '2015-02-28'],
y=[-14, -17, -8, -4, -7, -10, -12, -14, -12, -7, -11, -7, -18, -14, -14,
-16, -13, -7, -8, -14, -8, -3, -9, -9, -4, -13, -9, -6],
mode='lines',
name='temperature'
)
data = [trace0]
layout = {
# to highlight the time ranges we use shapes and create rectangular regions
'shapes': [
# 1st highlight during Feb 4 - Feb 6
{
'type': 'rect',
# x-reference is assigned to the x-values
'xref': 'x',
# y-reference is assigned to the plot paper [0,1]
'yref': 'paper',
'x0': '2015-02-04',
'y0': 0,
'x1': '2015-02-06',
'y1': 1,
'fillcolor': '#d3d3d3',
'opacity': 0.2,
'line': {
'width': 0,
}
},
# 2nd highlight during Feb 20 - Feb 23
{
'type': 'rect',
'xref': 'x',
'yref': 'paper',
'x0': '2015-02-20',
'y0': 0,
'x1': '2015-02-22',
'y1': 1,
'fillcolor': '#d3d3d3',
'opacity': 0.2,
'line': {
'width': 0,
}
}
]
}
py.iplot({'data': data, 'layout': layout}, filename='timestamp-highlight')
```
#### Circles Positioned Relative to the Axes
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1.5, 3.5],
y=[0.75, 2.5],
text=['Unfilled Circle',
'Filled Circle'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 4.5],
'zeroline': False,
},
'yaxis': {
'range': [0, 4.5]
},
'width': 800,
'height': 800,
'shapes': [
# unfilled circle
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': 1,
'y0': 1,
'x1': 3,
'y1': 3,
'line': {
'color': 'rgba(50, 171, 96, 1)',
},
},
# filled circle
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'fillcolor': 'rgba(50, 171, 96, 0.7)',
'x0': 3,
'y0': 3,
'x1': 4,
'y1': 4,
'line': {
'color': 'rgba(50, 171, 96, 1)',
},
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-circle')
```
#### Highlighting Clusters of Scatter Points with Circle Shapes
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.normal(2, 0.45, 300)
y0 = np.random.normal(2, 0.45, 300)
x1 = np.random.normal(6, 0.4, 200)
y1 = np.random.normal(6, 0.4, 200)
x2 = np.random.normal(4, 0.3, 200)
y2 = np.random.normal(4, 0.3, 200)
trace0 = go.Scatter(
x=x0,
y=y0,
mode='markers',
)
trace1 = go.Scatter(
x=x1,
y=y1,
mode='markers'
)
trace2 = go.Scatter(
x=x2,
y=y2,
mode='markers'
)
trace3 = go.Scatter(
x=x1,
y=y0,
mode='markers'
)
layout = {
'shapes': [
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': min(x0),
'y0': min(y0),
'x1': max(x0),
'y1': max(y0),
'opacity': 0.2,
'fillcolor': 'blue',
'line': {
'color': 'blue',
},
},
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': min(x1),
'y0': min(y1),
'x1': max(x1),
'y1': max(y1),
'opacity': 0.2,
'fillcolor': 'orange',
'line': {
'color': 'orange',
},
},
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': min(x2),
'y0': min(y2),
'x1': max(x2),
'y1': max(y2),
'opacity': 0.2,
'fillcolor': 'green',
'line': {
'color': 'green',
},
},
{
'type': 'circle',
'xref': 'x',
'yref': 'y',
'x0': min(x1),
'y0': min(y0),
'x1': max(x1),
'y1': max(y0),
'opacity': 0.2,
'fillcolor': 'red',
'line': {
'color': 'red',
},
},
],
'showlegend': False,
}
data = [trace0, trace1, trace2, trace3]
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='clusters')
```
#### Venn Diagram with Circle Shapes
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 1.75, 2.5],
y=[1, 1, 1],
text=['$A$', '$A+B$', '$B$'],
mode='text',
textfont=dict(
color='black',
size=18,
family='Arial',
)
)
data = [trace0]
layout = {
'xaxis': {
'showticklabels': False,
'showgrid': False,
'zeroline': False,
},
'yaxis': {
'showticklabels': False,
'showgrid': False,
'zeroline': False,
},
'shapes': [
{
'opacity': 0.3,
'xref': 'x',
'yref': 'y',
'fillcolor': 'blue',
'x0': 0,
'y0': 0,
'x1': 2,
'y1': 2,
'type': 'circle',
'line': {
'color': 'blue',
},
},
{
'opacity': 0.3,
'xref': 'x',
'yref': 'y',
'fillcolor': 'gray',
'x0': 1.5,
'y0': 0,
'x1': 3.5,
'y1': 2,
'type': 'circle',
'line': {
'color': 'gray',
},
}
],
'margin': {
'l': 20,
'r': 20,
'b': 100
},
'height': 600,
'width': 800,
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='venn-diagram')
```
#### SVG Paths
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[2, 1, 8, 8],
y=[0.25, 9, 2, 6],
text=['Filled Triangle',
'Filled Polygon',
'Quadratic Bezier Curves',
'Cubic Bezier Curves'],
mode='text',
)
data = [trace0]
layout = {
'xaxis': {
'range': [0, 9],
'zeroline': False,
},
'yaxis': {
'range': [0, 11],
'showgrid': False,
},
'shapes': [
# Quadratic Bezier Curves
{
'type': 'path',
'path': 'M 4,4 Q 6,0 8,4',
'line': {
'color': 'rgb(93, 164, 214)',
},
},
# Cubic Bezier Curves
{
'type': 'path',
'path': 'M 1,4 C 2,8 6,4 8,8',
'line': {
'color': 'rgb(207, 114, 255)',
},
},
# filled Triangle
{
'type': 'path',
'path': ' M 1 1 L 1 3 L 4 1 Z',
'fillcolor': 'rgba(44, 160, 101, 0.5)',
'line': {
'color': 'rgb(44, 160, 101)',
},
},
# filled Polygon
{
'type': 'path',
'path': ' M 3,7 L2,8 L2,9 L3,10, L4,10 L5,9 L5,8 L4,7 Z',
'fillcolor': 'rgba(255, 140, 184, 0.5)',
'line': {
'color': 'rgb(255, 140, 184)',
},
},
]
}
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='shapes-path')
```
### Dash Example
[Dash](https://plotly.com/products/dash/) is an Open Source Python library which can help you convert plotly figures into a reactive, web-based application. Below is a simple example of a dashboard created using Dash. Its [source code](https://github.com/plotly/simple-example-chart-apps/tree/master/dash-shapesplot) can easily be deployed to a PaaS.
```
from IPython.display import IFrame
IFrame(src= "https://dash-simple-apps.plotly.host/dash-shapesplot/", width="100%", height="650px", frameBorder="0")
from IPython.display import IFrame
IFrame(src= "https://dash-simple-apps.plotly.host/dash-shapesplot/code", width="100%", height=500, frameBorder="0")
```
#### Reference
See https://plotly.com/python/reference/#layout-shapes for more information and chart attribute options!
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'shapes.ipynb', 'python/shapes/', 'Shapes | plotly',
'How to make SVG shapes in python. Examples of lines, circle, rectangle, and path.',
title = 'Shapes | plotly',
name = 'Shapes',
thumbnail='thumbnail/shape.jpg', language='python',
has_thumbnail='true', display_as='file_settings', order=32,
ipynb='~notebook_demo/14')
```
<a href="https://colab.research.google.com/github/barksdaleaz/big_transfer/blob/master/big_transfer_tf2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2020 Google LLC.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<a href="https://colab.research.google.com/github/google-research/big_transfer/blob/master/colabs/big_transfer_tf2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# BigTransfer (BiT): A step-by-step tutorial for state-of-the-art vision
By Jessica Yung and Joan Puigcerver
In this colab, we will show you how to load one of our BiT models (a ResNet50 trained on ImageNet-21k), use it out-of-the-box and fine-tune it on a dataset of flowers.
This colab accompanies our [TensorFlow blog post](https://ai.googleblog.com/2020/05/open-sourcing-bit-exploring-large-scale.html) and is based on the [BigTransfer paper](https://arxiv.org/abs/1912.11370).
The models from the paper can be found on [TensorFlow Hub](https://tfhub.dev/google/collections/bit/1) and additional models obtained by finetuning BiT-m in more specific sets of data can be found [here](https://tfhub.dev/google/collections/experts/bit/1).
We also share code to fine-tune our models in TensorFlow2, JAX and PyTorch in [our GitHub repository](https://github.com/google-research/big_transfer).
```
#@title Imports
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import time
from PIL import Image
import requests
from io import BytesIO
import matplotlib.pyplot as plt
import numpy as np
import os
import pathlib
#@title Construct imagenet logit-to-class-name dictionary (imagenet_int_to_str)
!wget https://storage.googleapis.com/bit_models/ilsvrc2012_wordnet_lemmas.txt
imagenet_int_to_str = {}
with open('ilsvrc2012_wordnet_lemmas.txt', 'r') as f:
for i in range(1000):
row = f.readline()
row = row.rstrip()
imagenet_int_to_str.update({i: row})
#@title cifar-10 label names (hidden)
cifar_10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
```
# Load the pre-trained BiT model
First, we will load a pre-trained BiT model. There are ten models you can choose from, spanning two upstream training datasets and five ResNet architectures.
In this tutorial, we will use a ResNet50x1 model trained on ImageNet-21k.
### Where to find the models
Models that output image features (pre-logit layer) can be found at
* `https://tfhub.dev/google/bit/m-{archi, e.g. r50x1}/1`
whereas models that return outputs in the Imagenet (ILSVRC-2012) label space can be found at
* `https://tfhub.dev/google/bit/m-{archi, e.g. r50x1}/ilsvrc2012_classification/1`
The architectures we have include R50x1, R50x3, R101x1, R101x3 and R152x4. The architectures are all in lowercase in the links.
So for example, if you want image features from a ResNet-50, you could use the model at `https://tfhub.dev/google/bit/m-r50x1/1`. This is also the model we'll use in this tutorial.
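Following the URL patterns above, a small helper can build either kind of TF Hub URL for any of the listed architectures. This is a sketch of our own (the helper name is not part of the BiT code); the architecture list comes from the text above:

```python
# Sketch: build BiT-m TF Hub URLs from the patterns described above.
ARCHITECTURES = ["r50x1", "r50x3", "r101x1", "r101x3", "r152x4"]

def bit_hub_url(arch, classification=False):
    """Return the TF Hub URL for a BiT-m model.

    arch: one of ARCHITECTURES (lowercase, as in the links above).
    classification: if True, return the ImageNet (ILSVRC-2012) classifier
    variant; otherwise the feature-vector (pre-logits) model.
    """
    if arch not in ARCHITECTURES:
        raise ValueError("unknown architecture: %s" % arch)
    base = "https://tfhub.dev/google/bit/m-%s" % arch
    return base + ("/ilsvrc2012_classification/1" if classification else "/1")

print(bit_hub_url("r50x1"))  # https://tfhub.dev/google/bit/m-r50x1/1
```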
```
# Load model into KerasLayer
# Note: this loads the larger R50x3 feature-vector model; the text above
# describes the smaller R50x1 model ("m-r50x1"), which is faster to fine-tune.
model_url = "https://tfhub.dev/google/bit/m-r50x3/1"
module = hub.KerasLayer(model_url)
```
## Use BiT out-of-the-box
If you don’t yet have labels for your images (or just want to have some fun), you may be interested in using the model out-of-the-box, i.e. without fine-tuning it. For this, we will use a model fine-tuned on ImageNet so it has the interpretable ImageNet label space of 1k classes. Many common objects are not covered, but it gives a reasonable idea of what is in the image.
```
# Load model fine-tuned on ImageNet
model_url = "https://tfhub.dev/google/bit/m-r50x1/ilsvrc2012_classification/1"
imagenet_module = hub.KerasLayer(model_url)
```
Using the model is very simple:
```
logits = module(image)
```
Note that the BiT models take inputs of shape [?, ?, 3] (i.e. 3 colour channels) with values between 0 and 1.
```
#@title Helper functions for loading image (hidden)
def preprocess_image(image):
image = np.array(image)
# reshape into shape [batch_size, height, width, num_channels]
img_reshaped = tf.reshape(image, [1, image.shape[0], image.shape[1], image.shape[2]])
# Use `convert_image_dtype` to convert to floats in the [0,1] range.
image = tf.image.convert_image_dtype(img_reshaped, tf.float32)
return image
def load_image_from_url(url):
"""Returns an image with shape [1, height, width, num_channels]."""
response = requests.get(url)
image = Image.open(BytesIO(response.content))
image = preprocess_image(image)
return image
#@title Plotting helper functions (hidden)
#@markdown Credits to Xiaohua Zhai, Lucas Beyer and Alex Kolesnikov from Brain Zurich, Google Research
# Show the MAX_PREDS highest scoring labels:
MAX_PREDS = 5
# Do not show labels with lower score than this:
MIN_SCORE = 0.8
def show_preds(logits, image, correct_cifar_label=None, cifar_logits=False):
if len(logits.shape) > 1:
logits = tf.reshape(logits, [-1])
fig, axes = plt.subplots(1, 2, figsize=(7, 4), squeeze=False)
ax1, ax2 = axes[0]
ax1.axis('off')
ax1.imshow(image)
if correct_cifar_label is not None:
ax1.set_title(cifar_10_labels[correct_cifar_label])
classes = []
scores = []
logits_max = np.max(logits)
softmax_denominator = np.sum(np.exp(logits - logits_max))
for index, j in enumerate(np.argsort(logits)[-MAX_PREDS::][::-1]):
score = 1.0/(1.0 + np.exp(-logits[j]))
if score < MIN_SCORE: break
if not cifar_logits:
# predicting in imagenet label space
classes.append(imagenet_int_to_str[j])
else:
# predicting in cifar-10 label space
classes.append(cifar_10_labels[j])
scores.append(np.exp(logits[j] - logits_max)/softmax_denominator*100)
ax2.barh(np.arange(len(scores)) + 0.1, scores)
ax2.set_xlim(0, 100)
ax2.set_yticks(np.arange(len(scores)))
ax2.yaxis.set_ticks_position('right')
ax2.set_yticklabels(classes, rotation=0, fontsize=14)
ax2.invert_xaxis()
ax2.invert_yaxis()
ax2.set_xlabel('Prediction probabilities', fontsize=11)
```
**TODO: try replacing the URL below with a link to an image of your choice!**
```
# Load image (image provided is CC0 licensed)
img_url = "https://p0.pikrepo.com/preview/853/907/close-up-photo-of-gray-elephant.jpg"
image = load_image_from_url(img_url)
# Run model on image
logits = imagenet_module(image)
# Show image and predictions
show_preds(logits, image[0])
```
Here the model correctly classifies the photo as an elephant. It is likely an Asian elephant because of the size of its ears.
We will also try predicting on an image from the dataset we're going to fine-tune on - [TF flowers](https://www.tensorflow.org/datasets/catalog/tf_flowers), which has also been used in [other tutorials](https://www.tensorflow.org/tutorials/load_data/images). This dataset contains 3670 images of 5 classes of flowers.
Note that the correct label of the image we're going to predict on (‘tulip’) is not a class in ImageNet and so the model cannot predict that at the moment - let’s see what it tries to do instead.
```
# Import tf_flowers data from tfds
# dataset_name = 'CIFAR-10'
# ds, info = tfds.load(name=dataset_name, split=['train'], with_info=True)
ds, info = tfds.load(
name='cifar10',
split='train',
with_info=True,
)
# Note: when `split` is a list, tfds.load returns a list of datasets and
# `ds = ds[0]` would be needed; with a single split string (as here) it
# returns the dataset directly, so no indexing is required.
print(ds)
num_examples = info.splits['train'].num_examples
NUM_CLASSES = 10
#@title Alternative code for loading a dataset
#@markdown We provide alternative code for loading `tf_flowers` from an URL in this cell to make it easy for you to try loading your own datasets.
#@markdown This code is commented out by default and replaces the cell immediately above. Note that using this option may result in a different example image below.
"""
data_dir = tf.keras.utils.get_file(origin='https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
fname='CIFAR-10', untar=True)
data_dir = pathlib.Path(data_dir)
IMG_HEIGHT = 224
IMG_WIDTH = 224
CLASS_NAMES = cifar_10_labels # from plotting helper functions above
NUM_CLASSES = len(CLASS_NAMES)
num_examples = len(list(data_dir.glob('*/*.jpg')))
def get_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
return tf.where(parts[-2] == CLASS_NAMES)[0][0]
def decode_img(img):
# convert the compressed string to a 3D uint8 tensor
img = tf.image.decode_jpeg(img, channels=3)
return img
def process_path(file_path):
label = get_label(file_path)
# load the raw data from the file as a string
img = tf.io.read_file(file_path)
img = decode_img(img)
features = {'image': img, 'label': label}
return features
# list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'))
list_ds = tf.keras.datasets.cifar10.load_data()
ds = list_ds.map(process_path, num_parallel_calls=tf.data.experimental.AUTOTUNE)
"""
# Split into train and test sets
# We have checked that the classes are reasonably balanced.
train_split = 0.9
num_train = int(train_split * num_examples)
ds_train = ds.take(num_train)
ds_test = ds.skip(num_train)
DATASET_NUM_TRAIN_EXAMPLES = num_examples
for features in ds_train.take(1):
image = features['image']
image = preprocess_image(image)
# Run model on image
logits = imagenet_module(image)
# Show image and predictions
show_preds(logits, image[0], correct_cifar_label=features['label'].numpy())
```
In this case, 'tulip' is not a class in ImageNet, and the model predicts a reasonably similar-looking class, 'bell pepper'.
## Fine-tuning the BiT model
Now we are going to fine-tune the BiT model so it performs better on a specific dataset. Here we are going to use Keras for simplicity and we are going to fine-tune the model on the CIFAR-10 dataset (`cifar_10`).
We will use the model we loaded at the start (i.e. the one not fine-tuned on ImageNet) so the model is less biased towards ImageNet-like images.
There are two steps:
1. Create a new model with a new final layer (which we call the ‘head’), and
2. Fine-tune this model using BiT-HyperRule, our hyperparameter heuristic.
### 1. Creating the new model
To create the new model, we:
1. Cut off the BiT model’s original head. This leaves us with the “pre-logits” output.
 - We do not have to do this if we use the ‘feature extractor’ models (i.e. all those in subdirectories titled `feature_vectors`), since for those models the head has already been cut off.
2. Add a new head with the number of outputs equal to the number of classes of our new task. Note that it is important that we initialise the head to all zeroes.
#### Add new head to the BiT model
Since we want to use BiT on a new dataset (not the one it was trained on), we need to replace the final layer with one that has the correct number of output classes. This final layer is called the head.
Note that it is important to **initialise the new head to all zeros**.
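To see why a zero-initialised head is useful: with all weights and biases at zero, every class logit is zero regardless of the input embedding, so the initial softmax is uniform and no class is arbitrarily preferred before fine-tuning. A minimal NumPy sketch of this (dimensions here are illustrative assumptions):

```python
import numpy as np

# Zero-initialised dense head: logits = embedding @ W + b with W = 0, b = 0.
embedding_dim, num_classes = 2048, 10
W = np.zeros((embedding_dim, num_classes))
b = np.zeros(num_classes)

embedding = np.random.randn(embedding_dim)  # any pre-logits feature vector
logits = embedding @ W + b

# All logits are 0, so every class probability is 1 / num_classes.
probs = np.exp(logits) / np.sum(np.exp(logits))
print(probs)
```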
```
# Add new head to the BiT model
class MyBiTModel(tf.keras.Model):
"""BiT with a new head."""
def __init__(self, num_classes, module):
super().__init__()
self.num_classes = num_classes
self.head = tf.keras.layers.Dense(num_classes, kernel_initializer='zeros')
self.bit_model = module
def call(self, images):
# No need to cut head off since we are using feature extractor model
bit_embedding = self.bit_model(images)
return self.head(bit_embedding)
model = MyBiTModel(num_classes=NUM_CLASSES, module=module)
```
### Data and preprocessing
#### BiT Hyper-Rule: Our hyperparameter selection heuristic
When we fine-tune the model, we use BiT-HyperRule, our heuristic for choosing hyperparameters for downstream fine-tuning. This is **not a hyperparameter sweep** - given a dataset, it specifies one set of hyperparameters that we’ve seen produce good results. You can often obtain better results by running a more expensive hyperparameter sweep, but BiT-HyperRule is an effective way of getting good initial results on your dataset.
**Hyperparameter heuristic details**
In BiT-HyperRule, we use a vanilla SGD optimiser with an initial learning rate of 0.003, momentum 0.9 and batch size 512. We decay the learning rate by a factor of 10 at 30%, 60% and 90% of the training steps.
As data preprocessing, we resize the image, take a random crop, and then do a random horizontal flip (details in table below). We do random crops and horizontal flips for all tasks except those where such actions destroy label semantics. E.g. we don’t apply random crops to counting tasks, or random horizontal flip to tasks where we’re meant to predict the orientation of an object.
Image area | Resize to | Take random crop of size
--- | --- | ---
Smaller than 96 x 96 px | 160 x 160 px | 128 x 128 px
At least 96 x 96 px | 512 x 512 px | 480 x 480 px
*Table 1: Downstream resizing and random cropping details. If images are larger, we resize them to a larger fixed size to take advantage of benefits from fine-tuning on higher resolution.*
We also use MixUp for datasets with more than 20k examples. Since the dataset used in this tutorial has fewer than 20k examples, MixUp does not apply here; for simplicity and speed we do not include it in this colab, but it is included in our [github repo implementation](https://github.com/google-research/big_transfer).
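The 30%/60%/90% decay points described above can be computed directly from the total number of fine-tuning steps. A minimal sketch (the function name is ours, not from the BiT code):

```python
def hyperrule_boundaries(total_steps):
    """Steps at which BiT-HyperRule decays the learning rate by 10x:
    30%, 60% and 90% of the way through training."""
    return [int(round(total_steps * frac)) for frac in (0.3, 0.6, 0.9)]

# Matches the 20k-500k and >500k settings used later in this notebook:
print(hyperrule_boundaries(10000))  # [3000, 6000, 9000]
print(hyperrule_boundaries(20000))  # [6000, 12000, 18000]
```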
```
#@title Set dataset-dependent hyperparameters
#@markdown Here we set dataset-dependent hyperparameters. For example, our dataset of flowers has 3670 images of varying size (a few hundred x a few hundred pixels), so the image size is larger than 96x96 and the dataset size is <20k examples. However, for speed reasons (since this is a tutorial and we are training on a single GPU), we will select the `<96x96 px` option and train on lower resolution images. As we will see, we can still attain strong results.
#@markdown **Algorithm details: how are the hyperparameters dataset-dependent?**
#@markdown It's quite intuitive - we resize images to a smaller fixed size if they are smaller than 96 x 96px and to a larger fixed size otherwise. The number of steps we fine-tune for is larger for larger datasets.
IMAGE_SIZE = "=<96x96 px" #@param ["=<96x96 px","> 96 x 96 px"]
DATASET_SIZE = "<20k examples" #@param ["<20k examples", "20k-500k examples", ">500k examples"]
if IMAGE_SIZE == "=<96x96 px":
RESIZE_TO = 160
CROP_TO = 128
else:
RESIZE_TO = 512
CROP_TO = 480
if DATASET_SIZE == "<20k examples":
SCHEDULE_LENGTH = 500
SCHEDULE_BOUNDARIES = [200, 300, 400]
elif DATASET_SIZE == "20k-500k examples":
SCHEDULE_LENGTH = 10000
SCHEDULE_BOUNDARIES = [3000, 6000, 9000]
else:
SCHEDULE_LENGTH = 20000
SCHEDULE_BOUNDARIES = [6000, 12000, 18000]
```
**Tip**: if you are running out of memory, decrease the batch size. A way to adjust relevant parameters is to linearly scale the schedule length and learning rate.
```
SCHEDULE_LENGTH = SCHEDULE_LENGTH * 512 / BATCH_SIZE
lr = 0.003 * BATCH_SIZE / 512
```
These adjustments have already been coded in the cells below - you only have to change the `BATCH_SIZE`. If you change the batch size, please re-run the cell above as well to make sure the `SCHEDULE_LENGTH` you are starting from is correct as opposed to already altered from a previous run.
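The linear-scaling rule above can be wrapped in a small helper so both quantities stay consistent whenever the batch size changes. This is a sketch of our own (the function name is not from the BiT code):

```python
def scale_for_batch_size(batch_size, base_schedule_length,
                         base_lr=0.003, base_batch_size=512):
    """Linearly rescale schedule length and learning rate with batch size,
    as described in the tip above."""
    schedule_length = base_schedule_length * base_batch_size / batch_size
    lr = base_lr * batch_size / base_batch_size
    return schedule_length, lr

# Halving the batch size doubles the schedule length and halves the lr:
print(scale_for_batch_size(256, 500))  # (1000.0, 0.0015)
```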
```
# Preprocessing helper functions
# Create data pipelines for training and testing:
BATCH_SIZE = 512
SCHEDULE_LENGTH = SCHEDULE_LENGTH * 512 / BATCH_SIZE
STEPS_PER_EPOCH = 10
def cast_to_tuple(features):
return (features['image'], features['label'])
def preprocess_train(features):
# Apply random crops and horizontal flips for all tasks
# except those for which cropping or flipping destroys the label semantics
# (e.g. predict orientation of an object)
features['image'] = tf.image.random_flip_left_right(features['image'])
features['image'] = tf.image.resize(features['image'], [RESIZE_TO, RESIZE_TO])
features['image'] = tf.image.random_crop(features['image'], [CROP_TO, CROP_TO, 3])
features['image'] = tf.cast(features['image'], tf.float32) / 255.0
return features
def preprocess_test(features):
features['image'] = tf.image.resize(features['image'], [RESIZE_TO, RESIZE_TO])
features['image'] = tf.cast(features['image'], tf.float32) / 255.0
return features
pipeline_train = (ds_train
.shuffle(10000)
.repeat(int(SCHEDULE_LENGTH * BATCH_SIZE / DATASET_NUM_TRAIN_EXAMPLES * STEPS_PER_EPOCH) + 1 + 50) # repeat dataset_size / num_steps
.map(preprocess_train, num_parallel_calls=8)
.batch(BATCH_SIZE)
.map(cast_to_tuple) # for keras model.fit
.prefetch(2))
pipeline_test = (ds_test.map(preprocess_test, num_parallel_calls=1)
.map(cast_to_tuple) # for keras model.fit
.batch(BATCH_SIZE)
.prefetch(2))
```
### Fine-tuning loop
The fine-tuning will take about 15 minutes. If you wish, you can manually set the number of epochs to be 10 instead of 50 for the tutorial, and you will likely still obtain a model with validation accuracy > 99%.
```
# Define optimiser and loss
lr = 0.003 * BATCH_SIZE / 512
# Decay learning rate by a factor of 10 at SCHEDULE_BOUNDARIES.
lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(boundaries=SCHEDULE_BOUNDARIES,
                                                                   values=[lr, lr*0.1, lr*0.01, lr*0.001])
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer,
loss=loss_fn,
metrics=['accuracy'])
# Fine-tune model
history = model.fit(
pipeline_train,
batch_size=BATCH_SIZE,
steps_per_epoch=STEPS_PER_EPOCH,
epochs=10,
# epochs=int(SCHEDULE_LENGTH / STEPS_PER_EPOCH),  # full BiT-HyperRule schedule; `epochs=10` above shortens fine-tuning for this tutorial
# validation_data=pipeline_test # here we are only using
# this data to evaluate our performance
)
```
We see that our model attains around 98-99% training and validation accuracy.
## Save fine-tuned model for later use
It is easy to save your model to use later on. You can then load your saved model in exactly the same way as we loaded the BiT models at the start.
```
# Save fine-tuned model as SavedModel
export_module_dir = '/tmp/my_saved_bit_model/'
tf.saved_model.save(model, export_module_dir)
# Load saved model
saved_module = hub.KerasLayer(export_module_dir, trainable=True)
# Visualise predictions from new model
for features in ds_train.take(1):
image = features['image']
image = preprocess_image(image)
image = tf.image.resize(image, [CROP_TO, CROP_TO])
# Run model on image
logits = saved_module(image)
# Show image and predictions
show_preds(logits, image[0], correct_cifar_label=features['label'].numpy(), cifar_logits=True)
```
Voila - we now have a model that predicts tulips as tulips and not bell peppers. :)
## Summary
In this post, you learned about the key components we used to train models that can transfer well to many different tasks. You also learned how to load one of our BiT models, fine-tune it on your target task and save the resulting model. Hope this helped and happy fine-tuning!
## Acknowledgements
This colab is based on work by Alex Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly and Neil Houlsby. We thank many members of Brain Research Zurich and the TensorFlow team for their feedback, especially Luiz Gustavo Martins and Marcin Michalski.
# Restoring the My Server Container Image
<HR>
This notebook distributes the container image from a My Server container image file to each Notebook Container host, and configures My Server to start from the distributed image.
# Prepare a Directory for the Container Image File
This is the directory that holds the source container image file for distribution.
`BACKUP_WORK_DIR` is a path relative to `$HOME`.
```
BACKUP_WORK_DIR = 'images'
import os
import os.path
backup_dir = os.path.expanduser(os.path.join('~', BACKUP_WORK_DIR))
if not os.path.exists(backup_dir):
os.makedirs(backup_dir)
```
# Transfer the Container Image File
* Transfer the container image backup file to be restored to the directory `$HOME/images`.
* The image file is a tar archive created with the `docker save` command.
* If you created the image file with `16_SaveNotebookContainerImage.ipynb`, it is already saved under `$HOME/images`, so proceed to the next step.
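A quick way to sanity-check, before distributing, that a file really is a `docker save` archive: such archives are plain tar files containing a `manifest.json` entry at the top level. This helper is our own addition, not part of the original procedure:

```python
import tarfile

def looks_like_docker_image_archive(path):
    """Return True if `path` is a tar archive containing manifest.json,
    as produced by `docker save`."""
    if not tarfile.is_tarfile(path):
        return False
    with tarfile.open(path) as tar:
        return "manifest.json" in tar.getnames()
```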
# Specify the Container Image Backup File Name
* In the next cell, set `ARCHIVE_FILE_NAME` to the file name of the container image backup to restore.
```
ARCHIVE_FILE_NAME="server-container-170513-065930.tar"
```
* The backup file is assumed to be stored in the `BACKUP_WORK_DIR` created in the cell above.
```
!cd ~/{BACKUP_WORK_DIR} && ls -l *.tar
ARCHIVE_FILE_NAME="server-container-20190322-020641.tar"
import os.path
imagefilepath = os.path.expanduser(os.path.join('~', BACKUP_WORK_DIR, ARCHIVE_FILE_NAME))
assert os.path.exists(imagefilepath)
!ls -l ~/{BACKUP_WORK_DIR}/{ARCHIVE_FILE_NAME}
```
# Distribute the Container Image
Distribute the image to each CoursewareHub node.
```
import os
target_hub = ['-i', os.path.expanduser('~/ansible/inventory'), 'ch-master']
!ansible -m ping {' '.join(target_hub)}
target_hub_all = ['-i', os.path.expanduser('~/ansible/inventory'), 'ch-all']
!ansible -m ping {' '.join(target_hub_all)}
```
**If the cell above does not work**
```
The authenticity of host 'xxx.xxx.205.128 (xxx.xxx.205.128)' can't be established.
ECDSA key fingerprint is SHA256:qjPDx7y/926gHJL9+SgMGKpicRORzffk1/xiUyIP00w.
Are you sure you want to continue connecting (yes/no)?
```
(The IP address and fingerprint above are examples.)
If the cell hangs in this state without further output, open a Terminal in Jupyter and run:
```
$ ssh xxx.xxx.205.128
```
Confirm that the ECDSA key fingerprint is `SHA256:qjPDx7y/926gHJL9+SgMGKpicRORzffk1/xiUyIP00w`, answer yes, then stop the cell above and re-run it.
## Record Information About the Current Image
Record the information needed to restore the current image in case there is a problem with the new one.
```
def get_container(name):
import subprocess
try:
sid = subprocess.check_output(['ansible', '-b', '-a', 'docker service ps {} -q'.format(name)] + target_hub)
sid = sid.decode('utf-8').split('\n')[1].strip()
cinfo = subprocess.check_output(
['ansible', '-b', '-a',
'docker inspect --format "{% raw %} {{.NodeID}} {{.Status.ContainerStatus.ContainerID}} {% endraw %}" ' + sid
] + target_hub)
nodeid, cid = cinfo.decode('utf-8').split('\n')[1].strip().split()
nodeip = subprocess.check_output(
['ansible', '-b', '-a',
'docker node inspect --format "{% raw %} {{.Status.Addr}} {% endraw %}" ' + nodeid
] + target_hub)
nodeip = nodeip.decode('utf-8').split('\n')[1].split()[0]
return (nodeip, cid)
except subprocess.CalledProcessError as e:
print(e.output.decode('utf-8'))
raise
def get_container_env(service_name, env_name):
c = get_container(service_name)
import subprocess
import json
target = ['-i', os.path.expanduser('~/ansible/inventory'), c[0]]
env = subprocess.check_output(
['ansible', '-b', '-a',
'docker inspect --format "{% raw %} {{json .Config.Env}} {% endraw %}" ' + c[1]
] + target)
env = json.loads(env.decode('utf-8').split('\n')[1])
env = dict([tuple(x.split('=')) for x in env])
return env[env_name]
image_tag = get_container_env('jupyterhub', 'CONTAINER_IMAGE')
image_tag
import subprocess
org_image_id = subprocess.check_output(
['ansible', '-b', '-a',
'docker inspect --format "{% raw %} {{.Id}} {% endraw %}" ' + image_tag
] + target_hub)
org_image_id = org_image_id.decode('utf-8').split('\n')[1].strip()
org_image_id
```
Tag the current image so that an administrator can recover it in the unlikely event that the new image fails to start.
```
from datetime import datetime
image_tag_body = image_tag.rsplit(':', 1)[0]
now = datetime.now()
failback_latest_tag = '{}:fallback-latest'.format(image_tag_body)
failback_date_tag = '{}:fallback-{}'.format(image_tag_body, now.strftime('%Y%m%d-%H%M%S'))
print('Setting image tag {} => {}'.format(org_image_id, failback_date_tag) )
subprocess.check_output(['ansible', '-b', '-a',
'docker tag {} {}'.format(org_image_id, failback_date_tag)]
+ target_hub_all).decode('utf-8')
print('Setting image tag {} => {}'.format(org_image_id, failback_latest_tag) )
subprocess.check_output(['ansible', '-b', '-a',
'docker tag {} {}'.format(org_image_id, failback_latest_tag)]
+ target_hub_all).decode('utf-8')
```
## Distribute the Image
Run the image distribution.
```
import subprocess
import re
print('Loading image from the file: {}'.format(imagefilepath))
loaded_image = subprocess.check_output(['ansible', '-b', '-a',
'docker load -q -i /jupyter/users/{}/{}/{}'.format(
os.environ['JUPYTERHUB_USER'],
BACKUP_WORK_DIR,
ARCHIVE_FILE_NAME,
)] + target_hub_all)
loaded_image = loaded_image.decode('utf-8').split('\n')
loaded_images = set()
for l in loaded_image:
m = re.match(r'^Loaded image:\s*(.+)$', l)
if m:
loaded_images.add(m.group(1))
assert len(loaded_images) == 1
loaded_image = loaded_images.pop()
print('Loaded image: {}'.format(loaded_image))
print('Set image tag {} => {}'.format(loaded_image, image_tag) )
subprocess.check_output(['ansible', '-b', '-a',
'docker tag {} {}'.format(loaded_image, image_tag)]
+ target_hub_all)
print('Completed')
```
## Test
1. Open the admin control panel at the following URL:
 * [/hub/admin](/hub/admin)
1. Using a **test user** (i.e. a user **other than** the one you are currently logged in as), restart the Jupyter Notebook server with "stop server" and "start server".
 * Do not try this with your own user until you have confirmed that the server can start.
 * If you do not have a test user, you need to create one in advance.
1. Use the "access server" button to access the test user's Jupyter Notebook server and confirm that it works.
1. The new image does not take effect until each user restarts their My Server.
## Rollback
The steps in this section restore the original image if the distributed image did not work in the test above.
If the test succeeded, there is no need to run them.
```
# Prevent the cells below from being executed by "Run All"
assert False
print('Setting image tag {} => {}'.format(org_image_id, image_tag) )
subprocess.check_output(['ansible', '-b', '-a',
'docker tag {} {}'.format(org_image_id, image_tag)]
+ target_hub_all).decode('utf-8')
print('Completed')
```
# Robot Class
In this project, we'll be localizing a robot in a 2D grid world. The basis for simultaneous localization and mapping (SLAM) is to gather information from a robot's sensors and motions over time, and then use information about measurements and motion to re-construct a map of the world.
### Uncertainty
As you've learned, robot motion and sensors have some uncertainty associated with them. For example, imagine a car driving up hill and down hill; the speedometer reading will likely overestimate the speed of the car going up hill and underestimate the speed of the car going down hill because it cannot perfectly account for gravity. Similarly, we cannot perfectly predict the *motion* of a robot. A robot is likely to slightly overshoot or undershoot a target location.
In this notebook, we'll look at the `robot` class that is *partially* given to you for the upcoming SLAM notebook. First, we'll create a robot and move it around a 2D grid world. Then, **you'll be tasked with defining a `sense` function for this robot that allows it to sense landmarks in a given world**! It's important that you understand how this robot moves, senses, and how it keeps track of different landmarks that it sees in a 2D grid world, so that you can work with its movement and sensor data.
---
Before we start analyzing robot motion, let's load in our resources and define the `robot` class. You can see that this class initializes the robot's position and adds measures of uncertainty for motion. You'll also see a `sense()` function which is not yet implemented, and you will learn more about that later in this notebook.
```
# import some resources
import numpy as np
import matplotlib.pyplot as plt
import random
%matplotlib inline
# the robot class
class robot:
# --------
# init:
# creates a robot with the specified parameters and initializes
# the location (self.x, self.y) to the center of the world
#
def __init__(self, world_size = 100.0, measurement_range = 30.0,
motion_noise = 1.0, measurement_noise = 1.0):
self.measurement_noise = 0.0
self.world_size = world_size
self.measurement_range = measurement_range
self.x = world_size / 2.0
self.y = world_size / 2.0
self.motion_noise = motion_noise
self.measurement_noise = measurement_noise
self.landmarks = []
self.num_landmarks = 0
    # returns a random float in the range [-1.0, 1.0)
def rand(self):
return random.random() * 2.0 - 1.0
# --------
# move: attempts to move robot by dx, dy. If outside world
# boundary, then the move does nothing and instead returns failure
#
def move(self, dx, dy):
x = self.x + dx + self.rand() * self.motion_noise
y = self.y + dy + self.rand() * self.motion_noise
if x < 0.0 or x > self.world_size or y < 0.0 or y > self.world_size:
return False
else:
self.x = x
self.y = y
return True
# --------
# sense: returns x- and y- distances to landmarks within visibility range
# because not all landmarks may be in this range, the list of measurements
# is of variable length. Set measurement_range to -1 if you want all
# landmarks to be visible at all times
#
## TODO: complete the sense function
def sense(self):
''' This function does not take in any parameters, instead it references internal variables
            (such as self.landmarks) to measure the distance between the robot and any landmarks
that the robot can see (that are within its measurement range).
This function returns a list of landmark indices, and the measured distances (dx, dy)
between the robot's position and said landmarks.
This function should account for measurement_noise and measurement_range.
One item in the returned list should be in the form: [landmark_index, dx, dy].
'''
measurements = []
## TODO: iterate through all of the landmarks in a world
## TODO: For each landmark
## 1. compute dx and dy, the distances between the robot and the landmark
## 2. account for measurement noise by *adding* a noise component to dx and dy
## - The noise component should be a random value between [-1.0, 1.0)*measurement_noise
## - Feel free to use the function self.rand() to help calculate this noise component
## - It may help to reference the `move` function for noise calculation
## 3. If either of the distances, dx or dy, fall outside of the internal var, measurement_range
## then we cannot record them; if they do fall in the range, then add them to the measurements list
## as list.append([index, dx, dy]), this format is important for data creation done later
for index, landmark in enumerate(self.landmarks):
dx = landmark[0] - self.x
dy = landmark[1] - self.y
dx += self.rand() * self.measurement_noise
dy += self.rand() * self.measurement_noise
if (abs(dx) <= self.measurement_range) and (abs(dy) <= self.measurement_range):
measurements.append([index,dx,dy])
## TODO: return the final, complete list of measurements
return measurements
# --------
# make_landmarks:
# make random landmarks located in the world
#
def make_landmarks(self, num_landmarks):
self.landmarks = []
for i in range(num_landmarks):
self.landmarks.append([round(random.random() * self.world_size),
round(random.random() * self.world_size)])
self.num_landmarks = num_landmarks
# called when print(robot) is called; prints the robot's location
def __repr__(self):
return 'Robot: [x=%.5f y=%.5f]' % (self.x, self.y)
```
## Define a world and a robot
Next, let's instantiate a robot object. As you can see in `__init__` above, the robot class takes in a number of parameters including a world size and some values that indicate the sensing and movement capabilities of the robot.
In the next example, we define a small 10x10 square world, a measurement range that is half that of the world, and small values for motion and measurement noise. These values will typically be about 10 times larger, but we just want to demonstrate this behavior on a small scale. You are also free to change these values and note what happens as your robot moves!
```
world_size = 10.0 # size of world (square)
measurement_range = 5.0 # range at which we can sense landmarks
motion_noise = 0.2 # noise in robot motion
measurement_noise = 0.2 # noise in the measurements
# instantiate a robot, r
r = robot(world_size, measurement_range, motion_noise, measurement_noise)
# print out the location of r
print(r)
```
## Visualizing the World
In the given example, we can see/print out that the robot is in the middle of the 10x10 world at (x, y) = (5.0, 5.0), which is exactly what we expect!
However, it's kind of hard to imagine this robot in the center of a world, without visualizing the grid itself, and so in the next cell we provide a helper visualization function, `display_world`, that will display a grid world in a plot and draw a red `o` at the location of our robot, `r`. The details of how this function works can be found in the `helpers.py` file in the home directory; you do not have to change anything in this `helpers.py` file.
```
# import helper function
from helpers import display_world
# define figure size
plt.rcParams["figure.figsize"] = (5,5)
# call display_world and display the robot in it's grid world
print(r)
display_world(int(world_size), [r.x, r.y])
```
## Movement
Now you can really picture where the robot is in the world! Next, let's call the robot's `move` function. We'll ask it to move some distance `(dx, dy)` and we'll see that this motion is not perfect by the placement of our robot `o` and by the printed out position of `r`.
Try changing the values of `dx` and `dy` and/or running this cell multiple times; see how the robot moves and how the uncertainty in robot motion accumulates over multiple movements.
#### For a `dx` = 1, does the robot move *exactly* one spot to the right? What about `dx` = -1? What happens if you try to move the robot past the boundaries of the world?
```
# choose values of dx and dy (negative works, too)
dx = 1
dy = 2
r.move(dx, dy)
# print out the exact location
print(r)
# display the world after movement; note that this is the same call as before
# the robot tracks its own movement
display_world(int(world_size), [r.x, r.y])
```
## Landmarks
Next, let's create landmarks, which are measurable features in the map. You can think of landmarks as things like notable buildings, or something smaller such as a tree, rock, or other feature.
The robot class has a function `make_landmarks` which randomly generates locations for the number of specified landmarks. Try changing `num_landmarks` or running this cell multiple times to see where these landmarks appear. We have to pass these locations as a third argument to the `display_world` function, and the list of landmark locations is accessed via `r.landmarks`.
Each landmark is displayed as a purple `x` in the grid world, and we also print out the exact `[x, y]` locations of these landmarks at the end of this cell.
```
# create any number of landmarks
num_landmarks = 3
r.make_landmarks(num_landmarks)
# print out our robot's exact location
print(r)
# display the world including these landmarks
display_world(int(world_size), [r.x, r.y], r.landmarks)
# print the locations of the landmarks
print('Landmark locations [x,y]: ', r.landmarks)
```
## Sense
Once we have some landmarks to sense, we need to be able to tell our robot to *try* to sense how far they are away from it. It will be up to you to code the `sense` function in our robot class.
The `sense` function uses only internal class parameters and returns a list of the measured/sensed x and y distances to the landmarks it senses within the specified `measurement_range`.
### TODO: Implement the `sense` function
Follow the `##TODO's` in the class code above to complete the `sense` function for the robot class. Once you have tested out your code, please **copy your complete `sense` code to the `robot_class.py` file in the home directory**. By placing this complete code in the `robot_class` Python file, we will be able to reference this class in a later notebook.
The measurements have the format, `[i, dx, dy]` where `i` is the landmark index (0, 1, 2, ...) and `dx` and `dy` are the measured distance between the robot's location (x, y) and the landmark's location (x, y). This distance will not be perfect since our sense function has some associated `measurement_noise`.
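As a concrete illustration of this format (with hypothetical positions and, for clarity, the noise term left out):

```
# hypothetical robot and landmark positions; measurement noise omitted for clarity
robot_x, robot_y = 5.0, 5.0
landmarks = [[2.0, 7.0], [9.0, 9.0]]
measurement_range = 5.0

measurements = []
for i, (lx, ly) in enumerate(landmarks):
    dx, dy = lx - robot_x, ly - robot_y
    if abs(dx) <= measurement_range and abs(dy) <= measurement_range:
        measurements.append([i, dx, dy])

print(measurements)  # [[0, -3.0, 2.0], [1, 4.0, 4.0]]
```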
---
In the example in the following cell, we have given our robot a range of `5.0`, so any landmarks within that range of our robot's location should appear in a list of measurements. Not all landmarks are guaranteed to be in our visibility range, so this list will be variable in length.
*Note: the robot's location is often called the **pose** or `[Pxi, Pyi]` and the landmark locations are often written as `[Lxi, Lyi]`. You'll see this notation in the next notebook.*
```
# try to sense any surrounding landmarks
measurements = r.sense()
# this will print out an empty list if `sense` has not been implemented
print(measurements)
```
**Refer back to the grid map above. Do these measurements make sense to you? Are all the landmarks captured in this list (why/why not)?**
---
## Data
#### Putting it all together
To perform SLAM, we'll collect a series of robot sensor measurements and motions, in that order, over a defined period of time. Then we'll use only this data to re-construct the map of the world with the robot and landmark locations. You can think of SLAM as performing what we've done in this notebook, only backwards. Instead of defining a world and robot and creating movement and sensor data, it will be up to you to use movement and sensor measurements to reconstruct the world!
In the next notebook, you'll see this list of movements and measurements (which you'll use to re-construct the world) listed in a structure called `data`. This is an array that holds sensor measurements and movements in a specific order, which will be useful to call upon when you have to extract this data and form constraint matrices and vectors.
`data` is constructed over a series of time steps as follows:
```
data = []
# after a robot first senses, then moves (one time step)
# that data is appended like so:
data.append([measurements, [dx, dy]])
# for our example movement and measurement
print(data)
# in this example, we have only created one time step (0)
time_step = 0
# so you can access robot measurements:
print('Measurements: ', data[time_step][0])
# and its motion for a given time step:
print('Motion: ', data[time_step][1])
```
### Final robot class
Before moving on to the last notebook in this series, please make sure that you have copied your final, completed `sense` function into the `robot_class.py` file in the home directory. We will be using this file in the final implementation of slam!
```
import keras
keras.__version__
```
# Classifying movie reviews: a binary classification example
This notebook contains the code examples from Chapter 3, Section 4 of [케라스 창시자에게 배우는 딥러닝](https://tensorflow.blog/%EC%BC%80%EB%9D%BC%EC%8A%A4-%EB%94%A5%EB%9F%AC%EB%8B%9D/). The book contains much more content and figures; this notebook includes only the explanations related to the source code.
----
Two-class, or binary, classification may be the most widely applied kind of machine-learning problem. In this example, you will learn to classify movie reviews as positive or negative, based on the text of the reviews.
## The IMDB dataset
We'll work with the IMDB dataset: 50,000 highly polarized reviews from the Internet Movie Database. The dataset is split into 25,000 reviews for training and 25,000 for testing, each set consisting of 50% negative and 50% positive reviews.
Why use separate training and test sets? Because you should never train and test a machine-learning model on the same data! A model performing well on its training data does not guarantee that it will perform well on data it has never seen. What matters is the model's performance on new data (you already know the labels of the training data, so you don't need a model to predict those). For instance, a model could end up merely memorizing the mapping between training samples and targets, which would be useless for predicting targets on data it has never seen before. We'll go over this point in more detail in the next chapter.
Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: each review (a sequence of words) has been turned into a sequence of integers, where each integer stands for a unique word in a dictionary.
The following code loads the dataset (about 17 MB of data will be downloaded to your machine the first time you run it):
```
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
```
The argument `num_words=10000` means we keep only the 10,000 most frequently occurring words in the training data; rare words are discarded. This lets us work with vector data of manageable size.
The variables `train_data` and `test_data` are lists of reviews; each review is a list of word indices (an encoded word sequence). `train_labels` and `test_labels` are lists of 0s and 1s, where 0 stands for negative and 1 for positive:
```
train_data[0]
train_labels[0]
```
Because we restricted ourselves to the 10,000 most frequent words, no word index exceeds 10,000:
```
max([max(sequence) for sequence in train_data])
```
For fun, here is how to decode one of these reviews back to English words:
```
# word_index is a dictionary mapping words to integer indices
word_index = imdb.get_word_index()
# reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# decode the review; the indices are offset by 3 because
# 0, 1, and 2 are reserved for 'padding', 'start of sequence', and 'unknown'
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_review
```
## Preparing the data
You cannot feed lists of integers into a neural network. There are two ways to turn the lists into tensors:
* Pad the lists so they all have the same length, turn them into an integer tensor of shape `(samples, sequence_length)`, and then use a layer capable of handling such integer tensors as the first layer of the network (the `Embedding` layer, covered in detail later).
* One-hot encode the lists to turn them into vectors of 0s and 1s. For instance, the sequence `[3, 5]` becomes a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then use a `Dense` layer, capable of handling floating-point vector data, as the first layer of the network.
We'll go with the second approach and, for clarity, vectorize the data by hand:
```
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
    # create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
        results[i, sequence] = 1. # set the specified indices of results[i] to 1
return results
# vectorize the training data
x_train = vectorize_sequences(train_data)
# vectorize the test data
x_test = vectorize_sequences(test_data)
```
Here is what the samples look like now:
```
x_train[0]
```
The labels are easy to vectorize:
```
# vectorize the labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
```
The data is now ready to be fed into a neural network.
## Building the network
The input data is vectors, and the labels are scalars (1s and 0s): this is probably the simplest setup you will ever encounter. A type of network that performs well on such a problem is a plain stack of fully connected layers with `relu` activations, i.e. `Dense(16, activation='relu')`.
The argument passed to the `Dense` layer (16) is the number of hidden units in the layer. A hidden unit is one dimension in the representation space of the layer. In Chapter 2 we implemented a `Dense` layer with a `relu` activation as the following chain of tensor operations:
`output = relu(dot(W, input) + b)`
Having 16 hidden units means the weight matrix `W` has shape `(input_dimension, 16)`: the dot product with `W` projects the input data onto a 16-dimensional representation space (then we add the bias vector `b` and apply the `relu` operation). You can intuitively understand the dimensionality of the representation space as "how much freedom the network has when learning internal representations." More hidden units (a higher-dimensional representation space) let the network learn more complex representations, but they make the network more computationally expensive and can lead to learning unwanted patterns (patterns that improve performance on the training data but not on the test data).
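The tensor operation above can be sketched with plain NumPy (the shapes here are illustrative, and the dot product is written input-first, as Keras applies it per sample):

```
import numpy as np

rng = np.random.default_rng(0)
input_dimension = 10000
x = rng.random(input_dimension)                        # one input sample
W = rng.standard_normal((input_dimension, 16)) * 0.01  # weight matrix
b = np.zeros(16)                                       # bias vector

# relu(dot(x, W) + b): project onto a 16-dimensional representation space
output = np.maximum(np.dot(x, W) + b, 0.0)
print(output.shape)  # (16,)
```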
There are two key architectural decisions to make when stacking `Dense` layers:
* how many layers to use
* how many hidden units to put in each layer
In Chapter 4 you will learn general principles that help with these decisions. For now, trust me and go with the following architecture:
* two intermediate layers with 16 hidden units each
* a third layer that outputs the scalar prediction for the sentiment of the current review
The intermediate layers use `relu` as their activation function, and the final layer uses a sigmoid activation in order to output a probability (a score between 0 and 1; a sample being likely to have the target '1' means the review is likely to be positive). `relu` zeroes out negative values, while the sigmoid squashes arbitrary values into the [0, 1] interval, so the output can be interpreted as a probability.
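Both activations can be sketched in a couple of lines of NumPy:

```
import numpy as np

def relu(x):
    # zeroes out negative values
    return np.maximum(x, 0.0)

def sigmoid(x):
    # squashes arbitrary values into the (0, 1) interval
    return 1.0 / (1.0 + np.exp(-x))

print(relu(np.array([-2.0, 3.0])))  # negatives become 0
print(sigmoid(0.0))                 # 0.5 at the origin
```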
Here is what the network looks like:
![3-layer network](https://s3.amazonaws.com/book.keras.io/img/ch3/3_layer_network.png)
Here is the Keras implementation, similar to the MNIST example you saw earlier:
```
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
```
Finally, we need to choose a loss function and an optimizer. Because this is a binary classification problem and the network outputs a probability (we ended the network with a single-unit layer with a sigmoid activation), the `binary_crossentropy` loss is a good fit. It is not the only viable choice: you could use `mean_squared_error`, for instance. But crossentropy is usually the best choice when dealing with models that output probabilities. Crossentropy is a quantity from the field of information theory that measures the distance between probability distributions; here, between the ground-truth distribution and the predicted distribution.
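For intuition, the binary crossentropy between a true label y and a predicted probability p can be sketched in a few lines (an illustrative re-implementation, not the Keras one):

```
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # mean of -[y*log(p) + (1 - y)*log(1 - p)]
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(np.mean(-(y_true * np.log(y_pred)
                           + (1.0 - y_true) * np.log(1.0 - y_pred))))

# a confident correct prediction gives a small loss,
# a confident wrong prediction gives a large one
print(binary_crossentropy(np.array([1.0]), np.array([0.99])))
print(binary_crossentropy(np.array([1.0]), np.array([0.01])))
```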
Here is the step where we configure the model with the `rmsprop` optimizer and the `binary_crossentropy` loss function. We will also monitor accuracy during training.
```
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
```
Passing the optimizer, loss function, and metrics as strings is possible because `rmsprop`, `binary_crossentropy`, and `accuracy` are packaged as part of Keras. Sometimes you may want to configure the parameters of the optimizer, or pass a custom loss function or metric function. The former can be done by instantiating an optimizer class directly and passing the object as the `optimizer` argument:
```
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
```
The latter can be done by passing function objects as the `loss` and/or `metrics` arguments:
```
from keras import losses
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
```
## Validating the approach
In order to monitor during training the accuracy of the model on data it has never seen, we create a validation set by setting apart 10,000 samples from the original training data:
```
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
```
We now train the model for 20 epochs (20 iterations over all samples in the `x_train` and `y_train` tensors) in mini-batches of 512 samples. At the same time, we monitor loss and accuracy on the 10,000 samples we set apart. This is done by passing the validation data via the `validation_data` argument:
```
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
```
Even on CPU, this takes less than two seconds per epoch; the whole training run finishes in a little over 20 seconds. At the end of every epoch there is a slight pause as the model computes loss and accuracy on the 10,000 validation samples.
The `model.fit()` method returns a `History` object. This object has a `history` attribute, a dictionary containing data about everything that happened during training. Let's take a look:
```
history_dict = history.history
history_dict.keys()
```
The dictionary contains four entries: one per metric being monitored during training and validation. Let's use Matplotlib to plot the training and validation loss and accuracy:
```
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# 'bo' means blue dots
plt.plot(epochs, loss, 'bo', label='Training loss')
# 'b' means a solid blue line
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear the figure
acc = history_dict['acc']
val_acc = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
The dotted lines show training loss and accuracy, and the solid lines show validation loss and accuracy. Note that your own results may vary slightly because of the random initialization of the network.
As you can see, the training loss decreases with every epoch, and the training accuracy increases with every epoch. That is what you would expect when running gradient-descent optimization: the quantity being minimized should decrease with every iteration. But that is not the case for the validation loss and accuracy: they seem to reverse around the fourth epoch. This is an example of the warning given earlier: a model that performs well on the training data does not necessarily perform well on data it has never seen. In precise terms, the model is overfitting: after the second epoch it over-optimizes on the training data, learning representations specific to the training data that do not generalize to data outside the training set.
In this case, to prevent overfitting, you could stop training after the third epoch. More generally, you can use a range of techniques to mitigate overfitting, which we will cover in Chapter 4.
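This manual cutoff can also be automated. Here is a pure-Python sketch of the early-stopping idea (Keras provides an `EarlyStopping` callback for the real thing; the helper name and patience value here are mine):

```
# stop once the validation loss fails to improve for `patience` epochs,
# and report the epoch with the best validation loss seen so far
def early_stop_epoch(val_losses, patience=1):
    best, best_epoch, wait = float('inf'), 0, 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch

# validation losses that turn around after the fourth epoch, as in the plot
print(early_stop_epoch([0.52, 0.41, 0.33, 0.29, 0.31, 0.35]))  # 4
```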
Let's train a new network from scratch for just four epochs and then evaluate it on the test data:
```
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
results
```
Even this fairly naive approach achieves an accuracy of 87%. With state-of-the-art techniques, you should be able to get close to 95%.
## Using a trained network to generate predictions on new data
After training a model, you will want to use it in a practical setting. You can predict the probability of a review being positive with the `predict` method:
```
model.predict(x_test)
```
As you can see, the network is confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4).
## Further experiments
* We used two hidden layers here. Try using one or three hidden layers, and see how doing so affects validation and test accuracy.
* Try using layers with more or fewer hidden units: 32 units, 64 units, and so on.
* Try using the `mse` loss function instead of `binary_crossentropy`.
* Try using the `tanh` activation (an activation that was popular in the early days of neural networks) instead of `relu`.
These experiments should help convince you that the architecture chosen here is quite reasonable, although there is still room for improvement!
## Wrapping up
Here is what you should take away from this example:
* Raw data usually needs quite a bit of preprocessing before it can be fed into a neural network as tensors. Sequences of words can be encoded as binary vectors, but there are other encoding options, too.
* Stacks of `Dense` layers with `relu` activations can solve a wide range of problems (including sentiment classification), and you will use them frequently.
* In a binary classification problem (two output classes), the network should end with a `Dense` layer with one unit and a `sigmoid` activation: the output of the network is a scalar between 0 and 1, encoding a probability.
* With such a scalar sigmoid output on a binary classification problem, the loss function to use is `binary_crossentropy`.
* The `rmsprop` optimizer is generally a good enough choice, whatever the problem. That is one less thing to worry about.
* As they get better on their training data, neural networks eventually start overfitting and end up obtaining increasingly worse results on data they have never seen before. Always monitor performance on data outside of the training set.
# Script to look at SKS results and make maps
```
#import splitwavepy as sw
from obspy import read
from obspy.clients.fdsn import Client
from obspy import UTCDateTime
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:75% !important; }</style>"))
import matplotlib.pyplot as plt
import numpy as np
from obspy.core import UTCDateTime
from itertools import zip_longest  # izip_longest is Python 2 only
import seaborn as sns  # used for the kde plots below
import os
```
### Load in the eig matrices FROM METHODS
### read in and weighted average of directions
### histogram
### implement QC, splitting Intensity, SNR
### Go through Splitting Windows and Classify as SKS or not
Plotting I D E A S
- make Histograms of found directions
- plot mean phi and dt on map (interpolation)
- plot (topographic) map with event and station and ray path
- overview maps of stations and events + histogram of azimuths
- plot waveforms from stations against distance to events
- deviation SKS arrival from tau p on map
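One detail worth noting before averaging: fast directions are axial data (0° and 180° are the same direction), so a plain arithmetic mean is misleading. A weighted circular-mean sketch via angle doubling (the function name is mine):

```
import numpy as np

def weighted_axial_mean(phi_deg, weights):
    """Weighted mean of axial data in [0, 180), via angle doubling."""
    theta = np.deg2rad(2.0 * np.asarray(phi_deg, dtype=float))
    w = np.asarray(weights, dtype=float)
    s = np.sum(w * np.sin(theta))
    c = np.sum(w * np.cos(theta))
    return (np.rad2deg(np.arctan2(s, c)) / 2.0) % 180.0

# 179 deg and 1 deg are nearly parallel; a naive mean would give 90
print(weighted_axial_mean([179.0, 1.0], [1.0, 1.0]))  # close to 0 (mod 180)
```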
```
## Function to read in CHEVROT Files
path='/media/hein/home2/SplitWave_Results/SKS/Chevrot/'
filename='SKS_Splitting_Chevrot.txt'
def read_SKS_files(path,filename):
filename = '{0}/{1}'.format(path,filename) #removed first two header lines
with open(filename) as f:
content = f.readlines()
station = ['']*len(content)
dt = ['']*len(content)
dtlag = ['']*len(content)
fast_dir = ['']*len(content)
dfast_dir = ['']*len(content)
for i in range(1,len(content)-1):
data = zip_longest(*(x.split(' ') for x in content[i].splitlines()), fillvalue='\t')
for row in zip(*data):
new_data = tuple(np.nan if x == '' else x for x in row)
line = new_data
station[i] = line[0][1:-1]
dt[i] = float(line[1][1:-2])
dtlag[i] = float(line[2][1:-2])
fast_dir[i] = float(line[3][1:-2])
dfast_dir[i] = float(line[4][1:-2])
station = np.asarray(station[1:-1])
dt = np.asarray(dt[1:-1])
dtlag = np.asarray(dtlag[1:-1])
fast_dir = np.asarray(fast_dir[1:-1])
## convert from -90-90 to 0-180
fast_dir = (fast_dir+180)%180
dfast_dir = np.asarray(dfast_dir[1:-1])
return station,dt,dtlag,fast_dir,dfast_dir
# Function to read in OTHER SKS Files
def read_SKS_methods(save_loc,method,station):
filename= 'SKS_Splitting_{0}_{1}.txt'.format(station,method)
filepath = '{0}/../SplitWave_Results/SKS/{1}/{2}'.format(save_loc,method,filename)
# filename = '{0}/{1}'.format(path,filename) #removed first two header lines
with open(filepath) as f:
content = f.readlines()
station = ['']*len(content)
st_lat = ['']*len(content)
st_lon = ['']*len(content)
ev_time = ['']*len(content)
ev_depth = ['']*len(content)
ev_mag = ['']*len(content)
ev_lat = ['']*len(content)
ev_lon = ['']*len(content)
fast_dir = ['']*len(content)
dfast_dir = ['']*len(content)
lag = ['']*len(content)
dlag = ['']*len(content)
SNR = ['']*len(content)
for i in range(1,len(content)-1):
data = zip_longest(*(x.split(' ') for x in content[i].splitlines()), fillvalue='\t')
for row in zip(*data):
new_data = tuple(np.nan if x == '' else x for x in row)
line = new_data
station[i] = line[0][1:-1]
st_lat[i] = float(line[1][1:-1])
st_lon[i] = float(line[2][1:-1])
ev_time[i] = UTCDateTime(float(line[3][1:-1]))
ev_depth[i] = float(line[4][1:-1])
ev_mag[i] = float(line[5][1:-1])
ev_lat[i] = float(line[6][1:-1])
ev_lon[i] = float(line[7][1:-1])
fast_dir[i] = float(line[8][1:-1])
dfast_dir[i] = float(line[9][1:-1])
lag[i] = float(line[10][1:-1])
dlag[i] = float(line[11][1:-1])
if line[11][1:-1]=='nan':
SNR[i] = np.nan
else:
SNR[i] = float(line[12][1:-1])
station = np.asarray(station[1:-1])
st_lat = np.asarray(st_lat[1:-1])
st_lon = np.asarray(st_lon[1:-1])
ev_time = np.asarray(ev_time[1:-1])
ev_depth = np.asarray(ev_depth[1:-1])
ev_mag = np.asarray(ev_mag[1:-1])
ev_lat = np.asarray(ev_lat[1:-1])
ev_lon = np.asarray(ev_lon[1:-1])
fast_dir = np.asarray(fast_dir[1:-1])
## convert from -90-90 to 0-180
fast_dir = (fast_dir+180)%180
dfast_dir = np.asarray(dfast_dir[1:-1])
lag = np.asarray(lag[1:-1])
dlag = np.asarray(dlag[1:-1])
SNR = np.asarray(SNR[1:-1])
# ## sort out values greater 2.6s
# thres = 2.5
# ilag = np.where(lag<2.6)
# print(len(lag))
# print(ilag)
# lag = lag[ilag]
# print(len(lag))
# station = station[ilag]
# st_lat = st_lat[ilag]
# st_lon = st_lon[ilag]
# ev_time = ev_time[ilag]
# ev_depth = ev_depth[ilag]
# ev_mag = ev_mag[ilag]
# ev_lat = ev_lat[ilag]
# ev_lon = ev_lon[ilag]
# fast_dir = fast_dir[ilag]
# dfast_dir = dfast_dir[ilag]
# dlag = dlag[ilag]
# SNR = SNR[ilag]
return station,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir,dfast_dir,lag,dlag,SNR
save_loc = '/media/hein/home2/SplitWave_Data'  # defined here; it is used before the later assignment
station='BNALP'
method = ['CrossC','Eig3D','TransM','EigM']
method = method[0]
station_l,st_lat,st_lon,ev_time_1,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_1,dfast_dir_1,lag_1,dlag_1,SNR = read_SKS_methods(save_loc,method,station)
method = ['CrossC','Eig3D','TransM','EigM']
method = method[3]
station_l,st_lat,st_lon,ev_time_2,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_1,dfast_dir_1,lag_1,dlag_1,SNR = read_SKS_methods(save_loc,method,station)
### criteria for same event
#print(ev_time)
from tqdm import tqdm
import matplotlib.mlab as mlab
from scipy.stats import norm
save_loc = '/media/hein/home2/SplitWave_Data'
station_list = os.listdir(save_loc)
save_dir = '/media/hein/home2/SplitWave_Results/Comparison/'
#station = station_list[np.random.randint(0,len(station_list))]
#for station in tqdm(station_list[0:1]):
for station in tqdm(station_list[0:1]):
# print(station)
method = ['CrossC','Eig3D','TransM','EigM']
method = method[0]
station_l,st_lat,st_lon,ev_time_1,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_1,dfast_dir_1,lag_1,dlag_1,SNR = read_SKS_methods(save_loc,method,station)
method = ['CrossC','Eig3D','TransM','EigM']
method = method[1]
station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_2,dfast_dir_2,lag_2,dlag_2,SNR = read_SKS_methods(save_loc,method,station)
method = ['CrossC','Eig3D','TransM','EigM']
method = method[2]
station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_3,dfast_dir_3,lag_3,dlag_3,SNR = read_SKS_methods(save_loc,method,station)
method = ['CrossC','Eig3D','TransM','EigM']
method = method[3]
station_l,st_lat,st_lon,ev_time_4,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_4,dfast_dir_4,lag_4,dlag_4,SNR = read_SKS_methods(save_loc,method,station)
station_Chev,dt,dtlag,fast_dir,dfast_dir = read_SKS_files(path,filename)
plt.figure(figsize=(16,9))
plt.subplot(2,4,1)
n,b,p = plt.hist(fast_dir_1,bins=10,color='#0504aa',alpha=0.7, rwidth=0.85,density=True,weights=SNR,label='CrossC')
plt.vlines(x=fast_dir[np.where(station_Chev==station)][0],ymin=0,ymax=np.max(n),color='#DC143C',alpha=0.5,label='Chevrot: $\phi$='+'{0}'.format(round(fast_dir[np.where(station_Chev==station)][0],2))+'$^\circ \pm$'+str(round(dfast_dir[np.where(station_Chev==station)][0],3)))
plt.vlines(x=fast_dir[np.where(station_Chev==station)][0]+dfast_dir[np.where(station_Chev==station)][0],ymin=0,ymax=np.max(n),color='#DC143C',alpha=0.5,linestyle='dashed')
plt.vlines(x=fast_dir[np.where(station_Chev==station)][0]-dfast_dir[np.where(station_Chev==station)][0],ymin=0,ymax=np.max(n),color='#DC143C',alpha=0.5,linestyle='dashed')
plt.xlabel('$\phi$')
plt.ylabel('Probability')
plt.title('Fast direction')
plt.xlim(0,180)
plt.legend()
plt.subplot(2,4,2)
n,b,p = plt.hist(fast_dir_2,bins=10,color='#A2CD5A',alpha=0.7, rwidth=0.85,density=True,weights=SNR,label='Eig3D')
plt.vlines(x=fast_dir[np.where(station_Chev==station)][0],ymin=0,ymax=np.max(n),color='#DC143C',alpha=0.5,label='Chevrot: $\phi$='+'{0}'.format(round(fast_dir[np.where(station_Chev==station)][0],2))+'$^\circ \pm$'+str(round(dfast_dir[np.where(station_Chev==station)][0],3)))
plt.xlabel('$\phi$')
plt.xlim(0,180)
plt.legend()
plt.subplot(2,4,3)
n,b,p = plt.hist(fast_dir_3,bins=10,color='#D2691E',alpha=0.7, rwidth=0.85,density=True,weights=SNR,label='TransM')
plt.vlines(x=fast_dir[np.where(station_Chev==station)][0],ymin=0,ymax=np.max(n),color='#DC143C',alpha=0.5,label='Chevrot: $\phi$='+'{0}'.format(round(fast_dir[np.where(station_Chev==station)][0],2))+'$^\circ \pm$'+str(round(dfast_dir[np.where(station_Chev==station)][0],3)))
plt.xlabel('$\phi$')
plt.xlim(0,180)
plt.legend()
plt.subplot(2,4,4)
n,b,p = plt.hist(fast_dir_4,bins=10,color='palevioletred',alpha=0.7, rwidth=0.85,density=True,weights=SNR,label='EigM')
plt.vlines(x=fast_dir[np.where(station_Chev==station)][0],ymin=0,ymax=np.max(n),color='#DC143C',alpha=0.5,label='Chevrot: $\phi$='+'{0}'.format(round(fast_dir[np.where(station_Chev==station)][0],2))+'$^\circ \pm$'+str(round(dfast_dir[np.where(station_Chev==station)][0],3)))
plt.xlabel('$\phi$')
plt.xlim(0,180)
plt.legend()
plt.subplot(2,4,5)
n,b,p = plt.hist(lag_1,bins=10,color='#0504aa',alpha=0.7, rwidth=0.85,density=True,weights=SNR,label='CrossC')
plt.vlines(x=dt[np.where(station_Chev==station)][0],ymin=0,ymax=np.max(n),color='#DC143C',alpha=0.5,label='Chevrot: $\Delta$ t='+'{0} s'.format(round(dt[np.where(station_Chev==station)][0],2))+'$\pm$'+str(round(dtlag[np.where(station_Chev==station)][0],3)))
plt.xlabel('$\Delta$ t')
plt.ylabel('Occurrence')
plt.title('Time lag')
plt.xlim(0,3)
plt.legend()
plt.subplot(2,4,6)
n,b,p = plt.hist(lag_2,bins=10,color='#A2CD5A',alpha=0.7, rwidth=0.85,density=True,weights=SNR,label='Eig3D')
plt.vlines(x=dt[np.where(station_Chev==station)][0],ymin=0,ymax=np.max(n),color='#DC143C',alpha=0.5,label='Chevrot: $\Delta$ t='+'{0} s'.format(round(dt[np.where(station_Chev==station)][0],2))+'$\pm$'+str(round(dtlag[np.where(station_Chev==station)][0],3)))
plt.xlabel('$\Delta$ t')
plt.legend()
plt.subplot(2,4,7)
n,b,p = plt.hist(lag_3,bins=10,color='#D2691E',alpha=0.7, rwidth=0.85,density=True,weights=SNR,label='TransM')
plt.vlines(x=dt[np.where(station_Chev==station)][0],ymin=0,ymax=np.max(n),color='#DC143C',alpha=0.5,label='Chevrot: $\Delta$ t='+'{0} s'.format(round(dt[np.where(station_Chev==station)][0],2))+'$\pm$'+str(round(dtlag[np.where(station_Chev==station)][0],3)))
plt.xlabel('$\Delta$ t')
plt.xlim(0,3)
plt.legend()
plt.subplot(2,4,8)
n,b,p = plt.hist(lag_4,bins=10,color='palevioletred',alpha=0.7, rwidth=0.85,density=True,weights=SNR,label='EigM')
plt.vlines(x=dt[np.where(station_Chev==station)][0],ymin=0,ymax=np.max(n),color='#DC143C',alpha=0.5,label='Chevrot: $\Delta$ t='+'{0} s'.format(round(dt[np.where(station_Chev==station)][0],2))+'$\pm$'+str(round(dtlag[np.where(station_Chev==station)][0],3)))
plt.xlabel('$\Delta$ t')
plt.xlim(0,3)
plt.legend()
plt.suptitle('station: {0}, SNR weighted Histograms of fast direction and time lag for all methods'.format(station))
plt.savefig('{0}/{1}_method_comparison.png'.format(save_dir,station))
plt.close()
## go through measurements and
## check agreement among stations and
## and look for SNR threshold
## --> then accept and plot final
## CLEAN UP A BIT AND ORDER IT
## implement the two other methods
#0504aa
#palevioletred
## values from Barruol et al.
val_B = [43.95,4.11,1.34, 0.13, 15]
station='LLS'
method_list = ['CrossC','Eig3D','TransM','EigM']
method = method_list[0]  # CrossC
station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_1,dfast_dir_1,lag_1,dlag_1,SNR = read_SKS_methods(save_loc,method,station)
method = method_list[2]  # TransM
station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_4,dfast_dir_4,lag_4,dlag_4,SNR = read_SKS_methods(save_loc,method,station)
method = method_list[3]  # EigM (read into the _3 variables so the TransM results above are not overwritten)
station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_3,dfast_dir_3,lag_3,dlag_3,SNR = read_SKS_methods(save_loc,method,station)
idx = np.where(station_Chev==station)
omega,delta,igood_nonnull =get_Q(fast_dir_1,fast_dir_4,lag_1,lag_4)
fig, ax = plt.subplots(figsize=(16, 9))
plt.subplot(131)
bins = 25
#ax = sns.distplot(fast_dir_1,bins=bins, hist=False, kde=True, color='#A2CD5A', vertical=False, norm_hist=True, axlabel='$\phi$', label='EV', ax=None)
p=sns.kdeplot(fast_dir_4, shade=False, color='#D2691E', label='SC')
x,y = p.get_lines()[0].get_data()
#plt.plot(x1[np.argmax(y1)],np.max(y1),marker='x',color='#D2691E',label='$\phi_{max}=$'+'{0}$^\circ$'.format(round(x1[np.argmax(y1)],2)))
p=sns.kdeplot(fast_dir_1, shade=False, color='#A2CD5A', label='RC')
plt.vlines(fast_dir[idx],0,np.max(y)*1.2,color='green',linewidth=2,label='Chevrot')
plt.vlines(fast_dir[idx]+dfast_dir[idx],0,np.max(y)*1.2,color='green',linestyle='dashed',linewidth=2)
plt.vlines(fast_dir[idx]-dfast_dir[idx],0,np.max(y)*1.2,color='green',linestyle='dashed',linewidth=2)
plt.vlines(43.95,0,np.max(y)*1.2,color='yellow',linewidth=2,label='Barruol')
plt.vlines(43.95+4.11,0,np.max(y)*1.2,color='yellow',linewidth=2,linestyle='dashed')
plt.vlines(43.95-4.11,0,np.max(y)*1.2,color='yellow',linewidth=2,linestyle='dashed')
#ax.set_ylim(0,0.02)
plt.xlabel('$\phi$')
plt.title('Fast axis')
plt.xticks(np.arange(0,210,30))
plt.xlim(0,180)
#plt.ylim(0,np.max(y)*1.5)
plt.grid()
plt.legend(loc='upper center', bbox_to_anchor=(0.75, 1.0), shadow=True)
plt.subplot(132)
# ax2 = sns.distplot(lag_4,bins=bins, hist=False, kde=True, color='#D2691E', vertical=False, norm_hist=True, axlabel='dt', label='RC', ax=None)
# ax2 = sns.distplot(lag_4[igood_nonnull],bins=None, hist=True, kde=False, color='#D2691E', vertical=False, norm_hist=True, axlabel='dt', label='RC-good SKS', ax=None)
p = sns.kdeplot(lag_4, shade=False, color='#D2691E', label='SC')
#ax2.plot(x1[np.argmax(y1)],np.max(y1),marker='x',color='#D2691E',label='dt')
x,y = p.get_lines()[0].get_data()
p = sns.kdeplot(lag_1, shade=False, color='#A2CD5A', label='RC')
plt.vlines(dt[idx],0,np.max(y)*1.2,color='green',linewidth=3,label='Chevrot')
plt.vlines(dt[idx]+dtlag[idx],0,np.max(y)*1.2,color='green',linestyle='dashed',linewidth=3)
plt.vlines(dt[idx]-dtlag[idx],0,np.max(y)*1.2,color='green',linestyle='dashed',linewidth=3)
plt.vlines(1.3,0,np.max(y)*1.2,color='yellow',linewidth=3,label='Barruol')
plt.vlines(1.3+0.13,0,np.max(y)*1.2,color='yellow',linewidth=3,linestyle='dashed')
plt.vlines(1.3-0.13,0,np.max(y)*1.2,color='yellow',linewidth=3,linestyle='dashed')
plt.grid()
plt.ylim(0,np.max(y)*1.5)
plt.xlim(0,3.5)
plt.xticks(np.arange(0,4,0.5))
plt.title('Time lag')
plt.legend(loc='upper center', bbox_to_anchor=(0.8, 0.9), shadow=True)
plt.subplot(133)  # 'plt.subplot(123)' is invalid (panel index 3 in a 1x2 grid); use a 1x3 slot for the quality histogram
fig.suptitle('station: {0}'.format(station))
#ax3 = plt.subplot(123)
import math
## define quality criteria
def Q(fastev,lagev,fastrc,lagrc):
    """Quality factor following Wuestefeld et al. (2010)."""
    omega = math.fabs((fastev - fastrc + 360)%90) / 45
    delta = lagrc / lagev
    # distances to the ideal null point (delta=0, omega=1) and the ideal good
    # point (delta=1, omega=0); dividing by sqrt(2) normalises Q to [-1, 1]
    dnull = math.sqrt(delta**2 + (omega-1)**2) / math.sqrt(2)
    dgood = math.sqrt((delta-1)**2 + omega**2) / math.sqrt(2)
    if dnull < dgood:
        return -(1 - dnull)
    else:
        return (1 - dgood)
def get_Q(fast_dir_1,fast_dir_4,lag_1,lag_4):
    rho_l = 0.2
    rho_u = 0.1
    delta_y = 8
    omega = []
    delta = []
    for i in range(0,len(fast_dir_4)):
        tmp = math.fabs((fast_dir_4[i]-fast_dir_1[i]+ 360)%90)
        tmp2 = lag_1[i]/lag_4[i] ## SC/RC
        omega.append(tmp)
        delta.append(tmp2)
    omega = np.asarray(omega)
    delta = np.asarray(delta)
    igood_nonnull = np.where((delta<=1+rho_u) & (delta>=1-rho_l) & (omega<=delta_y))
    return omega,delta,igood_nonnull
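The per-event loop in `get_Q` can be collapsed into plain array operations. A minimal vectorised sketch of the same Ω/Δ criteria follows (the name `get_Q_vec` and the keyword defaults are ours; the thresholds mirror the `rho_l`/`rho_u`/`delta_y` values used above):

```python
import numpy as np

def get_Q_vec(fast_dir_1, fast_dir_4, lag_1, lag_4,
              rho_l=0.2, rho_u=0.1, delta_y=8):
    """Vectorised get_Q: fast-axis misfit (deg), delay-time ratio,
    and the indices that pass the 'good non-null' criteria."""
    fast_dir_1 = np.asarray(fast_dir_1, dtype=float)
    fast_dir_4 = np.asarray(fast_dir_4, dtype=float)
    omega = np.abs((fast_dir_4 - fast_dir_1 + 360.0) % 90.0)   # misfit in degrees
    delta = np.asarray(lag_1, dtype=float) / np.asarray(lag_4, dtype=float)
    igood_nonnull = np.where((delta <= 1 + rho_u) &
                             (delta >= 1 - rho_l) &
                             (omega <= delta_y))
    return omega, delta, igood_nonnull
```

This returns the same `(omega, delta, igood_nonnull)` triple as the looped version, so the two are interchangeable in the cells below.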
Quality = []
omega =[]
delta = []
for i in range(0,len(fast_dir_4)):
    tmp = Q(fast_dir_4[i],lag_4[i],fast_dir_1[i],lag_1[i])
    Quality.append(tmp)
Quality = np.asarray(Quality)
iQ = np.where(Quality>0.9)
plt.hist(Quality)
plt.hist(Quality[iQ],alpha=0.8,color='red',zorder=2)
plt.xlabel('Quality')
plt.ylabel('Quantity')
print('### Automatic QUALITY DETERMINATION CRITERIA')
print('Quality: Fast XC',fast_dir_1[iQ])
print('error XC', dfast_dir_1[iQ])
print('Quality: Fast EV',fast_dir_4[iQ])
print('error EV', dfast_dir_4[iQ])
print('TIME LAGS')
print('Quality: dt XC',lag_1[iQ])
print('Quality: dt EV',lag_4[iQ])
print('SNR ',SNR[iQ])
print('AVERAGES')
print('CROSS CORRELATION')
print('mean XC', np.mean((fast_dir_1[iQ]+360)%90))
print('SNR weighted mean XC', np.average((fast_dir_1[iQ]+360)%90,weights=SNR[iQ]))
print('inverse error weighted mean XC', np.average((fast_dir_1[iQ]+360)%90,weights=1/dfast_dir_1[iQ]))
print('EIGENVECTOR METHOD')
print('mean EV', np.mean((fast_dir_4[iQ]+360)%90))
#print('error EV', np.mean((dfast_dir_4[iQ])))
print('SNR weighted mean EV', np.average((fast_dir_4[iQ]+360)%90,weights=SNR[iQ]))
print('inverse Error weighted mean EV', np.average((fast_dir_4[iQ]+360)%90,weights=1/dfast_dir_4[iQ]))
print('###############')
print('mean XC', np.mean(lag_1[iQ]))
print('mean EV', np.mean(lag_4[iQ]))
#print(ev_time[iQ])
print('Found indices ',iQ)
print('### Compare with CHEVROT')
idx = np.where(station_Chev==station)
print('Chevrot fast direction',fast_dir[idx])
print('Chevrot time lag',dt[idx])
rho_l = 0.2
rho_u = 0.1
delta_y = 8
rho_l_2 = 0.3
rho_u_2 = 0.2
delta_y_2 = 15
save_img = '/media/hein/home2/SplitWave_Results/Project_images/'
plt.rcParams.update({'font.size': 22})
#plt.rc('text', usetex=True)
plt.figure(figsize=(16,9))
print(station_list)
n_total =0
n_good_nonnull = 0
n_fair_nonnull = 0
n_good_null = 0
n_fair_null = 0
#station_list = ['LLS']
for station in tqdm(station_list):
    method_list = ['CrossC','Eig3D','TransM','EigM']
    method = method_list[0]  # CrossC
    station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_1,dfast_dir_1,lag_1,dlag_1,SNR = read_SKS_methods(save_loc,method,station)
    method = method_list[2]  # TransM
    station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_4,dfast_dir_4,lag_4,dlag_4,SNR = read_SKS_methods(save_loc,method,station)
    method = method_list[3]  # EigM
    station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_3,dfast_dir_3,lag_3,dlag_3,SNR = read_SKS_methods(save_loc,method,station)
    omega = []
    delta = []
    for i in range(0,len(fast_dir_4)):
        tmp = math.fabs((fast_dir_4[i]-fast_dir_1[i]+ 360)%90)
        tmp2 = lag_1[i]/lag_4[i] ## SC/RC
        omega.append(tmp)
        delta.append(tmp2)
    omega = np.asarray(omega)
    delta = np.asarray(delta)
    igood_nonnull = np.where((delta<=1+rho_u) & (delta>=1-rho_l) & (omega<=delta_y))
    ifair_nonnull = np.where((delta<=1+rho_u_2) & (delta>=1-rho_l_2) & (omega<=delta_y_2))
    igood_null = np.where((delta<=rho_u) & (omega>=37) & (omega<=53))
    ifair_null = np.where((delta<=rho_u) & (omega>=32) & (omega<=58))
    n_total = n_total + len(omega)
    n_good_nonnull = n_good_nonnull + len(igood_nonnull[0])
    n_fair_nonnull = n_fair_nonnull + len(ifair_nonnull[0])
    n_good_null = n_good_null + len(igood_null[0])
    n_fair_null = n_fair_null + len(ifair_null[0])
    plt.plot(delta,omega,'x', color='black')
    plt.plot(delta[igood_nonnull],omega[igood_nonnull],'rx')
    plt.plot(delta[igood_null],omega[igood_null],'bx')
n_total = np.sum(n_total)
n_good_nonnull = np.sum(n_good_nonnull)
n_fair_nonnull = np.sum(n_fair_nonnull)
n_good_null = np.sum(n_good_null)
n_fair_null = np.sum(n_fair_null)
plt.plot([0,2],[22.5,22.5],color='red',linestyle='dashed')
plt.text(1.7, 25, 'Null')
plt.text(1.7, 20, 'non-Null')
plt.plot([0,2],[67.5,67.5],color='red',linestyle='dashed')
plt.text(1.7, 64, 'Null')
plt.text(1.7, 69, 'non-Null')
plt.fill_between([1-rho_l, 1+rho_u], [0, 0],[delta_y,delta_y],color='gray', alpha=0.3,lw=5, edgecolor='black',label='$n_{good-non-Null}$'+'={0}'.format(n_good_nonnull))
plt.fill_between([1-rho_l_2, 1+rho_u_2], [0, 0],[delta_y_2,delta_y_2],color='lightgray', facecolor='black',lw=5, alpha=0.25, label='$n_{fair-non-Null}$'+'={0}'.format(n_fair_nonnull))
plt.fill_between([0, rho_u], [37, 37],[53,53], color='gray', facecolor='black',lw=5, alpha=0.5, label='$n_{good-Null}$'+'={0}'.format(n_good_null))
plt.fill_between([0, rho_u_2], [32, 32], [58,58], color='lightgray', facecolor='black',lw=5, alpha=0.5, label='$n_{fair-Null}$'+'={0}'.format(n_fair_null))
plt.xlim(0,2)
plt.ylim(0,90)
plt.xticks(np.arange(0,2.2,0.2))
plt.yticks(np.arange(0,100,10))
plt.xlabel('Delay time ratios, $dt_{RC}$/$dt_{SC}$')
plt.ylabel('Fast Axis Misfit, $\phi_{SC}-\phi_{RC}$')
plt.grid(linestyle='-')
plt.legend(loc='upper center', bbox_to_anchor=(0.8, 0.7), shadow=True)
plt.savefig('{0}/Quality_criteria.png'.format(save_img),dpi=150)
plt.show()
print('Total SKS ',n_total)
print('N: good SKS ',n_good_nonnull)
print('N: fair SKS ',n_fair_nonnull)
print('N: good Null ',n_good_null)
print('N: fair Null ',n_fair_null)
print('Quality: Fast XC',(fast_dir_1[igood_nonnull]+360)%90)
print('error XC', (dfast_dir_1[igood_nonnull]))
print('Quality: Fast EV',(fast_dir_4[igood_nonnull]+360)%90)
print('error EV', (dfast_dir_4[igood_nonnull]))
print('TIME LAGS')
print('Quality: dt XC',dlag_1[igood_nonnull])
print('Quality: dt EV',dlag_4[igood_nonnull])
print('SNR ',SNR[igood_nonnull])
print('#### AVERAGES')
print('CROSS CORRELATION')
print('mean XC', np.mean((fast_dir_1[igood_nonnull]+360)%90))
print('error XC', np.mean((dfast_dir_1[igood_nonnull])))
print('SNR weighted mean XC', np.average((fast_dir_1[igood_nonnull]+360)%90, weights=SNR[igood_nonnull]))
print('TRANSVERSE MINIMISATION (TransM)')
print('mean TM', np.mean((fast_dir_4[igood_nonnull]+360)%90))
print('error TM', np.mean((dfast_dir_4[igood_nonnull])))
print('SNR weighted mean TM', np.average((fast_dir_4[igood_nonnull]+360)%90, weights=SNR[igood_nonnull]))
print('EIGENVECTOR METHOD')
print('mean EV', np.mean((fast_dir_3[igood_nonnull]+360)%90))
print('error EV', np.mean((dfast_dir_3[igood_nonnull])))
print('SNR weighted mean EV', np.average((fast_dir_3[igood_nonnull]+360)%90, weights=SNR[igood_nonnull]))
print('###############')
print('mean XC', np.mean(dlag_1[igood_nonnull]))
print('mean TM', np.mean(dlag_4[igood_nonnull]))
print('mean EV', np.mean(dlag_3[igood_nonnull]))
print(station)
idx = np.where(station_Chev==station)
print('Wüstefeld quality determination technique')
print('Chevrot fast direction',fast_dir[idx])
print('Chevrot time lag',dt[idx])
from windrose import WindroseAxes
from matplotlib import pyplot as plt
import matplotlib.cm as cm
import numpy as np
val_B = [43.95,4.11,1.34, 0.13, 15]
#plt.hist((fast_dir_1+180)%180)
bins=30
plt.hist(fast_dir_1,color='red',bins=bins,alpha=0.6,density=True,label='CrossC')
n,b,p = plt.hist(fast_dir_4,color='blue',bins=bins,alpha=0.6,density=True,label='EigM')
val_B = [43.95,4.11,1.34, 0.13, 15]
plt.vlines(fast_dir[idx],0,np.max(n),color='green',linewidth=2,label='Chevrot')
plt.vlines(43.95,0,np.max(n),color='yellow',linewidth=2,label='Barruol')
plt.xlabel('$\phi$')
plt.title('station {0}'.format(station))
plt.show()
# quality_dirs_1 = (fast_dir_4[ix]+180)%180
# quality_ts_1 = lag_4[ix]
# for i in range(0,len(quality_dirs)):
# u,v = calc_u_v(quality_ts_1[i],quality_dirs_1[i])
# a = plt.quiver(0, 0, u, v,pivot='mid',color='red', alpha=0.1, headlength=0, headwidth = 1, label='Quality EV',scale=scale)
#fast_dir_1 =fast_dir_1
# ws = lag_1
# wd = fast_dir_1-180
# ax = WindroseAxes.from_ax()
# ax.bar(wd, ws, normed=True, opening=0.8, edgecolor='white')
# ax.set_legend()
## to Do
## Plot upper figure better
## plot histogram and arrows next to it
## Use automatic Criteria Q
## or Data-based!!
## use weighted average ?
## fully automatize and use seaborn distribution
## get Barruol
print(iQ)
#iQ =np.where(Quality>0.9)
#print(fast_dir_1[iQ])
#plt.hist(Quality[iQ],alpha=0.5,color='red')
#plt.hist(fast_dir_1,color='blue')
#plt.hist((fast_dir_1[iQ]+360)%360,alpha=0.5,color='red')
#plt.hist(Quality)
quality_dirs_1 = fast_dir_4[iQ]
quality_ts_1 = lag_4[iQ]
quality_dirs_2 = fast_dir_1[iQ]
quality_ts_2 = lag_1[iQ]
print(len(quality_dirs_1))
scale=5
plt.figure(figsize=(16,9))
for i in range(0,len(quality_dirs_1)):
    u,v = calc_u_v(quality_ts_1[i],quality_dirs_1[i])
    a = plt.quiver(0, 0, u, v,pivot='mid',color='red', alpha=0.1, headlength=0, headwidth = 1, label='Quality EV',scale=scale)
    u,v = calc_u_v(quality_ts_2[i],quality_dirs_2[i])
    b = plt.quiver(0, 0, u, v,pivot='mid',color='blue', alpha=0.1, headlength=0, headwidth = 1, label='Quality XC',scale=scale)
u,v = calc_u_v(np.mean(lag_1[iQ]),np.mean((fast_dir_1[iQ]+360)%90))
c = plt.quiver(0, 0, u, v,pivot='mid',color='red', alpha=1, headlength=0, headwidth = 1, linewidth=1,label='mean XC: $\phi$={0}$^\circ \pm$ {1}$^\circ$'.format(round(np.mean((fast_dir_1[iQ]+360)%90),2),round(np.std((fast_dir_1[iQ]+360)%90),2)),scale=scale)
u,v = calc_u_v(np.mean(lag_4[iQ]),np.mean((fast_dir_4[iQ]+360)%90))
d = plt.quiver(0, 0, u, v,pivot='mid',color='blue', alpha=1, headlength=0, headwidth = 1, linewidth=1,label='mean EV: $\phi$={0}$^\circ \pm$ {1}$^\circ$'.format(round(np.mean((fast_dir_4[iQ]+360)%90),2),round(np.std((fast_dir_4[iQ]+360)%90),2)),scale=scale)
u,v = calc_u_v(dt[idx][0],fast_dir[idx][0])
e = plt.quiver(0, 0, u, v,pivot='mid',color='green', alpha=1, headlength=0, headwidth = 1, linewidth=3,label='Chevrot: $\phi$={0}$^\circ \pm {1}^\circ$'.format(round(fast_dir[idx][0],2),round(dfast_dir[idx][0],2)),scale=scale)
## plot Barruol
# val_B columns: phi (deg), error_phi (deg), dt (s), error_dt (s), number of measurements
val_B = [43.95,4.11,1.34, 0.13, 15]
u,v = calc_u_v(val_B[2],val_B[0])
f = plt.quiver(0, 0, u, v,pivot='mid',color='yellow', alpha=1, headlength=0, headwidth = 1, linewidth=3,label='Barruol: $\phi$={0}$^\circ \pm {1}^\circ$'.format(val_B[0],val_B[1]),scale=scale)
print(fast_dir[idx])
## plot Mean
## plot times together
plt.legend(handles = [a,b,c,d,e,f])
plt.xlabel('Direction E-N')
plt.ylabel('Direction N-S')
plt.title('Quality SKS measurements: station={0}, number of Quality SKS={1}'.format(station,len(quality_dirs_1)))
plt.grid()
plt.show()
Quality = []
for i in range(0,len(fast_dir_4)):
    tmp = Q(fast_dir_4[i],dlag_4[i],fast_dir_1[i],dlag_1[i])
    Quality.append(tmp)
fig, ax = plt.subplots()
ax.plot([0,1],[0,1],'r',linestyle='dashed')
circle1 = plt.Circle((1, 0), 0.15, color='r',linestyle='dashed', fill=False)
circle2 = plt.Circle((1, 0), 0.3, color='r',linestyle='dashed', fill=False)
circle3 = plt.Circle((1, 0), 0.45, color='r',linestyle='dashed', fill=False)
circle4 = plt.Circle((1, 0), 0.6, color='r',linestyle='dashed', fill=False)
ax.plot(dlag_1/dlag_4,Quality,'x')
ax.add_artist(circle1)
ax.add_artist(circle2)
ax.add_artist(circle3)
ax.add_artist(circle4)
#max_diff = np.max([fast_dir_4 - fast_dir_1, fast_dir_1 - fast_dir_4],axis=0)
#plt.plot(dlag_4/dlag_1,max_diff,'x')
ax.set_xlim(0,2)
ax.set_ylim(0,1)
ax.set_xlabel('delay time ratio dt(XC)/dt(EV)')
ax.set_ylabel('Normalised fast axis misfit')
plt.show()
### Current Problem:
import scipy
import pandas as pd
import seaborn as sns
from matplotlib.patches import Ellipse
##########################################################################################
plt.figure(figsize=(16,9))
plt.subplot(2,3,1)
station = station_list[7]
station ='LLS'
print(station)
print(method)
method_list = ['CrossC','Eig3D','TransM','EigM']
method = method_list[0]  # CrossC
station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_1,dfast_dir_1,lag_1,dlag_1,SNR = read_SKS_methods(save_loc,method,station)
method = method_list[1]  # Eig3D
station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_2,dfast_dir_2,lag_2,dlag_2,SNR = read_SKS_methods(save_loc,method,station)
method = method_list[2]  # TransM
station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_3,dfast_dir_3,lag_3,dlag_3,SNR = read_SKS_methods(save_loc,method,station)
method = method_list[3]  # EigM
station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_4,dfast_dir_4,lag_4,dlag_4,SNR = read_SKS_methods(save_loc,method,station)
##########################################################################################
p=sns.kdeplot(fast_dir_1, shade=True)
x,y = p.get_lines()[0].get_data()
# get_data() returns (x, y); cumtrapz with initial=0 yields a CDF the same length as x
cdf = scipy.integrate.cumtrapz(y, x, initial=0)
#nearest_05 = np.abs(cdf-0.5).argmin()
#nearest_025 = np.abs(cdf-0.25).argmin()
#nearest_075 = np.abs(cdf-0.75).argmin()
#x_quarter = x[nearest_075]
#y_quarter = y[nearest_075]
nearest_05 = y.argmax()
nearest_025 = np.abs(cdf-0.25).argmin()
nearest_075 = np.abs(cdf-0.75).argmin()
x_median = x[nearest_05]
y_median = y[nearest_05]
x_quarter2 = x_median-max(nearest_025-nearest_075,nearest_075-nearest_025)/2
y_quarter2 = y_median
x_quarter = x_median+max(nearest_025-nearest_075,nearest_075-nearest_025)/2
y_quarter = y_median
plt.vlines(x_median, 0, y_median)
plt.vlines(x_quarter,0,y_quarter,'g',linewidth=4)
plt.vlines(x_quarter2,0,y_quarter2,'b')
plt.vlines(fast_dir[np.where(station_Chev==station)][0],0,y_median, color='red')
plt.vlines(fast_dir[np.where(station_Chev==station)][0]-dfast_dir[np.where(station_Chev==station)][0],0,y_median, color='red')
print('CrossC ',x_median)
##########################################################################################
plt.subplot(2,3,2)
p=sns.kdeplot(fast_dir_2, shade=True)
x,y = p.get_lines()[0].get_data()
# get_data() returns (x, y); cumtrapz with initial=0 yields a CDF the same length as x
cdf = scipy.integrate.cumtrapz(y, x, initial=0)
#nearest_05 = np.abs(cdf-0.5).argmin()
#nearest_025 = np.abs(cdf-0.25).argmin()
#nearest_075 = np.abs(cdf-0.75).argmin()
#x_quarter = x[nearest_075]
#y_quarter = y[nearest_075]
nearest_05 = y.argmax()
nearest_025 = np.abs(cdf-0.25).argmin()
nearest_075 = np.abs(cdf-0.75).argmin()
x_median = x[nearest_05]
y_median = y[nearest_05]
x_quarter2 = x_median-max(nearest_025-nearest_075,nearest_075-nearest_025)/2
y_quarter2 = y_median
x_quarter = x_median+max(nearest_025-nearest_075,nearest_075-nearest_025)/2
y_quarter = y_median
plt.vlines(x_median, 0, y_median)
plt.vlines(x_quarter,0,y_quarter,'g',linewidth=4)
plt.vlines(x_quarter2,0,y_quarter2,'b')
plt.vlines(fast_dir[np.where(station_Chev==station)][0],0,y_median, color='red')
plt.vlines(fast_dir[np.where(station_Chev==station)][0]-dfast_dir[np.where(station_Chev==station)][0],0,y_median, color='red')
print('Eig3D ',x_median)
##########################################################################################
plt.subplot(2,3,3)
p=sns.kdeplot(fast_dir_3, shade=True)
x,y = p.get_lines()[0].get_data()
# get_data() returns (x, y); cumtrapz with initial=0 yields a CDF the same length as x
cdf = scipy.integrate.cumtrapz(y, x, initial=0)
#nearest_05 = np.abs(cdf-0.5).argmin()
#nearest_025 = np.abs(cdf-0.25).argmin()
#nearest_075 = np.abs(cdf-0.75).argmin()
#x_quarter = x[nearest_075]
#y_quarter = y[nearest_075]
nearest_05 = y.argmax()
nearest_025 = np.abs(cdf-0.25).argmin()
nearest_075 = np.abs(cdf-0.75).argmin()
x_median = x[nearest_05]
y_median = y[nearest_05]
x_quarter2 = x_median-max(nearest_025-nearest_075,nearest_075-nearest_025)/2
y_quarter2 = y_median
x_quarter = x_median+max(nearest_025-nearest_075,nearest_075-nearest_025)/2
y_quarter = y_median
plt.vlines(x_median, 0, y_median)
plt.vlines(x_quarter,0,y_quarter,'g',linewidth=4)
plt.vlines(x_quarter2,0,y_quarter2,'b')
plt.vlines(fast_dir[np.where(station_Chev==station)][0],0,y_median, color='red')
plt.vlines(fast_dir[np.where(station_Chev==station)][0]-dfast_dir[np.where(station_Chev==station)][0],0,y_median, color='red')
print('TransM ',x_median)
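The mode-of-KDE extraction above is repeated verbatim for each method; it could be factored into one helper. A small sketch using `scipy.stats.gaussian_kde` (an assumption: seaborn's internal KDE is swapped for scipy's, so the bandwidth and hence the peak location can differ slightly from the plotted curve):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_peak(samples, grid=None):
    """Return (x_peak, y_peak): location and height of the KDE maximum."""
    samples = np.asarray(samples, dtype=float)
    if grid is None:
        # evaluate on a grid slightly wider than the data range
        pad = 0.1 * (samples.max() - samples.min() + 1e-12)
        grid = np.linspace(samples.min() - pad, samples.max() + pad, 512)
    density = gaussian_kde(samples)(grid)
    imax = density.argmax()
    return grid[imax], density[imax]
```

With this, each of the three panels reduces to `x_median, y_median = kde_peak(fast_dir_n)` followed by the `plt.vlines` calls.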
##########################################################################################
print('width')
print(abs(x_quarter-x_quarter2))
print('Chevrot',fast_dir[np.where(station_Chev==station)][0])
print(-dfast_dir[np.where(station_Chev==station)][0])
## check if within bandwidth for respective method
max_diff = np.max([fast_dir_3 - fast_dir_1, fast_dir_1 - fast_dir_3],axis=0)
plt.figure(figsize=(16,9))
delta_phi = 12
rho_l = 0.2
rho_u = 0.1
plt.plot(dlag_3/dlag_1,max_diff,'x')
#plt.hlines(fast_dir[np.where(station_Chev==station)][0],0,2, color='red',linewidth=0.5)
print(fast_dir[np.where(station_Chev==station)][0])
plt.fill_between([1-rho_l, 1+rho_u], [0, 0],
[delta_phi,delta_phi],
color='gray', alpha=0.25, label='good')
plt.ylim(0,90)
### give out the EQs, which are in the boundaries
idx = np.where((max_diff<delta_phi) & (dlag_3/dlag_1<1+rho_u) & (dlag_3/dlag_1>1-rho_l))
print(ev_time[idx])
print(fast_dir_1[idx])
print(fast_dir_3[idx])
average_fast =np.mean([fast_dir_1[idx],fast_dir_3[idx]])
print(average_fast)
plt.hlines(average_fast,0,2, color='black',linewidth=0.5)
plt.show()
def _psurf(self,ax,**kwargs):
    """
    Plot an error surface.

    **kwargs
    - cmap = 'magma'
    - vals = (M.lam1-M.lam2) / M.lam2
    - ax = None (creates new)
    """
    if 'cmap' not in kwargs:
        kwargs['cmap'] = 'magma'
    if 'vals' not in kwargs:
        kwargs['vals'] = (self.lam1-self.lam2) / self.lam2
    # error surface
    cax = ax.contourf(self.lags,self.degs,kwargs['vals'],26,cmap=kwargs['cmap'])
    cbar = plt.colorbar(cax)
    ax.set_ylabel(r'Fast Direction ($^\circ$)')
    ax.set_xlabel('Delay Time (' + self.units + ')')
    # confidence region
    if 'conf95' in kwargs and kwargs['conf95'] == True:
        ax.contour(self.lags,self.degs,self.errsurf,levels=[self.conf_95()])
    # marker
    if 'marker' in kwargs and kwargs['marker'] == True:
        ax.errorbar(self.lag,self.fast,xerr=self.dlag,yerr=self.dfast)
    ax.set_xlim([self.lags[0,0], self.lags[-1,0]])
    ax.set_ylim([self.degs[0,0], self.degs[0,-1]])
    # optional title
    if 'title' in kwargs:
        ax.set_title(kwargs['title'])
    # add info in text box
    if 'info' in kwargs and kwargs['info'] == True:
        textstr = '$\phi=%.1f\pm%.1f$\n$\delta t=%.2f\pm%.2f$'%\
                  (self.fast,self.dfast,self.lag,self.dlag)
        # place a text box in upper left in axes coords
        props = dict(boxstyle='round', facecolor='white', alpha=0.5)
        ax.text(0.6, 0.95, textstr, transform=ax.transAxes, fontsize=12,
                verticalalignment='top', bbox=props)
    return ax
import splitwavepy as sw
## Read in the Error Surfaces
station_list = os.listdir(save_loc)
method_list = ['CrossC','Eig3D','TransM','EigM']
method = method_list[3]  # EigM
#station = station_list[np.random.randint(0,len(station_list))]
station = 'BNALP'
results_path = '/media/hein/home2/SplitWave_Results/Methods/'
fig, ax = plt.subplots(figsize=(16, 9))
station_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_1,dfast_dir_1,lag_1,dlag_1,SNR = read_SKS_methods(save_loc,method,station)
tmp = 0
for ievent in ev_time:
    event = ievent.strftime("%Y-%m-%d")
    filename = '{0}/{1}/{2}/{2}_{3}.eig'.format(results_path,method,station,event)
    m = sw.load(filename)
    tmp = tmp + m.errsurf
tmp = tmp/len(ev_time)
## calc the average errorsurface
m.errsurf = tmp
ax = _psurf(m,ax,marker=True,conf95=True,title='{0}, method={1}, {2}'.format(station,method,event),info=True)
#plt.contour(np.transpose(m.errsurf),vmin=np.min(m.errsurf),vmax=np.max(m.errsurf),levels=[m.conf_95],cmap='magma_r')
plt.savefig('/media/hein/home2/SplitWave_Results/Error_surfaces/{0}_errsurf_{1}_{2}.png'.format(station,method,event))
#plt.xlim(0,3)
#print(m.errsurf)
## ACB_2015-12-04.eig
## make list of events coordinates
## make list of all station coordinates
#print(idx)
#data = fast_dir_1.copy()
# ax.plot(fast_dir[np.where(station_Chev==station)][0],dt[np.where(station_Chev==station)][0],'rx')
# u=fast_dir[np.where(station_Chev==station)][0] #x-position of the center
# v=dt[np.where(station_Chev==station)][0] #y-position of the center
# a=dfast_dir[np.where(station_Chev==station)][0] #radius on the x-axis
# b=dtlag[np.where(station_Chev==station)][0] #radius on the y-axis
# t = np.linspace(0, 2*np.pi, 100)
# ax.plot( u+a*np.cos(t) , v+b*np.sin(t),color='red' )
# #plot red circle around
# #plt.figure()#
# #ax = plt.gca()
# data = np.vstack((fast_dir_1,lag_1))
# d = {'fast': fast_dir_1, 'lag': lag_1}
# df = pd.DataFrame(d)
# sns.jointplot(x="fast", y="lag", data=df, kind="kde",color='lightsalmon')
# #plt.grid(color='lightgrey',linestyle='--',ax=ax)
# plt.show()
#sb.plt.show()
#plt.plot(fast_dir[np.where(station_Chev==station)][0],dt[np.where(station_Chev==station)][0],'rx')
# f, ax = plt.subplots(figsize=(6, 6))
# cmap = sns.cubehelix_palette(as_cmap=True, dark=0, light=1, reverse=True)
# sns.kdeplot(df.x, df.y, cmap=cmap, n_levels=60, shade=True);
## check agreement and make single station measurement
#plt.plot(b[0:-1],n)
#plt.show()
#print(dlag[np.where(station_Chev==station)][0])
### plot together on map and see if any correlation with coordinates
#### Plot THE HISTOGRAM of CHEVROT
plt.figure(figsize=(8,6))
plt.subplot2grid((1,2), (0,0), colspan=1, rowspan=1)
nbins=10
bins, ang, patches = plt.hist(fast_dir,bins='auto', color='black', density=True)
phi = ang[np.where(bins==bins.max())][0]
mu = np.mean(fast_dir)
sigma = np.std(fast_dir)
plt.title('Fast direction')
print(phi)
print(sigma)
x = ang[0:-1]
y = bins
plt.subplot2grid((1,2), (0,1), colspan=1, rowspan=1)
nbins=5
bins, ang, patches = plt.hist(dt,bins=nbins, color='red', density=True)
phi = ang[np.where(bins==bins.max())][0]
mu = np.mean(dt)
sigma = np.std(dt)
#print(mu)
#print(sigma)
x = ang[0:-1]
y = bins
plt.title('time Lag')
station_l = []
lat_l = []
lon_l = []
fi = np.zeros((len(station_list)))
dt_tmp = np.zeros(len(station_list))
for i in range(0,len(station_list)-1):
    st_l,st_lat,st_lon,ev_time,ev_depth,ev_mag,ev_lat,ev_lon,fast_dir_1,dfast_dir_1,lag_1,dlag_1,SNR = read_SKS_methods(save_loc,method,station_list[i])
    station_l.append(st_l[0])
    lat_l.append(st_lat[0])
    lon_l.append(st_lon[0])
    fi[i] = fast_dir[np.where(station_Chev==station_list[i])]
    dt_tmp[i] = dtlag[np.where(station_Chev==station_list[i])]
station_l = np.asarray(station_l)
lat_l = np.asarray(lat_l)
lon_l = np.asarray(lon_l)
print(fi)
print(station_l)
print('Average time lag',np.mean(dt))
print('Average time lag Error',np.mean(dtlag))
print('Average fast direction',np.mean(fast_dir))
print('Average fast direction Error',np.mean(dfast_dir))
## improve Map
## make Topographic one
##### MOVE ALL THE PLOTTING DOWN HERE
### to make a map of study area
import cartopy
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import shapely.geometry as sgeom
#from scalebar import scale_bar
figpath='/media/hein/home2/SplitWave_Results/Project_images'
plt.rcParams.update({'font.size': 18})
## use topography map?
plt.figure(figsize=(16,9))
proj = ccrs.PlateCarree()
ax = plt.axes(projection=proj)
#ax = plt.axes([0.2,0.2,0.75,0.75])
ax.set_extent([5, 11, 45, 48],proj)
#ax = fig.add_axes([0.2,0.2,0.75,0.75])
#ax.set_extent([0, 45, 0, 90],proj)
places = cfeature.NaturalEarthFeature('cultural','populated_places','10m',facecolor='black')
land = cfeature.NaturalEarthFeature('physical','land','10m',
edgecolor='k',facecolor='lightgoldenrodyellow',)
rivers = cfeature.NaturalEarthFeature(category='physical',name='rivers_lake_centerlines',scale='110m')
graticules = cfeature.NaturalEarthFeature(category='physical',name='graticules_1',scale='110m',facecolor='gray')
bounding_box = cfeature.NaturalEarthFeature(category='physical',name='wgs84_bounding_box',scale='10m',facecolor='none')
physical_building_blocks = cfeature.NaturalEarthFeature(category='physical',name='land_ocean_label_points',scale='10m',facecolor='gray')
geography_regions_points=cfeature.NaturalEarthFeature(
category='physical',
name='geography_regions_elevation_points',
scale='10m',
facecolor='black')
borders = cfeature.NaturalEarthFeature('cultural', 'admin_0_boundary_lines_land','10m',
edgecolor='black',facecolor='none')
lakes = cfeature.NaturalEarthFeature(category='physical',name='lakes_europe',scale='10m')
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='10m',
facecolor='none')
geoprahic_lines = cfeature.NaturalEarthFeature(
category='physical',
name='geographic_lines',
scale='10m',
facecolor='black')
SOURCE = 'Natural Earth'
LICENSE = 'public domain'
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.COASTLINE)
#ax.add_feature(rivers)
ax.add_feature(lakes)
ax.add_feature(states_provinces, edgecolor='none')
ax.add_feature(borders)
#ax.add_feature(geoprahic_lines)#
#ax.add_feature(graticules)
#ax.add_feature(geography_regions_points)
#ax.background_img()
ax.background_img(name='gray-earth', resolution='low')
ax.plot(lon_l,lat_l,'^r',transform=ccrs.PlateCarree(),markersize=10,zorder=11)
for i in range(0,len(station_list)-1):
    ax.annotate(station_l[i],(lon_l[i],lat_l[i]-0.1),transform=ccrs.PlateCarree(),
                ha='center',va='top',weight='bold')
r = dtlag
phi = fast_dir
for i in range(0,len(phi)):
    u,v = calc_u_v(r[i],phi[i])
    ax.quiver(lon_l[i], lat_l[i], u, v,pivot='mid',color='green',width=0.004, zorder=10,headlength=0, headwidth = 1)
ax.quiver(5+0.2,48-0.1,1,0,width=0.003,color='black',headlength=0, headwidth = 1)
ax.annotate('dt=1 s',(5+0.3,48-0.2),transform=ccrs.PlateCarree(), ha='center',va='top',weight='bold')
#ax.background_img(name='BM', resolution='low')
#ax.background_img()
ext = [5, 11, 45, 48]
############# For Small Plot
sub_ax = plt.axes([0.55,0.12,0.25,0.25], projection=proj)
# Add coastlines and background
sub_ax.coastlines()
sub_ax.background_img()
# Plot box with position of main map
extent_box = sgeom.box(ext[0],ext[2],ext[1],ext[3])
sub_ax.add_geometries([extent_box], proj, color='none',
edgecolor='red', linewidth=3)
sub_ax.background_img()
sub_ax.plot(ev_lon,ev_lat,'*y',transform=proj,markersize=7)
#scale_bar(ax,(0.75,0.05),10)
### plot EQ location and Great circle path
plt.savefig('{0}/Station_overview.png'.format(figpath),dpi=150)
plt.show()
print(len(lon_l))
```
### How to add a background image
## download the raster from Natural Earth Data
## convert it to png
## add it to /home/hein/miniconda3/envs/py27/lib/python2.7/site-packages/cartopy/data/raster/natural_earth
## make an entry in images.json
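A matching `images.json` entry might look like the sketch below. The schema mirrors the entries bundled with cartopy (`__projection__` plus resolution keys mapping to file names); the file names here are placeholders, so copy the structure of an existing entry in that file rather than these exact values.

```json
{
    "gray-earth": {
        "__comment__": "Gray Earth raster from Natural Earth Data, converted to png",
        "__source__": "https://www.naturalearthdata.com",
        "__projection__": "PlateCarree",
        "low": "gray_earth_low.png",
        "high": "gray_earth_high.png"
    }
}
```

`ax.background_img(name='gray-earth', resolution='low')` then resolves to the file registered under the `low` key.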
```
print(0.005*16)
def main():
    ax = plt.axes(projection=ccrs.PlateCarree())
    ax.set_extent([80, 170, -45, 30])
    # Put a background image on for nice sea rendering.
    ax.stock_img()
    # pyguymer.add_map_background(ax, resolution = "large4096px")
    # Create a feature for States/Admin 1 regions at 1:50m from Natural Earth
    states_provinces = cfeature.NaturalEarthFeature(
        category='cultural',
        name='admin_1_states_provinces_lines',
        scale='50m',
        facecolor='none')
    SOURCE = 'Natural Earth'
    LICENSE = 'public domain'
    ax.add_feature(cfeature.LAND)
    ax.add_feature(cfeature.COASTLINE)
    ax.add_feature(states_provinces, edgecolor='gray')
    # Add a text annotation for the license information to the bottom right corner.
    plt.show()

if __name__ == '__main__':
    main()
plt.show()
import matplotlib.pyplot as plt
import numpy as np
import cartopy
import cartopy.crs as ccrs
def calc_u_v(r,phi):
    # Convert length r and azimuth phi (degrees, measured clockwise from
    # north) into east (u) and north (v) vector components.
if (phi>=0 and phi<=90):
phi=90-phi
u = r*np.cos(np.deg2rad(phi))
v = r*np.sin(np.deg2rad(phi))
elif (phi>90 and phi<=180):
phi=180-phi
u = r*np.sin(np.deg2rad(phi))
v = -r*np.cos(np.deg2rad(phi))
elif (phi>180 and phi<=270):
phi=270-phi
u = -r*np.cos(np.deg2rad(phi))
v = -r*np.sin(np.deg2rad(phi))
elif (phi>270 and phi<=360):
phi=360-phi
u = -r*np.sin(np.deg2rad(phi))
v = r*np.cos(np.deg2rad(phi))
elif (phi>=-90 and phi<0):
phi=abs(phi)
u = -r*np.sin(np.deg2rad(phi))
v = r*np.cos(np.deg2rad(phi))
elif (phi>=-180 and phi<-90):
phi=180-abs(phi)
u = -r*np.sin(np.deg2rad(phi))
v = -r*np.cos(np.deg2rad(phi))
return u,v
r = 1
phi= -175
u,v = calc_u_v(r,phi)
print(u,v)
#ax = plt.axes()
plt.quiver(0, 0, u, v,color='black')
#plt.ylim(0,2)
#plt.xlim(0,2)
plt.show()
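# Note: the piecewise branches in calc_u_v above reduce, for an azimuth
# phi given in degrees clockwise from north, to the single identity
#     u = r*sin(phi), v = r*cos(phi)
# A compact equivalent, kept self-contained for checking:
import numpy as np

def calc_u_v_compact(r, phi):
    u = r * np.sin(np.deg2rad(phi))
    v = r * np.cos(np.deg2rad(phi))
    return u, v

print(calc_u_v_compact(1, -175))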
def sample_data(shape=(20, 30)):
"""
Returns ``(x, y, u, v, crs)`` of some vector data
computed mathematically. The returned crs will be a rotated
pole CRS, meaning that the vectors will be unevenly spaced in
regular PlateCarree space.
"""
crs = ccrs.RotatedPole(pole_longitude=177.5, pole_latitude=37.5)
x = np.linspace(311.9, 391.1, shape[1])
y = np.linspace(-23.6, 24.8, shape[0])
x2d, y2d = np.meshgrid(x, y)
u = 10 * (2 * np.cos(2 * np.deg2rad(x2d) + 3 * np.deg2rad(y2d + 30)) ** 2)
v = 20 * np.cos(6 * np.deg2rad(x2d))
return x, y, u, v, crs
def main():
plt.figure(figsize=(20,10))
ax = plt.axes(projection=ccrs.Orthographic(-10, 65))
ax.add_feature(cartopy.feature.OCEAN, zorder=0)
ax.add_feature(cartopy.feature.LAND, zorder=0, edgecolor='black')
ax.set_global()
ax.gridlines()
x, y, u, v, vector_crs = sample_data()
ax.quiver(x, y, u, v, transform=vector_crs)
plt.show()
#if __name__ == '__main__':
# main()
#from netCDF4 import Dataset, MFDataset, num2date
import matplotlib.pylab as plt
%matplotlib inline
import numpy as np
from matplotlib import cm
import cartopy.crs as ccrs
from cmocean import cm as cmo  # needed below: pcolormesh uses cmap=cmo.balance
import cartopy.mpl.geoaxes
import sys
import os
from cartopy.util import add_cyclic_point
#os.environ["CARTOPY_USER_BACKGROUNDS"] = "/home/hein/Documents/cartopy/"
os.environ["CARTOPY_USER_BACKGROUNDS"] ='/home/hein/miniconda3/envs/py27/lib/python2.7/site-packages/cartopy/data/raster/natural_earth/'
import json
# filename='/home/hein/Documents/cartopy/images.json'
# with open(filename, "r") as f:
# data = json.loads(f.read())
# #json.loads()
# print(data)
#read_user_background_images(True)
plt.figure(figsize=(13,6.2))
ax = plt.subplot(111, projection=ccrs.PlateCarree())
ax.background_img(name='ne_shaded', resolution='low')
ax.background_img(name='gray-earth', resolution='low')
mm = ax.pcolormesh(lon_cyc,\
lat,\
temp_cyc,\
vmin=-2,\
vmax=30,\
transform=ccrs.PlateCarree(),\
cmap=cmo.balance )
ax.coastlines(resolution='110m');
# Import modules ...
import cartopy
import os
# Print path ...
print(os.path.join(cartopy.__path__[0], "data", "raster", "natural_earth"))
os.getenv('CARTOPY_USER_BACKGROUNDS')
name='gray-earth'
resolution='low'
bgdir = os.getenv('CARTOPY_USER_BACKGROUNDS')
if bgdir is None:
bgdir = os.path.join(config["repo_data_dir"],
'raster', 'natural_earth')
print(bgdir)
print(name)
# now get the filename we want to use:
print(_USER_BG_IMGS)
try:
fname = _USER_BG_IMGS[name][resolution]
print(fname)
except KeyError:
msg = ('Image "{}" and resolution "{}" are not present in '
'the user background image metadata in directory "{}"')
raise ValueError(msg.format(name, resolution, bgdir))
# Now obtain the image data from file or cache:
fpath = os.path.join(bgdir, fname)
# Signature of GeoAxes.background_img, for reference:
# background_img(self, name='ne_shaded', resolution='low', extent=None,
#                cache=False)
import os
import json
bgdir = os.getenv('CARTOPY_USER_BACKGROUNDS')
if bgdir is None:
bgdir = os.path.join(config["repo_data_dir"],
'raster', 'natural_earth')
json_file = os.path.join(bgdir, 'images.json')
print(json_file)
print(js_obj)
with open(json_file, 'r') as js_obj:
dict_in = json.load(js_obj)
for img_type in dict_in:
_USER_BG_IMGS[img_type] = dict_in[img_type]
print(_USER_BG_IMGS)
verify=True
if verify:
required_info = ['__comment__', '__source__', '__projection__']
for img_type in _USER_BG_IMGS:
if img_type == '__comment__':
# the top level comment doesn't need verifying:
pass
else:
# check that this image type has the required info:
for required in required_info:
if required not in _USER_BG_IMGS[img_type]:
msg = ('User background metadata file "{}", '
'image type "{}", does not specify '
'metadata item "{}"')
raise ValueError(msg.format(json_file, img_type,
required))
for resln in _USER_BG_IMGS[img_type]:
# the required_info items are not resolutions:
if resln not in required_info:
img_it_r = _USER_BG_IMGS[img_type][resln]
test_file = os.path.join(bgdir, img_it_r)
if not os.path.isfile(test_file):
msg = 'File "{}" not found'
raise ValueError(msg.format(test_file))
```
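The verification loop in the last cell mirrors what cartopy does internally when loading user background metadata; the same checks can be wrapped into a small self-contained helper (the function name is ours, not cartopy's):

```python
import os

REQUIRED_INFO = ['__comment__', '__source__', '__projection__']

def verify_bg_metadata(bg_imgs, bgdir):
    """Raise ValueError if any image type lacks the required metadata
    keys, or if a listed resolution file does not exist in bgdir."""
    for img_type, info in bg_imgs.items():
        if img_type == '__comment__':
            continue  # the top-level comment needs no verification
        # check that this image type has the required info
        for required in REQUIRED_INFO:
            if required not in info:
                raise ValueError('image type "{}" does not specify '
                                 'metadata item "{}"'.format(img_type,
                                                             required))
        # the required_info keys are not resolutions; every other key
        # must name a file that actually exists
        for resln, fname in info.items():
            if resln in REQUIRED_INFO:
                continue
            test_file = os.path.join(bgdir, fname)
            if not os.path.isfile(test_file):
                raise ValueError('File "{}" not found'.format(test_file))
```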

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Mathematics/BudgetAndBankingAssignment/budget-and-banking-assignment.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
```
%%html
<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<p> Code is hidden for ease of viewing. Click the Show/Hide button to see. </p>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>
# Modules
import string
import numpy as np
import pandas as pd
import qgrid as q
import matplotlib.pyplot as plt
# Widgets & Display modules, etc..
from ipywidgets import widgets as w
from ipywidgets import Button, Layout
from IPython.display import display, Javascript, Markdown, HTML
# grid features for interactive grids
grid_features = { 'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': True,
'rowHeight': 40,
'enableColumnReorder': True,
'enableTextSelectionOnCells': True,
'editable': True,
'filterable': False,
'sortable': False,
'highlightSelectedRow': True}
def rerun_cell( b ):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,IPython.notebook.get_selected_index()+2)'))
def check_answers(series_input,answer_list):
    # build a comma-separated string of the valid answers
    valid_answers = ",".join(answer_list)
    # compare the student's answers to the answer list
    for entry in series_input:
        if(entry != '' and entry not in answer_list):
            # display(Markdown("Some of your inputs are invalid. Please enter valid inputs from the following: " + valid_answers))
            return 0
    return 1
name_text = w.Textarea( value='', placeholder='STUDENT NAME', description='', disabled=False , layout=Layout(width='30%', height='32.5px') )
date_text = w.Textarea( value='', placeholder='DATE', description='', disabled=False , layout=Layout(width='30%', height='32.5px') )
profile_button = w.Button(button_style='info',description="Save", layout=Layout(width='15%', height='30px'))
display(name_text)
display(date_text)
display(profile_button)
profile_button.on_click( rerun_cell )
name = name_text.value
date = date_text.value
name_saved = False
date_saved = False
if(name != ''):
name_text.close()
display(Markdown("### Student Name: $\hspace{1.5cm}$"+ name ))
name_saved = True
if(date != ''):
date_text.close()
display(Markdown("### $\hspace{2.15cm}$Date: $\hspace{1.5cm}$"+ date ))
date_saved = True
if(name_saved == True and date_saved == True):
profile_button.close()
```
# Budget and Banking
## Assignment Lesson 1
```
answers_recorded = 0
q1_answered = 0
q2_answered = 0
q3a_answered = 0
q3b_answered = 0
q3c_answered = 0
q3d_answered = 0
```
For questions 1 and 2, choose the best answer.
**Question 1.** Which of the following is a good reason for preparing a budget?
```
if(q1_answered == 1):
q1_answered += 1
q1_student_answer = q1_choices.value
correct_answer = 'd.) All the above are good reasons for preparing a budget'
if(q1_student_answer == correct_answer):
display(Markdown("### You answered: "))
display(Markdown(q1_student_answer))
display(Markdown("This is correct!"))
else:
display(Markdown("### You answered: "))
display(Markdown(q1_student_answer))
display(Markdown("This is incorrect."))
display(Markdown("### Correct answer: "))
display(Markdown(correct_answer))
else:
# Question 1 Answer Choices
choice_1 = 'a.) Running out of money each month'
choice_2 = 'b.) To reduce debt'
choice_3 = 'c.) To save for something special'
choice_4 = 'd.) All the above are good reasons for preparing a budget'
answer_choices = [ choice_1,choice_2,choice_3,choice_4 ]
# Question 1 choices widget
q1_choices = w.RadioButtons( options=answer_choices , description="" , disabled=False , layout=Layout(width='100%'))
display(q1_choices)
q1_answered += 1
def record_answer(button_widget):
display(Javascript('IPython.notebook.execute_cell_range( IPython.notebook.get_selected_index()-1 , IPython.notebook.get_selected_index()+1) '))
q1_choices.close()
if(q1_answered >= 2):
q1_button.close()
else:
q1_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
q1_button.on_click( record_answer )
display(q1_button)
```
**Question 2.** Which equation best describes gross and net pay?
```
if(q2_answered == 1):
q2_answered += 1
q2_student_answer = q2_choices.value
correct_answer = 'b.) Net Pay = Gross Pay - Deductions'
if(q2_student_answer == correct_answer):
display(Markdown("### You answered: "))
display(Markdown(q2_student_answer))
display(Markdown("This is correct!"))
else:
display(Markdown("### You answered: "))
display(Markdown(q2_student_answer))
display(Markdown("This is incorrect."))
display(Markdown("### Correct answer: "))
display(Markdown(correct_answer))
else:
# Question 2 Answer Choices
choice_1 = 'a.) Gross Pay = Net Pay - Deductions'
choice_2 = 'b.) Net Pay = Gross Pay - Deductions'
choice_3 = 'c.) Gross Pay = Net Pay ÷ Deductions'
choice_4 = 'd.) Net Pay = Gross Pay ÷ Deductions'
answer_choices = [ choice_1,choice_2,choice_3,choice_4 ]
q2_choices = w.RadioButtons( options=answer_choices , description="" , disabled=False , layout=Layout(width='100%'))
display(q2_choices)
q2_answered += 1
def record_answer(button_widget):
display(Javascript('IPython.notebook.execute_cell_range( IPython.notebook.get_selected_index()-1 , IPython.notebook.get_selected_index()+1) '))
q2_choices.close()
if(q2_answered >= 2):
q2_button.close()
else:
q2_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
q2_button.on_click( record_answer )
display(q2_button)
```
**Question 3.** Label the following incomes as **fixed** or **variable**. Explain reasons for each.
$\hspace{0.35cm}$**a.)** Lloyd earns $\$50.00$ for every set of knives he sells.
```
if(q3a_answered == 1):
q3a_answered += 1
q3a_student_answer = q3a_choices.value
correct_answer = 'Variable'
if(q3a_student_answer == correct_answer):
display(Markdown("### You answered: "))
display(Markdown(q3a_student_answer))
display(Markdown("This is correct!"))
else:
display(Markdown("### You answered: "))
display(Markdown(q3a_student_answer))
display(Markdown("This is incorrect."))
display(Markdown("### Correct answer: "))
display(Markdown(correct_answer))
else:
# Question 3.a Answer Choices
choice_1 = 'Fixed'
choice_2 = 'Variable'
answer_choices = [ choice_1,choice_2 ]
q3a_choices = w.RadioButtons( options=answer_choices , description="" , disabled=False , layout=Layout(width='100%'))
display(q3a_choices)
q3a_answered += 1
def record_answer(button_widget):
display(Javascript('IPython.notebook.execute_cell_range( IPython.notebook.get_selected_index()-1 , IPython.notebook.get_selected_index()+1) '))
q3a_choices.close()
if(q3a_answered >= 2):
q3a_button.close()
else:
q3a_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
q3a_button.on_click( record_answer )
display(q3a_button)
```
$\hspace{0.35cm}$**b.)** Each month, Florence is paid 4.5% commission on her first $\$1,500.00$ in sales. If she makes more than this, Florence is paid 6% in commission.
```
if(q3b_answered == 1):
q3b_answered += 1
q3b_student_answer = q3b_choices.value
correct_answer = 'Variable'
if(q3b_student_answer == correct_answer):
display(Markdown("### You answered: "))
display(Markdown(q3b_student_answer))
display(Markdown("This is correct!"))
else:
display(Markdown("### You answered: "))
display(Markdown(q3b_student_answer))
display(Markdown("This is incorrect."))
display(Markdown("### Correct answer: "))
display(Markdown(correct_answer))
else:
# Question 3.b Answer Choices
choice_1 = 'Fixed'
choice_2 = 'Variable'
answer_choices = [ choice_1,choice_2 ]
q3b_choices = w.RadioButtons( options=answer_choices , description="" , disabled=False , layout=Layout(width='100%'))
display(q3b_choices)
q3b_answered += 1
def record_answer(button_widget):
display(Javascript('IPython.notebook.execute_cell_range( IPython.notebook.get_selected_index()-1 , IPython.notebook.get_selected_index()+1) '))
q3b_choices.close()
if(q3b_answered >= 2):
q3b_button.close()
else:
q3b_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
q3b_button.on_click( record_answer )
display(q3b_button)
```
$\hspace{0.35cm}$**c.)** Corinne works 40 hours a week at $\$17.00$ an hour. She is paid bi-weekly (every two weeks).
```
if(q3c_answered == 1):
q3c_answered += 1
q3c_student_answer = q3c_choices.value
correct_answer = 'Fixed'
if(q3c_student_answer == correct_answer):
display(Markdown("### You answered: "))
display(Markdown(q3c_student_answer))
display(Markdown("This is correct!"))
else:
display(Markdown("### You answered: "))
display(Markdown(q3c_student_answer))
display(Markdown("This is incorrect."))
display(Markdown("### Correct answer: "))
display(Markdown(correct_answer))
else:
# Question 3.c Answer Choices
choice_1 = 'Fixed'
choice_2 = 'Variable'
answer_choices = [ choice_1,choice_2 ]
q3c_choices = w.RadioButtons( options=answer_choices , description="" , disabled=False , layout=Layout(width='100%'))
display(q3c_choices)
q3c_answered += 1
def record_answer(button_widget):
display(Javascript('IPython.notebook.execute_cell_range( IPython.notebook.get_selected_index()-1 , IPython.notebook.get_selected_index()+1) '))
q3c_choices.close()
if(q3c_answered >= 2):
q3c_button.close()
else:
q3c_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
q3c_button.on_click( record_answer )
display(q3c_button)
```
$\hspace{0.35cm}$**d.)** Gord earns a salary of $\$3,000.00$ each month.
```
if(q3d_answered == 1):
q3d_answered += 1
q3d_student_answer = q3d_choices.value
correct_answer = 'Fixed'
if(q3d_student_answer == correct_answer):
display(Markdown("### You answered: "))
display(Markdown(q3d_student_answer))
display(Markdown("This is correct!"))
else:
display(Markdown("### You answered: "))
display(Markdown(q3d_student_answer))
display(Markdown("This is incorrect."))
display(Markdown("### Correct answer: "))
display(Markdown(correct_answer))
else:
# Question 3.d Answer Choices
choice_1 = 'Fixed'
choice_2 = 'Variable'
answer_choices = [ choice_1,choice_2 ]
q3d_choices = w.RadioButtons( options=answer_choices , description="" , disabled=False , layout=Layout(width='100%'))
display(q3d_choices)
q3d_answered += 1
def record_answer(button_widget):
display(Javascript('IPython.notebook.execute_cell_range( IPython.notebook.get_selected_index()-1 , IPython.notebook.get_selected_index()+1) '))
q3d_choices.close()
if(q3d_answered >= 2):
q3d_button.close()
else:
q3d_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
q3d_button.on_click( record_answer )
display(q3d_button)
```
**Question 4.** Lian works in a retail store $35$ hours a week. She is paid $\$12.75$ an hour bi-weekly (every two weeks). Her last paycheque had deductions of $\$71.19$ for Income Tax, $\$35.82$ for CPP, and $\$14.85$ for EI.
$\hspace{1.5cm}$a.) Determine Lian's gross pay for two weeks.
**Write your calculations below.**
* Valid inputs: Numbers and decimals.
* Valid operations: `+,-,*` for addition, subtraction, and multiplication, respectively.
**Example input:** `40 * 10 * 2 + 30`
```
q4a_text = w.Textarea( value='', placeholder='Your calculations for Exercise 4.a.', description='', disabled=False , layout=Layout(width='100%', height='30px') )
q4a_button = w.Button(button_style='info',description="Calculate", layout=Layout(width='15%', height='30px'))
display(q4a_text)
display(q4a_button)
q4a_button.on_click( rerun_cell )
# Obtain user's input
q4a_input = q4a_text.value
# Define the valid character inputs for this exercise
numbers = '0123456789'
operations = '+-*'
others = ' .'
valid_inputs = numbers + operations + others
# Check if every character in user's string input is valid
user_input_valid = all( ch in valid_inputs for ch in q4a_input)
# Check for correctness of user's calculation
if(q4a_input != '' and user_input_valid == True):
user_answer = eval(q4a_input)
display(Markdown("### You answered: "))
display(Markdown("$\$"+str(round(user_answer,2))+"$"))
if(user_answer == 892.50):
display(Markdown("Your calculation is correct!"))
q4a_button.close()
q4a_text.close()
else:
display(Markdown("Your calculation is incorrect. Try again."))
if(q4a_input != '' and user_input_valid == False):
display(Markdown("Your answer contains invalid inputs or operations."))
```
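Several cells in this assignment repeat the same whitelist-then-`eval` pattern for checking student calculations. As a design note, it could be factored into one helper; a minimal sketch (the function name and the None-for-invalid convention are ours):

```python
def safe_arith_eval(expr, allowed_ops="+-*"):
    """Evaluate a student-entered arithmetic expression, but only if
    every character is a digit, decimal point, space, or an operator
    from allowed_ops; return None for empty or invalid input."""
    valid_inputs = "0123456789. " + allowed_ops
    if expr == "" or any(ch not in valid_inputs for ch in expr):
        return None
    return eval(expr)

# 35 hours/week * $12.75/hour * 2 weeks
print(safe_arith_eval("35 * 12.75 * 2"))
```

Rejecting any character outside the whitelist is what makes the `eval` call safe here: expressions containing letters, quotes, or parentheses never reach it.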
$\hspace{1.5cm}$b.) Determine Lian's net pay for two weeks.
```
q4b_text = w.Textarea( value='', placeholder='Your calculations for Exercise 4.b.', description='', disabled=False , layout=Layout(width='100%', height='30px') )
q4b_button = w.Button(button_style='info',description="Calculate", layout=Layout(width='15%', height='30px'))
display(q4b_text)
display(q4b_button)
q4b_button.on_click( rerun_cell )
# Obtain user's input
q4b_input = q4b_text.value
# Define the valid character inputs for this exercise
numbers = '0123456789'
operations = '+-*'
others = ' .'
valid_inputs = numbers + operations + others
# Check if every character in user's string input is valid
user_input_valid = all( ch in valid_inputs for ch in q4b_input)
# Check for correctness of user's calculation
if(q4b_input != '' and user_input_valid == True):
user_answer = round(eval(q4b_input),2)
correct_answer = round(eval("892.5 - 71.19 - 35.82 - 14.85"),2)
display(Markdown("### You answered: "))
display(Markdown("$\$"+str(round(user_answer,2))+"$"))
if(user_answer == correct_answer):
q4b_button.close()
q4b_text.close()
display(Markdown("Your calculation is correct!"))
else:
display(Markdown("Your calculation is incorrect. Try again."))
if(q4b_input != '' and user_input_valid == False):
display(Markdown("Your answer contains invalid inputs or operations."))
```
---
<h1 align='center'>Character Section Lesson</h1>
To conclude the Chapter 1 Assignment, you will be constructing a conservative budget and answering a "what if" question. You will be marked using a rubric that is located on the last page of the booklet.
### Character Background
Emma is a high school student working as a cashier at a local grocery store.
### Personal Background
Emma is 16 years old, in grade 11 and living at home with her parents. She plays soccer and rugby. Emma is the bass player in a band. Currently, she is using an old bass guitar that her dad played in the 1980s, and she would like to buy her own. Her goal is to buy it in five months because the band has a big gig coming up in six months. The new bass costs $828.45 including GST.
### Salary
Emma works part-time at Gobey’s Grocery. She is paid a biweekly amount of $\$321.13$.
Emma babysits occasionally for her neighbours for $\$10.00$ an hour. Several of Emma's babysitting clients pay her by cheque, and Emma puts the money directly into her bank account.
### Expenses
Emma’s parents have encouraged her to save for post-secondary schooling. To achieve this, Emma and her parents set up a direct withdrawal from her bank account of $40.00 a paycheque into an RESP account. Emma pays for her own transit pass, which she uses to travel around the city. Emma is very thrifty and creative, so she chooses to shop for clothing at local second-hand stores. Emma’s position as the bass player in a local band requires her to maintain her instrument and provide her own sound equipment.
Emma downloads the bank record of her spending during two months (February and March).
### Notes
* The credit column contains all income (part-time job and babysitting).
* The debit column contains all expenses that Emma has paid.
* An automatic withdrawal of $\$40.00$ each paycheque goes into an RESP account.
* In February, Emma paid her rugby fees of $\$180.00$ in cash.
* At the end of March, Emma's soccer team went to an overnight tournament, for which she paid $\$160.00$ in cash as her portion of the hotel room.
<h2 align='center'>Character Section Lesson Exercises</h2>
**Question 1.** List **at least two** reasons Emma might want to prepare a budget for herself.
```
emma1_text = w.Textarea( value='', placeholder='Write your answer here for Question 1.', description='', disabled=False , layout=Layout(width='100%', height='75px') )
emma1_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
display(emma1_text)
display(emma1_button)
emma1_button.on_click( rerun_cell )
emma1_input = emma1_text.value
if(emma1_input != ''):
emma1_text.close()
emma1_button.close()
display(Markdown("### Your answer for Question 1:"))
display(Markdown(emma1_input))
```
**Question 2.** Determine Emma’s net income for the month of February and March. Her bank statements are displayed below.
```
# Prepare dataframes for Emma's February & March Transactions
entry_types = {'Debit ($)': str , 'Credit ($)' : str}
february_transactions_df = pd.read_csv('./data/februarytransactions.csv',converters=entry_types)
february_transactions_df.set_index('Date',inplace=True)
february_transactions_df[['Debit ($)','Credit ($)']] = february_transactions_df[['Debit ($)','Credit ($)']].replace(np.nan,"0.00")
march_transactions_df = pd.read_csv('./data/marchtransactions.csv',converters=entry_types)
march_transactions_df.set_index('Date',inplace=True)
march_transactions_df[['Debit ($)','Credit ($)']] = march_transactions_df[['Debit ($)','Credit ($)']].replace(np.nan,"0.00")
# Control grid features
emma_grid_features = { 'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': True,
'rowHeight': 40,
'enableColumnReorder': True,
'enableTextSelectionOnCells': True,
'editable': False,
'filterable': False,
'sortable': False,
'highlightSelectedRow': True}
q_emma_feb = q.show_grid( february_transactions_df , grid_options = emma_grid_features )
q_emma_mar = q.show_grid( march_transactions_df , grid_options = emma_grid_features )
display(Markdown("<h2 align='center'>Emma's February Transactions</h2>"))
display(q_emma_feb)
```
**Write your calculations below.**
* Valid inputs: Numbers and decimals.
* Valid operations: `+` for addition.
**Example input:** `35 + 27.13 + 11.32`
```
feb_text = w.Textarea( value='', placeholder="Enter your calculation here to determine Emma's net income for the month of February. Hint: which column does Emma's income come from?", description='', disabled=False , layout=Layout(width='100%', height='30px') )
feb_button = w.Button(button_style='info',description="Calculate", layout=Layout(width='15%', height='30px'))
display(feb_text)
display(feb_button)
feb_button.on_click( rerun_cell )
# Obtain user's input
feb_input = feb_text.value
# Define the valid character inputs for this exercise
numbers = '0123456789'
operations = '+'
others = ' .'
valid_inputs = numbers + operations + others
# Check if every character in user's string input is valid
user_input_valid = all( ch in valid_inputs for ch in feb_input)
# Check for correctness of user's calculation
if(feb_input != '' and user_input_valid == True):
user_answer = eval(feb_input)
display(Markdown("### You answered: "))
display(Markdown("$\$"+str(user_answer)+"$"))
if(user_answer == 1042.26):
feb_button.close()
feb_text.close()
display(Markdown("Your calculation is correct!"))
else:
display(Markdown("Your calculation is incorrect. Try again."))
if(feb_input != '' and user_input_valid == False):
display(Markdown("Your answer contains invalid inputs or operations."))
display(Markdown("<h2 align='center'>Emma's March Transactions</h2>"))
display(q_emma_mar)
```
**Write your calculations below.**
* Valid inputs: Numbers and decimals.
* Valid operations: `+` for addition.
**Example input:** `35 + 27.13 + 11.32`
```
mar_text = w.Textarea( value='', placeholder="Enter your calculation here to determine Emma's net income for the month of March. Hint: which column does Emma's income come from?", description='', disabled=False , layout=Layout(width='100%', height='30px') )
mar_button = w.Button(button_style='info',description="Calculate", layout=Layout(width='15%', height='30px'))
display(mar_text)
display(mar_button)
mar_button.on_click( rerun_cell )
# Obtain user's input
mar_input = mar_text.value
# Define the valid character inputs for this exercise
numbers = '0123456789'
operations = '+'
others = ' .'
valid_inputs = numbers + operations + others
# Check if every character in user's string input is valid
user_input_valid = all( ch in valid_inputs for ch in mar_input)
# Check for correctness of user's calculation
if(mar_input != '' and user_input_valid == True):
user_answer = eval(mar_input)
display(Markdown("### You answered: "))
display(Markdown("$\$"+str(user_answer)+"$"))
if(user_answer == 922.26):
mar_button.close()
mar_text.close()
display(Markdown("Your calculation is correct!"))
else:
display(Markdown("Your calculation is incorrect. Try again."))
if(mar_input != '' and user_input_valid == False):
display(Markdown("Your answer contains invalid inputs or operations."))
```
**Question 3.** Categorize Emma’s income as fixed, variable or both. Explain.
```
emma3_text = w.Textarea( value='', placeholder='Write your answer here for Question 3.', description='', disabled=False , layout=Layout(width='100%', height='75px') )
emma3_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
display(emma3_text)
display(emma3_button)
emma3_button.on_click( rerun_cell )
emma3_input = emma3_text.value
if(emma3_input != ''):
emma3_text.close()
emma3_button.close()
display(Markdown("### Your answer for Question 3:"))
display(Markdown(emma3_input))
```
<h2 align='center'>Interactive Exercise: Entering & Saving Spreadsheet Data</h2>
**Question 4.** By analyzing the bank statements for the two months, complete the following tables for each month.
$\hspace{0.35cm}$**a.)** For each row in the table, fill the **Expense Category** column with the appropriate expense label. The valid choices are listed below - write the associated upper case letter for each expense. Expand the **Description** column to see the expense entry.
**Categories:**
$$\text{(S) Savings, (U) Utilities, (C) Clothing, (E) Entertainment, (M) Miscellaneous, (P) Personal Care, (T) Transportation, (F) Food}$$
```
# Note to developers:
# 1. data in tables below obtained by converting relevant pages from Math20-3Unit1Key.pdf into .csv format.
# 2. this part of the interactive focuses on categorizing expenses
# Answer key for this lesson
feb_expenses_answer_key = ['S', 'T', 'F', 'U', 'F', 'F', 'F', 'C', 'E', 'S', 'F', 'F', 'E', 'M', 'M', 'F', 'F', 'M', 'M', 'C']
feb_fv_answer_key = ['V','V','V','V','F','F','F']
# Setting up the dataframe
pd.options.display.max_rows = 50
entry_types = {'Debit ($)': str , 'Balance ($)' : str}
february_df = pd.read_csv('./data/februarydebits.csv',converters=entry_types)
february_df.set_index('Transaction #',inplace=True)
february_df['Debit ($)'] = february_df['Debit ($)'].replace(np.nan,"0.00")
february_df['Expense Category'] = february_df['Expense Category'].replace(np.nan,"")
original_feb_df = february_df[['Date','Description','Debit ($)','Balance ($)']]
# Display interactive grid 1: categorizing expenses
q_february_df = q.show_grid( february_df , grid_options = grid_features )
# Recording answers
q4a_button = w.Button(button_style='info',description="Record Answers", layout=Layout(width='15%', height='30px'))
def record_spreadsheet(button_widget):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,IPython.notebook.get_selected_index()+3)'))
display(q_february_df)
display(q4a_button)
q4a_button.on_click( record_spreadsheet )
# Recover entries
q4a_recover_button = w.Button(button_style='info',description="Reset all values", layout=Layout(width='15%', height='30px'))
def recover_entries(button_widget):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()-1,IPython.notebook.get_selected_index()+0)'))
display(q4a_recover_button)
q4a_recover_button.on_click( recover_entries )
# Obtain the changed dataframe
student_feb_df = q_february_df.get_changed_df()
# Check answers
answer_list = ['S','F','C','E','T','U','M','P']  # every category letter offered to the student
answers_valid = check_answers(student_feb_df['Expense Category'],answer_list)
if(answers_valid == 0):
display(Markdown("Some of your inputs are invalid. Please enter inputs from the following list: S,U,C,E,M,P,T,F"))
# Group the dataframe according to student's expense categories once student's answers are correct
if(answers_valid == 1):
# Check if every entry matches the answer key
student_answers = (student_feb_df['Expense Category'].values).tolist()
if( feb_expenses_answer_key == student_answers):
display(Markdown("Your selections are correct!"))
q4a_button.close()
q4a_recover_button.close()
q_february_df.close()
q4a_grid_features = { 'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': True,
'rowHeight': 40,
'enableColumnReorder': True,
'enableTextSelectionOnCells': True,
'editable': False,
'filterable': False,
'sortable': False,
'highlightSelectedRow': True}
student_q4a_df = q.show_grid( student_feb_df , grid_options = q4a_grid_features )
display(student_q4a_df)
else:
display(Markdown("Some of your inputs are incorrect."))
```
$\hspace{0.35cm}$**b.)** Calculate the total of each expense category you used in (a).
$\hspace{3cm}$This is done for you. Proceed to exercise (c).
```
q4b_df = pd.read_csv('./data/expensesum.csv')
q4b_df.set_index("Expense Category",inplace=True)
q_q4b_df = q.show_grid( q4b_df , grid_options = grid_features )
display(q_q4b_df)
```
$\hspace{0.35cm}$**c.)** For each category, determine whether it is **fixed** or **variable**. Enter F for fixed and V for variable.
```
# Setting up the dataframe
fixed_or_var_df = pd.read_csv('./data/fixedorvar.csv')
fixed_or_var_df.set_index('Expense Category',inplace=True)
fixed_or_var_df['Fixed or Variable'] = fixed_or_var_df['Fixed or Variable'].replace(np.nan,"")
q_fixed_or_var_df = q.show_grid( fixed_or_var_df , grid_options = grid_features)
q4c_button = w.Button(button_style='info',description="Record Answers", layout=Layout(width='15%', height='30px'))
def record_spreadsheet(button_widget):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,IPython.notebook.get_selected_index()+2)'))
display(q_fixed_or_var_df)
display(q4c_button)
q4c_button.on_click( record_spreadsheet )
# Obtain the changed dataframe
q4c_df = q_fixed_or_var_df.get_changed_df()
# Check answers
q4c_answer_list = ['V','F','V','F','V','V','V','V']
q4c_student_answer = (q4c_df['Fixed or Variable'].values).tolist()
def check_q4c(student_inputs):
valid_inputs = 'VF'
for entry in student_inputs:
if(entry not in valid_inputs):
display(Markdown("Please enter valid inputs only."))
return False
if(entry == ''):
display(Markdown("Please fill all the entries in the spreadsheet above."))
return False
return True
if(check_q4c(q4c_student_answer) == True):
if(q4c_student_answer == q4c_answer_list):
display(Markdown("Your selections are correct!"))
q4c_grid_features = { 'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': True,
'rowHeight': 40,
'enableColumnReorder': True,
'enableTextSelectionOnCells': True,
'editable': False,
'filterable': False,
'sortable': False,
'highlightSelectedRow': True}
student_q4c_df = q.show_grid( q4c_df , grid_options = q4c_grid_features )
q_fixed_or_var_df.close()
display(student_q4c_df)
else:
display(Markdown("Some inputs are incorrect."))
```
**Question 5.** Construct a **monthly** conservative budget skeleton for Emma based upon her February and March data. To do this, compare the data for February and March and use the _highest_ expense amount for each category.
**Note:** The **February Expense Data** and **March Expense Data** are given below.
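Choosing the higher of the two months for each category can be sketched as follows (the amounts here are illustrative, not Emma's actual data):

```python
# Illustrative per-category totals for two months
feb = {"Food": 180.00, "Transport": 75.00}
mar = {"Food": 165.50, "Transport": 80.00}

# Conservative budget: take the highest amount seen for each category
conservative = {cat: max(feb[cat], mar[cat]) for cat in feb}
print(conservative)  # {'Food': 180.0, 'Transport': 80.0}
```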
```
entry_types = {'February Totals ($)': str , 'March Totals ($)' : str}
comparison_df = pd.read_csv('./data/expensecomparison.csv',converters = entry_types)
comparison_df.set_index('Expense Categories',inplace=True)
comparison_df['Highest Expense Amount ($)'] = comparison_df['Highest Expense Amount ($)'].replace(np.nan,"")
del comparison_df['Transaction #']
q_comparison_df = q.show_grid( comparison_df , grid_options = grid_features )
expense_button = w.Button(button_style='info',description="Record Answers", layout=Layout(width='15%', height='30px'))
def record_spreadsheet(button_widget):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,IPython.notebook.get_selected_index()+3)'))
display(q_comparison_df)
display(expense_button)
expense_button.on_click( record_spreadsheet )
# Recover entries
recover_button = w.Button(button_style='info',description="Reset all values", layout=Layout(width='15%', height='30px'))
def recover_entries(button_widget):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()-1,IPython.notebook.get_selected_index()+0)'))
display(recover_button)
recover_button.on_click( recover_entries )
# Obtain changed dataframe
student_comparison_df = q_comparison_df.get_changed_df()
# Answers
correct_answers = ['80.00', '73.50', '91.95', '30.15', '189.45', '233.64', '206.00', '116.53']
student_answers = (student_comparison_df['Highest Expense Amount ($)'].values).tolist()
# Function to check if every entry is a valid input
def check_floats(input_array):
valid_inputs = '0123456789.'
for answer in input_array:
current_user_input = all( ch in valid_inputs for ch in answer)
if(current_user_input == False):
return False
return True
# Check if answer is correct
if(check_floats(student_answers) == False):
display(Markdown("Please enter decimal numbers only."))
else:
if(student_answers == correct_answers):
display(Markdown("Your choices are correct!"))
recover_button.close()
expense_button.close()
q_comparison_df.close()
comparison_grid_features = { 'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': True,
'rowHeight': 40,
'enableColumnReorder': True,
'enableTextSelectionOnCells': True,
'editable': False,
'filterable': False,
'sortable': False,
'highlightSelectedRow': True}
student_comparison_df = q.show_grid( student_comparison_df , grid_options = comparison_grid_features )
display(student_comparison_df)
else:
display(Markdown("Some of your entries are incorrect. Please fill them with the correct values. Make sure to write numbers in decimal form with two decimal places, e.g. 80.00 instead of 80."))
```
**Question 6.** If there is excess money, where should Emma put it? If there is not enough income for expenses, suggest how Emma can cover her expenses or reduce her expenses.
```
q6_text = w.Textarea( value='', placeholder='Write your answer here for Question 6.', description='', disabled=False , layout=Layout(width='100%', height='75px') )
q6_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
display(q6_text)
display(q6_button)
q6_button.on_click( rerun_cell )
q6_input = q6_text.value
if(q6_input != ''):
q6_text.close()
q6_button.close()
display(Markdown("### Your answer for question 6:"))
display(Markdown(q6_input))
```
**Question 7.**
$\hspace{0.35cm}$**a.)** How much will Emma need to save each month to purchase, in 5 months, a new bass that costs $\$828.45$ (including GST)?
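The required monthly saving is the total cost divided by the number of months; as a quick check:

```python
cost_with_gst = 828.45  # price of the bass, GST included
months = 5
monthly_saving = cost_with_gst / months
print(round(monthly_saving, 2))  # 165.69
```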
```
q7a_text = w.Textarea( value='', placeholder='Write your answer here for Question 7.a.', description='', disabled=False , layout=Layout(width='100%', height='75px') )
q7a_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
display(q7a_text)
display(q7a_button)
q7a_button.on_click( rerun_cell )
q7a_input = q7a_text.value
if(q7a_input != ''):
q7a_text.close()
q7a_button.close()
display(Markdown("### Your answer for question 7.a:"))
display(Markdown(q7a_input))
```
$\hspace{0.35cm}$**b.)** Based upon her budget, will Emma be able to save enough? If not, modify her budget so that she can afford it. A new category has been added to show her savings for the bass.
**Note:** Income and expenses should be balanced at this point.
From the previous exercises, we saw that a conservative income for Emma was $\$922.26$. In this exercise, we want to allocate budget amounts in such a way that the categories sum to exactly $\$922.26$.
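For example, one hypothetical allocation over the nine categories that balances exactly against the conservative income:

```python
# Hypothetical allocation across the nine budget categories (illustrative only)
budget = [100.00, 80.00, 91.95, 230.15, 100.00, 120.00, 100.16, 50.00, 50.00]
print(round(sum(budget), 2))  # 922.26 -- balanced against the conservative income
```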
```
# Prepare dataframes for Emma's February & March Transactions
ex_7b_df = pd.read_csv('./data/exercise7b.csv')
ex_7b_df[['Budgeted Amount ($)']] = ex_7b_df[['Budgeted Amount ($)']].replace(np.nan,"")
ex_7b_df.set_index('Category',inplace=True)
q_ex_7b = q.show_grid( ex_7b_df , grid_options = grid_features )
display(Markdown("<h2 align='center'>Emma's Modified Conservative Budget</h2>"))
display(q_ex_7b)
ex_7b_button = w.Button(button_style='info',description="Record Answers", layout=Layout(width='15%', height='30px'))
def record_spreadsheet(button_widget):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,IPython.notebook.get_selected_index()+3)'))
display(ex_7b_button)
ex_7b_button.on_click( record_spreadsheet )
```
**Question 8.** Determine the percent of Emma’s income allotted to each category.
To calculate the percentage of income for each category, use the formula:
$$\text{Percent of Income} = \frac{\text{Amount Allotted for Expense}}{\text{Budgeted Income}}\times 100\%$$
**Note:** This part of the exercise is done for you and updates based on your inputs in exercise 7.b.; the formula above is for your reference.
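As a quick worked instance of the formula (with illustrative numbers):

```python
budgeted_income = 922.26
amount_allotted = 80.00  # an illustrative allocation for one category
percent_of_income = amount_allotted / budgeted_income * 100
print(round(percent_of_income, 2))  # 8.67
```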
```
# Set up student inputs as a list
ex_7b_student_df = q_ex_7b.get_changed_df()
student_budget_input = ex_7b_student_df['Budgeted Amount ($)']
student_budget_col = (student_budget_input.values).tolist()
# Check if every entry is valid
def check_ex_7b(student_inputs):
valid_inputs = '0123456789. '
for entry in student_inputs:
for ch in entry:
if(ch not in valid_inputs):
display(Markdown("Please enter valid inputs only."))
return False
if(entry == ''):
display(Markdown("Please fill all the entries in the spreadsheet above."))
return False
return True
# Check if student's budget matches the total income
valid_input = check_ex_7b(student_budget_col)
if(valid_input == True):
# Convert every entry to float and get the sum
budget_sum = 0
for entry in student_budget_col:
current_entry = float(entry)  # parse the validated numeric string
budget_sum += current_entry
# If the budget sum matches, then create the percentages chart
percentages_col = []
if(budget_sum != 922.26):
display(Markdown("Your budget proposal does not total $922.26. Please try again."))
else:
for entry in student_budget_col:
current_percent = round( float(entry)/budget_sum , 4)
percentages_col.append( round(current_percent*100,4) )
indices = ['Savings (RESP)','Transportation','Utilities','Food','Clothing','Entertainment','Miscellaneous','Personal Care','Savings (Bass)']
percentage_df = pd.DataFrame({'Percentage (%): ': percentages_col },index=indices)
updated_ex_7b_student_df = pd.concat( [ex_7b_student_df,percentage_df] , axis = 1 )
ex_7b_features = { 'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': True,
'rowHeight': 40,
'enableColumnReorder': True,
'enableTextSelectionOnCells': True,
'editable': False,
'filterable': False,
'sortable': False,
'highlightSelectedRow': True}
q_ex_7b_updated = q.show_grid( updated_ex_7b_student_df , grid_options = ex_7b_features )
display(Markdown("<h2 align='center'>Emma's Modified Budget Percentage Breakdown</h2>"))
display(q_ex_7b_updated)
```
**Question 9.** Are there any categories in which Emma is far above or far below the spending guidelines? How does being a high school student affect Emma’s consideration of the spending guidelines?
<img src="./images/spending_guidelines.jpg" alt="drawing" width="400px"/>
```
q9_text = w.Textarea( value='', placeholder='Write your answer here for Question 9', description='', disabled=False , layout=Layout(width='100%', height='75px') )
q9_button = w.Button(button_style='info',description="Record Answer", layout=Layout(width='15%', height='30px'))
display(q9_text)
display(q9_button)
q9_button.on_click( rerun_cell )
q9_input = q9_text.value
if(q9_input != ''):
q9_text.close()
q9_button.close()
display(Markdown("### Your answer for question 9:"))
display(Markdown(q9_input))
```
---
<h1 align='center'>Student Interactive Section</h1>
In this section, you will enter your own expense categories and see how your expense percentage breakdown compares to the spending guideline above.
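The percentage breakdown behind the pie chart below reduces to dividing each budget amount by the total; a minimal sketch with hypothetical amounts:

```python
# Hypothetical budget amounts for three categories
budgets = [50.0, 30.0, 20.0]
total = sum(budgets)
percentages = [round(b / total * 100, 2) for b in budgets]
print(percentages)  # [50.0, 30.0, 20.0]
```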
```
# Create button and dropdown widget
number_of_cat = 13
dropdown_options = [ str(i+1) for i in range(number_of_cat) ]
dropdown_widget = w.Dropdown( options = dropdown_options , value = '3' , description = 'Categories' , disabled=False )
categories_button = w.Button(button_style='info',description="Save", layout=Layout(width='15%', height='30px'))
# Display widgets
display(dropdown_widget)
display(categories_button)
categories_button.on_click( rerun_cell )
# Create dataframe
df_num_rows = int(dropdown_widget.value)
empty_list = [ '' for i in range(df_num_rows) ]
category_list = [ i+1 for i in range(df_num_rows) ]
# Set up data input for dataframe
df_dict = {'Category #': category_list, 'Budget ($)': empty_list , 'Expense Category': empty_list}
student_df = pd.DataFrame(data = df_dict)
student_df.set_index('Category #',inplace=True)
# Reorder column labels
student_df = student_df[['Expense Category','Budget ($)']]
# Set up & display as Qgrid
q_student_df = q.show_grid( student_df , grid_options = grid_features )
display(q_student_df)
# Create & display save entries widget button
save_student_entries_button = w.Button(button_style='info',description="Plot Pie Chart", layout=Layout(width='15%', height='30px'))
display(save_student_entries_button)
save_student_entries_button.on_click( rerun_cell )
# Convert qgrid to dataframe
student_updated_df = q_student_df.get_changed_df()
student_budget_col = student_updated_df['Budget ($)'].values.tolist()
student_labels_col = student_updated_df['Expense Category'].values.tolist()
# Check if a value is a valid float
def isfloat(value):
try:
float(value)
return True
except ValueError:
return False
# Function: Check for validity of budget column entries
# Input: Student's budget column values
# Output: Boolean, false if one of the entries is invalid
def check_budget_column(input_list):
# Check that every input parses as a float
for entry in input_list:
if( isfloat(entry) == False):
return False
return True
# Function: Calculate the percentage of each expense category
# Input: Student expense category lists and student budget column values
# Output: Percentages column
def get_percentages(input_list):
total = 0
percentage_col = []
# Obtain total
for entry in input_list:
entry = float(entry)
total += entry
# Obtain percentages
for entry in input_list:
entry = float(entry)
current_percentage = entry/total
percentage_col.append( round(current_percentage*100,2) )
return percentage_col
# If student input is valid, create a pie chart plot
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral','darkseagreen','lightcyan','lightpink','coral','tan','slateblue','azure','tomato','lawngreen']
if(check_budget_column(student_budget_col) == True):
student_values = get_percentages(student_budget_col)
labels = student_labels_col
plt.figure(figsize=(20,10))
plt.rcParams['font.size'] = 20
plt.title('Your Expense Category Percentage Breakdown',fontsize=25)
plt.pie(student_values, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=35)
plt.axis('equal')
plt.show()
else:
display(Markdown("Please enter decimal numbers only for the budget column."))
```
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Transfer learning with `braai`
`20200211` notebook status: unfinished
We will fine-tune `braai` with transfer learning using the ZUDS survey as an example.
```
from astropy.io import fits
from astropy.visualization import ZScaleInterval
from bson.json_util import loads, dumps
import gzip
import io
from IPython import display
import json
from matplotlib.colors import LogNorm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pandas.plotting import register_matplotlib_converters
from penquins import Kowalski
import tensorflow as tf
from tensorflow.keras.models import model_from_json, load_model
from tensorflow.keras.utils import normalize as tf_norm
import os
plt.style.use(['dark_background'])
register_matplotlib_converters()
%matplotlib inline
def load_model_helper(path, model_base_name):
"""
Build keras model using json-file with architecture and hdf5-file with weights
"""
with open(os.path.join(path, f'{model_base_name}.architecture.json'), 'r') as json_file:
loaded_model_json = json_file.read()
m = model_from_json(loaded_model_json)
m.load_weights(os.path.join(path, f'{model_base_name}.weights.h5'))
return m
def make_triplet(alert, normalize: bool = False, old_norm: bool = False, to_tpu: bool = False):
"""
Feed in alert packet
"""
cutout_dict = dict()
for cutout in ('science', 'template', 'difference'):
cutout_data = loads(dumps([alert[f'cutout{cutout.capitalize()}']]))[0]
# unzip
with gzip.open(io.BytesIO(cutout_data), 'rb') as f:
with fits.open(io.BytesIO(f.read())) as hdu:
data = hdu[0].data
# replace nans with zeros
cutout_dict[cutout] = np.nan_to_num(data)
# normalize
if normalize:
if not old_norm:
cutout_dict[cutout] /= np.linalg.norm(cutout_dict[cutout])
else:
# model <= d6_m7
cutout_dict[cutout] = tf_norm(cutout_dict[cutout])
# pad to 63x63 if smaller
shape = cutout_dict[cutout].shape
#print(shape)
if shape != (63, 63):
# print(f'Shape of {candid}/{cutout}: {shape}, padding to (63, 63)')
cutout_dict[cutout] = np.pad(cutout_dict[cutout], [(0, 63 - shape[0]), (0, 63 - shape[1])],
mode='constant', constant_values=1e-9)
triplet = np.zeros((63, 63, 3))
triplet[:, :, 0] = cutout_dict['science']
triplet[:, :, 1] = cutout_dict['template']
triplet[:, :, 2] = cutout_dict['difference']
if to_tpu:
# Edge TPUs require additional processing
triplet = np.rint(triplet * 128 + 128).astype(np.uint8).flatten()
return triplet
def plot_triplet(tr):
fig = plt.figure(figsize=(8, 2), dpi=120)
ax1 = fig.add_subplot(131)
ax1.axis('off')
interval = ZScaleInterval()
limits = interval.get_limits(tr[:, :, 0])
# norm=LogNorm()
ax1.imshow(tr[:, :, 0], origin='upper', cmap=plt.cm.bone, vmin=limits[0], vmax=limits[1])
ax1.title.set_text('Science')
ax2 = fig.add_subplot(132)
ax2.axis('off')
limits = interval.get_limits(tr[:, :, 1])
ax2.imshow(tr[:, :, 1], origin='upper', cmap=plt.cm.bone, vmin=limits[0], vmax=limits[1])
ax2.title.set_text('Reference')
ax3 = fig.add_subplot(133)
ax3.axis('off')
limits = interval.get_limits(tr[:, :, 2])
ax3.imshow(tr[:, :, 2], origin='upper', cmap=plt.cm.bone, vmin=limits[0], vmax=limits[1])
ax3.title.set_text('Difference')
plt.show()
```
## Kowalski
Get some example triplets from Kowalski
```
with open('secrets.json', 'r') as f:
secrets = json.load(f)
k = Kowalski(username=secrets['kowalski']['username'],
password=secrets['kowalski']['password'])
connection_ok = k.check_connection()
print(f'Connection OK: {connection_ok}')
```
High `braai` version `d6_m9` scores:
```
q = {"query_type": "aggregate",
"query": {"catalog": "ZUDS_alerts",
"pipeline": [{'$match': {'classifications.braai': {'$gt': 0.9}}},
{'$project': {'candidate': 0, 'coordinates': 0}},
{'$sample': { 'size': 20 }}]
}
}
r = k.query(query=q)
high_braai = r['result_data']['query_result']
for alert in high_braai[-2:]:
triplet = make_triplet(alert, normalize=True, old_norm=False)
plot_triplet(triplet)
```
Low `braai` version `d6_m9` scores:
```
q = {"query_type": "aggregate",
"query": {"catalog": "ZUDS_alerts",
"pipeline": [{'$match': {'classifications.braai': {'$lt': 0.1}}},
{'$project': {'candidate': 0, 'coordinates': 0}},
{'$sample': { 'size': 20 }}]
}
}
r = k.query(query=q)
low_braai = r['result_data']['query_result']
for alert in low_braai[-2:]:
triplet = make_triplet(alert, normalize=True, old_norm=False)
plot_triplet(triplet)
```
## Load the model
```
model = load_model_helper(path='../models', model_base_name='d6_m9')
# remove the output layer, leave the feature extraction part
model_fe = tf.keras.Model(inputs=model.inputs, outputs=model.layers[-2].output)
for alert in low_braai[-2:]:
triplet = make_triplet(alert, normalize=True, old_norm=False)
tr = np.expand_dims(triplet, axis=0)
# get extracted features
features = model_fe.predict(tr)
print(features.shape)
# print(features)
output = tf.keras.layers.Dense(1, activation='sigmoid')(model_fe.output)
model_tl = tf.keras.Model(inputs=model_fe.inputs, outputs=output)
# mark layers as not trainable
for layer in model_tl.layers[:-1]:
layer.trainable = False
model_tl.summary()
```
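Since every layer except the new sigmoid head is frozen, the fine-tuning above amounts to fitting a logistic regression on the extracted features. A numpy sketch of that equivalence, using random stand-in features rather than actual `model_fe.predict` outputs:

```python
import numpy as np

def sigmoid(z):
    # clip to avoid overflow in exp for large |z|
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))       # stand-in for extracted features
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(float)   # synthetic binary labels

w = np.zeros(16)                     # weights of the new sigmoid "head"
for _ in range(500):                 # plain gradient descent on log-loss
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / len(y)

accuracy = ((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean()
```

On this linearly separable toy data the head reaches high training accuracy, mirroring what `model_tl` does on real triplets.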
scikit-learn's and Yellowbrick's precision-recall curves can help us better understand the precision of our models. These curves show the tradeoff between a classifier's precision (the ratio of true positives to the sum of true and false positives) and recall (the ratio of true positives to the sum of true positives and false negatives).
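The two quantities can be checked directly from hypothetical confusion-matrix counts:

```python
# Hypothetical confusion-matrix counts for a binary classifier
tp, fp, fn = 30, 10, 20

precision = tp / (tp + fp)  # true positives over all predicted positives
recall = tp / (tp + fn)     # true positives over all actual positives
print(precision, recall)    # 0.75 0.6
```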
```
#Import necessary packages
import pandas as pd
import boto3
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport
from sklearn.linear_model import LogisticRegression, SGDClassifier, LogisticRegressionCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, BaggingClassifier, ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
pd.set_option('display.max_columns', 200)
%matplotlib inline
#Pull the Machine Learning Table 1 csv from S3 bucket
#TODO: enter the credentials below to run the code
S3_Key_id=''
S3_Secret_key=''
def pull_data(Key_id, Secret_key, file):
BUCKET_NAME = "gtown-wildfire-ds"
OBJECT_KEY = file
client = boto3.client(
's3',
aws_access_key_id= Key_id,
aws_secret_access_key= Secret_key)
obj = client.get_object(Bucket= BUCKET_NAME, Key= OBJECT_KEY)
file_df = pd.read_csv(obj['Body'])
return (file_df)
#Pull in the Machine Learning Table 1 csv
file = 'MLTable1.csv'
df = pull_data(S3_Key_id, S3_Secret_key, file)
df.head()
#Drop the Unnamed: 0 column, as it is unnecessary
df = df.drop(['Unnamed: 0'], axis=1)
df.head()
#Separate the data set into labels and features
X = df.drop('FIRE_DETECTED', axis=1)
y = df['FIRE_DETECTED']
#Train test splitting of data. Here we use a 20% test size.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) #random state is a random seed
#Plot precision-recall for each machine learning model using Sci-Kit Learn's plot_precision_recall_curve.
from sklearn.metrics import plot_precision_recall_curve
# logistic regression
model1 = LogisticRegression()
# k nearest neighbors
model2 = KNeighborsClassifier(n_neighbors=4)
# fit model
model1.fit(X_train, y_train)
model2.fit(X_train, y_train)
# plot precision recall curves for Logistic Regression and K Nearest Neighbors
plot_precision_recall_curve(model1, X_test, y_test, name = 'Logistic Regression')
plot_precision_recall_curve(model2, X_test, y_test, name = 'K Nearest Neighbors')
```
As we can see, we get an AP (average precision) score of 5% for Logistic Regression and 39% for K Nearest Neighbors.
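The AP score summarizes the curve as the sum AP = Σ (R_n − R_(n−1)) · P_n over the operating points; a sketch with hypothetical precision/recall pairs:

```python
# Hypothetical (precision, recall) operating points, recall increasing
points = [(1.0, 0.0), (0.8, 0.5), (0.6, 1.0)]

ap = 0.0
prev_recall = 0.0
for precision, recall in points:
    ap += (recall - prev_recall) * precision  # weight precision by recall gained
    prev_recall = recall
print(ap)  # 0.7
```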
```
# logistic regression CV
model3 = LogisticRegressionCV()
# Random Forest
model4 = RandomForestClassifier()
# fit model
model3.fit(X_train, y_train)
model4.fit(X_train, y_train)
from sklearn.metrics import plot_precision_recall_curve
plot_precision_recall_curve(model3, X_test, y_test, name = 'Logistic Regression CV')
plot_precision_recall_curve(model4, X_test, y_test, name = 'Random Forest Classifier');
```
As we can see, we get an AP (average precision) score of 5% for Logistic Regression CV and 76% for Random Forest Classifier.
```
# decision tree
model5 = DecisionTreeClassifier()
# ada boost
model6 = AdaBoostClassifier()
# fit model
model5.fit(X_train, y_train)
model6.fit(X_train, y_train)
plot_precision_recall_curve(model5, X_test, y_test, name = 'Decision Tree Classifier')
plot_precision_recall_curve(model6, X_test, y_test, name = 'Ada Boost Classifier');
```
As we can see, we get an AP (average precision) score of 31% for Decision Tree Classifier and 10% for Ada Boost Classifier.
```
# bagging classifier
model7 = BaggingClassifier()
# extra trees
model8 = ExtraTreesClassifier()
# fit model
model7.fit(X_train, y_train)
model8.fit(X_train, y_train)
plot_precision_recall_curve(model7, X_test, y_test, name = 'Bagging Classifier')
plot_precision_recall_curve(model8, X_test, y_test, name = 'Extra Trees Classifier');
```
As we can see, we get an AP (average precision) score of 64% for Bagging Classifier and 77% for Extra Trees Classifier.
```
%matplotlib notebook
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
plt.cm.Greens([i/10 for i in range(9)])
data = pd.read_csv('iest_experiments.csv')
data
max_value = data.max().max() + 0.5
min_value = data.min()[1:].min()
f = lambda x: f'{x:.4}'
cell_text = []
cell_colours = []
for row in range(len(data)):
data_i = data.iloc[row]
print(data_i[0])
cell_text_i = [data_i[0]] + [f'{i:.2f}' for i in data_i[1:]]
cell_text.append(cell_text_i)
colors_i = [0]
colors_i += ((data_i[1:].values - min_value) / (max_value - min_value)).tolist()
colors_i = plt.cm.Greens(colors_i)
cell_colours.append(colors_i)
cell_colours
cell_text
fig, ax = plt.subplots(1)
ax.axis('tight')
ax.axis('off')
# Add a table at the bottom of the axes
the_table = plt.table(cellText=cell_text,
colLabels=data.columns,
loc='best',
rowLoc='center',
colLoc='center',
cellLoc='center',
cellColours=cell_colours)
plt.tight_layout()
plt.savefig(format='pdf', bbox_inches='tight', fname='output.pdf',
transparent=True, pad_inches=0)
plt.table?
def heatmap(data, row_labels, col_labels, ax=None,
cbar_kw={}, cbarlabel="", ylabel=None, xlabel=None, **kwargs):
"""
Create a heatmap from a numpy array and two lists of labels.
Arguments:
data : A 2D numpy array of shape (N,M)
row_labels : A list or array of length N with the labels
for the rows
col_labels : A list or array of length M with the labels
for the columns
Optional arguments:
ax : A matplotlib.axes.Axes instance to which the heatmap
is plotted. If not provided, use current axes or
create a new one.
cbar_kw : A dictionary with arguments to
:meth:`matplotlib.Figure.colorbar`.
cbarlabel : The label for the colorbar
All other arguments are directly passed on to the imshow call.
"""
if not ax:
ax = plt.gca()
# Plot the heatmap
im = ax.imshow(data, **kwargs)
# Create colorbar
cbar = ax.figure.colorbar(im, ax=ax, **cbar_kw)
cbar.ax.set_ylabel(cbarlabel, rotation=-90, va="bottom")
# We want to show all ticks...
ax.set_xticks(np.arange(data.shape[1]))
ax.set_yticks(np.arange(data.shape[0]))
# ... and label them with the respective list entries.
ax.set_xticklabels(col_labels)
ax.set_yticklabels(row_labels)
if xlabel:
ax.set_xlabel(xlabel)
ax.xaxis.set_label_position('top')
if ylabel:
ax.set_ylabel(ylabel)
# Let the horizontal axes labeling appear on top.
ax.tick_params(top=True, bottom=False,
labeltop=True, labelbottom=False)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=-30, ha="right",
rotation_mode="anchor")
# Turn spines off and create white grid.
for edge, spine in ax.spines.items():
spine.set_visible(False)
ax.set_xticks(np.arange(data.shape[1]+1)-.5, minor=True)
ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True)
ax.grid(which="minor", color="w", linestyle='-', linewidth=3)
ax.tick_params(which="minor", bottom=False, left=False)
return im, cbar
def annotate_heatmap(im, data=None, valfmt="{x:.2f}",
textcolors=["black", "white"],
threshold=None, **textkw):
"""
A function to annotate a heatmap.
Arguments:
im : The AxesImage to be labeled.
Optional arguments:
data : Data used to annotate. If None, the image's data is used.
valfmt : The format of the annotations inside the heatmap.
This should either use the string format method, e.g.
"$ {x:.2f}", or be a :class:`matplotlib.ticker.Formatter`.
textcolors : A list or array of two color specifications. The first is
used for values below a threshold, the second for those
above.
threshold : Value in data units according to which the colors from
textcolors are applied. If None (the default) uses the
middle of the colormap as separation.
Further arguments are passed on to the created text labels.
"""
if not isinstance(data, (list, np.ndarray)):
data = im.get_array()
# Normalize the threshold to the images color range.
if threshold is not None:
threshold = im.norm(threshold)
else:
threshold = im.norm(data.max())/2.
# Set default alignment to center, but allow it to be
# overwritten by textkw.
kw = dict(horizontalalignment="center",
verticalalignment="center")
kw.update(textkw)
# Get the formatter in case a string is supplied
if isinstance(valfmt, str):
valfmt = matplotlib.ticker.StrMethodFormatter(valfmt)
# Loop over the data and create a `Text` for each "pixel".
# Change the text's color depending on the data.
texts = []
for i in range(data.shape[0]):
for j in range(data.shape[1]):
kw.update(color=textcolors[im.norm(data[i, j]) > threshold])
text = im.axes.text(j, i, valfmt(data[i, j], None), **kw)
texts.append(text)
return texts
data.iloc[:,1:].values
fig, ax = plt.subplots()
im, cbar = heatmap(data.iloc[:, 1:].values, data.iloc[:, 0].values, data.iloc[:, 0].values, ax=ax,
cmap="YlGn", cbarlabel="Validation Accuracy (%)",
ylabel='Dropout after word-encoding & fully-connected layers',
xlabel='Dropout after sentence-encoding layer')
texts = annotate_heatmap(im, valfmt="{x:.2f}")
fig.tight_layout()
# save before show so the figure is not cleared by the interactive backend
plt.savefig(format='pdf', bbox_inches='tight', fname='output.pdf',
transparent=True, pad_inches=0)
plt.show()
```
```
from __future__ import division,print_function
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../')
from tqdm.notebook import tqdm
import numpy as np
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms, models, datasets
import utils.calculate_log as callog
from my_models import vgg
import pandas as pd
```
# Setting the model
```
torch_model = vgg.Net(models.vgg16_bn(pretrained=False), 8)
ckpt = torch.load("checkpoints/vgg-16_checkpoint.pth")
torch_model.load_state_dict(ckpt['model_state_dict'])
torch_model.eval()
torch_model.cuda()
print("Done!")
```
## Setting the hook register
```
feat_maps = list()
def _hook_fn(self, input, output):
feat_maps.append(output)
def hook_layers(model):
hooked_layers = list()
for layer in model.modules():
if isinstance(layer, nn.BatchNorm2d):
# if isinstance(layer, nn.Conv2d) or isinstance(layer, nn.ReLU):
hooked_layers.append(layer)
return hooked_layers
def register_layers(layers):
regs_layers = list()
for lay in layers:
regs_layers.append(lay.register_forward_hook(_hook_fn))
return regs_layers
def unregister_layers(reg_layers):
for lay in reg_layers:
lay.remove()
def get_feat_maps(model, batch_img):
batch_img = batch_img.cuda()
with torch.no_grad():
preds = model(batch_img)
preds = F.softmax(preds, dim=1)
maps = feat_maps.copy()
feat_maps.clear()
return preds, maps
## Setting the hook
hl = hook_layers(torch_model)
rgl = register_layers(hl)
print("Total number of registered hooked layers:", len(rgl))
```
# Loading the data
## In distributions
```
batch_size = 15
trans = transforms.Compose([
# transforms.Resize((224,224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
sk_train = torch.utils.data.DataLoader(
datasets.ImageFolder("data/skin_cancer/train/",transform=trans),
batch_size=batch_size,
shuffle=False)
sk_test = torch.utils.data.DataLoader(
datasets.ImageFolder("data/skin_cancer/test/",transform=trans),
batch_size=batch_size,
shuffle=False)
```
## Out-of-distributions
```
skin_cli = torch.utils.data.DataLoader(
datasets.ImageFolder("data/skins/clinical/",transform=trans),
batch_size=batch_size,
shuffle=False)
skin_derm = torch.utils.data.DataLoader(
datasets.ImageFolder("data/skins/dermoscopy/",transform=trans),
batch_size=batch_size,
shuffle=False)
imgnet = torch.utils.data.DataLoader(
datasets.ImageFolder("data/imagenet/",transform=trans),
batch_size=batch_size,
shuffle=False)
corrupted = torch.utils.data.DataLoader(
datasets.ImageFolder("data/corrupted/bbox/",transform=trans),
batch_size=batch_size,
shuffle=False)
corrupted_70 = torch.utils.data.DataLoader(
datasets.ImageFolder("data/corrupted/bbox_70/",transform=trans),
batch_size=batch_size,
shuffle=False)
nct = torch.utils.data.DataLoader(
datasets.ImageFolder("data/nct/",transform=trans),
batch_size=batch_size,
shuffle=False)
```
# Gram-Matrix operations
## Gram matrix operator
```
def norm_min_max(x):
ma = torch.max(x,dim=1)[0].unsqueeze(1)
mi = torch.min(x,dim=1)[0].unsqueeze(1)
x = (x-mi)/(ma-mi)
return x
def get_sims_gram_matrix (maps, power):
maps = F.relu(maps)
maps = maps ** power
maps = maps.reshape(maps.shape[0],maps.shape[1],-1)
gram = ((torch.matmul(maps,maps.transpose(dim0=2,dim1=1)))).sum(2)
gram = (gram.sign()*torch.abs(gram)**(1/power)).reshape(gram.shape[0],-1)
gram = norm_min_max(gram)
return gram
```
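At its core, `get_sims_gram_matrix` flattens each channel's feature map and takes row sums of the Gram matrix G = M·Mᵀ. A numpy sketch of that step with hypothetical shapes (omitting the power transform and min-max normalization above):

```python
import numpy as np

rng = np.random.default_rng(1)
maps = rng.random((2, 4, 3, 3))        # (batch, channels, height, width)

flat = maps.reshape(2, 4, -1)          # flatten each channel's feature map
gram = np.matmul(flat, flat.transpose(0, 2, 1))  # (batch, channels, channels)
sims = gram.sum(axis=2)                # row sums: one statistic per channel
print(sims.shape)                      # (2, 4)
```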
## Considering samples per label
```
def _get_sim_per_labels(data_loader, power, use_preds=True):
sims_per_label = None
if not isinstance(power, list) and not isinstance(power, range):
power = [power]
for data in tqdm(data_loader):
img_batch, labels = data
preds, maps_list = get_feat_maps(torch_model, img_batch)
if use_preds:
labels = preds.argmax(dim=1)
if sims_per_label is None:
sims_per_label = [[[] for _ in range(len(maps_list))] for _ in range(preds.shape[1])]
for layer, maps in enumerate(maps_list):
for p in power:
sims = get_sims_gram_matrix (maps, p)
for sim, lab in zip(sims, labels):
sims_per_label[lab.item()][layer].append(sim.cpu())
return sims_per_label
def get_min_max_per_label(data_loader, power):
sims_per_label = _get_sim_per_labels(data_loader, power)
sims_per_label_min = [[[] for _ in range(len(sims_per_label[0]))] for _ in range(len(sims_per_label))]
sims_per_label_max = [[[] for _ in range(len(sims_per_label[0]))] for _ in range(len(sims_per_label))]
print ("-- Computing the values...")
for lab_idx in range(len(sims_per_label)):
for layer_idx in range(len(sims_per_label[lab_idx])):
temp = torch.stack(sims_per_label[lab_idx][layer_idx])
sims_per_label_min[lab_idx][layer_idx] = temp.min(dim=0)[0]
sims_per_label_max[lab_idx][layer_idx] = temp.max(dim=0)[0]
del sims_per_label
return sims_per_label_min, sims_per_label_max
def get_layer_gaps(mins, maxs):
num_lab, num_lay = len(mins), len(mins[0])
gaps = torch.zeros(num_lab, num_lay)
gaps = gaps.cuda()
for lab in range(num_lab):
for layer in range(num_lay):
gaps[lab][layer] = (maxs[lab][layer]-mins[lab][layer]).sum()
return gaps.cpu().numpy()
def get_dev_scores_per_label(data_loader, power, sims_min, sims_max, ep=10e-6):
if not isinstance(power, list) and not isinstance(power, range):
power = [power]
dev_scores = list()
for data in tqdm(data_loader):
img_batch, _ = data
preds_batch, maps_list = get_feat_maps(torch_model, img_batch)
labels = preds_batch.argmax(dim=1)
batch_scores = list()
for layer, maps in enumerate(maps_list):
score_layer = 0
for p in power:
sims = get_sims_gram_matrix (maps, p)
_sim_min = torch.zeros(sims.shape[0], sims.shape[1]).cuda()
_sim_max = torch.zeros(sims.shape[0], sims.shape[1]).cuda()
for k, lab in enumerate(labels):
_sim_min[k] = sims_min[lab.item()][layer]
_sim_max[k] = sims_max[lab.item()][layer]
score_layer += (F.relu(_sim_min-sims)/torch.abs(_sim_min+ep)).sum(dim=1, keepdim=True)
score_layer += (F.relu(sims-_sim_max)/torch.abs(_sim_max+ep)).sum(dim=1, keepdim=True)
batch_scores.append(score_layer)
dev_scores.append(torch.cat(batch_scores, dim=1))
return torch.cat(dev_scores).cpu().numpy()
def detect_mean(all_test_std, all_ood_std, gaps=None):
avg_results = dict()
indices = list(range(len(all_test_std)))
split = int(np.floor(0.1 * len(all_test_std)))
for i in range(1,11):
np.random.seed(i)
np.random.shuffle(indices)
val_std = all_test_std[indices[:split]]
test_std = all_test_std[indices[split:]]
if gaps is not None:
t95 = (val_std.sum(axis=0) + gaps.mean(0))
else:
t95 = val_std.mean(axis=0) + 10**-7
test_std = ((test_std)/t95[np.newaxis,:]).sum(axis=1)
ood_std = ((all_ood_std)/t95[np.newaxis,:]).sum(axis=1)
results = callog.compute_metric(-test_std,-ood_std)
for m in results:
avg_results[m] = avg_results.get(m,0)+results[m]
for m in avg_results:
avg_results[m] /= i
callog.print_results(avg_results)
return avg_results
def detect(all_test_std, all_ood_std):
indices = list(range(len(all_test_std)))
split = int(np.floor(0.1 * len(all_test_std)))
np.random.seed(10)
np.random.shuffle(indices)
val_std = all_test_std[indices[:split]]
test_std = all_test_std[indices[split:]]
t95 = val_std.mean(axis=0) + 10**-7
test_std = ((test_std)/t95[np.newaxis,:]).sum(axis=1)
ood_std = ((all_ood_std)/t95[np.newaxis,:]).sum(axis=1)
results = callog.compute_metric(-test_std,-ood_std)
callog.print_results(results)
return results
```
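To make the deviation score concrete, here is a tiny NumPy sketch (with made-up numbers) of the per-feature min/max envelope idea used in `get_dev_scores_per_label`: similarities inside the training envelope contribute zero, while values outside it contribute in proportion to how far outside they fall:

```python
import numpy as np

eps = 1e-5
train = np.array([[0.2, 0.5], [0.4, 0.9], [0.3, 0.7]])  # toy training similarities
s_min, s_max = train.min(axis=0), train.max(axis=0)      # per-feature envelope

def deviation(sim):
    below = np.maximum(s_min - sim, 0) / np.abs(s_min + eps)  # penalty for falling under the min
    above = np.maximum(sim - s_max, 0) / np.abs(s_max + eps)  # penalty for exceeding the max
    return (below + above).sum()

print(deviation(np.array([0.3, 0.8])))      # inside the envelope -> 0.0
print(deviation(np.array([0.1, 1.0])) > 0)  # outside the envelope -> True
```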
# OOD detection per label
```
power = 1
print ("- Getting mins/maxs")
mins, maxs = get_min_max_per_label(sk_train, power)
print("- Getting the gaps")
gaps = get_layer_gaps(mins, maxs)
print ("- Getting test stdevs")
sk_test_stdev = get_dev_scores_per_label(sk_test, power, mins, maxs)
# Releasing the GPU memory
torch.cuda.empty_cache()
```
# Testing
```
print("Skins dermoscopy")
skin_derm_stdev = get_dev_scores_per_label(skin_derm, power, mins, maxs)
skin_derm_results = detect_mean(sk_test_stdev, skin_derm_stdev)
print("Skins clinical")
skin_cli_stdev = get_dev_scores_per_label(skin_cli, power, mins, maxs)
skin_cli_results = detect_mean(sk_test_stdev, skin_cli_stdev)
print("ImageNet")
imgnet_stdev = get_dev_scores_per_label(imgnet, power, mins, maxs)
imgent_results = detect_mean(sk_test_stdev, imgnet_stdev)
print("Corrupted images bbox")
corrupted_stdev = get_dev_scores_per_label(corrupted, power, mins, maxs)
corrupted_results = detect_mean(sk_test_stdev, corrupted_stdev)
print("Corrupted images bbox 70")
corrupted_70_stdev = get_dev_scores_per_label(corrupted_70, power, mins, maxs)
corrupted_70_results = detect_mean(sk_test_stdev, corrupted_70_stdev)
print("NCT")
nct_stdev = get_dev_scores_per_label(nct, power, mins, maxs)
nct_results = detect_mean(sk_test_stdev, nct_stdev)
```
## Summary
```
print(round(skin_derm_results['TNR']*100,3))
print(round(skin_cli_results['TNR']*100,3))
print(round(imgent_results['TNR']*100,3))
print(round(corrupted_results['TNR']*100,3))
print(round(corrupted_70_results['TNR']*100,3))
print(round(nct_results['TNR']*100,3))
```
# Database Population & Querying
##### Using Pandas & SQLAlchemy to store and retrieve StatsBomb event data
---
```
import requests
import pandas as pd
import numpy as np
from tqdm import tqdm
from sqlalchemy import create_engine
```
In this example, we use SQLAlchemy's `create_engine` function to create a temporary database in memory.
We can use a similar approach to connect to other persistent local or remote databases. It's very flexible.
---
```
base_url = "https://raw.githubusercontent.com/statsbomb/open-data/master/data/"
comp_url = base_url + "matches/{}/{}.json"
match_url = base_url + "events/{}.json"
def parse_data(competition_id, season_id):
matches = requests.get(url=comp_url.format(competition_id, season_id)).json()
match_ids = [m['match_id'] for m in matches]
events = []
for match_id in tqdm(match_ids):
for e in requests.get(url=match_url.format(match_id)).json():
events.append(e)
return pd.json_normalize(events, sep='_')
```
This is pretty much the same `parse_data` function that we've seen in previous examples, but with a couple of specific twists:
- We are storing entire events, not subsets of them.
- We are using `pd.json_normalize` to convert the hierarchical StatsBomb JSON data structure into something more tabular that can more easily be stored in a relational database.
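To illustrate what `pd.json_normalize` does, here is a minimal nested event (the field names are simplified stand-ins for StatsBomb's schema): nested dictionaries become flat columns joined by `sep`, while plain lists are left intact.

```python
import pandas as pd

event = {"id": 1, "type": {"name": "Pass"}, "pass": {"end_location": [100.0, 40.0]}}
df = pd.json_normalize([event], sep="_")
print(sorted(df.columns))  # ['id', 'pass_end_location', 'type_name']
```

Note that `pass_end_location` is still a list; splitting those lists into scalar columns is exactly the work done a couple of cells below.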
---
```
competition_id = 43
season_id = 3
df = parse_data(competition_id, season_id)
location_columns = [x for x in df.columns.values if 'location' in x]
for col in location_columns:
for i, dimension in enumerate(["x", "y"]):
new_col = col.replace("location", dimension)
df[new_col] = df.apply(lambda x: x[col][i] if type(x[col]) == list else None, axis=1)
```
Because StatsBomb delivers x/y coordinates in an array (e.g. `[60.0, 40.0]`), we need to split them into separate columns so we can easily store the individual coordinates in a SQL database.
Unfortunately, this is a bit tricky, so we use a couple of handy `Python` and `Pandas` tricks to our advantage.
First we determine which columns in the DataFrame are locations (with the list comprehension that generates the `location_columns` list).
Then we iterate through these columns, and each `dimension` (i.e. `x` and `y`), to create two new columns for each old column.
e.g. `pass_end_location` becomes `pass_end_x` and `pass_end_y`
Once we have the new column names, we use `df.apply` and a lambda function to grab the correct coordinate from each row. I recommend reading further on both **`df.apply`** and python **lambda functions** as they're a bit complicated, but fully worth learning about.
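As an aside, the same split can be done without row-wise `df.apply`, which is often slow on large frames. Here is a hedged alternative using a plain list comprehension (toy data with a single location column):

```python
import pandas as pd

df = pd.DataFrame({"pass_end_location": [[100.0, 40.0], None, [60.0, 20.0]]})
for i, dim in enumerate(["x", "y"]):
    # pull element i out of each list; rows without a location become NaN
    df[f"pass_end_{dim}"] = [loc[i] if isinstance(loc, list) else None
                             for loc in df["pass_end_location"]]
print(df["pass_end_x"].tolist())  # [100.0, nan, 60.0]
```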
---
```
df = df[[c for c in df.columns if c not in location_columns]]
```
We use a list comprehension to generate a new subset of columns that we want in the DataFrame, excluding the old location columns.
---
```
columns_to_remove = ['tactics_lineup', 'related_events', 'shot_freeze_frame']
df = df[[c for c in df.columns if c not in columns_to_remove]]
```
In the same fashion, we're going to exclude the `tactics_lineup`, `related_events`, and `shot_freeze_frame` columns because their hierarchical data structures cannot easily be stored in a SQL database.
If you need these particular columns for analysis, you have to pull them out separately.
> Note: _It's possible that you may need to exclude additional columns from the data specification if you're using a data set other than the World Cup 2018 data that we're using for this example._
---
```
engine = create_engine('sqlite://')
```
This creates a temporary SQLite3 database in memory, and provides an engine object that you can use to interact directly with it.
If you wish to use a persistent local or remote database, you can change the `uri` (i.e. `sqlite://`) to point elsewhere. For example, a `uri` for a local mysql database might look something like this: `mysql://user:password@127.0.0.1:3306/dbname`.
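For reference, a few example connection URIs (the credentials, hostnames, and database names below are placeholders, not real endpoints):

```python
from sqlalchemy import create_engine

engine_mem = create_engine("sqlite://")             # temporary, in-memory, discarded on exit
engine_file = create_engine("sqlite:///events.db")  # persistent local SQLite file
# Remote databases follow the same pattern (placeholder credentials shown):
# engine_pg = create_engine("postgresql://user:password@localhost:5432/dbname")
```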
---
```
df.to_sql('events', engine)
```
This loads the content of our DataFrame into our SQLite3 database via the `engine` object, and puts the rows into a new table named `events`.
> Note: **This takes a while**, 2-3 minutes on my local laptop.
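If the load time matters, `to_sql` also accepts `chunksize` (rows per INSERT batch) and `if_exists` (what to do when the table already exists). A minimal, self-contained sketch against an in-memory SQLite connection (the tiny `demo` frame here is made up for illustration):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
demo = pd.DataFrame({"player_name": ["A", "B"], "type_name": ["Pass", "Shot"]})
# if_exists='replace' drops and recreates the table; chunksize batches the INSERTs
demo.to_sql("events", conn, if_exists="replace", index=False, chunksize=1000)
row_count = pd.read_sql("select count(*) as n from events", conn)["n"][0]
print(row_count)  # 2
```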
---
```
top_passers = """
select player_name, count(*) as passes
from events
where 1=1
and type_name = "Pass"
group by player_id
order by count(*) desc
"""
pd.read_sql(top_passers, engine).head(10)
```
This demonstrates a basic SQL query that finds which players have attempted the most passes during the competition.
The query is fed into `pd.read_sql` along with the engine object to return the results.
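When a query needs user-supplied values, `pd.read_sql` also accepts a `params` argument, which is safer than formatting values directly into the SQL string. A self-contained sketch against an in-memory SQLite table (the toy rows are made up for illustration):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
pd.DataFrame({"player_name": ["A", "A", "B"],
              "type_name": ["Pass", "Pass", "Shot"]}).to_sql("events", conn, index=False)
# SQLite uses '?' placeholders; values are bound safely via params
query = "select count(*) as passes from events where type_name = ? and player_name = ?"
passes = pd.read_sql(query, conn, params=("Pass", "A"))["passes"][0]
print(passes)  # 2
```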
---
```
top_xg = """
select player_name
, round(sum(shot_statsbomb_xg),2) as 'total xg'
from events
where 1=1
and type_name = "Shot"
group by player_id
order by 2 desc
"""
pd.read_sql(top_xg, engine).head(10)
```
Another example, this time demonstrating the results of a different question, but using a pretty similar SQL query to provide the solution.
---
---
Devin Pleuler 2020
#### DS requests results via request/response cycle
A user can send a request to the domain owner to access a resource.
- The user can send a request by calling the `.request` method on the pointer.
- A `reason` needs to be passed as a parameter when requesting access to the data.
#### The user selects a dataset and performs a query
- The user selects a network and domain
- They log into the domain and select a dataset
- The user performs a query on the selected dataset pointer
```
import syft as sy
# Let's list the available dataset
sy.datasets
# We want to access the `Pneumonia Dataset`, let's connect to the RSNA domain
# Let's list the RSNA available networks
sy.networks
# Let's select the `WHO` network and list the available domains on the `WHO Network`.
who_network = sy.networks[1]
who_network.domains
# Let's select the `RSNA domain`
rsna_domain = who_network["e1640cc4af70422da1d60300724b1ee3"]
# Let's login into the rsna domain
rsna_domain_client = rsna_domain.login(email="sheldon@caltech.edu", password="bazinga")
# Let's select the pnuemonia dataset
pnuemonia_dataset = rsna_domain_client["fc650f14f6f7454b87d3ccd345c437b5"]
# Let's see the dataset
pnuemonia_dataset
# Let's select the label tensors
label_ptr = pnuemonia_dataset["labels"]
# Let's calculate the unique labels in the dataset
unique_labels = label_ptr[:,0].unique()
```
#### The user fetches the results of their query
- The user can perform a `.get` operation to download the data of the variable locally.
- If a user tries to access a variable without publishing its results or without requesting it, then they receive a 403.
- If a user has requested a resource and it is denied by the DO, then the user receives a 403 when performing a get operation on the resource.
```
number_of_unique_labels = unique_labels.shape
# Let's access the labels
number_of_unique_labels.get()
# Let's request the results from the Domain Owner
number_of_unique_labels.request(reason="Know the number of unique labels in the dataset.")
```
#### The user views the request logs
- The user can list all the logs of all the requests sent by them to a domain. **[P1]**
The following properties are visible to the user with respect to the logs (for data requests):
- Request Id (Unique id of the request)
- Request Date (Datetime on which the request was submitted. The datetime/timestamp is shown in UTC)
- Reason (The reason submitted by the requester to access the resource)
- Result Id (The unique id of the resource being requested)
- State (State of the request - Approved/Declined/Pending)
- Reviewer Comments (Comment provided by the request reviewer (DO) during request approval/denial)
- The user can filter through the logs via ID and Status. **[P2]**
```
# Let's check the status of our request logs
# A user can see only the request logs with state == `Pending`.
rsna_domain.requests
# If we want to see all the requests that are submitted to the domain,
rsna_domain.requests.all()
```
Some time has passed, let's check if our requests are approved or not.
```
# Let's check again for pending requests first....
rsna_domain.requests
# Let's check all the submitted requests
rsna_domain.requests.all()
# Great our requests are approved, let's get the information
unique_labels = number_of_unique_labels.get()
print(f"Unique Labels: {len(unique_labels)}")
# Filtering requests logs
# via Id (Get the request with the given request id)
rsna_domain.requests.filter(id="3ca9694c8e5d4214a1ed8025a1391c8c")
# or via Status (List all the logs with given status)
rsna_domain.requests.filter(status="Approved")
```
#### Dummy Data
```
import pandas as pd
from enum import Enum
import uuid
import torch
import datetime
import json
import numpy as np
class bcolors(Enum):
HEADER = "\033[95m"
OKBLUE = "\033[94m"
OKCYAN = "\033[96m"
OKGREEN = "\033[92m"
WARNING = "\033[93m"
FAIL = "\033[91m"
ENDC = "\033[0m"
BOLD = "\033[1m"
UNDERLINE = "\033[4m"
all_datasets = [
{
"Id": uuid.uuid4().hex,
"Name": "Diabetes Dataset",
"Tags": ["Health", "Classification", "Dicom"],
"Assets": '''["Images"] -> Tensor; ["Labels"] -> Tensor''',
"Description": "A large set of high-resolution retina images",
"Domain": "California Healthcare Foundation",
"Network": "WHO",
"Usage": 102,
"Added On": datetime.datetime.now().replace(month=1).strftime("%b %d %Y")
},
{
"Id": uuid.uuid4().hex,
"Name": "Canada Commodities Dataset",
"Tags": ["Commodities", "Canada", "Trade"],
"Assets": '''["ca-feb2021"] -> DataFrame''',
"Description": "Commodity Trade Dataset",
"Domain": "Canada Domain",
"Network": "United Nations",
"Usage": 40,
"Added On": datetime.datetime.now().replace(month=3, day=11).strftime("%b %d %Y")
},
{
"Id": uuid.uuid4().hex,
"Name": "Italy Commodities Dataset",
"Tags": ["Commodities", "Italy", "Trade"],
"Assets": '''["it-feb2021"] -> DataFrame''',
"Description": "Commodity Trade Dataset",
"Domain": "Italy Domain",
"Network": "United Nations",
"Usage": 23,
"Added On": datetime.datetime.now().replace(month=3).strftime("%b %d %Y")
},
{
"Id": uuid.uuid4().hex,
"Name": "Netherlands Commodities Dataset",
"Tags": ["Commodities", "Netherlands", "Trade"],
"Assets": '''["ne-feb2021"] -> DataFrame''',
"Description": "Commodity Trade Dataset",
"Domain": "Netherland Domain",
"Network": "United Nations",
"Usage": 20,
"Added On": datetime.datetime.now().replace(month=4, day=12).strftime("%b %d %Y")
},
{
"Id": uuid.uuid4().hex,
"Name": "Pnuemonia Dataset",
"Tags": ["Health", "Pneumonia", "X-Ray"],
"Assets": '''["X-Ray-Images"] -> Tensor; ["labels"] -> Tensor''',
"Description": "Chest X-Ray images. All provided images are in DICOM format.",
"Domain": "RSNA",
"Network": "WHO",
"Usage": 334,
"Added On": datetime.datetime.now().replace(month=1).strftime("%b %d %Y")
},
]
all_datasets_df = pd.DataFrame(all_datasets)
# Print available networks
available_networks = [
{
"Id": f"{uuid.uuid4().hex}",
"Name": "United Nations",
"Hosted Domains": 4,
"Hosted Datasets": 6,
"Description": "The UN hosts data related to the commodity and Census data.",
"Tags": ["Commodities", "Census", "Health"],
"Url": "https://un.openmined.org",
},
{
"Id": f"{uuid.uuid4().hex}",
"Name": "World Health Organisation",
"Hosted Domains": 3,
"Hosted Datasets": 5,
"Description": "WHO hosts data related to health sector of different parts of the worlds.",
"Tags": ["Virology", "Cancer", "Health"],
"Url": "https://who.openmined.org",
},
{
"Id": f"{uuid.uuid4().hex}",
"Name": "International Space Station",
"Hosted Domains": 2,
"Hosted Datasets": 4,
"Description": "ISS hosts data related to the topography of different exoplanets.",
"Tags": ["Exoplanets", "Extra-Terrestrial"],
"Url": "https://iss.openmined.org",
},
]
networks_df = pd.DataFrame(available_networks)
who_domains = [
{
"Id": f"{uuid.uuid4().hex}",
"Name": "California Healthcare Foundation",
"Hosted Datasets": 1,
"Description": "Health care systems",
"Tags": ["Clinical Data", "Healthcare"],
},
{
"Id": f"{uuid.uuid4().hex}",
"Name": "RSNA",
"Hosted Datasets": 1,
"Description": "Radiological Image Datasets",
"Tags": ["Dicom", "Radiology", "Health"],
},
]
who_domains_df = pd.DataFrame(who_domains)
pneumonia_dataset = [
{
"Asset Key": "[X-Ray-Images]",
"Type": "Tensor",
"Shape": "(40000, 7)"
},
{
"Asset Key": '[labels]',
"Type": "Tensor",
"Shape": "(40000, 5)"
},
]
print("""
Name: Pnuemonia Detection and Locationzation Dataset
Description: Chest X-Ray images. All provided images are in DICOM format.
""")
pneumonia_dataset_df = pd.DataFrame(pneumonia_dataset)
labels_data = np.random.randint(0, 2, size=(40000, 5))[:, 0]
label_tensors = torch.Tensor(labels_data)
authorization_error = f"""
{bcolors.FAIL.value}PermissionDenied:{bcolors.ENDC.value}
You don't have authorization to perform the `.get` operation.
You need to either `request` the results or `publish` the results.
"""
print(authorization_error)
request_uuid = uuid.uuid4().hex
request_submission = f"""
Your request has been submitted to the domain. Your request id is: {bcolors.BOLD.value}{request_uuid}
{bcolors.ENDC.value}You can check the status of your requests via `.requests`.
"""
print(request_submission)
requests_data = [
{
"Request Id": uuid.uuid4().hex,
"Request Date": datetime.datetime.now().strftime("%b %d %Y %I:%M%p"),
"Reason": "Know the number of unique labels in the dataset.",
"Result Id": uuid.uuid4().hex,
"State": "Pending",
"Reviewer Comments": "-",
},
{
"Request Id": uuid.uuid4().hex,
"Request Date": datetime.datetime.now().replace(day=19, hour=1).strftime("%b %d %Y %I:%M%p"),
"Reason": "Get the labels in the dataset.",
"Result Id": uuid.uuid4().hex,
"State": "Denied",
"Reviewer Comments": "Access to raw labels is not allowed",
}
]
requests_data_df = pd.DataFrame(requests_data)
approved_requests_data_df = requests_data_df.copy()
approved_requests_data_df["State"][0] = "Approved"
approved_requests_data_df["Reviewer Comments"][0] = "Looks good."
filtered_request_logs = approved_requests_data_df[:1]
```
# Modified Triplet Loss : Ungraded Lecture Notebook
In this notebook you'll see how to calculate the full triplet loss, step by step, including the mean negative and the closest negative. You'll also calculate the matrix of similarity scores.
## Background
This is the original triplet loss function:
$\mathcal{L_\mathrm{Original}} = \max{(\mathrm{s}(A,N) -\mathrm{s}(A,P) +\alpha, 0)}$
It can be improved by including the mean negative and the closest negative, to create a new full loss function. The inputs are the Anchor $\mathrm{A}$, Positive $\mathrm{P}$ and Negative $\mathrm{N}$.
$\mathcal{L_\mathrm{1}} = \max{(mean\_neg -\mathrm{s}(A,P) +\alpha, 0)}$
$\mathcal{L_\mathrm{2}} = \max{(closest\_neg -\mathrm{s}(A,P) +\alpha, 0)}$
$\mathcal{L_\mathrm{Full}} = \mathcal{L_\mathrm{1}} + \mathcal{L_\mathrm{2}}$
Let me show you what that means exactly, and how to calculate each step.
## Imports
```
import numpy as np
```
## Similarity Scores
The first step is to calculate the matrix of similarity scores using cosine similarity so that you can look up $\mathrm{s}(A,P)$, $\mathrm{s}(A,N)$ as needed for the loss formulas.
### Two Vectors
First I'll show you how to calculate the similarity score, using cosine similarity, for 2 vectors.
$\mathrm{s}(v_1,v_2) = \mathrm{cosine \ similarity}(v_1,v_2) = \frac{v_1 \cdot v_2}{||v_1||~||v_2||}$
* Try changing the values in the second vector to see how it changes the cosine similarity.
```
# Two vector example
# Input data
print("-- Inputs --")
v1 = np.array([1, 2, 3], dtype=float)
v2 = np.array([1, 2, 3.5]) # notice the 3rd element is offset by 0.5
### START CODE HERE ###
# Try modifying the vector v2 to see how it impacts the cosine similarity
# v2 = v1 # identical vector
# v2 = v1 * -1 # opposite vector
# v2 = np.array([0,-42,1]) # random example
### END CODE HERE ###
print("v1 :", v1)
print("v2 :", v2, "\n")
# Similarity score
def cosine_similarity(v1, v2):
numerator = np.dot(v1, v2)
denominator = np.sqrt(np.dot(v1, v1)) * np.sqrt(np.dot(v2, v2))
return numerator / denominator
print("-- Outputs --")
print("cosine similarity :", cosine_similarity(v1, v2))
```
### Two Batches of Vectors
Now I'll show you how to calculate the similarity scores, using cosine similarity, for 2 batches of vectors. These are rows of individual vectors, just like in the example above, but stacked vertically into a matrix. They would look like the image below for a batch size (row count) of 4 and embedding size (column count) of 5.
The data is set up so that $v_{1\_1}$ and $v_{2\_1}$ represent duplicate inputs, but they are not duplicates with any other rows in the batch. This means $v_{1\_1}$ and $v_{2\_1}$ (green and green) have more similar vectors than say $v_{1\_1}$ and $v_{2\_2}$ (green and magenta).
I'll show you two different methods for calculating the matrix of similarities from 2 batches of vectors.
<img src='images/v1v2_stacked.png' style="height:250px;"/>
```
# Two batches of vectors example
# Input data
print("-- Inputs --")
v1_1 = np.array([1, 2, 3])
v1_2 = np.array([9, 8, 7])
v1_3 = np.array([-1, -4, -2])
v1_4 = np.array([1, -7, 2])
v1 = np.vstack([v1_1, v1_2, v1_3, v1_4])
print("v1 :")
print(v1, "\n")
v2_1 = v1_1 + np.random.normal(0, 2, 3) # add some noise to create approximate duplicate
v2_2 = v1_2 + np.random.normal(0, 2, 3)
v2_3 = v1_3 + np.random.normal(0, 2, 3)
v2_4 = v1_4 + np.random.normal(0, 2, 3)
v2 = np.vstack([v2_1, v2_2, v2_3, v2_4])
print("v2 :")
print(v2, "\n")
# Batch sizes must match
b = len(v1)
print("batch sizes match :", b == len(v2), "\n")
# Similarity scores
print("-- Outputs --")
# Option 1 : nested loops and the cosine similarity function
sim_1 = np.zeros([b, b]) # empty array to take similarity scores
# Loop
for row in range(0, sim_1.shape[0]):
for col in range(0, sim_1.shape[1]):
sim_1[row, col] = cosine_similarity(v1[row], v2[col])
print("option 1 : loop")
print(sim_1, "\n")
# Option 2 : vector normalization and dot product
def norm(x):
return x / np.sqrt(np.sum(x * x, axis=1, keepdims=True))
sim_2 = np.dot(norm(v1), norm(v2).T)
print("option 2 : vec norm & dot product")
print(sim_2, "\n")
# Check
print("outputs are the same :", np.allclose(sim_1, sim_2))
```
## Hard Negative Mining
I'll now show you how to calculate the mean negative $mean\_neg$ and the closest negative $close\_neg$ used in calculating $\mathcal{L_\mathrm{1}}$ and $\mathcal{L_\mathrm{2}}$.
$\mathcal{L_\mathrm{1}} = \max{(mean\_neg -\mathrm{s}(A,P) +\alpha, 0)}$
$\mathcal{L_\mathrm{2}} = \max{(closest\_neg -\mathrm{s}(A,P) +\alpha, 0)}$
You'll do this using the matrix of similarity scores you already know how to make, like the example below for a batch size of 4. The diagonal of the matrix contains all the $\mathrm{s}(A,P)$ values, similarities from duplicate question pairs (aka Positives). This is an important attribute for the calculations to follow.
<img src='images/ss_matrix.png' style="height:250px;"/>
### Mean Negative
$mean\_neg$ is the average of the off diagonals, the $\mathrm{s}(A,N)$ values, for each row.
### Closest Negative
$closest\_neg$ is the largest off diagonal value, $\mathrm{s}(A,N)$, that is smaller than the diagonal $\mathrm{s}(A,P)$ for each row.
* Try using a different matrix of similarity scores.
```
# Hardcoded matrix of similarity scores
sim_hardcoded = np.array(
[
[0.9, -0.8, 0.3, -0.5],
[-0.4, 0.5, 0.1, -0.1],
[0.3, 0.1, -0.4, -0.8],
[-0.5, -0.2, -0.7, 0.5],
]
)
sim = sim_hardcoded
### START CODE HERE ###
# Try using different values for the matrix of similarity scores
# sim = 2 * np.random.random_sample((b,b)) -1 # random similarity scores between -1 and 1
# sim = sim_2 # the matrix calculated previously using vector normalization and dot product
### END CODE HERE ###
# Batch size
b = sim.shape[0]
print("-- Inputs --")
print("sim :")
print(sim)
print("shape :", sim.shape, "\n")
# Positives
# All the s(A,P) values : similarities from duplicate question pairs (aka Positives)
# These are along the diagonal
sim_ap = np.diag(sim)
print("sim_ap :")
print(np.diag(sim_ap), "\n")
# Negatives
# All the s(A,N) values : similarities from the non-duplicate question pairs (aka Negatives)
# These are in the off diagonals
sim_an = sim - np.diag(sim_ap)
print("sim_an :")
print(sim_an, "\n")
print("-- Outputs --")
# Mean negative
# Average of the s(A,N) values for each row
mean_neg = np.sum(sim_an, axis=1, keepdims=True) / (b - 1)
print("mean_neg :")
print(mean_neg, "\n")
# Closest negative
# Max s(A,N) that is <= s(A,P) for each row
mask_1 = np.identity(b) == 1 # mask to exclude the diagonal
mask_2 = sim_an > sim_ap.reshape(b, 1) # mask to exclude sim_an > sim_ap
mask = mask_1 | mask_2
sim_an_masked = np.copy(sim_an) # create a copy to preserve sim_an
sim_an_masked[mask] = -2
closest_neg = np.max(sim_an_masked, axis=1, keepdims=True)
print("closest_neg :")
print(closest_neg, "\n")
```
## The Loss Functions
The last step is to calculate the loss functions.
$\mathcal{L_\mathrm{1}} = \max{(mean\_neg -\mathrm{s}(A,P) +\alpha, 0)}$
$\mathcal{L_\mathrm{2}} = \max{(closest\_neg -\mathrm{s}(A,P) +\alpha, 0)}$
$\mathcal{L_\mathrm{Full}} = \mathcal{L_\mathrm{1}} + \mathcal{L_\mathrm{2}}$
```
# Alpha margin
alpha = 0.25
# Modified triplet loss
# Loss 1
l_1 = np.maximum(mean_neg - sim_ap.reshape(b, 1) + alpha, 0)
# Loss 2
l_2 = np.maximum(closest_neg - sim_ap.reshape(b, 1) + alpha, 0)
# Loss full
l_full = l_1 + l_2
# Cost
cost = np.sum(l_full)
print("-- Outputs --")
print("loss full :")
print(l_full, "\n")
print("cost :", "{:.3f}".format(cost))
```
## Summary
There were a lot of steps in there, so well done. You now know how to calculate a modified triplet loss, incorporating the mean negative and the closest negative. You also learned how to create a matrix of similarity scores based on cosine similarity.
[](https://colab.research.google.com/github/cadCAD-org/demos/blob/master/tutorials/robots_and_marbles/robot-marbles-part-3/robot-marbles-part-3.ipynb)
# cadCAD Tutorials: The Robot and the Marbles, part 3
In parts [1](../robot-marbles-part-1/robot-marbles-part-1.ipynb) and [2](../robot-marbles-part-2/robot-marbles-part-2.ipynb) we introduced the 'language' in which a system must be described in order for it to be interpretable by cadCAD and some of the basic concepts of the library:
* State Variables
* Timestep
* State Update Functions
* Partial State Update Blocks
* Simulation Configuration Parameters
* Policies
In this notebook we'll look at how subsystems within a system can operate at different frequencies. But first let's copy the base configuration with which we ended Part 2. Here's the description of that system:
__The robot and the marbles__
* Picture a box (`box_A`) with ten marbles in it; an empty box (`box_B`) next to the first one; and __two__ robot arms capable of taking a marble from any one of the boxes and dropping it into the other one.
* The robots are programmed to take one marble at a time from the box containing the largest number of marbles and drop it in the other box. They repeat that process until the boxes contain an equal number of marbles.
* The robots act simultaneously; in other words, they assess the state of the system at the exact same time, and decide what their action will be based on that information.
```
%%capture
# Only run this cell if you need to install the libraries
# If running in google colab, this is needed.
!pip install cadcad matplotlib pandas numpy
# Import dependencies
# Data processing and plotting libraries
import pandas as pd
import numpy as np
from random import normalvariate
import matplotlib.pyplot as plt
# cadCAD specific libraries
from cadCAD.configuration.utils import config_sim
from cadCAD.configuration import Experiment
from cadCAD.engine import ExecutionContext, Executor
def p_robot_arm(params, substep, state_history, previous_state):
# Parameters & variables
box_a = previous_state['box_A']
box_b = previous_state['box_B']
# Logic
if box_b > box_a:
b_to_a = 1
elif box_b < box_a:
b_to_a = -1
else:
b_to_a = 0
# Output
return({'add_to_A': b_to_a, 'add_to_B': -b_to_a})
def s_box_A(params, substep, state_history, previous_state, policy_input):
# Parameters & variables
box_A_current = previous_state['box_A']
box_A_change = policy_input['add_to_A']
# Logic
box_A_new = box_A_current + box_A_change
# Output
return ('box_A', box_A_new)
def s_box_B(params, substep, state_history, previous_state, policy_input):
# Parameters & variables
box_B_current = previous_state['box_B']
box_B_change = policy_input['add_to_B']
# Logic
box_B_new = box_B_current + box_B_change
# Output
return ('box_B', box_B_new)
partial_state_update_blocks = [
{
'policies': {
'robot_arm_1': p_robot_arm,
'robot_arm_2': p_robot_arm
},
'variables': {
'box_A': s_box_A,
'box_B': s_box_B
}
}
]
MONTE_CARLO_RUNS = 1
SIMULATION_TIMESTEPS = 10
sim_config = config_sim(
{
'N': MONTE_CARLO_RUNS,
'T': range(SIMULATION_TIMESTEPS),
#'M': {} # This will be explained in later tutorials
}
)
initial_state = {
'box_A': 10, # box_A starts out with 10 marbles in it
'box_B': 0 # box_B starts out empty
}
from cadCAD import configs
del configs[:]
experiment = Experiment()
experiment.append_configs(
sim_configs=sim_config,
initial_state=initial_state,
partial_state_update_blocks=partial_state_update_blocks,
)
exec_context = ExecutionContext()
run = Executor(exec_context=exec_context, configs=configs)
(system_events, tensor_field, sessions) = run.execute()
df = pd.DataFrame(system_events)
# Create figure
fig = df.plot(x='timestep', y=['box_A','box_B'], marker='o', markersize=12,
markeredgewidth=4, alpha=0.7, markerfacecolor='black',
linewidth=5, figsize=(12,8), title="Marbles in each box as a function of time",
ylabel='Number of Marbles', grid=True, fillstyle='none',
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(1+(df['box_A']+df['box_B']).max())));
```
# Asynchronous Subsystems
We have defined that the robots operate simultaneously on the boxes of marbles. But it is often the case that agents within a system operate asynchronously, each having their own operation frequencies or conditions.
Suppose that instead of acting simultaneously, the robots in our examples operated in the following manner:
* Robot 1: acts once every 2 timesteps
* Robot 2: acts once every 3 timesteps
One way to simulate the system with this change is to introduce a check of the current timestep before the robots act, with the definition of separate policy functions for each robot arm.
```
robots_periods = [2,3] # Robot 1 acts once every 2 timesteps; Robot 2 acts once every 3 timesteps
def get_current_timestep(cur_substep, previous_state):
if cur_substep == 1:
return previous_state['timestep']+1
return previous_state['timestep']
def robot_arm_1(params, substep, state_history, previous_state):
_robotId = 1
if get_current_timestep(substep, previous_state)%robots_periods[_robotId-1]==0: # on timesteps that are multiple of 2, Robot 1 acts
return p_robot_arm(params, substep, state_history, previous_state)
else:
return({'add_to_A': 0, 'add_to_B': 0}) # for all other timesteps, Robot 1 doesn't interfere with the system
def robot_arm_2(params, substep, state_history, previous_state):
_robotId = 2
if get_current_timestep(substep, previous_state)%robots_periods[_robotId-1]==0: # on timesteps that are multiple of 3, Robot 2 acts
return p_robot_arm(params, substep, state_history, previous_state)
else:
        return({'add_to_A': 0, 'add_to_B': 0}) # for all other timesteps, Robot 2 doesn't interfere with the system
```
In the Partial State Update Blocks, the user specifies if state update functions will be run in series or in parallel and the policy functions that will be evaluated in that block
```
partial_state_update_blocks = [
    {
        'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions
            'robot_arm_1': robot_arm_1,
            'robot_arm_2': robot_arm_2
        },
        'variables': { # The following state variables will be updated simultaneously
            'box_A': s_box_A,
            'box_B': s_box_B
        }
    }
]

from cadCAD import configs
del configs[:]

experiment = Experiment()
experiment.append_configs(
    sim_configs=sim_config,
    initial_state=initial_state,
    partial_state_update_blocks=partial_state_update_blocks,
)

exec_context = ExecutionContext()
run = Executor(exec_context=exec_context, configs=configs)
(system_events, tensor_field, sessions) = run.execute()
df = pd.DataFrame(system_events)

# Create figure
fig = df.plot(x='timestep', y=['box_A', 'box_B'], marker='o', markersize=12,
              markeredgewidth=4, alpha=0.7, markerfacecolor='black',
              linewidth=5, figsize=(12, 8), title="Marbles in each box as a function of time",
              ylabel='Number of Marbles', grid=True, fillstyle='none',
              xticks=list(df['timestep'].drop_duplicates()),
              yticks=list(range(1 + (df['box_A'] + df['box_B']).max())));
```
Let's take a step-by-step look at what the simulation tells us:
* Timestep 1: the number of marbles in the boxes does not change, as none of the robots act
* Timestep 2: Robot 1 acts, Robot 2 doesn't; resulting in one marble being moved from box A to box B
* Timestep 3: Robot 2 acts, Robot 1 doesn't; resulting in one marble being moved from box A to box B
* Timestep 4: Robot 1 acts, Robot 2 doesn't; resulting in one marble being moved from box A to box B
* Timestep 5: the number of marbles in the boxes does not change, as none of the robots act
* Timestep 6: Robots 1 __and__ 2 act, as 6 is a multiple of 2 __and__ 3; resulting in two marbles being moved from box A to box B and an equilibrium being reached.
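This timestep-by-timestep account can be reproduced in plain Python. The sketch below assumes the tutorial's earlier setup, which is not shown in this section: box A starts with 10 marbles, box B with 0, and each active robot moves one marble from the fuller box to the emptier one.

```python
# Plain-Python sanity check of the asynchronous schedule described above.
# Assumptions (from the tutorial's earlier setup, not shown in this section):
# box A starts with 10 marbles, box B with 0, and each active robot moves one
# marble from the fuller box to the emptier box.
box_A, box_B = 10, 0
robots_periods = [2, 3]   # Robot 1 acts every 2 timesteps, Robot 2 every 3
history = [(box_A, box_B)]

for t in range(1, 7):
    delta = 0
    for period in robots_periods:
        # each active robot decides based on the state at the start of the timestep
        if t % period == 0 and box_A != box_B:
            delta += 1 if box_A > box_B else -1
    box_A -= delta
    box_B += delta
    history.append((box_A, box_B))

print(history)
# [(10, 0), (10, 0), (9, 1), (8, 2), (7, 3), (7, 3), (5, 5)]
```

Note how at timestep 6 both robots decide on the same pre-update state, mirroring cadCAD's aggregation of simultaneous policy signals.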
## IBM Quantum Challenge Fall 2021
# Challenge 4c: Battery revenue optimization with adiabatic quantum computation
*Challenge solution write-up by Lukas Botsch*
The solution I present in this notebook might not be the most optimized and highest scoring one submitted during the challenge (the final score is just over 300k). But I want to give some background and try to explain how and why the implemented solution works, from the perspective of a physicist. I'm by no means an expert in qiskit! In fact, this was the first time I seriously played around with the framework ;)
If you are only interested in the code, please skip to the [end of the notebook](#final)
# Content
* [Very short introduction to the computational model](#intro)
* [Adiabatic Quantum Computation](#adiabatic_computation)
* [Battery revenue optimization problem](#problem)
* [Solving the Relaxed Battery Revenue Optimization Problem Step by Step](#step_by_step)
* [Optimizing the Solution](#optimizing)
* [Final Solution](#final)
<a id="intro"></a>
# Very short introduction to the quantum circuit computational model
_Let me first give a VERY condensed overview of the computational model._
```
import numpy as np
import matplotlib.pyplot as plt
from qiskit.visualization import plot_bloch_vector, plot_histogram, plot_bloch_multivector, plot_state_qsphere
import warnings
warnings.filterwarnings("ignore")
```
## The qubit
In most models of quantum circuit computing (including IBM's), the smallest unit of information is a two-level quantum system, called a *qubit*. The state $\left|\Psi\right\rangle$ of a qubit lives in a two-dimensional Hilbert space $\mathcal{H}_2$. We can define a basis of this Hilbert space, consisting of two orthogonal states. Let's call them $\left|0\right\rangle$ and $\left|1\right\rangle$ in analogy to classical bits. The state of the qubit can then be described (in that basis) as a linear combination of the basis states: $\left|\Psi\right\rangle = \alpha\left|0\right\rangle + \beta \left|1\right\rangle$, where $\alpha, \beta$ are complex numbers and $\sqrt{|\alpha|^2+|\beta|^2} = 1$. We can now describe the state by the two-dimensional column vector
\begin{equation*}
\left|\Psi\right\rangle = \left[
\begin{array}{l} \alpha \\ \beta \end{array}
\right] = e^{i\phi_0}\left[
\begin{array}{l} \cos(\theta/2) \\ e^{i\phi}\sin(\theta/2) \end{array}
\right],
\end{equation*}
where I have rewritten $\alpha,\beta$ in terms of their global phase $\phi_0$, relative phase $\phi$ and their magnitudes $|\alpha| = \cos(\theta/2)$, $|\beta| = \sin(\theta/2)$. As the global phase doesn't have any physical effect on a measurement, we can discard it and we are left with a description of the state using two real angles, $\phi$ and $\theta$. This representation allows us to visualize the state as a point on the unit sphere, the so-called Bloch sphere.
Let's visualize the state $\left|+\right\rangle = \frac{1}{\sqrt{2}}\left[\begin{array}{l} 1 \\ 1 \end{array}\right] = \left[\begin{array}{l}\cos(\pi/4) \\ \sin(\pi/4)\end{array}\right]$
```
plot_bloch_vector([1, np.pi/2, 0], coord_type='spherical')
```
To perform computation on a qubit, we need a way to modify its state. This is what (single qubit) gates do. The action of a gate on a single qubit can be described by a 2x2 unitary matrix $U$ and the resulting state can be obtained by matrix multiplication $\left|U\Psi\right\rangle = U\left|\Psi\right\rangle$.
Let's for example take the Pauli-X gate, represented by $X = \left[ \begin{array}{lr} 0 & 1 \\ 1 & 0 \end{array} \right]$. Its action on a state, $\left|X\Psi\right\rangle = X\left|\Psi\right\rangle = \left[ \begin{array}{lr} 0 & 1 \\ 1 & 0 \end{array} \right] \left[ \begin{array}{l} \alpha \\ \beta \end{array} \right] = \left[ \begin{array}{l} \beta \\ \alpha \end{array} \right]$, is to exchange the components of the state vector. The state $\left|0\right\rangle$ becomes $\left|1\right\rangle$ and the state $\left|1\right\rangle$ becomes $\left|0\right\rangle$.
Finally, once we have performed computation on the qubit state, we need to transfer the information to a form that the classical computer understands, to a classical bit. This is done by performing a measurement of the qubit state. The outcome of the measurement will be one of the basis states, namely $\left|0\right\rangle$ with a probability $\left|\left\langle0\right|\left.\Psi\right\rangle\right|^2 = |\alpha|^2$ and $\left|1\right\rangle$ with a probability $\left|\left\langle1\right|\left.\Psi\right\rangle\right|^2 = |\beta|^2$.
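The whole single-qubit story above fits in a few lines of numpy; this is a plain linear-algebra sketch, not a qiskit circuit:

```python
import numpy as np

# Plain linear-algebra sketch of the single-qubit formalism above (no qiskit).
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-X gate

# |Psi> = cos(theta/2)|0> + sin(theta/2)|1> with theta = pi/3, phi = 0
theta = np.pi / 3
psi = np.cos(theta / 2) * ket0 + np.sin(theta / 2) * ket1

# Gate action is matrix multiplication: X swaps the two amplitudes
assert np.allclose(X @ psi, np.sin(theta / 2) * ket0 + np.cos(theta / 2) * ket1)

# Measurement probabilities |<0|Psi>|^2 and |<1|Psi>|^2 sum to one
p0, p1 = abs(psi[0]) ** 2, abs(psi[1]) ** 2
print(p0, p1)  # cos^2(pi/6) = 0.75 and sin^2(pi/6) = 0.25, up to rounding
```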
## Multi-qubit register
As in a classical computer, we cannot do much with just a single qubit. So let us now consider multiple qubits and combine them into a quantum register. The state of an N-qubit register now lives in the $2^N$-dimensional Hilbert space resulting from the tensor product $\mathcal{H} = \otimes_{i=1}^N \mathcal{H}_2 = \mathcal{H}_2 \otimes \mathcal{H}_2 \otimes ... \otimes \mathcal{H}_2$ of the single qubit Hilbert space $\mathcal{H}_2$ and we now need $2^N$ basis states to describe it. For a two qubit register, we can construct a basis with the four states $\left| 00 \right\rangle$, $\left| 01 \right\rangle$, $\left| 10 \right\rangle$, $\left| 11 \right\rangle$ and describe the register state in that basis with a vector
\begin{equation*}
\left| \Psi \right\rangle = \left[ \begin{array}{l} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{array} \right]
\end{equation*}
We can construct register states from the single qubit states by taking their tensor product:
\begin{equation*}
\left| ab \right\rangle = \left| a \right\rangle \otimes \left| b \right\rangle = \left[ \begin{array}{l} a_0b_0 \\ a_0 b_1 \\ a_1b_0 \\ a_1b_1 \end{array} \right]
\end{equation*}
But what about the two-qubit state
\begin{equation*}
\frac{1}{\sqrt{2}}\left( \left|00\right\rangle + \left|11\right\rangle \right) = \frac{1}{\sqrt{2}}\left[ \begin{array}{l} 1 \\ 0 \\ 0 \\ 1 \end{array} \right]\text{ ?}
\end{equation*}
It's a perfectly valid two-qubit state, but we can't construct it as a tensor product of single qubit states (try it)! We call it an entangled state.
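A quick numpy check makes the "try it" concrete: a two-qubit state $[a_{00}, a_{01}, a_{10}, a_{11}]$ factors into a tensor product exactly when $a_{00}a_{11} - a_{01}a_{10} = 0$, i.e. when the 2x2 matrix of amplitudes has rank 1:

```python
import numpy as np

def is_product_state(psi):
    # Reshape the 4-vector [a00, a01, a10, a11] into a 2x2 matrix; the state
    # factors as |a> (x) |b> exactly when this matrix has rank 1, i.e. when
    # a00*a11 - a01*a10 == 0.
    a = psi.reshape(2, 2)
    return bool(np.isclose(a[0, 0] * a[1, 1] - a[0, 1] * a[1, 0], 0))

plus = np.array([1, 1]) / np.sqrt(2)
product = np.kron(plus, plus)               # |+>|+> -- separable by construction
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

print(is_product_state(product), is_product_state(bell))  # True False
```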
As in the single qubit case, multi-qubit gates can be represented by unitary matrices and operate on a register state by matrix multiplication. An N-qubit gate is then represented by a $2^N \times 2^N$ unitary matrix.
Finally, measurement works in the same way as for single qubits: When we measure the two-qubit state $\left| \Psi \right\rangle$ in the basis given above, we will find it in one of the basis states $\left|ij\right\rangle$ with a probability $\left|\left\langle ij \right|\left.\Psi\right\rangle\right|^2 = |a_{ij}|^2$.
## Encoding numbers in qubit registers
When we want to encode an unsigned integer number $a\in\mathbb{N}$ in a classical computer, we encode its binary representation $a = \sum_{i=0}^{N-1} a_i 2^i$ in a bit register $a_{N-1}...a_1a_0$. We can do the same thing with quantum registers and represent the number $a$ in an N-qubit register
\begin{equation*}
\left|a\right\rangle = \left|a_{N-1}...a_1a_0\right\rangle = \otimes_{i=0}^{N-1} \left( (1-a_i) \left|0\right\rangle + a_i \left|1\right\rangle \right)
\end{equation*}
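This encoding is easy to emulate classically: the basis state $\left|a\right\rangle$ is just the standard basis vector of the $2^N$-dimensional space with a 1 at index $a$. A small numpy sketch (the helper name `encode_int` is mine):

```python
import numpy as np

def encode_int(a, n_qubits):
    """Return the 2**n-dimensional basis vector |a> = |a_{n-1} ... a_1 a_0>."""
    state = np.array([1.0])
    for i in reversed(range(n_qubits)):   # most significant qubit first
        bit = (a >> i) & 1
        qubit = np.array([1 - bit, bit])  # |0> if the bit is 0, |1> if it is 1
        state = np.kron(state, qubit)
    return state

psi = encode_int(5, 3)   # 5 = 0b101 -> |101>
print(np.argmax(psi))    # 5
```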
<a id="adiabatic_computation"></a>
# Adiabatic Quantum Computation
Adiabatic Quantum Computation (AQC) is a different model of quantum computation. It is based on the adiabatic approximation [**[1]**](https://arxiv.org/abs/1611.04471):
> For a system initially prepared in an eigenstate (e.g., the ground state) $\left|\epsilon_0(0)\right\rangle$ of a time-dependent Hamiltonian $H(t)$, the time evolution governed by the Schrödinger equation $i\frac{\partial\left|\psi(t)\right\rangle}{\partial t} = H(t)\left|\psi(t)\right\rangle$ (we set $\hbar=1$ from now on) will approximately keep the actual state $\left|\psi(t)\right\rangle$ of the system in the corresponding instantaneous ground state (or other eigenstate) $\left|\epsilon_0(t)\right\rangle$ of $H(t)$, provided that $H(t)$ varies “sufficiently slowly”.
Many optimization problems can be translated into a form suitable for AQC. The basic idea is to construct a Hamiltonian $H_f$, whose ground state encodes the solution of the optimization problem. The quantum system is then initialized in the ground state of a simple Hamiltonian $H_0$ and let to evolve adiabatically under a time dependent Hamiltonian $H(t)$, such that $H(0) = H_0$ and $H(t_f) = H_f$. At time $t_f$, the system is in the ground state of Hamiltonian $H_f$, which encodes the solution of the optimization problem.
One way to construct the time dependent Hamiltonian $H(t)$ is by interpolation of the initial ($H_0$) and final ($H_f$) Hamiltonians:
\begin{equation*}
H(t) = \beta(t)H_0 + \gamma(t) H_f,
\end{equation*}
where $\beta(t)$ and $\gamma(t)$ are monotonically decreasing and increasing, respectively, and $\beta(0)=1$, $\beta(t_f)=0$, $\gamma(0)=0$, $\gamma(t_f)=1$.
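The interpolation idea can be illustrated classically by diagonalizing $H(s) = (1-s)H_0 + sH_f$ along the path and watching the instantaneous ground state drift toward the optimum. This is a toy diagonalization sketch with an arbitrary cost function, not a circuit:

```python
import numpy as np

# Track the instantaneous ground state of H(s) = (1-s)*H0 + s*Hf for a toy
# diagonal cost function f on 3 qubits (8 basis states).
dim = 8
f = np.array([5, 3, 7, 1, 6, 4, 8, 2], dtype=float)  # arbitrary toy costs
Hf = np.diag(f)
phi = np.ones(dim) / np.sqrt(dim)        # uniform superposition
H0 = np.eye(dim) - np.outer(phi, phi)    # ground state is |phi>

opt = np.argmin(f)
overlaps = []
for s in np.linspace(0, 1, 11):
    H = (1 - s) * H0 + s * Hf
    evals, evecs = np.linalg.eigh(H)     # eigenvalues in ascending order
    ground = evecs[:, 0]                 # instantaneous ground state
    overlaps.append(abs(ground[opt]) ** 2)

# The overlap with the optimal basis state grows from 1/8 (uniform) to 1
print([round(o, 2) for o in overlaps])
```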
## Example
Let's see an example: We want to find the solution to the optimization problem
\begin{equation*}
\min_{x\in\mathcal{X}} f(x), \text{ with }\mathcal{X} = [0, 1, ..., 2^N-1].
\end{equation*}
We construct the Hamiltonian $H_f$, such that $H_f\left|x\right\rangle = f(x)\left|x\right\rangle$, where $\left|x\right\rangle$ is an encoding of $x\in\mathcal{X}$. We define $H_0 = \mathbb{1} - \left|\phi\right\rangle\left\langle\phi\right|$, where $\left|\phi\right\rangle = \frac{1}{\sqrt{2^N}}\sum_{x=0}^{2^N-1}\left|x\right\rangle$ is the uniform superposition state.
We initialize the state $\left|\psi\right\rangle$ in the ground state of Hamiltonian $H_0$: $\left|\psi(0)\right\rangle = \left|\phi\right\rangle$ and let it evolve under the time dependent Hamiltonian $H(t) = (1-t/t_f) H_0 + t/t_f H_f$ (assuming the evolution satisfies the adiabatic theorem and is "sufficiently slow"). At time $t_f$, $\left|\psi(t_f)\right\rangle$ is in the ground state of the Hamiltonian $H(t_f) = H_f$ and therefore satisfies
\begin{equation*}
H_f \left|\psi(t_f)\right\rangle = \min_{x\in\mathcal{X}} f(x)\left|\psi(t_f)\right\rangle.
\end{equation*}
The final state $\left|\psi(t_f)\right\rangle$ encodes the solution to the optimization problem!
## Connection to Circuit Model
How can we translate the time evolution to our circuit model and solve the optimization problem using AQC on IBM's quantum computers? After all, we can't construct the time dependent Hamiltonian $H(t)$!
First, let's define $t_i = i \frac{t_f}{p}$ ($i\in[0,1,...,p]$), the discretization of the time window $[0, t_f]$ into $p$ chunks. We can now approximate the time dependent Hamiltonian $H(t)$ by the $(p+1)$ time-independent Hamiltonians $H_i = H(t_i)$.
We can express the time evolution of a state $\left|\psi\right\rangle$ under a time independent Hamiltonian $H$ using the unitary time evolution operator $U_H(t) = \exp\left(-iHt\right)$:
\begin{equation*}
\left|\psi(t)\right\rangle = U_H(t)\left|\psi(0)\right\rangle = \exp\left(-iHt\right)\left|\psi(0)\right\rangle.
\end{equation*}
Assuming the Hamiltonian $H_i = H(t_i)$ changes "sufficiently slowly" to $H_{i+1} = H(t_{i+1})$, we can approximate the time evolution of the initial state $\left|\psi(0)\right\rangle$ to the final state $\left|\psi(t_f)\right\rangle$ under the Hamiltonian $H(t)$ as
\begin{equation*}
\left|\psi(t_f)\right\rangle \approx \Pi_{j=1}^p \exp\left(-iH_j (t_j-t_{j-1})\right) \left|\psi(0)\right\rangle = \Pi_{j=1}^p \exp\left(-iH_j \frac{t_f}{p}\right) \left|\psi(0)\right\rangle.
\end{equation*}
If $\left|\psi(0)\right\rangle$ is the ground state of the initial Hamiltonian $H(0)$, then $\left|\psi(t_f)\right\rangle$ is the ground state of the final Hamiltonian $H_f$.
Taking the interpolating time dependent Hamiltonian from above, we get:
\begin{equation*}
\left|\psi(t_f)\right\rangle \approx \Pi_{j=1}^p \exp\left( -i[\beta(t_j)H_0 + \gamma(t_j)H_f] \frac{t_f}{p} \right) \left|\psi(0)\right\rangle
\end{equation*}
Let's rename $H_0 \equiv B$ and $H_f \equiv C$ and define $\beta(t_i) = \frac{p}{t_f}\beta_i$, with $\beta_i = 1-i/p$ and $\gamma(t_i) = \frac{p}{t_f}\gamma_i$, with $\gamma_i = i/p$ ($i\in[1,...,p]$). The above equation becomes
\begin{equation*}
\left|\psi(t_f)\right\rangle \approx \Pi_{j=1}^p \exp\left( -i[\beta_j B + \gamma_j C] \right) \left|\psi(0)\right\rangle = U(B, \beta_p)U(C, \gamma_p)...U(B, \beta_1)U(C, \gamma_1)\left|\psi(0)\right\rangle,
\end{equation*}
where $U(B, \beta) \equiv \exp(-i\beta B)$ and $U(C, \gamma) \equiv \exp(-i\gamma C)$.
We have brought the AQC time evolution operator into a form suitable for the circuit model! Note that $B$ and $C$ are time-independent Hermitian operators, so $U(B, \beta)$ and $U(C, \gamma)$ are unitaries, and $\left|\psi(0)\right\rangle$ is the ground state of $B$. One simple choice for the operator $B$ is $B = \sum_{i=1}^{N} \sigma_x^{(i)}$, the sum of single-qubit Pauli X operators. Depending on the type of optimization problem we want to solve, we can choose $\left|\psi(0)\right\rangle = \left|-\right\rangle^{\otimes N}$ or $\left|\psi(0)\right\rangle = \left|+\right\rangle^{\otimes N}$, the eigenstates of $B$ with the lowest ($-N$) and highest ($+N$) eigenvalue, respectively. In other words, the first choice is useful for minimization problems, while the second can be used for maximization.
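The claim about the mixer's extremal eigenstates can be checked numerically. The sketch below takes $B$ to be the sum of single-qubit Pauli-X operators (the operator whose exponential a layer of per-qubit RX gates implements) and verifies that $\left|+\right\rangle^{\otimes N}$ and $\left|-\right\rangle^{\otimes N}$ are its eigenstates with eigenvalues $+N$ and $-N$:

```python
import numpy as np

N = 3
X = np.array([[0, 1], [1, 0]], dtype=float)
I = np.eye(2)

def kron_all(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

# B = sum over qubits of a single-qubit Pauli-X acting on that qubit
B = sum(kron_all([X if j == i else I for j in range(N)]) for i in range(N))

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
plus_N = kron_all([plus] * N).ravel()
minus_N = kron_all([minus] * N).ravel()

# |+>^N has the highest eigenvalue (+N), |->^N the lowest (-N)
assert np.allclose(B @ plus_N, N * plus_N)
assert np.allclose(B @ minus_N, -N * minus_N)
```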
<a id="problem"></a>
# Battery revenue optimization problem
Battery storage systems have provided a solution to flexibly integrate large-scale renewable energy (such as wind and solar) in a power system. The revenues from batteries come from different types of services sold to the grid. The process of energy trading of battery storage assets is as follows: a regulator asks each battery supplier to choose a market in advance for each time window. The battery operator then charges the battery with renewable energy and releases the energy to the grid according to pre-agreed contracts. The supplier therefore makes forecasts on the return and the number of charge/discharge cycles for each time window in order to optimize its overall return.
How to maximize the revenue of battery-based energy storage is a concern of all battery storage investors. Always letting the battery supply power to the market that pays the most in every time window might be a simple first guess, but in reality we have to consider many other factors.
What we cannot ignore is the aging of batteries, also known as **degradation**. As charge/discharge cycles accumulate, the battery capacity gradually degrades (the amount of energy the battery can store, or the amount of power it can deliver, permanently decreases). After a number of cycles, the battery reaches the end of its usefulness. Since the performance of a battery decreases while it is used, choosing the best cash return for every time window one after the other, without considering degradation, does not lead to an optimal return over the lifetime of the battery, i.e. before the maximum number of charge/discharge cycles is reached.
Therefore, in order to optimize the revenue of the battery, what we have to do is select the market for the battery in each time window, taking both **the returns on these markets (value)**, based on price forecasts, and the expected battery **degradation over time (cost)** into account. It sounds like a common optimization problem, right?
## Problem Setting
Here, we have referred to the problem setting in de la Grand'rive and Hullo's paper [**[2]**](https://arxiv.org/abs/1908.02210).
Considering two markets $M_{1}$ , $M_{2}$, during every time window (typically a day), the battery operates on one or the other market, for a maximum of $n$ time windows. Every day is considered independent and the intraday optimization is a standalone problem: every morning the battery starts with the same level of power so that we don’t consider charging problems. Forecasts on both markets being available for the $n$ time windows, we assume known for each time window $t$ (day) and for each market:
- the daily returns $\lambda_{1}^{t}$ , $\lambda_{2}^{t}$
- the daily degradation, or health cost (number of cycles), for the battery $c_{1}^{t}$, $c_{2}^{t}$
We want to find the optimal schedule, i.e. maximize the lifetime return with a total cost of less than $C_{max}$ cycles. We introduce $d = \max_{t}\left\{c_{1}^{t}, c_{2}^{t}\right\}$.
We introduce the decision variable $z_{t}, \forall t \in [1, n]$ such that $z_{t} = 0$ if the supplier chooses $M_{1}$, and $z_{t} = 1$ if it chooses $M_{2}$, with every possible vector $z = [z_{1}, ..., z_{n}]$ being a possible schedule. The previously formulated problem can then be expressed as:
\begin{equation}
\underset{z \in \left\{0,1\right\}^{n}}{max} \displaystyle\sum_{t=1}^{n}(1-z_{t})\lambda_{1}^{t}+z_{t}\lambda_{2}^{t}
\end{equation}
<br>
\begin{equation}
s.t. \sum_{t=1}^{n}[(1-z_{t})c_{1}^{t}+z_{t}c_{2}^{t}]\leq C_{max}
\end{equation}
## Formulating the relaxed knapsack problem
Here we follow the approach shown in [**[2]**](https://arxiv.org/abs/1908.02210), which solves the "relaxed" formulation of the knapsack problem.
The relaxed problem can be defined as follows:
\begin{equation*}
\text{maximize } f(z)=return(z)+penalty(z)
\end{equation*}
\begin{equation*}
\text{where} \quad return(z)=\sum_{t=1}^{n} return_{t}(z) \quad \text{with} \quad return_{t}(z) \equiv\left(1-z_{t}\right) \lambda_{1}^{t}+z_{t} \lambda_{2}^{t}
\end{equation*}
\begin{equation*}
\quad \quad \quad \quad \quad \quad penalty(z)=\left\{\begin{array}{ll}
0 & \text{if}\quad cost(z)<C_{\max } \\
-\alpha\left(cost(z)-C_{\max }\right) & \text{if} \quad cost(z) \geq C_{\max }, \alpha>0 \quad \text{constant}
\end{array}\right.
\end{equation*}
Now, our task is to find an operator $C$, such that
\begin{equation*}
C\left|z\right\rangle = f(z)\left|z\right\rangle.
\end{equation*}
Applying the AQC procedure introduced earlier, we should find an approximate solution to the relaxed battery optimization problem.
## Simple example
Let's walk through a simple example:
* $\lambda_1 = [1, 3]$
* $\lambda_2 = [2, 8]$
* $C_1 = [1, 1]$
* $C_2 = [3, 5]$
* $C_\text{max} = 6$
The possible schedules are $z_1 = [0, 0], z_2 = [0, 1], z_3 = [1, 0]$ and $z_4 = [1, 1]$,
which we can encode in a two-qubit register. We see that only $z_1, z_2$ and $z_3$ are feasible
schedules, as $cost(z_4) = 8 > C_\text{max}$. We can calculate $f(z_1) = 4, f(z_2) = 9,
f(z_3) = 5, f(z_4) = 2$ and find $z_\text{opt} = z_2$.
```
L1 = np.asarray([1, 3])
L2 = np.asarray([2, 8])
C1 = np.asarray([1, 1])
C2 = np.asarray([3, 5])
C_max = 6
```
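Since the instance is tiny, we can sanity-check these numbers by brute force. A classical sketch, assuming the penalty constant $\alpha = 4$, which reproduces the $f$ values quoted above:

```python
import itertools
import numpy as np

# Brute-force check of the two-day example above.
# Assumption: penalty constant alpha = 4, which reproduces the quoted f values.
L1, L2 = np.array([1, 3]), np.array([2, 8])
C1, C2 = np.array([1, 1]), np.array([3, 5])
C_max, alpha = 6, 4

def f(z):
    z = np.asarray(z)
    ret = np.sum((1 - z) * L1 + z * L2)
    cost = np.sum((1 - z) * C1 + z * C2)
    penalty = 0 if cost < C_max else -alpha * (cost - C_max)
    return ret + penalty

scores = {z: f(z) for z in itertools.product([0, 1], repeat=2)}
print(scores)  # {(0, 0): 4, (0, 1): 9, (1, 0): 5, (1, 1): 2}
```

The maximum is attained at $z = [0, 1]$, confirming $z_\text{opt} = z_2$ with $f(z_2) = 9$.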
<a id="step_by_step"></a>
# Solving the Relaxed Battery Revenue Optimization Problem Step by Step
## Simplified problem
To see that the AQC procedure works, let's first try to solve a simplified version of the problem: Let's only consider the return part and assume there is no cost associated to battery degradation.
In that case, the problem reduces to:
\begin{equation*}
\text{maximize } f(z)=return(z) = \sum_{t=1}^n \left((1-z_t)\lambda_1^t + z_t\lambda_2^t\right)
\end{equation*}
## Solving the Simplified Problem with AQC
To solve the simplified maximization problem, we need a qubit register of size N (N is the number of days) to encode the battery schedules. We will choose the operator $B = \sum_{i=1}^{N} \sigma_x^{(i)}$ and the initial state $\left|\psi_0\right\rangle = \left|+\right\rangle^{\otimes N}$.
To implement the operator $U(B, \beta) = \exp(-iB\beta)$, we use the single qubit gate RX, defined by:
\begin{equation*}
\text{RX}(\theta) = \exp\left(-i\frac{\theta}{2}\sigma_x\right),
\end{equation*}
and for the operator $U(C, \gamma)$, we can make use of another single qubit gate, the phase gate:
\begin{equation*}
\text{P}(\lambda) = \left[ \begin{array}{cc} 1 & 0 \\ 0 & e^{i\lambda} \end{array} \right].
\end{equation*}
You should convince yourself that the operator
\begin{equation*}
\otimes_{t=1}^{N} \text{P}\left(-\gamma(\lambda_2^t-\lambda_1^t)\right)
\end{equation*}
is equivalent to $U(C, \gamma)$ up to a global phase, such that $C\left|z\right\rangle = f(z)\left|z\right\rangle$.
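Rather than taking this on faith, the equivalence can be checked numerically for the two-day example. A numpy sketch (the value of $\gamma$ is arbitrary):

```python
import numpy as np

# Check that the tensor product of phase gates P(-gamma*(L2_t - L1_t)) equals
# U(C, gamma) = exp(-i*gamma*C) up to a global phase, where C|z> = return(z)|z>.
L1, L2 = np.array([1, 3]), np.array([2, 8])
gamma = 0.37
n = 2

def P(lam):
    return np.diag([1, np.exp(1j * lam)])

U_phase = np.array([[1.0 + 0j]])
for t in range(n):
    U_phase = np.kron(U_phase, P(-gamma * (L2[t] - L1[t])))

# Diagonal operator exp(-i*gamma*C) with C|z> = return(z)|z>
diag_f = []
for z in range(2 ** n):
    bits = [(z >> (n - 1 - t)) & 1 for t in range(n)]  # z_1 on the first qubit
    diag_f.append(sum((1 - b) * l1 + b * l2 for b, l1, l2 in zip(bits, L1, L2)))
U_C = np.diag(np.exp(-1j * gamma * np.array(diag_f)))

ratio = np.diag(U_phase) / np.diag(U_C)
# All ratios are equal: the two operators differ only by a global phase
assert np.allclose(ratio, ratio[0])
```

The constant ratio is exactly $e^{i\gamma\sum_t\lambda_1^t}$, the discarded global phase.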
Let's implement the AQC scheme in qiskit!
```
from typing import List, Union
import math, time
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, AncillaRegister, assemble, execute, Aer
from qiskit.compiler import transpile
from qiskit.circuit import Gate
from qiskit.circuit.library.standard_gates import *
simulator = Aer.get_backend('aer_simulator')
z = QuantumRegister(2)
qc = QuantumCircuit(z)
# Initialize state
qc.h(z)
# Number of time slices
p = 5
for i in range(p):
    beta = 1 - (i+1)/p
    gamma = (i+1)/p
    # U(C, γ)
    for j in range(len(L1)):
        qc.p(-gamma*(L2[j]-L1[j]), z[j])
    # U(B, β)
    qc.rx(2*beta, z)
qc.measure_all()
result = simulator.run(qc).result()
plot_histogram(result.get_counts())
```
We have successfully solved the simplified optimization problem using AQC! The algorithm finds the schedule with the highest return.
**Try to change the number of time slices p!** What happens when p becomes too small? Remember, p controls the "slowness" of the time evolution and has to be large enough, so that the adiabatic approximation still holds.
## General approach
Let's now solve the original problem, taking into account the cost penalty in our operator $U(C, \gamma)$.
We will construct the operator in the following steps:
1. return part ✅
2. penalty part
1. Cost calculation (data encoding)
2. Constraint testing (marking the indices whose data exceed $C_{max}$)
3. Penalty dephasing (adding penalty to the marked indices)
4. Reinitialization of constraint testing and cost calculation (clean the data register and flag register)
### 2A. Cost Calculation
In order to build the penalty part of the operator $C$, we first need to calculate the total cost of the state $z$:
\begin{equation*}
\text{cost}(z) = \sum_{t=1}^N (1-z_t)c_1^t + z_t c_2^t
\end{equation*}
We want to encode the total cost in an ancilla register, in order to get the entangled state
\begin{equation*}
\left|\psi\right\rangle = \left|z\right\rangle \otimes \left|cost(z)\right\rangle.
\end{equation*}
But how do we perform addition on a quantum computer?
#### The ripple-carry adder
Let's start by implementing a circuit that uses a technique often found in classical binary arithmetic, the ripple-carry adder.
```
def ripple_carry_adder(data_qubits: int, const: int) -> QuantumCircuit:
    def add_block(data_qubits: int, k: int) -> QuantumCircuit:
        """ |z> = |z0 z1 ... zn> -> |z+2^k> """
        N = data_qubits
        qr_data = QuantumRegister(data_qubits, "data")
        qr_temp = QuantumRegister(data_qubits-1, "temp")
        qc = QuantumCircuit(qr_data, qr_temp, name=f"const_adder({const})")
        # calculate carry bits
        qc.cx(qr_data[k], qr_temp[0])
        for i in range(N-2-k):
            qc.ccx(qr_data[k+1+i], qr_temp[i], qr_temp[i+1])
        # backpropagate carry
        for i in reversed(range(N-2-k)):
            qc.cx(qr_temp[i+1], qr_data[k+2+i])
            qc.ccx(qr_data[k+1+i], qr_temp[i], qr_temp[i+1])
        qc.cx(qr_temp[0], qr_data[k+1])
        qc.cx(qr_data[k], qr_temp[0])
        # flip qubit k
        qc.x(qr_data[k])
        return qc

    bin_const = [int(x) for x in bin(const)[2:]]
    qr_data = QuantumRegister(data_qubits, "data")
    qr_temp = QuantumRegister(data_qubits-1, "temp")
    qc = QuantumCircuit(qr_data, qr_temp, name=f"const_adder({const})")
    for k, b in enumerate(reversed(bin_const)):
        if b:
            qc.append(add_block(data_qubits, k), qr_data[:] + qr_temp[:])
            qc.barrier()
    return qc
```
Let's first test the implementation to make sure that the adder works.
We test the addition of two numbers a and b, using the ripple-carry adder circuit we just implemented:
1. Encode the binary representation of the number a in the initial state of the data register
2. Use the ripple-carry adder circuit to add the number b
3. Measure the state of the data register
```
def test_ripple_carry_adder(a, b):
    """ Calculate a + b using the ripple-carry adder. """
    is_pow2 = (a + b) & (a + b - 1) == 0
    data_qubits = math.ceil(math.log(a + b, 2)) + (2 if is_pow2 else 1)
    qr_data = QuantumRegister(data_qubits, "data")
    qr_temp = QuantumRegister(data_qubits-1, "temp")
    cr_data = ClassicalRegister(data_qubits, "c_data")
    qc = QuantumCircuit(qr_data, qr_temp, cr_data)
    # Step 1: Encode the number a into the qr_data register
    binary_a = bin(a)[2:]
    for i, bit in enumerate(reversed(binary_a)):
        if bit == '1':
            qc.x(qr_data[i])
    # Step 2: Add the number b using the ripple-carry adder
    qc.append(ripple_carry_adder(data_qubits, b), qr_data[:] + qr_temp[:])
    # Step 3: Measure the result
    qc.measure(qr_data, cr_data)
    result = simulator.run(transpile(qc, backend=simulator)).result()
    # Make sure the measurement outcome corresponds to the addition a+b
    values = np.asarray([int(v, 2) for v in result.get_counts().keys()])
    counts = np.asarray(list(result.get_counts().values()))
    assert values[np.argmax(counts)] == a + b

test_ripple_carry_adder(5, 3)
test_ripple_carry_adder(4, 5)
test_ripple_carry_adder(1, 0)
```
How does the ripple-carry adder perform addition? Let's draw the circuit that adds the constant 1 to a data register of size 5!
```
ripple_carry_adder(data_qubits=5, const=1).decompose().draw('mpl')
```
As we can see, adding the constant 1 already results in a rather complex circuit. We note:
1. The ripple-carry adder needs N-1 ancilla qubits for a data register of size N
2. The carry ripples (hence the name) from the least significant bit affected by the addition, through the ancilla register (temp), up to the most significant bit of the data register, and then back down
#### QFT Adder
While the ripple-carry adder performs addition in the computational basis, the QFT adder [**[3]**](https://arxiv.org/abs/1411.5949) performs addition in the Fourier basis (have a look at the Qiskit textbook [**[4]**](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html) for a great introduction to the Fourier basis and the Quantum Fourier Transform!). Let's see an implementation:
```
def qft(data_qubits: int, inverse=False) -> QuantumCircuit:
    """Quantum Fourier Transform (https://arxiv.org/pdf/1411.5949.pdf)"""
    qc = QuantumCircuit(data_qubits)
    N = data_qubits - 1
    qc.h(N)
    for n in range(1, N+1):
        qc.cp(2*np.pi / 2**(n+1), N-n, N)
    for i in range(1, N):
        qc.h(N-i)
        for n in range(1, N-i+1):
            qc.cp(2*np.pi / 2**(n+1), N-(n+i), N-i)
    qc.h(0)
    # Invert the circuit to get the IQFT
    qc = qc.inverse() if inverse else qc
    return qc

def qft_add(data_qubits: int, const: int) -> QuantumCircuit:
    """QFT Adder (https://arxiv.org/pdf/1411.5949.pdf)"""
    qc = QuantumCircuit(data_qubits)
    N = data_qubits - 1
    bin_const = [int(x) for x in bin(const)[2:]] # Big endian
    bin_const = [0]*(N-len(bin_const)) + bin_const
    for n in range(1, N+1):
        if bin_const[n-1]:
            qc.p(2*np.pi / 2**(n+1), N)
    for i in range(1, N+1):
        for n in range(N-i+1):
            if bin_const[n+i-1]:
                qc.p(2*np.pi / 2**(n+1), N-i)
    return qc

def qft_const_adder(data_qubits: int, const: int) -> QuantumCircuit:
    qr_data = QuantumRegister(data_qubits, "data")
    qc = QuantumCircuit(qr_data)
    qc.append(qft(data_qubits), qr_data)
    qc.barrier()
    qc.append(qft_add(data_qubits, const), qr_data)
    qc.barrier()
    qc.append(qft(data_qubits, inverse=True), qr_data)
    qc.barrier()
    return qc

def test_qft_adder(a, b):
    """ Calculate a + b using the QFT adder. """
    is_pow2 = (a + b) & (a + b - 1) == 0
    data_qubits = math.ceil(math.log(a + b, 2)) + (2 if is_pow2 else 1)
    qr_data = QuantumRegister(data_qubits, "data")
    cr_data = ClassicalRegister(data_qubits, "c_data")
    qc = QuantumCircuit(qr_data, cr_data)
    # Step 1: Encode the number a into the qr_data register
    binary_a = bin(a)[2:]
    for i, bit in enumerate(reversed(binary_a)):
        if bit == '1':
            qc.x(qr_data[i])
    # Step 2: Apply the Quantum Fourier Transform
    qc.append(qft(data_qubits), qr_data)
    # Step 3: Add the number b using the QFT adder
    qc.append(qft_add(data_qubits, b), qr_data)
    # Step 4: Apply the inverse Quantum Fourier Transform
    qc.append(qft(data_qubits, inverse=True), qr_data)
    # Step 5: Measure the result
    qc.measure(qr_data, cr_data)
    result = simulator.run(transpile(qc, backend=simulator)).result()
    # Make sure the measurement outcome corresponds to the addition a+b
    values = np.asarray([int(v, 2) for v in result.get_counts().keys()])
    counts = np.asarray(list(result.get_counts().values()))
    assert values[np.argmax(counts)] == a + b

test_qft_adder(5, 3)
test_qft_adder(4, 5)
test_qft_adder(1, 0)

qft_const_adder(data_qubits=5, const=1).decompose().draw('mpl')
```
As we can see, the QFT adder consists of three stages:
1. First, the Quantum Fourier Transform is applied
2. The add operation is performed
3. The resulting state is transformed back into the computational basis
We see that the overhead of applying the transform and its inverse is large, and the final circuit looks much more complex than that of the ripple-carry adder. But the actual adding stage is now much smaller. This becomes useful, as we will have to calculate sums with many summands and only have to apply the QFT once at the beginning and its inverse once at the end!
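To build intuition for why the adding stage is so cheap, the same arithmetic can be reproduced with plain numpy matrices: the (I)QFT is a change of basis, and the addition itself is just a diagonal phase shift. A dense-matrix sketch (not a circuit):

```python
import numpy as np

# Addition in the Fourier basis: the DFT maps |a> to a vector of phases
# exp(2*pi*i*a*k/2**n); adding the constant b is a further phase shift
# exp(2*pi*i*b*k/2**n); transforming back yields |a+b mod 2**n>.
n = 4
dim = 2 ** n
F = np.array([[np.exp(2j * np.pi * j * k / dim) for k in range(dim)]
              for j in range(dim)]) / np.sqrt(dim)

def qft_add_const(a, b):
    state = np.zeros(dim, dtype=complex)
    state[a] = 1.0                                           # encode |a>
    state = F @ state                                        # to the Fourier basis
    state *= np.exp(2j * np.pi * b * np.arange(dim) / dim)   # phase-shift by b
    state = F.conj().T @ state                               # back to the computational basis
    return int(np.argmax(np.abs(state)))                     # measured value

print(qft_add_const(5, 3))  # 8
```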
#### The cost_calculation function
We now know two methods to perform an addition on a quantum computer. Let's implement the cost_calculation function:
```
def cost_calculation(z_qubits: int, cost_qubits: int, C1: list, C2: list) -> QuantumCircuit:
    qr_z = QuantumRegister(z_qubits, "z")
    qr_cost = QuantumRegister(cost_qubits, "cost")
    qc = QuantumCircuit(qr_z, qr_cost)
    qc.append(qft(cost_qubits).to_gate(label=" [ QFT ] "), qr_cost)
    for i, (c1, c2) in enumerate(zip(C1, C2)):
        # if z_i = 1, add c2 (market M2); if z_i = 0, add c1 (market M1)
        qc.append(qft_add(cost_qubits, c2).to_gate(label=f" [ +{c2} ] ").control(1), [qr_z[i]] + qr_cost[:])
        qc.x(qr_z[i])
        qc.append(qft_add(cost_qubits, c1).to_gate(label=f" [ +{c1} ] ").control(1), [qr_z[i]] + qr_cost[:])
        qc.x(qr_z[i])
    qc.append(qft(cost_qubits, inverse=True).to_gate(label=" [ IQFT ] "), qr_cost)
    return qc

cost_calculation(2, 5, C1, C2).draw("mpl")
```
We will look at ways to optimize this circuit later.
### 2B. Constraint Testing
Next, we need to test the condition $cost(z) > C_\text{max}$, as we only want to apply the penalty to states that violate the constraint. We add another ancilla qubit to encode the truth value of this condition. Our final state should be the entangled state:
\begin{equation*}
\left|\psi\right\rangle = \left|z\right\rangle \otimes \left|cost(z)\right\rangle \otimes \left|cost(z) > C_\text{max}\right\rangle
\end{equation*}
Let's assume $C_\text{max} = 2^c - 1$ for some integer $c$. Then the condition is false if and only if
\begin{equation*}
cost(z)_{i} = 0\,\,\, \forall i \geq c,\text{ where } cost(z) = \sum_{i=0}^{M} 2^i cost(z)_i\text{ is the binary decomposition of }cost(z).
\end{equation*}
But what if $C_\text{max}$ is not of this form? Well, we use a trick! The conditions $cost(z) > C_\text{max}$ and $cost(z) + w > C_\text{max} + w$ are equivalent for any constant $w$, so we can always choose $w$ such that $C_\text{max} + w = 2^c - 1$. Specifically, let the quantum register for $cost(z)$ have size $M+1$, set $c = M$, and define $w = 2^c - 1 - C_\text{max}$. To test the condition $cost(z) > C_\text{max}$, we first check that $w > 0$; if not, $cost(z)$ can never get larger than $C_\text{max}$ anyway. Otherwise, we add $w$ to the cost register and simply check whether bit $c$ of $cost(z)+w$ is 1.
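Before implementing this as a circuit, we can sanity-check the trick with plain Python integers (a classical sketch; `flag_bit` is a helper name introduced here for illustration, not part of the exercise):

```python
def flag_bit(cost: int, C_max: int, cost_qubits: int) -> int:
    """Classically emulate the constraint test: add w, then read bit c."""
    c = cost_qubits - 1
    w = 2**c - C_max - 1          # chosen so that C_max + w = 2^c - 1
    shifted = cost + max(w, 0)    # the QFT adder performs this addition
    return (shifted >> c) & 1     # the flag qubit copies bit c

# With 5 cost qubits (c = 4) and C_max = 6: cost > 6 exactly when the flag is set
for cost in range(16):
    assert flag_bit(cost, C_max=6, cost_qubits=5) == (1 if cost > 6 else 0)
```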
Let's implement the constraint_testing function:
```
def constraint_testing(cost_qubits: int, C_max: int) -> QuantumCircuit:
    """Test the condition cost(z) > C_max and set a flag F if the condition
    holds true.
    """
    qr_cost = QuantumRegister(cost_qubits, "cost")
    qr_f = QuantumRegister(1, "flag")
    qc = QuantumCircuit(qr_cost, qr_f)
    c = cost_qubits-1
    w = 2**c - C_max - 1
    if w > 0:
        # Add w to cost(z)
        qc.append(qft(cost_qubits), qr_cost)
        qc.append(qft_add(cost_qubits, w), qr_cost)
        qc.append(qft(cost_qubits, inverse=True), qr_cost)
    # Set F if the condition cost(z) > C_max holds
    qc.cx(qr_cost[c], qr_f)
    return qc
```
### 2C. Penalty Dephasing
Now we have the entangled state
\begin{equation*}
\left|\psi\right\rangle = \left|z\right\rangle \otimes \left|cost(z)\right\rangle \otimes \left|cost(z) > C_\text{max}\right\rangle
\end{equation*}
and we need to construct the penalty part of the operator $U(C, \gamma)$. As in the return part, we will use the phase gate.
Let's first see the implementation of the penalty_dephasing function:
```
def penalty_dephasing(cost_qubits: int, alpha: float, gamma: float) -> QuantumCircuit:
    """Apply the penalty part of U(C, γ)."""
    qr_cost = QuantumRegister(cost_qubits, "cost")
    qr_f = QuantumRegister(1, "flag")
    qc = QuantumCircuit(qr_cost, qr_f)
    # Add phase alpha * (cost(z) + w) to all infeasible states
    for i in range(cost_qubits):
        qc.cp((2**i) * alpha * gamma, qr_f, qr_cost[i])
    # Add phase -alpha * (C_max + w) = -alpha * (2^(cost_qubits-1) - 1) to
    # all infeasible states
    C_max_plus_w = 2**(cost_qubits-1)-1
    qc.p(-C_max_plus_w * alpha * gamma, qr_f)
    return qc
```
There is quite a lot going on here. Some remarks:
1. We apply controlled phase (cp) gates to each qubit in the cost register, controlled by the flag qubit. We are adding a phase $\alpha \gamma 2^i$ to each qubit $i$ in the cost register if both the qubit itself and the control qubit, the flag, are in the state $\left|1\right\rangle$.
2. But how does this affect the z register? After all, that's the register that encodes our schedules and to which we want to apply our operator $U(C, \gamma)$. Remember that our state $\left|z\right\rangle\otimes\left|cost(z)\right\rangle\otimes\left|cost(z)>C_\text{max}\right\rangle$ is entangled. As we saw in the beginning, we can't describe it in terms of single-qubit states anymore.
3. The second part is applied with a phase gate on the flag qubit. Remember that we added the constant $w$ to $cost(z)$ in the previous step. That's not a big issue: we can simply adjust our penalty function:
\begin{equation*}
penalty(z)=\left\{\begin{array}{ll}
0 & \text{if}\quad cost(z) \leq C_{\max } \\
-\alpha\left((cost(z)+w)-(C_{\max }+w)\right) & \text{if} \quad cost(z) > C_{\max },\ \alpha>0 \quad \text{constant}
\end{array}\right.
\end{equation*}
By applying the phase gate with a phase $\theta = -\alpha \gamma (C_\text{max}+w)$ to the flag qubit, we add a relative phase $\theta = -\alpha \gamma (C_\text{max}+w)$ only if $cost(z) > C_\text{max}$.
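A quick classical check that the two phase contributions combine to $\alpha\gamma(cost(z) - C_\text{max})$ for infeasible states (plain Python, mirroring the gate arguments above; the register size and $C_\text{max}$ here are illustrative choices):

```python
alpha, gamma = 1.0, 1.0
cost_qubits = 5
c = cost_qubits - 1
C_max = 6
w = 2**c - C_max - 1  # so that C_max + w = 2^c - 1

for cost in range(10):
    shifted = cost + w  # contents of the cost register after adding w
    # phase from the controlled-phase gates: alpha*gamma*(cost + w)
    phase_cp = sum((2**i) * alpha * gamma * ((shifted >> i) & 1)
                   for i in range(cost_qubits))
    # phase from the single phase gate on the flag: -alpha*gamma*(C_max + w)
    phase_p = -(2**c - 1) * alpha * gamma
    # net relative phase for an infeasible state
    assert phase_cp + phase_p == alpha * gamma * (cost - C_max)
```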
This concludes the penalty part of the $U(C, \gamma)$ operator! Let's see it in action:
```
def plot_penalty_dephased_states():
    z_qubits = len(L1)
    # Calculate required size of cost register
    max_c = sum([max(l0, l1) for l0, l1 in zip(C1, C2)])
    cost_qubits = math.ceil(math.log(max_c, 2)) + 1 if not max_c & (max_c - 1) == 0 else math.ceil(math.log(max_c, 2)) + 2
    qr_z = QuantumRegister(z_qubits, name="z")
    qr_cost = QuantumRegister(cost_qubits, name="cost")
    qr_f = QuantumRegister(1, name="flag")
    cr_z = ClassicalRegister(z_qubits, name="c_z")
    qc = QuantumCircuit(qr_z, qr_cost, qr_f, cr_z)
    # Initialize z register
    qc.h(qr_z)
    qc.save_statevector(label="state1")
    # Calculate cost(z)
    qc.append(cost_calculation(z_qubits, cost_qubits, C1, C2), qr_z[:] + qr_cost[:])
    qc.save_statevector(label="state2")
    # Test constraint
    qc.append(constraint_testing(cost_qubits, C_max), qr_cost[:] + qr_f[:])
    qc.save_statevector(label="state3")
    # Apply penalty dephasing
    qc.append(penalty_dephasing(cost_qubits, 1.0, 1.0), qr_cost[:] + qr_f[:])
    qc.save_statevector(label="state4")
    # Measure register z
    qc.measure(qr_z, cr_z)
    # Run the circuit
    result = simulator.run(qc.decompose().decompose()).result().results[0].data
    state1 = result.state1
    state2 = result.state2
    state3 = result.state3
    state4 = result.state4
    # Plot the states
    print("1) The initial state is a uniform superposition of all possible schedules")
    display(plot_state_qsphere(state1, show_state_phases=True))
    print("2) After cost calculation, the state represents |z> ⊗ |cost(z)> ⊗ |0>")
    display(plot_state_qsphere(state2, show_state_phases=True))
    print("3) After constraint testing, the state represents |z> ⊗ |cost(z)+w> ⊗ |cost(z)>C_max>")
    display(plot_state_qsphere(state3, show_state_phases=True))
    print("""4) After penalty dephasing, the components of the state vector
    |z> ⊗ |cost(z)+w> ⊗ |cost(z)>C_max> corresponding to basis states,
    where the highest order qubit (the flag) is 1 (here |11000111>) gain a relative phase
    θ = cost(z)-C_max. In the simple example with C_max = 6, the only infeasible schedule is z = 11,
    with cost(z) = 8. And, as expected, we see a relative phase θ = 2!""")
    display(plot_state_qsphere(state4, show_state_phases=True))

plot_penalty_dephased_states()
```
### 2D. Reinitialization
Before we continue, we need to reset all ancilla qubits to the state $\left|0\right\rangle$, so we can reuse them when we recalculate operator $U(C, \gamma)$ for the next $\gamma$.
```
def reinitialization(z_qubits: int, cost_qubits: int, C1: list, C2: list, C_max: int) -> QuantumCircuit:
    """Reinitialize the ancilla qubits to |0>."""
    qr_z = QuantumRegister(z_qubits, "z")
    qr_cost = QuantumRegister(cost_qubits, "cost")
    qr_f = QuantumRegister(1, "F")
    qc = QuantumCircuit(qr_z, qr_cost, qr_f)
    qc.append(constraint_testing(cost_qubits, C_max).inverse().to_gate(label=" Constraint Testing + "), qr_cost[:] + qr_f[:])
    qc.append(cost_calculation(z_qubits, cost_qubits, C1, C2).inverse().to_gate(label=" Cost Calculation + "), qr_z[:] + qr_cost[:])
    return qc
```
## Putting it all together
We now have all the parts; let's put them together:
```
def solver_function(L1: list, L2: list, C1: list, C2: list, C_max: int) -> QuantumCircuit:
    # the number of qubits representing answers
    z_qubits = len(L1)
    # the maximum possible total cost
    max_c = sum([max(l0, l1) for l0, l1 in zip(C1, C2)])
    # the number of qubits representing data values can be defined using the maximum possible total cost as follows:
    cost_qubits = math.ceil(math.log(max_c, 2)) + 1 if not max_c & (max_c - 1) == 0 else math.ceil(math.log(max_c, 2)) + 2

    ### U(C, γ) ###
    # 1. return part
    def phase_return(z_qubits: int, gamma: float, L1: list, L2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
        qr_z = QuantumRegister(z_qubits, "z")
        qc = QuantumCircuit(qr_z)
        for i in range(z_qubits):
            qc.p(-gamma*(L2[i]-L1[i])/2, qr_z[i])
        return qc.to_gate(label=" phase return ") if to_gate else qc

    # 2. penalty part
    # 2A. Cost calculation
    def qft(data_qubits: int, inverse=False, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Quantum Fourier Transform (https://arxiv.org/pdf/1411.5949.pdf)"""
        qc = QuantumCircuit(data_qubits)
        N = data_qubits-1
        qc.h(N)
        for n in range(1, N+1):
            qc.cp(2*np.pi / 2**(n+1), N-n, N)
        for i in range(1, N):
            qc.h(N-i)
            for n in range(1, N-i+1):
                qc.cp(2*np.pi / 2**(n+1), N-(n+i), N-i)
        qc.h(0)
        # Calculate IQFT
        qc = qc.inverse() if inverse else qc
        return qc.to_gate(label=" QFT ") if to_gate and not inverse else qc.to_gate(label=" IQFT ") if to_gate else qc

    def qft_add(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """QFT Adder (https://arxiv.org/pdf/1411.5949.pdf)"""
        qc = QuantumCircuit(data_qubits)
        N = data_qubits-1
        bin_const = [int(x) for x in bin(const)[2:]]  # Big endian
        bin_const = [0]*(N-len(bin_const)) + bin_const
        for n in range(1, N+1):
            if bin_const[n-1]:
                qc.p(2*np.pi / 2**(n+1), N)
        for i in range(1, N+1):
            for n in range(N-i+1):
                if bin_const[n+i-1]:
                    qc.p(2*np.pi / 2**(n+1), N-i)
        return qc.to_gate(label=f" [ +{const} ] ") if to_gate else qc

    def cost_calculation(z_qubits: int, cost_qubits: int, C1: list, C2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
        qr_z = QuantumRegister(z_qubits, "z")
        qr_cost = QuantumRegister(cost_qubits, "cost")
        qc = QuantumCircuit(qr_z, qr_cost)
        qc.append(qft(cost_qubits), qr_cost)
        for i, (c1, c2) in enumerate(zip(C1, C2)):
            qc.append(qft_add(cost_qubits, c2).control(1), [qr_z[i]] + qr_cost[:])
            qc.x(qr_z[i])
            qc.append(qft_add(cost_qubits, c1).control(1), [qr_z[i]] + qr_cost[:])
            qc.x(qr_z[i])
        qc.append(qft(cost_qubits, inverse=True), qr_cost)
        return qc.to_gate(label=" cost_calculation ") if to_gate else qc

    def constraint_testing(cost_qubits: int, C_max: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Test the condition cost(z) > C_max and set a flag F if the condition
        holds true.
        """
        qr_cost = QuantumRegister(cost_qubits, "cost")
        qr_f = QuantumRegister(1, "flag")
        qc = QuantumCircuit(qr_cost, qr_f)
        c = cost_qubits-1
        w = 2**c - C_max - 1
        if w > 0:
            # Add w to cost(z)
            qc.append(qft(cost_qubits), qr_cost)
            qc.append(qft_add(cost_qubits, w), qr_cost)
            qc.append(qft(cost_qubits, inverse=True), qr_cost)
        # Set F if the condition cost(z) > C_max holds
        qc.cx(qr_cost[c], qr_f)
        return qc.to_gate(label=" constraint_testing ") if to_gate else qc

    def penalty_dephasing(cost_qubits: int, alpha: float, gamma: float, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Apply the penalty part of U(C, γ)."""
        qr_cost = QuantumRegister(cost_qubits, "cost")
        qr_f = QuantumRegister(1, "flag")
        qc = QuantumCircuit(qr_cost, qr_f)
        # Add phase alpha * (cost(z) + w) to all infeasible states
        for i in range(cost_qubits):
            qc.cp((2**i) * alpha * gamma, qr_f, qr_cost[i])
        # Add phase -alpha * (C_max + w) = -alpha * (2^(cost_qubits-1) - 1) to
        # all infeasible states
        C_max_plus_w = 2**(cost_qubits-1)-1
        qc.p(-C_max_plus_w * alpha * gamma, qr_f)
        return qc.to_gate(label=" penalty_dephasing ") if to_gate else qc

    def reinitialization(z_qubits: int, cost_qubits: int, C1: list, C2: list, C_max: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Reinitialize the ancilla qubits to |0>."""
        qr_z = QuantumRegister(z_qubits, "z")
        qr_cost = QuantumRegister(cost_qubits, "cost")
        qr_f = QuantumRegister(1, "flag")
        qc = QuantumCircuit(qr_z, qr_cost, qr_f)
        qc.append(constraint_testing(cost_qubits, C_max, False).inverse().to_gate(label=" Constraint Testing + "), qr_cost[:] + qr_f[:])
        qc.append(cost_calculation(z_qubits, cost_qubits, C1, C2, False).inverse().to_gate(label=" Cost Calculation + "), qr_z[:] + qr_cost[:])
        return qc.to_gate(label=" reinitialization ") if to_gate else qc

    ### U(B, β) ###
    def mixing_operator(z_qubits: int, beta: float, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Operator U(B, β)."""
        qr_z = QuantumRegister(z_qubits, "index")
        qc = QuantumCircuit(qr_z)
        qc.rx(2*beta, qr_z)
        return qc.to_gate(label=" Mixing Operator ") if to_gate else qc

    qr_z = QuantumRegister(z_qubits, "z")  # index register
    qr_cost = QuantumRegister(cost_qubits, "cost")  # data register
    qr_f = QuantumRegister(1, "flag")  # flag register
    cr_z = ClassicalRegister(z_qubits, "c_z")  # classical register storing the measurement result of index register
    qc = QuantumCircuit(qr_z, qr_cost, qr_f, cr_z)
    ### initialize the index register with uniform superposition state ###
    qc.h(qr_z)
    ### DO NOT CHANGE THE CODE BELOW
    p = 5
    alpha = 1
    for i in range(p):
        ### set fixed parameters for each round ###
        beta = 1 - (i + 1) / p
        gamma = (i + 1) / p
        ### return part ###
        qc.append(phase_return(z_qubits, gamma, L1, L2), qr_z)
        ### step 1: cost calculation ###
        qc.append(cost_calculation(z_qubits, cost_qubits, C1, C2), qr_z[:] + qr_cost[:])
        ### step 2: Constraint testing ###
        qc.append(constraint_testing(cost_qubits, C_max), qr_cost[:] + qr_f[:])
        ### step 3: penalty dephasing ###
        qc.append(penalty_dephasing(cost_qubits, alpha, gamma), qr_cost[:] + qr_f[:])
        ### step 4: reinitialization ###
        qc.append(reinitialization(z_qubits, cost_qubits, C1, C2, C_max), qr_z[:] + qr_cost[:] + qr_f[:])
        ### mixing operator ###
        qc.append(mixing_operator(z_qubits, beta), qr_z)
    ### measure the index ###
    ### since the default measurement outcome is shown in big endian, it is necessary to reverse the classical bits in order to unify the endian ###
    qc.measure(qr_z, cr_z[::-1])
    return qc
# Execute your circuit with following prepare_ex4c() function.
# The prepare_ex4c() function works like the execute() function with only QuantumCircuit as an argument.
from qc_grader import prepare_ex4c
job = prepare_ex4c(solver_function)
result = job.result()
# Check your answer and submit using the following code
from qc_grader.grade import grade_job
grade_job(job, '4c')
```
<a id="optimizing"></a>
# Optimizing the Solution
Good! The solution works correctly, but the circuit is rather expensive (the score is high). Let's see if we can optimize our circuit a bit.
1. First, as I already mentioned, the advantage of using the QFT adder is that we only need to apply the transform once, add all the numbers, and transform back. Right now, we are transforming once in *cost_calculation* and again in *constraint_testing*. We can save one QFT and one IQFT here.
2. In *cost_calculation*, we are applying the QFT to the cost register. But this register is always in the $\left|0\right\rangle$ state, as we reset all ancilla registers in *reinitialization*! The QFT of $\left|0\right\rangle$ is simply the uniform superposition state. So instead of applying the full QFT, we can simply apply Hadamard gates to every qubit in the cost register.
3. Again in *cost_calculation*, we are adding $C_2^i$ to cost(z), controlled by $\left|z^i\right\rangle$. Then we flip $z^i$ by applying a Pauli X gate and add $C_1^i$ to cost(z), controlled by the flipped qubit. Finally, we flip $z^i$ back. We can save half of the additions and all of the Pauli X gates using the following trick: we first calculate $C_\text{diff}^i = C_2^i - C_1^i$ and only add $C_\text{diff}^i$ to cost(z), controlled by $\left|z^i\right\rangle$. As stated in the problem description, $C_2^i > C_1^i$. We now have $cost'(z) = \sum_i z^i C_\text{diff}^i = cost(z) - \sum_i C_1^i$, so we account for the $C_1^i$ we subtracted by rescaling $C_\text{max} \rightarrow C_\text{max} - \sum_i C_1^i$.
4. We can modify the QFT adder and implement an approximation: Note that in both functions *qft* and *qft_add*, we apply phase gates with a phase argument of the form $2\pi / 2^{n+1}$. When n gets large, the effect of this gate becomes small and we can remove them.
5. Finally, we can apply Qiskit's circuit optimizer by transpiling with `optimization_level=3`.
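The claim in point 2 — that the QFT maps $\left|0\ldots0\right\rangle$ to the uniform superposition produced by a layer of Hadamards — can be checked numerically with the DFT matrix (a small numpy sketch; qubit ordering is ignored, which doesn't matter for the all-zeros input):

```python
import numpy as np

n = 3
dim = 2**n
# DFT matrix: the unitary the QFT implements (up to qubit ordering)
omega = np.exp(2j * np.pi / dim)
F = np.array([[omega**(j*k) for k in range(dim)] for j in range(dim)]) / np.sqrt(dim)

zero = np.zeros(dim)
zero[0] = 1.0                           # |0...0>
uniform = np.full(dim, 1/np.sqrt(dim))  # H^{⊗n}|0...0>

assert np.allclose(F @ zero, uniform)
```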
<a id="final"></a>
# Final Solution
```
def solver_function_optimized(L1: list, L2: list, C1: list, C2: list, C_max: int) -> QuantumCircuit:
    # the number of qubits representing answers
    z_qubits = len(L1)
    C_diff = [c2-c1 for c1, c2 in zip(C1, C2)]
    C_max -= sum(C1)
    # the maximum possible total cost
    max_c = sum(C_diff)
    # the number of qubits representing data values can be defined using the maximum possible total cost as follows:
    cost_qubits = math.ceil(math.log(max_c, 2)) + 1 if not max_c & (max_c - 1) == 0 else math.ceil(math.log(max_c, 2)) + 2

    ### U(C, γ) ###
    # 1. return part
    def phase_return(z_qubits: int, gamma: float, L1: list, L2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Apply the return part of U(C, γ)"""
        qr_z = QuantumRegister(z_qubits, "z")
        qc = QuantumCircuit(qr_z)
        for i in range(z_qubits):
            qc.p(-gamma*(L2[i]-L1[i])/2, qr_z[i])
        return qc.to_gate(label=" phase return ") if to_gate else qc

    # 2. penalty part
    # 2A. Cost calculation
    def qft(data_qubits: int, inverse=False, max_n=2, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Quantum Fourier Transform (https://arxiv.org/pdf/1411.5949.pdf)"""
        qc = QuantumCircuit(data_qubits)
        N = data_qubits-1
        qc.h(N)
        for n in range(1, N+1):
            if n > max_n:
                break
            qc.cp(2*np.pi / 2**(n+1), N-n, N)
        for i in range(1, N):
            qc.h(N-i)
            for n in range(1, N-i+1):
                if n > max_n:
                    break
                qc.cp(2*np.pi / 2**(n+1), N-(n+i), N-i)
        qc.h(0)
        # Calculate IQFT
        qc = qc.inverse() if inverse else qc
        return qc.to_gate(label=" QFT ") if to_gate and not inverse else qc.to_gate(label=" IQFT ") if to_gate else qc

    def qft_add(data_qubits: int, const: int, max_n=4, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """QFT Adder (https://arxiv.org/pdf/1411.5949.pdf)"""
        qc = QuantumCircuit(data_qubits)
        N = data_qubits-1
        bin_const = [int(x) for x in bin(const)[2:]]  # Big endian
        bin_const = [0]*(N-len(bin_const)) + bin_const
        for n in range(1, N+1):
            if bin_const[n-1]:
                if n > max_n:
                    break
                qc.p(2*np.pi / 2**(n+1), N)
        for i in range(1, N+1):
            for n in range(N-i+1):
                if n > max_n:
                    break
                if bin_const[n+i-1]:
                    qc.p(2*np.pi / 2**(n+1), N-i)
        return qc.to_gate(label=f" [ +{const} ] ") if to_gate else qc

    def cost_calculation(z_qubits: int, cost_qubits: int, C1: list, C2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Calculate cost(z)."""
        qr_z = QuantumRegister(z_qubits, "z")
        qr_cost = QuantumRegister(cost_qubits, "cost")
        qc = QuantumCircuit(qr_z, qr_cost)
        C_diff = [c2-c1 for c1, c2 in zip(C1, C2)]
        # The cost register is in |0>, so the QFT reduces to a layer of Hadamards
        qc.h(qr_cost)
        for i, c in enumerate(C_diff):
            qc.append(qft_add(cost_qubits, c).control(1), [qr_z[i]] + qr_cost[:])
        return qc.to_gate(label=" cost_calculation ") if to_gate else qc

    def constraint_testing(cost_qubits: int, C_max: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Test the condition cost(z) > C_max and set a flag F if the condition
        holds true.
        """
        qr_cost = QuantumRegister(cost_qubits, "cost")
        qr_f = QuantumRegister(1, "flag")
        qc = QuantumCircuit(qr_cost, qr_f)
        c = cost_qubits-1
        w = 2**c - C_max - 1
        if w > 0:
            # Add w to cost(z) (the register is still in the Fourier basis)
            qc.append(qft_add(cost_qubits, w), qr_cost)
        # Transform back into the computational basis
        qc.append(qft(cost_qubits, inverse=True), qr_cost)
        # Set F if the condition cost(z) > C_max holds
        qc.cx(qr_cost[c], qr_f)
        return qc.to_gate(label=" constraint_testing ") if to_gate else qc

    def penalty_dephasing(cost_qubits: int, alpha: float, gamma: float, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Apply the penalty part of U(C, γ)."""
        qr_cost = QuantumRegister(cost_qubits, "cost")
        qr_f = QuantumRegister(1, "flag")
        qc = QuantumCircuit(qr_cost, qr_f)
        # Add phase alpha * (cost(z) + w) to all infeasible states
        for i in range(cost_qubits):
            qc.cp((2**i) * alpha * gamma, qr_f, qr_cost[i])
        # Add phase -alpha * (C_max + w) = -alpha * (2^(cost_qubits-1) - 1) to
        # all infeasible states
        C_max_plus_w = 2**(cost_qubits-1)-1
        qc.p(-C_max_plus_w * alpha * gamma, qr_f)
        return qc.to_gate(label=" penalty_dephasing ") if to_gate else qc

    def reinitialization(z_qubits: int, cost_qubits: int, C1: list, C2: list, C_max: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Reinitialize the ancilla qubits to |0>."""
        qr_z = QuantumRegister(z_qubits, "z")
        qr_cost = QuantumRegister(cost_qubits, "cost")
        qr_f = QuantumRegister(1, "flag")
        qc = QuantumCircuit(qr_z, qr_cost, qr_f)
        qc.append(constraint_testing(cost_qubits, C_max, False).inverse().to_gate(label=" Constraint Testing + "), qr_cost[:] + qr_f[:])
        qc.append(cost_calculation(z_qubits, cost_qubits, C1, C2, False).inverse().to_gate(label=" Cost Calculation + "), qr_z[:] + qr_cost[:])
        return qc.to_gate(label=" reinitialization ") if to_gate else qc

    ### U(B, β) ###
    def mixing_operator(z_qubits: int, beta: float, to_gate=True) -> Union[Gate, QuantumCircuit]:
        """Operator U(B, β)."""
        qr_z = QuantumRegister(z_qubits, "index")
        qc = QuantumCircuit(qr_z)
        qc.rx(2*beta, qr_z)
        return qc.to_gate(label=" Mixing Operator ") if to_gate else qc

    qr_z = QuantumRegister(z_qubits, "z")  # index register
    qr_cost = QuantumRegister(cost_qubits, "cost")  # data register
    qr_f = QuantumRegister(1, "flag")  # flag register
    cr_z = ClassicalRegister(z_qubits, "c_z")  # classical register storing the measurement result of index register
    qc = QuantumCircuit(qr_z, qr_cost, qr_f, cr_z)
    ### initialize the index register with uniform superposition state ###
    qc.h(qr_z)
    ### DO NOT CHANGE THE CODE BELOW
    p = 5
    alpha = 1
    for i in range(p):
        ### set fixed parameters for each round ###
        beta = 1 - (i + 1) / p
        gamma = (i + 1) / p
        ### return part ###
        qc.append(phase_return(z_qubits, gamma, L1, L2), qr_z)
        ### step 1: cost calculation ###
        qc.append(cost_calculation(z_qubits, cost_qubits, C1, C2), qr_z[:] + qr_cost[:])
        ### step 2: Constraint testing ###
        qc.append(constraint_testing(cost_qubits, C_max), qr_cost[:] + qr_f[:])
        ### step 3: penalty dephasing ###
        qc.append(penalty_dephasing(cost_qubits, alpha, gamma), qr_cost[:] + qr_f[:])
        ### step 4: reinitialization ###
        qc.append(reinitialization(z_qubits, cost_qubits, C1, C2, C_max), qr_z[:] + qr_cost[:] + qr_f[:])
        ### mixing operator ###
        qc.append(mixing_operator(z_qubits, beta), qr_z)
    ### measure the index ###
    ### since the default measurement outcome is shown in big endian, it is necessary to reverse the classical bits in order to unify the endian ###
    qc.measure(qr_z, cr_z[::-1])

    from qc_grader.util import get_provider
    backend = get_provider().get_backend("ibmq_qasm_simulator")
    return transpile(qc, backend=backend, seed_transpiler=42, optimization_level=3)
# Execute your circuit with following prepare_ex4c() function.
# The prepare_ex4c() function works like the execute() function with only QuantumCircuit as an argument.
from qc_grader import prepare_ex4c
job = prepare_ex4c(solver_function_optimized)
result = job.result()
# Check your answer and submit using the following code
from qc_grader.grade import grade_job
grade_job(job, '4c')
```
# References
1. [Tameem Albash, Daniel A. Lidar, Adiabatic Quantum Computing (2016)](https://arxiv.org/abs/1611.04471)
2. [Pierre Dupuy de la Grand'rive, Jean-Francois Hullo, Knapsack Problem variants of QAOA for battery revenue optimisation (2019)](https://arxiv.org/abs/1908.02210)
3. [Lidia Ruiz-Perez, Juan Carlos Garcia-Escartin, Quantum arithmetic with the Quantum Fourier Transform (2014)](https://arxiv.org/abs/1411.5949)
4. [Stina Andersson, Abraham Asfaw, Antonio Corcoles et al., Learn Quantum Computation using Qiskit ("The Qiskit Textbook")](https://qiskit.org/textbook/preface.html)
# Data visualization with Matplotlib: Scatter plots
**Created by: Kirstie Whitaker**
**Created on: 29 July 2019**
In 2017 Michael Vendetti and I published a paper on *"Neuroscientific insights into the development of analogical reasoning"*.
The code to recreate the figures from processed data is available at https://github.com/KirstieJane/NORA_WhitakerVendetti_DevSci2017.
This tutorial is going to recreate figure 2 which outlines the behavioral results from the study.

### Take what you need
The philosophy of the tutorial is that I'll start by making some very simple plots, and then enhance them up to "publication standard".
You should take _only the parts you need_ and leave the rest behind.
If you don't care about fancy legends, or setting the number of minor x ticks, then you can stop before we get to that part.
The goal is to have you leave the tutorial feeling like you know _how_ to get started writing code to visualize and customise your plots.
## Import modules
We're importing everything up here.
And they should all be listed in the [requirements.txt](requirements.txt) file in this repository.
Checkout the [README](README.md) file for more information on installing these packages.
```
from IPython.display import display
import matplotlib.lines as mlines
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import os
import pandas as pd
from statsmodels.formula.api import ols
import seaborn as sns
import string
```
## The task
In this study we asked children and adolescents (ages 6 to 18 years old) to lie in an fMRI scanner and complete a visual analogical reasoning task.
There were two types of questions.
* The **analogy** questions (shown in panel a in the figure below) asked "Which of the pictures at the bottom goes together with the 3rd picture *in the same way* that the two pictures at the top go together?"
* The **semantic** questions (shown in panel b in the figure below) were the control questions, and the participants were simply asked "Which of the pictures below goes best with the picture at the top?"

The answers to the **analogy** question are grouped into four different types:
* a **semantic** lure: milk comes from a cow...but that relationship is not the same as the one between dress and wardrobe
* a **perceptual** lure: the milk carton looks perceptually similar to the clock (but isn't semantically related)
* an **unrelated** lure: the tennis racket has nothing to do with any of the pictures at the top
* a **correct** answer: a dress is stored in a wardrobe, the milk is stored in the **fridge**
There are three answer types for the **semantic** question:
* a **correct** answer: the pen writes on the notepad
* two **unrelated** lures: tea and sweetcorn have nothing to do with notepaper
* a **perceptual** lure: the shower curtain looks like the notepad (but isn't semantically related)
### Hypotheses
1. Accuracy on both tasks will increase (at a decreasing rate) with age, and accuracy will be higher for the semantic task compared to the analogy task.
2. Reaction time on both tasks will decrease (at a decreasing rate) with age, and reaction time will be smaller (faster) for the semantic task compared to the analogy task.
3. There will be more semantic errors than perceptual errors, and more perceptual errors than unrelated errors across all ages, although the number of errors will decrease with age (corresponding to an increase in accuracy).
## With great power comes great responsibility
I've listed some hypotheses above.
We can't confirm or reject them by visualizing the data.
Just because a line _looks_ like it is going up or down, doesn't mean it is statistically significantly doing so.
You can tell many stories with a picture...including ones that mislead people very easily.
Be careful!
## The data
The data is stored in the [DATA/WhitakerVendetti_BehavData.csv](https://github.com/KirstieJane/NORA_WhitakerVendetti_DevSci2017/blob/master/DATA/WhitakerVendetti_BehavData.csv) file in the [NORA_WhitakerVendetti_DevSci2017](https://github.com/KirstieJane/NORA_WhitakerVendetti_DevSci2017) GitHub repository.
I've made a copy of it in this repository so you don't have to go and get it from the web 😊
The important columns are:
* `subid_short`
* `Age_scan`
* `R1_percent_acc`
* `R2_percent_acc`
* `R1_meanRTcorr_cor`
* `R2_meanRTcorr_cor`
* `R2_percent_sem`
* `R2_percent_per`
* `R2_percent_dis`
The `R1` trials are the semantic trials (they have one relationship to process).
The `R2` trials are the analogy trials (they have two relationships to process).
**Accuracy** is the percentage of correct trials from all the trials a participant saw.
**Reaction time** is the mean RT of the _correct_ trials, corrected for some obvious outliers.
The **semantic**, **perceptual** and **unrelated** (née "distractor") errors are reported as percentages of all the trials the participant saw.
```
# Read in the data
df = pd.read_csv('data/WhitakerVendetti_BehavData.csv')
# Take a look at the first 5 rows
print ('====== Here are the first 5 rows ======')
display(df.head())
# Print all of the columns - its a big wide file 😬
print ('\n\n\n====== Here are all the columns in the file======')
display(df.columns)
# Lets just keep the columns we need
keep_cols = ['subid_short', 'Age_scan',
'R1_percent_acc', 'R2_percent_acc',
'R1_meanRTcorr_cor', 'R2_meanRTcorr_cor',
'R2_percent_sem', 'R2_percent_per', 'R2_percent_dis']
df = df.loc[:, keep_cols]
# And now lets see the summary information of this subset of the data
print ('\n\n\n====== Here are some summary statistics from the columns we need ======')
display(df.describe())
```
## A quick scatter plot
The first thing we'll do is take a look at our first hypothesis: that accuracy increase with age and that the analogy task is harder than the semantic task.
Let's start by making a scatter plot with **age** on the x axis and **semantic accuracy** on the y axis.
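The plotting cells themselves didn't survive the export, so here is a minimal sketch of that first scatter call. The synthetic `df` below is a hypothetical stand-in for the data frame loaded from the CSV above — swap in the real one:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Hypothetical stand-in for the real data frame loaded from the CSV above
rng = np.random.default_rng(42)
df = pd.DataFrame({'Age_scan': rng.uniform(6, 18, 30),
                   'R1_percent_acc': rng.uniform(60, 100, 30)})

# Age on the x axis, semantic accuracy on the y axis
plt.scatter(df['Age_scan'], df['R1_percent_acc'])
```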
Cool, what about the accuracy on the analogy task?
That's nice, but it would probably be more useful if we put these two on the _same_ plot.
Woah, that was very clever!
Matplotlib didn't make two different plots, it assumed that I would want these two plots on the same axis because they were in the same cell.
If I had called `plt.show()` in between the two lines above I would have ended up with two plots:
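Filling in the missing cell with the same hypothetical stand-in data, the combined version looks like this — both calls land on the same (current) axis because they run in one cell:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Hypothetical stand-in data (the real columns come from the CSV above)
rng = np.random.default_rng(0)
df = pd.DataFrame({'Age_scan': rng.uniform(6, 18, 30),
                   'R1_percent_acc': rng.uniform(60, 100, 30),
                   'R2_percent_acc': rng.uniform(40, 95, 30)})

# Both scatters land on the same (current) axis...
plt.scatter(df['Age_scan'], df['R1_percent_acc'])
plt.scatter(df['Age_scan'], df['R2_percent_acc'])
# ...calling plt.show() between the two calls would instead "finish" the
# first figure, and the second scatter would start a fresh one.
```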
## Being a little more explicit
The scatter plot above shows how easy it is to plot some data - for example to check whether you have any weird outliers or if the pattern of results generally looks the way you'd expect.
You can stop here if your goal is to explore the data ✨
But sometimes you'll want to have a bit more control over the plots, and for that we'll introduce the concepts of a matplotlib `figure` and an `axis`.
To be honest, we aren't really going to introduce them properly because that's a deeeeeep dive into the matplotlib object-orientated architecture.
There's a nice tutorial at [https://matplotlib.org/users/artists.html](https://matplotlib.org/users/artists.html), but all you need to know is that a **figure** is a figure - the canvas on which you'll make your beautiful visualisation - and it can contain multiple **axes** displaying different aspects or types of data.
In fact, that makes it a little easier to understand why the way that many people create a figure and an axis is to use the [`subplots`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html#matplotlib.pyplot.subplots) command.
(And here's a [stack overflow answer](https://stackoverflow.com/questions/34162443/why-do-many-examples-use-fig-ax-plt-subplots-in-matplotlib-pyplot-python) which explains it in a little more depth.)
If you run the command all by itself, you'll make an empty axis which takes up the whole figure area:
Let's add our plots to a figure:
Did you see that this time we changed `plt.scatter` to `ax.scatter`?
That's because we're being more specific about _where_ we want the data to be plotted.
Specifically, we want it on the first (only) axis in our figure.
We also got explicit about telling jupyter to show the plot with `plt.show()`.
You don't need this, but it's good practice for when you start coding lots of plots all in one go and don't want them all to appear on the same axis 😂
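Putting those pieces together, a sketch of the figure/axis version (again with hypothetical stand-in data):

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Hypothetical stand-in data (the real columns come from the CSV above)
rng = np.random.default_rng(0)
df = pd.DataFrame({'Age_scan': rng.uniform(6, 18, 30),
                   'R1_percent_acc': rng.uniform(60, 100, 30),
                   'R2_percent_acc': rng.uniform(40, 95, 30)})

# One figure, one axis; plot explicitly onto that axis
fig, ax = plt.subplots()
ax.scatter(df['Age_scan'], df['R1_percent_acc'])
ax.scatter(df['Age_scan'], df['R2_percent_acc'])
```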
## Let's add a regression line
In my paper I modelled the data with a quadratic (yeah, yeah, suboptimal, I know).
I used [`statsmodels`](https://www.statsmodels.org/stable/index.html) to fit the model and to get the predicted values.
```
# Add a column to our data frame for the Age_scan squared
df['Age_scan_sq'] = df['Age_scan']**2
# Semantic
formula_sem = 'R1_percent_acc ~ Age_scan_sq + Age_scan'
mod_sem = ols(formula=formula_sem, data=df)
results_sem = mod_sem.fit()
print(results_sem.summary())
predicted_sem = results_sem.predict()
print(predicted_sem)
# Analogy
formula_ana = 'R2_percent_acc ~ Age_scan_sq + Age_scan'
mod_ana = ols(formula=formula_ana, data=df)
results_ana = mod_ana.fit()
print(results_ana.summary())
predicted_ana = results_ana.predict()
print(predicted_ana)
```
Let's plot that modelled pattern on our scatter plot.
Hmmmm, that looks ok....but usually we'd draw these predicted values as a line rather than individual points.
And they'd be the same color.
To connect up the dots, we're going to use `ax.plot` instead of `ax.scatter`.
Woah! That's not right.
We forgot to sort the data into ascending order before we plotted it so we've connected all the points in the order that the participants joined the study....which is not a helpful ordering at all.
So instead, let's add the predicted values to our data frame and sort the data before we plot it:
```
df['R1_percent_acc_pred'] = predicted_sem
df['R2_percent_acc_pred'] = predicted_ana
df = df.sort_values(by='Age_scan', axis = 0)
```
Cool!
But those lines aren't very easy to see.
Let's make them a little thicker.
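A sketch of the thickness change, with an invented stand-in curve for the predicted values; `linewidth` is the matplotlib parameter doing the work:

```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(6, 18, 50)
y = 95 - 150 / x   # stand-in for the sorted predicted values

fig, ax = plt.subplots()
ax.plot(x, y, linewidth=3)   # the default width is 1.5, so 3 is much easier to see
plt.show()
```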
## Add a legend
These two lines aren't labelled!
We don't know which one is which.
So let's add a legend to the plot.
The function is called `ax.legend()` and we don't tell it the labels directly, we actually add those as _attributes_ of the scatter plots by adding `label`s.
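Here's a hedged sketch of that pattern with made-up data, showing the `label` attributes being attached to the scatter artists:

```
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
age = rng.uniform(6, 18, 30)   # invented stand-in data

fig, ax = plt.subplots()
ax.scatter(age, 95 - 150 / age, label='Semantic')  # the label lives on the artist
ax.scatter(age, 85 - 250 / age, label='Analogy')
ax.legend()   # collects every labelled artist; loc defaults to 'best'
plt.show()
```

Passing `ax.legend(loc='center right')` instead would pin the legend to the right-hand side rather than letting `'best'` choose.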
So that's pretty cool.
Matplotlib hasn't added the regression lines to the legend, only the scatter plots, because they have labels.
The legend positioning is also very clever, matplotlib has put it in the location with the fewest dots!
Here's the [documentation](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.legend.html) that shows you how to be explicit about where to put the legend.
The default value for `loc` is `best`, and we can happily keep that for the rest of this notebook.
If you really wanted to put it somewhere else, you can set the location explicitly.
For example, in the center on the right hand side.
## Change the colors
The fact that our semantic data (dots) and predicted values (line) are both blue, and that the analogy data are both orange, is a consequence of the order in which we've asked matplotlib to plot the data.
At the moment, matplotlib is coloring the first scatter plot with its first default color, and the second with the second default color.
Then when we give it a different type of plot (`plot` vs `scatter`) it starts the color cycle again.
(You can see the order of the colours in the [documentation of when they were introduced](https://matplotlib.org/3.1.1/users/dflt_style_changes.html#colors-in-default-property-cycle).)
If we move the order of the two regression lines the colours will change:
So that's no good!
Let's explicitly set the colors.
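A sketch of what an explicit (and deliberately not very pretty) color choice might look like; the named colors and the data here are placeholders for whatever the original cell used:

```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(6, 18, 30)

fig, ax = plt.subplots()
ax.scatter(x, 95 - 150 / x, color='blue', label='Semantic')
ax.plot(x, 95 - 150 / x, color='blue')   # same color for dots and line
ax.scatter(x, 85 - 250 / x, color='red', label='Analogy')
ax.plot(x, 85 - 250 / x, color='red')
ax.legend()
plt.show()
```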
Cool....the colours are fixed....but wow those aren't nice to look at 😕
## Color management with Seaborn
[Seaborn](https://seaborn.pydata.org) is a Python data visualization library based on matplotlib.
It provides a high-level interface for drawing attractive and informative statistical graphics.
It does many beautiful things (a few of which we're going to explore) but it can sometimes be so clever that it becomes a little opaque.
If in doubt, remember that seaborn will almost always return an `axis` object for you, and you can change those settings just as you would in matplotlib.
One of the very nice things that seaborn does is manage colors easily.
The red and blue that I used in the published figure came from the ["Set1" Brewer color map](http://colorbrewer2.org/#type=qualitative&scheme=Set1&n=5).
We can get the RGB values for the colors in this qualitative color map from seaborn's `color_palette` function, and visualize them using the "palette plot" (`palplot`) function.
```
color_list = sns.color_palette("Set1", n_colors=5)
for color in color_list:
    print(color)
sns.palplot(color_list)
```
Ok, now that we have our much nicer colors, let's change the red and blue in our accuracy plot.
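A sketch of how the palette colors could be wired into the scatter plot (data invented; note that "Set1" begins with red, then blue):

```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

color_list = sns.color_palette("Set1", n_colors=5)
red, blue = color_list[0], color_list[1]   # Set1 order: red, blue, green, ...

x = np.linspace(6, 18, 30)
fig, ax = plt.subplots()
ax.scatter(x, 95 - 150 / x, color=blue, label='Semantic')
ax.scatter(x, 85 - 250 / x, color=red, label='Analogy')
ax.legend()
plt.show()
```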
## Really jazzing up the plot with seaborn
Two other really beautiful things that seaborn can do are setting the **context** of the plot and the figure **style**.
There are lots of great examples in the [aesthetics tutorial](https://seaborn.pydata.org/tutorial/aesthetics.html) which I really encourage you to have a play around with.
Let's check out the `poster` context, with a `darkgrid` background.
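Those two settings are just a pair of function calls; a sketch:

```
import seaborn as sns

sns.set_context('poster')    # scale fonts and lines up for a poster
sns.set_style('darkgrid')    # grey background with a white grid
```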
Now that we've run the code above, let's remake our scatter plot:
Wowzers trousers.
That's no good 😬
How about `notebook` context with a `ticks` background?
Fun, we've got back to the matplotlib default!
The settings have changed slightly since I wrote the original code in 2016, but I think the closest setting is `notebook` context with a `font_scale` of 1.5 and the `white` style.
When you run the `set_context` and `set_style` commands they become global settings for all plots that you subsequently make in the same notebook (or script).
Personally I load them in at the top of all my notebooks because I think they make the plots look nicer 💁♀️
Oh, and I like the plots [despined](https://seaborn.pydata.org/tutorial/aesthetics.html#removing-axes-spines) too, so let's do that real quick:
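A sketch of those settings plus a despined scatter, with synthetic stand-in data:

```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_context('notebook', font_scale=1.5)
sns.set_style('white')

x = np.linspace(6, 18, 30)
fig, ax = plt.subplots()
ax.scatter(x, 95 - 150 / x)
sns.despine()   # removes the top and right spines from the current figure
plt.show()
```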
## Axis labels, limits and tick placement
### Labels
Our plots don't have any axis labels!
How will anyone know what they're looking at?!
Let's go ahead and label them 😸
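The labelling cell isn't shown, but it amounts to two calls on the axis; a sketch with placeholder data and the label strings used later in the notebook:

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([6, 12, 18], [60, 85, 95])   # invented points
ax.set_xlabel('Age (years)')
ax.set_ylabel('Accuracy (% resp)')
plt.show()
```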
### Limits
One of the things that's really nice to do is to be able to set the min and max values for a figure.
At the moment, matplotlib is guessing at a good place to start and stop the axes.
And it's doing a great job...but what if you wanted to show a subset of the data?
If the minimum or maximum age or reaction time values were different then you'd end up with slightly different dimensions on the x and y axes.
Let's see what it's guessing:
To be honest, it's doing a great job.
I'd be inclined to leave it, but I wrote a little function to pad the axes by 5% of the data range, and I think it's useful for times when matplotlib isn't choosing sensibly, so let me show you how it works anyway.
```
def get_min_max(data, pad_percent=5):
    """
    Calculate the range of the data by subtracting the minimum from the maximum value.
    Subtract pad_percent of the range from the minimum value.
    Add pad_percent of the range to the maximum value.
    Default value for pad_percent is 5 which corresponds to 5%.
    Return the padded min and max values.
    """
    data_range = np.max(data) - np.min(data)
    data_min = np.min(data) - (data_range * pad_percent/100.0)
    data_max = np.max(data) + (data_range * pad_percent/100.0)
    return data_min, data_max
```
You could set `pad_percent` to 0 to get rid of all the white space if you wanted to.
(You shouldn't though, it has cut off a bunch of the dots!)
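Here's a sketch of how you might use it, repeating the function so the snippet stands alone (the ages are invented):

```
import numpy as np
import matplotlib.pyplot as plt

def get_min_max(data, pad_percent=5):
    # Same logic as the function above: pad the data range on both sides
    data_range = np.max(data) - np.min(data)
    data_min = np.min(data) - (data_range * pad_percent / 100.0)
    data_max = np.max(data) + (data_range * pad_percent / 100.0)
    return data_min, data_max

age = np.array([6.2, 9.5, 13.1, 17.8])
x_min, x_max = get_min_max(age)

fig, ax = plt.subplots()
ax.scatter(age, [60, 75, 88, 94])
ax.set_xlim(x_min, x_max)   # 5% padding on either side of the data
plt.show()
```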
### Tick placement
Seaborn and matplotlib have made a good guess at where to put the x and y ticks, but I don't think they're the most intuitive way of illustrating a 6 to 18 year age range.
So let's just put the ticks where I think they should be: at 6, 10, 14 and 18.
For the y ticks we can use the ticker `locator_params` to put them in the best place for getting a maximum of 6 ticks.
This is basically what matplotlib is already doing, but I'll show you the command just in case you want to use it in the future.
For example, if you wanted to force the plot to have 4 bins, you'd set `nbins` to 4:
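A sketch combining both tick commands (placeholder data):

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([6, 10, 14, 18], [55, 72, 84, 91])   # invented points
ax.set_xticks([6, 10, 14, 18])          # explicit x tick locations
ax.locator_params(nbins=4, axis='y')    # ask for at most 4 y axis bins
plt.show()
```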
And note that even when we set `nbins` to 6 (as I wrote in the original figure code), it actually only gives us 5 ticks, because matplotlib - correctly - can't find a sensible way to parse the range to give us 6 evenly spaced ticks on the y axis.
One quick point to remember here: the x and y axis **limits** are not the same as the **tick locations**.
The limits are the edges of the plot.
The tick locations are where the markers sit on the axes.
## Adding confidence intervals with seaborn
We're doing great.
Matplotlib has got us to a pretty nice looking plot.
But, a responsible visualization should have some error bars on it....and they are a pain to try to write ourselves.
So at this point, we're going to replace our `scatter` and `plot` functions (and all the code used to calculate the predicted values) with the wonderful [`regplot` function](https://seaborn.pydata.org/generated/seaborn.regplot.html) in seaborn.
Doh, that's not right.
We've fitted a straight line to data that is clearly reaching ceiling.
Fortunately, `regplot` takes a parameter, `order=2`, to make the fitted line follow a quadratic function.
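A sketch of a `regplot` call with `order=2`, using a synthetic data frame standing in for the real `df` (the column names are copied from the notebook, the values are invented):

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Synthetic stand-in for the study's data frame
rng = np.random.default_rng(1)
df = pd.DataFrame({'Age_scan': rng.uniform(6, 18, 40)})
df['R1_percent_acc'] = 95 - 150 / df['Age_scan'] + rng.normal(0, 2, 40)

fig, ax = plt.subplots()
sns.regplot(x='Age_scan', y='R1_percent_acc', data=df,
            order=2,   # fit a quadratic rather than a straight line
            ax=ax)     # draws the scatter, the fit, and a bootstrapped CI band
plt.show()
```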
## Advanced legend-ing
Those dots in the legend aren't great: they look like they correspond to data points!
The following function replaces those dots with a line the same color as the colors we've used for the regplots.
I think this is some pretty hardcore matplotlib, and you _really_ only need it when you want to publish your figure.
There's a [matplotlib legend guide](https://matplotlib.org/3.1.1/tutorials/intermediate/legend_guide.html) if you want to dig into what's going on here...but be warned that it gets intense pretty quickly.
```
def add_line_to_legend(ax, color_list, label_list, loc='best', frameon=True):
    """
    Add a legend that has lines instead of markers.
    """
    line_list = []
    for color, label in zip(color_list, label_list):
        line_list += [mlines.Line2D([], [], color=color, marker=None, label=label)]
    ax.legend(handles=line_list, loc=loc, frameon=frameon)
    return ax
```
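And a sketch of calling it, with the definition repeated so the snippet stands alone (the function relies on `matplotlib.lines`, assumed to be imported as `mlines` earlier in the notebook; the data and colors here are invented):

```
import matplotlib.pyplot as plt
import matplotlib.lines as mlines

def add_line_to_legend(ax, color_list, label_list, loc='best', frameon=True):
    # Same function as above: legend handles built from line proxies, not markers
    line_list = []
    for color, label in zip(color_list, label_list):
        line_list += [mlines.Line2D([], [], color=color, marker=None, label=label)]
    ax.legend(handles=line_list, loc=loc, frameon=frameon)
    return ax

fig, ax = plt.subplots()
ax.scatter([1, 2, 3], [1, 2, 3], color='tab:blue')
ax.scatter([1, 2, 3], [3, 2, 1], color='tab:orange')
ax = add_line_to_legend(ax, ['tab:blue', 'tab:orange'], ['Semantic', 'Analogy'])
plt.show()
```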
Yeaaaaah!
You did it!
The figure looks just like the published one.... except that that one has a bunch of other plots too!

## A second plot: Reaction time
A second hypothesis that we had was that participants would answer faster as they got older (reaction time would decrease) and that they would be **slower** (high RT) for the analogy trials.
These data are shown in the second panel in the published figure.
It's actually super similar to the first panel, we're just plotting two different variables from the data frame.
Let's see what happens if we change the plotting code to have a different variable:
🚀🌟🚀🌟🚀🌟🚀🌟🚀🌟🚀🌟
Fantastic!
We've made the second panel!
Just like that.
## Third plot: Errors
The third hypothesis that we want to test is that there are more semantic errors than perceptual errors, and more perceptual errors than unrelated errors.
As accuracy increases with age, the separate modelled error lines (quadratic again) will all decrease.
## One function to plot them all
Do you remember how our figure plotting code was really similar across the three panels?
I think we can make a general function for our plots.
```
def visan_behav_plot(ax, behav_var_list, label_list, color_list, y_ax_label, legend_rev=False):
    """
    Plot the visual analogy behavioural results on a given axis (ax)
    """
    # Get the min and max values for the x and y axes
    x_min, x_max = get_min_max(df['Age_scan'])
    y_min, y_max = get_min_max(df[behav_var_list[0]])
    if len(behav_var_list) > 1:
        for behav_var in behav_var_list[1:]:
            new_y_min, new_y_max = get_min_max(df[behav_var])
            y_min, y_max = min(y_min, new_y_min), max(y_max, new_y_max)

    # Loop over all the behavioural measures you want to plot
    for behav_var, label, color in zip(behav_var_list, label_list, color_list):
        sns.regplot(df['Age_scan'], df[behav_var],
                    label=label,
                    color=color,
                    ax=ax,
                    order=2)

    # Set the x and y tick locations
    ax.set_xticks([6, 10, 14, 18])
    ax.locator_params(nbins=6, axis='y')

    # Set the x and y min and max axis limits
    ax.set_xlim(x_min, x_max)
    ax.set_ylim(y_min, y_max)

    # Set the x and y labels
    ax.set_xlabel('Age (years)')
    ax.set_ylabel(y_ax_label)

    # Add a legend
    if legend_rev:
        color_list = color_list[::-1]
        label_list = label_list[::-1]
    ax = add_line_to_legend(ax,
                            color_list,
                            label_list,
                            loc='best',
                            frameon=False)
    sns.despine()
    return ax


# Accuracy
fig, ax = plt.subplots()
ax = visan_behav_plot(ax,
                      ['R1_percent_acc', 'R2_percent_acc'],
                      ['Semantic', 'Analogy'],
                      color_list[:2],
                      'Accuracy (% resp)',
                      legend_rev=False)
plt.show()

# Reaction time
fig, ax = plt.subplots()
ax = visan_behav_plot(ax,
                      ['R1_meanRTcorr_cor', 'R2_meanRTcorr_cor'],
                      ['Semantic', 'Analogy'],
                      color_list[:2],
                      'Reaction time (s)',
                      legend_rev=False)
plt.show()

# Errors
fig, ax = plt.subplots()
ax = visan_behav_plot(ax,
                      ['R2_percent_dis', 'R2_percent_per', 'R2_percent_sem'],
                      ['Unrelated', 'Perceptual', 'Semantic'],
                      color_list[2:5],
                      'Analogy error rate (% resp)',  # was mislabelled 'Reaction time (s)'
                      legend_rev=True)
plt.show()
```
## Three subplots
Our last step is to put these three plots into one figure.
We can use `subplots` to set the number of equally sized plots and `figsize` to set the dimensions of the figure.
Let's see what that looks like on its own.
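A sketch of the layout call, with the `figsize` taken from the final figure code in this notebook:

```
import matplotlib.pyplot as plt

# 1 row, 3 columns; ax_list is an array of the three axes
fig, ax_list = plt.subplots(1, 3, figsize=(16, 4.5))
plt.show()
```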
One thing to note is that instead of getting just one axis back now, we get an **axis list** which allows us to iterate over the three panels.
```
if not os.path.isdir('figures'):
    os.makedirs('figures')
```
### Save the figure
We can save this figure using `fig.savefig`.
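A sketch with a hypothetical filename; `bbox_inches='tight'` trims surplus whitespace around the figure:

```
import os
import matplotlib.pyplot as plt

os.makedirs('figures', exist_ok=True)   # same effect as the cell above
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig('figures/example.png', dpi=150, bbox_inches='tight')
```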
## Panel labels
This is where I'm probably going a little overboard.
Setting the panel labels is definitely easier in any other picture editing software!!
But, it brings me great joy to make the figure entirely from code, so let's go ahead and use this function to add the panel labels.
```
def add_panel_labels_fig2(ax_list):
    """
    Add panel labels (a, b, c) to the scatter plot.
    Note that these positions have been calibrated by hand...
    there isn't a good way to do it otherwise.
    """
    x_list = [-0.17, -0.1, -0.14]
    y = 1.1
    color = 'k'
    fontsize = 18
    letters = string.ascii_lowercase
    for i, ax in enumerate(ax_list):
        ax.text(x_list[i], y,
                '({})'.format(letters[i]),
                fontsize=fontsize,
                transform=ax.transAxes,
                color=color,
                horizontalalignment='center',
                verticalalignment='center',
                fontname='arial',
                fontweight='bold')
    return ax_list


fig, ax_list = plt.subplots(1, 3, figsize=(16, 4.5))

# Accuracy
ax = ax_list[0]
ax = visan_behav_plot(ax,
                      ['R1_percent_acc', 'R2_percent_acc'],
                      ['Semantic', 'Analogy'],
                      color_list[:2],
                      'Accuracy (% resp)',
                      legend_rev=False)

# Reaction time
ax = ax_list[1]
ax = visan_behav_plot(ax,
                      ['R1_meanRTcorr_cor', 'R2_meanRTcorr_cor'],
                      ['Semantic', 'Analogy'],
                      color_list[:2],
                      'Reaction time (s)',
                      legend_rev=False)

# Errors
ax = ax_list[2]
ax = visan_behav_plot(ax,
                      ['R2_percent_dis', 'R2_percent_per', 'R2_percent_sem'],
                      ['Unrelated', 'Perceptual', 'Semantic'],
                      color_list[2:5],
                      'Analogy error rate (% resp)',
                      legend_rev=True)

ax_list = add_panel_labels_fig2(ax_list)  # <-------- Add the panel labels

plt.tight_layout()
fig.savefig('figures/Figure2.png', dpi=150, bbox_inches=0)
plt.show()
```
## Congratulations!
We did it! 🎉🎉🎉
Look how similar these two figures look!
#### Figure 2 from GitHub repository

#### Figure 2 we've just made

## The end
Well done 💖
# Live Data
The [HoloMap](../reference/containers/bokeh/HoloMap.ipynb) is a core HoloViews data structure that allows easy exploration of parameter spaces. The essence of a HoloMap is that it contains a collection of [Elements](http://build.holoviews.org/reference/index.html) (e.g. ``Image``s and ``Curve``s) that you can easily select and visualize.
HoloMaps hold fully constructed Elements at specifically sampled points in a multidimensional space. Although HoloMaps are useful for exploring high-dimensional parameter spaces, they can very quickly consume huge amounts of memory to store all these Elements. For instance, a hundred samples along four orthogonal dimensions would need a HoloMap containing a hundred *million* Elements, each of which could be a substantial object that takes time to create and costs memory to store. Thus ``HoloMaps`` have some clear limitations:
* HoloMaps may require the generation of millions of Elements before the first element can be viewed.
* HoloMaps can easily exhaust all the memory available to Python.
* HoloMaps can even more easily exhaust all the memory in the browser when displayed.
* Static export of a notebook containing HoloMaps can result in impractically large HTML files.
The ``DynamicMap`` addresses these issues by computing and displaying elements dynamically, allowing exploration of much larger datasets:
* DynamicMaps generate elements on the fly, allowing the process of exploration to begin immediately.
* DynamicMaps do not require fixed sampling, allowing exploration of parameters with arbitrary resolution.
* DynamicMaps are lazy in the sense that they only compute as much data as the user wishes to explore.
Of course, these advantages come with some limitations:
* DynamicMaps require a live notebook server and cannot be fully exported to static HTML.
* DynamicMaps store only a portion of the underlying data, in the form of an Element cache and their output is dependent on the particular version of the executed code.
* DynamicMaps (and particularly their element caches) are typically stateful (with values that depend on patterns of user interaction), which can make them more difficult to reason about.
In addition to the different computational requirements of ``DynamicMaps``, they can be used to build sophisticated, interactive visualizations that cannot be achieved using only ``HoloMaps``. This notebook demonstrates some basic examples and the [Responding to Events](./11-Responding_to_Events.ipynb) guide follows on by introducing the streams system. The [Custom Interactivity](./12-Custom_Interactivity.ipynb) shows how you can directly interact with your plots when using the Bokeh backend.
When DynamicMap was introduced in version 1.6, it supported multiple different 'modes' which have now been deprecated. This notebook demonstrates the simpler, more flexible and more powerful DynamicMap introduced in version 1.7. Users who have been using the previous version of DynamicMap should be unaffected as backwards compatibility has been preserved for the most common cases.
All this will make much more sense once we've tried out some ``DynamicMaps`` and showed how they work, so let's create one!
<center><div class="alert alert-info" role="alert">To visualize and use a <b>DynamicMap</b> you need to be running a live Jupyter server.<br>This guide assumes that it will be run in a live notebook environment.<br>
When viewed statically, DynamicMaps will only show the first available Element,<br> and will thus not have any slider widgets, making it difficult to follow the descriptions below.<br><br>
It's also best to run this notebook one cell at a time, not via "Run All",<br> so that subsequent cells can reflect your dynamic interaction with widgets in previous cells.</div></center>
## ``DynamicMap`` <a id='DynamicMap'></a>
Let's start by importing HoloViews and loading the extension:
```
import holoviews as hv
import numpy as np
hv.extension()
```
We will now create a ``DynamicMap`` similar to the ``HoloMap`` introduced in the [Introductory guide](../getting_started/1-Introduction.ipynb). The ``HoloMap`` in that introduction consisted of ``Image`` elements defined by a function returning NumPy arrays called ``sine_array``. Here we will define a ``waves_image`` function that returns an array pattern parameterized by arbitrary ``alpha`` and ``beta`` parameters inside a HoloViews
[``Image``](../reference/elements/bokeh/Image.ipynb) element:
```
xvals = np.linspace(-4,0,202)
yvals = np.linspace(4,0,202)
xs,ys = np.meshgrid(xvals, yvals)
def waves_image(alpha, beta):
    return hv.Image(np.sin(((ys/alpha)**alpha+beta)*xs))
waves_image(1,0) + waves_image(1,4)
```
Now we can demonstrate the possibilities for exploration enabled by the simplest declaration of a ``DynamicMap``.
### Basic ``DynamicMap`` declaration<a id='BasicDeclaration'></a>
A simple ``DynamicMap`` declaration looks identical to that needed to declare a ``HoloMap``. Instead of supplying some initial data, we will supply the ``waves_image`` function with key dimensions simply declaring the arguments of that function:
```
dmap = hv.DynamicMap(waves_image, kdims=['alpha', 'beta'])
dmap
```
This object is created instantly, but because it doesn't generate any `hv.Image` objects initially it only shows the printed representation of this object along with some information about how to display it. We will refer to a ``DynamicMap`` that doesn't have enough information to display itself as 'unbounded'.
The textual representation of all ``DynamicMaps`` look similar, differing only in the listed dimensions until they have been evaluated at least once.
#### Explicit indexing
Unlike a corresponding ``HoloMap`` declaration, this simple unbounded ``DynamicMap`` cannot yet visualize itself. To view it, we can follow the advice in the warning message. First we will explicitly index into our ``DynamicMap`` in the same way you would access a key on a ``HoloMap``:
```
dmap[1,2] + dmap.select(alpha=1, beta=2)
```
Note that the declared kdims are specifying the arguments *by position* as they do not match the argument names of the ``waves_image`` function. If you *do* match the argument names *exactly*, you can map a kdim position to any argument position of the callable. For instance, the declaration ``kdims=['beta', 'alpha']`` would index first by beta, then alpha, without mixing up the arguments to ``waves_image`` when indexing.
#### Setting dimension ranges
The second suggestion in the warning message was to supply dimension ranges using the ``redim.range`` method:
```
dmap.redim.range(alpha=(0,5.0), beta=(1,5.0))
```
Here each `hv.Image` object visualizing a particular sine ring pattern with the given parameters is created dynamically, whenever the slider is set to a new value. Any value in the allowable range can be requested by dragging the sliders or by tweaking the values using the left and right arrow keys.
Of course, we didn't have to use the ``redim.range`` method and we could have simply declared the ranges right away using explicit ``hv.Dimension`` objects. This would allow us to declare other dimension properties such as the step size used by the sliders: by default each slider can select around a thousand distinct values along its range but you can specify your own step value via the dimension ``step`` parameter. If you use integers in your range declarations, integer stepping will be assumed with a step size of one.
Note that whenever the ``redim`` method is used, a new ``DynamicMap`` is returned with the updated dimensions. In other words, the original ``dmap`` remains unbounded with default dimension objects.
#### Setting dimension values
The ``DynamicMap`` above allows exploration of *any* alpha and beta value within the declared ranges unlike an equivalent ``HoloMap`` which would have to be composed of a finite set of samples. We can achieve a similar discrete sampling using ``DynamicMap`` by setting the ``values`` parameter on the dimensions:
```
dmap.redim.values(alpha=[0,1,2], beta=[0.1, 1.0, 2.5])
```
The sliders now snap to the specified dimension values and if you are running this live, the above cell should look like a [HoloMap](../reference/containers/bokeh/HoloMap.ipynb). ``DynamicMap`` is in fact a subclass of ``HoloMap`` with some crucial differences:
* You can now pick as many values of **alpha** or **beta** as allowed by the slider.
* What you see in the cell above will not be exported in any HTML snapshot of the notebook
We will now explore how ``DynamicMaps`` relate to ``HoloMaps`` including conversion operations between the two types. As we will see, there are other ways to display a ``DynamicMap`` without using explicit indexing or redim.
## Interaction with ``HoloMap``s
To explore the relationship between ``DynamicMap`` and ``HoloMap``, let's declare another callable to draw some shapes we will use in a new ``DynamicMap``:
```
def shapes(N, radius=0.5): # Positional keyword arguments are fine
    paths = [hv.Path([[(radius*np.sin(a), radius*np.cos(a))
                       for a in np.linspace(-np.pi, np.pi, n+2)]],
                     extents=(-1,-1,1,1))
             for n in range(N,N+3)]
    return hv.Overlay(paths)
```
#### Sampling ``DynamicMap`` from a ``HoloMap``
When combining a ``HoloMap`` with a ``DynamicMap``, it would be very awkward to have to match the declared dimension ``values`` of the DynamicMap with the keys of the ``HoloMap``. Fortunately you don't have to:
```
%%opts Path (linewidth=1.5)
holomap = hv.HoloMap({(N,r):shapes(N, r) for N in [3,4,5] for r in [0.5,0.75]}, kdims=['N', 'radius'])
dmap = hv.DynamicMap(shapes, kdims=['N','radius'])
holomap + dmap
```
Here we declared a ``DynamicMap`` without using ``redim``, but we can view its output because it is presented alongside a ``HoloMap`` which defines the available keys. This convenience is subject to two particular restrictions:
* You cannot display a layout consisting of unbounded ``DynamicMaps`` only, because at least one HoloMap is needed to define the samples.
* The HoloMaps provide the necessary information required to sample the DynamicMap.
Note that there is one way ``DynamicMap`` is less restricted than ``HoloMap``: you can freely combine bounded ``DynamicMaps`` together in a ``Layout``, even if they don't share key dimensions.
Also notice that the ``%%opts`` cell magic allows you to style DynamicMaps in exactly the same way as HoloMaps. We will now use the ``%opts`` line magic to set the linewidths of all ``Path`` elements in the rest of the notebook:
```
%opts Path (linewidth=1.5)
```
#### Converting from ``DynamicMap`` to ``HoloMap``
Above we mentioned that ``DynamicMap`` is an instance of ``HoloMap``. Does this mean it has a ``.data`` attribute?
```
dtype = type(dmap.data).__name__
length = len(dmap.data)
print("DynamicMap 'dmap' has an {dtype} .data attribute of length {length}".format(dtype=dtype, length=length))
```
This is exactly the same sort of ``.data`` as the equivalent ``HoloMap``, except that its values will vary according to how much you explored the parameter space of ``dmap`` using the sliders above. In a ``HoloMap``, ``.data`` contains a defined sampling along the different dimensions, whereas in a ``DynamicMap``, the ``.data`` is simply the *cache*.
The cache serves two purposes:
* Avoids recomputation of an element should we revisit a particular point in the parameter space. This works well for categorical or integer dimensions, but doesn't help much when using continuous sliders for real-valued dimensions.
* Records the space that has been explored with the ``DynamicMap`` for any later conversion to a ``HoloMap`` up to the allowed cache size.
We can always convert *any* ``DynamicMap`` directly to a ``HoloMap`` as follows:
```
hv.HoloMap(dmap)
```
This is in fact equivalent to declaring a HoloMap with the same parameters (dimensions, etc.) using ``dmap.data`` as input, but is more convenient. Note that the slider positions reflect those we sampled from the ``HoloMap`` in the previous section.
Although creating a HoloMap this way is easy, the result is poorly controlled, as the keys in the DynamicMap cache are usually defined by how you moved the sliders around. If you instead want to specify a specific set of samples, you can easily do so by using the same key-selection semantics as for a ``HoloMap`` to define exactly which elements are to be sampled and put into the cache:
```
hv.HoloMap(dmap[{(2,0.5), (2,1.0), (3,0.5), (3,1.0)}])
```
Here we index the ``dmap`` with specified keys to return a *new* DynamicMap with those keys in its cache, which we then cast to a ``HoloMap``. This allows us to export specific contents of ``DynamicMap`` to static HTML which will display the data at the sampled slider positions.
The key selection above happens to define a Cartesian product, which is one of the most common ways to sample across dimensions. Because the list of such dimension values can quickly get very large when enumerated as above, we provide a way to specify a Cartesian product directly, which also works with ``HoloMaps``. Here is an equivalent way of defining the same set of four points in that two-dimensional space:
```
samples = hv.HoloMap(dmap[{2,3},{0.5,1.0}])
samples
samples.data.keys()
```
The default cache size of 500 Elements is relatively high so that interactive exploration will work smoothly, but you can reduce it using the ``cache_size`` parameter if you find you are running into issues with memory consumption. A bounded ``DynamicMap`` with ``cache_size=1`` requires the least memory, but will recompute a new Element every time the sliders are moved, making it less responsive.
#### Converting from ``HoloMap`` to ``DynamicMap``
We have now seen how to convert from a ``DynamicMap`` to a ``HoloMap`` for the purposes of static export, but why would you ever want to do the inverse?
Although having a ``HoloMap`` to start with means it will not save you memory, converting to a ``DynamicMap`` does mean that the rendering process can be deferred until a new slider value requests an update. You can achieve this conversion using the ``Dynamic`` utility as demonstrated here by applying it to the previously defined ``HoloMap`` called ``samples``:
```
from holoviews.util import Dynamic
dynamic = Dynamic(samples)
print('After applying Dynamic, the type is a {dtype}'.format(dtype=type(dynamic).__name__))
dynamic
```
In this particular example, there is no real need to use ``Dynamic`` as each frame renders quickly enough. For visualizations that are slow to render, using ``Dynamic`` can result in more responsive visualizations.
The ``Dynamic`` utility is very versatile and is discussed in more detail in the [Transforming Elements](./10-Transforming_Elements.ipynb) guide.
### Slicing ``DynamicMaps``
As we have seen we can either declare dimension ranges directly in the kdims or use the ``redim.range`` convenience method:
```
dmap = hv.DynamicMap(shapes, kdims=['N','radius']).redim.range(N=(2,20), radius=(0.5,1.0))
```
The declared dimension ranges define the absolute limits allowed for exploration in this continuous, bounded DynamicMap. That said, you can use the ``soft_range`` parameter to view subregions within that range. Setting the ``soft_range`` parameter on dimensions can be done conveniently using slicing:
```
sliced = dmap[4:8, :]
sliced
```
Notice that N is now restricted to the range 4:8. Open slices are used to release any ``soft_range`` values, which resets the limits back to those defined by the full range:
```
sliced[:, 0.8:1.0]
```
The ``[:]`` slice leaves the soft_range values alone and can be used as a convenient way to clone a ``DynamicMap``. Note that mixing slices with any other object type is not supported. In other words, once you use a single slice, you can only use slices in that indexing operation.
## Using groupby to discretize a DynamicMap
A DynamicMap also makes it easy to partially or completely discretize a function to evaluate in a complex plot. By grouping over specific dimensions that define a fixed sampling via the Dimension values parameter, the DynamicMap can be viewed as a ``GridSpace``, ``NdLayout``, or ``NdOverlay``. If a dimension specifies only a continuous range it can't be grouped over, but it may still be explored using the widgets. This means we can plot partial or completely discretized views of a parameter space easily.
#### Partially discretize
The implementation for all the groupby operations uses the ``.groupby`` method internally, but we also provide three higher-level convenience methods to group dimensions into an ``NdOverlay`` (``.overlay``), ``GridSpace`` (``.grid``), or ``NdLayout`` (``.layout``).
Here we will evaluate a simple sine function with three dimensions, the phase, frequency, and amplitude. We assign the frequency and amplitude discrete samples, while defining a continuous range for the phase:
```
xs = np.linspace(0, 2*np.pi,100)
def sin(ph, f, amp):
    return hv.Curve((xs, np.sin(xs*f+ph)*amp))
kdims=[hv.Dimension('phase', range=(0, np.pi)),
       hv.Dimension('frequency', values=[0.1, 1, 2, 5, 10]),
       hv.Dimension('amplitude', values=[0.5, 5, 10])]
waves_dmap = hv.DynamicMap(sin, kdims=kdims)
```
Next we define the amplitude dimension to be overlaid and the frequency dimension to be gridded:
```
%%opts GridSpace [show_legend=True fig_size=200]
waves_dmap.overlay('amplitude').grid('frequency')
```
As you can see, instead of having three sliders (one per dimension), we've now laid out the frequency dimension as a discrete set of values in a grid, and the amplitude dimension as a discrete set of values in an overlay, leaving one slider for the remaining dimension (phase). This approach can help you visualize a large, multi-dimensional space efficiently, with full control over how each dimension is made visible.
#### Fully discretize
Given a continuous function defined over a space, we could sample it manually, but here we'll look at an example of evaluating it using the groupby method. Let's look at a spiral function with a frequency and first- and second-order phase terms. Then we define the dimension values for all the parameters and declare the DynamicMap:
```
%opts Path (linewidth=1 color=Palette('Blues'))
def spiral_equation(f, ph, ph2):
r = np.arange(0, 1, 0.005)
xs, ys = (r * fn(f*np.pi*np.sin(r+ph)+ph2) for fn in (np.cos, np.sin))
return hv.Path((xs, ys))
spiral_dmap = hv.DynamicMap(spiral_equation, kdims=['f','ph','ph2']).\
redim.values(f=np.linspace(1, 10, 10),
ph=np.linspace(0, np.pi, 10),
ph2=np.linspace(0, np.pi, 4))
```
Now we can make use of the ``.groupby`` method to group over the frequency and phase dimensions, which we will display as part of a GridSpace by setting the ``container_type``. This leaves the second phase variable, which we assign to an NdOverlay by setting the ``group_type``:
```
%%opts GridSpace [xaxis=None yaxis=None] Path [bgcolor='w' xaxis=None yaxis=None]
spiral_dmap.groupby(['f', 'ph'], group_type=hv.NdOverlay, container_type=hv.GridSpace)
```
This grid shows a range of frequencies `f` on the x axis, a range of the first phase variable `ph` on the `y` axis, and a range of different `ph2` phases as overlays within each location in the grid. As you can see, these techniques can help you visualize multidimensional parameter spaces compactly and conveniently.
## DynamicMaps and normalization
By default, a ``HoloMap`` normalizes the display of elements using the minimum and maximum values found across the ``HoloMap``. This automatic behavior is not possible in a ``DynamicMap``, where arbitrary new elements are being generated on the fly. Consider the following examples where the arrays contained within the returned ``Image`` objects are scaled with time:
```
%%opts Image {+axiswise}
ls = np.linspace(0, 10, 200)
xx, yy = np.meshgrid(ls, ls)
def cells(time):
return hv.Image(time*np.sin(xx+time)*np.cos(yy+time), vdims='Intensity')
dmap = hv.DynamicMap(cells, kdims='time').redim.range(time=(1,20))
dmap + dmap.redim.range(Intensity=(0,10))
```
Here we use ``+axiswise`` to see the behavior of the two cases independently. We see in **A** that when only the time dimension is given a range, no automatic normalization occurs (unlike a ``HoloMap``). In **B** we see that normalization is applied, but only when the value dimension ('Intensity') range has been specified.
In other words, ``DynamicMaps`` cannot support automatic normalization across their elements, but do support the same explicit normalization behavior as ``HoloMaps``. Values that are generated outside this range are simply clipped in accord with the usual semantics of explicit value dimension ranges.
Note that we always have the option of casting a ``DynamicMap`` to a ``HoloMap`` in order to automatically normalize across the cached values, without needing explicit value dimension ranges.
## Using DynamicMaps in your code
As you can see, ``DynamicMaps`` let you use HoloViews with a very wide range of dynamic data formats and sources, making it simple to visualize ongoing processes or very large data spaces.
Given unlimited computational resources, the functionality covered in this guide would match that offered by ``HoloMap`` but with fewer normalization options. ``DynamicMap`` actually enables a vast range of new possibilities for dynamic, interactive visualizations as covered in the [Responding to Events](./Responding_to_Events.ipynb) guide. Following on from that, the [Custom Interactivity](./12-Custom_Interactivity.ipynb) guide shows how you can directly interact with your plots when using the Bokeh backend.
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/JavaScripts/Image/HSVPanSharpening.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Image/HSVPanSharpening.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Image/HSVPanSharpening.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# HSV-based Pan-Sharpening.
# Grab a sample L8 image and pull out the RGB and pan bands.
image = ee.Image(ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterDate('2017-01-01', '2017-12-31') \
.filterBounds(ee.Geometry.Point(-122.0808, 37.3947)) \
.sort('CLOUD_COVER') \
.first())
rgb = image.select('B4', 'B3', 'B2')
pan = image.select('B8')
# Convert to HSV, swap in the pan band, and convert back to RGB.
huesat = rgb.rgbToHsv().select('hue', 'saturation')
upres = ee.Image.cat(huesat, pan).hsvToRgb()
# There are many fine places to look; here is one. Comment
# this out if you want to twiddle knobs while panning around.
Map.setCenter(-122.0808, 37.3947, 14)
# Display before and after layers using the same vis parameters.
Map.addLayer(rgb, {'max': 0.3}, 'Original')
Map.addLayer(upres, {'max': 0.3}, 'Pansharpened')
```
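The hue/saturation-preserving swap above can be illustrated for a single pixel with Python's standard-library `colorsys` module (a toy sketch with made-up values; this is not Earth Engine code):

```python
import colorsys

# a dim low-resolution RGB pixel and a brighter, sharper pan intensity
# (illustrative values, not taken from the Landsat scene)
r, g, b = 0.20, 0.15, 0.10
pan = 0.30

# keep hue and saturation, swap in the pan band as the value channel
h, s, _v = colorsys.rgb_to_hsv(r, g, b)
sharpened = colorsys.hsv_to_rgb(h, s, pan)
print(sharpened)  # a brighter pixel with the same hue and saturation
```

Earth Engine's `rgbToHsv()` and `hsvToRgb()` apply this same per-pixel transform server-side across the whole image.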
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# Visualizing Overlays of Clusters on Widefield Images
During an analysis it is very often useful to overlay clustered localizations on top of widefield images to ensure that the clustering is performed correctly. One may also wish to navigate through the clusters and manually annotate them one-by-one.
In this notebook, we will demonstrate how to do this with the OverlayClusters and AlignToWidefield multiprocessors.
```
# Import the essential bstore libraries
%pylab
from bstore import processors as proc
from bstore import multiprocessors as mp
import pandas as pd
# This is part of Python 3.4 and greater and not part of B-Store
from pathlib import Path
```
## Before starting: Get the test data
You can get the test data for this tutorial from the B-Store test repository at https://github.com/kmdouglass/bstore_test_files. Clone or download the files and change the filename below to point to the folder *multiprocessor_test_files/align_to_widefield* within this repository.
```
dataDirectory = Path('../../bstore_test_files/multiprocessor_test_files/align_to_widefield/') # ../ means go up one directory level
```
# Step one: load the data
This example demonstrates how to use the [OverlayClusters](http://b-store.readthedocs.io/en/latest/bstore.html#bstore.multiprocessors.OverlayClusters) multiprocessor in B-Store's analysis tools. This processor takes as input
1. a Pandas DataFrame containing clustered localization information;
2. (optional) a Pandas DataFrame containing the statistics belonging to each cluster;
3. (optional) a widefield image to overlay the clusters onto.
If no `stats` DataFrame is supplied, a basic one will be calculated. If no widefield image is supplied, then the clusters will be displayed on a blank 2D space.
The DataFrame containing the localizations **MUST** have a column that specifies cluster IDs as integers. If the localizations have not been clustered, you could use the [Cluster processor](http://b-store.readthedocs.io/en/latest/bstore.html#bstore.processors.Cluster) or any other clustering algorithm to do so.
The example data contains all three of the above datasets, so we'll load all three now.
```
locsFile = dataDirectory / Path('locResults_A647_Pos0.csv')
statsFile = dataDirectory / Path('locResults_A647_Pos0_processed.csv')
wfFile = dataDirectory / Path('HeLaS_Control_53BP1_IF_FISH_A647_WF1/HeLaS_Control_53BP1_IF_FISH_A647_WF1_MMStack_Pos0.ome.tif')
with open(str(locsFile), 'r') as f:
locs = pd.read_csv(f)
with open(str(statsFile), 'r') as f:
# Note that we set the cluster_id to the index column!
stats = pd.read_csv(f, index_col = 'cluster_id')
with open(str(wfFile), 'br') as f:
img = plt.imread(f)
locs.head()
stats.head()
plt.imshow(img, cmap = 'gray_r')
plt.show()
```
If all goes well you should see the first five lines of the `locs` and `stats` DataFrames. The widefield image of telomeres in HeLa cell nuclei should appear in a separate window after running the above cell.
# Step two: set up the stats DataFrame for annotation
The `OverlayClusters` multiprocessor allows you to annotate clusters with a label, such as `True`, `False`, or an integer between 0 and 9. This allows you to, for example, manually filter clusters for further analyses. To do this, you need to add a column to the `stats` DataFrame that will hold the annotation for each cluster.
This step is optional, so you may skip it if you like.
```
# Use AddColumn processor from B-Store to add the column
adder = proc.AddColumn('annotation', defaultValue = True)
stats = adder(stats)
stats.head()
```
You can see that the stats DataFrame now has an annotation column with each value set to `True`.
Let's do some initial filtering on this DataFrame. Many of the clusters are noise and don't actually correspond to the telomeric signal; they typically have fewer than 50 localizations per cluster. We can remove these already during our filtering step using Pandas DataFrame slicing and assignment.
```
# Set rows representing clusters with fewer than 50 localizations to false
stats.loc[stats['number_of_localizations'] < 50, 'annotation'] = False
```
# Step 3: Overlay the clusters on top of the widefield image
Running the cell below will open up a window showing two views. On the left, you will see the full widefield image displayed with white dots on top. These dots are the centers of the clusters in the stats DataFrame. A yellow circle will indicate the current cluster.
On the right, you will see a zoom of the current cluster. The localizations in this cluster are teal circles. Green circles denote the centers of other clusters not currently being analyzed, and magenta dots denote noise localizations (their `cluster_id` is -1).
**You can press `f` and `b` to navigate forward and backward through each cluster.**
```
overlay = mp.OverlayClusters(annotateCol = 'annotation', filterCol='annotation', pixelSize = 108)
overlay(locs, stats, img)
```
Setting the `filterCol` parameter to the name of the annotation column removes from the visualization all the clusters that we filtered out above. If you set it to `None`, you will see every cluster in the DataFrame.
## Adjust the image contrast and colormap
If you need to adjust the image contrast of the underlying widefield images, you can use matplotlib 2's real-time image adjustment tools. In the Qt5Agg backend, the menu button to click to access these tools looks like this:
<img src="../images/edit_axis.png">
With the overlay window open, click it and choose which of the two axes you wish to edit (either the large-scale image entitled **Cluster centers** or the zoomed-in image starting with **Current cluster ID**). Once you make a selection, click **OK**. In the next window to appear (Figure Options), select the **Images** tab. Here you may adjust the colormap and the min/max values of the underlying images.
# Step 4: Correcting the shift between clusters and the widefield image
As you navigate, you should notice a constant offset between the widefield image and the clusters. This can be corrected with the [AlignToWidefield](http://b-store.readthedocs.io/en/latest/bstore.html#bstore.multiprocessors.AlignToWidefield) multiprocessor. This processor creates a histogram from the localizations and computes the cross-correlation with an upsampled version of the widefield image to determine the global offset between the two.
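The idea behind this cross-correlation can be sketched in plain NumPy (an illustrative stand-in for what the multiprocessor does internally, not B-Store's actual code; `estimate_offset` is a name invented here):

```python
import numpy as np

def estimate_offset(image_a, image_b):
    """Estimate the (dy, dx) shift of image_a relative to image_b
    from the peak of their FFT-based cross-correlation."""
    # cross-correlation via the correlation theorem
    f = np.fft.fft2(image_a) * np.conj(np.fft.fft2(image_b))
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # convert circular (wrap-around) peak positions to signed shifts
    if dy > image_a.shape[0] // 2:
        dy -= image_a.shape[0]
    if dx > image_a.shape[1] // 2:
        dx -= image_a.shape[1]
    return int(dy), int(dx)

# a synthetic "widefield" blob and a copy shifted by (3, -4) pixels
a = np.zeros((64, 64))
a[20:25, 30:35] = 1.0
b = np.roll(np.roll(a, 3, axis=0), -4, axis=1)
print(estimate_offset(b, a))  # → (3, -4)
```

`AlignToWidefield` additionally bins the localizations into a histogram and upsamples the widefield image before correlating, so the recovered offset is in localization (camera-pixel) units.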
To use this multiprocessor, we pass in the widefield image and the localizations belonging to the filtered clusters.
```
# Keep only localizations whose cluster is annotated True in stats
# Filtering out the noisy localizations is not strictly necessary but often helps the alignment
filteredLocs = locs.loc[locs['cluster_id'].isin(stats[stats['annotation'] == True].index)]
# Now compute the offset with the filtered localizations
aligner = mp.AlignToWidefield()
dx, dy = aligner(filteredLocs, img)
print('x-offset: {0}, y-offset: {1}'.format(dx, dy))
```
We can now use the `xShift` and `yShift` parameters of the call to overlay to apply these corrections. The localizations are not physically changed by this operation; only their locations in the visualization are moved.
```
overlay = mp.OverlayClusters(annotateCol = 'annotation', filterCol='annotation', pixelSize = 108,
xShift = dx, yShift = dy)
overlay(locs, stats, img)
```
Now when you navigate through the clusters you should see that they overlap quite well.
# Step 5: Annotating the clusters
If you do specify an annotation column in the call to `overlay`, you can use the keyboard to annotate each cluster and move to the next. The following keys are used to add annotations:
- **Space bar** : set the value in the stats column for this cluster to `True`
- **r** : set the value in the stats column for this cluster to `False` ('r' is for 'reject')
- **0-9** : set the value in the stats column to an integer between 0 and 9
# Step 6: Saving the results
Once you are finished, you may save the results of the annotation by saving the `stats` DataFrame using any of the Pandas save functions, such as `to_csv()`.
```
filename = 'annotated_data'
with open(filename, 'w') as f:
stats.to_csv(f)
```
# Summary
1. The **OverlayClusters** multiprocessor may be used to overlay clustered localizations on widefield images
2. The same multiprocessor may be used to manually annotate clusters
3. If the localizations are shifted relative to the widefield image, use the `AlignToWidefield` multiprocessor to correct this global shift.
# Random numbers and simulation
You will learn how to use a random number generator with a seed and produce simulation results (**numpy.random**, **scipy.stats**), and calculate the expected value of a random variable through Monte Carlo integration. You will learn how to save your results for later use (**pickle**). Finally, you will learn how to make your figures interactive (**ipywidgets**).
**Links:**
* [numpy.random](https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.random.html)
* [scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html)
* [ipywidgets](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html)
* datacamp on [pickle](https://www.datacamp.com/community/tutorials/pickle-python-tutorial)
**Imports:** We now import all the modules we need for this notebook. Importing everything at the beginning makes it clear which modules the notebook relies on.
```
import math
import pickle
import numpy as np
from scipy.stats import norm # normal distribution
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import ipywidgets as widgets
```
# Exchange economy with many consumers
Consider an **exchange economy** with
1. 2 goods, $(x_1,x_2)$
2. $N$ consumers indexed by $j \in \{1,2,\dots,N\}$
3. Preferences are Cobb-Douglas with uniformly *heterogenous* coefficients
$$
\begin{aligned}
u^{j}(x_{1},x_{2}) & = x_{1}^{\alpha_{j}}x_{2}^{1-\alpha_{j}}\\
& \,\,\,\alpha_{j}\sim\mathcal{U}(\underline{\mu},\overline{\mu})\\
& \,\,\,0<\underline{\mu}<\overline{\mu}<1
\end{aligned}
$$
4. Endowments are *homogenous* and given by
$$
\boldsymbol{e}^{j}=(e_{1}^{j},e_{2}^{j})=(k,1),\,k>0
$$
The implied **demand functions** are:
$$
\begin{aligned}
x_{1}^{\star j}(p_{1},p_{2},e^{j})&=\alpha_{j}\frac{I}{p_{1}}=\alpha_{j}\frac{kp_{1}+p_{2}}{p_{1}} \\
x_{2}^{\star j}(p_{1},p_{2},e^{j})&=(1-\alpha_{j})\frac{I}{p_{2}}=(1-\alpha_{j})\frac{kp_{1}+p_{2}}{p_{2}}
\end{aligned}
$$
The **equilibrium** for a random draw of $\alpha = \{\alpha_1,\alpha_2,\dots,\alpha_N\}$ is a set of **prices** $p_1$ and $p_2$ satisfying:
$$
\begin{aligned}
x_1(p_1,p_2) = \sum_{j=1}^N x_{1}^{\star j}(p_{1},p_{2},e^{j}) &= \sum_{j=1}^N e_1^j = Nk \\
x_2(p_1,p_2) = \sum_{j=1}^N x_{2}^{\star j}(p_{1},p_{2},e^{j}) &= \sum_{j=1}^N e_2^j = N
\end{aligned}
$$
**Problem:** Solve for this equilibrium. But how do we handle the randomness? We need a random number generator (RNG).
**Warm-up**: Choose parameters and define demand functions.
```
# a. parameters
N = 1000
k = 2 # endowment
mu_low = 0.1 # lower bound on alpha
mu_high = 0.9 # upper bound on alpha
# b. demand functions
def demand_good_1_func(alpha,p1,p2,k):
I = k*p1+p2
return alpha*I/p1
def demand_good_2_func(alpha,p1,p2,k):
I = k*p1+p2
return (1-alpha)*I/p2
```
**Quiz:** take a quick [quiz](https://forms.office.com/Pages/ResponsePage.aspx?id=kX-So6HNlkaviYyfHO_6kckJrnVYqJlJgGf8Jm3FvY9UMFpSRTIzUlJKMkdFQlpIN1VZUE9EVTBaMSQlQCN0PWcu) regarding the demand functions.
# Random numbers
The two main approaches to generating random numbers are:
1. **Physical observations** of random processes (radioactive decay, atmospheric noise, roulette wheels, etc.)
2. **Algorithms** creating pseudo-random numbers
**Pseudo-random numbers** satisfy properties that make them as good as random: it should be impossible (for all practical purposes) to calculate, or otherwise guess, from any given subsequence, any previous or future values in the sequence.
**More information:** See this [video](https://www.youtube.com/watch?v=C82JyCmtKWg&app=desktop#fauxfullscreen) by Infinite Series.
## Simple example: Middle-square method
Proposed by **John von Neumann**:
1. Start with a $N$ digit number
2. Square the number
3. Pad the number with leading zeros making it a $2N$ digit number
4. Extract the middle $N$ digits (*your random number*)
5. Return to step 1 to generate one more
> **Pro:** Simple and easy to implement. Conceptually somewhat similar to more advanced methods (e.g. *Mersenne-Twister* used by *numpy*).
>
> **Con:** Cycles can be no longer than $8^N$ periods. Many repeating cycles are very short. Internal state is directly observable.
>
> **Conclusion:** Cannot be used in practice.
**Code:** An implementation in Python for $N = 4$ digit random integers:
```
def rng(number,max_iter=100):
already_seen = [] # list of seen numbers
i = 0
while number not in already_seen and i < max_iter:
already_seen.append(number)
squared = number**2
padded = str(squared).zfill(8) # add leading zeros
number = int(padded[2:6]) # extract middle 4 numbers
print(f"square = {squared:8d}, padded = {padded} -> {number:4d}")
i += 1
```
A reasonable cycle:
```
rng(4653)
```
A short cycle:
```
rng(540)
```
No cycle at all:
```
rng(3792)
```
## Numpy
Numpy provides various functions for drawing random numbers. We can, for example, draw random integers between 0 and 10000:
```
X = np.random.randint(0,10000,size=5)
print(X)
```
**Problem:** How can we reproduce our results the next time we open Python?
**Solution:** Use a seed! Choose the seed, and reset the random number generator:
```
print('set seed to 2000 and create numbers:')
np.random.seed(2000)
print(np.random.uniform(size=5))
print(np.random.uniform(size=5))
print('\nreset algorithm by stating the same seed again:')
np.random.seed(2000)
print(np.random.uniform(size=5))
```
> **Note:** The first and third draws above are exactly the same.
We can also **save and load the state** of the random number generator.
```
# a. save state
state = np.random.get_state()
# b. draw some random number
print('generate numbers from current state:')
print(np.random.uniform(size=5))
print(np.random.uniform(size=5))
# c. reset state
np.random.set_state(state)
# d. draw the same random numbers again
print('\ngenerate numbers from past state by reloading it:')
print(np.random.uniform(size=5))
print(np.random.uniform(size=5))
```
> **Note**: You should *only set the seed once* per program. Resetting the seed repeatedly can break the statistical properties of the draws.
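If you are on NumPy 1.17 or newer, the `Generator` API sidesteps global state entirely, so different parts of a program can carry their own independent, reproducible streams (a sketch):

```python
import numpy as np

# each stream gets its own seeded Generator object -- no global state
rng_a = np.random.default_rng(2000)
rng_b = np.random.default_rng(2000)

draws_a = rng_a.uniform(size=5)
draws_b = rng_b.uniform(size=5)

# same seed, same stream -> identical draws, without np.random.seed()
print(np.allclose(draws_a, draws_b))  # → True
```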
## Different distributions
Draw random numbers from various distributions:
```
X = np.random.normal(loc=0,scale=1,size=10**6)
Y = np.random.beta(a=5,b=2,size=10**6)
Z = np.random.uniform(low=-2,high=2,size=10**6)
vec = np.array([-2.5,-2.0,-1.5,-1.0,-0.5,0,0.5,1.0,1.5,2,2.5])
prob = (np.linspace(-1,1,vec.size)+0.1)**2 # all positive numbers
prob /= np.sum(prob) # make them sum to one
K = np.random.choice(vec,size=10**6,p=prob)
```
Plot the various distributions:
```
fig = plt.figure(dpi=100)
ax = fig.add_subplot(1,1,1)
ax.hist(X,bins=100,density=True,alpha=0.5,label='normal') # alpha < 1 = transparent
ax.hist(Y,bins=100,density=True,alpha=0.5,label='beta')
ax.hist(Z,bins=100,density=True,alpha=0.5,label='uniform')
ax.hist(K,bins=100,density=True,alpha=0.5,label='choice')
ax.set_xlim([-3,3])
ax.legend(loc='upper left'); # note: the ; stops output from being printed
```
**Task:** Follow this [link](https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.random.html). Choose a distribution and add it to the figure above.
## Analytical results
How close are our draws to a normal distribution?
```
from scipy.stats import norm
# a. create analytical distribution
loc_guess = 0.25
scale_guess = 0.75
# loc_guess, scale_guess = norm.fit(X)
F = norm(loc=loc_guess,scale=scale_guess)
rnd = F.rvs(5) # example: create 5 random draws from the distribution F
print(f'F pdf at 0.0: {F.pdf(0.0): 1.3f} \nF cdf at 0.0: {F.cdf(0.0): 1.3f}') # the object F has several useful functions available
# b. vector of x values
x_low = F.ppf(0.001) # x value where cdf is 0.001
x_high = F.ppf(0.999) # x value where cdf is 0.999
x = np.linspace(x_low,x_high,100)
# c. compare
fig = plt.figure(dpi=100)
ax = fig.add_subplot(1,1,1)
ax.plot(x,F.pdf(x),lw=2,label='estimated')
ax.hist(X,bins=100,density=True,histtype='stepfilled');
```
**Task:** Make the pdf fit the histogram.
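One way to complete the task, sketched here with freshly drawn data: estimate `loc` and `scale` by maximum likelihood with `norm.fit` (the approach hinted at by the commented-out line in the cell above) instead of guessing them:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
X = rng.normal(loc=0, scale=1, size=10**5)

# maximum-likelihood estimates of loc and scale from the draws
loc_hat, scale_hat = norm.fit(X)
print(loc_hat, scale_hat)  # both close to the true values (0, 1)
```

Passing the fitted values to `norm(loc=loc_hat, scale=scale_hat)` makes the plotted pdf line up with the histogram.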
## Permutations
```
class dice_cup:
def __init__(self,ndice):
self.ndice = ndice
def roll(self):
self.dice = np.random.randint(1,7,size=self.ndice)
print(self.dice)
def shuffle(self):
np.random.shuffle(self.dice)
print(self.dice)
def roll_and_sum(self):
self.roll()
print(self.dice.sum())
my_dice_cup = dice_cup(4)
my_dice_cup.roll()
my_dice_cup.shuffle()
my_dice_cup.roll_and_sum()
```
**Task:** Add a method ``roll_and_sum()`` to the class above, which rolls the dice and prints their sum. Compare the value of your roll to your neighbor's.
*(If you start from an empty method stub, delete its ``pass`` statement when you begin coding; it is only there because Python cannot handle a completely empty function body.)*
# Demand
$$
x_1(p_1,p_2) = \sum_{j=1}^N x_{1}^{\star j}(p_{1},p_{2},e^{j}) = \sum_{j=1}^N \alpha_{j}\frac{kp_{1}+p_{2}}{p_{1}}
$$
Find demand distribution and total demand:
```
def find_demand_good_1(alphas,p1,p2,k):
distr = demand_good_1_func(alphas,p1,p2,k) # Notice we are passing in arrays of alphas together with scalars! It works because of numpy broadcasting.
total = distr.sum()
return distr,total
```
Calculate for various prices:
```
# a. draw alphas
alphas = np.random.uniform(low=mu_low,high=mu_high,size=N)
# b. prices
p1_vec = [0.5,1,2,5]
p2 = 1
# c. demand
dists = np.empty((len(p1_vec),N))
totals = np.empty(len(p1_vec))
for i,p1 in enumerate(p1_vec):
dist,total = find_demand_good_1(alphas,p1,p2,k)
dists[i,:] = dist
totals[i] = total
```
Plot the results:
```
fig = plt.figure(figsize=(10,4))
ax_left = fig.add_subplot(1,2,1)
ax_left.set_title('Distributions of demand')
for i,p1 in enumerate(p1_vec):
ax_left.hist(dists[i],density=True,alpha=0.5,label=f'$p_1 = {p1}$')
ax_left.legend(loc='upper right')
ax_right = fig.add_subplot(1,2,2)
ax_right.set_title('Level of demand')
ax_right.grid(True)
ax_right.plot(p1_vec,totals)
```
# Interactive figures
Create a function constructing a figure:
```
def interactive_figure(alphas,p1,p2,k):
# a. calculations
dist,_total = find_demand_good_1(alphas,p1,p2,k)
# b. figure
fig = plt.figure(dpi=100)
ax = fig.add_subplot(1,1,1)
ax.hist(dist,density=True)
ax.set_xlim([0,4]) # fixed x range
ax.set_ylim([0,0.8]) # fixed y range
```
**Case 1:** Make it interactive with a **slider**
```
widgets.interact(interactive_figure,
alphas=widgets.fixed(alphas),
p1=widgets.FloatSlider(description="$p_1$", min=0.1, max=5, step=0.05, value=2),
p2=widgets.fixed(p2),
k=widgets.fixed(k)
);
```
**Case 2:** Make it interactive with a **textbox**:
```
widgets.interact(interactive_figure,
alphas=widgets.fixed(alphas),
p1=widgets.FloatText(description="$p_1$", value=2),
p2=widgets.fixed(p2),
k=widgets.fixed(k)
);
```
**Case 3:** Make it interactive with a **dropdown menu**
```
widgets.interact(interactive_figure,
alphas=widgets.fixed(alphas),
p1=widgets.Dropdown(description="$p_1$", options=[0.5,1,1.5,2.0,2.5,3], value=2),
p2=widgets.fixed(p2),
k=widgets.fixed(k)
);
```
**Task:** Add a slider for \\(k\\) to the interactive figure below.
```
# change this code
widgets.interact(interactive_figure,
alphas=widgets.fixed(alphas),
p1=widgets.FloatSlider(description="$p_1$", min=0.1, max=5, step=0.05, value=2),
p2=widgets.fixed(p2),
k=widgets.fixed(k)
);
```
# Equilibrium
The equilibrium conditions (demand = supply) were:
$$
\begin{aligned}
\sum_{j=1}^N x_{1}^{\star j}(p_{1},p_{2},e^{j}) &= Nk \Leftrightarrow Z_1 \equiv \sum_{j=1}^N x_{1}^{\star j}(p_{1},p_{2},e^{j}) - Nk = 0 \\
\sum_{j=1}^N x_{2}^{\star j}(p_{1},p_{2},e^{j}) &= N \Leftrightarrow Z_2 \equiv \sum_{j=1}^N x_{2}^{\star j}(p_{1},p_{2},e^{j}) - N = 0
\end{aligned}
$$
**Idea:** Solve the first equation. The second is then satisfied due to Walras's law.
**Excess demand functions:**
```
def excess_demand_good_1_func(alphas,p1,p2,k):
# a. demand
demand = np.sum(demand_good_1_func(alphas,p1,p2,k))
# b. supply
supply = k*alphas.size
# c. excess demand
excess_demand = demand-supply
return excess_demand
def excess_demand_good_2_func(alphas,p1,p2,k):
# a. demand
demand = np.sum(demand_good_2_func(alphas,p1,p2,k))
# b. supply
supply = alphas.size
# c. excess demand
excess_demand = demand-supply
return excess_demand
```
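Walras's law guarantees that the value of aggregate excess demand is identically zero, $p_1 Z_1 + p_2 Z_2 = 0$, at *any* prices, which is why solving $Z_1 = 0$ alone suffices. A self-contained NumPy check (restating the demand functions above with illustrative, non-equilibrium parameter values):

```python
import numpy as np

# Cobb-Douglas demand functions restated from above
def demand_good_1(alpha, p1, p2, k):
    return alpha * (k*p1 + p2) / p1

def demand_good_2(alpha, p1, p2, k):
    return (1 - alpha) * (k*p1 + p2) / p2

rng = np.random.default_rng(1)
alphas = rng.uniform(0.1, 0.9, size=1000)
k, p1, p2 = 2, 1.7, 1.0  # arbitrary, non-equilibrium prices

Z1 = demand_good_1(alphas, p1, p2, k).sum() - k * alphas.size
Z2 = demand_good_2(alphas, p1, p2, k).sum() - alphas.size

# Walras's law: the value of excess demand is zero at ANY prices
print(p1*Z1 + p2*Z2)  # ≈ 0 (up to floating-point error)
```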
**Algorithm:**
First choose a tolerance $\epsilon > 0$, an adjustment factor $\kappa > 0$, and an initial guess $p_1 > 0$.
Then find the equilibrium price by:
1. Calculate excess demand $Z_1 = \sum_{j=1}^N x_{1}^{\star j}(p_{1},p_{2},e^{j}) - Nk$
2. If $|Z_1| < \epsilon $ stop
3. If $|Z_1| \geq \epsilon $ set $p_1 = p_1 + \kappa \cdot \frac{Z_1}{N}$
4. Return to step 1
That is, if excess demand is positive and far from 0, increase the price; if it is negative and far from 0, decrease the price.
```
def find_equilibrium(alphas,p1,p2,k,kappa=0.5,eps=1e-8,maxiter=500):
t = 0
while True:
# a. step 1: excess demand
Z1 = excess_demand_good_1_func(alphas,p1,p2,k)
# b: step 2: stop?
if np.abs(Z1) < eps or t >= maxiter:
print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand -> {Z1:14.8f}')
break
# c. step 3: update p1
p1 = p1 + kappa*Z1/alphas.size
# d. step 4: print only every 25th iteration using the modulus operator
if t < 5 or t%25 == 0:
print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand -> {Z1:14.8f}')
elif t == 5:
print(' ...')
t += 1
return p1
```
Find the equilibrium price:
```
p1 = 1.4
p2 = 1
kappa = 0.1
eps = 1e-8
p1 = find_equilibrium(alphas,p1,p2,k,kappa=kappa,eps=eps)
```
**Check:** Ensure that excess demand of both goods are (almost) zero.
```
Z1 = excess_demand_good_1_func(alphas,p1,p2,k)
Z2 = excess_demand_good_2_func(alphas,p1,p2,k)
print(Z1,Z2)
assert np.abs(Z1) < eps
assert np.abs(Z2) < eps
```
**Quiz:** take a quick quiz on the algorithm [here](https://forms.office.com/Pages/ResponsePage.aspx?id=kX-So6HNlkaviYyfHO_6kckJrnVYqJlJgGf8Jm3FvY9UMjRVRkEwQTRGVVJPVzRDS0dIV1VJWjhJVyQlQCN0PWcu)
# Numerical integration by Monte Carlo
Numerical integration is the task of computing
$$
\mathbb{E}[g(x)] \text{ where } x \sim F
$$
and $F$ is a known probability distribution and $g$ is a function.
Relying on the law of large numbers we approximate this integral with
$$
\mathbb{E}[g(x)] \approx \frac{1}{N}\sum_{i=1}^{N} g(x_i)
$$
where $x_i$ is drawn from $F$ using a random number generator. This is also called **numerical integration by Monte Carlo**.
**Monte Carlo function:**
```
def g(x):
return (x-1)**2
def MC(N,g,F):
X = F.rvs(size=N) # rvs = draw N random values from F
return np.mean(g(X))
```
**Example** with a normal distribution:
```
N = 1000
mu = 0.1
sigma = 0.5
F = norm(loc=mu,scale=sigma)
print(MC(N,g,F))
```
Function for drawing \\( K \\) Monte Carlo samples:
```
def MC_sample(N,g,F,K):
results = np.empty(K)
for i in range(K):
results[i] = MC(N,g,F)
return results
```
The variance across Monte Carlo samples falls with larger $N$:
```
K = 1000
for N in [10**2,10**3,10**4,10**5]:
results = MC_sample(N,g,F,K)
print(f'N = {N:8d}: {results.mean():.6f} (std: {results.std():.4f})')
```
## Advanced: Gauss-Hermite quadrature
**Problem:** Numerical integration by Monte Carlo is **slow**.
**Solution:** Use smarter integration formulas on the form
$$
\mathbb{E}[g(x)] \approx \sum_{i=1}^{n} w_ig(x_i)
$$
where $(x_i,w_i)$, $\forall i \in \{1,2,\dots,n\}$, are called **quadrature nodes and weights** and are provided by some theoretical formula depending on the distribution of $x$.
**Example I, Normal:** If $x \sim \mathcal{N}(\mu,\sigma)$ then we can use [Gauss-Hermite quadrature](https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature) as implemented below.
```
def gauss_hermite(n):
""" gauss-hermite nodes
Args:
n (int): number of points
Returns:
x (numpy.ndarray): nodes of length n
w (numpy.ndarray): weights of length n
"""
# a. calculations
i = np.arange(1,n)
a = np.sqrt(i/2)
CM = np.diag(a,1) + np.diag(a,-1)
L,V = np.linalg.eig(CM)
I = L.argsort()
V = V[:,I].T
# b. nodes and weights
x = L[I]
w = np.sqrt(math.pi)*V[:,0]**2
return x,w
def normal_gauss_hermite(sigma, n=7, mu=None, exp=False):
""" normal gauss-hermite nodes
Args:
sigma (double): standard deviation
n (int): number of points
        mu (double,optional): mean
        exp (bool,optional): take exp and correct mean (if not specified)
Returns:
x (numpy.ndarray): nodes of length n
w (numpy.ndarray): weights of length n
"""
if sigma == 0.0 or n == 1:
x = np.ones(n)
if mu is not None:
x += mu
w = np.ones(n)
return x,w
# a. GaussHermite
x,w = gauss_hermite(n)
x *= np.sqrt(2)*sigma
# b. log-normality
if exp:
if mu is None:
x = np.exp(x - 0.5*sigma**2)
else:
x = np.exp(x + mu)
    else:
        if mu is not None:
            x = x + mu
w /= np.sqrt(math.pi)
return x,w
```
**Results:** Because the function is "nice", very few quadrature points are actually needed (*not generally true*).
```
for n in [1,2,3,5,7,9,11]:
x,w = normal_gauss_hermite(mu=mu,sigma=sigma,n=n)
result = np.sum(w*g(x))
print(f'n = {n:3d}: {result:.10f}')
```
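The rapid convergence reflects that Gauss-Hermite quadrature with $n$ nodes integrates polynomials up to degree $2n-1$ exactly, so the quadratic $g(x)=(x-1)^2$ is handled exactly already at $n=2$. This can be checked against the closed form $\mathbb{E}[(x-1)^2] = \sigma^2 + (\mu-1)^2$ using NumPy's built-in `hermgauss` (physicists' convention, so nodes must be rescaled by $\sqrt{2}\sigma$ and weights by $1/\sqrt{\pi}$):

```python
import numpy as np

mu, sigma = 0.1, 0.5

def g(x):
    return (x - 1)**2

# physicists' Gauss-Hermite nodes/weights from NumPy; rescale for N(mu, sigma)
x, w = np.polynomial.hermite.hermgauss(2)  # n=2 nodes: exact up to degree 3
nodes = np.sqrt(2) * sigma * x + mu
weights = w / np.sqrt(np.pi)

result = np.sum(weights * g(nodes))
exact = sigma**2 + (mu - 1)**2  # Var(x) + (E[x]-1)^2
print(result, exact)  # both equal 1.06 up to floating-point error
```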
**Example II, log-normal ([more info](https://en.wikipedia.org/wiki/Log-normal_distribution)):**
1. Let $\log x \sim \mathcal{N}(\mu,\sigma)$.
2. Gauss-Hermite quadrature nodes and weights can be used with the option `exp=True`.
3. To ensure $\mathbb{E}[x] = 1$ then $\mu = -0.5\sigma^2$.
```
z = np.random.normal(size=1_000_000,scale=sigma)
print('mean(x) when mu = 0')
x,w = normal_gauss_hermite(mu=0,sigma=sigma,n=7,exp=True)
print(f'MC: {np.mean(np.exp(z)):.4f}')
print(f'Gauss-Hermite: {np.sum(x*w):.4f}')
print('')
print('mean(x), mu = -0.5*sigma^2')
x,w = normal_gauss_hermite(sigma=sigma,n=7,exp=True)
print(f'MC: {np.mean(np.exp(z-0.5*sigma**2)):.4f}')
print(f'Gauss-Hermite: {np.sum(x*w):.4f}')
```
# Load and save
## Pickle
A good allround method for loading and saving is to use **pickle**. Here is how to save:
```
import pickle
import numpy as np

# a. variables
my_dict = {'a':1,'b':2}
my_vec = np.array([1,2,3])
my_tupple = (1,4,2)
# b. put them in a dictionary
my_data = {}
my_data['my_dict'] = my_dict
my_data['my_vec'] = my_vec
my_data['my_tupple'] = my_tupple
# c. save the dictionary in a file
with open(f'data.p', 'wb') as f: # wb = write binary
pickle.dump(my_data, f)
```
Delete the variables:
```
del my_dict
del my_vec
del my_tupple
```
Load the data again:
```
# a. try
try:
    print(my_tupple)
except NameError:
    print('my_tupple does not exist')
# b. load
with open(f'data.p', 'rb') as f: # rb = read binary
data = pickle.load(f)
my_dict = data['my_dict']
my_vec = data['my_vec']
my_tupple = data['my_tupple']
# c. try again
print(my_vec)
print(my_tupple)
```
## Saving with numpy
When only saving/loading **numpy arrays**, an alternative is to use ``np.savez`` (or ``np.savez_compressed``). This is typically faster than pickle.
Here is how to save some data:
```
my_data = {}
my_data['A'] = np.array([1,2,3])
my_data['B'] = np.zeros((5,8))
my_data['C'] = np.ones((7,3,8))
np.savez(f'data.npz', **my_data)
# '**' unpacks the dictionary
```
Here is how to load the data again:
```
# a. delete
del my_data
# b. load all
my_data = {}
with np.load(f'data.npz') as data_obj:
for key in data_obj.files:
my_data[key] = data_obj[key]
print(my_data['A'])
# c. load single array
X = np.load(f'data.npz')['A']
print(X)
```
# Summary
**This lecture:** We have talked about:
1. numpy.random: Drawing (pseudo-)random numbers (seed, state, distributions)
2. scipy.stats: Using analytical random distributions (ppf, pdf, cdf, rvs)
3. ipywidgets: Making interactive figures
4. pickle and np.savez: Saving and loading data
The method you learned for finding an equilibrium can be used in many models; for example, the same approach can be applied with multiple goods.
**Your work:** Before solving Problem Set 2 read through this notebook and play around with the code.
**Next lecture:** Workflow and debugging. Go through these guides beforehand:
1. [Installing Python and VSCode](https://numeconcopenhagen.netlify.com//guides/python-setup)
2. [Running Python in JupyterLab](https://numeconcopenhagen.netlify.com//guides/jupyterlab)
3. [Running Python in VSCode](https://numeconcopenhagen.netlify.com//guides/vscode-basics)
You must have installed **git** and have a **GitHub account!** (step 2 in [Installing Python and VSCode](https://numeconcopenhagen.netlify.com//guides/python-setup)).
**Finally:** You can begin to think about who you want to work together with for the group assignments. We will talk more about the inaugural project next time.
# Disk Health Predictor Usage Demo
In this notebook, we will show how you can get started with the [`disk-health-predictor`](https://github.com/aicoe-aiops/disk-health-predictor) python module with just a few lines of code! First we will install it as a package, and then we will pass to it a [`smartctl`](https://linux.die.net/man/8/smartctl) json collected from a real hard disk, to predict that hard disk's health and estimate its time-to-failure.
## Installation
To install Disk Health Predictor using pip, you can run the following command:
```bash
pip install git+https://github.com/aicoe-aiops/disk-health-predictor.git
```
Alternatively, you can also add it to your Pipfile like this:
```
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
some-package = "*"
[packages]
disk-health-predictor = {git = "https://github.com/aicoe-aiops/disk-health-predictor.git", ref="master"}
another-pkg = "*"
[requires]
python_version = "3.8"
```
```
# lets install using the command line for now
# !pip install git+https://github.com/aicoe-aiops/disk-health-predictor.git
```
## Usage
To collect your disk's SMART data using the `smartctl` tool, you can run the following command (replacing `/dev/nvme0` with the device you want)
```bash
smartctl --json --xall /dev/nvme0
```
To collect and share (anonymously) this data for the hard disks in your Ceph cluster, you can enable the [Telemetry](https://docs.ceph.com/en/latest/mgr/telemetry/) module. We will use one of the smartctl jsons collected via Ceph telemetry as sample input data in this notebook.
```
import json
from pprint import pprint
# lets take a look at what the input data looks like
with open("sample_input_1.json", "r") as f:
sample_smartctl_jsons = json.load(f)
# view keys
keys = sample_smartctl_jsons.keys()
print("Keys in the input JSON:")
print(keys, end="\n\n")
# view value for a key
sample_key = list(keys)[0]
print(f"Value for Key {sample_key} in the input JSON:")
pprint(sample_smartctl_jsons[sample_key], depth=4)
```
As we can see, the keys in the input json are the datetimes at which the `smartctl` output was taken, and the values are the raw `smartctl` json output. Note that the predictor needs at least 6 days of data to make a valid prediction.
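To make the expected input shape concrete, here is a minimal sketch of such a dictionary: timestamp strings as keys, raw `smartctl`-style dicts as values, spanning seven days. The attribute fields are illustrative placeholders, not the full `smartctl` schema, and `has_enough_history` is a hypothetical helper expressing the minimum-data requirement described above:

```python
from datetime import datetime, timedelta

# seven daily snapshots keyed by timestamp string, oldest first
start = datetime(2024, 1, 1)
sample_input = {
    (start + timedelta(days=d)).strftime('%Y-%m-%d %H:%M:%S'): {
        # illustrative placeholder payload; real smartctl output has many more fields
        'model_name': 'EXAMPLE-DISK',
        'power_on_time': {'hours': 1000 + 24 * d},
    }
    for d in range(7)
}

def has_enough_history(smartctl_jsons, min_days=6):
    """Check the predictor's minimum-data requirement: at least min_days distinct days."""
    days = {ts.split(' ')[0] for ts in smartctl_jsons}
    return len(days) >= min_days

print(has_enough_history(sample_input))  # True
```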
Disk Health Predictor provides several `Classifier`s, such as `PSDiskHealthClassifier` and `RHDiskHealthClassifier`, to estimate disk health using `smartctl` data. These classifiers make use of pretrained models to predict a disk's health as either
- `BAD` (will fail in less than 2 weeks),
- `WARNING` (will fail in 2 to 6 weeks),
- `GOOD` (will not fail for at least 6 weeks), or
- `UNKNOWN` (could not generate a prediction due to insufficient data).
To instantiate these classifiers, you can use the factory function `DiskHealthClassifierFactory` as shown below.
```
from disk_health_predictor.classifiers import DiskHealthClassifierFactory # noqa: E402
clf = DiskHealthClassifierFactory("redhat")
clf
```
Now, you can classify disk health to estimate when the disk will fail as follows.
```
pred = clf.predict(sample_smartctl_jsons)
print(f"This hard disk is estimated to be in the '{pred}' health category.")
```
If you have any questions, comments, or suggestions, please feel free to [open issues](https://github.com/aicoe-aiops/disk-health-predictor/issues/new/choose) or [submit pull requests](https://github.com/aicoe-aiops/disk-health-predictor/compare)!
# Machine Learning and the MNC

[xkcd: Machine Learning](https://xkcd.com/1838/)
## About this Course
### Premises
1. Many machine learning methods are relevant and useful in a wide range of
academic and non-academic disciplines.
1. Machine learning should not be viewed as a black box.
1. It is important to know what job is performed by each cog - it is not
necessary to have the skills to construct the machine inside the (black) box.
1. You are interested in applying machine learning to real-world problems.
### Vision
By the end of the lecture you will:
- Understand fundamental machine learning concepts.
- Gain hands on experience by working on business-related use cases.
- Develop an understanding for typical challenges, be able to ask the right questions when assessing a model and be able to frame business related questions as ML projects.
### Challenges
Machine Learning is a very broad, deep and quickly developing field with progress being made in
both academia and industry at breathtaking pace.
Yet, fundamental concepts do not change, and we will therefore focus on exactly these concepts - at times at the price of not applying the most state-of-the-art tools or methods.
As understanding complex systems is greatly facilitated by gaining practical experience using them, this lecture takes a very hands-on approach. In doing so, we will employ the Python programming language and its Data Science and ML ecosystem. For those unfamiliar with Python, learning a new programming language is, of course, an additional challenge.
### Notes
While we will use Python as primary tool, the concepts discussed in this course are independent of the programming language. However, the employed libraries will help to maintain a high level perspective on the problem without the need to deal with numerical details.
### Literature
#### Introductory and Hands On
- [Hands-On Machine Learning with Scikit-Learn and TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/) *Concepts, Tools, and Techniques to Build Intelligent Systems*, easy to read, very hands on, uses [Python](http://python.org). Copies are available in the library.
- [An Introduction to Statistical Learning](http://faculty.marshall.usc.edu/gareth-james/ISL/) *with Applications in R*, easy to read, uses [R](https://www.r-project.org/), slightly more formal treatment than in the book above; available as PDF download.
#### More Formal
- [The Elements of Statistical Learning: Data Mining, Inference, and Prediction.](https://web.stanford.edu/~hastie/ElemStatLearn/), a classic, available as PDF download.
- [Pattern Recognition and Machine Learning](https://www.springer.com/gp/book/9780387310732), a classic, available as PDF download.
#### Online Resources and Courses
- [Scikit-learn's](https://scikit-learn.org/stable/user_guide.html) user guide features a wide range of examples and typically provides links to follow up information.
- [Coursera's](https://www.coursera.org/learn/machine-learning) offers courses both on Machine and Deep Learning.
- *And many (!) others.*
## About Machine Learning
### What are Machine Learning Use Cases?
In everyday life:
- Automatic face recognition
- Automatic text translation
- Automatic speech recognition
- Spam filter
- Recommendation systems
- ...
And elsewhere:
- Fraud detection
- Predictive maintenance
- Diagnosing diseases
- Sentiment/Topic analysis in texts
- ...
What do these applications have in common?
- Data
- Complexity
- Need to scale as data increases
- Need to evolve as data changes
- Need to learn from data
### What is Machine Learning?
\[ISL]:
> *Statistical learning* refers to a vast set of tools for *understanding data*.
\[HandsOn]:
> Machine Learning is the science (and art) of programming computers so they can *learn from data*.
### How does a Machine Learning Process look like?
#### Traditional Approach
<a href="https://www.oreilly.com/library/view/hands-on-machine-learning/9781491962282/ch01.html">
<img src="../images/handson/mlst_0101.png" alt="Traditional Approach" width="50%">
</a>
Comments:
- Writing rules may be hard if not outright impossible.
- Not clear how to update rules when new data arrives.
#### Machine Learning Approach
<a href="https://www.oreilly.com/library/view/hands-on-machine-learning/9781491962282/ch01.html">
<img src="../images/handson/mlst_0102.png" alt="Traditional Approach" width="50%">
</a>
Comments:
- Rule detection/specification is left to an algorithm.
- When new data arrives, the model is retrained.
Machine Learning systems can be automated to incrementally improve over time.
<a href="https://www.oreilly.com/library/view/hands-on-machine-learning/9781491962282/ch01.html">
<img src="../images/handson/mlst_0103.png" alt="ML Approach" width="50%">
</a>
In addition, Machine Learning can be used to gain insights from data that are otherwise not available. It need not necessarily be *lots* of data.
<a href="https://www.oreilly.com/library/view/hands-on-machine-learning/9781491962282/ch01.html">
<img src="../images/handson/mlst_0104.png" alt="Traditional Approach" width="50%">
</a>
### What is Machine Learning good for?
\[HandsOn]:
> Machine Learning is great for:
> - Problems for which existing solutions require a lot of hand-tuning or long lists of rules: one Machine Learning algorithm can often simplify code and perform better
> - Complex problems for which there is no good solution at all using traditional approach: the best Machine Learning techniques can [may] find a solution.
> - Fluctuating environments: a Machine Learning system can adapt to new data.
> - Getting insights about problems and (large amounts of) data.
Machine Learning is typically used as part of a decision making process and may therefore be accompanied by (or even include) a cost benefit analysis.
## Python
> Python is an interpreted, high-level, general-purpose programming language.
> Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability [...].
> Its language constructs [...] aim to help programmers write clear, logical code for small and large-scale projects.
[Wikipedia](https://en.wikipedia.org/wiki/Python_(programming_language))
---
*Python has a very large scientific computing and Machine Learning ecosystem.*
## Machine Learning with Python
In this lecture, we will primarily use [scikit-learn](https://scikit-learn.org/stable/).
<a href="https://scikit-learn.org/stable/">
<img src="../images/sklearn/scikit-learn-logo-small.png" alt="scikit learn" width="20%">
</a>
Scikit-learn is a prominent, production-ready library used in both academia and industry with [endorsements](https://scikit-learn.org/stable/testimonials/testimonials.html) from e.g. [JPMorgan](https://www.jpmorgan.com/), [Spotify](https://www.spotify.com/), [Inria](https://www.inria.fr/), [Evernote](https://evernote.com/) and others.
One of its particular features is a very simple and uniform [API](https://en.wikipedia.org/wiki/Application_programming_interface) which allows a wide range of different models to be used with only minimally different commands.
In addition, it provides a set of utilities typically needed for developing a Machine Learning system.
### Generic Use
Using scikit-learn typically involves at least the following steps:
```python
# 0. train test split
X_train, X_test, y_train, y_test = train_test_split(X, y)
# 1. choose a model
from sklearn.model_family import DesiredModel
# 2. instantiate a model with certain parameters
model = DesiredModel(model_parameters)
# 3. fit a model to the data
model.fit(X_train, y_train)
# 4. evaluate
model.score(X_test, y_test), model.score(X_train, y_train)
# 5. use the model to make a prediction
y_new = model.predict(X_new)
```
Although there are a lot of things going on in the background, the basic actions always take a form similar to this.
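To make this uniform interface concrete without depending on any particular library model, here is a toy centroid classifier (a simplified sketch, not part of the lecture's code and not scikit-learn's own `NearestCentroid`) that exposes the same `fit`/`predict`/`score` shape every estimator follows:

```python
import numpy as np

class ToyCentroidClassifier:
    """Toy classifier mimicking the scikit-learn estimator interface."""

    def fit(self, X, y):
        # store one mean vector ("centroid") per class
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # assign each point to the class with the closest centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

    def score(self, X, y):
        return np.mean(self.predict(X) == y)

# two well-separated blobs of 2D points
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

model = ToyCentroidClassifier()
model.fit(X, y)
print(model.score(X, y))  # accuracy on the training blobs
```

Any real scikit-learn estimator can be dropped in wherever this toy class is used, which is exactly the point of the uniform API.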
### A Simple Example
**Problem:**
- Input: 2D data, i.e. variables x1 and x2
- Output: binary, i.e. a variable y taking only values {0, 1} (=classes)
- Objective: separate input space according to class values
```
import matplotlib.pyplot as plt
from mlxtend.plotting import plot_decision_regions
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
def load_data(**kwargs):
# experiment by setting the parameters to different values
# and observe the result
return make_moons(n_samples=200, noise=0.3, random_state=42)
def visualize_data(X, y):
fig, ax = plt.subplots()
ax.scatter(X[:, 0], X[:, 1], c=y)
return fig, ax
X, y = load_data()
visualize_data(X, y)
# SCIKIT-LEARN: start
# 1. choose a model
from sklearn.ensemble import RandomForestClassifier
# 2. set model parameters (perform parameter search e.g. using GridSearchCV)
model = RandomForestClassifier(n_estimators=100, random_state=42)
# 3. fit model
model.fit(X, y)
# 4. predict
model.predict(X)
# 5. inspect
plot_decision_regions(X, y, clf=model, legend=2)
# SCIKIT-LEARN: end
```
Observe how the model has successfully captured the shape of the data.
## A First Application
*Head over to the advertising notebook.*
```
from PIL import Image
import numpy as np
import os
import cv2
import keras
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Conv2D,MaxPooling2D,Dense,Flatten,Dropout
import pandas as pd
import sys
%matplotlib inline
from scipy.spatial.distance import euclidean as euc
import matplotlib.pyplot as plt
import random
import plotly.express as px
import numpy
import tensorflow as tf
import requests
def readData(filepath, label):
cells = []
labels = []
file = os.listdir(filepath)
for img in file:
try:
image = cv2.imread(filepath + img)
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((50, 50))
cells.append(np.array(size_image))
labels.append(label)
except AttributeError as e:
print('Skipping file: ', img, e)
print(len(cells), ' Data Points Read!')
return np.array(cells), np.array(labels)
TestParasitizedCells, TestParasitizedLabels = readData('./input/fed/test/Parasitized/', 1)
TestUninfectedCells, TestUninfectedLabels = readData('./input/fed/test/Uninfected/', 0)
def update(name, Cells, Labels, globalId):
s = np.arange(Cells.shape[0])
np.random.shuffle(s)
Cells = Cells[s]
Labels = Labels[s]
num_classes=len(np.unique(Labels))
len_data=len(Cells)
print(len_data, ' Data Points')
(x_train)=Cells
(y_train)=Labels
    # Since we're working on image data, we normalize data by dividing by 255.
x_train = x_train.astype('float32')/255
train_len=len(x_train)
#Doing One hot encoding as classifier has multiple classes
y_train=keras.utils.to_categorical(y_train,num_classes)
#creating sequential model
model=Sequential()
model.add(Conv2D(filters=16,kernel_size=2,padding="same",activation="relu",input_shape=(50,50,3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32,kernel_size=2,padding="same",activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64,kernel_size=2,padding="same",activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(500,activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(2,activation="softmax"))#2 represent output layer neurons
# model.summary()
if globalId != 1:
model.load_weights("./weights/global"+str(globalId)+".h5")
# compile the model with loss as categorical_crossentropy and using adam optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#Fit the model with min batch size as 50[can tune batch size to some factor of 2^power ]
model.fit(x_train, y_train, batch_size=10, epochs=10, verbose=1)
print(model.summary())
#Saving Model
model.save("./weights/"+str(name)+".h5")
return len_data, model
def unison_shuffled_copies(a, b):
assert len(a) == len(b)
p = numpy.random.permutation(len(a))
return a[p], b[p]
print('Reading Training Data')
ParasitizedCells, ParasitizedLabels = readData('./input/cell_images/Parasitized/', 1)
UninfectedCells, UninfectedLabels = readData('./input/cell_images/Uninfected/', 0)
Cells = np.concatenate((ParasitizedCells, UninfectedCells))
Labels = np.concatenate((ParasitizedLabels, UninfectedLabels))
Cells, Labels = unison_shuffled_copies(Cells, Labels)
def getDataLen(trainingDict):
n = 0
for w in trainingDict:
# print(w)
n += trainingDict[w]
print('Total number of data points after this round: ', n)
return n
def assignWeights(trainingDf, trainingDict):
n = getDataLen(trainingDict)
trainingDf['Weightage'] = trainingDf['DataSize'].apply(lambda x: x/n)
return trainingDf, n
def scale(weight, scaler):
scaledWeights = []
for i in range(len(weight)):
scaledWeights.append(scaler * weight[i])
return scaledWeights
def getWeight(d):
#creating sequential model
model=Sequential()
model.add(Conv2D(filters=16,kernel_size=2,padding="same",activation="relu",input_shape=(50,50,3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32,kernel_size=2,padding="same",activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64,kernel_size=2,padding="same",activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(500,activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(2,activation="softmax"))#2 represent output layer neurons
# model.summary()
fpath = "./weights/"+d+".h5"
model.load_weights(fpath)
weight = model.get_weights()
return weight
def getScaledWeight(d, scaler):
#creating sequential model
model=Sequential()
model.add(Conv2D(filters=16,kernel_size=2,padding="same",activation="relu",input_shape=(50,50,3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32,kernel_size=2,padding="same",activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64,kernel_size=2,padding="same",activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(500,activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(2,activation="softmax"))#2 represent output layer neurons
# model.summary()
fpath = "./weights/"+d+".h5"
model.load_weights(fpath)
weight = model.get_weights()
return scale(weight, scaler)
def avgWeights(scaledWeights):
avg = list()
for weight_list_tuple in zip(*scaledWeights):
layer_mean = tf.math.reduce_sum(weight_list_tuple, axis=0)
avg.append(layer_mean)
return avg
def FedAvg(trainingDict):
trainingDf = pd.DataFrame.from_dict(trainingDict, orient='index', columns=['DataSize'])
models = list(trainingDict.keys())
scaledWeights = []
trainingDf, dataLen = assignWeights(trainingDf, trainingDict)
for m in models:
scaledWeights.append(getScaledWeight(m, trainingDf.loc[m]['Weightage']))
fedAvgWeight = avgWeights(scaledWeights)
return fedAvgWeight, dataLen
def saveModel(weight, n):
print('Reading Testing Data')
TestCells = np.concatenate((TestParasitizedCells, TestUninfectedCells))
TestLabels = np.concatenate((TestParasitizedLabels, TestUninfectedLabels))
sTest = np.arange(TestCells.shape[0])
np.random.shuffle(sTest)
TestCells = TestCells[sTest]
TestLabels = TestLabels[sTest]
num_classes=len(np.unique(TestLabels))
(x_test) = TestCells
(y_test) = TestLabels
    # Since we're working on image data, we normalize data by dividing by 255.
x_test = x_test.astype('float32')/255
test_len=len(x_test)
#Doing One hot encoding as classifier has multiple classes
y_test=keras.utils.to_categorical(y_test,num_classes)
#creating sequential model
model=Sequential()
model.add(Conv2D(filters=16,kernel_size=2,padding="same",activation="relu",input_shape=(50,50,3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32,kernel_size=2,padding="same",activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64,kernel_size=2,padding="same",activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(500,activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(2,activation="softmax"))#2 represent output layer neurons
# model.summary()
model.set_weights(weight)
# compile the model with loss as categorical_crossentropy and using adam optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
scores = model.evaluate(x_test, y_test)
print("Loss: ", scores[0]) #Loss
print("Accuracy: ", scores[1]) #Accuracy
#Saving Model
fpath = "./weights/global"+str(n)+".h5"
model.save(fpath)
return scores[0], scores[1]
def euclidean(m, n):
distance = []
for i in range(len(m)):
# print(i)
distance.append(euc(m[i].reshape(-1,1), n[i].reshape(-1,1)))
# print(distance)
distance = sum(distance)/len(m)
return distance
def merge(trainingDict, b):
# print(trainingDict)
models = list(trainingDict.keys())
# print(models)
trainingDf = pd.DataFrame.from_dict(trainingDict, orient='index', columns=['DataSize'])
l_weights = []
g_weight = {}
# print(models)
for m in models:
# print(m)
if 'global' in m:
g_weight['name'] = m
g_weight['weight'] = getWeight(m)
else:
l_weights.append({
'name': m,
'weight': getWeight(m)
})
# print(g_weight)
scores = {}
for m in l_weights:
scores[m['name']] = euclidean(m['weight'], g_weight['weight'])
sortedScores = {k: v for k, v in sorted(scores.items(), key=lambda item: item[1])}
# print(scores)
# print(sortedScores)
b = int(len(scores)*b)
selected = []
for i in range(b):
selected.append((sortedScores.popitem())[0])
newDict = {}
for i in trainingDict.keys():
if (i not in selected) and ('global' not in i):
newDict[i] = trainingDict[i]
print('Selections: ', newDict)
NewGlobal, dataLen = FedAvg(newDict)
return NewGlobal, dataLen
per_client_batch_size = 1000
curr_local = 0
curr_global = 0
local = {}
loss_array = []
acc_array = []
for i in range(0, len(Cells), per_client_batch_size):
if int(curr_global) == 0:
curr_global += 1
name = 'global' + str(curr_global)
l, m = update(name, Cells[i:i+per_client_batch_size], Labels[i:i+per_client_batch_size], curr_global)
local[name] = l
elif (curr_local != 0) and (int(curr_local)%5 == 0):
curr_global += 1
print('Current Global: ', curr_global)
name = 'global' + str(curr_global)
m, l = merge(local, 0.25)
loss, acc = saveModel(m, curr_global)
loss_array.append(loss)
acc_array.append(acc)
curr_local += 1
local = {}
local[name] = l
else:
print('Current Local: ', curr_local)
name = str('local'+str(curr_local))
curr_local += 1
l, m = update(name, Cells[i:i+per_client_batch_size], Labels[i:i+per_client_batch_size], curr_global)
local[name] = l
#accuracy #per_client_batch_size: 2000
print(acc_array)
fig = px.line(y=acc_array)
fig.show()
#accuracy #per_client_batch_size: 1500
print(acc_array)
fig = px.line(y=acc_array)
fig.show()
#accuracy #per_client_batch_size: 1000
print(acc_array)
fig = px.line(y=acc_array)
fig.show()
#accuracy #per_client_batch_size: 500
print(acc_array)
fig = px.line(y=acc_array)
fig.show()
#accuracy #per_client_batch_size: 100
print(acc_array)
fig = px.line(y=acc_array)
fig.show()
#accuracy #per_client_batch_size: 100, 10 epochs
print(acc_array)
fig = px.line(y=acc_array)
fig.show()
#accuracy #per_client_batch_size: 50
print(acc_array)
fig = px.line(y=acc_array)
fig.show()
#loss
fig = px.line(y=loss_array)
fig.show()
```
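The weighted averaging at the heart of `FedAvg` above — scale each client's per-layer weights by its share of the total data, then sum across clients — can be sketched independently of Keras in a few lines of NumPy. This hypothetical two-client example uses constant weight arrays so the result is easy to verify by hand:

```python
import numpy as np

# two hypothetical clients with different amounts of training data
data_sizes = {'local1': 300, 'local2': 100}
client_weights = {
    'local1': [np.full((2, 2), 1.0), np.full(2, 1.0)],  # per-layer weight arrays
    'local2': [np.full((2, 2), 5.0), np.full(2, 5.0)],
}

n = sum(data_sizes.values())

def fed_avg(client_weights, data_sizes, n):
    """Weighted average of per-layer weight lists, FedAvg-style."""
    scaled = [
        [data_sizes[c] / n * layer for layer in layers]
        for c, layers in client_weights.items()
    ]
    # sum matching layers across clients
    return [np.sum(layer_tuple, axis=0) for layer_tuple in zip(*scaled)]

avg = fed_avg(client_weights, data_sizes, n)
print(avg[0])  # 0.75*1 + 0.25*5 = 2.0 in every entry
```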
| github_jupyter |
```
import conf
import uuid
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
bucket = conf.SESSION_BUCKET
sess = sagemaker.Session(default_bucket=bucket)
bucket = sess.default_bucket()
bucket
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, 'object-detection', repo_version="latest")
print (training_image)
s3_output_location = 's3://{}/output'.format(bucket)
s3_checkpoint_path = "checkpoint"
checkpoint_s3_uri = 's3://{}/{}/{}'.format(bucket, s3_checkpoint_path, uuid.uuid4())
s3_train_data = 's3://{}/{}'.format(bucket, 'train')
s3_validation_data = 's3://{}/{}'.format(bucket, 'validation')
s3_train_annotation = 's3://{}/{}'.format(bucket, 'train_annotation')
s3_validation_annotation = 's3://{}/{}'.format(bucket, 'validation_annotation')
od_model = sagemaker.estimator.Estimator(training_image,
role,
train_instance_count=conf.train_instance_count,
train_instance_type=conf.train_instance_type,
train_volume_size=conf.train_volume_size,
train_use_spot_instances=conf.train_use_spot_instances,
train_max_wait=conf.train_max_wait,
train_max_run=conf.train_max_run,
input_mode=conf.input_mode,
output_path=s3_output_location,
checkpoint_s3_uri=checkpoint_s3_uri,
sagemaker_session=sess)
od_model.set_hyperparameters(base_network=conf.base_network,
use_pretrained_model=conf.use_pretrained_model,
num_classes=conf.num_classes,
mini_batch_size=conf.mini_batch_size,
epochs=conf.epochs,
learning_rate=conf.learning_rate,
lr_scheduler_step=conf.lr_scheduler_step,
lr_scheduler_factor=conf.lr_scheduler_factor,
optimizer=conf.optimizer,
momentum=conf.momentum,
weight_decay=conf.weight_decay,
overlap_threshold=conf.overlap_threshold,
nms_threshold=conf.nms_threshold,
image_shape=conf.image_shape,
label_width=conf.label_width,
num_training_samples=conf.num_training_samples)
train_data = sagemaker.session.s3_input(s3_train_data,
distribution='FullyReplicated',
content_type='image/jpeg',
s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3_validation_data,
distribution='FullyReplicated',
content_type='image/jpeg',
s3_data_type='S3Prefix')
train_annotation = sagemaker.session.s3_input(s3_train_annotation,
distribution='FullyReplicated',
content_type='image/jpeg',
s3_data_type='S3Prefix')
validation_annotation = sagemaker.session.s3_input(s3_validation_annotation,
distribution='FullyReplicated',
content_type='image/jpeg',
s3_data_type='S3Prefix')
data_channels = {
'train': train_data,
'validation': validation_data,
'train_annotation': train_annotation,
'validation_annotation': validation_annotation
}
od_model.fit(inputs=data_channels, logs=True)
```
# Setup
Run below for setup and data loading:
```
if(!require(lme4)){
install.packages("lme4")
}
library(lme4)
library(tidyr)
library(dplyr)
library(ggplot2)
results = read.csv("model_input.csv", header=TRUE, sep=",")
nrow(results)
rearranged_results <- droplevels(results)
data <- mutate(rearranged_results, 'participant' = as.factor(participant)) %>%
mutate('slide' = as.factor(slide)) %>%
mutate('order' = as.factor(order)) %>%
mutate('type' = as.factor(type)) %>%
mutate('grade' = as.factor(grade)) %>%
mutate('mode' = as.factor(mode))
run_model <- function(data) {
model0 <- glmer(accuracy ~ (1|participant) + (1|slide) + grade + type, family = 'binomial', data=data)
model1 <- glmer(accuracy ~ (1|participant) + (1|slide) + grade + type + mode, family = 'binomial', data=data)
print(model1)
print(anova(model0, model1))
se <- sqrt(diag(vcov(model1)))
# table of estimates with 95% CI
tab <- cbind(Est = fixef(model1), LL = fixef(model1) - 1.96 * se, UL = fixef(model1) + 1.96 * se)
print("Odds Ratio")
print(exp(tab))
}
```
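The odds-ratio table that `run_model` prints comes from exponentiating each fixed-effect estimate together with its Wald 95% confidence bounds (estimate ± 1.96 × SE). The same computation can be sketched in Python for a single hypothetical coefficient:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a fixed-effect estimate and its Wald 95% CI bounds."""
    return (math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se))

# hypothetical estimate: beta = 0.5 with standard error 0.1
est, ll, ul = odds_ratio_ci(0.5, 0.1)
print(f'OR = {est:.3f}, 95% CI [{ll:.3f}, {ul:.3f}]')
```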
# Effect of Assistance on ALL Participants
```
m = run_model(data)
```
# Effect of Assistance on all except Pathologist NOC
```
m = run_model(data %>% filter(type != 4))
```
# Effect of Experience Level
```
model0 <- glmer(accuracy ~ (1|participant) + (1|slide) + grade + mode, family = 'binomial', data=data)
print(model0)
model1 <- glmer(accuracy ~ (1|participant) + (1|slide) + grade + mode + type, family = 'binomial', data=data)
#print(model1)
print(anova(model0, model1))
se <- sqrt(diag(vcov(model1)))
# table of estimates with 95% CI
tab <- cbind(Est = fixef(model1), LL = fixef(model1) - 1.96 * se, UL = fixef(model1) + 1.96 * se)
print("Odds Ratio")
print(exp(tab))
```
# Effect of Tumor Grade
```
model0 <- glmer(accuracy ~ (1|participant) + (1|slide) + mode + type, family = 'binomial', data=data)
print(model0)
model1 <- glmer(accuracy ~ (1|participant) + (1|slide) + mode + type + grade, family = 'binomial', data=data)
print(model1)
print(anova(model0, model1))
se <- sqrt(diag(vcov(model1)))
# table of estimates with 95% CI
tab <- cbind(Est = fixef(model1), LL = fixef(model1) - 1.96 * se, UL = fixef(model1) + 1.96 * se)
print("Odds Ratio")
print(exp(tab))
```
# Effect of model correctness on accuracy
```
model_wrong_set = data %>% filter(errormodel == 1)
model_right_set = data %>% filter(errormodel == 0)
nrow(model_wrong_set)
nrow(model_right_set)
model0 <- glmer(accuracy ~ (1|participant) + (1|slide) + type + grade, family = 'binomial', data=model_right_set)
model1 <- glmer(accuracy ~ (1|participant) + (1|slide) + type + grade + mode, family = 'binomial', data=model_right_set)
print(anova(model0, model1))
se <- sqrt(diag(vcov(model1)))
# table of estimates with 95% CI
tab <- cbind(Est = fixef(model1), LL = fixef(model1) - 1.96 * se, UL = fixef(model1) + 1.96 * se)
print("Odds Ratio")
print(exp(tab))
model0 <- glmer(accuracy ~ (1|participant) + (1|slide) + type + grade, family = 'binomial', data=model_wrong_set)
model1 <- glmer(accuracy ~ (1|participant) + (1|slide) + type + grade + mode, family = 'binomial', data=model_wrong_set)
print(anova(model0, model1))
se <- sqrt(diag(vcov(model1)))
# table of estimates with 95% CI
tab <- cbind(Est = fixef(model1), LL = fixef(model1) - 1.96 * se, UL = fixef(model1) + 1.96 * se)
print("Odds Ratio")
print(exp(tab))
```
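For readers who want to sanity-check the odds-ratio step used in each R block above, the same Wald computation — exponentiate the fixed-effect coefficients and their 95% confidence bounds — can be sketched in Python. The coefficient and standard-error values below are invented for illustration, not taken from the study data:

```python
import numpy as np

def odds_ratio_table(coefs, ses, z=1.96):
    """Exponentiate coefficients and their Wald confidence bounds.

    Returns one row per coefficient: [odds ratio, lower 95% CI, upper 95% CI],
    mirroring exp(cbind(Est, LL, UL)) in the R cells above.
    """
    coefs = np.asarray(coefs, dtype=float)
    ses = np.asarray(ses, dtype=float)
    return np.column_stack([
        np.exp(coefs),            # point estimate on the odds-ratio scale
        np.exp(coefs - z * ses),  # lower bound
        np.exp(coefs + z * ses),  # upper bound
    ])

# Illustrative values only
table = odds_ratio_table([0.7, -0.2], [0.1, 0.3])
```

An odds ratio whose interval excludes 1 corresponds to a coefficient whose Wald interval excludes 0.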
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
# Create data
```
import copy
from liegroups import SE2, SO2
params_true = {'T_1_0': SE2.identity(),
'T_2_0': SE2(SO2.identity(), -np.array([0.5, 0])),
'T_3_0': SE2(SO2.identity(), -np.array([1, 0])),
'T_4_0': SE2(SO2.from_angle(np.pi / 2),
-(SO2.from_angle(np.pi / 2).dot(np.array([1, 0.5])))),
'T_5_0': SE2(SO2.from_angle(np.pi),
-(SO2.from_angle(np.pi).dot(np.array([0.5, 0.5])))),
'T_6_0': SE2(SO2.from_angle(-np.pi / 2),
-(SO2.from_angle(-np.pi / 2).dot(np.array([0.5, 0]))))}
# observation: relative pose between poses
obs = {'T_1_0': params_true['T_1_0'],
'T_2_1': params_true['T_2_0'].dot(params_true['T_1_0'].inv()),
'T_3_2': params_true['T_3_0'].dot(params_true['T_2_0'].inv()),
'T_4_3': params_true['T_4_0'].dot(params_true['T_3_0'].inv()),
'T_5_4': params_true['T_5_0'].dot(params_true['T_4_0'].inv()),
'T_6_5': params_true['T_6_0'].dot(params_true['T_5_0'].inv()),
'T_6_2': params_true['T_6_0'].dot(params_true['T_2_0'].inv())}
# Could params_init be set directly to SE2.exp(np.random.rand(3))?
params_init = copy.deepcopy(params_true)
for key in params_init.keys():
params_init[key] = SE2.exp(5 * np.random.rand(3)).dot(params_init[key])
```
# Create residual functions
```
from pyslam.residuals import PoseResidual, PoseToPoseResidual
from pyslam.utils import invsqrt
# Q: how are the stiffness values chosen?
prior_stiffness = invsqrt(1e-12 * np.identity(3))
odom_stiffness = invsqrt(1e-3 * np.identity(3))
loop_stiffness = invsqrt(1e-3 * np.identity(3))
residual0 = PoseResidual(obs['T_1_0'], prior_stiffness)
residual0_params = ['T_1_0']
residual1 = PoseToPoseResidual(obs['T_2_1'], odom_stiffness)
residual1_params = ['T_1_0', 'T_2_0']
residual2 = PoseToPoseResidual(obs['T_3_2'], odom_stiffness)
residual2_params = ['T_2_0', 'T_3_0']
residual3 = PoseToPoseResidual(obs['T_4_3'], odom_stiffness)
residual3_params = ['T_3_0', 'T_4_0']
residual4 = PoseToPoseResidual(obs['T_5_4'], odom_stiffness)
residual4_params = ['T_4_0', 'T_5_0']
residual5 = PoseToPoseResidual(obs['T_6_5'], odom_stiffness)
residual5_params = ['T_5_0', 'T_6_0']
# loop closure
residual6 = PoseToPoseResidual(obs['T_6_2'], loop_stiffness)
residual6_params = ['T_2_0', 'T_6_0']
prior_stiffness, odom_stiffness, loop_stiffness
```
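On the question in the comment above about how the stiffness values are set: each stiffness matrix is the inverse square root of an assumed measurement covariance, so the norm of the weighted residual equals the Mahalanobis distance of the raw residual. A minimal numpy sketch — this `invsqrt` is only an illustrative stand-in for `pyslam.utils.invsqrt`:

```python
import numpy as np

def invsqrt(mat):
    # Inverse matrix square root via eigendecomposition (symmetric PD input);
    # an illustrative stand-in for pyslam.utils.invsqrt.
    w, v = np.linalg.eigh(mat)
    return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

cov = 1e-3 * np.identity(3)       # assumed odometry covariance
stiffness = invsqrt(cov)          # S with S.T @ S == inv(cov)
r = np.array([0.1, -0.2, 0.05])   # a raw residual

weighted_sq_norm = float((stiffness @ r) @ (stiffness @ r))
mahalanobis_sq = float(r @ np.linalg.inv(cov) @ r)
```

The tiny prior covariance (1e-12) therefore gives the prior residual a very large weight, effectively pinning `T_1_0`.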
# Set up and solve the problem
```
from pyslam.problem import Problem, Options
options = Options()
options.allow_nondecreasing_steps = True
options.max_nondecreasing_steps = 3
problem = Problem(options)
problem.add_residual_block(residual0, residual0_params)
problem.add_residual_block(residual1, residual1_params)
problem.add_residual_block(residual2, residual2_params)
problem.add_residual_block(residual3, residual3_params)
problem.add_residual_block(residual4, residual4_params)
problem.add_residual_block(residual5, residual5_params)
# problem.add_residual_block(residual6, residual6_params)
problem.initialize_params(params_init)
params_final = problem.solve()
print(problem.summary(format='full'))
```
# Check results
```
print("Initial Error:")
for key in params_true.keys():
print('{}: {}'.format(key, SE2.log(params_init[key].inv().dot(params_true[key]))))
print()
print("Final Error:")
for key in params_true.keys():
print('{}: {}'.format(key, SE2.log(params_final[key].inv().dot(params_true[key]))))
```
# Optional: Compute the covariance of the final parameter estimates
```
problem.compute_covariance()
print('covariance of T_5_0:\n{}'.format( problem.get_covariance_block('T_5_0','T_5_0') ))
```
```
import os
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import SimpleImputer
from sklearn.impute import IterativeImputer
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
dir_path = os.path.join("..", "data", "raw")
dir_to_save = os.path.join("..", "data", "interim")
building_metadata = "building_metadata.csv"
metadata = pd.read_csv(os.path.join(dir_path, building_metadata))
original = metadata
metadata.head()
metadata.shape
metadata = metadata.drop(['floor_count', 'primary_use'], axis = 1)
```
# Transform year_built to age of the building
```
metadata.loc[:, 'year_built'] = 2017 - metadata.loc[:, 'year_built']
```
## Generate a control train and validation set (no need for a test set)
- Drop primary_use column (as of now)
- Delete all existing NaN values
- Set seed and delete random values from floor_count and year_built separately, thus gaining a more distributed deletion of the values
```
c_train = metadata
c_train.head()
c_train['year_built'].isna().sum()
c_train.shape
c_train = c_train[np.isfinite(c_train['year_built'])].copy()  # copy to avoid SettingWithCopyWarning when injecting NaNs later
```
- Check if missing values are dropped
```
c_train.shape
c_train['year_built'].isna().sum()
c_valid = metadata
c_valid = c_valid[np.isfinite(c_valid['year_built'])]
c_valid.head()
c_valid.shape
```
- set seed
- set the fraction of rows to be dropped from the dataframe
- TODO: find a better way of calculating `replace_n`, e.g. relative to the total number of rows in the dataframe, and replace 10% of them with NaNs
```
np.random.seed(20)
replace_frac = 0.1
```
- set values of year_built to NaN
- check the number of replaced values
```
sample_idx = np.random.randint(c_train.shape[0], size=int(c_train.shape[0]*replace_frac))
c_train.iloc[sample_idx, 3] = np.nan
c_train['year_built'].isna().sum()
c_train.shape
assert c_train.shape == c_valid.shape
```
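One way to tackle the TODO above: `np.random.randint` samples with replacement, so duplicate indices can leave slightly fewer than 10% of the rows replaced. A sketch that samples row positions without replacement instead (the column name and fraction below are illustrative):

```python
import numpy as np
import pandas as pd

def inject_nans(df, column, frac, seed=20):
    """Set `frac` of the rows in `column` to NaN, sampling row positions
    without replacement so exactly round(frac * len(df)) rows are affected."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    n = int(round(frac * len(out)))
    rows = rng.choice(len(out), size=n, replace=False)
    out.iloc[rows, out.columns.get_loc(column)] = np.nan
    return out

# Toy demonstration frame
demo = pd.DataFrame({'year_built': np.arange(100, dtype=float)})
with_nans = inject_nans(demo, 'year_built', 0.1)
```

Sampling without replacement guarantees that exactly `round(frac * len(df))` distinct rows are affected, which makes the later imputation comparison fair.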
- Create list with different imputers
```
estimators = [
BayesianRidge(),
DecisionTreeRegressor(max_features='sqrt', random_state=0),
ExtraTreesRegressor(n_estimators=10, random_state=0),
KNeighborsRegressor(n_neighbors=15)
]
score_imputer = pd.DataFrame()
N_SPLITS = 5
br_estimator = BayesianRidge()
```
# Save for later use
```
#score_simple_imputer = pd.DataFrame()
#for strategy in ('mean', 'median'):
# estimator = SimpleImputer(missing_values=np.nan, strategy=strategy)
#
# estimator.fit(c_train)
# score_simple_imputer[strategy] = cross_val_score(
# estimator, c_train, c_valid, scoring='neg_mean_squared_error',
# cv=N_SPLITS
# )
#score_iterat_imputer = pd.DataFrame()
#for impute_estimator in estimators:
# estimator = make_pipeline(
# IterativeImputer(random_state=0, estimator=impute_estimator),
# br_estimator
# )
# score_iterat_imputer[impute_estimator.__class__.__name__] = \
# cross_val_score(
# estimator, c_train, c_valid, scoring='neg_mean_squared_error',
# cv=N_SPLITS
# )
#scores = pd.concat(
# [score_simple_imputer, score_iterat_imputer],
# keys=['SimpleImputer', 'IterativeImputer'], axis=1
#)
#scores.head()
```
# The best scoring algorithm
```
imp = IterativeImputer(estimator=ExtraTreesRegressor(n_estimators=10, random_state=0), missing_values=np.nan, sample_posterior=False,
max_iter=100, tol=0.001,
n_nearest_features=4, initial_strategy='median')
imp.fit(c_train)
imputed = pd.DataFrame(imp.transform(metadata), columns=['site_id', 'building_id', 'square_feet', 'age'],
                       dtype='int')
imputed.head()
building_metadata_imputed = pd.DataFrame(columns=['site_id', 'building_id', 'primary_use', 'square_feet', 'age'])
building_metadata_imputed['site_id'] = imputed['site_id'].to_numpy()
building_metadata_imputed['building_id'] = imputed['building_id'].to_numpy()
building_metadata_imputed['primary_use'] = original['primary_use']
building_metadata_imputed['square_feet'] = imputed['square_feet'].to_numpy()
building_metadata_imputed['age'] = imputed['age'].to_numpy()
building_metadata_imputed.head()
building_metadata_imputed.to_csv(os.path.join(dir_to_save, r'building_metadata_imputed.csv'))
```
## Applying mapper to plant gene expression data
In this notebook, we will apply the mapper algorithm to the gene expression data collected by last year's class.<br>
Download all the files from the shared Google Drive folder into a directory on your computer if you are running Jupyter notebooks locally. The data is stored in two csv files: one contains the gene expression profiles, the other contains the metadata such as sample id, family, tissue type, stress, etc.<br> Along with the data files, there are two notebooks (including this one!) and two Python script files which we will need. Make sure you have all these files downloaded into the same directory.
### Import useful packages / modules
```
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# ML tools
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import MinMaxScaler
# For output display
from IPython.display import IFrame
# If running locally, set current directory as projdir
projdir = '.'
```
### Mount Google Drive*
Run the next cell __only if you're planning to run this notebook on Google Colab__. If you are running this notebook locally, comment it out.
To run the notebook in Google Colab, it is best if you upload data and script files to a folder in Google Drive and access them from there. We already have a shared Google Drive folder named `PlantsAndPython-2021-10-22` that contains all the required data and script files.
Run the next code cell to mount the drive and make the files accessible. We will define the shared folder as our project directory `projdir`.
If Google Drive is not already mounted, running the cell will produce a link. Click on the link, follow the prompts to log in to your Google account, and copy the text string generated at the end. Paste the text string in the box below and press `Enter`.
```
# Only if running in Google Colab..!!
# DO NOT run this cell if running locally - simply comment it out.
from google.colab import drive
drive.mount('/content/gdrive')
projdir = '/content/gdrive/MyDrive/PlantsAndPython-2021-10-22'
sys.path.append(projdir)
# import helper_functions
from helper_functions import loaddata
from helper_functions import colorscale_from_matplotlib_cmap
# import lense function
from lenses import fsga_transform
# Kepler mapper
import kmapper as km
```
### Data
The data is stored in two csv files. `clean_metadata.csv` contains the metadata, `clean_RNAseq_sv_corrected.csv` contains the gene expression data. We are interested in three factors in particular: family, tissue type and stress type.
```
factorfile = projdir + '/clean_metadata.csv'
rnafile = projdir + '/clean_RNAseq_sv_corrected.csv'
factors = ['stress', 'tissue', 'family']
levels = ['healthy', 'leaf', 'Poaceae']
filter_by_factor, filter_by_level = ('family', 'Poaceae')
color_by_factor, color_by_level = ('tissue', 'leaf')
```
We will use the custom `loaddata` function to load the data from the csv files and merge them on the *SRA* identifiers. Feel free to take a look at the function defined in the file `helper_functions.py`. The number of SRAs in the two files differs, so we only keep SRAs for which both the gene expression profile and the factor data are available. This is done using the `merge` method from pandas.
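The inner-join behaviour described here can be illustrated with a toy example (the column names below are hypothetical; the real keys live inside `loaddata` in `helper_functions.py`):

```python
import pandas as pd

# Toy stand-ins: metadata has 3 SRAs, expression data has 2 of them plus one extra
meta = pd.DataFrame({'sra': ['SRR1', 'SRR2', 'SRR3'],
                     'tissue': ['leaf', 'root', 'leaf']})
rnaseq = pd.DataFrame({'sra': ['SRR2', 'SRR3', 'SRR4'],
                       'OG0001': [1.2, 0.3, 2.2]})

# An inner merge keeps only SRAs present in BOTH tables
merged = pd.merge(meta, rnaseq, on='sra', how='inner')
```

Only `SRR2` and `SRR3` survive, because they are the only SRAs present in both tables.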
```
df, orthos = loaddata(factorfile, rnafile, factors)
df.head()
```
`df` is the dataframe containing the merged factor and RNASeq data. `orthos` is just a list of orthogroup names that will be useful when we want to select only the part of dataframe containing RNASeq data.
## Applying Mapper
The first step is to initialize a KeplerMapper object. You can ignore the `nerve` part.
```
# Initialize mapper object
mymapper = km.KeplerMapper(verbose=1)
# Define Nerve
nerve = km.GraphNerve(min_intersection=1)
```
### Define lens / filter function
Next, we need to define the *lens*. In the Python file `lenses.py`, I have defined a lens called `fsga_transform`. Given a factor and a specific level of that factor (for example, factor: stress, level: healthy), we construct a lens following the method described in __[Nicolau et al. 2011](https://www.pnas.org/content/108/17/7265)__.
For example, for factor: stress, and level: healthy, we take the gene expression profiles of all the *healthy* samples from the data and fit a linear model. Then we project all the samples onto this linear model and compute the residuals. The *lens* is the norm of the residual vector.
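This construction can be sketched with plain numpy: fit a flat (mean plus a few principal directions) to the reference-level samples, project every sample onto it, and use the residual norm as the lens value. The random data, shapes, and the choice of two principal directions are illustrative; the actual implementation is `fsga_transform` in `lenses.py`:

```python
import numpy as np

rng = np.random.default_rng(0)
X_healthy = rng.normal(size=(20, 5))   # profiles of the reference level, e.g. 'healthy'
X_all = rng.normal(size=(50, 5))       # every sample, reference and non-reference

# Fit a linear (flat) model to the reference samples: mean + top-k principal directions
mean = X_healthy.mean(axis=0)
_, _, vt = np.linalg.svd(X_healthy - mean, full_matrices=False)
basis = vt[:2]                          # k = 2 directions, illustrative

# Project every sample onto the flat and keep the residual
proj = (X_all - mean) @ basis.T @ basis
residuals = (X_all - mean) - proj
lens = np.linalg.norm(residuals, axis=1)   # one scalar per sample
```

Samples that resemble the reference level get small lens values; samples far from the fitted flat get large ones.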
```
# Define lens
scaler = MinMaxScaler()
residuals, idx_tr, idx_te = fsga_transform(df, orthos, filter_by_factor, filter_by_level)
lens = mymapper.project(residuals, projection='l2norm', scaler=scaler)
```
### Define cover
The next step is to define the cover. Specify the number of intervals `cubes` and the percentage of overlap between consecutive intervals `overlap`, and let `kmapper` take care of the rest. Feel free to change both parameters, but keep in mind that `overlap` must be between 0 and 100. Also, increasing the number of intervals makes the algorithm run slower, so don't increase it beyond 130 or so.
```
# Define cover
cubes, overlap = (100, 85)
cover = km.cover.Cover(n_cubes=cubes, perc_overlap=overlap/100.)
```
### Define clustering algorithm
We will stick to DBSCAN with its default parameters.<br>
However, the metric we will use is the correlation distance (1 - correlation) between a pair of gene expression profiles. You can try changing this to *cosine* metric or some other predefined metric available in scikit-learn and see how it affects the output.
```
# Define clustering algorithm
clust_metric = 'correlation'
clusterer = DBSCAN(metric=clust_metric)
```
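As a quick sanity check on this metric: the correlation distance is 1 minus the Pearson correlation between two profiles, so perfectly correlated profiles sit at distance 0 and perfectly anti-correlated ones at distance 2 (the vectors below are illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

profiles = np.array([[1.0, 2.0, 3.0, 4.0],
                     [2.0, 4.0, 6.0, 8.0],    # perfectly correlated with row 0
                     [4.0, 3.0, 2.0, 1.0]])   # perfectly anti-correlated with row 0

# Pairwise correlation distances (1 - Pearson correlation)
D = squareform(pdist(profiles, metric='correlation'))
```

Because correlation ignores overall magnitude and offset, this metric compares the *shape* of two expression profiles rather than their absolute levels.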
### Construct the mapper graph
With all the components required to construct the mapper graph ready, we can go ahead and call the `map` method of the KeplerMapper object to construct the mapper graph. Keep an eye on the number of hypercubes, nodes and edges reported by the algorithm. You can change the graph size by changing the cover parameters.
```
# Create mapper 'graph' with nodes, edges and meta-information.
graph = mymapper.map(lens=lens,
X=residuals,
clusterer=clusterer,
cover=cover,
nerve=nerve,
precomputed=False,
remove_duplicate_nodes=True)
```
### Adding components to visualization
Before we visualize the constructed mapper graph, we will add a couple of components to the visualization.<br>
First, we will color the nodes of the mapper graph using the specified factor (`color_by_factor`). The specified level (`color_by_level`) will be at one end of the colormap, and all other levels at the other end. The node color is determined by averaging the colors of all samples in the corresponding cluster.
```
# Color nodes by specified color_by_factor, color_by_level
df[color_by_factor] = df[color_by_factor].astype('category')
color_vec = np.asarray([0 if(val == color_by_level) else 1 for val in df[color_by_factor]])
cscale = colorscale_from_matplotlib_cmap(plt.get_cmap('coolwarm'))
# show filter_by_factor levels in tooltip
temp = ['({}, {})'.format(str(p[0]), str(p[1])) for p in zip(df[filter_by_factor], df[color_by_factor])]
df['tooltips'] = temp
```
### Visualize the mapper graph
Lastly, we create the visualization, save it as an html file, and then load it into a frame.<br>
Alternatively, you can browse to the html file and open it in a separate browser window.
```
# Specify file to save html output
fname = 'FilterBy_{}_ColorBy_{}_Cubes_{}_Overlap_{}.html'.format(filter_by_factor,
color_by_factor,
cubes,
overlap)
figtitle = 'Lens: {} : {}, Color by {} : {}, # Intervals {}, overlap {}'.format(filter_by_factor,
filter_by_level,
color_by_factor,
color_by_level,
cubes, overlap/100.0)
fpath = projdir + '/' + fname
# Create visualization and save to specified file
_ = mymapper.visualize(graph,
path_html=fpath,
title=figtitle,
color_values=color_vec,
color_function_name=color_by_factor,
colorscale=cscale,
custom_tooltips=df['tooltips'])
# Load the html output file
IFrame(src=fpath, width=1000, height=800)
```
<a href="https://colab.research.google.com/github/WarwickAI/wai203-fin-nlp/blob/main/WAI203_Part_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<p align="center">
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAARgAAAEYCAYAAACHjumMAAAVd0lEQVR4nOzdf5AkZ13H8b4jyA+BOcBeCm6O2cNCZA6zFbDKUdBeLCiVEqYoZoIIZC7+A7MntZmARn7ZF8QfVDlunTVX+AOdOxBkOsgeVSogQguJc4UWtRfpM+YImb1JhHR+2DkTiT9IrTXsXnkuu3vzfJ/+zjNs3q+q71XdH9PP091Pf7ZnuvvpK9bW1jwA0LDXdQcA7F4EDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1BAwANQQMADUEDAA1UxUw8/Pzn9qzZ8+apK6//vp3uO7/dk6fPn2l6frMzMzc5rrfO5mZmUml++r06dNlk7buuuuuZ0rb2hgbbdP1i6KoJm3vwIEDXzFtb9daW1ubmup2u68bdUlYWZZlBdfrsFXV6/UTknWKouglrvu+VYVheIN0PwVBEJu2t7CwsGQxLkZ1YTAYzJi2Wy6XE2mby8vLr3S9n6ahnHdgcxWLxfPSndrpdJqu+7+50jT1petTqVQ+77r/W6zP6Kz3bot99KZJbT/bsRFF0auk7dVqtZ7rfTUN5bwDW+zUunSnlkqlW133f3MtLS29zebAWFlZeb7rdbi0ut3uNdJ18X0/zbLsCpP2wjAM8wiYUql0u2R9S6XSbcI2H0mS5Hmu95frct6Brcr3/TukA6nf71dc9//SKpfLt9ocGM1m872u12HT+twiXZd2u70oGAuDPALGW//K+RrT9kdnPtL2Wq1W2/X+cl3OO7BVhWH4dulOHX1vdt3/i5UkSTmHv7x3ul6PizUYDEoW6zEwbW95efln8goX6dgYDAajr2gXpG2Otpnr/eaynHdgq8qyrOB53v3Sndrtdl/veh1GFQRBnMeBMTrQXK/LqBqNxp9I1yEMQ+PfQIIg+GyeAbMxNhqm/bD5mtZoND7oer+5LOcd2K5arZb4LCYIgk+77n8eZy8Xq1arnXC9PsPhcI/neY9K1yFNU9+kvX6/P5d3uHjrZzE3S9bf9/1U2OYjrvedy3Lege0qTdNnFAqF+6QDaXSAu+x/Xj9OjqpQKGSj7eFyfRYWFt4n7X+1Wj1p2l69Xo80AsYTfm1ZXFz8gLQ9yVnTbinnHdipms1mR7pTXV/i9X1fevVhy2q32y1X62J7qbjX6/2USXt5nv1tVc1m85jpNkiS5KC0vXK5/BXXx5Kr2jP6Z1qdOXPm4FVXXXWn9PNxHP/g/Py8+PNSN91008uvvvrqL+S5TN/3v3Hvvffuz3OZ4zp69OjCjTfeeFzy2XK5fObs2bNXmXzm8OHDx0+ePLkgaW8cozPCc+fOPXNmZsZo8B8+fLi7uro6K2nz+PHjrz906NC9ks9+T3OdcJerSqXy
cYu/VB1Hfe5r/OXt9Xo1F+tjc0er5LYBz/Me1jyD8dYvIb/D9dh+LJTzDlyukiR5vnQQjf5SZVn2tEn2N47jH9I6KBqNRnfS2z+KIvHjG6VS6Uum7eX529VOtXHT31Q+WrKbynkHxqnRgSUdSGEYXjfhvop/NxqnTK/G2FSWZVdYXD1Z63a7v2jS3mAwePYkzl4uGRvvdD22d3s578A41ev1ftriL9W/TLKvNgfkODXJu0M7nc4bpf0sFotDQXviu2aFY+Os67G928t5B8atUqkkvmU8iqL6JPrY7XZfM4ED475Jndrb3CjY6XR+aZL72KKfU3FT5m4t5x0Yt2z+uhWLxYk8BFmr1U5O6KBQD8x+vy++VOz7vvFjAe12+82TDhdPeKZFjV/OOzBubTw+8Ih0IGnfbp+m6Q94nvfQJA6KIAj+Snt7B0Hw19L+hWEYmrZn84Brt9u9xuZKV7fb/TnX43u3lvMOmFSr1bpROoi05+dotVq/Mcm/vP1+/7la62J7o5vpD9G9Xq9m0d7daZrubbfbr53WsfFYLucdMKnRwC0UCpl0IGk9PpBl2RN93//mJANmYWHh/Vrb2eaq3cLCwu+ZthcEwSlpe2EY3nBxOTa/4SRJ8kLX43s31lTfybuVw4cPf+DkyZO/KvlsuVz+/NmzZ1+Rd59OnDjRuPbaa0+Yfq5arX7s1KlTb9y3b9+/Xbhw4emmnx8MBs+ZnZ39punndrK6uvqsgwcP3ul53pNNP1soFB48d+7cc2dmZh4yaK908ODBVeOOet4oUFZXV1cPXvy/dD9462Pj78+ePfsy088dPXo0lLTnbdyxPDs7e7/0898TXCecadmevmucxQRB8LeSvvR6veqaxTNXYRgaT+B0ubL5MV1y57TN82ZhGP7y5uV5nvef0uXFcfxi0/5XKpW/kbY3jVO85l3OOyApmydtG43GH+fZlyRJRHfu+r5/T5ZlT1pbv/s3kCyjVCr9Y97btlQq3W5xgBqFd5qmj7f4yvvoYDDYt3mZNhOSS+6UtvmDVyqVbnN9LGmX8w5IajgcFqU71fO8bw+Hwz159aXVarUl/Wi1Wu+6dDmjwJEsJ0mSF+S1LlEUie/jkcwW12q13iNtr1arfXS75Vrc7PhwmqZPNV2PIAg+LV2POI5/zPXxpFnOOyCtIAg+Id2pCwsL78ujDxuXziV9+Faapv/vGanFxUXRqzkWFxd/M69tanmp12jOk9G2s7nreXTWt92yG43GsUmtx9r6DZZvkLbn4vmySZbzDkjL8tJmLs/0SH+vqNVqf755WdJTbd/3vz4F2/NBwbZ7k7S9YrG44+MfNl9bpHM624Tl6Izc9fGkVc47YFM2kzrl8aCb9Fb6brf76q2WJ30DQRRFr81hXT5lsS2vN21vbm7utLS9KIpeNcby/05z+Zur0WiI79Gq1+u/7/pY0irnHbCpKIpEP45661M5rGZZtlfadhzHhyTt+r5/xw7LFK2P7ZsULH+oNH4swGYKiHHfDLm8vPzz0jZ83z9vuk6DweCA53n/JW1zt759wHkHbMvmvUM2EzhVKpWbJG222+2Fy6yP6HeQOI5/wmJdxA81NptN4+kwisXicBL7rFgsfm2SY0P6g//GdtyV71By3gHb6na7DelOlT7TI30/UKFQuCdN0x3PmsIwFF1Zkf5YGMfxldLt53netweDgdEjC91u95XS9kZnFlmWff+4bdm8JTQIgs+absuNuYv/Rzg27nd9LGmU8w7YVpqmj7d5h1K/3/9h0zabzWZL0latVrvsPThJkrxQePClku1Xq9U+KN12krlparVaT9qeZOLzUShZjA3j571snqhvtVpvd3085V3OO5BHNRqND0l3ar1ej0zbK5VKX5W0tby8/NJxlh8EwWcEy7+QJMmzTdbD8tmuh0yvxCVJsl+6nwqFwr9KxkYYhu+UthkEwSdM25P+jubt0mk8nXcgj5rk4wOTuJzc7/crpsuv1WofN91uNk+A
j858TNur1+t/IG2v2WyKHu60feWK5NGSYrH4ZWl7nU7nWtfHU57lvAN5VRAE4mdCGo3G8XHbaTQaoh/yRp8zWZ9SqfR1k+UvLy8bvXto4yZB8VWPOI5fZtKezXutPcurLPV6XfzSuEaj8Yem7dlc3Zybm8v98Q+X5bwDeVWSJM+zGMCPjHO6n6bpFYVCQTQtw8rKyqzJ+iwtLY19E98oXE231+Li4q9Lt5fka6XN11jJBFab9tszCoWC6He60VfIJElmTNucm5v7gnR9O53O61wfT3mV8w7kWTY/II7zg2W73X6bZNmVSuUvTdfF5NS+3+9fabp86bNPG+1dZbguey1+iL8wHA6fYjs2bJ7abrVav2vans2d0cVi0fg+nGkt5x3Is5aXl+elO3V0FjMYDPbvtHzf90UTGkVR9OOS9anX6x+73LLHvfHs0rK5VCy5fGtzK0Fez+qsrKyIboy8WIPB4Jkm7WVZ9hTf9++StjfaR66PpzzKeQfyLpubq8IwPLLdcnu93isky7S5y7bf77/ocsuXTABuc6YnuQGtXC7fIm0vjuMX5TU2KpXK5yzGhvEl8na7LbqdwdtF03g670DeFUXRq6U7dafb3qXTSC4uLv6KzfrMzc39w3bLLhQK95kuL0mSF0i3T6FQeCDLMqOpLuI4frHF/ljJc2xM+pGIURUKBfF9OIPB4IDr48m2nHdAo2ymHWi322/dckMJ79BMksRqcu6dnthuNpvGvw1UKhXxZF3NZnNJ0J74Pd2dTuctCmPjL6T96fV6xj++NpvNoxbb+6jrY8m2nHdAo8IwFP0Y663fr/JdN3SFYfhrkmUFQfBJ23UZDAbP8jzv0W3+whldul1ZWTko3S6S9uI4fqnFflC56azb7YqniQiCYNtJrnbYfwek7RUKhbtcH0u25bwDGmU7mdHS0tLhPJYVx/GhPNanWq1+128mkkf8bd4WUK/XjW+sC4Lgw9L2wjB8k9b4KBQKq9J+SW68q1arJyy2g9UletflvANaZfMDW7lc/urF5UgvN5bL5dN5rctWfTCdpGg4HD7Z8zzxK18E7T1H2pbv+/dojo0wDN8t7VulUvmMYP+Jp47wPO+BNE2f7Pp4kpbzDmiW9KY4b/3s4zvvyalWq5+UfH5paSm3W76zLPu+0Ve3i8uWXLod/SWUbgtJe6MzHml7YRi+V3NcDAYD8TNR3vpNk8YPyE5yOtJpKucd0Kxms/nb0p3abDY7NpOL5zEl56V16XNDcRz/qMlnN15rKz6gTCemTpJEfPaise222Z42c7cYv57F5vEB2wnFXJbzDmiWzfMvhUIhq9frors/6/V6rq9GWfu/kPgPyVevMAzfbjG4vyxoT3y2VK/XPzSJsZEkybM9z7sgHRuSECyVSuLJ0aIoUn23ulY574B2lctl8Vyz0tJ6b3StVvtIv9+vmH7O933xzYfC9u6WtjfJCbBrtdrHpf1stVrvMW2v0+m8VdpesVj8J9fHkqScd0C7bC5LSmpubu5LWuti+mKzNctZ3UqlkvHZS6fTeYu0vWq1emKSY2N5eVn8tcXzvIdNL6MPBoOne553n7TNnV7VMq3lvAOTKJsf2Exrmn6Qy7LsSb7vG037sGldrjFs72k2B9DKykoul/UNx8bnpf0Nw/Bdpu2NPiNtTzrFq8ty3oFJlM0PbKaVpmlub420Lcsneoem7dl8BahUKp9zsY36/f5LpH3e6qbMceqx9A6l7xwMjwWzs7O3nj9//krNNqrV6odPnTrV0GzDxPz8fPzFL35xXvLZTqdzzZEjRz5i8pnZ2dmbz58//zJhe0f2799/+5kzZ57ied6+jXqS53lPGOfz9Xr9pkOHDv2zpG3L7fT6I0eORCafue66695/7Nixd0vaG61nFEVXSz7rhOuEm1TZ/HUdt2xeg5J32TzY5/v+1ybZXh5VqVRusRgb4le/Ss70bC/jS+4mdlXOOzCpGgwG+2xOTcc4KEWz+muVdGpPT3h7
eqPR+COXAeNZTKuZZdnjbMZGt9t9g2mb9Xpd/NBpo9Ewvg/HVTnvwCSr1Wr9ltbgnrZXTtgcMEmSPMt4IHnef7sOmGazeUy6vdrt9qK0XckkXP1+X3yP1rT9MdtxXLjuwCQrTdOnWrymY9uS3nilVWEY3mBxsBg/AW5zY13O++EBi7Gx1+YhSNM5l9fWHwZdlrbX7Xbf7HqcjVPOOzDpWlxcFM/PsV3lNa1jHuXobQFbTifhomyePpa+VXNjDBjfgRzH8U9K2yuVSmdcj7VxynkHJl2278nZqnq93rzr9bpYO01QdbmqVCp90/aazaZ47h2leiBN08c7GBv3D4fDJ5i2aXOPVhRFRq+qcVHOO+Ci6vW6eJ6SzTVtD6KVSiXRxOQbA9bobQG27WmVzc2O1Wr1o9J2JWdPYRiK/yBM29jbqpx3wEXZPCW9uSTvS9aqdrv9CxaD1fiUu91uiw8OzSqXyzdLt6F0cveLlabpM0zasz2j7vV6r3E97nYq5x1wVdVqVfyg28UqFAqZ6/W4tCY954jv+99wHSbb1crKykEX2zEMw+tM21tYWHi/tL0gCD7setztVM474Kp6vd7P2g5iybwgiutTtViXbwnaEz+GMIlqNpu/I92W7Xb75dJ2fd+/3bS94XD4OJt1nebHB5x3wGXZ/n4wTU+3BkEgmnnPW/+re4OgPfFDgpMo6WtGLhkb4odEoygyfldVo9H4U2l7GvMP5VXOO+CybN44OE0/sG28mnViB2Icxz/iOkDGKUlw5jE2fN+/I8uyJ5q0ZzM5mrf+lfD5rsfhVuW8Ay4rTdOnep73oPCv1FS9oFz69sSd3ma5Xdm862jC9e9WB4fnPSBtW/JcWqVS+TNpe9P0df3Sesw8TQ1g8va67gCA3YuAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaCGgAGghoABoIaAAaDmfwMAAP//3sSN9oP5CnYAAAAASUVORK5CYII=" alt="WAI Logo"/>
</p>
## Dependencies
```
!pip install transformers datasets numpy scikit-learn wandb seaborn
!pip install cloud-tpu-client==0.10 torch==1.9.0 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl
!sudo apt-get install git-lfs
!git lfs install --system --skip-repo
```
## 📊 Dataset
```
from datasets import load_dataset
# Load data here
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style('dark')
plt.figure(figsize=(10, 10))
sns.countplot(data=dataset.to_dict(), x='label', palette='pastel')
plt.xticks(range(3), dataset.features['label'].names)
plt.show()
```
## ⚙️ Dataset processing
```
from transformers import PerceiverTokenizer, DataCollatorWithPadding
# Initialize tokenizer
# Initialize data collator
labels = dataset.features['label'].names
id2label = { id: label for id, label in enumerate(labels) }
label2id = { label: id for id, label in enumerate(labels) }
# Tokenize the sentences
# Create training/test split
```
## 📏 Metrics
```
import numpy as np
from datasets import load_metric
def compute_metrics(eval_pred):
recall_metric = load_metric('recall')
accuracy_metric = load_metric('accuracy')
precision_metric = load_metric('precision')
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
recall = recall_metric.compute(
predictions=predictions,
references=labels,
average='macro'
)['recall']
accuracy = accuracy_metric.compute(
predictions=predictions,
references=labels
)['accuracy']
precision = precision_metric.compute(
predictions=predictions,
references=labels,
average='macro'
)['precision']
return {
'recall': recall,
'accuracy': accuracy,
'precision': precision
}
```
## 🚂 Training
```
import wandb
from huggingface_hub import notebook_login
#@title Training Settings
#@markdown ---
#@markdown #### Upload Model?
#@markdown By ticking this box, your model will be automatically uploaded to the HuggingFace Hub. You will need to create an account.
#@markdown (Recommended)
upload_model_tokenizer = True #@param {type:"boolean"}
#@markdown ---
#@markdown #### Use Weights & Biases?
#@markdown By ticking this box, you will be able to visualize your experiments live [here](https://wandb.ai).
#@markdown (Recommended)
use_wb = True #@param {type:"boolean"}
#@markdown ---
#@markdown #### Model name
#@markdown The name of your model. This will be used as the public model name if you decide to upload it.
model_name = 'my_model' #@param {type:"string"}
if upload_model_tokenizer:
notebook_login()
if use_wb:
wandb.login()
from transformers import Trainer, TrainingArguments, \
PerceiverForSequenceClassification
# Load the pre-trained model
# Initialize the training arguments
# Initialize the trainer
trainer.train()
if upload_model_tokenizer:
trainer.push_to_hub()
if use_wb:
wandb.finish()
```
## 🔬 Model Analysis
```
predictions = trainer.predict(tokenized_splits['test'])
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
matrix = confusion_matrix(predictions.label_ids, np.argmax(predictions.predictions, axis=-1))
plt.figure(figsize=(10, 10))
sns.set_context('notebook', font_scale=1.5, rc={'lines.linewidth': 2.5})
conf_plot = sns.heatmap(
matrix,
annot=True,
cmap='Blues',
square=True,
cbar=False,
xticklabels=labels,
yticklabels=labels,
fmt='g'
)
conf_plot.set_yticklabels(conf_plot.get_yticklabels(), rotation = 0)
conf_plot.xaxis.set_ticks_position('top')
conf_plot.xaxis.set_label_position('top')
conf_plot.set_xlabel('Predicted sentiment', labelpad=20)
conf_plot.set_ylabel('True sentiment', labelpad=20)
plt.show()
```
## Inference
```
#@markdown ---
#@markdown #### Sentence
#@markdown ###### Run cell to predict the sentiment
text = 'WarwickAI stock price sky rockets after IPO.' #@param {type:"string"}
#@markdown ---
inference_prediction = trainer.predict([tokenizer(text, truncation=True)])
predicted_sentiment = id2label[np.argmax(inference_prediction.predictions)]
print(f'The sentence is {predicted_sentiment}.')
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Tutorials/Keiko/fire_australia.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Tutorials/Keiko/fire_australia.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Tutorials/Keiko/fire_australia.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Credits to: Keiko Nomura, Senior Analyst, Space Intelligence Ltd
# Source: https://medium.com/google-earth/10-tips-for-becoming-an-earth-engine-expert-b11aad9e598b
# GEE JS: https://code.earthengine.google.com/?scriptPath=users%2Fnkeikon%2Fmedium%3Afire_australia
geometry = ee.Geometry.Polygon(
[[[153.02512376008724, -28.052192238512877],
[153.02512376008724, -28.702237664294238],
[153.65683762727474, -28.702237664294238],
[153.65683762727474, -28.052192238512877]]])
Map.centerObject(ee.FeatureCollection(geometry), 10)
# Use clear images from May and Dec 2019
imageMay = ee.Image('COPERNICUS/S2_SR/20190506T235259_20190506T235253_T56JNP')
imageDec = ee.Image('COPERNICUS/S2_SR/20191202T235239_20191202T235239_T56JNP')
Map.addLayer(imageMay, {
'bands': ['B4', 'B3', 'B2'],
'min': 0,
'max': 1800
}, 'May 2019 (True colours)')
Map.addLayer(imageDec, {
'bands': ['B4', 'B3', 'B2'],
'min': 0,
'max': 1800
}, 'Dec 2019 (True colours)')
# Compute NDVI and use grey colour for areas with NDVI < 0.8 in May 2019
NDVI = imageMay.normalizedDifference(['B8', 'B4']).rename('NDVI')
grey = imageMay.mask(NDVI.select('NDVI').lt(0.8))
Map.addLayer(grey, {
'bands': ['B3', 'B3', 'B3'],
'min': 0,
'max': 1800,
'gamma': 1.5
}, 'grey (base)')
# Export as mosaic. Alternatively you can also use blend().
mosaicDec = ee.ImageCollection([
imageDec.visualize(**{
'bands': ['B4', 'B3', 'B2'],
'min': 0,
'max': 1800
}),
grey.visualize(**{
'bands': ['B3', 'B3', 'B3'],
'min': 0,
'max': 1800
}),
]).mosaic()
mosaicMay = ee.ImageCollection([
imageMay.visualize(**{
'bands': ['B4', 'B3', 'B2'],
'min': 0,
'max': 1800
}),
grey.visualize(**{
'bands': ['B3', 'B3', 'B3'],
'min': 0,
'max': 1800
}),
]).mosaic()
# Export.image.toDrive({
# 'image': mosaicMay,
# description: 'May',
# 'region': geometry,
# crs: 'EPSG:3857',
# 'scale': 10
# })
# Export.image.toDrive({
# 'image': mosaicDec,
# description: 'Dec',
# 'region': geometry,
# crs: 'EPSG:3857',
# 'scale': 10
# })
# ============ #
# Topography #
# ============ #
# Add topography by computing a hillshade using the terrain algorithms
elev = ee.Image('USGS/SRTMGL1_003')
shadeAll = ee.Terrain.hillshade(elev)
shade = shadeAll.mask(elev.gt(0)) # mask the sea
mayTR = ee.ImageCollection([
imageMay.visualize(**{
'bands': ['B4', 'B3', 'B2'],
'min': 0,
'max': 1800
}),
shade.visualize(**{
'bands': ['hillshade', 'hillshade', 'hillshade'],
'opacity': 0.2
}),
]).mosaic()
highVeg = NDVI.gte(0.8).visualize(**{
'min': 0,
'max': 1
})
Map.addLayer(mayTR.mask(highVeg), {
'gamma': 0.8
}, 'May (with topography)',False)
# Convert the visualized elevation to HSV, first converting to [0, 1] data.
hsv = mayTR.divide(255).rgbToHsv()
# Select only the hue and saturation bands.
hs = hsv.select(0, 1)
# Convert the hillshade to [0, 1] data, as expected by the HSV algorithm.
v = shade.divide(255)
# Create a visualization image by converting back to RGB from HSV.
# Note the cast to byte in order to export the image correctly.
rgb = hs.addBands(v).hsvToRgb().multiply(255).byte()
Map.addLayer(rgb.mask(highVeg), {
'gamma': 0.5
}, 'May (topography visualised)')
# Export the image
mayTRMosaic = ee.ImageCollection([
rgb.mask(highVeg).visualize(**{
'gamma': 0.5}),
grey.visualize(**{
'bands': ['B3', 'B3', 'B3'],
'min': 0,
'max': 1800
}),
]).mosaic()
# Export.image.toDrive({
# 'image': mayTRMosaic,
# description: 'MayTerrain',
# 'region': geometry,
# crs: 'EPSG:3857',
# 'scale': 10
# })
decTR = ee.ImageCollection([
imageDec.visualize(**{
'bands': ['B4', 'B3', 'B2'],
'min': 0,
'max': 1800
}),
shade.visualize(**{
'bands': ['hillshade', 'hillshade', 'hillshade'],
'opacity': 0.2
}),
]).mosaic()
Map.addLayer(decTR.mask(highVeg), {
'gamma': 0.8
}, 'Dec (with topography)',False)
# Convert the visualized elevation to HSV, first converting to [0, 1] data.
hsv = decTR.divide(255).rgbToHsv()
# Select only the hue and saturation bands.
hs = hsv.select(0, 1)
# Convert the hillshade to [0, 1] data, as expected by the HSV algorithm.
v = shade.divide(255)
# Create a visualization image by converting back to RGB from HSV.
# Note the cast to byte in order to export the image correctly.
rgb = hs.addBands(v).hsvToRgb().multiply(255).byte()
Map.addLayer(rgb.mask(highVeg), {
'gamma': 0.5
}, 'Dec (topography visualised)')
# Export the image
decTRMosaic = ee.ImageCollection([
rgb.mask(highVeg).visualize(**{
'gamma': 0.5
}),
grey.visualize(**{
'bands': ['B3', 'B3', 'B3'],
'min': 0,
'max': 1800
}),
]).mosaic()
# Export.image.toDrive({
# 'image': decTRMosaic,
# description: 'DecTerrain',
# 'region': geometry,
# crs: 'EPSG:3857',
# 'scale': 10
# })
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# How to Use Anomaly Detectors in Merlion
This notebook will guide you through using all the key features of anomaly detectors in Merlion. Specifically, we will explain
1. Initializing an anomaly detection model (including ensembles)
1. Training the model
1. Producing a series of anomaly scores with the model
1. Quantitatively evaluating the model
1. Visualizing the model's predictions
1. Saving and loading a trained model
1. Simulating the live deployment of a model using a `TSADEvaluator`
We will be using a single example time series for this whole notebook. We load and visualize it now:
```
import matplotlib.pyplot as plt
import numpy as np
from merlion.plot import plot_anoms
from merlion.utils import TimeSeries
from ts_datasets.anomaly import NAB
np.random.seed(1234)
# This is a time series with anomalies in both the train and test split.
# time_series and metadata are both time-indexed pandas DataFrames.
time_series, metadata = NAB(subset="realKnownCause")[3]
# Visualize the full time series
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111)
ax.plot(time_series)
# Label the train/test split with a dashed line & plot anomalies
ax.axvline(metadata[metadata.trainval].index[-1], ls="--", lw=2, c="k")
plot_anoms(ax, TimeSeries.from_pd(metadata.anomaly))
from merlion.utils import TimeSeries
# Get training split
train = time_series[metadata.trainval]
train_data = TimeSeries.from_pd(train)
train_labels = TimeSeries.from_pd(metadata[metadata.trainval].anomaly)
# Get testing split
test = time_series[~metadata.trainval]
test_data = TimeSeries.from_pd(test)
test_labels = TimeSeries.from_pd(metadata[~metadata.trainval].anomaly)
```
## Model Initialization
In this notebook, we will use three different anomaly detection models:
1. Isolation Forest (a classic anomaly detection model)
2. WindStats (an in-house model that divides each week into windows of a specified size, and compares time series values to the historical values in the appropriate window)
3. Prophet (Facebook's popular forecasting model, adapted for anomaly detection)
Let's start by initializing each of them:
```
# Import models & configs
from merlion.models.anomaly.isolation_forest import IsolationForest, IsolationForestConfig
from merlion.models.anomaly.windstats import WindStats, WindStatsConfig
from merlion.models.anomaly.forecast_based.prophet import ProphetDetector, ProphetDetectorConfig
# Import a post-rule for thresholding
from merlion.post_process.threshold import AggregateAlarms
# Import a data processing transform
from merlion.transform.moving_average import DifferenceTransform
# All models are initialized using the syntax ModelClass(config), where config
# is a model-specific configuration object. This is where you specify any
# algorithm-specific hyperparameters, any data pre-processing transforms, and
# the post-rule you want to use to post-process the anomaly scores (to reduce
# noisiness when firing alerts).
# We initialize isolation forest using the default config
config1 = IsolationForestConfig()
model1 = IsolationForest(config1)
# We use a WindStats model that splits each week into windows of 60 minutes
# each. Anomaly scores in Merlion correspond to z-scores. By default, we would
# like to fire an alert for any 4-sigma event, so we specify a threshold rule
# which achieves this.
config2 = WindStatsConfig(wind_sz=60, threshold=AggregateAlarms(alm_threshold=4))
model2 = WindStats(config2)
# Prophet is a popular forecasting algorithm. Here, we specify that we would like
# to pre-processes the input time series by applying a difference transform,
# before running the model on it.
config3 = ProphetDetectorConfig(transform=DifferenceTransform())
model3 = ProphetDetector(config3)
```
Now that we have initialized the individual models, we will also combine them in an ensemble. We set this ensemble's detection threshold to fire alerts for 4-sigma events (the same as WindStats).
```
from merlion.models.ensemble.anomaly import DetectorEnsemble, DetectorEnsembleConfig
ensemble_config = DetectorEnsembleConfig(threshold=AggregateAlarms(alm_threshold=4))
ensemble = DetectorEnsemble(config=ensemble_config, models=[model1, model2, model3])
```
## Model Training
All anomaly detection models (and ensembles) share the same API for training. The `train()` method returns the model's predicted anomaly scores on the training data. Note that you may optionally specify configs that modify the protocol used to train the model's post-rule! You may optionally specify ground truth anomaly labels as well (if you have them), but they are not needed. We give examples of all these behaviors below.
```
from merlion.evaluate.anomaly import TSADMetric
# Train IsolationForest in the default way, using the ground truth anomaly labels
# to set the post-rule's threshold
print(f"Training {type(model1).__name__}...")
train_scores_1 = model1.train(train_data=train_data, anomaly_labels=train_labels)
# Train WindStats completely unsupervised (this retains our default
# anomaly detection threshold of 4)
print(f"\nTraining {type(model2).__name__}...")
train_scores_2 = model2.train(train_data=train_data, anomaly_labels=None)
# Train Prophet with the ground truth anomaly labels, with a post-rule
# trained to optimize F1 score
print(f"\nTraining {type(model3).__name__}...")
post_rule_train_config_3 = dict(metric=TSADMetric.F1)
train_scores_3 = model3.train(
train_data=train_data, anomaly_labels=train_labels,
post_rule_train_config=post_rule_train_config_3)
# Train the ensemble, which combines the anomaly scores returned by the
# models; metric=None leaves its static anomaly detection threshold of 4
# (set in its config above) unchanged.
print("\nTraining ensemble...")
ensemble_post_rule_train_config = dict(metric=None)
train_scores_e = ensemble.train(
train_data=train_data, anomaly_labels=train_labels,
post_rule_train_config=ensemble_post_rule_train_config,
)
print("Done!")
```
## Model Inference
There are two ways to invoke an anomaly detection model: `model.get_anomaly_score()` returns the model's raw anomaly scores, while `model.get_anomaly_label()` returns the model's post-processed anomaly scores. The post-processing calibrates the anomaly scores to be interpretable as z-scores, and it also sparsifies them such that any nonzero values should be treated as an alert that a particular timestamp is anomalous.
```
# Here is a full example for the first model, IsolationForest
scores_1 = model1.get_anomaly_score(test_data)
scores_1_df = scores_1.to_pd()
print(f"{type(model1).__name__}.get_anomaly_score() nonzero values (raw)")
print(scores_1_df[scores_1_df.iloc[:, 0] != 0])
print()
labels_1 = model1.get_anomaly_label(test_data)
labels_1_df = labels_1.to_pd()
print(f"{type(model1).__name__}.get_anomaly_label() nonzero values (post-processed)")
print(labels_1_df[labels_1_df.iloc[:, 0] != 0])
print()
print(f"{type(model1).__name__} fires {(labels_1_df.values != 0).sum()} alarms")
print()
print("Raw scores at the locations where alarms were fired:")
print(scores_1_df[labels_1_df.iloc[:, 0] != 0])
print("Post-processed scores are interpretable as z-scores")
print("Raw scores are challenging to interpret")
```
The same API is shared for all models, including ensembles.
```
scores_2 = model2.get_anomaly_score(test_data)
labels_2 = model2.get_anomaly_label(test_data)
scores_3 = model3.get_anomaly_score(test_data)
labels_3 = model3.get_anomaly_label(test_data)
scores_e = ensemble.get_anomaly_score(test_data)
labels_e = ensemble.get_anomaly_label(test_data)
```
## Quantitative Evaluation
It is straightforward to visualize a model's predicted anomaly scores and to quantitatively evaluate its anomaly labels. For evaluation, we use specialized definitions of precision, recall, and F1 as revised point-adjusted metrics (see the technical report for more details). We also consider the mean time to detect anomalies.
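For reference, F1 is the harmonic mean of precision and recall. A minimal standalone helper (not part of Merlion's API) makes that relationship explicit:

```python
def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall; defined as 0 when both are 0.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.5, 0.5))  # → 0.5
```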
In general, you may use the `TSADMetric` enum to compute evaluation metrics for a time series using the syntax
```
TSADMetric.<metric_name>.value(ground_truth=ground_truth, predict=anomaly_labels)
```
where `<metric_name>` is the name of the evaluation metric (see the API docs for details and more options), `ground_truth` is a time series of ground truth anomaly labels, and `anomaly_labels` is the output of `model.get_anomaly_label()`.
```
from merlion.evaluate.anomaly import TSADMetric
for model, labels in [(model1, labels_1), (model2, labels_2), (model3, labels_3), (ensemble, labels_e)]:
    print(f"{type(model).__name__}")
    precision = TSADMetric.Precision.value(ground_truth=test_labels, predict=labels)
    recall = TSADMetric.Recall.value(ground_truth=test_labels, predict=labels)
    f1 = TSADMetric.F1.value(ground_truth=test_labels, predict=labels)
    mttd = TSADMetric.MeanTimeToDetect.value(ground_truth=test_labels, predict=labels)
    print(f"Precision: {precision:.4f}")
    print(f"Recall: {recall:.4f}")
    print(f"F1: {f1:.4f}")
    print(f"MTTD: {mttd}")
    print()
```
Since the individual models are trained to optimize F1 directly, they all have low precision, high recall, and a mean time to detect of around 1 day. However, by instead training the individual models to optimize precision, and training a model combination unit to optimize F1, we are able to greatly increase the precision and F1 score, at the cost of a lower recall and higher mean time to detect.
## Model Visualization
Let's now visualize the model predictions that led to these outcomes. The option `filter_scores=True` means that we want to plot the post-processed anomaly scores (i.e. returned by `model.get_anomaly_label()`). You may instead specify `filter_scores=False` to visualize the raw anomaly scores.
```
for model in [model1, model2, model3]:
    print(type(model).__name__)
    fig, ax = model.plot_anomaly(
        time_series=test_data, time_series_prev=train_data,
        filter_scores=True, plot_time_series_prev=True)
    plot_anoms(ax=ax, anomaly_labels=test_labels)
    plt.show()
    print()
```
So all the individual models generate quite a few false positives. Let's see how the ensemble does:
```
fig, ax = ensemble.plot_anomaly(
time_series=test_data, time_series_prev=train_data,
filter_scores=True, plot_time_series_prev=True)
plot_anoms(ax=ax, anomaly_labels=test_labels)
plt.show()
```
So the ensemble misses one of the three anomalies in the test split, but it also greatly reduces the number of false positives relative to the other models.
## Saving & Loading Models
All models have a `save()` method and `load()` class method. Models may also be loaded with the assistance of the `ModelFactory`, which works for arbitrary models. The `save()` method creates a new directory at the specified path, where it saves a `json` file representing the model's config, as well as a binary file for the model's state.
We will demonstrate these behaviors using our `IsolationForest` model (`model1`) for concreteness. Note that the config explicitly tracks the transform (to pre-process the data), the calibrator (to transform raw anomaly scores into z-scores), the thresholding rule (to sparsify the calibrated anomaly scores).
```
import json
import os
import pprint
from merlion.models.factory import ModelFactory
# Save the model
os.makedirs("models", exist_ok=True)
path = os.path.join("models", "isf")
model1.save(path)
# Print the config saved
pp = pprint.PrettyPrinter()
with open(os.path.join(path, "config.json")) as f:
    print(f"{type(model1).__name__} Config")
    pp.pprint(json.load(f))
# Load the model using IsolationForest.load()
model1_loaded = IsolationForest.load(dirname=path)
# Load the model using the ModelFactory
model1_factory_loaded = ModelFactory.load(name="IsolationForest", model_path=path)
```
We can do the same exact thing with ensembles! Note that the ensemble stores its underlying models in a nested structure. This is all reflected in the config.
```
# Save the ensemble
path = os.path.join("models", "ensemble")
ensemble.save(path)
# Print the config saved. Note that we've saved all individual models,
# and their paths are specified under the model_paths key.
pp = pprint.PrettyPrinter()
with open(os.path.join(path, "config.json")) as f:
    print("Ensemble Config")
    pp.pprint(json.load(f))
# Load the ensemble
ensemble_loaded = DetectorEnsemble.load(dirname=path)
# Load the ensemble using the ModelFactory
ensemble_factory_loaded = ModelFactory.load(name="DetectorEnsemble", model_path=path)
```
## Simulating Live Model Deployment
A typical model deployment scenario is as follows:
1. Train an initial model on some recent historical data, optionally with labels.
1. At a regular interval `retrain_freq` (e.g. once per week), retrain the entire model unsupervised (i.e. with no labels) on the most recent data.
1. Obtain the model's predicted anomaly scores for the time series values that occur between re-trainings. We perform this operation in batch, but a deployment scenario may do this in streaming.
1. Optionally, specify a maximum amount of data (`train_window`) that the model should use for training (e.g. the most recent 2 weeks of data).
We provide a `TSADEvaluator` object which simulates the above deployment scenario, and also allows a user to evaluate the quality of the anomaly detector according to an evaluation metric of their choice. We illustrate an example below using the ensemble.
```
# Initialize the evaluator
from merlion.evaluate.anomaly import TSADEvaluator, TSADEvaluatorConfig
evaluator = TSADEvaluator(model=ensemble, config=TSADEvaluatorConfig(retrain_freq="7d"))
# The kwargs we would provide to ensemble.train() for the initial training
# Note that we are training the ensemble unsupervised.
train_kwargs = {"anomaly_labels": None}
# We will use the default kwargs for re-training (these leave the
# post-rules unchanged, since there are no new labels)
retrain_kwargs = None
# We call evaluator.get_predict() to get the time series of anomaly scores
# produced by the anomaly detector when deployed in this manner
train_scores, test_scores = evaluator.get_predict(
train_vals=train_data, test_vals=test_data,
train_kwargs=train_kwargs, retrain_kwargs=retrain_kwargs
)
# Now let's evaluate how we did.
precision = evaluator.evaluate(ground_truth=test_labels, predict=test_scores, metric=TSADMetric.Precision)
recall = evaluator.evaluate(ground_truth=test_labels, predict=test_scores, metric=TSADMetric.Recall)
f1 = evaluator.evaluate(ground_truth=test_labels, predict=test_scores, metric=TSADMetric.F1)
mttd = evaluator.evaluate(ground_truth=test_labels, predict=test_scores, metric=TSADMetric.MeanTimeToDetect)
print("Ensemble Performance")
print(f"Precision: {precision:.4f}")
print(f"Recall: {recall:.4f}")
print(f"F1: {f1:.4f}")
print(f"MTTD: {mttd}")
print()
```
In this case, we see that by simply re-training the ensemble weekly in an unsupervised manner, we have increased the precision from $\frac{2}{5}$ to $\frac{2}{3}$, while leaving unchanged the recall and mean time to detect. This is due to data drift over time.
```
artefact_prefix = '2_pytorch'
target = 'beer_style'
from dotenv import find_dotenv
from datetime import datetime
import pandas as pd
from pathlib import Path
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from category_encoders.binary import BinaryEncoder
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder
from joblib import dump, load
# from src.data.sets import merge_categories
# from src.data.sets import save_sets
from src.data.sets import load_sets
# from src.data.sets import split_sets_random
# from src.data.sets import test_class_exclusion
# from src.models.performance import convert_cr_to_dataframe
from src.models.pytorch import PytorchClassification_2
from src.models.pytorch import get_device
from src.models.pytorch import train_classification
from src.models.pytorch import test_classification
from src.models.pytorch import PytorchDataset
from src.models.pipes import create_preprocessing_pipe
from src.visualization.visualize import plot_confusion_matrix
```
### Directory Set up
```
project_dir = Path(find_dotenv()).parent
data_dir = project_dir / 'data'
raw_data_dir = data_dir / 'raw'
interim_data_dir = data_dir / 'interim'
processed_data_dir = data_dir / 'processed'
reports_dir = project_dir / 'reports'
models_dir = project_dir / 'models'
processed_data_dir
```
### Load Saved Data
```
## Pandas data types
from src.data.sets import load_sets
X_train, X_test, X_val, y_train, y_test, y_val = load_sets()
y_train['beer_style'].nunique()
X_train.head()
```
### Data Pipeline
```
pipe = Pipeline([
('bin_encoder', BinaryEncoder(cols=['brewery_name'])),
('scaler', StandardScaler())
])
X_train_trans = pipe.fit_transform(X_train)
X_val_trans = pipe.transform(X_val)
X_test_trans = pipe.transform(X_test)
X_train_trans.shape
n_features = X_train_trans.shape[1]
n_features
n_classes = y_train['beer_style'].nunique()
n_classes
```
### Encoding - Label
```
le = LabelEncoder()
y_train_trans = le.fit_transform(y_train)
y_val_trans = le.transform(y_val)  # reuse the mapping fitted on y_train
y_test_trans = le.transform(y_test)
y_test_trans
```
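A caveat on the encoding cell above: the label mapping should be learned from the training labels only and then reused via `transform()` on the validation and test splits; calling `fit_transform` on each split can silently assign different integers to the same class. A standalone sketch of the safe pattern (toy labels, not the beer data):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
# Learn the class -> integer mapping from the training labels only.
train_codes = le.fit_transform(["ale", "stout", "ale"])
# Reuse the same mapping for validation/test labels.
val_codes = le.transform(["stout", "stout"])
print(train_codes, val_codes)  # classes_ is sorted: ['ale', 'stout']
```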
### Convert to Pytorch Tensor
```
device = get_device()
device
train_dataset = PytorchDataset(X=X_train_trans, y=y_train_trans)
val_dataset = PytorchDataset(X=X_val_trans, y=y_val_trans)
test_dataset = PytorchDataset(X=X_test_trans, y=y_test_trans)
```
### Classification Model
```
model = PytorchClassification_2(n_features=n_features, n_classes=n_classes)
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```
## Train the Model
```
N_EPOCHS = 20
BATCH_SIZE = 512
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9)
start_time = datetime.now()
print(f'Started: {start_time}')
for epoch in range(N_EPOCHS):
    train_loss, train_acc = train_classification(train_dataset,
                                                 model=model,
                                                 criterion=criterion,
                                                 optimizer=optimizer,
                                                 batch_size=BATCH_SIZE,
                                                 device=device,
                                                 scheduler=scheduler)
    valid_loss, valid_acc = test_classification(val_dataset,
                                                model=model,
                                                criterion=criterion,
                                                batch_size=BATCH_SIZE,
                                                device=device)
    print(f'Epoch: {epoch}')
    print(f'\t(train)\tLoss: {train_loss:.4f}\t|\tAcc: {train_acc * 100:.1f}%')
    print(f'\t(valid)\tLoss: {valid_loss:.4f}\t|\tAcc: {valid_acc * 100:.1f}%')
end_time = datetime.now()
runtime = end_time - start_time
print(f'Ended: {end_time}')
print(f'Runtime: {runtime}')
```
### Retrain the model with a larger batch size
```
N_EPOCHS = 20
BATCH_SIZE = 4096
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9)
start_time = datetime.now()
print(f'Started: {start_time}')
for epoch in range(N_EPOCHS):
    train_loss, train_acc = train_classification(train_dataset,
                                                 model=model,
                                                 criterion=criterion,
                                                 optimizer=optimizer,
                                                 batch_size=BATCH_SIZE,
                                                 device=device,
                                                 scheduler=scheduler)
    valid_loss, valid_acc = test_classification(val_dataset,
                                                model=model,
                                                criterion=criterion,
                                                batch_size=BATCH_SIZE,
                                                device=device)
    print(f'Epoch: {epoch}')
    print(f'\t(train)\tLoss: {train_loss:.4f}\t|\tAcc: {train_acc * 100:.1f}%')
    print(f'\t(valid)\tLoss: {valid_loss:.4f}\t|\tAcc: {valid_acc * 100:.1f}%')
end_time = datetime.now()
runtime = end_time - start_time
print(f'Ended: {end_time}')
print(f'Runtime: {runtime}')
```
### Prediction
```
model.to('cpu')
preds = model(test_dataset.X_tensor).argmax(1)
preds
model.to(device)
```
## Evaluation
### Classification Report
```
report = classification_report(y_test, le.inverse_transform(preds.cpu()))
print(report)
```
## Save Objects for Production
### Save model
```
path = models_dir / f'{artefact_prefix}_model'
torch.save(model, path.with_suffix('.torch'))
```
### Save Pipe Object
```
X = pd.concat([X_train, X_val, X_test])
prod_pipe = create_preprocessing_pipe(X)
path = models_dir / f'{artefact_prefix}_pipe'
dump(prod_pipe, path.with_suffix('.sav'))
```
### Save the label encoder
This is required to retrieve the name of the `beer_style`.
```
path = models_dir / f'{artefact_prefix}_label_encoder'
dump(le, path.with_suffix('.sav'))
```
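When the saved encoder is loaded back in production, its `inverse_transform` turns the model's integer predictions back into style names. A standalone illustration of that round trip (using toy labels, not the real artefact):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(["IPA", "Lager", "Stout"])  # classes_ is sorted alphabetically
# A model outputs integer class indices; map them back to names.
pred_codes = [2, 0, 0]
names = le.inverse_transform(pred_codes)
print(list(names))  # → ['Stout', 'IPA', 'IPA']
```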
```
# Import the EDA module
import shutil
shutil.copy('/content/drive/MyDrive/DA_Library/EDA.py','EDA.py')
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import EDA as eda
raw_data = eda.readfile('/content/drive/MyDrive/Ajinkya_Patil_Plant Disease Detection /Processed_data&models/Tomato/dataset_tomato.xlsx')
raw_data.dtypes
raw_data.drop(['Unnamed: 0'],axis = 1, inplace=True)
raw_data.columns
eda.correlation(raw_data)
eda.correlationlist(raw_data)
```
**Insights:**
The least correlated features are:
- green channel mean
- Blue channel mean
- green channel std
- f4
- f5
- f6
- f7
- f8
Also, f1 and f2 are strongly correlated with each other, so one of them can be removed.
```
cleaned_data = raw_data.drop(['f1'],axis = 1, inplace=False)
eda.correlation(cleaned_data)
cleaned_data = eda.removenullrows(cleaned_data)
raw_data.shape
cleaned_data.shape
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import KFold
df = cleaned_data.reset_index()
X = df.drop(['index','label'],axis = 1, inplace=False)
y = df['label']
print(X.shape)
print(y.shape)
k = 5
kf = KFold(n_splits=k, random_state=9, shuffle = True)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(random_state = 50,n_estimators = 50,max_samples = 0.7)
acc_score = []
for train_index, test_index in kf.split(X):
    X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
    y_train, y_test = y[train_index], y[test_index]
    model = RandomForestClassifier(random_state=50, n_estimators=50, max_samples=0.7)
    model.fit(X_train, y_train)
    pred_values = model.predict(X_test)  # classification
    acc = accuracy_score(pred_values, y_test)  # classification
    # acc = model.score(X_test, y_test)  # regression
    acc_score.append(acc)
avg_acc_score = sum(acc_score)/k
print('Score of each fold - {}'.format(acc_score))
print('Avg Score : {}'.format(avg_acc_score))
```
# **K-fold cross-validation accuracy:**
# **0.8667723659915705**
# ROC Curve
```
!pip install scikit-plot
import scikitplot as skplt
import matplotlib.pyplot as plt
y_true = y_test
y_probas = model.predict_proba(X_test)
skplt.metrics.plot_roc_curve(y_true, y_probas)
from sklearn.metrics import plot_confusion_matrix
y_pred = model.predict(X_test)
plot_confusion_matrix(model, X_test, y_test, values_format = 'd',cmap = 'Blues',display_labels = ['0','1','2','3','4','5','6','7','8','9'])
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred, labels=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
```
# **F1 Score:**
# **0.87**
# Deployment
```
from sklearn.ensemble import RandomForestClassifier
lm = RandomForestClassifier(random_state = 50,n_estimators = 50,max_samples = 0.7)
lm.fit(X,y)
print('Training Score: ',lm.score(X,y))
lm.feature_importances_
import pickle
filename = '/content/drive/MyDrive/Ajinkya_Patil_Plant Disease Detection /Processed_data&models/Tomato/Results/Tomatomodel_V1.sav'
pickle.dump(lm, open(filename, 'wb'))
filename = '/content/drive/MyDrive/Ajinkya_Patil_Plant Disease Detection /Processed_data&models/Tomato/Results/Tomatomodel_V1.sav'
dep_model = pickle.load(open(filename, 'rb'))
print(dep_model.score(X,y))
```
# Selected Features
```
X.columns
```
# Label Dictionary
- 0 : Tomato___healthy
- 1 : Tomato___Bacterial_spot
- 2 : Tomato___Early_blight
- 3 : Tomato___Late_blight
- 4 : Tomato___Leaf_Mold
- 5 : Tomato___Septoria_leaf_spot
- 6 : Tomato___Spider_mites Two-spotted_spider_mite
- 7 : Tomato___Target_Spot
- 8 : Tomato___Tomato_Yellow_Leaf_Curl_Virus
- 9 : Tomato___Tomato_mosaic_virus
# Performance
- Accuracy : 0.8668
- F1 score : 0.87
# Monte Carlo experiments
The Monte Carlo method is a way of using random numbers to solve problems that can otherwise be quite complicated. Essentially, the idea is to replace uncertain values with a large list of values, assume those values have no uncertainty, and compute a large list of results. Analysis of the results can often lead you to the solution of the original problem.
The following example is based on the dart method described in Thijsse, J. M. (2006), *Computational Physics*, Cambridge University Press, p. 273, ISBN 978-0-521-57588-1.
### Approximating $\pi$
Let's use the Monte Carlo method to approximate $\pi$. First, the area of a circle is $A = \pi r^2,$
so that the unit circle ($r=1$) has an area $A = \pi$. Second, we define the square domain in which the unit circle just fits (2x2), and plot both:
```
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import uniform
def circle():
    theta = np.linspace(0, 2*np.pi, num=1000)
    x = [np.cos(n) for n in theta]
    y = [np.sin(n) for n in theta]
    return x, y

def square():
    x = [-1, 1, 1, -1, -1]
    y = [-1, -1, 1, 1, -1]
    return x, y
plt.plot(*circle(), 'k')
plt.plot(*square(), 'k')
plt.axis('equal')
plt.show()
```
Consider a random point (x,y) inside the square.
**The probability that a random point from a square domain lies inside the biggest circle that fits inside the square is equal to the area of the circle divided by the area of the square.**
Our square is $2\times 2=4$ and our circle has an area of $\pi$, so if we pick many points, statistically, we expect $\pi/4$ of them to fall inside the circle.
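That expectation can be checked with a compact vectorised estimate. This is a sketch using NumPy's random generator rather than the scipy helper used in this notebook, and it summarises in a few lines what the rest of this section builds up step by step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Sample n points uniformly from the 2x2 square centred on the origin.
x = rng.uniform(-1, 1, size=n)
y = rng.uniform(-1, 1, size=n)
# The fraction of points inside the unit circle approximates pi / 4.
pi_est = 4 * np.mean(x**2 + y**2 < 1)
print(pi_est)  # close to 3.14159
```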
### Our first few points
Let's create a function that will give us random points from a uniform distribution inside the square domain, and generate a few random points.
```
np.random.seed(seed=345) # remove/change the seed if you want different random numbers
def monte_carlo(n):
    x = uniform.rvs(loc=-1, scale=2, size=n)
    y = uniform.rvs(loc=-1, scale=2, size=n)
    return x, y
mc_x, mc_y = monte_carlo(28)
plt.plot(*circle(), 'k')
plt.plot(*square(), 'k')
plt.scatter(mc_x, mc_y, c='green')
plt.axis('equal')
plt.show()
```
Let's colour the points depending on whether they are within 1 unit of the origin or not:
```
def dist(x, y):
    return np.sqrt(x**2 + y**2)

def inside(x, y):
    return dist(x, y) < 1

def outside(x, y):
    return dist(x, y) > 1
np.random.seed(seed=345) # remove the command or change the value if you want different random numbers
x, y = monte_carlo(28)
ins = inside(x, y)
outs = outside(x, y)
plt.plot(*square(), 'k')
plt.plot(*circle(), 'k')
plt.scatter(x[ins], y[ins], c='blue',label=len(x[ins]))
plt.scatter(x[outs], y[outs], c='red', label=len(x[outs]))
plt.axis('equal')
plt.legend()
plt.show()
```
If you left the np.random seed as 345, and generated 28 points, then your output should have 22 blue points (inside the circle) and 6 red points (outside the circle).
So, if 22 out of 28 (around 76%) of the points are inside the circle, we can approximate $\pi$ from this.
$$
\begin{align}
\pi \approx 4(22/28) = 22/7 = 3.142857.
\end{align}
$$
$22/7$ is a common approximation of $\pi$, and this seed value happens to get us to this ratio. If you vary the random seed, however, you will see that this ratio for only a small number of points can vary quite a lot!
To produce a more robust approximation for $\pi$, we will need many more random points.
### 250 points
```
x, y = monte_carlo(250)
ins = inside(x, y)
outs = outside(x, y)
pi = 4 * len(x[ins])/len(x)
plt.plot(*square(), 'k')
plt.scatter(x[ins], y[ins], c='blue', label=len(x[ins]))
plt.scatter(x[outs], y[outs], c='red', label=len(x[outs]))
plt.axis('equal')
plt.legend()
plt.title('π ≈ {}'.format(pi))
plt.show()
```
### 1000 points
```
x, y = monte_carlo(1000)
ins = inside(x, y)
outs = outside(x, y)
pi = 4 * len(x[ins])/len(x)
plt.plot(*square(), 'k')
plt.scatter(x[ins], y[ins], c='blue', label=len(x[ins]))
plt.scatter(x[outs], y[outs], c='red', label=len(x[outs]))
plt.axis('equal')
plt.legend()
plt.title('π ≈ {}'.format(pi))
plt.show()
```
## 25000 points
```
x, y = monte_carlo(25000)
ins = inside(x, y)
outs = outside(x, y)
pi = 4 * len(x[ins])/len(x)
plt.plot(*square(), 'k')
plt.scatter(x[ins], y[ins], c='blue', label=len(x[ins]))
plt.scatter(x[outs], y[outs], c='red', label=len(x[outs]))
plt.axis('equal')
plt.legend()
plt.title('π ≈ {}'.format(pi))
plt.show()
```
With many points, you should see a completely blue circle in an otherwise red square, **and** a decent estimate of $\pi$.
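The claim that more points give a better estimate can be quantified: the standard error of the Monte Carlo estimate shrinks like $1/\sqrt{n}$. A minimal sketch of this (using NumPy's `default_rng` instead of the notebook's `scipy.stats.uniform` — an equivalent uniform sampler) prints the estimate and its error for growing $n$:

```python
import numpy as np

def estimate_pi(n, rng):
    # Sample n points uniformly in the square [-1, 1] x [-1, 1]
    x = rng.uniform(-1, 1, n)
    y = rng.uniform(-1, 1, n)
    # Fraction of points inside the unit circle, scaled by the area ratio 4
    return 4 * np.mean(x**2 + y**2 < 1)

rng = np.random.default_rng(345)
for n in (100, 10_000, 1_000_000):
    est = estimate_pi(n, rng)
    print(f"n = {n:>9,}  pi ~ {est:.4f}  error ~ {abs(est - np.pi):.4f}")
```

Each tenfold increase in $n$ shrinks the typical error by about a factor of $\sqrt{10} \approx 3.2$, which is why Monte Carlo methods need very many samples for high precision.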
## The mcerp3 package
There is a Python package, `mcerp3`, which handles Monte Carlo calculations automatically. It was originally written by Abraham Lee and has recently been updated by Paul Freeman to support Python 3. The package is available on [PyPI](https://pypi.org/project/mcerp3/). If you want to use it in this notebook in the cloud, you will have to install it first (which can take a bit of time):
```
#!pip install mcerp3 # or use conda install -y mcerp3 -c freemapa
!conda install -y mcerp3 -c freemapa
import mcerp3 as mc
from mcerp3.umath import sqrt
x = mc.U(-1, 1)
y = mc.U(-1, 1)
ins = sqrt(x**2 + y**2) < 1
print('percentage of points in the circle =', ins)
print('pi ≈', 4 * ins)
```
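If installing `mcerp3` fails, the same estimate can be reproduced with plain NumPy; the 10,000 samples below are an assumption chosen simply to be reasonably large, not `mcerp3`'s documented default:

```python
import numpy as np

# Plain-NumPy stand-in for the mcerp3 calculation above
rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(-1, 1, n)
y = rng.uniform(-1, 1, n)
frac_inside = np.mean(np.sqrt(x**2 + y**2) < 1)
print('percentage of points in the circle =', frac_inside)
print('pi ~', 4 * frac_inside)
```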
Implementation of the paper: https://arxiv.org/abs/1907.10830
Inspired by the corresponding code: https://github.com/znxlwm/UGATIT-pytorch
```
!nvidia-smi -L
!pip install --upgrade --force-reinstall --no-deps kaggle
from torchvision import transforms
from torch.utils.data import DataLoader
import torch
import torch.nn as nn
from torch.nn.parameter import Parameter
import torch.utils.data as data
import numpy as np
import cv2
from scipy import misc
import time, itertools
from PIL import Image
import os
import os.path
```
# Load Data
```
from google.colab import files
# Here you should upload your Kaggle API key (see: https://www.kaggle.com/docs/api, "Authentication" section)
files.upload()
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
! kaggle datasets list
! kaggle competitions download -c gan-getting-started
! unzip /content/gan-getting-started.zip
```
# Utils
```
def load_test_data(image_path, size=256):
    # scipy.misc.imread/imresize were removed in SciPy 1.2+; use OpenCV instead
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (size, size))
    img = np.expand_dims(img, axis=0)
    img = preprocessing(img)
    return img
def preprocessing(x):
x = x/127.5 - 1 # -1 ~ 1
return x
def save_images(images, size, image_path):
return imsave(inverse_transform(images), size, image_path)
def inverse_transform(images):
return (images+1.) / 2
def imsave(images, size, path):
    # scipy.misc.imsave was removed in SciPy 1.2+; cv2.imwrite expects uint8 BGR
    merged = np.uint8(merge(images, size) * 255)
    return cv2.imwrite(path, cv2.cvtColor(merged, cv2.COLOR_RGB2BGR))
def merge(images, size):
h, w = images.shape[1], images.shape[2]
img = np.zeros((h * size[0], w * size[1], 3))
for idx, image in enumerate(images):
i = idx % size[1]
j = idx // size[1]
img[h*j:h*(j+1), w*i:w*(i+1), :] = image
return img
def check_folder(log_dir):
if not os.path.exists(log_dir):
os.makedirs(log_dir)
return log_dir
def str2bool(x):
return x.lower() in ('true')
def cam(x, size = 256):
x = x - np.min(x)
cam_img = x / np.max(x)
cam_img = np.uint8(255 * cam_img)
cam_img = cv2.resize(cam_img, (size, size))
cam_img = cv2.applyColorMap(cam_img, cv2.COLORMAP_JET)
return cam_img / 255.0
def imagenet_norm(x):
mean = [0.485, 0.456, 0.406]
    std = [0.229, 0.224, 0.225]  # standard ImageNet std; 0.299 was a typo
mean = torch.FloatTensor(mean).unsqueeze(0).unsqueeze(2).unsqueeze(3).to(x.device)
std = torch.FloatTensor(std).unsqueeze(0).unsqueeze(2).unsqueeze(3).to(x.device)
return (x - mean) / std
def denorm(x):
return x * 0.5 + 0.5
def tensor2numpy(x):
return x.detach().cpu().numpy().transpose(1,2,0)
def RGB2BGR(x):
return cv2.cvtColor(x, cv2.COLOR_RGB2BGR)
```
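As a quick sanity check on the helpers above: `preprocessing` maps pixel values from [0, 255] to [-1, 1], and `inverse_transform` maps the network's [-1, 1] output back to [0, 1]. A self-contained sketch (redefining the two one-liners so it runs on its own):

```python
import numpy as np

def preprocessing(x):
    return x / 127.5 - 1          # [0, 255] -> [-1, 1]

def inverse_transform(images):
    return (images + 1.) / 2      # [-1, 1] -> [0, 1]

img = np.array([0., 127.5, 255.])
roundtrip = inverse_transform(preprocessing(img))
print(roundtrip)                  # pixel values rescaled to [0, 1]
```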
# Models
```
class ResnetGenerator(nn.Module):
def __init__(self, input_nc, output_nc, ngf=64, n_blocks=6, img_size=256, light=False):
assert(n_blocks >= 0)
super(ResnetGenerator, self).__init__()
self.input_nc = input_nc
self.output_nc = output_nc
self.ngf = ngf
self.n_blocks = n_blocks
self.img_size = img_size
self.light = light
DownBlock = []
DownBlock += [nn.ReflectionPad2d(3),
nn.Conv2d(input_nc, ngf, kernel_size=7, stride=1, padding=0, bias=False),
nn.InstanceNorm2d(ngf),
nn.ReLU(True)]
# Down-Sampling
n_downsampling = 2
for i in range(n_downsampling):
mult = 2**i
DownBlock += [nn.ReflectionPad2d(1),
nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=0, bias=False),
nn.InstanceNorm2d(ngf * mult * 2),
nn.ReLU(True)]
# Down-Sampling Bottleneck
mult = 2**n_downsampling
for i in range(n_blocks):
DownBlock += [ResnetBlock(ngf * mult, use_bias=False)]
# Class Activation Map
self.gap_fc = nn.Linear(ngf * mult, 1, bias=False)
self.gmp_fc = nn.Linear(ngf * mult, 1, bias=False)
self.conv1x1 = nn.Conv2d(ngf * mult * 2, ngf * mult, kernel_size=1, stride=1, bias=True)
self.relu = nn.ReLU(True)
# Gamma, Beta block
if self.light:
FC = [nn.Linear(ngf * mult, ngf * mult, bias=False),
nn.ReLU(True),
nn.Linear(ngf * mult, ngf * mult, bias=False),
nn.ReLU(True)]
else:
FC = [nn.Linear(img_size // mult * img_size // mult * ngf * mult, ngf * mult, bias=False),
nn.ReLU(True),
nn.Linear(ngf * mult, ngf * mult, bias=False),
nn.ReLU(True)]
self.gamma = nn.Linear(ngf * mult, ngf * mult, bias=False)
self.beta = nn.Linear(ngf * mult, ngf * mult, bias=False)
# Up-Sampling Bottleneck
for i in range(n_blocks):
setattr(self, 'UpBlock1_' + str(i+1), ResnetAdaILNBlock(ngf * mult, use_bias=False))
# Up-Sampling
UpBlock2 = []
for i in range(n_downsampling):
mult = 2**(n_downsampling - i)
UpBlock2 += [nn.Upsample(scale_factor=2, mode='nearest'),
nn.ReflectionPad2d(1),
nn.Conv2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=1, padding=0, bias=False),
ILN(int(ngf * mult / 2)),
nn.ReLU(True)]
UpBlock2 += [nn.ReflectionPad2d(3),
nn.Conv2d(ngf, output_nc, kernel_size=7, stride=1, padding=0, bias=False),
nn.Tanh()]
self.DownBlock = nn.Sequential(*DownBlock)
self.FC = nn.Sequential(*FC)
self.UpBlock2 = nn.Sequential(*UpBlock2)
def forward(self, input):
x = self.DownBlock(input)
gap = torch.nn.functional.adaptive_avg_pool2d(x, 1)
gap_logit = self.gap_fc(gap.view(x.shape[0], -1))
gap_weight = list(self.gap_fc.parameters())[0]
gap = x * gap_weight.unsqueeze(2).unsqueeze(3)
gmp = torch.nn.functional.adaptive_max_pool2d(x, 1)
gmp_logit = self.gmp_fc(gmp.view(x.shape[0], -1))
gmp_weight = list(self.gmp_fc.parameters())[0]
gmp = x * gmp_weight.unsqueeze(2).unsqueeze(3)
cam_logit = torch.cat([gap_logit, gmp_logit], 1)
x = torch.cat([gap, gmp], 1)
x = self.relu(self.conv1x1(x))
heatmap = torch.sum(x, dim=1, keepdim=True)
if self.light:
x_ = torch.nn.functional.adaptive_avg_pool2d(x, 1)
x_ = self.FC(x_.view(x_.shape[0], -1))
else:
x_ = self.FC(x.view(x.shape[0], -1))
gamma, beta = self.gamma(x_), self.beta(x_)
for i in range(self.n_blocks):
x = getattr(self, 'UpBlock1_' + str(i+1))(x, gamma, beta)
out = self.UpBlock2(x)
return out, cam_logit, heatmap
class ResnetBlock(nn.Module):
def __init__(self, dim, use_bias):
super(ResnetBlock, self).__init__()
conv_block = []
conv_block += [nn.ReflectionPad2d(1),
nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=0, bias=use_bias),
nn.InstanceNorm2d(dim),
nn.ReLU(True)]
conv_block += [nn.ReflectionPad2d(1),
nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=0, bias=use_bias),
nn.InstanceNorm2d(dim)]
self.conv_block = nn.Sequential(*conv_block)
def forward(self, x):
out = x + self.conv_block(x)
return out
class ResnetAdaILNBlock(nn.Module):
def __init__(self, dim, use_bias):
super(ResnetAdaILNBlock, self).__init__()
self.pad1 = nn.ReflectionPad2d(1)
self.conv1 = nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=0, bias=use_bias)
self.norm1 = adaILN(dim)
self.relu1 = nn.ReLU(True)
self.pad2 = nn.ReflectionPad2d(1)
self.conv2 = nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=0, bias=use_bias)
self.norm2 = adaILN(dim)
def forward(self, x, gamma, beta):
out = self.pad1(x)
out = self.conv1(out)
out = self.norm1(out, gamma, beta)
out = self.relu1(out)
out = self.pad2(out)
out = self.conv2(out)
out = self.norm2(out, gamma, beta)
return out + x
class adaILN(nn.Module):
def __init__(self, num_features, eps=1e-5):
super(adaILN, self).__init__()
self.eps = eps
self.rho = Parameter(torch.Tensor(1, num_features, 1, 1))
self.rho.data.fill_(0.9)
def forward(self, input, gamma, beta):
in_mean, in_var = torch.mean(input, dim=[2, 3], keepdim=True), torch.var(input, dim=[2, 3], keepdim=True)
out_in = (input - in_mean) / torch.sqrt(in_var + self.eps)
ln_mean, ln_var = torch.mean(input, dim=[1, 2, 3], keepdim=True), torch.var(input, dim=[1, 2, 3], keepdim=True)
out_ln = (input - ln_mean) / torch.sqrt(ln_var + self.eps)
out = self.rho.expand(input.shape[0], -1, -1, -1) * out_in + (1-self.rho.expand(input.shape[0], -1, -1, -1)) * out_ln
out = out * gamma.unsqueeze(2).unsqueeze(3) + beta.unsqueeze(2).unsqueeze(3)
return out
class ILN(nn.Module):
def __init__(self, num_features, eps=1e-5):
super(ILN, self).__init__()
self.eps = eps
self.rho = Parameter(torch.Tensor(1, num_features, 1, 1))
self.gamma = Parameter(torch.Tensor(1, num_features, 1, 1))
self.beta = Parameter(torch.Tensor(1, num_features, 1, 1))
self.rho.data.fill_(0.0)
self.gamma.data.fill_(1.0)
self.beta.data.fill_(0.0)
def forward(self, input):
in_mean, in_var = torch.mean(input, dim=[2, 3], keepdim=True), torch.var(input, dim=[2, 3], keepdim=True)
out_in = (input - in_mean) / torch.sqrt(in_var + self.eps)
ln_mean, ln_var = torch.mean(input, dim=[1, 2, 3], keepdim=True), torch.var(input, dim=[1, 2, 3], keepdim=True)
out_ln = (input - ln_mean) / torch.sqrt(ln_var + self.eps)
out = self.rho.expand(input.shape[0], -1, -1, -1) * out_in + (1-self.rho.expand(input.shape[0], -1, -1, -1)) * out_ln
out = out * self.gamma.expand(input.shape[0], -1, -1, -1) + self.beta.expand(input.shape[0], -1, -1, -1)
return out
class Discriminator(nn.Module):
def __init__(self, input_nc, ndf=64, n_layers=5):
super(Discriminator, self).__init__()
model = [nn.ReflectionPad2d(1),
nn.utils.spectral_norm(
nn.Conv2d(input_nc, ndf, kernel_size=4, stride=2, padding=0, bias=True)),
nn.LeakyReLU(0.2, True)]
for i in range(1, n_layers - 2):
mult = 2 ** (i - 1)
model += [nn.ReflectionPad2d(1),
nn.utils.spectral_norm(
nn.Conv2d(ndf * mult, ndf * mult * 2, kernel_size=4, stride=2, padding=0, bias=True)),
nn.LeakyReLU(0.2, True)]
mult = 2 ** (n_layers - 2 - 1)
model += [nn.ReflectionPad2d(1),
nn.utils.spectral_norm(
nn.Conv2d(ndf * mult, ndf * mult * 2, kernel_size=4, stride=1, padding=0, bias=True)),
nn.LeakyReLU(0.2, True)]
# Class Activation Map
mult = 2 ** (n_layers - 2)
self.gap_fc = nn.utils.spectral_norm(nn.Linear(ndf * mult, 1, bias=False))
self.gmp_fc = nn.utils.spectral_norm(nn.Linear(ndf * mult, 1, bias=False))
self.conv1x1 = nn.Conv2d(ndf * mult * 2, ndf * mult, kernel_size=1, stride=1, bias=True)
self.leaky_relu = nn.LeakyReLU(0.2, True)
self.pad = nn.ReflectionPad2d(1)
self.conv = nn.utils.spectral_norm(
nn.Conv2d(ndf * mult, 1, kernel_size=4, stride=1, padding=0, bias=False))
self.model = nn.Sequential(*model)
def forward(self, input):
x = self.model(input)
gap = torch.nn.functional.adaptive_avg_pool2d(x, 1)
gap_logit = self.gap_fc(gap.view(x.shape[0], -1))
gap_weight = list(self.gap_fc.parameters())[0]
gap = x * gap_weight.unsqueeze(2).unsqueeze(3)
gmp = torch.nn.functional.adaptive_max_pool2d(x, 1)
gmp_logit = self.gmp_fc(gmp.view(x.shape[0], -1))
gmp_weight = list(self.gmp_fc.parameters())[0]
gmp = x * gmp_weight.unsqueeze(2).unsqueeze(3)
cam_logit = torch.cat([gap_logit, gmp_logit], 1)
x = torch.cat([gap, gmp], 1)
x = self.leaky_relu(self.conv1x1(x))
heatmap = torch.sum(x, dim=1, keepdim=True)
x = self.pad(x)
out = self.conv(x)
return out, cam_logit, heatmap
class RhoClipper(object):
def __init__(self, min, max):
self.clip_min = min
self.clip_max = max
assert min < max
def __call__(self, module):
if hasattr(module, 'rho'):
w = module.rho.data
w = w.clamp(self.clip_min, self.clip_max)
module.rho.data = w
```
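The key idea in `adaILN`/`ILN` above is the learnable `rho` that interpolates between instance-normalised and layer-normalised activations (clamped to [0, 1] by `RhoClipper`). A standalone sketch of that mixing, with hypothetical tensor shapes:

```python
import torch

# Hypothetical input: batch of 2, 4 channels, 8x8 feature maps
x = torch.randn(2, 4, 8, 8)
rho = torch.full((1, 4, 1, 1), 0.9)  # per-channel mixing weight, as in adaILN
eps = 1e-5

# Instance norm: statistics per (sample, channel)
in_mean = x.mean(dim=[2, 3], keepdim=True)
in_var = x.var(dim=[2, 3], keepdim=True)
out_in = (x - in_mean) / torch.sqrt(in_var + eps)

# Layer norm: statistics per sample, across all channels and positions
ln_mean = x.mean(dim=[1, 2, 3], keepdim=True)
ln_var = x.var(dim=[1, 2, 3], keepdim=True)
out_ln = (x - ln_mean) / torch.sqrt(ln_var + eps)

# rho near 1 -> mostly instance norm; rho near 0 -> mostly layer norm
out = rho * out_in + (1 - rho) * out_ln
print(out.shape)
```

Letting the network learn `rho` per channel, so each layer picks its own blend of the two statistics, is the AdaLIN idea from the U-GAT-IT paper.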
# Dataset
```
def has_file_allowed_extension(filename, extensions):
"""Checks if a file is an allowed extension.
Args:
filename (string): path to a file
Returns:
bool: True if the filename ends with a known image extension
"""
filename_lower = filename.lower()
return any(filename_lower.endswith(ext) for ext in extensions)
def find_classes(dir):
classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]
classes.sort()
class_to_idx = {classes[i]: i for i in range(len(classes))}
return classes, class_to_idx
def make_dataset(dir, extensions):
images = []
for root, _, fnames in sorted(os.walk(dir)):
for fname in sorted(fnames):
if has_file_allowed_extension(fname, extensions):
path = os.path.join(root, fname)
item = (path, 0)
images.append(item)
return images
class DatasetFolder(data.Dataset):
def __init__(self, root, loader, extensions, transform=None, target_transform=None):
# classes, class_to_idx = find_classes(root)
samples = make_dataset(root, extensions)
if len(samples) == 0:
raise(RuntimeError("Found 0 files in subfolders of: " + root + "\n"
"Supported extensions are: " + ",".join(extensions)))
self.root = root
self.loader = loader
self.extensions = extensions
self.samples = samples
self.transform = transform
self.target_transform = target_transform
def __getitem__(self, index):
"""
Args:
index (int): Index
Returns:
tuple: (sample, target) where target is class_index of the target class.
"""
path, target = self.samples[index]
sample = self.loader(path)
if self.transform is not None:
sample = self.transform(sample)
if self.target_transform is not None:
target = self.target_transform(target)
return sample, target
def __len__(self):
return len(self.samples)
def __repr__(self):
fmt_str = 'Dataset ' + self.__class__.__name__ + '\n'
fmt_str += ' Number of datapoints: {}\n'.format(self.__len__())
fmt_str += ' Root Location: {}\n'.format(self.root)
tmp = ' Transforms (if any): '
fmt_str += '{0}{1}\n'.format(tmp, self.transform.__repr__().replace('\n', '\n' + ' ' * len(tmp)))
tmp = ' Target Transforms (if any): '
fmt_str += '{0}{1}'.format(tmp, self.target_transform.__repr__().replace('\n', '\n' + ' ' * len(tmp)))
return fmt_str
IMG_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif']
def pil_loader(path):
# open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGB')
def default_loader(path):
return pil_loader(path)
class ImageFolder(DatasetFolder):
def __init__(self, root, transform=None, target_transform=None,
loader=default_loader):
super(ImageFolder, self).__init__(root, loader, IMG_EXTENSIONS,
transform=transform,
target_transform=target_transform)
self.imgs = self.samples
class UGATIT(object):
def __init__(self, trainA_path=None, trainB_path=None, resume=False, model_path='/content/drive/MyDrive/photo2monet/models/',
name_model=None, output_path='/content/drive/MyDrive/photo2monet/outputs/'): ## A2B
self.light = True
self.model_name = 'UGATIT'
#self.result_dir = args.result_dir
self.trainApath = trainA_path
self.trainBpath = trainB_path
self.iteration = 100000
self.decay_flag = True
self.batch_size = 1
self.lr = 0.0001
self.weight_decay = 0.0001
self.ch = 64
""" Weight """
self.adv_weight = 1
self.cycle_weight = 10
self.identity_weight = 10
self.cam_weight = 1000
""" Generator """
self.n_res = 4
""" Discriminator """
self.n_dis = 6
self.img_size = 256
self.img_ch = 3
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.resume = resume
self.model_path = model_path
self.name_model = name_model
self.output_path = output_path
self.print_freq = 3500
##################################################################################
# Model
##################################################################################
def build_model(self):
""" DataLoader """
train_transform = transforms.Compose([
transforms.RandomHorizontalFlip(p=0.4),
transforms.RandomVerticalFlip(p=0.4),
#transforms.Resize((self.img_size, self.img_size)),
transforms.RandomResizedCrop(self.img_size, scale=(0.6,1.0)),
#transforms.RandomRotation(15),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5, 0.5, 0.5],
std=[0.5, 0.5, 0.5])
])
test_transform = transforms.Compose([
transforms.Resize((self.img_size, self.img_size)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5, 0.5, 0.5],
std=[0.5, 0.5, 0.5])
])
self.trainA = ImageFolder(self.trainApath, train_transform)
self.trainB = ImageFolder(self.trainBpath, train_transform)
self.testA = ImageFolder(self.trainApath, test_transform)
self.testB = ImageFolder(self.trainBpath, test_transform)
self.trainA_loader = DataLoader(self.trainA, batch_size=self.batch_size, shuffle=True)
self.trainB_loader = DataLoader(self.trainB, batch_size=self.batch_size, shuffle=True)
self.testA_loader = DataLoader(self.testA, batch_size=1, shuffle=False)
self.testB_loader = DataLoader(self.testB, batch_size=1, shuffle=False)
""" Define Generator, Discriminator """
self.genA2B = ResnetGenerator(input_nc=3, output_nc=3, ngf=self.ch, n_blocks=self.n_res, img_size=self.img_size, light=self.light).to(self.device)
self.genB2A = ResnetGenerator(input_nc=3, output_nc=3, ngf=self.ch, n_blocks=self.n_res, img_size=self.img_size, light=self.light).to(self.device)
self.disGA = Discriminator(input_nc=3, ndf=self.ch, n_layers=7).to(self.device)
self.disGB = Discriminator(input_nc=3, ndf=self.ch, n_layers=7).to(self.device)
self.disLA = Discriminator(input_nc=3, ndf=self.ch, n_layers=5).to(self.device)
self.disLB = Discriminator(input_nc=3, ndf=self.ch, n_layers=5).to(self.device)
""" Define Loss """
self.L1_loss = nn.L1Loss().to(self.device)
self.MSE_loss = nn.MSELoss().to(self.device)
self.BCE_loss = nn.BCEWithLogitsLoss().to(self.device)
""" Trainer """
self.G_optim = torch.optim.Adam(itertools.chain(self.genA2B.parameters(), self.genB2A.parameters()), lr=self.lr, betas=(0.5, 0.999), weight_decay=self.weight_decay)
self.D_optim = torch.optim.Adam(itertools.chain(self.disGA.parameters(), self.disGB.parameters(), self.disLA.parameters(), self.disLB.parameters()), lr=self.lr, betas=(0.5, 0.999), weight_decay=self.weight_decay)
""" Define Rho clipper to constraint the value of rho in AdaILN and ILN"""
self.Rho_clipper = RhoClipper(0, 1)
def train(self):
self.genA2B.train(), self.genB2A.train(), self.disGA.train(), self.disGB.train(), self.disLA.train(), self.disLB.train()
start_iter = 1
if self.resume:
self.load(self.model_path + self.name_model)
print(" [*] Load SUCCESS")
if self.decay_flag and start_iter > (self.iteration // 2):
self.G_optim.param_groups[0]['lr'] -= (self.lr / (self.iteration // 2)) * (start_iter - self.iteration // 2)
self.D_optim.param_groups[0]['lr'] -= (self.lr / (self.iteration // 2)) * (start_iter - self.iteration // 2)
# training loop
print('training start !')
start_time = time.time()
for step in range(start_iter, self.iteration + 1):
if self.decay_flag and step > (self.iteration // 2):
self.G_optim.param_groups[0]['lr'] -= (self.lr / (self.iteration // 2))
self.D_optim.param_groups[0]['lr'] -= (self.lr / (self.iteration // 2))
            try:
                real_A, _ = next(trainA_iter)  # Python 3: next(it), not it.next()
            except (NameError, StopIteration):
                trainA_iter = iter(self.trainA_loader)
                real_A, _ = next(trainA_iter)
            try:
                real_B, _ = next(trainB_iter)
            except (NameError, StopIteration):
                trainB_iter = iter(self.trainB_loader)
                real_B, _ = next(trainB_iter)
real_A, real_B = real_A.to(self.device), real_B.to(self.device)
# Update D
self.D_optim.zero_grad()
fake_A2B, _, _ = self.genA2B(real_A)
fake_B2A, _, _ = self.genB2A(real_B)
real_GA_logit, real_GA_cam_logit, _ = self.disGA(real_A)
real_LA_logit, real_LA_cam_logit, _ = self.disLA(real_A)
real_GB_logit, real_GB_cam_logit, _ = self.disGB(real_B)
real_LB_logit, real_LB_cam_logit, _ = self.disLB(real_B)
fake_GA_logit, fake_GA_cam_logit, _ = self.disGA(fake_B2A)
fake_LA_logit, fake_LA_cam_logit, _ = self.disLA(fake_B2A)
fake_GB_logit, fake_GB_cam_logit, _ = self.disGB(fake_A2B)
fake_LB_logit, fake_LB_cam_logit, _ = self.disLB(fake_A2B)
D_ad_loss_GA = self.MSE_loss(real_GA_logit, torch.ones_like(real_GA_logit).to(self.device)) + self.MSE_loss(fake_GA_logit, torch.zeros_like(fake_GA_logit).to(self.device))
D_ad_cam_loss_GA = self.MSE_loss(real_GA_cam_logit, torch.ones_like(real_GA_cam_logit).to(self.device)) + self.MSE_loss(fake_GA_cam_logit, torch.zeros_like(fake_GA_cam_logit).to(self.device))
D_ad_loss_LA = self.MSE_loss(real_LA_logit, torch.ones_like(real_LA_logit).to(self.device)) + self.MSE_loss(fake_LA_logit, torch.zeros_like(fake_LA_logit).to(self.device))
D_ad_cam_loss_LA = self.MSE_loss(real_LA_cam_logit, torch.ones_like(real_LA_cam_logit).to(self.device)) + self.MSE_loss(fake_LA_cam_logit, torch.zeros_like(fake_LA_cam_logit).to(self.device))
D_ad_loss_GB = self.MSE_loss(real_GB_logit, torch.ones_like(real_GB_logit).to(self.device)) + self.MSE_loss(fake_GB_logit, torch.zeros_like(fake_GB_logit).to(self.device))
D_ad_cam_loss_GB = self.MSE_loss(real_GB_cam_logit, torch.ones_like(real_GB_cam_logit).to(self.device)) + self.MSE_loss(fake_GB_cam_logit, torch.zeros_like(fake_GB_cam_logit).to(self.device))
D_ad_loss_LB = self.MSE_loss(real_LB_logit, torch.ones_like(real_LB_logit).to(self.device)) + self.MSE_loss(fake_LB_logit, torch.zeros_like(fake_LB_logit).to(self.device))
D_ad_cam_loss_LB = self.MSE_loss(real_LB_cam_logit, torch.ones_like(real_LB_cam_logit).to(self.device)) + self.MSE_loss(fake_LB_cam_logit, torch.zeros_like(fake_LB_cam_logit).to(self.device))
D_loss_A = self.adv_weight * (D_ad_loss_GA + D_ad_cam_loss_GA + D_ad_loss_LA + D_ad_cam_loss_LA)
D_loss_B = self.adv_weight * (D_ad_loss_GB + D_ad_cam_loss_GB + D_ad_loss_LB + D_ad_cam_loss_LB)
Discriminator_loss = D_loss_A + D_loss_B
Discriminator_loss.backward()
self.D_optim.step()
# Update G
self.G_optim.zero_grad()
fake_A2B, fake_A2B_cam_logit, _ = self.genA2B(real_A)
fake_B2A, fake_B2A_cam_logit, _ = self.genB2A(real_B)
fake_A2B2A, _, _ = self.genB2A(fake_A2B)
fake_B2A2B, _, _ = self.genA2B(fake_B2A)
fake_A2A, fake_A2A_cam_logit, _ = self.genB2A(real_A)
fake_B2B, fake_B2B_cam_logit, _ = self.genA2B(real_B)
fake_GA_logit, fake_GA_cam_logit, _ = self.disGA(fake_B2A)
fake_LA_logit, fake_LA_cam_logit, _ = self.disLA(fake_B2A)
fake_GB_logit, fake_GB_cam_logit, _ = self.disGB(fake_A2B)
fake_LB_logit, fake_LB_cam_logit, _ = self.disLB(fake_A2B)
G_ad_loss_GA = self.MSE_loss(fake_GA_logit, torch.ones_like(fake_GA_logit).to(self.device))
G_ad_cam_loss_GA = self.MSE_loss(fake_GA_cam_logit, torch.ones_like(fake_GA_cam_logit).to(self.device))
G_ad_loss_LA = self.MSE_loss(fake_LA_logit, torch.ones_like(fake_LA_logit).to(self.device))
G_ad_cam_loss_LA = self.MSE_loss(fake_LA_cam_logit, torch.ones_like(fake_LA_cam_logit).to(self.device))
G_ad_loss_GB = self.MSE_loss(fake_GB_logit, torch.ones_like(fake_GB_logit).to(self.device))
G_ad_cam_loss_GB = self.MSE_loss(fake_GB_cam_logit, torch.ones_like(fake_GB_cam_logit).to(self.device))
G_ad_loss_LB = self.MSE_loss(fake_LB_logit, torch.ones_like(fake_LB_logit).to(self.device))
G_ad_cam_loss_LB = self.MSE_loss(fake_LB_cam_logit, torch.ones_like(fake_LB_cam_logit).to(self.device))
G_recon_loss_A = self.L1_loss(fake_A2B2A, real_A)
G_recon_loss_B = self.L1_loss(fake_B2A2B, real_B)
G_identity_loss_A = self.L1_loss(fake_A2A, real_A)
G_identity_loss_B = self.L1_loss(fake_B2B, real_B)
G_cam_loss_A = self.BCE_loss(fake_B2A_cam_logit, torch.ones_like(fake_B2A_cam_logit).to(self.device)) + self.BCE_loss(fake_A2A_cam_logit, torch.zeros_like(fake_A2A_cam_logit).to(self.device))
G_cam_loss_B = self.BCE_loss(fake_A2B_cam_logit, torch.ones_like(fake_A2B_cam_logit).to(self.device)) + self.BCE_loss(fake_B2B_cam_logit, torch.zeros_like(fake_B2B_cam_logit).to(self.device))
G_loss_A = self.adv_weight * (G_ad_loss_GA + G_ad_cam_loss_GA + G_ad_loss_LA + G_ad_cam_loss_LA) + self.cycle_weight * G_recon_loss_A + self.identity_weight * G_identity_loss_A + self.cam_weight * G_cam_loss_A
G_loss_B = self.adv_weight * (G_ad_loss_GB + G_ad_cam_loss_GB + G_ad_loss_LB + G_ad_cam_loss_LB) + self.cycle_weight * G_recon_loss_B + self.identity_weight * G_identity_loss_B + self.cam_weight * G_cam_loss_B
del real_A, real_B, fake_A2B, fake_B2A, fake_A2B2A, fake_B2A2B, fake_A2A, fake_B2B
torch.cuda.empty_cache()
Generator_loss = G_loss_A + G_loss_B
Generator_loss.backward()
self.G_optim.step()
# clip parameter of AdaILN and ILN, applied after optimizer step
self.genA2B.apply(self.Rho_clipper)
self.genB2A.apply(self.Rho_clipper)
if step % 100 == 0:
print("[%5d/%5d] time: %4.4f d_loss: %.8f, g_loss: %.8f" % (step, self.iteration, time.time() - start_time, Discriminator_loss, Generator_loss))
if step % self.print_freq == 0:
test_sample_num = 4
A2B = np.zeros((self.img_size * 7, 0, 3))
B2A = np.zeros((self.img_size * 7, 0, 3))
self.genA2B.eval(), self.genB2A.eval(), self.disGA.eval(), self.disGB.eval(), self.disLA.eval(), self.disLB.eval()
for _ in range(test_sample_num):
                try:
                    real_A, _ = next(testA_iter)  # Python 3: next(it), not it.next()
                except (NameError, StopIteration):
                    testA_iter = iter(self.testA_loader)
                    real_A, _ = next(testA_iter)
                try:
                    real_B, _ = next(testB_iter)
                except (NameError, StopIteration):
                    testB_iter = iter(self.testB_loader)
                    real_B, _ = next(testB_iter)
real_A, real_B = real_A.to(self.device), real_B.to(self.device)
fake_A2B, _, fake_A2B_heatmap = self.genA2B(real_A)
fake_B2A, _, fake_B2A_heatmap = self.genB2A(real_B)
fake_A2B2A, _, fake_A2B2A_heatmap = self.genB2A(fake_A2B)
fake_B2A2B, _, fake_B2A2B_heatmap = self.genA2B(fake_B2A)
fake_A2A, _, fake_A2A_heatmap = self.genB2A(real_A)
fake_B2B, _, fake_B2B_heatmap = self.genA2B(real_B)
A2B = np.concatenate((A2B, np.concatenate((RGB2BGR(tensor2numpy(denorm(real_A[0]))),
cam(tensor2numpy(fake_A2A_heatmap[0]), self.img_size),
RGB2BGR(tensor2numpy(denorm(fake_A2A[0]))),
cam(tensor2numpy(fake_A2B_heatmap[0]), self.img_size),
RGB2BGR(tensor2numpy(denorm(fake_A2B[0]))),
cam(tensor2numpy(fake_A2B2A_heatmap[0]), self.img_size),
RGB2BGR(tensor2numpy(denorm(fake_A2B2A[0])))), 0)), 1)
cv2.imwrite(self.output_path + 'A2B_%d.png'%step, A2B * 255.0)
self.genA2B.train(), self.genB2A.train(), self.disGA.train(), self.disGB.train(), self.disLA.train(), self.disLB.train()
if step % 14000 == 0:
params = {}
params['genA2B'] = self.genA2B.state_dict()
params['genB2A'] = self.genB2A.state_dict()
params['disGA'] = self.disGA.state_dict()
params['disGB'] = self.disGB.state_dict()
params['disLA'] = self.disLA.state_dict()
params['disLB'] = self.disLB.state_dict()
torch.save(params, self.model_path + f'params_latest_{step}.pt')
self.save()
def save(self):
params = {}
params['genA2B'] = self.genA2B.state_dict()
params['genB2A'] = self.genB2A.state_dict()
params['disGA'] = self.disGA.state_dict()
params['disGB'] = self.disGB.state_dict()
params['disLA'] = self.disLA.state_dict()
params['disLB'] = self.disLB.state_dict()
torch.save(params, self.model_path + 'params_latest.pt')
def load(self, path):
params = torch.load(path)
self.genA2B.load_state_dict(params['genA2B'])
self.genB2A.load_state_dict(params['genB2A'])
self.disGA.load_state_dict(params['disGA'])
self.disGB.load_state_dict(params['disGB'])
self.disLA.load_state_dict(params['disLA'])
self.disLB.load_state_dict(params['disLB'])
def test(self):
        self.load(self.model_path + self.name_model)  # model_path alone is a folder; load the named checkpoint
print(" [*] Load SUCCESS")
self.genA2B.eval(), self.genB2A.eval()
for n, (real_A, _) in enumerate(self.testA_loader):
real_A = real_A.to(self.device)
fake_A2B, _, fake_A2B_heatmap = self.genA2B(real_A)
A2B = RGB2BGR(tensor2numpy(denorm(fake_A2B[0])))
cv2.imwrite(self.output_path + '%d.jpg'%(n + 1), A2B * 255.0)
trainer = UGATIT(trainA_path='/content/photo_jpg/', resume=True, trainB_path='/content/drive/MyDrive/photo2monet/monetphotos/', name_model='train_epoch_44.pt',
model_path='/content/drive/MyDrive/photo2monet/models/', output_path='/content/drive/MyDrive/photo2monet/outputs/')
trainer.build_model()
# Suppose you want the mapping A => B:
# - trainA_path contains the images of set A (here, photos of the real world)
# - trainB_path contains the images of set B (here, paintings by Monet)
# - model_path is the folder where models are saved and loaded
# - name_model is the name of the checkpoint to load
# - resume is set to True to load that checkpoint and continue training
# - output_path is the folder for the intermediate images (with attention maps) generated during training
trainer.train()
```
```
import umap
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"
res_folders=os.listdir('../../results/')
#model_folder='/home/mara/multitask_adversarial/results/NCOUNT_822/'
CONCEPT=['domain']
import keras
keras.__version__
from sklearn.metrics import accuracy_score
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = str(0)# str(hvd.local_rank())
keras.backend.set_session(tf.Session(config=config))
verbose=1
init=tf.global_variables_initializer() #initialize_all_variables()
sess=tf.Session()
sess.run(init)
#reducer = umap.UMAP()
#reducer = reducer.fit(feats)
#embedding = reducer.transform(feats)
CONCEPT
model_folder='/home/mara/multitask_adversarial/results/DOMAIN_822/'
BASEL_FOLDER='../../results/BASEL_2009/best_model.h5'
## Loading OS libraries to configure server preferences
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import warnings
warnings.filterwarnings("ignore")
import setproctitle
SERVER_NAME = 'ultrafast'
EXPERIMENT_TYPE='test_domain'
import time
import sys
import shutil
## Adding PROCESS_UC1 utilities
sys.path.append('../../lib/TASK_2_UC1/')
from models import *
from util import otsu_thresholding
from extract_xml import *
from functions import *
sys.path.append('../../lib/')
from mlta import *
import math
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve, auc
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = str(0)# str(hvd.local_rank())
keras.backend.set_session(tf.Session(config=config))
verbose=1
"""loading dataset files"""
#rank = MPI.COMM_WORLD.rank
cam16 = hd.File('/home/mara/adversarialMICCAI/data/ultrafast/cam16_500/patches.h5py', 'r', libver='latest', swmr=True)
all500 = hd.File('/home/mara/adversarialMICCAI/data/ultrafast/all500/patches.h5py', 'r', libver='latest', swmr=True)
extra17 = hd.File('/home/mara/adversarialMICCAI/data/ultrafast/extra17/patches.h5py','r', libver='latest', swmr=True)
tumor_extra17=hd.File('/home/mara/adversarialMICCAI/data/ultrafast/1129-1155/patches.h5py', 'r', libver='latest', swmr=True)
test2 = hd.File('/mnt/nas2/results/IntermediateResults/Camelyon/ultrafast/test_data2/patches.hdf5', 'r', libver='latest', swmr=True)
pannuke= hd.File('/mnt/nas2/results/IntermediateResults/Camelyon/pannuke/patches_fix.hdf5', 'r', libver='latest', swmr=True)
global datasetss
datasetss={'cam16':cam16,'all500':all500,'extra17':extra17, 'tumor_extra17':tumor_extra17, 'test_data2': test2, 'pannuke':pannuke}
global concept_db
concept_db = hd.File('../../data/normalized_cmeasures/concept_values_def.h5py','r')
#SYSTEM CONFIGS
CONFIG_FILE = '../../doc/config.cfg'
COLOR = True
BATCH_SIZE = 32
# SAVE FOLD
f=open(model_folder+"/seed.txt","r")
seed=1001#int(f.read())
if verbose: print(seed)
#f.write(str(seed))
f.close()
# SET PROCESS TITLE
setproctitle.setproctitle('{}'.format(EXPERIMENT_TYPE))
# SET SEED
np.random.seed(seed)
tf.set_random_seed(seed)
# DATA SPLIT CSVs
train_csv=open('/mnt/nas2/results/IntermediateResults/Camelyon/train_shuffle.csv', 'r') # How is the encoding of .csv files ?
val_csv=open('/mnt/nas2/results/IntermediateResults/Camelyon/val_shuffle.csv', 'r')
test_csv=open('/mnt/nas2/results/IntermediateResults/Camelyon/test_shuffle.csv', 'r')
train_list=train_csv.readlines()
val_list=val_csv.readlines()
test_list=test_csv.readlines()
test2_csv = open('/mnt/nas2/results/IntermediateResults/Camelyon/test2_shuffle.csv', 'r')
test2_list=test2_csv.readlines()
test2_csv.close()
train_csv.close()
val_csv.close()
test_csv.close()
data_csv=open('../../doc/pannuke_data_shuffle.csv', 'r')
data_list=data_csv.readlines()
data_csv.close()
# STAIN NORMALIZATION
def get_normalizer(patch, save_folder='../../results/'):
    normalizer = ReinhardNormalizer()
    normalizer.fit(patch)
    np.save('{}/normalizer'.format(save_folder), normalizer)
    np.save('{}/normalizing_patch'.format(save_folder), patch)
    #print('Normalisers saved to disk.')
    return normalizer
def normalize_patch(patch, normalizer):
    return np.float64(normalizer.transform(np.uint8(patch)))
global normalizer
db_name, entry_path, patch_no = get_keys(data_list[0])
normalization_reference_patch = datasetss[db_name][entry_path][patch_no]
normalizer = get_normalizer(normalization_reference_patch, save_folder='../../results/')
"""
Batch generators:
They load a patch list: a list of file names and paths.
They use the list to create a batch of 32 samples.
"""
# Retrieve Concept Measures
def get_concept_measure(db_name, entry_path, patch_no, measure_type=''):
    if measure_type == 'domain':
        return get_domain(db_name, entry_path)
    path = db_name + '/' + entry_path + '/' + str(patch_no) + '/' + measure_type.strip(' ')
    try:
        return concept_db[path][0]
    except KeyError:
        print("[ERR]: {}, {}, {}, {} with path {}".format(db_name, entry_path, patch_no, measure_type, path))
        # Fall back to a neutral concept value when the measure is missing.
        return 1.
# BATCH GENERATORS
import keras.utils
class DataGenerator(keras.utils.Sequence):
    def __init__(self, patch_list, concept=CONCEPT, batch_size=32, shuffle=True, data_type=0):
        self.batch_size = batch_size
        self.patch_list = patch_list
        self.shuffle = shuffle
        self.concept = concept
        self.data_type = data_type
        self.on_epoch_end()

    def __len__(self):
        return int(np.floor(len(self.patch_list) / self.batch_size))

    def __getitem__(self, index):
        indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]
        patch_list_temp = [self.patch_list[k] for k in indexes]
        return self.__data_generation(patch_list_temp), None

    def get(self, index):
        # Same as __getitem__, kept for use outside of Keras' fit loop.
        return self.__getitem__(index)

    def on_epoch_end(self):
        self.indexes = np.arange(len(self.patch_list))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, patch_list_temp):
        batch_x = np.zeros((len(patch_list_temp), 224, 224, 3))
        batch_y = np.zeros(len(patch_list_temp))
        for i, line in enumerate(patch_list_temp):
            db_name, entry_path, patch_no = get_keys(line)
            patch = datasetss[db_name][entry_path][patch_no]
            patch = normalize_patch(patch, normalizer)
            patch = keras.applications.inception_v3.preprocess_input(patch)
            label = get_class(line, entry_path)
            if self.data_type != 0:
                label = get_test_label(entry_path)
            batch_x[i] = patch
            batch_y[i] = label
        generator_output = [batch_x, batch_y]
        # Append one batch of concept measures per requested concept.
        for c in self.concept:
            batch_concept_values = np.zeros(len(patch_list_temp))
            for i, line in enumerate(patch_list_temp):
                db_name, entry_path, patch_no = get_keys(line)
                batch_concept_values[i] = get_concept_measure(db_name, entry_path, patch_no, measure_type=c)
            if c == 'domain':
                batch_concept_values = keras.utils.to_categorical(batch_concept_values, num_classes=7)
            generator_output.append(batch_concept_values)
        return generator_output
#import matplotlib as mpl
#mpl.use('Agg')
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import warnings
warnings.filterwarnings("ignore")
import logging
logging.getLogger('tensorflow').disabled = True
from keras import *
import setproctitle
SERVER_NAME = 'ultrafast'
import time
import sys
import shutil
## Adding PROCESS_UC1 utilities
sys.path.append('../../lib/TASK_2_UC1/')
from models import *
from util import otsu_thresholding
from extract_xml import *
from functions import *
sys.path.append('../../lib/')
from mlta import *
import math
import keras.callbacks as callbacks
from keras.callbacks import Callback
keras.backend.clear_session()
"""
Building guidable model
"""
def get_baseline_model(hp_lambda=0., domain=False, c_list=[]):
    base_model = keras.applications.inception_v3.InceptionV3(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
    # Only a handful of late convolutional layers are fine-tuned.
    layers_list = ['conv2d_92', 'conv2d_93', 'conv2d_88', 'conv2d_89', 'conv2d_86']
    for layer in base_model.layers:
        if layer.name in layers_list:
            print(layer.name)
            layer.trainable = True
        else:
            layer.trainable = False
    feature_output = base_model.layers[-1].output
    gap_layer_output = keras.layers.GlobalAveragePooling2D()(feature_output)
    feature_output = Dense(2048, activation='relu', name='finetuned_features1', kernel_regularizer=keras.regularizers.l2(0.01))(gap_layer_output)
    feature_output = keras.layers.Dropout(0.8)(feature_output)
    feature_output = Dense(512, activation='relu', name='finetuned_features2', kernel_regularizer=keras.regularizers.l2(0.01))(feature_output)
    feature_output = keras.layers.Dropout(0.8)(feature_output)
    feature_output = Dense(256, activation='relu', name='finetuned_features3', kernel_regularizer=keras.regularizers.l2(0.01))(feature_output)
    feature_output = keras.layers.Dropout(0.8)(feature_output)
    grl_layer = GradientReversal(hp_lambda=hp_lambda)
    feature_output_grl = grl_layer(feature_output)
    if domain:
        domain_adversarial = keras.layers.Dense(7, activation='softmax', name='domain_adversarial')(feature_output_grl)
    finetuning = Dense(1, name='predictions')(feature_output)
    # One extra linear head per concept, besides the domain-adversarial head.
    if domain:
        output_nodes = [finetuning, domain_adversarial]
    else:
        output_nodes = [finetuning]
    for c in c_list:
        if c != 'domain':
            concept_layer = keras.layers.Dense(1, activation='linear', name='extra_{}'.format(c.strip(' ')))(feature_output)
            output_nodes.append(concept_layer)
    model = Model(inputs=base_model.input, outputs=output_nodes)
    model.grl_layer = grl_layer
    return model
"""
Get trainable model with Epistemic Uncertainty Weighted Loss
"""
# Custom loss layer
from keras.initializers import Constant
class CustomMultiLossLayer(Layer):
    def __init__(self, nb_outputs=2, **kwargs):
        self.nb_outputs = nb_outputs
        self.is_placeholder = True
        super(CustomMultiLossLayer, self).__init__(**kwargs)

    def build(self, input_shape=None):
        # One trainable log-variance per output.
        self.log_vars = []
        for i in range(self.nb_outputs):
            self.log_vars += [self.add_weight(name='log_var' + str(i), shape=(1,), initializer=Constant(0.), trainable=True)]
        super(CustomMultiLossLayer, self).build(input_shape)

    def multi_loss(self, ys_true, ys_pred):
        assert len(ys_true) == self.nb_outputs and len(ys_pred) == self.nb_outputs
        loss = 0
        for i, (y_true, y_pred, log_var) in enumerate(zip(ys_true, ys_pred, self.log_vars)):
            precision = K.exp(-log_var[0])
            if i == 0:
                # Main task: weighted binary cross-entropy.
                pred_loss = bbce(y_true, y_pred)
                term = precision * pred_loss + 0.5 * log_var[0]
            else:
                # Concept heads: mean squared error.
                pred_loss = keras_mse(y_true, y_pred)
                term = 0.5 * precision * pred_loss + 0.5 * log_var[0]
            loss += term
        return K.mean(loss)

    def call(self, inputs):
        ys_true = inputs[:self.nb_outputs]
        ys_pred = inputs[self.nb_outputs:]
        loss = self.multi_loss(ys_true, ys_pred)
        self.add_loss(loss, inputs=inputs)
        # We won't actually use the output; the layer only contributes the loss.
        return K.concatenate(inputs, -1)

def get_trainable_model(baseline_model):
    inp = keras.layers.Input(shape=(224, 224, 3,), name='inp')
    y1_pred, y2_pred = baseline_model(inp)
    y1_true = keras.layers.Input(shape=(1,), name='y1_true')
    y2_true = keras.layers.Input(shape=(1,), name='y2_true')
    out = CustomMultiLossLayer(nb_outputs=2)([y1_true, y2_true, y1_pred, y2_pred])
    return Model(inputs=[inp, y1_true, y2_true], outputs=out)
""" Get trainable model with Hepistemic Uncertainty Weighted Loss """
"""
LOSS FUNCTIONS
"""
def keras_mse(y_true, y_pred):
    return tf.reduce_mean(tf.keras.losses.mean_squared_error(y_true, y_pred))

def bbce(y_true, y_pred):
    # Balanced binary cross-entropy: zero weights set the loss to zero for
    # unlabeled data (labels equal to -1).
    verbose = 0
    zero = tf.constant(-1, dtype=tf.float32)
    where = tf.not_equal(y_true, zero)
    where = tf.reshape(where, [-1])
    indices = tf.where(where)  # indices where the item of y_true is NOT -1
    indices = tf.reshape(indices, [-1])
    sliced_y_true = tf.nn.embedding_lookup(y_true, indices)
    sliced_y_pred = tf.nn.embedding_lookup(y_pred, indices)
    n1 = tf.shape(indices)[0]  # number of labeled images in the batch
    batch_size = tf.shape(y_true)[0]
    n2 = batch_size - n1  # number of unlabeled images in the batch
    sliced_y_true = tf.reshape(sliced_y_true, [n1, -1])
    n1_ = tf.cast(n1, tf.float32)
    n2_ = tf.cast(n2, tf.float32)
    multiplier = (n1_ + n2_) / n1_
    zero_class = tf.constant(0, dtype=tf.float32)
    where_class_is_zero = tf.cast(tf.reduce_sum(tf.cast(tf.equal(sliced_y_true, zero_class), dtype=tf.float32)), dtype=tf.float32)
    if verbose:
        where_class_is_zero = tf.Print(where_class_is_zero, [where_class_is_zero], 'where_class_is_zero: ')
    class_weight_zero = tf.cast(tf.divide(n1_, 2. * tf.cast(where_class_is_zero, dtype=tf.float32) + 0.001), dtype=tf.float32)
    if verbose:
        class_weight_zero = tf.Print(class_weight_zero, [class_weight_zero], 'class_weight_zero: ')
    one_class = tf.constant(1, dtype=tf.float32)
    where_class_is_one = tf.cast(tf.reduce_sum(tf.cast(tf.equal(sliced_y_true, one_class), dtype=tf.float32)), dtype=tf.float32)
    if verbose:
        where_class_is_one = tf.Print(where_class_is_one, [where_class_is_one], 'where_class_is_one: ')
        n1_ = tf.Print(n1_, [n1_], 'n1_: ')
    class_weight_one = tf.cast(tf.divide(n1_, 2. * tf.cast(where_class_is_one, dtype=tf.float32) + 0.001), dtype=tf.float32)
    # The per-batch estimates above are overridden by fixed dataset-level priors.
    class_weight_zero = tf.constant(23477.0 / (23477.0 + 123820.0), dtype=tf.float32)
    class_weight_one = tf.constant(123820.0 / (23477.0 + 123820.0), dtype=tf.float32)
    A = tf.ones(tf.shape(sliced_y_true), dtype=tf.float32) - sliced_y_true
    A = tf.scalar_mul(class_weight_zero, A)
    B = tf.scalar_mul(class_weight_one, sliced_y_true)
    class_weight_vector = A + B
    ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=sliced_y_true, logits=sliced_y_pred)
    ce = tf.multiply(class_weight_vector, ce)
    return tf.reduce_mean(ce)
from keras.initializers import Constant
global domain_weight
global main_task_weight
class CustomMultiLossLayer(Layer):
    def __init__(self, new_folder='', nb_outputs=2, **kwargs):
        self.nb_outputs = nb_outputs
        self.is_placeholder = True
        super(CustomMultiLossLayer, self).__init__(**kwargs)

    def build(self, input_shape=None):
        # Initialise one trainable log-variance per output.
        self.log_vars = []
        for i in range(self.nb_outputs):
            self.log_vars += [self.add_weight(name='log_var' + str(i), shape=(1,),
                                              initializer=Constant(0.), trainable=True)]
        super(CustomMultiLossLayer, self).build(input_shape)

    def multi_loss(self, ys_true, ys_pred):
        assert len(ys_true) == self.nb_outputs and len(ys_pred) == self.nb_outputs
        loss = 0
        for i, (y_true, y_pred, log_var) in enumerate(zip(ys_true, ys_pred, self.log_vars)):
            precision = keras.backend.exp(-log_var[0])
            if i == 0:
                # Main task: weighted binary cross-entropy.
                pred_loss = bbce(y_true, y_pred)
                term = main_task_weight * precision * pred_loss + main_task_weight * 0.5 * log_var[0]
            else:
                # Concept heads: mean squared error.
                pred_loss = keras_mse(y_true, y_pred)
                term = 0.5 * precision * pred_loss + 0.5 * log_var[0]
            loss += term
        return keras.backend.mean(loss)

    def call(self, inputs):
        ys_true = inputs[:self.nb_outputs]
        ys_pred = inputs[self.nb_outputs:]
        loss = self.multi_loss(ys_true, ys_pred)
        self.add_loss(loss, inputs=inputs)
        return keras.backend.concatenate(inputs, -1)
"""
EVALUATION FUNCTIONs
"""
def accuracy_domain(y_true, y_pred):
    y_p_r = np.round(y_pred)
    acc = np.equal(y_p_r, y_true) ** 1.
    return np.mean(np.float32(acc))

def my_sigmoid(x):
    return 1 / (1 + np.exp(-x))

def my_accuracy_np(y_true, y_pred):
    # Predictions are logits: apply a sigmoid, then round to {0, 1}.
    y_pred_rounded = np.round(my_sigmoid(y_pred))
    acc = np.equal(y_pred_rounded, y_true) ** 1.
    return np.mean(np.float32(acc))

def r_square_np(y_true, y_pred):
    SS_res = np.sum(np.square(y_true - y_pred))
    SS_tot = np.sum(np.square(y_true - np.mean(y_true)))
    return 1 - SS_res / (SS_tot + keras.backend.epsilon())
global report_val_acc
global report_val_r2
global report_val_mse
report_val_acc=[]
report_val_r2=[]
report_val_mse=[]
hp_lambda=0.
c_list=CONCEPT
base_model = keras.applications.inception_v3.InceptionV3(include_top=False, weights='imagenet', input_shape=(224,224,3))
layers_list=['conv2d_92', 'conv2d_93', 'conv2d_88', 'conv2d_89', 'conv2d_86']
#layers_list=[]
for layer in base_model.layers:
    if layer.name in layers_list:
        print(layer.name)
        layer.trainable = True
    else:
        layer.trainable = False
feature_output=base_model.layers[-1].output
gap_layer_output = keras.layers.GlobalAveragePooling2D()(feature_output)
feature_output = keras.layers.Dense(2048, activation='relu', name='finetuned_features1',kernel_regularizer=keras.regularizers.l2(0.01))(gap_layer_output)
feature_output = keras.layers.Dropout(0.8, noise_shape=None, seed=None)(feature_output)
feature_output = keras.layers.Dense(512, activation='relu', name='finetuned_features2',kernel_regularizer=keras.regularizers.l2(0.01))(feature_output)
feature_output = keras.layers.Dropout(0.8, noise_shape=None, seed=None)(feature_output)
feature_output = keras.layers.Dense(256, activation='relu', name='finetuned_features3',kernel_regularizer=keras.regularizers.l2(0.01))(feature_output)
#feature_output = keras.layers.Dropout(0.8, noise_shape=None, seed=None)(feature_output)
feature_output = keras.layers.Dropout(0.8, noise_shape=None, seed=None)(feature_output)
grl_layer=GradientReversal(hp_lambda=hp_lambda)
feature_output_grl = grl_layer(feature_output)
domain_adversarial = keras.layers.Dense(7, activation = keras.layers.Activation('softmax'), name='domain_adversarial')(feature_output_grl)
finetuning = Dense(1,name='predictions')(feature_output)
## here you need to check how many other concepts you have apart from domain adversarial
# then you add one layer per each.
output_nodes=[finetuning, domain_adversarial]
for c in c_list:
    if c != 'domain':
        concept_layer = keras.layers.Dense(1, activation='linear', name='extra_{}'.format(c.strip(' ')))(feature_output)
        output_nodes.append(concept_layer)
model = Model(inputs=base_model.input, outputs=output_nodes)
model.grl_layer = grl_layer
main_task_weight=1.
domain_weight = 1. #e-100
t_m = get_trainable_model(model)
t_m.load_weights('{}/best_model.h5'.format(model_folder))
baseline=get_baseline_model()
baseline.load_weights(BASEL_FOLDER)
len(data_list)
CONCEPT
type_='def_'
test_list=test_list+test2_list
for layer in ['finetuned_features3']:
    print(layer)
    get_guided_layer_output = K.function([model.layers[0].input],
                                         [model.get_layer(layer).output])
    get_base_layer_output = K.function([baseline.layers[0].input],
                                       [baseline.get_layer(layer).output])
    N_SAMPLES = len(train_list)
    t_gen = DataGenerator(test_list, concept=CONCEPT, batch_size=BATCH_SIZE, data_type=1)
    shape = int(model.get_layer(layer).output.shape[1])
    if layer == 'mixed10':
        shape = 51200
    feats = np.zeros((N_SAMPLES, shape))
    labs = np.zeros(N_SAMPLES)
    cvals = np.zeros((N_SAMPLES, 7))
    batch_size = 32
    i = 0
    while i + batch_size <= N_SAMPLES:
        print(i)
        input_, _ = t_gen.__getitem__(i // batch_size)
        try:
            feats[i:i + batch_size, :] = get_guided_layer_output([input_[0]])[0].reshape(batch_size, -1)
            labs[i:i + batch_size] = input_[1]
            cvals[i:i + batch_size, :] = input_[2]
            i += batch_size
        except Exception:
            print("ERR")
            break
    np.save('{}/{}features{}.npy'.format(model_folder, type_, layer), feats)
    np.save('{}/{}labels{}.npy'.format(model_folder, type_, layer), labs)
    np.save('{}/{}cvalues{}.npy'.format(model_folder, type_, layer), cvals)
    # Repeat the extraction with the baseline (non-guided) model.
    t_gen = DataGenerator(test_list, concept=CONCEPT, batch_size=BATCH_SIZE, data_type=1)
    shape = int(model.get_layer(layer).output.shape[1])
    if layer == 'mixed10':
        shape = 51200
    feats = np.zeros((N_SAMPLES, shape))
    labs = np.zeros(N_SAMPLES)
    cvals = np.zeros((N_SAMPLES, 7))
    batch_size = 32
    i = 0
    while i + batch_size <= N_SAMPLES:
        print(i)
        input_, _ = t_gen.__getitem__(i // batch_size)
        try:
            feats[i:i + batch_size, :] = get_base_layer_output([input_[0]])[0].reshape(batch_size, -1)
            labs[i:i + batch_size] = input_[1]
            cvals[i:i + batch_size, :] = input_[2]
            i += batch_size
        except Exception:
            i += batch_size
            print("ERR")
            break
    np.save('{}/{}base_features{}.npy'.format(model_folder, type_, layer), feats)
    np.save('{}/{}base_labels{}.npy'.format(model_folder, type_, layer), labs)
    np.save('{}/{}base_cvalues{}.npy'.format(model_folder, type_, layer), cvals)
'{}/{}base_features{}.npy'.format(model_folder,type_,layer)
cvals
type_
layer
```
<a href="https://colab.research.google.com/github/AllanWang/Kaggle-Colab/blob/master/Kaggle_Git_Template.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Kaggle + Git + Colab
This notebook contains some helper functions to import Kaggle data and run an external project.
## Public Set Up
Kaggle + Git setup using field variables.
If you don't want the fields to be visible, there is another variant below that reads user input.
Note that whenever the runtime is reset, all unsaved data and variables are removed.
All set up operations will clean the directories beforehand.
Fields:
| | |
|---|---|
| Kaggle Username | Username for Kaggle |
| Kaggle Token | Kaggle API token, generated from your account page |
| Kaggle Competition Url | Url segment of the form "https://www.kaggle.com/c/{competition-url}" |
| Github Username | Username for Github |
| Github Token | Token generated from [github settings](https://github.com/settings/tokens) |
| Github Slug | Url segment of the form "https://github.com/{github_slug}" (owner/repo) |
```
# Load Kaggle Data
%cd /content
!rm '/root/.kaggle/kaggle.json'
kaggle_username = '' #@param {type: 'string'}
kaggle_token = '' #@param {type: 'string'}
import json
token = {'username':kaggle_username,'key':kaggle_token}
!mkdir '/root/.kaggle'
with open('/root/.kaggle/kaggle.json', 'w') as file:
json.dump(token, file)
!chmod 600 /root/.kaggle/kaggle.json
kaggle_competition_url = '' #@param {type: 'string'}
!kaggle config set -n path -v '/content'
!kaggle competitions download -c "$kaggle_competition_url" --force
!rm -rf "/content/kaggle_data"
!cp -r "/content/competitions/$kaggle_competition_url" "/content/kaggle_data"
import os
import zipfile
for path, dir_list, file_list in os.walk("/content/kaggle_data"):
for file_name in file_list:
if file_name.endswith(".zip"):
print("Unzipping " + file_name)
abs_file_path = os.path.join(path, file_name)
parent_path = os.path.dirname(abs_file_path)
zip_obj = zipfile.ZipFile(abs_file_path, 'r')
zip_obj.extractall(parent_path)
zip_obj.close()
print("Saved kaggle data to /content/kaggle_data")
!rm '/root/.kaggle/kaggle.json'
# Load Github Data
github_username = '' #@param {type: 'string'}
github_token = '' #@param {type: 'string'}
credential = "https://" + github_username + ":" + github_token + "@github.com/"
!echo "$credential" > ".git-credentials"
!git config --global credential.helper 'store --file .git-credentials'
repo_slug = 'owner/repo' #@param {type: 'string'}
git_url = "https://github.com/" + repo_slug + ".git"
!rm -rf "/content/github"
!git clone "$git_url" "/content/github"
print("Cloned repo to /content/github")
```
## Private Kaggle Set Up
```
%cd /content
import getpass
!rm '/root/.kaggle/kaggle.json'
kaggle_username = input('Enter kaggle username\n')
print('Enter kaggle token')
kaggle_token = getpass.getpass()
import json
token = {'username':kaggle_username,'key':kaggle_token}
!mkdir '/root/.kaggle'
with open('/root/.kaggle/kaggle.json', 'w') as file:
json.dump(token, file)
!chmod 600 /root/.kaggle/kaggle.json
kaggle_competition_url = input('Enter competition url segment\n')
!kaggle config set -n path -v '/content'
!kaggle competitions download -c "$kaggle_competition_url" --force
!rm -rf "/content/kaggle_data"
!cp -r "/content/competitions/$kaggle_competition_url" "/content/kaggle_data"
import os
import zipfile
for path, dir_list, file_list in os.walk("/content/kaggle_data"):
for file_name in file_list:
if file_name.endswith(".zip"):
print("Unzipping " + file_name)
abs_file_path = os.path.join(path, file_name)
parent_path = os.path.dirname(abs_file_path)
zip_obj = zipfile.ZipFile(abs_file_path, 'r')
zip_obj.extractall(parent_path)
zip_obj.close()
print("Saved kaggle data to /content/kaggle_data")
!rm '/root/.kaggle/kaggle.json'
```
## Private Github Set Up
```
%cd /content
import getpass
github_username = input('Enter Github Username\n')
print('Enter Github Token')
github_token = getpass.getpass()
credential = "https://" + github_username + ":" + github_token + "@github.com/"
!echo "$credential" > ".git-credentials"
!git config --global credential.helper 'store --file .git-credentials'
repo_slug = input('Enter Github Slug\n')
git_url = "https://github.com/" + repo_slug + ".git"
!rm -rf "/content/github"
!git clone "$git_url" "/content/github"
print("Cloned repo to /content/github")
```
# Run Project
Fetch from github again and run specified commands.
Make sure you've updated your runtime settings if you are using a GPU.
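Before running, it can help to confirm the GPU runtime actually took effect. A minimal, dependency-free sketch (the `has_gpu_driver` helper is ours, not part of this template; Colab GPU runtimes ship the `nvidia-smi` binary, CPU runtimes do not):

```python
# A quick check that the runtime exposes an NVIDIA GPU driver on the PATH.
import shutil

def has_gpu_driver():
    return shutil.which("nvidia-smi") is not None

print("GPU runtime:", has_gpu_driver())
```
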
```
%cd /content/github
!git config --global credential.helper 'store --file ../.git-credentials'
!git fetch -ap
branch = 'master' #@param {type: 'string'}
!git checkout "$branch"
!git pull
%env PYTHONPATH=/content/github
# Add run commands here!
```
# Part 2 - Advanced text classifiers
As seen in the past, we can create models that take advantage of word counts and tf-idf scores and yield some pretty accurate predictions. But it is possible to use several additional features to improve our classifier. In this learning unit we are going to see how we can use other data extracted from our text to determine if a message is 'spam' or 'not spam' (also known as ham). We are going to use a very well known Kaggle dataset for spam detection - [Kaggle Spam Collection](https://www.kaggle.com/uciml/sms-spam-collection-dataset).

This part will also introduce you to feature unions, a very useful way of combining different feature sets into your models. This scikit-learn class comes hand-in-hand with pipelines. Both allow you to delegate the work of combining and piping your transformer's outputs - your features - allowing you to create workflows in a very simple way.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import StandardScaler
from nltk.corpus import stopwords
%matplotlib inline
warnings.simplefilter("ignore")
```
## 1 - Spam and Ham
As we mentioned before, we are going to try and come up with ways of detecting spam in the Kaggle Spam dataset. Let's load it and look into the data.
```
df = pd.read_csv('./datasets/spam.csv', encoding='latin1')
df.drop(["Unnamed: 2", "Unnamed: 3", "Unnamed: 4"], axis=1,inplace=True)
df.rename(columns={"v1":"label", "v2":"message"},inplace=True)
df.head()
```
You might think it should be quite easy to detect spam, since it tends to stand out to the human eye. I don't know about you, but I'm always suspicious of free stuff. There ain't no such thing as a free lunch (except for the ones in our hackathons).
But by now you should also know that what seems obvious in text to us is sometimes not as easy to detect by a model. So, what kind of features could you use for this? The most obvious one is the words themselves, which you already know how to use with your bag-of-words approach - using CountVectorizer or TfIdfVectorizer.
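As a refresher, here is what that bag-of-words vectorization boils down to, sketched by hand on two toy messages (CountVectorizer adds proper tokenization, vocabulary handling and sparse storage on top of this idea):

```python
# Bag-of-words in miniature: each message becomes a vector of word counts
# over a shared, sorted vocabulary.
from collections import Counter

docs = ["free prize now", "call me now"]
vocab = sorted({w for d in docs for w in d.split()})
vectors = [[Counter(d.split())[w] for w in vocab] for d in docs]

print(vocab)    # ['call', 'free', 'me', 'now', 'prize']
print(vectors)  # [[0, 1, 0, 1, 1], [1, 0, 1, 1, 0]]
```
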
## 1.1 - Baseline
To start with, let's look at the target class distribution,
```
df.label.value_counts(normalize=True)
```
So, if we were to create a dumb classifier which always predicts "ham", we would get an accuracy of 86.6% for this dataset.
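That majority-class figure is easy to verify by hand. A minimal sketch with toy labels (the `majority_baseline_accuracy` helper is ours, for illustration only):

```python
# Accuracy of a "dumb" classifier that always predicts the most common label.
from collections import Counter

def majority_baseline_accuracy(labels):
    _, count = Counter(labels).most_common(1)[0]
    return count / len(labels)

labels = ["ham"] * 13 + ["spam"] * 2
print(majority_baseline_accuracy(labels))  # 13/15, about 0.867
```
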
Let's get our baseline with the bag-of-words approach. Here we are going to use a RandomForestClassifier, a powerful machine learning classifier that fits this problem very well. You may remember this estimator from SLU13.
```
# Split in train and validation
train_data, test_data = train_test_split(df, test_size=0.2, random_state=42)
# Build the pipeline
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('classifier', RandomForestClassifier(random_state = 42))])
# Train the classifier
text_clf.fit(map(str, train_data['message'].values), train_data['label'].values)
predicted = text_clf.predict(map(str, test_data['message'].values))
np.mean(predicted == test_data['label'])
```
Powerful words, no?
Our next step is to include other features.
## 1.2 - Adding extra features
But, besides this bag-of-words vectorization, let's see whether our classifier can be fed other signals retrieved from the text. Let's check, for example, the *length of the message*. We'll first compute it and add it as a feature in our dataframe.
```
df['length'] = df['message'].map(len)
df.head()
```
**Is this feature useful?**
Since this is a single numerical feature, we can simply plot its distribution in our data. Let's compare the length distribution for spam and ham.
```
ax_list = df.hist(column='length', by='label', bins=50,figsize=(12,4))
ax_list[0].set_xlim((0,300))
ax_list[1].set_xlim((0,300))
```
Seems quite different, right? So you would guess this feature should be helpful in your classifier.
But let's actually check this feature through the use of a text classifier. Now for the tricky parts.
### Preprocessing
If BLU7 is still fresh in your mind, you'll remember that when using pipelines we just fed them the text column. In fact, we could feed them more than one column, but a standard pipeline applies the same preprocessing to the whole dataset. For our heterogeneous data, this doesn't quite work.
So what can we do if we want a pipeline that uses several different features from several different columns? We can't apply the same methods to everything, right? The first thing we can do is create a selector transformer that simply returns the right column of the dataset, given the key value(s) you pass.
You can find below two such transformers: `TextSelector` for text columns and `NumberSelector` for number columns. Note that the only difference between them is the return type.
```
class Selector(BaseEstimator, TransformerMixin):
"""
Transformer to select a column from the dataframe to perform additional transformations on
"""
def __init__(self, key):
self.key = key
def fit(self, X, y=None):
return self
class TextSelector(Selector):
"""
Transformer to select a single column from the data frame to perform additional transformations on
Use on text columns in the data
"""
def transform(self, X):
return X[self.key]
class NumberSelector(Selector):
"""
Transformer to select a single column from the data frame to perform additional transformations on
Use on numeric columns in the data
"""
def transform(self, X):
return X[[self.key]]
```
And then we define pipelines tailored for each of our cases.
```
text = Pipeline([
('selector', TextSelector("message")),
('tfidf', TfidfVectorizer())
])
length = Pipeline([
('selector', NumberSelector("length")),
('standard', StandardScaler())
])
```
Notice that we used the `StandardScaler`. We use this scaler (which transforms each feature to zero mean and unit variance) because we don't want features on different scales in our classifier. Most classification algorithms expect the features to be on the same scale!
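Concretely, here is the zero-mean, unit-variance transformation computed by hand on a few toy message lengths (StandardScaler does the same per column, using the population standard deviation):

```python
# Standardization by hand: subtract the mean, divide by the standard
# deviation; the scaled values end up with mean 0 and variance 1.
lengths = [50.0, 100.0, 150.0, 200.0]
mean = sum(lengths) / len(lengths)
std = (sum((x - mean) ** 2 for x in lengths) / len(lengths)) ** 0.5
scaled = [(x - mean) / std for x in lengths]

print([round(s, 3) for s in scaled])        # [-1.342, -0.447, 0.447, 1.342]
print(round(sum(scaled) / len(scaled), 6))  # 0.0
```
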
You might be wondering now:
> *How does this solve my problem... now I have two pipelines and although I can feed my whole dataset they are separate pipelines... does this help at all?*
In fact, if you were to run them separately this would not be that helpful, since you would have to add the classifier at the end of each. It seems like we are missing only one piece, a way to combine steps in parallel and not in sequence. This is where feature unions come in!
## 1.3 - Feature Unions
While pipelines define a cascaded workflow, feature unions allow you to parallelize your workflows and have several transformations applied in parallel to your pipeline. The image below presents a simple pipeline, in sequence:
<img src="./media/pipeline.png" width="40%">
While the following one presents what it is called a feature union:
<img src="./media/unions.png" width="70%">
The latter is quite simple to define in scikit-learn, as follows:
```
# Feature Union allow use to use multiple distinct features in our classifier
feats = FeatureUnion([('text', text),
('length', length)])
```
Now you can use this combination of pipelines and feature unions inside a new pipeline!
<img src="./media/pipelines_dawg.png" width="45%">
We then get our final flow, from which we can extract the classification score.
```
# Split in train and validation
train_data, test_data = train_test_split(df, test_size=0.2, random_state=42)
pipeline = Pipeline([
('features',feats),
('classifier', RandomForestClassifier(random_state = 42)),
])
pipeline.fit(train_data, train_data.label)
preds = pipeline.predict(test_data)
np.mean(preds == test_data.label)
```
Our new feature does help! We got a slight improvement over a baseline that was already quite high. Nicely done. Let's now play with other, more complex text features and see if we can push our classification score even higher.
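Accuracy alone can be flattering on a dataset that is roughly 87% ham, so it is also worth checking per-class precision and recall (sklearn's `classification_report` computes these; below is a hand-rolled sketch on toy labels, with a helper name of our own):

```python
# Precision and recall for the "spam" class, computed from scratch.
def precision_recall(y_true, y_pred, positive="spam"):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = ["ham", "ham", "spam", "spam", "ham"]
y_pred = ["ham", "spam", "spam", "ham", "ham"]
print(precision_recall(y_true, y_pred))  # (0.5, 0.5)
```
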
## 1.4 - Advanced features
What kind of features can you think of?
You could start by just having the number of words, in the same way that we had the character length of the sentence:
```
df['words'] = df['message'].str.split().map(len)
```
Remember BLU7? Remember stopwords?
<img src="./media/stopwords.png" width="40%">
Let's count only words that are not stopwords, since these are normally less relevant.
```
stop_words = set(stopwords.words('english'))
df['words_not_stopword'] = df['message'].apply(lambda x: len([t for t in x.split() if t not in stop_words]))
```
In the same way, we can compute counts conditioned on different characteristics, like counting the number of commas in the sentence, or the number of words that are uppercased or capitalized:
```
df['commas'] = df['message'].str.count(',')
df['upper'] = df['message'].map(lambda x: sum(w.isupper() for w in x.split()))
df['capitalized'] = df['message'].map(lambda x: sum(w.istitle() for w in x.split()))
```
We can also model the type of words by their length, for example:
```
# Get the average word length, ignoring stopwords
df['avg_word_length'] = df['message'].apply(
    lambda x: np.mean([len(t) for t in x.split() if t not in stop_words])
    if len([t for t in x.split() if t not in stop_words]) > 0 else 0)
```
Let's take a look then at our output data frame, and all the features we added:
```
df.head()
```
And now we can use the Feature Unions that we learned about to merge all these together. We'll split the data, create pipelines for all our new features and get their unions. Easy, right?
```
words = Pipeline([
('selector', NumberSelector(key='words')),
('standard', StandardScaler())
])
words_not_stopword = Pipeline([
('selector', NumberSelector(key='words_not_stopword')),
('standard', StandardScaler())
])
avg_word_length = Pipeline([
('selector', NumberSelector(key='avg_word_length')),
('standard', StandardScaler())
])
commas = Pipeline([
('selector', NumberSelector(key='commas')),
('standard', StandardScaler()),
])
upper = Pipeline([
('selector', NumberSelector(key='upper')),
('standard', StandardScaler()),
])
capitalized = Pipeline([
('selector', NumberSelector(key='capitalized')),
('standard', StandardScaler()),
])
feats = FeatureUnion([('text', text),
('length', length),
('words', words),
('words_not_stopword', words_not_stopword),
('avg_word_length', avg_word_length),
('commas', commas),
('upper', upper),
('capitalized', capitalized)])
feature_processing = Pipeline([('feats', feats)])
```
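The numeric pipelines above rely on a `NumberSelector` transformer that was defined earlier in the notebook. As a reminder of what such a column selector does, here is a minimal sketch (the class name and `key` parameter are taken from how it is used above; the exact original implementation may differ):

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

class NumberSelector(BaseEstimator, TransformerMixin):
    """Select a single numeric column of a DataFrame as a 2D array,
    so it can feed StandardScaler and other sklearn transformers."""
    def __init__(self, key):
        self.key = key

    def fit(self, X, y=None):
        # Nothing to learn; column selection is stateless
        return self

    def transform(self, X):
        # Return shape (n_samples, 1): downstream steps expect 2D input
        return X[[self.key]]
```

With a selector like this, each numeric pipeline first extracts one column and then scales it.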
We end with our classifier, so let's run it and get our classification score.
*Drumroll, please.*
```
# Split in train and validation
train_data, test_data = train_test_split(df, test_size=0.2, random_state=42)
pipeline = Pipeline([
('features',feats),
('classifier', RandomForestClassifier(random_state = 42)),
])
pipeline.fit(train_data, train_data.label)
preds = pipeline.predict(test_data)
np.mean(preds == test_data.label)
```
<img src="./media/sad.png" width="40%">
Although we are still above the baseline, we didn't improve much on the score from using just the text and its length. But don't despair: with all the tools from BLU7, BLU8, and the first part of this BLU, you are well equipped to engineer new features and analyze whether they help. You can even integrate dimensionality reduction techniques into your pipelines to find the meaningful features among all of these.
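One way to integrate dimensionality reduction is to insert a reduction step between the features and the classifier. A standalone toy sketch (using a tiny made-up message set rather than the notebook's dataframe, and `TruncatedSVD` as one possible reduction choice):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier

messages = ["free prize call now", "see you at lunch",
            "win cash now", "meeting moved to friday"]
labels = ["spam", "ham", "spam", "ham"]

pipeline = Pipeline([
    ('tfidf', TfidfVectorizer()),
    ('svd', TruncatedSVD(n_components=2, random_state=42)),  # keep a few latent dims
    ('classifier', RandomForestClassifier(random_state=42)),
])
pipeline.fit(messages, labels)
preds = pipeline.predict(["win a free prize"])
```

The same `TruncatedSVD` step could be dropped into the feature-union pipeline above, right before the classifier.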
## 2 - Other classifiers
New approaches in text processing have arisen with machine learning methods known as deep learning. Deep learning is out of scope for this BLU, but it is important to be aware of the potential of such methods to improve over traditional machine learning algorithms. In particular, we suggest looking into two classifiers beyond sklearn:
* [StarSpace](https://github.com/facebookresearch/StarSpace)
* [Vowpal Wabbit classifier](https://github.com/JohnLangford/vowpal_wabbit/wiki)
### Additional Pointers
* https://www.kaggle.com/baghern/a-deep-dive-into-sklearn-pipelines
* http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html
* http://michelleful.github.io/code-blog/2015/06/20/pipelines/
* http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html#sphx-glr-auto-examples-hetero-feature-union-py
## 3 - Final remarks
And we are at the end of our NLP specialization. It saddens me, but it is time to say goodbye.
Throughout these BLUs you learned:
* How to process text
* Typical text features used in classification tasks
* State of the art techniques to encode text
* Methods to analyze feature importance
* Methods to perform feature reduction
* How to design pipelines and combine different features inside them
You are now armed with several tools to perform text classification and much more in NLP. Don't forget to review all of this for the NLP hackathon, and to do your best in the Exercises.
<img src="./media/so_long.jpg" width="40%">
| github_jupyter |
```
%matplotlib inline
import os
import sys
# Modify the path
sys.path.append("..")
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression, Lasso
# yellowbrick.features.importances
# Feature importance visualizer
#
# Author: Benjamin Bengfort <benjamin@bengfort.com>
# Created: Fri Mar 02 15:21:36 2018 -0500
# Author: Rebecca Bilbro <rbilbro@districtdatalabs.com>
# Updated: Sun Jun 24 10:53:36 2018 -0500
#
# Copyright (C) 2018 District Data Labs
# For license information, see LICENSE.txt
#
# ID: importances.py [] benjamin@bengfort.com $
"""
Implementation of a feature importances visualizer. This visualizer sits in
kind of a weird place since it is technically a model scoring visualizer, but
is generally used for feature engineering.
"""
##########################################################################
## Imports
##########################################################################
import numpy as np
import matplotlib.pyplot as plt
from yellowbrick.utils import is_dataframe
from yellowbrick.base import ModelVisualizer
from yellowbrick.exceptions import YellowbrickTypeError, NotFitted
##########################################################################
## Feature Visualizer
##########################################################################
class FeatureImportances(ModelVisualizer):
"""
Displays the most informative features in a model by showing a bar chart
of features ranked by their importances. Although primarily a feature
engineering mechanism, this visualizer requires a model that has either a
``coef_`` or ``feature_importances_`` parameter after fit.
Parameters
----------
model : Estimator
A Scikit-Learn estimator that learns feature importances. Must support
either ``coef_`` or ``feature_importances_`` parameters.
ax : matplotlib Axes, default: None
The axis to plot the figure on. If None is passed in the current axes
will be used (or generated if required).
labels : list, default: None
A list of feature names to use. If a DataFrame is passed to fit and
features is None, feature names are selected as the column names.
relative : bool, default: True
If true, the features are described by their relative importance as a
percentage of the strongest feature component; otherwise the raw
numeric description of the feature importance is shown.
absolute : bool, default: False
Make all coefficients absolute to more easily compare negative
coefficients with positive ones.
xlabel : str, default: None
The label for the X-axis. If None is automatically determined by the
underlying model and options provided.
kwargs : dict
Keyword arguments that are passed to the base class and may influence
the visualization as defined in other Visualizers.
Attributes
----------
features_ : np.array
The feature labels ranked according to their importance
feature_importances_ : np.array
The numeric value of the feature importance computed by the model
Examples
--------
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> visualizer = FeatureImportances(GradientBoostingClassifier())
>>> visualizer.fit(X, y)
>>> visualizer.show()
"""
def __init__(self, model, ax=None, labels=None, relative=True,
absolute=False, xlabel=None, **kwargs):
super(FeatureImportances, self).__init__(model, ax, **kwargs)
# Data Parameters
self.set_params(
labels=labels, relative=relative, absolute=absolute,
xlabel=xlabel,
)
def fit(self, X, y=None, **kwargs):
"""
Fits the estimator to discover the feature importances described by
the data, then draws those importances as a bar plot.
Parameters
----------
X : ndarray or DataFrame of shape n x m
A matrix of n instances with m features
y : ndarray or Series of length n
An array or series of target or class values
kwargs : dict
Keyword arguments passed to the fit method of the estimator.
Returns
-------
self : visualizer
The fit method must always return self to support pipelines.
"""
super(FeatureImportances, self).fit(X, y, **kwargs)
# Get the feature importances from the model
self.feature_importances_ = self._find_importances_param()
# Check if feature importances is a multidimensional array & if so flatten
if self.feature_importances_.ndim > 1:
self.feature_importances_ = np.mean(self.feature_importances_, axis=0)
# Apply absolute value filter before normalization
if self.absolute:
self.feature_importances_ = np.abs(self.feature_importances_)
# Normalize features relative to the maximum
if self.relative:
maxv = self.feature_importances_.max()
self.feature_importances_ /= maxv
self.feature_importances_ *= 100.0
# Create labels for the feature importances
# NOTE: this code is duplicated from MultiFeatureVisualizer
if self.labels is None:
# Use column names if a dataframe
if is_dataframe(X):
self.features_ = np.array(X.columns)
# Otherwise use the column index as the labels
else:
_, ncols = X.shape
self.features_ = np.arange(0, ncols)
else:
self.features_ = np.array(self.labels)
# Sort the features and their importances
sort_idx = np.argsort(self.feature_importances_)
self.features_ = self.features_[sort_idx]
self.feature_importances_ = self.feature_importances_[sort_idx]
# Draw the feature importances
self.draw()
return self
def draw(self, **kwargs):
"""
Draws the feature importances as a bar chart; called from fit.
"""
# Quick validation
for param in ('feature_importances_', 'features_'):
if not hasattr(self, param):
raise NotFitted("missing required param '{}'".format(param))
# Find the positions for each bar
pos = np.arange(self.features_.shape[0]) + 0.5
# Plot the bar chart
self.ax.barh(pos, self.feature_importances_, align='center')
# Set the labels for the bars
self.ax.set_yticks(pos)
self.ax.set_yticklabels(self.features_)
return self.ax
def finalize(self, **kwargs):
"""
Finalize the drawing setting labels and title.
"""
# Set the title
self.set_title('Feature Importances of {} Features using {}'.format(
len(self.features_), self.name))
# Set the xlabel
self.ax.set_xlabel(self._get_xlabel())
# Remove the ygrid
self.ax.grid(False, axis='y')
# Ensure we have a tight fit
plt.tight_layout()
def _find_importances_param(self):
"""
Searches the wrapped model for the feature importances parameter.
"""
for attr in ("feature_importances_", "coef_"):
try:
return getattr(self.estimator, attr)
except AttributeError:
continue
raise YellowbrickTypeError(
"could not find feature importances param on {}".format(
self.estimator.__class__.__name__
)
)
def _get_xlabel(self):
"""
Determines the xlabel based on the underlying data structure
"""
# Return user-specified label
if self.xlabel:
return self.xlabel
# Label for coefficients
if hasattr(self.estimator, "coef_"):
if self.relative:
return "relative coefficient magnitude"
return "coefficient value"
# Default label for feature_importances_
if self.relative:
return "relative importance"
return "feature importance"
def _is_fitted(self):
"""
Returns true if the visualizer has been fit.
"""
return hasattr(self, 'feature_importances_') and hasattr(self, 'features_')
##########################################################################
## Quick Method
##########################################################################
def feature_importances(model, X, y=None, ax=None, labels=None,
relative=True, absolute=False, xlabel=None, **kwargs):
"""
Displays the most informative features in a model by showing a bar chart
of features ranked by their importances. Although primarily a feature
engineering mechanism, this visualizer requires a model that has either a
``coef_`` or ``feature_importances_`` parameter after fit.
Parameters
----------
model : Estimator
A Scikit-Learn estimator that learns feature importances. Must support
either ``coef_`` or ``feature_importances_`` parameters.
X : ndarray or DataFrame of shape n x m
A matrix of n instances with m features
y : ndarray or Series of length n, optional
An array or series of target or class values
ax : matplotlib Axes, default: None
The axis to plot the figure on. If None is passed in the current axes
will be used (or generated if required).
labels : list, default: None
A list of feature names to use. If a DataFrame is passed to fit and
features is None, feature names are selected as the column names.
relative : bool, default: True
If true, the features are described by their relative importance as a
percentage of the strongest feature component; otherwise the raw
numeric description of the feature importance is shown.
absolute : bool, default: False
Make all coefficients absolute to more easily compare negative
coefficients with positive ones.
xlabel : str, default: None
The label for the X-axis. If None is automatically determined by the
underlying model and options provided.
kwargs : dict
Keyword arguments that are passed to the base class and may influence
the visualization as defined in other Visualizers.
Returns
-------
ax : matplotlib axes
Returns the axes that the parallel coordinates were drawn on.
"""
# Instantiate the visualizer
visualizer = FeatureImportances(
model, ax, labels, relative, absolute, xlabel, **kwargs)
# Fit and transform the visualizer (calls draw)
visualizer.fit(X, y)
visualizer.finalize()
# Return the axes object on the visualizer
return visualizer.ax
```
# Binary Classification
```
occupancy = pd.read_csv('data/occupancy/occupancy.csv')
features = [
"temperature", "relative humidity", "light", "C02", "humidity"
]
classes = ["unoccupied", "occupied"]
X = occupancy[features]
y = occupancy['occupancy']
lr_importances = FeatureImportances(LogisticRegression())
lr_importances.fit(X,y)
lr_importances.show()
lr_importances = FeatureImportances(LogisticRegression(), absolute=True)
lr_importances.fit(X,y)
lr_importances.show()
```
# Multiclass Classification
```
game = pd.read_csv('data/game/game.csv')
classes = ["win", "loss", "draw"]
game.replace({'loss':-1, 'draw':0, 'win':1, 'x':2, 'o':3, 'b':4}, inplace=True)
X = game.iloc[:, game.columns != 'outcome']
y = game['outcome']
lr_importances = FeatureImportances(LogisticRegression(), size=(1080, 720))
lr_importances.fit(X,y)
lr_importances.show()
lr_importances = FeatureImportances(LogisticRegression(), absolute=True, size=(1080, 720))
lr_importances.fit(X,y)
lr_importances.show()
```
---
# Safety and Security - Ingest Video Streams
To avoid the need for a set of live cameras for this demo, we play back video from a series of JPEG files on disk
and write each video frame to SDP.
---
### Import dependencies
```
%load_ext autoreload
%autoreload 2
import grpc
import imp
import pravega.grpc_gateway as pravega
import pravega.video as video
from pravega.video import OutputStream, ImageFileSequenceLoader
import os
```
### Define Pravega stream parameters
```
gateway = os.environ['PRAVEGA_GRPC_GATEWAY_ADDRESS']
scope = 'examples'
stream = 'safety-and-security-video'
```
### Initialize connection to Pravega GRPC Gateway
```
pravega_channel = grpc.insecure_channel(gateway)
pravega_client = pravega.grpc.PravegaGatewayStub(pravega_channel)
#pravega_client.CreateScope(pravega.pb.CreateScopeRequest(scope=scope))
```
### Create Pravega stream
```
output_stream = OutputStream(pravega_client, scope, stream)
#output_stream.delete_stream()
output_stream.create_stream()
output_stream.truncate_stream()
# Truncate object detector output stream
detected_stream = OutputStream(pravega_client, scope, 'multi-video-grid-output')
#detected_stream.delete_stream()
detected_stream.create_stream(min_num_segments=1)
detected_stream.truncate_stream()
```
### Ingest JPEG images from files
```
data_dir = '../../data/'
camera_filespecs = [
data_dir + 'tollway-system-in-a-multi-lane-expressway-3324160/*.jpg',
data_dir + 'traffic-intersection-4188267/*.jpg',
data_dir + 'aerial-footage-of-vehicular-traffic-of-a-busy-street-intersection-at-night-3048225/*.jpg',
data_dir + 'people-walking-on-the-street-of-a-shopping-center-3283572/*.jpg',
# data_dir + 'a-cargo-truck-maneuvering-turns-in-a-manufacturing-plant-2711276/*.jpg',
# data_dir + 'aerial-view-of-a-freeway-1472012/*.jpg',
# data_dir + 'drone-footage-of-a-city-s-business-district-2835994/*.jpg',
# data_dir + 'drone-footage-of-a-manufacturing-plant-3061260/*.jpg',
# data_dir + 'rail-freight-trains-in-transit-4066355/*.jpg',
# data_dir + 'aerial-view-of-bridge-and-river-2292093/*.jpg',
# data_dir + 'virat/aerial/09152008flight2tape1_10/*.jpg',
# data_dir + 'virat/ground/VIRAT_S_000200_01_000226_000268/*.jpg',
# data_dir + 'virat/ground/VIRAT_S_000201_08_001652_001838/*.jpg',
# data_dir + 'virat/ground/VIRAT_S_000205_03_000860_000922/*.jpg',
# data_dir + 'virat/aerial/09172008flight1tape1_5/*.jpg',
# data_dir + 'virat/aerial/09152008flight2tape3_1/*.jpg',
# data_dir + 'virat/aerial/09152008flight2tape1_7/*.jpg',
]
input_fps = 15
step = 1
fps = input_fps / step
loader = ImageFileSequenceLoader(scope, stream, camera_filespecs, fps, step=step)
events_to_write = loader.event_generator()
output_stream.write_events(events_to_write)
```
# 1. Import libraries
```
#----------------------------Reproducible----------------------------------------------------------------------------------------
import numpy as np
import tensorflow as tf
import random as rn
import os
seed=0
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
rn.seed(seed)
#session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
session_conf =tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
#tf.set_random_seed(seed)
tf.compat.v1.set_random_seed(seed)
#sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
K.set_session(sess)
#----------------------------Reproducible----------------------------------------------------------------------------------------
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#--------------------------------------------------------------------------------------------------------------------------------
from keras.datasets import fashion_mnist
from keras.models import Model
from keras.layers import Dense, Input, Flatten, Activation, Dropout, Layer
from keras.layers.normalization import BatchNormalization
from keras.utils import to_categorical
from keras import optimizers,initializers,constraints,regularizers
from keras import backend as K
from keras.callbacks import LambdaCallback,ModelCheckpoint
from keras.utils import plot_model
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import ExtraTreesClassifier
from sklearn import svm
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
import h5py
import math
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
matplotlib.style.use('ggplot')
import random
import scipy.sparse as sparse
import pandas as pd
from skimage import io
from PIL import Image
#--------------------------------------------------------------------------------------------------------------------------------
# Import our self-defined methods
import sys
sys.path.append(r"./Defined")
import Functions as F
# The following code should be added before the keras model
#np.random.seed(seed)
#--------------------------------------------------------------------------------------------------------------------------------
def write_to_csv(p_data,p_path):
dataframe = pd.DataFrame(p_data)
dataframe.to_csv(p_path, mode='a',header=False,index=False,sep=',')
del dataframe
```
# 2. Loading data
```
dataset_path='./Dataset/coil-20-proc/'
samples={}
for dirpath, dirnames, filenames in os.walk(dataset_path):
#print(dirpath)
#print(dirnames)
#print(filenames)
dirnames.sort()
filenames.sort()
for filename in [f for f in filenames if f.endswith(".png") and not f.find('checkpoint')>0]:
full_path = os.path.join(dirpath, filename)
file_identifier=filename.split('__')[0][3:]
if file_identifier not in samples.keys():
samples[file_identifier] = []
# Direct read
#image = io.imread(full_path)
# Resize read
image_=Image.open(full_path).resize((20, 20),Image.ANTIALIAS)
image=np.asarray(image_)
samples[file_identifier].append(image)
#plt.imshow(samples['1'][0].reshape(20,20))
data_arr_list=[]
label_arr_list=[]
for key_i in samples.keys():
key_i_for_label=[int(key_i)-1]
data_arr_list.append(np.array(samples[key_i]))
label_arr_list.append(np.array(72*key_i_for_label))
data_arr=np.concatenate(data_arr_list).reshape(1440, 20*20).astype('float32') / 255.
label_arr_onehot=to_categorical(np.concatenate(label_arr_list))
sample_used=300
x_train_all,x_test,y_train_all,y_test_onehot= train_test_split(data_arr,label_arr_onehot,test_size=0.2,random_state=seed)
x_train=x_train_all[0:sample_used]
y_train_onehot=y_train_all[0:sample_used]
print('Shape of x_train: ' + str(x_train.shape))
print('Shape of x_test: ' + str(x_test.shape))
print('Shape of y_train: ' + str(y_train_onehot.shape))
print('Shape of y_test: ' + str(y_test_onehot.shape))
F.show_data_figures(x_train[0:40],20,20,40)
F.show_data_figures(x_test[0:40],20,20,40)
key_feture_number=50
```
# 3.Model
```
np.random.seed(seed)
#--------------------------------------------------------------------------------------------------------------------------------
class Feature_Select_Layer(Layer):
def __init__(self, output_dim, **kwargs):
super(Feature_Select_Layer, self).__init__(**kwargs)
self.output_dim = output_dim
def build(self, input_shape):
self.kernel = self.add_weight(name='kernel',
shape=(input_shape[1],),
initializer=initializers.RandomUniform(minval=0.999999, maxval=0.9999999, seed=seed),
trainable=True)
super(Feature_Select_Layer, self).build(input_shape)
def call(self, x, selection=False,k=key_feture_number):
kernel=K.abs(self.kernel)
if selection:
kernel_=K.transpose(kernel)
kth_largest = tf.math.top_k(kernel_, k=k)[0][-1]
kernel = tf.where(condition=K.less(kernel,kth_largest),x=K.zeros_like(kernel),y=kernel)
return K.dot(x, tf.linalg.tensor_diag(kernel))
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
#--------------------------------------------------------------------------------------------------------------------------------
def Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate=1E-3,\
p_loss_weight_1=1,\
p_loss_weight_2=2):
input_img = Input(shape=(p_data_feature,), name='autoencoder_input')
feature_selection = Feature_Select_Layer(output_dim=p_data_feature,\
input_shape=(p_data_feature,),\
name='feature_selection')
feature_selection_score=feature_selection(input_img)
feature_selection_choose=feature_selection(input_img,selection=True,k=p_feture_number)
encoded = Dense(p_encoding_dim,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_hidden_layer')
encoded_score=encoded(feature_selection_score)
encoded_choose=encoded(feature_selection_choose)
bottleneck_score=encoded_score
bottleneck_choose=encoded_choose
decoded = Dense(p_data_feature,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_output')
decoded_score =decoded(bottleneck_score)
decoded_choose =decoded(bottleneck_choose)
latent_encoder_score = Model(input_img, bottleneck_score)
latent_encoder_choose = Model(input_img, bottleneck_choose)
feature_selection_output=Model(input_img,feature_selection_choose)
autoencoder = Model(input_img, [decoded_score,decoded_choose])
autoencoder.compile(loss=['mean_squared_error','mean_squared_error'],\
loss_weights=[p_loss_weight_1, p_loss_weight_2],\
optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
return autoencoder,feature_selection_output,latent_encoder_score,latent_encoder_choose
```
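The selection branch of `Feature_Select_Layer` above keeps only the `k` kernel weights with the largest absolute value and zeroes the rest before scaling the input. The same masking logic can be sketched standalone in plain NumPy (this is an illustration of the idea, not the layer itself):

```python
import numpy as np

def top_k_mask(kernel, k):
    """Zero out all but the k largest-magnitude entries of a 1D weight vector."""
    mag = np.abs(kernel)
    kth_largest = np.sort(mag)[-k]          # value of the k-th largest magnitude
    return np.where(mag < kth_largest, 0.0, kernel)

w = np.array([0.1, -0.9, 0.3, 0.05, -0.4])
print(top_k_mask(w, k=2))   # only -0.9 and -0.4 survive
```

In the layer, this mask is applied to the (absolute-valued) kernel, and the input is then multiplied by the resulting diagonal matrix, so only `k` input features pass through.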
## 3.1 Structure and paramter testing
```
epochs_number=200
batch_size_value=8
```
---
### 3.1.1 Fractal Autoencoder
---
```
loss_weight_1=0.0078125
F_AE,\
feature_selection_output,\
latent_encoder_score_F_AE,\
latent_encoder_choose_F_AE=Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3,\
p_loss_weight_1=loss_weight_1,\
p_loss_weight_2=1)
F_AE_history = F_AE.fit(x_train, [x_train,x_train],\
epochs=epochs_number,\
batch_size=batch_size_value,\
shuffle=True)
loss = F_AE_history.history['loss']
epochs = range(epochs_number)
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
fina_results=np.array(F_AE.evaluate(x_test,[x_test,x_test]))
fina_results
fina_results_single=np.array(F_AE.evaluate(x_test[0:1],[x_test[0:1],x_test[0:1]]))
fina_results_single
for i in np.arange(x_test.shape[0]):
fina_results_i=np.array(F_AE.evaluate(x_test[i:i+1],[x_test[i:i+1],x_test[i:i+1]]))
write_to_csv(fina_results_i.reshape(1,len(fina_results_i)),"./log/results_"+str(sample_used)+".csv")
```
# Getting started with TensorFlow (Eager Mode)
**Learning Objectives**
- Understand difference between Tensorflow's two modes: Eager Execution and Graph Execution
- Practice defining and performing basic operations on constant Tensors
- Use Tensorflow's automatic differentiation capability
## Introduction
**Eager Execution**
Eager mode evaluates operations immediately, returning concrete values right away. To enable eager mode, simply place `tf.enable_eager_execution()` at the top of your code. We recommend eager execution when prototyping, as it is intuitive, easier to debug, and requires less boilerplate code.
**Graph Execution**
Graph mode is TensorFlow's default execution mode (although it will change to eager in TF 2.0). In graph mode, operations only build a symbolic graph, which doesn't get executed until run within the context of a `tf.Session()`. This style of coding is less intuitive and has more boilerplate; however, it can enable performance optimizations and is particularly suited to distributing training across multiple devices. We recommend delayed execution for performance-sensitive production code.
```
import tensorflow as tf
print(tf.__version__)
```
## Eager Execution
```
tf.enable_eager_execution()
```
### Adding Two Tensors
The value of the tensor, as well as its shape and data type, are printed.
```
a = tf.constant(value = [5, 3, 8], dtype = tf.int32)
b = tf.constant(value = [3, -1, 2], dtype = tf.int32)
c = tf.add(x = a, y = b)
print(c)
```
#### Overloaded Operators
We can also perform a `tf.add()` using the `+` operator. The `/`, `-`, `*` and `**` operators are similarly overloaded with the appropriate tensorflow operation.
```
c = a + b # this is equivalent to tf.add(a,b)
print(c)
```
### NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
```
import numpy as np
a_py = [1,2] # native python list
b_py = [3,4] # native python list
a_np = np.array(object = [1,2]) # numpy array
b_np = np.array(object = [3,4]) # numpy array
a_tf = tf.constant(value = [1,2], dtype = tf.int32) # native TF tensor
b_tf = tf.constant(value = [3,4], dtype = tf.int32) # native TF tensor
for result in [tf.add(x = a_py, y = b_py), tf.add(x = a_np, y = b_np), tf.add(x = a_tf, y = b_tf)]:
print("Type: {}, Value: {}".format(type(result), result))
```
You can convert a native TF tensor to a NumPy array using .numpy()
```
a_tf.numpy()
```
### Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
#### Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
```
X = tf.constant(value = [1,2,3,4,5,6,7,8,9,10], dtype = tf.float32)
Y = 2 * X + 10
print("X:{}".format(X))
print("Y:{}".format(Y))
```
#### Loss Function
Using mean squared error, our loss function is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
$\hat{Y}$ represents the vector containing our model's predictions:
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
#### **Exercise 1**
The function `loss_mse` below takes four arguments: the tensors $X$, $Y$ and the weights $w_0$ and $w_1$. Complete the function below to compute the Mean Square Error (MSE). Hint: [check out](https://www.tensorflow.org/api_docs/python/tf/math/reduce_mean) the `tf.reduce_mean` function.
```
def loss_mse(X, Y, w0, w1):
# TODO: Your code goes here
pass
```
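If you want to sanity-check your TF implementation, the same computation in plain NumPy looks like this (one possible reference implementation, assuming $\hat{Y} = w_0X + w_1$ as defined above):

```python
import numpy as np

def loss_mse_np(X, Y, w0, w1):
    # MSE between predictions w0*X + w1 and targets Y
    Y_hat = w0 * X + w1
    return np.mean((Y_hat - Y) ** 2)

X = np.array([1.0, 2.0, 3.0])
Y = 2 * X + 10        # perfect fit when w0=2, w1=10
print(loss_mse_np(X, Y, 2.0, 10.0))   # 0.0
```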
#### Gradient Function
To use gradient descent we need to take the partial derivative of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables. The `params=[2,3]` argument tells TensorFlow to only compute derivatives with respect to the 2nd and 3rd arguments to the loss function (counting from 0, so really the 3rd and 4th).
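For reference, computing these by hand from the MSE and $\hat{Y} = w_0X + w_1$ above would give:
\begin{equation}
\frac{\partial MSE}{\partial w_0} = \frac{2}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)X_i
\end{equation}
\begin{equation}
\frac{\partial MSE}{\partial w_1} = \frac{2}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)
\end{equation}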
```
# Counting from 0, the 2nd and 3rd parameter to the loss function are our weights
grad_f = tf.contrib.eager.gradients_function(f = loss_mse, params = [2, 3])
```
#### Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
#### **Exercise 2**
Complete the code to update the parameters $w_0$ and $w_1$ according to the gradients `d_w0` and `d_w1` and the specified learning rate.
```
STEPS = 1000
LEARNING_RATE = .02
# Initialize weights
w0 = tf.constant(value = 0.0, dtype = tf.float32)
w1 = tf.constant(value = 0.0, dtype = tf.float32)
for step in range(STEPS):
#1. Calculate gradients
d_w0, d_w1 = grad_f(X, Y, w0, w1) # derivatives calculated by tensorflow!
#2. Update weights
w0 = # TODO: Your code goes here
w1 = # TODO: Your code goes here
#3. Periodically print MSE
if step % 100 == 0:
print("STEP: {} MSE: {}".format(step,loss_mse(X, Y, w0, w1)))
# Print final MSE and weights
print("STEP: {} MSE: {}".format(STEPS,loss_mse(X, Y, w0, w1)))
print("w0:{}".format(round(float(w0), 4)))
print("w1:{}".format(round(float(w1), 4)))
```
## Bonus
Try modelling a non-linear function such as: $y=xe^{-x^2}$
Hint: Creating more training data will help. Also, you will need to build non-linear features.
```
from matplotlib import pyplot as plt
%matplotlib inline
X = tf.constant(value = np.linspace(0,2,1000), dtype = tf.float32)
Y = X * np.exp(-X**2)
plt.plot(X, Y)
def make_features(X):
features = [X]
features.append(tf.ones_like(X)) # Bias.
# TODO: add new features.
return tf.stack(features, axis=1)
def make_weights(n_weights):
W = [tf.constant(value = 0.0, dtype = tf.float32) for _ in range(n_weights)]
return tf.expand_dims(tf.stack(W),-1)
def predict(X, W):
Y_hat = tf.matmul(# TODO)
return tf.squeeze(Y_hat, axis=-1)
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
return tf.reduce_mean(input_tensor = (Y_hat - Y)**2)
X = tf.constant(value = np.linspace(0,2,1000), dtype = tf.float32)
Y = np.exp(-X**2) * X
grad_f = tf.contrib.eager.gradients_function(f = loss_mse, params=[2])
STEPS = 2000
LEARNING_RATE = .02
# Weights/features.
Xf = make_features(X)
# Xf = Xf[:,0:2] # Linear features only.
W = make_weights(Xf.get_shape()[1].value)
# For plotting
steps = []
losses = []
plt.figure()
for step in range(STEPS):
#1. Calculate gradients
dW = grad_f(Xf, Y, W)[0]
#2. Update weights
W -= dW * LEARNING_RATE
#3. Periodically print MSE
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
# Print final MSE and weights
print("STEP: {} MSE: {}".format(STEPS,loss_mse(Xf, Y, W)))
# Plot results
plt.figure()
plt.plot(X, Y, label='actual')
plt.plot(X, predict(Xf, W), label='predicted')
plt.legend()
```
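One hedged way to fill the `make_features` and `predict` TODOs above: since the target is smooth on [0, 2], a handful of powers of x makes a workable basis (the exact feature set is a design choice, not an official solution). A plain-Python sketch of the idea for a single sample:

```python
# Candidate non-linear features for the make_features TODO above.
# Powers of x are one simple choice; the exact set is up to experimentation.
def make_features(x):
    return [x, 1.0, x ** 2, x ** 3]     # [linear, bias, quadratic, cubic]

def predict(x, W):
    # Dot product of the feature vector with the weights, the same role
    # tf.matmul(Xf, W) plays in the notebook code above.
    return sum(w * f for w, f in zip(W, make_features(x)))
```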
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
```
import oommfc as oc
import discretisedfield as df
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import colorsys
plt.style.use('styles/lato_style.mplstyle')
def convert_to_RGB(hls_color):
return np.array(colorsys.hls_to_rgb(hls_color[0] / (2 * np.pi),
hls_color[1],
hls_color[2]))
def generate_RGBs(field_data):
"""
field_data :: (n, 3) array
"""
hls = np.ones_like(field_data)
hls[:, 0] = np.arctan2(field_data[:, 1],
field_data[:, 0]
)
hls[:, 0][hls[:, 0] < 0] = hls[:, 0][hls[:, 0] < 0] + 2 * np.pi
hls[:, 1] = 0.5 * (field_data[:, 2] + 1)
rgbs = np.apply_along_axis(convert_to_RGB, 1, hls)
# Redefine colours less than zero
# rgbs[rgbs < 0] += 2 * np.pi
return rgbs
```
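A quick sanity check on this colour mapping: the in-plane angle of the field sets the hue and m_z sets the lightness, so a +z vector should map to white and a -z vector to black. A stdlib-only single-vector version with the same conventions (the function name is ours, not from the notebook):

```python
import math
import colorsys

def field_to_rgb(mx, my, mz):
    # Hue from the in-plane angle (wrapped to [0, 2*pi)), lightness from mz,
    # matching the conventions of generate_RGBs above.
    h = math.atan2(my, mx)
    if h < 0:
        h += 2 * math.pi
    lightness = 0.5 * (mz + 1)          # mz = -1 -> 0 (black), mz = +1 -> 1 (white)
    return colorsys.hls_to_rgb(h / (2 * math.pi), lightness, 1.0)
```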
Initial states:
```
R = 30e-9 # bubble radius
w = 30e-9 # dw width
a = 130 * np.pi / 180
def init_m_lattice(pos):
xm, ym = 160e-9, 100e-9
for xc, yc in [[0, 0], [-xm, -ym], [xm, -ym], [xm, ym], [-xm, ym]]:
x, y = pos[0] - xc, pos[1] - yc
r = np.sqrt(x ** 2 + y ** 2)
phi = np.arctan2(y, x)
if r <= R:
if (phi < 10 * np.pi / 180) or (phi < 0 and phi > -170 * np.pi / 180):
factor = 1
else:
factor = -1
sech = 1 / np.cosh(np.pi * (r - R) / w)
mx = factor * sech * (-np.sin(phi + a))
my = factor * sech * (np.cos(phi + a))
mz = 1 - (mx ** 2 + my ** 2)
# if r < R:
# mz = -(1 - (mx ** 2 + my ** 2))
# else:
# mz = (1 - (mx ** 2 + my ** 2))
return (mx, my, mz)
return (0, 0, 1)
def init_m(pos):
x, y = pos[0], pos[1]
r = np.sqrt(x ** 2 + y ** 2)
phi = np.arctan2(y, x)
if (phi < 10 * np.pi / 180) or (phi < 0 and phi > -170 * np.pi / 180):
factor = 1
else:
factor = -1
sech = 1 / np.cosh(np.pi * (r - R) / w)
mx = factor * sech * (-np.sin(phi + a))
my = factor * sech * (np.cos(phi + a))
# mz = 1 - (mx ** 2 + my ** 2)
if r < R:
mz = -(1 - (mx ** 2 + my ** 2))
else:
mz = (1 - (mx ** 2 + my ** 2))
return (mx, my, mz)
def init_dot(pos):
x, y = pos[0], pos[1]
r = np.sqrt(x ** 2 + y ** 2)
if r < R:
mz = -1
else:
mz = 1
return (0, 0, mz)
def init_type2bubble(pos):
R = 80e-9
x, y = pos[0], pos[1]
r = np.sqrt(x ** 2 + y ** 2)
phi = np.arctan2(y, x) + 0.5 * np.pi
k = np.pi / R
if r < R and y > 0:
m = (np.sin(k * r) * np.cos(phi), np.sin(k * r) * np.sin(phi), -np.cos(k * r))
elif r < R and y < 0:
m = (-np.sin(k * r) * np.cos(phi), -np.sin(k * r) * np.sin(phi), -np.cos(k * r))
else:
m = (0, 0, 1)
return m
def init_type2bubble_bls(pos):
R = 80e-9
x, y = pos[0], pos[1]
r = np.sqrt(x ** 2 + y ** 2)
phi = np.arctan2(y, x)
phi_b = np.arctan2(y, x) + 0.5 * np.pi
k = np.pi / R
if r < R and y > 0 and phi < np.pi * 0.5:
m = (-np.sin(k * r) * np.cos(phi_b), -np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
elif r < R and y > 0 and phi > np.pi * 0.5:
m = (np.sin(k * r) * np.cos(phi_b), np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
elif r < R and y < 0 and np.abs(phi) > np.pi * 0.5:
m = (-np.sin(k * r) * np.cos(phi_b), -np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
elif r < R and y < 0 and np.abs(phi) < np.pi * 0.5:
m = (np.sin(k * r) * np.cos(phi_b), np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
else:
m = (0, 0, 1)
return m
def init_type2bubble_bls_II(pos):
R = 80e-9
x, y = pos[0], pos[1]
r = np.sqrt(x ** 2 + y ** 2)
phi = np.arctan2(y, x)
phi_b = np.arctan2(y, x) + 0.5 * np.pi
k = np.pi / R
if r < R and y > 0:
m = (np.sin(k * r) * np.cos(phi_b), np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
elif r < R and y < 0:
m = (-np.sin(k * r) * np.cos(phi_b), -np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
else:
m = (0, 0, 1)
return m
def init_type2bubble_neel(pos):
R = 80e-9
x, y = pos[0], pos[1]
r = np.sqrt(x ** 2 + y ** 2)
phi = np.arctan2(y, x)
phi_b = np.arctan2(y, x)
k = np.pi / R
if r < R and y > 0 and phi < np.pi * 1:
m = (-np.sin(k * r) * np.cos(phi_b), -np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
elif r < R and y > 0 and phi > np.pi * 1:
m = (np.sin(k * r) * np.cos(phi_b), np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
elif r < R and y < 0 and np.abs(phi) > np.pi * 1:
m = (-np.sin(k * r) * np.cos(phi_b), -np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
elif r < R and y < 0 and np.abs(phi) < np.pi * 1:
m = (np.sin(k * r) * np.cos(phi_b), np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
else:
m = (0, 0, 1)
return m
def init_type2bubble_neel_II(pos):
R = 80e-9
x, y = pos[0], pos[1]
r = np.sqrt(x ** 2 + y ** 2)
phi = np.arctan2(y, x)
phi_b = np.arctan2(y, x)
k = np.pi / R
if r < R and y > 0:
m = (np.sin(k * r) * np.cos(phi_b), np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
elif r < R and y < 0:
m = (-np.sin(k * r) * np.cos(phi_b), -np.sin(k * r) * np.sin(phi_b), -np.cos(k * r))
else:
m = (0, 0, 1)
return m
np.random.seed(42)
def init_random(pos):
m = 2 * np.random.random((1, 3)) - 1
return m
mu0 = 4 * np.pi * 1e-7
# A = 12e-12
# Ms = 5.37e5
# Ku = 187952
# B = 180e-3
A = 20e-12
Ms = 0.648 / mu0
Ku = A / 2.3e-16
# Apply field in an angle
B = 0.2
theta_B = 0 * np.pi / 180
phi_B = 0 * np.pi / 180
print('lex = ', np.sqrt(2 * A / (mu0 * Ms ** 2)))
mesh = oc.Mesh(p1=(-400e-9, -400e-9, -50e-9), p2=(400e-9, 400e-9, 50e-9),
cell=(5e-9, 5e-9, 5e-9))
system = oc.System(name='oommf_typeII_bubble')
# Add interactions
system.hamiltonian = (oc.Exchange(A=A) + oc.UniaxialAnisotropy(K1=Ku, u=(0, 0, 1))
+ oc.Demag()
+ oc.Zeeman((np.cos(phi_B) * np.sin(theta_B) * B / mu0,
np.sin(phi_B) * np.sin(theta_B) * B / mu0,
np.cos(theta_B) * B / mu0))
)
system.m = df.Field(mesh, value=init_m, norm=Ms)
f, ax = plt.subplots(ncols=1, figsize=(8, 8))
system.m.plot_plane('z', ax=ax)
```
Relax the system:
```
md = oc.MinDriver()
md.drive(system)
```
Extract the simulation data:
```
# A list of tuples with the coordinates in the order of systems[2]
coordinates = list(system.m.mesh.coordinates)
# Turn coordinates into a (N, 3) array and save in corresponding variables
# scaled in nm
coordinates = np.array(coordinates)
x_oommf, y_oommf, z_oommf = coordinates[:, 0] * 1e9, coordinates[:, 1] * 1e9, coordinates[:, 2] * 1e9
xs, ys, zs = np.unique(x_oommf), np.unique(y_oommf), np.unique(z_oommf)
# phi_oommf = np.arctan2(y_oommf, x_oommf)
# Get the magnetisation for every coordinate in the magnetisation list
values = []
for c in coordinates:
values.append(system.m(c))
values = np.array(values)
# Save them in the corresponding row and column of the m list
# mx, my, mz:
mx, my, mz = (values[:, 0] / Ms, values[:, 1] / Ms,values[:, 2] / Ms)
# mphi = lambda z_i: (-mx_O * np.sin(phi_O) + my_O * np.cos(phi_O))[_filter_y_O(z_i)]
# mr = lambda z_i: (mx_O * np.cos(phi_O) + my_O * np.sin(phi_O))[_filter_y_O(z_i)]
# Optionally save the data:
#
# np.savetxt('coordinates_typeIIbubble_800nmx800nmx100nm.txt', coordinates)
# np.savetxt('mx_typeIIbubble_800nmx800nmx100nm.txt', mx)
# np.savetxt('my_typeIIbubble_800nmx800nmx100nm.txt', my)
# np.savetxt('mz_typeIIbubble_800nmx800nmx100nm.txt', mz)
# Make an average through the thickness
z_filter = z_oommf == np.unique(z_oommf)[0]
av_map_x = np.copy(np.zeros_like(mx[z_filter]))
av_map_y = np.copy(np.zeros_like(mx[z_filter]))
av_map_z = np.copy(np.zeros_like(mx[z_filter]))
for layer in np.unique(z_oommf)[:]:
z_filter = z_oommf == layer
av_map_x += mx[z_filter]
av_map_y += my[z_filter]
av_map_z += mz[z_filter]
av_map_x /= len(np.unique(z_oommf)[:])
av_map_y /= len(np.unique(z_oommf)[:])
av_map_z /= len(np.unique(z_oommf)[:])
```
Plot the system at the bottom layer:
```
f, ax = plt.subplots(ncols=1, figsize=(8, 8))
z_filter = z_oommf == np.unique(z_oommf)[0]
rgb_map = generate_RGBs(np.column_stack((mx, my, mz)))
plt.scatter(x_oommf[z_filter], y_oommf[z_filter], c=rgb_map[z_filter])
# plt.scatter(x_oommf, y_oommf, c=rgb_map)
# Arrows filter
arr_fltr_tmp = np.zeros(len(xs))
arr_fltr_tmp[::6] = 1
arr_fltr = np.zeros_like(x_oommf[z_filter]).reshape(len(xs), -1)
arr_fltr[::6] = arr_fltr_tmp
arr_fltr = arr_fltr.astype(bool).reshape(-1,)
plt.quiver(x_oommf[z_filter][arr_fltr], y_oommf[z_filter][arr_fltr],
mx[z_filter][arr_fltr], my[z_filter][arr_fltr],
scale=None)
# plt.savefig('oommf_bubble_tilted_30deg_y-axis-neg.jpg', dpi=300, bbox_inches='tight')
```
Plot the average:
```
f, ax = plt.subplots(ncols=1, figsize=(8, 8))
z_filter = z_oommf == np.unique(z_oommf)[0]
rgb_map = generate_RGBs(np.column_stack((av_map_x, av_map_y, av_map_z)))
plt.scatter(x_oommf[z_filter], y_oommf[z_filter], c=rgb_map)
# Arrows filter
arr_fltr_tmp = np.zeros(len(xs))
arr_fltr_tmp[::6] = 1
arr_fltr = np.zeros_like(x_oommf[z_filter]).reshape(len(xs), -1)
arr_fltr[::6] = arr_fltr_tmp
arr_fltr = arr_fltr.astype(bool).reshape(-1,)
plt.quiver(x_oommf[z_filter][arr_fltr], y_oommf[z_filter][arr_fltr],
av_map_x[arr_fltr], av_map_y[arr_fltr],
scale=None)
# plt.savefig('av_map_thickness_quiver.jpg', dpi=300, bbox_inches='tight')
```
# A3: ReactionNetworks demo
# Introduction
This notebook demonstrates the functionality of the ReactionNetworks module. Classes included here are ReactionNetwork, C13ReactionNetwork and TSReactionNetwork, which hold stoichiometric networks, 13C transition networks and mixed stoichiometric/13C transition (2S) networks, respectively.
# Setup
First, we need to set the path and environment variable properly:
```
quantmodelDir = '/users/hgmartin/libraries/quantmodel'
```
This is the only place where the jQMM library path needs to be set.
```
%matplotlib inline
import sys, os
pythonPath = quantmodelDir+"/code/core"
if pythonPath not in sys.path:
sys.path.append(pythonPath)
os.environ["QUANTMODELPATH"] = quantmodelDir
```
Then import the needed classes:
```
import ReactionNetworks as RN
import core, enhancedLists
import numpy
```
and get into a test directory:
```
cd ~/tests
```
# Classes description
## ReactionNetwork class
The ReactionNetwork class holds reaction networks with only stoichiometric information, such as used for FBA. Let's create a reaction network from scratch by creating a few reactions (see core and enhancedLists module notebook for more details):
```
GLCpts = core.Reaction.from_string('GLCpts: glc_dsh_D_e + pep_c --> g6p_c + pyr_c')
PGI = core.Reaction.from_string('PGI: g6p_c <==> f6p_c')
PFK = core.Reaction.from_string('PFK: atp_c + f6p_c --> adp_c + fdp_c + h_c')
print(GLCpts.stoichLine())
print(PGI.stoichLine())
print(PFK.stoichLine())
```
Once we have the reactions, we create a reactionList instance:
```
reactionList = enhancedLists.ReactionList([GLCpts,PGI,PFK])
```
And, from there, we create a list of metabolites with getMetList, use 'test' as the name of the reaction network, and include no notes ({}):
```
ReactionNetwork = RN.ReactionNetwork(('test',{},reactionList.getMetList(),reactionList))
print(ReactionNetwork.reactionList)
print(ReactionNetwork.metList)
```
The **addReaction** method adds a reaction to the network:
```
ReactionNetwork.addReaction('F6PA: f6p_c <==> dha_c + g3p_c')
print(ReactionNetwork.reactionList)
print(ReactionNetwork.metList)
```
and **deleteReaction** eliminates a reaction:
```
ReactionNetwork.deleteReaction('F6PA')
print(ReactionNetwork.reactionList)
```
**changeFluxBounds** changes upper and lower bounds for a desired reaction:
```
ReactionNetwork.changeFluxBounds('GLCpts',3)
ReactionNetwork.changeFluxBounds('PGI',0)
ReactionNetwork.changeFluxBounds('PFK',(0.1,0.5))
rxnDict = ReactionNetwork.reactionList.getReactionDictionary()
print(rxnDict['PFK'].fluxBounds.net)
```
(First number is lower bound, last number is upper bound)
Flux bounds can also be loaded from a file using **loadFluxBounds**:
```
lines= [
'GLCpts: 0.3 [==] 0.45',
'PGI: 0.8 [==] 0.95',
'PFK: 0.0 [==] 1000',
]
ReactionNetwork.loadFluxBounds(('testFile.txt','\n'.join(lines)))
print(rxnDict['GLCpts'].fluxBounds.net)
print(rxnDict['PGI'].fluxBounds.net)
```
**fixFluxes** fixes the lower and upper bounds to the current flux value (+/- some wiggle room):
```
rxnDict['PGI'].flux = rxnDict['PGI'].fluxBounds
ReactionNetwork.fixFluxes()
print(rxnDict['PGI'].fluxBounds.net)
```
This is useful when you want to constrain fluxes to a measured value, e.g. for finding the OF corresponding to that measured value.
and **capFluxBounds** caps the maximum bounds for each reaction:
```
ReactionNetwork.capFluxBounds(10)
print(rxnDict['PFK'].fluxBounds)
```
**getStoichMetRxnFBAFiles** produces the GAMS files for running FBA:
```
ReactionNetwork.getStoichMetRxnFBAFiles()
```
as does **getStoichMetRxnFVAFiles** for Flux Variability Analysis (FVA).
The method **write** writes the reaction network in SBML format to the given filename (or to a string if 'toString' is used as the argument):
```
print(ReactionNetwork.write('toString'))
```
## C13ReactionNetwork class
The C13ReactionNetwork class holds reaction networks with only carbon transition information, such as used for 13C MFA. Let's load a C13ReactionNetwork for a TCA cycle mock-up:
```
qmodeldir = os.environ['QUANTMODELPATH']
dirDATA = qmodeldir+'/data/tests/TCAtoy/'
REACTIONSfilename = dirDATA+'REACTIONStca.txt'
FEEDfilename = dirDATA+'FEEDtca.txt'
CEMSfilename = dirDATA+'GCMStca.txt'
CEMSSTDfilename = dirDATA+'GCMSerrtca.txt'
FLUXESFreefilename = dirDATA+'FLUXtca.txt'
```
**addLabeling** and **addFeed** are used to load the labeling data and the feed labeling information:
```
atomTransitions = enhancedLists.AtomTransitionList(REACTIONSfilename)
ReacNet = atomTransitions.getReactionNetwork('E. coli wt5h 13C MFA')
ReacNet.addLabeling(CEMSfilename,'LCMSLabelData',CEMSSTDfilename,minSTD=0.001)
ReacNet.addFeed(FEEDfilename)
ReacNet.loadFluxBounds(FLUXESFreefilename)
```
Here is the labeling (Mass Distribution Vector, or MDV) for glutamate:
```
ReacNet.notes['LCMSLabelData']['Glu'].mdv
ReacNet.notes['LCMSLabelData']['Glu'].std
```
One can use **randomizeLabeling** to randomize this labeling within the standard deviations:
```
ReacNet.randomizeLabeling()
print(ReacNet.notes['LCMSLabelData']['Glu'].mdv)
ReacNet.randomizeLabeling()
print(ReacNet.notes['LCMSLabelData']['Glu'].mdv)
```
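Conceptually, an MDV is a vector of isotopologue fractions summing to one, and randomizing it within the standard deviations amounts to adding noise of that scale to each entry and renormalizing. A pure-Python sketch of the idea (an illustration, not jQMM's implementation):

```python
import random

def randomize_mdv(mdv, std, rng=random.Random(0)):
    # Perturb each fraction by Gaussian noise with the measured standard
    # deviation, clip at zero, and renormalize so the fractions sum to 1.
    noisy = [max(0.0, m + rng.gauss(0.0, s)) for m, s in zip(mdv, std)]
    total = sum(noisy)
    return [n / total for n in noisy]
```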
**fragDict** produces a dictionary of fragments to be fit:
```
fragDict = ReacNet.fragDict()
print(ReacNet.fragDict())
print(fragDict['Glu'].mdv)
print(fragDict['Glu'].std)
```
**getLabelingDataFiles** produces the gams files with the labeling information:
```
ReacNet.getLabelingDataFiles()
```
**getFragmentInfoFiles** produces the gams files with the fragment information (raw indicates that no derivatization correction is needed):
```
ReacNet.getFragmentInfoFiles('raw')
```
**getSourceLabelFile** produces the file with the source labeling information:
```
ReacNet.getSourceLabelFile()
```
**getStoichMetRxn13CFiles** produces the files with the metabolites, reactions and stoichiometric information for 13C MFA:
```
ReacNet.getStoichMetRxn13CFiles()
```
**getEMUfiles** produces the EMU information files:
```
ReacNet.getEMUfiles()
```
**getAuxiliaryFiles** provides the auxiliary files (random number seed, etc):
```
ReacNet.getAuxiliaryFiles()
```
## TSReactionNetwork class
The TSReactionNetwork class holds reaction networks with stoichiometric information and carbon transition information for a defined core, which are typically used for 2S-13C MFA:
```
datadir = os.environ["QUANTMODELPATH"]+'/data/tests/Toya2010/2S/wt5h/'
BASEfilename = datadir + 'EciJR904TKs.xml'
FLUXESfilename = datadir + 'FLUXwt5h.txt'
REACTIONSfilename = datadir + 'REACTIONSwt5h.txt'
MSfilename = datadir + 'GCMSwt5h.txt'
FEEDfilename = datadir + 'FEEDwt5h.txt'
MSSTDfilename = datadir + 'GCMSerrwt5h.txt'
# Load initial SBML file
ReacNet = RN.TSReactionNetwork(BASEfilename)
# Add Measured fluxes
ReacNet.loadFluxBounds(FLUXESfilename)
# Add carbon transitions
ReacNet.addTransitions(REACTIONSfilename)
# Add measured labeling information
ReacNet.addLabeling(MSfilename,'LCMSLabelData',MSSTDfilename,minSTD=0.001)
# Add feed labeling information
ReacNet.addFeed(FEEDfilename)
```
**addTransitions** adds the transition information to a genome-scale model:
```
rxnDict = ReacNet.reactionList.getReactionDictionary()
print(rxnDict['PDH'].stoichLine())
REACTIONSfilename = datadir + 'REACTIONSwt5h.txt'
ReacNet.addTransitions(REACTIONSfilename)
rxnDict = ReacNet.reactionList.getReactionDictionary()
print(rxnDict['PDH'].stoichLine())
print(rxnDict['PDH'].transitionLine)
```
**getFluxRefScale** provides the flux that is used as the reference scale:
```
ReacNet.getFluxRefScale(inFluxRefs=['EX_glc(e)','EX_glc_e_'])
```
# Autoencoder
In this notebook we will build, train, and inspect simple, sparse, deep, and convolutional autoencoders on the MNIST dataset.
```
from keras.layers import Input, Dense
from keras.models import Model
import matplotlib.pyplot as plt
import matplotlib.colors as mcol
from matplotlib import cm
def graph_colors(nx_graph):
#cm1 = mcol.LinearSegmentedColormap.from_list("MyCmapName",["blue","red"])
#cm1 = mcol.Colormap('viridis')
cnorm = mcol.Normalize(vmin=0,vmax=9)
cpick = cm.ScalarMappable(norm=cnorm,cmap='Set1')
cpick.set_array([])
val_map = {}
for k,v in nx.get_node_attributes(nx_graph,'attr').items():
#print(v)
val_map[k]=cpick.to_rgba(v)
#print(val_map)
colors=[]
for node in nx_graph.nodes():
#print(node,val_map.get(str(node), 'black'))
colors.append(val_map[node])
return colors
```
##### 1. Write a function that builds a simple autoencoder
The autoencoder must have a simple Dense layer with relu activation. The number of nodes of the dense layer is a parameter of the function.
The function must return the entire autoencoder model as well as the encoder and the decoder.
##### Load the mnist dataset
##### 2. Build the autoencoder with an embedding size of 32 and print the number of parameters of the model. What do they relate to?
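As a sanity check for this question: a dense 784 → 32 → 784 autoencoder (flattened 28×28 MNIST inputs) has one weight matrix plus one bias vector per Dense layer, so the count can be done by hand:

```python
# Parameters of a Dense(32) encoder + Dense(784) decoder on 784-dim inputs.
input_dim, embedding_dim = 28 * 28, 32

encoder_params = input_dim * embedding_dim + embedding_dim   # weights + biases
decoder_params = embedding_dim * input_dim + input_dim
total_params = encoder_params + decoder_params
print(total_params)                                          # 50992
```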
##### 3. Fit the autoencoder using 32 epochs with a batch size of 256
##### 4. Using the history object of the autoencoder, write a function that plots the learning curves with respect to the epochs on the train and test sets. What can you say about these learning curves? Also give the last loss on the test set
##### 5. Write a function that plots a fixed number of examples of the original images from the test set as well as their reconstructions
### Nearest neighbours graphs
The goal of this part is to visualize the neighbours graph in the embedding space. It is the graph of the k-nearest neighbours of the embedded test elements, using the Euclidean distance between points in the embedding.
```
from sklearn.neighbors import kneighbors_graph
import networkx as nx
def plot_nearest_neighbour_graph(encoder,x_test,y_test,ntest=100,p=3): #to explain
X=encoder.predict(x_test[1:ntest])
y=y_test[1:ntest]
A = kneighbors_graph(X, p, mode='connectivity', include_self=True)
G=nx.from_numpy_array(A.toarray())
nx.set_node_attributes(G,dict(zip(range(ntest),y)),'attr')
fig, ax = plt.subplots(figsize=(10,10))
pos=nx.layout.kamada_kawai_layout(G)
nx.draw(G,pos=pos
,with_labels=True
,labels=nx.get_node_attributes(G,'attr')
,node_color=graph_colors(G))
plt.tight_layout()
plt.title('Nearest Neighbours Graph',fontsize=15)
plt.show()
```
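The `kneighbors_graph` call above connects each embedded point to its k nearest neighbours under the Euclidean distance; the underlying neighbour search can be spelled out in a few lines of plain Python (toy 2-D points, k = 1):

```python
def nearest_neighbour(points, i):
    # Index of the point closest to points[i] (excluding itself),
    # by squared Euclidean distance.
    xi, yi = points[i]
    best, best_d = None, float("inf")
    for j, (xj, yj) in enumerate(points):
        if j == i:
            continue
        d = (xi - xj) ** 2 + (yi - yj) ** 2
        if d < best_d:
            best, best_d = j, d
    return best

pts = [(0, 0), (0, 1), (5, 5), (5, 6)]   # two well-separated pairs
```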
### Reduce the dimension of the embedding
##### 6. Rerun the previous example using an embedding dimension of 16
## Adding sparsity
##### 7. In this part we will add sparsity on the weights of the embedding layer. Write a function that builds such an autoencoder (using an l1 regularization with a configurable regularization parameter and the same autoencoder architecture as before)
# Deep autoencoder
# Convolutional autoencoder
# Application to denoising
## Is flying safer now than before?
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import sqlite3
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import matplotlib as mpl
from flight_safety.queries import get_events_accidents
mpl.rcParams['figure.figsize'] = 10, 6
mpl.rcParams['font.size'] = 20
con = sqlite3.connect('data/avall.db')
events = get_events_accidents(con)
gby_year = events.groupby(events.ev_date.dt.year)
injured_per_year = gby_year[['inj_tot_f', 'inj_tot_s', 'inj_tot_m']].sum()
injured_per_year.plot.area(alpha=0.8)
plt.xlabel('Year')
plt.ylabel('Number of victims');
passengers = pd.read_csv('./data/annual_passengers_carried_data.csv', nrows=1, usecols=range(4,60))
passengers = passengers.transpose()
# renaming column
passengers.columns = ['passengers']
gby_year = events.groupby(events.ev_date.dt.year)
injured_per_year = gby_year[['inj_tot_f', 'inj_tot_s', 'inj_tot_m']].sum()
injured_per_year.tail()
# parsing date in index
passengers.index = pd.to_datetime(passengers.index.str[:4])
# converting flight number to number
passengers['passengers'] = pd.to_numeric(passengers['passengers'], errors='coerce') / 1e6
passengers.index = passengers.index.year
flights = pd.read_csv('data/API_IS.AIR.DPRT_DS2_en_csv_v2/API_IS.AIR.DPRT_DS2_en_csv_v2.csv', skiprows=4)
flights = flights[flights['Country Name'] == 'United States']
flights = flights.iloc[:, 5:-1].T
flights.index = pd.to_numeric(flights.index)
flights = flights / 1e6
flights.columns = ['flights']
fig, ax = plt.subplots(1, 1)
# ax[0].set_title('Millions of passengers transported')
# passengers['passengers'].plot.area(ax=ax[0], alpha=0.6, color="#0072B2")
# ax[0].set_xlim(1975, 2014)
ax.set_title('Millions of flights')
flights.plot.area(ax=ax, alpha=0.6, legend=False)
ax.set_xlim(1975, 2014)
plt.tight_layout()
accidents_gby_year = events.groupby(events.ev_date.dt.year).ntsb_no.count()
ax = accidents_gby_year.plot.area(alpha=0.8,)
ax.set_xlim(1982, 2015)
ax.set_ylabel('Number of accidents');
accident_rate = accidents_gby_year / flights.flights
accident_rate.tail()
ax = accident_rate.plot.area(alpha=0.8, figsize=(11,5))
ax.set_xlim(1982, 2015)
ax.set_ylabel('accident rate');
ax = accident_rate.plot.area(alpha=0.8, figsize=(11,5))
ax.set_xlim(1998, 2016)
ax.set_ylabel('accident rate');
```
## Official global numbers
According to [EASA](https://www.easa.europa.eu/) in 2016:
* Airline fatal accident rate: ~0.6 fatal accidents / 1 Million flights
* Airline non-fatal accident rate: ~4.4 non-fatal accidents / 1 Million flights
https://www.easa.europa.eu/system/files/dfu/209735_EASA_ASR_MAIN_REPORT_2.0.pdf
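As a back-of-the-envelope illustration of these rates (the traffic figure below is hypothetical, not from EASA): with about 9 million departures a year, the 2016 rates would translate into roughly 5 fatal and 40 non-fatal accidents per year:

```python
# Back-of-the-envelope: expected accidents = rate per million flights
# times annual flights in millions. The traffic figure is illustrative.
fatal_rate = 0.6        # fatal accidents per million flights (EASA, 2016)
nonfatal_rate = 4.4     # non-fatal accidents per million flights
flights_millions = 9.0  # hypothetical annual departures, in millions

expected_fatal = fatal_rate * flights_millions        # about 5.4
expected_nonfatal = nonfatal_rate * flights_millions  # about 39.6
```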
## What can be done to keep improving?
<center><img src="images/data4safety.png" width=500px></center>
Collecting and gathering all data that may support the management of safety risks
- safety reports
- flight data (i.e. data generated by the aircraft via the Flight Data Recorders)
- surveillance data (air traffic data),
- weather data
- ...
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
<!--NAVIGATION-->
< [Structure Refinement](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/05.00-Structure-Refinement.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Refinement Protocol](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/05.02-Refinement-Protocol.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/05.01-High-Res-Movers.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# High-Resolution Movers
Keywords: keep_history(), MoveMap, SmallMover(), ShearMover(), angle_max(), set_bb(), MinMover(), MonteCarlo(), boltzmann(), TrialMover(), SequenceMover(), RepeatMover()
```
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.setup()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
from pyrosetta.teaching import *
init()
```
**Make sure you are in the directory with the pdb files:**
`cd google_drive/My\ Drive/student-notebooks/`
In the last workshop, you encountered the `ClassicFragmentMover`, which inserts a short sequence of backbone torsion angles, and the `SwitchResidueTypeSetMover`, which doesn’t actually change the conformation of the pose but instead swaps out the residue types used.
In this workshop, we will introduce a variety of other `Movers`, particularly those used in high-resolution refinement (e.g., in Bradley’s 2005 paper).
Before you start, load the cleaned cetuximab protein 1YY8 that we've worked with previously (but just one Fab fragment, chain A and B) and make a copy of the pose so we can compare later:
```
start = pose_from_pdb("1YY8.clean.pdb")
test = Pose()
test.assign(start)
```
```
### BEGIN SOLUTION
start = pose_from_pdb("inputs/1YY8.clean.pdb")
test = Pose()
test.assign(start)
### END SOLUTION
```
OPTIONAL: For convenient viewing in PyMOL, set the names of both poses:
```
start.pdb_info().name("start")
test.pdb_info().name("test")
pmm = PyMOLMover()
```
```
### BEGIN SOLUTION
start.pdb_info().name("start")
test.pdb_info().name("test")
pmm = PyMOLMover()
### END SOLUTION
```
We also want to activate the `keep_history` setting so that PyMOL will keep separate frames for each conformation as we modify it (more on this shortly):
```
pmm.keep_history(True)
pmm.apply(start)
pmm.apply(test)
```
```
### BEGIN SOLUTION
pmm.keep_history(True)
pmm.apply(start)
pmm.apply(test)
### END SOLUTION
```
## Small and Shear Moves
```
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from IPython import display
```
Small mover (1YY9, residue 277):
```
from pathlib import Path
gifPath = Path("./Media/small-mover.gif")
# Display GIF in Jupyter, CoLab, IPython
with open(gifPath,'rb') as f:
display.Image(data=f.read(), format='png',width='400')
```
Shear mover (1YY9, residue 277):
```
gifPath = Path("./Media/shear-mover.gif")
# Display GIF in Jupyter, CoLab, IPython
with open(gifPath,'rb') as f:
display.Image(data=f.read(), format='png',width='400')
```
The simplest move types are small moves, which perturb φ or ψ of a random residue by a random small angle, and shear moves, which perturb φ of a random residue by a small angle and ψ of the same residue by the same small angle of opposite sign.
For convenience, the `SmallMover` and `ShearMover` can do multiple rounds of perturbation. They also check that the new φ/ψ combinations are within an allowable region of the Ramachandran plot by using a Metropolis acceptance criterion based on the rama score component change. (The `rama` score is a statistical score from Simons et al. 1999, parametrized by bins of φ/ψ space.) Because they use the Metropolis criterion, we must also supply $kT$.
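The Metropolis criterion mentioned above always accepts a move that lowers the score and accepts an uphill move with probability exp(-ΔE/kT); a minimal stdlib sketch of the rule (an illustration, not PyRosetta's internal code):

```python
import math
import random

def metropolis_accept(delta_E, kT, rng=random.Random(0)):
    # Downhill moves are always accepted; uphill moves are accepted
    # with Boltzmann probability exp(-delta_E / kT).
    if delta_E <= 0:
        return True
    return rng.random() < math.exp(-delta_E / kT)
```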
Finally, like most `Movers`, these require a `MoveMap` object to specify which degrees of freedom are fixed and which are free to change. Thus, we can create our `Movers` like this:
```
kT = 1.0
n_moves = 1
movemap = MoveMap()
movemap.set_bb(True)
small_mover = SmallMover(movemap, kT, n_moves)
shear_mover = ShearMover(movemap, kT, n_moves)
```
```
### BEGIN SOLUTION
kT = 1.0
n_moves = 1
movemap = MoveMap()
movemap.set_bb(True)
small_mover = SmallMover(movemap, kT, n_moves)
shear_mover = ShearMover(movemap, kT, n_moves)
### END SOLUTION
```
We can also adjust the maximum magnitude of the perturbations and get the information back from the `SmallMover` by printing it:
```
small_mover.angle_max("H", 25)
small_mover.angle_max("E", 25)
small_mover.angle_max("L", 25)
print(small_mover)
```
```
### BEGIN SOLUTION
small_mover.angle_max("H", 25)
small_mover.angle_max("E", 25)
small_mover.angle_max("L", 25)
print(small_mover)
### END SOLUTION
```
Here, *"H"*, *"E"*, and *"L"* refer to helical, sheet, and loop residues — as they did in the fragment library file — and the magnitude is in degrees. We will set all the maximum angles to 25° to make the changes easy to visualize. (The default values in Rosetta are 0°, 5°, and 6°, respectively.)
### Test your mover by applying it to your pose
```
small_mover.apply(test)
```
```
### BEGIN SOLUTION
small_mover.apply(test)
### END SOLUTION
```
Confirm that the change has occurred by comparing the start and test poses in PyMOL.
```
pmm.apply(test)
```
```
### BEGIN SOLUTION
pmm.apply(test)
### END SOLUTION
```
Second, try the PyMOL animation controls on the bottom right corner of the Viewer window. There should be a play button (►) as well as frame-forward, rewind, etc. Play the movie to watch PyMOL shuffle your pose move back and forth.
__Question:__ Can you identify which torsion angles changed? By how much? If it is hard to view on the screen, it may help to use your programs from previous workshops to compare torsion angles or coordinates.
### Comparing small and shear movers
Reset the test pose by re-assigning it the conformation from `start`, and create and view a second test pose (`test2`) in the same manner. Reset the existing `MoveMap` object to only allow the backbone angles of residues 50 and 51 to move. (Hint: Set all residues to `False`, then set just residues 50 and 51 to `True`). Note that the `SmallMover` contains a pointer to your `MoveMap`, and so it will automatically know you have made these changes and use the modified `MoveMap` in future moves.
```
test2 = Pose()
test2.assign(start)
test2.pdb_info().name("test2")
pmm.apply(test2)
movemap.set_bb(False)
movemap.set_bb(50, True)
movemap.set_bb(51, True)
print(movemap)
```
```
### BEGIN SOLUTION
test2 = Pose()
test2.assign(start)
test2.pdb_info().name("test2")
pmm.apply(test2)
movemap.set_bb(False)
movemap.set_bb(50, True)
movemap.set_bb(51, True)
print(movemap)
### END SOLUTION
```
Make one small move on one of your test poses and one shear move on the other test pose. Output both poses to PyMOL using the `PyMOLMover`. Be sure to set the name of each pose so they are distinguishable in PyMOL. Show only backbone atoms and view as lines or sticks. Identify the torsion angle changes that occurred.
```
small_mover.apply(test)
shear_mover.apply(test2)
pmm.apply(test)
pmm.apply(test2)
```
```
### BEGIN SOLUTION
small_mover.apply(test)
shear_mover.apply(test2)
pmm.apply(test)
pmm.apply(test2)
### END SOLUTION
```
__Question:__ What was the magnitude of the change in the sheared pose? How does the displacement of residue 60 compare between the small- and shear-perturbed poses?
## Minimization Moves
```
from IPython.display import Image
Image('./Media/minmover.png',width='300')
```
The `MinMover` carries out a gradient-based minimization to find the nearest local minimum in the energy function, such as that used in one step of the Monte-Carlo-plus-Minimization algorithm of Li & Scheraga.
```
min_mover = MinMover()
```
```
### BEGIN SOLUTION
min_mover = MinMover()
### END SOLUTION
```
The minimization mover needs at least a `MoveMap` and a `ScoreFunction`. You can also specify different minimization algorithms and a tolerance. (See Appendix A). For now, set up a new `MoveMap` that is flexible from residues 40 to 60, inclusive, using:
```
mm4060 = MoveMap()
mm4060.set_bb_true_range(40, 60)
```
```
### BEGIN SOLUTION
mm4060 = MoveMap()
mm4060.set_bb_true_range(40, 60)
### END SOLUTION
```
Create a standard, full-atom `ScoreFunction`, attach these objects to the default `MinMover`, and print out the information in the `MinMover` with the following methods and check that everything looks right:
```
scorefxn = #get the full-atom score function
min_mover.movemap(mm4060)
min_mover.score_function(scorefxn)
print(min_mover)
```
```
### BEGIN SOLUTION
scorefxn = get_fa_scorefxn()
min_mover.movemap(mm4060)
min_mover.score_function(scorefxn)
print(min_mover)
### END SOLUTION
```
Finally, attach an “observer”. The observer is configured to execute a `PyMOLMover.apply()` every time a change is observed in the pose coordinates. The `True` is a flag to ensure that PyMOL keeps a history of the moves.
```
observer = pyrosetta.rosetta.protocols.moves.AddPyMOLObserver(test2, True)
```
```
### BEGIN SOLUTION
observer = pyrosetta.rosetta.protocols.moves.AddPyMOLObserver(test2, True)
### END SOLUTION
```
4. Apply the `MinMover` to your sheared pose. Observe the output in PyMOL. (This may take a couple of minutes; the observer can slow down the minimization significantly.)

```
min_mover.apply(test2)
```
```
### BEGIN SOLUTION
min_mover.apply(test2)
### END SOLUTION
```
__Question:__ How much motion do you see, relative to the original shear move? How many coordinate updates does the *MinMover* try? How does the magnitude of the motion change as the minimization continues? At the end, how far has the Cα atom of residue 60 moved?
## Monte Carlo Object
PyRosetta has several object classes for convenience for building more complex algorithms. One example is the `MonteCarlo` object. This object performs all the bookkeeping you need for creating a Monte Carlo search. That is, it can decide whether to accept or reject a trial conformation, and it keeps track of the lowest-energy conformation and other statistics about the search. Having the Monte Carlo operations packaged together is convenient, especially if we want multiple Monte Carlo loops to nest within each other or to operate on different parts of the protein.
To create the object, you need an initial test pose, a score function, and a temperature factor:
```
mc = MonteCarlo(test, scorefxn, kT)
```
```
### BEGIN SOLUTION
mc = MonteCarlo(test, scorefxn, kT)
### END SOLUTION
```
After the pose is modified by a mover, we tell the `MonteCarlo` object to automatically accept or reject the new conformation and update a set of internal counters by calling:
```
mc.boltzmann(test)
```
```
### BEGIN SOLUTION
mc.boltzmann(test)
### END SOLUTION
```
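The accept/reject decision inside `mc.boltzmann` is the standard Metropolis criterion. A minimal pure-Python sketch of that rule (illustrative only; the real `MonteCarlo` object also reverts rejected poses, updates counters, and tracks the lowest-energy state):

```python
import math
import random

def metropolis_accept(e_old, e_new, kT, rng=random):
    """Accept downhill moves always; uphill moves with probability exp(-dE/kT)."""
    if e_new <= e_old:
        return True
    return rng.random() < math.exp(-(e_new - e_old) / kT)

# A move that lowers the score is always kept.
accepted = metropolis_accept(-100.0, -105.0, kT=1.0)
```

Raising `kT` makes uphill moves more likely to be accepted, which is why the temperature factor controls how aggressively the search explores.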
5. Test out a `MonteCarlo` object. Before doing so, you may need to adjust your small and shear moves to smaller maximum angles (3–5°) so they are more likely to be accepted. Apply several small or shear moves on your `test` pose, output the score using `print(scorefxn(test))`, then call the `mc.boltzmann(test)` method of the `MonteCarlo` object. A response of `True` indicates the move is accepted, and `False` indicates that the move is rejected. If the move is rejected, the pose is automatically reverted for you to its last accepted state. Manually iterate a few times between moves and calls to `mc.boltzmann()`. Call `pmm.apply(test)` every time you get a `True` back from the `mc.boltzmann(test)` method. Do enough cycles to observe at least two `True` and two `False` outputs. Do the acceptances match what you expect given the scores you obtain? After doing a few cycles, use `mc.show_scores()` to find the score of the last accepted state and the lowest energy state. What energies do you find? Is the last accepted energy equal to the lowest energy?
```
# adjust the SmallMover
small_mover.angle_max("H", 3)
small_mover.angle_max("E", 5)
small_mover.angle_max("L", 6)
# and the ShearMover
shear_mover.angle_max("H", 3)
shear_mover.angle_max("E", 5)
shear_mover.angle_max("L", 6)
```
Then write your MonteCarlo loop below:
```
# adjust the SmallMover
### BEGIN SOLUTION
small_mover.angle_max("H", 3)
small_mover.angle_max("E", 5)
small_mover.angle_max("L", 6)
### END SOLUTION
# and the ShearMover
### BEGIN SOLUTION
shear_mover.angle_max("H", 3)
shear_mover.angle_max("E", 5)
shear_mover.angle_max("L", 6)
### END SOLUTION
```
6. See what information is stored in the Monte Carlo object using:
```
mc.show_scores()
mc.show_counters()
mc.show_state()
```
__Question:__ What information do you get from each of these?
## Trial Mover
```
Image('./Media/trialmover.png',width='250')
```
A `TrialMover` combines a specified `Mover` with a `MonteCarlo` object. Each time a `TrialMover` is called, it performs a trial move and tests that move’s acceptance with the `MonteCarlo` object. You can create a `TrialMover` from any other type of `Mover`. You might imagine that, as we start nesting these together, we can build some complex algorithms!
```
trial_mover = TrialMover(small_mover, mc)
trial_mover.apply(test)
```
```
### BEGIN SOLUTION
trial_mover = TrialMover(small_mover, mc)
trial_mover.apply(test)
### END SOLUTION
```
7. Apply the `TrialMover` above ten times. Using `trial_mover.num_accepts()` and `trial_mover.acceptance_rate()`, what do you find?
```
### BEGIN SOLUTION
for i in range(10):
trial_mover.apply(test)
print(trial_mover.num_accepts())
print(trial_mover.acceptance_rate())
### END SOLUTION
```
8. The `TrialMover` also communicates information to the `MonteCarlo` object about the type of moves being tried. Create a second `TrialMover` (`trial_mover2`) using a `ShearMover` and the same `MonteCarlo` object, and apply this second `TrialMover` ten times like above. After, look at the `MonteCarlo` object state (`mc.show_state()`).
__Question:__ Using information from `mc.show_state()`, what are the acceptance rates of each mover (`SmallMover` and `ShearMover`)? Which mover is accepted most often, and which has the largest energy drop per trial? What are the average energy drops?
## Sequence and Repeat Movers
A `SequenceMover` is another combination `Mover` and applies several `Movers` in succession. It is useful for building up complex routines and is constructed (and confirmed with a print statement) as follows:
```
seq_mover = SequenceMover()
seq_mover.add_mover(small_mover)
seq_mover.add_mover(shear_mover)
seq_mover.add_mover(min_mover)
print(seq_mover)
```
```
### BEGIN SOLUTION
seq_mover = SequenceMover()
seq_mover.add_mover(small_mover)
seq_mover.add_mover(shear_mover)
seq_mover.add_mover(min_mover)
print(seq_mover)
### END SOLUTION
```
The above example mover will apply first the small, then the shear, and finally the minimization movers.
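Conceptually, a `SequenceMover` is the composite pattern: a mover whose `apply` simply calls each child mover's `apply` in the order they were added. A small stand-alone sketch in plain Python, with callables standing in for movers:

```python
class SequenceMoverSketch:
    """Stand-in for SequenceMover: applies each added mover in order."""
    def __init__(self):
        self.movers = []

    def add_mover(self, mover):
        self.movers.append(mover)

    def apply(self, pose):
        for mover in self.movers:
            mover(pose)

# 'pose' is just a list here; each stand-in mover records its name on it.
trace = []
seq = SequenceMoverSketch()
seq.add_mover(lambda pose: pose.append("small"))
seq.add_mover(lambda pose: pose.append("shear"))
seq.add_mover(lambda pose: pose.append("min"))
seq.apply(trace)
```

Because a `SequenceMover` is itself a `Mover`, it can be wrapped in a `TrialMover` or nested inside other composite movers.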
9. Create and print a `TrialMover` using the `SequenceMover` above, and apply it five times to the test pose. How is the sequence mover shown by `mc.show_state()`?
A `RepeatMover` will apply its input Mover `n_repeats` times each time it is applied:
```
n_repeats = 3
repeat_mover = RepeatMover(trial_mover, n_repeats)
print(repeat_mover)
```
```
### BEGIN SOLUTION
n_repeats = 3
repeat_mover = RepeatMover(trial_mover, n_repeats)
print(repeat_mover)
### END SOLUTION
```
10. Use these tools to build up your own *ab initio* protocol. Create `TrialMovers` for 9-mer and 3-mer fragment insertions. First, create `RepeatMovers` for each and then create the `TrialMovers` using the same `MonteCarlo` object for each. Create a `SequenceMover` to do the 9-mer trials and then the 3-mer trials, and iterate the sequence 10 times.
__Problem:__ Use a pen and paper to write out a flowchart of the following algorithm:
11. *Hierarchical search*. Construct a `TrialMover` that tries to insert a 9-mer fragment and then refines the protein with 100 alternating small and shear trials before the next 9-mer fragment trial. The interesting part is this: you will use one `MonteCarlo` object for the small and shear trials, inside the whole 9-mer combination mover. But use a separate `MonteCarlo` object for the 9-mer trials. In this way, if a 9-mer fragment insertion is evaluated after the optimization by small and shear moves and is rejected, the pose goes all the way back to before the 9-mer fragment insertion.
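One way to see the control flow for this problem is in plain Python rather than PyRosetta. Here `energy`, `big_move`, and `small_move` are placeholder callables; the point is the two nested Metropolis loops with *separate* accept/reject states:

```python
import math
import random

def metropolis(e_old, e_new, kT, rng):
    return e_new <= e_old or rng.random() < math.exp(-(e_new - e_old) / kT)

def hierarchical_search(energy, big_move, small_move, x0,
                        n_outer=10, n_inner=100, kT=1.0, seed=0):
    """Outer MC over large moves; each trial is refined by an inner MC first."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    for _ in range(n_outer):
        y = big_move(x, rng)                # large (9-mer-like) perturbation
        ey = energy(y)
        for _ in range(n_inner):            # inner MC: small refinements
            z = small_move(y, rng)
            ez = energy(z)
            if metropolis(ey, ez, kT, rng):
                y, ey = z, ez
        if metropolis(e, ey, kT, rng):      # outer accept/reject; a rejection
            x, e = y, ey                    # discards the whole refined trial
    return x, e

# Toy 1-D example: quadratic energy, uniform random moves.
x_best, e_best = hierarchical_search(
    energy=lambda x: x * x,
    big_move=lambda x, rng: x + rng.uniform(-2.0, 2.0),
    small_move=lambda x, rng: x + rng.uniform(-0.2, 0.2),
    x0=5.0)
```

Because the outer criterion compares against the state from *before* the large move, a rejection throws away the inner refinement as well, which is exactly the behavior the exercise asks for.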
<!--NAVIGATION-->
< [Structure Refinement](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/05.00-Structure-Refinement.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Refinement Protocol](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/05.02-Refinement-Protocol.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/05.01-High-Res-Movers.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
```
%matplotlib inline
import os, glob, warnings, sys
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from nltools.data import Brain_Data
from nltools.mask import expand_mask, collapse_mask
import scipy.stats as ss
from scipy.stats import pearsonr,spearmanr
from scipy.spatial.distance import squareform
from sklearn.metrics import pairwise_distances
from sklearn.metrics.pairwise import cosine_similarity
base_dir = '/project/3014018.02/analysis_mri/DataSharingCollection/'
```
###### Run Intersubject RSA on Response screen (here 'Dec')
```
import subprocess
py_dir = '/home/decision/jervbaa/.conda/envs/hmtg_fmri_nc/bin/python2.7'
# Fixed arguments
screen = 'Dec'
metric_model = 'euclidean'
metric_brain = 'correlation'
permutation_method = 'vector'
n_permute = 100000
nparcel = 200
# Loops
conds = ['X4']
parcels = range(nparcel)
complementOnly = True # Sometimes jobs go missing when being sent to the cluster => run this cell again to fill in the blanks only
for cond in conds:
jobids = pd.DataFrame(columns=['cond','parcel','jobid'])
pathCur = os.path.join(base_dir,
'Results/3.fMRI-ISRSA/IS-RSA/'+
'IS-RSA_nparcel-%i_perm-%s_%s%s'%(
nparcel,permutation_method,screen,cond))
logPathCur = os.path.join(pathCur,'Logfiles/')
if os.path.exists(pathCur) == False:
os.mkdir(pathCur)
if os.path.exists(logPathCur) == False:
os.mkdir(logPathCur)
for parcel in parcels:
go = True
if complementOnly:
if os.path.isfile(os.path.join(pathCur,'parcel%03d.csv'%parcel)):
go = False
if go:
cmd = ['/home/decision/jervbaa/.conda/envs/hmtg_fmri_nc/bin/python2.7',
'/home/decision/jervbaa/Software/SubmitToCluster.py',
'-length','0:30:00',
'-memory','2GB',
'-name','ISRSA-%s-%03d'%(cond,parcel),
'-logfiledir',logPathCur,
'-command','%s %s/Code/5.fMRI-ISRSA/Functions/ComputeISRSA.py'%(py_dir,base_dir) +
' %s %s %i %i %s %s %s %i'%(
screen,cond,nparcel,parcel,metric_model,metric_brain,
permutation_method,n_permute),
]
out = subprocess.check_output(' '.join(cmd),shell=True)
print out
jobid = out[-27:-1]
jobids = jobids.append(pd.DataFrame([[cond,parcel,jobid]],columns=jobids.columns))
jobids.to_csv(os.path.join(pathCur,'jobids.csv'))
```
##### Run intersubject RSA on Player and Investment screens (here 'Face' and 'Inv')
```
import subprocess
py_dir = '/home/decision/jervbaa/.conda/envs/hmtg_fmri_nc/bin/python2.7'
# Fixed arguments
screen = 'Inv'
metric_model = 'euclidean'
metric_brain = 'correlation'
permutation_method = 'vector'
n_permute = 100000
nparcel = 200
# Loops
conds = [screen]  # 'conds' was undefined in this cell; use the screen label for job names
parcels = range(nparcel)
complementOnly = True # Sometimes jobs go missing when being sent to the cluster => run this cell again to fill in the blanks only
for cond in conds:
jobids = pd.DataFrame(columns=['parcel','jobid'])
pathCur = os.path.join(base_dir,
'Results/3.fMRI-ISRSA/IS-RSA/'+
'IS-RSA_nparcel-%i_perm-%s_%s'%(
nparcel,permutation_method,screen))
logPathCur = os.path.join(pathCur,'Logfiles/')
if os.path.exists(pathCur) == False:
os.mkdir(pathCur)
if os.path.exists(logPathCur) == False:
os.mkdir(logPathCur)
for parcel in parcels:
go = True
if complementOnly:
if os.path.isfile(os.path.join(pathCur,'parcel%03d.csv'%parcel)):
go = False
if go:
cmd = ['/home/decision/jervbaa/.conda/envs/hmtg_fmri_nc/bin/python2.7',
'/home/decision/jervbaa/Software/SubmitToCluster.py',
'-length','0:30:00',
'-memory','2GB',
'-name','ISRSA-%s-%03d'%(cond,parcel),
'-logfiledir',logPathCur,
'-command','%s %s/Code/5.fMRI-ISRSA/Functions/ComputeISRSA_noMultiplier.py'%(py_dir,base_dir) +
' %s %i %i %s %s %s %i'%(
screen,nparcel,parcel,metric_model,metric_brain,
permutation_method,n_permute),
]
out = subprocess.check_output(' '.join(cmd),shell=True)
print out
jobid = out[-27:-1]
jobids = jobids.append(pd.DataFrame([[parcel,jobid]],columns=jobids.columns))
jobids.to_csv(os.path.join(pathCur,'jobids.csv'))
```
#### Face
```
import subprocess
py_dir = '/home/decision/jervbaa/.conda/envs/hmtg_fmri_nc/bin/python2.7'
# Fixed arguments
screen = 'Face'
metric_model = 'euclidean'
metric_brain = 'correlation'
permutation_method = 'vector'
n_permute = 100000
nparcel = 200
# Loops
conds = [screen]  # 'conds' was undefined in this cell; use the screen label for job names
parcels = range(nparcel)
complementOnly = True # Sometimes jobs go missing when being sent to the cluster => run this cell again to fill in the blanks only
for cond in conds:
jobids = pd.DataFrame(columns=['parcel','jobid'])
pathCur = os.path.join(base_dir,
'Results/3.fMRI-ISRSA/IS-RSA/'+
'IS-RSA_nparcel-%i_perm-%s_%s'%(
nparcel,permutation_method,screen))
logPathCur = os.path.join(pathCur,'Logfiles/')
if os.path.exists(pathCur) == False:
os.mkdir(pathCur)
if os.path.exists(logPathCur) == False:
os.mkdir(logPathCur)
for parcel in parcels:
go = True
if complementOnly:
if os.path.isfile(os.path.join(pathCur,'parcel%03d.csv'%parcel)):
go = False
if go:
cmd = ['/home/decision/jervbaa/.conda/envs/hmtg_fmri_nc/bin/python2.7',
'/home/decision/jervbaa/Software/SubmitToCluster.py',
'-length','0:30:00',
'-memory','2GB',
'-name','ISRSA-%s-%03d'%(cond,parcel),
'-logfiledir',logPathCur,
'-command','%s %s/Code/5.fMRI-ISRSA/Functions/ComputeISRSA_noMultiplier.py'%(py_dir,base_dir) +
' %s %i %i %s %s %s %i'%(
screen,nparcel,parcel,metric_model,metric_brain,
permutation_method,n_permute),
]
out = subprocess.check_output(' '.join(cmd),shell=True)
print out
jobid = out[-27:-1]
jobids = jobids.append(pd.DataFrame([[parcel,jobid]],columns=jobids.columns))
jobids.to_csv(os.path.join(pathCur,'jobids.csv'))
```
# Chapter 1, Table 1
This notebook explains how I used the Harvard General Inquirer to *streamline* interpretation of a predictive model.
I'm italicizing the word "streamline" because I want to emphasize that I place very little weight on the Inquirer: as I say in the text, "The General Inquirer has no special authority, and I have tried not to make it a load-bearing element of this argument."
To interpret a model, I actually spend a lot of time looking at lists of features, as well as predictions about individual texts. But to *explain* my interpretation, I need some relatively simple summary. Given real-world limits on time and attention, going on about lists of individual words for five pages is rarely an option. So, although wordlists are crude and arbitrary devices, flattening out polysemy and historical change, I am willing to lean on them rhetorically, where I find that they do in practice echo observations I have made in other ways.
I should also acknowledge that I'm not using the General Inquirer as it was designed to be used. The full version of this tool is not just a set of wordlists, it's a software package that tries to get around polysemy by disambiguating different word senses. I haven't tried to use it in that way: I think it would complicate my explanation, in order to project an impression of accuracy and precision that I don't particularly want to project. Instead, I have stressed that word lists are crude tools, and I'm using them only as crude approximations.
That said, how do I do it?
To start with, we'll load an array of modules. Some standard, some utilities that I've written myself.
```
# some standard modules
import csv, os, sys
from collections import Counter
import numpy as np
from scipy.stats import pearsonr
# now a module that I wrote myself, located
# a few directories up, in the software
# library for this repository
sys.path.append('../../lib')
import FileCabinet as filecab
```
### Loading the General Inquirer.
This takes some doing, because the General Inquirer doesn't start out as a set of wordlists. I have to translate it into that form.
I start by loading an English dictionary.
```
# start by loading the dictionary
dictionary = set()
with open('../../lexicons/MainDictionary.txt', encoding = 'utf-8') as f:
reader = csv.reader(f, delimiter = '\t')
for row in reader:
word = row[0]
count = int(row[2])
if count < 10000:
continue
# that ignores very rare words
# we end up with about 42,700 common ones
else:
dictionary.add(word)
```
The next stage is to translate the Inquirer. It begins as a table where word senses are row labels, and the Inquirer categories are columns (except for two columns at the beginning and two at the end). This is, by the way, the "basic spreadsheet" described at this site:
http://www.wjh.harvard.edu/~inquirer/spreadsheet_guide.htm
I translate this into a dictionary where the keys are Inquirer categories, and the values are sets of words associated with each category.
But to do that, I have to do some filtering and expanding. Different senses of a word are broken out in the spreadsheet thus:
```
ABOUT#1
ABOUT#2
ABOUT#3
```
etc.
I need to separate the hashtag part. Also, because I don't want to allow rare senses of a word too much power, I ignore everything but the first sense of a word.
However, I also want to allow singular verb forms and plural nouns to count. So there's some code below that expands words by adding -s, -ed, etc. to the end. See the *suffixes* dictionary defined below for more details.
```
inquirer = dict()
suffixes = dict()
suffixes['verb'] = ['s', 'es', 'ed', 'd', 'ing']
suffixes['noun'] = ['s', 'es']
allinquirerwords = set()
with open('../../lexicons/inquirerbasic.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
fields = reader.fieldnames[2:-2]
for field in fields:
inquirer[field] = set()
for row in reader:
term = row['Entry']
if '#' in term:
parts = term.split('#')
word = parts[0].lower()
sense = int(parts[1].strip('_ '))
partialsense = True
else:
word = term.lower()
sense = 0
partialsense = False
if sense > 1:
continue
# we're ignoring uncommon senses
pos = row['Othtags']
if 'Noun' in pos:
pos = 'noun'
elif 'SUPV' in pos:
pos = 'verb'
forms = {word}
if pos == 'noun' or pos == 'verb':
for suffix in suffixes[pos]:
if word + suffix in dictionary:
forms.add(word + suffix)
if pos == 'verb' and word.rstrip('e') + suffix in dictionary:
forms.add(word.rstrip('e') + suffix)
for form in forms:
for field in fields:
if len(row[field]) > 1:
inquirer[field].add(form)
allinquirerwords.add(form)
print('Inquirer loaded')
print('Total of ' + str(len(allinquirerwords)) + " words.")
```
### Load model predictions about volumes
The next step is to create some vectors that store predictions about volumes. In this case, these are predictions about the probability that a volume is fiction, rather than biography.
```
# the folder where wordcounts will live
# we're only going to load predictions
# that correspond to files located there
sourcedir = '../sourcefiles/'
docs = []
logistic = []
with open('../plotdata/the900.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
genre = row['realclass']
docid = row['volid']
if not os.path.exists(sourcedir + docid + '.tsv'):
continue
docs.append(row['volid'])
logistic.append(float(row['logistic']))
logistic = np.array(logistic)
numdocs = len(docs)
assert numdocs == len(logistic)
print("We have information about " + str(numdocs) + " volumes.")
```
### And get the wordcounts themselves
This cell of the notebook is very short (one line), but it takes a lot of time to execute. There's a lot of file i/o that happens inside the function get_wordcounts, in the FileCabinet module, which is invoked here. We come away with a dictionary of wordcounts, keyed in the first instance by volume ID.
```
wordcounts = filecab.get_wordcounts(sourcedir, '.tsv', docs)
```
### Now calculate the representation of each Inquirer category in each doc
We normalize by the total wordcount for a volume.
This cell also takes a long time to run. I've added a counter so you have some confidence that it's still running.
```
# Initialize empty category vectors
categories = dict()
for field in fields:
categories[field] = np.zeros(numdocs)
# Now fill them
for i, doc in enumerate(docs):
ctcat = Counter()
allcats = 0
for word, count in wordcounts[doc].items():
if word in dictionary:
allcats += count
if word not in allinquirerwords:
continue
for field in fields:
if word in inquirer[field]:
ctcat[field] += count
for field in fields:
categories[field][i] = ctcat[field] / (allcats + 0.1)
# Laplacian smoothing there to avoid div by zero, among other things.
if i % 100 == 1:
print(i, allcats)
```
### Calculate correlations
Now that we have all the information, calculating correlations is easy. We iterate through Inquirer categories, in each case calculating the correlation between a vector of model predictions for docs, and a vector of category-frequencies for docs.
```
logresults = []
for inq_category in fields:
l = pearsonr(logistic, categories[inq_category])[0]
logresults.append((l, inq_category))
logresults.sort()
```
### Load expanded names of Inquirer categories
The terms used in the inquirer spreadsheet are not very transparent. ```DAV``` for instance is "descriptive action verbs." ```BodyPt``` is "body parts." To make these more transparent, I have provided expanded names for many categories that turned out to be relevant in the book, trying to base my description on the accounts provided here: http://www.wjh.harvard.edu/~inquirer/homecat.htm
We load these into a dictionary.
```
short2long = dict()
with open('../../lexicons/long_inquirer_names.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
short2long[row['short_name']] = row['long_name']
```
### Print results
I print the top 12 correlations and the bottom 12, skipping categories that are drawn from the "Laswell value dictionary." The Laswell categories are very finely discriminated (things like "enlightenment gain" or "power loss"), and I have little faith that they're meaningful. I especially doubt that they could remain meaningful when the Inquirer is used crudely as a source of wordlists.
```
print('Printing the correlations of General Inquirer categories')
print('with the predicted probabilities of being fiction in allsubset2.csv:')
print()
print('First, top positive correlations: ')
print()
for prob, n in reversed(logresults[-12 : ]):
if n in short2long:
n = short2long[n]
if 'Laswell' in n:
continue
else:
print(str(prob) + '\t' + n)
print()
print('Now, negative correlations: ')
print()
for prob, n in logresults[0 : 12]:
if n in short2long:
n = short2long[n]
if 'Laswell' in n:
continue
else:
print(str(prob) + '\t' + n)
```
### Comments
If you compare the printout above to the book's version of Table 1.1, you will notice very slight differences. For instance, "power" appears twice, so those lines have been fused.
Titlecased terms are the terms originally used in the Inquirer. Lowercased terms are my explanations.
# Calculations on TiO2 and a monolayer of MoS2
Calculation of TiO2 and a monolayer of MoS2 with the Fleur code. We converge the systems and calculate a band structure and DOS.
Author: Jens Broeder 2017
```
%load_ext autoreload
%autoreload 2
%matplotlib notebook
from aiida import load_dbenv, is_dbenv_loaded
if not is_dbenv_loaded():
load_dbenv()
from aiida.orm import Code, load_node
from aiida.orm import DataFactory as DF
from aiida.work.run import run, async, submit
from aiida_fleur.workflows.scf import fleur_scf_wc
from aiida_fleur.workflows.band import fleur_band_wc
from aiida_fleur.workflows.dos import fleur_dos_wc
from aiida_fleur.workflows.eos import fleur_eos_wc
#from plot_methods.plot_methods import single_scatterplot, multiple_scatterplots
#from plot_methods.plot_fleur_aiida import plot_fleur, plot_fleur_mn
StructureData, ParameterData, KpointsData, FleurinpData = DF('structure'), DF('parameter'), DF('array.kpoints'), DF('fleur.fleurinp')
###############################
codename = 'fleur_inpgen@localhost'
codename2 = 'fleur-0.27@localhost'
###############################
inpgen = Code.get_from_string(codename)
fleur = Code.get_from_string(codename2)
```
# Preparing the crystal structures and ParameterData nodes:
```
bohr_a_0 = 0.52917721092 # A
ba = bohr_a_0
```
Create the structures or, better, load the existing nodes from the db
## TiO2
```
# From Cif file
cell =[[4.2262, 0.0, 0.0],
[2.58780115127827e-16, 4.2262, 0.0],
[1.6538793790145e-16, 1.6538793790145e-16, 2.70099]]
ti_o_structure = StructureData(cell=cell)
ti_o_structure.append_atom(position=(0.00, 0.00, 0.00), symbols='Ti')
ti_o_structure.append_atom(position=(2.1131, 2.1131, 1.350495), symbols='Ti')
ti_o_structure.append_atom(position=(1.261055818, 1.261055818, 0.0), symbols='O')
ti_o_structure.append_atom(position=(0.852044182, 3.374155818, 1.350495), symbols='O')
ti_o_structure.append_atom(position=(2.965144182, 2.965144182, 0.0), symbols='O')
ti_o_structure.append_atom(position=(3.374155818, 0.852044182, 1.350495), symbols='O')
#ti_o_structure.store()
#ti_o_structure = load_node(76513)
print ti_o_structure
```
We could just use the defaults, but we can also control what parameters Fleur should use
```
ti_o_fleur_parameter = ParameterData(dict={
u'comp': {u'kmax': 4.2, u'gmaxxc': 11.5, u'gmax': 13.9},
u'atom': {u'lmax': 8, u'lnonsph': 6, u'jri': 551, u'rmt': 2.2, u'dx': 0.021,
u'element': u'Ti', u'lo': u'2p'},
u'atom2': {u'element': u'O', u'jri': 311, u'dx': 0.033, u'lmax': 6, u'lnonsph': 4,
u'rmt': 1.2},
u'soc' : {'theta' : 0.0, 'phi' : 0.0},
u'kpt': {u'tkb': 0.001, u'div1' : 4 , u'div2' : 4 , u'div3' : 2},
u'title': u'TiO2, rutile bulk'})
#ti_o_fleur_parameter.store()
#ti_o_fleur_parameter = load_node(76586)
print ti_o_fleur_parameter
# U on Ti &ldaU l=2, u=5.5, j=0.5, l_amf=F /
```
## MoS2
```
a = 2.976320*ba
b = 5.155137*ba
c = 14.050000*ba
cell = [[a, a, 0.00],
[-b, b, 0.00],
[0.00, 0.00, c]]
mos2_structure = StructureData(cell=cell)
mos2_structure.append_atom(position=(0.00, 3.436758*ba, 2.998605*ba), symbols='S')
mos2_structure.append_atom(position=(0.00, 3.436758*ba, -2.998605*ba), symbols='S')
mos2_structure.append_atom(position=(0.00, -3.436758*ba, 0.00), symbols='Mo')
mos2_structure.pbc = (True, True, False)
#mos2_structure.store()
mos2_structure = load_node(76494)
print mos2_structure
mos2_fleur_parameter = ParameterData(dict={
u'comp': {u'kmax': 3.8, u'gmaxxc': 10.5, u'gmax': 12.6, u'jspins' : 2.0}, # dvac : 14.05000000 # bohr
u'atom': {u'lmax': 10, u'lnonsph': 8, u'jri': 803, u'dx': 0.015, u'rmt': 2.43,
u'element': u'Mo'},
u'atom2': {u'element': u'S', u'jri': 629, u'lmax': 8, u'lnonsph': 6,
u'rmt': 1.9, u'dx': 0.018},
u'soc' : {'theta' : 0.0, 'phi' : 0.0},
u'kpt': {u'tkb': 0.001, u'div1' : 4 , u'div2' : 4 , u'div3' : 1},
u'title': u'MoS2, 2L monolayer'})
#mos2_fleur_parameter.store()
mos2_fleur_parameter = load_node(76457)
print mos2_fleur_parameter
```
# Running the scf workflow, converging the charge density
We now define the workflow parameters, where we specify the resources queue names and so on we run in serial
```
scf_wc_para = ParameterData(dict={'fleur_runmax': 4,
'resources' : {"num_machines": 1},#{"tot_num_mpiprocs": 24},
'walltime_sec': 300*60,
'queue_name' : 'batch',#th123_node',
'serial' : True,
'custom_scheduler_commands' : ''})#'#BSUB -P jara0043 \n#BSUB -M 120000 \n#BSUB -a intelmpi'})
#scf_wc_para.store()
#scf_wc_para = load_node(76532)
print scf_wc_para
```
Execute the cell below once and the workflows will be submitted to the AiiDA daemon; then lock the field. First we run with the given parameters, then we use the defaults.
```
label = 'fleur_scf_wc on TiO2'
description = 'Fleur scf of TiO2'
res_eos_ti_O = submit(fleur_scf_wc, wf_parameters=scf_wc_para, fleur=fleur, inpgen=inpgen,
structure=ti_o_structure, calc_parameters=ti_o_fleur_parameter,
_label=label, _description=description)
label = 'fleur_scf_wc on MoS2'
description = 'Fleur scf of MoS2'
res_eos_mo_s2 = submit(fleur_scf_wc, wf_parameters=scf_wc_para, fleur=fleur, inpgen=inpgen,
structure=mos2_structure, calc_parameters=mos2_fleur_parameter,
_label=label, _description=description)
label = 'fleur_scf_wc on TiO2 defaults'
description = 'Fleur scf of TiO2 defaults'
res_eos_ti_O = submit(fleur_scf_wc, wf_parameters=scf_wc_para,
fleur=fleur, inpgen=inpgen, structure=ti_o_structure,
_label=label, _description=description)
label = 'fleur_scf_wc on MoS2 defaults'
description = 'Fleur scf of MoS2 defaults'
res_eos_ti_O = submit(fleur_scf_wc, wf_parameters=scf_wc_para,
fleur=fleur, inpgen=inpgen, structure=mos2_structure,
_label=label, _description=description)
ti_o_wc_uuids = []
```
If you have access to the `plot_methods` repo, you can use `plot_fleur` for visualization of the workflow results.
```
plot_fleur(ti_o_wc_uuids)
```
# Running the eos workflow, calculating the equation of state
Now we also calculate the equation of state with Fleur for TiO2:
```
eos_wc_para = ParameterData(dict={'fleur_runmax': 4,
'points' : 9,
'step' : 0.01,
'guess' : 1.00,
'resources' : {"num_machines" : 1},#{"tot_num_mpiprocs": 24},
'walltime_sec': 300*60,
'queue_name' : 'batch',#th123_node',
'serial' : True,
'custom_scheduler_commands' : ''})
#eos_wc_para.store()
#eos_wc_para = load_node(76873)
print eos_wc_para
label = 'fleur_eos_wc on TiO2'
description = 'Fleur eos of TiO2'
res_eos_ti_O = submit(fleur_eos_wc, wf_parameters=eos_wc_para, fleur=fleur, inpgen=inpgen,
structure=ti_o_structure, calc_parameters=ti_o_fleur_parameter,
_label=label, _description=description)
ti_eos_wc_uuids = []
plot_fleur(ti_eos_wc_uuids)
```
# Running the dos workflow, from a converged calculation
We want to calculate a DOS and a band structure on top of the converged TiO2 calculation.
For this we need the `FleurinpData` and the remote-folder node of the last Fleur calculation from the scf workchain. We retrieve them by hand here.
```
fleurinp = load_node()
fleur_calc = load_node() # from previous calculation look at scf nodes
remote = fleur_calc.out.remote_folder
wf_para = ParameterData(dict={'fleur_runmax' : 4,
'tria' : True,
'nkpts' : 800,
'sigma' : 0.005,
'emin' : -0.30,
'emax' : 0.80,
'queue_name' : 'batch',
'resources' : {"num_machines": 1},
'serial' : True,
'walltime_sec': 60*60})
#wf_para.store()
#wf_para = load_node(76866)
print wf_para
label = 'fleur_dos_wc on TiO2'
description = 'Fleur dos of TiO2'
res = submit(fleur_dos_wc, wf_parameters=wf_para, fleurinp=fleurinp, remote=remote, fleur=fleur, _label=label, _description=description)
dos_wc_uuids = []
#plot_fleur(77013)
```
# Running the bands workflow, from a converged calculation
```
#fleurinp = load_node()
#fleur_calc = load_node() # from previous calculation
#remote = fleur_calc.out.remote_folder
wf_para = ParameterData(dict={'fleur_runmax' : 4,
'kpath' : 'auto',
'nkpts' : 800,
'sigma' : 0.005,
'emin' : -0.30,
'emax' : 0.80,
'queue_name' : 'batch',
'resources' : {"num_machines": 1},
'serial' : True,
'walltime_sec': 60*60})
#wf_para.store()
#wf_para = load_node()
print wf_para
label = 'fleur_band_wc on TiO2'
description = 'Fleur band of TiO2'
res = submit(fleur_band_wc, wf_parameters=wf_para, fleurinp=fleurinp, remote=remote, fleur=fleur, _label=label, _description=description)
band_wc_uuids = []
#plot_fleur(band_wc_uuids)
```
# Visualizing results:
# Publications markdown generator for yashpatel5400
Takes a set of bibtex publications and converts them for use with [yashpatel5400.github.io](yashpatel5400.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)).
The core python code is also in `pubsFromBibs.py`.
Run from the `markdown_generator` folder after updating the publist dictionary with:
* bib file names
* specific venue keys based on your bib file preferences
* any specific pre-text for specific files
* Collection Name (future feature)
TODO: Make this work with other databases of citations,
TODO: Merge this with the existing TSV parsing solution
```
from pybtex.database.input import bibtex
import pybtex.database.input.bibtex
from time import strptime
import string
import html
import os
import re
#todo: incorporate different collection types rather than a catch all publications, requires other changes to template
publist = {
    "proceeding": {
        "file": "proceedings.bib",
        "venuekey": "booktitle",
        "venue-pretext": "In the proceedings of ",
        "collection": {"name": "publications",
                       "permalink": "/publication/"}
    },
    "journal": {
        "file": "pubs.bib",
        "venuekey": "journal",
        "venue-pretext": "",
        "collection": {"name": "publications",
                       "permalink": "/publication/"}
    }
}
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&apos;"
}

def html_escape(text):
    """Produce entities within text."""
    return "".join(html_escape_table.get(c, c) for c in text)
for pubsource in publist:
    parser = bibtex.Parser()
    bibdata = parser.parse_file(publist[pubsource]["file"])

    # loop through the individual references in a given bibtex file
    for bib_id in bibdata.entries:
        # reset default date
        pub_year = "1900"
        pub_month = "01"
        pub_day = "01"

        b = bibdata.entries[bib_id].fields

        try:
            pub_year = f'{b["year"]}'

            # todo: this hack for month and day needs some cleanup
            if "month" in b.keys():
                if len(b["month"]) < 3:
                    pub_month = "0" + b["month"]
                    pub_month = pub_month[-2:]
                elif b["month"] not in range(12):
                    tmnth = strptime(b["month"][:3], '%b').tm_mon
                    pub_month = "{:02d}".format(tmnth)
                else:
                    pub_month = str(b["month"])
            if "day" in b.keys():
                pub_day = str(b["day"])

            pub_date = pub_year + "-" + pub_month + "-" + pub_day

            # strip out {} as needed (some bibtex entries maintain formatting)
            clean_title = b["title"].replace("{", "").replace("}", "").replace("\\", "").replace(" ", "-")

            url_slug = re.sub("\\[.*\\]|[^a-zA-Z0-9_-]", "", clean_title)
            url_slug = url_slug.replace("--", "-")

            md_filename = (str(pub_date) + "-" + url_slug + ".md").replace("--", "-")
            html_filename = (str(pub_date) + "-" + url_slug).replace("--", "-")

            # Build Citation from text
            citation = ""

            # citation authors - todo - add highlighting for primary author?
            for author in bibdata.entries[bib_id].persons["author"]:
                citation = citation + " " + author.first_names[0] + " " + author.last_names[0] + ", "

            # citation title
            citation = citation + "\"" + html_escape(b["title"].replace("{", "").replace("}", "").replace("\\", "")) + ".\""

            # add venue logic depending on citation type
            venue = publist[pubsource]["venue-pretext"] + b[publist[pubsource]["venuekey"]].replace("{", "").replace("}", "").replace("\\", "")

            citation = citation + " " + html_escape(venue)
            citation = citation + ", " + pub_year + "."

            ## YAML variables
            md = "---\ntitle: \"" + html_escape(b["title"].replace("{", "").replace("}", "").replace("\\", "")) + '"\n'
            md += """collection: """ + publist[pubsource]["collection"]["name"]
            md += """\npermalink: """ + publist[pubsource]["collection"]["permalink"] + html_filename

            note = False
            if "note" in b.keys():
                if len(str(b["note"])) > 5:
                    md += "\nexcerpt: '" + html_escape(b["note"]) + "'"
                    note = True

            md += "\ndate: " + str(pub_date)
            md += "\nvenue: '" + html_escape(venue) + "'"

            url = False
            if "url" in b.keys():
                if len(str(b["url"])) > 5:
                    md += "\npaperurl: '" + b["url"] + "'"
                    url = True

            md += "\ncitation: '" + html_escape(citation) + "'"
            md += "\n---"

            ## Markdown description for individual page
            if note:
                md += "\n" + html_escape(b["note"]) + "\n"

            if url:
                md += "\n[Access paper here](" + b["url"] + "){:target=\"_blank\"}\n"
            else:
                md += "\nUse [Google Scholar](https://scholar.google.com/scholar?q=" + html.escape(clean_title.replace("-", "+")) + "){:target=\"_blank\"} for full citation"

            md_filename = os.path.basename(md_filename)

            with open("../_publications/" + md_filename, 'w') as f:
                f.write(md)
            print(f'SUCCESSFULLY PARSED {bib_id}: \"', b["title"][:60], "..." * (len(b['title']) > 60), "\"")

        # field may not exist for a reference
        except KeyError as e:
            print(f'WARNING Missing Expected Field {e} from entry {bib_id}: \"', b["title"][:30], "..." * (len(b['title']) > 30), "\"")
            continue
```
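The month-normalization hack inside the loop above can be exercised in isolation. A standalone sketch (restructured with a `try`/`except` in place of the `range(12)` check, using only the standard library):

```python
from time import strptime

def normalize_month(month_field):
    """Normalize a bibtex month field ('3', '12', 'mar', 'March') to 'MM'."""
    if len(month_field) < 3:               # numeric, e.g. '3' or '12'
        return ("0" + month_field)[-2:]
    try:                                   # named month, e.g. 'mar' or 'March'
        return "{:02d}".format(strptime(month_field[:3], '%b').tm_mon)
    except ValueError:
        return str(month_field)

print(normalize_month("3"), normalize_month("mar"), normalize_month("March"))  # 03 03 03
```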
# Tutorial 1: Instantiating a *scenario category*
In this tutorial, we will cover the following items:
1. Create *actor categories*, *activity categories*, and *physical thing categories*
2. Instantiate a *scenario category*
3. Show all tags of the *scenario category*
4. Use the `includes` function of a *scenario category*
5. Export the objects
```
# Before starting, let us do the necessary imports
import os
import json
from domain_model import ActivityCategory, ActorCategory, ActorType, Constant, ScenarioCategory, \
Sinusoidal, Spline3Knots, StateVariable, PhysicalElementCategory, Tag, \
actor_category_from_json, scenario_category_from_json
```
## 1. Create *actor categories*, *activity categories*, and the *static physical thing categories*
In this tutorial, we will create a *scenario category* in which another vehicle changes lane such that it becomes the ego vehicle's leading vehicle. This is often referred to as a "cut-in scenario". The *scenario category* is depicted in the figure below. Here, the blue car represents the ego vehicle and the red car represents the vehicle that performs the cut in.
<img src="./examples/images/cut-in.png" alt="Cut in" width="400"/>
To create the *scenario category*, we first need to create the *actor categories*, *activity categories*, and the *physical things*. Let us start with the *actor categories*. Just like most objects, an *actor category* has a `name`, a `uid` (a unique ID), and `tags`. Additionally, an *actor category* has a `vehicle_type`.
This implementation of the domain model checks whether the correct types are used. For example, `name` must be a string and `uid` must be an integer. `tags` must be a (possibly empty) list of type `Tag`, which ensures that tags are chosen from a predefined list. This enforces consistency, so that, for example, users do not write the tag "braking" at one time and "Braking" at another. The disadvantage is that the list of possible tags may be incomplete, so adding a `Tag` should be allowed when there is a good reason for it. Lastly, the `vehicle_type` must be of type `VehicleType`.
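Such type checks typically boil down to `isinstance` tests in the constructor. A minimal, hypothetical sketch of the pattern (not the library's actual code):

```python
class Validated:
    """Toy object that validates its arguments like the domain model does."""

    def __init__(self, name, tags=None):
        if not isinstance(name, str):
            raise TypeError("name must be a str")
        tags = [] if tags is None else tags  # default to an empty list
        if not isinstance(tags, list):
            raise TypeError("tags must be a list")
        self.name = name
        self.tags = tags

try:
    Validated("Target vehicle", tags="RoadUserType_Vehicle")  # not a list!
except TypeError as error:
    print(error)  # tags must be a list
```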
Now let us create the *actor categories*. For this example, we assume that both *actor categories* are "vehicles". Note that we can ignore the `uid` for now: when no `uid` is given, a unique ID is generated automatically. If no `tags` are provided, they default to an empty list.
```
EGO_VEHICLE = ActorCategory(ActorType.Vehicle, name="Ego vehicle",
tags=[Tag.EgoVehicle, Tag.RoadUserType_Vehicle])
TARGET_VEHICLE = ActorCategory(ActorType.Vehicle, name="Target vehicle",
tags=[Tag.RoadUserType_Vehicle])
```
It is as simple as that. If it does not throw an error, you can be assured that a correct *actor category* is created. For example, if we forget the brackets around `Tag.RoadUserType_Vehicle`, such that it is not a *list* of `Tag`, an error is thrown:
```
# The following code results in an error!
# The error is captured as to show only the final error message.
try:
    ActorCategory(ActorType.Vehicle, name="Target vehicle", tags=Tag.RoadUserType_Vehicle)
except TypeError as error:
    print(error)
```
Now let us create the *activity categories*:
```
FOLLOWING_LANE = ActivityCategory(Constant(), StateVariable.LATERAL_POSITION,
name="Following lane",
tags=[Tag.VehicleLateralActivity_GoingStraight])
CHANGING_LANE = ActivityCategory(Sinusoidal(), StateVariable.LATERAL_POSITION,
name="Changing lane",
tags=[Tag.VehicleLateralActivity_ChangingLane])
DRIVING_FORWARD = ActivityCategory(Spline3Knots(), StateVariable.SPEED,
name="Driving forward",
tags=[Tag.VehicleLongitudinalActivity_DrivingForward])
```
The last object we need to define before we can define the *scenario category* is the *static physical thing category*. A *scenario category* may contain multiple *physical things*, but for now we only define one that specifies the road layout. We assume that the scenario takes place at a straight motorway with multiple lanes:
```
MOTORWAY = PhysicalElementCategory(description="Motorway with multiple lanes",
name="Motorway",
tags=[Tag.RoadLayout_Straight,
Tag.RoadType_PrincipleRoad_Motorway])
```
## 2. Instantiate a *scenario category*
To define a *scenario category*, we need a description and a location to an image. After this, the static content of the scenario can be specified using the `set_physical_things` function. Next, to describe the dynamic content of the scenarios, the *actor categories* can be passed using the `set_actors` function and the *activity categories* can be passed using the `set_activities` function. Finally, using `set_acts`, it is described which activity is connected to which actor.
Note: It is possible that two actors perform the same activity. In this example, both the ego vehicle and the target vehicle are driving forward.
```
CUTIN = ScenarioCategory("./examples/images/cut-in.png",
description="Cut-in at the motorway",
name="Cut-in")
CUTIN.set_physical_elements([MOTORWAY])
CUTIN.set_actors([EGO_VEHICLE, TARGET_VEHICLE])
CUTIN.set_activities([FOLLOWING_LANE, CHANGING_LANE, DRIVING_FORWARD])
CUTIN.set_acts([(EGO_VEHICLE, DRIVING_FORWARD), (EGO_VEHICLE, FOLLOWING_LANE),
(TARGET_VEHICLE, DRIVING_FORWARD), (TARGET_VEHICLE, CHANGING_LANE)])
```
## 3. Show all tags of the *scenario category*
The tags should be used to define the *scenario category* in such a manner that also a computer can understand. However, we did not pass any tags to the *scenario category* itself. On the other hand, the attributes of the *scenario category* (in this case, the *physical things*, the *activity categories*, and the *actor categories*) have tags. Using the `derived_tags` function of the *scenario category*, these tags can be retrieved.
Running the `derived_tags` function returns a dictionary with (key,value) pairs. Each key is formatted as `<name>::<class>` and the corresponding value contains a list of tags that are associated to that particular object. For example, `Ego vehicle::ActorCategory` is a key and the corresponding tags are the tags that are passed when instantiating the ego vehicle (`EgoVehicle`) and the tags that are part of the *activity categories* that are connected with the ego vehicle (`GoingStraight` and `DrivingForward`).
```
CUTIN.derived_tags()
```
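The dictionary returned by `derived_tags` can be processed like any other. A small standalone sketch (the dictionary below is hypothetical, mimicking the `<name>::<class>` key format described above, with tags as plain strings):

```python
# Hypothetical output in the "<name>::<class>" key format; the tag names
# mirror the Tag values used in this tutorial but are plain strings here.
derived = {
    "Ego vehicle::ActorCategory": [
        "EgoVehicle", "RoadUserType_Vehicle",
        "VehicleLateralActivity_GoingStraight",
        "VehicleLongitudinalActivity_DrivingForward",
    ],
    "Target vehicle::ActorCategory": [
        "RoadUserType_Vehicle",
        "VehicleLateralActivity_ChangingLane",
    ],
}

# Collect the unique tags over all objects of the scenario category.
all_tags = sorted({tag for tags in derived.values() for tag in tags})
print(len(all_tags))  # 5 unique tags
```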
Another, possibly easier, way to show the tags is to simply print the scenario category. Doing so shows the name, the description, and all tags of the scenario category.
```
print(CUTIN)
```
## 4. Use the *includes* function of a *scenario category*
A *scenario category* A includes another *scenario category* B if it comprises all scenarios that are comprised in B. Loosely said, this means that *scenario category* A is "more general" than *scenario category* B. To demonstrate this, let us first create another *scenario category*. The only difference is that in the following *scenario category*, the target vehicle comes from the left side of the ego vehicle. This means the target vehicle performs a right lane change, whereas our previously defined *scenario category* did not specify the side of the lane change.
```
CHANGING_LANE_RIGHT = ActivityCategory(Sinusoidal(), StateVariable.LATERAL_POSITION,
name="Changing lane right",
tags=[Tag.VehicleLateralActivity_ChangingLane_Right])
CUTIN_LEFT = ScenarioCategory("./examples/images/cut-in.png",
description="Cut-in from the left at the motorway",
name="Cut-in from left")
CUTIN_LEFT.set_physical_elements([MOTORWAY])
CUTIN_LEFT.set_actors([EGO_VEHICLE, TARGET_VEHICLE])
CUTIN_LEFT.set_activities([FOLLOWING_LANE, CHANGING_LANE_RIGHT, DRIVING_FORWARD])
CUTIN_LEFT.set_acts([(EGO_VEHICLE, DRIVING_FORWARD), (EGO_VEHICLE, FOLLOWING_LANE),
(TARGET_VEHICLE, DRIVING_FORWARD), (TARGET_VEHICLE, CHANGING_LANE_RIGHT)])
```
To reassure ourselves that we correctly created a new *scenario category*, we can print it. Note the difference with the previously defined *scenario category*: the target vehicle now performs a right lane change (see the tag `VehicleLateralActivity_ChangingLane_Right`).
```
print(CUTIN_LEFT)
```
Because our original *scenario category* (`CUTIN`) is more general than the *scenario category* we just created (`CUTIN_LEFT`), we expect that `CUTIN` *includes* `CUTIN_LEFT`. In other words: because all "cut ins from the left" are also "cut ins", `CUTIN` *includes* `CUTIN_LEFT`.
The converse is not true: not all "cut ins" are "cut ins from the left".
Let's check it:
```
print(CUTIN.includes(CUTIN_LEFT)) # True
print(CUTIN_LEFT.includes(CUTIN)) # False
```
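The inclusion relation can be mimicked with plain sets: a category includes another when every one of its constraints is also imposed by the more specific category. A sketch (the tag sets are illustrative, not the library's internal representation):

```python
# Constraints of the general category (any lane change) versus the more
# specific one (a lane change explicitly to the right).
cutin_constraints = {"ChangingLane"}
cutin_left_constraints = {"ChangingLane", "ChangingLane_Right"}

def includes(general, specific):
    # A category includes another when all of its constraints are a
    # subset of the more specific category's constraints.
    return general.issubset(specific)

print(includes(cutin_constraints, cutin_left_constraints))  # True
print(includes(cutin_left_constraints, cutin_constraints))  # False
```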
## 5. Export the objects
It would be cumbersome to define a scenario category from scratch every time. Luckily, there is an easy way to export the objects we have created.
Each object of this domain model comes with a `to_json` function and a `to_json_full` function. These functions return a dictionary that can be directly written to a .json file. The difference between `to_json` and `to_json_full` is that with `to_json`, rather than also returning the full dictionary of the attributes, only a reference (using the unique ID and the name) is returned. In case of the *physical thing*, *actor category*, and *activity category*, this does not make any difference. For the *scenario category*, however, this makes a difference.
To see this, let's see what the `to_json` function returns.
```
CUTIN.to_json()
```
As can be seen, the *physical thing category* (see `physical_thing_categories`) only returns the `name` and `uid`. This is not enough information for us if we would like to recreate the *physical thing category*. Therefore, for now we will use the `to_json_full` functionality.
Note, however, that if we would like to store the objects in a database, it would be better to have separate tables for *scenario categories*, *physical thing categories*, *activity categories*, and *actor categories*. In that case, the `to_json` function becomes handy. We will demonstrate this in a later tutorial.
Also note that Python has more efficient ways to store objects than through some json code. However, the reason to opt for the current approach is that this would be easily implementable in a database, such that it is easily possible to perform queries on the data. Again, the actual application of this goes beyond the current tutorial.
To save the returned dictionary to a .json file, we will use the external library `json`.
```
FILENAME = os.path.join("examples", "cutin_qualitative.json")
with open(FILENAME, "w") as FILE:
    json.dump(CUTIN.to_json_full(), FILE, indent=4)
```
Let us also save the other *scenario category*, such that we can use it for a later tutorial.
```
FILENAME_CUTIN_LEFT = os.path.join("examples", "cutin_left_qualitative.json")
with open(FILENAME_CUTIN_LEFT, "w") as FILE:
    json.dump(CUTIN_LEFT.to_json_full(), FILE, indent=4)
```
So how can we use this .json code to create the *scenario category*? Just as each object has a `to_json_full` function, each object also has a `<class_name>_from_json` function. For the objects discussed in this tutorial, we have:
- for a *physical thing category*: `physical_thing_category_from_json`
- for an *actor category*: `actor_category_from_json`
- for an *activity category*: `activity_category_from_json`
- for a *model*: `model_from_json`
- for a *scenario category*: `scenario_category_from_json`
Each of these functions takes as input a dictionary that could be a potential output of its corresponding `to_json_full` function.
To demonstrate this, let's load the just created .json file and see if we can create a new *scenario category* from this.
```
with open(FILENAME, "r") as FILE:
    CUTIN2 = scenario_category_from_json(json.load(FILE))
```
To see that this returns a similar *scenario category* as our previously created `CUTIN`, we can print the just created scenario category:
```
print(CUTIN2)
```
Note that although the just created *scenario category* is now similar to `CUTIN`, it is a different object in Python. That is, if we would change `CUTIN2`, that change will not apply to `CUTIN`.
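The same independence holds for any json round-trip, since serializing and then parsing rebuilds every nested object. A plain-Python sketch (not using the domain model):

```python
import json

original = {"name": "Cut-in", "tags": ["ChangingLane"]}
clone = json.loads(json.dumps(original))  # serialize, then rebuild from text

clone["tags"].append("ChangingLane_Right")  # modify the rebuilt copy...
print(original["tags"])  # ...the original is untouched: ['ChangingLane']
```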
You reached the end of the first tutorial. In the [next tutorial](./Tutorial%202%20Scenario.ipynb), we will see how we can instantiate a *scenario*.
# Input and Output
```
from __future__ import print_function
import numpy as np
author = "kyubyong. https://github.com/Kyubyong/numpy_exercises"
np.__version__
from datetime import date
print(date.today())
```
## NumPy binary files (NPY, NPZ)
Q1. Save x into `temp.npy` and load it.
```
x = np.arange(10)
np.save('temp.npy', x) # Actually you can omit the extension. If so, it will be added automatically.
# Check if there exists the 'temp.npy' file.
import os
if os.path.exists('temp.npy'):
    x2 = np.load('temp.npy')
    print(np.array_equal(x, x2))
```
Q2. Save x and y into a single file 'temp.npz' and load it.
```
x = np.arange(10)
y = np.arange(11, 20)
np.savez('temp.npz', x=x, y=y)
# np.savez_compressed('temp.npz', x=x, y=y) # If you want to save x and y into a single file in compressed .npz format.
with np.load('temp.npz') as data:
    x2 = data['x']
    y2 = data['y']
    print(np.array_equal(x, x2))
    print(np.array_equal(y, y2))
```
## Text files
Q3. Save x to 'temp.txt' in string format and load it.
```
x = np.arange(10).reshape(2, 5)
header = 'num1 num2 num3 num4 num5'
np.savetxt('temp.txt', x, fmt="%d", header=header)
np.loadtxt('temp.txt')
```
Q4. Save `x`, `y`, and `z` to 'temp.txt' in string format line by line, then load it.
```
x = np.arange(10)
y = np.arange(11, 21)
z = np.arange(22, 32)
np.savetxt('temp.txt', (x, y, z), fmt='%d')
np.loadtxt('temp.txt')
```
Q5. Convert `x` into bytes, and load it as array.
```
x = np.array([1, 2, 3, 4])
x_bytes = x.tostring() # Despite the name, this returns bytes; in newer NumPy it is a deprecated alias of tobytes().
x2 = np.fromstring(x_bytes, dtype=x.dtype) # Returns a 1-D array even if x is not; deprecated in favor of np.frombuffer.
print(np.array_equal(x, x2))
```
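In newer NumPy releases `tostring`/`fromstring` are deprecated in favor of `tobytes`/`frombuffer`; an equivalent sketch:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
x_bytes = x.tobytes()                       # same bytes as the old tostring()
x2 = np.frombuffer(x_bytes, dtype=x.dtype)  # always yields a 1-D array
print(np.array_equal(x.ravel(), x2))        # True
```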
Q6. Convert `a` into an ndarray and then convert it into a list again.
```
a = [[1, 2], [3, 4]]
x = np.array(a)
a2 = x.tolist()
print(a == a2)
```
## String formatting
Q7. Convert `x` to a string, and revert it.
```
x = np.arange(10).reshape(2,5)
x_str = np.array_str(x)
print(x_str, "\n", type(x_str))
x_str = x_str.replace("[", "") # [] must be stripped
x_str = x_str.replace("]", "")
x2 = np.fromstring(x_str, dtype=x.dtype, sep=" ").reshape(x.shape)
assert np.array_equal(x, x2)
```
## Text formatting options
Q8. Print `x` such that all elements are displayed with precision=1, no suppress.
```
x = np.random.uniform(size=[10,100])
np.set_printoptions(precision=1, threshold=np.inf, suppress=True) # threshold=np.nan is rejected by newer NumPy; np.inf prints all elements
print(x)
```
## Base-n representations
Q9. Convert 12 into a binary number in string format.
```
out1 = np.binary_repr(12)
out2 = np.base_repr(12, base=2)
assert out1 == out2 # But out1 is better because it's much faster.
print(out1)
```
Q10. Convert 12 into a hexadecimal number in string format.
```
np.base_repr(12, base=16)
```
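Base-n strings round-trip back to integers with the built-in `int`; a quick check:

```python
import numpy as np

s = np.base_repr(12, base=16)
print(s)                 # 'C'
assert int(s, 16) == 12  # int() parses the base-16 string back
```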
```
from sqlalchemy import create_engine
import api_keys
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
DB_USER = api_keys.DB_USER
DB_PASS = api_keys.DB_PASS
DB_URL = api_keys.DB_URL
engine = create_engine("mysql+pymysql://{0}:{1}@{2}".format(DB_USER, DB_PASS, DB_URL), echo=True)
connection = engine.connect()
statement = """SELECT * FROM dublin_bikes.weather_forecast_1hour
order by time_queried desc
limit 500;""" # create select statement for stations table
df = pd.read_sql_query(statement, engine) # https://stackoverflow.com/questions/29525808/sqlalchemy-orm-conversion-to-pandas-dataframe
# the following notebook is based on material presented in Data Analytics module COMP47350, labs 7 and 9
df.shape
df.head(5)
df.tail(5)
df.dtypes
categorical_columns = df[['station_number','weather_main', 'weather_description']].columns
# Convert data type to category for these columns
for column in categorical_columns:
    df[column] = df[column].astype('category')
continuous_columns = df.select_dtypes(['int64']).columns
datetime_columns = df.select_dtypes(['datetime64[ns]']).columns
df.dtypes
#Print the number of duplicates, without the original rows that were duplicated
print('Number of duplicate (excluding first) rows in the table is: ', df.duplicated().sum())
# Check for duplicate rows.
# Use "keep=False" to mark all duplicates as true, including the original rows that were duplicated.
print('Number of duplicate rows (including first) in the table is:', df[df.duplicated(keep=False)].shape[0])
# Check for duplicate columns
#First transpose the df so columns become rows, then apply the same check as above
dfT = df.T
print("Number of duplicate (excluding first) columns in the table is: ", dfT.duplicated().sum())
print("Number of duplicate (including first) columns in the table is: ", dfT[dfT.duplicated(keep=False)].shape[0])
```
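The same duplicate checks can be exercised on a toy frame (a standalone sketch):

```python
import pandas as pd

toy = pd.DataFrame({"a": [1, 1, 2], "b": [9, 9, 8]})
print(toy.duplicated().sum())                    # 1: duplicates beyond the first occurrence
print(toy[toy.duplicated(keep=False)].shape[0])  # 2: all rows involved in duplication
```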
# no duplicate rows or columns
```
df.select_dtypes(['category']).describe().T
```
# station_status is constant column
```
df.select_dtypes(include=['int64']).describe().T
df.select_dtypes(include=['datetime64[ns]']).describe().T
df.isnull().sum()
```
# logical integrity
```
test_1 = df[['time_queried','last_update']][df['time_queried']>df['last_update']]
print("Number of rows failing the test: ", test_1.shape[0])
test_1.head(5)
df[continuous_columns].hist(layout=(3, 3), figsize=(10,10), bins=10)
df[continuous_columns].plot(kind='box', subplots=True, figsize=(10,10), layout=(3,3), sharex=False, sharey=False)
df[datetime_columns].plot()
print(datetime_columns[0])
df[datetime_columns[0]].hist()
print(datetime_columns[1])
df[datetime_columns[1]].hist()
for col in categorical_columns:
    f = df[col].value_counts().plot(kind='bar', figsize=(12,10))
    plt.title(col)
    plt.ylabel('number of occurrences')
    plt.show()
```