# Riskfolio-Lib Tutorial:
<br>__[Financionerioncios](https://financioneroncios.wordpress.com)__
<br>__[Orenji](https://www.orenj-i.net)__
<br>__[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)__
<br>__[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__
<a href='https://ko-fi.com/B0B833SXD' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://cdn.ko-fi.com/cdn/kofi1.png?v=2' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
## Tutorial 19: Mean Entropic Drawdown at Risk (EDaR) Optimization
## 1. Downloading the data:
```
import numpy as np
import pandas as pd
import yfinance as yf
import warnings
warnings.filterwarnings("ignore")
yf.pdr_override()
pd.options.display.float_format = '{:.4%}'.format
# Date range
start = '2016-01-01'
end = '2019-12-30'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'TMO',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI', 'T', 'BA']
assets.sort()
# Downloading data
data = yf.download(assets, start = start, end = end)
data = data.loc[:,('Adj Close', slice(None))]
data.columns = assets
# Calculating returns
Y = data[assets].pct_change().dropna()
display(Y.head())
```
## 2. Estimating Mean EDaR Portfolios
### 2.1 Calculating the portfolio that maximizes Return/EDaR ratio.
The Entropic Drawdown at Risk (EDaR) is a risk measure derived from the Entropic Value at Risk: it is obtained by applying EVaR to the drawdown distribution:
$$
\begin{aligned}
\text{EDaR}_{\alpha}(X) & = \inf_{z>0} \left \{z\log \left ( \frac{1}{\alpha} M_{\text{DD}(X)} \left( \frac{1}{z} \right) \right ) \right \} \\
\text{DD}(X,j) & = \max_{t \in (0,j)} \left ( \sum_{i=0}^{t}X_{i} \right )- \sum_{i=0}^{j}X_{i} \\
\end{aligned}
$$
Where $M_{X} (t) = \text{E} [e^{tX}]$ is the moment generating function and $\alpha \in [0,1]$ is the significance level.
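Under these definitions, EDaR can be evaluated numerically as a one-dimensional search over $z$. A minimal NumPy/SciPy sketch (the function name and the simulated return series are illustrative, not part of Riskfolio-Lib):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def edar(returns, alpha=0.05):
    """Entropic Drawdown at Risk via a bounded 1-D search over z > 0."""
    cum = np.cumsum(returns)                   # cumulative returns
    dd = np.maximum.accumulate(cum) - cum      # drawdown DD(X, j) at each j
    def objective(z):
        # z * log(M_DD(1/z) / alpha), computed stably with logsumexp
        return z * (logsumexp(dd / z) - np.log(len(dd)) - np.log(alpha))
    res = minimize_scalar(objective, bounds=(1e-6, 1.0), method="bounded")
    return res.fun

rng = np.random.default_rng(0)
r = rng.normal(0.0005, 0.01, 1000)             # simulated daily returns
print(edar(r, alpha=0.05))
```

A handy sanity check: EDaR always lies between the average drawdown and the maximum drawdown of the series.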
Analogously to the Markowitz model, the mean-EDaR model can be expressed as one of the following problems:
$$
\begin{aligned}
& \max_{w,\, z} & & \mu w - \lambda \text{EDaR}_{\alpha}(r w)\\
& & & 1^{T}w = 1 \\
& & & w \geq 0 \\
\end{aligned}
$$
$$
\begin{aligned}
& \max_{w,\, z} & & \frac{\mu w - r_{f}}{\text{EDaR}_{\alpha}(r w)}\\
& & & 1^{T}w = 1 \\
& & & w \geq 0 \\
\end{aligned}
$$
$$
\begin{aligned}
& \min_{w,\, z} & & \text{EDaR}_{\alpha}(r w)\\
& & & 1^{T}w = 1 \\
& & & w \geq 0 \\
\end{aligned}
$$
Where $z$ is the auxiliary EDaR variable, $w$ is the vector of asset weights, $\mu$ is the mean vector, $\lambda$ is the risk aversion factor, $r$ is the matrix of returns and $r_{f}$ is the risk-free rate.
It is recommended to use MOSEK to optimize EDaR, because the problem requires exponential cone programming. Instructions to install MOSEK are in this __[link](https://docs.mosek.com/9.2/install/installation.html)__; it is easiest to install through Anaconda. You will also need a license; I recommend requesting an academic license __[here](https://www.mosek.com/products/academic-licenses/)__.
```
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Calculating optimum portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
# Estimate optimal portfolio:
port.solvers = ['MOSEK'] # It is recommended to use MOSEK when optimizing EDaR
port.alpha = 0.05 # Significance level for CVaR, EVaR, CDaR and EDaR
model='Classic' # Could be Classic (historical), BL (Black Litterman) or FM (Factor Model)
rm = 'EDaR' # Risk measure used, this time EDaR
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = True # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
```
### 2.2 Plotting portfolio composition
```
import riskfolio.PlotFunctions as plf
# Plotting the composition of the portfolio
ax = plf.plot_pie(w=w, title='Sharpe Mean - Entropic Drawdown at Risk', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
```
### 2.3 Calculate efficient frontier
```
points = 40 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting the efficient frontier
label = 'Max Risk Adjusted Return Portfolio' # Title of point
mu = port.mu # Expected returns
cov = port.cov # Covariance matrix
returns = port.returns # Returns of the assets
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
```
## 3. Estimating Risk Parity Portfolios for EDaR
### 3.1 Calculating the risk parity portfolio for EDaR.
The risk parity portfolio for the EDaR risk measure is the solution of the problem:
$$
\begin{aligned}
& \min_{w,\, z} & & \text{EDaR}_{\alpha}(r w) - b^{T} \ln(w)\\
& & & w \geq 0 \\
\end{aligned}
$$
Where $w$ is the vector of asset weights and $b$ is the vector of desired risk contributions per asset; by default each entry equals 1/(number of assets). The optimal weights are rescaled afterwards so that they sum to one.
```
b = None # Risk contribution constraints vector
w_rp = port.rp_optimization(model=model, rm=rm, rf=rf, b=b, hist=hist)
display(w_rp.T)
```
### 3.2 Plotting portfolio composition
```
ax = plf.plot_pie(w=w_rp, title='Risk Parity EDaR', others=0.05, nrow=25, cmap = "tab20",
height=8, width=12, ax=None)
```
### 3.3 Plotting Risk Composition
```
ax = plf.plot_risk_con(w_rp, cov=port.cov, returns=port.returns, rm=rm, rf=0, alpha=0.05,
color="tab:blue", height=6, width=12, ax=None)
```
# Finetuning of ImageNet pretrained EfficientNet-B0 on CIFAR-100
In 2019, a new family of ConvNet architectures was proposed in the ["EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks"](https://arxiv.org/pdf/1905.11946.pdf) paper. According to the paper, compound scaling starting from a 'good' baseline yields a network that achieves state-of-the-art accuracy on ImageNet while being 8.4x smaller and 6.1x faster at inference than the best existing ConvNet.

Following the paper, an EfficientNet-B0 model pretrained on ImageNet and finetuned on CIFAR100 gives 88% test accuracy. Let's reproduce this result with Ignite. The [official implementation](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) of EfficientNet uses Tensorflow,
so we will instead borrow code from the [katsura-jp/efficientnet-pytorch](https://github.com/katsura-jp/efficientnet-pytorch),
[rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models) and [lukemelas/EfficientNet-PyTorch](https://github.com/lukemelas/EfficientNet-PyTorch/) repositories (kudos to the authors!). We will download pretrained weights from the [lukemelas/EfficientNet-PyTorch](https://github.com/lukemelas/EfficientNet-PyTorch/) repository.
## Network architecture review
The architecture of EfficientNet-B0 is the following:
```
1 - Stem - Conv3x3|BN|Swish
2 - Blocks - MBConv1, k3x3
- MBConv6, k3x3 repeated 2 times
- MBConv6, k5x5 repeated 2 times
- MBConv6, k3x3 repeated 3 times
- MBConv6, k5x5 repeated 3 times
- MBConv6, k5x5 repeated 4 times
- MBConv6, k3x3
totally 16 blocks
3 - Head - Conv1x1|BN|Swish
- Pooling
- Dropout
- FC
```
where
```
Swish(x) = x * sigmoid(x)
```
and `MBConvX` stands for mobile inverted bottleneck convolution, where X denotes the expansion ratio:
```
MBConv1 :
-> DepthwiseConv|BN|Swish -> SqueezeExcitation -> Conv|BN
MBConv6 :
-> Conv|BN|Swish -> DepthwiseConv|BN|Swish -> SqueezeExcitation -> Conv|BN
MBConv6+IdentitySkip :
-.-> Conv|BN|Swish -> DepthwiseConv|BN|Swish -> SqueezeExcitation -> Conv|BN-(+)->
\___________________________________________________________________________/
```
## Installations
1) Torchvision
Please install torchvision in order to get CIFAR100 dataset:
```
conda install -y torchvision -c pytorch
```
2) Let's install the Nvidia/Apex package:
We will train with automatic mixed precision using the [nvidia/apex](https://github.com/NVIDIA/apex) package
```
# Install Apex:
# If torch cuda version and nvcc version match:
!pip install --upgrade --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" git+https://github.com/NVIDIA/apex/
# if above command is failing, please install apex without c++/cuda extensions:
# !pip install --upgrade --no-cache-dir git+https://github.com/NVIDIA/apex/
```
3) Install tensorboardX and `pytorch-ignite`
```
!pip install pytorch-ignite tensorboardX
import random
import torch
import ignite
seed = 17
random.seed(seed)
_ = torch.manual_seed(seed)
torch.__version__, ignite.__version__
```
## Model
Let's define some helpful modules:
- Flatten
- Swish
The reason why Swish is not implemented in `torch.nn` can be found [here](https://github.com/pytorch/pytorch/pull/3182).
```
import torch
import torch.nn as nn
class Swish(nn.Module):
def forward(self, x):
return x * torch.sigmoid(x)
class Flatten(nn.Module):
def forward(self, x):
return x.reshape(x.shape[0], -1)
```
Let's visualize Swish transform vs ReLU:
```
import matplotlib.pylab as plt
%matplotlib inline
d = torch.linspace(-10.0, 10.0, 100)
s = Swish()
res = s(d)
res2 = torch.relu(d)
plt.title("Swish transformation")
plt.plot(d.numpy(), res.numpy(), label='Swish')
plt.plot(d.numpy(), res2.numpy(), label='ReLU')
plt.legend()
```
Now let's define `SqueezeExcitation` module
```
class SqueezeExcitation(nn.Module):
def __init__(self, inplanes, se_planes):
super(SqueezeExcitation, self).__init__()
self.reduce_expand = nn.Sequential(
nn.Conv2d(inplanes, se_planes,
kernel_size=1, stride=1, padding=0, bias=True),
Swish(),
nn.Conv2d(se_planes, inplanes,
kernel_size=1, stride=1, padding=0, bias=True),
nn.Sigmoid()
)
def forward(self, x):
x_se = torch.mean(x, dim=(-2, -1), keepdim=True)
x_se = self.reduce_expand(x_se)
return x_se * x
```
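A quick standalone sanity check (restating the modules above so it runs on its own) confirms that squeeze-and-excitation is a per-channel gate that preserves the input shape and scales activations by factors in (0, 1):

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)

class SqueezeExcitation(nn.Module):
    def __init__(self, inplanes, se_planes):
        super().__init__()
        self.reduce_expand = nn.Sequential(
            nn.Conv2d(inplanes, se_planes, kernel_size=1, bias=True),
            Swish(),
            nn.Conv2d(se_planes, inplanes, kernel_size=1, bias=True),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x_se = torch.mean(x, dim=(-2, -1), keepdim=True)  # global average pool
        return self.reduce_expand(x_se) * x               # per-channel gate in (0, 1)

torch.manual_seed(0)
se = SqueezeExcitation(16, 4)
out = se(torch.rand(2, 16, 8, 8))
print(out.shape)  # torch.Size([2, 16, 8, 8]) — shape is preserved
```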
Next, we can define `MBConv`.
**Note on implementation**: in Tensorflow (and its PyTorch ports) convolutions use the `SAME` padding option, which in PyTorch requires a specific padding computation and an extra operation. We will use the built-in padding argument of the convolution instead.
```
from torch.nn import functional as F
class MBConv(nn.Module):
def __init__(self, inplanes, planes, kernel_size, stride,
expand_rate=1.0, se_rate=0.25,
drop_connect_rate=0.2):
super(MBConv, self).__init__()
expand_planes = int(inplanes * expand_rate)
se_planes = max(1, int(inplanes * se_rate))
self.expansion_conv = None
if expand_rate > 1.0:
self.expansion_conv = nn.Sequential(
nn.Conv2d(inplanes, expand_planes,
kernel_size=1, stride=1, padding=0, bias=False),
nn.BatchNorm2d(expand_planes, momentum=0.01, eps=1e-3),
Swish()
)
inplanes = expand_planes
self.depthwise_conv = nn.Sequential(
nn.Conv2d(inplanes, expand_planes,
kernel_size=kernel_size, stride=stride,
padding=kernel_size // 2, groups=expand_planes,
bias=False),
nn.BatchNorm2d(expand_planes, momentum=0.01, eps=1e-3),
Swish()
)
self.squeeze_excitation = SqueezeExcitation(expand_planes, se_planes)
self.project_conv = nn.Sequential(
nn.Conv2d(expand_planes, planes,
kernel_size=1, stride=1, padding=0, bias=False),
nn.BatchNorm2d(planes, momentum=0.01, eps=1e-3),
)
self.with_skip = stride == 1
self.drop_connect_rate = drop_connect_rate
def _drop_connect(self, x):
keep_prob = 1.0 - self.drop_connect_rate
drop_mask = torch.rand(x.shape[0], 1, 1, 1) + keep_prob
drop_mask = drop_mask.type_as(x)
drop_mask.floor_()
return drop_mask * x / keep_prob
def forward(self, x):
z = x
if self.expansion_conv is not None:
x = self.expansion_conv(x)
x = self.depthwise_conv(x)
x = self.squeeze_excitation(x)
x = self.project_conv(x)
# Add identity skip
if x.shape == z.shape and self.with_skip:
if self.training and self.drop_connect_rate is not None:
x = self._drop_connect(x)
x += z
return x
```
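The `_drop_connect` method above implements per-sample stochastic depth: each sample's residual branch is zeroed with probability `drop_connect_rate`, and survivors are rescaled by `1 / keep_prob` so the expected activation is unchanged. A standalone sketch of the same trick:

```python
import torch

def drop_connect(x, drop_rate=0.2):
    """Zero each sample's branch with probability drop_rate; rescale survivors."""
    keep_prob = 1.0 - drop_rate
    # rand + keep_prob floored gives 1 with probability keep_prob, else 0
    mask = (torch.rand(x.shape[0], 1, 1, 1) + keep_prob).floor_()
    return mask * x / keep_prob

torch.manual_seed(0)
x = torch.ones(10000, 1, 1, 1)
out = drop_connect(x)
print(out.mean().item())  # close to 1.0: rescaling keeps the expectation unchanged
```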
And finally, we can implement generic `EfficientNet`:
```
from collections import OrderedDict
import math
def init_weights(module):
if isinstance(module, nn.Conv2d):
nn.init.kaiming_normal_(module.weight, a=0, mode='fan_out')
elif isinstance(module, nn.Linear):
init_range = 1.0 / math.sqrt(module.weight.shape[1])
nn.init.uniform_(module.weight, a=-init_range, b=init_range)
class EfficientNet(nn.Module):
def _setup_repeats(self, num_repeats):
return int(math.ceil(self.depth_coefficient * num_repeats))
def _setup_channels(self, num_channels):
num_channels *= self.width_coefficient
new_num_channels = math.floor(num_channels / self.divisor + 0.5) * self.divisor
new_num_channels = max(self.divisor, new_num_channels)
if new_num_channels < 0.9 * num_channels:
new_num_channels += self.divisor
return new_num_channels
def __init__(self, num_classes=100,
width_coefficient=1.0,
depth_coefficient=1.0,
se_rate=0.25,
dropout_rate=0.2,
drop_connect_rate=0.2):
super(EfficientNet, self).__init__()
self.width_coefficient = width_coefficient
self.depth_coefficient = depth_coefficient
self.divisor = 8
list_channels = [32, 16, 24, 40, 80, 112, 192, 320, 1280]
list_channels = [self._setup_channels(c) for c in list_channels]
list_num_repeats = [1, 2, 2, 3, 3, 4, 1]
list_num_repeats = [self._setup_repeats(r) for r in list_num_repeats]
expand_rates = [1, 6, 6, 6, 6, 6, 6]
strides = [1, 2, 2, 2, 1, 2, 1]
kernel_sizes = [3, 3, 5, 3, 5, 5, 3]
# Define stem:
self.stem = nn.Sequential(
nn.Conv2d(3, list_channels[0], kernel_size=3, stride=2, padding=1, bias=False),
nn.BatchNorm2d(list_channels[0], momentum=0.01, eps=1e-3),
Swish()
)
# Define MBConv blocks
blocks = []
counter = 0
num_blocks = sum(list_num_repeats)
for idx in range(7):
num_channels = list_channels[idx]
next_num_channels = list_channels[idx + 1]
num_repeats = list_num_repeats[idx]
expand_rate = expand_rates[idx]
kernel_size = kernel_sizes[idx]
stride = strides[idx]
drop_rate = drop_connect_rate * counter / num_blocks
name = "MBConv{}_{}".format(expand_rate, counter)
blocks.append((
name,
MBConv(num_channels, next_num_channels,
kernel_size=kernel_size, stride=stride, expand_rate=expand_rate,
se_rate=se_rate, drop_connect_rate=drop_rate)
))
counter += 1
for i in range(1, num_repeats):
name = "MBConv{}_{}".format(expand_rate, counter)
drop_rate = drop_connect_rate * counter / num_blocks
blocks.append((
name,
MBConv(next_num_channels, next_num_channels,
kernel_size=kernel_size, stride=1, expand_rate=expand_rate,
se_rate=se_rate, drop_connect_rate=drop_rate)
))
counter += 1
self.blocks = nn.Sequential(OrderedDict(blocks))
# Define head
self.head = nn.Sequential(
nn.Conv2d(list_channels[-2], list_channels[-1],
kernel_size=1, bias=False),
nn.BatchNorm2d(list_channels[-1], momentum=0.01, eps=1e-3),
Swish(),
nn.AdaptiveAvgPool2d(1),
Flatten(),
nn.Dropout(p=dropout_rate),
nn.Linear(list_channels[-1], num_classes)
)
self.apply(init_weights)
def forward(self, x):
f = self.stem(x)
f = self.blocks(f)
y = self.head(f)
return y
```
All EfficientNet models can be defined using the following parametrization:
```
# (width_coefficient, depth_coefficient, resolution, dropout_rate)
'efficientnet-b0': (1.0, 1.0, 224, 0.2),
'efficientnet-b1': (1.0, 1.1, 240, 0.2),
'efficientnet-b2': (1.1, 1.2, 260, 0.3),
'efficientnet-b3': (1.2, 1.4, 300, 0.3),
'efficientnet-b4': (1.4, 1.8, 380, 0.4),
'efficientnet-b5': (1.6, 2.2, 456, 0.4),
'efficientnet-b6': (1.8, 2.6, 528, 0.5),
'efficientnet-b7': (2.0, 3.1, 600, 0.5),
```
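Standalone versions of the rounding rules from `_setup_repeats` and `_setup_channels` make this table concrete; for example, B1 (depth coefficient 1.1) deepens B0's 16 blocks to 23, while B2 (width coefficient 1.1) snaps channel counts to multiples of 8:

```python
import math

def round_repeats(repeats, depth_coefficient):
    return int(math.ceil(depth_coefficient * repeats))

def round_channels(channels, width_coefficient, divisor=8):
    channels *= width_coefficient
    new_channels = max(divisor, int(math.floor(channels / divisor + 0.5)) * divisor)
    if new_channels < 0.9 * channels:   # don't round down by more than 10%
        new_channels += divisor
    return new_channels

# B0 -> B1 only deepens the network: total blocks go from 16 to 23
print(sum(round_repeats(r, 1.1) for r in [1, 2, 2, 3, 3, 4, 1]))   # 23
# B2 widens by 1.1x, with channels snapped to multiples of 8
print(round_channels(32, 1.1), round_channels(1280, 1.1))          # 32 1408
```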
Let's define and train the smallest one: `EfficientNet-B0`
```
model = EfficientNet(num_classes=1000,
width_coefficient=1.0, depth_coefficient=1.0,
dropout_rate=0.2)
```
Number of parameters:
```
def print_num_params(model, display_all_modules=False):
total_num_params = 0
for n, p in model.named_parameters():
num_params = 1
for s in p.shape:
num_params *= s
if display_all_modules: print("{}: {}".format(n, num_params))
total_num_params += num_params
print("-" * 50)
print("Total number of parameters: {:.2e}".format(total_num_params))
print_num_params(model)
```
Let's compare the number of parameters with some of ResNets:
```
from torchvision.models.resnet import resnet18, resnet34, resnet50
print_num_params(resnet18(pretrained=False, num_classes=100))
print_num_params(resnet34(pretrained=False, num_classes=100))
print_num_params(resnet50(pretrained=False, num_classes=100))
```
### Model's graph with Tensorboard
We can optionally inspect the model's graph with the code below; this requires the
`tensorboardX` package.
Otherwise, go directly to the next section.
```
from tensorboardX.pytorch_graph import graph
import random
from IPython.display import clear_output, Image, display, HTML
def show_graph(graph_def):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = graph_def
code = """
<script src="//cdnjs.cloudflare.com/ajax/libs/polymer/0.3.3/platform.js"></script>
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(random.randint(0, 1000)))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
x = torch.rand(4, 3, 224, 224)
# Error : module 'torch.onnx' has no attribute 'set_training'
# uncomment when it will be fixed
# graph_def = graph(model, x, operator_export_type='RAW')
# Display in Firefox may not work properly. Use Chrome.
# show_graph(graph_def[0])
```
### Load pretrained weights
Let's load pretrained weights and check the model on a single image.
```
!mkdir /tmp/efficientnet_weights
!wget http://storage.googleapis.com/public-models/efficientnet-b0-08094119.pth -O/tmp/efficientnet_weights/efficientnet-b0-08094119.pth
from collections import OrderedDict
model_state = torch.load("/tmp/efficientnet_weights/efficientnet-b0-08094119.pth")
# A basic remapping is required
mapping = {
k: v for k, v in zip(model_state.keys(), model.state_dict().keys())
}
mapped_model_state = OrderedDict([
(mapping[k], v) for k, v in model_state.items()
])
model.load_state_dict(mapped_model_state, strict=False)
!wget https://raw.githubusercontent.com/lukemelas/EfficientNet-PyTorch/master/examples/simple/img.jpg -O/tmp/giant_panda.jpg
!wget https://raw.githubusercontent.com/lukemelas/EfficientNet-PyTorch/master/examples/simple/labels_map.txt -O/tmp/labels_map.txt
import json
with open("/tmp/labels_map.txt", "r") as h:
labels = json.load(h)
from PIL import Image
import torchvision.transforms as transforms
img = Image.open("/tmp/giant_panda.jpg")
# Preprocess image
image_size = 224
tfms = transforms.Compose([transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),])
x = tfms(img).unsqueeze(0)
plt.imshow(img)
# Classify
model.eval()
with torch.no_grad():
y_pred = model(x)
# Print predictions
print('-----')
for idx in torch.topk(y_pred, k=5).indices.squeeze(0).tolist():
prob = torch.softmax(y_pred, dim=1)[0, idx].item()
print('{label:<75} ({p:.2f}%)'.format(label=labels[str(idx)], p=prob*100))
```
## Dataflow
Let's setup the dataflow:
- load CIFAR100 train and test datasets
- setup train/test image transforms
- setup train/test data loaders
According to the paper, the authors borrowed training settings from other publications; the dataflow for CIFAR100 is the following:
- input images are resized to 224x224 during training
- images are randomly horizontally flipped and augmented using cutout
- each mini-batch contains 256 examples
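Note that the transforms below do not actually include cutout; if you want to add it, a minimal transform might look like this (the 16-pixel patch size is an assumption, and it would go after `ToTensor()` in the train pipeline):

```python
import torch

class Cutout:
    """Zero out a random square patch of a CHW image tensor."""
    def __init__(self, size=16):
        self.size = size

    def __call__(self, img):
        _, h, w = img.shape
        cy = torch.randint(h, (1,)).item()   # random patch center
        cx = torch.randint(w, (1,)).item()
        y1, y2 = max(0, cy - self.size // 2), min(h, cy + self.size // 2)
        x1, x2 = max(0, cx - self.size // 2), min(w, cx + self.size // 2)
        img[:, y1:y2, x1:x2] = 0.0
        return img

# usage sketch: Compose([..., ToTensor(), Cutout(16), Normalize(...)])
```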
```
from torchvision.datasets.cifar import CIFAR100
from torchvision.transforms import Compose, RandomCrop, Pad, RandomHorizontalFlip, Resize
from torchvision.transforms import ToTensor, Normalize
from torch.utils.data import Subset
path = "/tmp/cifar100"
from PIL.Image import BICUBIC
train_transform = Compose([
Resize(256, BICUBIC),
RandomCrop(224),
RandomHorizontalFlip(),
ToTensor(),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
test_transform = Compose([
Resize(224, BICUBIC),
ToTensor(),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
train_dataset = CIFAR100(root=path, train=True, transform=train_transform, download=True)
test_dataset = CIFAR100(root=path, train=False, transform=test_transform, download=False)
train_eval_indices = [random.randint(0, len(train_dataset) - 1) for i in range(len(test_dataset))]
train_eval_dataset = Subset(train_dataset, train_eval_indices)
len(train_dataset), len(test_dataset), len(train_eval_dataset)
from torch.utils.data import DataLoader
batch_size = 172
train_loader = DataLoader(train_dataset, batch_size=batch_size, num_workers=20,
shuffle=True, drop_last=True, pin_memory=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, num_workers=20,
shuffle=False, drop_last=False, pin_memory=True)
eval_train_loader = DataLoader(train_eval_dataset, batch_size=batch_size, num_workers=20,
shuffle=False, drop_last=False, pin_memory=True)
import torchvision.utils as vutils
# Plot some training images
batch = next(iter(train_loader))
plt.figure(figsize=(16, 8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(
vutils.make_grid(batch[0][:16], padding=2, normalize=True).cpu().numpy().transpose((1, 2, 0))
)
batch = None
torch.cuda.empty_cache()
```
## Finetuning the model
As we want to finetune the model on CIFAR-100, we will replace the final fully-connected classification layer (1000 ImageNet classes vs 100 CIFAR classes).
```
model.head[6].in_features, model.head[6].out_features
model.head[6] = nn.Linear(1280, 100)
model.head[6].in_features, model.head[6].out_features
```
We will finetune the model on GPU with automatic mixed precision (fp16/fp32) using the nvidia/apex package.
```
assert torch.cuda.is_available()
assert torch.backends.cudnn.enabled, "NVIDIA/Apex:Amp requires cudnn backend to be enabled."
torch.backends.cudnn.benchmark = True
device = "cuda"
model = model.to(device)
```
Let's setup cross-entropy as criterion and SGD as optimizer.
We will split model parameters into 2 groups:
1) feature extractor (pretrained weights)
2) classifier (random weights)
and define different learning rates for these groups (via learning rate scheduler).
```
from itertools import chain
import torch.optim as optim
import torch.nn.functional as F
criterion = nn.CrossEntropyLoss()
lr = 0.01
optimizer = optim.SGD([
{
"params": chain(model.stem.parameters(), model.blocks.parameters()),
"lr": lr * 0.1,
},
{
"params": model.head[:6].parameters(),
"lr": lr * 0.2,
},
{
"params": model.head[6].parameters(),
"lr": lr
}],
momentum=0.9, weight_decay=0.001, nesterov=True)
from torch.optim.lr_scheduler import ExponentialLR
lr_scheduler = ExponentialLR(optimizer, gamma=0.975)
try:
from apex import amp
except ImportError:
raise ImportError("Please install apex from https://www.github.com/nvidia/apex to run this example.")
# Initialize Amp
model, optimizer = amp.initialize(model, optimizer, opt_level="O2", num_losses=1)
```
Next, let's define a single iteration function `update_fn`. This function is then used by `ignite.engine.Engine` to update the model while looping over the input data.
```
from ignite.utils import convert_tensor
def update_fn(engine, batch):
model.train()
x = convert_tensor(batch[0], device=device, non_blocking=True)
y = convert_tensor(batch[1], device=device, non_blocking=True)
y_pred = model(x)
# Compute loss
loss = criterion(y_pred, y)
optimizer.zero_grad()
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
optimizer.step()
return {
"batchloss": loss.item(),
}
```
Let's check `update_fn`
```
batch = next(iter(train_loader))
res = update_fn(engine=None, batch=batch)
batch = None
torch.cuda.empty_cache()
res
```
Now let's define a trainer and add some practical handlers:
- log to tensorboard: losses, metrics, lr
- progress bar
- models/optimizers checkpointing
```
from ignite.engine import Engine, Events, create_supervised_evaluator
from ignite.metrics import RunningAverage, Accuracy, Precision, Recall, Loss, TopKCategoricalAccuracy
from ignite.contrib.handlers import TensorboardLogger
from ignite.contrib.handlers.tensorboard_logger import OutputHandler, OptimizerParamsHandler
trainer = Engine(update_fn)
def output_transform(out):
return out['batchloss']
RunningAverage(output_transform=output_transform).attach(trainer, "batchloss")
from datetime import datetime
exp_name = datetime.now().strftime("%Y%m%d-%H%M%S")
log_path = "/tmp/finetune_efficientnet_cifar100/{}".format(exp_name)
tb_logger = TensorboardLogger(log_dir=log_path)
tb_logger.attach(trainer,
log_handler=OutputHandler('training', ['batchloss', ]),
event_name=Events.ITERATION_COMPLETED)
print("Experiment name: ", exp_name)
```
Let's setup learning rate scheduling:
```
trainer.add_event_handler(Events.EPOCH_COMPLETED, lambda engine: lr_scheduler.step())
# Log optimizer parameters
tb_logger.attach(trainer,
log_handler=OptimizerParamsHandler(optimizer, "lr"),
event_name=Events.EPOCH_STARTED)
from ignite.contrib.handlers import ProgressBar
# Iteration-wise progress bar
# ProgressBar(bar_format="").attach(trainer, metric_names=['batchloss',])
# Epoch-wise progress bar with display of training losses
ProgressBar(persist=True, bar_format="").attach(trainer,
event_name=Events.EPOCH_STARTED,
closing_event_name=Events.COMPLETED)
```
Let's create two evaluators to compute metrics on train/test images and log them to Tensorboard:
```
metrics = {
'Loss': Loss(criterion),
'Accuracy': Accuracy(),
'Precision': Precision(average=True),
'Recall': Recall(average=True),
'Top-5 Accuracy': TopKCategoricalAccuracy(k=5)
}
evaluator = create_supervised_evaluator(model, metrics=metrics,
device=device, non_blocking=True)
train_evaluator = create_supervised_evaluator(model, metrics=metrics,
device=device, non_blocking=True)
from ignite.handlers import global_step_from_engine
def run_evaluation(engine):
train_evaluator.run(eval_train_loader)
evaluator.run(test_loader)
trainer.add_event_handler(Events.EPOCH_STARTED(every=3), run_evaluation)
trainer.add_event_handler(Events.COMPLETED, run_evaluation)
# Log train eval metrics:
tb_logger.attach_output_handler(
train_evaluator,
event_name=Events.EPOCH_COMPLETED,
tag="training",
metric_names=list(metrics.keys()),
global_step_transform=global_step_from_engine(trainer)
)
# Log val metrics:
tb_logger.attach_output_handler(
evaluator,
event_name=Events.EPOCH_COMPLETED,
tag="test",
metric_names=list(metrics.keys()),
global_step_transform=global_step_from_engine(trainer)
)
```
Now let's setup the best model checkpointing, early stopping:
```
import logging
# Setup engine & logger
def setup_logger(logger):
handler = logging.StreamHandler()
formatter = logging.Formatter("%(asctime)s %(name)-12s %(levelname)-8s %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
from ignite.handlers import Checkpoint, DiskSaver, EarlyStopping, TerminateOnNan
trainer.add_event_handler(Events.ITERATION_COMPLETED, TerminateOnNan())
# Store the best model
def default_score_fn(engine):
score = engine.state.metrics['Accuracy']
return score
# Force filename to model.pt to ease the rerun of the notebook
disk_saver = DiskSaver(dirname=log_path)
best_model_handler = Checkpoint(to_save={'model': model},
save_handler=disk_saver,
filename_pattern="{name}.{ext}",
n_saved=1)
evaluator.add_event_handler(Events.COMPLETED, best_model_handler)
# Add early stopping
es_patience = 10
es_handler = EarlyStopping(patience=es_patience, score_function=default_score_fn, trainer=trainer)
evaluator.add_event_handler(Events.COMPLETED, es_handler)
setup_logger(es_handler.logger)
# Clear cuda cache between training/testing
def empty_cuda_cache(engine):
torch.cuda.empty_cache()
import gc
gc.collect()
trainer.add_event_handler(Events.EPOCH_COMPLETED, empty_cuda_cache)
evaluator.add_event_handler(Events.COMPLETED, empty_cuda_cache)
train_evaluator.add_event_handler(Events.COMPLETED, empty_cuda_cache)
num_epochs = 100
trainer.run(train_loader, max_epochs=num_epochs)
```
Finetuning results:
- Test dataset:
```
evaluator.state.metrics
```
- Training subset:
```
train_evaluator.state.metrics
```
Obviously, our training settings are not optimal, and the gap between our result and the paper's is about 5%.
## Inference
Let's load the best model and recompute the evaluation metrics on the test dataset, with a very basic test-time augmentation (TTA) to boost performance.
```
best_model = EfficientNet()
best_model.load_state_dict(torch.load(log_path + "/model.pt"))
metrics = {
'Accuracy': Accuracy(),
'Precision': Precision(average=True),
'Recall': Recall(average=True),
}
def inference_update_with_tta(engine, batch):
best_model.eval()
with torch.no_grad():
x, y = batch
# Let's compute final prediction as a mean of predictions on x and flipped x
y_pred1 = best_model(x)
y_pred2 = best_model(x.flip(dims=(-1, )))
y_pred = 0.5 * (y_pred1 + y_pred2)
return y_pred, y
inferencer = Engine(inference_update_with_tta)
for name, metric in metrics.items():
metric.attach(inferencer, name)
ProgressBar(desc="Inference").attach(inferencer)
result_state = inferencer.run(test_loader, max_epochs=1)
```
Finally, we obtain similar scores:
```
result_state.metrics
```
# Exploring precision and recall
# Question 1
<img src="images/lec6_quiz01_pic01.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 2
<img src="images/lec6_quiz01_pic02.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 3
<img src="images/lec6_quiz01_pic03.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 4
<img src="images/lec6_quiz01_pic04.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 5
<img src="images/lec6_quiz01_pic05.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 6
<img src="images/lec6_quiz01_pic06.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 7
<img src="images/lec6_quiz01_pic07.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 8
<img src="images/lec6_quiz01_pic08.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 9
<img src="images/lec6_quiz01_pic09.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 10
<img src="images/lec6_quiz01_pic10.png">
<img src="images/lec6_quiz01_pic10_01.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 11
<img src="images/lec6_quiz01_pic11.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-classification/exam/ObhEq/exploring-precision-and-recall)*
<!--TEASER_END-->
# Question 12
<img src="images/lec6_quiz01_pic12.png">
# Question 13
<img src="images/lec6_quiz01_pic13.png">
# Question 14
<img src="images/lec6_quiz01_pic14.png">
```
# header files needed
import numpy as np
import torch
import torch.nn as nn
import torchvision
from torch.utils.tensorboard import SummaryWriter
from google.colab import drive
drive.mount('/content/drive')
np.random.seed(1234)
torch.manual_seed(1234)
torch.cuda.manual_seed(1234)
# define transforms
train_transforms = torchvision.transforms.Compose([torchvision.transforms.RandomRotation(30),
torchvision.transforms.Resize((224, 224)),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
# get data
train_data = torchvision.datasets.ImageFolder("/content/drive/My Drive/train_images/", transform=train_transforms)
# NOTE: the validation set reuses the augmenting train transforms here
val_data = torchvision.datasets.ImageFolder("/content/drive/My Drive/val_images/", transform=train_transforms)
print(len(train_data))
print(len(val_data))
# data loader
train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True, num_workers=16)
val_loader = torch.utils.data.DataLoader(val_data, batch_size=32, shuffle=False, num_workers=16)
# Cross-Entropy loss with Label Smoothing
class CrossEntropyLabelSmoothingLoss(nn.Module):
def __init__(self, smoothing=0.0):
super(CrossEntropyLabelSmoothingLoss, self).__init__()
self.smoothing = smoothing
def forward(self, pred, target):
log_prob = torch.nn.functional.log_softmax(pred, dim=-1)
        # `input` was a bug (it referenced the builtin); the smoothing mass is
        # spread over the non-target classes of `pred`.
        weight = pred.new_ones(pred.size()) * (self.smoothing / (pred.size(-1) - 1.))
weight.scatter_(-1, target.unsqueeze(-1), (1.-self.smoothing))
loss = (-weight * log_prob).sum(dim=-1).mean()
return loss
# define loss (smoothing=0 is equivalent to standard Cross-Entropy loss)
criterion = CrossEntropyLabelSmoothingLoss(0.0)
import torch.nn as nn
import math
import torch.utils.model_zoo as model_zoo
import numpy as np
__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
'resnet152']
model_urls = {
'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}
def conv3x3(in_planes, out_planes, stride=1, dilation=1):
"3x3 convolution with padding"
kernel_size = np.asarray((3, 3))
# Compute the size of the upsampled filter with
# a specified dilation rate.
upsampled_kernel_size = (kernel_size - 1) * (dilation - 1) + kernel_size
# Determine the padding that is necessary for full padding,
# meaning the output spatial size is equal to input spatial size
full_padding = (upsampled_kernel_size - 1) // 2
# Conv2d doesn't accept numpy arrays as arguments
full_padding, kernel_size = tuple(full_padding), tuple(kernel_size)
return nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
padding=full_padding, dilation=dilation, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride, dilation=dilation)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes, dilation=dilation)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = conv3x3(planes, planes, stride=stride, dilation=dilation)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self,
block,
layers,
num_classes=1000,
fully_conv=False,
remove_avg_pool_layer=False,
output_stride=32,
additional_blocks=0,
multi_grid=(1,1,1),
remove_fc = False ):
# Add additional variables to track
# output stride. Necessary to achieve
# specified output stride.
self.output_stride = output_stride
self.current_stride = 4
self.current_dilation = 1
self.remove_avg_pool_layer = remove_avg_pool_layer
self.remove_fc = remove_fc
self.inplanes = 64
self.fully_conv = fully_conv
super(ResNet, self).__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2, multi_grid=multi_grid)
self.additional_blocks = additional_blocks
if additional_blocks == 1:
self.layer5 = self._make_layer(block, 512, layers[3], stride=2, multi_grid=multi_grid)
if additional_blocks == 2:
self.layer5 = self._make_layer(block, 512, layers[3], stride=2, multi_grid=multi_grid)
self.layer6 = self._make_layer(block, 512, layers[3], stride=2, multi_grid=multi_grid)
if additional_blocks == 3:
self.layer5 = self._make_layer(block, 512, layers[3], stride=2, multi_grid=multi_grid)
self.layer6 = self._make_layer(block, 512, layers[3], stride=2, multi_grid=multi_grid)
self.layer7 = self._make_layer(block, 512, layers[3], stride=2, multi_grid=multi_grid)
self.avgpool = nn.AvgPool2d(7)
self.fc = nn.Linear(512 * block.expansion, num_classes)
if self.fully_conv:
self.avgpool = nn.AvgPool2d(7, padding=3, stride=1)
# In the latest unstable torch 4.0 the tensor.copy_
# method was changed and doesn't work as it used to be
#self.fc = nn.Conv2d(512 * block.expansion, num_classes, 1)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self,
block,
planes,
blocks,
stride=1,
multi_grid=None):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
# Check if we already achieved desired output stride.
if self.current_stride == self.output_stride:
# If so, replace subsampling with a dilation to preserve
# current spatial resolution.
self.current_dilation = self.current_dilation * stride
stride = 1
else:
# If not, perform subsampling and update current
# new output stride.
self.current_stride = self.current_stride * stride
# We don't dilate 1x1 convolution.
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
dilation = multi_grid[0] * self.current_dilation if multi_grid else self.current_dilation
layers.append(block(self.inplanes, planes, stride, downsample, dilation=dilation))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
dilation = multi_grid[i] * self.current_dilation if multi_grid else self.current_dilation
layers.append(block(self.inplanes, planes, dilation=dilation))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
if self.additional_blocks == 1:
x = self.layer5(x)
if self.additional_blocks == 2:
x = self.layer5(x)
x = self.layer6(x)
if self.additional_blocks == 3:
x = self.layer5(x)
x = self.layer6(x)
x = self.layer7(x)
if not self.remove_avg_pool_layer:
x = self.avgpool(x)
if not self.fully_conv:
x = x.view(x.size(0), -1)
if not self.remove_fc:
x = self.fc(x)
return x
def resnet18(pretrained=False, **kwargs):
"""Constructs a ResNet-18 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
if pretrained:
if model.additional_blocks:
model.load_state_dict(model_zoo.load_url(model_urls['resnet18']), strict=False)
return model
print("Loading pretrained: ")
model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
return model
def resnet34(pretrained=False, **kwargs):
"""Constructs a ResNet-34 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
if pretrained:
if model.additional_blocks:
model.load_state_dict(model_zoo.load_url(model_urls['resnet34']), strict=False)
return model
model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
return model
def resnet50(pretrained=False, **kwargs):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
if pretrained:
if model.additional_blocks:
model.load_state_dict(model_zoo.load_url(model_urls['resnet50']), strict=False)
return model
model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
return model
def resnet101(pretrained=False, **kwargs):
"""Constructs a ResNet-101 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
if pretrained:
if model.additional_blocks:
model.load_state_dict(model_zoo.load_url(model_urls['resnet101']), strict=False)
return model
model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))
return model
def resnet152(pretrained=False, **kwargs):
"""Constructs a ResNet-152 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))
return model
# define model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = resnet50(num_classes=2)
model.to(device)
# load tensorboard
%load_ext tensorboard
%tensorboard --logdir logs
# define optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=0.001)
best_metric = -1
best_metric_epoch = -1
writer = SummaryWriter("./logs/")
writer.flush()
# train and validate
for epoch in range(0, 100):
# train
model.train()
training_loss = 0.0
total = 0
correct = 0
for i, (input, target) in enumerate(train_loader):
input = input.to(device)
target = target.to(device)
optimizer.zero_grad()
output = model(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()
training_loss = training_loss + loss.item()
_, predicted = output.max(1)
total += target.size(0)
correct += predicted.eq(target).sum().item()
training_loss = training_loss/float(len(train_loader))
training_accuracy = str(100.0*(float(correct)/float(total)))
writer.add_scalar("Loss/train", float(training_loss), epoch)
writer.add_scalar("Accuracy/train", float(training_accuracy), epoch)
# validate
model.eval()
valid_loss = 0.0
total = 0
correct = 0
for i, (input, target) in enumerate(val_loader):
with torch.no_grad():
input = input.to(device)
target = target.to(device)
output = model(input)
loss = criterion(output, target)
_, predicted = output.max(1)
total += target.size(0)
correct += predicted.eq(target).sum().item()
valid_loss = valid_loss + loss.item()
valid_loss = valid_loss/float(len(val_loader))
valid_accuracy = str(100.0*(float(correct)/float(total)))
writer.add_scalar("Loss/val", float(valid_loss), epoch)
writer.add_scalar("Accuracy/val", float(valid_accuracy), epoch)
# store best model
if(float(valid_accuracy)>best_metric and epoch>=30):
best_metric = float(valid_accuracy)
best_metric_epoch = epoch
torch.save(model.state_dict(), "best_model_resnet50.pth")
print()
    print("Epoch " + str(epoch) + ":")
print("Training Accuracy: " + str(training_accuracy) + " Validation Accuracy: " + str(valid_accuracy))
print("Training Loss: " + str(training_loss) + " Validation Loss: " + str(valid_loss))
print()
# close tensorboard writer
writer.flush()
writer.close()
```
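As a sanity check on the label-smoothing loss defined above, here is a small numpy-only sketch (illustrative, independent of the PyTorch code) of the target distribution it builds: with smoothing ε and K classes, the true class gets 1 − ε and each other class gets ε/(K − 1), so every row still sums to 1 and ε = 0 recovers plain one-hot cross-entropy targets.

```python
import numpy as np

def smoothed_targets(target, num_classes, smoothing=0.0):
    """Build the label-smoothing weight matrix used by the loss above:
    1 - smoothing on the true class, smoothing/(K-1) everywhere else."""
    weight = np.full((len(target), num_classes), smoothing / (num_classes - 1.0))
    weight[np.arange(len(target)), target] = 1.0 - smoothing
    return weight

w = smoothed_targets(np.array([0, 1]), num_classes=2, smoothing=0.2)
assert np.allclose(w.sum(axis=1), 1.0)          # rows are distributions
assert np.array_equal(smoothed_targets(np.array([1]), 3, 0.0),
                      [[0.0, 1.0, 0.0]])        # smoothing=0 -> one-hot
```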
```
import os
import math
import tarfile
import pandas as pd
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from random import shuffle
import torch
from torch.utils.data import Dataset, DataLoader, random_split
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms, utils, models
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
print("GPU available: {}".format(torch.cuda.is_available()))
```
# Lab 5: Deep Learning
Objectives
+ Practice using a basic CNN framework to analyze image data.
Dataset used: a facial-expression recognition dataset
## 1. Getting the Data
ModelArts pulls data from OBS, and the number of files should be kept small, so I packed the dataset into a tar file, uploaded it to OBS, and extract it after syncing the directory.
First, define some paths:
```
tar_file = './datasets/expression-recognition.tar'
data_root = './cache/ext_data'
image_path = data_root+'/data'
label_file = data_root+'/data_label.csv'
```
Define a helper function for extraction
```
def untar(tar_file, ext_path):
    # os.removedirs only deletes empty directories, so clear any previous
    # extraction with shutil.rmtree before re-creating the target path.
    import shutil
    if os.path.exists(ext_path):
        shutil.rmtree(ext_path)
    os.makedirs(ext_path)
    t = tarfile.open(tar_file)
    t.extractall(path=ext_path)
    t.close()
untar(tar_file, data_root)
```
Take a look at the dataset
```
def show_img(idx, df):
    if idx < 0 or idx > 19107:
        print("idx error")
        return
with Image.open(os.path.join(image_path, df.loc[idx].pic_name)) as img:
plt.imshow(img)
print("label : " + str(df.loc[idx].label))
df = pd.read_csv(label_file)
show_img(1, df)
```
### Splitting the dataset
Because the number of samples per class is highly imbalanced, I split the dataset manually.
Preparation: count the number of samples per label
```
n_labels = len(df.drop_duplicates(['label']))
bucket = [[] for i in range(n_labels)]  # one bucket of file names per label
label_nums = np.zeros(n_labels).astype(int)
for row in df.itertuples():
bucket[row.label].append(row.pic_name)
label_nums[row.label] += 1
label_nums = np.array(label_nums)
print("total size is ", label_nums)
```
Use 5% as the test set, 5% as the validation set, and the remaining 90% as the training set
```
test_size = dev_size = (label_nums * 0.05).astype(int)
train_size = label_nums - test_size - dev_size
print("train size is ", train_size)
print("dev size is ", dev_size)
print("test size is ", test_size)
```
Shuffle, then repackage into three DataFrames
```
for l in bucket:
shuffle(l)
# merge files of each label
train_list = [[],[]]
dev_list = [[],[]]
test_list = [[],[]]
for i in range(n_labels):
train_list[0] += bucket[i][:train_size[i]]
dev_list[0] += bucket[i][train_size[i]:train_size[i]+dev_size[i]]
test_list[0] += bucket[i][train_size[i]+dev_size[i]:train_size[i]+dev_size[i]+test_size[i]]
    train_list[1] += [i] * train_size[i]
    test_list[1] += [i] * test_size[i]
    dev_list[1] += [i] * dev_size[i]
assert len(train_list[0]) == len(train_list[1]) == np.sum(train_size)
assert len(test_list[0]) == len(test_list[1]) == np.sum(test_size)
assert len(dev_list[0]) == len(dev_list[1]) == np.sum(dev_size)
# turn list into ndarray
train_array = np.array(train_list).T
dev_array = np.array(dev_list).T
test_array = np.array(test_list).T
# create DataFrames
train_df = pd.DataFrame(train_array, columns=['pic_name','label'])
dev_df = pd.DataFrame(dev_array, columns=['pic_name','label'])
test_df = pd.DataFrame(test_array, columns=['pic_name','label'])
train_df.label = train_df.label.astype('int')
test_df.label = test_df.label.astype('int')
dev_df.label = dev_df.label.astype('int')
```
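The per-label bookkeeping above can be compressed into one helper. Below is a minimal, self-contained sketch (synthetic labels and a hypothetical `stratified_split` helper, not code from this notebook) showing that slicing each label's shuffled bucket keeps every class represented in all three splits, even with heavy imbalance:

```python
from collections import defaultdict
from random import Random

def stratified_split(names, labels, frac_dev=0.05, frac_test=0.05, seed=42):
    """Shuffle each label's items, then slice off dev/test fractions per label."""
    buckets = defaultdict(list)
    for name, label in zip(names, labels):
        buckets[label].append(name)
    rng = Random(seed)
    train, dev, test = [], [], []
    for label, items in buckets.items():
        rng.shuffle(items)
        n_dev = int(len(items) * frac_dev)
        n_test = int(len(items) * frac_test)
        dev += [(n, label) for n in items[:n_dev]]
        test += [(n, label) for n in items[n_dev:n_dev + n_test]]
        train += [(n, label) for n in items[n_dev + n_test:]]
    return train, dev, test

labels = [0] * 1000 + [1] * 100                  # deliberately imbalanced
names = ["img_{}.png".format(i) for i in range(len(labels))]
train, dev, test = stratified_split(names, labels)
assert len(dev) == 50 + 5 and len(test) == 50 + 5   # 5% of each class
assert len(train) + len(dev) + len(test) == len(labels)
```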
## 2. Writing the Dataset
First subclass the abstract `Dataset` class and override its methods:
1. In `__init__`, read the label file
2. In `__len__`, return the number of samples
3. In `__getitem__`, load the image and return the sample
4. If a transform is needed, apply it at the end of `__getitem__`
My Dataset yields both the image and the label.
```
class ERDataset(Dataset):
"""expression-recognition dataset"""
def __init__(self, df, image_path, transform=None):
self.df = df
self.image_path = image_path
self.transform = transform
def __len__(self):
return self.df.shape[0]
def __getitem__(self, idx):
img_file = os.path.join(self.image_path,
self.df.iloc[idx].pic_name)
image = Image.open(img_file)
label = self.df.iloc[idx].label
sample = image, label
if self.transform:
sample = self.transform(sample)
return sample
```
Instantiate the class and iterate over a few samples to take a look
```
dataset = ERDataset(train_df, image_path)
fig = plt.figure()
start_idx = 6000
for i in range(4):
image, label = dataset[i + start_idx]
ax = plt.subplot(1,4,i+1)
plt.tight_layout()
ax.set_title("#{},Label {}".format(i,label))
ax.axis('off')
plt.imshow(image, cmap="gray")
```
## 3. Data Preprocessing
Raw images cannot be fed into the model directly; they must be transformed first. Transforms also provide data augmentation.
The transforms to be used are:
1. Rescale: resize the image
2. RandomCrop: take a random crop
3. Mirror: flip the image horizontally
4. ToTensor: convert the image to a Torch tensor
First, implement these transform classes:
```
class Rescale(object):
    """Resize the image.
    Args:
        output_size (int): output height and width
    """
def __init__(self, output_size):
assert isinstance(output_size, (int))
self.output_size = output_size
def __call__(self, sample):
image, label = sample
output_size = self.output_size
image = transforms.Resize((output_size, output_size))(image)
return image, label
class RandomCrop(object):
    """Randomly crop the image.
    Args:
        output_size (int): size of the crop
    """
def __init__(self, output_size):
assert isinstance(output_size, (int))
self.output_size = output_size
def __call__(self, sample):
image, label = sample
image = transforms.RandomCrop(self.output_size)(image)
return image, label
class Flipper(object):
    """Mirror the image horizontally (always applied, p=1)."""
def __call__(self, sample):
image, label = sample
return transforms.RandomHorizontalFlip(1)(image), label
class ToTensor(object):
    """Convert the image to a torch tensor."""
def __call__(self, sample):
image, label = sample
return transforms.ToTensor()(image), torch.full((1,),label)
class Normalize(object):
    """Normalize the tensor.
    Args:
        mean, std (tuple): mean and standard deviation
    """
def __init__(self, mean, std):
assert isinstance(mean, (tuple))
assert isinstance(std, (tuple))
self.mean = mean
self.std = std
def __call__(self, sample):
image, label = sample
image = transforms.Normalize(self.mean, self.std)(image)
return image, label
```
As a quick example, compose some of the transforms above and instantiate the Dataset with them
```
transformed_dataset = ERDataset(train_df, image_path,
transform=transforms.Compose([
Rescale(48),
RandomCrop(30),
Flipper(),
ToTensor(),
Normalize((0.1307,), (0.3081,))
]))
for i in range(3):
image, label = transformed_dataset[i]
print("#{}".format(i), "shape: {}".format(image.shape),
"type: {}".format(type(image)), end='\n')
```
## 4. Training the Model
First, define the hyperparameters:
```
BATCH_SIZE = 128   # minibatch size
EPOCHS = 40        # number of epochs
NUM_WORKERS = 2    # number of CPU worker threads
TEST_SIZE = 250    # test set size
DEV_SIZE = 250     # validation set size
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
### 4.1 Preparing the data
Instantiate the Datasets and DataLoaders
```
# training dataset with preprocessing
train_dataset = ERDataset(train_df, image_path,
transform=transforms.Compose([
ToTensor(),
Normalize((0.1307,), (0.3081,))
]))
# validation dataset with preprocessing
dev_dataset = ERDataset(dev_df, image_path,
transform=transforms.Compose([
ToTensor(),
Normalize((0.1307,), (0.3081,))
]))
# test dataset with preprocessing
test_dataset = ERDataset(test_df, image_path,
transform=transforms.Compose([
ToTensor(),
Normalize((0.1307,), (0.3081,))
]))
# one DataLoader per split
train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS, shuffle=True)
dev_loader = DataLoader(dev_dataset, batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS, shuffle=True)
```
### 4.2 Building the model
Start with a simple CNN
```
class ConvNet(nn.Module):
def __init__(self):
super().__init__()
# input size = batch * 1 * 48 * 48
self.conv1 = nn.Sequential(
nn.Conv2d(1,10,3),
nn.BatchNorm2d(10)
)
self.conv2 = nn.Sequential(
nn.Conv2d(10,20,3),
nn.BatchNorm2d(20)
)
self.conv3 = nn.Sequential(
nn.Conv2d(20,40,3),
nn.BatchNorm2d(40)
)
self.conv4 = nn.Sequential(
nn.Conv2d(40,80,4),
nn.BatchNorm2d(80),
nn.Dropout(0.5)
)
self.conv5 = nn.Sequential(
nn.Conv2d(80,160,4),
nn.BatchNorm2d(160),
nn.Dropout(0.5)
)
self.conv6 = nn.Sequential(
nn.Conv2d(160,320,3),
nn.BatchNorm2d(320),
nn.Dropout(0.35)
)
self.fc1 = nn.Sequential(
nn.Linear(320*2*2,512),
nn.BatchNorm1d(512)
)
self.fc2 = nn.Sequential(
nn.Linear(512,256),
nn.BatchNorm1d(256)
)
self.fc3 = nn.Linear(256,7)
def forward(self, x):
batch_size = x.size(0)
# Conv layer1
out = self.conv1(x) # 10*46*46
out = F.relu(out)
# Conv layer2
out = self.conv2(out) # 20*44*44
out = F.relu(out)
# Conv layer3
out = self.conv3(out) # 40*42*42
out = F.relu(out)
out = F.max_pool2d(out, 2, 2) # 40*21*21
# Conv layer4
out = self.conv4(out) # 80*18*18
out = F.relu(out)
out = F.max_pool2d(out,2,2) # 80*9*9
# Conv layer5
out = self.conv5(out) # 160*6*6
out = F.relu(out)
# Conv layer6
out = self.conv6(out) # 320*4*4
out = F.relu(out)
out = F.avg_pool2d(out, 2, 2) # 320*2*2
# Flatten
out = out.view(batch_size, -1)
# FC layer1
out = self.fc1(out) # 512*1
out = F.relu(out)
# FC layer2
out = self.fc2(out) # 256*1
out = F.relu(out)
# FC layer3
out = self.fc3(out) # 7*1
out = F.log_softmax(out, dim=1)
return out
```
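The spatial sizes in the comments above follow from the valid-convolution formula out = (in − k) / stride + 1 and halving at each 2×2 pool. A small arithmetic sketch (sizes only, no tensors) tracing the 48×48 input down to the 320·2·2 flatten size:

```python
def conv_out(size, kernel, stride=1):
    # valid convolution, no padding
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    # 2x2 max/avg pooling
    return (size - kernel) // stride + 1

s = 48                 # input is 48x48
s = conv_out(s, 3)     # conv1 -> 46
s = conv_out(s, 3)     # conv2 -> 44
s = conv_out(s, 3)     # conv3 -> 42
s = pool_out(s)        # max pool -> 21
s = conv_out(s, 4)     # conv4 -> 18
s = pool_out(s)        # max pool -> 9
s = conv_out(s, 4)     # conv5 -> 6
s = conv_out(s, 3)     # conv6 -> 4
s = pool_out(s)        # avg pool -> 2
assert s == 2          # matches the 320*2*2 flatten size in fc1
```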
Instantiate a network, move it to the GPU, and choose an optimizer
```
model = ConvNet().to(DEVICE)
optimizer = optim.Adam(model.parameters())
```
Define the training function; `nll_loss` is used as the loss function
```
def train(model, device, loader, optimizer, epoch):
model.train()
print("\nEpoch {}:".format(epoch))
for batch_idx, (data, target) in enumerate(loader):
data, target = data.to(device), target.view(target.size(0)).long().to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
```
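Because the network ends in `log_softmax`, the `F.nll_loss` used here is mathematically the cross-entropy of the raw logits. A small numpy illustration of that identity (toy numbers, not notebook data):

```python
import numpy as np

logits = np.array([2.0, 0.5, -1.0])
target = 0

log_probs = logits - np.log(np.exp(logits).sum())   # log_softmax
nll = -log_probs[target]                            # nll_loss on log-probs

# Cross-entropy computed directly from the logits gives the same number.
cross_entropy = np.log(np.exp(logits).sum()) - logits[target]
assert np.isclose(nll, cross_entropy)
```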
Define the evaluation function
```
def test(model, device, loader, name):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in loader:
data, target = data.to(device), target.view(target.size(0)).long().to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item()
pred = output.max(1, keepdim=True)[1]
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(loader.dataset)
print('On {} set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)'.format(
name, test_loss, correct, len(loader.dataset),
100. * correct / len(loader.dataset)))
```
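Accumulating the loss with `reduction='sum'` and dividing by `len(loader.dataset)` gives the exact per-example average even when the last batch is short, whereas averaging per-batch means would over-weight the short batch. A toy numeric illustration:

```python
# 5 per-example losses split into batches of size 2, 2, 1
batches = [[0.1, 0.3], [0.2, 0.4], [1.0]]
losses = [l for b in batches for l in b]

exact = sum(losses) / len(losses)                        # sum-then-divide
per_batch = sum(sum(b) / len(b) for b in batches) / len(batches)

assert abs(exact - 0.4) < 1e-9
assert per_batch > exact       # the size-1 batch is over-weighted here
```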
### 4.3 Training
```
for epoch in range(EPOCHS):
train(model, DEVICE, train_loader, optimizer, epoch)
if (epoch+1) % 5 == 0:
test(model, DEVICE, train_loader, 'Train')
test(model, DEVICE, dev_loader, 'Dev')
```
After several rounds of tuning the network architecture, the best accuracy achieved is shown above. GoogLeNet reaches 85% accuracy consistently; improving the preprocessing and further tuning the architecture are the next steps.
<a href="https://colab.research.google.com/github/bkkaggle/pytorch-CycleGAN-and-pix2pix/blob/master/pix2pix.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Install
```
!git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
import os
os.chdir('pytorch-CycleGAN-and-pix2pix/')
!pip install -r requirements.txt
```
# Datasets
Download one of the official datasets with:
- `bash ./datasets/download_pix2pix_dataset.sh [cityscapes, night2day, edges2handbags, edges2shoes, facades, maps]`
Or use your own dataset by creating the appropriate folders and adding in the images. Follow the instructions [here](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/datasets.md#pix2pix-datasets).
```
!bash ./datasets/download_pix2pix_dataset.sh facades
```
# Pretrained models
Download one of the official pretrained models with:
- `bash ./scripts/download_pix2pix_model.sh [edges2shoes, sat2map, map2sat, facades_label2photo, and day2night]`
Or add your own pretrained model to `./checkpoints/{NAME}_pretrained/latest_net_G.pt`
```
!bash ./scripts/download_pix2pix_model.sh facades_label2photo
```
# Training
- `python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA`
Change the `--dataroot` and `--name` to your own dataset's path and model's name. Use `--gpu_ids 0,1,..` to train on multiple GPUs and `--batch_size` to change the batch size. Add `--direction BtoA` if you want to train a model to transform from class B to A.
```
!python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA --use_wandb
```
# Testing
- `python test.py --dataroot ./datasets/facades --direction BtoA --model pix2pix --name facades_pix2pix`
Change the `--dataroot`, `--name`, and `--direction` to be consistent with your trained model's configuration and how you want to transform images.
> from https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix:
> Note that we specified --direction BtoA as Facades dataset's A to B direction is photos to labels.
> If you would like to apply a pre-trained model to a collection of input images (rather than image pairs), please use --model test option. See ./scripts/test_single.sh for how to apply a model to Facade label maps (stored in the directory facades/testB).
> See a list of currently available models at ./scripts/download_pix2pix_model.sh
```
!ls checkpoints/
!python test.py --dataroot ./datasets/facades --direction BtoA --model pix2pix --name facades_label2photo_pretrained --use_wandb
```
# Visualize
```
import matplotlib.pyplot as plt

# Call plt.show() after each image; otherwise only the last one is rendered.
img = plt.imread('./results/facades_label2photo_pretrained/test_latest/images/100_fake_B.png')
plt.imshow(img)
plt.show()

img = plt.imread('./results/facades_label2photo_pretrained/test_latest/images/100_real_A.png')
plt.imshow(img)
plt.show()

img = plt.imread('./results/facades_label2photo_pretrained/test_latest/images/100_real_B.png')
plt.imshow(img)
plt.show()
```
```
import sys
sys.path.append("../")
import numpy as np
import matplotlib.pyplot as plt
import os
def get_length_counts(path):
counts = []
with open(path, "r") as f:
for line in f:
label, sentence = line.split(" ", 1)
            counts.append(len(sentence.split()))  # word count, matching the axis labels below
counts = np.array(counts)
return counts
europarl_base_path = "/home/david/Programming/data/WMT/europarl"
train_path = os.path.join(europarl_base_path, "txt_noxml/europarl.tokenized.all")
test_path = os.path.join(europarl_base_path, "test/europarl.tokenized.test")
counts = get_length_counts(train_path)
#print(np.histogram(counts, bins="auto"))
plt.title("Histogram of Sentence Lengths (Train Set)")
plt.hist(counts, bins="auto")
plt.xlabel("Word Count")
plt.ylabel("Number of Sentences")
plt.show()
print("min: {}, median: {}, max: {}".format(counts.min(), np.median(counts), counts.max()))
counts = get_length_counts(test_path)
plt.title("Histogram of Sentence Lengths (Test Set)")
plt.hist(counts, bins="auto")
plt.xlabel("Word Count")
plt.ylabel("Number of Sentences")
plt.show()
print("min: {}, median: {}, max: {}".format(counts.min(), np.median(counts), counts.max()))
def get_words(path):
words = set([])
with open(path, "r") as f:
for line in f:
label, sentence = line.split(" ", 1)
words.update(sentence.split(" "))
return words
words_train = get_words(train_path)
words_test = get_words(test_path)
print("Number of words in test set but not in train set: {}".format(len(words_test - words_train)))
from collections import Counter
def get_char_counts(path, charset_full = None):
characters = {"overall": Counter()}
with open(path, "r") as f:
for line in f:
label, sentence = line.split(" ", 1)
char_count = Counter([c for c in sentence.replace(" ", "")])
characters["overall"].update(char_count)
if label not in characters:
characters[label] = Counter()
characters[label].update(char_count)
if not charset_full:
charset_full = {c: 0 for c in characters["overall"].keys()}
for label in characters.keys():
characters[label].update(charset_full)
return characters, charset_full
def plot_characters(char_count1, char_count2, titles = ["Train", "Test"]):
def _create_plot(lang_counts, lang_name, titles=titles):
# sort character counts by character
_, counts1 = zip(*sorted([(k, v) for k, v in lang_counts[0].items()]))
_, counts2 = zip(*sorted([(k, v) for k, v in lang_counts[1].items()]))
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(12,4))
fig.suptitle("Character Counts ({})".format(lang_name))
# create plots for counts
for i, cnt in enumerate([counts1, counts2]):
axs[i].bar(range(len(cnt)), cnt)
axs[i].tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False) # labels along the bottom edge are off
axs[i].set_yscale("log")
axs[i].set_ylabel("# of Characters")
plt.show()
# plot overall
_create_plot([char_count1["overall"], char_count2["overall"]], "overview")
# plot languages sorted by language
for counts1, counts2 in zip(sorted(list(char_count1.items())), sorted(list(char_count2.items()))):
if counts1[0] != "overall": _create_plot([counts1[1], counts2[1]], counts1[0].replace("__label__", ""))
char_count_train, charset_train = get_char_counts(train_path)
char_count_test, _ = get_char_counts(test_path, charset_full = charset_train)
plot_characters(char_count_train, char_count_test)
```
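A note on computing medians: indexing the middle element of an *unsorted* array is not the median — `np.median` sorts internally. A minimal illustration with toy counts:

```python
import numpy as np

counts = np.array([30, 5, 100, 7, 9])        # unsorted sentence lengths (toy data)

middle_unsorted = counts[len(counts) // 2]   # whatever happens to sit in the middle
true_median = np.median(counts)              # sorts first: [5, 7, 9, 30, 100] -> 9

assert middle_unsorted == 100
assert true_median == 9.0
```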
```
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
import json
from collections.abc import Mapping  # moved out of `collections` in Python 3.3, removed in 3.10
import os
import numpy as np
```
The 18 identifiers that make health information PHI are:
- Names
- Dates, except year
- Telephone numbers
- Geographic data
- FAX numbers
- Social Security numbers
- Email addresses
- Medical record numbers
- Account numbers
- Health plan beneficiary numbers
- Certificate/license numbers
- Vehicle identifiers and serial numbers including license plates
- Web URLs
- Device identifiers and serial numbers
- Internet protocol addresses
- Full face photos and comparable images
- Biometric identifiers (i.e. retinal scan, fingerprints)
- Any unique identifying number or code
```
deid_dict = {
"de-identification_root_concept": ["name", "contact_details", "healthcare_identifier", "date"],
"name": ["fore_name", "surname","initials"],
"contact_details": ["address", "telephone_number", "email", "identification","url"],
"address": ["address_line", "postcode"],
"identification": ["passport_number", "driving_licence_number","national_insurance"],
"healthcare_identifier": ["nhs_number", "hospital_number", "emergency_department_number", "lab_number","gmc_number"],
"date": ["date_of_birth"]
}
print(deid_dict.keys())
```
#### Meta_annotations will hold the contextual information of the concept.
"Subject": ["Patient", "Relative", "Healthcare provider"] (unselected meta_ann will mean "other/ N/A")
```
deid_cui_dict = {
"de-identification root concept": "r0000",
"name": "n1000",
"fore name": "n1100",
"surname": "n1200",
"initials":"n1300",
"contact details": "c2000",
"address": "c2100",
"address line": "c2110",
"postcode": "c2120",
"telephone number":"c2200",
"email": "c2300",
"identification": "c2400",
"passport number": "c2410",
    "driving licence number": "c2420",
"national insurance": "c2430",
"healthcare identifier": "h3000",
"nhs number": "h3100",
"hospital number": "h3200",
    "emergency department number": "h3300",
"lab number": "h3400",
"gmc number":"h3500",
"date": "d4000",
    "date of birth": "d4100"
}
len(deid_cui_dict)
deid_fig = {
"de-identification root concept": {
"name": {
"fore name": 1100,
"surname": 1200,
"initials":1300},
"contact details": {
"address": {
"address line": 2110,
"postcode": 2120},
"telephone number": 2200,
"email": 2300,
"identification \n number": {
"passport": 2410,
"driving licence": 2420,
"national insurance": 2430}},
"healthcare identifier": {
"nhs \n number": 3100,
"hospital \n number": 3200,
"emergency \n department number": 3300,
"lab number": 3400,
"gmc number": 3500},
"date": {
"date of birth": 4100}
}}
```
# Visualisation of terminology
```
import os
import json
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
from collections.abc import Mapping

G = nx.DiGraph()
q = list(deid_fig.items())
while q:
v, d = q.pop()
for nv, nd in d.items():
G.add_edge(v, nv)
if isinstance(nd, Mapping):
q.append((nv, nd))
np.random.seed(42)
def hierarchy_pos(G, root, levels=None, width=1., height=1.):
    '''If there is a cycle that is reachable from root, then this will cause infinite recursion.
G: the graph
root: the root node
levels: a dictionary
key: level number (starting from 0)
value: number of nodes in this level
width: horizontal space allocated for drawing
height: vertical space allocated for drawing'''
TOTAL = "total"
CURRENT = "current"
def make_levels(levels, node=root, currentLevel=0, parent=None):
"""Compute the number of nodes for each level
"""
if not currentLevel in levels:
levels[currentLevel] = {TOTAL : 0, CURRENT : 0}
levels[currentLevel][TOTAL] += 1
neighbors = G.neighbors(node)
for neighbor in neighbors:
if not neighbor == parent:
levels = make_levels(levels, neighbor, currentLevel + 1, node)
return levels
def make_pos(pos, node=root, currentLevel=0, parent=None, vert_loc=0):
dx = 1/levels[currentLevel][TOTAL]
left = dx/2
pos[node] = ((left + dx*levels[currentLevel][CURRENT])*width, vert_loc)
levels[currentLevel][CURRENT] += 1
neighbors = G.neighbors(node)
for neighbor in neighbors:
if not neighbor == parent:
pos = make_pos(pos, neighbor, currentLevel + 1, node, vert_loc-vert_gap)
return pos
if levels is None:
levels = make_levels({})
else:
levels = {l:{TOTAL: levels[l], CURRENT:0} for l in levels}
vert_gap = height / (max([l for l in levels])+1)
return make_pos({})
pos = hierarchy_pos(G, 'de-identification root concept')
plt.figure(3, figsize=(40,15))
plt.title("De-Identification Terminology", fontsize=60)
nx.draw(G, pos=pos, with_labels=True, node_size=100,font_size=20,font_weight="bold", node_color="skyblue", node_shape="s", alpha=0.8, linewidths=50)
plt.tight_layout()
plt.savefig("DeID_terminology.png")
# TODO: Review and delete this section if not necessary
# https://python-graph-gallery.com/321-custom-networkx-graph-appearance/
print(os.getcwd())
df_diagram = pd.read_csv("cbd_diagram.csv")
print(df_diagram)
# Build your graph
G=nx.from_pandas_edgelist(df_diagram, 'from', 'to')#, edge_attr=True)
# Graph with Custom nodes:
plt.figure(figsize=(8,8))
pos = nx.spring_layout(G)
nx.draw(G, pos=pos, with_labels=True, node_size=60, font_size=8, node_color="skyblue", node_shape="s", alpha=0.5, linewidths=40)
plt.show()
# TODO: Create a JSON file with the terminology structure with only the Fully specified names (FSN)
deid_json = json.dumps(deid_dict, sort_keys=True)
print(deid_json)
# TODO: create a CUI convention which reflects terminology structure
# Each code consists of 2 parts: a letter for the branch the term belongs to (either the root or the 1st child term),
# followed by a 4-digit code linking the term into the rest of the dictionary: [first parent][second parent][third parent][fourth parent]
deid_cui_dict = {
"de-identification_root_concept": "r0000",
"name": "n1000",
"contact_details": "c2000",
"healthcare_identifier": "h3000",
"date": "d4000",
"fore_name": "n1100",
"surname": "n1200",
"initials":"n1300",
"address": "c2100",
"address_line": "c2110",
"postcode": "c2120",
"telephone_number":"c2200",
"email": "c2300",
"identification": "c2400",
"passport_number": "c2410",
"driving_licence_number": "c2420",
"national_insurance": "c2430",
"nhs_number": "h3100",
"hospital_number": "h3200",
"emergency_department_number": "h3300",
"lab_number": "h3400",
"gmc_number":"h3500",
"date_of_birth": "h4100",
"url": "c2500"
}
```
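The CUI convention above can be sanity-checked with a small regex — a sketch only; the one-lowercase-letter-plus-four-digits pattern is an assumption read off the codes above, not part of the original pipeline:

```python
import re

# A sample of the CUI codes defined above (letter prefix + four digits).
sample_cuis = {
    "de-identification_root_concept": "r0000",
    "fore_name": "n1100",
    "postcode": "c2120",
    "nhs_number": "h3100",
}

# Assumed convention: one lowercase letter followed by exactly four digits.
cui_pattern = re.compile(r"^[a-z]\d{4}$")

for concept, cui in sample_cuis.items():
    assert cui_pattern.match(cui), f"{concept} has a malformed CUI: {cui}"
print("All sampled CUIs match the assumed convention.")
```

Running a check like this before writing the CDB CSV catches codes that drift from the convention, such as a wrong branch letter.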
# Create a CDB CSV format file following the structure:
|cui|str|onto|tty|tui|sty|desc|
|--|--|--|--|--|--|--|
__cui__ - The concept unique identifier, this is simply an ID in your database
__str__ - String/Name of that concept. It is important to write all possible names and abbreviations for a concept of interest.
__onto__ - Source ontology, e.g. HPO, SNOMED, HPC,...
__tty__ - Term type, e.g. PN - Primary Name. Primary names are important and I would always recommend adding this field when creating your CDB, as it distinguishes primary names from synonyms.
__tui__ - Semantic type identifier - A unique identifier code for each semantic type.
__sty__ - Semantic type - AKA fully specified name of top level concept group
__desc__ - Description of this concept
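As an illustration of that layout, a single row for the NHS number concept might look like the following — the values are examples only, echoing the dictionaries built in this notebook:

```python
import pandas as pd

# One illustrative CDB row following the |cui|str|onto|tty|tui|sty|desc| layout.
example_row = pd.DataFrame([{
    "cui": "H3100",        # concept unique identifier
    "str": "nhs number",   # string/name of the concept
    "onto": "cat_anon",    # source ontology
    "tty": "PN",           # term type: primary name
    "tui": "",             # semantic type identifier (left blank here)
    "sty": "",             # semantic type (left blank here)
    "desc": "nhs numbers", # description of the concept
}])
print(list(example_row.columns))
```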
```
# TODO: enrich the terminology with synonyms
# TODO: Create a csv file with each concept and concept unique identifier
# dictionary of root concepts for the de-id cui database
# dictionary of descriptions for the de-id cui database
deid_desc = {
"de-identification_root_concept": "root concept of de-identification",
"name": "surname and forename",
"contact_details": "non hospital identification and contact details",
"healthcare_identifier": "hospital derived ID",
"date": "personal dates",
"fore_name": "given name including middle names (each name, e.g. first and middle name, is treated as a separate concept)",
"surname": "all surnames",
"initials": "all initials (initials that aren't seperated by a space are treated as a single concept, e.g. EJ)",
"address": "address and postcode (including a comma or full stop at the end of the address string)",
"address_line": "all address line items including city and country",
"postcode": "all postcodes",
"telephone_number":"telephone numbers both mobile and landline",
"email": "email addresses",
"identification": "non hospital identification",
"passport_number": "passport number",
"driving_licence_number": "driving licence",
"url":"all websites associated with an individual",
"national_insurance": "all national insurance numbers",
"nhs_number": "nhs numbers",
"hospital_number": "hospital number from kings college hospital and other trusts",
"emergency_department_number": "number given by kings college hospital emergency number",
"lab_number":"all lab ids used to identify samples",
"gmc_number":"General Medical Council (GMC) number",
"date_of_birth": "date of birth"
}
df = pd.DataFrame(deid_cui_dict.items(), columns=['str', 'cui'])
df["onto"] = "cat_anon"
df["tty"] ="PN"
df["tui"] = ""
df["sty"] = ""
#sty_df = d.DataFrame(deid_sty.items(), columns=['str', 'desc'])
desc_df = pd.DataFrame(deid_desc.items(), columns=['str', 'desc'])
#df = df.merge(sty_df, left_on="str",right_on="str")
df = df.merge(desc_df, left_on="str",right_on="str")
df['cui'] = df['cui'].str.upper()
print(df)
# Note: it was decided not to use any synthesised numbers or names extracted from online websites (there were issues with spaces)
df.to_csv("./cui_05082021.csv", index=False)
# read in list
def read_list(name, encode):
with open(name, encoding = encode) as file:
lines = [line.rstrip('\n') for line in file]
return lines
# read in list of first names
def list_2_df(lst,cui, sty):
"""
    Takes a list of words, the cui (concept) and sty (root concept) and turns this into a cat_anon dataframe
"""
temp_df = pd.DataFrame({'str':lst})
temp_df["cui"] = cui
temp_df["onto"] = "cat_anon"
temp_df["tty"] ="SN"
temp_df["tui"] = ""
    temp_df["sty"] = sty
temp_df["desc"] = ""
return temp_df
first_names = read_list("cleaned_output/cleaned_first_name.txt","UTF-8")
last_names = read_list("cleaned_output/cleaned_last_name.txt","UTF-8")
hosp_num = read_list("cleaned_output/cleaned_last_name.txt","UTF-8")
nhs_num = read_list("cleaned_output/nhs_numbers.txt","UTF-8")
postcodes = read_list("cleaned_output/postcodes.txt","UTF-8")
first_df = list_2_df(first_names, "n1100", "")
last_df = list_2_df(last_names, "n1200", "")
hosp_df = list_2_df(hosp_num, "h3200", "")
nhs_df = list_2_df(nhs_num, "h3100", "")
postcode_df = list_2_df(postcodes, "c2120", "")
print(postcode_df.head())
# output broken up to make the files smaller
df_list1 = [df, first_df, last_df]
df_list2 = [hosp_df, postcode_df]
df_list3 = [nhs_df]
#df_list4 = [landline_df, mobile_df]
output_df1 = pd.concat(df_list1)
output_df2 = pd.concat(df_list2)
output_df3 = pd.concat(df_list3)
#output_df4 = pd.concat(df_list4)
output_df1.to_csv("cui1.csv", index=False)
output_df2.to_csv("cui2.csv", index=False)
output_df3.to_csv("cui3.csv", index=False)
#output_df4.to_csv("cui4.csv", index=False)
# full output
df_list = [df, first_df, last_df,hosp_df, postcode_df,nhs_df]
full_output = pd.concat(df_list)
full_output.to_csv("full_cui_medcat040221.csv", index=False)
```
```
import coremltools
%load_ext autoreload
%autoreload 2
```
## Load pre-trained keras model
```
from keras.models import load_model
# model = load_model('./model/nn4.small2.lrn.h5')
import tensorflow as tf
from keras.utils import CustomObjectScope
with CustomObjectScope({'tf': tf}):
model = load_model('./model/nn4.small2.lrn.h5' )
import coremltools
coreml_model = coremltools.converters.keras.convert(
model, input_names='data', image_input_names='data', image_scale=1/255.0, output_names='output')
print(coreml_model)
# Read a sample image as input to test the model
import cv2
import numpy as np
img = cv2.imread('./data/dlib-affine-sz/Aaron_Eckhart/Aaron_Eckhart_0001.png', 1)
img = img[...,::-1]
img = np.around(np.transpose(img, (2, 1, 0))/255.0, decimals=12)
# x_train = np.array(img)
# y = coreml_model.predict({'image': x_train})
img = np.transpose(img, (1, 2, 0))
x_train = np.array([img])
y = model.predict_on_batch(x_train)
print(y)
# LFW TEST
import lfw
import os
import numpy as np
import math
import facenet
import time
import tensorflow as tf
lfw_pairs='data/pairs.txt'
lfw_dir='data/dlib-affine-sz'
lfw_file_ext='png'
lfw_nrof_folds=10
image_size=96
batch_size=100
# Read the file containing the pairs used for testing
pairs = lfw.read_pairs(os.path.expanduser(lfw_pairs))
# Get the paths for the corresponding images
paths, actual_issame = lfw.get_paths(os.path.expanduser(lfw_dir), pairs, lfw_file_ext)
embedding_size=128
nrof_images = len(paths)
nrof_batches = int(math.ceil(1.0*nrof_images / batch_size))
emb_array = np.zeros((nrof_images, embedding_size))
# print paths
for i in range(nrof_batches):
start_index = i*batch_size
end_index = min((i+1)*batch_size, nrof_images)
paths_batch = paths[start_index:end_index]
images = facenet.load_data(paths_batch, False, False, image_size)
images = np.transpose(images, (0,3,1,2))
t0 = time.time()
y = []
for img in images:
tmp = coreml_model.predict({'input1': img})
# print tmp.output1
y.append(tmp['output1'])
# y = model.predict_on_batch(images)
emb_array[start_index:end_index,:] = y
# print('y', y)
# print('emb', emb_array[start_index:end_index,:])
t1 = time.time()
print('batch: ', i, ' time: ', t1-t0)
from sklearn import metrics
from scipy.optimize import brentq
from scipy import interpolate
tpr, fpr, accuracy, val, val_std, far = lfw.evaluate(emb_array,
actual_issame, nrof_folds=lfw_nrof_folds)
print('Accuracy: %1.3f+-%1.3f' % (np.mean(accuracy), np.std(accuracy)))
print('Validation rate: %2.5f+-%2.5f @ FAR=%2.5f' % (val, val_std, far))
auc = metrics.auc(fpr, tpr)
print('Area Under Curve (AUC): %1.3f' % auc)
eer = brentq(lambda x: 1. - x - interpolate.interp1d(fpr, tpr)(x), 0., 1.)
print('Equal Error Rate (EER): %1.3f' % eer)
coreml_model.save('./model/OpenFace.mlmodel')
```
As the second step of this tutorial, we will train an image model. This step can be run in parallel with Step 3 (training the text model).
This notebook was run on an AWS p3.2xlarge instance.
# Octopod Image Model Training Pipeline
```
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../../')
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.utils.data import Dataset, DataLoader
```
Note: for images, we use the MultiInputMultiTaskLearner since we will send in the full image and a center crop of the image.
```
from octopod import MultiInputMultiTaskLearner, MultiDatasetLoader
from octopod.vision.dataset import OctopodImageDataset
from octopod.vision.models import ResnetForMultiTaskClassification
```
## Load in train and validation datasets
First we load in the csv's we created in Step 1.
Remember to change the path if you stored your data somewhere other than the default.
```
TRAIN_COLOR_DF = pd.read_csv('data/color_swatches/color_train.csv')
VALID_COLOR_DF = pd.read_csv('data/color_swatches/color_valid.csv')
TRAIN_PATTERN_DF = pd.read_csv('data/pattern_swatches/pattern_train.csv')
VALID_PATTERN_DF = pd.read_csv('data/pattern_swatches/pattern_valid.csv')
```
You will most likely have to alter this to whatever batch size fits on your machine.
```
batch_size = 64
```
We use the `OctopodImageDataset` class to create train and valid datasets for each task.
Check out the documentation for information about the transformations.
```
color_train_dataset = OctopodImageDataset(
x=TRAIN_COLOR_DF['image_locs'],
y=TRAIN_COLOR_DF['simple_color_cat'],
transform='train',
crop_transform='train'
)
color_valid_dataset = OctopodImageDataset(
x=VALID_COLOR_DF['image_locs'],
y=VALID_COLOR_DF['simple_color_cat'],
transform='val',
crop_transform='val'
)
pattern_train_dataset = OctopodImageDataset(
x=TRAIN_PATTERN_DF['image_locs'],
y=TRAIN_PATTERN_DF['pattern_type_cat'],
transform='train',
crop_transform='train'
)
pattern_valid_dataset = OctopodImageDataset(
x=VALID_PATTERN_DF['image_locs'],
y=VALID_PATTERN_DF['pattern_type_cat'],
transform='val',
crop_transform='val'
)
```
We then put the datasets into a dictionary of dataloaders.
Each task is a key.
```
train_dataloaders_dict = {
'color': DataLoader(color_train_dataset, batch_size=batch_size, shuffle=True, num_workers=2),
'pattern': DataLoader(pattern_train_dataset, batch_size=batch_size, shuffle=True, num_workers=2),
}
valid_dataloaders_dict = {
'color': DataLoader(color_valid_dataset, batch_size=batch_size, shuffle=False, num_workers=8),
'pattern': DataLoader(pattern_valid_dataset, batch_size=batch_size, shuffle=False, num_workers=8),
}
```
The dictionary of dataloaders is then put into an instance of the Octopod `MultiDatasetLoader` class.
```
TrainLoader = MultiDatasetLoader(loader_dict=train_dataloaders_dict)
len(TrainLoader)
ValidLoader = MultiDatasetLoader(loader_dict=valid_dataloaders_dict, shuffle=False)
len(ValidLoader)
```
We need to create a dictionary of the tasks and the number of unique values so that we can create our model. This is a `new_task_dict` because we are training new tasks from scratch, but we could potentially have a mix of new and pretrained tasks. See the Octopod documentation for more details.
```
new_task_dict = {
'color': TRAIN_COLOR_DF['simple_color_cat'].nunique(),
'pattern': TRAIN_PATTERN_DF['pattern_type_cat'].nunique(),
}
new_task_dict
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
```
Create Model and Learner
===
These are completely new tasks so we use `new_task_dict`. If we had already trained a model on some tasks, we would use `pretrained_task_dict`.
And since these are new tasks, we set `load_pretrained_resnet=True` to use the weights from Torch.
```
model = ResnetForMultiTaskClassification(
new_task_dict=new_task_dict,
load_pretrained_resnet=True
)
```
You will likely need to explore different values in this section to find some that work for your particular model.
```
lr_last = 1e-2
lr_main = 1e-4
optimizer = optim.Adam([
{'params': model.resnet.parameters(), 'lr': lr_main},
{'params': model.dense_layers.parameters(), 'lr': lr_last},
{'params': model.new_classifiers.parameters(), 'lr': lr_last},
])
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size= 4, gamma= 0.1)
loss_function_dict = {'color': 'categorical_cross_entropy', 'pattern': 'categorical_cross_entropy'}
metric_function_dict = {'color': 'multi_class_acc', 'pattern': 'multi_class_acc'}
learn = MultiInputMultiTaskLearner(model, TrainLoader, ValidLoader, new_task_dict, loss_function_dict, metric_function_dict)
```
Train model
===
As your model trains, you can see some output of how the model is performing overall and how it is doing on each individual task.
```
learn.fit(
num_epochs=10,
scheduler=exp_lr_scheduler,
step_scheduler_on_batch=False,
optimizer=optimizer,
device=device,
best_model=True
)
```
If you run the above cell and see an error like:
```python
RuntimeError: DataLoader worker (pid X) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
```
Try lowering the `num_workers` to `0` for each `DataLoader` in `train_dataloaders_dict` and `valid_dataloaders_dict`.
Validate model
===
We provide a method on the learner called `get_val_preds`, which makes predictions on the validation data. You can then use this to analyze your model's performance in more detail.
```
pred_dict = learn.get_val_preds(device)
pred_dict
```
Save/Export Model
===
Once we are happy with our training we can save (or export) our model, using the `save` method (or `export`).
See the docs for the difference between `save` and `export`.
We will need the saved model later to use in the ensemble model
```
model.save(folder='models/', model_id='IMAGE_MODEL1')
model.export(folder='models/', model_id='IMAGE_MODEL1')
```
Now that we have an image model, we can move to `Step3_train_text_model`.
# Improving Data Quality
**Learning Objectives**
1. Resolve missing values
2. Convert the Date feature column to a datetime format
3. Rename a feature column, remove a value from a feature column
4. Create one-hot encoding features
5. Understand temporal feature conversions
## Introduction
Recall that machine learning models can only consume numeric data, so categorical data must be encoded numerically, e.g. as "1"s and "0"s. Data is said to be "messy" or "untidy" if it is missing attribute values, contains noise or outliers, has duplicates, wrong data, or inconsistent upper/lower case column names, and is essentially not ready for ingestion by a machine learning algorithm.
This notebook presents and solves some of the most common issues of "untidy" data. Note that different problems require different methods, many of which are beyond the scope of this notebook.
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/launching_into_ml/labs/improve_data_quality.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1 || pip install tensorflow==2.1
```
Start by importing the necessary libraries.
### Import Libraries
```
import os
import pandas as pd # First, we'll import Pandas, a data processing and CSV file I/O library
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
### Load the Dataset
The dataset is based on California's [Vehicle Fuel Type Count by Zip Code](https://data.ca.gov/dataset/vehicle-fuel-type-count-by-zip-code) report. The dataset has been modified to make the data "untidy" and is thus a synthetic representation that can be used for learning purposes.
Let's download the raw .csv data by copying the data from a cloud storage bucket.
```
if not os.path.isdir("../data/transport"):
os.makedirs("../data/transport")
!gsutil cp gs://cloud-training-demos/feat_eng/transport/untidy_vehicle_data.csv ../data/transport
!ls -l ../data/transport
```
### Read Dataset into a Pandas DataFrame
Next, let's read in the dataset just copied from the cloud storage bucket and create a Pandas DataFrame. We also add a Pandas .head() function to show you the top 5 rows of data in the DataFrame. Head() and Tail() are "best-practice" functions used to investigate datasets.
```
df_transport = pd.read_csv('../data/transport/untidy_vehicle_data.csv')
df_transport.head() # Output the first five rows.
```
### DataFrame Column Data Types
DataFrames may have heterogeneous or "mixed" data types, that is, some columns are numbers, some are strings, and some are dates, etc. Because CSV files do not contain information on what data types are contained in each column, Pandas infers the data types when loading the data, e.g. if a column contains only numbers, Pandas will set that column’s data type to numeric: integer or float.
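For instance, loading a tiny CSV from an in-memory string shows this inference at work (a standalone sketch, not our vehicle dataset):

```python
import io
import pandas as pd

csv_text = "make,vehicles\nTOYOTA,120\nHONDA,85\n"
tiny = pd.read_csv(io.StringIO(csv_text))

# Pandas infers 'make' as object (string) and 'vehicles' as int64.
print(tiny.dtypes)
```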
Run the next cell to see information on the DataFrame.
```
df_transport.info()
```
From what the .info() function shows us, we have six string objects and one float object. Let's print out the first and last five rows of each column. We can definitely see more of the "string" object values now!
```
print(df_transport)
```
### Summary Statistics
At this point, we have only one column which contains a numerical value (e.g. Vehicles). For features which contain numerical values, we are often interested in various statistical measures relating to those values. We can use .describe() to see some summary statistics for the numeric fields in our dataframe. Note that because we only have one numeric feature, we see only one summary statistic - for now.
```
df_transport.describe()
```
Let's investigate a bit more of our data by using the .groupby() function.
```
grouped_data = df_transport.groupby(['Zip Code','Model Year','Fuel','Make','Light_Duty','Vehicles'])
df_transport.groupby('Fuel').first() # Get the first entry for each fuel type.
```
#### Checking for Missing Values
Missing values adversely impact data quality, as they can lead the machine learning model to make inaccurate inferences about the data. Missing values can be the result of numerous factors, e.g. "bits" lost during streaming transmission, data entry, or perhaps a user forgot to fill in a field. Note that Pandas recognizes both empty cells and “NaN” types as missing values.
Let's show the null values for all features in the DataFrame.
```
df_transport.isnull().sum()
```
To see a sampling of which values are missing, enter the feature column name. You'll notice that "False" and "True" correspond to the presence or absence of a value by index number.
```
print (df_transport['Date'])
print (df_transport['Date'].isnull())
print (df_transport['Make'])
print (df_transport['Make'].isnull())
print (df_transport['Model Year'])
print (df_transport['Model Year'].isnull())
```
### What can we deduce about the data at this point?
First, let's summarize our data by rows, columns, features, unique values, and missing values.
```
print ("Rows : " ,df_transport.shape[0])
print ("Columns : " ,df_transport.shape[1])
print ("\nFeatures : \n" ,df_transport.columns.tolist())
print ("\nUnique values : \n",df_transport.nunique())
print ("\nMissing values : ", df_transport.isnull().sum().values.sum())
```
Let's see the data again -- this time the last five rows in the dataset.
```
df_transport.tail()
```
### What Are Our Data Quality Issues?
1. **Data Quality Issue #1**:
> **Missing Values**:
Each feature column has multiple missing values. In fact, we have a total of 18 missing values.
2. **Data Quality Issue #2**:
> **Date DataType**: Date is shown as an "object" datatype and should be a datetime. In addition, Date is in one column. Our business requirement is to see the Date parsed out to year, month, and day.
3. **Data Quality Issue #3**:
> **Model Year**: We are only interested in years greater than 2006, not "<2006".
4. **Data Quality Issue #4**:
> **Categorical Columns**: The feature column "Light_Duty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. In addition, we need to "one-hot encode the remaining "string"/"object" columns.
5. **Data Quality Issue #5**:
> **Temporal Features**: How do we handle year, month, and day?
#### Data Quality Issue #1:
##### Resolving Missing Values
Most algorithms do not accept missing values. Yet, when we see missing values in our dataset, there is always a tendency to just "drop all the rows" with missing values. Although Pandas will fill in the blank space with "NaN", we should "handle" them in some way.
While covering all the methods to handle missing values is beyond the scope of this lab, there are a few you should consider. For numeric columns, use the "mean" value to fill in missing numeric values. For categorical columns, use the "mode" (or most frequent value) to fill in missing categorical values.
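A quick sketch of both strategies on a toy frame (illustrative data only, not our vehicle dataset):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'vehicles': [10.0, np.nan, 30.0],
                    'fuel': ['Gasoline', None, 'Gasoline']})

# Numeric column: fill missing values with the mean.
toy['vehicles'] = toy['vehicles'].fillna(toy['vehicles'].mean())
# Categorical column: fill missing values with the mode (most frequent value).
toy['fuel'] = toy['fuel'].fillna(toy['fuel'].mode()[0])

print(toy.isnull().sum().sum())  # 0
```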
In this lab, we use the .apply and Lambda functions to fill every column with its own most frequent value. You'll learn more about Lambda functions later in the lab.
Let's check again for missing values by showing how many rows contain NaN values for each feature column.
```
# TODO 1a
df_transport.isnull().sum()
```
Run the cell to apply the lambda function.
```
# TODO 1b
df_transport = df_transport.apply(lambda x:x.fillna(x.value_counts().index[0]))
```
Let's check again for missing values.
```
# TODO 1c
df_transport.isnull().sum()
```
#### Data Quality Issue #2:
##### Convert the Date Feature Column to a Datetime Format
The date column is indeed shown as a string object. We can convert it to the datetime datatype with the to_datetime() function in Pandas.
```
# TODO 2a
df_transport['Date'] = pd.to_datetime(df_transport['Date'],
format='%m/%d/%Y')
# TODO 2b
df_transport.info() # Date is now converted.
```
Let's parse Date into three columns, e.g. year, month, and day.
```
df_transport['year'] = df_transport['Date'].dt.year
df_transport['month'] = df_transport['Date'].dt.month
df_transport['day'] = df_transport['Date'].dt.day
#df['hour'] = df['date'].dt.hour - you could use this if your date format included hour.
#df['minute'] = df['date'].dt.minute - you could use this if your date format included minute.
df_transport.info()
```
Next, let's confirm the Date parsing. This will also give us a another visualization of the data.
```
grouped_data = df_transport.groupby(['Make'])
df_transport.groupby('Fuel').first() # Get the first entry for each fuel type.
```
Now that we have the dates as integers, let's do some additional plotting.
```
plt.figure(figsize=(10,6))
sns.jointplot(x='month',y='Vehicles',data=df_transport)
#plt.title('Vehicles by Month')
```
#### Data Quality Issue #3:
##### Rename a Feature Column and Remove a Value.
Our feature columns have different "capitalizations" in their names, e.g. both upper and lower "case", and there are "spaces" in some of the column names. Also, we are only interested in years greater than 2006, not "<2006".
Let's remove all the spaces for feature columns by renaming them; we can also resolve the "case" problem too by making all the feature column names lower case.
```
# TODO 3a
df_transport.rename(columns = { 'Date': 'date', 'Zip Code':'zipcode', 'Model Year': 'modelyear', 'Fuel': 'fuel', 'Make': 'make', 'Light_Duty': 'lightduty', 'Vehicles': 'vehicles'}, inplace = True)
df_transport.head(2)
```
**Note:** Next we create a copy of the dataframe to avoid the "SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame" warning. Run the cell to remove the value '<2006' from the modelyear feature column.
```
# Here, we create a copy of the dataframe to avoid copy warning issues.
# TODO 3b
df = df_transport.loc[df_transport.modelyear != '<2006'].copy()
```
Next, confirm that the modelyear value '<2006' has been removed by doing a value count.
```
df['modelyear'].value_counts(0)
```
#### Data Quality Issue #4:
##### Handling Categorical Columns
The feature column "lightduty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. We need to convert the binary answers from strings of yes/no to integers of 1/0. There are various methods to achieve this. We will use the "apply" method with a lambda expression. Pandas. apply() takes a function and applies it to all values of a Pandas series.
##### What is a Lambda Function?
Typically, Python requires that you define a function using the def keyword. However, lambda functions are anonymous -- which means there is no need to name them. The most common use case for lambda functions is in code that requires a simple one-line function (e.g. lambdas only have a single expression).
As you progress through the Course Specialization, you will see many examples where lambda functions are being used. Now is a good time to become familiar with them.
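As a minimal, self-contained illustration, the two definitions below are equivalent:

```python
# A named function defined with the def keyword...
def to_flag(answer):
    return 0 if answer == 'No' else 1

# ...and the equivalent anonymous lambda function.
to_flag_lambda = lambda answer: 0 if answer == 'No' else 1

print(to_flag('No'), to_flag_lambda('Yes'))  # 0 1
```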
First, let's count the number of "Yes" and "No" values in the 'lightduty' feature column.
```
df['lightduty'].value_counts(0)
```
Let's convert Yes to 1 and No to 0 using Pandas' `.apply()`, which takes a function and applies it to all values of a Pandas series (e.g. `lightduty`).
```
df.loc[:,'lightduty'] = df['lightduty'].apply(lambda x: 0 if x=='No' else 1)
df['lightduty'].value_counts(0)
# Confirm that "lightduty" has been converted.
df.head()
```
#### One-Hot Encoding Categorical Feature Columns
Machine learning algorithms expect input vectors and not categorical features. Specifically, they cannot handle text or string values. Thus, it is often useful to transform categorical features into vectors.
One transformation method is to create dummy variables for our categorical features. Dummy variables are a set of binary (0 or 1) variables that each represent a single class from a categorical feature. We simply encode the categorical variable as a one-hot vector, i.e. a vector where only one element is non-zero, or hot. With one-hot encoding, a categorical feature becomes an array whose size is the number of possible choices for that feature.
Pandas provides a function called "get_dummies" to convert a categorical variable into dummy/indicator variables.
```
# Making dummy variables for categorical data with more inputs.
data_dummy = pd.get_dummies(df[['zipcode','modelyear', 'fuel', 'make']], drop_first=True)
data_dummy.head()
# Merging (concatenate) original data frame with 'dummy' dataframe.
# TODO 4a
df = pd.concat([df,data_dummy], axis=1)
df.head()
# Dropping attributes for which we made dummy variables. Let's also drop the Date column.
# TODO 4b
df = df.drop(['date','zipcode','modelyear', 'fuel', 'make'], axis=1)
# Confirm that 'zipcode','modelyear', 'fuel', and 'make' have been dropped.
df.head()
```
#### Data Quality Issue #5:
##### Temporal Feature Columns
Our dataset now contains year, month, and day feature columns. Let's convert the month and day feature columns to meaningful representations as a way to get us thinking about changing temporal features -- as they are sometimes overlooked.
Note that the Feature Engineering course in this Specialization will provide more depth on methods to handle year, month, day, and hour feature columns.
First, let's print the unique values for "month" and "day" in our dataset.
```
print ('Unique values of month:',df.month.unique())
print ('Unique values of day:',df.day.unique())
print ('Unique values of year:',df.year.unique())
```
Next, we map each temporal variable onto a circle such that the lowest value for that variable appears right next to the largest value. We compute the x- and y- component of that point using sin and cos trigonometric functions. Don't worry, this is the last time we will use this code, as you can develop an input pipeline to address these temporal feature columns in TensorFlow and Keras - and it is much easier! But, sometimes you need to appreciate what you're not going to encounter as you move through the course!
Run the cell to view the output.
```
df['day_sin'] = np.sin(df.day*(2.*np.pi/31))
df['day_cos'] = np.cos(df.day*(2.*np.pi/31))
df['month_sin'] = np.sin((df.month-1)*(2.*np.pi/12))
df['month_cos'] = np.cos((df.month-1)*(2.*np.pi/12))
# Let's drop month, and day
# TODO 5
df = df.drop(['month','day','year'], axis=1)
# scroll left to see the converted month and day columns.
df.tail(4)
```
### Conclusion
This notebook introduced a few concepts to improve data quality. We resolved missing values, converted the Date feature column to a datetime format, renamed feature columns, removed a value from a feature column, created one-hot encoding features, and converted temporal features to meaningful representations. By the end of our lab, we gained an understanding as to why data should be "cleaned" and "pre-processed" before input into a machine learning model.
Copyright 2020 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
### Feature Scaling - Solution
With any distance-based machine learning model (regularized regression methods, neural networks, and now kmeans), you will want to scale your data.
If you have some features that are on completely different scales, this can greatly impact the clusters you get when using K-Means.
In this notebook, you will get to see this first hand. To begin, let's read in the necessary libraries.
```
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from sklearn import preprocessing as p
%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 9)
import helpers2 as h
import tests as t
# Create the dataset for the notebook
data = h.simulate_data(200, 2, 4)
df = pd.DataFrame(data)
df.columns = ['height', 'weight']
df['height'] = np.abs(df['height']*100)
df['weight'] = df['weight'] + np.random.normal(50, 10, 200)
```
`1.` Next, take a look at the data to get familiar with it. The dataset has two columns, and it is stored in the **df** variable. It might be useful to get an idea of the spread in the current data, as well as a visual of the points.
```
df.describe()
plt.scatter(df['height'], df['weight']);
```
Now that we've got a dataset, let's look at some options for scaling the data, and at how the results might change depending on the scaling. There are two very common types of feature scaling that we should discuss:
**I. MinMaxScaler**
In some cases it is useful to think of your data in terms of the percent they are as compared to the maximum value. In these cases, you will want to use **MinMaxScaler**.
**II. StandardScaler**
Another very popular type of scaling is to scale data so that it has mean 0 and variance 1. In these cases, you will want to use **StandardScaler**.
It is probably more appropriate with this data to use **StandardScaler**. However, to get practice with feature scaling methods in python, we will perform both.
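As a quick reference, the two transformations can be written directly in NumPy (a sketch of the arithmetic only; the sklearn scalers used below apply the same formulas column by column):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# MinMaxScaler: rescale each feature to the [0, 1] range
minmax = (x - x.min()) / (x.max() - x.min())

# StandardScaler: shift to mean 0 and scale to unit variance
standard = (x - x.mean()) / x.std()
```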
`2.` First let's fit the **StandardScaler** transformation to this dataset. I will do this one so you can see how to apply preprocessing in sklearn.
```
df_ss = p.StandardScaler().fit_transform(df) # Fit and transform the data
df_ss = pd.DataFrame(df_ss) #create a dataframe
df_ss.columns = ['height', 'weight'] #add column names again
plt.scatter(df_ss['height'], df_ss['weight']); # create a plot
```
`3.` Now it's your turn. Try fitting the **MinMaxScaler** transformation to this dataset. You should be able to use the previous example to assist.
```
df_mm = p.MinMaxScaler().fit_transform(df) # fit and transform
df_mm = pd.DataFrame(df_mm) #create a dataframe
df_mm.columns = ['height', 'weight'] #change the column names
plt.scatter(df_mm['height'], df_mm['weight']); #plot the data
```
`4.` Now let's take a look at how kmeans divides the dataset into different groups for each of the different scalings of the data. Did you end up with different clusters when the data was scaled differently?
```
def fit_kmeans(data, centers):
'''
INPUT:
data = the dataset you would like to fit kmeans to (dataframe)
centers = the number of centroids (int)
OUTPUT:
labels - the labels for each datapoint to which group it belongs (nparray)
'''
kmeans = KMeans(centers)
labels = kmeans.fit_predict(data)
return labels
labels = fit_kmeans(df, 10) #fit kmeans to get the labels
# Plot the original data with clusters
plt.scatter(df['height'], df['weight'], c=labels, cmap='Set1');
labels = fit_kmeans(df_mm, 10) #fit kmeans to get the labels
#plot each of the scaled datasets
plt.scatter(df_mm['height'], df_mm['weight'], c=labels, cmap='Set1');
labels = fit_kmeans(df_ss, 10)
plt.scatter(df_ss['height'], df_ss['weight'], c=labels, cmap='Set1');
```
**Different from what was stated in the video - In this case, the scaling did end up changing the results. In the video, the kmeans algorithm was not refit to each differently scaled dataset. It was only using the one clustering fit on every dataset. In this notebook, you see that clustering was recomputed with each scaling, which changes the results!**
```
%load_ext watermark
%watermark -p torch,pytorch_lightning,torchmetrics,matplotlib
```
The three extensions below are optional, for more information, see
- `watermark`: https://github.com/rasbt/watermark
- `pycodestyle_magic`: https://github.com/mattijn/pycodestyle_magic
- `nb_black`: https://github.com/dnanhkhoa/nb_black
```
%load_ext pycodestyle_magic
%flake8_on --ignore W291,W293,E703,E402 --max_line_length=100
%load_ext nb_black
```
<a href="https://pytorch.org"><img src="https://raw.githubusercontent.com/pytorch/pytorch/master/docs/source/_static/img/pytorch-logo-dark.svg" width="90"/></a> <a href="https://www.pytorchlightning.ai"><img src="https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/docs/source/_static/images/logo.svg" width="150"/></a>
# Multilayer Perceptron trained on MNIST
A simple multilayer perceptron [1][2] trained on MNIST [3].
### References
- [1] https://en.wikipedia.org/wiki/Multilayer_perceptron
- [2] L9.1 Multilayer Perceptron Architecture (24:24): https://www.youtube.com/watch?v=IUylp47hNA0
- [3] https://en.wikipedia.org/wiki/MNIST_database
## General settings and hyperparameters
- Here, we specify some general hyperparameter values and general settings.
```
HIDDEN_UNITS = (128, 256)
BATCH_SIZE = 256
NUM_EPOCHS = 10
LEARNING_RATE = 0.005
NUM_WORKERS = 4
```
- Note that using multiple workers can sometimes cause issues with too many open files in PyTorch for small datasets. If the data loader causes problems later, try setting `NUM_WORKERS = 0` and reloading the notebook.
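One defensive pattern -- an assumption on our part, not something this notebook requires -- is to derive the worker count from the machine at runtime, with 0 as the safe fallback that loads batches in the main process:

```python
import os

# Use a few background workers when several CPUs are available;
# 0 is the safe fallback that loads batches in the main process.
cpu_count = os.cpu_count() or 1
NUM_WORKERS = min(4, cpu_count - 1) if cpu_count > 1 else 0
```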
## Implementing a Neural Network using PyTorch Lightning's `LightningModule`
- In this section, we set up the main model architecture using the `LightningModule` from PyTorch Lightning.
- In essence, `LightningModule` is a wrapper around a PyTorch module.
- We start with defining our neural network model in pure PyTorch, and then we use it in the `LightningModule` to get all the extra benefits that PyTorch Lightning provides.
```
import torch
import torch.nn.functional as F
# Regular PyTorch Module
class PyTorchModel(torch.nn.Module):
def __init__(self, input_size, hidden_units, num_classes):
super().__init__()
# Initialize MLP layers
all_layers = []
for hidden_unit in hidden_units:
layer = torch.nn.Linear(input_size, hidden_unit, bias=False)
all_layers.append(layer)
all_layers.append(torch.nn.ReLU())
input_size = hidden_unit
output_layer = torch.nn.Linear(
in_features=hidden_units[-1],
out_features=num_classes)
all_layers.append(output_layer)
self.layers = torch.nn.Sequential(*all_layers)
def forward(self, x):
x = torch.flatten(x, start_dim=1) # to make it work for image inputs
x = self.layers(x)
return x # x are the model's logits
# %load ../code_lightningmodule/lightningmodule_classifier_basic.py
import pytorch_lightning as pl
import torchmetrics
# LightningModule that receives a PyTorch model as input
class LightningModel(pl.LightningModule):
def __init__(self, model, learning_rate):
super().__init__()
self.learning_rate = learning_rate
# The inherited PyTorch module
self.model = model
if hasattr(model, "dropout_proba"):
self.dropout_proba = model.dropout_proba
# Save settings and hyperparameters to the log directory
# but skip the model parameters
self.save_hyperparameters(ignore=["model"])
# Set up attributes for computing the accuracy
self.train_acc = torchmetrics.Accuracy()
self.valid_acc = torchmetrics.Accuracy()
self.test_acc = torchmetrics.Accuracy()
# Defining the forward method is only necessary
# if you want to use a Trainer's .predict() method (optional)
def forward(self, x):
return self.model(x)
# A common forward step to compute the loss and labels
# this is used for training, validation, and testing below
def _shared_step(self, batch):
features, true_labels = batch
logits = self(features)
loss = torch.nn.functional.cross_entropy(logits, true_labels)
predicted_labels = torch.argmax(logits, dim=1)
return loss, true_labels, predicted_labels
def training_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("train_loss", loss)
# Do another forward pass in .eval() mode to compute accuracy
# while accounting for Dropout, BatchNorm, etc. behavior
# during evaluation (inference)
self.model.eval()
with torch.no_grad():
_, true_labels, predicted_labels = self._shared_step(batch)
self.train_acc(predicted_labels, true_labels)
self.log("train_acc", self.train_acc, on_epoch=True, on_step=False)
self.model.train()
return loss # this is passed to the optimizer for training
def validation_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("valid_loss", loss)
self.valid_acc(predicted_labels, true_labels)
self.log(
"valid_acc",
self.valid_acc,
on_epoch=True,
on_step=False,
prog_bar=True,
)
def test_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.test_acc(predicted_labels, true_labels)
self.log("test_acc", self.test_acc, on_epoch=True, on_step=False)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
return optimizer
```
## Setting up the dataset
- In this section, we are going to set up our dataset.
### Inspecting the dataset
```
# %load ../code_dataset/dataset_mnist_check.py
from collections import Counter
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
train_dataset = datasets.MNIST(
root="./data", train=True, transform=transforms.ToTensor(), download=True
)
train_loader = DataLoader(
dataset=train_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=True,
shuffle=True,
)
test_dataset = datasets.MNIST(
root="./data", train=False, transform=transforms.ToTensor()
)
test_loader = DataLoader(
dataset=test_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=False,
shuffle=False,
)
train_counter = Counter()
for images, labels in train_loader:
train_counter.update(labels.tolist())
test_counter = Counter()
for images, labels in test_loader:
test_counter.update(labels.tolist())
print("\nTraining label distribution:")
sorted(train_counter.items())
print("\nTest label distribution:")
sorted(test_counter.items())
```
### Performance baseline
- Especially for imbalanced datasets, it's pretty helpful to compute a performance baseline.
- In classification contexts, a useful baseline is to compute the accuracy for a scenario where the model always predicts the majority class -- we want our model to be better than that!
```
# %load ../code_dataset/performance_baseline.py
majority_class = test_counter.most_common(1)[0]
print("Majority class:", majority_class[0])
baseline_acc = majority_class[1] / sum(test_counter.values())
print("Accuracy when always predicting the majority class:")
print(f"{baseline_acc:.2f} ({baseline_acc*100:.2f}%)")
```
## A quick visual check
```
# %load ../code_dataset/plot_visual-check_basic.py
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torchvision
for images, labels in train_loader:
break
plt.figure(figsize=(8, 8))
plt.axis("off")
plt.title("Training images")
plt.imshow(np.transpose(torchvision.utils.make_grid(
images[:64],
padding=2,
normalize=True),
(1, 2, 0)))
plt.show()
```
### Setting up a `DataModule`
- There are three main ways we can prepare the dataset for Lightning. We can
1. make the dataset part of the model;
2. set up the data loaders as usual and feed them to the fit method of a Lightning Trainer -- the Trainer is introduced in the following subsection;
3. create a LightningDataModule.
- Here, we will use approach 3, which is the most organized approach. The `LightningDataModule` consists of several self-explanatory methods, as we can see below:
```
# %load ../code_lightningmodule/datamodule_mnist_basic.py
from torch.utils.data.dataset import random_split
class DataModule(pl.LightningDataModule):
def __init__(self, data_path="./"):
super().__init__()
self.data_path = data_path
def prepare_data(self):
datasets.MNIST(root=self.data_path, download=True)
return
def setup(self, stage=None):
# Note transforms.ToTensor() scales input images
# to 0-1 range
train = datasets.MNIST(
root=self.data_path,
train=True,
transform=transforms.ToTensor(),
download=False,
)
self.test = datasets.MNIST(
root=self.data_path,
train=False,
transform=transforms.ToTensor(),
download=False,
)
self.train, self.valid = random_split(train, lengths=[55000, 5000])
def train_dataloader(self):
train_loader = DataLoader(
dataset=self.train,
batch_size=BATCH_SIZE,
drop_last=True,
shuffle=True,
num_workers=NUM_WORKERS,
)
return train_loader
def val_dataloader(self):
valid_loader = DataLoader(
dataset=self.valid,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS,
)
return valid_loader
def test_dataloader(self):
test_loader = DataLoader(
dataset=self.test,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS,
)
return test_loader
```
- Note that the `prepare_data` method is usually used for steps that only need to be executed once, for example, downloading the dataset; the `setup` method defines the dataset loading -- if we run our code in a distributed setting, this will be called on each node / GPU.
- Next, let's initialize the `DataModule`; we use a random seed for reproducibility (so that the data set is shuffled the same way when we re-execute this code):
```
torch.manual_seed(1)
data_module = DataModule(data_path='./data')
```
## Training the model using the PyTorch Lightning Trainer class
- Next, we initialize our model.
- Also, we define a callback to obtain the model with the best validation set performance after training.
- PyTorch Lightning offers [many advanced logging services](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) like Weights & Biases. However, here, we will keep things simple and use the `CSVLogger`:
```
pytorch_model = PyTorchModel(
input_size=28*28,
hidden_units=HIDDEN_UNITS,
num_classes=10
)
# %load ../code_lightningmodule/logger_csv_acc_basic.py
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import CSVLogger
lightning_model = LightningModel(pytorch_model, learning_rate=LEARNING_RATE)
callbacks = [
ModelCheckpoint(
save_top_k=1, mode="max", monitor="valid_acc"
) # save top 1 model
]
logger = CSVLogger(save_dir="logs/", name="my-model")
```
- Now it's time to train our model:
```
# %load ../code_lightningmodule/trainer_nb_basic.py
import time
trainer = pl.Trainer(
max_epochs=NUM_EPOCHS,
callbacks=callbacks,
progress_bar_refresh_rate=50, # recommended for notebooks
accelerator="auto", # Uses GPUs or TPUs if available
devices="auto", # Uses all available GPUs/TPUs if applicable
logger=logger,
deterministic=True,
log_every_n_steps=10,
)
start_time = time.time()
trainer.fit(model=lightning_model, datamodule=data_module)
runtime = (time.time() - start_time) / 60
print(f"Training took {runtime:.2f} min in total.")
```
## Evaluating the model
- After training, let's plot our training ACC and validation ACC using pandas, which, in turn, uses matplotlib for plotting (PS: you may want to check out [more advanced loggers](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) later on, which take care of this for us):
```
# %load ../code_lightningmodule/logger_csv_plot_basic.py
import pandas as pd
import matplotlib.pyplot as plt
metrics = pd.read_csv(f"{trainer.logger.log_dir}/metrics.csv")
aggreg_metrics = []
agg_col = "epoch"
for i, dfg in metrics.groupby(agg_col):
agg = dict(dfg.mean())
agg[agg_col] = i
aggreg_metrics.append(agg)
df_metrics = pd.DataFrame(aggreg_metrics)
df_metrics[["train_loss", "valid_loss"]].plot(
grid=True, legend=True, xlabel="Epoch", ylabel="Loss"
)
df_metrics[["train_acc", "valid_acc"]].plot(
grid=True, legend=True, xlabel="Epoch", ylabel="ACC"
)
plt.show()
```
- The `trainer` automatically saves the model with the best validation accuracy for us, which we can load from the checkpoint via the `ckpt_path='best'` argument; below we use the `trainer` instance to evaluate the best model on the test set:
```
trainer.test(model=lightning_model, datamodule=data_module, ckpt_path='best')
```
## Predicting labels of new data
- We can use the `trainer.predict` method either on a new `DataLoader` (`trainer.predict(dataloaders=...)`) or `DataModule` (`trainer.predict(datamodule=...)`) to apply the model to new data.
- Alternatively, we can also manually load the best model from a checkpoint as shown below:
```
path = trainer.checkpoint_callback.best_model_path
print(path)
lightning_model = LightningModel.load_from_checkpoint(path, model=pytorch_model)
lightning_model.eval();
```
- For simplicity, we reused our existing `pytorch_model` above. However, we could also reinitialize the `pytorch_model`, and the `.load_from_checkpoint` method would load the corresponding model weights for us from the checkpoint file.
- Now, below is an example applying the model manually. Here, pretend that the `test_dataloader` is a new data loader.
```
# %load ../code_lightningmodule/datamodule_testloader.py
test_dataloader = data_module.test_dataloader()
acc = torchmetrics.Accuracy()
for batch in test_dataloader:
features, true_labels = batch
with torch.no_grad():
logits = lightning_model(features)
predicted_labels = torch.argmax(logits, dim=1)
acc(predicted_labels, true_labels)
predicted_labels[:5]
```
- As an internal check, if the model was loaded correctly, the test accuracy below should be identical to the test accuracy we saw earlier in the previous section.
```
test_acc = acc.compute()
print(f'Test accuracy: {test_acc:.4f} ({test_acc*100:.2f}%)')
```
## Inspecting Failure Cases
- In practice, it is often informative to look at failure cases like wrong predictions for particular training instances as it can give us some insights into the model behavior and dataset.
- Inspecting failure cases can sometimes reveal interesting patterns and even highlight dataset and labeling issues.
```
# In the case of MNIST, the class label mapping
# is relatively trivial
class_dict = {0: 'digit 0',
1: 'digit 1',
2: 'digit 2',
3: 'digit 3',
4: 'digit 4',
5: 'digit 5',
6: 'digit 6',
7: 'digit 7',
8: 'digit 8',
9: 'digit 9'}
# %load ../code_lightningmodule/plot_failurecases_basic.py
# Append the folder that contains the
# helper_data.py, helper_plotting.py, and helper_evaluate.py
# files so we can import from them
import sys
sys.path.append("../../pytorch_ipynb")
from helper_plotting import show_examples
show_examples(
model=lightning_model, data_loader=test_dataloader, class_dict=class_dict
)
```
- In addition to inspecting failure cases visually, it is also informative to look at which classes the model confuses the most via a confusion matrix:
```
# %load ../code_lightningmodule/plot_confusion-matrix_basic.py
from torchmetrics import ConfusionMatrix
import matplotlib
from mlxtend.plotting import plot_confusion_matrix
cmat = ConfusionMatrix(num_classes=len(class_dict))
for x, y in test_dataloader:
with torch.no_grad():
pred = lightning_model(x)
cmat(pred, y)
cmat_tensor = cmat.compute()
cmat = cmat_tensor.numpy()
fig, ax = plot_confusion_matrix(
conf_mat=cmat,
class_names=class_dict.values(),
norm_colormap=matplotlib.colors.LogNorm()
# normed colormaps highlight the off-diagonals
# for high-accuracy models better
)
plt.show()
%watermark --iversions
```
## 1. Meet Professor William Sharpe
<p>An investment may make sense if we expect it to return more money than it costs. But returns are only part of the story because they are risky - there may be a range of possible outcomes. How does one compare different investments that may deliver similar results on average, but exhibit different levels of risks?</p>
<p><img style="float: left ; margin: 5px 20px 5px 1px;" width="200" src="https://assets.datacamp.com/production/project_66/img/sharpe.jpeg"></p>
<p>Enter William Sharpe. He introduced the <a href="https://web.stanford.edu/~wfsharpe/art/sr/sr.htm"><em>reward-to-variability ratio</em></a> in 1966 that soon came to be called the Sharpe Ratio. It compares the expected returns for two investment opportunities and calculates the additional return per unit of risk an investor could obtain by choosing one over the other. In particular, it looks at the difference in returns for two investments and compares the average difference to the standard deviation (as a measure of risk) of this difference. A higher Sharpe ratio means that the reward will be higher for a given amount of risk. It is common to compare a specific opportunity against a benchmark that represents an entire category of investments.</p>
<p>The Sharpe ratio has been one of the most popular risk/return measures in finance, not least because it's so simple to use. It also helped that Professor Sharpe won a Nobel Memorial Prize in Economics in 1990 for his work on the capital asset pricing model (CAPM).</p>
<p>The Sharpe ratio is usually calculated for a portfolio and uses the risk-free interest rate as benchmark. We will simplify our example and use stocks instead of a portfolio. We will also use a stock index as benchmark rather than the risk-free interest rate because both are readily available at daily frequencies and we do not have to get into converting interest rates from annual to daily frequency. Just keep in mind that you would run the same calculation with portfolio returns and your risk-free rate of choice, e.g, the <a href="https://fred.stlouisfed.org/series/TB3MS">3-month Treasury Bill Rate</a>. </p>
<p>So let's learn about the Sharpe ratio by calculating it for the stocks of the two tech giants Facebook and Amazon. As benchmark we'll use the S&P 500 that measures the performance of the 500 largest stocks in the US. When we use a stock index instead of the risk-free rate, the result is called the Information Ratio and is used to benchmark the return on active portfolio management because it tells you how much more return for a given unit of risk your portfolio manager earned relative to just putting your money into a low-cost index fund.</p>
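<p>Before working through the steps below, the whole calculation can be sketched in a few lines. The daily returns here are made-up illustrative numbers, not real market data; <code>ddof=1</code> matches the sample standard deviation that pandas' <code>.std()</code> uses later in this notebook:</p>

```python
import numpy as np

# Hypothetical daily returns for a stock and a benchmark (illustrative only)
stock = np.array([0.010, -0.020, 0.015, 0.005, 0.008])
bench = np.array([0.004, -0.010, 0.007, 0.002, 0.003])

excess = stock - bench                              # daily excess return
sharpe_daily = excess.mean() / excess.std(ddof=1)   # reward per unit of risk
sharpe_annual = sharpe_daily * np.sqrt(252)         # ~252 trading days per year
print(sharpe_annual)
```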
```
# Importing required modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Settings to produce nice plots in a Jupyter notebook
plt.style.use('fivethirtyeight')
%matplotlib inline
#stock_data =pd.read_csv('datasets/stock_data.csv')
# Reading in the data
stock_data = pd.read_csv('datasets/stock_data.csv',parse_dates=['Date'],index_col =['Date']).dropna()
benchmark_data = pd.read_csv('datasets/benchmark_data.csv',parse_dates=['Date'],
index_col=['Date']
).dropna()
#stock_data.Date.dtype
```
## 2. A first glance at the data
<p>Let's take a look the data to find out how many observations and variables we have at our disposal.</p>
```
# Display summary for stock_data
print('Stocks\n')
stock_data.info()
stock_data.head()
# Display summary for benchmark_data
print('\nBenchmarks\n')
benchmark_data.info()
benchmark_data.head()
```
## 3. Plot & summarize daily prices for Amazon and Facebook
<p>Before we compare an investment in either Facebook or Amazon with the index of the 500 largest companies in the US, let's visualize the data, so we better understand what we're dealing with.</p>
```
# visualize the stock_data
stock_data.plot(subplots = True, title ='Stock Data')
# summarize the stock_data
stock_data.describe()
```
## 4. Visualize & summarize daily values for the S&P 500
<p>Let's also take a closer look at the value of the S&P 500, our benchmark.</p>
```
# plot the benchmark_data
benchmark_data.plot(title = 'S&P 500' )
# summarize the benchmark_data
benchmark_data.describe()
```
## 5. The inputs for the Sharpe Ratio: Starting with Daily Stock Returns
<p>The Sharpe Ratio uses the difference in returns between the two investment opportunities under consideration.</p>
<p>However, our data show the historical value of each investment, not the return. To calculate the return, we need to calculate the percentage change in value from one day to the next. We'll also take a look at the summary statistics because these will become our inputs as we calculate the Sharpe Ratio. Can you already guess the result?</p>
```
# calculate daily stock_data returns
stock_returns = stock_data.pct_change()
stock_returns.head()
# plot the daily returns
stock_returns.plot()
# summarize the daily returns
stock_returns.describe()
```
## 6. Daily S&P 500 returns
<p>For the S&P 500, calculating daily returns works just the same way, we just need to make sure we select it as a <code>Series</code> using single brackets <code>[]</code> and not as a <code>DataFrame</code> to facilitate the calculations in the next step.</p>
```
# calculate daily benchmark_data returns
sp_returns = benchmark_data['S&P 500'].pct_change()
# plot the daily returns
sp_returns.plot()
# summarize the daily returns
sp_returns.describe()
```
## 7. Calculating Excess Returns for Amazon and Facebook vs. S&P 500
<p>Next, we need to calculate the relative performance of stocks vs. the S&P 500 benchmark. This is calculated as the difference in returns between <code>stock_returns</code> and <code>sp_returns</code> for each day.</p>
```
# calculate the difference in daily returns
excess_returns = stock_returns.sub(sp_returns,axis =0)
excess_returns.head()
# plot the excess_returns
excess_returns.plot()
# summarize the excess_returns
excess_returns.describe()
```
## 8. The Sharpe Ratio, Step 1: The Average Difference in Daily Returns Stocks vs S&P 500
<p>Now we can finally start computing the Sharpe Ratio. First we need to calculate the average of the <code>excess_returns</code>. This tells us how much more or less the investment yields per day compared to the benchmark.</p>
```
# calculate the mean of excess_returns
# ... YOUR CODE FOR TASK 8 HERE ...
avg_excess_return = excess_returns.mean()
avg_excess_return
# plot avg_excess_return
avg_excess_return.plot.bar(title ='Mean of the Return')
```
## 9. The Sharpe Ratio, Step 2: Standard Deviation of the Return Difference
<p>It looks like there was quite a bit of a difference between average daily returns for Amazon and Facebook.</p>
<p>Next, we calculate the standard deviation of the <code>excess_returns</code>. This shows us the amount of risk an investment in the stocks implies as compared to an investment in the S&P 500.</p>
```
# calculate the standard deviations
sd_excess_return = excess_returns.std()
# plot the standard deviations
sd_excess_return.plot.bar(title ='Standard Deviation of the Return Difference')
```
## 10. Putting it all together
<p>Now we just need to compute the ratio of <code>avg_excess_returns</code> and <code>sd_excess_returns</code>. The result is now finally the <em>Sharpe ratio</em> and indicates how much more (or less) return the investment opportunity under consideration yields per unit of risk.</p>
<p>The Sharpe Ratio is often <em>annualized</em> by multiplying it by the square root of the number of periods. We have used daily data as input, so we'll use the square root of the number of trading days (5 days, 52 weeks, minus a few holidays): √252</p>
```
# calculate the daily sharpe ratio
daily_sharpe_ratio = avg_excess_return.div(sd_excess_return)
# annualize the sharpe ratio
annual_factor = np.sqrt(252)
annual_sharpe_ratio = daily_sharpe_ratio.mul(annual_factor)
# plot the annualized sharpe ratio
annual_sharpe_ratio.plot.bar(title ='Annualized Sharpe Ratio: Stocks vs S&P 500')
```
## 11. Conclusion
<p>Given the two Sharpe ratios, which investment should we go for? In 2016, Amazon had a Sharpe ratio twice as high as Facebook. This means that an investment in Amazon returned twice as much compared to the S&P 500 for each unit of risk an investor would have assumed. In other words, in risk-adjusted terms, the investment in Amazon would have been more attractive.</p>
<p>This difference was mostly driven by differences in return rather than risk between Amazon and Facebook. The risk of choosing Amazon over FB (as measured by the standard deviation) was only slightly higher, so Amazon's Sharpe ratio ends up higher mainly due to its higher average daily returns. </p>
<p>When faced with investment alternatives that offer both different returns and risks, the Sharpe Ratio helps to make a decision by adjusting the returns by the differences in risk and allows an investor to compare investment opportunities on equal terms, that is, on an 'apples-to-apples' basis.</p>
```
# Uncomment your choice.
buy_amazon = True
#buy_facebook = True
```
```
import datetime
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
from ntflib import betantf
%matplotlib inline
sns.set(style="white")
```
## Defining functions for mapping and error
```
def mapper(array):
# Map IDs to contiguous integers 0..n-1. Note: we deliberately do not
# sort the input column itself -- doing so would scramble its rows
# relative to the other columns in the dataframe.
uniques = np.sort(np.unique(array))
int_map = np.arange(len(uniques)).astype(int)
dict_map = dict(zip(uniques, int_map))
res = pd.Series(array).map(lambda x: dict_map[x])
inv_dict_map = {v: k for k, v in dict_map.items()}
return res.values, inv_dict_map
def rmse(x, y):
# Root-mean-square error: square root of the mean squared difference
return np.sqrt(np.mean((x - y) ** 2.0))
```
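As a standalone sanity check (self-contained so it can run on its own), a textbook root-mean-square error should return 2.0 when every prediction is off by exactly 2:

```python
import numpy as np

def rmse(x, y):
    # Root-mean-square error: square root of the mean squared difference
    return np.sqrt(np.mean((x - y) ** 2.0))

pred = np.array([3.0, 4.0])
true = np.array([1.0, 2.0])
print(rmse(pred, true))  # → 2.0
```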
## Grabbing Movie Lens data
```
!wget http://files.grouplens.org/datasets/movielens/ml-1m.zip
!unzip ml-1m.zip
```
## Parsing data and cleaning it up for NTFLib
```
ratings = pd.read_table('ml-1m/ratings.dat', sep='::', engine='python', names=['UserID', 'MovieID', 'Rating', 'Timestamp'])  # multi-character sep requires the python engine
ratings.Timestamp = ratings.Timestamp.map(lambda x: datetime.datetime.fromtimestamp(x).strftime('%Y-%m'))
# movies = pd.read_table('ml-1m/movies.dat', sep='::', names=['MovieID', 'Title', 'Genres'])
# users = pd.read_table('ml-1m/users.dat', sep='::', names=['UserID' ,'Gender', 'Age', 'Occupation::Zip-code'])
# Converting dates to integers
ratings['UserID'], inv_uid_dict = mapper(ratings['UserID'])
ratings['MovieID'], inv_mid_dict = mapper(ratings['MovieID'])
ratings['Timestamp'], inv_ts_dict = mapper(ratings['Timestamp'])
x_indices = ratings[['UserID', 'MovieID', 'Timestamp']].copy()
x_indices['UserID'] = x_indices['UserID'] - x_indices['UserID'].min()
x_indices['MovieID'] = x_indices['MovieID'] - x_indices['MovieID'].min()
x_indices['Timestamp'] = x_indices['Timestamp'] - x_indices['Timestamp'].min()
print(x_indices.min())
x_indices = x_indices.values
x_vals = ratings['Rating'].values
print('Number of unique movie IDs: {0}'.format(len(ratings['MovieID'].unique())))
print('Max movie ID: {0}'.format(ratings['MovieID'].max()))
indices_train, indices_test, val_train, val_test = train_test_split(
x_indices, x_vals, test_size=0.40, random_state=42)
shape_uid = len(np.unique(x_indices[:,0]))
shape_mid = len(np.unique(x_indices[:,1]))
shape_ts = len(np.unique(x_indices[:,2]))
shape = [shape_uid, shape_mid, shape_ts]
shape
indices_train
# shape = [len(np.unique(ratings[x])) for x in ['UserID', 'MovieID', 'Timestamp']]
bnf = betantf.BetaNTF(shape, n_components=5, n_iters=10)
before = bnf.score(indices_train, val_train)
initial = bnf.impute(x_indices)
reconstructed = bnf.fit(indices_train, val_train)
after = bnf.score(indices_train, val_train)
assert(after < before)
prediction = bnf.impute(indices_test)
rmse(prediction, val_test)
!cat ml-1m/README
```
<a href="https://colab.research.google.com/github/stephenadhi/nn-mpc/blob/main/EVALIDASI-testing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import sys
sys.executable
import os
import matplotlib as mpl
import matplotlib.pyplot as plt
import random
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import keras
from pandas import DataFrame
from pandas import read_csv
import math
from numpy import savetxt
from keras import layers
from tensorflow.keras.layers import Input, LSTM, Dense, Reshape, Dropout
from tensorflow.keras.models import Model, Sequential
from scipy.integrate import odeint, RK45
from tensorflow.keras.utils import plot_model
import timeit
tf.keras.backend.set_floatx('float64')
tf.keras.backend.clear_session()
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
```
## Mass-Spring-System
<img src="https://github.com/stephenadhi/nn-mpc/blob/main/mass-spring-damper.png?raw=1">
```
# Use ODEINT to solve the differential equations defined by the vector field
from scipy.integrate import odeint
def vectorfield(w, t, p):
"""
Defines the differential equations for the coupled spring-mass system.
Arguments:
w : vector of the state variables:
w = [x1,y1,x2,y2]
t : time
p : vector of the parameters:
p = [m1,m2,k1,k2,L1,L2,b1,b2]
"""
x1, v1, x2, v2, x3, v3 = w
m, k, kp, u1, u2, dist = p
# Create f = (x1',y1',x2',y2'):
f = [v1,
(k * ((-2 * x1) + x2) + kp * (-x1 ** 3 + (x2 - x1) ** 3)) / m + u1,
v2,
(k * (x1 - (2 * x2) + x3) + kp * ((x3 - x2) ** 3 - (x2 - x1) ** 3)) / m + u2,
v3,
(k * (x2 - x3) + kp * ((x2 - x3) ** 3)) / m + dist]
return f
```
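Before wiring `vectorfield` into the training loop, the same `odeint` call pattern can be checked on a minimal system. This is an illustrative sketch with hypothetical parameters (a single undamped mass-spring, not the three-mass system above):

```python
import numpy as np
from scipy.integrate import odeint

# Minimal sketch of the odeint call pattern used below,
# on a single undamped mass-spring system (hypothetical parameters).
def simple_field(w, t, p):
    x, v = w          # position, velocity
    m, k = p          # mass, spring constant
    return [v, -k * x / m]

t = np.linspace(0.0, 1.0, 11)
wsol = odeint(simple_field, [1.0, 0.0], t, args=([0.5, 217.0],),
              atol=1.0e-8, rtol=1.0e-6)
print(wsol.shape)  # (11, 2): one row of [x, v] per time point
```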
# Use Best Training Data
```
df = pd.read_csv('u1000newage20000_0.001ssim.csv')
train_df = df
#val_df = df[int(0.5*n):int(1*n)]
val_df= pd.read_csv('u1000validationdatanewage5k_0.001ssim.csv')
test_df = pd.read_csv('u1000validationdatanewage5k_0.001ssim.csv')
#num_features = df.shape[1]
val_df.shape
train_mean = train_df.mean()
train_std = train_df.std()
train_mean
```
# Data preprocessing and NN model setup
```
train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std
#train_df = (train_df - train_df.min()) / (train_df.max() - train_df.min())
#val_df = (val_df - train_df.min()) / (train_df.max() - train_df.min())
#test_df = (test_df - train_df.min()) / (train_df.max() - train_df.min())
#plot
#df_std = (df - df.min()) / (df.max()-df.min())
df_std = (df - train_mean) / df.std()
#df_std.iloc[:,0:3] = df_std.iloc[:,0:3].values * 3
#df_std = (df - df.min()) / (df.max() - df.min())
df_std = df_std.astype('float64')
df_std = df_std.melt(var_name='States', value_name='Normalized')
plt.figure(figsize=(12, 6))
ax = sns.violinplot(x='States', y='Normalized', data=df_std)
_ = ax.set_xticklabels(df.keys(), rotation=90)
plt.savefig('Normalized.png', dpi=300)
class WindowGenerator():
def __init__(self, input_width, label_width, shift,
train_df=train_df, val_df=val_df,
#test_df=test_df,
label_columns=None):
# Store the raw data.
self.train_df = train_df
self.val_df = val_df
self.test_df = test_df
# Work out the label column indices.
self.label_columns = label_columns
if label_columns is not None:
self.label_columns_indices = {name: i for i, name in
enumerate(label_columns)}
self.column_indices = {name: i for i, name in
enumerate(train_df.columns)}
# Work out the window parameters.
self.input_width = input_width
self.label_width = label_width
self.shift = shift
self.total_window_size = input_width + shift
self.input_slice = slice(0, input_width)
self.input_indices = np.arange(self.total_window_size)[self.input_slice]
self.label_start = self.total_window_size - self.label_width
self.labels_slice = slice(self.label_start, None)
self.label_indices = np.arange(self.total_window_size)[self.labels_slice]
def __repr__(self):
return '\n'.join([
f'Total window size: {self.total_window_size}',
f'Input indices: {self.input_indices}',
f'Label indices: {self.label_indices}',
f'Label column name(s): {self.label_columns}'])
def split_window(self, features):
inputs = features[:, self.input_slice, :]
labels = features[:, self.labels_slice, :]
if self.label_columns is not None:
labels = tf.stack(
[labels[:, :, self.column_indices[name]] for name in self.label_columns],
axis=-1)
# Slicing doesn't preserve static shape information, so set the shapes
# manually. This way the `tf.data.Datasets` are easier to inspect.
inputs.set_shape([None, self.input_width, None])
labels.set_shape([None, self.label_width, None])
return inputs, labels
WindowGenerator.split_window = split_window
def plot(self, plot_col, model=None, max_subplots=1):
inputs, labels = self.example
plt.figure(figsize=(12, 8))
plot_col_index = self.column_indices[plot_col]
max_n = min(max_subplots, len(inputs))
for n in range(max_n):
plt.subplot(3, 1, n+1)
plt.ylabel(f'{plot_col} [normed]')
plt.plot(self.input_indices, inputs[n, :, plot_col_index],
label='Inputs', marker='.', zorder=-10)
if self.label_columns:
label_col_index = self.label_columns_indices.get(plot_col, None)
else:
label_col_index = plot_col_index
if label_col_index is None:
continue
plt.scatter(self.label_indices, labels[n, :, label_col_index],
edgecolors='k', label='Labels', c='#2ca02c', s=64)
if model is not None:
predictions = model(inputs)
plt.scatter(self.label_indices, predictions[n, :, label_col_index],
marker='X', edgecolors='k', label='Predictions',
c='#ff7f0e', s=64)
if n == 0:
plt.legend()
plt.xlabel('Timestep ')
WindowGenerator.plot = plot
batchsize= 32
def make_dataset(self, data):
data = np.array(data, dtype=np.float64)
ds = tf.keras.preprocessing.timeseries_dataset_from_array(
data=data,
targets=None,
sequence_length=self.total_window_size,
sequence_stride=1,
shuffle=False,
batch_size=batchsize,)
ds = ds.map(self.split_window)
return ds
WindowGenerator.make_dataset = make_dataset
@property
def train(self):
return self.make_dataset(self.train_df)
@property
def val(self):
return self.make_dataset(self.val_df)
@property
def test(self):
return self.make_dataset(self.test_df)
@property
def example(self):
"""Get and cache an example batch of `inputs, labels` for plotting."""
result = getattr(self, '_example', None)
if result is None:
# No example batch was found, so get one from the `.train` dataset
result = next(iter(self.train))
# And cache it for next time
self._example = result
return result
WindowGenerator.train = train
WindowGenerator.val = val
WindowGenerator.test = test
WindowGenerator.example = example
OUT_STEPS = 1
multi_window = WindowGenerator(input_width=1,
label_width=OUT_STEPS,
shift=OUT_STEPS,
label_columns= ['diff1','diff3']
)
multi_window.plot('diff1')
multi_window
for example_inputs, example_labels in multi_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
MAX_EPOCHS = 100
def compile(model, lr=0.001):
model.compile(loss=tf.losses.MeanSquaredError(),
optimizer=tf.optimizers.Adam(learning_rate=lr),
metrics=[tf.metrics.MeanSquaredError()],
steps_per_execution=10
)
def scheduler(epoch, lr):
if epoch > 100:
return lr * tf.math.exp(-0.01)
else: return lr
def fit(model, window, patience=150):
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=patience,
mode='min')
callback = tf.keras.callbacks.LearningRateScheduler(scheduler)
history = model.fit(window.train, epochs=MAX_EPOCHS,
validation_data=window.val,
callbacks=[early_stopping,
# callback
]
)
return history
multi_val_performance = {}
multi_performance = {}
num_label=2
#set number of hidden nodes
n_hidden_nodes= 40
from functools import partial
multi_resdense_model = tf.keras.Sequential([
# Take the last time step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, dense_units]
tf.keras.layers.Dense(n_hidden_nodes, activation=partial(tf.nn.leaky_relu, alpha=0.5)),
tf.keras.layers.Dense(OUT_STEPS*num_label,
kernel_initializer=tf.initializers.zeros
),
])
compile(multi_resdense_model,lr=0.001)
```
## Use best model - 40 hidden nodes
```
# Load best model weights
multi_resdense_model.load_weights('./checkpoints/0.001s20knewageresdense1-1batch32allep100lrelu40diff2u1000')
multi_window.plot('diff1',model=multi_resdense_model)
multi_window.plot('diff3',model=multi_resdense_model)
```
# Function to Standardize
```
xmean = train_mean[['x1', 'x3']]
diffmean = train_mean[['diff1', 'diff3']]
diffmean
xstd = train_std[['x1', 'x3']]
diffstd = train_std[['diff1', 'diff3']]
diffstd
def standardize(modelinput):
modelinput = (modelinput -train_mean.values) / train_std.values
return modelinput
def destandardize(modeloutput):
modeloutput = (modeloutput * train_std.values) + train_mean.values
return modeloutput
def denormalize(outputs):
outputs = outputs * diffstd + diffmean
return outputs
```
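The standardize/destandardize pair above should be exact inverses of each other. A small self-contained check with made-up mean and std vectors (hypothetical values, not the trained statistics):

```python
import numpy as np

# Hypothetical round-trip check of the standardize/destandardize pair:
# destandardize(standardize(x)) should recover x exactly.
mean = np.array([0.1, -0.2])
std = np.array([1.5, 0.8])
x = np.array([0.7, 0.3])
z = (x - mean) / std        # standardize
x_back = z * std + mean     # destandardize
assert np.allclose(x_back, x)
print(z)
```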
# Initial Setup
```
num_rollouts= 1
# Masses:
m = 0.5
# Spring constants
kp = 63.5
k = 217.0
# ODE solver parameters
abserr = 1.0e-8
relerr = 1.0e-6
scale=1
num_data= 4
interval= 0.001
stoptime = interval*(num_data)
np.random.seed(28)
hist= np.zeros((1,15))
hist[0,0:3]=[0,0,0]
dist = 200*(np.random.rand(num_data+1)-0.5)
#dist= np.zeros((num_data+1,1))
for roll in range(num_rollouts):
frames=0
#act1= 0*(np.random.rand(num_data+1)-0.5)
#act2= 0*(np.random.rand(num_data+1)-0.5)
act1 = np.zeros((num_data+1,1))
act2 = np.zeros((num_data+1,1))
#Initial states
w0 = np.zeros((1,6))
#w0= np.random.randn(1,6)
w0= w0.flatten()
prev = [[w0[0],w0[4]]]
value1= w0[0]
value3= w0[4]
# Pack up the parameters and initial conditions:
p = [m, k, kp, act1[0], dist[0], act2[0]]
# Call the ODE solver.
t1= np.array([0,interval])
wsol1 = odeint(vectorfield, w0, t1, args=(p,),
atol=abserr, rtol=relerr)
wsol1 = wsol1.flatten()
wcurr = np.array([wsol1[6:]])
w0=wsol1[6:]
diff1= w0[0] - value1
diff3= w0[4] - value3
diff= [[diff1, diff3]]
diff2=diff
#curr = np.hstack((np.array([[act1[1]]]),np.array([[dist[1]]]),np.array([[act2[1]]]),prev,diff2,diff,wcurr))
curr = np.hstack((np.array([act1[1]]),np.array([[dist[1]]]),np.array([act2[1]]),prev,diff2,diff, wcurr))
hist= np.vstack((hist, curr))
#print(w0)
prevv= prev
prev = [[w0[0],w0[4]]]
value11=value1
value33=value3
value1= w0[0]
value3= w0[4]
# Pack up the parameters and initial conditions:
p = [m, k, kp, act1[1], dist[1],act2[1]]
# Call the ODE solver.
t2= np.array([0+interval,interval+interval])
wsol1 = odeint(vectorfield, w0, t2, args=(p,),
atol=abserr, rtol=relerr)
wsol1 = wsol1.flatten()
wcurr = np.array([wsol1[6:]])
w0=wsol1[6:]
diff1= w0[0] - value1
diff3= w0[4] - value3
diff21= w0[0] - value11
diff23= w0[4] - value33
diff= [[diff1, diff3]]
diff2= [[diff21, diff23]]
#print(w0)
# curr = np.hstack((np.array([[act1[2]]]),np.array([[dist[2]]]),np.array([[act2[2]]]),prev,diff2,diff, wcurr))
curr = np.hstack((np.array([act1[2]]),np.array([[dist[2]]]),np.array([act2[2]]),prev,diff2,diff, wcurr))
hist= np.vstack((hist, curr))
lag=2
for ts in range(num_data-lag):
prevv = prev
t = np.array([stoptime * float(ts+lag) / (num_data), stoptime * float(ts + lag + 1) / (num_data)])
p = [m, k, kp, act1[ts+lag],dist[ts+lag], act2[ts+lag]]
# Call the ODE solver.
wsol1 = odeint(vectorfield, w0, t, args=(p,),atol=abserr, rtol=relerr)
wsol1 = wsol1.flatten()
prev = np.array([[wsol1[0],wsol1[4]]])
value11=value1
value33=value3
value1= wsol1[0]
value3= wsol1[4]
w0 = wsol1[6:]
#print(ts)
#print(w0)
diff1= w0[0] - value1
diff3= w0[4] - value3
diff21= w0[0] - value11
diff23= w0[4] - value33
diff= [[diff1, diff3]]
diff2= [[diff21, diff23]]
action= [act1[ts+lag+1],dist[ts+lag+1]]
#new = np.hstack((np.array([action]),np.array([[act2[ts+lag+1]]]),prev,diff2,diff, np.array([w0])))
new = np.hstack((np.array([act1[ts+lag+1]]),np.array([[dist[ts+lag+1]]]),np.array([act2[ts+lag+1]]),prev,diff2,diff, np.array([w0])))
# print("new: ",new)
hist = np.vstack((hist, new))
history=pd.DataFrame(data=hist,columns =["u1","dist.","u2","prev1","prev3","diff21","diff23","diff1","diff3","x1", "v1", "x2", "v2", "x3", "v3"])
historyy=pd.DataFrame(history[:])
history = historyy[-3:][["u1", "dist.", "u2", "x1", "x3", "diff21", "diff23", "diff1", "diff3"]]
history=history.values
historyy
```
# **Simulating Control Strategy**
```
currentcost=10
horizon= 10
chor= int(horizon / 2)
ts=0
ref = np.ones((horizon,2))
ref[:,1]*=2
num_sim= 1
num_randactions=50
maxdeltau= 1000
maxact= 1000
minact= -maxact
scale=1
histo= np.zeros((3,9))
bestpred= np.zeros((horizon,2))
temp= history
#print("newtemp : ",temp)
savecost=np.zeros((num_sim,1))
start_time = timeit.default_timer()
for run in range(num_sim):
ts+=1
minJ=10000
bestaction= np.zeros((horizon,2))
#if currentcost > 0.5 and currentcost < 1:
# scale=0.8
#elif currentcost > 0.2 and currentcost < 0.5:
# scale=0.6
#elif currentcost > 0.1 and currentcost <0.2:
# scale=0.4
#print("scale: ", scale)
for jj in range (num_randactions):
histo[:]= temp[:]
prevaction= [histo[-1,0], histo[-1,2]]
action= np.ones((horizon,2)) * prevaction
for plan in range(chor):
actii= maxdeltau*2*(np.random.rand(1,2)-0.5)
action[plan:] += actii
if action[plan,0] > maxact:
action[plan:,0] = maxact
if action[plan,0] < minact:
action[plan:,0] = minact
if action[plan,1] > maxact:
action[plan:,1] = maxact
if action[plan,1] < minact:
action[plan:,1] = minact
# print("action: \n", action)
#print("histtemp: ", histo)
prednorm = np.zeros((horizon,2))
predstate = np.zeros((horizon,2))
#deltaaction= currentcost / 10000000 * np.sum((action[0]-prevaction)**2 + (action[2]-action[1])**2 + (action[1]-action[0])**2)
disturb = 100*(np.random.rand(1)-0.5)  # random disturbance for this rollout
for kk in range(horizon):
histo[-1,0] = action[kk,0]
histo[-1,1] = disturb
histo[-1,2] = action[kk,1]
curr = histo[-1,3:5]
historstd=standardize(histo)
prednorm[kk,:]=denormalize(multi_resdense_model(np.array([historstd]))[0])
predstate[kk,:]= curr + prednorm[kk,:]
#print("currstate: ",histo[-1,3:5])
#print("currpreddiff: ", prednorm)
#print("predstate: ", predstate)
#histo[0:-1] = histo[1:]
#histo[-1,1]= disturb
histo[-1,3:5]= predstate[kk,:]
histo[-1,5:7]= histo[-1,-2:] + prednorm[kk,:]
histo[-1,-2:]= prednorm[kk,:]
predJ= np.sum((predstate - ref) ** 2)
if currentcost < 100:
#predJ += deltaaction
predJ+= 10*((predstate[horizon-1,0] - predstate[0,0])**2)*currentcost
predJ+= 10*((predstate[horizon-1,1] - predstate[0,1])**2)*currentcost
#penalize small change when large error
#if currentcost > 1:
# predJ -= 1000 * ((prednorm[5,1] - prednorm[0,1])**2)
#penalize big change when small error
#if currentcost < 0.5:
# predJ += 1000 * ((prednorm[5,1] - prednorm[0,1])**2)
#(prednorm[0,1] -ref[0,1])**2 + (prednorm[1,1] -ref[1,1])**2
#+ (prednorm[2,1] -ref[2,1])**2 + (prednorm[3,1] -ref[3,1])**2
#+ (prednorm[4,1] -ref[4,1])**2 + (prednorm[5,1] -ref[5,1])**2
#+ (prednorm[0,0] -ref[0,0])**2 + (prednorm[1,0] -ref[1,0])**2
#+ (prednorm[2,0] -ref[2,0])**2 + (prednorm[3,0] -ref[3,0])**2
#+ (prednorm[4,0] -ref[4,0])**2 + (prednorm[5,0] -ref[5,0])**2
#print(prednorm)
if predJ < minJ:
bestaction = action
minJ = predJ
bestpred= predstate
elapsed = timeit.default_timer() - start_time
print(elapsed)
thisishistory=pd.DataFrame(data=hist,columns =["u1","dist.","u2","prev1","prev3","diff21","diff23","diff1","diff3","x1", "v1", "x2", "v2", "x3", "v3"])
thisishistory
```
# Save to .CSV
```
#thisishistory.to_csv('dist50sim300-1000-1000-pen10xcost.csv',index=False)
thisishistory= pd.read_csv('1611changemassnodistchangerefreinf_startsim40000_train500-250_batch16sim300-1000-1000-pen5.csv')
thisishistory[40000:]
```
# Evaluate using Designed Metrics
```
actionshist= np.hstack([np.array([thisishistory.iloc[3:-1,0].values]).transpose(),np.array([thisishistory.iloc[3:-1,2].values]).transpose()])
actionshistory=pd.DataFrame(data=actionshist, columns=["u1","u2"])
actionshistory.insert(1,"prevu1", actionshistory.iloc[:,0].shift(1))
actionshistory.insert(3,"prevu2", actionshistory.iloc[:,2].shift(1))
diffu1= actionshistory["u1"] - actionshistory["prevu1"]
diffu2= actionshistory["u2"] - actionshistory["prevu2"]
actionshistory.insert(4,"diffu2", diffu2)
actionshistory.insert(2,"diffu1", diffu1)
deltaudt= actionshistory[["diffu1","diffu2"]]
deltaudt=deltaudt.dropna()
np.sqrt(deltaudt[40000:] ** 2).mean()
meanu=np.sqrt(np.square(thisishistory.iloc[40000:,:3])).mean()
meanu
thisishistory.min()
meandiff= np.sqrt(np.square(thisishistory.iloc[40000:,-2:])).mean()
meandiff
#action_sequence= thisishistory.iloc[:,0:3].values
#action_sequence=pd.DataFrame(action_sequence)
#action_sequence.to_csv('action_sequence.csv', index=False)
plot_out=['x1','x3']
thisishistory.iloc[40001]
RMSE= (np.sqrt(np.square(thisishistory[plot_out][40100:40300].values - [1,2]).mean()) + np.sqrt(np.square(thisishistory[plot_out][40400:40600].values - [0,0]).mean()))/2
RMSE
```
# Plotting
```
fig, axs = plt.subplots(3, sharex=False,figsize=(15,10))
x= range(600)
y1=thisishistory.iloc[-600:,3].values
y2=thisishistory.iloc[-600:,4].values
ref=np.ones((600,2))
ref[:,1]*=2
#both=np.hstack((y,ref))
z=thisishistory.iloc[-600:,4].values
uone= thisishistory.iloc[-600:,0].values
utwo= thisishistory.iloc[-600:,2].values
params = {'mathtext.default': 'regular' }
plt.rcParams.update(params)
#axs[0].set_title('System Outputs over Time', fontsize=20)
axs[0].plot(x, y1, label='$x_1$')
axs[0].plot(x, y2, label='$x_3$')
axs[0].plot(x, ref[:,0], 'k:',label= '$x_1,ref$')
axs[0].plot(x, ref[:,1], 'k:',label= '$x_3,ref$')
axs[1].plot(x, uone, 'k')
axs[2].plot(x, utwo,'k')
axs[0].set_ylabel("Position (m)", fontsize=12)
axs[1].set_ylabel("Actuator Force u1 (N)",fontsize=12)
axs[2].set_ylabel("Actuator Force u2 (N)", fontsize=12)
axs[2].set_xlabel("Timestep", fontsize=14)
#fig.legend(loc='upper right',bbox_to_anchor=(0.4, 0.26, 0.5, 0.62),fontsize=10)
fig.legend(loc='upper right',bbox_to_anchor=(0.4, 0.26, 0.50, 0.62),fontsize=10)
#plt.savefig('dist50sim300-1000-1000-pen10xcost.png', dpi=300)
fig, axs = plt.subplots(3, sharex=False,figsize=(15,10))
x= range(600)
y1=thisishistory.iloc[-600:,3].values
y2=thisishistory.iloc[-600:,4].values
ref=np.ones((600,2))
ref[:,1]*=2
ref[300:,:]=0
#both=np.hstack((y,ref))
z=thisishistory.iloc[-600:,4].values
uone= thisishistory.iloc[-600:,0].values
utwo= thisishistory.iloc[-600:,2].values
params = {'mathtext.default': 'regular' }
plt.rcParams.update(params)
#axs[0].set_title('System Outputs over Time', fontsize=20)
axs[0].plot(x, y1, label='$x_1$')
axs[0].plot(x, y2, label='$x_3$')
axs[0].plot(x, ref[:,0], 'k:',label= '$x_1,ref$')
axs[0].plot(x, ref[:,1], 'k:',label= '$x_3,ref$')
axs[1].plot(x, uone, 'k')
axs[2].plot(x, utwo,'k')
axs[0].set_ylabel("Position (m)", fontsize=12)
axs[1].set_ylabel("Actuator Force u1 (N)",fontsize=12)
axs[2].set_ylabel("Actuator Force u2 (N)", fontsize=12)
axs[2].set_xlabel("Timestep", fontsize=14)
#fig.legend(loc='upper right',bbox_to_anchor=(0.4, 0.26, 0.5, 0.62),fontsize=10)
fig.legend(loc='upper right',bbox_to_anchor=(0.4, 0.26, 0.5, 0.62),fontsize=10)
plt.savefig('changemassreinf.png', dpi=300)
axs[1].plot(x, uone, 'k')
axs[2].plot(x, utwo,'k')
axs[0].set_ylabel("Position (m)")
axs[0].set_xlabel("Timestep")
axs[1].set_ylabel("u1 (N)")
axs[2].set_ylabel("u2 (N)")
axs[2].set_xlabel("Timestep")
from matplotlib import rc
fig, axs = plt.subplots(2, sharex=True,figsize=(16,4))
x= range(600)
r1= np.sqrt(np.square(thisishistory[plot_out][-600:].values - [1,2]))[:,0]
r2= np.sqrt(np.square(thisishistory[plot_out][-600:].values - [1,2]))[:,1]
r= (r1+r2)/2
du1=deltaudt.iloc[-600:,0].values
du2=deltaudt.iloc[-600:,1].values
du=(du1+du2)/2
fig.suptitle(r'Control RMSE and Average $\Delta$u over Time')
axs[0].semilogy(x,r)
axs[1].plot(x,du)
axs[0].set_ylabel('RMSE')
axs[1].set_ylabel(r'Average $\Delta$u')
axs[1].set_xlabel("Time (ms)")
#axs[0].set_title("Position (m)")
#axs[0].plot(range(5000), aaa.iloc[:,3].values, 'B')
#plt.savefig('reinf_startsim10000_train500-250_batch16sim300-1000-1000-pen10xcost.png', dpi=300)
```
```
!XLA_FLAGS=--xla_gpu_cuda_data_dir=/cm/shared/sw/pkg/devel/cuda/10.1.243_418.87.00
import jax
print("jax version: ", jax.__version__)
import jax.numpy as np
import tensorflow_probability.substrates.jax as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
import matplotlib.pyplot as plt
from scipy.linalg import toeplitz
rng = jax.random.PRNGKey(2)
from jax.lib import xla_bridge
print(xla_bridge.get_backend().platform)
# for model stuff
import jax.experimental.optimizers as optimizers
import jax.experimental.stax as stax
from jax import jit
# for imnn
import imnn
print("IMNN version: ", imnn.__version__)
from imnn.imnn import (
AggregatedGradientIMNN,
AggregatedNumericalGradientIMNN,
AggregatedSimulatorIMNN,
GradientIMNN,
NumericalGradientIMNN,
SimulatorIMNN,
)
from imnn.lfi import (
ApproximateBayesianComputation,
GaussianApproximation,
)
from imnn.utils import value_and_jacrev, value_and_jacfwd
rng = jax.random.PRNGKey(0)
N = 20
def scipy_compute_r2(N):
_Di = np.tile(toeplitz(np.arange(N)), (N, N))
_Dj = np.concatenate(
[np.concatenate(
[np.tile(np.abs(i - j),(N, N))
for i in range(N)],
axis=0)
for j in range(N)],
axis=1)
_distance_squared = _Di * _Di + _Dj * _Dj
return _distance_squared
def compute_r2(N):
_r2 = np.tile(np.abs(np.expand_dims(np.arange(N), 0)
- np.expand_dims(np.arange(N), 1)), (N, N)) ** 2. + np.abs(np.expand_dims(np.repeat(np.arange(N), N), 0)
- np.expand_dims(np.repeat(np.arange(N), N), 1)) ** 2.
return _r2
r2 = compute_r2(N).astype(np.float32)
def ξ_G(β):
return np.exp(
-np.expand_dims(r2, tuple(np.arange(β.ndim)))
/ 4. / np.expand_dims(β, (-2, -1))**2.)
def get_G_field(β):
pass
def fill_zeros(k, value):
from functools import partial
def fnk(k):
return jax.lax.cond(np.less_equal(k, 1e-5), lambda _: value, lambda k: k+value, operand=k)
if len(k.shape) == 1:
return jax.vmap(fnk)(k)
else:
return jax.vmap(partial(fill_zeros, value=value))(k)
def xi_LN(r, α, β, PixelNoise=0.01):
xi = 1/(np.power(α+1e-12,2)) * (np.exp(np.power(α,2)*np.exp(-0.25*np.power(r/β,2))) - 1)
# Add pixel noise at zero separation:
xi = fill_zeros(xi, PixelNoise**2)
#xi[np.where(r<1e-5)] += PixelNoise**2
return xi
def dxi_LN_dalpha(r, α, β):
_deriv = 2/(α+1e-12) * np.exp(-0.25*np.power(r/β,2)) * np.exp(np.power(α,2)*np.exp(-0.25*np.power(r/β,2))) - 2/np.power(α+1e-12,3) * (np.exp(np.power(α,2)*np.exp(-0.25*np.power(r/β,2))) - 1)
return _deriv
def dxi_LN_dbeta(r, β, α):
return (0.5*np.power(r, 2) * np.exp(np.power(α, 2) * np.exp(-0.25 * np.power(r/β,2)) - 0.25*np.power(r/β,2)))*np.power(1./β,3)
#return (-0.5*r/np.power(β,2)) * np.exp(-0.25*np.power(r/β,2)) * np.exp(np.power(α,2)*np.exp(-0.25*np.power(r/β,2)))
def simulator(rng, n,
α, β, μ=np.zeros((N**2,), dtype=np.float32),
σ=np.ones((N**2 * (N**2 + 1) // 2,), dtype=np.float32)):
dist = tfd.TransformedDistribution(
#distribution=tfd.TransformedDistribution(
distribution=tfd.MultivariateNormalTriL(
loc=μ,
scale_tril=tfp.math.fill_triangular(σ)
* np.linalg.cholesky(ξ_G(β))),
#bijector=tfb.Reshape((N, N))),
bijector=tfb.Chain([
tfb.Scale(np.float32(1.) / np.expand_dims(α, (-1))),
tfb.Expm1(),
tfb.AffineScalar(shift=-np.float32(0.5) * np.expand_dims(α, -1)**np.float32(2.), scale=np.expand_dims(α, -1))]))
if n is not None:
return dist.sample(n, seed=rng)
else:
return dist.sample(seed=rng)
def _f_NL(
_α, _β,
μ=np.zeros((N**2,), dtype=np.float32),
σ=np.ones((N**2 * (N**2 + 1) // 2,), dtype=np.float32)):
return tfd.JointDistributionNamed(
dict(
α = tfd.Uniform(low=np.float32(0.), high=np.float32(2.)),
β = tfd.Uniform(low=np.float32(0.2), high=np.float32(0.8)),
f_NL = tfd.TransformedDistribution(
#distribution=tfd.TransformedDistribution(
distribution=tfd.MultivariateNormalTriL(
loc=μ,
scale_tril=tfp.math.fill_triangular(σ)
* np.linalg.cholesky(ξ_G(_β))),
#bijector=tfb.Reshape((N, N))),
bijector=tfb.Chain([
tfb.Scale(np.float32(1.) / np.expand_dims(_α, (-1))),
tfb.Expm1(),
tfb.AffineScalar(shift=-np.float32(0.5) * np.expand_dims(_α, -1)**np.float32(2.), scale=np.expand_dims(_α, -1))]))))
f_NL = tfd.JointDistributionNamed(
dict(
α = tfd.Uniform(low=np.float32(0.), high=np.float32(2.)),
β = tfd.Uniform(low=np.float32(0.2), high=np.float32(0.8)),
μ = tfd.Normal(
loc=np.zeros((N**2,), dtype=np.float32),
scale=np.ones((N**2,), dtype=np.float32)),
σ = tfp.distributions.Uniform(
low=np.zeros((N**2 * (N**2 + 1) // 2,), dtype=np.float32),
high=np.ones((N**2 * (N**2 + 1) // 2,), dtype=np.float32)),
f_NL = lambda α, β, μ, σ: tfd.TransformedDistribution(
#distribution=tfd.TransformedDistribution(
distribution=tfd.MultivariateNormalTriL(
loc=μ,
scale_tril=tfp.math.fill_triangular(σ)
* np.linalg.cholesky(ξ_G(β))),
bijector=tfb.Chain([
tfb.Scale(np.float32(1.) / np.expand_dims(α, (-1))),
tfb.Expm1(),
tfb.AffineScalar(shift=-np.float32(0.5) * np.expand_dims(α, -1)**np.float32(2.), scale=np.expand_dims(α, -1))]))))
rng, key = jax.random.split(rng)
f_NLs = f_NL.sample(10, seed=key)["f_NL"].reshape((10, N, N))
fig, ax = plt.subplots(2, 5, figsize=(10, 4))
plt.subplots_adjust(wspace=0, hspace=0)
for i in range(2):
for j in range(5):
a = ax[i, j].imshow(f_NLs[j + i * 5])
ax[i, j].set(xticks=[], yticks=[])
#plt.colorbar(a)
key,rng = jax.random.split(rng)
_a,_b = [np.ones(20)*1.0, np.ones(20)*(0.5)]
#plt.imshow(_f_NL(_a, _b).sample(seed=key)['f_NL'].reshape(N,N))
vals = _f_NL(_a, _b).sample(seed=key)['f_NL']
trgs = {'α': _a, 'β': _b, 'f_NL': vals}
vals = _f_NL(np.array(1.0), np.array(0.5)).sample(20, seed=key)['f_NL']
vals.shape
def _get_f_NL_for_grad(θ, my_f_NL):
_α, _β = θ
# take in _α,_β, f_NL, and build dict to pass to _f_NL
_dct = {
'α': _α,
'β': _β,
'f_NL': my_f_NL
}
return _f_NL(_α, _β).log_prob(_dct)
agrd, bgrd = (jax.vmap(jax.grad(_get_f_NL_for_grad, argnums=[0,1])))(_a,_b,vals)
np.var(agrd)
np.var(bgrd)
from jax import jacfwd, jacrev
blah = jax.jacrev(jax.jacfwd(_get_f_NL_for_grad, argnums=[0,1]))(_a,_b,vals)
hessian_loglik = jax.jit(jax.hessian(_get_f_NL_for_grad, argnums=[0]))
np.array([hessian_loglik(np.array([1.0, 0.5]), my_f_NL=vals[i]) for i in range(20)]).shape
F = np.mean(np.squeeze(np.array([hessian_loglik(np.array([1.0, 0.5]), my_f_NL=vals[i]) for i in range(20)])), axis=0)
F
np.linalg.det(F)
F * (1. / (0.8 - 0.2)) * (0.5)
```
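The cell above defines two constructions of the squared pixel-distance matrix, `scipy_compute_r2` (tiled Toeplitz blocks) and `compute_r2` (broadcasting); they are intended to agree. The following self-contained cross-check reproduces both in plain NumPy (not JAX) on a small grid:

```python
import numpy as np
from scipy.linalg import toeplitz

# NumPy cross-check (illustrative) that the two r2 constructions agree.
def scipy_r2(N):
    Di = np.tile(toeplitz(np.arange(N)), (N, N))
    Dj = np.concatenate(
        [np.concatenate([np.tile(np.abs(i - j), (N, N)) for i in range(N)],
                        axis=0)
         for j in range(N)], axis=1)
    return Di * Di + Dj * Dj

def broadcast_r2(N):
    a = np.tile(np.abs(np.arange(N)[None, :] - np.arange(N)[:, None]),
                (N, N)) ** 2
    b = np.abs(np.repeat(np.arange(N), N)[None, :]
               - np.repeat(np.arange(N), N)[:, None]) ** 2
    return a + b

assert np.array_equal(scipy_r2(4), broadcast_r2(4))
print(scipy_r2(2))
```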
```
import seaborn as sns
import matplotlib.pyplot as plt
import os
import pandas as pd
import numpy as np
df=pd.read_csv("C:\\Users\\Kartik\\OneDrive\\Desktop\\python project\\15-StudentsPerformance.csv")
df.head()
df.dtypes
plt.figure(figsize=(20,20))
_=sns.scatterplot(x = "reading score", y= "writing score", hue = "parental level of education", style = "race/ethnicity",
size = "gender",sizes = (20,300), data= df,palette = "rocket_r")
#SCATTER PLOT SHOWING RELATIONSHIP BETWEEN WRITING SCORE AND READING SCORE (WITH REFERENCE TO PARENTAL EDUCATION)
plt.figure(figsize=(20,25))
plt.subplot(3,2,1)
plt.pie(df['gender'].value_counts(),labels=['female','male'],colors=['blue','red'],autopct='%.1f%%')
plt.subplot(3,2,2)
plt.pie(df.iloc[:,1].value_counts(),labels=['a','b','c','d','e'],colors=['yellow','green','violet','red','green'],autopct='%.1f%%')
plt.subplot(3,2,3)
sns.countplot(df.iloc[:,2])
plt.subplot(3,2,4)
sns.countplot(df.iloc[:,3],palette='Blues')
plt.subplot(3,2,5)
sns.countplot(df.iloc[:,4],palette='Blues')
#PLOT 1: SHOWING THE PERCENTAGE COUNT OF GENDER IN THE DATA
#PLOT 2: SHOWING THE PERCENTAGE COUNT OF RACE AND ETHNICITY
#PLOT 3: SHOWING THE COUNT OF PARENTAL LEVEL OF EDUCATION
#PLOT 4: SHOWING THE COUNT OF FREE LUNCH VS STANDARD LUNCH
#PLOT 5: SHOWING THE COUNT OF TEST PREPARATION
plt.figure(figsize=(10,10))
sns.countplot(x="race/ethnicity", data=df)
plt.figure(figsize=(8,5))
a = sns.barplot(x="gender",y="math score",data=df)
a.set(xlabel="gender",ylabel="Maths",title="comparison")
#COMPARISON OF MATH SCORES BY GENDER
sns.boxplot(y='math score',x='race/ethnicity',data=df)
#BOX PLOT SHOWING MATH SCORE W.R.T RACE/ETHNICITY
sns.countplot(x= 'gender', data= df)
#COUNT PLOT FOR GENDER
sns.boxplot(y='reading score',x='race/ethnicity',data=df)
#BOX PLOT SHOWING READING SCORE W.R.T RACE/ETHNICITY
sns.violinplot(x="race/ethnicity", y= "math score", data=df, hue="gender")
#VIOLIN PLOT SHOWING MATH SCORE W.R.T RACE/ETHNICITY
sns.boxplot(y='writing score',x='race/ethnicity',data=df)
#BOX PLOT SHOWING WRITING SCORE W.R.T RACE/ETHNICITY
plt.figure(figsize=(15,12))
sns.swarmplot(x="race/ethnicity", y= "math score", data=df, hue="gender")
#SWARM PLOT SHOWING MATH SCORE W.R.T RACE/ETHNICITY
df.head()
df.dtypes
df["Total Score"]
df.tail()
df1= df.describe()
sns.heatmap(df1,annot=True, color = "blue")
#HEAT MAP
df.info()
df2=df[['race/ethnicity' , 'parental level of education', 'Total_Score']]
df2 = df2.pivot_table(index = 'parental level of education', columns = 'race/ethnicity', values = 'Total_Score')
df2
sns.heatmap(df2,annot=True, color = "blue")
plt.figure(figsize=(15,9))
_= sns.heatmap(df2, linecolor ="black", linewidth = "0.005",cmap="cubehelix",cbar_kws={"orientation": "horizontal"})
#HEATMAP
```
## Problem statement:
Create a delinquency model that predicts, as a probability for each loan transaction, whether the customer will pay back the loaned amount within 5 days of issuance of the loan (label '1' or '0').
## Investigating the data and exploratory data analysis
First we install and import all the libraries our application will use; loading them up front keeps the later algorithms and analysis clear. We then investigate the data, present some visualizations, and analyze selected features. Let's begin by importing the necessary packages and libraries.
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sn
import warnings
import os
warnings.filterwarnings('ignore')
#reading the file
os.chdir("C://Users//Public//Documents")
```
Now we load the dataset into the variable `data` using pandas.
```
data=pd.read_csv('sample_data_intw.csv')
# After loading the data, the next step is to view the top 5 rows of the dataset
data.head()
data.describe(include= 'all')
```
The `describe` function summarizes both the numeric and categorical values contained in the dataset, reporting count, mean, std, min, max, and the 25%, 50%, and 75% percentiles.
```
#next, how many rows and columns are there in the loaded data set
data.shape
#we will be listing the columns of all the data.
#we will check all columns
data.columns
```
Since the `Unnamed: 0` column has mostly NaN values, we drop it.
```
data.drop('Unnamed: 0',axis=1,inplace=True)
#after dropping the column
#we will check all columns once again
data.columns
#after removing one column checking how many rows and columns are their in the dataset
data.shape
```
Checking whether there are any null values in the columns
```
data.isnull().sum()
#once again analysis of both the values in the data set
data.describe(include='all')
```
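To see what `isnull().sum()` reports, here is a toy illustration on a tiny, made-up DataFrame (hypothetical data, not the loan dataset): it counts missing values per column.

```python
import numpy as np
import pandas as pd

# Toy illustration: isnull().sum() counts missing values per column.
toy = pd.DataFrame({'a': [1, np.nan, 3],
                    'b': [np.nan, np.nan, 'x']})
print(toy.isnull().sum().to_dict())  # {'a': 1, 'b': 2}
```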
Visualizing the pdate column and label column
```
sn.barplot(x='pdate',y='label',data=data)
data[:][data['msisdn']=='04581I85330']
pd.set_option('display.max_column',40)
data
Y=data.iloc[:,0]
for i in data.columns:
if i=='pdate':
continue
else:
data[i]=pd.to_numeric(data[i],errors='coerce')
data.describe(include='all')
labels=dict(enumerate(data.columns))
labels
data.boxplot('rental90')
#checking mean of the column aon
data['aon'].mean()
data.describe()
#dropping the columns which have more than 80% NaN values
data.drop(['pdate','msisdn','pcircle'],axis=1,inplace=True)
for i in data.columns:
data[i]=data[i].fillna(np.inf)
mode=data['aon'].mode()
mode=mode.astype(float)
mode
data['aon']=data['aon'].fillna(95)
data['aon'].mode()
mode=dict(enumerate(data[i].mode() for i in data.columns))
labels=dict(enumerate(data.columns))
List=[]
for i,j in mode.items():
List.append(float(j))
List
data[data.notnull()].count()
data['daily_decr30']=data['daily_decr30'].fillna(0.0)
data['daily_decr90']=data['daily_decr90'].fillna(0.0)
data['rental30']=data['rental30'].fillna(0.0)
data['rental90']=data['rental90'].fillna(0.0)
data[data.notnull()].count()
data['label'][data['label']==1].count()
data['label'][data['label']==0].count()
headnames=[str(i) for i in data.columns]
headnames
from sklearn.preprocessing import Normalizer
scaller=Normalizer()
data.aon.plot(kind='density')
Y=data.label
data.drop('label',axis=1,inplace=True)
data=scaller.fit_transform(data)
data=pd.DataFrame(data,columns=headnames[1:])
data
```
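Note that sklearn's `Normalizer` rescales each *row* (each sample) to unit norm, rather than standardizing each feature column. A minimal sketch on toy values (the numbers here are illustrative, not from the loan dataset):

```python
import numpy as np
from sklearn.preprocessing import Normalizer

# Toy data: two samples, three features
X = np.array([[3.0, 4.0, 0.0],
              [1.0, 0.0, 0.0]])

# Normalizer rescales each ROW to unit L2 norm (it is not column-wise scaling)
X_norm = Normalizer().fit_transform(X)
print(X_norm)

# Every row now has Euclidean length 1
print(np.linalg.norm(X_norm, axis=1))
```

This is why the scaled feature values above depend on the other features in the same row.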
## Feature Engineering
For feature selection I used an extra-trees classifier, an ensemble method that aggregates the results of multiple de-correlated decision trees to score feature importance.
```
from sklearn.ensemble import ExtraTreesClassifier as etc
array=data.values
x=array
y=Y.values
model=etc()
model.fit(x,y)
score=model.feature_importances_
score
ans=list(zip(data.columns,score))
ans
from operator import itemgetter
sorted(ans,key=itemgetter(1),reverse=True)
from sklearn.ensemble import RandomForestClassifier as rf, GradientBoostingClassifier as gb
RF=rf()
GB=gb()
from sklearn.feature_selection import RFE
rfe=RFE(RF,n_features_to_select=25)
fit=rfe.fit(x,y)
results=fit.transform(x)
print(fit.n_features_)
print(fit.support_)
print(fit.ranking_)
type(fit.support_)
j=1
names=[]
for i in fit.support_:
if i==True:
names.append(headnames[j])
j+=1
datafs=pd.DataFrame()
for i in names:
datafs[i]=data[i]
from scipy import stats
datafs[(np.abs(stats.zscore(datafs))<3).all(axis=1)]
data.boxplot('last_rech_date_ma')
y
```
# ML Modeling
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(datafs,Y,random_state=7)
model=RandomForestClassifier()
model.fit(x_train,y_train)
pre=model.predict(x_test)
from sklearn.metrics import classification_report
print(classification_report(y_test,pre))
from sklearn.utils import resample
datafs['lable']=Y
df_majority=datafs[datafs.lable==1]
df_minority=datafs[datafs.lable==0]
df_downsamp=resample(df_majority,replace=False,n_samples=23737,random_state=7)
df_downsamp
df_down=pd.concat([df_downsamp,df_minority])
y=df_down.lable
df_down.drop(['lable'],axis=1,inplace=True)
df_down
x_train,x_test,y_train,y_test=train_test_split(df_down,y,random_state=7)
model=RandomForestClassifier()
model.fit(x_train,y_train)
pre=model.predict(x_test)
from sklearn.metrics import classification_report
print(classification_report(y_test,pre))
y=datafs['lable']
datafs.drop(['lable'],axis=1,inplace=True)
pre=model.predict(datafs)
from sklearn.metrics import classification_report
print(classification_report(y,pre))
datafs['label']=y[:]
df_majority=datafs[datafs.label==1]
df_minority=datafs[datafs.label==0]
df_upsamp=resample(df_minority,replace=True,n_samples=166264,random_state=7)
df_up=pd.concat([df_majority,df_upsamp])
y=df_up.label
df_up.drop('label',axis=1,inplace=True)
x_train,x_test,y_train,y_test=train_test_split(df_up,y,random_state=7)
model=RandomForestClassifier()
model.fit(x_train,y_train)
pre=model.predict(x_test)
from sklearn.metrics import classification_report
print(classification_report(y_test,pre))
y=datafs['label']
datafs.drop(['label'],axis=1,inplace=True)
pre=model.predict(datafs)
from sklearn.metrics import classification_report
print(classification_report(y,pre))
from sklearn.metrics import confusion_matrix
result=confusion_matrix(y,pre)  # rows are true labels, columns are predictions
from sklearn.metrics import accuracy_score
# y was reassigned to the full-data labels above, so rebuild labels aligned with df_up
y_up=pd.concat([df_majority,df_upsamp]).label
x_train,x_val,y_train,y_val=train_test_split(df_up,y_up,random_state=1,test_size=0.2)
#MODEL-1) LogisticRegression
#------------------------------------------
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(x_train, y_train)
y_pred = logreg.predict(x_val)
acc_logreg = round(accuracy_score(y_pred, y_val) * 100, 2)
print( "MODEL-1: Accuracy of LogisticRegression : ", acc_logreg )
#OUTPUT:-
#MODEL-1: Accuracy of LogisticRegression : 77.0
from sklearn.tree import DecisionTreeClassifier
decisiontree = DecisionTreeClassifier()
decisiontree.fit(x_train, y_train)
y_pred = decisiontree.predict(x_val)
acc_decisiontree = round(accuracy_score(y_pred, y_val) * 100, 2)
print( "MODEL-6: Accuracy of DecisionTreeClassifier : ", acc_decisiontree )
#OUTPUT:-
#MODEL-6: Accuracy of DecisionTreeClassifier : 81.22
```
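The problem statement asks for a probability per transaction, but the models above only report hard class labels. A hedged sketch of how `predict_proba` could supply those probabilities; the data here is a synthetic stand-in (the feature values and sizes are made up, the real pipeline would pass `datafs`/`df_up`):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the loan features (illustrative only)
rng = np.random.RandomState(7)
X = rng.rand(500, 5)
y = (X[:, 0] + 0.3 * rng.rand(500) > 0.6).astype(int)

x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=7)
model = RandomForestClassifier(random_state=7)
model.fit(x_train, y_train)

# predict_proba returns one column per class; column 1 is P(label == 1)
proba = model.predict_proba(x_test)[:, 1]
print(proba[:5])
```

These per-transaction probabilities could then be thresholded, or reported directly as the delinquency score.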
# <div style="text-align: center">Linear Algebra for Data Scientists
<div style="text-align: center">One of the most common questions we get on <b>Data science</b> is:
<br>
How much maths do I need to learn to be a <b>data scientist</b>?
<br>
If you get confused and ask experts what should you learn at this stage, most of them would suggest / agree that you go ahead with Linear Algebra!
This is the third step of the [10 Steps to Become a Data Scientist](https://www.kaggle.com/mjbahmani/10-steps-to-become-a-data-scientist), and it covers all of the linear algebra you need as a data scientist.</div>
<div style="text-align:center">last update: <b>12/12/2018</b></div>
You can Fork code and Follow me on:
> ###### [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist)
> ###### [Kaggle](https://www.kaggle.com/mjbahmani/)
-------------------------------------------------------------------------------------------------------------
<b>I hope you find this kernel helpful and some <font color='blue'>UPVOTES</font> would be very much appreciated.</b>
-----------
<a id="top"></a> <br>
## Notebook Content
1. [Introduction](#1)
1. [Basic Concepts](#2)
1. [Notation ](#2)
1. [Matrix Multiplication](#3)
1. [Vector-Vector Products](#4)
1. [Outer Product of Two Vectors](#5)
1. [Matrix-Vector Products](#6)
1. [Matrix-Matrix Products](#7)
1. [Identity Matrix](#8)
1. [Diagonal Matrix](#9)
1. [Transpose of a Matrix](#10)
1. [Symmetric Matrices](#11)
1. [The Trace](#12)
1. [Norms](#13)
1. [Linear Independence and Rank](#14)
1. [Column Rank of a Matrix](#15)
1. [Row Rank of a Matrix](#16)
1. [Rank of a Matrix](#17)
1. [Subtraction and Addition of Matrices](#18)
1. [Inverse](#19)
1. [Orthogonal Matrices](#20)
1. [Range and Nullspace of a Matrix](#21)
1. [Determinant](#22)
1. [geometric interpretation of the determinant](#23)
1. [Tensors](#24)
1. [Hyperplane](#25)
1. [Eigenvalues and Eigenvectors](#30)
1. [Exercise](#31)
1. [Conclusion](#32)
1. [References](#33)
<a id="1"></a> <br>
# 1-Introduction
This is the third step of the [10 Steps to Become a Data Scientist](https://www.kaggle.com/mjbahmani/10-steps-to-become-a-data-scientist).
**Linear algebra** is the branch of mathematics that deals with **vector spaces**. A good understanding of linear algebra is essential for analyzing machine learning algorithms, especially in **deep learning**, where so much happens behind the curtain. I will try to keep heavy mathematical derivations to a minimum while still covering everything you need as a data scientist.
<img src='https://camo.githubusercontent.com/e42ea0e40062cc1e339a6b90054bfbe62be64402/68747470733a2f2f63646e2e646973636f72646170702e636f6d2f6174746163686d656e74732f3339313937313830393536333530383733382f3434323635393336333534333331383532382f7363616c61722d766563746f722d6d61747269782d74656e736f722e706e67' height=200 width=700>
<a id="top"></a> <br>
*Is there anything more useless or less useful than Algebra?*
**Billy Connolly**
## 1-1 Import
```
import matplotlib.patches as patch
import matplotlib.pyplot as plt
from scipy.stats import norm
from scipy import linalg
from sklearn import svm
import tensorflow as tf
import pandas as pd
import numpy as np
import glob
import sys
import os
```
## 1-2 Setup
```
%matplotlib inline
%precision 4
plt.style.use('ggplot')
np.set_printoptions(suppress=True)
```
<a id="1"></a> <br>
# 2- Basic Concepts
The following system of equations:
$\begin{equation}
\begin{split}
4 x_1 - 5 x_2 & = -13 \\
-2x_1 + 3 x_2 & = 9
\end{split}
\end{equation}$
We are looking for a unique solution for the two variables $x_1$ and $x_2$. The system can be described as:
$$
Ax=b
$$
as matrices:
$$A = \begin{bmatrix}
4 & -5 \\[0.3em]
-2 & 3
\end{bmatrix},\
b = \begin{bmatrix}
-13 \\[0.3em]
9
\end{bmatrix}$$
A **scalar** is a single real number **value**, for example one element of a vector. In a vector space model or a vector mapping of (symbolic, qualitative, or quantitative) properties, the scalar holds the concrete value or property of a variable.
A **vector** is an array, tuple, or ordered list of scalars (or elements) of size $n$, with $n$ a positive integer. The **length** of the vector, that is the number of scalars in the vector, is also called the **order** of the vector.
<img src='https://cnx.org/resources/ba7a89a854e2336c540409615dbf47aa44155c56/pic002.png' height=400 width=400>
<a id="top"></a> <br>
```
#a 3-dimensional array in numpy
a = np.zeros((2, 3, 4))
#l = [[[ 0., 0., 0., 0.],
# [ 0., 0., 0., 0.],
# [ 0., 0., 0., 0.]],
# [[ 0., 0., 0., 0.],
# [ 0., 0., 0., 0.],
# [ 0., 0., 0., 0.]]]
a
# Declaring Vectors
x = [1, 2, 3]
y = [4, 5, 6]
print(type(x))
# This doesn't give vector addition; it concatenates the two lists.
print(x + y)
# Vector addition using Numpy
z = np.add(x, y)
print(z)
print(type(z))
# Vector Cross Product
mul = np.cross(x, y)
print(mul)
```
**Vectorization** is the process of creating a vector from some data using some process.
Vectors of the length $n$ could be treated like points in $n$-dimensional space. One can calculate the distance between such points using measures like [Euclidean Distance](https://en.wikipedia.org/wiki/Euclidean_distance). The similarity of vectors could also be calculated using [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity).
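Both measures mentioned above can be computed directly with *numpy*; a small sketch:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Euclidean distance between the points x and y
dist = np.linalg.norm(x - y)

# Cosine similarity: dot product divided by the product of the vector lengths
cos_sim = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

print(dist, cos_sim)
```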
###### [Go to top](#top)
<a id="2"></a> <br>
## 3- Notation
A **matrix** is a list of vectors that are all of the same length. $A$ is a matrix with $m$ rows and $n$ columns; the entries of $A$ are real numbers:
$A \in \mathbb{R}^{m \times n}$
A vector $x$ with $n$ entries of real numbers can also be thought of as a matrix with $n$ rows and $1$ column, also known as a **column vector**.
$x = \begin{bmatrix}
x_1 \\[0.3em]
x_2 \\[0.3em]
\vdots \\[0.3em]
x_n
\end{bmatrix}$
Representing a **row vector**, that is a matrix with $1$ row and $n$ columns, we write $x^T$ (this denotes the transpose of $x$; see the section on transposes below).
$x^T = \begin{bmatrix}
x_1 & x_2 & \cdots & x_n
\end{bmatrix}$
We use the notation $a_{ij}$ (or $A_{ij}$, $A_{i,j}$, etc.) to denote the entry of $A$ in the $i$th row and
$j$th column:
$A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\[0.3em]
a_{21} & a_{22} & \cdots & a_{2n} \\[0.3em]
\vdots & \vdots & \ddots & \vdots \\[0.3em]
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}$
We denote the $j$th column of $A$ by $a_j$ or $A_{:,j}$:
$A = \begin{bmatrix}
\big| & \big| & & \big| \\[0.3em]
a_{1} & a_{2} & \cdots & a_{n} \\[0.3em]
\big| & \big| & & \big|
\end{bmatrix}$
We denote the $i$th row of $A$ by $a_i^T$ or $A_{i,:}$:
$A = \begin{bmatrix}
-- & a_1^T & -- \\[0.3em]
-- & a_2^T & -- \\[0.3em]
& \vdots & \\[0.3em]
-- & a_m^T & --
\end{bmatrix}$
A $n \times m$ matrix is a **two-dimensional** array with $n$ rows and $m$ columns.
###### [Go to top](#top)
<a id="3"></a> <br>
## 4-Matrix Multiplication
The result of multiplying two matrices $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times p}$ is the matrix:
```
# initializing matrices
x = np.array([[1, 2], [4, 5]])
y = np.array([[7, 8], [9, 10]])
```
$C = AB \in \mathbb{R}^{m \times p}$
That is, each entry of $C$ is formed from a row of $A$ and a column of $B$:
$C_{ij}=\sum_{k=1}^n{A_{ik}B_{kj}}$
<img src='https://cdn.britannica.com/06/77706-004-31EE92F3.jpg'>
The number of columns in $A$ must be equal to the number of rows in $B$.
###### [Go to top](#top)
```
# using add() to add matrices
print ("The element wise addition of matrix is : ")
print (np.add(x,y))
# using subtract() to subtract matrices
print ("The element wise subtraction of matrix is : ")
print (np.subtract(x,y))
# using divide() to divide matrices
print ("The element wise division of matrix is : ")
print (np.divide(x,y))
# using multiply() to multiply matrices element wise
print ("The element wise multiplication of matrix is : ")
print (np.multiply(x,y))
```
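Note that the operations above are all element-wise. The true matrix product, with $C_{ij} = \sum_k A_{ik} B_{kj}$, uses `np.dot` (or the `@` operator) instead:

```python
import numpy as np

x = np.array([[1, 2], [4, 5]])
y = np.array([[7, 8], [9, 10]])

# True matrix product: C[i, j] = sum over k of x[i, k] * y[k, j]
C = np.dot(x, y)      # equivalently: x @ y
print(C)
```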
<a id="4"></a> <br>
## 4-1 Vector-Vector Products
Inner or Dot **Product** of Two Vectors.
For two vectors $x, y \in \mathbb{R}^n$, the **inner product** or **dot product** $x^T y$ is a real number:
$x^T y \in \mathbb{R} = \begin{bmatrix}
x_1 & x_2 & \cdots & x_n
\end{bmatrix} \begin{bmatrix}
y_1 \\[0.3em]
y_2 \\[0.3em]
\vdots \\[0.3em]
y_n
\end{bmatrix} = \sum_{i=1}^{n}{x_i y_i}$
The **inner product** is a special case of matrix multiplication.
It is always the case that $x^T y = y^T x$.
##### Example
To calculate the inner product of two vectors $x = [1 2 3 4]$ and $y = [5 6 7 8]$, we can loop through the vector and multiply and sum the scalars (this is simplified code):
```
x = (1, 2, 3, 4)
y = (5, 6, 7, 8)
n = len(x)
if n == len(y):
result = 0
for i in range(n):
result += x[i] * y[i]
print(result)
```
It is clear that in the code above we could change line 7 to `result += y[i] * x[i]` without affecting the result.
###### [Go to top](#top)
We can use the *numpy* module to apply the same operation, to calculate the **inner product**. We import the *numpy* module and assign it a name *np* for the following code:
We define the vectors $x$ and $y$ using *numpy*:
```
x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7, 8])
print("x:", x)
print("y:", y)
```
We can now calculate the $dot$ or $inner product$ using the *dot* function of *numpy*:
```
np.dot(x, y)
```
The order of the arguments is irrelevant:
```
np.dot(y, x)
```
Note that both vectors are actually **row vectors** in the above code. We can transpose them to column vectors by using the *shape* property:
```
print("x:", x)
x.shape = (4, 1)
print("xT:", x)
print("y:", y)
y.shape = (4, 1)
print("yT:", y)
```
In fact, in our understanding of linear algebra, we take the arrays above to represent **row vectors**. *Numpy* treats them differently.
We see the issues when we try to transform the array objects. Usually, we can transform a row vector into a column vector in *numpy* by using the *T* method on vector or matrix objects:
###### [Go to top](#top)
```
x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7, 8])
print("x:", x)
print("y:", y)
print("xT:", x.T)
print("yT:", y.T)
```
The problem here is that this does not do what we expect. It only works if we declare the variables not as flat arrays of numbers but as matrices (2-dimensional arrays):
```
x = np.array([[1, 2, 3, 4]])
y = np.array([[5, 6, 7, 8]])
print("x:", x)
print("y:", y)
print("xT:", x.T)
print("yT:", y.T)
```
Note that the *numpy* functions *dot* and *outer* are not affected by this distinction. We can compute the dot product using the mathematical equation above in *numpy* using the new $x$ and $y$ row vectors:
###### [Go to top](#top)
```
print("x:", x)
print("y:", y.T)
np.dot(x, y.T)
```
Or by reverting to:
```
print("x:", x.T)
print("y:", y)
np.dot(y, x.T)
```
To read the result from this array of arrays, we would need to access the value this way:
```
np.dot(y, x.T)[0][0]
```
<a id="5"></a> <br>
## 4-2 Outer Product of Two Vectors
For two vectors $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$, where $n$ and $m$ do not have to be equal, the **outer product** of $x$ and $y$ is:
$xy^T \in \mathbb{R}^{m\times n}$
The **outer product** results in a matrix with $m$ rows and $n$ columns by $(xy^T)_{ij} = x_i y_j$:
$xy^T \in \mathbb{R}^{m\times n} = \begin{bmatrix}
x_1 \\[0.3em]
x_2 \\[0.3em]
\vdots \\[0.3em]
x_n
\end{bmatrix} \begin{bmatrix}
y_1 & y_2 & \cdots & y_n
\end{bmatrix} = \begin{bmatrix}
x_1 y_1 & x_1 y_2 & \cdots & x_1 y_n \\[0.3em]
x_2 y_1 & x_2 y_2 & \cdots & x_2 y_n \\[0.3em]
\vdots & \vdots & \ddots & \vdots \\[0.3em]
x_m y_1 & x_m y_2 & \cdots & x_m y_n \\[0.3em]
\end{bmatrix}$
Some useful property of the outer product: assume $\mathbf{1} \in \mathbb{R}^n$ is an $n$-dimensional vector of scalars with the value $1$. Given a matrix $A \in \mathbb{R}^{m\times n}$ with all columns equal to some vector $x \in \mathbb{R}^m$, using the outer product $A$ can be represented as:
$A = \begin{bmatrix}
\big| & \big| & & \big| \\[0.3em]
x & x & \cdots & x \\[0.3em]
\big| & \big| & & \big|
\end{bmatrix} = \begin{bmatrix}
x_1 & x_1 & \cdots & x_1 \\[0.3em]
x_2 & x_2 & \cdots & x_2 \\[0.3em]
\vdots & \vdots & \ddots & \vdots \\[0.3em]
x_m &x_m & \cdots & x_m
\end{bmatrix} = \begin{bmatrix}
x_1 \\[0.3em]
x_2 \\[0.3em]
\vdots \\[0.3em]
x_m
\end{bmatrix} \begin{bmatrix}
1 & 1 & \cdots & 1
\end{bmatrix} = x \mathbf{1}^T$
```
x = np.array([[1, 2, 3, 4]])
print("x:", x)
print("xT:", np.reshape(x, (4, 1)))
print("xT:", x.T)
print("xT:", x.transpose())
```
Example
###### [Go to top](#top)
We can now compute the **outer product** by multiplying the column vector $x$ with the row vector $y$:
```
x = np.array([[1, 2, 3, 4]])
y = np.array([[5, 6, 7, 8]])
x.T * y
```
*Numpy* provides an *outer* function that does all that:
```
np.outer(x, y)
```
Note, in this simple case using the simple arrays for the data structures of the vectors does not affect the result of the *outer* function:
```
x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7, 8])
np.outer(x, y)
```
<a id="6"></a> <br>
## 4-3 Matrix-Vector Products
Assume a matrix $A \in \mathbb{R}^{m\times n}$ and a vector $x \in \mathbb{R}^n$ the product results in a vector $y = Ax \in \mathbb{R}^m$.
Each entry $y_i$ of $y = Ax$ is the dot product of the $i$th row of $A$ with the vector $x$. Let us first consider matrix multiplication with a scalar:
###### [Go to top](#top)
$A = \begin{bmatrix}
1 & 2 \\[0.3em]
3 & 4
\end{bmatrix}$
We can compute the product of $A$ with a scalar $n = 2$ as:
$A = \begin{bmatrix}
1 * n & 2 * n \\[0.3em]
3 * n & 4 * n
\end{bmatrix} = \begin{bmatrix}
1 * 2 & 2 * 2 \\[0.3em]
3 * 2 & 4 * 2
\end{bmatrix} = \begin{bmatrix}
2 & 4 \\[0.3em]
6 & 8
\end{bmatrix} $
Using *numpy* this can be achieved by:
```
import numpy as np
A = np.array([[4, 5, 6],
[7, 8, 9]])
A * 2
```
Assume that we have a column vector $x$:
$x = \begin{bmatrix}
1 \\[0.3em]
2 \\[0.3em]
3
\end{bmatrix}$
To be able to multiply this vector with a matrix, the number of columns in the matrix must correspond to the number of rows in the column vector. The matrix $A$ must have $3$ columns, as for example:
$A = \begin{bmatrix}
4 & 5 & 6\\[0.3em]
7 & 8 & 9
\end{bmatrix}$
To compute $Ax$, we multiply row $1$ of the matrix with column $1$ of $x$:
$\begin{bmatrix}
4 & 5 & 6
\end{bmatrix}
\begin{bmatrix}
1 \\[0.3em]
2 \\[0.3em]
3
\end{bmatrix} = 4 * 1 + 5 * 2 + 6 * 3 = 32 $
We then compute the dot product of row $2$ of $A$ and column $1$ of $x$:
$\begin{bmatrix}
7 & 8 & 9
\end{bmatrix}
\begin{bmatrix}
1 \\[0.3em]
2 \\[0.3em]
3
\end{bmatrix} = 7 * 1 + 8 * 2 + 9 * 3 = 50 $
The resulting column vector $Ax$ is:
$Ax = \begin{bmatrix}
32 \\[0.3em]
50
\end{bmatrix}$
Using *numpy* we can compute $Ax$:
```
A = np.array([[4, 5, 6],
[7, 8, 9]])
x = np.array([1, 2, 3])
A.dot(x)
```
We can thus describe the product writing $A$ by rows as:
<a id="top"></a> <br>
$y = Ax = \begin{bmatrix}
-- & a_1^T & -- \\[0.3em]
-- & a_2^T & -- \\[0.3em]
& \vdots & \\[0.3em]
-- & a_m^T & --
\end{bmatrix} x = \begin{bmatrix}
a_1^T x \\[0.3em]
a_2^T x \\[0.3em]
\vdots \\[0.3em]
a_m^T x
\end{bmatrix}$
This means that the $i$th scalar of $y$ is the inner product of the $i$th row of $A$ and $x$, that is $y_i = a_i^T x$.
If we write $A$ in column form, then:
$y = Ax =
\begin{bmatrix}
\big| & \big| & & \big| \\[0.3em]
a_1 & a_2 & \cdots & a_n \\[0.3em]
\big| & \big| & & \big|
\end{bmatrix}
\begin{bmatrix}
x_1 \\[0.3em]
x_2 \\[0.3em]
\vdots \\[0.3em]
x_n
\end{bmatrix} =
\begin{bmatrix}
a_1
\end{bmatrix} x_1 +
\begin{bmatrix}
a_2
\end{bmatrix} x_2 + \dots +
\begin{bmatrix}
a_n
\end{bmatrix} x_n
$
In this case $y$ is a **[linear combination](https://en.wikipedia.org/wiki/Linear_combination)** of the *columns* of $A$, the coefficients taken from $x$.
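We can verify this column view numerically with the same $A$ and $x$ as in the worked example above:

```python
import numpy as np

A = np.array([[4, 5, 6],
              [7, 8, 9]])
x = np.array([1, 2, 3])

# y = Ax as a linear combination of the columns of A, weighted by the entries of x
y = sum(x[j] * A[:, j] for j in range(A.shape[1]))
print(y)

# Matches the direct matrix-vector product
print(A.dot(x))
```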
The above examples multiply on the right by a column vector. One can also multiply on the left by a row vector, $y^T = x^T A$ for $A \in \mathbb{R}^{m\times n}$, $x\in \mathbb{R}^m$, $y \in \mathbb{R}^n$. There are two ways to express $y^T$. With $A$ expressed by its columns, the $i$th scalar of $y^T$ corresponds to the inner product of $x$ and the $i$th column of $A$:
$y^T = x^T A = x^T \begin{bmatrix}
\big| & \big| & & \big| \\[0.3em]
a_1 & a_2 & \cdots & a_n \\[0.3em]
\big| & \big| & & \big|
\end{bmatrix} =
\begin{bmatrix}
x^T a_1 & x^T a_2 & \dots & x^T a_n
\end{bmatrix}$
One can express $A$ by rows, where $y^T$ is a linear combination of the rows of $A$ with the scalars from $x$.
$\begin{equation}
\begin{split}
y^T & = x^T A \\
& = \begin{bmatrix}
x_1 & x_2 & \dots & x_n
\end{bmatrix}
\begin{bmatrix}
-- & a_1^T & -- \\[0.3em]
-- & a_2^T & -- \\[0.3em]
& \vdots & \\[0.3em]
-- & a_m^T & --
\end{bmatrix} \\
& = x_1 \begin{bmatrix}-- & a_1^T & --\end{bmatrix} + x_2 \begin{bmatrix}-- & a_2^T & --\end{bmatrix} + \dots + x_n \begin{bmatrix}-- & a_n^T & --\end{bmatrix}
\end{split}
\end{equation}$
###### [Go to top](#top)
<a id="7"></a> <br>
## 4-4 Matrix-Matrix Products
One can view matrix-matrix multiplication $C = AB$ as a set of vector-vector products. The $(i,j)$th entry of $C$ is the inner product of the $i$th row of $A$ and the $j$th column of $B$:
```
matrix1 = np.matrix(
[[0, 4],
[2, 0]]
)
matrix2 = np.matrix(
[[-1, 2],
[1, -2]]
)
matrix1 + matrix2
matrix1 - matrix2
```
### 4-4-1 Multiplication
To multiply two matrices with numpy, you can use the np.dot method:
```
np.dot(matrix1, matrix2)
matrix1 * matrix2
```
$C = AB =
\begin{bmatrix}
-- & a_1^T & -- \\[0.3em]
-- & a_2^T & -- \\[0.3em]
& \vdots & \\[0.3em]
-- & a_m^T & --
\end{bmatrix}
\begin{bmatrix}
\big| & \big| & & \big| \\[0.3em]
b_1 & b_2 & \cdots & b_p \\[0.3em]
\big| & \big| & & \big|
\end{bmatrix} =
\begin{bmatrix}
a_1^T b_1 & a_1^T b_2 & \cdots & a_1^T b_p \\[0.3em]
a_2^T b_1 & a_2^T b_2 & \cdots & a_2^T b_p \\[0.3em]
\vdots & \vdots & \ddots & \vdots \\[0.3em]
a_m^T b_1 & a_m^T b_2 & \cdots & a_m^T b_p
\end{bmatrix}$
Here $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{n\times p}$, $a_i \in \mathbb{R}^n$ and $b_j \in \mathbb{R}^n$, and $A$ is represented by rows, $B$ by columns.
If we represent $A$ by columns and $B$ by rows, then $AB$ is the sum of the outer products:
$C = AB =
\begin{bmatrix}
\big| & \big| & & \big| \\[0.3em]
a_1 & a_2 & \cdots & a_n \\[0.3em]
\big| & \big| & & \big|
\end{bmatrix}
\begin{bmatrix}
-- & b_1^T & -- \\[0.3em]
-- & b_2^T & -- \\[0.3em]
& \vdots & \\[0.3em]
-- & b_n^T & --
\end{bmatrix}
= \sum_{i=1}^n a_i b_i^T
$
This means that $AB$ is the sum over all $i$ of the outer product of the $i$th column of $A$ and the $i$th row of $B$.
One can interpret matrix-matrix operations also as a set of matrix-vector products. Representing $B$ by columns, the columns of $C$ are matrix-vector products between $A$ and the columns of $B$:
$C = AB = A
\begin{bmatrix}
\big| & \big| & & \big| \\[0.3em]
b_1 & b_2 & \cdots & b_p \\[0.3em]
\big| & \big| & & \big|
\end{bmatrix} =
\begin{bmatrix}
\big| & \big| & & \big| \\[0.3em]
A b_1 & A b_2 & \cdots & A b_p \\[0.3em]
\big| & \big| & & \big|
\end{bmatrix}
$
In this interpretation the $i$th column of $C$ is the matrix-vector product with the vector on the right, i.e. $c_i = A b_i$.
Representing $A$ by rows, the rows of $C$ are the matrix-vector products between the rows of $A$ and $B$:
$C = AB = \begin{bmatrix}
-- & a_1^T & -- \\[0.3em]
-- & a_2^T & -- \\[0.3em]
& \vdots & \\[0.3em]
-- & a_m^T & --
\end{bmatrix}
B =
\begin{bmatrix}
-- & a_1^T B & -- \\[0.3em]
-- & a_2^T B & -- \\[0.3em]
& \vdots & \\[0.3em]
-- & a_m^T B & --
\end{bmatrix}$
The $i$th row of $C$ is the matrix-vector product with the vector on the left, i.e. $c_i^T = a_i^T B$.
#### Notes on Matrix-Matrix Products
**Matrix multiplication is associative:** $(AB)C = A(BC)$
**Matrix multiplication is distributive:** $A(B + C) = AB + AC$
**Matrix multiplication is, in general, not commutative;** It can be the case that $AB \neq BA$. (For example, if $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{n\times q}$, the matrix product $BA$ does not even exist if $m$ and $q$ are not equal!)
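A quick numeric check of non-commutativity for two square matrices:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Even when both products exist, AB and BA generally differ
print(A @ B)
print(B @ A)
```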
###### [Go to top](#top)
<a id="8"></a> <br>
## 5- Identity Matrix
The **identity matrix** $I \in \mathbb{R}^{n\times n}$ is a square matrix with the value $1$ on the diagonal and $0$ everywhere else:
```
np.eye(4)
```
$I_{ij} = \left\{
\begin{array}{lr}
1 & i = j\\
0 & i \neq j
\end{array}
\right.
$
For all $A \in \mathbb{R}^{m\times n}$:
$AI = A = IA$
In the equation above multiplication has to be made possible, which means that in the portion $AI = A$ the dimensions of $I$ have to be $n\times n$, while in $A = IA$ they have to be $m\times m$.
Let us first define a matrix $A$ in *numpy*:
```
A = np.array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8],
[9, 10, 11]])
print("A:", A)
```
We can ask for the shape of $A$:
```
A.shape
```
The *shape* property of a matrix contains the $m$ (number of rows) and $n$ (number of columns) properties in a tuple, in that particular order. We can create an identity matrix for the use in $AI$ by using the $n$ value:
```
np.identity(A.shape[1], dtype="int")
```
Note that we specify the *dtype* parameter to *identity* as *int*, since the default would return a matrix of *float* values.
To generate an identity matrix for the use in $IA$ we would use the $m$ value:
```
np.identity(A.shape[0], dtype="int")
```
We can compute the dot product of $A$ and its identity matrix $I$:
```
n = A.shape[1]
I = np.array(np.identity(n, dtype="int"))
np.dot(A, I)
```
The same is true for the other direction:
```
m = A.shape[0]
I = np.array(np.identity(m, dtype="int"))
np.dot(I, A)
```
### 5-1 Inverse Matrices
```
inverse = np.linalg.inv(matrix1)
print(inverse)
```
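A short sketch verifying that a matrix times its inverse recovers the identity (assuming the matrix is invertible; the values below match `matrix1` above):

```python
import numpy as np

A = np.array([[0.0, 4.0],
              [2.0, 0.0]])   # same values as matrix1 above

A_inv = np.linalg.inv(A)

# A multiplied by its inverse gives the identity, up to floating-point error
print(A @ A_inv)
```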
<a id="9"></a> <br>
## 6- Diagonal Matrix
In the **diagonal matrix** non-diagonal elements are $0$, that is $D = diag(d_1, d_2, \dots{}, d_n)$, with:
$D_{ij} = \left\{
\begin{array}{lr}
d_i & i = j\\
0 & i \neq j
\end{array}
\right.
$
The identity matrix is a special case of a diagonal matrix: $I = diag(1, 1, \dots{}, 1)$.
In *numpy* we can create a *diagonal matrix* from any given matrix using the *diag* function:
```
import numpy as np
A = np.array([[0, 1, 2, 3],
[4, 5, 6, 7],
[8, 9, 10, 11],
[12, 13, 14, 15]])
np.diag(A)
```
An optional parameter *k* to the *diag* function allows us to extract the diagonal above the main diagonal with a positive *k*, and below the main diagonal with a negative *k*:
###### [Go to top](#top)
```
np.diag(A, k=1)
np.diag(A, k=-1)
```
<a id="10"></a> <br>
## 7- Transpose of a Matrix
**Transposing** a matrix is achieved by *flipping* the rows and columns. For a matrix $A \in \mathbb{R}^{m\times n}$ the transpose $A^T \in \mathbb{R}^{n\times m}$ is the $n\times m$ matrix given by:
$(A^T)_{ij} = A_{ji}$
Properties of transposes:
- $(A^T)^T = A$
- $(AB)^T = B^T A^T$
- $(A+B)^T = A^T + B^T$
```
a = np.array([[1, 2], [3, 4]])
a
a.transpose()
```
<a id="11"></a> <br>
## 8- Symmetric Matrices
Square matrices $A \in \mathbb{R}^{n\times n}$ are **symmetric** if $A = A^T$.
$A$ is **anti-symmetric**, if $A = -A^T$.
For any matrix $A \in \mathbb{R}^{n\times n}$, the matrix $A + A^T$ is **symmetric**.
For any matrix $A \in \mathbb{R}^{n\times n}$, the matrix $A - A^T$ is **anti-symmetric**.
Thus, any square matrix $A \in \mathbb{R}^{n\times n}$ can be represented as a sum of a symmetric matrix and an anti-symmetric matrix:
$A = \frac{1}{2} (A + A^T) + \frac{1}{2} (A - A^T)$
The first matrix on the right, i.e. $\frac{1}{2} (A + A^T)$ is symmetric. The second matrix $\frac{1}{2} (A - A^T)$ is anti-symmetric.
$\mathbb{S}^n$ is the set of all symmetric matrices of size $n$.
$A \in \mathbb{S}^n$ means that $A$ is symmetric and of the size $n\times n$.
```
def symmetrize(a):
return a + a.T - np.diag(a.diagonal())
a = np.array([[1, 2], [3, 4]])
print(symmetrize(a))
```
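We can also verify the symmetric/anti-symmetric decomposition above numerically:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

S = 0.5 * (A + A.T)   # symmetric part
K = 0.5 * (A - A.T)   # anti-symmetric part

# The two parts sum back to A, and each has the claimed symmetry
print(S + K)
```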
<a id="12"></a> <br>
## 9-The Trace
The **trace** of a square matrix $A \in \mathbb{R}^{n\times n}$, written $\mathrm{tr}(A)$ (or $\mathrm{tr}A$), is the sum of the diagonal elements of the matrix:
$trA = \sum_{i=1}^n A_{ii}$
Properties of the **trace**:
- For $A \in \mathbb{R}^{n\times n}$, $\mathrm{tr}A = \mathrm{tr}A^T$
- For $A,B \in \mathbb{R}^{n\times n}$, $\mathrm{tr}(A + B) = \mathrm{tr}A + \mathrm{tr}B$
- For $A \in \mathbb{R}^{n\times n}$, $t \in \mathbb{R}$, $\mathrm{tr}(tA) = t \mathrm{tr}A$
- For $A,B$ such that $AB$ is square, $\mathrm{tr}AB = \mathrm{tr}BA$
- For $A,B,C$ such that $ABC$ is square, $\mathrm{tr}ABC = \mathrm{tr}BCA = \mathrm{tr}CAB$, and so on for the product of more matrices.
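A quick numerical check of two of these properties:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# tr(AB) = tr(BA), even though AB != BA in general
print(np.trace(A @ B), np.trace(B @ A))

# tr(A + B) = tr(A) + tr(B)
print(np.trace(A + B), np.trace(A) + np.trace(B))
```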
###### [Go to top](#top)
```
a = np.array([[1, 2], [3, 4]])
# the trace is the sum of the diagonal entries: 1 + 4 = 5
np.trace(a)
```
<a id="13"></a> <br>
# 10- Norms
A **norm** is a function that assigns a strictly positive length or size to each vector in a vector space, except for the zero vector, which is assigned a length of zero. A **seminorm**, on the other hand, is allowed to assign zero length to some non-zero vectors as well.
<a id="top"></a> <br>
The **norm** of a vector $x$ is $\| x\|$, informally the length of a vector.
Example: the Euclidean or $\mathscr{l}_2$ norm:
$\|x\|_2 = \sqrt{\sum_{i=1}^n{x_i^2}}$
Note: $\|x\|_2^2 = x^T x$
A **norm** is any function $f : \mathbb{R}^n \rightarrow \mathbb{R}$ that satisfies the following properties:
- For all $x \in \mathbb{R}^n$, $f(x) \geq 0$ (non-negativity)
- $f(x) = 0$ if and only if $x = 0$ (definiteness)
- For all $x \in \mathbb{R}^n$, $t \in \mathbb{R}$, $f(tx) = |t|\ f(x)$ (homogeneity)
- For all $x, y \in \mathbb{R}^n$, $f(x + y) \leq f(x) + f(y)$ (triangle inequality)
Norm $\mathscr{l}_1$:
$\|x\|_1 = \sum_{i=1}^n{|x_i|}$
How do we calculate a norm in Python? With *numpy* **it is easy**:
###### [Go to top](#top)
```
v = np.array([1, 2, 3, 4])
np.linalg.norm(v)     # Euclidean (l2) norm: sqrt(1 + 4 + 9 + 16)
np.linalg.norm(v, 1)  # l1 norm: 1 + 2 + 3 + 4 = 10
```
<a id="14"></a> <br>
# 11- Linear Independence and Rank
A set of vectors $\{x_1, x_2, \dots{}, x_n\} \subset \mathbb{R}^m$ is said to be **(linearly) independent** if no vector can be represented as a linear combination of the remaining vectors.
A set of vectors $\{x_1, x_2, \dots{}, x_n\} \subset \mathbb{R}^m$ is said to be **(linearly) dependent** if one vector from this set can be represented as a linear combination of the remaining vectors.
That is, the vectors $x_1, \dots{}, x_n$ are linearly dependent if for some scalar values $\alpha_1, \dots{}, \alpha_{n-1} \in \mathbb{R}$:
$\begin{equation}
x_n = \sum_{i=1}^{n-1}{\alpha_i x_i}
\end{equation}$
Example: The following vectors are linearly dependent, because $x_3 = -2 x_1 + x_2$
$x_1 = \begin{bmatrix}
1 \\[0.3em]
2 \\[0.3em]
3
\end{bmatrix}
\quad
x_2 = \begin{bmatrix}
4 \\[0.3em]
1 \\[0.3em]
5
\end{bmatrix}
\quad
x_3 = \begin{bmatrix}
2 \\[0.3em]
-3 \\[0.3em]
-1
\end{bmatrix}
$
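We can check the dependence relation numerically:

```python
import numpy as np

x1 = np.array([1, 2, 3])
x2 = np.array([4, 1, 5])

# x3 is a linear combination of x1 and x2, so {x1, x2, x3} is linearly dependent
x3 = -2 * x1 + x2
print(x3)
```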
```
#How to find linearly independent rows from a matrix
matrix = np.array(
[
[0, 1 ,0 ,0],
[0, 0, 1, 0],
[0, 1, 1, 0],
[1, 0, 0, 1]
])
lambdas, V = np.linalg.eig(matrix.T)
# The linearly dependent row vectors correspond to (numerically) zero eigenvalues
print(matrix[np.isclose(lambdas, 0), :])
```
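The dependence relation in the example above can also be checked numerically; a quick sketch verifying $x_3 = -2 x_1 + x_2$:

```
import numpy as np

x1 = np.array([1, 2, 3])
x2 = np.array([4, 1, 5])
x3 = np.array([2, -3, -1])

# x3 is a linear combination of x1 and x2, so the set is linearly dependent
print(np.allclose(-2 * x1 + x2, x3))  # True
```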
<a id="15"></a> <br>
## 11-1 Column Rank of a Matrix
The **column rank** of a matrix $A \in \mathbb{R}^{m\times n}$ is the size of the largest subset of columns of $A$ that constitutes a linearly independent set. Informally, this is the number of linearly independent columns of $A$.
###### [Go to top](#top)
```
A = np.matrix([[1,3,7],[2,8,3],[7,8,1]])
np.linalg.matrix_rank(A)
from numpy.linalg import matrix_rank
matrix_rank(np.eye(4)) # Full rank matrix
I=np.eye(4); I[-1,-1] = 0. # rank deficient matrix
matrix_rank(I)
matrix_rank(np.ones((4,))) # 1 dimension - rank 1 unless all 0
matrix_rank(np.zeros((4,)))
```
<a id="16"></a> <br>
## 11-2 Row Rank of a Matrix
The **row rank** of a matrix $A \in \mathbb{R}^{m\times n}$ is the largest number of rows of $A$ that constitute a linearly independent set.
<a id="17"></a> <br>
## 11-3 Rank of a Matrix
For any matrix $A \in \mathbb{R}^{m\times n}$, the column rank of $A$ is equal to the row rank of $A$. Both quantities are referred to collectively as the rank of $A$, denoted as $rank(A)$. Here are some basic properties of the rank:
###### [Go to top](#top)
- For $A \in \mathbb{R}^{m\times n}$, $rank(A) \leq \min(m, n)$. If $rank(A) = \min(m, n)$, then $A$ is said to be
**full rank**.
- For $A \in \mathbb{R}^{m\times n}$, $rank(A) = rank(A^T)$
- For $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{n\times p}$, $rank(AB) \leq \min(rank(A), rank(B))$
- For $A,B \in \mathbb{R}^{m\times n}$, $rank(A + B) \leq rank(A) + rank(B)$
```
from numpy.linalg import matrix_rank
print(matrix_rank(np.eye(4))) # Full rank matrix
I=np.eye(4); I[-1,-1] = 0. # rank deficient matrix
print(matrix_rank(I))
print(matrix_rank(np.ones((4,)))) # 1 dimension - rank 1 unless all 0
print (matrix_rank(np.zeros((4,))))
```
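Two of the properties above are easy to check directly; a small sketch with arbitrary example matrices:

```
import numpy as np
from numpy.linalg import matrix_rank

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])   # rank 2
B = np.array([[1., 0.],
              [0., 0.],
              [0., 0.]])       # rank 1

print(matrix_rank(A) == matrix_rank(A.T))                          # True
print(matrix_rank(A @ B) <= min(matrix_rank(A), matrix_rank(B)))   # True
```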
<a id="18"></a> <br>
# 12- Subtraction and Addition of Matrices
Assume $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{m\times n}$, that is, $A$ and $B$ are of the same size. To add $A$ to $B$, or to subtract $B$ from $A$, we add or subtract corresponding entries:
$A + B =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\[0.3em]
a_{21} & a_{22} & \cdots & a_{2n} \\[0.3em]
\vdots & \vdots & \ddots & \vdots \\[0.3em]
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix} +
\begin{bmatrix}
b_{11} & b_{12} & \cdots & b_{1n} \\[0.3em]
b_{21} & b_{22} & \cdots & b_{2n} \\[0.3em]
\vdots & \vdots & \ddots & \vdots \\[0.3em]
b_{m1} & b_{m2} & \cdots & b_{mn}
\end{bmatrix} =
\begin{bmatrix}
a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n} \\[0.3em]
a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2n} + b_{2n} \\[0.3em]
\vdots & \vdots & \ddots & \vdots \\[0.3em]
a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn}
\end{bmatrix}
$
The same applies to subtraction:
$A - B =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\[0.3em]
a_{21} & a_{22} & \cdots & a_{2n} \\[0.3em]
\vdots & \vdots & \ddots & \vdots \\[0.3em]
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix} -
\begin{bmatrix}
b_{11} & b_{12} & \cdots & b_{1n} \\[0.3em]
b_{21} & b_{22} & \cdots & b_{2n} \\[0.3em]
\vdots & \vdots & \ddots & \vdots \\[0.3em]
b_{m1} & b_{m2} & \cdots & b_{mn}
\end{bmatrix} =
\begin{bmatrix}
a_{11} - b_{11} & a_{12} - b_{12} & \cdots & a_{1n} - b_{1n} \\[0.3em]
a_{21} - b_{21} & a_{22} - b_{22} & \cdots & a_{2n} - b_{2n} \\[0.3em]
\vdots & \vdots & \ddots & \vdots \\[0.3em]
a_{m1} - b_{m1} & a_{m2} - b_{m2} & \cdots & a_{mn} - b_{mn}
\end{bmatrix}
$
In Python using *numpy* this can be achieved using the following code:
```
import numpy as np
print("np.arange(9):", np.arange(9))
print("np.arange(9, 18):", np.arange(9, 18))
A = np.arange(9, 18).reshape((3, 3))
B = np.arange(9).reshape((3, 3))
print("A:", A)
print("B:", B)
```
The *numpy* function *arange* is similar to the standard Python function *range*. Called with a single parameter $n$, it returns an array with the elements $0$ to $n-1$. If we provide two parameters to *arange*, it generates an array starting from the value of the first parameter and ending with a value one less than the second parameter. The function *reshape* returns a matrix with the corresponding number of rows and columns.
We can now add and subtract the two matrices $A$ and $B$:
```
A + B
A - B
```
<a id="19"></a> <br>
## 12-1 Inverse
The **inverse** of a square matrix $A \in \mathbb{R}^{n\times n}$ is $A^{-1}$:
$A^{-1} A = I = A A^{-1}$
Not all matrices have inverses. Non-square matrices do not have inverses by definition. For some square matrices $A$ the inverse might not exist.
$A$ is **invertible** or **non-singular** if $A^{-1}$ exists.
$A$ is **non-invertible** or **singular** if $A^{-1}$ does not exist.
<font color='red'>Note: **non-singular** means the opposite of **non-invertible**!</font>
For $A$ to have an inverse $A^{-1}$, $A$ must be **full rank**.
Assuming that $A,B \in \mathbb{R}^{n\times n}$ are non-singular, then:
- $(A^{-1})^{-1} = A$
- $(AB)^{-1} = B^{-1} A^{-1}$
- $(A^{-1})^T = (A^T)^{-1}$ (often simply $A^{-T}$)
###### [Go to top](#top)
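These identities are easy to verify numerically; a small sketch with an arbitrary pair of non-singular matrices:

```
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
B = np.array([[1., 2.],
              [0., 1.]])

# (AB)^{-1} = B^{-1} A^{-1}
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))  # True

# (A^{-1})^T = (A^T)^{-1}
print(np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T)))  # True
```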
<a id="20"></a> <br>
## 13- Orthogonal Matrices
Two vectors $x, y \in \mathbb{R}^n$ are **orthogonal** if $x^T y = 0$.
A vector $x \in \mathbb{R}^n$ is **normalized** if $\|x\|_2 = 1$.
A square matrix $U \in \mathbb{R}^{n\times n}$ is **orthogonal** if all its columns are orthogonal to each other and are **normalized**. The columns are then referred to as being **orthonormal**.
It follows immediately from the definition of orthogonality and normality that:
$U^T U = I = U U^T$
This means that the inverse of an orthogonal matrix is its transpose.
If U is not square - i.e., $U \in \mathbb{R}^{m\times n}$, $n < m$ - but its columns are still orthonormal, then $U^T U = I$, but $U U^T \neq I$.
We generally only use the term orthogonal to describe the case, where $U$ is square.
Another nice property of orthogonal matrices is that operating on a vector with an orthogonal matrix will not change its Euclidean norm: for any $x \in \mathbb{R}^n$ and orthogonal $U \in \mathbb{R}^{n\times n}$,
$\|Ux\|_2 = \|x\|_2$
```
#How to create random orthonormal matrix in python numpy
def rvs(dim=3):
    random_state = np.random
    H = np.eye(dim)
    D = np.ones((dim,))
    for n in range(1, dim):
        x = random_state.normal(size=(dim-n+1,))
        D[n-1] = np.sign(x[0])
        x[0] -= D[n-1]*np.sqrt((x*x).sum())
        # Householder transformation
        Hx = (np.eye(dim-n+1) - 2.*np.outer(x, x)/(x*x).sum())
        mat = np.eye(dim)
        mat[n-1:, n-1:] = Hx
        H = np.dot(H, mat)
    # Fix the last sign such that the determinant is 1
    D[-1] = (-1)**(1-(dim % 2))*D.prod()
    # Equivalent to np.dot(np.diag(D), H) but faster, apparently
    H = (D*H.T).T
    return H
```
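Independently of the generator above, a quick way to obtain an orthogonal matrix is the QR decomposition of a random matrix, which also lets us check the properties just discussed:

```
import numpy as np

rng = np.random.default_rng(0)
# QR decomposition of a random square matrix yields an orthogonal Q
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))

print(np.allclose(Q.T @ Q, np.eye(4)))  # True: columns are orthonormal

x = rng.normal(size=4)
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # True: norm preserved
```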
<a id="21"></a> <br>
## 14- Range and Nullspace of a Matrix
The **span** of a set of vectors $\{ x_1, x_2, \dots{}, x_n\}$ is the set of all vectors that can be expressed as
a linear combination of $\{ x_1, \dots{}, x_n \}$:
$\mathrm{span}(\{ x_1, \dots{}, x_n \}) = \{ v : v = \sum_{i=1}^n \alpha_i x_i, \alpha_i \in \mathbb{R} \}$
It can be shown that if $\{ x_1, \dots{}, x_n \}$ is a set of n linearly independent vectors, where each $x_i \in \mathbb{R}^n$, then $\mathrm{span}(\{ x_1, \dots{}, x_n\}) = \mathbb{R}^n$. That is, any vector $v \in \mathbb{R}^n$ can be written as a linear combination of $x_1$ through $x_n$.
The projection of a vector $y \in \mathbb{R}^m$ onto the span of $\{ x_1, \dots{}, x_n\}$ (here we assume $x_i \in \mathbb{R}^m$) is the vector $v \in \mathrm{span}(\{ x_1, \dots{}, x_n \})$, such that $v$ is as close as possible to $y$, as measured by the Euclidean norm $\|v − y\|^2$. We denote the projection as $\mathrm{Proj}(y; \{ x_1, \dots{}, x_n \})$ and can define it formally as:
$\mathrm{Proj}( y; \{ x_1, \dots{}, x_n \}) = \mathrm{argmin}_{v\in \mathrm{span}(\{x_1,\dots{},x_n\})}\|y − v\|^2$
The **range** (sometimes also called the columnspace) of a matrix $A \in \mathbb{R}^{m\times n}$, denoted $\mathcal{R}(A)$, is the span of the columns of $A$. In other words,
$\mathcal{R}(A) = \{ v \in \mathbb{R}^m : v = A x, x \in \mathbb{R}^n\}$
Making a few technical assumptions (namely that $A$ is full rank and that $n < m$), the projection of a vector $y \in \mathbb{R}^m$ onto the range of $A$ is given by:
$\mathrm{Proj}(y; A) = \mathrm{argmin}_{v\in \mathcal{R}(A)}\|v − y\|^2 = A(A^T A)^{−1} A^T y$
<font color="red">See the notes, page 13, for more details.</font>
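The projection formula can be evaluated directly in NumPy; a minimal sketch with an arbitrary full-rank matrix and target vector (the residual $y - \mathrm{Proj}(y; A)$ must be orthogonal to the range of $A$):

```
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])   # full rank, n < m
y = np.array([1., 2., 3.])

proj = A @ np.linalg.inv(A.T @ A) @ A.T @ y

# The residual is orthogonal to every column of A
print(np.allclose(A.T @ (y - proj), 0))  # True
```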
The **nullspace** of a matrix $A \in \mathbb{R}^{m\times n}$, denoted $\mathcal{N}(A)$ is the set of all vectors that equal $0$ when multiplied by $A$, i.e.,
$\mathcal{N}(A) = \{ x \in \mathbb{R}^n : A x = 0 \}$
Note that vectors in $\mathcal{R}(A)$ are of size $m$, while vectors in the $\mathcal{N}(A)$ are of size $n$, so vectors in $\mathcal{R}(A^T)$ and $\mathcal{N}(A)$ are both in $\mathbb{R}^n$. In fact, we can say much more. It turns out that:
$\{ w : w = u + v, u \in \mathcal{R}(A^T), v \in \mathcal{N}(A) \} = \mathbb{R}^n$ and $\mathcal{R}(A^T) \cap \mathcal{N}(A) = \{0\}$
In other words, $\mathcal{R}(A^T)$ and $\mathcal{N}(A)$ are disjoint subsets that together span the entire space of
$\mathbb{R}^n$. Sets of this type are called **orthogonal complements**, and we denote this $\mathcal{R}(A^T) = \mathcal{N}(A)^\perp$.
###### [Go to top](#top)
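In practice the nullspace is usually computed from the SVD: the right singular vectors whose singular values are (numerically) zero span $\mathcal{N}(A)$. A minimal sketch with an arbitrary rank-1 example:

```
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])   # rank 1, so the nullspace has dimension 2

_, s, Vt = np.linalg.svd(A)
# Pad the singular values with zeros for the missing dimensions, then threshold
tol = max(A.shape) * np.finfo(float).eps * s.max()
null_mask = np.concatenate([s, np.zeros(A.shape[1] - len(s))]) <= tol
null_space = Vt[null_mask].T           # columns span N(A)

print(null_space.shape)                # (3, 2)
print(np.allclose(A @ null_space, 0))  # True
```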
<a id="22"></a> <br>
# 15- Determinant
The determinant of a square matrix $A \in \mathbb{R}^{n\times n}$, is a function $\mathrm{det} : \mathbb{R}^{n\times n} \rightarrow \mathbb{R}$, and is denoted $|A|$ or $\mathrm{det}A$ (like the trace operator, we usually omit parentheses).
<a id="23"></a> <br>
## 15-1 A geometric interpretation of the determinant
Given a matrix $A \in \mathbb{R}^{n\times n}$ written in terms of its rows,
$\begin{bmatrix}
-- & a_1^T & -- \\[0.3em]
-- & a_2^T & -- \\[0.3em]
& \vdots & \\[0.3em]
-- & a_n^T & --
\end{bmatrix}$
consider the set of points $S \subset \mathbb{R}^n$ formed by taking all possible linear combinations of the row vectors $a_1, \dots{}, a_n \in \mathbb{R}^n$ of $A$, where the coefficients of the linear combination are all
between $0$ and $1$; that is, the set $S$ is the restriction of $\mathrm{span}( \{ a_1, \dots{}, a_n \})$ to only those linear combinations whose coefficients $\alpha_1, \dots{}, \alpha_n$ satisfy $0 \leq \alpha_i \leq 1$, $i = 1, \dots{}, n$. Formally:
$S = \{v \in \mathbb{R}^n : v = \sum_{i=1}^n \alpha_i a_i \mbox{ where } 0 \leq \alpha_i \leq 1, i = 1, \dots{}, n \}$
The absolute value of the determinant of $A$, it turns out, is a measure of the *volume* of the set $S$. Here *volume* is, intuitively, the area of $S$ in the Cartesian plane for $n = 2$, or the common understanding of *volume* for 3-dimensional objects when $n = 3$.
Example: consider the $2\times 2$ matrix
$A = \begin{bmatrix}
1 & 3 \\[0.3em]
3 & 2
\end{bmatrix}$
The rows of the matrix are:
$a_1 = \begin{bmatrix}
1 \\[0.3em]
3
\end{bmatrix}
\quad
a_2 = \begin{bmatrix}
3 \\[0.3em]
2
\end{bmatrix}$
Here, $a_1$ and $a_2$ are the vectors corresponding to the rows of $A$, and the set $S$ corresponds to the parallelogram they span. The absolute value of the determinant, $|\mathrm{det}A| = 7$, is the area of that parallelogram.
For two-dimensional matrices, $S$ generally has the shape of a parallelogram. In our example, the value of the determinant is $|A| = -7$ (as can be computed using the formulas shown later), so the area of the parallelogram is $7$.
In three dimensions, the set $S$ corresponds to an object known as a parallelepiped (a three-dimensional box with skewed sides, such that every face has the shape of a parallelogram). The absolute value of the determinant of the $3 \times 3$ matrix whose rows define $S$ give the three-dimensional volume of the parallelepiped. In even higher dimensions, the set $S$ is an object known as an $n$-dimensional parallelotope.
Algebraically, the determinant satisfies the following three properties (from which all other properties follow, including the general formula):
- The determinant of the identity is $1$, $|I| = 1$. (Geometrically, the volume of a unit hypercube is $1$).
- Given a matrix $A \in \mathbb{R}^{n\times n}$, if we multiply a single row in $A$ by a scalar $t \in \mathbb{R}$, then the determinant of the new matrix is $t|A|$,<br/>
$\left| \begin{bmatrix}
-- & t a_1^T & -- \\[0.3em]
-- & a_2^T & -- \\[0.3em]
& \vdots & \\[0.3em]
-- & a_n^T & --
\end{bmatrix}\right| = t|A|$<br/>
(Geometrically, multiplying one of the sides of the set $S$ by a factor $t$ causes the volume
to increase by a factor $t$.)
- If we exchange any two rows $a^T_i$ and $a^T_j$ of $A$, then the determinant of the new matrix is $−|A|$, for example<br/>
$\left| \begin{bmatrix}
-- & a_2^T & -- \\[0.3em]
-- & a_1^T & -- \\[0.3em]
& \vdots & \\[0.3em]
-- & a_n^T & --
\end{bmatrix}\right| = -|A|$
Several properties that follow from the three properties above include:
- For $A \in \mathbb{R}^{n\times n}$, $|A| = |A^T|$
- For $A,B \in \mathbb{R}^{n\times n}$, $|AB| = |A||B|$
- For $A \in \mathbb{R}^{n\times n}$, $|A| = 0$ if and only if $A$ is singular (i.e., non-invertible). (If $A$ is singular then it does not have full rank, and hence its columns are linearly dependent. In this case, the set $S$ corresponds to a "flat sheet" within the $n$-dimensional space and hence has zero volume.)
- For $A \in \mathbb{R}^{n\times n}$ and $A$ non-singular, $|A^{-1}| = 1/|A|$
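These determinant properties can be confirmed with NumPy; a small sketch using the $2\times 2$ matrix with rows $(1, 3)$ and $(3, 2)$, whose determinant is $-7$, and an arbitrary second matrix:

```
import numpy as np

A = np.array([[1., 3.],
              [3., 2.]])
B = np.array([[2., 0.],
              [1., 1.]])

print(np.isclose(np.linalg.det(A), -7.0))                                      # True
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))                        # |A| = |A^T|
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))   # |AB| = |A||B|
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))       # |A^{-1}| = 1/|A|
```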
###### [Go to top](#top)
<a id="24"></a> <br>
# 16- Tensors
A [**tensor**](https://en.wikipedia.org/wiki/Tensor) can be thought of as an organized multidimensional array of numerical values, and a vector can be regarded as a special case of a tensor. Rows of tensors extend along the y-axis, columns along the x-axis. The rank of a **scalar** is 0, the rank of a **vector** is 1, the rank of a **matrix** is 2, and the rank of a **tensor** is 3 or higher.
###### [Go to top](#top)
```
import tensorflow as tf  # this example uses the TensorFlow 1.x session API

A = tf.Variable(np.zeros((5, 5), dtype=np.float32), trainable=False)
new_part = tf.ones((2,3))
update_A = A[2:4,2:5].assign(new_part)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
print(update_A.eval())
```
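The same notion of rank (the number of axes) can be seen in plain NumPy through the `ndim` attribute:

```
import numpy as np

scalar = np.array(5)
vector = np.array([1, 2, 3])
matrix = np.eye(3)
tensor = np.zeros((2, 3, 4))

print(scalar.ndim, vector.ndim, matrix.ndim, tensor.ndim)  # 0 1 2 3
```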
<a id="25"></a> <br>
# 17- Hyperplane
The **hyperplane** is a sub-space in the ambient space with one dimension less. In a two-dimensional space the hyperplane is a line, in a three-dimensional space it is a two-dimensional plane, etc.
Hyperplanes divide an $n$-dimensional space into sub-spaces that might represent classes in a machine learning algorithm.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm

np.random.seed(0)
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
Y = [0] * 20 + [1] * 20
fig, ax = plt.subplots()
clf2 = svm.LinearSVC(C=1).fit(X, Y)
# get the separating hyperplane
w = clf2.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - (clf2.intercept_[0]) / w[1]
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx2, yy2 = np.meshgrid(np.arange(x_min, x_max, .2),
np.arange(y_min, y_max, .2))
Z = clf2.predict(np.c_[xx2.ravel(), yy2.ravel()])
Z = Z.reshape(xx2.shape)
ax.contourf(xx2, yy2, Z, cmap=plt.cm.coolwarm, alpha=0.3)
ax.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.coolwarm, s=25)
ax.plot(xx,yy)
ax.axis([x_min, x_max,y_min, y_max])
plt.show()
```
<a id="31"></a> <br>
## 20- Exercises
Let's do some exercises.
```
# Students may (probably should) ignore this code. It is just here to make pretty arrows.
def plot_vectors(vs):
    """Plot vectors in vs assuming origin at (0,0)."""
    n = len(vs)
    X, Y = np.zeros((2, n))
    U, V = np.vstack(vs).T
    plt.quiver(X, Y, U, V, range(n), angles='xy', scale_units='xy', scale=1)
    xmin, xmax = np.min([U, X]), np.max([U, X])
    ymin, ymax = np.min([V, Y]), np.max([V, Y])
    xrng = xmax - xmin
    yrng = ymax - ymin
    xmin -= 0.05*xrng
    xmax += 0.05*xrng
    ymin -= 0.05*yrng
    ymax += 0.05*yrng
    plt.axis([xmin, xmax, ymin, ymax])
# Again, this code is not intended as a coding example.
a1 = np.array([3,0]) # axis
a2 = np.array([0,3])
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plot_vectors([a1, a2])
v1 = np.array([2,3])
plot_vectors([a1,v1])
plt.text(2,3,"(2,3)",fontsize=16)
plt.tight_layout()
#Matrices, Transformations and Geometric Interpretation
a1 = np.array([7,0]) # axis
a2 = np.array([0,5])
A = np.array([[2,1],[1,1]]) # transformation f in standard basis
v2 =np.dot(A,v1)
plt.figure(figsize=(8,8))
plot_vectors([a1, a2])
v1 = np.array([2,3])
plot_vectors([v1,v2])
plt.text(2,3,"v1 =(2,3)",fontsize=16)
plt.text(6,5,"Av1 = ", fontsize=16)
plt.text(v2[0],v2[1],"(7,5)",fontsize=16)
print(v2[1])
#Change to a Different Basis
e1 = np.array([1,0])
e2 = np.array([0,1])
B = np.array([[1,4],[3,1]])
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plot_vectors([e1, e2])
plt.subplot(1,2,2)
plot_vectors([B.dot(e1), B.dot(e2)])
plt.Circle((0,0),2)
#plt.show()
#plt.tight_layout()
#Inner Products
e1 = np.array([1,0])
e2 = np.array([0,1])
A = np.array([[2,3],[3,1]])
v1=A.dot(e1)
v2=A.dot(e2)
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plot_vectors([e1, e2])
plt.subplot(1,2,2)
plot_vectors([v1,v2])
plt.tight_layout()
#help(plt.Circle)
plt.Circle(np.array([0,0]),radius=1)
plt.Circle.draw
# using sqrt() to print the element-wise square root of a matrix
x = np.array([[1, 4], [9, 16]])  # example matrix (x was not defined in the original)
print ("The element wise square root is : ")
print (np.sqrt(x))
```
<a id="32"></a> <br>
# 21-Conclusion
If you have made it this far, give yourself a pat on the back. We have covered different aspects of **linear algebra** in this kernel. You have now finished the **third step** of the course; to continue, return to the [**main page**](https://www.kaggle.com/mjbahmani/10-steps-to-become-a-data-scientist/) of the course.
###### [Go to top](#top)
You can follow me on:
> ###### [ GitHub](https://github.com/mjbahmani/)
> ###### [Kaggle](https://www.kaggle.com/mjbahmani/)
<b>I hope you find this kernel helpful and some <font color='red'>UPVOTES</font> would be very much appreciated.</b>
<a id="33"></a> <br>
# 22-References
1. [Linear Algebra 1](https://github.com/dcavar/python-tutorial-for-ipython)
1. [Linear Algebra 2](https://www.oreilly.com/library/view/data-science-from/9781491901410/ch04.html)
1. [GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist)
>###### If you have read the notebook, you can follow next steps: [**Course Home Page**](https://www.kaggle.com/mjbahmani/10-steps-to-become-a-data-scientist)
---
# Scripting Languages - Python
## Course 8.1 - Exercises
### M2 Ingénierie Multilingue - INaLCO
---
- Loïc Grobol <loic.grobol@gmail.com>
- Yoann Dupont <yoa.dupont@gmail.com>
## A bit of class
**Note**: if you modify cells for one question, make sure that does not break your solutions to the previous questions; feel free to re-run the whole notebook to check.
Here is a basic `Word` class that represents a word (without many features).
```
class Word:
    """A word in a text"""

    def __init__(self, form):
        self.form = form

    def to_string(self):
        """Return a string representation

        >>> w = Word("spam")
        >>> w.to_string()
        'spam'
        """
        return self.form

    def is_same(self, other):
        """Check whether `self` and `other` (another word) have the same form"""
        return self.form == other.form

w = Word("tofu")
print(w.to_string())
x = Word("tofu")
print(w is x)  # It is not the same object
print(w.is_same(x))
```
1. Your turn: write a `Sentence` class that is initialized from a list of `Word`s and that also has a `to_string` method returning a suitable string representation, plus an `is_same` method to check that it has the same content as another sentence.
```
class Sentence:
    """A sentence"""

    def to_string(self):
        """Return a string representation

        >>> s = Sentence([Word(token) for token in "we are the knights who say Ni !".split()])
        >>> s.to_string()
        'we are the knights who say Ni !'
        """
        # Your turn to write it
        pass

    def is_same(self, other):
        pass

s = Sentence([Word(token) for token in "we are the knights who say Ni !".split()])
print(s.to_string())
t = Sentence([Word(token) for token in ["we", "are", "the", "knights", "who", "say", "Ni", "!"]])
print(t is s)
print(t.is_same(s))
```
The code snippets in the docstrings are [doctests](https://docs.python.org/3/library/doctest.html), a way to combine documentation and tests in the same place. You do not find them much in the wild nowadays, but they were quite handy and elegant in a faraway, more civilized age.
3. Create a `Span` class that represents spans of text, and modify the `Sentence` class so that the example below works
```
class Span:
    """A span of text"""
    pass

sp = Span([Word("knights"), Word("who"), Word("say")])
assert sp.to_string() == "knights who say"
s = Sentence([Word(token) for token in "we are the knights who say Ni !".split()])
sp.is_same(s.slice(3, 6))
```
**Note** Do not Repeat Yourself is good advice
4. Now that you are seasoned pythonistas, we can write nicer code: transform the previous classes so that the following code works
```
assert str(w) == "tofu"
assert w == x
assert str(s) == "we are the knights who say Ni !"
assert s == t
sq = s[3:6]
assert sq == sp
```
5. One last subtlety: since it is tedious to build a `Sentence` by hand, make this cell work
```
u = Sentence.from_string("we are the knights who say Ni !")
assert u == s
```

---
# 06 - Reading and writing files
--------------
### Supported file systems
- Like Hadoop, Spark supports different filesystems: local, HDFS, Amazon S3
- In general, it supports any data source that can be read with Hadoop
- It can also access relational and NoSQL databases
    - MySQL, Postgres, etc. via JDBC
    - Apache Hive, HBase, Cassandra or Elasticsearch
### Supported file formats
- Spark can access different file types:
    - Plain text, CSV, sequence files, JSON, *protocol buffers* and *object files*
    - Compressed files are supported
- We will look at access to some of these types in this class, and leave others for later
```
!pip install pyspark
# Create apache spark context
from pyspark import SparkContext
sc = SparkContext(master="local", appName="Mi app")
# Stop apache spark context
sc.stop()
```
### Examples with text files
The `data/libros` directory contains a set of compressed text files.
```
# Input files
!ls data/libros
```
### Read and write functions for text files
- `sc.textFile(filename/directory)` creates an RDD from the lines of one or more text files
    - If a directory is specified, all of its files are read, creating one partition per file
    - The files may be compressed in different formats (gz, bz2, ...)
    - Wildcards may be used in the file names
- `sc.wholeTextFiles(filename/directory)` reads the files and returns a key/value RDD
    - key: full path to the file
    - value: the full text of the file
- `rdd.saveAsTextFile(output_directory)` stores the RDD in text format in the given directory
    - It creates one file per partition of the RDD
```
# Read every file in the directory
# and create an RDD with their lines
lineas = sc.textFile("data/libros")
# One partition is created per input file
print("Number of partitions of the lineas RDD = {0}".format(lineas.getNumPartitions()))
# Get the words using the split method (split uses a space as the default delimiter)
palabras = lineas.flatMap(lambda x: x.split())
print("Number of partitions of the palabras RDD = {0}".format(palabras.getNumPartitions()))
# Repartition the RDD into 4 partitions
palabras2 = palabras.coalesce(4)
print("Number of partitions of the palabras2 RDD = {0}".format(palabras2.getNumPartitions()))
# Take a random sample of words
print(palabras2.takeSample(False, 10))
# Read the files and return a key/value RDD
# key -> file name, value -> whole file content
prdd = sc.wholeTextFiles("data/libros/p*.gz")
print("Number of partitions of the prdd RDD = {0}\n".format(prdd.getNumPartitions()))
# Get a key/value list
# key -> file name, value -> number of words
lista = prdd.mapValues(lambda x: len(x.split())).collect()
for libro in lista:
    print("The file {0:14s} has {1:6d} words".format(libro[0].split("/")[-1], libro[1]))
```
## Sequence files
Key/value files used in Hadoop
- Their elements implement the [`Writable`](https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/io/Writable.html) interface
```
rdd = sc.parallelize([("a",2), ("b",5), ("a",8)], 2)
# Save the key/value RDD as a sequence file
rdd.saveAsSequenceFile("file:///tmp/sequenceoutdir2")
# Read it back into another RDD
rdd2 = sc.sequenceFile("file:///tmp/sequenceoutdir2",
                       "org.apache.hadoop.io.Text",
                       "org.apache.hadoop.io.IntWritable")
print("RDD contents: {0}".format(rdd2.collect()))
```
## Hadoop input/output formats
Spark can interact with any file format supported by Hadoop
- It supports both the "old" and the "new" Hadoop APIs
- It can access other (non-file) storage types, e.g. HBase or MongoDB, through `saveAsHadoopDataSet` and/or `saveAsNewAPIHadoopDataSet`
```
# Save the key/value RDD as a Hadoop text file (TextOutputFormat)
rdd.saveAsNewAPIHadoopFile("file:///tmp/hadoopfileoutdir",
                           "org.apache.hadoop.mapreduce.lib.output.TextOutputFormat",
                           "org.apache.hadoop.io.Text",
                           "org.apache.hadoop.io.IntWritable")
!echo 'Output directory'
!ls -l /tmp/hadoopfileoutdir
!cat /tmp/hadoopfileoutdir/part-r-00001
# Read it as a Hadoop key/value file (KeyValueTextInputFormat)
rdd3 = sc.newAPIHadoopFile("file:///tmp/hadoopfileoutdir",
                           "org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat",
                           "org.apache.hadoop.io.Text",
                           "org.apache.hadoop.io.IntWritable")
print("RDD contents: {0}".format(rdd3.collect()))
```
<a href="https://colab.research.google.com/github/victorog17/soulcode_aulas_spark/blob/main/Soulcode_PySpark_002_select_filter.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**LIBRARIES**
```
pip install pyspark
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
```
**CONFIGURING OUR SPARKSESSION**
```
spark = (SparkSession.builder\
.master("local")\
.appName("aprendendo-dataframes")\
.config("spark.ui.port", "4050")\
.getOrCreate())
spark
```
**CREATING DATAFRAMES IN PYSPARK**
```
dados = [
("João da Silva", "São Paulo", "SP", 1100.00),
("Maria dos Santos", "São Paulo", "SP", 2100.00),
("Carlos Victor", "Rio de Janeiro", "RJ", 2100.00),
("Pedro José", "Maceió", "AL", 3600.00)
]
schema = ["nome", "cidade", "estado", "salario"]
df = spark.createDataFrame(data=dados, schema=schema)
df.show()
df.printSchema()
```
**IMPORTING A CSV IN PYSPARK**
```
df2 = (spark
.read
.format("csv")
.option("header", "true")
.option("inferschema", "true")
.option("delimiter", ";")
.load("/content/drive/MyDrive/Soul_Code_Academy/repositorio_pandas/arquivo_geral.csv")
)
df2.show(5)
df2.printSchema()
```
**SELECT AND FILTER COMMANDS**
```
#SELECT - 1st WAY TO USE IT
df2.select("regiao", "estado", "data", "casosNovos").show(15)
```
**USING FUNCTIONS TO WORK WITH COLUMNS**
```
#SELECT - 2nd WAY TO USE IT
df2.select(F.col("regiao"), F.col("estado"), F.col("data"), F.col("casosNovos")).show(20)
```
**1st WAY - SHOW REGION, STATES, AND NEW CASES FOR THE SOUTH REGION ONLY**
```
df2.select(F.col("regiao"), F.col("estado"), F.col("casosNovos")).filter(F.col("regiao") == "Sul").show(12)
print(f'Rows: {df2.count()} | Columns: {len(df2.columns)}')
```
**2nd WAY - SHOW REGION, STATES, AND NEW CASES FOR THE NORTHEAST REGION ONLY**
```
df2.select(F.col("regiao"),
F.col("estado"),
F.col("casosNovos")).filter(df2["regiao"] == "Nordeste").show(13)
```
**3rd WAY - SHOW REGION, STATES, AND NEW CASES FOR THE SOUTHEAST REGION ONLY**
```
df2.select(F.col("regiao"),
F.col("estado"),
F.col("casosNovos")).filter("regiao = 'Sudeste'").show(14)
```
**4th WAY - SHOW REGION, STATES, AND NEW CASES FOR THE SOUTH REGION ONLY**
```
filtro = F.col("regiao") == "Sul"
df2.select(F.col("regiao"), F.col("estado"), F.col("casosNovos")).filter(filtro).show(15)
```
**CREATING A DYNAMIC LIST OF COLUMNS TO USE INSIDE THE SELECT COMMAND**
```
lista_colunas = ["regiao", "estado", "casosNovos", "obitosNovos", "obitosAcumulados"]
df2.select(lista_colunas).show(16)
df2.printSchema()
```
**APPLYING MORE THAN ONE FILTER TO THE DATAFRAME**
```
#FILTER BY THE NORTH REGION AND THE STATE OF AMAZONAS
df2.select(F.col("regiao"),
F.col("estado")).filter(F.col("regiao") == "Norte").filter(F.col("estado") == "AM").show(17)
filtro01 = F.col("regiao") == "Norte"
filtro02 = F.col("estado") == "AM"
df2.select(F.col("regiao"),
F.col("estado")).filter(filtro01 & filtro02).show(17)
df2.filter(F.col("regiao") == "Norte").filter(F.col("estado") == "AM").show(18)
df2.filter(F.col("regiao") == "Nordeste").filter(F.col("estado") == "BA").show(df2.count())
```
**A SECOND WAY TO USE FILTER MORE THAN ONCE**
```
df2.filter("regiao = 'Norte' and estado = 'AM'").show(10)
df2.filter("regiao = 'Norte' or estado = 'AM'").show(10)
```
**A THIRD WAY TO USE FILTER MORE THAN ONCE**
```
df2.filter((F.col("regiao") == 'Norte') & (F.col("estado") == 'AM')).show(10)
df2.filter((F.col("regiao") == 'Norte') | (F.col("estado") == 'AM')).show(10)
```
**A FOURTH WAY TO USE FILTER MORE THAN ONCE**
```
df2.where((F.col("regiao") == "Sul") & (F.col("estado") == "RS")).show(10)
```
**USING LIKE TO QUERY SPECIFIC SUBSTRINGS WITHIN A COLUMN**
```
df2.where((F.col("regiao") == "Norte")).filter("estado like 'R%'").show(19)
df2.filter("regiao like 'S%'").show(10)
```
**ANOTHER WAY TO USE THE FILTER WITH MORE THAN ONE CONDITION**
```
df2.filter("regiao in ('Norte', 'Sul')").show(10)
```
**ANOTHER WAY TO USE THE FILTER WITH A LIST OF REGIONS**
```
lista_regiao = ['Norte', 'Sul']
df2.filter(F.col("regiao").isin(lista_regiao)).show(10)
```
**FILTERING IN A WAY SIMILAR TO LIKE, BUT WITH A DEDICATED FUNCTION**
```
df2.filter(F.col("regiao").startswith("Sul")).show(10)
```
**USING LIKE AS A FUNCTION**
```
df2.filter(F.col("regiao").like("C%")).show(100)
```
# **Tiki Web Scraping with Selenium**
Build a web crawler that takes in a Tiki URL and returns a dataframe
# Install resources
```
# install selenium and other resources for crawling data
!pip install selenium
!apt-get update
!apt install chromium-chromedriver
### IMPORTS ###
import re
import time
import pandas as pd
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
```
# Configuration for the Driver and links
```
###############
### GLOBALS ###
###############
# Header for chromedriver
HEADER = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36'}
# Urls
TIKI = 'https://tiki.vn'
MAIN_CATEGORIES = [
{'Name': 'Điện Thoại - Máy Tính Bảng',
'URL': 'https://tiki.vn/dien-thoai-may-tinh-bang/c1789?src=c.1789.hamburger_menu_fly_out_banner'},
{'Name': 'Điện Tử - Điện Lạnh',
'URL': 'https://tiki.vn/tivi-thiet-bi-nghe-nhin/c4221?src=c.4221.hamburger_menu_fly_out_banner'},
{'Name': 'Phụ Kiện - Thiết Bị Số',
'URL': 'https://tiki.vn/thiet-bi-kts-phu-kien-so/c1815?src=c.1815.hamburger_menu_fly_out_banner'},
{'Name': 'Laptop - Thiết bị IT',
'URL': 'https://tiki.vn/laptop-may-vi-tinh/c1846?src=c.1846.hamburger_menu_fly_out_banner'},
{'Name': 'Máy Ảnh - Quay Phim',
'URL': 'https://tiki.vn/may-anh/c1801?src=c.1801.hamburger_menu_fly_out_banner'},
{'Name': 'Điện Gia Dụng',
'URL': 'https://tiki.vn/dien-gia-dung/c1882?src=c.1882.hamburger_menu_fly_out_banner'},
{'Name': 'Nhà Cửa Đời Sống',
'URL': 'https://tiki.vn/nha-cua-doi-song/c1883?src=c.1883.hamburger_menu_fly_out_banner'},
{'Name': 'Hàng Tiêu Dùng - Thực Phẩm',
'URL': 'https://tiki.vn/bach-hoa-online/c4384?src=c.4384.hamburger_menu_fly_out_banner'},
{'Name': 'Đồ chơi, Mẹ & Bé',
'URL': 'https://tiki.vn/me-va-be/c2549?src=c.2549.hamburger_menu_fly_out_banner'},
{'Name': 'Làm Đẹp - Sức Khỏe',
'URL': 'https://tiki.vn/lam-dep-suc-khoe/c1520?src=c.1520.hamburger_menu_fly_out_banner'},
{'Name': 'Thể Thao - Dã Ngoại',
'URL': 'https://tiki.vn/the-thao/c1975?src=c.1975.hamburger_menu_fly_out_banner'},
{'Name': 'Xe Máy, Ô tô, Xe Đạp',
'URL': 'https://tiki.vn/o-to-xe-may-xe-dap/c8594?src=c.8594.hamburger_menu_fly_out_banner'},
{'Name': 'Hàng quốc tế',
'URL': 'https://tiki.vn/hang-quoc-te/c17166?src=c.17166.hamburger_menu_fly_out_banner'},
{'Name': 'Sách, VPP & Quà Tặng',
'URL': 'https://tiki.vn/nha-sach-tiki/c8322?src=c.8322.hamburger_menu_fly_out_banner'},
{'Name': 'Voucher - Dịch Vụ - Thẻ Cào',
'URL': 'https://tiki.vn/voucher-dich-vu/c11312?src=c.11312.hamburger_menu_fly_out_banner'}
]
# Global driver to use throughout the script
DRIVER = None
```
#Function to Start and Close Driver
```
# Function to (re)start driver
def start_driver(force_restart=False):
global DRIVER
if DRIVER is not None:
if force_restart:
DRIVER.close()
else:
raise RuntimeError('ERROR: cannot overwrite an active driver. Please close the driver before restarting.')
# Setting up the driver
    options = webdriver.ChromeOptions()
    options.add_argument('--headless')  # run in the background so no Chrome window opens
    options.add_argument('--no-sandbox')
    options.add_argument('--disable-dev-shm-usage')
    DRIVER = webdriver.Chrome('chromedriver', options=options)
# Wrapper to close driver if its created
def close_driver():
global DRIVER
if DRIVER is not None:
DRIVER.close()
DRIVER = None
```
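A pattern worth considering on top of `start_driver`/`close_driver`: wrapping the same start/close logic in a context manager guarantees the driver is closed even when scraping raises. This is only a sketch of the pattern, not part of the notebook's code; `_FakeDriver` is a stand-in for `webdriver.Chrome` so the shape can be shown without launching a browser.

```python
from contextlib import contextmanager

class _FakeDriver:
    """Stand-in for webdriver.Chrome so the pattern runs without a browser."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

@contextmanager
def managed_driver(make_driver=_FakeDriver):
    # create the driver on entry, always close it on exit
    driver = make_driver()
    try:
        yield driver
    finally:
        driver.close()  # runs even if the body raises

with managed_driver() as d:
    pass  # scraping would happen here
print(d.closed)  # -> True
```

With the real Selenium driver, `make_driver` would be a function that builds the configured `webdriver.Chrome` as above.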
#Function to get info from one product
```
# Function to extract product info from the product
def get_product_info_single(product_item):
d = {'name':'',
'price':'',
'product_url':'',
'image':''}
    # get the name via find_element_by_class_name
try:
name_sample = product_item.find_element_by_class_name('name').get_attribute('innerHTML')
pattern = r'<span>.+'
name = re.search(pattern, name_sample).group().strip("<///'span>").strip()
d['name'] = name
except NoSuchElementException:
pass
    # get the price via an XPath on the price-discount element
try:
d['price'] = product_item.find_element_by_xpath(".//div[contains(@class, 'price-discount__price')]").get_attribute('innerHTML')
except NoSuchElementException:
d['price'] = -1
# get link from .get_attribute()
try:
product_link = product_item.get_attribute('href')
d['product_url'] = product_link
except NoSuchElementException:
pass
    # get the thumbnail via XPath and .get_attribute()
try:
thumbnail = product_item.find_element_by_xpath(".//div[contains(@class, 'thumbnail')]//img[@alt]")
d['image'] = thumbnail.get_attribute('src')
except NoSuchElementException:
pass
return d
```
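The name extraction above chains `.strip("<///'span>")`, which strips a *set of characters* rather than the tag itself and is fragile if the markup changes. A hedged alternative (a sketch, not the notebook's code) is to remove all tags with one regex substitution:

```python
import re

def strip_tags(html):
    """Remove every HTML tag from a small fragment and trim whitespace.
    A more robust alternative to str.strip with a character set."""
    return re.sub(r'<[^>]+>', '', html).strip()

print(strip_tags('<span>Điện Thoại ABC</span>'))  # -> Điện Thoại ABC
```

For full HTML documents a real parser (e.g. BeautifulSoup) is safer, but for the single `<span>` fragments scraped here a regex is enough.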
#Function to scrape info of all products from a Page URL
```
# Function to scrape all products from a page
def get_product_info_from_page(page_url):
""" Extract info from all products of a specfic page_url on Tiki website
Args:
page_url: (string) url of the page to scrape
Returns:
data: list of dictionary of products info. If no products shown, return empty list.
"""
global DRIVER
data = []
    DRIVER.get(page_url)  # use the driver to load the product page
    time.sleep(5)  # required: wait for the page to finish rendering before scraping
products_all = DRIVER.find_elements_by_class_name('product-item')
print(f'Found {len(products_all)} products')
for product in products_all:
#Look through the product and get the data
data.append(get_product_info_single(product))
return data
######################
### START SCRAPING ###
######################
num_max_page = int(input('Enter the maximum number of pages to scrape: '))
def url_set():  # let the user pick a category and return its page URLs, up to the chosen page count
    global num_max_page
    print('-' * 20 + '\nCategory list:')
    for i in range(len(MAIN_CATEGORIES)):
        print(i+1, '-', MAIN_CATEGORIES[i]['Name'])  # print every category
    user_choice = int(input('-' * 20 + '\nEnter the category number: '))  # the user picks a category by its index
    while user_choice not in range(1, 16):
        print('Enter a category number: 1, 2, ..., 14, 15')
        user_choice = int(input('Enter the category number: '))
    main_cat_url = MAIN_CATEGORIES[user_choice - 1]['URL']  # main URL of the chosen category
    main_cat_name = MAIN_CATEGORIES[user_choice - 1]['Name']
    print()
    print('-' * 20)
    print(f'{user_choice} - {main_cat_name}')
    url_set = []
    for n in range(num_max_page):
        page_sub_text = r'\1page=' + str(n+1) + r'&\2'
        url_set.append(re.sub(r'(https:\/\/tiki.vn\/.*\?)(src.*)', page_sub_text, main_cat_url))  # insert 'page=<n>&' into the URL
    return url_set  # list of page URLs within the category
close_driver()
start_driver(force_restart=True)
#CODE TO GET DATA
prod_data = []
for page_url in url_set():
temp_data = get_product_info_from_page(page_url)
print(temp_data)
prod_data.extend(temp_data)
close_driver()
#SAVE DATA TO CSV FILE
df = pd.DataFrame(data=prod_data, columns=prod_data[0].keys())
df.to_csv('tiki_products.csv')
df.head()
df.info()
```
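The pagination trick inside `url_set` is worth seeing in isolation: `re.sub` captures everything up to the `?` and everything from `src` onward, then splices `page=<n>&` between them. A standalone sketch of that substitution on one of the category URLs from the list above:

```python
import re

main_cat_url = 'https://tiki.vn/laptop-may-vi-tinh/c1846?src=c.1846.hamburger_menu_fly_out_banner'
urls = []
for n in range(3):
    # \1 = everything through '?', \2 = the 'src=...' query string
    page_sub_text = r'\1page=' + str(n + 1) + r'&\2'
    urls.append(re.sub(r'(https:\/\/tiki.vn\/.*\?)(src.*)', page_sub_text, main_cat_url))

print(urls[0])
# -> https://tiki.vn/laptop-may-vi-tinh/c1846?page=1&src=c.1846.hamburger_menu_fly_out_banner
```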
# Exercises - Objects
## Using an object
Below is the definition of an object. Run the cell and create at least two instances of it.
```
class Car(object):
def __init__(self, make, model, year, mpg=25, tank_capacity=30.0, miles=0):
self.make = make
self.model = model
self.year = year
self.mpg = mpg
self.gallons_in_tank = tank_capacity # cars start with a full tank
self.tank_capacity = tank_capacity
self.miles = miles
def __str__(self):
return "{} {} ({}), {} miles and {} gallons in tank".format(self.make,
self.model,
self.year,
self.miles,
self.gallons_in_tank)
def drive(self, new_miles):
"""Drive the car X miles and return number of miles driven.
If there is not enough fuel, drive 0 miles."""
fuel_need = new_miles/self.mpg
if fuel_need <= self.gallons_in_tank:
self.miles = self.miles + new_miles
self.gallons_in_tank = self.gallons_in_tank - fuel_need
return new_miles
else:
raise ValueError("Would run out of gas!")
def fill_up(self):
self.gallons_in_tank = self.tank_capacity
volvo = Car("Volvo", "S40", "2017", 25, 40, 0)
print(volvo)
```
## Simple modification to class
OK, our car has a major problem: it can't be filled up.
Add a method called `fill_up()` to your class. It is up to you whether to allow filling by an arbitrary amount or only back to a full tank.
If you allow arbitrary amounts of fuel, remember to consider overfilling the tank.
Once you edit your class, existing objects do not automatically adapt to the changes you made. You will need to re-create them.
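One possible shape for the extended `fill_up()` is sketched below. This is only an illustration (the class name `TankMixin` is hypothetical); it accepts an arbitrary amount, defaults to topping off, and caps the result at the tank capacity, using the same attribute names as the `Car` class above.

```python
class TankMixin:
    """Sketch: a fill_up that takes an optional amount and never overfills."""
    def fill_up(self, gallons=None):
        if gallons is None:
            # no amount given: fill back to a full tank
            self.gallons_in_tank = self.tank_capacity
        else:
            # cap at capacity so overfilling is impossible
            self.gallons_in_tank = min(self.tank_capacity,
                                       self.gallons_in_tank + gallons)
        return self.gallons_in_tank

tank = TankMixin()
tank.tank_capacity = 30.0
tank.gallons_in_tank = 10.0
print(tank.fill_up(25))  # capped at capacity -> 30.0
```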
## Exceptions
Now make a modification to the `drive`-method: if an attempt is made to drive more than the gas will allow, create and raise an exception.
Instead of creating your own exception you may use a [ValueError](https://docs.python.org/3/library/exceptions.html#ValueError) for this case, as it is a logical choice.
Now add a try-except clause to the following:
```
try:
suv = Car("Ford", "Escape", 2017, mpg=18, tank_capacity=30)
suv.drive(600)
except ValueError:
print("Can't drive that far")
```
##### Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
##### Original code
https://github.com/MokkeMeguru/glow-realnvp-tutorial/
If you have any problems, please let me (@MokkeMeguru) know.
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TFP Bijector: RealNVP
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/RealNVP_vector.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/RealNVP_vector.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this example we show how to build a RealNVP using TFP's "Bijector."
### Dependencies & Prerequisites
```
#@title Install { display-mode: "form" }
TF_Installation = 'Tensorflow 2.0.0rc0 (GPU)' #@param ['Tensorflow 2.0.0rc0 (GPU)']
if TF_Installation == 'Tensorflow 2.0.0rc0 (GPU)':
!pip install -q --upgrade tensorflow-gpu==2.0.0rc0
print('Installation of `tensorflow-gpu-2.0.0-rc0` complete.')
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "0.8.0rc0" #@param ["0.8.0rc0"]
if TFP_Installation == "0.8.0rc0":
!pip install -q --upgrade tensorflow-probability==0.8.0rc0
print("Installation of `tensorflow-probability-0.8.0-rc0` complete.")
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
```
## RealNVP
Previous study: NICE
RealNVP proposes a more efficient flow-based model using a multi-scale architecture and coupling layers.
In this part, we experiment with a flow-based model built from coupling layers.
## Coupling Layer
tl;dr
An invertible function that scales one half of the input using quantities computed from the other half.
### Forward
\begin{eqnarray*}
x_{1:d}, x_{d+1:D} &=& split(x) \\
y_{1:d}&=&x_{1:d} \\
y_{d+1:D}&=&s \odot x_{d+1:D} + t \\
y &=& concat(y_{1:d}, y_{d+1:D}) \\
where && \log{s}, t = NN(x_{1:d}) \\
\end{eqnarray*}
### Inverse
\begin{eqnarray*}
y_{1:d}, y_{d+1:D} &=& split(y) \\
x_{1:d}&=&y_{1:d} \\
x_{d+1:D}&=& (y_{d+1:D} - t) / s \\
x &=& concat(x_{1:d},x_{d+1:D}) \\
where && \log{s}, t = NN(y_{1:d}) \\
\end{eqnarray*}
### Forward Jacobian
First, calculate the Jacobian matrix
\begin{eqnarray*}
\cfrac{\partial y}{\partial x}
=
\begin{bmatrix}
\frac{\partial y_{1:d}}{\partial x_{1:d}} & \frac{\partial y_{1:d}}{\partial x_{d+1:D}} \\
\frac{\partial y_{d+1:D}}{\partial x_{1:d}} & \frac{\partial y_{d+1:D}}{\partial x_{d+1:D}}
\end{bmatrix}
=
\begin{bmatrix}
\mathbb{I}_d & 0 \\
\frac{\partial y_{d+1:D}}{\partial x_{1:d}} & diag(s)
\end{bmatrix}
\end{eqnarray*}
Next, calculate the Jacobian determinant
\begin{eqnarray*}
det|\cfrac{dy}{dx}| = |J| = \Pi{s}
\end{eqnarray*}
For numerical stability, we usually work with the log-scaled Jacobian determinant (since $s = e^{\log s} > 0$)
\begin{eqnarray*}
\log{|J|} = \Sigma{\log s}
\end{eqnarray*}
### Inverse Jacobian
In the same way as the forward Jacobian,
\begin{eqnarray*}
\frac{\partial x}{\partial y}
=
\begin{bmatrix}
\frac{\partial x_{1:d}}{\partial y_{1:d}} & \frac{\partial x_{1:d}}{\partial y_{d+1:D}} \\
\frac{\partial x_{d+1:D}}{\partial y_{1:d}} & \frac{\partial x_{d+1:D}}{\partial y_{d+1:D}}
\end{bmatrix}
=
\begin{bmatrix}
\mathbb{I}_d & 0 \\
\frac{\partial x_{d+1:D}}{\partial y_{1:d}} & diag(\frac{1}{s})
\end{bmatrix}
\end{eqnarray*}
\begin{eqnarray*}
\log{|J'|} = - \Sigma{\log s}
\end{eqnarray*}
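To make the algebra above concrete, here is a tiny pure-Python sketch of one affine coupling step on a 2-vector (so $d = 1$), following the forward/inverse equations exactly. `toy_nn` is a stand-in for the conditioner network $NN(x_{1:d})$ that returns $(\log s, t)$; the sketch checks that inverse ∘ forward is the identity and that the conditioning half passes through unchanged.

```python
import math

def toy_nn(x_cond):
    # stand-in for NN(x_{1:d}): returns (log_s, t) computed from the conditioning half
    return 0.5 * x_cond, 2.0 * x_cond

def forward(x1, x2):
    # y_{1:d} = x_{1:d};  y_{d+1:D} = s * x_{d+1:D} + t
    log_s, t = toy_nn(x1)
    return x1, math.exp(log_s) * x2 + t

def inverse(y1, y2):
    # x_{1:d} = y_{1:d};  x_{d+1:D} = (y_{d+1:D} - t) / s
    log_s, t = toy_nn(y1)
    return y1, (y2 - t) / math.exp(log_s)

x = (0.3, -1.2)
y = forward(*x)
x_rec = inverse(*y)
print(abs(x_rec[1] - x[1]) < 1e-12)  # -> True
# the log-determinant of this step is just log s = 0.5 * x1 here
print(0.5 * x[0])
```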
## Problem Setting
Double Moon Distribution $\leftrightarrow$ Multivariate normal Distribution
data example = $[x, y]$ = $[0.1, -0.5]$
# Import Tensorflow
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
print('tensorflow: ', tf.__version__)
print('tensorflow-probability: ', tfp.__version__)
```
# Base Distribution
```
mvn = tfd.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[1., 1.])
mvn_samples = mvn.sample(5000)
plt.figure(figsize=(5,5))
plt.xlim([-4, 4])
plt.ylim([-4, 4])
plt.scatter(mvn_samples[:, 0], mvn_samples[:, 1], s=15)
```
# Target Distribution
```
def gen_double_moon_samples(num_samples):
assert num_samples % 2 == 0, "[Requirement] num_samples % 2 == 0"
x1_1 = tfd.Normal(loc=4.0, scale=4.0)
x1_1_samples = x1_1.sample(num_samples // 2)
x1_2 = tfd.Normal(
loc=0.25 * (x1_1_samples - 4) ** 2 - 20,
scale=tf.ones_like(num_samples / 2) * 2
)
x1_2_samples = x1_2.sample()
x2_1 = tfd.Normal(loc=4.0, scale=4.0)
x2_1_samples = x2_1.sample(num_samples // 2)
x2_2 = tfd.Normal(
loc=-0.25 * (x2_1_samples - 4) ** 2 + 20,
scale=tf.ones_like(num_samples / 2) * 2,
)
x2_2_samples = x2_2.sample()
x1_samples = tf.stack([x1_1_samples * 0.2, x1_2_samples * 0.1], axis=1)
x2_samples = tf.stack([x2_1_samples * 0.2 - 2, x2_2_samples * 0.1], axis=1)
x_samples = tf.concat([x1_samples, x2_samples], axis=0)
return x_samples
base_samples = gen_double_moon_samples(50000)
base_samples = tf.random.shuffle(base_samples)
plt.figure(figsize=(5, 5))
plt.xlim([-4, 4])
plt.ylim([-4, 4])
plt.scatter(base_samples[:, 0], base_samples[:, 1], s=15)
```
# Create Dataset
```
SHUFFLE_BUFFER_SIZE = 10000
BATCH_SIZE = 10000
train_dataset = (
tf.data.Dataset.from_tensor_slices(base_samples)
.shuffle(SHUFFLE_BUFFER_SIZE)
.batch(BATCH_SIZE)
)
for i in train_dataset.take(1):
print('data samples: ', len(i))
plt.figure(figsize=(5, 5))
plt.xlim([-4, 4])
plt.ylim([-4, 4])
plt.scatter(i[:, 0], i[:, 1], s=15)
```
# NN Layer
```
from tensorflow.keras.layers import Layer, Dense, BatchNormalization, ReLU
from tensorflow.keras import Model
class NN(Layer):
def __init__(self, input_shape, n_hidden=[512, 512], activation="relu", name="nn"):
super(NN, self).__init__(name="nn")
layer_list = []
for i, hidden in enumerate(n_hidden):
layer_list.append(Dense(hidden, activation=activation, name='dense_{}_1'.format(i)))
layer_list.append(Dense(hidden, activation=activation, name='dense_{}_2'.format(i)))
self.layer_list = layer_list
self.log_s_layer = Dense(input_shape, activation="tanh", name='log_s')
self.t_layer = Dense(input_shape, name='t')
def call(self, x):
y = x
for layer in self.layer_list:
y = layer(y)
log_s = self.log_s_layer(y)
t = self.t_layer(y)
return log_s, t
def nn_test():
nn = NN(1, [512, 512])
x = tf.keras.Input([1])
log_s, t = nn(x)
    # Non-trainable params would come from layers such as BatchNormalization (none are used here)
tf.keras.Model(x, [log_s, t], name="nn_test").summary()
nn_test()
```
# RealNVP Bijector Layer
```
class RealNVP(tfb.Bijector):
def __init__(
self,
input_shape,
n_hidden=[512, 512],
        # this bijector operates on vector-valued (rank-1) events
forward_min_event_ndims=1,
validate_args: bool = False,
name="real_nvp",
):
"""
Args:
input_shape:
input_shape,
ex. [28, 28, 3] (image) [2] (x-y vector)
"""
super(RealNVP, self).__init__(
validate_args=validate_args, forward_min_event_ndims=forward_min_event_ndims, name=name
)
assert input_shape[-1] % 2 == 0
self.input_shape = input_shape
nn_layer = NN(input_shape[-1] // 2, n_hidden)
nn_input_shape = input_shape.copy()
nn_input_shape[-1] = input_shape[-1] // 2
x = tf.keras.Input(nn_input_shape)
log_s, t = nn_layer(x)
self.nn = Model(x, [log_s, t], name="nn")
def _forward(self, x):
x_a, x_b = tf.split(x, 2, axis=-1)
y_b = x_b
log_s, t = self.nn(x_b)
s = tf.exp(log_s)
y_a = s * x_a + t
y = tf.concat([y_a, y_b], axis=-1)
return y
def _inverse(self, y):
y_a, y_b = tf.split(y, 2, axis=-1)
x_b = y_b
log_s, t = self.nn(y_b)
s = tf.exp(log_s)
x_a = (y_a - t) / s
x = tf.concat([x_a, x_b], axis=-1)
return x
    def _forward_log_det_jacobian(self, x):
        _, x_b = tf.split(x, 2, axis=-1)
        log_s, t = self.nn(x_b)
        # log|det J| = sum of log s over the transformed dimensions;
        # with forward_min_event_ndims=1 the event axis must be reduced here
        return tf.reduce_sum(log_s, axis=-1)
def realnvp_test():
realnvp = RealNVP(input_shape=[2], n_hidden=[512, 512])
x = tf.keras.Input([2])
y = realnvp.forward(x)
print('trainable_variables :', len(realnvp.trainable_variables))
Model(x, y, name="realnvp_test").summary()
realnvp_test()
```
# TransformDistribution
```
num_realnvp = 4
bijector_chain = []
for i in range(num_realnvp):
bijector_chain.append(RealNVP(input_shape=[2], n_hidden=[256, 256]))
bijector_chain.append(tfp.bijectors.Permute([1, 0]))
flow = tfd.TransformedDistribution(
distribution=mvn,
bijector=tfb.Chain(list(reversed(bijector_chain)))
)
print('trainable_variables: ', len(flow.bijector.trainable_variables))
```
# Test Pretrained Model
```
samples = flow.sample(10000)
plt.figure(figsize=(5, 5))
plt.xlim([-4, 4])
plt.ylim([-4, 4])
plt.scatter(samples[:, 0], samples[:, 1], s=15)
for targets in train_dataset.take(1):
targets = targets
#print(flow.bijector.inverse(targets))
print(tf.reduce_sum(flow.bijector.inverse_log_det_jacobian(targets, event_ndims=1)))
res = flow.bijector.inverse(targets)
print(tf.reduce_mean(flow.log_prob(res)))
print(flow.log_prob(res).shape)
targets.shape
plt.scatter(targets[:,0], targets[:,1])
plt.scatter(res[:, 0], res[:,1])
```
# Training
```
# !rm -r checkpoints
@tf.function
def loss(targets):
return - tf.reduce_mean(flow.log_prob(targets))
optimizer = tf.optimizers.Adam(learning_rate=1e-4)
# log is logger which can watch training via TensorBoard
# log = tf.summary.create_file_writer('checkpoints')
avg_loss = tf.keras.metrics.Mean(name='loss', dtype=tf.float32)
n_epochs = 300
for epoch in range(n_epochs):
for targets in train_dataset:
with tf.GradientTape() as tape:
log_prob_loss = loss(targets)
grads = tape.gradient(log_prob_loss, flow.trainable_variables)
optimizer.apply_gradients(zip(grads, flow.trainable_variables))
avg_loss.update_state(log_prob_loss)
if tf.equal(optimizer.iterations % 100, 0):
# with log.as_default():
# tf.summary.scalar("loss", avg_loss.result(), step=optimizer.iterations)
# print(
# "Step {} Loss {:.6f}".format(
# optimizer.iterations, avg_loss.result()
# )
# )
# avg_loss.reset_states()
print("Step {} Loss {:.6f}".format(optimizer.iterations, avg_loss.result()))
avg_loss.reset_states()
```
# Inference
```
base = flow.distribution.sample(10000)
targets = flow.sample(10000)
plt.scatter(base[:, 0], base[:, 1], s=15)
plt.scatter(targets[:, 0], targets[:, 1], s=15)
targets = gen_double_moon_samples(10000)
base = flow.bijector.inverse(targets)
targets.shape
plt.scatter(base[:, 0], base[:,1], s=15)
plt.scatter(targets[:,0], targets[:,1], s=15)
```
## SVM with Kernel trick to classify flowers!
As we saw before, logistic regression had issues classifying the Iris dataset, and since we really want that trip, we will try the kernel trick to beat that non-linearity!!
This is from the excellent reference https://scikit-learn.org/stable/auto_examples/svm/plot_iris_svc.html#sphx-glr-auto-examples-svm-plot-iris-svc-py
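The "kernel trick" replaces dot products in the SVM with a kernel function. For the RBF kernel used below, that function is $K(x, y) = \exp(-\gamma \lVert x - y \rVert^2)$; a quick sketch of computing it by hand (matching scikit-learn's definition, with the same `gamma=0.7` used later):

```python
import math

def rbf_kernel(x, y, gamma=0.7):
    """RBF (Gaussian) kernel between two points, as in svm.SVC(kernel='rbf')."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # identical points -> 1.0
print(rbf_kernel([0.0, 0.0], [3.0, 4.0]) < 1e-5)  # distant points -> True
```

The kernel value decays from 1 (identical points) toward 0 as points move apart, which is what lets the SVM carve out non-linear boundaries.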
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
def make_meshgrid(x, y, h=.02):
"""Create a mesh of points to plot in
Parameters
----------
x: data to base x-axis meshgrid on
y: data to base y-axis meshgrid on
h: stepsize for meshgrid, optional
Returns
-------
xx, yy : ndarray
"""
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
def plot_contours(ax, clf, xx, yy, **params):
"""Plot the decision boundaries for a classifier.
Parameters
----------
ax: matplotlib axes object
clf: a classifier
xx: meshgrid ndarray
yy: meshgrid ndarray
params: dictionary of params to pass to contourf, optional
"""
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
# import some data to play with
iris = datasets.load_iris()
# Take the first two features. We could avoid this by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
# we create an instance of SVM and fit out data. We do not scale our
# data since we want to plot the support vectors
C = 1.0 # SVM regularization parameter
models = (svm.SVC(kernel='linear', C=C), #SVM with regularization
svm.LinearSVC(C=C, max_iter=10000), #SVM without regularization
svm.SVC(kernel='rbf', gamma=0.7, C=C), #SVM with Gaussian kernel
svm.SVC(kernel='poly', degree=3, gamma='auto', C=C)) #SVM with polynomial kernel!
models = (clf.fit(X, y) for clf in models)
# title for the plots
titles = ('SVC with linear kernel',
'LinearSVC (linear kernel)',
'SVC with RBF kernel',
'SVC with polynomial (degree 3) kernel')
# Set-up 2x2 grid for plotting.
fig, sub = plt.subplots(2, 2)
plt.subplots_adjust(wspace=0.4, hspace=0.4)
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
for clf, title, ax in zip(models, titles, sub.flatten()):
plot_contours(ax, clf, xx, yy,
cmap=plt.cm.coolwarm, alpha=0.8)
ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xlabel('Sepal length')
ax.set_ylabel('Sepal width')
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(title)
plt.show()
```
# HPV vaccination rates in Young Adults
## April 6, 2020
## University of Utah<br>Department of Biomedical Informatics
### Monika Baker<br>Betsy Campbell<br>Simone Longo
## Introduction
The <i>human papillomavirus</i> (HPV) is the most common sexually transmitted infection (STI) and affects 78 million Americans, primarily in their late teens and early twenties. While many HPV infections are benign, more severe cases can lead to lesions, warts, and a significantly increased risk of cancer. The WHO reports that nearly all cervical cancers, as well as large proportions of cancers of other reproductive regions, can be attributed to HPV infections. Fortunately, a vaccine exists that protects against the most virulent forms of HPV and is recommended for all people from as early as 9 up to 27 years old. If the immunization schedule is started early enough, the full course may be administered in two doses; otherwise three doses are required.
The CDC provides vaccination data as a proportion of adults aged 12-17 by state who have received each round of the HPV vaccination (link: https://www.cdc.gov/mmwr/volumes/65/wr/mm6533a4.htm#T3_down).
## Reading and Processing Data
```
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 25, 15
plt.rcParams['font.size'] = 18
```
Get a quick overview of the data.
```
import pandas as pd
import seaborn as sns
data = pd.read_csv('hpv_melt.csv')
sns.barplot(x=data.vaccine, y=data.proportion)
```
From this plot, we immediately see that the proportion of HPV vaccinations decreases from one round of shots to the next. We also see a large difference between male and female rates.
```
from statannot import add_stat_annotation
melt_hpv = data
melt_hpv['gender'] = melt_hpv.vaccine.apply(lambda x: x.split('_')[-1])
melt_hpv['HPV_round'] = melt_hpv.vaccine.apply(lambda x: "".join(x.split('_')[:-1]))
order = list(set(melt_hpv.HPV_round))
boxpairs = [((order[0], 'fem'), (order[0], 'm')),
((order[1], 'fem'), (order[1], 'm')),
((order[2], 'fem'), (order[2], 'm'))]
ax = sns.boxplot(x="HPV_round", y="proportion", hue="gender", data=melt_hpv)
res = add_stat_annotation(ax, data=melt_hpv, x="HPV_round", y="proportion", hue="gender",
box_pairs=boxpairs, test='Mann-Whitney', loc='inside')
```
We can also see that differences between male and female proportions from one round to the next are also statistically significant.
### Comparing to Education Data
We first load the data from https://nces.ed.gov/programs/digest/d19/tables/dt19_203.40.asp?current=yes to obtain current enrollment information. This will be used to standardize spending and other statewide metrics on a per-pupil basis.
Total expenditures per state can be found here https://nces.ed.gov/programs/digest/d19/tables/dt19_236.30.asp?current=yes. In the following cells, the data from these two sources will be combined to show how HPV vaccination rates correlate with per-pupil education spending.
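The standardization described above is simply total expenditures divided by enrollment. A minimal sketch with made-up numbers (these are illustrative values, not the NCES figures loaded below):

```python
# Hypothetical state totals -- illustration only, not the NCES data.
expenditures = {'Utah': 5_000_000_000, 'Vermont': 1_900_000_000}
enrollment = {'Utah': 660_000, 'Vermont': 85_000}

# per-pupil spending = total expenditures / total enrollment
cost_per_student = {state: expenditures[state] / enrollment[state]
                    for state in expenditures}
print(round(cost_per_student['Utah']))     # -> 7576
print(round(cost_per_student['Vermont']))  # -> 22353
```

The cells below do the same division on the cleaned CSV columns to produce the `CostPerStudent` column.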
```
# Get total enrollment across states and territories after a little data cleaning
enrollment = pd.read_csv('enrollment.csv', header=None)
# standardize names
enrollment[0] = [i.strip().split('..')[0].strip() for i in enrollment[0]]
expenditures = pd.read_csv('expenditures.csv', header=None, index_col=0)
expenditures.index = [i.strip().split('..')[0].strip() for i in expenditures.index]
expenditures.iloc[:,0] = [int(str(i).replace(',','')) for i in expenditures.iloc[:,0]]
expenditures['enrollment'] = [int(str(i).replace(',','')) for i in enrollment.iloc[:,1]]
expenditures['CostPerStudent'] = expenditures.iloc[:,0] / expenditures.iloc[:,1]
expenditures.columns = ['expenditures', 'enrollment', 'CostPerStudent']
expenditures = expenditures.sort_index()
expenditures.sort_values(by='CostPerStudent').head()
df =pd.read_csv('hpv_clean_w_err.csv', index_col=0)
df.columns = ['State', *df.columns[1:]]
df = df.set_index('State')
hpv = df.iloc[:,3:9]
hpv['AverageHPV_Rate'] = df.mean(axis=1)
hpv = hpv.sort_index()
sns.scatterplot(y=hpv.AverageHPV_Rate, x=expenditures.CostPerStudent)
plot_trendline(y=hpv.AverageHPV_Rate, x=expenditures.CostPerStudent)
```
We see some weak correlation between higher spending per-pupil and higher HPV vaccination rates. This evidence is further validated by examining sexual education requirements.
The following sexual education data was taken from https://www.guttmacher.org/state-policy/explore/sex-and-hiv-education.
```
cdm = pd.read_csv('condoms.csv', header=None, index_col=0)
cdm[2] = [hpv.loc[x, 'AverageHPV_Rate'] for x in cdm.index]
#sns.boxplot(cdm[1], cdm[2])
cdm.columns = ['Required', 'AverageHPV_Rate']
mww_2g(cdm[cdm.Required == 0].AverageHPV_Rate, cdm[cdm.Required == 1].AverageHPV_Rate,
names=['NotRequired', 'Required'], col_names=['Average HPV Rate', 'Are condoms required in sex ed?'])
# Some helper functions (run this cell before the plotting cells above that call plot_trendline and mww_2g)
from statsmodels.formula.api import ols
import numpy as np
from scipy.stats import mannwhitneyu as mww
import itertools as it
def plot_trendline(x, y, c='r'):
data = {'x':x, 'y':y}
model = ols("y ~ x", data=data)
results = model.fit()
m = results.params[1]
b = results.params[0]
xax = np.linspace(x.min(), x.max(), 100)
yax = m * xax + b
plt.plot(xax, yax, c, label='y = {} x + {}\nR^2 = {}'.format(m, b, results.rsquared))
plt.legend(fontsize=24)
plt.show()
def mww_2g(g1, g2, names=None, col_names=['Value', 'Variable']):
if names is None:
name1 = g1.name
name2 = g2.name
else:
name1 = names[0]
name2 = names[1]
order = [name1, name2]
boxpairs = [(name1, name2)]
stat, pvalue = mww(g1, g2)
df = pd.DataFrame(zip(g1, it.repeat(name1)))
df = df.append(pd.DataFrame(zip(g2, it.repeat(name2))))
df.columns = col_names
plt.figure()
ax = sns.boxplot(data=df, x=col_names[1], y=col_names[0], order=order)
res = add_stat_annotation(ax, data=df, x=col_names[1], y=col_names[0],
box_pairs=boxpairs, perform_stat_test=False, pvalues=[pvalue],
test_short_name='Mann-Whitney-Wilcoxon', text_format='star', verbose=2, loc='inside')
```
# **Model**
```
experiment_label = 'SVC04_na'
user_label = 'tay_donovan'
```
## **Aim**
Look for a performance improvement in the SVC model by zeroing out all negative values.
## **Findings**
Findings for this notebook
```
#Initial imports
import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
import os
import sys
sys.path.append(os.path.abspath('..'))
from src.common_lib import DataReader, NBARawData
from sklearn.svm import SVC
```
## **Data input and cleansing**
```
#Load dataset using common function DataReader.read_data()
data_reader = DataReader()
# Load Raw Train Data
df_train = data_reader.read_data(NBARawData.TRAIN)
# Load Test Raw Data
df_test = data_reader.read_data(NBARawData.TEST)
#For train dataframe, remove redundant column 'Id_old'
cols_drop = ["Id", "Id_old"]
df_train.drop(cols_drop, axis=1, inplace=True)
df_train.columns = df_train.columns.str.strip()
df_train.describe()
#For test dataframe, remove redundant column 'Id_old'
df_test.drop(cols_drop, axis=1, inplace=True)
df_test.columns = df_test.columns.str.strip()
df_test.describe()
```
## **Negative values in dataset**
```
print(df_train.where(df_train < 0).count())
# Negative values do not make sense in this context
#Define negative cleaning function
def clean_negatives(strategy, df):
if strategy=='abs':
df = abs(df)
if strategy=='null':
df[df < 0] = 0
if strategy=='mean':
df[df < 0] = None
df.fillna(df.mean(), inplace=True)
return(df)
#Clean negative numbers
negatives_strategy = 'null'
df_train = clean_negatives(negatives_strategy, df_train)
df_test = clean_negatives(negatives_strategy, df_test)
```
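The three strategies behave quite differently; a quick illustration on plain numbers (the notebook applies the same logic element-wise to the DataFrame; note that pandas' `fillna(df.mean())` uses per-column means, which this simplified sketch approximates with the mean of the remaining values):

```python
values = [3, -2, 5, -1]

abs_cleaned  = [abs(v) for v in values]             # 'abs'  -> [3, 2, 5, 1]
null_cleaned = [0 if v < 0 else v for v in values]  # 'null' -> [3, 0, 5, 0]
mean_of_rest = (sum(v for v in values if v >= 0)
                / sum(1 for v in values if v >= 0))
mean_cleaned = [mean_of_rest if v < 0 else v for v in values]  # 'mean' -> [3, 4.0, 5, 4.0]
print(null_cleaned)  # -> [3, 0, 5, 0]
```

The 'null' strategy chosen in this experiment discards the magnitude of negative entries entirely, which is the behavior being tested here.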
## **Feature Correlation and Selection**
```
#Use Pearson Correlation to determine feature correlation
pearsoncorr = df_train.corr('pearson')
#Create heatmap of pearson correlation factors
fig, ax = plt.subplots(figsize=(10,10))
sb.heatmap(pearsoncorr,
xticklabels=pearsoncorr.columns,
yticklabels=pearsoncorr.columns,
cmap='RdBu_r',
annot=True,
linewidth=0.2)
#Drop correlated features w/ score over 0.9 - retain "MINS", "3P MADE","FTM","REB"
selected_features = data_reader.select_feature_by_correlation(df_train)
```
## **Standard Scaling**
```
#Standardise scaling of all feature values
df_train_selected = df_train[selected_features]
#Apply scaler
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
df_cleaned = df_train_selected.copy()
target = df_cleaned.pop('TARGET_5Yrs')
df_train_cleaned = scaler.fit_transform(df_cleaned)
df_train_scaled = pd.DataFrame(df_train_cleaned)
df_train_scaled.columns = df_cleaned.columns
df_train_scaled['TARGET_5Yrs'] = target
# Split the training dataset using common function data_reader.splitdata
X_train, X_val, y_train, y_val = data_reader.split_data(df_train)
#X_train, X_val, y_train, y_val = data_reader.split_data(df_train_scaled)
```
## **Model Selection and Training**
```
#Create Optimised Model
optmodel = SVC()
#Use GridSearchCV to optimise parameters
from sklearn.model_selection import GridSearchCV
# defining parameter range
param_grid = {'C': [0.1, 1, 10, 100, 500],
'gamma': [1, 0.1, 0.01, 0.001, 0.0001],
'kernel': ['rbf']}
grid = GridSearchCV(SVC(probability=True), param_grid, refit = True, verbose = 3, scoring="roc_auc", n_jobs=-2)
# fitting the model for grid search
grid.fit(X_train, y_train)
#Print the optimised parameters
print(grid.best_params_)
#Create model with the optimised parameters
model = SVC(C=500, break_ties=False, class_weight='balanced', coef0=0.0,
decision_function_shape='ovr', degree=3,
gamma=0.0001, kernel='rbf', max_iter=-1,
probability=True, random_state=None, shrinking=True,
tol=0.001, verbose=False)
X_train.describe()
model.fit(X_train, y_train);
#Store model in /models
from joblib import dump
dump(model, '../models/' + experiment_label + '.joblib')
```
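`GridSearchCV` above tries every `(C, gamma)` pair in `param_grid` and keeps the one with the best cross-validated ROC AUC. Its core loop is equivalent to this hedged sketch, where `toy_score` is a stand-in for the cross-validated scorer:

```python
from itertools import product

param_grid = {'C': [0.1, 1, 10], 'gamma': [1, 0.1, 0.01]}

def toy_score(C, gamma):
    # stand-in for cross-validated roc_auc; peaks at C=10, gamma=0.01
    return -abs(C - 10) - abs(gamma - 0.01)

best_params, best_score = None, float('-inf')
# exhaustively score every combination, exactly as GridSearchCV does
for C, gamma in product(param_grid['C'], param_grid['gamma']):
    score = toy_score(C, gamma)
    if score > best_score:
        best_params, best_score = {'C': C, 'gamma': gamma}, score
print(best_params)  # -> {'C': 10, 'gamma': 0.01}
```

The real search does the same enumeration but evaluates each candidate with k-fold cross-validation, which is why `n_jobs=-2` (parallel workers) pays off.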
## **Model Evaluation**
```
#Create predictions for train and validation
y_train_preds = model.predict(X_train)
y_val_preds = model.predict(X_val)
#Evaluate train predictions
#from src.models.aj_metrics import confusion_matrix
from sklearn.metrics import roc_auc_score
from sklearn.metrics import plot_roc_curve, plot_precision_recall_curve
from sklearn.metrics import classification_report
sys.path.append(os.path.abspath('..'))
from src.models.aj_metrics import confusion_matrix
y_train_preds
#Training performance results
print("ROC AUC Score:")
print(roc_auc_score(y_train,y_train_preds))
print(classification_report(y_train, y_train_preds))
#Confusion matrix
print(confusion_matrix(y_train, y_train_preds))
#ROC Curve
plot_roc_curve(model,X_train, y_train)
#Precision Recall Curve
plot_precision_recall_curve(model,X_train,y_train)
#Validation performance analysis
print("ROC AUC Score:")
print(roc_auc_score(y_val,y_val_preds))
print("Confusion Matrix:")
print(classification_report(y_val, y_val_preds))
#Confusion matrix
print(confusion_matrix(y_train, y_train_preds))
#ROC Curve
plot_roc_curve(model,X_val, y_val)
#Precision Recall Curve
plot_precision_recall_curve(model,X_train,y_train)
```
## **Test output**
```
#Output predictions
X_test = df_test
y_test_preds = model.predict_proba(X_test)[:,1]
y_test_preds
output = pd.DataFrame({'Id': range(0,3799), 'TARGET_5Yrs': [p for p in y_test_preds]})
output.to_csv("../reports/" + user_label + "_submission_" + experiment_label + ".csv", index=False)
```
## **Outcome**
After submitting the predictions to Kaggle, the final score was 0.70754
# Homework: Visualizing UFOs
## The Problem
Let's go back to the UFO Dataset from earlier, remove the dataframe and add a Map!
- You can start with the example from the previous unit.
- Require a state to be selected. (no more selecting all states)
- Remove the option to search by shape.
- load a dataset of geographic centers, so you can center the map accordingly. https://raw.githubusercontent.com/mafudge/datasets/master/usa/geographic-centers.csv
- When you run the interact:
- Create a map
- UFO Dataframe is filtered according to the search criteria
- For each row in the dataframe, City and State must be geocoded to a Latitude and Longitude
    - Add a pin for each dataframe row, with the summary as the hover text
- show the map
### Screenshot of Sample Run

HINTS and ADVICE:
- Use the **problem simplification** approach. Solve an easier problem, then ramp up the complexity.
- Start with the example from the previous lesson.
- Remove the search by shape and require a state to be selected
- Add the map and get it to center over the selected state
- Geocode the UFO sightings, add the pins to the map
## Part 1: Problem Analysis
Inputs:
```
TODO: Inputs
```
Outputs:
```
TODO: Outputs
```
Algorithm (Steps in Program):
```
TODO:Steps Here
```
## Part 2: Code Solution
You may write your code in several cells, but place the complete, final working copy of your code solution within this single cell below. Only the code within this cell will be considered your solution. Any imports or user-defined functions should be copied into this cell.
```
# Step 2: Write code here
```
## Part 3: Questions
1. Why does the program crash when you select `AB` or `ON` ? What is happening here?
`--== Double-Click and Write Your Answer Below This Line ==--`
2. It takes a while to draw the map because of the geocoding process. Describe an approach to improve performance.
`--== Double-Click and Write Your Answer Below This Line ==--`
3. How would you handle errors in this program to make it more user friendly?
`--== Double-Click and Write Your Answer Below This Line ==--`
## Part 4: Reflection
Reflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, and cite specifics relevant to the activity as to help the grader understand how you arrived at the code you submitted. Things to consider touching upon: Elaborate on the process itself. Did your original problem analysis work as designed? How many iterations did you go through before you arrived at the solution? Where did you struggle along the way and how did you overcome it? What did you learn from completing the assignment? What do you need to work on to get better? What was most valuable and least valuable about this exercise? Do you have any suggestions for improvements?
To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise.
Keep your response to between 100 and 250 words.
`--== Double-Click and Write Your Reflection Below Here ==--`
```
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
```
```
#!pip3 install qiskit
#!pip3 install pylatexenc
from qiskit import *
import numpy as np
from scipy.stats import norm
from matplotlib import pyplot as plt
from scipy.stats import rv_continuous
def unitary(circ,eta,phi,t):
theta = np.arccos(-eta);
circ.u3(theta,phi,t,0);
"""
get: get함수를 호출할 때마다 파라미터에 랜덤한 값을 주어서 게이트 적용 후의 상태 벡터를 추출합니다.
이때, add함수로 게이트를 적용하고, 추출 후에는 remove함수로 게이트를 지웁니다.
draw: 현재 circuit을 그리는 함수 입니다. 아직 미완성입니다.
"""
class PQC:
def __init__(self,name):
self.backend = Aer.get_backend('statevector_simulator');
self.circ = QuantumCircuit(2);
self.name = name;
self.seed =14256;
np.random.seed(self.seed);
if self.name=="rz":
self.circ.h(0);
if self.name=="rzx":
self.circ.h(0);
if self.name=="uni":
self.circ.h(0);
self.circ.h(1);
if self.name=="crz":
self.circ.h(0);
self.circ.h(1);
if self.name=="cry":
self.circ.h(0);
self.circ.h(1);
if self.name=="crx":
self.circ.h(0);
self.circ.h(1);
def add(self):
if self.name == "rz":
th = np.random.uniform(0,2*np.pi);
self.circ.rz(th,0);
if self.name =="crz":
th = np.random.uniform(0,2*np.pi);
c = QuantumCircuit(1,name="Rz");
c.rz(th,0);
temp = c.to_gate().control(1);
self.circ.append(temp,[0,1]);
if self.name =="crx":
th = np.random.uniform(0,2*np.pi);
c = QuantumCircuit(1,name="Rx");
c.rx(th,0);
temp = c.to_gate().control(1);
self.circ.append(temp,[0,1]);
if self.name =="cry":
th = np.random.uniform(0,2*np.pi);
c = QuantumCircuit(1,name="Ry");
c.ry(th,0);
temp = c.to_gate().control(1);
self.circ.append(temp,[0,1]);
if self.name == "rzx":
th1 = np.random.uniform(0,2*np.pi);
self.circ.rz(th1,0);
th2 = np.random.uniform(0,2*np.pi);
self.circ.rx(th2,0);
if self.name =="uni":
eta = np.random.uniform(-1,1);
theta = np.arccos(-eta);
phi = np.random.uniform(0,2*np.pi);
t = np.random.uniform(0,2*np.pi);
c = QuantumCircuit(1);
c.u3(theta,phi,t,0);
temp = c.to_gate().control(1);
self.circ.append(temp,[0,1]);
def remove(self):
if self.name == "crz":
self.circ.data.pop(2);
if self.name == "cry":
self.circ.data.pop(2);
if self.name == "crx":
self.circ.data.pop(2);
if self.name == "uni":
self.circ.data.pop(2);
if self.name == "rzx":
self.circ.data.pop(1);
self.circ.data.pop(1);
def get(self):
self.add();
# print(self.circ.data)
# print(self.circ)
result = execute(self.circ,self.backend).result();
out_state = result.get_statevector();
self.remove(); # remove a random gate
return np.asmatrix(out_state).T;
def draw(self):
th = np.random.uniform(0,2*np.pi);
self.circ.rz(th,0);
fig = self.circ.draw('mpl'); # draw('mpl') returns a Figure; capture it instead of discarding it, otherwise nothing renders
self.circ.data.pop(1);
return fig;
pqc = PQC("uni");
pqc.get();
# print(pqc.draw());
print(pqc.circ)
def Haar(F,N):
if F<0 or F>1:
return 0;
return (N-1)*(1-F)**(N-2);
class Haar_dist(rv_continuous):
def _pdf(self,x):
return Haar(x,1*2);
def kl_divergence(p, q):
return np.sum(np.where(p != 0, p * np.log(p / q), 0))
n_samples = 10000;
arr = [];
for i in range(n_samples):
fid = np.abs(pqc.get().getH()*pqc.get())**2; # draw two random states from the PQC and compute their fidelity
arr.append(fid[0,0]);
haar = [];
h = Haar_dist(a=0,b=1,name="haar");
for i in range(n_samples):
haar.append(h.ppf((i+0.5)/n_samples))
n_bins = 100;
# We can set the number of bins with the `bins` kwarg
# haar = np.linspace(0,1,n_samples,endpoint=False); #true value
haar_pdf = plt.hist(np.array(haar), bins=n_bins, alpha=0.5)[0]/n_samples; # plt.hist(...)[0] returns the count per bin; dividing by the sample count gives the empirical pdf of the fidelity
pqc_pdf = plt.hist(np.array(arr), bins=n_bins, alpha=0.5)[0]/n_samples;
# print("\n\n\n****\n",haar_pdf);
# print("\n\n\n****\n",pqc_pdf);
kl = kl_divergence(pqc_pdf,haar_pdf);
# kl = kkll(pqc_pdf,haar_pdf);
# print(np.sum(pqc_pdf))
plt.title('cU3: KL(P||Q) = %1.4f' % kl)
```
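One caveat about the `kl_divergence` estimator used above: any bin where `q` is zero but `p` is not makes the divergence infinite, and empty histogram bins trigger exactly that. A self-contained version of the same estimator, for reference:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(P||Q) for discrete distributions. Bins with p == 0 contribute 0;
    bins with q == 0 but p > 0 make the divergence infinite, so with
    histogram estimates the number of bins should be small enough that
    no q-bin is empty (or the histograms should be smoothed)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    with np.errstate(divide='ignore', invalid='ignore'):
        return float(np.sum(np.where(p > 0, p * np.log(p / q), 0.0)))

p = np.array([0.5, 0.5])
print(kl_divergence(p, p))  # 0.0
```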
```
import pandas as pd
import numpy as np
import itertools
from sklearn.cluster import KMeans
import pprint
```
## 1. Prepare input for node2vec
> We'll use a CSV file where each row represents a single recommendable item: it contains a comma separated list of the named entities that appear in the item's title.
Each sample is a single sequence of features.
```
named_entities_df = pd.read_csv('../output/event_features.csv')
named_entities_df.columns = ['named_entities']
# named_entities_df['named_entities'] = named_entities_df.named_entities.str.replace(" ", ",")
# (the replacement above was only done to adapt the data to the code)
named_entities_df.head()
```
> First, we'll have to tokenize the named entities, since `node2vec` expects integers.
Tokenize the entities into integer node features.
```
tokenizer = dict()
named_entities_df['named_entities'] = named_entities_df['named_entities'].astype(str).apply(
lambda named_entities: [tokenizer.setdefault(named_entity, len(tokenizer)) for named_entity in named_entities.split(' ')]
)
named_entities_df.head()
# https://blog.csdn.net/u012535605/article/details/81709834
# astype(str)
pprint.pprint(list(tokenizer.items())[0:5])
print('There are', len(tokenizer), 'events in total; each of them gets an embedding')
```
In order to construct the graph on which we'll run node2vec, we first need to understand which named entities appear together.
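`itertools.combinations` enumerates every unordered pair within one item's entity list, which is exactly what "appear together" means here. A tiny check:

```python
import itertools

# all unordered co-occurring pairs from one item's (tokenized) entity list
entities = [3, 7, 9]
pairs = list(itertools.combinations(entities, 2))
print(pairs)  # [(3, 7), (3, 9), (7, 9)]
```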
```
named_entities_df.shape
pairs_df = named_entities_df['named_entities'].apply(lambda named_entities: list(itertools.combinations(named_entities, 2)))
pairs_df = pairs_df[pairs_df.apply(len) > 0]
pairs_df = pd.DataFrame(np.concatenate(pairs_df.values), columns=['named_entity_1', 'named_entity_2'])
pairs_df.head()
pairs_df.shape
```
Now we can construct the graph. The weight of an edge connecting two named entities will be the number of times these named entities appear together in our dataset.
```
pairs_df.groupby(['named_entity_1', 'named_entity_2']).size().reset_index(name='weight').head()
NAMED_ENTITIES_CO_OCCURENCE_THRESHOLD = 0
# By default, 25
edges_df = pairs_df.groupby(['named_entity_1', 'named_entity_2']).size().reset_index(name='weight')
edges_df = edges_df[edges_df['weight'] > NAMED_ENTITIES_CO_OCCURENCE_THRESHOLD]
edges_df[['named_entity_1', 'named_entity_2', 'weight']].to_csv('edges.csv', header=False, index=False, sep=' ')
# node2vec reads the edge list as text, so the fields must be separated by ' '
# https://github.com/aditya-grover/node2vec/issues/42
edges_df.head()
edges_df.shape
```
Next, we'll run `node2vec`, which will output the result embeddings in a file called `emb`.
We'll use the open source implementation developed by [Stanford](https://github.com/snap-stanford/snap/tree/master/examples/node2vec).
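The `emb` file node2vec produces uses the word2vec text format: a header line `<num_nodes> <dim>`, then one `<node> <v1> ... <vd>` line per node — which is why the read below skips row 0. A minimal parser of that format, for reference (`parse_emb` is an illustrative helper, not part of node2vec):

```python
# Minimal parser for the word2vec-style text format node2vec writes to `emb`.
def parse_emb(text):
    lines = text.strip().splitlines()
    n, dim = map(int, lines[0].split())        # header: "<num_nodes> <dim>"
    emb = {}
    for line in lines[1:]:                     # body: "<node> <v1> ... <vd>"
        parts = line.split()
        emb[int(parts[0])] = [float(x) for x in parts[1:]]
    assert len(emb) == n and all(len(v) == dim for v in emb.values())
    return emb

sample = "2 3\n0 0.1 0.2 0.3\n1 -0.5 0.0 0.25\n"
print(parse_emb(sample)[1])  # [-0.5, 0.0, 0.25]
```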
```
# !git clone https://github.com/JiaxiangBU/node2vec.git
# After cloning, run the node2vec code (built on top of word2vec); I adapted it to work on Python 3.
!python node2vec/src/main.py --input edges.csv --output emb --weighted
```
## 2. Read embedding and run KMeans clusterring:
```
emb_df = pd.read_csv('emb', sep=' ', skiprows=[0], header=None)
emb_df.set_index(0, inplace=True)
emb_df.index.name = 'named_entity'
emb_df.head()
emb_df.shape
```
Essentially every entity now has an embedding.
> Each column is a dimension in the embedding space. Each row contains the dimensions of the embedding of one named entity.
Each column is one dimension of the embedding.
> We'll now cluster the embeddings using a simple clustering algorithm such as k-means.
Next, we cluster using the embeddings.
```
NUM_CLUSTERS = 2
# By default 10
kmeans = KMeans(n_clusters=NUM_CLUSTERS)
kmeans.fit(emb_df)
labels = kmeans.predict(emb_df)
emb_df['cluster'] = labels
clusters_df = emb_df.reset_index()[['named_entity','cluster']]
clusters_df.head()
```
## 3. Prepare input for Gephi:
[Gephi](https://gephi.org) (Java 1.8 or higher) is a nice visualization tool for graphical data.
We'll output our data into a format recognizable by Gephi.
```
id_to_named_entity = {named_entity_id: named_entity
for named_entity, named_entity_id in tokenizer.items()}
with open('clusters.gdf', 'w') as f:
f.write('nodedef>name VARCHAR,cluster_id VARCHAR,label VARCHAR\n')
for index, row in clusters_df.iterrows():
f.write('{},{},{}\n'.format(row['named_entity'], row['cluster'], id_to_named_entity[row['named_entity']]))
f.write('edgedef>node1 VARCHAR,node2 VARCHAR, weight DOUBLE\n')
for index, row in edges_df.iterrows():
f.write('{},{},{}\n'.format(row['named_entity_1'], row['named_entity_2'], row['weight']))
```
Finally, we can open `clusters.gdf` using Gephi in order to inspect the clusters.
# voila-interactive-football-pitch
This is an example widget served by `jupyter voila`. It combines `qgrid` and `bqplot` to create an interactive football pitch widget.
## Features
- Selected players on the pitch are highlighted in qgrid.
- Players selected in qgrid are marked on the pitch.
- Players are moveable on the pitch and their position is updated in qgrid.
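The two-way sync in the feature list is just observer callbacks wired in both directions. A pure-Python sketch of the pattern (class and attribute names are illustrative, not the ipywidgets/traitlets API; writing to the private `_selected` on the other side avoids an infinite notification loop):

```python
class Observable:
    """Toy stand-in for a widget with an observable 'selected' trait."""
    def __init__(self):
        self._callbacks = []
        self._selected = []
    def observe(self, cb):
        self._callbacks.append(cb)
    @property
    def selected(self):
        return self._selected
    @selected.setter
    def selected(self, value):
        self._selected = value
        for cb in self._callbacks:     # notify observers on assignment
            cb(value)

grid, scatter = Observable(), Observable()
grid.observe(lambda rows: setattr(scatter, "_selected", rows))   # grid -> pitch
scatter.observe(lambda pts: setattr(grid, "_selected", pts))     # pitch -> grid
grid.selected = [2, 5]
print(scatter.selected)  # [2, 5]
```

In the real widget below, `qgrid_widget.observe` and `team_scatter.observe` play these two roles.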
```
import os
import ipywidgets as widgets
from bqplot import *
import numpy as np
import pandas as pd
import qgrid
# create DataFrame for team data
columns = ['name', 'id', 'x', 'y']
tottenham_players = [
['Lloris', 1, 0.1, 0.5],
['Trippier', 2, 0.2, 0.25],
['Alderweireld', 4, 0.2, 0.4],
['Vertonghen', 5, 0.2, 0.6],
['D. Rose', 3, 0.2, 0.75],
['Sissoko', 17, 0.3, 0.4],
['Winks', 8, 0.3, 0.6],
['Eriksen', 23, 0.4, 0.25],
['Alli', 20, 0.4, 0.5],
['Son', 7, 0.4, 0.75],
['H. Kane', 10, 0.45, 0.5]
]
temp_tottenham = pd.DataFrame.from_records(tottenham_players, columns=columns)
temp_tottenham['team'] = 'Tottenham Hotspur'
temp_tottenham['jersey'] = 'Blue'
liverpool_players = [
['Alisson', 13, 0.9, 0.5],
['Alexander-Arnold', 66, 0.8, 0.75],
['Matip', 32, 0.8, 0.6],
['van Dijk', 4, 0.8, 0.4],
['Robertson', 26, 0.8, 0.25],
['J. Henderson', 14, 0.7, 0.7],
['Fabinho', 3, 0.7, 0.5],
['Wijnaldum', 5, 0.7, 0.3],
['Salah', 11, 0.6, 0.75],
['Roberto Firmino', 9, 0.6, 0.5],
['Mané', 10, 0.6, 0.25]
]
temp_liverpool = pd.DataFrame.from_records(liverpool_players, columns=columns)
temp_liverpool['team'] = 'FC Liverpool'
temp_liverpool['jersey'] = 'Red'
teams = pd.concat([temp_tottenham, temp_liverpool], axis=0, ignore_index=True)
# Define bqplot Image mark
# read pitch image
image_path = os.path.abspath('pitch.png')
with open(image_path, 'rb') as f:
raw_image = f.read()
ipyimage = widgets.Image(value=raw_image, format='png')
scales_image = {'x': LinearScale(), 'y': LinearScale()}
axes_options = {'x': {'visible': False}, 'y': {'visible': False}}
image = Image(image=ipyimage, scales=scales_image, axes_options=axes_options)
# Full screen
image.x = [0, 1]
image.y = [0, 1]
# Define qgrid widget
qgrid.set_grid_option('maxVisibleRows', 10)
col_opts = {
'editable': False,
}
def on_row_selected(change):
"""callback for row selection: update selected points in scatter plot"""
filtered_df = qgrid_widget.get_changed_df()
team_scatter.selected = filtered_df.iloc[change.new].index.tolist()
qgrid_widget = qgrid.show_grid(teams, show_toolbar=False, column_options=col_opts)
qgrid_widget.observe(on_row_selected, names=['_selected_rows'])
qgrid_widget.layout = widgets.Layout(width='920px')
# Define scatter plot for team data
scales={'x': LinearScale(min=0, max=1), 'y': LinearScale(min=0, max=1)}
axes_options = {'x': {'visible': False}, 'y': {'visible': False}}
team_scatter = Scatter(x=teams['x'], y=teams['y'],
names=teams['name'],
scales= scales,
default_size=128,
interactions={'click': 'select'},
selected_style={'opacity': 1.0, 'stroke': 'Black'},
unselected_style={'opacity': 0.6},
axes_options=axes_options)
team_scatter.colors = teams['jersey'].values.tolist()
team_scatter.enable_move = True
# Callbacks
def change_callback(change):
qgrid_widget.change_selection(change.new)
def callback_update_qgrid(name, cell):
new_x = round(cell['point']['x'], 2)
new_y = round(cell['point']['y'], 2)
qgrid_widget.edit_cell(cell['index'], 'x', new_x)
qgrid_widget.edit_cell(cell['index'], 'y', new_y)
team_scatter.observe(change_callback, names=['selected'])
team_scatter.on_drag_end(callback_update_qgrid)
# Define football pitch widget
pitch_widget = Figure(marks=[image, team_scatter], padding_x=0, padding_y=0)
pitch_app = widgets.VBox([pitch_widget, qgrid_widget])
# Hack for increasing image size and keeping aspect ratio
width = 506.7
height = 346.7
factor = 1.8
pitch_widget.layout = widgets.Layout(width=f'{width*factor}px', height=f'{height*factor}px')
pitch_app
```
```
#%matplotlib inline
import datetime, math
import os
import numpy as np
#from Scientific.IO import NetCDF
import netCDF4
import matplotlib
import matplotlib.pyplot as plt
import spectra_mole.VIS_Colormaps as VIS_Colormaps
import spectra_mole.viridis as viridis
class pltRange():
def __init__(self, time=[0, -1], height=[0, -1]):
self.t_bg = time[0]
self.t_ed = time[-1]
self.h_bg = height[0]
self.h_ed = height[-1]
filelist = os.listdir("../output")
#print(filelist)
filelist = [f for f in filelist if "mole_output" in f]
filelist = sorted(filelist)
filename = '../output/20150617_1459_mole_output.nc'
#filename = '../output/20150617_1700_mole_output.nc'
#filename = '../output/20150611_1830_mole_output.nc'
#filename = '../output/20150617_2014_mole_output.nc'
filename = "../output/" + filelist[2]
print(len(filelist))
print("filename ", filename)
savepath = '../plots/region'
if not os.path.isdir(savepath):
os.makedirs(savepath)
f = netCDF4.Dataset(filename, 'r')
time_list = f.variables["timestamp"][:]
range_list = f.variables["range"][:]
dt_list = [datetime.datetime.utcfromtimestamp(time) for time in time_list]
# indices of the last sample before each gap in the time series (step > 15 s)
jumps = np.where(np.diff(time_list)>15)[0]
for ind in jumps[::-1].tolist():
print(ind)
# and modify the dt_list
dt_list.insert(ind+1, dt_list[ind]+datetime.timedelta(seconds=10))
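# --- tiny illustration of the gap handling above (throwaway names only) ---
# np.insert places a fill row at each jump index; masking the fill value then
# makes pcolormesh draw a blank gap instead of smearing across missing times.
_gap_demo = np.ma.masked_less(np.insert(np.array([1.0, 2.0, 3.0]), 2, -99.0), -90.0)
# _gap_demo: [1.0 2.0 -- 3.0]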
rect = pltRange(time=[100, 500], height=[10, 40])
rect = pltRange(time=[0, -1], height=[0, -1])
#rect = pltRange(time=[0, 676], height=[0, -1])
#rect = pltRange(time=[170, -169], height=[0, -1])
#rect = pltRange(time=[0, 1183], height=[0, -1])
# case 0611
# rect = pltRange(time=[0, 341], height=[0, -1])
# case 0625
#rect = pltRange(time=[300, 1190], height=[0, -1])
# second cloud 0130-0400
#rect = pltRange(time=[170, 680], height=[0, 65])
# second cloud 0530-0800
# rect = pltRange(time=[851, 1361], height=[0, 65])
# case 0801
#rect = pltRange(time=[2571, 3086], height=[0, -1])
# case 0612
#rect = pltRange(time=[0, 170], height=[0, 60])
#print(time_list[:-1] - time_list[1:])
quality_flag = f.variables["quality_flag"][:]
wipro_vel = f.variables["v"][:].copy()
wipro_vel_fit = f.variables['v_fit'][:].copy()
print(f.variables.keys())
wipro_ucorr_vel = f.variables["v_raw"][:]
tg_v_term = False
if 'mira_v_term' in f.variables.keys():
tg_v_term = True
print('v_term', tg_v_term)
v_term = f.variables["mira_v_term"][:]
for ind in jumps[::-1].tolist():
v_term = np.insert(v_term, ind+1, np.full(range_list.shape, -99.), axis=0)
v_term = np.ma.masked_less(v_term, -90., copy=True)
wpZ_Bragg = f.variables["Z"][:]
wpZ_raw = f.variables["Z_raw"][:]
mira_Z = f.variables["Z_cr"][:]
mira_Z = np.ma.masked_invalid(mira_Z)
cal_const = f.variables["est_cal_const"][:]
cal_corr = f.variables["cal_corr"][:]
sigma_b = f.variables["sigma_broadening"][:]
wipro_width = f.variables["width"][:]
width_raw = f.variables["width_raw"][:]
width_cr = f.variables["width_cr"][:]
error_diff = f.variables["error_diff"][:]
error_fit = f.variables["error_fit"][:]
for ind in jumps[::-1].tolist():
print(ind)
# add the fill array
quality_flag = np.insert(quality_flag, ind+1, np.full(range_list.shape, -1), axis=0)
wipro_vel = np.insert(wipro_vel, ind+1, np.full(range_list.shape, -99.), axis=0)
wipro_vel_fit = np.insert(wipro_vel_fit, ind+1, np.full(range_list.shape, -99.), axis=0)
wipro_ucorr_vel = np.insert(wipro_ucorr_vel, ind+1, np.full(range_list.shape, -99.), axis=0)
wpZ_Bragg = np.insert(wpZ_Bragg, ind+1, np.full(range_list.shape, -200), axis=0)
wpZ_raw = np.insert(wpZ_raw, ind+1, np.full(range_list.shape, -200), axis=0)
mira_Z = np.insert(mira_Z, ind+1, np.full(range_list.shape, -200), axis=0)
cal_const = np.insert(cal_const, ind+1, np.full(range_list.shape, 1e-200), axis=0)
sigma_b = np.insert(sigma_b, ind+1, np.full(range_list.shape, -1), axis=0)
wipro_width = np.insert(wipro_width, ind+1, np.full(range_list.shape, -99.), axis=0)
width_raw = np.insert(width_raw, ind+1, np.full(range_list.shape, -99.), axis=0)
width_cr = np.insert(width_cr, ind+1, np.full(range_list.shape, -99.), axis=0)
error_diff = np.insert(error_diff, ind+1, np.full(range_list.shape, -99.), axis=0)
error_fit = np.insert(error_fit, ind+1, np.full(range_list.shape, -99.), axis=0)
cal_const = np.ma.masked_less_equal(cal_const, 1e-150, copy=True)
quality_flag = np.ma.masked_less(quality_flag, 0., copy=True)
wipro_vel = np.ma.masked_less(wipro_vel, -90., copy=True)
wipro_ucorr_vel = np.ma.masked_less(wipro_ucorr_vel, -90., copy=True)
wpZ_Bragg = np.ma.masked_less_equal(wpZ_Bragg, -200, copy=True)
wpZ_raw = np.ma.masked_less_equal(wpZ_raw, -200, copy=True)
mira_Z = np.ma.masked_less_equal(mira_Z, -200, copy=True)
cal_const = np.ma.masked_less_equal(cal_const, 1e-200, copy=True)
sigma_b = np.ma.masked_less_equal(sigma_b, -1, copy=True)
wipro_width = np.ma.masked_less(wipro_width, -90., copy=True)
width_raw = np.ma.masked_less(width_raw, -90., copy=True)
width_cr = np.ma.masked_less(width_cr, -90., copy=True)
error_diff = np.ma.masked_less(error_diff, -90., copy=True)
error_fit = np.ma.masked_less(error_fit, -90., copy=True)
wipro_vel = np.ma.masked_where(quality_flag > 3.0, wipro_vel)
wipro_vel_fit = np.ma.masked_where(quality_flag > 3.0, wipro_vel_fit)
#quality_flag = np.ma.masked_where(quality_flag >= 2.0, quality_flag)
np.set_printoptions(threshold=np.inf)
wipro_ucorr_vel = np.ma.masked_invalid(wipro_ucorr_vel)
#print(f.variables)
print(f.variables.keys())
#print(f.variables['v'][:])
print(f.variables['v'].units)
print(f.variables['quality_flag'].comment)
print('creation time', f.creation_time)
#print('settings ', f.settings)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(wipro_vel[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=VIS_Colormaps.carbonne_map, vmin=-1.5, vmax=1.5)
cbar = fig.colorbar(pcmesh)
#ax.set_xlim([dt_list[0], dt_list[-1]])
#ax.set_ylim([height_list[0], height_list[-1]])
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Velocity [m s$\mathregular{^{-1}}$]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
# ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
# ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(interval=5))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_vel_corr.png"
fig.savefig(savename, dpi=250)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(wipro_ucorr_vel[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=VIS_Colormaps.carbonne_map, vmin=-1.5, vmax=1.5)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Velocity [m s$\mathregular{^{-1}}$]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
# ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
# ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(interval=5))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_vel_wp.png"
fig.savefig(savename, dpi=250)
quality_flag[quality_flag == 5] = 4
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(quality_flag[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=VIS_Colormaps.cloudnet_map,
vmin=-0.5, vmax=10.5)
cbar = fig.colorbar(pcmesh, ticks=[0, 1, 2, 3, 4, 5, 6])
cbar.ax.set_yticklabels(["not influenced", "correction reliable",
"plankton", "low SNR",
"noisy spectrum\nmelting layer",
"",
""])
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
#cbar.ax.set_ylabel("Flag", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=13,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_quality_flag.png"
plt.subplots_adjust(right=0.9)
#plt.tight_layout()
fig.savefig(savename, dpi=250)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(np.log10(cal_const[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed])),
cmap='gist_rainbow', vmin=-16.5, vmax=-13.5)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("RWP Calibration Constant [log10]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_system_parameter.png"
fig.savefig(savename, dpi=250)
zmax = 10
#zmax = 40
cmap = viridis.viridis
cmap = 'jet'
print('maximum wind profiler ', np.max(wpZ_raw[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]))
am = np.argmax(wpZ_raw[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed])
am = np.unravel_index(am, wpZ_raw[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed].shape)
print(dt_list[rect.t_bg:rect.t_ed][am[0]], range_list[rect.h_bg:rect.h_ed][am[1]])
print('cloud radar ', np.nanmax(10 * np.log10(mira_Z[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed])))
am = np.nanargmax(mira_Z[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed])
am = np.unravel_index(am, mira_Z[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed].shape)
print(dt_list[rect.t_bg:rect.t_ed][am[0]], range_list[rect.h_bg:rect.h_ed][am[1]])
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(wpZ_raw[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=cmap, vmin=-35, vmax=zmax)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Reflectivity [dBZ]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_wp_total_reflectivity_jet.png"
fig.savefig(savename, dpi=250)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(wpZ_Bragg[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=cmap, vmin=-35, vmax=zmax)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Reflectivity [dBZ]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_wp_corr_reflectivity_jet.png"
fig.savefig(savename, dpi=250)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(mira_Z[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=cmap, vmin=-35, vmax=zmax)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Reflectivity [dBZ]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_mira_reflectivity.png"
fig.savefig(savename, dpi=250)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(sigma_b[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
# normally the range is 1.5 to 4
cmap='gist_rainbow', vmin=1.5, vmax=7)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("sigma_blur [px]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_sigma_blure.png"
fig.savefig(savename, dpi=250)
if tg_v_term:
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
height_list[rect.h_bg:rect.h_ed],
np.transpose(v_term[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=VIS_Colormaps.carbonne_map, vmin=-2, vmax=2)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([height_list[rect.h_bg], height_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Terminal velocity [m s$\mathregular{^{-1}}$]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(interval=2))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_terminal_vel.png"
fig.savefig(savename, dpi=250)
np.max(v_term)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(wipro_vel_fit[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=VIS_Colormaps.carbonne_map, vmin=-1.5, vmax=1.5)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Velocity [m s$\mathregular{^{-1}}$]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
# ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
# ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(interval=5))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_vel_wp_fit.png"
fig.savefig(savename, dpi=250)
diff_estimates = wipro_vel - wipro_vel_fit
diff_estimates = np.ma.masked_where(quality_flag == 0, diff_estimates)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(diff_estimates[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=VIS_Colormaps.carbonne_map, vmin=-1., vmax=1.0)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Difference between the estimates [m s$\mathregular{^{-1}}$]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(interval=2))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_vel_fit1.png"
#fig.savefig(savename, dpi=250)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(wipro_width[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=cmap, vmin=0.01, vmax=1)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Spectral width [m s$\mathregular{^{-1}}$]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_wp_corr_width.png"
fig.savefig(savename, dpi=250)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(width_raw[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=cmap, vmin=0.01, vmax=1)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Spectral width [m s$\mathregular{^{-1}}$]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_wp_width.png"
fig.savefig(savename, dpi=250)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(width_cr[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=cmap, vmin=0.01, vmax=1)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Spectral width [m s$\mathregular{^{-1}}$]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_mira_width.png"
fig.savefig(savename, dpi=250)
print(error_diff.max())
cmap=VIS_Colormaps.carbonne_map
cmap='RdBu'
error_diff = np.ma.masked_greater(error_diff, 2)
error_diff = np.ma.masked_where(np.logical_or(np.logical_or(quality_flag == 0, wipro_vel.mask), error_diff > 1), error_diff)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
ax.patch.set_facecolor('darkgrey')
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(error_diff[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=cmap, vmin=-0.2, vmax=0.2)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Vertical air velocity bias [m s$\mathregular{^{-1}}$]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(interval=2))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_error_diff.png"
fig.savefig(savename, dpi=250)
error_fit = np.ma.masked_greater(error_fit, 2)
error_fit = np.ma.masked_where(np.logical_or(quality_flag == 0, wipro_vel_fit.mask), error_fit)
fig, ax = plt.subplots(1, figsize=(10, 5.7))
ax.patch.set_facecolor('darkgrey')
pcmesh = ax.pcolormesh(matplotlib.dates.date2num(dt_list[rect.t_bg:rect.t_ed]),
range_list[rect.h_bg:rect.h_ed],
np.transpose(error_fit[rect.t_bg:rect.t_ed, rect.h_bg:rect.h_ed]),
cmap=cmap, vmin=-0.2, vmax=0.2)
cbar = fig.colorbar(pcmesh)
ax.set_xlim([dt_list[rect.t_bg], dt_list[rect.t_ed-1]])
ax.set_ylim([range_list[rect.h_bg], range_list[rect.h_ed-1]])
ax.set_xlabel("Time UTC", fontweight='semibold', fontsize=15)
ax.set_ylabel("Height", fontweight='semibold', fontsize=15)
cbar.ax.set_ylabel("Vertical air velocity bias [m s$\mathregular{^{-1}}$]", fontweight='semibold', fontsize=15)
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%H:%M'))
#ax.xaxis.set_major_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=range(0,61,10)))
ax.xaxis.set_major_locator(matplotlib.dates.HourLocator(byhour=[0,3,6,9,12,15,18,21]))
ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(byminute=[0,30]))
#ax.xaxis.set_minor_locator(matplotlib.dates.MinuteLocator(interval=2))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(500))
ax.tick_params(axis='both', which='major', labelsize=14,
right=True, top=True, width=2, length=5)
ax.tick_params(axis='both', which='minor', width=1.5,
length=3.5, right=True, top=True)
cbar.ax.tick_params(axis='both', which='major', labelsize=14,
width=2, length=4)
savename = savepath + "/" + dt_list[0].strftime("%Y%m%d_%H%M") \
+ "_error_fit.png"
fig.savefig(savename, dpi=250)
```
### Structured Streaming with Kafka
In this notebook we'll examine how to connect Structured Streaming with Apache Kafka, a popular publish-subscribe system, to stream Wikipedia edits in real time across a multitude of different languages.
#### Objectives:
* Learn About Kafka
* Learn how to establish a connection with Kafka
* Learn more about creating visualizations
First, run the following cell to import the data and make various utilities available for our experimentation.
```
%run "./Includes/Classroom-Setup"
```
### 1.0. The Kafka Ecosystem
Kafka is software built on the **publish/subscribe** messaging pattern. In publish/subscribe messaging, a sender (publisher) sends a message that is not specifically directed to any particular receiver (subscriber). The publisher classifies the message somehow, and the receiver subscribes to receive certain categories of messages. There are other usage patterns for Kafka, but this is the pattern we focus on in this course.
Publisher/subscriber systems typically have a central point where messages are published, called a **broker**. The broker receives messages from publishers, assigns offsets to them and commits messages to storage.
The Kafka version of a unit of data is an array of bytes called a **message**. A message can also contain a bit of information related to partitioning called a **key**. In Kafka, messages are categorized into **topics**.
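The broker/topic/offset vocabulary above can be sketched as a toy in-memory publish/subscribe system — purely illustrative, with none of Kafka's persistence, partitioning, or networking:

```python
from collections import defaultdict

class ToyBroker:
    """Toy broker: receives messages, assigns per-topic offsets, notifies subscribers."""
    def __init__(self):
        self.topics = defaultdict(list)        # topic name -> committed messages
        self.subscribers = defaultdict(list)   # topic name -> subscriber callbacks

    def publish(self, topic, message):
        offset = len(self.topics[topic])       # next offset in this topic
        self.topics[topic].append(message)     # "commit to storage"
        for callback in self.subscribers[topic]:
            callback(offset, message)
        return offset

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

broker = ToyBroker()
received = []
broker.subscribe("en", lambda offset, msg: received.append((offset, msg)))
broker.publish("en", {"page": "Apache_Kafka"})
broker.publish("es", {"page": "Apache_Kafka"})  # no "es" subscriber: nobody is notified
print(received)  # -> [(0, {'page': 'Apache_Kafka'})]
```

Note that the publisher never addresses a specific receiver; routing happens entirely through the topic name.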
#### 1.1. The Kafka Server
The Kafka server is fed by a separate TCP server that reads the Wikipedia edits, in real time, from the various language-specific IRC channels to which Wikimedia posts them. That server parses the IRC data, converts the results to JSON, and sends the JSON to a Kafka server, with the edits segregated by language. The various languages are **topics**. For example, the Kafka topic "en" corresponds to edits for en.wikipedia.org.
##### Required Options
When consuming from a Kafka source, you **must** specify at least two options:
1. The Kafka bootstrap servers, for example: `dsr.option("kafka.bootstrap.servers", "server1.databricks.training:9092")`
2. Some indication of the topics you want to consume.
#### 1.2. Specifying a Topic
There are three, mutually-exclusive, ways to specify the topics for consumption:
| Option | Value | Description | Example |
| ------------- | ---------------------------------------------- | -------------------------------------- | ------- |
| **subscribe** | A comma-separated list of topics | A list of topics to which to subscribe | `dsr.option("subscribe", "topic1")` <br/> `dsr.option("subscribe", "topic1,topic2,topic3")` |
| **assign** | A JSON string indicating topics and partitions | Specific topic-partitions to consume | `dsr.option("assign", "{'topic1': [1,3], 'topic2': [2,5]}")` |
| **subscribePattern** | A (Java) regular expression | A pattern to match desired topics | `dsr.option("subscribePattern", "e[ns]")` <br/> `dsr.option("subscribePattern", "topic[123]")`|
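For the **subscribePattern** row, it can help to see the pattern in action. Spark treats it as a Java regular expression matched against full topic names; for a simple character class like `e[ns]`, Python's `re` module behaves identically, so this sketch shows which of our language topics would be consumed:

```python
import re

# Which of these topics would subscribePattern "e[ns]" pick up?
# (Spark uses Java regexes; for a simple class like this, Python's re agrees.)
topics = ["en", "es", "it", "fr", "de", "eo"]
pattern = re.compile(r"e[ns]")
matched = [t for t in topics if pattern.fullmatch(t)]
print(matched)  # -> ['en', 'es']
```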
**Note:** In the example to follow, we're using the "subscribe" option to select the topics we're interested in consuming. We've selected only the "en" topic, corresponding to edits for the English Wikipedia. If we wanted to consume multiple topics (multiple Wikipedia languages, in our case), we could just specify them as a comma-separated list:
```dsr.option("subscribe", "en,es,it,fr,de,eo")```
There are other, optional, arguments you can give the Kafka source. For more information, see the <a href="https://people.apache.org//~pwendell/spark-nightly/spark-branch-2.1-docs/latest/structured-streaming-kafka-integration.html#" target="_blank">Structured Streaming and Kafka Integration Guide</a>
#### 1.3. The Kafka Schema
Reading from Kafka returns a `DataFrame` with the following fields:
| Field | Type | Description |
|------------------ | ------ |------------ |
| **key** | binary | The key of the record (not needed) |
| **value** | binary | Our JSON payload. We'll need to cast it to STRING |
| **topic** | string | The topic this record is received from (not needed) |
| **partition** | int | The Kafka topic partition from which this record is received (not needed). This server only has one partition. |
| **offset** | long | The position of this record in the corresponding Kafka topic partition (not needed) |
| **timestamp** | long | The timestamp of this record |
| **timestampType** | int | The timestamp type of a record (not needed) |
In the example below, the only column we want to keep is `value`.
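Kafka hands us `value` as raw bytes, so the cast to STRING is essentially a UTF-8 decode before JSON parsing — in plain Python terms (with a hypothetical payload):

```python
import json

# A Kafka record's "value" arrives as bytes; casting it to STRING corresponds
# to decoding, after which the JSON payload can be parsed.
raw_value = b'{"wikipedia": "en", "isAnonymous": true}'  # hypothetical payload
as_string = raw_value.decode("utf-8")
payload = json.loads(as_string)
print(payload["wikipedia"])  # -> en
```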
**Note:** The default of `spark.sql.shuffle.partitions` is 200. This setting is used in operations like `groupBy`. In this case, we should be setting this value to match the current number of cores.
```
from pyspark.sql.functions import col
spark.conf.set("spark.sql.shuffle.partitions", sc.defaultParallelism)
kafkaServer = "server1.databricks.training:9092" # US (Oregon)
# kafkaServer = "server2.databricks.training:9092" # Singapore
editsDF = (spark.readStream # Get the DataStreamReader
.format("kafka") # Specify the source format as "kafka"
.option("kafka.bootstrap.servers", kafkaServer) # Configure the Kafka server name and port
.option("subscribe", "en") # Subscribe to the "en" Kafka topic
.option("startingOffsets", "earliest") # Rewind stream to beginning when we restart notebook
.option("maxOffsetsPerTrigger", 1000) # Throttle Kafka's processing of the streams
.load() # Load the DataFrame
.select(col("value").cast("STRING")) # Cast the "value" column to STRING
)
```
Let's display some data.
```
myStreamName = "lesson04a_ps"
display(editsDF, streamName = myStreamName)
```
Wait until stream is done initializing...
```
untilStreamIsReady(myStreamName)
```
Make sure to stop the stream before continuing.
```
stopAllStreams()
```
### 2.0. Use Kafka to Display the Raw Data
The Kafka server acts as a sort of "firehose" (or asynchronous buffer) of raw data. Since raw data coming in from a stream is transient, we'd like to save it to a more permanent data structure. The first step is to define the schema for the JSON payload.
**Note:** Only those fields of future interest are commented below.
```
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DoubleType, BooleanType
from pyspark.sql.functions import from_json, unix_timestamp
schema = StructType([
StructField("channel", StringType(), True),
StructField("comment", StringType(), True),
StructField("delta", IntegerType(), True),
StructField("flag", StringType(), True),
StructField("geocoding", StructType([ # (OBJECT): Added by the server, field contains IP address geocoding information for anonymous edit.
StructField("city", StringType(), True),
StructField("country", StringType(), True),
StructField("countryCode2", StringType(), True),
StructField("countryCode3", StringType(), True),
StructField("stateProvince", StringType(), True),
StructField("latitude", DoubleType(), True),
StructField("longitude", DoubleType(), True),
]), True),
StructField("isAnonymous", BooleanType(), True), # (BOOLEAN): Whether or not the change was made by an anonymous user
StructField("isNewPage", BooleanType(), True),
StructField("isRobot", BooleanType(), True),
StructField("isUnpatrolled", BooleanType(), True),
StructField("namespace", StringType(), True), # (STRING): Page's namespace. See https://en.wikipedia.org/wiki/Wikipedia:Namespace
StructField("page", StringType(), True), # (STRING): Printable name of the page that was edited
StructField("pageURL", StringType(), True), # (STRING): URL of the page that was edited
StructField("timestamp", StringType(), True), # (STRING): Time the edit occurred, in ISO-8601 format
StructField("url", StringType(), True),
StructField("user", StringType(), True), # (STRING): User who made the edit or the IP address associated with the anonymous editor
StructField("userURL", StringType(), True),
StructField("wikipediaURL", StringType(), True),
StructField("wikipedia", StringType(), True), # (STRING): Short name of the Wikipedia that was edited (e.g., "en" for the English)
])
```
Next we can use the function `from_json` to parse out the full message with the schema specified above.
```
from pyspark.sql.functions import col, from_json
jsonEdits = editsDF.select(
from_json("value", schema).alias("json")) # Parse the column "value" and name it "json"
```
When parsing a value from JSON, we end up with a single column containing a complex object. We can clearly see this by simply printing the schema.
```
jsonEdits.printSchema()
```
The fields of a complex object can be referenced with a "dot" notation as in: `col("json.geocoding.countryCode3")`
Since a large number of these fields/columns can become unwieldy, it's common to extract the sub-fields and represent them as first-level columns as seen below:
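Outside of Spark, pandas' `json_normalize` performs the analogous promotion, producing first-level columns with the same dotted names as the Spark expression above (the records here are hypothetical):

```python
import pandas as pd

# Hypothetical records shaped like the parsed "json" column.
records = [
    {"wikipedia": "en", "isAnonymous": True,
     "geocoding": {"countryCode3": "USA", "city": "Seattle"}},
    {"wikipedia": "en", "isAnonymous": False,
     "geocoding": {"countryCode3": "GBR", "city": "London"}},
]
# json_normalize promotes sub-fields to first-level columns with dotted names.
flat = pd.json_normalize(records)
print(sorted(flat.columns))
# -> ['geocoding.city', 'geocoding.countryCode3', 'isAnonymous', 'wikipedia']
```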
```
from pyspark.sql.functions import isnull, unix_timestamp
anonDF = (jsonEdits
.select(col("json.wikipedia").alias("wikipedia"), # Promoting from sub-field to column
col("json.isAnonymous").alias("isAnonymous"), # " " " " "
col("json.namespace").alias("namespace"), # " " " " "
col("json.page").alias("page"), # " " " " "
col("json.pageURL").alias("pageURL"), # " " " " "
col("json.geocoding").alias("geocoding"), # " " " " "
col("json.user").alias("user"), # " " " " "
col("json.timestamp").cast("timestamp")) # Promoting and converting to a timestamp
.filter(col("namespace") == "article") # Limit result to just articles
.filter(~isnull(col("geocoding.countryCode3"))) # We only want results that are geocoded
)
```
#### 2.1. Mapping Anonymous Editors' Locations
When you run the query, the default is a [live] html table. The geocoded information allows us to associate an anonymous edit with a country. We can then use that geocoded information to plot edits on a [live] world map. In order to create a slick world map visualization of the data, you'll need to click on the item below.
Under **Plot Options**, use the following:
* **Keys:** `countryCode3`
* **Values:** `count`
In **Display type**, use **World map** and click **Apply**.
<img src="https://files.training.databricks.com/images/eLearning/Structured-Streaming/plot-options-map-04.png"/>
By invoking a `display` action on a DataFrame created from a `readStream` transformation, we can generate a LIVE visualization!
**Note:** Keep an eye on the plot for a minute or two and watch the colors change.
```
mappedDF = (anonDF
.groupBy("geocoding.countryCode3") # Aggregate by country (code)
.count() # Produce a count of each aggregate
)
display(mappedDF, streamName = myStreamName)
```
Wait until stream is done initializing...
```
untilStreamIsReady(myStreamName)
```
Stop the streams.
```
stopAllStreams()
```
#### Review Questions
**Q:** What `format` should you use with Kafka?<br>
**A:** `format("kafka")`
**Q:** How do you specify a Kafka server?<br>
**A:** `.option("kafka.bootstrap.servers", "server1.databricks.training:9092")`
**Q:** What verb should you use in conjunction with `readStream` and Kafka to start the streaming job?<br>
**A:** `load()`, but with no parameters since we are pulling from a Kafka server.
**Q:** What fields are returned in a Kafka DataFrame?<br>
**A:** Reading from Kafka returns a DataFrame with the following fields:
key, value, topic, partition, offset, timestamp, timestampType
Run the **`Classroom-Cleanup`** cell below to remove any artifacts created by this lesson.
```
%run "./Includes/Classroom-Cleanup"
```
##### Additional Topics & Resources
* <a href="http://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html#creating-a-kafka-source-stream#" target="_blank">Create a Kafka Source Stream</a>
* <a href="https://kafka.apache.org/documentation/" target="_blank">Official Kafka Documentation</a>
* <a href="https://www.confluent.io/blog/okay-store-data-apache-kafka/" target="_blank">Use Kafka to store data</a>
```
# default_exp exp.csnc.python
```
# Data exploration (taken from CodeSearchNet challenge)
```
import json
import pandas as pd
from pathlib import Path
pd.set_option('display.max_colwidth', 300)
from pprint import pprint
import re
```
## Preview dataset
```
!ls test_data/python
```
Download the Python dataset
```
!wget https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip -P /tf/main/nbs/test_data/python
!unzip test_data/python/python.zip -d test_data/python
!ls test_data/python/
!gzip -d test_data/python/final/jsonl/test/python_test_0.jsonl.gz
with open('test_data/python/final/jsonl/test/python_test_0.jsonl', 'r') as f:
sample_file = f.readlines()
sample_file[0]
print(type(sample_file))
print(len(sample_file))
pprint(json.loads(sample_file[0]))
```
## Exploring the full DataSet
```
!ls test_data/python/final/jsonl
python_files = sorted(Path('test_data/python/final/jsonl/').glob('**/*.gz'))
python_files
print('Total of related python files: {}'.format(len(python_files)))
columns_long_list = ['repo', 'path', 'url', 'code',
'code_tokens', 'docstring', 'docstring_tokens',
'language', 'partition']
columns_short_list = ['code_tokens', 'docstring_tokens',
'language', 'partition']
# export
def jsonl_list_to_dataframe(file_list, columns=columns_long_list):
"""Load a list of jsonl.gz files into a pandas DataFrame."""
return pd.concat([pd.read_json(f,
orient='records',
compression='gzip',
lines=True)[columns]
for f in file_list], sort=False)
# export
def plain_json_list_to_dataframe(file_list, columns):
'''Load a list of jsonl files into a pandas DataFrame.'''
return pd.concat([pd.read_json(f,
orient='records',
compression=None,
lines=True)[columns]
for f in file_list], sort=False)
python_df = jsonl_list_to_dataframe(python_files)
python_df.head()
python_df.columns
python_df['partition'].unique()
```
## Summary stats.
```
python_df.partition.value_counts()
python_df.groupby(['partition', 'language'])['code_tokens'].count()
python_df['code_len'] = python_df.code_tokens.apply(lambda x: len(x))
python_df['query_len'] = python_df.docstring_tokens.apply(lambda x: len(x))
```
Tokens Length Percentile
```
code_len_summary = python_df.groupby('language')['code_len'].quantile([.5, .7, .8, .9, .95])
display(pd.DataFrame(code_len_summary))
```
Query length percentile by language
```
query_len_summary = python_df.groupby('language')['query_len'].quantile([.5, .7, .8, .9, .95])
display(pd.DataFrame(query_len_summary))
python_df.shape
```
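The same length-and-quantile recipe can be checked on a tiny in-memory stand-in for `python_df`:

```python
import pandas as pd

# Toy stand-in for python_df: tokenized snippets with lengths 5, 2, 1, 3.
toy = pd.DataFrame({
    "language": ["python"] * 4,
    "code_tokens": [["def", "f", "(", ")", ":"], ["return", "1"],
                    ["pass"], ["x", "=", "1"]],
})
toy["code_len"] = toy.code_tokens.apply(len)
# Median and 90th-percentile token counts per language.
summary = toy.groupby("language")["code_len"].quantile([.5, .9])
print(summary.loc[("python", 0.5)])  # -> 2.5
```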
## Data transformation
```
pprint(python_df.columns)
src_code_columns = ['code', 'code_tokens', 'code_len','partition']
python_src_code_df = python_df[src_code_columns]
python_src_code_df.columns
python_src_code_df.shape
```
Visualizing examples
```
python_src_code_df[:10]['code']
data_type_new_column = ['src' for x in range(python_src_code_df.shape[0])]
len(data_type_new_column)
python_src_code_df.loc[:,'data_type'] = data_type_new_column
python_src_code_df.head()
```
## Data cleaning
Remove functions with syntax errors
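The notebook installs `radon` below but never shows the filtering step itself. A minimal sketch using only the standard-library `ast` module (an assumption — the authors may have intended a radon-based check) would be:

```python
import ast
import pandas as pd

def drop_syntax_errors(df, code_column="code"):
    """Keep only rows whose source parses as valid Python (assumed helper)."""
    def parses(src):
        try:
            ast.parse(src)
            return True
        except SyntaxError:
            return False
    return df[df[code_column].apply(parses)]

demo = pd.DataFrame({"code": ["def ok():\n    return 1", "def broken(:"]})
clean = drop_syntax_errors(demo)
print(len(clean))  # -> 1
```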
```
!pip install radon
python_src_code_df['code'][9071]
!pip install dit
```
## Exploratory analysis
```
# export
# Imports
import dit
import math
import os
import logging
import matplotlib.pyplot as plt
import pandas as pd
import sentencepiece as sp
from collections import Counter
from pathlib import Path
from scipy.stats import sem, t
from statistics import mean, median, stdev
from tqdm.notebook import tqdm
# ds4se
from ds4se.mgmnt.prep.bpe import *
from ds4se.exp.info import *
from ds4se.desc.stats import *
java_path = Path('test_data/java/')
n_sample = int(len(code_df)*0.01)
sample_code_df = code_df.sample(n=n_sample)
sample_code_df.shape
sp_model_from_df(sample_code_df, output=java_path, model_name='_sp_bpe_modal', cols=['code'])
sp_processor = sp.SentencePieceProcessor()
sp_processor.Load(f"{java_path/'_sp_bpe_modal'}.model")
java_src_code_df.shape
n_sample_4_sp = int(java_src_code_df.shape[0]*0.01)
print(n_sample_4_sp)
java_code_df = java_src_code_df.sample(n=n_sample_4_sp)
java_code_df.shape
code_df.shape
# Use the model to compute each file's entropy
java_doc_entropies = get_doc_entropies_from_df(code_df, 'code', java_path/'_sp_bpe_modal', ['src'])
len(java_doc_entropies)
# Use the model to compute each file's entropy
java_corpus_entropies = get_corpus_entropies_from_df(code_df, 'code', java_path/'_sp_bpe_modal', ['src'])
java_corpus_entropies
# Use the model to compute each file's entropy
java_system_entropy = get_system_entropy_from_df(code_df, 'code', java_path/'_sp_bpe_modal')
java_system_entropy
flatten = lambda l: [item for sublist in l for item in sublist]
report_stats(flatten(java_doc_entropies))
java_doc_entropies
# Create a histogram of the entropy distribution
plt.hist(java_doc_entropies,bins = 20, color="blue", alpha=0.5, edgecolor="black", linewidth=1.0)
plt.title('Entropy histogram')
plt.ylabel("Num records")
plt.xlabel("Entropy score")
plt.show()
fig1, ax1 = plt.subplots()
ax1.set_title('Entropy box plot')
ax1.boxplot(java_doc_entropies, vert=False)
```
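`get_doc_entropies_from_df` comes from the `ds4se` package; conceptually, each document's entropy is the Shannon entropy of its (sub)token distribution, which can be sketched with the standard library alone:

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (bits) of a token sequence's empirical distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    # Each term is p * log2(1/p); uniform tokens maximize the sum.
    return sum((c / total) * math.log2(total / c) for c in counts.values())

print(shannon_entropy(["a", "b", "c", "d"]))  # 4 equally likely tokens -> 2.0
print(shannon_entropy(["a", "a", "a"]))       # one repeated token -> 0.0
```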
## Descriptive metrics
```
#Libraries used in ds4se.desc.metrics.java nb
!pip install lizard
!pip install tree_sitter
!pip install bs4
'''from ds4se.desc.metrics import *
from ds4se.desc.metrics.java import *'''
import lizard
import chardet
python_src_code_df.head(1)
test_src_code = python_src_code_df['code'].values[5]
print(test_src_code)
```
Sample of available metrics (for method level)
```
metrics = lizard.analyze_file.analyze_source_code('test.py', test_src_code)
metrics.function_list
func = metrics.function_list[0]
print('cyclomatic_complexity: {}'.format(func.cyclomatic_complexity))
print('nloc (length): {}'.format(func.length))
print('nloc: {}'.format(func.nloc))
print('parameter_count: {}'.format(func.parameter_count))
print('name: {}'.format(func.name))
print('token_count {}'.format(func.token_count))
print('long_name: {}'.format(func.long_name))
def add_method_mccabe_metrics_to_code_df(src_code_df, code_column):
    """Computes method-level McCabe metrics and adds them as columns in the specified dataframe"""
    rows = []
    for index, row in src_code_df.iterrows():
        metrics = lizard.analyze_file.analyze_source_code('python_file.py', row[code_column])
        metrics_obj = metrics.function_list
        # Skip records where lizard found no functions to analyze
        if len(metrics_obj) == 0:
            continue
        row['cyclomatic_complexity'] = metrics_obj[0].cyclomatic_complexity
        row['nloc'] = metrics_obj[0].nloc
        row['parameter_count'] = metrics_obj[0].parameter_count
        row['method_name'] = metrics_obj[0].name
        row['token_count'] = metrics_obj[0].token_count
        rows.append(row)
    # DataFrame.append is deprecated; build the result in one step instead
    result_df = pd.DataFrame(rows)
    return result_df
code_df = add_method_mccabe_metrics_to_code_df(python_src_code_df, 'code')
python_src_code_df.shape
code_df.shape
code_df.to_csv('test_data/python/clean_python.csv')
code_df.head()
code_df.to_csv('test_data/clean_java.csv')
code_df.shape
java_code_df.shape
code_df.head()
code_df.describe()
display_numeric_col_hist(code_df['cyclomatic_complexity'], 'Cyclomatic complexity')
fig1, ax1 = plt.subplots()
ax1.set_title('Cyclomatic complexity box plot')
ax1.boxplot(code_df['cyclomatic_complexity'], vert=False)
display_numeric_col_hist(code_df['nloc'], 'Nloc')
fig1, ax1 = plt.subplots()
ax1.set_title('Nloc box plot')
ax1.boxplot(code_df['nloc'], vert=False)
display_numeric_col_hist(code_df['parameter_count'], 'Parameter count')
fig1, ax1 = plt.subplots()
ax1.set_title('Param. count box plot')
ax1.boxplot(code_df['parameter_count'], vert=False)
display_numeric_col_hist(code_df['token_count'], 'Token count')
fig1, ax1 = plt.subplots()
ax1.set_title('Token count box plot')
ax1.boxplot(code_df['token_count'], vert=False)
fig1, ax1 = plt.subplots()
ax1.set_title('Code len box plot')
ax1.boxplot(code_df['code_len'], vert=False)
code_df.shape
code_df[['cyclomatic_complexity', 'nloc', 'token_count', 'parameter_count']].corr()
import seaborn as sns
import numpy as np
def heatmap(x, y, **kwargs):
if 'color' in kwargs:
color = kwargs['color']
else:
color = [1]*len(x)
if 'palette' in kwargs:
palette = kwargs['palette']
n_colors = len(palette)
else:
n_colors = 256 # Use 256 colors for the diverging color palette
palette = sns.color_palette("Blues", n_colors)
if 'color_range' in kwargs:
color_min, color_max = kwargs['color_range']
else:
color_min, color_max = min(color), max(color) # Range of values that will be mapped to the palette, i.e. min and max possible correlation
def value_to_color(val):
if color_min == color_max:
return palette[-1]
else:
val_position = float((val - color_min)) / (color_max - color_min) # position of value in the input range, relative to the length of the input range
val_position = min(max(val_position, 0), 1) # bound the position between 0 and 1
ind = int(val_position * (n_colors - 1)) # target index in the color palette
return palette[ind]
if 'size' in kwargs:
size = kwargs['size']
else:
size = [1]*len(x)
if 'size_range' in kwargs:
size_min, size_max = kwargs['size_range'][0], kwargs['size_range'][1]
else:
size_min, size_max = min(size), max(size)
size_scale = kwargs.get('size_scale', 500)
def value_to_size(val):
if size_min == size_max:
return 1 * size_scale
else:
val_position = (val - size_min) * 0.99 / (size_max - size_min) + 0.01 # position of value in the input range, relative to the length of the input range
val_position = min(max(val_position, 0), 1) # bound the position between 0 and 1
return val_position * size_scale
if 'x_order' in kwargs:
x_names = [t for t in kwargs['x_order']]
else:
x_names = [t for t in sorted(set([v for v in x]))]
x_to_num = {p[1]:p[0] for p in enumerate(x_names)}
if 'y_order' in kwargs:
y_names = [t for t in kwargs['y_order']]
else:
y_names = [t for t in sorted(set([v for v in y]))]
y_to_num = {p[1]:p[0] for p in enumerate(y_names)}
plot_grid = plt.GridSpec(1, 15, hspace=0.2, wspace=0.1) # Setup a 1x15 grid
ax = plt.subplot(plot_grid[:,:-1]) # Use the left 14/15ths of the grid for the main plot
marker = kwargs.get('marker', 's')
kwargs_pass_on = {k:v for k,v in kwargs.items() if k not in [
'color', 'palette', 'color_range', 'size', 'size_range', 'size_scale', 'marker', 'x_order', 'y_order', 'xlabel', 'ylabel'
]}
ax.scatter(
x=[x_to_num[v] for v in x],
y=[y_to_num[v] for v in y],
marker=marker,
s=[value_to_size(v) for v in size],
c=[value_to_color(v) for v in color],
**kwargs_pass_on
)
ax.set_xticks([v for k,v in x_to_num.items()])
ax.set_xticklabels([k for k in x_to_num], rotation=45, horizontalalignment='right')
ax.set_yticks([v for k,v in y_to_num.items()])
ax.set_yticklabels([k for k in y_to_num])
ax.grid(False, 'major')
ax.grid(True, 'minor')
ax.set_xticks([t + 0.5 for t in ax.get_xticks()], minor=True)
ax.set_yticks([t + 0.5 for t in ax.get_yticks()], minor=True)
ax.set_xlim([-0.5, max([v for v in x_to_num.values()]) + 0.5])
ax.set_ylim([-0.5, max([v for v in y_to_num.values()]) + 0.5])
ax.set_facecolor('#F1F1F1')
ax.set_xlabel(kwargs.get('xlabel', ''))
ax.set_ylabel(kwargs.get('ylabel', ''))
# Add color legend on the right side of the plot
if color_min < color_max:
ax = plt.subplot(plot_grid[:,-1]) # Use the rightmost column of the plot
col_x = [0]*len(palette) # Fixed x coordinate for the bars
bar_y=np.linspace(color_min, color_max, n_colors) # y coordinates for each of the n_colors bars
bar_height = bar_y[1] - bar_y[0]
ax.barh(
y=bar_y,
width=[5]*len(palette), # Make bars 5 units wide
left=col_x, # Make bars start at 0
height=bar_height,
color=palette,
linewidth=0
)
ax.set_xlim(1, 2) # Bars go from 0 to 5, so let's crop the plot somewhere in the middle
ax.grid(False) # Hide grid
ax.set_facecolor('white') # Make background white
ax.set_xticks([]) # Remove horizontal ticks
ax.set_yticks(np.linspace(min(bar_y), max(bar_y), 3)) # Show vertical ticks for min, middle and max
ax.yaxis.tick_right() # Show vertical ticks on the right
columns = ['cyclomatic_complexity', 'nloc', 'token_count', 'parameter_count']
corr = code_df[columns].corr()
corr = pd.melt(corr.reset_index(), id_vars='index') # Unpivot the dataframe, so we can get pair of arrays for x and y
corr.columns = ['x', 'y', 'value']
heatmap(
x=corr['x'],
y=corr['y'],
size=corr['value'].abs()
)
def corrplot(data, size_scale=500, marker='s'):
corr = pd.melt(data.reset_index(), id_vars='index').replace(np.nan, 0)
corr.columns = ['x', 'y', 'value']
heatmap(
corr['x'], corr['y'],
color=corr['value'], color_range=[-1, 1],
palette=sns.diverging_palette(20, 220, n=256),
size=corr['value'].abs(), size_range=[0,1],
marker=marker,
x_order=data.columns,
y_order=data.columns[::-1],
size_scale=size_scale
)
corrplot(code_df[columns].corr(), size_scale=300);
```
```
import numpy as np
#import sys
#sys.path.append("../")
from sklearn.svm import SVC
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import scale
from sklearn.metrics import confusion_matrix
import itertools
import datetime
```
## Loading data
### Remember that 1 means PD
```
data = pd.read_csv("../dataset/parkinsons.data")
data.head()
```
## Defining X and y
```
c = 0
for i in data.columns:
    print(c, i)
    c += 1
print("total PD", data.status.values.tolist().count(1))
print("total no PD", data.status.values.tolist().count(0))
# as the number of PD and no-PD samples is hugely different, resample to get even amounts
pd_index = data.index[data.loc[:, "status"] == 1]
nopd_index = data.index[data.loc[:, "status"] == 0]
print(pd_index.shape)
print(nopd_index.shape)
# making data more even
my_index = np.hstack((pd_index[:48], nopd_index))
print(my_index.shape)
X = scale(data.values[my_index, 1:].astype(np.float64))
y = data.status.values.astype(np.float64)[my_index]
print(X.shape, y.shape)
```
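The index-stacking step above balances the classes by simply truncating the majority class to the size of the minority class. The same idea can be sketched generically (the function name and random selection of majority rows are illustrative, not part of the notebook):

```python
import numpy as np

def undersample_indices(y, seed=0):
    """Return indices of a class-balanced subsample of a binary label array:
    all minority-class rows plus an equal number of majority-class rows."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    keep = rng.choice(majority, size=len(minority), replace=False)
    return np.sort(np.hstack((minority, keep)))

y = np.array([1] * 147 + [0] * 48)   # roughly the PD / no-PD imbalance
idx = undersample_indices(y)
assert len(idx) == 96                # 48 samples of each class
```

Sampling the majority rows at random, rather than taking the first 48 as the notebook does, avoids any ordering bias in the file.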
## Statistics
```
# note to self: making the data more evenly distributed among PD and no PD
# actually changed the look of this coefficient
print("Pearson coefficient:")
bests = np.zeros(X.shape[1])
for i in range(X.shape[1]):
    val = np.corrcoef(X[:, i], y)
    bests[i] = abs(val[0, 1])
order = np.argsort(bests)[::-1]
for i in order:
    print(i, data.columns[1:][i], bests[i])
# 15% for test, 15% for validation and 70% for training
# seeding
np.random.seed(0)
X_data = X[:, [22, 19, 0, 1]]
myray = np.arange(X_data.shape[0])
np.random.shuffle(myray)
X_train = X_data[myray[:int(0.7*X_data.shape[0])], :]
X_test = X_data[myray[int(0.7*X_data.shape[0]):int(0.85*X_data.shape[0])], :]
X_validation = X_data[myray[int(0.85*X_data.shape[0]):], :]
y_train = y[myray[:int(0.7*X.shape[0])]]
y_test = y[myray[int(0.7*X.shape[0]):int(0.85*X.shape[0])]]
y_validation = y[myray[int(0.85*X.shape[0]):]]
print(X_train.shape, X_test.shape, X_validation.shape)
print(y_train.shape, y_test.shape, y_validation.shape)
plt.scatter(X_data[:,0], X_data[:,1], s=40, c=y, cmap=plt.cm.Spectral)
plt.scatter(X_data[:,0], X_data[:,2], s=40, c=y, cmap=plt.cm.Spectral)
plt.scatter(X_data[:,0], X_data[:,3], s=40, c=y, cmap=plt.cm.Spectral)
#plt.scatter(X_data[:,0], X_data[:,4], s=40, c=y, cmap=plt.cm.Spectral)
plt.scatter(X_data[:,1], X_data[:,2], s=40, c=y, cmap=plt.cm.Spectral)
plt.scatter(X_data[:,1], X_data[:,3], s=40, c=y, cmap=plt.cm.Spectral)
#plt.scatter(X_data[:,1], X_data[:,4], s=40, c=y, cmap=plt.cm.Spectral)
#plt.scatter(X_data[:,2], X_data[:,3], s=40, c=y, cmap=plt.cm.Spectral)
#plt.scatter(X_data[:,2], X_data[:,4], s=40, c=y, cmap=plt.cm.Spectral)
#plt.scatter(X_data[:,4], X_data[:,3], s=40, c=y, cmap=plt.cm.Spectral)
plt.show()
```
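The loop above ranks each feature by the absolute Pearson correlation between that column and the label. A self-contained sketch of this filter-style feature ranking, on synthetic data (the function name and the generated data are illustrative):

```python
import numpy as np

def rank_features_by_corr(X, y):
    """Rank feature columns by |Pearson correlation| with y, strongest first."""
    scores = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(X.shape[1])])
    return np.argsort(scores)[::-1], scores

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)
X = rng.normal(size=(200, 3))
X[:, 1] += 2 * y                     # make column 1 strongly label-correlated
order, scores = rank_features_by_corr(X, y)
assert order[0] == 1                 # the informative column ranks first
```

Correlation only captures linear, one-feature-at-a-time relationships, which is why the notebook still validates the chosen subset with an actual classifier afterwards.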
## Training and validating
```
clf = SVC(C=2., kernel="rbf", degree=3, gamma="auto",
coef0=0.0, shrinking=True, probability=False,
tol=1e-5, cache_size=200, class_weight=None,
verbose=False, max_iter=-1, decision_function_shape="ovr",
random_state=None)
%time clf.fit(X_train, y_train)
pred = clf.predict(X_validation)
print("Validation Score:", clf.score(X_validation, y_validation))
```
## Test
```
# plots the confusion matrix
def plot_confusion_matrix(y_test, y_pred):
cnf_matrix = confusion_matrix(y_test, y_pred)
classes = np.array(["No Parkinson", "Parkinson"])
plt.clf()
plt.close("all")
#plt.figure(figsize = (7.5,6))
plt.imshow(cnf_matrix, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Normalized confusion matrix')
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
#normalized
cnf_matrix = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
thresh = cnf_matrix.max() / 2.
for i, j in itertools.product(range(cnf_matrix.shape[0]), range(cnf_matrix.shape[1])):
plt.text(j, i, round(cnf_matrix[i, j], 3),
horizontalalignment="center",
color="white" if cnf_matrix[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('Expected label')
plt.xlabel('Predicted label')
plt.show()
pred = clf.predict(X_test)
print("Test score:", clf.score(X_test, y_test))
```
## Confusion matrix
```
def plot_confusion_matrix(y_test, y_pred):
cnf_matrix = confusion_matrix(y_test, y_pred)
classes = np.array(["No Parkinson", "Parkinson"])
plt.clf()
plt.close("all")
plt.figure(figsize = (18,9))
plt.imshow(cnf_matrix, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Normalized confusion matrix(SVM)')
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
#normalized
cnf_matrix = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
thresh = cnf_matrix.max() / 2.
for i, j in itertools.product(range(cnf_matrix.shape[0]), range(cnf_matrix.shape[1])):
plt.text(j, i, round(cnf_matrix[i, j], 3),
horizontalalignment="center",
color="white" if cnf_matrix[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('Expected label')
plt.xlabel('Predicted label')
plt.show()
%matplotlib inline
plt.figure(figsize = (18,9))
plot_confusion_matrix(y_test, pred)
```
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
import os
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use("ggplot")
%matplotlib inline
import cv2
from tqdm import tqdm_notebook, tnrange
from glob import glob
from itertools import chain
from skimage.io import imread, imshow, concatenate_images
from skimage.transform import resize
from skimage.morphology import label
from sklearn.model_selection import train_test_split
import tensorflow as tf
from skimage.color import rgb2gray
from tensorflow.keras import Input
from tensorflow.keras.models import Model, load_model, save_model
from tensorflow.keras.layers import Input, Activation, BatchNormalization, Dropout, Lambda, Conv2D, Conv2DTranspose, MaxPooling2D, concatenate, add
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import backend as K
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
# Set some parameters
im_width = 256
im_height = 256
train_files = []
mask_files = glob('../input/lgg-mri-segmentation/kaggle_3m/*/*_mask*')
for i in mask_files:
train_files.append(i.replace('_mask',''))
print(train_files[:10])
print(mask_files[:10])
#Lets plot some samples
rows,cols=3,3
fig=plt.figure(figsize=(10,10))
for i in range(1,rows*cols+1):
fig.add_subplot(rows,cols,i)
img_path=train_files[i]
msk_path=mask_files[i]
img=cv2.imread(img_path)
img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
msk=cv2.imread(msk_path)
plt.imshow(img)
plt.imshow(msk,alpha=0.4)
plt.show()
def dice_coef(y_true, y_pred):
y_truef=K.flatten(y_true)
y_predf=K.flatten(y_pred)
And=K.sum(y_truef* y_predf)
return((2* And) / (K.sum(y_truef) + K.sum(y_predf)))
def dice_coef_loss(y_true, y_pred):
return -dice_coef(y_true, y_pred)
def iou(y_true, y_pred):
intersection = K.sum(y_true * y_pred)
sum_ = K.sum(y_true + y_pred)
jac = (intersection) / (sum_ - intersection)
return jac
def jac_distance(y_true, y_pred):
    return -iou(y_true, y_pred)
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.layers import *
def res_block(inputs,filter_size):
"""
res_block -- Residual block for building res path
Arguments:
inputs {<class 'tensorflow.python.framework.ops.Tensor'>} -- input for residual block
filter_size {int} -- convolutional filter size
Returns:
add {<class 'tensorflow.python.framework.ops.Tensor'>} -- addition of two convolutional filter output
"""
# First Conv2D layer
cb1 = Conv2D(filter_size,(3,3),padding = 'same',activation="relu")(inputs)
# Second Conv2D layer parallel to the first one
cb2 = Conv2D(filter_size,(1,1),padding = 'same',activation="relu")(inputs)
# Addition of cb1 and cb2
add = Add()([cb1,cb2])
return add
def res_path(inputs,filter_size,path_number):
"""
res_path -- residual path / modified skip connection
Arguments:
inputs {<class 'tensorflow.python.framework.ops.Tensor'>} -- input for res path
filter_size {int} -- convolutional filter size
path_number {int} -- path identifier
Returns:
skip_connection {<class 'tensorflow.python.framework.ops.Tensor'>} -- final res path
"""
# Minimum one residual block for every res path
skip_connection = res_block(inputs, filter_size)
# Two serial residual blocks for res path 2
if path_number == 2:
skip_connection = res_block(skip_connection,filter_size)
# Three serial residual blocks for res path 1
elif path_number == 1:
skip_connection = res_block(skip_connection,filter_size)
skip_connection = res_block(skip_connection,filter_size)
return skip_connection
def decoder_block(inputs, res, out_channels, depth):
"""
decoder_block -- decoder block formation
Arguments:
inputs {<class 'tensorflow.python.framework.ops.Tensor'>} -- input for decoder block
mid_channels {int} -- no. of mid channels
out_channels {int} -- no. of out channels
Returns:
db {<class 'tensorflow.python.framework.ops.Tensor'>} -- returning the decoder block
"""
conv_kwargs = dict(
activation='relu',
padding='same',
kernel_initializer='he_normal',
data_format='channels_last'
)
# UpConvolutional layer
db = Conv2DTranspose(out_channels, (2, 2), strides=(2, 2))(inputs)
db = concatenate([db, res], axis=3)
# First conv2D layer
db = Conv2D(out_channels, 3, **conv_kwargs)(db)
# Second conv2D layer
db = Conv2D(out_channels, 3, **conv_kwargs)(db)
if depth > 2:
# Third conv2D layer
db = Conv2D(out_channels, 3, **conv_kwargs)(db)
return db
def TransResUNet(input_size=(512, 512, 1)):
"""
TransResUNet -- main architecture of TransResUNet
Arguments:
input_size {tuple} -- size of input image
Returns:
model {<class 'tensorflow.python.keras.engine.training.Model'>} -- final model
"""
# Input
inputs = Input(input_size)
# Handling input channels
# input with 1 channel will be converted to 3 channels to be compatible with VGG16 pretrained encoder
# if input_size[-1] < 3:
# inp = Conv2D(3, 1)(inputs)
# input_shape = (input_size[0], input_size[0], 3)
# else:
# inp = inputs
# input_shape = input_size
# VGG16 with imagenet weights
encoder = VGG16(include_top=False, weights='imagenet', input_shape=input_size)
# First encoder block
enc1 = encoder.get_layer(name='block1_conv1')(inputs)
enc1 = encoder.get_layer(name='block1_conv2')(enc1)
enc2 = MaxPooling2D(pool_size=(2, 2))(enc1)
# Second encoder block
enc2 = encoder.get_layer(name='block2_conv1')(enc2)
enc2 = encoder.get_layer(name='block2_conv2')(enc2)
enc3 = MaxPooling2D(pool_size=(2, 2))(enc2)
# Third encoder block
enc3 = encoder.get_layer(name='block3_conv1')(enc3)
enc3 = encoder.get_layer(name='block3_conv2')(enc3)
enc3 = encoder.get_layer(name='block3_conv3')(enc3)
enc4 = MaxPooling2D(pool_size=(2, 2))(enc3)
# Fourth encoder block
enc4 = encoder.get_layer(name='block4_conv1')(enc4)
enc4 = encoder.get_layer(name='block4_conv2')(enc4)
enc4 = encoder.get_layer(name='block4_conv3')(enc4)
center = MaxPooling2D(pool_size=(2, 2))(enc4)
# Center block
center = Conv2D(1024, (3, 3), activation='relu', padding='same')(center)
center = Conv2D(1024, (3, 3), activation='relu', padding='same')(center)
# classification branch
cls = Conv2D(256, (3,3), activation='relu')(center)
cls = Conv2D(128, (3,3), activation='relu')(cls)
cls = Conv2D(1, (1,1))(cls)
cls = GlobalAveragePooling2D()(cls)
cls = Activation('sigmoid', name='class')(cls)
clsr = Reshape((1, 1, 1))(cls)
# Decoder block corresponding to fourth encoder
res_path4 = res_path(enc4,256,4)
dec4 = decoder_block(center, res_path4, 512, 4)
# Decoder block corresponding to third encoder
res_path3 = res_path(enc3,128,3)
dec3 = decoder_block(dec4, res_path3, 256, 3)
# Decoder block corresponding to second encoder
res_path2 = res_path(enc2,64,2)
dec2 = decoder_block(dec3, res_path2, 128, 2)
# Final Block concatenation with first encoded feature
res_path1 = res_path(enc1,32,1)
dec1 = decoder_block(dec2, res_path1, 64, 1)
# Output
out = Conv2D(1, 1)(dec1)
out = Activation('sigmoid')(out)
out = multiply(inputs=[out,clsr], name='seg')
# Final model
model = Model(inputs=[inputs], outputs=[out, cls])
return model
from tensorflow.keras.applications.vgg16 import preprocess_input
def train_generator(data_frame, batch_size, aug_dict, train_path=None,
image_color_mode="rgb",
mask_color_mode="grayscale",
image_save_prefix="image",
mask_save_prefix="mask",
save_to_dir=None,
target_size=(256,256),
seed=1):
    '''
    Generates an image and its mask at the same time. The same seed is used for
    image_datagen and mask_datagen so the transformation applied to the image
    and the mask is identical. To visualize the generator's output,
    set save_to_dir = "your path".
    '''
image_datagen = ImageDataGenerator(**aug_dict)
mask_datagen = ImageDataGenerator(**aug_dict)
image_generator = image_datagen.flow_from_dataframe(
data_frame,
directory = train_path,
x_col = "filename",
class_mode = None,
color_mode = image_color_mode,
target_size = target_size,
batch_size = batch_size,
save_to_dir = save_to_dir,
save_prefix = image_save_prefix,
seed = seed)
mask_generator = mask_datagen.flow_from_dataframe(
data_frame,
directory = train_path,
x_col = "mask",
class_mode = None,
color_mode = mask_color_mode,
target_size = target_size,
batch_size = batch_size,
save_to_dir = save_to_dir,
save_prefix = mask_save_prefix,
seed = seed)
train_gen = zip(image_generator, mask_generator)
for (img, mask) in train_gen:
img, mask, label = adjust_data(img, mask)
yield (img,[mask,label])
def adjust_data(img,mask):
img = preprocess_input(img)
mask = mask / 255
mask[mask > 0.5] = 1
mask[mask <= 0.5] = 0
masks_sum = np.sum(mask, axis=(1,2,3)).reshape((-1, 1))
class_lab = (masks_sum != 0) + 0.
return (img, mask, class_lab)
from sklearn.model_selection import KFold
import pandas
kf = KFold(n_splits = 5, shuffle=False)
df = pandas.DataFrame(data={"filename": train_files, 'mask' : mask_files})
df2 = df.sample(frac=1).reset_index(drop=True)
train_generator_args = dict(rotation_range=0.2,
width_shift_range=0.05,
height_shift_range=0.05,
shear_range=0.05,
zoom_range=0.05,
horizontal_flip=True,
fill_mode='nearest')
histories = []
losses = []
accuracies = []
dicecoefs = []
ious = []
EPOCHS = 40
BATCH_SIZE = 8
for k, (train_index, test_index) in enumerate(kf.split(df2)):
train_data_frame = df2.iloc[train_index]
test_data_frame = df2.iloc[test_index]
train_gen = train_generator(train_data_frame, BATCH_SIZE,
train_generator_args,
target_size=(im_height, im_width))
test_gener = train_generator(test_data_frame, BATCH_SIZE,
dict(),
target_size=(im_height, im_width))
model = TransResUNet(input_size=(im_height, im_width, 3))
model.compile(optimizer=Adam(learning_rate=2e-6), loss={'seg':dice_coef_loss, 'class':'binary_crossentropy'}, \
loss_weights={'seg':50, 'class':1}, metrics=["binary_accuracy", iou, dice_coef])
callbacks = [ModelCheckpoint(str(k+1) + '_unet_brain_mri_seg.hdf5', verbose=1, save_best_only=True)]
history = model.fit(train_gen,
steps_per_epoch=len(train_data_frame) / BATCH_SIZE,
epochs=EPOCHS,
callbacks=callbacks,
validation_data = test_gener,
validation_steps=len(test_data_frame) / BATCH_SIZE)
model = load_model(str(k+1) + '_unet_brain_mri_seg.hdf5', custom_objects={'dice_coef_loss': dice_coef_loss, 'iou': iou, 'dice_coef': dice_coef})
test_gen = train_generator(test_data_frame, BATCH_SIZE,
dict(),
target_size=(im_height, im_width))
results = model.evaluate(test_gen, steps=len(test_data_frame) / BATCH_SIZE)
results = dict(zip(model.metrics_names,results))
histories.append(history)
accuracies.append(results['seg_binary_accuracy'])
losses.append(results['seg_loss'])
dicecoefs.append(results['seg_dice_coef'])
ious.append(results['seg_iou'])
break
print('accuracies : ', accuracies)
print('losses : ', losses)
print('dicecoefs : ', dicecoefs)
print('ious : ', ious)
print('-----------------------------------------------------------------------------')
print('-----------------------------------------------------------------------------')
print('average accuracy : ', np.mean(np.array(accuracies)))
print('average loss : ', np.mean(np.array(losses)))
print('average dicecoefs : ', np.mean(np.array(dicecoefs)))
print('average ious : ', np.mean(np.array(ious)))
print()
print('standard deviation of accuracy : ', np.std(np.array(accuracies)))
print('standard deviation of loss : ', np.std(np.array(losses)))
print('standard deviation of dicecoefs : ', np.std(np.array(dicecoefs)))
print('standard deviation of ious : ', np.std(np.array(ious)))
import pickle
for h, history in enumerate(histories):
keys = history.history.keys()
fig, axs = plt.subplots(1, len(keys)//2, figsize = (25, 5))
fig.suptitle('No. ' + str(h+1) + ' Fold Results', fontsize=30)
for k, key in enumerate(list(keys)[:len(keys)//2]):
training = history.history[key]
validation = history.history['val_' + key]
epoch_count = range(1, len(training) + 1)
axs[k].plot(epoch_count, training, 'r--')
axs[k].plot(epoch_count, validation, 'b-')
axs[k].legend(['Training ' + key, 'Validation ' + key])
with open(str(h+1) + '_brats_trainHistoryDict', 'wb') as file_pi:
pickle.dump(history.history, file_pi)
model = load_model('1_unet_brain_mri_seg.hdf5', custom_objects={'dice_coef_loss': dice_coef_loss, 'iou': iou, 'dice_coef': dice_coef})
for i in range(20):
index=np.random.randint(1,len(test_data_frame.index))
img = cv2.imread(test_data_frame['filename'].iloc[index])
img = cv2.resize(img ,(im_height, im_width))
img = preprocess_input(img)
img = img[np.newaxis, :, :, :]
pred = model.predict(img)
print(pred[1])
plt.figure(figsize=(12,12))
plt.subplot(1,3,1)
plt.imshow(cv2.resize(cv2.imread(test_data_frame['filename'].iloc[index]) ,(im_height, im_width)))
plt.title('Original Image')
plt.subplot(1,3,2)
plt.imshow(np.squeeze(cv2.imread(test_data_frame['mask'].iloc[index])))
plt.title('Original Mask')
plt.subplot(1,3,3)
plt.imshow(np.squeeze(pred[0]) > .5)
plt.title('Prediction')
plt.show()
```
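The `dice_coef` used as the segmentation loss above is implemented with Keras backend ops; the same overlap measure in plain NumPy, for intuition (the small masks are illustrative):

```python
import numpy as np

def dice(a, b, eps=1e-7):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    return (2 * np.sum(a * b)) / (np.sum(a) + np.sum(b) + eps)

m = np.zeros((4, 4), dtype=int); m[:2, :2] = 1   # 4 positive pixels
n = np.zeros((4, 4), dtype=int); n[:2, :] = 1    # 8 positive pixels, 4 overlapping
assert abs(dice(m, m) - 1.0) < 1e-6              # identical masks score ~1
assert abs(dice(m, n) - 2 * 4 / (4 + 8)) < 1e-6  # partial overlap scores 2/3
```

Minimizing `-dice` (as `dice_coef_loss` does) therefore pushes the predicted mask toward maximal overlap with the ground truth, which handles class imbalance better than plain pixel accuracy.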
### Machine Learning for Engineers: [LongShortTermMemory](https://www.apmonitor.com/pds/index.php/Main/LongShortTermMemory)
- [LSTM Networks](https://www.apmonitor.com/pds/index.php/Main/LongShortTermMemory)
- Source Blocks: 10
- Description: Long Short-Term Memory (LSTM), Recurrent Neural Networks, and other sequential processing methods consider a window of data to make a future prediction.
- [Course Overview](https://apmonitor.com/pds)
- [Course Schedule](https://apmonitor.com/pds/index.php/Main/CourseSchedule)
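The window idea in the description can be shown in isolation before the full LSTM code: each training sample is the previous `window` values of the series, and the target is the value that follows. A minimal sketch (NumPy only; the series and window size are arbitrary):

```python
import numpy as np

def make_windows(series, window):
    """Split a 1-D series into (samples, window, 1) inputs and next-value targets."""
    X = np.array([series[i - window:i] for i in range(window, len(series))])
    y = np.array([series[i] for i in range(window, len(series))])
    return X.reshape(-1, window, 1), y

series = np.sin(np.linspace(0, 2 * np.pi, 100))
X, y = make_windows(series, window=10)
assert X.shape == (90, 10, 1) and y.shape == (90,)
```

The `(samples, timesteps, features)` reshape is the input layout Keras LSTM layers expect, which is why the code below performs the same `reshape` step.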
```
import numpy as np
import matplotlib.pyplot as plt
# Generate data
n = 500
t = np.linspace(0,20.0*np.pi,n)
X = np.sin(t) # X is already between -1 and 1, scaling normally needed
# Set window of past points for LSTM model
window = 10
# Split 80/20 into train/test data
last = int(n/5.0)
Xtrain = X[:-last]
Xtest = X[-last-window:]
# Store window number of points as a sequence
xin = []
next_X = []
for i in range(window,len(Xtrain)):
xin.append(Xtrain[i-window:i])
next_X.append(Xtrain[i])
# Reshape data to format for LSTM
xin, next_X = np.array(xin), np.array(next_X)
xin = xin.reshape(xin.shape[0], xin.shape[1], 1)
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
# Initialize LSTM model
m = Sequential()
m.add(LSTM(units=50, return_sequences=True, input_shape=(xin.shape[1],1)))
m.add(Dropout(0.2))
m.add(LSTM(units=50))
m.add(Dropout(0.2))
m.add(Dense(units=1))
m.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fit LSTM model
history = m.fit(xin, next_X, epochs = 50, batch_size = 50,verbose=0)
plt.figure()
plt.ylabel('loss'); plt.xlabel('epoch')
plt.semilogy(history.history['loss'])
# Store "window" points as a sequence
xin = []
next_X1 = []
for i in range(window,len(Xtest)):
xin.append(Xtest[i-window:i])
next_X1.append(Xtest[i])
# Reshape data to format for LSTM
xin, next_X1 = np.array(xin), np.array(next_X1)
xin = xin.reshape((xin.shape[0], xin.shape[1], 1))
# Predict the next value (1 step ahead)
X_pred = m.predict(xin)
# Plot prediction vs actual for test data
plt.figure()
plt.plot(X_pred,':',label='LSTM')
plt.plot(next_X1,'--',label='Actual')
plt.legend()
# Using predicted values to predict next step
X_pred = Xtest.copy()
for i in range(window,len(X_pred)):
xin = X_pred[i-window:i].reshape((1, window, 1))
X_pred[i] = m.predict(xin)
# Plot prediction vs actual for test data
plt.figure()
plt.plot(X_pred[window:],':',label='LSTM')
plt.plot(next_X1,'--',label='Actual')
plt.legend()
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
# Generate data
n = 500
t = np.linspace(0,20.0*np.pi,n)
X = np.sin(t) # X is already between -1 and 1, scaling normally needed
# Set window of past points for LSTM model
window = 10
# Split 80/20 into train/test data
last = int(n/5.0)
Xtrain = X[:-last]
Xtest = X[-last-window:]
# Store window number of points as a sequence
xin = []
next_X = []
for i in range(window,len(Xtrain)):
xin.append(Xtrain[i-window:i])
next_X.append(Xtrain[i])
# Reshape data to format for LSTM
xin, next_X = np.array(xin), np.array(next_X)
xin = xin.reshape(xin.shape[0], xin.shape[1], 1)
# Initialize LSTM model
m = Sequential()
m.add(LSTM(units=50, return_sequences=True, input_shape=(xin.shape[1],1)))
m.add(Dropout(0.2))
m.add(LSTM(units=50))
m.add(Dropout(0.2))
m.add(Dense(units=1))
m.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fit LSTM model
history = m.fit(xin, next_X, epochs = 50, batch_size = 50,verbose=0)
plt.figure()
plt.ylabel('loss'); plt.xlabel('epoch')
plt.semilogy(history.history['loss'])
# Store "window" points as a sequence
xin = []
next_X1 = []
for i in range(window,len(Xtest)):
xin.append(Xtest[i-window:i])
next_X1.append(Xtest[i])
# Reshape data to format for LSTM
xin, next_X1 = np.array(xin), np.array(next_X1)
xin = xin.reshape((xin.shape[0], xin.shape[1], 1))
# Predict the next value (1 step ahead)
X_pred = m.predict(xin)
# Plot prediction vs actual for test data
plt.figure()
plt.plot(X_pred,':',label='LSTM')
plt.plot(next_X1,'--',label='Actual')
plt.legend()
# Using predicted values to predict next step
X_pred = Xtest.copy()
for i in range(window,len(X_pred)):
xin = X_pred[i-window:i].reshape((1, window, 1))
X_pred[i] = m.predict(xin)
# Plot prediction vs actual for test data
plt.figure()
plt.plot(X_pred[window:],':',label='LSTM')
plt.plot(next_X1,'--',label='Actual')
plt.legend()
plt.show()
# generate new data
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tclab
import time
n = 840 # Number of second time points (14 min)
tm = np.linspace(0,n,n+1) # Time values
lab = tclab.TCLab()
T1 = [lab.T1]
T2 = [lab.T2]
Q1 = np.zeros(n+1)
Q2 = np.zeros(n+1)
Q1[30:] = 35.0
Q1[270:] = 70.0
Q1[450:] = 10.0
Q1[630:] = 60.0
Q1[800:] = 0.0
for i in range(n):
lab.Q1(Q1[i])
lab.Q2(Q2[i])
time.sleep(1)
print(Q1[i],lab.T1)
T1.append(lab.T1)
T2.append(lab.T2)
lab.close()
# Save data file
data = np.vstack((tm,Q1,Q2,T1,T2)).T
np.savetxt('tclab_data.csv',data,delimiter=',',\
header='Time,Q1,Q2,T1,T2',comments='')
# Create Figure
plt.figure(figsize=(10,7))
ax = plt.subplot(2,1,1)
ax.grid()
plt.plot(tm/60.0,T1,'r.',label=r'$T_1$')
plt.ylabel(r'Temp ($^oC$)')
ax = plt.subplot(2,1,2)
ax.grid()
plt.plot(tm/60.0,Q1,'b-',label=r'$Q_1$')
plt.ylabel(r'Heater (%)')
plt.xlabel('Time (min)')
plt.legend()
plt.savefig('tclab_data.png')
plt.show()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
import time
# For LSTM model
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.callbacks import EarlyStopping
from keras.models import load_model
# Load training data
file = 'http://apmonitor.com/do/uploads/Main/tclab_dyn_data3.txt'
train = pd.read_csv(file)
# Scale features
s1 = MinMaxScaler(feature_range=(-1,1))
Xs = s1.fit_transform(train[['T1','Q1']])
# Scale predicted value
s2 = MinMaxScaler(feature_range=(-1,1))
Ys = s2.fit_transform(train[['T1']])
# Each time step uses last 'window' to predict the next change
window = 70
X = []
Y = []
for i in range(window,len(Xs)):
X.append(Xs[i-window:i,:])
Y.append(Ys[i])
# Reshape data to format accepted by LSTM
X, Y = np.array(X), np.array(Y)
# create and train LSTM model
# Initialize LSTM model
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, \
input_shape=(X.shape[1],X.shape[2])))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50))
model.add(Dropout(0.2))
model.add(Dense(units=1))
model.compile(optimizer = 'adam', loss = 'mean_squared_error',\
metrics = ['accuracy'])
# Allow for early exit
es = EarlyStopping(monitor='loss',mode='min',verbose=1,patience=10)
# Fit (and time) LSTM model
t0 = time.time()
history = model.fit(X, Y, epochs = 10, batch_size = 250, callbacks=[es], verbose=1)
t1 = time.time()
print('Runtime: %.2f s' %(t1-t0))
# Plot loss
plt.figure(figsize=(8,4))
plt.semilogy(history.history['loss'])
plt.xlabel('epoch'); plt.ylabel('loss')
plt.savefig('tclab_loss.png')
model.save('model.h5')
# Verify the fit of the model
Yp = model.predict(X)
# un-scale outputs
Yu = s2.inverse_transform(Yp)
Ym = s2.inverse_transform(Y)
plt.figure(figsize=(10,6))
plt.subplot(2,1,1)
plt.plot(train['Time'][window:],Yu,'r-',label='LSTM')
plt.plot(train['Time'][window:],Ym,'k--',label='Measured')
plt.ylabel('Temperature (°C)')
plt.legend()
plt.subplot(2,1,2)
plt.plot(train['Q1'],label='heater (%)')
plt.legend()
plt.xlabel('Time (sec)'); plt.ylabel('Heater')
plt.savefig('tclab_fit.png')
# Load model
v = load_model('model.h5')
# Load training data
test = pd.read_csv('http://apmonitor.com/pdc/uploads/Main/tclab_data4.txt')
Xt = test[['T1','Q1']].values
Yt = test[['T1']].values
Xts = s1.transform(Xt)
Yts = s2.transform(Yt)
Xti = []
Yti = []
for i in range(window,len(Xts)):
Xti.append(Xts[i-window:i,:])
Yti.append(Yts[i])
# Reshape data to format accepted by LSTM
Xti, Yti = np.array(Xti), np.array(Yti)
# Verify the fit of the model
Ytp = model.predict(Xti)
# un-scale outputs
Ytu = s2.inverse_transform(Ytp)
Ytm = s2.inverse_transform(Yti)
plt.figure(figsize=(10,6))
plt.subplot(2,1,1)
plt.plot(test['Time'][window:],Ytu,'r-',label='LSTM Predicted')
plt.plot(test['Time'][window:],Ytm,'k--',label='Measured')
plt.legend()
plt.ylabel('Temperature (°C)')
plt.subplot(2,1,2)
plt.plot(test['Time'],test['Q1'],'b-',label='Heater')
plt.xlabel('Time (sec)'); plt.ylabel('Heater (%)')
plt.legend()
plt.savefig('tclab_validate.png')
# Using predicted values to predict next step
Xtsq = Xts.copy()
for i in range(window,len(Xtsq)):
Xin = Xtsq[i-window:i].reshape((1, window, 2))
Xtsq[i][0] = v.predict(Xin)
Yti[i-window] = Xtsq[i][0]
#Ytu = (Yti - s2.min_[0])/s2.scale_[0]
Ytu = s2.inverse_transform(Yti)
plt.figure(figsize=(10,6))
plt.subplot(2,1,1)
plt.plot(test['Time'][window:],Ytu,'r-',label='LSTM Predicted')
plt.plot(test['Time'][window:],Ytm,'k--',label='Measured')
plt.legend()
plt.ylabel('Temperature (°C)')
plt.subplot(2,1,2)
plt.plot(test['Time'],test['Q1'],'b-',label='Heater')
plt.xlabel('Time (sec)'); plt.ylabel('Heater (%)')
plt.legend()
plt.savefig('tclab_forecast.png')
plt.show()
import numpy as np
import matplotlib.pyplot as plt
from gekko import GEKKO
import pandas as pd
file = 'http://apmonitor.com/do/uploads/Main/tclab_dyn_data3.txt'
data = pd.read_csv(file)
# subset for training
n = 3000
tm = data['Time'][0:n].values
Q1s = data['Q1'][0:n].values
T1s = data['T1'][0:n].values
m = GEKKO()
m.time = tm
# Parameters to Estimate
K1 = m.FV(value=0.5,lb=0.1,ub=1.0)
tau1 = m.FV(value=150,lb=50,ub=250)
tau2 = m.FV(value=15,lb=10,ub=20)
K1.STATUS = 1
tau1.STATUS = 1
tau2.STATUS = 1
# Model Inputs
Q1 = m.Param(value=Q1s)
Ta = m.Param(value=23.0) # degC
T1m = m.Param(T1s)
# Model Variables
TH1 = m.Var(value=T1s[0])
TC1 = m.Var(value=T1s)
# Objective Function
m.Minimize((T1m-TC1)**2)
# Equations
m.Equation(tau1 * TH1.dt() + (TH1-Ta) == K1*Q1)
m.Equation(tau2 * TC1.dt() + TC1 == TH1)
# Global Options
m.options.IMODE = 5 # MHE
m.options.EV_TYPE = 2 # Objective type
m.options.NODES = 2 # Collocation nodes
m.options.SOLVER = 3 # IPOPT
# Predict Parameters and Temperatures
m.solve()
# Create plot
plt.figure(figsize=(10,7))
ax=plt.subplot(2,1,1)
ax.grid()
plt.plot(tm,T1s,'ro',label=r'$T_1$ measured')
plt.plot(tm,TC1.value,'k-',label=r'$T_1$ predicted')
plt.ylabel('Temperature (degC)')
plt.legend(loc=2)
ax=plt.subplot(2,1,2)
ax.grid()
plt.plot(tm,Q1s,'b-',label=r'$Q_1$')
plt.ylabel('Heater (%)')
plt.xlabel('Time (sec)')
plt.legend(loc='best')
# Print optimal values
print('K1: ' + str(K1.newval))
print('tau1: ' + str(tau1.newval))
print('tau2: ' + str(tau2.newval))
# Save and show figure
plt.savefig('tclab_2nd_order_fit.png')
# Validation
tm = data['Time'][n:3*n].values
Q1s = data['Q1'][n:3*n].values
T1s = data['T1'][n:3*n].values
v = GEKKO()
v.time = tm
# Parameters to Estimate
K1 = K1.newval
tau1 = tau1.newval
tau2 = tau2.newval
Q1 = v.Param(value=Q1s)
Ta = v.Param(value=23.0) # degC
TH1 = v.Var(value=T1s[0])
TC1 = v.Var(value=T1s[0])
v.Equation(tau1 * TH1.dt() + (TH1-Ta) == K1*Q1)
v.Equation(tau2 * TC1.dt() + TC1 == TH1)
v.options.IMODE = 4 # Simulate
v.options.NODES = 2 # Collocation nodes
v.options.SOLVER = 1
# Predict Parameters and Temperatures
v.solve(disp=True)
# Create plot
plt.figure(figsize=(10,7))
ax=plt.subplot(2,1,1)
ax.grid()
plt.plot(tm,T1s,'ro',label=r'$T_1$ measured')
plt.plot(tm,TC1.value,'k-',label=r'$T_1$ predicted')
plt.ylabel('Temperature (degC)')
plt.legend(loc=2)
ax=plt.subplot(2,1,2)
ax.grid()
plt.plot(tm,Q1s,'b-',label=r'$Q_1$')
plt.ylabel('Heater (%)')
plt.xlabel('Time (sec)')
plt.legend(loc='best')
# Save and show figure
plt.savefig('tclab_2nd_order_validate.png')
plt.show()
```
```
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from PIL import Image
from IPython.display import clear_output
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
from torch import FloatTensor
import pickle
from bresenham import bresenham
from tqdm import tqdm_notebook, tqdm
data_path = '../output/coord_dict_large.p'
mask = np.load('../output/europe_mask.npy')
mask = np.roll(mask, -9, axis=0)
compass = {
0: np.array([0, 1]), #E
1: np.array([1, 0]), #N
2: np.array([0, -1]), #W
3: np.array([-1, 0]) #S
}
stationary_base = 0
translate_base = 1
rotate_base = 5
WAIT_REWARD = -1
ACTIVE_COLLECTING_REWARD = 5
PASSIVE_COLLECTING_REWARD = 5
TRANSLATE_REWARD = 0
ROTATE_REWARD = 0
HIT_WALL_REWARD = -20
INIT_SEED = 42
MAP_W = 350
MAP_H = 170
STEP_SIZE = 1
# Loading flow vector field, constant for the moment
uo = np.load('../output/uo.npy').T
vo = np.load('../output/vo.npy').T
def triangle_area(A, B, C):
a = math.sqrt(np.sum((B-C)*(B-C)))
b = math.sqrt(np.sum((A-C)*(A-C)))
c = math.sqrt(np.sum((A-B)*(A-B)))
# Calculate Semi-perimeter
s = (a + b + c) / 2
# calculate the area
area = (s*(s-a)*(s-b)*(s-c)) ** 0.5
return area
def is_in_triangle(P, A, B, C, eps=1e-6):
area_ABC = triangle_area(A, B, C)
area_ABP = triangle_area(A, B, P)
area_ACP = triangle_area(A, C, P)
area_BCP = triangle_area(B, C, P)
return abs(area_ABC - (area_ABP + area_ACP + area_BCP)) < eps
def is_in_rectangle(P, A, B, C, D, eps=1e-6):
return is_in_triangle(P, A, B, C, eps) or is_in_triangle(P, A, C, D, eps)
def is_crossing(A, B, C, D, eps=1e-6):
if (max(A[0], B[0]) < min(C[0], D[0])):
return False # x-intervals do not overlap
if (max(A[1], B[1]) < min(C[1], D[1])):
return False # y-intervals do not overlap
Ia = [max(min(A[0], B[0]), min(C[0],D[0])),
min(max(A[0], B[0]), max(C[0], D[0]))]
A1 = (A[1]-B[1])/(A[0]-B[0] + eps) # eps avoids division by zero
A2 = (C[1]-D[1])/(C[0]-D[0] + eps) # eps avoids division by zero
b1 = A[1]-A1*A[0]
b2 = C[1]-A2*C[0]
if (A1 == A2):
return False # Parallel segments
Xa = (b2 - b1) / (A1 - A2) # Once again, pay attention to not dividing by zero
if Xa + eps < max(min(A[0], B[0]), min(C[0], D[0])) or Xa -eps > min(max(A[0], B[0]), max(C[0], D[0])):
return False # intersection is out of bound
else:
return True
# crossing(np.array([0,0]), np.array([5,0]), np.array([5 + 1e-6, 1]), np.array([5+1e-6, -1]))
class FishingNet:
def __init__(self, length, wall_matrix, seed=0):
np.random.seed(seed)
self.length = length
self.num_rots = 10
self.angle = np.random.randint(self.num_rots)
self.rotations = range(self.num_rots)
angle_step = np.pi/self.num_rots
self.angles_list = np.array([(np.cos(angle_step * i), np.sin(angle_step * i)) for i in range(self.num_rots)])
is_ok = False
while not is_ok:
#print(list_bresenham)
self.pos_center = np.array([np.random.randint(MAP_W), np.random.randint(MAP_H)])
self.pos_center = np.array([170, 85]) # fixed start position (overrides the random draw above)
list_bresenham = self.compute_bresenham()
try:
is_ok = not wall_matrix[list_bresenham[:,0], list_bresenham[:,1]].any()
except IndexError: # If initialization out of bounds
pass
def translate(self, direction, wall_matrix):
reward = 0
old_pos = self.pos_center
self.pos_center = self.pos_center + STEP_SIZE*compass[direction]
list_bresenham = self.compute_bresenham()
if wall_matrix[list_bresenham[:,0], list_bresenham[:,1]].any():
reward += HIT_WALL_REWARD
self.pos_center = old_pos
else:
reward += TRANSLATE_REWARD
return reward
def rotate(self, rotation, wall_matrix):
reward = 0
old_rot = self.angle
self.angle = self.angle + rotation
if self.angle == self.num_rots:
self.angle = 0
elif self.angle == -1:
self.angle = self.num_rots -1
list_bresenham = self.compute_bresenham()
if wall_matrix[list_bresenham[:,0], list_bresenham[:,1]].any():
reward += HIT_WALL_REWARD
self.angle = old_rot
else:
reward += ROTATE_REWARD
return reward
def end_points(self):
return (self.pos_center + self.length/2 * self.angles_list[self.angle], self.pos_center - self.length/2 * self.angles_list[self.angle])
def compute_bresenham(self):
endpoints = self.end_points()
return np.array(list(bresenham(int(endpoints[0][0]), int(endpoints[0][1]), int(endpoints[1][0]), int(endpoints[1][1])))).astype(int)
class Environment:
def __init__(self, data_path, mask, fishnet_length, n_fishing_nets=1):
self.mask = mask.T
self.n_fishing_nets = n_fishing_nets
self.fishnet_length = fishnet_length
self.data_path = data_path
def reset(self):
self.t = 0
with open(self.data_path, 'rb') as f:
self.data = pickle.load(f)
self.particles = self.data[self.t]
self.flows = self.data[self.t]
self.fishing_nets = [FishingNet(self.fishnet_length, self.mask, seed=i+INIT_SEED) for i in range(self.n_fishing_nets)]
self.fishnet_pos_history = []
def step(self, actions):
total_reward = 0
rewards = np.zeros(self.n_fishing_nets)
self.particles_square_dist = []
for i_fnet, fnet in enumerate(self.fishing_nets):
c = fnet.pos_center
particles_square_dist =\
((np.array(list(self.particles.values()))/10 - c) **2).sum(1) <= (fnet.length /2 + 2)**2
self.particles_square_dist.append({})
for i, (k, v) in enumerate(self.particles.items()):
self.particles_square_dist[i_fnet][k] = particles_square_dist[i]
rewards[i_fnet] += self.update_fishing_net(i_fnet, actions[i_fnet])
rewards += self.update_particles()
self.t += 1
self.fishnet_pos_history.append(self.fishing_nets[0].end_points())
return rewards
def remove_particle(self, caught_particles):
for particle_id in caught_particles:
for future_step in range(self.t+1, len(self.data)):
del self.data[future_step][particle_id]
del self.particles[particle_id]
def update_fishing_net(self, i_fnet, action):
"""Update fishing nets positions and gather reward on particles caught by the movements"""
reward = 0
caught_particles = []
fnet = self.fishing_nets[i_fnet]
if action < translate_base: # No movement
return WAIT_REWARD
elif action < rotate_base: # Translation
old_end_points = fnet.end_points()
reward += fnet.translate(action - translate_base, self.mask) # Hit Wall penalty
new_end_points = fnet.end_points()
for k, particle in self.particles.items():
if self.particles_square_dist[i_fnet][k]:
if is_in_rectangle(np.array(particle)/10, old_end_points[0], old_end_points[1], new_end_points[0], new_end_points[1]):
caught_particles.append(k)
reward += ACTIVE_COLLECTING_REWARD
else:
reward += 0
self.remove_particle(caught_particles)
return reward + TRANSLATE_REWARD
else: # Rotation
old_end_points = fnet.end_points()
reward += fnet.rotate(2*(action - rotate_base)-1, self.mask) # Hit Wall penalty
new_end_points = fnet.end_points()
for k, particle in self.particles.items():
c = fnet.pos_center
if self.particles_square_dist[i_fnet][k]:
if is_in_triangle(np.array(particle)/10, fnet.pos_center, old_end_points[0], new_end_points[0])\
or is_in_triangle(np.array(particle)/10, fnet.pos_center, old_end_points[1], new_end_points[1]):
caught_particles.append(k)
reward += ACTIVE_COLLECTING_REWARD
else:
reward += 0
self.remove_particle(caught_particles)
return reward + ROTATE_REWARD
def update_particles(self):
"""Update particles positions and gather rewards on particles crossing nets"""
rewards = np.zeros(self.n_fishing_nets)
new_particles = self.data[self.t+1]
for i_fnet, fnet in enumerate(self.fishing_nets):
#Update particle position and check if it touches a Net
segment = fnet.end_points()
#rewards = PASSIVE_COLLECTING_REWARD if is_crossing(diffs[:,0], diffs[:,1], segment[0], segment[1]) # Will be faster if we get it to work
caught_particles = []
c = fnet.pos_center
for i, (k, v) in enumerate(self.particles.items()):
if self.particles_square_dist[i_fnet][k]:
if is_crossing(np.array(v)/10, np.array(new_particles[k])/10, segment[0], segment[1]):
caught_particles.append(k)
rewards[i_fnet] += PASSIVE_COLLECTING_REWARD
else:
rewards[i_fnet] += 0
self.remove_particle(caught_particles)
self.particles = self.data[self.t+1]
return rewards
def get_state(self):
particles_state = np.zeros((MAP_W, MAP_H))
for k, v in self.data[self.t].items():
particles_state[int(v[0]/10), int(v[1]/10)] += 1
fnet_coords = self.fishing_nets[0].compute_bresenham() # Assume we have only one fishnet
fnet_state = np.zeros((MAP_W, MAP_H))
for p in fnet_coords:
fnet_state[p[0], p[1]] += 1
return FloatTensor([[particles_state, self.mask, fnet_state, uo, vo]]).cuda()
def close(self):
raise NotImplementedError
with open(data_path, 'rb') as f:
data = pickle.load(f)
len(data[0].keys())
mat = np.zeros((MAP_W, MAP_H))
for k, v in data[0].items():
mat[int(v[0]/10), int(v[1]/10)] += 1
from matplotlib import animation, rc
from IPython.display import HTML
fig, ax = plt.subplots(1, 1, figsize=[10, 10])
def init(fig, ax):
mat = np.zeros((MAP_W, MAP_H))
for k, v in data[0].items():
mat[int(v[0]/10), int(v[1]/10)] += 1
ax.spy(mat.T != 0)
def simulate(i):
mat = np.zeros((MAP_W, MAP_H))
for k, v in data[i].items():
mat[int(v[0]/10), int(v[1]/10)] += 1
print((mat != 0).sum())
ax.spy(mat.T != 0)
return (ax,)
anim = animation.FuncAnimation(fig, simulate, init_func=init(fig,ax),
frames=2, interval=200,
blit=False)
HTML(anim.to_jshtml())
# flow_path
env = Environment(data_path= data_path, mask=mask, fishnet_length = 10)
# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
from IPython import display
# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward'))
class ReplayMemory(object):
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
"""Saves a transition."""
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
class DQN(nn.Module):
def __init__(self, h, w):
super(DQN, self).__init__()
self.conv1 = nn.Conv2d(5, 16, kernel_size=5, stride=2)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
self.bn3 = nn.BatchNorm2d(32)
# Number of Linear input connections depends on output of conv2d layers
# and therefore the input image size, so compute it.
def conv2d_size_out(size, kernel_size = 5, stride = 2):
return (size - (kernel_size - 1) - 1) // stride + 1
convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w)))
convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h)))
linear_input_size = convw * convh * 32
self.head = nn.Linear(linear_input_size, 7) # 448 or 512
# Called with either one element to determine next action, or a batch
# during optimization. Returns tensor([[left0exp,right0exp]...]).
def forward(self, x):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
return self.head(x.view(x.size(0), -1))
# resize = T.Compose([T.ToPILImage(),
# T.Resize(40, interpolation=Image.CUBIC),
# T.ToTensor()])
env.reset()
# plt.figure()
# plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(),
# interpolation='none')
# plt.title('Example extracted screen')
# plt.show()
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
TARGET_UPDATE = 10
model_checkpoint = './results/model1.pt'
# Get screen size so that we can initialize layers correctly based on shape
# returned from AI gym. Typical dimensions at this point are close to 3x40x90
# which is the result of a clamped and down-scaled render buffer in get_screen()
init_screen = env.get_state()
policy_net = DQN(MAP_W, MAP_H).to(device)
if model_checkpoint is not None:
policy_net.load_state_dict(torch.load(model_checkpoint))
target_net = DQN(MAP_W, MAP_H).to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval()
optimizer = optim.RMSprop(policy_net.parameters())
memory = ReplayMemory(10000)
steps_done = 0
def select_action(state):
global steps_done
sample = random.random()
eps_threshold = EPS_END + (EPS_START - EPS_END) * \
math.exp(-1. * steps_done / EPS_DECAY)
steps_done += 1
if sample > eps_threshold:
with torch.no_grad():
# t.max(1) will return largest column value of each row.
# second column on max result is index of where max element was
# found, so we pick action with the larger expected reward.
return policy_net(state).max(1)[1].view(1, 1)
else:
return torch.tensor([[random.randrange(7)]], device=device, dtype=torch.long)
episode_durations = []
def plot_durations():
plt.figure(2)
plt.clf()
durations_t = torch.tensor(episode_durations, dtype=torch.float)
plt.title('Training...')
plt.xlabel('Episode')
plt.ylabel('Duration')
plt.plot(durations_t.numpy())
# Take 100 episode averages and plot them too
if len(durations_t) >= 100:
means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
means = torch.cat((torch.zeros(99), means))
plt.plot(means.numpy())
plt.pause(0.001) # pause a bit so that plots are updated
if is_ipython:
display.clear_output(wait=True)
display.display(plt.gcf())
def optimize_model():
if len(memory) < BATCH_SIZE:
return
transitions = memory.sample(BATCH_SIZE)
# Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for
# detailed explanation). This converts batch-array of Transitions
# to Transition of batch-arrays.
batch = Transition(*zip(*transitions))
# Compute a mask of non-final states and concatenate the batch elements
# (a final state would've been the one after which simulation ended)
non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
batch.next_state)), device=device, dtype=torch.bool)
non_final_next_states = torch.cat([s for s in batch.next_state
if s is not None])
state_batch = torch.cat(batch.state)
action_batch = torch.cat(batch.action)
reward_batch = torch.cat(batch.reward)
# Compute Q(s_t, a) - the model computes Q(s_t), then we select the
# columns of actions taken. These are the actions which would've been taken
# for each batch state according to policy_net
state_action_values = policy_net(state_batch).gather(1, action_batch)
# Compute V(s_{t+1}) for all next states.
# Expected values of actions for non_final_next_states are computed based
# on the "older" target_net; selecting their best reward with max(1)[0].
# This is merged based on the mask, such that we'll have either the expected
# state value or 0 in case the state was final.
next_state_values = torch.zeros(BATCH_SIZE, device=device)
next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
# Compute the expected Q values
expected_state_action_values = (next_state_values * GAMMA) + reward_batch
# Compute Huber loss
loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))
# Optimize the model
optimizer.zero_grad()
loss.backward()
for param in policy_net.parameters():
param.grad.data.clamp_(-1, 1)
optimizer.step()
SAVE_EVERY = 10
num_episodes = 100
for i_episode in tqdm_notebook(range(24, num_episodes)):
# Initialize the environment and state
env.reset()
last_state = env.get_state()
current_state = env.get_state()
# state = current_state - last_state
total_reward = 0
for t in range(359):
# Select and perform an action
action = select_action(current_state)
reward = env.step([action.item()])[0]
total_reward += reward
reward = torch.tensor([reward], device=device)
# print(f'{t}: Reward {reward.item()}')
print(f'{t}: Reward {reward.item()}, Total Reward {total_reward}', end="\r")
# print(env.fishing_nets[0].end_points())
# Observe new state
last_state = current_state
current_state = env.get_state()
# if not done:
# next_state = current_screen - last_screen
# else:
# next_state = None
# Store the transition in memory
memory.push(last_state, action, current_state, reward)
# Move to the next state
# state = next_state
# Perform one step of the optimization (on the target network)
optimize_model()
# if done:
# episode_durations.append(t + 1)
# plot_durations()
# break
# Update the target network, copying all weights and biases in DQN
# os.mkdir(f'results/episode_{i_episode}')
if i_episode % SAVE_EVERY == 0:
with open(f'results_2/particles_pos_episode_{i_episode}.pkl', 'wb') as f:
pickle.dump(env.data, f)
with open(f'results_2/fishnet_pos_episode_{i_episode}.pkl', 'wb') as f:
pickle.dump(env.fishnet_pos_history, f)
if i_episode % TARGET_UPDATE == 0:
target_net.load_state_dict(policy_net.state_dict())
print('Complete')
torch.save(policy_net.state_dict(), 'results/model2.pt')
```
<a href="https://colab.research.google.com/github/Tessellate-Imaging/Monk_Object_Detection/blob/master/example_notebooks/4_efficientdet/Monk%20Type%20to%20Coco%20-%20Example%202.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Installation
- Run these commands
- git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
- cd Monk_Object_Detection/3_mxrcnn/installation
- Select the right requirements file and run
- cat requirements_cuda9.0.txt | xargs -n 1 -L 1 pip install
```
! git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
# For colab use the command below
! cd Monk_Object_Detection/3_mxrcnn/installation && cat requirements_colab.txt | xargs -n 1 -L 1 pip install
# For Local systems and cloud select the right CUDA version
#! cd Monk_Object_Detection/3_mxrcnn/installation && cat requirements_cuda9.0.txt | xargs -n 1 -L 1 pip install
```
# Monk Format
## Dataset Directory Structure
../sample_dataset/ship (root)
|
|-----------Images (img_dir)
| |
| |------------------img1.jpg
| |------------------img2.jpg
| |------------------.........(and so on)
|
|
|-----------train_labels.csv (anno_file)
## Annotation file format
| Id | Labels |
| --- | --- |
| img1.jpg | x1 y1 x2 y2 label1 x1 y1 x2 y2 label2 |
- Labels: xmin ymin xmax ymax label
- xmin, ymin - top left corner of bounding box
- xmax, ymax - bottom right corner of bounding box
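As a quick illustration (a minimal sketch, separate from the conversion script below; `parse_monk_labels` is a helper name introduced here), the `Labels` column can be parsed into `(xmin, ymin, xmax, ymax, label)` tuples by splitting on the delimiter and walking the tokens five at a time:

```python
def parse_monk_labels(label_str, delimiter=" "):
    """Split a Monk-format label string into (xmin, ymin, xmax, ymax, label) tuples."""
    tokens = label_str.split(delimiter)
    boxes = []
    for j in range(len(tokens) // 5):
        # first four tokens of each group are coordinates, the fifth is the class label
        x1, y1, x2, y2 = (int(t) for t in tokens[j*5 : j*5 + 4])
        boxes.append((x1, y1, x2, y2, tokens[j*5 + 4]))
    return boxes

print(parse_monk_labels("10 20 110 220 ship 30 40 50 60 boat"))
# → [(10, 20, 110, 220, 'ship'), (30, 40, 50, 60, 'boat')]
```

This is the same five-token walk that the conversion code below performs when it collects the class list and the bounding boxes.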
# COCO Format
## Dataset Directory Structure
../sample_dataset (root_dir)
|
|------ship (coco_dir)
| |
| |---Images (img_dir)
| |----|
| |-------------------img1.jpg
| |-------------------img2.jpg
| |-------------------.........(and so on)
|
|
| |---annotations (anno_dir)
| |----|
| |--------------------instances_Train.json
| |--------------------classes.txt
- instances_Train.json -> In proper COCO format
- classes.txt -> A list of classes in alphabetical order
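For reference, here is a hedged sketch of the minimal shape that `instances_Train.json` takes (illustrative values only; field names follow the standard COCO detection layout, with `bbox` stored as `[x, y, width, height]`):

```python
import json

# Minimal COCO-style skeleton with one image and one annotation (illustrative values)
coco = {
    "type": "instances",
    "images": [{"file_name": "img1.jpg", "height": 480, "width": 640, "id": 0}],
    "annotations": [{
        "id": 0, "image_id": 0, "segmentation": [], "iscrowd": 0,
        "bbox": [10, 20, 100, 200],  # x, y, width, height
        "area": 100 * 200,
        "category_id": 0,
    }],
    "categories": [{"supercategory": "master", "id": 0, "name": "ship"}],
}

# Round-trip through JSON and sanity-check the required top-level keys
loaded = json.loads(json.dumps(coco))
assert {"images", "annotations", "categories"} <= loaded.keys()
print(loaded["annotations"][0]["bbox"])  # → [10, 20, 100, 200]
```

The conversion code in the next cell builds exactly these three lists, converting Monk's `xmin ymin xmax ymax` corners to COCO's `[x, y, w, h]` boxes.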
```
import os
import numpy as np
import cv2
import dicttoxml
import xml.etree.ElementTree as ET
from xml.dom.minidom import parseString
from tqdm import tqdm
import shutil
import json
import pandas as pd
```
# Sample Dataset Credits
- credits: https://github.com/experiencor/kangaroo
```
# Provide details on directory in Monk Format
root = "Monk_Object_Detection/example_notebooks/sample_dataset/ship/";
img_dir = "Images/";
anno_file = "train_labels.csv";
# Need not change anything below
dataset_path = root;
images_folder = root + "/" + img_dir;
annotations_path = root + "/annotations/";
if not os.path.isdir(annotations_path):
os.mkdir(annotations_path)
input_images_folder = images_folder;
input_annotations_path = root + "/" + anno_file;
output_dataset_path = root;
output_image_folder = input_images_folder;
output_annotation_folder = annotations_path;
tmp = img_dir.replace("/", "");
output_annotation_file = output_annotation_folder + "/instances_" + tmp + ".json";
output_classes_file = output_annotation_folder + "/classes.txt";
if not os.path.isdir(output_annotation_folder):
os.mkdir(output_annotation_folder);
df = pd.read_csv(input_annotations_path);
columns = df.columns
delimiter = " ";
list_dict = [];
anno = [];
for i in range(len(df)):
img_name = df[columns[0]][i];
labels = df[columns[1]][i];
tmp = labels.split(delimiter);
for j in range(len(tmp)//5):
label = tmp[j*5+4];
if(label not in anno):
anno.append(label);
anno = sorted(anno)
for i in tqdm(range(len(anno))):
tmp = {};
tmp["supercategory"] = "master";
tmp["id"] = i;
tmp["name"] = anno[i];
list_dict.append(tmp);
anno_f = open(output_classes_file, 'w');
for i in range(len(anno)):
anno_f.write(anno[i] + "\n");
anno_f.close();
coco_data = {};
coco_data["type"] = "instances";
coco_data["images"] = [];
coco_data["annotations"] = [];
coco_data["categories"] = list_dict;
image_id = 0;
annotation_id = 0;
for i in tqdm(range(len(df))):
img_name = df[columns[0]][i];
labels = df[columns[1]][i];
tmp = labels.split(delimiter);
image_in_path = input_images_folder + "/" + img_name;
img = cv2.imread(image_in_path, 1);
h, w, c = img.shape;
images_tmp = {};
images_tmp["file_name"] = img_name;
images_tmp["height"] = h;
images_tmp["width"] = w;
images_tmp["id"] = image_id;
coco_data["images"].append(images_tmp);
for j in range(len(tmp)//5):
x1 = int(tmp[j*5+0]);
y1 = int(tmp[j*5+1]);
x2 = int(tmp[j*5+2]);
y2 = int(tmp[j*5+3]);
label = tmp[j*5+4];
annotations_tmp = {};
annotations_tmp["id"] = annotation_id;
annotation_id += 1;
annotations_tmp["image_id"] = image_id;
annotations_tmp["segmentation"] = [];
annotations_tmp["ignore"] = 0;
annotations_tmp["area"] = (x2-x1)*(y2-y1);
annotations_tmp["iscrowd"] = 0;
annotations_tmp["bbox"] = [x1, y1, x2-x1, y2-y1];
annotations_tmp["category_id"] = anno.index(label);
coco_data["annotations"].append(annotations_tmp)
image_id += 1;
outfile = open(output_annotation_file, 'w');
json_str = json.dumps(coco_data, indent=4);
outfile.write(json_str);
outfile.close();
```
# DSCI 525 - Web and Cloud Computing
***Milestone 4:*** In this milestone, you will deploy the machine learning model you trained in milestone 3.
Milestone 4 checklist :
- [X] Use an EC2 instance.
- [X] Develop your API here in this notebook.
- [X] Copy it to an ```app.py``` file in your EC2 instance.
- [X] Run your API for other consumers and test among your colleagues.
- [X] Summarize your journey.
```
## Import all the packages that you need
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
```
## 1. Develop your API
rubric={mechanics:45}
You probably learned how to set up primary URL endpoints from the ```sampleproject.ipynb``` notebook and have them process and return some data. Here we are going to create a new endpoint that accepts a POST request containing the features required to run the machine learning model that you trained and saved in the last milestone (i.e., a user will post the 25 climate model rainfall predictions, the features, needed to make a prediction with your machine learning model). Your code should then process this data, use your model to make a prediction, and return that prediction to the user. To get you started, I've given you a template which you should fill out to set up this functionality:
***NOTE:*** You won't be able to test the flask module (or the API you make here) unless you go through steps in ```2. Deploy your API```. However, here you can make sure that you develop all your functions and inputs properly.
```python
from flask import Flask, request, jsonify
import joblib
app = Flask(__name__)
# 1. Load your model here
model = joblib.load(...)
# 2. Define a prediction function
def return_prediction(...):
# format input_data here so that you can pass it to model.predict()
return model.predict(...)
# 3. Set up home page using basic html
@app.route("/")
def index():
# feel free to customize this if you like
return """
<h1>Welcome to our rain prediction service</h1>
To use this service, make a JSON post request to the /predict url with 25 climate model outputs.
"""
# 4. define a new route which will accept POST requests and return model predictions
@app.route('/predict', methods=['POST'])
def rainfall_prediction():
content = request.json # this extracts the JSON content we sent
prediction = return_prediction(...)
results = {...} # return whatever data you wish, it can be just the prediction
# or it can be the prediction plus the input data, it's up to you
return jsonify(results)
```
```
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from flask import Flask, request, jsonify
import joblib
app = Flask(__name__)
# 1. Load your model here
model = joblib.load('model.joblib')
# 2. Define a prediction function
def return_prediction(data):
# format input_data here so that you can pass it to model.predict()
return float(model.predict(np.array(data, ndmin = 2)))
# 3. Set up home page using basic html
@app.route("/")
def index():
# feel free to customize this if you like
return """
<h1>Welcome to our rain prediction service</h1>
To use this service, make a JSON post request to the /predict url with 25 climate model outputs.
"""
# 4. define a new route which will accept POST requests and return model predictions
@app.route('/predict', methods=['POST'])
def rainfall_prediction():
content = request.json # this extracts the JSON content we sent
prediction = return_prediction(content["data"])
results = {"Input": content["data"],
"Prediction": prediction} # return whatever data you wish, it can be just the prediction
# or it can be the prediction plus the input data, it's up to you
return jsonify(results)
```
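Before moving on to deployment, it can help to sanity-check the JSON round trip that the `/predict` route relies on. This is a standard-library sketch only; the 25 feature values are simply the example ones used for testing with `curl` later on.

```python
import json

# Hypothetical feature vector: one value per climate model (25 in total)
features = [1, 2, 3, 4, 53, 11, 22, 37, 41, 53, 11, 24, 31, 44, 53,
            11, 22, 35, 42, 53, 12, 23, 31, 42, 53]

# Client side: this is the body a POST request would carry
payload = json.dumps({"data": features})

# Server side: this is what Flask's request.json hands to rainfall_prediction()
content = json.loads(payload)
print(len(content["data"]))  # 25
```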
## 2. Deploy your API
rubric={mechanics:40}
Once your API (app.py) is working we're ready to deploy it! For this, do the following:
1. SSH into your EC2 instance from milestone 2. It's fine if you want to spin up another EC2 instance; if you do, make sure you terminate any other running instances.
2. Make a file `app.py` in your instance and copy what you developed above into it.
2.1 You can use the Linux editor ```vi```. More details on the vi editor [here](https://www.guru99.com/the-vi-editor.html). I recommend doing it this way; knowing some basics like ```:wq, :q!, dd``` will help.
2.2 Or else you can make a file called app.py on your laptop and copy it over to your EC2 instance using ```scp```. Eg: ```scp -r -i "ggeorgeAD.pem" ~/Desktop/worker.py ubuntu@ec2-xxx.ca-central-1.compute.amazonaws.com:~/```
3. Download your model from s3 to your EC2 instance.
4. Presumably you already have `pip` or `conda` installed on your instance from your previous milestone. You should use one of those package managers to install the dependencies of your API, like `flask`, `joblib`, `sklearn`, etc.
4.1. You may have installed packages in your TLJH using [Installing pip packages](https://tljh.jupyter.org/en/latest/howto/env/user-environment.html#installing-pip-packages). If you want to make them available to users outside of JupyterHub (which you do in this case, as we are logging into the EC2 instance as user ```ubuntu``` via ```ssh -i privatekey ubuntu@<host_name>```), you can follow these [instructions](https://tljh.jupyter.org/en/latest/howto/env/user-environment.html#accessing-user-environment-outside-jupyterhub).
4.2. Alternatively you can install the required packages inside your terminal.
- Install conda:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
- Install packages (there might be others):
conda install flask scikit-learn joblib
5. Now you're ready to start your service: go ahead and run `flask run --host=0.0.0.0 --port=8080`. This will make your service available at your EC2 instance's IP address on port 8080. Please make sure that you run this from the directory where ```app.py``` and ```model.joblib``` reside.
6. You can now access your service by typing your EC2 instance's public IPv4 address appended with `:8080` into a browser, so something like `http://<your_EC2_ip>:8080`.
7. You should use `curl` to send a post request to your service to make sure it's working as expected.
>EG: curl -X POST http://your_EC2_ip:8080/predict -d '{"data":[1,2,3,4,53,11,22,37,41,53,11,24,31,44,53,11,22,35,42,53,12,23,31,42,53]}' -H "Content-Type: application/json"
8. Now, what happens if you exit your connection with the EC2 instance? Can you still reach your service?
9. There are several options we could use to help us persist our server even after we exit our shell session. We'll be using `screen`. `screen` will allow us to create a separate session within which we can run `flask` and which won't shut down when we exit the main shell session. Read [this](https://linuxize.com/post/how-to-use-linux-screen/) to learn more on ```screen```.
10. Now, create a new `screen` session (think of this as a new, separate shell) using: `screen -S myapi`. If you want to list already created sessions, do ```screen -list```. If you want to attach to an existing one, use ```screen -x myapi```.
11. Within that session, start up your flask app. You can then exit the session by pressing `Ctrl + A` and then `D`. This detaches the session; once you log back into the EC2 instance you can reattach it using ```screen -x myapi```.
12. Feel free to exit your connection with the EC2 instance now and try accessing your service again with `curl`. You should find that the service has now persisted!
13. ***CONGRATULATIONS!!!*** You have successfully got to the end of our milestones. Move to Task 3 and submit it.
## 3. Summarize your journey from Milestone 1 to Milestone 4
rubric={mechanics:10}
>There is no format or structure on how you write this. (also, no minimum number of words). It's your choice on how well you describe it.
Our Journey from Milestone 1 to Milestone 4:
In milestone 1, we attempted to clean and wrangle data for our model, but ran into speed issues locally depending on the machine that it was run on. We had issues with duplicating data through different iterations. Also, as part of this milestone we benchmarked each person’s computer when running the script to combine the data files.
In milestone 2, we set up our EC2 instance and repeated the same steps from Milestone 1, but ran into fewer issues with regards to speed. We moved the data to S3 bucket at the end of the milestone to store data. We also created logins for all users on Tiny Little Jupyter Hub (TLJH) so that we had the option to work individually. At this point, the data was ready to be applied to a model.
In milestone 3, we created our machine learning model on EC2 and then set up an EMR cluster which we accessed through `FoxyProxy Standard` in Firefox to help speed up hyperparameter optimization. Once we found the most optimal parameters, we applied those parameters to train and save the model to S3 in `joblib` format.
In milestone 4, we developed and deployed the API to access the model from Milestone 3 using `Flask`. We developed the app on our EC2 instance on TLJH and after installing the required dependencies, deployed using `screen` to get a persistent session so that it does not end after we close the Terminal. Finally, we used this to perform a prediction on sample input, as seen on the screenshot below.
https://github.com/UBC-MDS/DSCI_525_Group13_Rainfall/blob/main/img/m4_task2.png
<img src="../img/m4_task2.png" alt="result" style="width: 1000px;"/>
## 4. Submission instructions
rubric={mechanics:5}
In the textbox provided on Canvas please put a link where TAs can find the following:
- [X] This notebook with solution to ```1 & 3```
- [X] Screenshot from
- [X] Output after trying curl. Here is a [sample](https://github.ubc.ca/MDS-2020-21/DSCI_525_web-cloud-comp_students/blob/master/Milestones/milestone4/images/curl_deploy_sample.png). This is just an example; your input/output doesn't have to look like this, you can design the way you like. But at a minimum, it should show your prediction value.
<center>
<img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# **Exploratory Data Analysis Lab**
Estimated time needed: **30** minutes
In this module you get to work with the cleaned dataset from the previous module.
In this assignment you will perform the task of exploratory data analysis.
You will find out the distribution of data, presence of outliers and also determine the correlation between different columns in the dataset.
## Objectives
In this lab you will perform the following:
* Identify the distribution of data in the dataset.
* Identify outliers in the dataset.
* Remove outliers from the dataset.
* Identify correlation between features in the dataset.
***
## Hands on Lab
Import the pandas module.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
Load the dataset into a dataframe.
```
df = pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m2_survey_data.csv")
```
## Distribution
### Determine how the data is distributed
The column `ConvertedComp` contains Salary converted to annual USD salaries using the exchange rate on 2019-02-01.
This assumes 12 working months and 50 working weeks.
Plot the distribution curve for the column `ConvertedComp`.
```
# your code goes here
sns.distplot(a=df["ConvertedComp"],bins=20,hist=False)
plt.show()
```
Plot the histogram for the column `ConvertedComp`.
```
# your code goes here
sns.distplot(a=df["ConvertedComp"],bins=20,kde=False)
plt.show()
```
What is the median of the column `ConvertedComp`?
```
# your code goes here
df["ConvertedComp"].median()
```
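The median is the right summary here because salary distributions have extreme values that drag the mean upward. A toy illustration with made-up numbers:

```python
from statistics import median, mean

# Made-up salaries with one extreme value
salaries = [40_000, 52_000, 61_000, 75_000, 250_000]
med, avg = median(salaries), mean(salaries)
print(med)  # barely moved by the outlier
print(avg)  # dragged up by the outlier
```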
How many responders identified themselves only as a **Man**?
```
# your code goes here
df["Gender"].value_counts()
```
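`value_counts()` tallies exact strings, so multi-select answers such as `"Man;Woman"` form their own categories. That is why reading off the count for the exact key `"Man"` answers the "only as a Man" question. A small sketch with made-up responses:

```python
from collections import Counter

# Toy Gender column (made-up responses); Counter mirrors value_counts()
gender = ["Man", "Woman", "Man", "Man;Woman", "Man", "Woman"]
counts = Counter(gender)
print(counts["Man"])  # the multi-select answer is a separate key
```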
Find out the median ConvertedComp of responders identified themselves only as a **Woman**?
```
# your code goes here
woman = df[df["Gender"] == "Woman"]
woman["ConvertedComp"].median()
```
Give the five number summary for the column `Age`?
**Double click here for hint**.
<!--
min,q1,median,q3,max of a column are its five number summary.
-->
```
# your code goes here
df["Age"].describe()
# df["Age"].median()
```
Plot a histogram of the column `Age`.
```
# your code goes here
plt.figure(figsize=(10,5))
sns.distplot(a=df["Age"],bins=20,kde=False)
plt.show()
```
## Outliers
### Finding outliers
Find out if outliers exist in the column `ConvertedComp` using a box plot?
```
# your code goes here
sns.boxplot(x=df.ConvertedComp, data=df)
plt.show()
```
Find out the Inter Quartile Range for the column `ConvertedComp`.
```
# your code goes here
# the 25% and 75% rows of describe() are Q1 and Q3; IQR = Q3 - Q1
Q1 = df["ConvertedComp"].quantile(0.25)
Q3 = df["ConvertedComp"].quantile(0.75)
print(Q3 - Q1)
```
Find out the upper and lower bounds.
```
# your code goes here
Q1 = df["ConvertedComp"].quantile(0.25)
Q3 = df["ConvertedComp"].quantile(0.75)
IQR = Q3 - Q1
lower_bound = Q1 - 1.5 * IQR
upper_bound = Q3 + 1.5 * IQR
print(lower_bound, upper_bound)
```
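The 1.5 × IQR rule used above ("Tukey's fences") can be checked by hand on a small made-up sample, using the same linear interpolation between ranks that pandas' default `quantile` method applies:

```python
# Tukey's fences on a toy sample (made-up values, for illustration only)
data = sorted([10, 12, 11, 14, 13, 95, 12, 11, 13, 12])

def quantile(xs, q):
    # linear interpolation between the two closest ranks (pandas' default)
    pos = (len(xs) - 1) * q
    lo, hi = int(pos), min(int(pos) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (pos - lo)

q1, q3 = quantile(data, 0.25), quantile(data, 0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in data if x < lower or x > upper]
print(lower, upper, outliers)  # the value 95 falls outside the fences
```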
Identify how many outliers are there in the `ConvertedComp` column.
```
# your code goes here
outliers = (df["ConvertedComp"] < (Q1 - 1.5 * IQR)) | (df["ConvertedComp"] > (Q3 + 1.5 * IQR))
outliers.value_counts()
```
Create a new dataframe by removing the outliers from the `ConvertedComp` column.
```
# your code goes here
lower = Q1 - 1.5 * IQR
upper = Q3 + 1.5 * IQR
convertedcomp_out = df[(df["ConvertedComp"] >= lower) & (df["ConvertedComp"] <= upper)]
print(convertedcomp_out["ConvertedComp"].median())
print(convertedcomp_out["ConvertedComp"].mean())
```
## Correlation
### Finding correlation
Find the correlation between `Age` and all other numerical columns.
```
# your code goes here
df.corr()["Age"].sort_values(ascending=False)
```
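Each cell that `corr()` produces is a Pearson correlation coefficient. Computing one by hand on toy numbers shows what the matrix summarizes (the two made-up series here are perfectly linear, so r = 1):

```python
from statistics import mean

def pearson(xs, ys):
    # r = cov(x, y) / (std(x) * std(y)), written out explicitly
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

age = [25, 30, 35, 40, 45]          # made-up ages
comp = [50, 60, 70, 80, 90]         # made-up compensations, perfectly linear in age
r = pearson(age, comp)
print(r)  # 1.0
```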
## Authors
Ramesh Sannareddy
### Other Contributors
Rav Ahuja
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ----------------- | ---------------------------------- |
| 2020-10-17 | 0.1 | Ramesh Sannareddy | Created initial version of the lab |
Copyright © 2020 IBM Corporation. This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDA0321ENSkillsNetwork21426264-2021-01-01\&cm_mmc=Email_Newsletter-\_-Developer_Ed%2BTech-\_-WW_WW-\_-SkillsNetwork-Courses-IBM-DA0321EN-SkillsNetwork-21426264\&cm_mmca1=000026UJ\&cm_mmca2=10006555\&cm_mmca3=M12345678\&cvosrc=email.Newsletter.M12345678\&cvo_campaign=000026UJ).
# Scene collections - WIP
Scene collections define the scenes which are used together to calculate features, such as virtual time series.
To create them we need the look up tables created in the previous notebook.
**TODO**: How to properly save the scene collection such that it is easy to work with them and Snakemake?
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
from pprint import pprint
from src import configs
prjconf = configs.ProjectConfigParser()
tilenames = prjconf.get("Params", "tiles").split(" ")
# override the configured tiles with an explicit selection
tilenames = ['32UNU', '32UPU', '32UQU', '33UUP', '32TPT', '32TQT', '33TUN']
tilenames
```
## Utilities
We can get all names of stored scene collections:
```
existing_scenecoll_names = prjconf.get_scene_collection_names()
existing_scenecoll_names
```
We can read existing scene collections and the parameters by using the name.
```
if len(existing_scenecoll_names) > 0:
    scenecoll_params_stored = prjconf.read_scene_collection_params(existing_scenecoll_names[0])
    pprint(scenecoll_params_stored)
    display(prjconf.read_scene_collection(existing_scenecoll_names[0], tile="32UNU").head(3))
```
## Scene collection parameter
```
scenecoll_params_all = {
"scoll01":
{
"product": ["L30"],
"start_date": "2018-01-01",
"end_date": "2018-12-31",
"max_cloud_cover": 75,
"min_spatial_coverage": 0
}
}
```
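The selection criteria in `scoll01` amount to a simple per-scene predicate, which the pandas boolean mask below implements. As a sketch, the same logic applied to a few hypothetical scene records (all values made up; ISO date strings compare correctly as plain strings):

```python
# Hypothetical scene records mimicking the look-up-table columns
params = {"product": ["L30"], "start_date": "2018-01-01", "end_date": "2018-12-31",
          "max_cloud_cover": 75, "min_spatial_coverage": 0}
scenes = [
    {"product": "L30", "date": "2018-06-01", "cloud_cover": 30, "spatial_coverage": 90},
    {"product": "S30", "date": "2018-06-01", "cloud_cover": 30, "spatial_coverage": 90},  # wrong product
    {"product": "L30", "date": "2017-12-31", "cloud_cover": 30, "spatial_coverage": 90},  # out of range
    {"product": "L30", "date": "2018-06-01", "cloud_cover": 80, "spatial_coverage": 90},  # too cloudy
]

def keep(s):
    return (params["start_date"] <= s["date"] <= params["end_date"]
            and s["product"] in params["product"]
            and s["cloud_cover"] <= params["max_cloud_cover"]
            and s["spatial_coverage"] >= params["min_spatial_coverage"])

selected = [s for s in scenes if keep(s)]
print(len(selected))  # only the first record passes all criteria
```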
## Create scene collections
```
tilenames
scenecoll_name = "scoll01"
for tile in tilenames:
scenecoll_params = scenecoll_params_all[scenecoll_name]
exist_ok=True
print("*" * 80)
print(tile)
print("Scene collection parameter:")
pprint(scenecoll_params)
path__hls_tile_lut = prjconf.get_path("Raw", "hls_tile_lut", tile=tile)
df_scenes = pd.read_csv(path__hls_tile_lut)
print(df_scenes.shape)
    # plot the selection criteria
ax = df_scenes.plot(x="date", y="cloud_cover", color='b', figsize=(18, 6))
ax = df_scenes.plot(x="date", y="spatial_coverage", color='r', ax=ax)
ax.axhline(y=scenecoll_params["min_spatial_coverage"], color='r')
ax.axhline(y=scenecoll_params["max_cloud_cover"], color='b')
idx = (df_scenes["date"] >= scenecoll_params["start_date"]) & \
(df_scenes["date"] <= scenecoll_params["end_date"]) & \
(df_scenes["product"].isin(scenecoll_params["product"])) & \
(df_scenes["cloud_cover"] <= scenecoll_params["max_cloud_cover"]) & \
(df_scenes["spatial_coverage"] >= scenecoll_params["min_spatial_coverage"])
print(idx.sum())
df_scenecoll = df_scenes[idx] \
.reset_index(drop=True) \
.drop("path", axis=1) \
.sort_values(["product", "date", "cloud_cover"])
#display(df_scenecoll.head())
print(f"Number of scenes in the scene collection: {len(df_scenecoll)}")
# Write a scene collection.
# Note (!!!) that for your own safety this throws an error if it exists.
# scenecoll_name = f"scenecoll01__tile_{tile}"
# scenecoll_name += f"__date_{scenecoll_params['start_date']}T{scenecoll_params['end_date']}"
# scenecoll_name += f"__product_{'-'.join(scenecoll_params['product'])}"
# scenecoll_name += f"__maxcc_{scenecoll_params['max_cloud_cover']}"
# scenecoll_name += f"__minsc_{scenecoll_params['min_spatial_coverage']}"
prjconf.write_scene_collection(df_scenecoll, scenecoll_name, scenecoll_params, tile=tile, exist_ok=exist_ok)
```
# LeNet Lab

Source: Yan LeCun
## Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
```
The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
```
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
```
## Visualize Data
View a sample from the dataset.
You do not need to modify this section.
```
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
```
## Preprocess Data
Shuffle the training data.
You do not need to modify this section.
```
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
```
## Setup TensorFlow
The `EPOCH` and `BATCH_SIZE` values affect the training speed and model accuracy.
You do not need to modify this section.
```
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
```
## TODO: Implement LeNet-5
Implement the [LeNet-5](http://yann.lecun.com/exdb/lenet/) neural network architecture.
This is the only cell you need to edit.
### Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
### Architecture
**Layer 1: Convolutional.** The output shape should be 28x28x6.
**Activation.** Your choice of activation function.
**Pooling.** The output shape should be 14x14x6.
**Layer 2: Convolutional.** The output shape should be 10x10x16.
**Activation.** Your choice of activation function.
**Pooling.** The output shape should be 5x5x16.
**Flatten.** Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do so is by using `tf.contrib.layers.flatten`, which is already imported for you.
**Layer 3: Fully Connected.** This should have 120 outputs.
**Activation.** Your choice of activation function.
**Layer 4: Fully Connected.** This should have 84 outputs.
**Activation.** Your choice of activation function.
**Layer 5: Fully Connected (Logits).** This should have 10 outputs.
### Output
Return the logits from the final fully connected layer.
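Before filling in the cell, it's worth sanity-checking the shapes listed above. With VALID padding, a convolution or pooling layer of window `k` and stride `s` maps a spatial size `n` to `floor((n - k) / s) + 1`:

```python
def valid_out(n, k, s=1):
    # VALID padding: floor((n - k) / s) + 1
    return (n - k) // s + 1

n = 32
n = valid_out(n, 5)         # conv1, 5x5 window -> 28
n = valid_out(n, 2, s=2)    # pool1, 2x2/stride 2 -> 14
n = valid_out(n, 5)         # conv2, 5x5 window -> 10
n = valid_out(n, 2, s=2)    # pool2, 2x2/stride 2 -> 5
flat = n * n * 16           # flatten 5x5x16 -> 400
print(n, flat)
```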
```
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
weights = {
"c1": tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean=mu, stddev=sigma)),
"c2": tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean=mu, stddev=sigma)),
"fc1": tf.Variable(tf.truncated_normal(shape=(400, 120), mean=mu, stddev=sigma)),
"fc2": tf.Variable(tf.truncated_normal(shape=(120, 84), mean=mu, stddev=sigma)),
"fc3": tf.Variable(tf.truncated_normal(shape=(84, 10), mean=mu, stddev=sigma))
}
biases = {
"c1": tf.Variable(tf.zeros(6)),
"c2": tf.Variable(tf.zeros(16)),
"fc1": tf.Variable(tf.zeros(120)),
"fc2": tf.Variable(tf.zeros(84)),
"fc3": tf.Variable(tf.zeros(10))
}
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
convo1 = tf.nn.conv2d(x, weights["c1"], strides=[1, 1, 1, 1], padding="VALID")
convo1 = tf.nn.bias_add(convo1, biases["c1"])
# TODO: Activation.
convo1 = tf.nn.relu(convo1)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
convo1 = tf.nn.max_pool(convo1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
# TODO: Layer 2: Convolutional. Output = 10x10x16.
convo2 = tf.nn.conv2d(convo1, weights["c2"], strides=[1, 1, 1, 1], padding="VALID")
convo2 = tf.nn.bias_add(convo2, biases["c2"])
# TODO: Activation.
convo2 = tf.nn.relu(convo2)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
convo2 = tf.nn.max_pool(convo2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
# TODO: Flatten. Input = 5x5x16. Output = 400.
flattened = tf.contrib.layers.flatten(convo2)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1 = tf.matmul(flattened, weights["fc1"]) + biases["fc1"]
# TODO: Activation.
fc1 = tf.nn.relu(fc1)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2 = tf.matmul(fc1, weights["fc2"]) + biases["fc2"]
# TODO: Activation.
fc2 = tf.nn.relu(fc2)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 10.
logits = tf.matmul(fc2, weights["fc3"]) + biases["fc3"]
return logits
```
## Features and Labels
Train LeNet to classify [MNIST](http://yann.lecun.com/exdb/mnist/) data.
`x` is a placeholder for a batch of input images.
`y` is a placeholder for a batch of output labels.
You do not need to modify this section.
```
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
```
## Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
```
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
```
## Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
```
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
```
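Note why `evaluate` multiplies each batch accuracy by `len(batch_x)`: the final batch is usually smaller than `BATCH_SIZE`, so a plain average of batch accuracies would be biased. A quick arithmetic sketch with made-up numbers:

```python
# Weighted average over uneven batches (what evaluate() accumulates)
batch_sizes = [128, 128, 44]    # last batch is smaller
batch_accs = [0.95, 0.90, 1.0]  # made-up per-batch accuracies
total = sum(a * n for a, n in zip(batch_accs, batch_sizes)) / sum(batch_sizes)
print(round(total, 4))  # differs from the naive mean of the three accuracies
```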
## Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
```
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
```
## Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
```
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
```
```
import keras
keras.__version__
```
# Deep Dream
This notebook contains the code samples found in Chapter 8, Section 2 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
----
[...]
## Implementing Deep Dream in Keras
We will start from a convnet pre-trained on ImageNet. In Keras, we have many such convnets available: VGG16, VGG19, Xception, ResNet50...
albeit the same process is doable with any of these, your convnet of choice will naturally affect your visualizations, since different
convnet architectures result in different learned features. The convnet used in the original Deep Dream release was an Inception model, and
in practice Inception is known to produce very nice-looking Deep Dreams, so we will use the InceptionV3 model that comes with Keras.
```
from keras.applications import inception_v3
from keras import backend as K
# We will not be training our model,
# so we use this command to disable all training-specific operations
K.set_learning_phase(0)
# Build the InceptionV3 network.
# The model will be loaded with pre-trained ImageNet weights.
model = inception_v3.InceptionV3(weights='imagenet',
include_top=False)
```
Next, we compute the "loss", the quantity that we will seek to maximize during the gradient ascent process. In Chapter 5, for filter
visualization, we were trying to maximize the value of a specific filter in a specific layer. Here we will simultaneously maximize the
activation of all filters in a number of layers. Specifically, we will maximize a weighted sum of the L2 norm of the activations of a
set of high-level layers. The exact set of layers we pick (as well as their contribution to the final loss) has a large influence on the
visuals that we will be able to produce, so we want to make these parameters easily configurable. Lower layers result in
geometric patterns, while higher layers result in visuals in which you can recognize some classes from ImageNet (e.g. birds or dogs).
We'll start from a somewhat arbitrary configuration involving four layers --
but you will definitely want to explore many different configurations later on:
```
# Dict mapping layer names to a coefficient
# quantifying how much the layer's activation
# will contribute to the loss we will seek to maximize.
# Note that these are layer names as they appear
# in the built-in InceptionV3 application.
# You can list all layer names using `model.summary()`.
layer_contributions = {
'mixed2': 0.2,
'mixed3': 3.,
'mixed4': 2.,
'mixed5': 1.5,
}
```
Now let's define a tensor that contains our loss, i.e. the weighted sum of the L2 norm of the activations of the layers listed above.
```
# Get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers])
# Define the loss.
loss = K.variable(0.)
for layer_name in layer_contributions:
# Add the L2 norm of the features of a layer to the loss.
coeff = layer_contributions[layer_name]
activation = layer_dict[layer_name].output
# We avoid border artifacts by only involving non-border pixels in the loss.
scaling = K.prod(K.cast(K.shape(activation), 'float32'))
loss += coeff * K.sum(K.square(activation[:, 2: -2, 2: -2, :])) / scaling
```
Now we can set up the gradient ascent process:
```
# This holds our generated image
dream = model.input
# Compute the gradients of the dream with regard to the loss.
grads = K.gradients(loss, dream)[0]
# Normalize gradients.
grads /= K.maximum(K.mean(K.abs(grads)), 1e-7)
# Set up function to retrieve the value
# of the loss and gradients given an input image.
outputs = [loss, grads]
fetch_loss_and_grads = K.function([dream], outputs)
def eval_loss_and_grads(x):
outs = fetch_loss_and_grads([x])
loss_value = outs[0]
grad_values = outs[1]
return loss_value, grad_values
def gradient_ascent(x, iterations, step, max_loss=None):
for i in range(iterations):
loss_value, grad_values = eval_loss_and_grads(x)
if max_loss is not None and loss_value > max_loss:
break
print('...Loss value at', i, ':', loss_value)
x += step * grad_values
return x
```
Finally, here is the actual Deep Dream algorithm.
First, we define a list of "scales" (also called "octaves") at which we will process the images. Each successive scale is larger than
previous one by a factor 1.4 (i.e. 40% larger): we start by processing a small image and we increasingly upscale it:

Then, for each successive scale, from the smallest to the largest, we run gradient ascent to maximize the loss we have previously defined,
at that scale. After each gradient ascent run, we upscale the resulting image by 40%.
To avoid losing a lot of image detail after each successive upscaling (resulting in increasingly blurry or pixelated images), we leverage a
simple trick: after each upscaling, we reinject the lost details back into the image, which is possible since we know what the original
image should look like at the larger scale. Given a small image S and a larger image size L, we can compute the difference between the
original image (assumed larger than L) resized to size L and the original resized to size S -- this difference quantifies the details lost
when going from S to L.
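The detail-reinjection arithmetic can be illustrated in one dimension with made-up numbers. The real code uses `scipy.ndimage.zoom`; here a crude nearest-neighbour resize stands in for it, which is enough to show that adding back `original - upscaled(shrunk)` restores what downscaling destroyed:

```python
# 1-D toy illustration of the detail-reinjection trick
def resize(xs, size):
    # crude nearest-neighbour resampling (stand-in for scipy.ndimage.zoom)
    return [xs[int(i * len(xs) / size)] for i in range(size)]

original = [0, 2, 4, 6, 8, 10, 12, 14]   # the "large" image
small = resize(original, 4)               # downscale: detail is lost
upscaled = resize(small, 8)               # upscale: blocky approximation
lost_detail = [o - u for o, u in zip(original, upscaled)]
restored = [u + d for u, d in zip(upscaled, lost_detail)]
print(restored == original)  # True
```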
The code below leverages the following straightforward auxiliary Numpy functions, which all do just as their name suggests. They
require SciPy to be installed.
```
import scipy
from keras.preprocessing import image
def resize_img(img, size):
img = np.copy(img)
factors = (1,
float(size[0]) / img.shape[1],
float(size[1]) / img.shape[2],
1)
return scipy.ndimage.zoom(img, factors, order=1)
def save_img(img, fname):
pil_img = deprocess_image(np.copy(img))
scipy.misc.imsave(fname, pil_img)
def preprocess_image(image_path):
# Util function to open, resize and format pictures
# into appropriate tensors.
img = image.load_img(image_path)
img = image.img_to_array(img)
img = np.expand_dims(img, axis=0)
img = inception_v3.preprocess_input(img)
return img
def deprocess_image(x):
# Util function to convert a tensor into a valid image.
def deprocess_image(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((3, x.shape[2], x.shape[3]))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((x.shape[1], x.shape[2], 3))
    x /= 2.
    x += 0.5
    x *= 255.
    x = np.clip(x, 0, 255).astype('uint8')
    return x
import numpy as np

# Playing with these hyperparameters will also allow you to achieve new effects
step = 0.01  # Gradient ascent step size
num_octave = 3  # Number of scales at which to run gradient ascent
octave_scale = 1.4  # Size ratio between scales
iterations = 20  # Number of ascent steps per scale

# If our loss gets larger than 10,
# we will interrupt the gradient ascent process, to avoid ugly artifacts
max_loss = 10.

# Fill this with the path to the image you want to use
base_image_path = '/home/ubuntu/data/original_photo_deep_dream.jpg'

# Load the image into a Numpy array
img = preprocess_image(base_image_path)

# We prepare a list of shape tuples
# defining the different scales at which we will run gradient ascent
original_shape = img.shape[1:3]
successive_shapes = [original_shape]
for i in range(1, num_octave):
    shape = tuple([int(dim / (octave_scale ** i)) for dim in original_shape])
    successive_shapes.append(shape)

# Reverse list of shapes, so that they are in increasing order
successive_shapes = successive_shapes[::-1]

# Resize the Numpy array of the image to our smallest scale
original_img = np.copy(img)
shrunk_original_img = resize_img(img, successive_shapes[0])

for shape in successive_shapes:
    print('Processing image shape', shape)
    img = resize_img(img, shape)
    img = gradient_ascent(img,
                          iterations=iterations,
                          step=step,
                          max_loss=max_loss)
    upscaled_shrunk_original_img = resize_img(shrunk_original_img, shape)
    same_size_original = resize_img(original_img, shape)
    lost_detail = same_size_original - upscaled_shrunk_original_img
    img += lost_detail
    shrunk_original_img = resize_img(original_img, shape)
    save_img(img, fname='dream_at_scale_' + str(shape) + '.png')

save_img(img, fname='final_dream.png')
from matplotlib import pyplot as plt
plt.imshow(deprocess_image(np.copy(img)))
plt.show()
```
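The octave schedule computed above is easy to sanity-check in isolation. A minimal sketch, assuming a hypothetical starting shape of `(400, 600)` (in the notebook the real value comes from the loaded image):

```python
# Reproduce the octave/shape schedule from the snippet above.
# original_shape = (400, 600) is an illustrative assumption.
num_octave = 3      # Number of scales at which to run gradient ascent
octave_scale = 1.4  # Size ratio between scales

original_shape = (400, 600)
successive_shapes = [original_shape]
for i in range(1, num_octave):
    shape = tuple(int(dim / (octave_scale ** i)) for dim in original_shape)
    successive_shapes.append(shape)
successive_shapes = successive_shapes[::-1]  # smallest scale first

print(successive_shapes)  # -> [(204, 306), (285, 428), (400, 600)]
```

Each octave shrinks both dimensions by a factor of `octave_scale`, and the reversed list means gradient ascent starts at the coarsest scale and works up to the original resolution.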
```
# output of tool D
parent_folder = '/Users/kavyasrinet/Desktop/other_actions/0/'
folder_name_D = parent_folder + 'toolD/'
tool_D_out_file = folder_name_D + 'all_agreements.txt'
# output of tool C
parent_folder= '/Users/kavyasrinet/Desktop/other_actions/5/'
folder_name_C = parent_folder + 'toolC/'
tool_C_out_file = folder_name_C + 'all_agreements.txt'
# combine outputs
# check if all keys of tool C are annotated yes -> put directly
# if not, check the child in tool D and combine
# construct map of tool C
toolC_map = {}
import ast
import os.path
from os import path

if path.exists(tool_C_out_file):
    with open(tool_C_out_file) as f:
        for line in f.readlines():
            line = line.strip()
            cmd, ref_obj_text, a_d = line.split("\t")
            if cmd in toolC_map:
                toolC_map[cmd].update(ast.literal_eval(a_d))
            else:
                toolC_map[cmd] = ast.literal_eval(a_d)
print(len(toolC_map.keys()))
# construct map of tool 2
toolD_map = {}
if path.exists(tool_D_out_file):
    with open(tool_D_out_file) as f2:
        for line in f2.readlines():
            line = line.strip()
            cmd, comparison_text, comparison_dict = line.split("\t")
            if cmd in toolD_map:
                print("BUGGGGG")
            # add the comparison dict to command -> dict
            toolD_map[cmd] = ast.literal_eval(comparison_dict)
print(len(toolD_map.keys()))
import ast

def all_yes(a_dict):
    if type(a_dict) == str:
        a_dict = ast.literal_eval(a_dict)
    for k, val in a_dict.items():
        if type(val) == list and val[0] == "no":
            return False
    return True

def clean_up_dict(a_dict):
    if type(a_dict) == str:
        a_dict = ast.literal_eval(a_dict)
    new_d = {}
    for k, val in a_dict.items():
        if type(val) == list:
            if val[0] in ["yes", "no"]:
                new_d[k] = val[1]
        elif type(val) == dict:
            new_d[k] = clean_up_dict(val)
        else:
            new_d[k] = val
    return new_d
# post-process spans and "contains_coreference" : "no"
def merge_indices(indices):
    a, b = indices[0]
    for i in range(1, len(indices)):
        a = min(a, indices[i][0])
        b = max(b, indices[i][1])
    return [a, b]

def fix_spans(d):
    new_d = {}
    if type(d) == str:
        d = ast.literal_eval(d)
    for k, v in d.items():
        if k == "contains_coreference" and v == "no":
            continue
        if type(v) == list:
            new_d[k] = [0, merge_indices(v)]
            continue
        elif type(v) == dict:
            new_d[k] = fix_spans(v)
            continue
        else:
            new_d[k] = v
    return new_d
def fix_ref_obj(clean_dict):
    val = clean_dict
    new_clean_dict = {}
    if 'special_reference' in val:
        new_clean_dict['special_reference'] = val['special_reference']
        val.pop('special_reference')
    if 'repeat' in val:
        new_clean_dict['repeat'] = val['repeat']
        val.pop('repeat')
    if val:
        new_clean_dict['filters'] = val
    return new_clean_dict
# combine and write output to a file
i =0
# what these actions will look like in the map
import json
toolC_updated_map = {}
from pprint import pprint
# update dict of toolC with tool D and keep that in tool C's map
#pprint(toolC_map)
for cmd, a_dict in toolC_map.items():
# remove the ['yes', val] etc
for key in a_dict.keys():
a_dict_child = a_dict[key]
#pprint(a_dict_child)
clean_dict = clean_up_dict(a_dict_child)
# add in filters to reference objects
# fix reference object inside location of reference object
if 'location' in clean_dict and 'reference_object' in clean_dict['location']:
value = clean_dict['location']['reference_object']
clean_dict['location']['reference_object'] = fix_ref_obj(value)
new_clean_dict = fix_ref_obj(clean_dict)
if all_yes(a_dict_child):
if cmd in toolC_updated_map:
toolC_updated_map[cmd][key] = new_clean_dict
else:
toolC_updated_map[cmd]= {key: new_clean_dict}
continue
new_clean_dict.pop('comparison', None)
comparison_dict = toolD_map[cmd] # check on this again
valid_dict = {}
valid_dict[key] = {}
#valid_dict['reference_object'] = {}
# valid_dict['reference_object']['filters'] = clean_dict
# valid_dict['reference_object']['filters'].update(comparison_dict)
valid_dict[key]['filters'] = new_clean_dict
valid_dict[key]['filters'].update(comparison_dict)
toolC_updated_map[cmd] = valid_dict # only gets populated if filters exist
pprint(toolC_updated_map)
print(len(toolC_updated_map.keys()))
print(len(toolC_map.keys()))
# output of tool 1
folder_name_A = parent_folder + 'toolA/'
tool_A_out_file = folder_name_A + 'all_agreements.txt'
# output of tool 2
folder_name_B = parent_folder + 'toolB/'
tool_B_out_file = folder_name_B + 'all_agreements.txt'
# combine outputs
# check if all keys of tool A are annotated yes -> put directly
# if not, check the child in tool B and combine
# construct map of tool A
toolA_map = {}
with open(tool_A_out_file) as f:
    for line in f.readlines():
        line = line.strip()
        cmd, a_d = line.split("\t")
        toolA_map[cmd] = a_d
print(len(toolA_map.keys()))
# construct map of tool 2
toolB_map = {}
import os.path
from os import path

if path.isfile(tool_B_out_file):
    with open(tool_B_out_file) as f2:
        for line in f2.readlines():
            line = line.strip()
            cmd, child, child_dict = line.split("\t")
            if cmd in toolB_map and child in toolB_map[cmd]:
                print("BUGGG")
            if cmd not in toolB_map:
                toolB_map[cmd] = {}
            toolB_map[cmd][child] = child_dict
print(len(toolB_map.keys()))
import ast

def all_yes(a_dict):
    if type(a_dict) == str:
        a_dict = ast.literal_eval(a_dict)
    for k, val in a_dict.items():
        if type(val) == list and val[0] == "no":
            return False
    return True
def clean_dict_1(a_dict):
    if type(a_dict) == str:
        a_dict = ast.literal_eval(a_dict)
    new_d = {}
    for k, val in a_dict.items():
        if type(val) == list:
            if val[0] in ["yes", "no"]:
                new_d[k] = val[1]
        elif type(val) == dict:
            new_d[k] = clean_dict_1(val)
        else:
            new_d[k] = val
    # only for now
    if 'dance_type_span' in new_d:
        new_d['dance_type'] = {}
        new_d['dance_type']['dance_type_name'] = new_d['dance_type_span']
        new_d.pop('dance_type_span')
    if 'dance_type_name' in new_d:
        new_d['dance_type'] = {}
        new_d['dance_type']['dance_type_name'] = new_d['dance_type_name']
        new_d.pop('dance_type_name')
    return new_d
# post-process spans and "contains_coreference" : "no"
def merge_indices(indices):
    a, b = indices[0]
    for i in range(1, len(indices)):
        a = min(a, indices[i][0])
        b = max(b, indices[i][1])
    return [a, b]

def fix_put_mem(d):
    if type(d) == str:
        d = ast.literal_eval(d)
    new_d = copy.deepcopy(d)
    del new_d['action_type']
    if 'has_tag' in new_d and 'upsert' in new_d:
        new_d['upsert']['memory_data']['has_tag'] = new_d['has_tag']
        del new_d['has_tag']
    return new_d
def fix_spans(d):
    new_d = {}
    if type(d) == str:
        d = ast.literal_eval(d)
    for k, v in d.items():
        if k == "contains_coreference" and v == "no":
            continue
        if type(v) == list:
            if k == "tag_val":
                new_d["has_tag"] = [0, merge_indices(v)]
            else:
                new_d[k] = [0, merge_indices(v)]
            continue
        elif type(v) == dict:
            new_d[k] = fix_spans(v)
            continue
        else:
            new_d[k] = v
    return new_d
# combine and write output to a file
i =0
# what these actions will look like in the map
import json
import copy
dance_type_map = {
'point': 'point',
'look': 'look_turn',
'turn' : 'body_turn'
}
from pprint import pprint
# update dict of tool1 with tool 2
folder_name_combined = '/Users/kavyasrinet/Downloads/combined_tools'
#folder_name_combined = '/Users/kavyasrinet/Downloads/'
# with open(folder_name_combined + 'all_combined.txt', 'w') as f:
with open(parent_folder + '/all_combined.txt', 'w') as f:
for cmd, a_dict in toolA_map.items():
# remove the ['yes', val] etc
clean_dict = clean_dict_1(a_dict)
print(clean_dict)
if all_yes(a_dict):
action_type = clean_dict['action_type']
valid_dict = {}
valid_dict['dialogue_type'] = clean_dict['dialogue_type']
del clean_dict['dialogue_type']
clean_dict['action_type'] = clean_dict['action_type'].upper()
act_dict = fix_spans(clean_dict)
valid_dict['action_sequence'] = [act_dict]
f.write(cmd + "\t" + json.dumps(valid_dict) + "\n")
print(cmd)
print(valid_dict)
print("All yes")
print("*"*20)
continue
if clean_dict['action_type'] == 'noop':
f.write(cmd + "\t" + json.dumps(clean_dict) + "\n")
print(clean_dict)
print("NOOP")
print("*"*20)
continue
if clean_dict['action_type'] == 'otheraction':
f.write(cmd + "\t" + str(a_dict) + "\n")
continue
if toolB_map and cmd in toolB_map:
#print(cmd)
child_dict_all = toolB_map[cmd]
# update action dict with all children except for reference object
for k, v in child_dict_all.items():
if k not in clean_dict:
print("BUGGGG")
if type(v) == str:
v = ast.literal_eval(v)
#print(k, v)
if not v:
continue
if 'reference_object' in v[k]:
#print("HHHH")
value = v[k]['reference_object']
v[k]['reference_object'] = fix_ref_obj(value)
#print(cmd, a_dict, child_dict)
if k == "tag_val":
clean_dict.update(v)
elif k == "facing":
action_type = clean_dict['action_type']
# set to dance
clean_dict['action_type'] = 'DANCE'
clean_dict['dance_type'] = {dance_type_map[action_type]: v['facing']}
clean_dict.pop('facing')
else:
clean_dict[k] = v[k]
# print("after tool B")
# pprint(clean_dict)
# print("after tool C")
# # now add reference object dict
# pprint(clean_dict)
ref_obj_dict = {}
if toolC_updated_map and cmd in toolC_updated_map:
ref_obj_dict = toolC_updated_map[cmd]
clean_dict.update(ref_obj_dict)
if 'receiver_reference_object' in clean_dict:
clean_dict['receiver'] = {'reference_object': clean_dict['receiver_reference_object']}
clean_dict.pop('receiver_reference_object')
if 'receiver_location' in clean_dict:
clean_dict['receiver'] = {'location': clean_dict['receiver_location']}
clean_dict.pop('receiver_location')
actual_dict = copy.deepcopy((clean_dict))
action_type = actual_dict['action_type']
valid_dict = {}
valid_dict['dialogue_type'] = actual_dict['dialogue_type']
del actual_dict['dialogue_type']
actual_dict['action_type'] = actual_dict['action_type'].upper()
act_dict = fix_spans(actual_dict)
valid_dict['action_sequence'] = [act_dict]
print(cmd)
pprint(valid_dict)
print("*"*40)
f.write(cmd + "\t" + json.dumps(valid_dict) + "\n")
with open('/Users/kavyasrinet/Github/annotated_data/with_get_give_bring/annotated_data_combined.txt') as f:
print(len(f.readlines()))
# for composites
# now skip composite_action and separate otheraction from combined dicts
mypath = '../../minecraft/python/craftassist/text_to_tree_tool/turk_data/composites/'
mypath = '/Users/kavyasrinet/Downloads/'
import ast
import json
f = []
all_cnt = 0
cmp = 0
other = 0
valid = 0
all_comp = set()
with open(mypath + "all_combined.txt") as f_read, open(mypath + 'all_combined_final.txt', 'w') as f:
for line in f_read.readlines():
line = line.strip()
text, d = line.split("\t")
actual_dict = ast.literal_eval(d.strip())
action_type = actual_dict['action_type']
if action_type == 'composite_action':
print(line)
cmp += 1
all_comp.add(text.strip())
continue
else:
valid += 1
valid_dict = {}
valid_dict['dialogue_type'] = actual_dict['dialogue_type']
del actual_dict['dialogue_type']
actual_dict['action_type'] = actual_dict['action_type'].upper()
valid_dict['action_sequence'] = [actual_dict]
f.write(text.strip() + "\t" + json.dumps(valid_dict) + "\n")
print(cmp)
print(len(all_comp))
print(other)
print(valid)
# now skip composite_action and separate otheraction from combined dicts
mypath = '../../minecraft/python/craftassist/text_to_tree_tool/turk_data/composites/'
from os import walk
import ast
import json
f = []
all_cnt = 0
cmp = 0
other = 0
valid = 0
all_comp = set()
with open(mypath + "all_final_annotations.txt", 'w') as f_final, open(mypath + "all_other_actions.txt", 'w') as f_other:
for (dirpath, dirnames, filenames) in walk(mypath):
if filenames:
for file_name in filenames:
if not (file_name.startswith('.') or file_name in ['all_final_annotations.txt', 'all_other_actions.txt']):
fn = dirpath + file_name
with open(fn) as f:
#print(fn, len(f.readlines()))
#all_cnt += len(f.readlines())
for line in f.readlines():
#print(line)
line = line.strip()
text, d = line.split("\t")
actual_dict = ast.literal_eval(d.strip())
#print(f)
action_type = actual_dict['action_type']
#print(action_type)
#print(action_type)
if action_type == 'composite_action':
cmp += 1
all_comp.add(text.strip())
continue
elif action_type == 'otheraction':
other += 1
f_other.write(line + "\n")
else:
valid += 1
valid_dict = {}
valid_dict['dialogue_type'] = actual_dict['dialogue_type']
del actual_dict['dialogue_type']
#print(actual_dict['action_type'])
actual_dict['action_type'] = actual_dict['action_type'].upper()
valid_dict['action_sequence'] = [actual_dict]
f_final.write(text.strip() + "\t" + json.dumps(valid_dict) + "\n")
print(cmp)
print(len(all_comp))
print(other)
print(valid)
i = 0
input_file = '../../minecraft/python/craftassist/text_to_tree_tool/turk_data/composites/all_combined_final.txt'
output_file = '../../minecraft/python/craftassist/text_to_tree_tool/turk_data/composites/all_combined_final_postprocessed.txt'
with open(input_file) as f, open(output_file, 'w') as f_w:
for line in f.readlines():
i+= 1
line = line.strip()
text, d = line.split("\t")
d = ast.literal_eval(d)
# if text.split()[0] in ['put', 'place', 'install']:
# print(text, d['action_sequence'][0])
action_dict = fix_spans(d['action_sequence'][0])
if action_dict['action_type'] == 'TAG':
updated_dict = fix_put_mem(action_dict)
new_d = {}
new_d['dialogue_type'] = d['dialogue_type']
new_d.update(updated_dict)
elif action_dict['action_type'] == 'ANSWER':
#print(d)
new_d = {}
new_d['dialogue_type'] = 'GET_MEMORY'
else:
if action_dict['action_type'] == 'COPY':
action_dict['action_type'] = 'BUILD'
d['action_sequence'] = [action_dict]
new_d = d
f_w.write(text+ "\t" + json.dumps(new_d) + "\n")
def RepresentsInt(s):
    try:
        int(s)
        return True
    except ValueError:
        return False

def find_int(sentence):
    words = sentence.split()
    for word in words:
        if RepresentsInt(word):
            return True
    return False
import random

def fix_int(sentence):
    random_degrees = ['-90', '-45', '45', '90', '135', '-135', '180', '-180', '360', '-360']
    words = sentence.strip().split()
    new_words = []
    for word in words:
        if RepresentsInt(word):  # and 'degrees' in words
            random_num = random.randrange(-360, 360)
            new_words.append(str(random_num))
        else:
            new_words.append(word)
    return " ".join(new_words).strip()
import json
orig_data = {}
with open('/Users/kavyasrinet/Github/minecraft/python/craftassist/text_to_tree_tool/turk_data/new_dance_form_data/first_65/all_combined.txt') as f:
for line in f.readlines():
line = line.strip()
t, d =line.split("\t")
d = json.loads(d)
orig_data[t] = d
print(len(orig_data.keys()))
more_num_data = {}
for k, v in orig_data.items():
if find_int(k):
for i in range(50):
new_t = fix_int(k)
more_num_data[new_t] = v
print(len(more_num_data.keys()))
dirs_map = {
    "left": "-90",
    "west": "-90",
    "right": "90",
    "east": "90",
    "front": "0",
    "north": "0",
    "back": "180",
    "south": "180",
    "front left": "-45",
    "northwest": "-45",
    "northeast": "45",
    "front right": "45",
    "southwest": "-135",
    "southeast": "135",
    "all the way around clockwise": "360",
    "all the way around anticlockwise": "-360",
    "all the way around": "360"
}
vertical_dirs_map = {
    "up": "90",
    "down": "-90"
}
import copy
print(len(dirs_map.keys()))
def substitute_dirs(k, v, key_name='relative_yaw', alternate=False):
words = k.split()
new_words = []
new_d = copy.deepcopy(v)
# print(k)
# print(v)
# return{}
# find other keys that are not this direction key
other_keys = []
for x in words:
if (x in dirs_map) or (x in vertical_dirs_map):
if (key_name == 'relative_yaw' and alternate == False) or (key_name == 'relative_pitch' and alternate==True):
new_d = copy.deepcopy(v)
k2 = list(new_d['dance_type'].keys())[0]
if (key_name in new_d['dance_type'][k2]) and (not new_d['dance_type'][k2][key_name].get('angle', None)):
return {k : v}
if key_name not in new_d['dance_type'][k2]:
return {k : v}
this_dir = x
other_keys = list(dirs_map.keys())
if this_dir in other_keys:
other_keys.remove(this_dir)
elif (key_name == 'relative_yaw' and alternate == True) or (key_name == 'relative_pitch' and alternate==False):
new_d = copy.deepcopy(v)
k2 = list(new_d['dance_type'].keys())[0]
if (key_name in new_d['dance_type'][k2]) and (not new_d['dance_type'][k2][key_name].get('angle', None)):
return {k : v}
if key_name not in new_d['dance_type'][k2]:
return {k : v}
this_dir = x
other_keys = list(vertical_dirs_map.keys())
if this_dir in other_keys:
other_keys.remove(this_dir)
new_dict = {}
for k in other_keys:
new_words = []
new_d = copy.deepcopy(v)
for x in words:
# if direction word, pick new
if (x in dirs_map) or (x in vertical_dirs_map):
# it has just one key
k2 = list(new_d['dance_type'].keys())[0]
if (key_name in new_d['dance_type'][k2]) and (not new_d['dance_type'][k2][key_name].get('angle', None)):
print("BUGGGGG")
new_words.append(k)
if alternate == False:
if key_name == 'relative_yaw':
new_d['dance_type'][k2][key_name]['angle'] = dirs_map[k]
else:
new_d['dance_type'][k2][key_name]['angle'] = vertical_dirs_map[k]
elif alternate == True:
if key_name == 'relative_yaw':
new_d['dance_type'][k2]['relative_pitch'] = {}
new_d['dance_type'][k2]['relative_pitch']['angle'] = vertical_dirs_map[k]
new_d['dance_type'][k2].pop('relative_yaw')
elif key_name == 'relative_pitch':
new_d['dance_type'][k2]['relative_yaw'] = {}
new_d['dance_type'][k2]['relative_yaw']['angle'] = dirs_map[k]
new_d['dance_type'][k2].pop('relative_pitch')
else:
new_words.append(x)
new_dict[" ".join(new_words)] = new_d
return new_dict
more_rel_dir_data = {}
all_dir_keys = set(dirs_map.keys())
all_dir_keys.update(set(vertical_dirs_map.keys()))
print(all_dir_keys)
print("*"*20)
for k, v in orig_data.items():
words = k.split()
if any(x in words for x in all_dir_keys):
# print(k, v)
# print("*"*20)
#this_dict = substitute_dirs(k, v, key_name='relative_yaw', alternate=False)
#this_dict = substitute_dirs(k, v, key_name='relative_yaw', alternate=True)
this_dict = substitute_dirs(k, v, key_name='relative_pitch', alternate=False)
# this_dict = substitute_dirs(k, v, key_name='relative_pitch', alternate=True)
more_rel_dir_data.update(this_dict)
print(k, v)
print(this_dict)
print("-"*20)
# print(len(more_rel_dir_data.keys()))
# print(more_rel_dir_data)
# print(k, v)
# break
#new_k, new_v = substitute_dirs(k, v)
#print(len(more_rel_dir_data.keys()))
orig_data.update(more_num_data)
print(len(orig_data.keys()))
orig_data.update(more_rel_dir_data)
print(len(orig_data.keys()))
new_postprocessed_data = {}
for k, v in orig_data.items():
actual_dict = copy.deepcopy((v))
action_type = actual_dict['action_type']
valid_dict = {}
valid_dict['dialogue_type'] = actual_dict['dialogue_type']
del actual_dict['dialogue_type']
actual_dict['action_type'] = actual_dict['action_type'].upper()
act_dict = fix_spans(actual_dict)
valid_dict['action_sequence'] = [act_dict]
new_postprocessed_data[k] = valid_dict
print(len(new_postprocessed_data.keys()))
with open('/Users/kavyasrinet/Github/minecraft/python/craftassist/text_to_tree_tool/turk_data/new_dance_form_data/first_65/augmented_combined.json', 'w') as f:
json.dump(new_postprocessed_data, f)
with open('../../minecraft/python/craftassist/text_to_tree_tool/turk_data/new_dance_form_data/first_65/augmented_combined.json') as f:
data = json.load(f)
gt_new = {}
i = 0
for k, v in new_postprocessed_data.items():
if i == 500:
break
else:
i += 1
gt_new[k] = v
print(len(gt_new.keys()))
# look + up etc , + pitch
# look right + yaw
for k, v in new_postprocessed_data.items():
if 'turn' in k:
print(k)
pprint(v)
print("*"*20)
gt_data = {}
with open('../../minecraft/python/craftassist/ground_truth_data.txt') as f:
for line in f.readlines():
text, ad = line.strip().split("\t")
ad_new = json.loads(ad)
if type(ad_new) == str:
ad_new = json.loads(ad_new)
gt_data[text] = ad_new
print(len(gt_data.keys()))
all_actions = set()
for k, v in gt_data.items():
if v['dialogue_type'] == 'HUMAN_GIVE_COMMAND' and 'action_sequence' in v and len(v['action_sequence']) == 1:
new_dict = copy.deepcopy(v)
act = new_dict['action_sequence'][0]
all_actions.add(act['action_type'])
# if act['action_type'] == 'DANCE':
# if 'dance_type' in act and type(act['dance_type']) == dict:
# key = list(act['dance_type'].keys())[0]
# if 'relative_yaw' in act['dance_type'][key] and 'yaw' in act['dance_type'][key]:
# act['dance_type'][key].pop('relative_yaw')
# print(k,v )
# print(k, act)
# print("*"*20)
# #act['dance_type'][key].pop('relative_yaw')
print(all_actions)
print(len(gt_data.keys()))
gt_data.update(gt_new)
print(len(gt_data.keys()))
with open('../../minecraft/python/craftassist/ground_truth_data.txt', 'w') as f:
for k, v in gt_data.items():
f.write(k + "\t" + json.dumps(v)+ "\n")
#print(gt_new)
if 'turn by 14 degrees' in gt_new:
print(gt_new['turn by 14 degrees'])
with open('../../minecraft/python/craftassist/ground_truth_data.txt') as f:
for line in f.readlines():
t , a = line.strip().split("\t")
a = json.loads(a)
if type(a) != dict:
print("nooooo")
```
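The span-merging helper used throughout the cells above can be exercised on its own. A minimal sketch with hypothetical span data, restating `merge_indices` so the snippet is self-contained:

```python
def merge_indices(indices):
    # Collapse a list of [start, end] spans into one covering span.
    a, b = indices[0]
    for i in range(1, len(indices)):
        a = min(a, indices[i][0])
        b = max(b, indices[i][1])
    return [a, b]

print(merge_indices([[2, 4], [7, 9], [1, 3]]))  # -> [1, 9]
```

This is the step `fix_spans` applies to every list-valued key, which is why multiple word-level spans in an annotation collapse to a single `[0, [start, end]]` entry.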
```
import bioformats
import deepdish as dd
import h5py
import javabridge
import numpy as np
import os.path
from pyprind import prog_percent
import SimpleITK as sitk
import tables
import time
from xml.etree import ElementTree as ETree
# https://stackoverflow.com/questions/40845304/runtimewarning-numpy-dtype-size-changed-may-indicate-binary-incompatibility
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
import deepdish as dd
import argparse
import numpy as np
import javabridge as jv
import bioformats as bf
from pyprind import prog_percent
from xml import etree as et
from queue import Queue
from threading import Thread, Lock
from enum import Enum
import torch as th
from torch.autograd import Variable
from torch.nn.functional import grid_sample
from tqdm import tqdm_notebook
import h5py
import os
from skimage.feature import blob_log
from skimage.draw import circle
from skimage.transform import AffineTransform, warp
from sklearn.preprocessing import normalize
import matplotlib.pyplot as plt
import matplotlib.cm as mplColorMap
import ipywidgets as widgets
#import ipyvolume as ipv
from scipy.interpolate import RegularGridInterpolator
SPACING_ZBB = (0.798, 0.798, 2)
SPACING_JAKOB = (0.7188675, 0.7188675, 10)
SPACING_JAKOB_HQ = (0.7188675, 0.7188675, 1)
def lif_get_metas(fn):
    md = bioformats.get_omexml_metadata(fn)  # Load meta data
    mdroot = ETree.fromstring(md)  # Parse XML
    # meta = mdroot[1][3].attrib # Get relevant meta data
    metas = list(map(lambda e: e.attrib, mdroot.iter('{http://www.openmicroscopy.org/Schemas/OME/2016-06}Pixels')))
    return metas

def lif_find_timeseries(fn):
    metas = lif_get_metas(fn)
    meta = None
    img_i = 0
    for i, m in enumerate(metas):
        if int(m['SizeT']) > 1:
            meta = m
            img_i = i
    if not meta:
        raise ValueError('lif does not contain an image with sizeT > 1')
    return img_i
def start_jvm():
    javabridge.start_vm(class_path=bioformats.JARS)
    log_level = 'ERROR'
    # reduce log level
    """
    rootLoggerName = javabridge.get_static_field("org/slf4j/Logger", "ROOT_LOGGER_NAME", "Ljava/lang/String;")
    rootLogger = javabridge.static_call("org/slf4j/LoggerFactory", "getLogger",
                                        "(Ljava/lang/String;)Lorg/slf4j/Logger;", rootLoggerName)
    logLevel = javabridge.get_static_field("ch/qos/logback/classic/Level", log_level, "Lch/qos/logback/classic/Level;")
    javabridge.call(rootLogger, "setLevel", "(Lch/qos/logback/classic/Level;)V", logLevel)
    """

def lif_open(fn):
    start_jvm()
    ir = bioformats.ImageReader(fn)
    return ir

def lif_read_stack(fn):
    ir = lif_open(fn)
    img_i = lif_find_timeseries(fn)
    shape, spacing = get_shape(fn, img_i)
    stack = np.empty(shape, dtype=np.uint16)
    # Load the whole stack...
    for t in prog_percent(range(stack.shape[0])):
        for z in range(stack.shape[1]):
            stack[t, z] = ir.read(t=t, z=z, c=0, series=img_i, rescale=False)
    return stack, spacing
class UnsupportedFormatException(Exception):
    pass

def get_shape(fn, index=0):
    """
    :param fn: image file
    :return: shape of that file
    """
    in_ext = os.path.splitext(fn)[1]
    if in_ext == '.h5':
        """
        f = tables.open_file(fn)
        return f.get_node('/stack').shape
        """
        img = load(fn)
        return img.shape
    elif in_ext == '.nrrd':
        img = load(fn)
        return img.shape
    elif in_ext == '.lif':
        metas = lif_get_metas(fn)
        meta = metas[index]
        shape = (
            int(meta['SizeT']),
            int(meta['SizeZ']),
            int(meta['SizeY']),
            int(meta['SizeX']),
        )
        order = meta['DimensionOrder']
        spacing = tuple([float(meta['PhysicalSize%s' % c]) for c in 'XYZ'])
        return shape, spacing
    else:
        raise UnsupportedFormatException('Input format "' + in_ext + '" is not supported.')
def __sitkread(filename):
    img = sitk.ReadImage(filename)
    spacing = img.GetSpacing()
    return sitk.GetArrayFromImage(img), spacing

def __sitkwrite(filename, data, spacing):
    img = sitk.GetImageFromArray(data)
    img.SetSpacing(spacing)
    sitk.WriteImage(img, filename)

def save(fn, data, spacing):
    out_ext = os.path.splitext(fn)[1]
    if out_ext == '.nrrd':
        __sitkwrite(fn, data, spacing)
    elif out_ext == '.h5':
        """
        with tables.open_file(fn, mode='w') as f:
            f.create_array('/', 'stack', data.astype(np.float32))
        f.close()
        """
        __sitkwrite(fn, data, spacing)
    else:
        raise UnsupportedFormatException('Output format "' + out_ext + '" is not supported.')
class Pyfish:
    def __init__(self, file_id, save_path="", align_to_frame=0, use_gpu=True, max_displacement=300, thread_count=4):
        #self.lif_file_path = lif_file_path
        self.file_id = file_id
        self.align_to_frame = align_to_frame  # the reference frame that all other frames are aligned to
        self.use_gpu = use_gpu
        self.max_displacement = max_displacement
        self.thread_count = thread_count
        self._start_lif_reader()
        print('Number of timepoints: ', self.n_timepoints)
        self.frame_shape = (self.n_timepoints, 1024, 1024)

    # start lif reader
    # set image stack shape
    # lif files can contain multiple stacks, so we pick the index of one with more than one frame in lif_stack_idx
    def _start_lif_reader(self):
        jv.start_vm(class_path=bf.JARS)
        log_level = 'ERROR'
        if self.file_id[1] == 'control':
            file_path = '//ZMN-HIVE/User-Data/Maria/control/fish' + self.file_id[0] + '_6dpf_medium.lif'
        elif self.file_id[1] == 'amph':
            file_path = '//ZMN-HIVE/User-Data/Maria/stimulus/fish' + self.file_id[0] + '_6dpf_amph.lif'
        self.ir = bf.ImageReader(file_path, perform_init=True)
        mdroot = et.ElementTree.fromstring(bf.get_omexml_metadata(file_path))
        mds = list(map(lambda e: e.attrib, mdroot.iter('{http://www.openmicroscopy.org/Schemas/OME/2016-06}Pixels')))
        # lif can contain multiple images, select one that is likely to be the timeseries
        self.metadata = None
        self.lif_stack_idx = 0
        for idx, md in enumerate(mds):
            if int(md['SizeT']) > 1:
                self.lif_stack_idx = idx
                self.metadata = md
        if not self.metadata:
            raise ValueError('lif does not contain an image with sizeT > 1')
        self.n_timepoints = int(self.metadata['SizeT'])

    def read_frame(self, z):
        frame = np.empty(self.frame_shape, dtype=np.uint16)
        for t in range(self.n_timepoints):
            frame[t] = self._read_plane(t, z)
        return frame

    def _read_plane(self, t, z):
        return self.ir.read(t=t, z=z, c=0, series=self.lif_stack_idx, rescale=False)

    def save_hdf5(self):
        start = time.time()
        hdf5_file = np.zeros((self.n_timepoints, 21, 1024, 1024), dtype='float32')
        for z in range(21):
            print('working on plane: ', z)
            dat = self.read_frame(z)
            hdf5_file[:, z, :, :] = dat
        end = time.time()
        SPACING_ZBB = (0.798, 0.798, 2)
        SPACING_JAKOB = (0.7188675, 0.7188675, 10)
        SPACING_JAKOB_HQ = (0.7188675, 0.7188675, 1)
        fn = '//ZMN-HIVE/User-Data/Maria/hdf5_conversion/fish' + self.file_id[0] + '_6dpf_medium.h5'
        #save(fn, hdf5_file, SPACING_JAKOB)
        with h5py.File(fn, 'w') as f:
            dset = f.create_dataset("data", data=hdf5_file)
        print('Time taken to save: ', end - start)
file_ids = [['11', 'control']]
for j in file_ids:
    data_processor = Pyfish(j)
    data_processor.save_hdf5()
```
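The shape and spacing extraction in `get_shape` boils down to reading the OME `Pixels` attributes. A stand-alone sketch with hypothetical metadata values (shaped like the dicts `lif_get_metas` returns):

```python
# Hypothetical OME 'Pixels' attributes, as returned by lif_get_metas.
meta = {
    'SizeT': '100', 'SizeZ': '21', 'SizeY': '1024', 'SizeX': '1024',
    'PhysicalSizeX': '0.7188675', 'PhysicalSizeY': '0.7188675',
    'PhysicalSizeZ': '10',
}

shape = tuple(int(meta['Size' + c]) for c in 'TZYX')
spacing = tuple(float(meta['PhysicalSize%s' % c]) for c in 'XYZ')

print(shape)    # -> (100, 21, 1024, 1024)
print(spacing)  # -> (0.7188675, 0.7188675, 10.0)
```

Note the axis order: the stack is allocated as `(T, Z, Y, X)`, while the physical spacing tuple follows SimpleITK's `(X, Y, Z)` convention.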
# Interacting with Ethereum using web3.py and Jupyter Notebooks
Step by step guide for setting up a Jupyter notebook, connecting to an Ethereum node and working with a Smart Contract.
In this tutorial we are using Python 3, so make sure that **python** and **pip** point to a Python 3 installation.
<hr>
## STEP 0: Getting tutorial materials
Grab a copy of the files that we use in this tutorial:
+ Using Git:
<code>git clone https://github.com/apguerrera/ethereum-notebooks.git</code>
+ Or download it manually from https://github.com/apguerrera/ethereum-notebooks
<hr>
## STEP 1: Installing dependencies
+ Install [Jupyter](https://jupyter.org/)
<code>pip install --upgrade pip</code>
<code>pip install jupyter</code>
+ Install [Web3.py](https://web3py.readthedocs.io/en/stable/), Python module for accessing Ethereum blockchain
<code>pip install web3</code>
+ Install [py-solc-x](https://pypi.org/project/py-solc-x/), Python module for compiling Solidity contracts
We use **py-solc-x** instead of **py-solc** to compile contracts, since py-solc doesn't support Solidity v0.5.x.
**py-solc-x** also lets you choose between different Solidity compiler versions.
<code>pip install py-solc-x</code>
Note: the module itself doesn't include the **solc** executable, so install the solc version 0.5.3 that we use in this tutorial:
<code>python -m solcx.install v0.5.3</code>
+ To install Geth go to https://ethereum.org/cli and follow the instructions
<hr>
## STEP 2: Running local Geth node
+ Go to the project directory and run in your terminal:
<code>geth --dev --dev.period 2 --datadir ./testchain --rpc --rpccorsdomain '*' --rpcport 8646 --rpcapi "eth,net,web3,debug" --port 32323 --maxpeers 0 console</code>
+ Or use the <code>runGeth.sh</code> script, which does exactly the same thing
<hr>
## STEP 3: Running Jupyter notebook
**If you're already viewing this notebook in Jupyter live mode, just skip this step.**
+ Open Jupyter notebooks by running the following in your terminal:
<code>jupyter notebook</code>
+ If you see an error message, try:
<code>jupyter-notebook</code>
This will open a window in your browser. Navigate to the project folder and open <code>EthereumNotebookNew.ipynb</code>
<hr>
## STEP 4: Connecting to Web3
Web3 has a provider type that lets you connect to a local Ethereum node or endpoint such as [Infura](https://infura.io/).
In our example, we’ll be connecting to a local Geth node running from the /testchain directory, but can be set to any Ethereum node that web3 can connect to.
```
from web3 import Web3
w3 = Web3(Web3.IPCProvider('./testchain/geth.ipc'))
w3.isConnected() # if it's false, something went wrong
# check that all accounts were pulled from ./testchain directory successfuly
w3.eth.accounts
```
## STEP 5: Compiling contracts with py-solc-x
```
# compile contract using solcx and return contract interface
# arguments are filepath to the contract and name of the contract
def compile_contract(path, name):
compiled_contacts = solcx.compile_files([path])
contract_interface = compiled_contacts['{}:{}'.format(path, name)]
return contract_interface
contract_path = './contracts/WhiteList.sol'
contract_name = 'WhiteList'
contract_interface = compile_contract(contract_path, contract_name)
print(contract_interface)
# check that py-solc-x and solc are installed correctly
import solcx
solcx.get_installed_solc_versions()
```
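Note that `solcx.compile_files` keys each compiled unit as `path:ContractName`, which is why `compile_contract` builds the lookup key with string formatting. A minimal illustration — the compiled dict below is a hypothetical stand-in, not real solcx output:

```python
# Hypothetical stand-in for the dict returned by solcx.compile_files.
compiled_contracts = {
    './contracts/WhiteList.sol:WhiteList': {'abi': [], 'bin': '0x'},
}

path, name = './contracts/WhiteList.sol', 'WhiteList'
key = '{}:{}'.format(path, name)

assert key in compiled_contracts
print(key)  # -> ./contracts/WhiteList.sol:WhiteList
```

If the lookup raises a `KeyError`, the usual culprits are a mismatched path (relative vs. absolute) or a contract name that differs from the `contract` declaration inside the Solidity file.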
## STEP 6: Deploying a contract to blockchain
In the next steps we'll be using some functions from [/scripts/util.py](https://github.com/apguerrera/ethereum-notebooks/blob/master/scripts/util.py) and [/scripts/whitelist.py](https://github.com/apguerrera/ethereum-notebooks/blob/master/scripts/whitelist.py). It's **highly recommended** to check out these Python files to get a better understanding of the next steps.
Also, we will pass the **w3** instance as an argument to imported functions. We don't use **w3** as a global variable, since it's possible to connect to different endpoints and therefore have more than one w3 object in your program.
```
# import function that decrypts keystore file and returns account object
# check out tutorial directory in /scripts/util.py
from scripts.util import account_from_key
# compile contract, deploy it from account specified, then return transaction hash and contract interface
def deploy_contract(w3, account, path, name):
    contract_interface = compile_contract(path, name)
    contract = w3.eth.contract(abi=contract_interface['abi'], bytecode=contract_interface['bin'])
    transaction = contract.constructor().buildTransaction({
        'nonce': w3.eth.getTransactionCount(account.address),
        'from': account.address
    })
    signed_transaction = w3.eth.account.signTransaction(transaction, account.privateKey)
    tx_hash = w3.eth.sendRawTransaction(signed_transaction.rawTransaction)
    return tx_hash.hex(), contract_interface
key_path = './testchain/keystore/UTC--2017-05-20T02-37-30.360937280Z--a00af22d07c87d96eeeb0ed583f8f6ac7812827e'
key_passphrase = '' # empty password for test keystore file, never do that in real life
account = account_from_key(w3, key_path, key_passphrase)
tx_hash, contract_interface = deploy_contract(w3, account, './contracts/WhiteList.sol', 'WhiteList')
tx_hash
```
Note: **deploy_contract doesn't return the address of the created contract**; it returns the hash of the transaction that creates the contract.
To get the address of the contract:
```
# import function that waits for deploy transaction to be included to block, and returns address of created contract
# check out tutorial directory in /scripts/util.py
from scripts.util import wait_contract_address
contract_address = wait_contract_address(w3, tx_hash)
contract_address
```
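Under the hood, `wait_contract_address` has to poll the node until the deploy transaction is mined and a receipt carrying the contract address appears. That pattern generalizes; here is a minimal sketch of such a polling helper (the `wait_for` name and its defaults are illustrative, not part of the tutorial scripts):

```python
import time

def wait_for(predicate, timeout=30.0, interval=0.5):
    # Poll `predicate` until it returns a truthy value, then return it.
    # wait_contract_address does essentially this with a predicate that
    # asks the node for the transaction receipt of tx_hash.
    deadline = time.monotonic() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within {} seconds".format(timeout))
        time.sleep(interval)
```

The same helper works for waiting on events or block confirmations: only the predicate changes.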
## STEP 7: Interacting with the contract
```
# import function that returns contract object using its address and ABI
# check out tutorial directory in /scripts/util.py
from scripts.util import get_contract
contract = get_contract(w3, contract_address, contract_interface['abi'])
contract.all_functions() # get all available functions of the contract
# check out /scripts/util.py and /scripts/whitelist.py
from scripts.whitelist import add_to_list
from scripts.util import wait_event
address_to_add = w3.eth.accounts[17]
tx_hash = add_to_list(w3, account, contract, [address_to_add])
event_added = wait_event(w3, contract, tx_hash, 'AccountListed')
if event_added:
    print(event_added[0]['args'])
# check out /scripts/whitelist.py
from scripts.whitelist import is_in_list
is_in_list(account, contract, address_to_add) # check if address in whitelist
```
## Moving forward
Now you know how to compile Solidity contracts using **solc** and **py-solc-x**, deploy contracts using **Web3** and interact with them!
To see other code snippets and related information please check out [tutorial's GitHub repo](https://github.com/apguerrera/ethereum-notebooks/) and **WhitelistExample** notebook.
<img src="https://github.com/pmservice/ai-openscale-tutorials/raw/master/notebooks/images/banner.png" align="left" alt="banner">
# Working with Watson Machine Learning
This notebook should be run with a **Python 3.7.x** runtime environment. **If you are viewing this in Watson Studio and do not see Python 3.7.x in the upper right corner of your screen, please update the runtime now.** It requires service credentials for the following services:
* Watson OpenScale
* Watson Machine Learning
* DB2
The notebook will train, create and deploy a German Credit Risk model, configure OpenScale to monitor that deployment, and inject seven days' worth of historical records and measurements for viewing in the OpenScale Insights dashboard.
### Contents
- [Setup](#setup)
- [Model building and deployment](#model)
- [OpenScale configuration](#openscale)
- [Quality monitor and feedback logging](#quality)
- [Fairness, drift monitoring and explanations](#fairness)
- [Custom monitors and metrics](#custom)
- [Historical data](#historical)
# Setup <a name="setup"></a>
## Package installation
```
import warnings
warnings.filterwarnings('ignore')
!pip install --upgrade pyspark==2.4 --no-cache | tail -n 1
!pip install --upgrade pandas==1.2.3 --no-cache | tail -n 1
!pip install --upgrade requests==2.23 --no-cache | tail -n 1
!pip install numpy==1.20.1 --no-cache | tail -n 1
!pip install SciPy --no-cache | tail -n 1
!pip install lime --no-cache | tail -n 1
!pip install --upgrade ibm-watson-machine-learning --user | tail -n 1
!pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1
```
### Action: restart the kernel!
## Configure credentials
- WOS_CREDENTIALS (CP4D)
- WML_CREDENTIALS (CP4D)
- DATABASE_CREDENTIALS (DB2 on CP4D or Cloud Object Storage (COS))
- SCHEMA_NAME
```
WOS_CREDENTIALS = {
"url": "***",
"username": "***",
"password": "***"
}
WML_CREDENTIALS = {
"url": "***",
"username": "***",
"password" : "***",
"instance_id": "wml_local",
"version" : "3.5" #If your env is CP4D 4.0 then specify "4.0" instead of "3.5"
}
#IBM DB2 database connection format example
DATABASE_CREDENTIALS = {
"hostname":"***",
"username":"***",
"password":"***",
"database":"***",
"port":"***",
"ssl":"***",
"sslmode":"***",
"certificate_base64":"***"}
```
### Action: put created schema name below.
```
SCHEMA_NAME = 'AIOSFASTPATHICP'
```
## Run the notebook
At this point, the notebook is ready to run. You can either run the cells one at a time, or click the **Kernel** option above and select **Restart and Run All** to run all the cells.
# Model building and deployment <a name="model"></a>
In this section you will learn how to train a Spark MLlib model and then deploy it as a web service using the Watson Machine Learning service.
## Load the training data from github
```
!rm german_credit_data_biased_training.csv
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/german_credit_data_biased_training.csv
from pyspark.sql import SparkSession
import pandas as pd
import json
spark = SparkSession.builder.getOrCreate()
pd_data = pd.read_csv("german_credit_data_biased_training.csv", sep=",", header=0)
df_data = spark.read.csv(path="german_credit_data_biased_training.csv", sep=",", header=True, inferSchema=True)
df_data.head()
```
## Explore data
```
df_data.printSchema()
print("Number of records: " + str(df_data.count()))
```
## Visualize data with pixiedust
```
#import pixiedust
display(df_data)
```
## Create a model
```
spark_df = df_data
(train_data, test_data) = spark_df.randomSplit([0.8, 0.2], 24)
MODEL_NAME = "Spark German Risk Model - Final"
DEPLOYMENT_NAME = "Spark German Risk Deployment - Final"
print("Number of records for training: " + str(train_data.count()))
print("Number of records for evaluation: " + str(test_data.count()))
spark_df.printSchema()
```
The code below creates a Random Forest Classifier with Spark, setting up string indexers for the categorical features and the label column. Finally, this notebook creates a pipeline including the indexers and the model, and does an initial Area Under ROC evaluation of the model.
```
from pyspark.ml.feature import OneHotEncoder, StringIndexer, IndexToString, VectorAssembler
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml import Pipeline, Model
from pyspark.ml.feature import SQLTransformer
features = [x for x in spark_df.columns if x != 'Risk']
categorical_features = ['CheckingStatus', 'CreditHistory', 'LoanPurpose', 'ExistingSavings', 'EmploymentDuration', 'Sex', 'OthersOnLoan', 'OwnsProperty', 'InstallmentPlans', 'Housing', 'Job', 'Telephone', 'ForeignWorker']
categorical_num_features = [x + '_IX' for x in categorical_features]
si_list = [StringIndexer(inputCol=x, outputCol=y) for x, y in zip(categorical_features, categorical_num_features)]
va_features = VectorAssembler(inputCols=categorical_num_features + [x for x in features if x not in categorical_features], outputCol="features")
si_label = StringIndexer(inputCol="Risk", outputCol="label").fit(spark_df)
label_converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=si_label.labels)
from pyspark.ml.classification import RandomForestClassifier
classifier = RandomForestClassifier(featuresCol="features")
pipeline = Pipeline(stages= si_list + [si_label, va_features, classifier, label_converter])
model = pipeline.fit(train_data)
```
**Note**: If you want filter features from model output please replace `*` with feature names to be retained in `SQLTransformer` statement.
```
predictions = model.transform(test_data)
evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderROC')
area_under_curve = evaluatorDT.evaluate(predictions)
evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderPR')
area_under_PR = evaluatorDT.evaluate(predictions)
#default evaluation is areaUnderROC
print("areaUnderROC = %g" % area_under_curve, "areaUnderPR = %g" % area_under_PR)
# extra code: evaluate more metrics by exporting them into pandas and numpy
from sklearn.metrics import classification_report
y_pred = predictions.toPandas()['prediction']
y_pred = ['Risk' if pred == 1.0 else 'No Risk' for pred in y_pred]
y_test = test_data.toPandas()['Risk']
print(classification_report(y_test, y_pred, target_names=['Risk', 'No Risk']))
```
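For intuition: the area under the ROC curve equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counting half. A minimal pure-Python sketch of that equivalence (illustrative only; the metric used above comes from Spark's `BinaryClassificationEvaluator`):

```python
def auc_roc(labels, scores):
    # Mann-Whitney U formulation of AUC: the fraction of (positive, negative)
    # pairs in which the positive example is ranked above the negative one.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranker scores 1.0, a random one about 0.5, and a perfectly inverted one 0.0.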
## Save training data to Cloud Object Storage
## Cloud object storage details
In the next cells, you will need to paste your Cloud Object Storage credentials. If you haven't worked with COS yet, please visit the getting started with COS tutorial. You can find the COS_API_KEY_ID and COS_RESOURCE_CRN variables in Service Credentials in the menu of your COS instance. The COS Service Credentials used must be created with the Role parameter set to Writer. Later, the training data file will be loaded to the bucket of your instance and used as the training reference in the subscription.
COS_ENDPOINT variable can be found in Endpoint field of the menu.
```
IAM_URL="https://iam.ng.bluemix.net/oidc/token"
COS_API_KEY_ID = "***"
COS_RESOURCE_CRN = "***" # eg "crn:v1:bluemix:public:cloud-object-storage:global:a/3bf0d9003abfb5d29761c3e97696b71c:d6f04d83-6c4f-4a62-a165-696756d63903::"
COS_ENDPOINT = "***" # Current list available at https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints
BUCKET_NAME = "***" #example: "credit-risk-training-data"
training_data_file_name="german_credit_data_biased_training.csv"
import ibm_boto3
from ibm_botocore.client import Config, ClientError
cos_client = ibm_boto3.resource("s3",
    ibm_api_key_id=COS_API_KEY_ID,
    ibm_service_instance_id=COS_RESOURCE_CRN,
    ibm_auth_endpoint="https://iam.bluemix.net/oidc/token",
    config=Config(signature_version="oauth"),
    endpoint_url=COS_ENDPOINT
)
with open(training_data_file_name, "rb") as file_data:
    cos_client.Object(BUCKET_NAME, training_data_file_name).upload_fileobj(
        Fileobj=file_data
    )
```
## Publish the model
In this section, the notebook uses Watson Machine Learning to save the model (including the pipeline) to the WML instance. Previous versions of the model are removed so that the notebook can be run again, resetting all data for another demo.
```
import json
from ibm_watson_machine_learning import APIClient
wml_client = APIClient(WML_CREDENTIALS)
wml_client.version
space_name = "tutorial-space"
# create the space and set it as default
space_meta_data = {
wml_client.spaces.ConfigurationMetaNames.NAME : space_name,
wml_client.spaces.ConfigurationMetaNames.DESCRIPTION : 'tutorial_space'
}
spaces = wml_client.spaces.get_details()['resources']
space_id = None
for space in spaces:
    if space['entity']['name'] == space_name:
        space_id = space["metadata"]["id"]
if space_id is None:
    space_id = wml_client.spaces.store(meta_props=space_meta_data)["metadata"]["id"]
print(space_id)
wml_client.set.default_space(space_id)
```
### Remove existing model and deployment
```
deployments_list = wml_client.deployments.get_details()
for deployment in deployments_list["resources"]:
    model_id = deployment["entity"]["asset"]["id"]
    deployment_id = deployment["metadata"]["id"]
    if deployment["metadata"]["name"] == DEPLOYMENT_NAME:
        print("Deleting deployment id", deployment_id)
        wml_client.deployments.delete(deployment_id)
        print("Deleting model id", model_id)
        wml_client.repository.delete(model_id)
wml_client.repository.list_models()
```
#### Add training data reference either from DB2 on CP4D or Cloud Object Storage
```
# COS training data reference example format
training_data_references = [
    {
        "id": "Credit Risk",
        "type": "s3",
        "connection": {
            "access_key_id": COS_API_KEY_ID,
            "endpoint_url": COS_ENDPOINT,
            "resource_instance_id": COS_RESOURCE_CRN
        },
        "location": {
            "bucket": BUCKET_NAME,
            "path": training_data_file_name
        }
    }
]
software_spec_uid = wml_client.software_specifications.get_id_by_name("spark-mllib_2.4")
print("Software Specification ID: {}".format(software_spec_uid))
model_props = {
    wml_client._models.ConfigurationMetaNames.NAME: "{}".format(MODEL_NAME),
    wml_client._models.ConfigurationMetaNames.TYPE: "mllib_2.4",
    wml_client._models.ConfigurationMetaNames.SOFTWARE_SPEC_UID: software_spec_uid,
    #wml_client._models.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: training_data_references,
    wml_client._models.ConfigurationMetaNames.LABEL_FIELD: "Risk",
}
print("Storing model ...")
published_model_details = wml_client.repository.store_model(
    model=model,
    meta_props=model_props,
    training_data=train_data,
    pipeline=pipeline)
model_uid = wml_client.repository.get_model_uid(published_model_details)
print("Done")
print("Model ID: {}".format(model_uid))
wml_client.repository.list_models()
```
## Deploy the model
The next section of the notebook deploys the model as a RESTful web service in Watson Machine Learning. The deployed model will have a scoring URL you can use to send data to the model for predictions.
```
deployment_details = wml_client.deployments.create(
    model_uid,
    meta_props={
        wml_client.deployments.ConfigurationMetaNames.NAME: "{}".format(DEPLOYMENT_NAME),
        wml_client.deployments.ConfigurationMetaNames.ONLINE: {}
    }
)
scoring_url = wml_client.deployments.get_scoring_href(deployment_details)
deployment_uid=wml_client.deployments.get_uid(deployment_details)
print("Scoring URL:" + scoring_url)
print("Model id: {}".format(model_uid))
print("Deployment id: {}".format(deployment_uid))
```
## Sample scoring
```
fields = ["CheckingStatus", "LoanDuration", "CreditHistory", "LoanPurpose", "LoanAmount", "ExistingSavings",
          "EmploymentDuration", "InstallmentPercent", "Sex", "OthersOnLoan", "CurrentResidenceDuration",
          "OwnsProperty", "Age", "InstallmentPlans", "Housing", "ExistingCreditsCount", "Job", "Dependents",
          "Telephone", "ForeignWorker"]
values = [
    ["no_checking", 13, "credits_paid_to_date", "car_new", 1343, "100_to_500", "1_to_4", 2, "female", "none", 3,
     "savings_insurance", 46, "none", "own", 2, "skilled", 1, "none", "yes"],
    ["no_checking", 24, "prior_payments_delayed", "furniture", 4567, "500_to_1000", "1_to_4", 4, "male", "none",
     4, "savings_insurance", 36, "none", "free", 2, "management_self-employed", 1, "none", "yes"],
]
scoring_payload = {"input_data": [{"fields": fields, "values": values}]}
scoring_response = wml_client.deployments.score(deployment_uid, scoring_payload)
scoring_response
```
# Configure OpenScale <a name="openscale"></a>
The notebook will now import the necessary libraries and set up a Python OpenScale client.
```
from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator
from ibm_watson_openscale import APIClient
from ibm_watson_openscale import *
from ibm_watson_openscale.supporting_classes.enums import *
from ibm_watson_openscale.supporting_classes import *
authenticator = CloudPakForDataAuthenticator(
    url=WOS_CREDENTIALS['url'],
    username=WOS_CREDENTIALS['username'],
    password=WOS_CREDENTIALS['password'],
    disable_ssl_verification=True
)
wos_client = APIClient(service_url=WOS_CREDENTIALS['url'],authenticator=authenticator)
wos_client.version
```
## Create datamart
### Set up datamart
Watson OpenScale uses a database to store payload logs and calculated metrics. If database credentials were not supplied above, the notebook will use the free, internal lite database. If database credentials were supplied, the datamart will be created there unless there is an existing datamart and the KEEP_MY_INTERNAL_POSTGRES variable is set to True. If an OpenScale datamart exists in Db2 or PostgreSQL, the existing datamart will be used and no data will be overwritten.
Prior instances of the German Credit model will be removed from OpenScale monitoring.
```
wos_client.data_marts.show()
data_marts = wos_client.data_marts.list().result.data_marts
if len(data_marts) == 0:
    if DATABASE_CREDENTIALS is not None:
        if SCHEMA_NAME is None:
            print("Please specify the SCHEMA_NAME and rerun the cell")
        print('Setting up external datamart')
        added_data_mart_result = wos_client.data_marts.add(
            background_mode=False,
            name="WOS Data Mart",
            description="Data Mart created by WOS tutorial notebook",
            database_configuration=DatabaseConfigurationRequest(
                database_type=DatabaseType.DB2,
                credentials=PrimaryStorageCredentialsLong(
                    hostname=DATABASE_CREDENTIALS['hostname'],
                    username=DATABASE_CREDENTIALS['username'],
                    password=DATABASE_CREDENTIALS['password'],
                    db=DATABASE_CREDENTIALS['database'],
                    port=DATABASE_CREDENTIALS['port'],
                    ssl=DATABASE_CREDENTIALS['ssl'],
                    sslmode=DATABASE_CREDENTIALS['sslmode'],
                    certificate_base64=DATABASE_CREDENTIALS['certificate_base64']
                ),
                location=LocationSchemaName(
                    schema_name=SCHEMA_NAME
                )
            )
        ).result
    else:
        print('Setting up internal datamart')
        added_data_mart_result = wos_client.data_marts.add(
            background_mode=False,
            name="WOS Data Mart",
            description="Data Mart created by WOS tutorial notebook",
            internal_database=True).result
    data_mart_id = added_data_mart_result.metadata.id
else:
    data_mart_id = data_marts[0].metadata.id
    print('Using existing datamart {}'.format(data_mart_id))
```
## Remove existing service provider connected with used WML instance.
Multiple service providers for the same engine instance are available in Watson OpenScale. To avoid duplicate service providers for the WML instance used in this tutorial, the following code deletes any existing service provider(s) and then adds a new one.
```
SERVICE_PROVIDER_NAME = "Watson Machine Learning V2"
SERVICE_PROVIDER_DESCRIPTION = "Added by tutorial WOS notebook."
service_providers = wos_client.service_providers.list().result.service_providers
for service_provider in service_providers:
    service_instance_name = service_provider.entity.name
    if service_instance_name == SERVICE_PROVIDER_NAME:
        service_provider_id = service_provider.metadata.id
        wos_client.service_providers.delete(service_provider_id)
        print("Deleted existing service_provider for WML instance: {}".format(service_provider_id))
```
## Add service provider
Watson OpenScale needs to be bound to the Watson Machine Learning instance to capture payload data into and out of the model.
**Note:** You can bind more than one engine instance if needed by calling `wos_client.service_providers.add` method. Next, you can refer to particular service provider using `service_provider_id`.
```
added_service_provider_result = wos_client.service_providers.add(
    name=SERVICE_PROVIDER_NAME,
    description=SERVICE_PROVIDER_DESCRIPTION,
    service_type=ServiceTypes.WATSON_MACHINE_LEARNING,
    deployment_space_id=space_id,
    operational_space_id="production",
    credentials=WMLCredentialsCP4D(
        url=WML_CREDENTIALS["url"],
        username=WML_CREDENTIALS["username"],
        password=WML_CREDENTIALS["password"],
        instance_id=None
    ),
    background_mode=False
).result
service_provider_id = added_service_provider_result.metadata.id
wos_client.service_providers.show()
asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id, deployment_id = deployment_uid, deployment_space_id = space_id).result['resources'][0]
asset_deployment_details
model_asset_details_from_deployment=wos_client.service_providers.get_deployment_asset(data_mart_id=data_mart_id,service_provider_id=service_provider_id,deployment_id=deployment_uid,deployment_space_id=space_id)
model_asset_details_from_deployment
```
## Subscriptions
### Remove existing credit risk subscriptions
This code removes previous subscriptions to the German Credit model to refresh the monitors with the new model and new data.
```
wos_client.subscriptions.show()
subscriptions = wos_client.subscriptions.list().result.subscriptions
for subscription in subscriptions:
    sub_model_id = subscription.entity.asset.asset_id
    if sub_model_id == model_uid:
        wos_client.subscriptions.delete(subscription.metadata.id)
        print('Deleted existing subscription for model', sub_model_id)
```
This code creates the model subscription in OpenScale using the Python client API. Note that we need to provide the model unique identifier, and some information about the model itself.
```
subscription_details = wos_client.subscriptions.add(
    data_mart_id=data_mart_id,
    background_mode=False,
    service_provider_id=service_provider_id,
    asset=Asset(
        asset_id=model_asset_details_from_deployment["entity"]["asset"]["asset_id"],
        name=model_asset_details_from_deployment["entity"]["asset"]["name"],
        url=model_asset_details_from_deployment["entity"]["asset"]["url"],
        asset_type=AssetTypes.MODEL,
        input_data_type=InputDataType.STRUCTURED,
        problem_type=ProblemType.BINARY_CLASSIFICATION
    ),
    deployment=AssetDeploymentRequest(
        deployment_id=asset_deployment_details['metadata']['guid'],
        name=asset_deployment_details['entity']['name'],
        deployment_type=DeploymentTypes.ONLINE,
        url=asset_deployment_details['entity']['scoring_endpoint']['url']
    ),
    asset_properties=AssetPropertiesRequest(
        label_column='Risk',
        probability_fields=['probability'],
        prediction_field='predictedLabel',
        feature_fields=["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"],
        categorical_fields=["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"],
        training_data_reference=TrainingDataReference(
            type='cos',
            location=COSTrainingDataReferenceLocation(
                bucket=BUCKET_NAME,
                file_name=training_data_file_name),
            connection=COSTrainingDataReferenceConnection.from_dict({
                "resource_instance_id": COS_RESOURCE_CRN,
                "url": COS_ENDPOINT,
                "api_key": COS_API_KEY_ID,
                "iam_url": IAM_URL})),
        training_data_schema=SparkStruct.from_dict(model_asset_details_from_deployment["entity"]["asset_properties"]["training_data_schema"])
    )
).result
subscription_id = subscription_details.metadata.id
subscription_id
import time
time.sleep(5)
payload_data_set_id = None
payload_data_set_id = wos_client.data_sets.list(
    type=DataSetTypes.PAYLOAD_LOGGING,
    target_target_id=subscription_id,
    target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id
if payload_data_set_id is None:
    print("Payload data set not found. Please check subscription status.")
else:
    print("Payload data set id: ", payload_data_set_id)
wos_client.data_sets.show()
```
### Score the model so we can configure monitors
Now that the WML service has been bound and the subscription has been created, we need to send a request to the model before we configure OpenScale. This allows OpenScale to create a payload log in the datamart with the correct schema, so it can capture data coming into and out of the model.
```
fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"]
values = [
    ["no_checking",13,"credits_paid_to_date","car_new",1343,"100_to_500","1_to_4",2,"female","none",3,"savings_insurance",46,"none","own",2,"skilled",1,"none","yes"],
    ["no_checking",24,"prior_payments_delayed","furniture",4567,"500_to_1000","1_to_4",4,"male","none",4,"savings_insurance",36,"none","free",2,"management_self-employed",1,"none","yes"],
    ["0_to_200",26,"all_credits_paid_back","car_new",863,"less_100","less_1",2,"female","co-applicant",2,"real_estate",38,"none","own",1,"skilled",1,"none","yes"],
    ["0_to_200",14,"no_credits","car_new",2368,"less_100","1_to_4",3,"female","none",3,"real_estate",29,"none","own",1,"skilled",1,"none","yes"],
    ["0_to_200",4,"no_credits","car_new",250,"less_100","unemployed",2,"female","none",3,"real_estate",23,"none","rent",1,"management_self-employed",1,"none","yes"],
    ["no_checking",17,"credits_paid_to_date","car_new",832,"100_to_500","1_to_4",2,"male","none",2,"real_estate",42,"none","own",1,"skilled",1,"none","yes"],
    ["no_checking",33,"outstanding_credit","appliances",5696,"unknown","greater_7",4,"male","co-applicant",4,"unknown",54,"none","free",2,"skilled",1,"yes","yes"],
    ["0_to_200",13,"prior_payments_delayed","retraining",1375,"100_to_500","4_to_7",3,"male","none",3,"real_estate",37,"none","own",2,"management_self-employed",1,"none","yes"]
]
payload_scoring = {"fields": fields, "values": values}
payload = {
    wml_client.deployments.ScoringMetaNames.INPUT_DATA: [payload_scoring]
}
scoring_response = wml_client.deployments.score(deployment_uid, payload)
print('Single record scoring result:', '\n fields:', scoring_response['predictions'][0]['fields'], '\n values: ', scoring_response['predictions'][0]['values'][0])
```
## Check if WML payload logging worked else manually store payload records
```
import uuid
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord
time.sleep(5)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
print("Number of records in the payload logging table: {}".format(pl_records_count))
if pl_records_count == 0:
    print("Payload logging did not happen, performing explicit payload logging.")
    wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
        scoring_id=str(uuid.uuid4()),
        request=payload_scoring,
        response={"fields": scoring_response['predictions'][0]['fields'], "values": scoring_response['predictions'][0]['values']},
        response_time=460
    )])
    time.sleep(5)
    pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
    print("Number of records in the payload logging table: {}".format(pl_records_count))
```
# Quality monitoring and feedback logging <a name="quality"></a>
## Enable quality monitoring
The code below waits ten seconds to allow the payload logging table to be set up before it begins enabling monitors. First, it turns on the quality (accuracy) monitor and sets a lower-limit alert threshold of 0.80. OpenScale will show an alert on the dashboard if the model accuracy measurement (area under the ROC curve, in the case of a binary classifier) falls below this threshold.
The second parameter supplied, min_feedback_data_size, specifies the minimum number of feedback records OpenScale needs before it calculates a new measurement. The quality monitor runs hourly, but the accuracy reading in the dashboard will not change until an additional 50 feedback records have been added, via the user interface, the Python client, or the supplied feedback endpoint.
```
import time
time.sleep(10)
target = Target(
    target_type=TargetTypes.SUBSCRIPTION,
    target_id=subscription_id
)
parameters = {
    "min_feedback_data_size": 50
}
thresholds = [
    {
        "metric_id": "area_under_roc",
        "type": "lower_limit",
        "value": .80
    }
]
quality_monitor_details = wos_client.monitor_instances.create(
    data_mart_id=data_mart_id,
    background_mode=False,
    monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID,
    target=target,
    parameters=parameters,
    thresholds=thresholds
).result
quality_monitor_instance_id = quality_monitor_details.metadata.id
quality_monitor_instance_id
```
## Feedback logging
The code below downloads and stores enough feedback data to meet the minimum threshold so that OpenScale can calculate a new accuracy measurement. It then kicks off the accuracy monitor. The monitors run hourly, or can be initiated via the Python API, the REST API, or the graphical user interface.
```
!rm additional_feedback_data_v2.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/additional_feedback_data_v2.json
```
## Get feedback logging dataset ID
```
feedback_dataset_id = None
feedback_dataset = wos_client.data_sets.list(
    type=DataSetTypes.FEEDBACK,
    target_target_id=subscription_id,
    target_target_type=TargetTypes.SUBSCRIPTION).result
print(feedback_dataset)
feedback_dataset_id = feedback_dataset.data_sets[0].metadata.id
if feedback_dataset_id is None:
    print("Feedback data set not found. Please check quality monitor status.")

with open('additional_feedback_data_v2.json') as feedback_file:
    additional_feedback_data = json.load(feedback_file)
wos_client.data_sets.store_records(feedback_dataset_id, request_body=additional_feedback_data, background_mode=False)
wos_client.data_sets.get_records_count(data_set_id=feedback_dataset_id)
run_details = wos_client.monitor_instances.run(monitor_instance_id=quality_monitor_instance_id, background_mode=False).result
wos_client.monitor_instances.show_metrics(monitor_instance_id=quality_monitor_instance_id)
```
# Fairness, drift monitoring and explanations
<a name="fairness"></a>
The code below configures fairness monitoring for our model. It turns on monitoring for two features, Sex and Age. In each case, we must specify:
* Which model feature to monitor
* One or more **majority** groups, which are values of that feature that we expect to receive a higher percentage of favorable outcomes
* One or more **minority** groups, which are values of that feature that we expect to receive a higher percentage of unfavorable outcomes
* The threshold below which OpenScale should display a fairness alert (in this case, 95%)
Additionally, we must specify which outcomes from the model are favourable, and which are unfavourable. We must also provide the minimum number of records OpenScale will use to calculate the fairness score. In this case, OpenScale's fairness monitor will run hourly, but will not calculate a new fairness rating until at least 100 records have been added. Finally, to calculate fairness, OpenScale must perform some calculations on the training data, so we provide the dataframe containing the data.
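The fairness score being monitored is essentially a disparate impact ratio: the rate of favourable outcomes for the monitored (minority) group divided by the rate for the reference (majority) group, with an alert when the ratio drops below the configured threshold. A minimal sketch of that calculation (illustrative only; OpenScale performs this computation, along with perturbation-based analysis, internally):

```python
def disparate_impact(minority_outcomes, majority_outcomes, favourable="No Risk"):
    # Ratio of favourable-outcome rates between groups; 1.0 means parity,
    # and a value below a 0.95 threshold would raise a fairness alert.
    rate = lambda xs: sum(o == favourable for o in xs) / len(xs)
    return rate(minority_outcomes) / rate(majority_outcomes)
```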
```
wos_client.monitor_instances.show()
target = Target(
    target_type=TargetTypes.SUBSCRIPTION,
    target_id=subscription_id
)
parameters = {
    "features": [
        {"feature": "Sex",
         "majority": ['male'],
         "minority": ['female'],
         "threshold": 0.95
         },
        {"feature": "Age",
         "majority": [[26, 75]],
         "minority": [[18, 25]],
         "threshold": 0.95
         }
    ],
    "favourable_class": ["No Risk"],
    "unfavourable_class": ["Risk"],
    "min_records": 100
}
fairness_monitor_details = wos_client.monitor_instances.create(
    data_mart_id=data_mart_id,
    background_mode=False,
    monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID,
    target=target,
    parameters=parameters).result
fairness_monitor_instance_id =fairness_monitor_details.metadata.id
fairness_monitor_instance_id
```
## Drift configuration
```
monitor_instances = wos_client.monitor_instances.list().result.monitor_instances
for monitor_instance in monitor_instances:
    monitor_def_id = monitor_instance.entity.monitor_definition_id
    if monitor_def_id == "drift" and monitor_instance.entity.target.target_id == subscription_id:
        wos_client.monitor_instances.delete(monitor_instance.metadata.id)
        print('Deleted existing drift monitor instance with id: ', monitor_instance.metadata.id)
target = Target(
    target_type=TargetTypes.SUBSCRIPTION,
    target_id=subscription_id
)
parameters = {
    "min_samples": 100,
    "drift_threshold": 0.1,
    "train_drift_model": True,
    "enable_model_drift": False,
    "enable_data_drift": True
}
drift_monitor_details = wos_client.monitor_instances.create(
    data_mart_id=data_mart_id,
    background_mode=False,
    monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID,
    target=target,
    parameters=parameters
).result
drift_monitor_instance_id = drift_monitor_details.metadata.id
drift_monitor_instance_id
```
## Score the model again now that monitoring is configured
This next section randomly selects 200 records from the data feed and sends those records to the model for predictions. This is enough to exceed the minimum threshold for records set in the previous section, which allows OpenScale to begin calculating fairness.
```
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/german_credit_feed.json
!ls -lh german_credit_feed.json
```
Score 200 randomly chosen records
```
import random
with open('german_credit_feed.json', 'r') as scoring_file:
scoring_data = json.load(scoring_file)
fields = scoring_data['fields']
values = []
for _ in range(200):
values.append(random.choice(scoring_data['values']))
payload_scoring = {"input_data": [{"fields": fields, "values": values}]}
scoring_response = wml_client.deployments.score(deployment_uid, payload_scoring)
time.sleep(5)
# Get the current payload record count before checking whether auto-logging occurred
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
if pl_records_count == 8:
print("Payload logging did not happen, performing explicit payload logging.")
wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
scoring_id=str(uuid.uuid4()),
request=payload_scoring,
response=scoring_response,
response_time=460
)])
time.sleep(5)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
print("Number of records in the payload logging table: {}".format(pl_records_count))
print('Number of records in payload table: ', wos_client.data_sets.get_records_count(data_set_id=payload_data_set_id))
```
## Run fairness monitor
Kick off a fairness monitor run on current data. The monitor runs hourly, but can be manually initiated using the Python client, the REST API, or the graphical user interface.
```
time.sleep(5)
run_details = wos_client.monitor_instances.run(monitor_instance_id=fairness_monitor_instance_id, background_mode=False)
time.sleep(10)
wos_client.monitor_instances.show_metrics(monitor_instance_id=fairness_monitor_instance_id)
```
## Run drift monitor
Kick off a drift monitor run on current data. The monitor runs every hour, but can be manually initiated using the Python client or the REST API.
```
drift_run_details = wos_client.monitor_instances.run(monitor_instance_id=drift_monitor_instance_id, background_mode=False)
time.sleep(5)
wos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id)
```
## Configure Explainability
Finally, we provide OpenScale with the training data to enable and configure the explainability features.
```
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"enabled": True
}
explainability_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID,
target=target,
parameters=parameters
).result
explainability_monitor_id = explainability_details.metadata.id
```
## Run explanation for sample record
```
pl_records_resp = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id, limit=1, offset=0).result
scoring_ids = [pl_records_resp["records"][0]["entity"]["values"]["scoring_id"]]
print("Running explanations on scoring IDs: {}".format(scoring_ids))
explanation_types = ["lime", "contrastive"]
result = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result
print(result)
```
# Custom monitors and metrics <a name="custom"></a>
## Register custom monitor
```
def get_definition(monitor_name):
monitor_definitions = wos_client.monitor_definitions.list().result.monitor_definitions
for definition in monitor_definitions:
if monitor_name == definition.entity.name:
return definition
return None
monitor_name = 'my model performance'
metrics = [MonitorMetricRequest(name='sensitivity',
thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.8)]),
MonitorMetricRequest(name='specificity',
thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.75)])]
tags = [MonitorTagRequest(name='region', description='customer geographical region')]
existing_definition = get_definition(monitor_name)
if existing_definition is None:
custom_monitor_details = wos_client.monitor_definitions.add(name=monitor_name, metrics=metrics, tags=tags, background_mode=False).result
else:
custom_monitor_details = existing_definition
```
## Show available monitor types
```
wos_client.monitor_definitions.show()
```
### Get monitor UIDs and details
```
custom_monitor_id = custom_monitor_details.metadata.id
print(custom_monitor_id)
custom_monitor_details = wos_client.monitor_definitions.get(monitor_definition_id=custom_monitor_id).result
print('Monitor definition details:', custom_monitor_details)
```
## Enable custom monitor for subscription
```
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
thresholds = [MetricThresholdOverride(metric_id='sensitivity', type = MetricThresholdTypes.LOWER_LIMIT, value=0.9)]
custom_monitor_instance_details = wos_client.monitor_instances.create(
    data_mart_id=data_mart_id,
    background_mode=False,
    monitor_definition_id=custom_monitor_id,
    target=target,
    thresholds=thresholds
).result
```
### Get monitor instance id and configuration details
```
custom_monitor_instance_id = custom_monitor_instance_details.metadata.id
custom_monitor_instance_details = wos_client.monitor_instances.get(custom_monitor_instance_id).result
print(custom_monitor_instance_details)
```
## Storing custom metrics
```
from datetime import datetime, timezone, timedelta
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import MonitorMeasurementRequest
custom_monitoring_run_id = "11122223333111abc"
measurement_request = [MonitorMeasurementRequest(timestamp=datetime.now(timezone.utc),
metrics=[{"specificity": 0.78, "sensitivity": 0.67, "region": "us-south"}], run_id=custom_monitoring_run_id)]
print(measurement_request[0])
published_measurement_response = wos_client.monitor_instances.measurements.add(
monitor_instance_id=custom_monitor_instance_id,
monitor_measurement_request=measurement_request).result
published_measurement_id = published_measurement_response[0]["measurement_id"]
print(published_measurement_response)
```
### List and get custom metrics
```
time.sleep(5)
published_measurement = wos_client.monitor_instances.measurements.get(monitor_instance_id=custom_monitor_instance_id, measurement_id=published_measurement_id).result
print(published_measurement)
```
# Historical data <a name="historical"></a>
```
historyDays = 7
```
## Insert historical payloads
The next section of the notebook downloads historical data and writes it to the payload and measurement tables to simulate a production model that has been monitored and has been receiving regular traffic for the last seven days. This historical data can be viewed in the Watson OpenScale user interface. The code uses the Python and REST APIs to write this data.
```
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_fairness_v2.json
!ls -lh history_fairness_v2.json
from datetime import datetime, timedelta, timezone
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Source
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Measurements
with open("history_fairness_v2.json") as f:
fairness_values = json.load(f)
for day in range(historyDays):
print('Loading day', day + 1)
daily_measurement_requests = []
sources_list = []
for hour in range(24):
score_time = (datetime.now(timezone.utc) + timedelta(hours=(-(24*day + hour + 1)))).strftime('%Y-%m-%dT%H:%M:%SZ')
index = (day * 24 + hour) % len(fairness_values) # wrap around and reuse values if needed
fairness_values[index]["timestamp"] = score_time
#print(score_time)
fairness_value = fairness_values[index]
metrics_list = fairness_value["metrics"]
sources = fairness_value["sources"]
sources_list = []
for source in sources:
source_id = source["id"]
source_type = source["type"]
source_data = source["data"]
if source_id == "bias_detection_summary":
source_data["evaluated_at"] = score_time
source_data["favourable_class"] = ["No Risk"]
source_data["unfavourable_class"] = ["Risk"]
source_data["score_type"] = "disparate impact"
sources_list.append(
Source(
id=source_id,
type=source_type,
data=source_data
)
)
measurement_request = MonitorMeasurementRequest(metrics=metrics_list, sources=sources_list, timestamp=score_time)
daily_measurement_requests.append(measurement_request)
measurements_client = Measurements(wos_client)
measurements_client.add(monitor_instance_id=fairness_monitor_instance_id, monitor_measurement_request=daily_measurement_requests)
print('Finished')
```
## Insert historical debias metrics
```
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_debias_v2.json
!ls -lh history_debias_v2.json
with open("history_debias_v2.json") as f:
debias_values = json.load(f)
for day in range(historyDays):
print('Loading day', day + 1)
daily_measurement_requests = []
sources_list = []
for hour in range(24):
score_time = (datetime.now(timezone.utc) + timedelta(hours=(-(24*day + hour + 1)))).strftime('%Y-%m-%dT%H:%M:%SZ')
index = (day * 24 + hour) % len(debias_values) # wrap around and reuse values if needed
debias_values[index]["timestamp"] = score_time
debias_value = debias_values[index]
metrics_list = debias_value["metrics"]
sources = debias_value["sources"]
sources_list = []
for source in sources:
sources_list.append(
Source(
id=source["id"],
type=source["type"],
data=source["data"]
)
)
measurement_request = MonitorMeasurementRequest(metrics=metrics_list, sources=sources_list, timestamp=score_time)
daily_measurement_requests.append(measurement_request)
measurements_client = Measurements(wos_client)
measurements_client.add(monitor_instance_id=fairness_monitor_instance_id, monitor_measurement_request=daily_measurement_requests)
print('Finished')
```
## Insert historical quality metrics
```
measurements = [0.76, 0.78, 0.68, 0.72, 0.73, 0.77, 0.80]
for day in range(historyDays):
quality_measurement_requests = []
print('Loading day', day + 1)
for hour in range(24):
score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 1)))
score_time = score_time.isoformat() + "Z"
metric = {"area_under_roc": measurements[day]}
measurement_request = MonitorMeasurementRequest(timestamp=score_time,metrics = [metric])
quality_measurement_requests.append(measurement_request)
response = wos_client.monitor_instances.measurements.add(
monitor_instance_id=quality_monitor_instance_id,
monitor_measurement_request=quality_measurement_requests).result
print('Finished')
```
## Insert historical confusion matrices
```
!rm history_quality_metrics.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_quality_metrics.json
!ls -lh history_quality_metrics.json
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Source
with open('history_quality_metrics.json') as json_file:
records = json.load(json_file)
for day in range(historyDays):
index = 0
cm_measurement_requests = []
print('Loading day', day + 1)
for hour in range(24):
score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 1)))
score_time = score_time.isoformat() + "Z"
metric = records[index]['metrics']
source = records[index]['sources']
measurement_request = {"timestamp": score_time, "metrics": [metric], "sources": [source]}
cm_measurement_requests.append(measurement_request)
index+=1
response = wos_client.monitor_instances.measurements.add(monitor_instance_id=quality_monitor_instance_id, monitor_measurement_request=cm_measurement_requests).result
print('Finished')
```
## Insert historical performance metrics
```
target = Target(
target_type=TargetTypes.INSTANCE,
target_id=payload_data_set_id
)
performance_monitor_instance_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.PERFORMANCE.ID,
target=target
).result
performance_monitor_instance_id = performance_monitor_instance_details.metadata.id
for day in range(historyDays):
performance_measurement_requests = []
print('Loading day', day + 1)
for hour in range(24):
score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 1)))
score_time = score_time.isoformat() + "Z"
score_count = random.randint(60, 600)
metric = {"record_count": score_count, "data_set_type": "scoring_payload"}
measurement_request = {"timestamp": score_time, "metrics": [metric]}
performance_measurement_requests.append(measurement_request)
response = wos_client.monitor_instances.measurements.add(
monitor_instance_id=performance_monitor_instance_id,
monitor_measurement_request=performance_measurement_requests).result
print('Finished')
```
## Insert historical drift measurements
```
!rm history_drift_measurement_*.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_0.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_1.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_2.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_3.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_4.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_5.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_6.json
!ls -lh history_drift_measurement_*.json
for day in range(historyDays):
drift_measurements = []
with open("history_drift_measurement_{}.json".format(day), 'r') as history_file:
drift_daily_measurements = json.load(history_file)
print('Loading day', day + 1)
#Historical data contains 8 records per day - each represents 3 hour drift window.
for nb_window, records in enumerate(drift_daily_measurements):
for record in records:
window_start = datetime.utcnow() + timedelta(hours=(-(24 * day + (nb_window+1)*3 + 1))) # first_payload_record_timestamp_in_window (oldest)
window_end = datetime.utcnow() + timedelta(hours=(-(24 * day + nb_window*3 + 1)))# last_payload_record_timestamp_in_window (most recent)
#modify start and end time for each record
record['sources'][0]['data']['start'] = window_start.isoformat() + "Z"
record['sources'][0]['data']['end'] = window_end.isoformat() + "Z"
metric = record['metrics'][0]
source = record['sources'][0]
measurement_request = {"timestamp": window_start.isoformat() + "Z", "metrics": [metric], "sources": [source]}
drift_measurements.append(measurement_request)
response = wos_client.monitor_instances.measurements.add(
monitor_instance_id=drift_monitor_instance_id,
monitor_measurement_request=drift_measurements).result
print("Daily loading finished.")
```
## Additional data to help debugging
```
print('Datamart:', data_mart_id)
print('Model:', model_uid)
print('Deployment:', deployment_uid)
```
## Identify transactions for Explainability
Transaction IDs identified by the cells below can be copied and pasted into the Explainability tab of the OpenScale dashboard.
```
wos_client.data_sets.show_records(payload_data_set_id, limit=5)
```
## Congratulations!
You have finished the hands-on lab for IBM Watson OpenScale. You can now view the OpenScale dashboard at https://url-to-your-cp4d-cluster/aiopenscale. Click the tile for the German Credit model to see the fairness, accuracy, and performance monitors. Click the time-series graph to get detailed information on transactions during a specific time window.
# The Red Line Problem
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
```
The Red Line is a subway that connects Cambridge and Boston, Massachusetts. When I was working in Cambridge I took the Red Line from Kendall Square to South Station and caught the commuter rail to Needham. During rush hour Red Line trains run every 7–8 minutes, on average.
When I arrived at the station, I could estimate the time until the next train based on the number of passengers on the platform. If there were only a few people, I inferred that I just missed a train and expected to wait about 7 minutes. If there were more passengers, I expected the train to arrive sooner. But if there were a large number of passengers, I suspected that trains were not running on schedule, so I would go back to the street level and get a taxi.
While I was waiting for trains, I thought about how Bayesian estimation could help predict my wait time and decide when I should give up and take a taxi. This chapter presents the analysis I came up with.
This example is based on a project by Brendan Ritter and Kai Austin, who took a class with me at Olin College.
It was a chapter in the first edition of *Think Bayes*, but I cut it from the second edition.
Before we get to the analysis, we have to make some modeling decisions. First, I will treat passenger arrivals as a Poisson process, which means I assume that passengers are equally likely to arrive at any time, and that they arrive at a rate, λ, measured in passengers per minute. Since I observe passengers during a short period of time, and at the same time every day, I assume that λ is constant.
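A defining property of a Poisson process is that the times between arrivals are exponentially distributed with mean 1/λ. A quick simulation illustrates this, using λ = 2 passengers per minute, the rate assumed later in the chapter:

```python
import numpy as np

rng = np.random.default_rng(17)
lam = 2                                     # assumed rate: passengers per minute
interarrival = rng.exponential(scale=1/lam, size=100_000)

# Under a Poisson process, the mean time between arrivals is 1/lam.
print(interarrival.mean())                  # close to 0.5 minutes
```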
On the other hand, the arrival process for trains is not Poisson. Trains to Boston are supposed to leave from the end of the line (Alewife station) every 7–8 minutes during peak times, but by the time they get to Kendall Square, the time between trains varies between 3 and 12 minutes.
To gather data on the time between trains, I wrote a script that downloads real-time data from the [MBTA](http://www.mbta.com/rider_tools/developers/), selects south-bound trains arriving at Kendall Square, and records their arrival times in a database. I ran the script from 4 pm to 6 pm every weekday for 5 days, and recorded about 15 arrivals per day. Then I computed the time between consecutive arrivals.
Here are the gap times I recorded, in seconds.
```
observed_gap_times = [
428.0, 705.0, 407.0, 465.0, 433.0, 425.0, 204.0, 506.0, 143.0, 351.0,
450.0, 598.0, 464.0, 749.0, 341.0, 586.0, 754.0, 256.0, 378.0, 435.0,
176.0, 405.0, 360.0, 519.0, 648.0, 374.0, 483.0, 537.0, 578.0, 534.0,
577.0, 619.0, 538.0, 331.0, 186.0, 629.0, 193.0, 360.0, 660.0, 484.0,
512.0, 315.0, 457.0, 404.0, 740.0, 388.0, 357.0, 485.0, 567.0, 160.0,
428.0, 387.0, 901.0, 187.0, 622.0, 616.0, 585.0, 474.0, 442.0, 499.0,
437.0, 620.0, 351.0, 286.0, 373.0, 232.0, 393.0, 745.0, 636.0, 758.0,
]
```
I'll convert them to minutes and use `kde_from_sample` to estimate the distribution.
```
import numpy as np
zs = np.array(observed_gap_times) / 60
from utils import kde_from_sample
qs = np.linspace(0, 20, 101)
pmf_z = kde_from_sample(zs, qs)
```
Here's what it looks like.
```
from utils import decorate
pmf_z.plot()
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of time between trains')
```
## The Update
At this point we have an estimate for the distribution of time between trains.
Now let's suppose I arrive at the station and see 10 passengers on the platform.
What distribution of wait times should I expect?
We'll answer this question in two steps.
* First, we'll derive the distribution of gap times as observed by a random arrival (me).
* Then we'll derive the distribution of wait times, conditioned on the number of passengers.
When I arrive at the station, I am more likely to arrive during a long gap than a short one.
In fact, the probability that I arrive during any interval is proportional to its duration.
If we think of `pmf_z` as the prior distribution of gap time, we can do a Bayesian update to compute the posterior.
The likelihood of my arrival during each gap is the duration of the gap:
```
likelihood = pmf_z.qs
```
So here's the first update.
```
posterior_z = pmf_z * pmf_z.qs
posterior_z.normalize()
```
Here's what the posterior distribution looks like.
```
pmf_z.plot(label='prior', color='C5')
posterior_z.plot(label='posterior', color='C4')
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of time between trains')
```
Because I am more likely to arrive during a longer gap, the distribution is shifted to the right.
The prior mean is about 7.8 minutes; the posterior mean is about 8.9 minutes.
```
pmf_z.mean(), posterior_z.mean()
```
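This shift has a closed form: weighting each gap by its duration changes the mean from E[z] to E[z²]/E[z], which is never smaller than E[z]. A self-contained check with a toy two-gap distribution (the 5- and 10-minute gaps used as an example later in the chapter):

```python
import numpy as np

qs = np.array([5.0, 10.0])        # gap durations, minutes
ps = np.array([0.5, 0.5])         # prior probabilities

prior_mean = (qs * ps).sum()              # E[z] = 7.5
post = qs * ps / (qs * ps).sum()          # duration-weighted -> [1/3, 2/3]
posterior_mean = (qs * post).sum()        # = E[z^2] / E[z] = 62.5 / 7.5

print(prior_mean, round(posterior_mean, 3))   # 7.5 8.333
```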
This shift is an example of the "inspection paradox", which [I wrote an article about](https://towardsdatascience.com/the-inspection-paradox-is-everywhere-2ef1c2e9d709).
As an aside, the Red Line schedule reports that trains run every 9 minutes during peak times. This is close to the posterior mean, but higher than the prior mean. I exchanged email with a representative of the MBTA, who confirmed that the reported time between trains is deliberately conservative in order to account for variability.
## Elapsed time
Elapsed time, which I call `x`, is the time between the arrival of the previous train and the arrival of a passenger.
Wait time, which I call `y`, is the time between the arrival of a passenger and the next arrival of a train.
I chose this notation so that
```
z = x + y
```
Given the distribution of `z`, we can compute the distribution of `x`. I’ll start with a simple case and then generalize. Suppose the gap between trains is either 5 or 10 minutes with equal probability.
If we arrive at a random time, we arrive during a 5 minute gap with probability 1/3, or a 10 minute gap with probability 2/3.
If we arrive during a 5 minute gap, `x` is uniform from 0 to 5 minutes. If we arrive during a 10 minute gap, `x` is uniform from 0 to 10.
So the distribution of elapsed times is a weighted mixture of two uniform distributions.
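This simple case is small enough to compute directly. Here is a self-contained sketch of the weighted mixture in plain numpy (the one-minute grid resolution is an arbitrary choice for illustration):

```python
import numpy as np

qs = np.arange(11)                    # possible elapsed times, 0..10 minutes
weights = {5: 1/3, 10: 2/3}           # P(arriving during a gap of each length)

mix = np.zeros(len(qs), dtype=float)
for gap, w in weights.items():
    uniform = (qs <= gap) / (gap + 1)     # discrete uniform on 0..gap
    mix += w * uniform

print(mix.sum())                      # ~1.0: a proper distribution
print((qs * mix).sum())               # mean elapsed time: 1/3*2.5 + 2/3*5 = 25/6
```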
More generally, if we have the posterior distribution of `z`, we can compute the distribution of `x` by making a mixture of uniform distributions.
We'll use the following function to make the uniform distributions.
```
from empiricaldist import Pmf
def make_elapsed_dist(gap, qs):
qs = qs[qs <= gap]
n = len(qs)
return Pmf(1/n, qs)
```
`make_elapsed_dist` takes a hypothetical gap and an array of possible times.
It selects the elapsed times less than or equal to `gap` and puts them into a `Pmf` that represents a uniform distribution.
I'll use this function to make a sequence of `Pmf` objects, one for each gap in `posterior_z`.
```
qs = posterior_z.qs
pmf_seq = [make_elapsed_dist(gap, qs) for gap in qs]
```
Here's an example that represents a uniform distribution from 0 to 0.6 minutes.
```
pmf_seq[3]
```
The last element of the sequence is uniform from 0 to 20 minutes.
```
pmf_seq[-1].plot()
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of wait time in 20 min gap')
```
Now we can use `make_mixture` to make a weighted mixture of uniform distributions, where the weights are the probabilities from `posterior_z`.
```
from utils import make_mixture
pmf_x = make_mixture(posterior_z, pmf_seq)
pmf_z.plot(label='prior gap', color='C5')
posterior_z.plot(label='posterior gap', color='C4')
pmf_x.plot(label='elapsed time', color='C1')
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of gap and elapsed times')
posterior_z.mean(), pmf_x.mean()
```
The mean elapsed time is 4.4 minutes, half the posterior mean of `z`.
And that makes sense, since we expect to arrive in the middle of the gap, on average.
## Counting passengers
Now let's take into account the number of passengers waiting on the platform.
Let's assume that passengers are equally likely to arrive at any time, and that they arrive at a rate, `λ`, that is known to be 2 passengers per minute.
Under those assumptions, the number of passengers who arrive in `x` minutes follows a Poisson distribution with parameter `λ x`.
So we can use the SciPy function `poisson` to compute the likelihood of 10 passengers for each possible value of `x`.
```
from scipy.stats import poisson
lam = 2
num_passengers = 10
likelihood = poisson(lam * pmf_x.qs).pmf(num_passengers)
```
With this likelihood, we can compute the posterior distribution of `x`.
```
posterior_x = pmf_x * likelihood
posterior_x.normalize()
```
Here's what it looks like:
```
pmf_x.plot(label='prior', color='C1')
posterior_x.plot(label='posterior', color='C2')
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of time since last train')
```
Based on the number of passengers, we think it has been about 5 minutes since the last train.
```
pmf_x.mean(), posterior_x.mean()
```
## Wait time
Now how long do we think it will be until the next train?
Based on what we know so far, the distribution of `z` is `posterior_z`, and the distribution of `x` is `posterior_x`.
Remember that we defined
```
z = x + y
```
If we know `x` and `z`, we can compute
```
y = z - x
```
So we can use `sub_dist` to compute the distribution of `y`.
```
posterior_y = Pmf.sub_dist(posterior_z, posterior_x)
```
Well, almost. That distribution contains some negative values, which are impossible.
But we can remove them and renormalize, like this:
```
nonneg = (posterior_y.qs >= 0)
posterior_y = Pmf(posterior_y[nonneg])
posterior_y.normalize()
```
Based on the information so far, here are the distributions for `x`, `y`, and `z`, shown as CDFs.
```
posterior_x.make_cdf().plot(label='posterior of x', color='C2')
posterior_y.make_cdf().plot(label='posterior of y', color='C3')
posterior_z.make_cdf().plot(label='posterior of z', color='C4')
decorate(xlabel='Time (min)',
         ylabel='CDF',
title='Distribution of elapsed time, wait time, gap')
```
Because of rounding errors, `posterior_y` contains quantities that are not in `posterior_x` and `posterior_z`; that's why I plotted it as a CDF, and why it appears jaggy.
## Decision analysis
At this point we can use the number of passengers on the platform to predict the distribution of wait times. Now let’s get to the second part of the question: when should I stop waiting for the train and go catch a taxi?
Remember that in the original scenario, I am trying to get to South Station to catch the commuter rail. Suppose I leave the office with enough time that I can wait 15 minutes and still make my connection at South Station.
In that case I would like to know the probability that `y` exceeds 15 minutes as a function of `num_passengers`.
To answer that question, we can run the analysis from the previous section with a range of values for `num_passengers`.
But there’s a problem. The analysis is sensitive to the frequency of long delays, and because long delays are rare, it is hard to estimate their frequency.
I only have data from one week, and the longest delay I observed was 15 minutes. So I can’t estimate the frequency of longer delays accurately.
However, I can use previous observations to make at least a coarse estimate. When I commuted by Red Line for a year, I saw three long delays caused by a signaling problem, a power outage, and “police activity” at another stop. So I estimate that there are about 3 major delays per year.
But remember that my observations are biased. I am more likely to observe long delays because they affect a large number of passengers. So we should treat my observations as a sample of `posterior_z` rather than `pmf_z`.
Here's how we can augment the observed distribution of gap times with some assumptions about long delays.
From `posterior_z`, I'll draw a sample of 260 values (roughly the number of work days in a year).
Then I'll add in delays of 30, 40, and 50 minutes (the number of long delays I observed in a year).
```
sample = posterior_z.sample(260)
delays = [30, 40, 50]
augmented_sample = np.append(sample, delays)
```
I'll use this augmented sample to make a new estimate for the posterior distribution of `z`.
```
qs = np.linspace(0, 60, 101)
augmented_posterior_z = kde_from_sample(augmented_sample, qs)
```
Here's what it looks like.
```
augmented_posterior_z.plot(label='augmented posterior of z', color='C4')
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of time between trains')
```
Now let's take the analysis from the previous sections and wrap it in a function.
```
qs = augmented_posterior_z.qs
pmf_seq = [make_elapsed_dist(gap, qs) for gap in qs]
pmf_x = make_mixture(augmented_posterior_z, pmf_seq)
lam = 2
num_passengers = 10
def compute_posterior_y(num_passengers):
"""Distribution of wait time based on `num_passengers`."""
likelihood = poisson(lam * qs).pmf(num_passengers)
posterior_x = pmf_x * likelihood
posterior_x.normalize()
posterior_y = Pmf.sub_dist(augmented_posterior_z, posterior_x)
nonneg = (posterior_y.qs >= 0)
posterior_y = Pmf(posterior_y[nonneg])
posterior_y.normalize()
return posterior_y
```
Given the number of passengers when we arrive at the station, it computes the posterior distribution of `y`.
As an example, here's the distribution of wait time if we see 10 passengers.
```
posterior_y = compute_posterior_y(10)
```
We can use it to compute the mean wait time and the probability of waiting more than 15 minutes.
```
posterior_y.mean()
1 - posterior_y.make_cdf()(15)
```
If we see 10 passengers, we expect to wait a little less than 5 minutes, and the chance of waiting more than 15 minutes is about 1%.
Let's see what happens if we sweep through a range of values for `num_passengers`.
```
nums = np.arange(0, 37, 3)
posteriors = [compute_posterior_y(num) for num in nums]
```
Here's the mean wait as a function of the number of passengers.
```
mean_wait = [posterior_y.mean()
for posterior_y in posteriors]
import matplotlib.pyplot as plt
plt.plot(nums, mean_wait)
decorate(xlabel='Number of passengers',
ylabel='Expected time until next train',
title='Expected wait time based on number of passengers')
```
If there are no passengers on the platform when I arrive, I infer that I just missed a train; in that case, the expected wait time is the mean of `augmented_posterior_z`.
The more passengers I see, the longer I think it has been since the last train, and the more likely a train arrives soon.
But only up to a point. If there are more than 30 passengers on the platform, that suggests that there is a long delay, and the expected wait time starts to increase.
Now here's the probability that wait time exceeds 15 minutes.
```
prob_late = [1 - posterior_y.make_cdf()(15)
for posterior_y in posteriors]
plt.plot(nums, prob_late)
decorate(xlabel='Number of passengers',
ylabel='Probability of being late',
title='Probability of being late based on number of passengers')
```
When the number of passengers is less than 20, we infer that the system is operating normally, so the probability of a long delay is small. If there are 30 passengers, we suspect that something is wrong and expect longer delays.
If we are willing to accept a 5% chance of missing the connection at South Station, we should stay and wait as long as there are fewer than 30 passengers, and take a taxi if there are more.
Or, to take this analysis one step further, we could quantify the cost of missing the connection and the cost of taking a taxi, then choose the threshold that minimizes expected cost.
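To sketch that last step with invented stand-in costs (not values from the book): we take a taxi only when its cost is below the expected cost of waiting.

```
cost_miss = 100.0    # assumed cost of missing the connection
cost_taxi = 30.0     # assumed cost of taking a taxi

def best_action(prob_late):
    # expected cost of waiting is the chance of a long delay times its cost
    expected_cost_wait = prob_late * cost_miss
    return 'taxi' if cost_taxi < expected_cost_wait else 'wait'

for p in [0.01, 0.2, 0.5]:
    print(p, best_action(p))   # -> wait, wait, taxi
```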
This analysis is based on the assumption that the arrival rate, `lam`, is known.
If it is not known precisely, but is estimated from data, we could represent our uncertainty about `lam` with a distribution, compute the distribution of `y` for each value of `lam`, and make a mixture to represent the distribution of `y`.
I did that in the version of this problem in the first edition of *Think Bayes*; I left it out here because it is not the focus of the problem.
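That mixture step can be sketched with plain NumPy and made-up numbers: a few candidate values of `lam` with posterior weights, one conditional distribution of `y` per value, and a weighted average as the marginal. Nothing here comes from the book's data.

```
import numpy as np

qs = np.linspace(0.1, 60, 600)            # wait times (minutes)
lams = np.array([0.5, 1.0, 2.0])          # candidate rates (assumed)
weights = np.array([0.25, 0.5, 0.25])     # posterior P(lam) (assumed)

# one conditional pmf of y per candidate lam (exponential stand-in model)
pmfs = np.array([np.exp(-lam * qs / 5) for lam in lams])
pmfs /= pmfs.sum(axis=1, keepdims=True)

mixture = weights @ pmfs                  # marginal distribution of y
```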
```
import numpy as np
import mxnet as mx
import time
import pandas as pd
import cv2
import logging
logging.getLogger().setLevel(logging.DEBUG) # logging to stdout
import matplotlib.pyplot as plt
%matplotlib inline
# Load the trained model
# img_w, img_h = 200, 200
# checkpoint = 210
img_w, img_h = 64, 64
checkpoint = 390
sym, arg_params, aux_params = mx.model.load_checkpoint('models/chkpt', checkpoint)
model = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
model.bind(for_training=False, data_shapes=[('data', (1,3,img_w,img_h))],
label_shapes=model._label_shapes)
model.set_params(arg_params, aux_params, allow_missing=True)
# Load the gesture mappings:
import json
num_to_ges = None
with open('num2ges.json') as fin:
    num_to_ges = json.load(fin)  # the 'encoding' kwarg was removed in Python 3.9; open() handles decoding
num_to_ges
def get_processed_image(img):
global img_w, img_h
# img = cv2.imread(im_path)
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)  # convert BGR to YCrCb (not grayscale)
    res = cv2.resize(ycrcb, (img_w, img_h), interpolation=cv2.INTER_CUBIC)
res = np.swapaxes(res, 0, 2)
res = np.swapaxes(res, 1, 2)
res = res[np.newaxis, :]
return res
from collections import namedtuple
Batch = namedtuple('Batch', ['data'])
def predict(img):
global model
im = get_processed_image(img)
model.forward(Batch([mx.nd.array(im)]))
prob = model.get_outputs()[0].asnumpy()
prob = np.squeeze(prob)
a = np.argsort(prob)[::-1]
    max_prob = None
    max_idx = None
    for i in a[:5]:
        idx = str(i)
        if max_prob is None or prob[i] > max_prob:  # avoid comparing None with a float
            max_prob = prob[i]
            max_idx = idx
        print('probability=%f, class=%s' % (prob[i], num_to_ges[idx]))
return num_to_ges[max_idx]
data0 = pd.read_csv('full_hand_data.csv')#, names=['name','state'])
data0.tail()
one_test = data0['name'].values[-1]
one_label = data0['state'].values[-1]
one_test, one_label
one_test = data0['name'].values[-3]
one_label = data0['state'].values[-3]
one_test, one_label
img = cv2.imread(one_test)
plt.imshow(img)
predictedClass = predict(img)
print(predictedClass)
# num_to_ges['29']
# num_class = len(data0['state'].unique())
# ges_to_num = dict({(g,i) for i, g in enumerate(data0['state'].unique())})
# num_to_ges = dict({(i,g) for i, g in enumerate(data0['state'].unique())})
# num_class, ges_to_num
# data0 = data0.replace({'state':ges_to_num})
# labels = np.empty((data0.shape[0]))
# res_width, res_height = 200, 200
# imgs = np.empty(shape=(data0.shape[0],1,res_width,res_height))
# imgs.shape, labels.shape
# prefix = 'fdata/pic/'
# outfix = 'fdata/bi_pic/'
# for i, (im_name, state) in enumerate(data0.values):
# im_path = prefix + im_name
# # print im_path
# img = cv2.imread(im_path)
# gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# res = cv2.resize(gray,(200, 200), interpolation=cv2.INTER_CUBIC)
# imgs[i][0] = res
# labels[i] = state
# metric = mx.metric.Accuracy()
# train_data, train_label = imgs, labels
# # test_data, test_label = imgs[23:], labels[2:]
# train_data.shape, train_label.shape#, test_data.shape, test_label.shape
# batch_size = 10
# train_iter = mx.io.NDArrayIter(train_data, train_label, batch_size, shuffle=True)
# # eval_iter = mx.io.NDArrayIter(test_data, test_label, batch_size)
# chk_prefix='models/chkpt'
# sym, arg_params, aux_params = mx.model.load_checkpoint(chk_prefix,200)
# model = mx.mod.Module(symbol=sym, context=mx.gpu(), label_names=None)
# model.bind(for_training=False, data_shapes=[('data', (1,1,200,200))],
# label_shapes=model._label_shapes)
# model.set_params(arg_params, aux_params, allow_missing=True)
# m = model.predict(train_iter).asnumpy()
# true = 0
# cnt = 0
# for prob, l in zip(m, train_label):
# prob = np.squeeze(prob)
# pred = np.argsort(prob)[::-1][-1]
# # print pred
# # pred = np.argsort(p)[0]
# lab = int(l)
# cnt += 1
# if pred == lab:
# true += 1
# true, cnt
```
# Categorical Data Plots
Now let's discuss using seaborn to plot categorical data! There are a few main plot types for this:
* factorplot
* boxplot
* violinplot
* stripplot
* swarmplot
* barplot
* countplot
```
import seaborn as sns
import numpy as np
import pandas as pd
%matplotlib inline
tips = sns.load_dataset('tips')
tips.head()
```
## barplot and countplot
These very similar plots allow you to show aggregate data for a categorical feature in your data. **barplot** is a general plot that aggregates the categorical data based on some function, by default the mean:
```
sns.barplot(x='sex',y='total_bill',data=tips)
sns.barplot(x='sex',y='total_bill',data=tips, estimator=np.std)
sns.countplot(x='sex',data=tips)
df = pd.read_csv('parkinsons.csv')
df.head()
sns.countplot(x='status',data=df)
```
## boxplot and violinplot
boxplots and violinplots are used to show the distribution of categorical data. A box plot (or box-and-whisker plot) shows the distribution of quantitative data in a way that facilitates comparisons between variables or across levels of a categorical variable. The box shows the quartiles of the dataset while the whiskers extend to show the rest of the distribution, except for points that are determined to be “outliers” using a method that is a function of the inter-quartile range.
```
sns.boxplot(x='day', y='total_bill', data=tips,palette='rainbow')
sns.boxplot(x='status', y='MDVP:Fhi(Hz)', data=df ,palette='rainbow')
sns.boxplot(data=tips, palette='rainbow', orient='h')
sns.boxplot(data=df.iloc[:50,:5], palette='rainbow', orient='h')
sns.boxplot(x='day', y='total_bill', hue='smoker', data=tips ,palette='rainbow')
```
### violinplot
A violin plot plays a similar role as a box and whisker plot. It shows the distribution of quantitative data across several levels of one (or more) categorical variables such that those distributions can be compared. Unlike a box plot, in which all of the plot components correspond to actual datapoints, the violin plot features a kernel density estimation of the underlying distribution.
```
sns.violinplot(x='day', y='total_bill', data=tips)
sns.violinplot(x='day', y='total_bill', data=tips, hue='sex')
sns.violinplot(x='day', y='total_bill', data=tips, hue='smoker', split=True)
```
## stripplot and swarmplot
The stripplot will draw a scatterplot where one variable is categorical. A strip plot can be drawn on its own, but it is also a good complement to a box or violin plot in cases where you want to show all observations along with some representation of the underlying distribution.
The swarmplot is similar to stripplot(), but the points are adjusted (only along the categorical axis) so that they don’t overlap. This gives a better representation of the distribution of values, although it does not scale as well to large numbers of observations (both in terms of the ability to show all the points and in terms of the computation needed to arrange them).
```
sns.stripplot(x='day', y='total_bill', data=tips, jitter=False)
sns.stripplot(x='day', y='total_bill', data=tips)  # jitter is True by default; it adds some noise to the points
sns.stripplot(x='day', y='total_bill', data=tips, hue='sex')
sns.stripplot(x='day', y='total_bill', data=tips, hue='sex', dodge=True) #split is renamed to dodge
sns.swarmplot(x='day', y='total_bill', data=tips)
sns.violinplot(x='day', y='total_bill', data=tips)
sns.swarmplot(x='day', y='total_bill', data=tips, color='red')
sns.swarmplot(x='status', y='MDVP:Flo(Hz)', data=df)
```
## factorplot
factorplot is the most general form of a categorical plot. It can take in a **kind** parameter to adjust the plot type (note: in seaborn 0.9 and later, `factorplot` has been renamed to `catplot`):
```
sns.factorplot(x='sex',y='total_bill',data=tips)
sns.factorplot(x='sex',y='total_bill',data=tips,kind='bar')
sns.factorplot(x='day',y='total_bill',data=tips)
```
# Projecting conflict risk
In this notebook, we will show how CoPro uses a number of previously fitted classifiers and projects conflict risk forward in time. Eventually, these forward predictions based on multiple classifiers can be merged into a robust estimate of future conflict risk.
## Preparations
Start with loading the required packages.
```
from copro import utils, pipeline, evaluation, plots, machine_learning
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import geopandas as gpd
import seaborn as sbs
import os, sys
from sklearn import metrics
from shutil import copyfile
import warnings
import glob
warnings.simplefilter("ignore")
```
For better reproducibility, the version numbers of all key packages are provided.
```
utils.show_versions()
```
To be able to also run this notebook, some of the previously saved data needs to be loaded from a temporary location.
```
conflict_gdf = gpd.read_file(os.path.join('temp_files', 'conflicts.shp'))
selected_polygons_gdf = gpd.read_file(os.path.join('temp_files', 'polygons.shp'))
global_arr = np.load(os.path.join('temp_files', 'global_df.npy'), allow_pickle=True)
global_df = pd.DataFrame(data=global_arr, columns=['geometry', 'ID'])
global_df.set_index(global_df.ID, inplace=True)
global_df.drop(['ID'] , axis=1, inplace=True)
```
## The configurations-file (cfg-file)
To be able to continue the simulation with the same settings as in the previous notebook, the cfg-file has to be read again and the model needs to be initialised subsequently. This is not needed if CoPro is run from command line. Please see the first notebook for additional information.
```
settings_file = 'example_settings.cfg'
main_dict, root_dir = utils.initiate_setup(settings_file, verbose=False)
config_REF = main_dict['_REF'][0]
out_dir_REF = main_dict['_REF'][1]
```
In addition to the config-object and output path for the reference period, `main_dict` also contains the equivalents for the projection run. In the cfg-file, an extra cfg-file can be provided per projection.
```
config_REF.items('PROJ_files')
```
In this example, the file is called `example_settings_proj.cfg` and the name of the projection is `proj_nr_1`.
```
config_PROJ = main_dict['proj_nr_1'][0]
print('the configuration of the projection run is {}'.format(config_PROJ))
out_dir_PROJ = main_dict['proj_nr_1'][1]
print('the output directory of the projection run is {}'.format(out_dir_PROJ))
```
In the previous notebooks, conflict at the last year of the reference period as well as the classifiers were stored temporarily in a folder other than the output folder. Now let's copy these files back to the folders where they belong.
```
%%capture
# conflicts at last time step
files = glob.glob(os.path.abspath('./temp_files/conflicts_in*'))
for file in files:
    fname = os.path.basename(file)  # portable, unlike splitting on '\\'
    print(fname)
    copyfile(os.path.join('temp_files', fname),
             os.path.join(out_dir_REF, 'files', str(fname)))

# classifiers
files = glob.glob(os.path.abspath('./temp_files/clf*'))
for file in files:
    fname = os.path.basename(file)
    print(fname)
    copyfile(os.path.join('temp_files', fname),
             os.path.join(out_dir_REF, 'clfs', str(fname)))
```
Similarly, we need to load the sample data (X) for the reference run as we need to fit the scaler with this data before we can make comparable and consistent projections.
```
config_REF.set('pre_calc', 'XY', str(os.path.join(out_dir_REF, 'XY.npy')))
X, Y = pipeline.create_XY(config_REF, out_dir_REF, root_dir, selected_polygons_gdf, conflict_gdf)
```
Lastly, we need to get the scaler for the samples matrix again. The pre-computed and already fitted classifiers are directly loaded from file (see above). The clf returned here will not be used.
```
scaler, clf = pipeline.prepare_ML(config_REF)
```
## Project!
With this all in place, we can now make projections. Under the hood, various steps are taken for each projection run specified:
1. Load the corresponding ConfigParser-object;
2. Determine the projection period defined as the period between last year of reference run and projection year specified in cfg-file of projection run;
3. Make a separate projection per classifier (the number of classifiers, or model runs, is specified in the cfg-file):
1. in the first year of the projection year, use conflict data from last year of reference run, i.e. still observed conflict data;
2. in all following year, use the conflict data projected for the previous year with this specific classifier;
3. all other variables are read from file for all years.
4. Per year, merge the conflict risk projected by all classifiers and derive a fractional conflict risk per polygon.
For detailed information, please see the documentation and code of `copro.pipeline.run_prediction()`. As this is one function doing all the work, it is not possible to split up the workflow in more detail here.
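The numbered steps above can be sketched schematically. This is not CoPro's real code: `predict` is a seeded random stand-in for a fitted classifier and all sizes are invented, but the feedback loop (step 3.2) and the fractional merge (step 4) follow the description.

```
import numpy as np

n_polygons, n_years, n_clfs = 5, 3, 4

def predict(clf_id, conflict_prev, year):
    # stand-in for a fitted classifier's predict(): a seeded draw
    # that also depends on last year's conflict state
    rng = np.random.default_rng(clf_id * 100 + year)
    return (rng.random(n_polygons) + 0.2 * conflict_prev) > 0.6

observed_last_year = np.zeros(n_polygons, dtype=bool)   # step 3.1
per_clf = []
for clf_id in range(n_clfs):
    conflict = observed_last_year
    yearly = []
    for year in range(n_years):
        conflict = predict(clf_id, conflict, year)      # step 3.2: feed back own projection
        yearly.append(conflict)
    per_clf.append(yearly)

# step 4: fraction of classifiers projecting conflict, per year and polygon
risk = np.array(per_clf, dtype=float).mean(axis=0)
```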
```
all_y_df = pipeline.run_prediction(scaler.fit(X[: , 2:]), main_dict, root_dir, selected_polygons_gdf)
```
## Analysis of projection
All the previously used evaluation metrics are not applicable anymore, as there are no target values anymore. We can still look what the mean conflict probability is as computed by the model per polygon.
```
# link projection outcome to polygons via unique polygon-ID
df_hit, gdf_hit = evaluation.polygon_model_accuracy(all_y_df, global_df, make_proj=True)
# and plot
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
gdf_hit.plot(ax=ax, column='probability_of_conflict', legend=True, figsize=(20, 10), cmap='Blues', vmin=0, vmax=1,
legend_kwds={'label': "mean conflict probability", 'orientation': "vertical", 'fraction': 0.045})
selected_polygons_gdf.boundary.plot(ax=ax, color='0.5');
```
## Projection output
The conflict projection per year is also stored in the output folder of the projection run as geoJSON files. These files can be used to post-process the data with the scripts provided with CoPro or to load them into bespoke scripts and functions written by the user.
# State Machines in Python — Part 1
### Check Installation
Run the following cell by clicking `Shift` + `Enter`. It should output the current version of stmpy you have installed.
```
import stmpy
print('STMPY Version installed: {}'.format(stmpy.__version__))
```
If you haven't installed stmpy, install it via the following command line commands:
`pip install stmpy`
Or if you need to update to a newer version:
`pip install --upgrade stmpy`
Once you have done this on the command line, come back to this notebook and restart the kernel. (Grey menu bar at the top of this page, Kernel / Restart.) Then run the cell above again.
Sometimes, you have several Python interpreters on your machine. You can check on which one notebooks run with the code below. It will show you which Python interpreter is used. You should use the corresponding pip command to install the notebooks.
```
import sys
sys.executable
```
> If you have troubles getting notebooks with STMPY to run, take contact with us!
# Getting Started - Step by Step
*In the following we go through the setup of a single state machine, almost line by line, so all details are covered. Make sure to execute every notebook cell with Python code in it, and make sure it executes correctly without an error message. The next tutorials will present code in a more compact way. So if you feel confident in Python and think this goes too slow, there's hope. If you struggle a bit with Python, have an extra close look at the details.*
Let's start with a simple state machine:
<img src="images/ticktock.png" style="max-width:100%;" />
The state machine calls method `on_init()` when it starts, and goes into state `s_tick`. Then, it toggles back and forth to the `s_tock` state, controlled by timers.
### Step 1: Python Class for Actions
The actions in the transitions directly refer to Python methods. We declare them in a dedicated class `Tick`:
```
class Tick:
def on_init(self):
print('Init!')
self.ticks = 0
self.tocks = 0
def on_tick(self):
print('Tick! {}'.format(self.ticks))
self.ticks = self.ticks + 1
def on_tock(self):
print('Tock! {}'.format(self.tocks))
self.tocks = self.tocks + 1
```
Above you can see the three methods the state machine refers to. Within their body, they control simple counter variables `ticks` and `tocks`, but in principle you can do anything in these methods you like.
We also need an instance of the Tick class for later:
```
tick = Tick()
```
### Step 2: Declaring Transitions and States
**Transitions:** We declare the logic of the state machines by creating Python dictionaries for each of the transitions above. They declare source state, target state, trigger (unless its an initial transition), and effects. The effects declare a set of actions, separated by a `;`. Some of the actions refer to the methods defined in the class `Tick` from above, the others start timers directly.
Note that the action `start_timer("tick", 1000)` uses `"` for the argument, because it is declared within a string itself. Python allows that.
```
# initial transition
t0 = {'source': 'initial',
'target': 's_tick',
'effect':'on_init; start_timer("tick", 1000)'
}
# transition s_tick ----> s_tock
t1 = {'trigger':'tick',
'source':'s_tick',
'target':'s_tock',
'effect':'on_tick; start_timer("tock", 1000)'
}
# transition s_tock ----> s_tick
t2 = {'trigger':'tock',
'source':'s_tock',
'target':'s_tick',
'effect':'on_tock; start_timer("tick", 1000)'
}
```
In this example, we don't have to declare anything special for the states, so they are only declared indirectly as values in the dictionary of the transitions. (In a later example we see how they can have entry and exit actions.)
### Step 3: Creating the State Machine
First, we need to import the Machine class. It contains code to execute the transitions and take care of all other stuff related to the state machine.
```
from stmpy import Machine
```
We declare an instance for the machine, passing it the `tick` object from above and the transitions. It also gets a name.
```
tick_tock_machine = Machine(transitions=[t0, t1, t2], obj=tick, name='tick_tock')
```
Now we need to do one more technical thing. Sometimes the Python class that implements our actions wants to send messages or manipulate timers, or get access to the STMPY API to do other things. For this, we use variable `stm` in the Tick class. We have to set the value of this variable now that we have created the corresponding machine. (For this example it is not necessary, but we want to introduce this step from the beginning.)
```
tick.stm = tick_tock_machine
```
### Step 4: Adding the State Machine to a Driver, and Start!
The state machine is declared and ready, we only have to run it. State machines are not executed directly, but assigned to a **Driver**. One driver corresponds to one thread (or process) and can execute many state machines at the same time.
```
from stmpy import Driver
driver = Driver()
# add our state machine to the driver
driver.add_machine(tick_tock_machine)
```
Now the driver is declared. What's left is to start it. Since this machine is describing an endless loop, we limit the number of transitions the driver executes to 5. You will see that the notebook cell is active until the driver stops.
```
# start the driver, with limited number of transitions
driver.start(max_transitions=5)
```
Execute all code cells above, step by step. The last one should start the state machine. You should now see that the state machine prints the following under the cell above:
Init!
Tick! 0
Tock! 0
Tick! 1
Tock! 1
You can also observe that the transitions are triggered by the timers, one every 1000 milliseconds.
# Repetition
- Transitions are declared as Python dictionaries.
- Actions on transitions are declared in a special class for that state machine.
- A `Machine` represents a state machine.
- A `Driver` is needed to execute one (or several) state machines.
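As a toy illustration of the first two points (and emphatically not stmpy's implementation), dict-declared transitions like `t0`..`t2` can be dispatched by indexing them on `(source, trigger)`:

```
t0 = {'source': 'initial', 'target': 's_tick', 'trigger': None}
t1 = {'trigger': 'tick', 'source': 's_tick', 'target': 's_tock'}
t2 = {'trigger': 'tock', 'source': 's_tock', 'target': 's_tick'}

# look-up table from (current state, trigger) to next state
table = {(t['source'], t.get('trigger')): t['target'] for t in [t0, t1, t2]}

state = table[('initial', None)]      # initial transition
for event in ['tick', 'tock', 'tick']:
    state = table[(state, event)]
print(state)                          # -> s_tock
```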
# Welcome to Safran Lab 1
Every day, more than 80,000 commercial flights take place around the world, operated by hundreds of airlines. For all aircraft with a take-off weight exceeding 27 tons, a regulatory constraint requires companies to systematically record and analyse all flight data, for the purpose of improving the safety of flights. Flight Data Monitoring strives to detect and prioritize deviations from standards set by the aircraft manufacturers, the civil aviation authorities in the country, or even the companies themselves. Such deviations, called events, are used to populate a database that enables companies to identify and monitor the risks inherent to these operations.
This notebook is designed to let you manipulate real aeronautical data, provided by the Safran Group. It is divided in two parts: the first part deals with the processing of raw data, you will be asked to visualize the data, understand what variables require processing and perform the processing for some of these variables. The second part deals with actual data analysis, and covers some interesting problems. We hope to give you some insights of the data scientist job and give you interesting and challenging questions.
<h1><div class="label label-success">Part 1: Data processing</div></h1>
## Loading raw data
** Context **
You will be provided with `780` flight records. Each is a full record of a flight starting at the beginning of the taxi out phase and terminating at the end of the taxi in phase. The sample rate is 1 Hz. Please be aware that due to side effects the very beginning of the record may be faulty. This is something to keep in mind when we will analyse the data.
Each flight data is a collection of time series resumed in a dataframe, the columns variables are described in the schema below:
| name | description | unit
|:-----:|:-------------:|:---:|
| TIME | elapsed seconds| second |
| LATP_1 | Latitude | degree ° |
| LONP_1 | Longitude | degree ° |
| RALT1 | Radio Altitude, sensor 1 | feet |
| RALT2 | Radio Altitude, sensor 2 | feet |
| RALT3 | Radio Altitude, sensor 3 | feet |
| ALT_STD | Relative Altitude | feet |
| HEAD | head | degree °|
| PITCH | pitch | degree ° |
| ROLL | roll | degree ° |
| IAS | Indicated Air Speed | m/s |
| N11 | speed N1 of the first engine | % |
| N21 | speed N2 of the first engine | % |
| N12 | speed N1 of the second engine | % |
| N22 | speed N2 of the second engine | % |
| AIR_GROUND | 1: ground, 0: air| boolean |
** Note ** : `TIME` represents the elapsed seconds from today midnight. You are not provided with an absolute time variable that would tell you the date and hour of the flights.
** Acquire expertise about aviation data **
You will need some expertise about the signification of the variables. Latitude and longitude are quite straightforward. Head, Pitch and Roll are standard orientation angles, check this [image](https://i.stack.imgur.com/65EKz.png) to be sure. RALT\* come from three different radio altimeters; they measure the same thing but have a lot of missing values and are valid only under a threshold altitude (around 5000 feet). ALT_STD is the altitude measured from the pressure (it basically comes from a barometer); it is way less accurate than a radio altimeter but provides values for all altitudes. N1\* and N2\* are the rotational speeds of the engine sections expressed as a percentage of a nominal value. Some good links to check out to go deeper:
- [about phases of flight](http://www.fp7-restarts.eu/index.php/home/root/state-of-the-art/objectives/2012-02-15-11-58-37/71-book-video/parti-principles-of-flight/126-4-phases-of-a-flight)
- [pitch-roll-head](https://i.stack.imgur.com/65EKz.png)
- [about N\*\* variables I](http://aviation.stackexchange.com/questions/14690/what-are-n1-and-n2)
- [about N\*\* variables II](https://www.quora.com/Whats-N1-N2-in-aviation-And-how-is-the-value-of-100-N1-N2-determined)
- [how altimeters work](http://www.explainthatstuff.com/how-altimeters-work.html)
- [about runway naming](https://en.wikipedia.org/wiki/Runway#Naming)
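The set-up cell below imports the `haversine` package for great-circle distances between (latitude, longitude) points, which is handy when analysing trajectories. As a self-contained sketch of the underlying formula (the coordinates here are invented examples, not flight data):

```
import numpy as np

def haversine_km(p1, p2, r=6371.0):
    # great-circle distance between two (lat, lon) points given in degrees
    lat1, lon1, lat2, lon2 = map(np.radians, (*p1, *p2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + \
        np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

print(haversine_km((48.85, 2.35), (51.51, -0.13)))   # roughly 344 km (Paris-London)
```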
```
# Set up
BASE_DIR = "/mnt/safran/TP1/data/"
from os import listdir
from os.path import isfile, join
import glob
import matplotlib as mpl
mpl.rcParams["axes.grid"] = True
import matplotlib.pylab as plt
%matplotlib inline
import numpy as np
import pandas as pd
pd.options.display.max_columns = 50
from datetime import datetime
from haversine import haversine
def load_data_from_directory(DATA_PATH, num_flights):
files_list = glob.glob(join(DATA_PATH, "*pkl"))
print("There are %d files in total" % len(files_list))
files_list = files_list[:num_flights]
print("We process %d files" % num_flights)
dfs = []
p = 0
for idx, f in enumerate(files_list):
if idx % int(len(files_list)/10) == 0:
print(str(p*10) + "%: [" + "#"*p + " "*(10-p) + "]", end="\r")
p += 1
dfs.append(pd.read_pickle(f))
print(str(p*10) + "%: [" + "#"*p + " "*(10-p) + "]", end="\r")
return dfs
```
<div class="label label-primary">Execute the cell below to load the data for part 1</div>
```
num_flights = 780
flights = load_data_from_directory(BASE_DIR + "part1/flights", num_flights)
for f in flights:
l = len(f)
new_idx = pd.date_range(start=pd.Timestamp("now").date(), periods=l, freq="S")
f.set_index(new_idx, inplace=True)
```
The data is loaded with pandas. Please take a look at the [pandas cheat sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) if you have any doubt. You are provided with 780 dataframes, each of them represents the records of the variables defined above during a whole flight.
`flights` is a list where each item is a dataframe storing the data of one flight. There is no particular ordering in this list. All the flights depart from the same airport and arrive at the same airport. These airports are hidden to you and you will soon understand how.
For example `flights[0]` is a dataframe, representing one flight.
```
# Give an alias to flights[0] for convenience
f = flights[0]
flights[0].head()
```
You can select a column by indexing by its name.
```
f["PITCH"].describe()
```
Use `iloc[]` to select by line number, either the whole dataframe to obtain all the variables of a dataframe...
```
f.iloc[50:60]
```
...or an individual series.
```
f["PITCH"].iloc[50:60]
```
Finally let's work out an example of visualization of a column.
```
# Create a figure and one subplot
fig, ax = plt.subplots()
# Give an alias to flights[0] for convenience
f = flights[0]
# Select PITCH column of f and plot the line on ax
f.PITCH.plot(title="the title", ax=ax)
```
## Visualization
To perform monitoring of flights, it is necessary to clean up the data. To start, it is important to visualize the data that is available, in order to understand better their properties and the problems associated with them (noise, statistical characteristics, features and other values).
For the following questions do not hesitate to resort to the documentation of pandas for plotting capabilities (for a [dataframe](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) or for a [series](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.html))
<div class="alert alert-info">
<h3><div class="label label-default">Question 1</div> <div class="label label-info">Visualize all the variables</div></h3>
<br>
For an arbitrary flight, for example <code>flights[0]</code>, visualize all the variables. Would you rather use plot or scatter? Interpolate the data or not interpolate? Think about NaN values and how they are treated when we plot a series. Comment.
</div>
```
# LATP_1 LONP_1 HEAD PITCH ROLL IAS RALT1 RALT2 RALT3 ALT_STD N11 N21 N22 N12 AIR_GROUND
fig, axarr = plt.subplots(16,1, figsize=[10, 60])
f.LATP_1.plot(title='LATP_1', ax=axarr[1])
f.LONP_1.plot(title='LONP_1',ax=axarr[2])
f.HEAD.plot(title='HEAD',ax=axarr[3])
f.PITCH.plot(title='PITCH',ax=axarr[4])
f.ROLL.plot(title='ROLL',ax=axarr[5])
f.IAS.plot(title='IAS',ax=axarr[6])
f.RALT1.plot(title='RALT1',ax=axarr[7])
f.RALT2.plot(title='RALT2',ax=axarr[8])
f.RALT3.plot(title='RALT3',ax=axarr[9])
f.ALT_STD.plot(title='ALT_STD',ax=axarr[10])
f.N11.plot(title='N11',ax=axarr[11])
f.N12.plot(title='N12',ax=axarr[12])
f.N21.plot(title='N21',ax=axarr[13])
f.N22.plot(title='N22',ax=axarr[14])
f.AIR_GROUND.plot(title='AIR_GROUND',ax=axarr[15])
```
** Answer **
First of all, we would use plots in order to visualise the trend in time of each variable alone. Scatter is better for understanding the correlation between variables (without time in it).
For data that is empty we should interpolate (the RALTs), but not for the rest.
We should interpolate as a solution to the NaN values, but if there are too many NaNs it's not worth it; it may be better to merge all three RALTs together.
While it is interesting to see the variables for a given flight, it is more informative to view the set of values for all flights, in order to understand which values are significant/normal and which are abnormal.
<div class="alert alert-info">
<h3><div class="label label-default">Question 2</div> <div class="label label-info">Visualize N21 variable for all flights</div></h3>
<br>
For the <code>N21</code> variable, for example, display all of the flights on the same figure. Use alpha parameter to add transparency to your plot. Is there any pattern? Comment the variabilities you observe.
</div>
```
for f in flights:
f.N21.plot(alpha=0.02, color='red', figsize=[15,9])
```
** Answer **
Even though there is a large variance, the flights follow a clear trend: ascent up to 00:30, cruise from then on, and descent/landing at the end.
Some variables must be analyzed together, such as latitude and longitude; otherwise the visualization will be incomplete and we could be missing something.
<div class="alert alert-info">
<h3><div class="label label-default">Question 3</div> <div class="label label-info">Visualize latitude against longitude for all flights</div></h3>
<br>
Display the trajectories (<code>LONP_1</code>, <code>LATP_1</code>) of a subset of flights, for example 50 flights. What do you see? Keep in mind that the data during the beginning of the recording may be abnormal. What insight do you lose when you plot <code>LONP_1</code> against <code>LATP_1</code> ?
</div>
```
# fig, ax = plt.subplots(figsize=[16,16])
# for flight in flights[:50]:
# ax.plot(x=flight.LONP_1, y=flight.LATP_1, alpha=0.1)
# plt.show()
fig, ax = plt.subplots(figsize=[13, 13])
for f in flights[:50]:
f[10:].plot(x='LONP_1', y='LATP_1', ax=ax, alpha = 0.02, legend=False, kind='scatter')
ax.set_xlim(min(f.LONP_1[10:]), max(f.LONP_1[10:]))
ax.set_ylim(min(f.LATP_1[10:]), max(f.LATP_1[10:]))
```
** Answer **
We lose the time dimension, so we don't know how fast the plane was travelling. We also lose the altitude, meaning we have no clue about the takeoff and landing phases (although we can have an approximate idea).
Keep in mind that our goal is to understand the nature and the inherent problems of our data, and its features. Proceed with the visual analysis of the data, looking at different features.
<div class="alert alert-info">
<h3><div class="label label-default">Question 4</div> <div class="label label-info">Recap variables that require pre-processing</div></h3>
<br>
Based on your observations as for now, what are the variables requiring processing? For each of these variables, specify the necessary pre-processing required prior to perform data analysis.
</div>
** Answer **
- **Longitude & Latitude**: Messy beginning and noisy signals with very high peaks. filtering it out would help.
- **RALTs**: Treatment of NaNs, maybe merging the three measurements.
- **HEAD**: There are some discontinuities, probably due to angle wrapping.
- **NXX and others**: they are quite clean, no need for processing.
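As a quick illustration of the HEAD discontinuity fix mentioned above (one possible approach, not the lab's prescribed solution): unwrap the angle so that crossing 360° continues counting instead of jumping back to 0°.

```
import numpy as np
import pandas as pd

head = pd.Series([358.0, 359.5, 1.0, 2.5, 4.0])        # toy HEAD sample (degrees)
head_unwrapped = np.degrees(np.unwrap(np.radians(head)))
print(head_unwrapped)   # -> approximately [358., 359.5, 361., 362.5, 364.]
```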
## Pre-processing
Data pre-processing is essential in order to separate measurement errors from "normal" data variability, which is representative of the phenomenon that interests us.
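As a sketch of the kind of filtering involved, a rolling median removes isolated measurement spikes while leaving the underlying trend intact (illustrative only, not the notebook's exact pipeline):

```python
import numpy as np
import pandas as pd

signal = pd.Series(np.linspace(0.0, 10.0, 200))  # smooth underlying trend
noisy = signal.copy()
noisy.iloc[[30, 90, 150]] += 50.0                # isolated measurement spikes

# A rolling median is robust to isolated outliers, unlike a rolling mean
filtered = noisy.rolling(window=5, center=True, min_periods=1).median()

print(float((noisy - signal).abs().max()))     # 50.0 — spikes dominate the raw error
print(float((filtered - signal).abs().max()))  # well below 1 — spikes removed
```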
<div class="alert alert-info">
<h3><div class="label label-default">Question 5</div> <div class="label label-info">Smooth and filter out abnormal data in trajectories (LATP_1 and LONP_1)</div></h3>
<br>
Filter the flight trajectories (<code>LATP_1</code> and <code>LONP_1</code> variables). You can focus on the first 20 flights, that is <code>flights[:20]</code>. Display the trajectories before and after smoothing.
</div>
```
# This is a template code, fill in the blanks, or use your own code
# Give an alias to the first few flights for convenience
fs = flights[:20]
# Set up the figure to plot the trajectories before (ax0) and after smoothing (ax1)
fig, axes = plt.subplots(1, 2, figsize=(15, 8))
# Unpack the axes
ax0, ax1 = axes
# Iterate over fs and add two new smooth columns for each flight
for f in fs:
    # A rolling mean smooths the signal; a rolling sum would rescale it by the window size
    f["LATP_1_C"] = f.LATP_1.rolling(window=40).mean()
    f["LONP_1_C"] = f.LONP_1.rolling(window=40).mean()
# Iterate over fs and plot the trajectories before and after smoothing
for f in fs:
    # Plot the raw trajectory on ax0
    f.plot(kind="scatter", x="LATP_1", y="LONP_1", s=1, ax=ax0)
    # Plot the smoothed trajectory on ax1
    f.plot(kind="scatter", x="LATP_1_C", y="LONP_1_C", s=1, ax=ax1)
fig.tight_layout()
```
**Answer**
Filtering with a 40-second rolling window does the job.
<div class="alert alert-info">
<h3><div class="label label-default">Question 6</div> <div class="label label-info">Pre-process HEAD, get rid of discontinuities</div></h3>
<br>
Angles are special variables because they "cycle" over their range of values. The <code>HEAD</code> variable shows artificial discontinuities: your goal is to eliminate (filter out) such discontinuities. The angle may no longer be between 0 and 360 degrees after the transformation but it will come very handy for some analysis later. Display the data before and after transformation. You can focus on one flight, for example <code>flights[0]</code>.
</div>
```
# Your code goes here ...
```
**Answer**
your answer here ...
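One possible approach is `numpy.unwrap`, which removes artificial jumps by adding multiples of the period whenever consecutive samples differ by more than half of it; as the question anticipates, the result may leave the [0, 360) range. A minimal sketch on synthetic headings:

```python
import numpy as np

# A heading that crosses north: raw values wrap from 359° back to 1°
head = np.array([350.0, 355.0, 359.0, 1.0, 5.0, 10.0])

# np.unwrap removes jumps larger than half the period (the `period`
# argument requires numpy >= 1.21)
head_true = np.unwrap(head, period=360)
print(head_true.tolist())  # → [350.0, 355.0, 359.0, 361.0, 365.0, 370.0]
```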
<h1><div class="label label-success">Part 2: Analysis</div></h1>
We now turn to the data analysis task. In this part, we will use a **clean** dataset, which has been prepared for you; nevertheless, the functions you developed in the first part of the notebook can still be used to visualize and inspect the new data. Next, we display the schema of the new dataset you will use:
| name | description | unit |
|:-----:|:-------------:|:---:|
| TIME |elapsed seconds| second |
| **LATP_C** | **Latitude, Corrected** | **degree °** |
| **LONP_C** | **Longitude, Corrected** | **degree °** |
| **RALT_F** | **Radio Altitude, Fusioned** | **feet** |
| **ALT_STD_C** | **Relative Altitude, Corrected** | **feet** |
| **HEAD_C** | **heading, Corrected** | **degree °** |
| **HEAD_TRUE** | **heading, without discontinuities** | **degree °** |
| **PITCH_C** | **pitch, Corrected** | **degree °** |
| **ROLL_C** | **roll, Corrected** | **degree °** |
| **IAS_C** | **Indicated Air Speed, Corrected** | **m/s** |
| N11 | speed N1 of the first engine | % |
| N21 | speed N2 of the first engine | % |
| N12 | speed N1 of the second engine | % |
| N22 | speed N2 of the second engine | % |
| AIR_GROUND | 1: ground, 0: air | boolean |
<div class="label label-primary">Execute the cell below to load the data for part 2</div>
```
num_flights = 780
flights = load_data_from_directory(BASE_DIR + "part2/flights/", num_flights)
for f in flights:
    l = len(f)
    new_idx = pd.date_range(start=pd.Timestamp("now").date(), periods=l, freq="S")
    f.set_index(new_idx, inplace=True)
```
## Detection of phases of flight

In order to understand the different events that can happen, it is necessary to understand in what phase of the flight the aircraft is located. Indeed, an event that could be regarded as normal in a stage could be abnormal in another stage.
<div class="alert alert-info">
<h3><div class="label label-default">Question 7</div> <div class="label label-info">Detect take-off and touch-down phases</div></h3>
<br>
Using the clean dataset, detect the take-off phase and the touch-down of all flights. Among all the variables available, what is the variable that tells us the most easily when the take off happens? There is no trap here. Choose the best variable wisely and use it to detect the indices of take-off and touch-down. Plot <code>ALT_STD_C</code> 5 mins before and 5 mins after take-off to test your criterion. Do the same for touch-down.
</div>
```
fig, ax = plt.subplots(1, 2, figsize=[15, 9])
ax[0].set_title('Takeoff')
ax[1].set_title('Landing')
# The index is a DatetimeIndex, so offsets must be Timedeltas, not integers
five_min = pd.Timedelta(minutes=5)
for i, f in enumerate(flights):
    takeoff = f.RALT_F.diff(1).rolling(window=20).sum().idxmax()
    landing = f.RALT_F.diff(1).rolling(window=20).sum().idxmin()
    ax[0].plot(f.loc[takeoff - five_min:takeoff + five_min].ALT_STD_C)
    ax[1].plot(f.loc[landing - five_min:landing + five_min].ALT_STD_C)
plt.show()
```
**Answer**
Clearly the best variable would have been AIR_GROUND, a boolean that tells us the exact moment the plane leaves the ground. However, since we no longer have access to it, we use RALT_F: differencing the series, smoothing with a 20-second rolling sum, and locating the maximum and minimum gives the take-off and touch-down times.
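Since the same take-off/touch-down detection recurs in the following questions, it can be factored into a small helper (a sketch; it assumes the flight dataframe exposes the `RALT_F` column used above):

```python
import pandas as pd

def detect_takeoff_landing(f, window=20):
    """Return (takeoff, landing) index labels from the radio altitude.

    The rolling sum of 1-sample differences peaks where the altitude rises
    fastest (take-off) and bottoms out where it falls fastest (touch-down).
    """
    trend = f["RALT_F"].diff(1).rolling(window=window).sum()
    return trend.idxmax(), trend.idxmin()

# Tiny synthetic check: 50 s on ground, 100 s climb, 50 s cruise, 100 s descent
alt = [0.0] * 50 + list(range(0, 100)) + [100.0] * 50 + list(range(100, 0, -1))
f = pd.DataFrame({"RALT_F": [float(a) for a in alt]})
takeoff, landing = detect_takeoff_landing(f)
print(takeoff, landing)  # take-off index falls in the climb, landing in the descent
```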
<div class="alert alert-info">
<h3><div class="label label-default">Question 8</div> <div class="label label-info">HEAD during take-off and touch-down phases</div></h3>
<br>
Plot the <code>HEAD_C</code> variable between 20 seconds before the take-off until the take-off itself. Compute the mean of <code>HEAD_C</code> during this phase for each individual flight and do a boxplot of the distribution you obtain. Do the same for the touch-down. What do you observe? Is there something significant? Recall [how runways are named](https://en.wikipedia.org/wiki/Runway#Naming)
</div>
```
fig, ax = plt.subplots(1,2, figsize=[15,9])
ax[0].set_title('Takeoff HEAD_C')
ax[1].set_title('Landing HEAD_C')
meansTakeoff = []
meansLanding = []
for i, f in enumerate(flights):
    takeoff = f.RALT_F.diff(1).rolling(window=20).sum().idxmax()
    landing = f.RALT_F.diff(1).rolling(window=20).sum().idxmin()
    meansTakeoff.append(f.loc[takeoff - pd.Timedelta(seconds=20):takeoff].HEAD_C.mean())
    meansLanding.append(f.loc[landing - pd.Timedelta(seconds=20):landing].HEAD_C.mean())
df = pd.DataFrame(meansTakeoff)
df.plot.box(ax=ax[0])
df2 = pd.DataFrame(meansLanding)
df2.plot.box(ax=ax[1])
```
**Answer**
We see that the vast majority are below 100 degrees. Dividing by 10 gives the most probable runway numbers the planes used.
Also, since the take-off heading is on average smaller than the landing heading, the planes should, on average, be turning right during their trajectory.
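Runways are named after their magnetic heading rounded to the nearest 10 degrees, so an approximate runway designator can be recovered from a mean heading such as the entries of `meansTakeoff` (a hypothetical helper, for illustration):

```python
def runway_number(heading_deg):
    """Approximate runway designator (1..36) from a heading in degrees."""
    n = round((heading_deg % 360) / 10)
    return n or 36  # a heading near 0° (or 360°) maps to runway 36, not 0

print([runway_number(h) for h in (87.0, 271.0, 2.0, 358.0)])  # → [9, 27, 36, 36]
```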
Next, we want to detect the moment that the aircraft completed its climb (top of climb) and the moment when the aircraft is in descent phase.
<div class="alert alert-info">
<h3><div class="label label-default">Question 9</div> <div class="label label-info">Detect top-of-climb and beginning of descent phases</div></h3>
<br>
Plot <code>ALT_STD_C</code> a minute before liftoff until five minutes after the top of climb. In another figure plot <code>ALT_STD_C</code> a minute before the beginning of descent until the touch-down. For information, a plane is considered:
<ul>
<li>in phase of climb if the altitude increases 30 feet/second for 20 seconds</li>
<li>in stable phase if the altitude does not vary more than 30 feet for 5 minutes</li>
<li>in phase of descent if the altitude decreases 30 feet/second for 20 seconds</li>
</ul>
</div>
```
# This is a template code, fill in the blanks, or use your own code
# Give an alias to flights[0] for convenience
f = flights[0]
# 30 feet/second sustained for 20 seconds means a net change of 600 feet over the window
f["CLIMB"] = f.ALT_STD_C.diff().rolling(window=20).sum() > 600
f["STABLE"] = f.ALT_STD_C.diff().rolling(window=300).sum().abs() < 30
f["DESCENT"] = f.ALT_STD_C.diff().rolling(window=20).sum() < -600
f[f.CLIMB].ALT_STD_C.plot(color="C0", linestyle="none", marker=".", label="CLIMB") # plot climb phase
f[f.STABLE].ALT_STD_C.plot(color="C1", linestyle="none", marker=".", label="STABLE") # plot stable phase
f[f.DESCENT].ALT_STD_C.plot(color="C2", linestyle="none", marker=".", label="DESCENT") # plot descent phase
top_of_climb = f[f.CLIMB].index[-1]          # last timestamp flagged as climbing
beginning_of_descent = f[f.DESCENT].index[0]  # first timestamp flagged as descending
plt.legend()
```
**Answer**
It works! Some data is left out in the stable areas because of the window size.
<div class="alert alert-info">
<h3><div class="label label-default">Question 10</div> <div class="label label-info">Flight time</div></h3>
<br>
Using your criteria to detect the take-off and the touch-down, compute the duration of each flight, and plot the distribution you obtain (boxplot, histogram, kernel density estimation, use your best judgement). Comment on the distribution.
</div>
```
dur_arr = []
for i, f in enumerate(flights):
    takeoff = f.RALT_F.diff(1).rolling(window=20).sum().idxmax()
    landing = f.RALT_F.diff(1).rolling(window=20).sum().idxmin()
    dur_arr.append((landing - takeoff).total_seconds())
df = pd.DataFrame(dur_arr)
df.plot.density(figsize=[15,9])
df.plot.hist(figsize=[15,9], bins=100)
```
**Answer**
The average flight time is around 6,000 seconds (roughly 100 minutes).
## Problems
Note that the data we are using in this notebook has been anonymised. This means that the trajectories of a flight have been modified to hide the real information about that flight. In particular, in the dataset we use in this notebook, trajectories have been modified by simple translation and rotation operations.
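Translations and rotations of the globe preserve great-circle distances, which is why the distance between departure and arrival points survives the anonymisation. The `haversine` function used in the cells below is assumed to come from an external helper (e.g. the `haversine` PyPI package); an equivalent sketch:

```python
from math import radians, sin, cos, asin, sqrt

def haversine(p1, p2, radius_km=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# One degree of longitude at the equator is about 111 km
print(round(haversine((0.0, 0.0), (0.0, 1.0)), 1))  # → 111.2
```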
<div class="alert alert-info">
<h3><div class="label label-default">Question 11</div> <div class="label label-danger">Challenge</div> <div class="label label-info">Find origin and destination airports</div></h3>
<br>
You are asked to find the departure and destination airports of the flights in the dataset. You are guided with sample code to load data from external resources and through several steps that will help you to narrow down the pairs of possible airports that fit with the anonymised data.
</div>
We begin by grabbing airport/routes/runways data available on the internet, for example [ourairports](http://ourairports.com/data) (for [airports](http://ourairports.com/data/airports.csv) and [runways](http://ourairports.com/data/runways.csv)) and [openflights](http://www.openflights.org/data.html) (for [routes](https://raw.githubusercontent.com/jpatokal/openflights/master/data/routes.dat)). These datasets will come in useful. You can find the schema of the three datasets below and the code to load the data.
airports.csv
---------------
|var|description|
|:--:|:--:|
| **ident** | **icao code** |
| type | type |
| name | airport name|
| **latitude_deg** | **latitude in °** |
| **longitude_deg** | **longitude in °** |
| elevation_ft | elevation in feet|
| **iata_code** | **iata code** |
routes.dat
---------------
|var|description|
|:--:|:--:|
|AIRLINE | 2-letter (IATA) or 3-letter (ICAO) code of the airline.|
|SOURCE_AIRPORT | 3-letter (IATA) code of the source airport.|
|DESTINATION_AIRPORT| 3-letter (IATA) code of the destination airport.|
runways.csv
---------------
|var|description|
|:--:|:--:|
|airport_ident | 4-letter (ICAO) code of the airport.|
| **le_ident** | **low-end runway identity** |
| le_elevation_ft | low-end runway elevation in feet |
| le_heading_degT | low-end runway heading in ° |
| **he_ident** | **high-end runway identity** |
| he_elevation_ft | high-end runway elevation in feet |
| **he_heading_degT** | **high-end runway heading in °** |
The code below has been done for you; it loads the three datasets mentioned above and prepares the `pairs` dataframe.
```
# Load airports data from ourairports.com
airports = pd.read_csv("http://ourairports.com/data/airports.csv",
usecols=[1, 2, 3, 4, 5, 6, 13])
# Select large airports
large_airports = airports[(airports.type == "large_airport")]
print("There are " + str(len(large_airports)) +
" large airports in the world, let's focus on them")
print("airports columns:", airports.columns.values)
# Load routes data from openflights.com
routes = pd.read_csv("https://raw.githubusercontent.com/jpatokal/openflights/master/data/routes.dat",
header=0, usecols=[0, 2, 4],
names=["AIRLINE", "SOURCE_AIRPORT",
"DESTINATION_AIRPORT"])
print("routes columns:", routes.columns.values)
# Load runways data from ourairports.com
runways = pd.read_csv("http://ourairports.com/data/runways.csv", header=0,
usecols=[2, 8, 12, 14, 18],
dtype={
"le_ident": np.dtype(str),
"he_ident": np.dtype(str)
})
print("runways columns:", runways.columns.values)
# Create all pairs of large airports
la = large_airports
pairs = pd.merge(la.assign(i=0), la.assign(i=0), how="outer",
left_on="i", right_on="i", suffixes=["_origin", "_destination"])
# Compute haversine distance for all pairs of large airports
pairs["haversine_distance"] = pairs.apply(lambda x: haversine((x.latitude_deg_origin, x.longitude_deg_origin),
(x.latitude_deg_destination, x.longitude_deg_destination)), axis=1)
del pairs["type_origin"]
del pairs["type_destination"]
del pairs["i"]
pairs = pairs[pairs.ident_origin != pairs.ident_destination]
# reindex_axis was removed in pandas 1.0; reindex(..., axis=1) is the replacement
pairs = pairs.reindex(["ident_origin", "ident_destination", "iata_code_origin", "iata_code_destination",
                       "haversine_distance",
                       "elevation_ft_origin", "elevation_ft_destination",
                       "latitude_deg_origin", "longitude_deg_origin",
                       "latitude_deg_destination", "longitude_deg_destination"], axis=1)
print("pairs columns:", pairs.columns.values)
```
<div class="label label-primary">Execute the cell below to load the data created by the code above</div>
```
airports = pd.read_pickle(BASE_DIR + "part2/airports.pkl")
large_airports = pd.read_pickle(BASE_DIR + "part2/large_airports.pkl")
routes = pd.read_pickle(BASE_DIR + "part2/routes.pkl")
runways = pd.read_pickle(BASE_DIR + "part2/runways.pkl")
pairs = pd.read_pickle(BASE_DIR + "part2/pairs.pkl")
print("There are " + str(len(large_airports)) +
" large airports in the world, let's focus on them")
# Plot all airports in longitude-latitude plane
plt.scatter(airports["longitude_deg"], airports["latitude_deg"], s=.1)
# Plot large airports in longitude-latitude plane
plt.scatter(large_airports["longitude_deg"], large_airports["latitude_deg"], s=.1)
plt.xlabel("longitude_deg")
plt.ylabel("latitude_deg")
plt.title("All airports (blue) \n large airports (orange)")
print("airports columns:", airports.columns.values)
print("routes columns:", routes.columns.values)
print("runways columns:", runways.columns.values)
print("pairs columns:", pairs.columns.values)
```
You are provided with a dataframe of all pairs of large airports in the world: `pairs`
```
pairs.sample(5)
```
<div class="alert alert-info">
<h3><div class="label label-default">Question 11.1</div> <div class="label label-info"> Step 1</div></h3>
<br>
A first step towards the de-anonymisation of the data is to use the distance between the airports. Each entry of `pairs` shows the latitude and longitude of both airports and the haversine distance between them. Filter the possible pairs of airports by selecting airports that show a distance that is reasonably close to the distance you can compute with the anonymised data. How many pairs of airports do you have left?</div>
```
dist_arr = []
for i, f in enumerate(flights):
    takeoff = f.RALT_F.diff(1).rolling(window=20).sum().idxmax()
    landing = f.RALT_F.diff(1).rolling(window=20).sum().idxmin()
    dist_arr.append(haversine((f.loc[takeoff].LATP_C, f.loc[takeoff].LONP_C),
                              (f.loc[landing].LATP_C, f.loc[landing].LONP_C)))
df = pd.DataFrame(dist_arr)
mean_dist = float(df.mean())
pairs3 = pairs[(pairs.haversine_distance > mean_dist - 100) & (pairs.haversine_distance < mean_dist + 100)]
pairs3.describe()
pairs.describe()
```
**Answer**
We have reduced it to about 6K possibilities by keeping pairs whose haversine distance is within ±100 km of the mean computed distance.
<div class="alert alert-info">
<h3><div class="label label-default">Question 11.2</div> <div class="label label-info">Step 2</div></h3>
<br>
You should now have a significantly smaller dataframe of possible pairs of airports. The next step is to eliminate the pairs of airports that are not connected by commercial flights. You have all the existing commercial routes in the dataset <code>routes</code>. Use this dataframe to eliminate the pairs that are not connected. How many pairs of airports possible do you have left?
</div>
```
# This is template code cell, fill in the blanks, or use your own code
selected = pd.merge(pairs3,
routes,
how='inner',
left_on=["iata_code_origin", "iata_code_destination"],
right_on=["SOURCE_AIRPORT", "DESTINATION_AIRPORT"])
selected.describe()
```
**Answer**
Further down: about 2.7K pairs remain.
<div class="alert alert-info">
<h3><div class="label label-default">Question 11.3</div> <div class="label label-info"> Step 3</div></h3>
<br>
You now have a list of pairs of airports that are at a reasonable distance with respect to the distance between the airports in the anonymised data and that are connected by a commercial route. We have explored variables in the anonymised data that have not been altered and that may help us to narrow down the possibilities even more. Can you see what variable you may use? What previous question can help you a lot? Choose your criterion and use it to eliminate the pairs of airports that do not fit the anonymised data.
</div>
```
lat_arr1 = []
lat_arr2 = []
for i, f in enumerate(flights):
    takeoff = f.RALT_F.diff(1).rolling(window=20).sum().idxmax()
    landing = f.RALT_F.diff(1).rolling(window=20).sum().idxmin()
    # append takes a single argument, so collect each position as a (lat, lon) tuple
    lat_arr1.append((f.loc[takeoff].LATP_C, f.loc[takeoff].LONP_C))
    lat_arr2.append((f.loc[landing].LATP_C, f.loc[landing].LONP_C))
df1 = pd.DataFrame(lat_arr1)
df2 = pd.DataFrame(lat_arr2)
selected[selected.latitude_deg_origin < ...]
```
**Answer**
Clearly, using the latitude and longitude should help greatly, filtering the same way we did before with the other data. No time to run it!
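A completed version of the filter could bound the candidate origins by the observed mean take-off position, the same way the distance filter worked (a sketch on toy data; every value below is hypothetical, and since the anonymisation translates and rotates trajectories this only helps when those transformations are small):

```python
import pandas as pd

# Toy `selected` frame and observed take-off position (hypothetical values)
selected = pd.DataFrame({
    "ident_origin": ["AAAA", "BBBB", "CCCC"],
    "latitude_deg_origin": [48.7, 10.0, 49.1],
    "longitude_deg_origin": [2.3, 20.0, 2.5],
})
obs_lat, obs_lon, tol = 48.9, 2.4, 1.0  # mean take-off position ± tolerance

# Keep only candidates whose origin lies inside the tolerance box
mask = (selected.latitude_deg_origin.between(obs_lat - tol, obs_lat + tol)
        & selected.longitude_deg_origin.between(obs_lon - tol, obs_lon + tol))
print(selected[mask].ident_origin.tolist())  # → ['AAAA', 'CCCC']
```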
<div class="alert alert-info">
<h3><div class="label label-default">Question 11.4</div> <div class="label label-info">Step 4</div></h3>
<br>
Are there any other variables that can help discriminate the airports further?
</div>
```
# Your code goes here ...
```
**Answer**
your answer here ...
```
from math import floor
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from src.LinearSimulator import Simulator, SimpleSSM
from src.control.SimpleMPC import MPC
from src.utils import set_seed
```
## Setup simulator and model
```
device = 'cpu'
set_seed(seed=0, use_cuda=False)
state_dim = 2
action_dim = 3
model = SimpleSSM(state_dim, action_dim).to(device)
simul = Simulator(state_dim, action_dim, noise=0.01).to(device)
A=np.array([[0.9, 0.1],
[0.1, 0.9]])
B=np.array([[1, -1],
[-1, 2],
[-2, 1]])
model.A.weight.data = torch.tensor(A, device=device).float()
simul.A.weight.data = torch.tensor(A, device=device).float()
model.B.weight.data = torch.tensor(B, device=device).float()
simul.B.weight.data = torch.tensor(B, device=device).float()
action_min = -1.0
action_max = +1.0
H = 5
T = 100
```
## Set target trajectory
```
x_ref = torch.ones((T, state_dim), device=device) * 2.0
recession_period = floor(T * 0.1)
x_ref[:recession_period, :] = 0
x_ref[-recession_period:, :] = 0
fig, axes = plt.subplots(2, 1, sharex=True)
for i in range(2):
    axes[i].plot(x_ref[:, i].to('cpu'),
                 color='C{}'.format(i),
                 label='state {} target'.format(i))
    axes[i].set_xlabel('time')
    axes[i].legend()
solver = MPC(model=model,
state_dim=state_dim,
action_dim=action_dim,
H=H,
action_min=action_min,
action_max=action_max).to(device)
x = torch.zeros((1,state_dim), device=device)
state_trajectory = []
action_trajectory = []
opt_results = []
for itr in range(T - H):
    # Solve the MPC problem over the next H steps
    state_ref = x_ref[itr:itr + H, :]
    state_ref = state_ref.unsqueeze(dim=0)  # add batch dim
    us, info = solver.solve(x0=x, target=state_ref, max_iter=2000)
    action = us[:, 0, :]
    print("Step {:2d} | loss {:.7f} | solve : {}".format(itr, info['loss'], info['solve']))
    # perform simulation with the optimized action
    with torch.no_grad():
        x = simul(x, action.view(1, -1))
    action_trajectory.append(action)
    state_trajectory.append(x)
    opt_results.append(action)
controlled = np.concatenate(state_trajectory, axis=0)
fig, axes = plt.subplots(2, 1, sharex=True)
for i in range(2):
    axes[i].plot(x_ref[:, i].to('cpu'),
                 color='gray',
                 ls='--',
                 label='target')
    axes[i].plot(controlled[:, i],
                 color='C{}'.format(i),
                 label='controlled')
    axes[i].set_xlabel('time')
    axes[i].legend()
```
## Import libraries
```
import logging
from typing import Optional, Tuple
import numpy as np
import pandas as pd
import lightgbm as lgb
from catboost import CatBoostClassifier, Pool
from sklearn import metrics
# https://github.com/roelbertens/time-series-nested-cv/blob/master/time_series_cross_validation/custom_time_series_split.py
class CustomTimeSeriesSplit:
def __init__(self,
train_set_size: int,
test_set_size: int
):
"""
:param train_set_size: data points (days) in each fold for the train set
:param test_set_size: data points (days) in each fold for the test set
"""
self.train_set_size = train_set_size
self.test_set_size = test_set_size
self._logger = logging.getLogger(__name__)
def split(
self,
x: np.ndarray,
y: Optional[np.ndarray] = None
) -> Tuple[np.ndarray, np.ndarray]:
"""Return train/test split indices.
:param x: time series to use for prediction, shape (n_samples, n_features)
:param y: time series to predict, shape (n_samples, n_features)
:return: (train_indices, test_indices)
Note: index of both x and y should be of type datetime.
"""
if y is not None:
assert x.index.equals(y.index)
split_points = self.get_split_points(x)
for split_point in split_points:
is_train = (x.index < split_point) & (x.index >= split_point -
pd.Timedelta(self.train_set_size, unit="D"))
is_test = (x.index >= split_point) & (x.index < split_point +
pd.Timedelta(self.test_set_size, unit="D"))
if not is_train.any() or not is_test.any():
self._logger.warning(
"Found %d train and %d test observations "
"skipping fold for split point %s",
is_train.sum(), is_test.sum(), split_point
)
continue
dummy_ix = pd.Series(range(0, len(x)), index=x.index)
ix_train = dummy_ix.loc[is_train].values
ix_test = dummy_ix.loc[is_test].values
if ix_train is None or ix_test is None:
self._logger.warning(
"Found no data for train or test period, "
"skipping fold for split date %s",
split_point
)
continue
yield ix_train, ix_test
def get_split_points(self, x: pd.DataFrame) -> pd.DatetimeIndex:
"""Get all possible split point dates"""
start = x.index.min() + pd.Timedelta(self.train_set_size, unit="D")
end = x.index.max() - pd.Timedelta(self.test_set_size - 1, unit="D")
self._logger.info(f"Generating split points from {start} to {end}")
split_range = pd.date_range(start, end, freq="D")
first_split_point = (len(split_range) + self.test_set_size - 1) % self.test_set_size
return split_range[first_split_point::self.test_set_size]
class ModelBuilder:
def __init__(self, df, target, feats, cat_feats):
self.df = df
self.target = target
self.feats = feats
self.cat_feats = cat_feats
self.mode = "classification" if type(target) == str else "multiclassification"
def train_folds(self, train_size=120, test_size=30, iterations=1000, early_stopping=False):
if self.mode == "classification":
oof_preds = np.zeros(self.df.shape[0])
else:
oof_preds = np.zeros((self.df.shape[0], len(self.target)))  # one column per target
folds_mask = np.zeros(oof_preds.shape[0])
for fold_, (train_index, test_index) in enumerate(CustomTimeSeriesSplit(train_set_size=train_size, test_set_size=test_size).split(self.df)):
X_train, y_train = self.df.iloc[train_index, :][self.feats], self.df.iloc[train_index, :][self.target]
X_val, y_val = self.df.iloc[test_index, :][self.feats], self.df.iloc[test_index, :][self.target]
weeks_train = X_train.reset_index()["dt"]
weeks_test = X_val.reset_index()["dt"]
tr_start_week = weeks_train.min()
tr_end_week = weeks_train.max()
ts_start_week = weeks_test.min()
ts_end_week = weeks_test.max()
print()
print()
print(f"Fold {fold_} train ({tr_start_week}, {tr_end_week}) test ({ts_start_week}, {ts_end_week})")
cat_model = CatBoostClassifier(
iterations=iterations,
learning_rate=0.05,
metric_period=500,
loss_function="Logloss" if self.mode=="classification" else "MultiLogloss",
l2_leaf_reg=10,
eval_metric="F1" if self.mode=="classification" else "MultiLogloss",
task_type="CPU",
early_stopping_rounds=100,
random_seed=1234,
use_best_model=early_stopping
)
D_train = Pool(X_train, y_train, cat_features=self.cat_feats, feature_names=self.feats)
D_val = Pool(X_val, y_val, cat_features=self.cat_feats, feature_names=self.feats)
print("Train catboost")
cat_model.fit(
D_train,
eval_set=D_val if early_stopping else None,
verbose=True,
plot=False
)
if self.mode == "classification":
D_train_lgb = lgb.Dataset(X_train, y_train, weight=None, free_raw_data=False)
D_val_lgb = lgb.Dataset(X_val, y_val, weight=None, free_raw_data=False)
print("Train lgbm")
lgbm_model = lgb.train(
{
"objective": "binary",
"feature_pre_filter": False,
"lambda_l1": 5.246525412521277e-08,
"lambda_l2": 3.963188589061798e-05,
"num_leaves": 6,
"feature_fraction": 0.7,
"bagging_fraction": 1.0,
"bagging_freq": 0,
"min_child_samples": 20,
},
D_train_lgb,
num_boost_round=iterations,
early_stopping_rounds=200 if early_stopping else None,
valid_sets=D_val_lgb if early_stopping else None,
feature_name=self.feats,
verbose_eval=500,
)
preds = (cat_model.predict_proba(X_val)[:, 1] + lgbm_model.predict(X_val)) / 2
print()
print(f"Fold {fold_} F1 Score ", metrics.f1_score(y_val, preds.round()))
print(f"Fold {fold_} ROC AUC Score ", metrics.roc_auc_score(y_val, preds.round()))
print(f"Fold {fold_} Confusion matrix")
print(metrics.confusion_matrix(y_val, preds.round()))
oof_preds[test_index] = preds
else:
oof_preds[test_index] = cat_model.predict(X_val)
print(f"Fold {fold_} F1 Score ", metrics.f1_score(y_val, oof_preds[test_index].round(), average="micro"))
try:
print(f"Fold {fold_} ROC AUC Score ", metrics.roc_auc_score(y_val, oof_preds[test_index]))
except ValueError:
print(f"Fold {fold_} ROC AUC Score ", 0)
folds_mask[test_index] = 1
oof_y = self.df.iloc[folds_mask == 1, :][self.target]
oof_f1micro = metrics.f1_score(oof_y, oof_preds[folds_mask == 1].round(), average="micro")
oof_rocauc = metrics.roc_auc_score(oof_y, oof_preds[folds_mask == 1], average="micro")
print()
print("Overall OOF F1 Micro ", oof_f1micro)
print("Overall OOF Mean ROC AUC Score ", oof_rocauc)
def train_final_models(self, iterations=1000, early_stopping=False):
if self.mode == "classification":
X_train, y_train = self.df.iloc[:, :][self.feats], self.df.iloc[:, :][self.target]
cat_model = CatBoostClassifier(
iterations=iterations,
learning_rate=0.05,
metric_period=500,
loss_function="Logloss",
l2_leaf_reg=10,
eval_metric="F1",
task_type="CPU",
random_seed=1234,
use_best_model=early_stopping
)
D_train = Pool(X_train, y_train, cat_features=self.cat_feats, feature_names=self.feats)
print("Train catboost")
cat_model.fit(
D_train,
eval_set=None,
verbose=True,
plot=False
)
D_train_lgb = lgb.Dataset(X_train, y_train, weight=None, free_raw_data=False)
print("Train lgbm")
lgbm_model = lgb.train(
{
"objective": "binary",
"feature_pre_filter": False,
"lambda_l1": 5.246525412521277e-08,
"lambda_l2": 3.963188589061798e-05,
"num_leaves": 6,
"feature_fraction": 0.7,
"bagging_fraction": 1.0,
"bagging_freq": 0,
"min_child_samples": 20,
},
D_train_lgb,
num_boost_round=iterations,
valid_sets=None,
feature_name=self.feats,
verbose_eval=500,
)
return cat_model, lgbm_model
elif self.mode == "multiclassification":
raise NotImplementedError
```
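To see what `CustomTimeSeriesSplit` produces, its split-point arithmetic can be traced on a toy daily series: rolling, non-overlapping test windows, each preceded by a fixed-length training window (a standalone sketch that mirrors the logic of `split` and `get_split_points` above):

```python
import pandas as pd

# Ten daily observations, values 0..9
idx = pd.date_range("2021-01-01", periods=10, freq="D")
x = pd.Series(range(10), index=idx)

train_size, test_size = 4, 2
# First split point leaves room for a full train window; last one for a full test window
start = idx.min() + pd.Timedelta(train_size, unit="D")
end = idx.max() - pd.Timedelta(test_size - 1, unit="D")
split_range = pd.date_range(start, end, freq="D")
first = (len(split_range) + test_size - 1) % test_size
split_points = split_range[first::test_size]

folds = []
for sp in split_points:
    is_train = (idx < sp) & (idx >= sp - pd.Timedelta(train_size, unit="D"))
    is_test = (idx >= sp) & (idx < sp + pd.Timedelta(test_size, unit="D"))
    folds.append((list(x[is_train]), list(x[is_test])))
print(folds)  # → [([0, 1, 2, 3], [4, 5]), ([2, 3, 4, 5], [6, 7]), ([4, 5, 6, 7], [8, 9])]
```

Note how consecutive test windows tile the series without overlapping, while the training windows slide along with them.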
## Prepare training data
```
df = pd.read_csv("merged.csv")
df["SATELLITE"].unique()
df["CONFIDENCE"] = df["CONFIDENCE"].map({"l":0, "h":1, "n":3})
df["SATELLITE"] = df["SATELLITE"].map({"1":0, "N":1})
df["DAYNIGHT"] = df["DAYNIGHT"].map({"D":0, "N":1})
df["dt"] = pd.to_datetime(df["dt"]).dt.date
df = df.set_index("dt")
targets = ["infire_day_1","infire_day_2","infire_day_3","infire_day_4","infire_day_5","infire_day_6","infire_day_7","infire_day_8"]
feats = ["BRIGHTNESS","SCAN","TRACK","ACQ_TIME","SATELLITE","DAYNIGHT","CONFIDENCE","BRIGHT_T31","FRP"]
#cat_feats = ["grid_index", "DAYNIGHT","SATELLITE"]
cat_feats = []
targets = ["infire_day_1","infire_day_2","infire_day_3","infire_day_4","infire_day_5","infire_day_6","infire_day_7","infire_day_8"]
df["target"] = (df[targets].sum(axis=1)>0).astype(np.uint8)
df["target"].value_counts(normalize=True)
### synthetic data
DROPOUT_PROBA = 0.7
UPSAMPLE_RATE = 6
df_syn_base = df[df["target"]==0][feats]
df_syn_final = pd.DataFrame()
for i in range(UPSAMPLE_RATE):
    df_syn = df_syn_base.copy()
    for f in feats[3:]:
        df_syn[f] = df_syn[f].apply(lambda x: x if np.random.random() > DROPOUT_PROBA else None).sample(frac=1.0).values
    df_syn_final = pd.concat([df_syn_final, df_syn], axis=0)
df_syn_final["target"] = 0
df_combined = pd.concat([
df[feats+["target"]],
df_syn_final], axis=0)
df_combined["target"].value_counts(normalize=True)
```
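The synthetic-data block above grows the negative class by copying negative rows and randomly blanking feature values, which discourages the models from over-relying on any single feature. A minimal demonstration of the blanking step (using a seeded generator rather than the global `np.random`, an adaptation for reproducibility):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
col = pd.Series(np.ones(10_000))

DROPOUT_PROBA = 0.7  # probability of blanking each value, as in the cell above
dropped = col.apply(lambda x: x if rng.random() > DROPOUT_PROBA else None)

print(round(dropped.isna().mean(), 2))  # close to 0.7
```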
## Train with a single label (will we see fire during a period of 8 days?)
```
fire_model = ModelBuilder(df_combined, "target", feats, cat_feats)
fire_model.train_folds(train_size=120, test_size=30, iterations=1000, early_stopping=False)
cat_model, lgbm_model = fire_model.train_final_models()
```
### Save models
```
cat_model.save_model("catboost", format="cbm")
lgbm_model.save_model("light_gbm.txt")
```
### Preparation steps
Install iotfunctions with
`pip install git+https://github.com/ibm-watson-iot/functions@dev`
This project contains the code for the Analytics Service pipeline as well as the anomaly functions, and should pull in most of this notebook's dependencies.
The plotting library matplotlib is the exception, so you need to run
`pip install matplotlib`
#### Install py4j
In addition, install py4j v0.10.9.1-mm from my GitHub clone:
`git clone https://github.com/sedgewickmm18/py4j`
Install with
```
cd py4j-java
./gradlew clean assemble # build java jars
cd ..
pip install . # install python and jars
```
#### Install timeseries-insights
Checkout WatFore forecasting library first
`git clone https://github.ibm.com/Common-TimeSeries-Analytics-Library/WatFore`
then timeseries-insights
`git clone https://github.ibm.com/Common-TimeSeries-Analytics-Library/timeseries-insights`
Finally, apply a patch to allow for a callback server IP other than 127.0.0.1:
`curl https://raw.githubusercontent.com/sedgewickmm18/tsi/master/context.py.patch | patch -p1`
Build WatFore
```
cd WatFore
mvn clean install -DskipTests
```
Build timeseries-insights
```
cd ../timeseries-insights
mvn clean install -DskipTests
```
Build the python distribution of tspy
```
cd python
python setup.py sdist
```
Install it
`pip install dist/tspy-2.0.5.0.tar.gz`
#### Run timeseries-insights as docker container
`docker run -p 25332:25332 -p 25333:25333 sedgewickmm18/tsi`
* port 25333 exposes the default port for the java server
* port 25332 allows for optional ssh based port forwarding (should not be necessary)
The patch above allows for callback server IP addresses other than 127.0.0.1, i.e. the python client that also acts as callback server for python lambda processing can listen to a docker bridge IP address. In my case I'm running it from my laptop on `172.17.0.1` while the container with the java process has IP address `172.17.0.2`.
##### Caveat:
The java process attempts to listen to IPv4 **and** IPv6 addresses, so you have to enable IPv6 for your docker bridge with
`sudo vi /etc/docker/daemon.json`
so that it looks similar to
```
{
"insecure-registries" : ["localhost:32000"],
"ipv6": true,
"fixed-cidr-v6": "2001:db8:1::/64"
}
```
Then restart the docker daemon with
`systemctl restart docker`
and check with
`docker network inspect bridge`
```
# Real life data
import logging
import threading
import itertools
import pandas as pd
import numpy as np
import json
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import axes3d
import seaborn as seabornInstance
from sqlalchemy import Column, Integer, String, Float, DateTime, Boolean, func
from iotfunctions import base
from iotfunctions import bif
from iotfunctions import entity
from iotfunctions import metadata
from iotfunctions.metadata import EntityType
from iotfunctions.db import Database
from iotfunctions.dbtables import FileModelStore
from iotfunctions.enginelog import EngineLogging
from iotfunctions import estimator
from iotfunctions.ui import (UISingle, UIMultiItem, UIFunctionOutSingle,
UISingleItem, UIFunctionOutMulti, UIMulti, UIExpression,
UIText, UIStatusFlag, UIParameters)
import datetime as dt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
EngineLogging.configure_console_logging(logging.INFO)
# setting to make life easier
Temperature='Vx'
# set up a db object with a FileModelStore to support scaling
with open('credentials_as_monitor_demo.json', encoding='utf-8') as F:
    credentials = json.loads(F.read())
db_schema=None
fm = FileModelStore()
db = Database(credentials=credentials, model_store=fm)
print (db)
# Run on the good pump first
# Get stuff in
df_i = pd.read_csv('AllOfArmstark.csv', index_col=False, parse_dates=['timestamp'])
#df_i['entity']='MyRoom'
#df_i[Temperature]=df_i['value'] + 20
#df_i = df_i.drop(columns=['value'])
# and sort it by timestamp
df_i = df_i.sort_values(by='timestamp')
df_i = df_i.set_index(['entity','timestamp']).dropna()
df_i.head(2)
# Simplify our pandas dataframe to prepare input for plotting
EngineLogging.configure_console_logging(logging.INFO)
df_inputm2 = df_i.loc[['04714B6046D5']]
df_inputm2.reset_index(level=[0], inplace=True)
# predicted just means normalized - need to modify the BaseEstimatorFunction superclass
# start the callback server
#from tspy import TSContext
import tspy
from tspy.data_structures.context import TSContext
from py4j.java_gateway import JavaGateway, GatewayParameters, CallbackServerParameters
gateway = JavaGateway(gateway_parameters=GatewayParameters(address=u'172.17.0.2',
auth_token='DZQv45+bq4TTHSF3FH2RoYqLoGjY2zMcojcQQpRFZMA='),
callback_server_parameters=CallbackServerParameters(daemonize=True,port=25334,address=u'172.17.0.1',
auth_token='DZQv45+bq4TTHSF3FH2RoYqLoGjY2zMcojcQQpRFZMA=',daemonize_connections=True))
df_i
import datetime
df = df_i.reset_index()[['entity','timestamp','Vx']] #['entity'=='04714B6046D5']
df = df[df['entity']=='04714B6046D5']
tsc = TSContext(gateway=gateway, jvm=gateway.jvm, daemonize=True)
model = tspy.forecasters.arima(500)
model_map = {
'04714B6046D5': model,
}
df
dfs = df.tail(20000)
mts_raw = tsc.multi_time_series\
.df_observations(dfs, dfs.keys()[0], dfs.keys()[1], dfs.keys()[2], granularity=datetime.timedelta(milliseconds=1))\
.with_trs(granularity=datetime.timedelta(minutes=1)) \
.transform(tsc.duplicate_transforms.combine_duplicate_time_ticks(lambda x: float(sum(x) / len(x))))
dfss = mts_raw.to_df()
dfss
model.update_model(dfss['timestamp'].astype(int).tolist(), dfss['value'].tolist())
#model.update_model(mts_raw)
model
forecasts = mts_raw.forecast(100, model_map, confidence=0.97)
df2 = mts_raw.to_df()
df2.describe()
for k, series in forecasts.items():
    for k2 in series:
        print(k2)
        print(k2.value)
        df2 = df2.append({'timestamp': k2.time_tick, 'key': k, 'value': k2.value['value']}, ignore_index=True)
dftail = df2.tail(400)
fig, ax = plt.subplots(1, 1, figsize=(20,5))
ax.plot(df2.index, df2['value'],linewidth=0.7,color='green',label=Temperature)
ax.plot(dftail.index, dftail['value'],linewidth=0.9,color='darkorange')
ax.legend(bbox_to_anchor=(1.1, 1.05))
ax.set_ylabel('Pump Vibration',fontsize=14,weight="bold")
```
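The `combine_duplicate_time_ticks` transform above reduces every group of observations that share a time tick with the supplied lambda — here an average. A standalone sketch of that reduction in plain Python (the `observations` pairs are made up for illustration; this is not the tspy API):

```python
from collections import defaultdict

def combine_duplicate_time_ticks(observations):
    """Average all values sharing a time tick -- the same reduction the
    lambda passed to tsc.duplicate_transforms performs on each group."""
    groups = defaultdict(list)
    for tick, value in observations:
        groups[tick].append(value)
    return {tick: float(sum(vals) / len(vals)) for tick, vals in sorted(groups.items())}

obs = [(0, 1.0), (0, 3.0), (1, 5.0)]
print(combine_duplicate_time_ticks(obs))  # {0: 2.0, 1: 5.0}
```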
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from pathlib import Path
import os
DATA_DIR=Path('../data/influence')
for dirname, _, filenames in os.walk(DATA_DIR):
    for filename in filenames:
        print(os.path.join(dirname, filename))
TRAIN_PATH = DATA_DIR / "train.csv"
TEST_PATH = DATA_DIR / "test.csv"
SOLUTION_PATH = DATA_DIR / "solution.csv"
SUBMISSION_PATH = Path("../submissions/v1")
SUBMISSION_PATH.mkdir(parents=True, exist_ok=True)
PURPOSE_LABELS = {
0: "BACKGROUND",
1: "COMPARES_CONTRASTS",
2: "EXTENSION",
3: "FUTURE",
4: "MOTIVATION",
5: "USES"
}
INFLUENCE_LABELS = {
0: "INCIDENTAL",
1: "INFLUENTIAL"
}
TASKS={
"purpose": ["citation_class_label", PURPOSE_LABELS],
"influence": ["citation_influence_label", INFLUENCE_LABELS]
}
np.random.seed(250320)
df_train = pd.read_csv(TRAIN_PATH).merge(
pd.read_csv(str(TRAIN_PATH).replace("influence", "purpose"))[["unique_id", "citation_class_label"]],
on="unique_id"
)
df_train.columns
df_test = pd.read_csv(TEST_PATH).merge(
pd.read_csv(str(TEST_PATH).replace("influence", "purpose"))[["unique_id"]],
on="unique_id"
)
df_test.columns
df_solution = pd.read_csv(SOLUTION_PATH).merge(
pd.read_csv(str(SOLUTION_PATH).replace("influence", "purpose")),
on="unique_id"
)
df_solution.columns
df_test = df_test.merge(df_solution, on="unique_id")
df_test.shape
df = pd.concat([
df_train.assign(split="train"),
df_test.assign(split="test"),
], axis=0, sort=False).reset_index(drop=True).astype({task[0]: int for task in TASKS.values()})
df.head()
df.split.value_counts()
df.pivot_table(
index="citation_class_label", columns="split", values="unique_id", aggfunc=len
).sort_values("train", ascending=False)
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
ct = ColumnTransformer([
#("citing_tfidf", TfidfVectorizer(), "citing_title"),
#("cited_tfidf", TfidfVectorizer(), "cited_title"),
("citation_context_tfidf", TfidfVectorizer(),"citation_context"),
])
ct.fit(df)
df_features = ct.transform(df)
df_features.shape
from joblib import dump, load
# Save transformer
dump(ct, SUBMISSION_PATH / "ColumnTransformer.joblib")
dump(df_features, SUBMISSION_PATH / "df_features.joblib")
df_features
df_features[[0, 1, 5]]
def generate_data(df, label_col, split="train"):
    split_idx = df[(df.split == split)].index.tolist()
    X = df_features[split_idx]
    y = df.iloc[split_idx][label_col]
    print(f"{split}: X={X.shape}, y={y.shape}")
    return X, y, split_idx
def submission_pipeline(model, df, df_features, task, model_key=None, to_dense=False):
    # Setup submission folder
    submission_folder = SUBMISSION_PATH / f"{model_key}_{task}"
    submission_folder.mkdir(parents=True, exist_ok=True)
    print(f"Generated folder: {submission_folder}")
    model_file = submission_folder / "model.joblib"
    submission_file = submission_folder / "submission.csv"
    label_col, label_dict = TASKS[task]
    X_train, y_train, train_idx = generate_data(df, label_col, split="train")
    X_test, y_test, test_idx = generate_data(df, label_col, split="test")
    print("Training model")
    if to_dense:
        X_train = X_train.toarray()
        X_test = X_test.toarray()
    model.fit(X_train, y_train.astype(int))
    dump(model, model_file)
    y_train_pred = model.predict(X_train)
    y_test_pred = model.predict(X_test)
    print("Output label dist")
    print(pd.Series(y_test_pred).map(label_dict).value_counts())
    target_names = list(sorted(label_dict.values()))
    # Print reports
    print("Training report")
    print(classification_report(y_train, y_train_pred, target_names=target_names))
    print("Test report")
    print(classification_report(y_test, y_test_pred, target_names=target_names))
    train_report = classification_report(y_train, y_train_pred, target_names=target_names, output_dict=True)
    test_report = classification_report(y_test, y_test_pred, target_names=target_names, output_dict=True)
    print(f"Writing submission file: {submission_file}")
    df.iloc[test_idx][["unique_id"]].assign(**{label_col: y_test_pred}).to_csv(submission_file, index=False)
    return model, train_report, test_report
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegressionCV
model_configs = {
"gbt": [GradientBoostingClassifier, dict()],
"rf": [RandomForestClassifier, dict(n_jobs=-1)],
"mlp-3": [MLPClassifier, dict(hidden_layer_sizes=(256,256,128))],
"mlp": [MLPClassifier, dict()],
"lr": [LogisticRegressionCV, dict(n_jobs=-1)]
}
DENSE_MODELS = {"mlp", "mlp-3"}
reports = {}
for model_key, model_params in model_configs.items():
    model_cls, model_kwargs = model_params
    to_dense = model_key in DENSE_MODELS  # compare the key, not the class object
    print(model_key, model_params)
    for task in TASKS:
        model = model_cls(**model_kwargs)
        model, train_report, test_report = %time submission_pipeline(model, df, df_features, task, model_key=model_key, to_dense=to_dense)
        reports[(model_key, task)] = {"train": train_report, "test": test_report}
df_reports = pd.concat([
pd.concat([
pd.DataFrame(report[split]).T.assign(model=model, task=task, split=split).reset_index().rename(columns={"index": "label"})
for split in report
])
for (model, task), report in reports.items()
], axis=0, sort=False, ignore_index=True)
df_reports
df_reports.loc[
df_reports.label=="macro avg",
["f1-score", "model", "task", "split"]
].pivot_table(index="model", columns=["task", "split"], values="f1-score", aggfunc="first")
df_t = df_reports.loc[
(df_reports.label=="macro avg") & (df_reports.task=="purpose"),
["f1-score", "model", "task", "split"]
].pivot_table(index="model", columns="split", values="f1-score", aggfunc="first").sort_values("test")
with pd.option_context("precision", 3):
    print(df_t.to_latex())
df_t
df_t = df_reports.loc[
(df_reports.label=="macro avg") & (df_reports.task=="influence"),
["f1-score", "model", "task", "split"]
].pivot_table(index="model", columns="split", values="f1-score", aggfunc="first").sort_values("test")
with pd.option_context("precision", 3):
    print(df_t.to_latex())
df_t
df_t = df_reports.loc[
(df_reports.split=="test") & (df_reports.task=="purpose"),
["label", "f1-score", "model",]
].pivot_table(index="model", columns="label", values="f1-score", aggfunc="first").sort_values("macro avg")
with pd.option_context("precision", 3):
    print(df_t.to_latex())
df_t
df_t = df_reports.loc[
(df_reports.split=="test") & (df_reports.task=="influence"),
["label", "f1-score", "model",]
].pivot_table(index="model", columns="label", values="f1-score", aggfunc="first").sort_values("macro avg")
with pd.option_context("precision", 3):
    print(df_t.to_latex())
df_t
```
## Investigate model
```
model_path = SUBMISSION_PATH / "lr_influence/model.joblib"
lr_influence_model = load(model_path)
lr_influence_model.coef_.shape
lr_influence_model.classes_
"citation_context_tfidf__00__0".split("__", 1)
df_coefs = pd.DataFrame(
lr_influence_model.coef_.T,
index=[x.split("__", 1)[-1] for x in ct.get_feature_names()],
columns=["weight"]
).rename_axis("feature").reset_index()
df_coefs.head()
df_t = pd.concat({
INFLUENCE_LABELS[0]: df_coefs.sort_values("weight").reset_index(drop=True),
INFLUENCE_LABELS[1]: df_coefs.sort_values("weight", ascending=False).reset_index(drop=True),
}, axis=1, sort=False).head(10)
with pd.option_context("precision", 3):
    print(df_t.to_latex())
df_t
```
```
#load watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn,bokeh,gensim
import keras
keras.__version__
```
# A first look at a neural network
This notebook contains the code samples found in Chapter 2, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
----
We will now take a look at a first concrete example of a neural network, which makes use of the Python library Keras to learn to classify
hand-written digits. Unless you already have experience with Keras or similar libraries, you will not understand everything about this
first example right away. You probably haven't even installed Keras yet. Don't worry, that is perfectly fine. In the next chapter, we will
review each element in our example and explain them in detail. So don't worry if some steps seem arbitrary or look like magic to you!
We've got to start somewhere.
The problem we are trying to solve here is to classify grayscale images of handwritten digits (28 pixels by 28 pixels), into their 10
categories (0 to 9). The dataset we will use is the MNIST dataset, a classic dataset in the machine learning community, which has been
around for almost as long as the field itself and has been very intensively studied. It's a set of 60,000 training images, plus 10,000 test
images, assembled by the National Institute of Standards and Technology (the NIST in MNIST) in the 1980s. You can think of "solving" MNIST
as the "Hello World" of deep learning -- it's what you do to verify that your algorithms are working as expected. As you become a machine
learning practitioner, you will see MNIST come up over and over again, in scientific papers, blog posts, and so on.
The MNIST dataset comes pre-loaded in Keras, in the form of a set of four Numpy arrays:
```
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
```
`train_images` and `train_labels` form the "training set", the data that the model will learn from. The model will then be tested on the
"test set", `test_images` and `test_labels`. Our images are encoded as Numpy arrays, and the labels are simply an array of digits, ranging
from 0 to 9. There is a one-to-one correspondence between the images and the labels.
Let's have a look at the training data:
```
train_images.shape
len(train_labels)
train_labels
```
Let's have a look at the test data:
```
test_images.shape
len(test_labels)
test_labels
```
Our workflow will be as follows: first we will present our neural network with the training data, `train_images` and `train_labels`. The
network will then learn to associate images and labels. Finally, we will ask the network to produce predictions for `test_images`, and we
will verify if these predictions match the labels from `test_labels`.
Let's build our network -- again, remember that you aren't supposed to understand everything about this example just yet.
```
from keras import models
from keras import layers
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax'))
```
The core building block of neural networks is the "layer", a data-processing module which you can conceive as a "filter" for data. Some
data comes in, and comes out in a more useful form. Precisely, layers extract _representations_ out of the data fed into them -- hopefully
representations that are more meaningful for the problem at hand. Most of deep learning really consists of chaining together simple layers
which will implement a form of progressive "data distillation". A deep learning model is like a sieve for data processing, made of a
succession of increasingly refined data filters -- the "layers".
Here our network consists of a sequence of two `Dense` layers, which are densely-connected (also called "fully-connected") neural layers.
The second (and last) layer is a 10-way "softmax" layer, which means it will return an array of 10 probability scores (summing to 1). Each
score will be the probability that the current digit image belongs to one of our 10 digit classes.
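For intuition, here is what a softmax over ten raw scores looks like in plain numpy (the score values are made up for illustration):

```python
import numpy as np

# Hypothetical raw scores (logits) for one image over the 10 digit classes.
logits = np.array([1.2, 0.3, -0.5, 2.1, 0.0, -1.0, 0.8, 3.0, -0.2, 0.5])

# Softmax: exponentiate (shifted by the max for numerical stability) and normalize.
exp_scores = np.exp(logits - logits.max())
probs = exp_scores / exp_scores.sum()

print(probs.round(3))   # ten probability scores
print(probs.sum())      # sums to 1 (up to floating-point rounding)
print(probs.argmax())   # index 7: the most likely digit class
```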
To make our network ready for training, we need to pick three more things, as part of the "compilation" step:
* A loss function: this is how the network will be able to measure how good a job it is doing on its training data, and thus how it will be
able to steer itself in the right direction.
* An optimizer: this is the mechanism through which the network will update itself based on the data it sees and its loss function.
* Metrics to monitor during training and testing. Here we will only care about accuracy (the fraction of the images that were correctly
classified).
The exact purpose of the loss function and the optimizer will be made clear throughout the next two chapters.
```
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
```
Before training, we will preprocess our data by reshaping it into the shape that the network expects, and scaling it so that all values are in
the `[0, 1]` interval. Previously, our training images for instance were stored in an array of shape `(60000, 28, 28)` of type `uint8` with
values in the `[0, 255]` interval. We transform it into a `float32` array of shape `(60000, 28 * 28)` with values between 0 and 1.
```
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
```
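The same reshape-and-scale transformation can be sanity-checked on a small synthetic batch (random stand-in data, not MNIST):

```python
import numpy as np

# A stand-in batch of 4 fake "images", 28x28 uint8 just like MNIST.
fake_images = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)

flat = fake_images.reshape((4, 28 * 28)).astype('float32') / 255

print(flat.shape)                             # (4, 784)
print(flat.min() >= 0.0, flat.max() <= 1.0)   # values now lie in [0, 1]
```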
We also need to categorically encode the labels, a step which we explain in chapter 3:
```
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
```
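For intuition, `to_categorical` amounts to this small numpy one-hot sketch (a simplified equivalent, not the Keras implementation):

```python
import numpy as np

def one_hot(labels, num_classes=10):
    """Minimal numpy equivalent of Keras' to_categorical."""
    out = np.zeros((len(labels), num_classes), dtype='float32')
    out[np.arange(len(labels)), labels] = 1.0
    return out

encoded = one_hot(np.array([5, 0, 4]))
print(encoded.shape)   # (3, 10)
print(encoded[0])      # a 1.0 in position 5, zeros elsewhere
```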
We are now ready to train our network, which in Keras is done via a call to the `fit` method of the network:
we "fit" the model to its training data.
```
network.fit(train_images, train_labels, epochs=2, batch_size=128)
```
Two quantities are being displayed during training: the "loss" of the network over the training data, and the accuracy of the network over
the training data.
We quickly reach an accuracy of 0.989 (i.e. 98.9%) on the training data. Now let's check that our model performs well on the test set too:
```
test_loss, test_acc = network.evaluate(test_images, test_labels)
print('test_acc:', test_acc)
```
Our test set accuracy turns out to be 97.8% -- that's quite a bit lower than the training set accuracy.
This gap between training accuracy and test accuracy is an example of "overfitting",
the fact that machine learning models tend to perform worse on new data than on their training data.
Overfitting will be a central topic in chapter 3.
This concludes our very first example -- you just saw how we could build and train a neural network to classify handwritten digits, in
less than 20 lines of Python code. In the next chapter, we will go in detail over every moving piece we just previewed, and clarify what is really
going on behind the scenes. You will learn about "tensors", the data-storing objects going into the network, about tensor operations, which
layers are made of, and about gradient descent, which allows our network to learn from its training examples.
**Copyright 2019 The TensorFlow Authors**.
Licensed under the Apache License, Version 2.0 (the "License").
# Generating Handwritten Digits with DCGAN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/alpha/tutorials/generative/dcgan.ipynb">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/generative/dcgan.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/generative/dcgan.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
This tutorial demonstrates how to generate images of handwritten digits using a [Deep Convolutional Generative Adversarial Network](https://arxiv.org/pdf/1511.06434.pdf) (DCGAN). The code is written using the [Keras Sequential API](https://www.tensorflow.org/guide/keras) with a `tf.GradientTape` training loop.
## What are GANs?
[Generative Adversarial Networks](https://arxiv.org/abs/1406.2661) (GANs) are one of the most interesting ideas in computer science today. Two models are trained simultaneously by an adversarial process. A *generator* ("the artist") learns to create images that look real, while a *discriminator* ("the art critic") learns to tell real images apart from fakes.

During training, the *generator* progressively becomes better at creating images that look real, while the *discriminator* becomes better at telling them apart. The process reaches equilibrium when the *discriminator* can no longer distinguish real images from fakes.

This notebook demonstrates this process on the MNIST dataset. The following animation shows a series of images produced by the *generator* as it was trained for 50 epochs. The images begin as random noise, and increasingly resemble handwritten digits over time.

To learn more about GANs, we recommend MIT's [Intro to Deep Learning](http://introtodeeplearning.com/) course.
```
# To generate GIFs
!pip install imageio
```
### Import TensorFlow and other libraries
```
!pip install tf-nightly-gpu-2.0-preview
import tensorflow as tf
print("You have version", tf.__version__)
assert tf.__version__ >= "2.0" # TensorFlow ≥ 2.0 required
from __future__ import absolute_import, division, print_function
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow.keras.layers as layers
import time
from IPython import display
```
### Load and prepare the dataset
You will use the MNIST dataset to train the generator and the discriminator. The generator will generate handwritten digits resembling the MNIST data.
```
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
```
## Create the models
Both the generator and discriminator are defined using the [Keras Sequential API](https://www.tensorflow.org/guide/keras#sequential_model).
### The Generator
The generator uses `tf.keras.layers.Conv2DTranspose` (upsampling) layers to produce an image from a seed (random noise). Start with a `Dense` layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1. Notice the `tf.keras.layers.LeakyReLU` activation for each layer, except the output layer which uses tanh.
```
def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256)  # Note: None is the batch size
    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)
    return model
```
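The spatial sizes asserted in `make_generator_model` follow from transposed-convolution arithmetic: with `'same'` padding, the output size is the input size times the stride, independent of the kernel size. A quick check:

```python
def conv2d_transpose_size(size, stride):
    # With 'same' padding, Conv2DTranspose output size = input size * stride.
    return size * stride

sizes = [7]                       # spatial size after Reshape((7, 7, 256))
for stride in (1, 2, 2):          # strides of the three Conv2DTranspose layers
    sizes.append(conv2d_transpose_size(sizes[-1], stride))
print(sizes)  # [7, 7, 14, 28] -- matching the asserts above
```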
Use the (as yet untrained) generator to create an image.
```
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
```
### The Discriminator
The discriminator is a CNN-based image classifier.
```
def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
                            input_shape=[28, 28, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Flatten())
    model.add(layers.Dense(1))
    return model
```
Use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images, and negative values for fake images.
```
discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print (decision)
```
## Define the loss and optimizers
Define loss functions and optimizers for both models.
```
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
```
### Discriminator loss
This method quantifies how well the discriminator is able to distinguish real images from fakes. It compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.
```
def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss
```
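Numerically, `BinaryCrossentropy(from_logits=True)` computes a stable sigmoid cross-entropy directly on raw logits. The sketch below mirrors that formula in plain Python (the logit values are made up) to show that a confident discriminator drives both loss terms toward zero:

```python
import math

def bce_from_logits(labels, logits):
    """Numerically stable binary cross-entropy on raw logits, mirroring the
    standard sigmoid cross-entropy formulation used with from_logits=True."""
    losses = [max(x, 0) - x * z + math.log(1 + math.exp(-abs(x)))
              for z, x in zip(labels, logits)]
    return sum(losses) / len(losses)

# A confident discriminator: large positive logits on real images,
# large negative logits on fakes -> both loss terms are near zero.
real_loss = bce_from_logits([1.0, 1.0], [4.0, 5.0])
fake_loss = bce_from_logits([0.0, 0.0], [-4.0, -5.0])
print(real_loss + fake_loss)  # a small total loss
```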
### Generator loss
The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, we will compare the discriminator's decisions on the generated images to an array of 1s.
```
def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)
```
The discriminator and the generator optimizers are different since we will train two networks separately.
```
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
```
### Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
```
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
```
## Define the training loop
```
EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
# We will reuse this seed over time (so it's easier
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
```
The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and discriminator.
```
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
    noise = tf.random.normal([BATCH_SIZE, noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)
        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
def train(dataset, epochs):
    for epoch in range(epochs):
        start = time.time()
        for image_batch in dataset:
            train_step(image_batch)
        # Produce images for the GIF as we go
        display.clear_output(wait=True)
        generate_and_save_images(generator,
                                 epoch + 1,
                                 seed)
        # Save the model every 15 epochs
        if (epoch + 1) % 15 == 0:
            checkpoint.save(file_prefix=checkpoint_prefix)
        print('Time for epoch {} is {} sec'.format(epoch + 1, time.time() - start))
    # Generate after the final epoch
    display.clear_output(wait=True)
    generate_and_save_images(generator,
                             epochs,
                             seed)
```
**Generate and save images**
```
def generate_and_save_images(model, epoch, test_input):
    # Notice `training` is set to False.
    # This is so all layers run in inference mode (batchnorm).
    predictions = model(test_input, training=False)
    fig = plt.figure(figsize=(4, 4))
    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i + 1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.axis('off')
    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()
```
## Train the model
Call the `train()` method defined above to train the generator and discriminator simultaneously. Note, training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).
At the beginning of the training, the generated images look like random noise. As training progresses, the generated digits will look increasingly real. After about 50 epochs, they resemble MNIST digits. This may take about one minute / epoch with the default settings on Colab.
```
%%time
train(train_dataset, EPOCHS)
```
Restore the latest checkpoint.
```
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
```
## Create a GIF
```
# Display a single image using the epoch number
def display_image(epoch_no):
    return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS)
```
Use `imageio` to create an animated gif using the images saved during training.
```
with imageio.get_writer('dcgan.gif', mode='I') as writer:
    filenames = glob.glob('image*.png')
    filenames = sorted(filenames)
    last = -1
    for i, filename in enumerate(filenames):
        frame = 2 * (i ** 0.5)
        if round(frame) > round(last):
            last = frame
        else:
            continue
        image = imageio.imread(filename)
        writer.append_data(image)
    # Append the final frame once more so the GIF lingers on it
    image = imageio.imread(filename)
    writer.append_data(image)
# A hack to display the GIF inside this notebook
os.rename('dcgan.gif', 'dcgan.gif.png')
```
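The `2 * (i ** 0.5)` schedule above keeps a frame only when its rounded value increases, so early epochs are sampled densely and later ones sparsely. Which indices survive can be checked in isolation:

```python
# Which of 50 epoch images end up in the GIF under the 2*sqrt(i) schedule?
kept = []
last = -1
for i in range(50):
    frame = 2 * (i ** 0.5)
    if round(frame) > round(last):
        last = frame
        kept.append(i)
print(kept)  # gaps between kept indices grow over time
```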
Display the animated gif with all the images generated during the training of GANs.
```
display.Image(filename="dcgan.gif.png")
```
## Next steps
This tutorial has shown the complete code necessary to write and train a GAN. As a next step, you might like to experiment with a different dataset, for example the Large-scale Celeb Faces Attributes (CelebA) dataset [available on Kaggle](https://www.kaggle.com/jessicali9530/celeba-dataset/home). To learn more about GANs we recommend the [NIPS 2016 Tutorial: Generative Adversarial Networks](https://arxiv.org/abs/1701.00160).
# Detectron2 training notebook
Based on the [official Detectron 2 tutorial](
https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
This is a notebook that can generate the model used for the image segmentation. It is intended to be used on Google Colab, but can easily be changed to run locally.
To run this notebook you need some extra files, which are available in our Google Drive folder detectron2segment: the train and val folders, which contain the images and their respective polygon annotations.
The output you need to save from this notebook is
- model_final.pth
- config.yml
These are used to run inference and should be placed in this folder
# Install detectron2
```
# install dependencies:
!pip install pyyaml==5.1
!gcc --version
# CUDA 10.2
!pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2
# opencv is pre-installed on colab
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
!nvcc --version
# install detectron2: (Colab has CUDA 10.1 + torch 1.7)
# See https://detectron2.readthedocs.io/tutorials/install.html for instructions
import torch
assert torch.__version__.startswith("1.7") # please manually install torch 1.7 if Colab changes its default version
!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.7/index.html
exit(0) # After installation, you need to "restart runtime" in Colab. This line can also restart runtime
# Some basic setup:
# Setup detectron2 logger
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import os, json, cv2, random
from google.colab.patches import cv2_imshow
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
```
# Run a pre-trained detectron2 model
We first download an image from the COCO dataset:
```
!wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O input.jpg
im = cv2.imread("./input.jpg")
#im = cv2.imread("./drive/MyDrive/data/train/brace.jpg")
cv2_imshow(im)
```
Then, we create a detectron2 config and a detectron2 `DefaultPredictor` to run inference on this image.
```
cfg = get_cfg()
# add project-specific config (e.g., TensorMask) here if you're not running a model in detectron2's core library
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set threshold for this model
# Find a model from detectron2's model zoo. You can use the https://dl.fbaipublicfiles... url as well
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
outputs = predictor(im)
# We can use `Visualizer` to draw the predictions on the image.
v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])
# look at the outputs. See https://detectron2.readthedocs.io/tutorials/models.html#model-output-format for specification
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
print(outputs["instances"].pred_masks[0])
import matplotlib.pyplot as plt
outputs["instances"].pred_masks[0].shape
im = cv2.imread("./input.jpg")
plt.imshow(outputs["instances"].pred_masks[0].cpu())
plt.imshow(im)
```
Extract the relevant element
```
def extract(im):
mask2 = np.asarray(outputs["instances"].pred_masks[0].cpu())*1
mask3d = np.dstack((mask2, mask2, mask2)) # replicate the mask across the 3 color channels
# mask by multiplication, clip to range 0 to 255 and make integer
result2 = (im * mask3d).clip(0, 255).astype(np.uint8)
result2[mask3d==0] = 255
box = np.asarray(outputs["instances"].pred_boxes[0].to('cpu').tensor[0],dtype=int)
crop_img = result2[box[1]:box[3], box[0]:box[2]]
return crop_img
crop_img = extract(im)
cv2_imshow(crop_img)
```
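The masking inside `extract` can be hard to parse at a glance. Here is a minimal, self-contained sketch of the same idea on a made-up image: the boolean instance mask is replicated across the three color channels, multiplied into the image, and everything outside the mask is painted white.

```python
import numpy as np

# Sketch of the masking step in extract(), on hypothetical data:
im = np.full((4, 4, 3), 100, dtype=np.uint8)   # dummy 4x4 RGB image
mask2 = np.zeros((4, 4), dtype=int)
mask2[1:3, 1:3] = 1                            # pretend the instance covers the centre

mask3d = np.dstack((mask2, mask2, mask2))      # replicate mask across channels
result = (im * mask3d).clip(0, 255).astype(np.uint8)
result[mask3d == 0] = 255                      # white background outside the mask

print(result[0, 0], result[1, 1])              # → [255 255 255] [100 100 100]
```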
Try it on a piece of jewellery
# Train on a custom dataset
```
# if your dataset is in COCO format, this cell can be replaced by the following three lines:
# from detectron2.data.datasets import register_coco_instances
# register_coco_instances("my_dataset_train", {}, "json_annotation_train.json", "path/to/image/dir")
# register_coco_instances("my_dataset_val", {}, "json_annotation_val.json", "path/to/image/dir")
```
On the jewellery
```
from google.colab import drive
drive.mount('/content/drive')
from detectron2.structures import BoxMode
from detectron2.data import DatasetCatalog, MetadataCatalog
def get_jewellery_dicts(directory):
"""
NB: the image labels used to train must have more than 6 vertices.
"""
classes = ['Jewellery']
dataset_dicts = []
i = 0
for filename in [file for file in os.listdir(directory) if file.endswith('.json')]:
json_file = os.path.join(directory, filename)
with open(json_file) as f:
img_anns = json.load(f)
record = {}
filename = os.path.join(directory, img_anns["imagePath"])
record["image_id"] = i
record["file_name"] = filename
record["height"] = 340
record["width"] = 510
i+=1
annos = img_anns["shapes"]
objs = []
for anno in annos:
px = [a[0] for a in anno['points']]
py = [a[1] for a in anno['points']]
poly = [(x, y) for x, y in zip(px, py)]
poly = [p for x in poly for p in x]
obj = {
"bbox": [np.min(px), np.min(py), np.max(px), np.max(py)],
"bbox_mode": BoxMode.XYXY_ABS,
"segmentation": [poly],
"category_id": classes.index(anno['label']),
"iscrowd": 0
}
objs.append(obj)
record["annotations"] = objs
dataset_dicts.append(record)
return dataset_dicts
for d in ["train", "val"]:
DatasetCatalog.register("Jewellery_" + d, lambda d=d: get_jewellery_dicts('/content/drive/MyDrive/detectron2segment/' + d))
MetadataCatalog.get("Jewellery_" + d).set(thing_classes=['Jewellery'])
jewellery_metadata = MetadataCatalog.get("Jewellery_train")
dataset_dicts = get_jewellery_dicts("drive/MyDrive/detectron2segment/train")
for d in random.sample(dataset_dicts, 3):
img = cv2.imread(d["file_name"])
visualizer = Visualizer(img[:, :, ::-1], metadata=jewellery_metadata, scale=0.5)
out = visualizer.draw_dataset_dict(d)
cv2_imshow(out.get_image()[:, :, ::-1])
#train
from detectron2.engine import DefaultTrainer
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("Jewellery_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025 # pick a good LR
cfg.SOLVER.MAX_ITER = 1000 # 1000 iterations seems good enough for this dataset
cfg.SOLVER.STEPS = [] # do not decay learning rate
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this toy dataset (default: 512)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only has one class (jewellery). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets)
# NOTE: this config sets the number of classes; a few popular unofficial tutorials incorrectly use num_classes+1 here.
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
# Inference should use the config with parameters that are used in training
# cfg now already contains everything we've set previously. We changed it a little bit for inference:
f = open('config.yml', 'w')
f.write(cfg.dump())
f.close()
# cfg.MODEL.WEIGHTS = "/content/drive/MyDrive/detectron2segment/model_final.pth"
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") # path to the model we just trained
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.82 # set a custom testing threshold
predictor = DefaultPredictor(cfg)
from detectron2.utils.visualizer import ColorMode
dataset_dicts = get_jewellery_dicts("drive/MyDrive/detectron2segment/val")
for d in random.sample(dataset_dicts, 5):
im = cv2.imread(d["file_name"])
outputs = predictor(im) # format is documented at https://detectron2.readthedocs.io/tutorials/models.html#model-output-format
v = Visualizer(im[:, :, ::-1],
scale=0.5 # note: add instance_mode=ColorMode.IMAGE_BW to remove the colors of unsegmented pixels (only available for segmentation models)
)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])
im = cv2.imread("test.jpg")
outputs = predictor(im) # format is documented at https://detectron2.readthedocs.io/tutorials/models.html#model-output-format
v = Visualizer(im[:, :, ::-1],
scale=0.5 # note: add instance_mode=ColorMode.IMAGE_BW to remove the colors of unsegmented pixels (only available for segmentation models)
)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
evaluator = COCOEvaluator("Jewellery_val", ("bbox", "segm"), False, output_dir="./output/")
val_loader = build_detection_test_loader(cfg, "Jewellery_val")
print(inference_on_dataset(trainer.model, val_loader, evaluator))
# another equivalent way to evaluate the model is to use `trainer.test`
```
# More Feature Engineering - Wide and Deep models
**Learning Objectives**
* Build a Wide and Deep model using the appropriate Tensorflow feature columns
## Introduction
In this notebook we'll use what we learned about feature columns to build a Wide & Deep model. Recall that the idea behind Wide & Deep models is to join the two methods of learning, memorization and generalization, by combining a wide linear model and a deep learning model to accommodate both.
<img src='assets/wide_deep.png' width='80%'>
<sup>(image: https://ai.googleblog.com/2016/06/wide-deep-learning-better-together-with.html)</sup>
The Wide part of the model is associated with the memory element. In this case, we train a linear model with a wide set of crossed features and learn the correlation of this related data with the assigned label. The Deep part of the model is associated with the generalization element where we use embedding vectors for features. The best embeddings are then learned through the training process. While both of these methods can work well alone, Wide & Deep models excel by combining these techniques together.
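As a rough illustration of the idea (all shapes and weights below are made up, and this is plain NumPy rather than the Estimator API used in this notebook), a Wide & Deep prediction is simply the sum of a linear model over sparse crossed features and an MLP over dense embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Wide & Deep forward pass (all shapes hypothetical):
n_crossed, embed_dim, hidden = 100, 8, 16

wide_x = np.zeros(n_crossed); wide_x[[3, 42]] = 1.0   # one-hot crossed features
deep_x = rng.normal(size=embed_dim)                    # dense embedding input

w_wide = rng.normal(size=n_crossed)                    # linear "memorization" weights
W1 = rng.normal(size=(embed_dim, hidden))
W2 = rng.normal(size=hidden)                           # deep "generalization" MLP

wide_out = wide_x @ w_wide                             # wide path: linear model
deep_out = np.maximum(deep_x @ W1, 0) @ W2             # deep path: one ReLU hidden layer
prediction = wide_out + deep_out                       # joint output, trained together
```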
```
# Ensure that we have Tensorflow 1.13 installed.
!pip3 freeze | grep tensorflow==1.13.1 || pip3 install tensorflow==1.13.1
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
```
## Load raw data
These are the same files created in the `create_datasets.ipynb` notebook
```
!gsutil cp gs://cloud-training-demos/taxifare/small/*.csv .
!ls -l *.csv
```
## Train and Evaluate input Functions
These are the same as before with one additional line of code: a call to `add_engineered_features()` from within the `_parse_row()` function.
```
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0],[40.0],[-74.0],[40.7]]
def read_dataset(csv_path):
def _parse_row(row):
# Decode the CSV row into list of TF tensors
fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)
# Pack the result into a dictionary
features = dict(zip(CSV_COLUMN_NAMES, fields))
# NEW: Add engineered features
features = add_engineered_features(features)
# Separate the label from the features
label = features.pop("fare_amount") # remove label from features and store
return features, label
# Create a dataset containing the text lines.
dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
dataset = dataset.flat_map(map_func = lambda filename:tf.data.TextLineDataset(filenames = filename).skip(count = 1))
# Parse each CSV row into correct (features,label) format for Estimator API
dataset = dataset.map(map_func = _parse_row)
return dataset
def train_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
return dataset
def eval_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
return dataset
```
## Feature columns for Wide and Deep model
For the Wide columns, we will create feature columns of crossed features. To do this, we'll create a collection of Tensorflow feature columns to pass to the `tf.feature_column.crossed_column` constructor. The Deep columns will consist of numeric columns and any embedding columns we want to create.
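Conceptually, a crossed column hashes the combined categorical values into a fixed number of buckets. A toy sketch of that behavior (the helper name is ours, not TensorFlow's):

```python
# Conceptual sketch of a hashed feature cross, e.g. (dayofweek, hourofday)
# hashed into 24 * 7 buckets as with fc_crossed_day_hr below.
def cross_bucket(dayofweek, hourofday, hash_bucket_size=24 * 7):
    return hash((dayofweek, hourofday)) % hash_bucket_size

b = cross_bucket(2, 14)
assert 0 <= b < 24 * 7           # every pair lands in a valid bucket
assert b == cross_bucket(2, 14)  # the mapping is deterministic
```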
```
# 1. One hot encode dayofweek and hourofday
fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 7)
fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)
# 2. Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist()
lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist()
fc_bucketized_plat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplat"), boundaries = latbuckets)
fc_bucketized_plon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplon"), boundaries = lonbuckets)
fc_bucketized_dlat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflat"), boundaries = latbuckets)
fc_bucketized_dlon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflon"), boundaries = lonbuckets)
# 3. Cross features to get combination of day and hour
fc_crossed_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)
fc_crossed_dloc = tf.feature_column.crossed_column(keys = [fc_bucketized_dlat, fc_bucketized_dlon], hash_bucket_size = NBUCKETS * NBUCKETS)
fc_crossed_ploc = tf.feature_column.crossed_column(keys = [fc_bucketized_plat, fc_bucketized_plon], hash_bucket_size = NBUCKETS * NBUCKETS)
fc_crossed_pd_pair = tf.feature_column.crossed_column(keys = [fc_crossed_dloc, fc_crossed_ploc], hash_bucket_size = NBUCKETS**4)
```
We also add our engineered features that we used previously.
```
def add_engineered_features(features):
features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6
features["latdiff"] = features["pickuplat"] - features["dropofflat"] # North/South
features["londiff"] = features["pickuplon"] - features["dropofflon"] # East/West
features["euclidean_dist"] = tf.sqrt(x = features["latdiff"]**2 + features["londiff"]**2)
return features
```
### Gather list of feature columns
Next we gather the list of wide and deep feature columns we'll pass to our Wide & Deep model in Tensorflow. To do this, we'll create a function `get_wide_deep` which will use our previously bucketized columns to collect crossed feature columns and sparse feature columns for our wide columns, and embedding feature columns and numeric feature columns for the deep columns.
```
def get_wide_deep():
# Wide columns are sparse, have linear relationship with the output
wide_columns = [
# Feature crosses
fc_crossed_day_hr, fc_crossed_dloc,
fc_crossed_ploc, fc_crossed_pd_pair,
# Sparse columns
fc_dayofweek, fc_hourofday
]
# Continuous columns are deep, have a complex relationship with the output
deep_columns = [
# Embedding_column to "group" together ...
tf.feature_column.embedding_column(categorical_column = fc_crossed_pd_pair, dimension = 10),
tf.feature_column.embedding_column(categorical_column = fc_crossed_day_hr, dimension = 10),
# Numeric columns
tf.feature_column.numeric_column(key = "pickuplat"),
tf.feature_column.numeric_column(key = "pickuplon"),
tf.feature_column.numeric_column(key = "dropofflon"),
tf.feature_column.numeric_column(key = "dropofflat"),
tf.feature_column.numeric_column(key = "latdiff"),
tf.feature_column.numeric_column(key = "londiff"),
tf.feature_column.numeric_column(key = "euclidean_dist"),
tf.feature_column.indicator_column(categorical_column = fc_crossed_day_hr),
]
return wide_columns, deep_columns
```
## Serving Input Receiver function
Same as before except the received tensors are wrapped with `add_engineered_features()`.
```
def serving_input_receiver_fn():
receiver_tensors = {
'dayofweek' : tf.placeholder(dtype = tf.int32, shape = [None]), # shape is vector to allow batch of requests
'hourofday' : tf.placeholder(dtype = tf.int32, shape = [None]),
'pickuplon' : tf.placeholder(dtype = tf.float32, shape = [None]),
'pickuplat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflon' : tf.placeholder(dtype = tf.float32, shape = [None]),
}
features = add_engineered_features(receiver_tensors) # 'features' is what is passed on to the model
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = receiver_tensors)
```
## Train and Evaluate (500 train steps)
The same as before, we'll train the model for 500 steps (sidenote: how many epochs do 500 train steps represent?). Let's see how the engineered features we've added affect the performance. Note the use of `tf.estimator.DNNLinearCombinedRegressor` below.
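To answer the sidenote: one train step consumes one batch, so the number of epochs is `steps * batch_size / n_examples`. A quick back-of-envelope calculation (the dataset size below is hypothetical):

```python
# One train step consumes one batch of examples, so:
steps, batch_size, n_examples = 500, 128, 64_000   # n_examples is made up
epochs = steps * batch_size / n_examples
print(epochs)  # → 1.0
```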
```
%%time
OUTDIR = "taxi_trained_wd/500"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
# Collect the wide and deep columns from above
wide_columns, deep_columns = get_wide_deep()
model = tf.estimator.DNNLinearCombinedRegressor(
model_dir = OUTDIR,
linear_feature_columns = wide_columns,
dnn_feature_columns = deep_columns,
dnn_hidden_units = [10,10], # specify neural architecture
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = 100 # checkpoint every N steps
)
)
# Add custom evaluation metric
def my_rmse(labels, predictions):
pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn("./taxi-train.csv"),
max_steps = 500)
exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn("./taxi-valid.csv"),
steps = None,
start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec)
```
### Results
Our RMSE for the Wide and Deep model is worse than for the DNN. However, we have only trained for 500 steps and it looks like the model is still learning. Just as before, let's run again, this time for 10x as many steps so we can give a fair comparison.
## Train and Evaluate (5,000 train steps)
Now, just as above, we'll execute a longer training job with 5,000 train steps using our engineered features and assess the performance.
```
%%time
OUTDIR = "taxi_trained_wd/5000"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
# Collect the wide and deep columns from above
wide_columns, deep_columns = get_wide_deep()
model = tf.estimator.DNNLinearCombinedRegressor(
model_dir = OUTDIR,
linear_feature_columns = wide_columns,
dnn_feature_columns = deep_columns,
dnn_hidden_units = [10,10], # specify neural architecture
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = 100 # checkpoint every N steps
)
)
# Add custom evaluation metric
def my_rmse(labels, predictions):
pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn("./taxi-train.csv"),
max_steps = 5000)
exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn("./taxi-valid.csv"),
steps = None,
start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec)
```
### Results
Our RMSE is better but still not as good as the DNN we built. It looks like RMSE may still be reducing, but training is getting slow so we should move to the cloud if we want to train longer.
Also we haven't explored our hyperparameters much. Is our neural architecture of two layers with 10 nodes each optimal?
In the next notebook we'll explore this.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
## Facies classification using Machine Learning
#### Joshua Poirier, NEOS
Let's take a different approach from traditional machine learning algorithms. Something simple. For each **test** observation, I will cross-correlate it (and surrounding observations - a log section) against all log sections in the **train** data set. The highest correlation (averaged across all logs) gets to assign its facies to the **test** observation.
### Load the data
Let's load the entire training data set and perform some quick pre-processing (turning the non-marine/marine feature into a boolean). Let's also center and scale the data - but I'll do it on a well-by-well basis to correct for any instrument/environmental bias.
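This notebook is in R, but the per-well standardization is easy to sketch in Python as well (the well names are real wells from this data set; the log values below are made up): each well's log is centered and scaled using only that well's own mean and standard deviation.

```python
import numpy as np

# Per-well z-scoring, sketched in Python (values are hypothetical):
logs = {
    "SHRIMPLIN": np.array([60.0, 80.0, 100.0]),
    "SHANKLE":   np.array([20.0, 30.0, 40.0]),
}
# Center and scale each well independently (ddof=1 matches R's sd()).
scaled = {w: (x - x.mean()) / x.std(ddof=1) for w, x in logs.items()}

for w, x in scaled.items():
    assert abs(x.mean()) < 1e-9 and abs(x.std(ddof=1) - 1.0) < 1e-9
```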
```
source("loadData.R")
# load and clean the data
data <- loadData()
data <- cleanData(data)
dataPrime <- data.frame()
wells <- unique(data$Well.Name)
for (well_i in wells) {
data_i <- data[data$Well.Name == well_i,]
data_i$GR <- (data_i$GR - mean(data_i$GR, na.rm=T)) / sd(data_i$GR, na.rm=T)
data_i$ILD_log10 <- (data_i$ILD_log10 - mean(data_i$ILD_log10, na.rm=T)) / sd(data_i$ILD_log10, na.rm=T)
data_i$DeltaPHI <- (data_i$DeltaPHI - mean(data_i$DeltaPHI, na.rm=T)) / sd(data_i$DeltaPHI, na.rm=T)
data_i$PHIND <- (data_i$PHIND - mean(data_i$PHIND, na.rm=T)) / sd(data_i$PHIND, na.rm=T)
data_i$PE <- (data_i$PE - mean(data_i$PE, na.rm=T)) / sd(data_i$PE, na.rm=T)
dataPrime <- rbind(dataPrime, data_i)
}
data <- dataPrime
rm(dataPrime)
format(head(data,3), digits=3)
```
### Training function
For each **test** observation I will now cross-correlate its section (the observation and *n* observations above/below it) against each well. Each well will provide the best correlation (averaged across each log) found, as well as the corresponding *facies* at that **train** observation.
My suspicion is that the advantage of this approach will leverage **all** the data. Other approaches must choose between subsetting the observations (as the **PE** log is only available for some wells) or subsetting the features (excluding the **PE** log) in order to utilize all observations.
A disadvantage may be that it does not make sense to utilize our **Recruit F9** pseudowell - as it is composed of manually selected observations independent of spatial context. This approach attempts to leverage spatial (vertical) context by cross-correlating log sections as opposed to looking at each observation individually. This in my opinion is closer to how a petrophysicist works.
```
source("mirrorData.R")
corrPredict <- function(train, test, l) {
wells <- unique(train$Well.Name)
for (i in 1:nrow(test)) {
top <- i - l / 2 + 1
base <- i + l / 2
test_i <- subsetData(test, top, base)
for (well_j in wells) {
train_j <- train[train$Well.Name == well_j,]
cors <- data.frame()
for (k in 1:nrow(train_j)) {
top_k <- k - l / 2 + 1
base_k <- k + l / 2
train_jk <- subsetData(train_j, top_k, base_k)
corGR <- cor(test_i$GR, train_jk$GR)
corILD <- cor(test_i$ILD_log10, train_jk$ILD_log10)
corDeltaPHI <- cor(test_i$DeltaPHI, train_jk$DeltaPHI)
corPHIND <- cor(test_i$PHIND, train_jk$PHIND)
if (sum(!is.na(test_i$PE)) == nrow(test_i) & sum(!is.na(train_jk$PE)) == nrow(train_jk)) {
corPE <- cor(test_i$PE, train_jk$PE)
} else { corPE <- NA }
c <- c(corGR, corILD, corDeltaPHI, corPHIND, corPE)
corAVG <- mean(c, na.rm=T)
temp <- data.frame(corGR=corGR, corILD=corILD, corDeltaPHI=corDeltaPHI, corPHIND=corPHIND, corPE=corPE,
corAVG=corAVG,
testWell=test$Well.Name[i], trainWell=well_j,
testDepth=test$Depth[i], trainDepth=train_j$Depth[k])
cors <- rbind(cors, temp)
}
best_j <- cors[which.max(cors$corAVG),]
test[i, paste0("Facies_", well_j)] <- train_j[train_j$Depth==best_j$trainDepth[1], "Facies"][1]
test[i, paste0("Corr_", well_j)] <- best_j$corAVG[1]
}
}
test
}
```
### Cross-validation
Before we include the contest **test** wells (STUART and CRAWFORD), let's perform some cross-validation to see what type of performance we may expect with this unorthodox machine learning approach. To simulate contest conditions, I will hold out each possible two-well combination as a **test** set. The **train** set will be the remaining wells. As such, I will be building a model for each combination and we can see how much the performance varies.
Each well combination will call the previously defined **corrPredict** function, which will identify each **train** well's vote. Instead of a democratic vote, I will simply take the highest cross-correlation across all wells and choose that **train** observation's facies as the prediction.
Each well combination will also print out the names of the two test wells and the F1-score from that model.
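The leave-two-wells-out scheme can be sketched as follows (shown in Python with a shortened, illustrative well list): every unordered pair of wells is held out once, and the remaining wells form the training set.

```python
from itertools import combinations

# Enumerate every two-well hold-out split (well list shortened for illustration).
wells = ["SHRIMPLIN", "SHANKLE", "LUKE G U", "CROSS H CATTLE"]
splits = [(pair, [w for w in wells if w not in pair])
          for pair in combinations(wells, 2)]

print(len(splits))  # → 6 splits for 4 wells (4 choose 2)
```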
```
source("accuracyMetrics.R")
wells <- unique(data$Well.Name)
wells <- wells[!wells %in% c("Recruit F9")]
# loop through test well pairs
for (i in 1:(length(wells)-1)) {
for (j in (i+1):(length(wells))) {
trainIndex <- data$Well.Name != wells[i] & data$Well.Name != wells[j]
train <- data[trainIndex & data$Well.Name != "Recruit F9",]
test <- data[!trainIndex,]
trainWells <- unique(train$Well.Name)
testPrime <- corrPredict(train, test, 20)
print(head(testPrime))
# find the best cross correlation from each well - use that as the predictor
# for (i in 1:nrow(testPrime)) {
# c <- NULL
# f <- NULL
# for (well_j in trainWells) {
# c <- c(c, testPrime[i, paste0("Corr_", well_j)])
# f <- c(f, testPrime[i, paste0("Facies_", well_j)])
# }
# j <- which.max(c)
# testPrime[i, "Predicted"] <- f[j]
# }
# testPrime$Predicted <- as.factor(testPrime$Predicted)
# levels(testPrime$Predicted) <- c("SS", "CSiS", "FSiS", "SiSh", "MS", "WS", "D", "PS", "BS")
# print(paste("-----------",
# "\nTest well 1:", wells[i],
# "\nTest well 2:", wells[j],
# "\nF1-score:", myF1Metric(testPrime$Predicted, testPrime$Facies),
# "\n-----------"))
}
}
```
<a href="https://colab.research.google.com/github/ChamaniS/ANN-exercises/blob/master/HAR_using_CNN_%26_LSTM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/gdrive')
import numpy as np
import keras
import _pickle as cPickle
import tensorflow
from numpy import mean
from numpy import std
from numpy import dstack
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense, TimeDistributed, Conv1D, MaxPooling1D
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers import LSTM
from keras.utils import to_categorical
import matplotlib.pyplot as plt
import time
from sklearn import metrics
import csv
def load_file(filepath):
dataframe = read_csv(filepath, header=None, delim_whitespace=True)
return dataframe.values
# load a list of files and return as a 3d numpy array
def load_group(filenames, prefix=''):
loaded = list()
for name in filenames:
data = load_file(prefix + name)
loaded.append(data)
# stack group so that features are the 3rd dimension
loaded = dstack(loaded)
return loaded
# load a dataset group, such as train or test
def load_dataset_group(group, prefix=''):
filepath = prefix + group + '/Inertial Signals/'
# load all 9 files as a single array
filenames = list()
# total acceleration
filenames += ['total_acc_x_' + group + '.txt', 'total_acc_y_' + group + '.txt', 'total_acc_z_' + group + '.txt']
# body acceleration
filenames += ['body_acc_x_' + group + '.txt', 'body_acc_y_' + group + '.txt', 'body_acc_z_' + group + '.txt']
# body gyroscope
filenames += ['body_gyro_x_' + group + '.txt', 'body_gyro_y_' + group + '.txt', 'body_gyro_z_' + group + '.txt']
# load input data
X = load_group(filenames, filepath)
# load class output
y = load_file(prefix + group + '/y_' + group + '.txt')
return X, y
# load the dataset, returns train and test X and y elements
def load_dataset(prefix=''):
# load all train
trainX, trainy = load_dataset_group('train', prefix + '/content/gdrive/My Drive/Colab Notebooks/UCI HAR Dataset/UCI HAR Dataset/')
print(trainX.shape, trainy.shape)
# load all test
testX, testy = load_dataset_group('test', prefix + '/content/gdrive/My Drive/Colab Notebooks/UCI HAR Dataset/UCI HAR Dataset/')
print(testX.shape, testy.shape)
# zero-offset class values
trainy = trainy - 1
testy = testy - 1
# one hot encode y
trainy = to_categorical(trainy)
testy = to_categorical(testy)
print(trainX.shape, trainy.shape, testX.shape, testy.shape)
return trainX, trainy, testX, testy
# fit and evaluate a model
def evaluate_model(trainX, trainy, testX, testy):
# define model
verbose, epochs, batch_size = 1, 100, 64
n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
# reshape data into time steps of sub-sequences
n_steps, n_length = 4, 32
trainX = trainX.reshape((trainX.shape[0], n_steps, n_length, n_features))
testX = testX.reshape((testX.shape[0], n_steps, n_length, n_features))
# define model
model = Sequential()
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=3, activation='relu'), input_shape=(None, n_length, n_features)))
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=3, activation='relu')))
model.add(TimeDistributed(Dropout(0.5)))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(100))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
# fit network
# ts1 = time.time()
# print ("time1",ts1)
history=model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose, validation_data=(testX, testy))
# ts2 = time.time()
# print ("time2",ts2)
# print("time2-time1",(ts2-ts1))
# test model
#ts3 = time.time()
#print ("time3",ts3)
test_model(model, testX, verbose, batch_size, n_outputs)
_, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=verbose)
#ts4 = time.time()
#print ("time4",ts4)
#print("time4-time3",(ts4-ts3))
return accuracy,history
# predict on the test set and save the predictions to disk
def test_model(model, testX, verbose, batch_size, n_outputs):
	prediction_list = model.predict(testX, batch_size=batch_size, steps=None, verbose=verbose)
	predictions_transformed = np.eye(n_outputs, dtype=int)[np.argmax(prediction_list, axis=1)]
	np.savetxt('predictions.txt', prediction_list)
	np.savetxt('predictions_trans.txt', np.argmax(predictions_transformed, axis=1))
def plot_accuracy(history):
	acc = history.history['acc']
	val_acc = history.history['val_acc']
	epochs = range(1, len(acc) + 1)
	plt.plot(epochs, acc, 'bo', label='Training acc')
	plt.plot(epochs, val_acc, 'orange', label='Validation acc')
	plt.title('Training and validation accuracy')
	plt.legend()
	plt.figure()
def plot_loss(history):
	loss = history.history['loss']
	val_loss = history.history['val_loss']
	epochs = range(1, len(loss) + 1)
	plt.plot(epochs, loss, 'bo', label='Training loss')
	plt.plot(epochs, val_loss, 'orange', label='Validation loss')
	plt.title('Training and validation loss')
	plt.legend()
	plt.figure()
# summarize scores
def summarize_results(scores):
	print(scores)
	m, s = mean(scores), std(scores)
	print('Accuracy: %.3f%% (+/-%.3f)' % (m, s))
def plot_predictions():
	x = list(range(1, 101))
	actual = read_csv("/content/gdrive/My Drive/Colab Notebooks/UCI HAR Dataset/UCI HAR Dataset/test/y_test.txt", nrows=100, header=None, delim_whitespace=True)
	predicted = read_csv("predictions_trans.txt", nrows=100, header=None, delim_whitespace=True)
	predicted = [v + 1 for v in np.array(predicted)]
	plt.plot(x, actual)
	plt.plot(x, predicted, color='r')
	plt.show()
	with open('/content/gdrive/My Drive/Colab Notebooks/UCI HAR Dataset/UCI HAR Dataset/test/y_test.txt', newline='') as csvfile:
		actualok = list(csv.reader(csvfile))
	with open('predictions_trans.txt', newline='') as csvfile:
		predictedok = list(csv.reader(csvfile))
	print("Confusion Matrix:")
	confusion_matrix = metrics.confusion_matrix(actualok, predictedok)
	print(confusion_matrix)
	normalised_confusion_matrix = np.array(confusion_matrix, dtype=np.float32) / np.sum(confusion_matrix) * 100
	# Plot Results:
	width = 8
	height = 8
	plt.figure(figsize=(width, height))
	plt.imshow(
		normalised_confusion_matrix,
		interpolation='nearest',
		cmap=plt.cm.rainbow
	)
	plt.title("Confusion matrix \n(normalised to % of total test data)")
	print(normalised_confusion_matrix)
	LABELS = [
		"WALKING",
		"WALKING_UPSTAIRS",
		"WALKING_DOWNSTAIRS",
		"SITTING",
		"STANDING",
		"LAYING"
	]
	plt.colorbar()
	tick_marks = np.arange(6)
	plt.xticks(tick_marks, LABELS, rotation=90)
	plt.yticks(tick_marks, LABELS)
	plt.tight_layout()
	plt.ylabel('True label')
	plt.xlabel('Predicted label')
	plt.show()
def run_experiment(repeats=5):
	# load data
	trainX, trainy, testX, testy = load_dataset()
	# repeat experiment
	scores = list()
	for r in range(repeats):
		score, history = evaluate_model(trainX, trainy, testX, testy)
		score = score * 100.0
		print("Accuracy : ", score)
		scores.append(score)
		plot_predictions()
		plot_accuracy(history)
		plot_loss(history)
	summarize_results(scores)
# run the experiment
run_experiment()
```
<h1 align = center><font size = 5>Image Processing With Python (Matplotlib, NumPy and OpenCV)</font></h1>
<h1>Introduction!</h1>
<h3>Welcome</h3>
<p>In this section, you will learn how to obtain the histogram of an image, normalize image intensities, and calculate the cumulative histogram. By the end of this lab you will have successfully learned histogram equalization.</p>
### Prerequisite:
* [Python Tutorial](https://docs.python.org/3/tutorial/)
* [Numpy Tutorial](https://numpy.org/doc/stable/user/absolute_beginners.html)
* [Matplotlib Image Tutorial](https://matplotlib.org/tutorials/introductory/images.html#sphx-glr-tutorials-introductory-images-py)
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ol>
    <li><a href="#histogram_">Histogram Calculation</a></li>
    <li><a href="#intensity_norm">Intensity Normalization</a></li>
    <li><a href="#cummulative_histogram">Cumulative Histogram</a></li>
    <li><a href="#Histogram_eq">Histogram Equalization</a></li>
</ol>
</div>
<h2>What is the purpose of the histogram?</h2>
<p>A histogram describes the frequency of intensity values and brightness variation, showing how the individual brightness levels are occupied in an image. If the image were darker overall, the histogram would be concentrated towards black; if the image were brighter but with lower contrast, the histogram would be thinner and concentrated near the brighter levels.</p>
<p>Histograms reveal problems that originate during image acquisition, such as excessive noise in the image; if the ideal histogram is known, we may want to remove this noise.</p>
### Import Packages.
```
import os # Miscellaneous operating system interface.
import numpy as np # linear algebra and scientific computing library.
import matplotlib.pyplot as plt # visualization package.
import matplotlib.image as mpimg # image reading and displaying package.
import cv2 # computer vision library.
%matplotlib inline
```
### list of images:
listing the images in the directory.
```
# images path
path = r'..\..\images'  # raw string so the backslashes are not treated as escapes
# list of images names
list_imgs = os.listdir(os.path.join(path,'img_proc'))
print(list_imgs)
```
### Reading Image:
* each image has a width and height (2-dim matrix).
* each pixel value $I(u,v)$ is called an intensity, where `u` is the row index and `v` is the column index.
```
# display sample.
path = os.path.join(path,'img_proc',list_imgs[10])
image = mpimg.imread(path)
# printing image dimensions.
print(image.shape)
```
Looking at the result above, the image has `2003` rows (height) and `3000` columns (width); in addition, the third element represents the `3` color channels.
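To make the row/column ordering concrete, here is a tiny sketch with a synthetic array (the sizes below are made up for illustration):

```python
import numpy as np

# A synthetic 4x6 RGB "image": NumPy shape is (rows, columns, channels),
# i.e. (height, width, 3) -- the row count (height) comes first.
img = np.zeros((4, 6, 3), dtype=np.uint8)
height, width, channels = img.shape
print(height, width, channels)  # 4 6 3
```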
### Convert Image to grayscale:
```
gray_img = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)  # mpimg reads images as RGB, not BGR
# printing grayscale image dimension.
print(gray_img.shape)
```
Above, we can see that the printed shape is a little different: there are just two elements, the height and width.
<br>
A grayscale image has just one color channel.
#### elements of image datatype.
```
# depth of the grayscale image is 8-bit
print(gray_img.dtype)
```
#### Displaying the image.
```
# display sample.
plt.imshow(gray_img,cmap='gray')
plt.show()
```
## 1-Histogram Calculation:
### Grayscale image Histogram:
each histogram entry is defined as $h(i) = \mathrm{card}\{(u,v) \mid I(u,v) = i\}$ $\forall$ $i \in [0, K-1]$
* $h(i)$ is the number of pixels in $I$ with intensity value $i$
* $K = 2^8 = 256$, so intensity values range from $0$ to $K - 1 = 255$
* $h(0), h(1), h(2), \dots, h(255)$
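As a quick sketch of the definition above, $h(i)$ can be computed directly with `np.bincount` on a toy array (the random "image" below is just for illustration):

```python
import numpy as np

# h(i) = number of pixels with intensity value i, for i in [0, 255].
rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # toy "image"
h = np.bincount(gray.ravel(), minlength=256)
print(h.shape[0], h.sum())  # 256 bins; the counts sum to 8*8 = 64 pixels
```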
### Histogram Calculation Methods:
* Numpy Methods
* OpenCV Method.
* Matplotlib Method for calculating and visualizing in one step.
#### Numpy Methods.
```
# numpy method-1
un_val,occ_counts = np.unique(gray_img,return_counts = True)
plt.plot(un_val,occ_counts)
plt.show()
# histogram calculation with 'numpy.histogram()' function (method-2).
hist_np,_= np.histogram(gray_img.flatten(),256,[0,256])
plt.plot(hist_np)
plt.show()
```
#### OpenCV Method.
```
# histogram calculation with the OpenCV 'cv2.calcHist()' function (method-3).
hist_cv = cv2.calcHist([gray_img],[0],None,[256],[0,256])
plt.plot(hist_cv)
plt.show()
```
#### Matplotlib Method.
```
# calculate and display the histogram with Matplotlib in one step (method-4).
fig,axes = plt.subplots(1,2,figsize=(12,8))
# display grayscale image and it's histogram.
axes[0].imshow(gray_img,cmap='gray')
axes[1].hist(gray_img.flatten(),256,[0,256])
# show the figure.
plt.show()
```
### 2-Intensity Normalization:
stretching the range of image intensities: instead of dealing with values in $[0, 255]$, the values will range over $[0, 1]$.
<br>
Here is the equation: $N_{x,y} = \frac{O_{x,y}}{O_{max}}$
* $N_{x,y}$ $\rightarrow$ new image (output).
* $O_{x,y}$ $\rightarrow$ old image (input).
* $(x,y) \rightarrow$ the pixel coordinates.
* $O_{max}$ $\rightarrow$ maximum intensity value in the input image.
```
# image normalization
normalized_image = gray_img / float(gray_img.max())
# plotting figure.
fig, axes = plt.subplots(2,2,figsize=(12,8))
# display grayscale image and it's histogram.
axes[0][0].imshow(gray_img,cmap = 'gray')
axes[0][1].hist(gray_img.flatten(),256)
# display normalized grayscale image and it's histogram.
axes[1][0].imshow(normalized_image,cmap = 'gray')
axes[1][1].hist(normalized_image.flatten(),256)
# show the figure.
plt.show()
```
### 3-Cummulative Histogram:
derived from the ordinary histogram, and useful when performing certain image operations involving the histogram, e.g. histogram equalization. In other words, we use it to compute parameters for several common point operations.
<br>
<br>
Here is the mathematical formula: $H(i) = \sum_{j=0}^i h(j)$ for $0 \leq i \lt K$, where $K = 2^8$
* $H(i)$ is the sum of all histogram values $h(j)$ where $j \leq i$
* $H(K-1) = \sum_{j=0}^{K-1} h(j) = M \times N$ <br> where $M \times N$ is the total number of pixels in the image (width $M$, height $N$)
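The property that the last cumulative value equals the pixel count can be checked on a toy array (sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
gray = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)  # toy "image"
hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
H = hist.cumsum()
# The last cumulative value equals the total pixel count M*N.
print(H[-1], gray.size)  # 256 256
```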
```
# method-1
_,hist = np.unique(gray_img,return_counts=True)
cum_hist = np.cumsum(hist)
plt.plot(cum_hist)
plt.show()
# cumulative histogram calculation.
hist_0,bins = np.histogram(gray_img.flatten(),256,[0,256])
cum_hist = hist_0.cumsum()
fig,axes = plt.subplots(1,2,figsize=(12,8))
axes[0].plot(hist_0)
axes[1].plot(cum_hist)
plt.show()
```
### 4-Histogram Equalization:
<p>
a nonlinear process that redistributes image brightness in a way particularly suited to human visual analysis, producing a picture with a flatter histogram in which all levels are equiprobable.
</p>
Here is the first equation: $f_{eq}(a) = \lfloor H(a)\cdot \frac{K-1}{M \cdot N}\rfloor$
<br>
$E(q,O) = \frac{N_{max} - N_{min}}{N^2}\times\displaystyle\sum_{l=0}^ p O(l)$
<br>
$N_{x,y} = E(O_{x,y},O)$
* $E(q,O)$ $\rightarrow$ function taking the cumulative histogram and the image as input.
* $(N_{max} - N_{min})$ $\rightarrow$ $(K-1) = 2^8 - 1 = 255$, where $N_{max} = 255$ and $N_{min} = 0$
* $l$ $\rightarrow$ each level value.
* $p \rightarrow$ the levels of the histogram.
* $\sum_{l = 0}^p O(l) \rightarrow$ cumulative histogram
* $N^2 \rightarrow$ image width times height $(M \times N)$
#### Numpy method.
```
# histogram equalization calculation.
hist_eq = cum_hist * (255/(gray_img.shape[0] * gray_img.shape[1]))
eq_image = hist_eq[gray_img]
# cumulative histogram calculation.
hist_0,bins = np.histogram(eq_image.flatten(),256,[0,256])
cum_hist = hist_0.cumsum()
fig, axes = plt.subplots(1,3,figsize=(16,8))
axes[0].imshow(eq_image,cmap='gray')
axes[1].hist(eq_image.flatten(),256)
axes[2].plot(cum_hist)
plt.show()
```
#### OpenCV Method.
```
# display the equalized histogram
eq_img_cv = cv2.equalizeHist(gray_img)
# cumulative histogram calculation.
hist_0,bins = np.histogram(eq_img_cv.flatten(),256,[0,256])
cum_hist = hist_0.cumsum()
fig,axes = plt.subplots(1,3,figsize=(16,8))
axes[0].imshow(eq_img_cv,cmap='gray')
axes[1].hist(eq_img_cv.flatten(),256)
axes[2].plot(cum_hist)
plt.show()
```
### About The Author:
This notebook was written by Mohamed Salah Hassan Akel, Machine Learning Engineer.
<hr>
<p>Copyright © 2020 Mohamed Akel Youtube Channel. This notebook and its source code are released under the terms of the <a href="">MIT License</a>.</p>
# Interrupted Time Series Analysis - Time Series with control (in R)
```
library(nlme)
library(car)
library(tseries)
library(ggplot2)
library(lmtest)
library(LSTS)
library(data.table)
data <- read.csv(file="data/gp.csv", header=TRUE, sep=",")
nrow(data) # expecting 74 weeks
data$diff <- data$rate_attend_control - data$rate_attend_pilot
```
## Plot data
```
head(data)
```
## 1. Visual Impact Analysis of Interruptions
```
plot(data$time[1:74],data$rate_attend_pilot[1:74],
ylab="ED attendences per 10k registered population",
ylim=c(0,10),
xlab="Week of year",
type="l",
col="blue",
xaxt="n")
#Weekend working
points(data$time[37:61],data$rate_attend_pilot[37:61],
type='l',
col="dark green")
#Weekend working ended
points(data$time[62:74],data$rate_attend_pilot[62:74],
type='l',
col="red")
axis(1, at=1:74, labels=data$time[1:74])
# Add in the points for the figure
points(data$time[1:36],data$rate_attend_pilot[1:36],
col="blue",
pch=20)
points(data$time[37:61],data$rate_attend_pilot[37:61],
col="dark green",
pch=20)
points(data$time[62:74],data$rate_attend_pilot[62:74],
col="red",
pch=20)
# Interruption 1. GPs begin to work Friday evenings and Saturdays
abline(v=36.5,lty=2, lwd = 2)
# Interruption 2. GPs stop working Friday evenings and Saturdays
abline(v=61.5, lty=2, lwd = 2)
# Add in a legend
legend(x=2, y=2, legend=c("No weekend working","GPs work Fri evening and Sat", "Winter pressure funding ends"),
col=c("blue","dark green", "red"),pch=20, cex=0.6, pt.cex = 1)
plot(data$time[1:74],data$diff[1:74],
ylab="ED attendences per 10k registered population",
ylim=c(-1,10),
xlab="Week of year",
type="l",
col="blue",
xaxt="n")
#Weekend working
points(data$time[37:61],data$diff[37:61],
type='l',
col="dark green")
#Weekend working ended
points(data$time[62:74],data$diff[62:74],
type='l',
col="red")
axis(1, at=1:74, labels=data$time[1:74])
# Add in the points for the figure
points(data$time[1:36],data$diff[1:36],
col="blue",
pch=20)
points(data$time[37:61],data$diff[37:61],
col="dark green",
pch=20)
points(data$time[62:74],data$diff[62:74],
col="red",
pch=20)
# Interruption 1. GPs begin to work Friday evenings and Saturdays
abline(v=36.5,lty=2, lwd = 2)
# Interruption 2. GPs stop working Friday evenings and Saturdays
abline(v=61.5, lty=2, lwd = 2)
# Add in a legend
legend(x=2, y=2, legend=c("No weekend working","GPs work Fri evening and Sat", "Winter pressure funding ends"),
col=c("blue","dark green", "red"),pch=20, cex=0.6, pt.cex = 1)
```
## 2. ITS - Linear Regression using Generalised Least Squares
```
# A preliminary OLS regression
model_ols <- lm(rate_attend_pilot ~ time + wknd + wknd_trend + end + end_trend + outlier, data=data)
summary(model_ols)
# A preliminary OLS regression
model_ols2 <- lm(diff ~ time + wknd + wknd_trend + end + end_trend, data=data)
summary(model_ols2)
# Plot ACF and PACF
# Set plotting to two records on one page
par(mfrow=c(2,1))
# Produce plots
acf(residuals(model_ols2))
acf(residuals(model_ols2),type='partial')
# Fit the GLS regression model
gls_m1<- gls(rate_attend_pilot ~ time + wknd + wknd_trend + end + end_trend + outlier,
data=data,
correlation=corARMA(p=10,form=~time),
method="ML")
summary(gls_m1)
confint(gls_m1)
model_final = gls_m1
# Produce the plot, first plotting the raw data points
plot(data$time,data$rate_attend_pilot,
ylim=c(0,8),
ylab="Attendences (per 10k pop)",
xlab="Week",
pch=20,
col="pink",
xaxt="n")
# Add x axis with dates
#axis(1, at=1:35, labels=data$year)
# Interruption 1. GPs begin to work Friday evenings and Saturdays
abline(v=35.5,lty=2, lwd = 2)
# Interruption 2. GPs stop working Friday evenings and Saturdays
abline(v=61.5, lty=2, lwd = 2)
# The before line
lines(data$time[1:35], fitted(model_final)[1:35], col="red",lwd=2)
# The during line
lines(data$time[36:61], fitted(model_final)[36:61], col="red",lwd=2)
#The after line
lines(data$time[62:74], fitted(model_final)[62:74], col="red",lwd=2)
axis(1, at=1:74, labels=data$time[1:74])
#Line representing counterfactual to ED implementation
segments(36, model_final$coef[1]+model_final$coef[2]*36,
61, model_final$coef[1]+model_final$coef[2]*61,
lty=2, lwd=2, col='red')
# Line representing counterfactual to the ambulance implementation
#segments(41, model_final$coef[1] + model_final$coef[2]*41 +
# model_final$coef[3] + model_final$coef[4]*15,
# 54, model_final$coef[1] + model_final$coef[2]*54 +
# model_final$coef[3] + model_final$coef[4]*30,
# lty=2, lwd=2, col='red')
# Fit the GLS regression model
gls_m2<- gls(diff ~ time + wknd + wknd_trend + end + end_trend,
data=data,
correlation=corARMA(p=10,form=~time),
method="ML")
summary(gls_m2)
model_final = gls_m2
# Produce the plot, first plotting the raw data points
plot(data$time,data$diff,
ylim=c(-8,1),
ylab="Attendences (per 10k pop)",
xlab="Week",
pch=20,
col="pink",
xaxt="n")
# Add x axis with dates
#axis(1, at=1:35, labels=data$year)
# Interruption 1. GPs begin to work Friday evenings and Saturdays
abline(v=35.5,lty=2, lwd = 2)
# Interruption 2. GPs stop working Friday evenings and Saturdays
abline(v=61.5, lty=2, lwd = 2)
# The before line
lines(data$time[1:35], fitted(model_final)[1:35], col="red",lwd=2)
# The during line
lines(data$time[36:61], fitted(model_final)[36:61], col="red",lwd=2)
#The after line
lines(data$time[62:74], fitted(model_final)[62:74], col="red",lwd=2)
axis(1, at=1:74, labels=data$time[1:74])
#Line representing counterfactual to ED implementation
segments(36, model_final$coef[1]+model_final$coef[2]*36,
61, model_final$coef[1]+model_final$coef[2]*61,
lty=2, lwd=2, col='red')
# Line representing counterfactual to the ambulance implementation
#segments(41, model_final$coef[1] + model_final$coef[2]*41 +
# model_final$coef[3] + model_final$coef[4]*15,
# 54, model_final$coef[1] + model_final$coef[2]*54 +
# model_final$coef[3] + model_final$coef[4]*30,
# lty=2, lwd=2, col='red')
```
# Identifying Complexes in your Network of Protein-Protein Interactions
**Contact:**
- http://github.com/cokelaer
- http://thomas-cokelaer.info
**Date:** Feb 2015
## Introduction
The assumption is that you have a network of protein-protein interactions, in which you know the proteins by their
UniProt accession number (e.g., P43403).
You can, for instance, have a network in various formats (e.g., SIF, SBMLQual), in which case you may be interested in
looking at http://github.com/cellnopt/cellnopt . The SBMLQual format encodes networks of proteins as logical gates (ORs and ANDs). One issue is how you know that a logical gate is an AND. You can try all combinations and optimise the network against some data, as in a CellNOpt analysis. Another way is to use databases of complexes to identify such AND gates.
Here below, we use the **Complexes** class, which takes as input a list of UniProt identifiers and returns possible logical AND gates. The database used behind the scenes is **IntAct Complex**.
First we import the module of interest, found in biokit.network.complexes
```
import biokit
%pylab inline
from biokit.network import complexes
```
We create an instance, setting the cache to True. This is an option from BioServices that creates a
local file to speed up future requests. You can set it to False if you do not want to create local
cache files. The code will take about 30 seconds to run; if you use the cache, next time it will take half a second.
```
c = complexes.Complexes(cache=True)
```
By default the organism selected is *Homo sapiens*. Complexes for the Intact Complex database are not stored internally.
As an example, you can plot the histogram of the number of participants (proteins) within each complex
```
count = c.hist_participants()
```
Further analysis is shown below by focusing on the participants
themselves: how many occurrences, and how many unique proteins.
```
c.stats()
```
Some complexes are actually homodimers, which we may want to ignore
```
_ = c.remove_homodimers()
c.stats()
```
### Looking for complexes (AND gates)
We must provide a list of UniProt identifiers. Here is an example. The output of the **search_complexes** function is
twofold:
- a dictionary containing the complexes for which all participants are included in the user's list of proteins
- a dictionary containing the complexes for which only some participants are included in the user's list (partial)
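The two dictionaries can be sketched with plain set operations; the complex IDs and most accession numbers below are made up for illustration:

```python
# Hypothetical data: complex ID -> set of participant accessions.
complexes = {
    "CPX-0001": {"P51553", "P50213"},   # fully covered by the user list
    "CPX-0002": {"P29400", "Q00000"},   # only partially covered
}
user_species = {"P51553", "P50213", "P29400"}

# Complexes whose participants are ALL in the user list (candidate AND gates).
full = {cid: members for cid, members in complexes.items()
        if members <= user_species}
# Complexes with SOME (but not all) participants in the user list.
partial = {cid: members & user_species for cid, members in complexes.items()
           if members & user_species and not members <= user_species}
print(full)     # CPX-0001 with both of its members
print(partial)  # CPX-0002 restricted to the overlapping member
```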
```
user_species = ['P51234', 'P11111', 'P22222', 'P33333', u'P51553',
u'P50213', 'Q01955', 'P53420', u'P29400', 'Q8TD08', 'P47985', 'CHEBI:18420']
and_gates, subset = c.search_complexes(user_species, verbose=True)
and_gates
subset
```
For convenience, we wrap the **search_complexes** method in a **report** method that returns a dataframe with all relevant
information
```
c.report(user_species)
```
#### Other utilities
Instead of looking at a set of proteins, you can search for a given one
to figure out whether it is included in the database at all
```
# search for a given species
c.search('P51553')
```
Some participants are actually not proteins but chemical compounds (e.g. magnesium), which are provided as ChEBI
identifiers. Again, we use BioServices behind the scenes and provide a small method to get the name of a ChEBI identifier
```
c.chebi2name('CHEBI:18420')
```
Similarly, proteins are encoded as accession numbers, which can be translated into a list of possible gene names as follows
```
c.uniprot2genename('P16234')
```
Finally, all details about a complex can be retrieved by looking at the dictionary **complexes**
```
c.complexes['EBI-1224506']['name']
```
### Reproducibility
```
import easydev
for x in easydev.dependencies.get_dependencies('biokit'):
print(x)
```
```
import numpy as np
np.random.seed(1337)
import matplotlib.pyplot as plt
x, y, z = np.random.randn(3,10,2)
x.shape, y.shape, z.shape
plt.scatter(*x.T, c="r", label="x")
plt.scatter(*y.T, c="g", label="y")
plt.scatter(*z.T, c="b", label="z")
plt.legend()
plt.scatter(*(x/np.linalg.norm(x,axis=1,keepdims=True)).T, c="r", label="x")
plt.scatter(*(y/np.linalg.norm(y,axis=1,keepdims=True)).T, c="g", label="y")
plt.scatter(*(z/np.linalg.norm(z,axis=1,keepdims=True)).T, c="b", label="z")
plt.legend()
def max_jaccard(x, y):
"""
MaxPool-Jaccard similarity measure between two sentences
:param x: list of word embeddings for the first sentence
:param y: list of word embeddings for the second sentence
:return: similarity score between the two sentences
"""
m_x = np.max(x, axis=0)
m_x = np.maximum(m_x, 0, m_x)
m_y = np.max(y, axis=0)
m_y = np.maximum(m_y, 0, m_y)
m_inter = np.sum(np.minimum(m_x, m_y))
m_union = np.sum(np.maximum(m_x, m_y))
return m_inter / m_union
def fuzzify(s, u):
"""
Sentence fuzzifier.
Computes membership vector for the sentence S with respect to the
universe U
:param s: list of word embeddings for the sentence
:param u: the universe matrix U with shape (K, d)
:return: membership vectors for the sentence
"""
f_s = np.dot(s, u.T)
m_s = np.max(f_s, axis=0)
m_s = np.maximum(m_s, 0, m_s)
return m_s
def dynamax_jaccard(x, y):
"""
DynaMax-Jaccard similarity measure between two sentences
:param x: list of word embeddings for the first sentence
:param y: list of word embeddings for the second sentence
:return: similarity score between the two sentences
"""
u = np.vstack((x, y))
m_x = fuzzify(x, u)
m_y = fuzzify(y, u)
m_inter = np.sum(np.minimum(m_x, m_y))
m_union = np.sum(np.maximum(m_x, m_y))
return m_inter / m_union
max_jaccard(x, y), dynamax_jaccard(x, y)
max_jaccard(x, z), dynamax_jaccard(x, z)
max_jaccard(z, y), dynamax_jaccard(z, y)
m_x = np.max(x, axis=0)
m_x
m_x = np.maximum(m_x, 0, m_x)
m_x
m_y = np.max(y, axis=0)
m_y
m_y = np.maximum(m_y, 0, m_y)
m_y
m_inter = np.sum(np.minimum(m_x, m_y))
m_inter
m_union = np.sum(np.maximum(m_x, m_y))
m_union
m_inter / m_union
dynamax_jaccard(x, x)
dynamax_jaccard(x/np.linalg.norm(x,axis=1,keepdims=True), y/np.linalg.norm(y,axis=1,keepdims=True))
u = np.vstack((x/np.linalg.norm(x,axis=1,keepdims=True), y/np.linalg.norm(y,axis=1,keepdims=True)))
u.shape
f_s = np.dot(x/np.linalg.norm(x,axis=1,keepdims=True), u.T)
f_s.shape
u_norm = np.linalg.norm(u, axis=1)
u_norm
np.argsort(u_norm)
np.max(f_s, axis=0)
np.argmax(f_s, axis=0)
m_x = fuzzify(x, u)
m_x.shape
m_y = fuzzify(y, u)
m_y.shape
m_inter = np.sum(np.minimum(m_x, m_y))
m_inter
m_union = np.sum(np.maximum(m_x, m_y))
m_union
m_inter / m_union
u = np.vstack((x, y))
u.shape
f_s = np.dot(x, u.T)
f_s.shape
m_s = np.max(f_s, axis=0)
m_s = np.maximum(m_s, 0, m_s)
m_s
f_s = np.dot(u, u.T)
f_s.shape
%timeit np.maximum(m_s, 0, m_s)
%timeit np.clip(m_s, 0, None, m_s)
%%timeit
f_s = np.dot(u, u.T)
np.split(f_s, [x.shape[0]])
%%timeit
f_s = np.dot(x, u.T)
f_s = np.dot(y, u.T)
m_x = np.max(f_s[:x.shape[0]], axis=0)
m_s = np.maximum(m_s, 0, m_s)
m_s
```
# Expungability Hypothetical: Elimination of Petition Eligibility (Converting Petition to Automatic)
```
import sqlalchemy as sa
from sqlalchemy import create_engine
import psycopg2 as db
import pandas as pd
import numpy as np
import os
from matplotlib import pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns
import sidetable
pd.set_option('display.max_colwidth', None)
## loading from db
postPass=os.environ["POSTGRES_PASS"]
try:
conn = db.connect(host='localhost', database='expunge', user='jupyter', password=postPass, port='5432')
except:
print("I am unable to connect to the database")
cur = conn.cursor()
try:
tables=cur.execute("select * from pg_catalog.pg_tables WHERE schemaname != 'information_schema' AND schemaname != 'pg_catalog';")
print(cur)
except:
print("I can't drop our test database!")
## grabbing court data
myquery = """
SELECT * FROM public.data_1k_sample
"""
courtdata = pd.read_sql(myquery, con=conn)
courtdata.head()
courtdata.columns
conditionals = [
courtdata["expungable"] == "Petition",
courtdata["expungable"] == "Petition (pending)",
(courtdata["expungable"] != "Petition (pending)") & (courtdata["expungable"] != "Petition")]
labels = ["Automatic", "Automatic (pending)", courtdata["expungable"]]
courtdata["expungable_no_petition"] = np.select(conditionals, labels)
# courtdata.head()
# courtdata.tail()
```
# Graph 1: General Expungability Graphs
### Expungability without Petition
```
# percentages without petition
df_percentages_no_petition = pd.DataFrame(courtdata.groupby("expungable_no_petition").size(), columns=['Count'])
df_percentages_no_petition['Cumulative_Percent'] = 100*(df_percentages_no_petition.Count.cumsum() / df_percentages_no_petition.Count.sum())
df_percentages_no_petition['Cumulative_Count'] = df_percentages_no_petition.Count.cumsum()
df_percentages_no_petition['Percentage'] = df_percentages_no_petition.Count.apply(lambda x: x/df_percentages_no_petition['Count'].sum())
df_percentages_no_petition
# Graph of Expungability without Petition
df_percentages_no_petition.plot.bar(x=None, y="Percentage")
plt.ylabel("Percentage")
plt.title("Expungability when Petition is Converted to Automatic")
```
### Expungability with Petition
```
# percentages with petition
df_percentages = pd.DataFrame(courtdata.groupby("expungable").size(), columns=['Count'])
df_percentages['Cumulative_Percent'] = 100*(df_percentages.Count.cumsum() / df_percentages.Count.sum())
df_percentages['Cumulative_Count'] = df_percentages.Count.cumsum()
df_percentages['Percentage'] = df_percentages.Count.apply(lambda x: x/df_percentages['Count'].sum())
df_percentages
# Graph of Expungability with Petition
df_percentages.plot.bar(x=None, y="Percentage")
plt.ylabel("Percentage")
plt.title("Expungability with Petition")
```
# Graph 2: Percent Change when Petition is Converted to Automatic
```
# merge percentage tables
merged_percentages = df_percentages.reset_index()[["expungable", "Percentage"]].merge(df_percentages_no_petition.reset_index()[["expungable_no_petition", "Percentage"]], how="outer", left_on="expungable", right_on="expungable_no_petition", suffixes=["_w_pet", "_wo_pet"])
merged_percentages
# convert nan to 0
merged_percentages.fillna(0, inplace=True)
merged_percentages
# drop the expungable_no_petition column
merged_percentages.drop(columns=["expungable_no_petition"], inplace=True)
merged_percentages
# add difference column
merged_percentages["percent_difference"] = merged_percentages["Percentage_wo_pet"] - merged_percentages["Percentage_w_pet"]
merged_percentages
# Graph Percent Change when Petition is Converted to Automatic
merged_percentages.plot.bar(x="expungable", y="percent_difference", color="blue")
plt.ylabel("Percent Difference")
plt.title("Percent Change when Petition is Converted to Automatic")
plt.axhline(color="black")
```
# Graph 3: Expungability by Race
### Expungability by Race without Petition
```
courtdata.stb.freq(['Race'])
replace_map = {'Black(Non-Hispanic)':'Black (Non-Hispanic)',
'White Caucasian(Non-Hispanic)':'White (Non-Hispanic)',
'Other(Includes Not Applicable.. Unknown)':'Other',
'White Caucasian (Non-Hispanic)':'White (Non-Hispanic)',
'Unknown (Includes Not Applicable.. Unknown)':'Other',
'NA':'Other',
'Asian Or Pacific Islander':'Asian or Pacific Islander',
'Black (Non-Hispanic)':'Black (Non-Hispanic)',
'White':'White (Non-Hispanic)',
'American Indian':'American Indian or Alaskan Native',
'Unknown':'Other',
'Other (Includes Not Applicable.. Unknown)':'Other',
'Black':'Black (Non-Hispanic)',
'American Indian or Alaskan Native':'American Indian or Alaskan Native',
'American Indian Or Alaskan Native':'American Indian or Alaskan Native',
'Asian or Pacific Islander':'Asian or Pacific Islander'}
courtdata.Race = courtdata.Race.replace(replace_map)
courtdata.stb.freq(['Race'])
rowtable_no_petition = (pd.crosstab(courtdata.Race, courtdata.expungable_no_petition, normalize='index')*100).round(2).reset_index()
rowtable_no_petition
# graph by race without petition
barplot = pd.melt(rowtable_no_petition,
id_vars = ['Race'],
value_vars = ['Automatic', 'Automatic (pending)', 'Not eligible'])
plt.figure(figsize=(18, 6))
sns.barplot(x='Race', y='value', hue='expungable_no_petition', data=barplot).set(title="Breakdown by Race without Petition")
plt.ylabel("Percentage")
```
### Expungability by Race with Petition
```
rowtable = (pd.crosstab(courtdata.Race, courtdata.expungable, normalize='index')*100).round(2).reset_index()
rowtable
# graph by race with petition
barplot = pd.melt(rowtable,
id_vars = ['Race'],
value_vars = ['Automatic', 'Automatic (pending)', 'Not eligible', 'Petition', 'Petition (pending)'])
plt.figure(figsize=(18, 6))
sns.barplot(x='Race', y='value', hue='expungable', data=barplot).set(title="Breakdown by Race with Petition")
plt.ylabel("Percentage")
```
# Graph 4: Percent Change by Race when Petition is Converted to Automatic
```
# # merge percentage tables
# merged_percentages = rowtable.reset_index().merge(rowtable_no_petition.reset_index(), how="outer", on="Race", suffixes=["_w_pet", "_wo_pet"])
# merged_percentages
# # convert nan to 0
# merged_percentages.fillna(0, inplace=True)
# merged_percentages
# # add columns for petition and petittio
# # drop the expungable_no_petition column
# merged_percentages.drop(columns=["expungable_no_petition"], inplace=True)
# merged_percentages
# # add difference column
# merged_percentages["percent_difference"] = merged_percentages["Percentage_wo_pet"] - merged_percentages["Percentage_w_pet"]
# merged_percentages
```
## Evaluate Code Sections in 100k Sample
```
## grab court data
myquery = """
SELECT * FROM public.data_100k_sample
"""
courtdata_100k = pd.read_sql(myquery, con=conn)
# courtdata_100k.head()
# courtdata.groupby(["codesection", "expungable"]).count()
codesection_data = courtdata_100k[["codesection", "expungable", "person_id"]].groupby(["codesection", "expungable"]).count().rename(columns={"person_id": "counts"})
codesection_data
codesection_data.reset_index()
# # graph by race with petition
# barplot = pd.melt(codesection_data,
# id_vars = ['expungable'],
# value_vars = ['Automatic', 'Automatic (pending)', 'Not eligible', 'Petition', 'Petition (pending)'])
# plt.figure(figsize=(18, 6))
# sns.barplot(x='codesection', y='value', hue='expungable', data=barplot).set(title="Breakdown by Race with Petition")
# plt.ylabel("Percentage")
```
<a href="https://colab.research.google.com/github/ornob39/Python_For_DataScience_AI-IBM/blob/master/Python_For_DSandAI_5_1_Numpy1D.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="https://cocl.us/topNotebooksPython101Coursera">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
</a>
</div>
<a href="https://cognitiveclass.ai/">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
</a>
<h1>1D <code>Numpy</code> in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about using <code>Numpy</code> in the Python Programming Language. By the end of this lab, you'll know what <code>Numpy</code> is and the <code>Numpy</code> operations.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li><a href="pre">Preparation</a></li>
<li>
<a href="numpy">What is Numpy?</a>
<ul>
<li><a href="type">Type</a></li>
<li><a href="val">Assign Value</a></li>
<li><a href="slice">Slicing</a></li>
<li><a href="list">Assign Value with List</a></li>
<li><a href="other">Other Attributes</a></li>
</ul>
</li>
<li>
<a href="op">Numpy Array Operations</a>
<ul>
<li><a href="add">Array Addition</a></li>
<li><a href="multi">Array Multiplication</a></li>
<li><a href="prod">Product of Two Numpy Arrays</a></li>
<li><a href="dot">Dot Product</a></li>
<li><a href="cons">Adding Constant to a Numpy Array</a></li>
</ul>
</li>
<li><a href="math">Mathematical Functions</a></li>
<li><a href="lin">Linspace</a></li>
</ul>
<p>
Estimated time needed: <strong>30 min</strong>
</p>
</div>
<hr>
<h2 id="pre">Preparation</h2>
```
# Import the libraries
import time
import sys
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Plotting functions
def Plotvec1(u, z, v):
    ax = plt.axes()
    ax.arrow(0, 0, *u, head_width=0.05, color='r', head_length=0.1)
    plt.text(*(u + 0.1), 'u')
    ax.arrow(0, 0, *v, head_width=0.05, color='b', head_length=0.1)
    plt.text(*(v + 0.1), 'v')
    ax.arrow(0, 0, *z, head_width=0.05, head_length=0.1)
    plt.text(*(z + 0.1), 'z')
    plt.ylim(-2, 2)
    plt.xlim(-2, 2)

def Plotvec2(a, b):
    ax = plt.axes()
    ax.arrow(0, 0, *a, head_width=0.05, color='r', head_length=0.1)
    plt.text(*(a + 0.1), 'a')
    ax.arrow(0, 0, *b, head_width=0.05, color='b', head_length=0.1)
    plt.text(*(b + 0.1), 'b')
    plt.ylim(-2, 2)
    plt.xlim(-2, 2)
```
Create a Python List as follows:
```
# Create a python list
a = ["0", 1, "two", "3", 4]
```
We can access the data via an index:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumOneList.png" width="660" />
We can access each element using a square bracket as follows:
```
# Print each element
print("a[0]:", a[0])
print("a[1]:", a[1])
print("a[2]:", a[2])
print("a[3]:", a[3])
print("a[4]:", a[4])
```
<hr>
<h2 id="numpy">What is Numpy?</h2>
A numpy array is similar to a list. It's usually fixed in size and each element is of the same type. We can cast a list to a numpy array by first importing numpy:
```
# import numpy library
import numpy as np
```
We then cast the list as follows:
```
# Create a numpy array
a = np.array([0, 1, 2, 3, 4])
a
```
Each element is of the same type, in this case integers:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumOneNp.png" width="500" />
As with lists, we can access each element via a square bracket:
```
# Print each element
print("a[0]:", a[0])
print("a[1]:", a[1])
print("a[2]:", a[2])
print("a[3]:", a[3])
print("a[4]:", a[4])
```
<h3 id="type">Type</h3>
If we check the type of the array we get <b>numpy.ndarray</b>:
```
# Check the type of the array
type(a)
```
As numpy arrays contain data of the same type, we can use the attribute <code>dtype</code> to obtain the data type of the array's elements. In this case, a 64-bit integer:
```
# Check the type of the values stored in numpy array
a.dtype
```
We can create a numpy array with real numbers:
```
# Create a numpy array
b = np.array([3.1, 11.02, 6.2, 213.2, 5.2])
```
When we check the type of the array we get <b>numpy.ndarray</b>:
```
# Check the type of array
type(b)
```
If we examine the attribute <code>dtype</code> we see <code>float64</code>, as the elements are not integers:
```
# Check the value type
b.dtype
```
<h3 id="val">Assign value</h3>
We can change the value of the array, consider the array <code>c</code>:
```
# Create numpy array
c = np.array([20, 1, 2, 3, 4])
c
```
We can change the first element of the array to 100 as follows:
```
# Assign the first element to 100
c[0] = 100
c
```
We can change the 5th element of the array to 0 as follows:
```
# Assign the 5th element to 0
c[4] = 0
c
```
<h3 id="slice">Slicing</h3>
Like lists, we can slice a numpy array. We can select the elements from 1 to 3 and assign them to a new numpy array <code>d</code> as follows:
```
# Slicing the numpy array
d = c[1:4]
d
```
We can assign the corresponding indexes to new values as follows:
```
# Set the fourth element and fifth element to 300 and 400
c[3:5] = 300, 400
c
```
<h3 id="list">Assign Value with List</h3>
Similarly, we can use a list to select specific indexes.
The list <code>select</code> contains several values:
```
# Create the index list
select = [0, 2, 3]
```
We can use the list as an argument in the brackets. The output is the elements corresponding to the particular index:
```
# Use List to select elements
d = c[select]
d
```
We can assign the specified elements to a new value. For example, we can set them all to 100,000 as follows:
```
# Assign the specified elements to new value
c[select] = 100000
c
```
<h3 id="other">Other Attributes</h3>
Let's review some basic array attributes using the array <code>a</code>:
```
# Create a numpy array
a = np.array([0, 1, 2, 3, 4])
a
```
The attribute <code>size</code> is the number of elements in the array:
```
# Get the size of numpy array
a.size
```
The next two attributes will make more sense when we get to higher dimensions but let's review them. The attribute <code>ndim</code> represents the number of array dimensions or the rank of the array, in this case, one:
```
# Get the number of dimensions of numpy array
a.ndim
```
The attribute <code>shape</code> is a tuple of integers indicating the size of the array in each dimension:
```
# Get the shape/size of numpy array
a.shape
# Create a numpy array
a = np.array([1, -1, 1, -1])
# Get the mean of numpy array
mean = a.mean()
mean
# Get the standard deviation of numpy array
standard_deviation=a.std()
standard_deviation
# Create a numpy array
b = np.array([-1, 2, 3, 4, 5])
b
# Get the biggest value in the numpy array
max_b = b.max()
max_b
# Get the smallest value in the numpy array
min_b = b.min()
min_b
```
<hr>
<h2 id="op">Numpy Array Operations</h2>
<h3 id="add">Array Addition</h3>
Consider the numpy array <code>u</code>:
```
u = np.array([1, 0])
u
```
Consider the numpy array <code>v</code>:
```
v = np.array([0, 1])
v
```
We can add the two arrays and assign the result to <code>z</code>:
```
# Numpy Array Addition
z = u + v
z
```
The operation is equivalent to vector addition:
```
# Plot numpy arrays
Plotvec1(u, z, v)
```
<h3 id="multi">Array Multiplication</h3>
Consider the vector numpy array <code>y</code>:
```
# Create a numpy array
y = np.array([1, 2])
y
```
We can multiply every element in the array by 2:
```
# Numpy Array Multiplication
z = 2 * y
z
```
This is equivalent to multiplying a vector by a scalar:
<h3 id="prod">Product of Two Numpy Arrays</h3>
Consider the following array <code>u</code>:
```
# Create a numpy array
u = np.array([1, 2])
u
```
Consider the following array <code>v</code>:
```
# Create a numpy array
v = np.array([3, 2])
v
```
The element-wise product of the two numpy arrays <code>u</code> and <code>v</code> is given by:
```
# Calculate the element-wise product of two numpy arrays
z = u * v
z
```
<h3 id="dot">Dot Product</h3>
The dot product of the two numpy arrays <code>u</code> and <code>v</code> is given by:
```
# Calculate the dot product
np.dot(u, v)
```
<h3 id="cons">Adding Constant to a Numpy Array</h3>
Consider the following array:
```
# Create a numpy array
u = np.array([1, 2, 3, -1])
u
```
Adding the constant 1 to each element in the array:
```
# Add the constant to array
u + 1
```
The process is summarised in the following animation:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumOneAdd.gif" width="500" />
<hr>
<h2 id="math">Mathematical Functions</h2>
We can access the value of pi in numpy as follows:
```
# The value of pi
np.pi
```
We can create the following numpy array in Radians:
```
# Create the numpy array in radians
x = np.array([0, np.pi/2 , np.pi])
```
We can apply the function <code>sin</code> to the array <code>x</code> and assign the values to the array <code>y</code>; this applies the sine function to each element in the array:
```
# Calculate the sin of each elements
y = np.sin(x)
y
```
<hr>
<h2 id="lin">Linspace</h2>
A useful function for plotting mathematical functions is <code>linspace</code>. It returns evenly spaced numbers over a specified interval. We specify the starting point and the ending point of the sequence, and the parameter <code>num</code> indicates the number of samples to generate, in this case 5:
```
# Make a numpy array within [-2, 2] with 5 elements
np.linspace(-2, 2, num=5)
```
If we change the parameter <code>num</code> to 9, we get 9 evenly spaced numbers over the interval from -2 to 2:
```
# Make a numpy array within [-2, 2] with 9 elements
np.linspace(-2, 2, num=9)
```
We can use <code>linspace</code> to generate 100 evenly spaced samples from the interval 0 to 2π:
```
# Make a numpy array within [0, 2π] with 100 elements
x = np.linspace(0, 2*np.pi, num=100)
```
We can apply the sine function to each element in the array <code>x</code> and assign it to the array <code>y</code>:
```
# Calculate the sine of x list
y = np.sin(x)
# Plot the result
plt.plot(x, y)
```
<hr>
<h2 id="quiz">Quiz on 1D Numpy Array</h2>
Implement the following vector subtraction in numpy: u-v
```
# Write your code below and press Shift+Enter to execute
u = np.array([1, 0])
v = np.array([0, 1])
z=u-v
z
```
```
# This is formatted as code
```
Double-click __here__ for the solution.
<!-- Your answer is below:
u - v
-->
<hr>
Multiply the numpy array z with -2:
```
# Write your code below and press Shift+Enter to execute
z = np.array([2, 4])
x = -2 * z
x
```
Double-click __here__ for the solution.
<!-- Your answer is below:
-2 * z
-->
<hr>
Consider the list <code>[1, 2, 3, 4, 5]</code> and <code>[1, 0, 1, 0, 1]</code>, and cast both lists to a numpy array then multiply them together:
```
# Write your code below and press Shift+Enter to execute
a=np.array([1, 2, 3, 4, 5])
b=np.array([1, 0, 1, 0, 1])
mul = a*b
mul
```
Double-click __here__ for the solution.
<!-- Your answer is below:
a = np.array([1, 2, 3, 4, 5])
b = np.array([1, 0, 1, 0, 1])
a * b
-->
<hr>
Convert the list <code>[-1, 1]</code> and <code>[1, 1]</code> to numpy arrays <code>a</code> and <code>b</code>. Then, plot the arrays as vectors using the function <code>Plotvec2</code> and find the dot product:
```
# Write your code below and press Shift+Enter to execute
a = np.array([-1,1])
b = np.array([1,1])
Plotvec2(a,b)
c = np.dot(a,b)
print("Dot Product is :",c)
```
Double-click __here__ for the solution.
<!-- Your answer is below:
a = np.array([-1, 1])
b = np.array([1, 1])
Plotvec2(a, b)
print("The dot product is", np.dot(a,b))
-->
<hr>
Convert the list <code>[1, 0]</code> and <code>[0, 1]</code> to numpy arrays <code>a</code> and <code>b</code>. Then, plot the arrays as vectors using the function <code>Plotvec2</code> and find the dot product:
```
# Write your code below and press Shift+Enter to execute
a = np.array([1,0])
b = np.array([0,1])
Plotvec2(a,b)
c = np.dot(a,b)
print("Dot Product is :",c)
```
Double-click __here__ for the solution.
<!--
a = np.array([1, 0])
b = np.array([0, 1])
Plotvec2(a, b)
print("The dot product is", np.dot(a, b))
-->
<hr>
Convert the list <code>[1, 1]</code> and <code>[0, 1]</code> to numpy arrays <code>a</code> and <code>b</code>. Then plot the arrays as vectors using the function <code>Plotvec2</code> and find the dot product:
```
# Write your code below and press Shift+Enter to execute
a = np.array([1,1])
b = np.array([0,1])
Plotvec2(a,b)
c = np.dot(a,b)
print("Dot Product is :",c)
```
Double-click __here__ for the solution.
<!--
a = np.array([1, 1])
b = np.array([0, 1])
Plotvec2(a, b)
print("The dot product is", np.dot(a, b))
-->
<hr>
Why are the results of the dot product for <code>[-1, 1]</code> and <code>[1, 1]</code> and the dot product for <code>[1, 0]</code> and <code>[0, 1]</code> zero, but not zero for the dot product for <code>[1, 1]</code> and <code>[0, 1]</code>? <p><i>Hint: Study the corresponding figures, pay attention to the direction the arrows are pointing to.</i></p>
```
# Write your code below and press Shift+Enter to execute
# The vectors are perpendicular, so the angle between them is 90 degrees and cos(90°) = 0.
# Since A.B = |A|.|B|.cos(angle), the dot product is zero.
```
Double-click __here__ for the solution.
<!--
The vectors used for question 4 and 5 are perpendicular. As a result, the dot product is zero.
-->
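The geometric reasoning above can be checked numerically. This is a small standalone sketch (not part of the lab) that recovers the angle between two vectors from the identity a·b = |a||b|cos(θ):

```python
import numpy as np

def angle_between(a, b):
    # Recover the angle (in degrees) from a.b = |a||b|cos(theta);
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

print(angle_between(np.array([-1, 1]), np.array([1, 1])))  # 90.0 — perpendicular, dot product 0
print(angle_between(np.array([1, 0]), np.array([0, 1])))   # 90.0 — perpendicular, dot product 0
print(angle_between(np.array([1, 1]), np.array([0, 1])))   # 45.0 — not perpendicular, dot product 1
```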
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.</p>
<hr>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<h2>Get IBM Watson Studio free of charge!</h2>
<p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p>
</div>
<h3>About the Authors:</h3>
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
## Compare the assemblies in GenBank, GTDB, and RAST
Each has a different set. What are the unions and intersections?
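The comparisons in this notebook all follow one simple pattern over Python sets of accession IDs. As a toy sketch with made-up accessions (the IDs here are hypothetical, chosen only to illustrate the set algebra):

```python
# Toy accession sets with hypothetical IDs
genbank = {"GCA_000001", "GCA_000002", "GCA_000003"}
gtdb = {"GCA_000002", "GCA_000003", "GCA_000004"}
rast = {"GCA_000003", "GCA_000005"}

in_all = genbank & gtdb & rast       # intersection: present in every database
anywhere = genbank | gtdb | rast     # union: present in at least one
gbk_only = genbank - (gtdb | rast)   # difference: unique to GenBank

print(in_all)         # {'GCA_000003'}
print(len(anywhere))  # 5
print(gbk_only)       # {'GCA_000001'}
```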
```
# A lot of this is not used, but we import it so we have it later!
import os
import sys
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import pandas as pd
import seaborn as sns
import numpy as np
import math
import re
from PhiSpyAnalysis import theils_u, DateConverter, printmd
from PhiSpyAnalysis import read_phages, read_gtdb, read_checkv, read_base_pp, read_categories, read_metadata, read_gbk_metadata
from scipy.stats import pearsonr, f_oneway
from sklearn.linear_model import LinearRegression
from sklearn import decomposition
from sklearn.ensemble import RandomForestClassifier
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd, tukeyhsd, MultiComparison
from statsmodels.multivariate.manova import MANOVA
import subprocess
import gzip
rv = re.compile(r'^\w+')

def remove_ver(x):
    return rv.search(str(x)).group()
```
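As a quick sanity check, the version-stripping helper can be exercised on its own. This standalone sketch redefines it with illustrative accessions (`\w` matches letters, digits, and underscores, so the match stops at the dot before the version suffix):

```python
import re

# Standalone copy of the helper: keep only the leading word characters,
# which drops the ".1"/".2" version suffix from an assembly accession
rv = re.compile(r'^\w+')

def remove_ver(x):
    return rv.search(str(x)).group()

print(remove_ver("GCA_002129805.1"))  # GCA_002129805
print(remove_ver("GCF_000005845.2"))  # GCF_000005845
```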
# GTDB
```
# GTDB
gtdb = read_gtdb()
gtdb['assembly_nover'] = gtdb['assembly_accession'].apply(remove_ver)
gtdb[['assembly_accession', 'assembly_nover']]
gtdb
# RAST
# the full data set. Don't try this at home!
# metadf = pd.read_csv("../small_data/patric_genome_metadata.tsv.gz", compression='gzip', header=0, delimiter="\t")
rast = read_metadata()
rast['assembly_nover'] = rast['assembly_accession'].apply(remove_ver)
rast[['assembly_accession', 'assembly_nover']]
rast
```
# GenBank
This assembly summary comes from [GenBank ftp site](ftp://ftp.ncbi.nlm.nih.gov/genomes/genbank/) and you want the [assembly_summary.txt](ftp://ftp.ncbi.nlm.nih.gov/genomes/genbank/bacteria/assembly_summary.txt) file explicitly from bacteria (but don't try and open the bacteria list in your browser!)
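`read_gbk_metadata()` is a project helper, but the parsing it wraps can be sketched. The excerpt below is made up (the columns shown are assumptions; the real file has many more), mimicking the file's tab-separated layout with leading comment lines and a `# `-prefixed header row:

```python
import io
import pandas as pd

# Hypothetical excerpt mimicking the assembly_summary.txt layout
raw = (
    "## See the GenBank assembly summary README for column definitions\n"
    "# assembly_accession\torganism_name\tftp_path\n"
    "GCA_000005845.2\tEscherichia coli\tftp://example/path1\n"
    "GCA_000009605.1\tBuchnera aphidicola\tftp://example/path2\n"
)

# Skip the first comment line, take the second line as the header,
# then strip the leading "# " from the first column name
gbk = pd.read_csv(io.StringIO(raw), sep="\t", skiprows=1)
gbk.columns = [c.lstrip("# ") for c in gbk.columns]
print(gbk["assembly_accession"].tolist())
```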
```
# GenBank
gbk = read_gbk_metadata()
gbk['assembly_nover'] = gbk['assembly_accession'].apply(remove_ver)
gbk[['assembly_accession', 'assembly_nover']]
gbk
phagesdf = read_phages(maxcontigs=-1) # this disables contig length filtering
phagesdf['assembly_nover'] = phagesdf['assembly_accession'].apply(remove_ver)
phagesdf
phagesdf[phagesdf['Kept'] == 0]
phagesdf[['Total Predicted Prophages', 'Not enough genes', 'No phage genes', 'Kept']].agg('sum')
```
# What is in common between the groups?
```
gbkaa=set(gbk['assembly_accession'])
rastaa=set(rast['assembly_accession'])
gtdbaa=set(gtdb['assembly_accession'])
phagesaa=set(phagesdf['assembly_accession'])
gbr = gbkaa.intersection(rastaa)
gbg = gbkaa.intersection(gtdbaa)
gtr = gtdbaa.intersection(rastaa)
print(f"Between GBK and RAST there are {len(gbr):,} genomes in common")
print(f"Between GBK and GTDB there are {len(gbg):,} genomes in common")
print(f"Between GTDB and RAST there are {len(gtr):,} genomes in common")
print()
gbnotdone = gbkaa - phagesaa
gtnotdone = gtdbaa - phagesaa
ranotdone = rastaa - phagesaa
print(f"There are {len(gbnotdone):,} phages in Genbank that have not been analyzed")
print(f"There are {len(gtnotdone):,} phages in GTDB that have not been analyzed")
print(f"There are {len(ranotdone):,} phages in RAST that have not been analyzed")
print()
allmissing = gbnotdone.intersection(gtnotdone).intersection(ranotdone)
print(f"There are {len(allmissing):,} phages in all three that have not been analyzed")
if False:
    with open("../data/unprocessed_phages.txt", 'w') as out:
        for o in allmissing:
            gbk[gbk['assembly_accession'] == o][['assembly_accession', 'ftp_path']].to_csv(out, sep="\t", header=False, index=False)
print("IF WE IGNORE GENOME VERSIONS")
gbkaa=set(gbk['assembly_nover'])
rastaa=set(rast['assembly_nover'])
gtdbaa=set(gtdb['assembly_nover'])
phagesaa=set(phagesdf['assembly_nover'])
gbr = gbkaa.intersection(rastaa)
gbg = gbkaa.intersection(gtdbaa)
gtr = gtdbaa.intersection(rastaa)
print(f"There are {len(gbkaa):,} genomes in GenBank")
print(f"There are {len(rastaa):,} genomes in PATRIC")
print(f"There are {len(gtdbaa):,} genomes in GTDB")
print(f"Between GenBank and PATRIC there are {len(gbr):,} genomes in common")
print(f"Between GenBank and GTDB there are {len(gbg):,} genomes in common")
print(f"Between GTDB and PATRIC there are {len(gtr):,} genomes in common")
print()
gbnotdone = gbkaa - phagesaa
gtnotdone = gtdbaa - phagesaa
ranotdone = rastaa - phagesaa
print(f"There are {len(gbnotdone):,} phages in Genbank that have not been analyzed")
print(f"There are {len(gtnotdone):,} phages in GTDB that have not been analyzed")
print(f"There are {len(ranotdone):,} phages in PATRIC that have not been analyzed")
print()
allmissing = gbnotdone.intersection(gtnotdone).intersection(ranotdone)
print(f"There are {len(allmissing):,} phages in all three that have not been analyzed")
```
# Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: [Redmon et al., 2016](https://arxiv.org/abs/1506.02640) and [Redmon and Farhadi, 2016](https://arxiv.org/abs/1612.08242).
**You will learn to**:
- Use object detection on a car detection dataset
- Deal with bounding boxes
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "3a".
* You can find your original work saved in the notebook with the previous version name ("v3")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* Clarified "YOLO" instructions preceding the code.
* Added details about anchor boxes.
* Added explanation of how score is calculated.
* `yolo_filter_boxes`: added additional hints. Clarify syntax for argmax and max.
* `iou`: clarify instructions for finding the intersection.
* `iou`: give variable names for all 8 box vertices, for clarity. Adds `width` and `height` variables for clarity.
* `iou`: add test cases to check handling of non-intersecting boxes, intersection at vertices, or intersection at edges.
* `yolo_non_max_suppression`: clarify syntax for tf.image.non_max_suppression and keras.gather.
* "convert output of the model to usable bounding box tensors": Provides a link to the definition of `yolo_head`.
* `predict`: hint on calling sess.run.
* Spelling, grammar, wording and formatting updates to improve clarity.
## Import libraries
Run the following cell to load the packages and dependencies that you will find useful as you build the object detector!
```
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
```
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`.
## 1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We thank [drive.ai](https://www.drive.ai/) for providing this dataset.
</center></caption>
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250;">
<caption><center> <u> **Figure 1** </u>: **Definition of a box**<br> </center></caption>
If you have 80 classes that you want the object detector to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
In this exercise, you will learn how "You Only Look Once" (YOLO) performs object detection, and then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
## 2 - YOLO
"You Only Look Once" (YOLO) is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
### 2.1 - Model details
#### Inputs and outputs
- The **input** is a batch of images, and each image has the shape (m, 608, 608, 3)
- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
#### Anchor Boxes
* Anchor boxes are chosen by exploring the training data to choose reasonable height/width ratios that represent the different classes. For this assignment, 5 anchor boxes were chosen for you (to cover the 80 classes), and stored in the file './model_data/yolo_anchors.txt'
* The dimension for anchor boxes is the second to last dimension in the encoding: $(m, n_H,n_W,anchors,classes)$.
* The YOLO architecture is: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
#### Encoding
Let's look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 2** </u>: **Encoding architecture for YOLO**<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 3** </u>: **Flattening the last two dimensions**<br> </center></caption>
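This flattening is just a reshape: the grid dimensions stay put and the anchor and per-box dimensions merge. A quick sketch with a dummy tensor:

```python
import numpy as np

# Dummy encoding with the YOLO output shape: 19x19 grid, 5 anchors, 85 values per box
encoding = np.zeros((19, 19, 5, 85))

flat = encoding.reshape(19, 19, 5 * 85)  # merge the last two dimensions
print(flat.shape)                        # (19, 19, 425)

restored = flat.reshape(19, 19, 5, 85)   # the inverse split recovers the original shape
print(restored.shape)                    # (19, 19, 5, 85)
```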
#### Class score
Now, for each box (of each cell) we will compute the following element-wise product and extract a probability that the box contains a certain class.
The class score is $score_{c,i} = p_{c} \times c_{i}$: the probability that there is an object $p_{c}$ times the probability that the object is a certain class $c_{i}$.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 4** </u>: **Find the class detected by each box**<br> </center></caption>
##### Example of figure 4
* In figure 4, let's say for box 1 (cell 1), the probability that an object exists is $p_{1}=0.60$. So there's a 60% chance that an object exists in box 1 (cell 1).
* The probability that the object is the class "category 3 (a car)" is $c_{3}=0.73$.
* The score for box 1 and for category "3" is $score_{1,3}=0.60 \times 0.73 = 0.44$.
* Let's say we calculate the score for all 80 classes in box 1, and find that the score for the car class (class 3) is the maximum. So we'll assign the score 0.44 and class "3" to this box "1".
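The figure-4 arithmetic can be checked directly. This sketch uses the example numbers from the text, with all other class probabilities set to zero purely for illustration:

```python
import numpy as np

p_c = 0.60              # probability that an object exists in the box
class_probs = np.zeros(80)
class_probs[2] = 0.73   # class index 2 stands in for "category 3 (a car)"

scores = p_c * class_probs             # element-wise product: one score per class
best_class = int(np.argmax(scores))    # class with the maximum box score
print(best_class, round(float(scores[best_class]), 2))  # 2 0.44
```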
#### Visualizing classes
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across the 80 classes, one maximum for each of the 5 anchor boxes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300;">
<caption><center> <u> **Figure 5** </u>: Each one of the 19x19 grid cells is colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
#### Visualizing bounding boxes
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200;">
<caption><center> <u> **Figure 6** </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
#### Non-Max suppression
In the figure above, we plotted only boxes for which the model had assigned a high probability, but this is still too many boxes. You'd like to reduce the algorithm's output to a much smaller number of detected objects.
To do so, you'll use **non-max suppression**. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class; either due to the low probability of any object, or low probability of this particular class).
- Select only one box when several boxes overlap with each other and detect the same object.
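The assignment itself will implement these steps with `tf.image.non_max_suppression`; the plain-numpy greedy sketch below is only conceptual, not the graded implementation. It keeps the highest-scoring box, discards boxes that overlap it too much, and repeats:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2) corners."""
    xi1, yi1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xi2, yi2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)  # zero if the boxes don't intersect
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def greedy_nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it, repeat."""
    order = list(np.argsort(scores)[::-1])  # indices sorted by descending score
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = np.array([[0, 0, 2, 2], [0.1, 0.1, 2, 2], [3, 3, 4, 4]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(greedy_nms(boxes, scores))  # [0, 2] — box 1 overlaps box 0 and is suppressed
```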
### 2.2 - Filtering with a threshold on class scores
You are going to first apply a filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It is convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing the midpoint and dimensions $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes in each cell.
- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the "class probabilities" $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
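In the notebook this rearrangement is handled by `yolo_head`; as a rough illustration only, the split can be sketched by slicing the last axis of a dummy tensor. The slice order below simply follows the (p_c, b_x, b_y, b_h, b_w, c) description of a box and is an assumption for this sketch:

```python
import numpy as np

# Dummy (19, 19, 5, 85) encoding standing in for the Deep CNN output
encoding = np.random.randn(19, 19, 5, 85)

box_confidence = encoding[..., 0:1]   # p_c                 -> (19, 19, 5, 1)
boxes = encoding[..., 1:5]            # b_x, b_y, b_h, b_w  -> (19, 19, 5, 4)
box_class_probs = encoding[..., 5:]   # c_1 ... c_80        -> (19, 19, 5, 80)

print(box_confidence.shape, boxes.shape, box_class_probs.shape)
```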
#### **Exercise**: Implement `yolo_filter_boxes()`.
1. Compute box scores by doing the elementwise product as described in Figure 4 ($p \times c$).
The following code may help you choose the right operator:
```python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
```
This is an example of **broadcasting** (multiplying arrays of different shapes).
2. For each box, find:
- the index of the class with the maximum box score
- the corresponding box score
**Useful references**
* [Keras argmax](https://keras.io/backend/#argmax)
* [Keras max](https://keras.io/backend/#max)
**Additional Hints**
* For the `axis` parameter of `argmax` and `max`, if you want to select the **last** axis, one way to do so is to set `axis=-1`. This is similar to Python array indexing, where you can select the last position of an array using `arrayname[-1]`.
* Applying `max` normally collapses the axis for which the maximum is applied. `keepdims=False` is the default option, and allows that dimension to be removed. We don't need to keep the last dimension after applying the maximum here.
* Even though the documentation shows `keras.backend.argmax`, use `keras.argmax`. Similarly, use `keras.max`.
3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to `box_class_scores`, `boxes` and `box_classes` to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep.
**Useful reference**:
* [boolean mask](https://www.tensorflow.org/api_docs/python/tf/boolean_mask)
**Additional Hints**:
* For the `tf.boolean_mask`, we can keep the default `axis=None`.
**Reminder**: to call a Keras function, you should use `K.function(...)`.
```
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold=.6):
    """Filters YOLO boxes by thresholding on object and class confidence.

    Arguments:
    box_confidence -- tensor of shape (19, 19, 5, 1)
    boxes -- tensor of shape (19, 19, 5, 4)
    box_class_probs -- tensor of shape (19, 19, 5, 80)
    threshold -- real value, if [highest class probability score < threshold], then get rid of the corresponding box

    Returns:
    scores -- tensor of shape (None,), containing the class probability score for selected boxes
    boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
    classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes

    Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
    For example, the actual output size of scores would be (10,) if there are 10 boxes.
    """

    # Step 1: Compute box scores
    ### START CODE HERE ### (≈ 1 line)
    box_scores = box_confidence * box_class_probs
    ### END CODE HERE ###

    # Step 2: Find the box_classes using the max box_scores, keep track of the corresponding score
    ### START CODE HERE ### (≈ 2 lines)
    box_classes = K.argmax(box_scores, axis=-1)
    box_class_scores = K.max(box_scores, axis=-1)
    ### END CODE HERE ###

    # Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
    # same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
    ### START CODE HERE ### (≈ 1 line)
    filtering_mask = box_class_scores >= threshold
    ### END CODE HERE ###

    # Step 4: Apply the mask to box_class_scores, boxes and box_classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = tf.boolean_mask(box_class_scores, filtering_mask)
    boxes = tf.boolean_mask(boxes, filtering_mask)
    classes = tf.boolean_mask(box_classes, filtering_mask)
    ### END CODE HERE ###

    return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
**Note** In the test for `yolo_filter_boxes`, we're using random numbers to test the function. In real data, the `box_class_probs` would contain non-zero values between 0 and 1 for the probabilities. The box coordinates in `boxes` would also be chosen so that lengths and heights are non-negative.
### 2.3 - Non-max suppression ###
Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 7** </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) of the 3 boxes. <br> </center></caption>
Non-max suppression uses a very important function called **"Intersection over Union"**, or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 8** </u>: Definition of "Intersection over Union". <br> </center></caption>
#### **Exercise**: Implement iou(). Some hints:
- In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the top-right corner, and (1,1) is the bottom-right corner. In other words, the origin is at the top left of the image; as x increases, we move to the right, and as y increases, we move down.
- For this exercise, we define a box using its two corners: upper left $(x_1, y_1)$ and lower right $(x_2,y_2)$, instead of using the midpoint, height and width. (This makes it a bit easier to calculate the intersection).
- To calculate the area of a rectangle, multiply its height $(y_2 - y_1)$ by its width $(x_2 - x_1)$. (Since $(x_1, y_1)$ is the top left and $(x_2, y_2)$ is the bottom right, these differences should be non-negative.)
- To find the **intersection** of the two boxes $(xi_{1}, yi_{1}, xi_{2}, yi_{2})$:
- Feel free to draw some examples on paper to clarify this conceptually.
- The top left corner of the intersection $(xi_{1}, yi_{1})$ is found by comparing the top left corners $(x_1, y_1)$ of the two boxes and finding a vertex that has an x-coordinate that is closer to the right, and y-coordinate that is closer to the bottom.
- The bottom right corner of the intersection $(xi_{2}, yi_{2})$ is found by comparing the bottom right corners $(x_2,y_2)$ of the two boxes and finding a vertex whose x-coordinate is closer to the left, and the y-coordinate that is closer to the top.
- The two boxes **may have no intersection**. You can detect this if the intersection coordinates you calculate end up being the top right and/or bottom left corners of an intersection box. Another way to think of this is if you calculate the height $(y_2 - y_1)$ or width $(x_2 - x_1)$ and find that at least one of these lengths is negative, then there is no intersection (intersection area is zero).
- The two boxes may intersect at the **edges or vertices**, in which case the intersection area is still zero. This happens when either the height or width (or both) of the calculated intersection is zero.
**Additional Hints**
- `xi1` = **max**imum of the x1 coordinates of the two boxes
- `yi1` = **max**imum of the y1 coordinates of the two boxes
- `xi2` = **min**imum of the x2 coordinates of the two boxes
- `yi2` = **min**imum of the y2 coordinates of the two boxes
- `inter_area` = You can use `max(height, 0)` and `max(width, 0)`
```
# GRADED FUNCTION: iou

def iou(box1, box2):
    """Implement the intersection over union (IoU) between box1 and box2

    Arguments:
    box1 -- first box, list object with coordinates (box1_x1, box1_y1, box1_x2, box1_y2)
    box2 -- second box, list object with coordinates (box2_x1, box2_y1, box2_x2, box2_y2)
    """

    # Assign variable names to coordinates for clarity
    (box1_x1, box1_y1, box1_x2, box1_y2) = box1
    (box2_x1, box2_y1, box2_x2, box2_y2) = box2

    # Calculate the (xi1, yi1, xi2, yi2) coordinates of the intersection of box1 and box2. Calculate its Area.
    ### START CODE HERE ### (≈ 7 lines)
    xi1 = np.maximum(box1_x1, box2_x1)
    yi1 = np.maximum(box1_y1, box2_y1)
    xi2 = np.minimum(box1_x2, box2_x2)
    yi2 = np.minimum(box1_y2, box2_y2)
    inter_width = max(xi2 - xi1, 0)
    inter_height = max(yi2 - yi1, 0)
    inter_area = inter_width * inter_height
    ### END CODE HERE ###

    # Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
    ### START CODE HERE ### (≈ 3 lines)
    box1_area = (box1_x2 - box1_x1) * (box1_y2 - box1_y1)
    box2_area = (box2_x2 - box2_x1) * (box2_y2 - box2_y1)
    union_area = box1_area + box2_area - inter_area
    ### END CODE HERE ###

    # compute the IoU
    ### START CODE HERE ### (≈ 1 line)
    iou = inter_area / union_area
    ### END CODE HERE ###

    return iou

## Test case 1: boxes intersect
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou for intersecting boxes = " + str(iou(box1, box2)))

## Test case 2: boxes do not intersect
box1 = (1, 2, 3, 4)
box2 = (5, 6, 7, 8)
print("iou for non-intersecting boxes = " + str(iou(box1, box2)))

## Test case 3: boxes intersect at vertices only
box1 = (1, 1, 2, 2)
box2 = (2, 2, 3, 3)
print("iou for boxes that only touch at vertices = " + str(iou(box1, box2)))

## Test case 4: boxes intersect at edge only
box1 = (1, 1, 3, 3)
box2 = (2, 3, 3, 4)
print("iou for boxes that only touch at edges = " + str(iou(box1, box2)))
```
**Expected Output**:
```
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
```
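The first value can be checked by hand for test case 1, where box1 = (2, 1, 4, 3) and box2 = (1, 2, 3, 4):

```python
# Hand computation of test case 1: box1 = (2, 1, 4, 3), box2 = (1, 2, 3, 4)
xi1, yi1 = max(2, 1), max(1, 2)  # (2, 2): top-left corner of the intersection
xi2, yi2 = min(4, 3), min(3, 4)  # (3, 3): bottom-right corner of the intersection
inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)  # 1 x 1 = 1
box1_area = (4 - 2) * (3 - 1)    # 4
box2_area = (3 - 1) * (4 - 2)    # 4
union_area = box1_area + box2_area - inter_area     # 4 + 4 - 1 = 7
print(inter_area / union_area)   # 0.14285714285714285, i.e. 1/7
```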
#### YOLO non-max suppression
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute the overlap of this box with all other boxes, and remove boxes that overlap significantly (iou >= `iou_threshold`).
3. Go back to step 1 and iterate until no boxes remain to be processed.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
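The three steps above can be sketched in pure Python. This is only an illustration of the greedy loop, with a minimal IoU helper bundled in so it runs on its own; the graded version below uses TensorFlow's built-in op instead:

```python
def iou(b1, b2):
    # Minimal IoU helper: boxes are (x1, y1, x2, y2) tuples
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def nms(boxes, scores, iou_threshold=0.5, max_boxes=10):
    """Greedy non-max suppression; returns the indices of the kept boxes."""
    # Candidate indices, sorted by descending score
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order and len(keep) < max_boxes:
        best = order.pop(0)   # Step 1: select the highest-scoring remaining box
        keep.append(best)
        # Step 2: drop boxes that overlap the selected box too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
        # Step 3: loop again until no boxes remain
    return keep
```

For example, with two heavily overlapping boxes and one disjoint box, only the best of the overlapping pair survives along with the disjoint box.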
**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):
**Reference documentation**
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
```
tf.image.non_max_suppression(
boxes,
scores,
max_output_size,
iou_threshold=0.5,
name=None
)
```
Note that in the version of TensorFlow used here, there is no `score_threshold` parameter (it's shown in the documentation for the latest version), so trying to set this value will result in an error message: *got an unexpected keyword argument 'score_threshold'*.
- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/keras/backend/gather)
Even though the documentation shows `tf.keras.backend.gather()`, you can use `K.gather()`.
```
K.gather(
    reference,
    indices
)
```
```
# GRADED FUNCTION: yolo_non_max_suppression

def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
    """
    Applies Non-max suppression (NMS) to set of boxes

    Arguments:
    scores -- tensor of shape (None,), output of yolo_filter_boxes()
    boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
    classes -- tensor of shape (None,), output of yolo_filter_boxes()
    max_boxes -- integer, maximum number of predicted boxes you'd like
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (, None), predicted score for each box
    boxes -- tensor of shape (4, None), predicted box coordinates
    classes -- tensor of shape (, None), predicted class for each box

    Note: The "None" dimension of the output tensors has to be at most max_boxes. Note also that this
    function will transpose the shapes of scores, boxes, classes. This is done for convenience.
    """

    max_boxes_tensor = K.variable(max_boxes, dtype='int32')  # tensor to be used in tf.image.non_max_suppression()
    K.get_session().run(tf.variables_initializer([max_boxes_tensor]))  # initialize variable max_boxes_tensor

    # Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
    ### START CODE HERE ### (≈ 1 line)
    nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes_tensor, iou_threshold=iou_threshold)
    ### END CODE HERE ###

    # Use K.gather() to select only nms_indices from scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = K.gather(scores, nms_indices)
    boxes = K.gather(boxes, nms_indices)
    classes = K.gather(classes, nms_indices)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_b:
    scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
    classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
### 2.4 Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
**Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementational detail you have to know. There're a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes`
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
Don't worry about these two functions; we'll show you where they need to be called.
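For intuition, the rescaling step is easy to picture. The sketch below is a hypothetical stand-in for the provided `scale_boxes`, under the assumption that box corners come out of `yolo_boxes_to_corners` as fractions of the image in (y1, x1, y2, x2) order; the real helper may differ in detail:

```python
import numpy as np

def scale_boxes_sketch(boxes, image_shape):
    """Hedged sketch of box rescaling: boxes are assumed to hold
    (y1, x1, y2, x2) as fractions of the image; multiplying by the
    target (height, width) converts them to pixel coordinates."""
    height, width = image_shape
    image_dims = np.array([height, width, height, width])
    return boxes * image_dims

# One box covering the center quarter of a 720x1280 image
boxes = np.array([[0.25, 0.25, 0.75, 0.75]])
scaled = scale_boxes_sketch(boxes, (720., 1280.))
print(scaled)  # [[180. 320. 540. 960.]]
```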
```
# GRADED FUNCTION: yolo_eval

def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
    """
    Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.

    Arguments:
    yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
                    box_confidence: tensor of shape (None, 19, 19, 5, 1)
                    box_xy: tensor of shape (None, 19, 19, 5, 2)
                    box_wh: tensor of shape (None, 19, 19, 5, 2)
                    box_class_probs: tensor of shape (None, 19, 19, 5, 80)
    image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (720., 1280.) (has to be float32 dtype)
    max_boxes -- integer, maximum number of predicted boxes you'd like
    score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (None, ), predicted score for each box
    boxes -- tensor of shape (None, 4), predicted box coordinates
    classes -- tensor of shape (None,), predicted class for each box
    """

    ### START CODE HERE ###

    # Retrieve outputs of the YOLO model (≈1 line)
    box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs

    # Convert boxes to be ready for filtering functions (convert boxes box_xy and box_wh to corner coordinates)
    boxes = yolo_boxes_to_corners(box_xy, box_wh)

    # Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold=score_threshold)

    # Scale boxes back to original image shape.
    boxes = scale_boxes(boxes, image_shape)

    # Use one of the functions you've implemented to perform Non-max suppression with
    # maximum number of boxes set to max_boxes and a threshold of iou_threshold (≈1 line)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)

    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_b:
    yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
    scores, boxes, classes = yolo_eval(yolo_outputs)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
## Summary for YOLO:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only a few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
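The shape arithmetic in this summary can be verified in a couple of lines:

```python
import numpy as np

# A dummy CNN output with the (19, 19, 5, 85) YOLO encoding
encoding = np.zeros((19, 19, 5, 85))

# Flattening the last two dimensions gives the (19, 19, 425) volume
flat = encoding.reshape(19, 19, 5 * 85)
print(flat.shape)      # (19, 19, 425)
print(5 * 85, 5 + 80)  # 425 numbers per cell, and 85 = 5 box values + 80 classes
```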
## 3 - Test YOLO pre-trained model on images
In this part, you are going to use a pre-trained model and test it on the car detection dataset. We'll need a session to execute the computation graph and evaluate the tensors.
```
sess = K.get_session()
```
### 3.1 - Defining classes, anchors and image shape.
* Recall that we are trying to detect 80 classes, and are using 5 anchor boxes.
* We have gathered the information on the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt".
* We'll read class names and anchors from text files.
* The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
```
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
```
### 3.2 - Loading a pre-trained model
* Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes.
* You are going to load an existing pre-trained Keras YOLO model stored in "yolo.h5".
* These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will simply refer to it as "YOLO" in this notebook.
Run the cell below to load the model from this file.
```
yolo_model = load_model("model_data/yolo.h5")
```
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
```
yolo_model.summary()
```
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
### 3.3 - Convert output of the model to usable bounding box tensors
The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
If you are curious about how `yolo_head` is implemented, you can find the function definition in the file ['keras_yolo.py'](https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py). The file is located in your workspace in this path 'yad2k/models/keras_yolo.py'.
```
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
```
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.
### 3.4 - Filtering boxes
`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
```
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```
### 3.5 - Run the graph on an image
Let the fun begin. You have created a graph that can be summarized as follows:
1. <font color='purple'> yolo_model.input </font> is given to `yolo_model`. The model is used to compute the output <font color='purple'> yolo_model.output </font>
2. <font color='purple'> yolo_model.output </font> is processed by `yolo_head`. It gives you <font color='purple'> yolo_outputs </font>
3. <font color='purple'> yolo_outputs </font> goes through a filtering function, `yolo_eval`. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
**Exercise**: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.
The code below also uses the following function:
```python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
```
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
#### Hint: Using the TensorFlow Session object
* Recall that above, we called `K.get_session()` and saved the Session object in `sess`.
* To evaluate a list of tensors, we call `sess.run()` like this:
```
sess.run(fetches=[tensor1, tensor2, tensor3],
         feed_dict={yolo_model.input: the_input_variable,
                    K.learning_phase(): 0})
```
* Notice that the variables `scores, boxes, classes` are not passed into the `predict` function, but these are global variables that you will use within the `predict` function.
```
def predict(sess, image_file):
    """
    Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.

    Arguments:
    sess -- your tensorflow/Keras session containing the YOLO graph
    image_file -- name of an image stored in the "images" folder.

    Returns:
    out_scores -- tensor of shape (None, ), scores of the predicted boxes
    out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
    out_classes -- tensor of shape (None, ), class index of the predicted boxes

    Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
    """

    # Preprocess your image
    image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))

    # Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
    # You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0}
    ### START CODE HERE ### (≈ 1 line)
    out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes],
                                                  feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
    ### END CODE HERE ###

    # Print predictions info
    print('Found {} boxes for {}'.format(len(out_boxes), image_file))
    # Generate colors for drawing bounding boxes.
    colors = generate_colors(class_names)
    # Draw bounding boxes on the image file
    draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
    # Save the predicted bounding box on the image
    image.save(os.path.join("out", image_file), quality=90)
    # Display the results in the notebook
    output_image = scipy.misc.imread(os.path.join("out", image_file))
    imshow(output_image)

    return out_scores, out_boxes, out_classes
```
Run the following cell on the "test.jpg" image to verify that your function is correct.
```
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
```
**Expected Output**:
<table>
<tr>
<td>
**Found 7 boxes for test.jpg**
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.60 (925, 285) (1045, 374)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.66 (706, 279) (786, 350)
</td>
</tr>
<tr>
<td>
**bus**
</td>
<td>
0.67 (5, 266) (220, 407)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.70 (947, 324) (1280, 705)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.74 (159, 303) (346, 440)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.80 (761, 282) (942, 412)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.89 (367, 300) (745, 648)
</td>
</tr>
</table>
The model you've just run is actually able to detect 80 different classes listed in "coco_classes.txt". To test the model on your own images:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the code in the cell above
4. Run the code and see the output of the algorithm!
If you were to run your session in a for loop over all your images, here's what you would get:
<center>
<video width="400" height="200" src="nb_images/pred_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks [drive.ai](https://www.drive.ai/) for providing this dataset! </center></caption>
## <font color='darkblue'>What you should remember:
- YOLO is a state-of-the-art object detection model that is fast and accurate
- It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume.
- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
- You filter through all the boxes using non-max suppression. Specifically:
- Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
- Intersection over Union (IoU) thresholding to eliminate overlapping boxes
- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
**References**: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's GitHub repository. The pre-trained weights used in this exercise came from the official YOLO website.
- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640) (2015)
- Joseph Redmon, Ali Farhadi - [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) (2016)
- Allan Zelener - [YAD2K: Yet Another Darknet 2 Keras](https://github.com/allanzelener/YAD2K)
- The official YOLO website (https://pjreddie.com/darknet/yolo/)
**Car detection dataset**:
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. We are grateful to Brody Huval, Chih Hu and Rahul Patel for providing this data.
| github_jupyter |
# Extract features from clauses and sentences that need citations and those that do not
Author: Kiran Bhattacharyya
Revision: 5/11/18 - DRM - translate .py files into .ipynb, misc formatting
This code reads in two data files:
1. one contains sentences and clauses that need citations
2. the other contains sentences that do not
It then filters the words in each sentence by part of speech and stems them. It also calculates the occurrence of each unique word and part of speech in the two datasets. Finally, it saves these filtered datasets and the counts of the unique features in each dataset.
```
# import relevant libraries
import nltk
import pandas as pd
import numpy as np
from nltk.stem.snowball import SnowballStemmer
# Create p_stemmer object
p_stemmer = SnowballStemmer("english", ignore_stopwords=True)
```
### load data which contain sentences that need citations and sentences that do not (are not claims)
```
needCite = pd.read_pickle('../Data/CitationNeeded.pkl') # need citations
noClaim = pd.read_pickle('../Data/NotACLaim.pkl') # do NOT need citations (are not claims)
```
### tokenize sentences into words and tag parts of speech
Keep nouns (NN), adjectives (JJ), verbs (VB), adverbs (RB), numerical/cardinal (CD), and determiners (DT).
Features will include words tagged with any of the above parts of speech, plus the length of the sentence or clause.
First, for the claim data:
5/16/18 DRM - removed .encode from thisWord to allow Python 3 compatibility.
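The per-token filtering and stemming described above can be sketched on a small pre-tagged example. The tagged pairs below are hand-written stand-ins for `nltk.pos_tag(nltk.word_tokenize(...))` output, and the stemmer here omits `ignore_stopwords=True` (which the notebook uses, and which additionally requires the NLTK stopwords corpus):

```python
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")
KEEP = ('NN', 'JJ', 'VB', 'RB', 'CD', 'DT')  # approved tag substrings

# Hand-written stand-ins for nltk.pos_tag output on a short sentence;
# real tags come from NLTK's tagger at runtime.
tagged = [('The', 'DT'), ('studies', 'NNS'), ('showed', 'VBD'),
          ('significant', 'JJ'), ('improvements', 'NNS'),
          ('in', 'IN'), ('2010', 'CD')]

# Same substring test as the loops below: keep a token if its tag
# contains any approved substring, then lowercase and stem it
filtered = [stemmer.stem(w.lower()) for w, t in tagged
            if any(k in t for k in KEEP)]
print(filtered)  # stems of the six kept tokens; 'in' (IN) is dropped
```

Note how 'NNS' and 'VBD' pass the filter because they contain 'NN' and 'VB' respectively, which is exactly what the substring checks in the loops below rely on.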
```
needCite_filtSent = list()  # list to store word tokenized and filtered sentences from citation needed list
needCite_wordTag = list()   # list to store the part of speech of each word
noClaim_filtSent = list()   # list to store word tokenized and filtered sentences from not a claim list
noClaim_wordTag = list()    # list to store the part of speech of each word
allWordList = list()        # list that stores all words in both data sentences
allPOSList = list()         # list that stores all POS of all words in both datasets

for sent in needCite.CitationNeeded:
    sent_token = nltk.word_tokenize(sent)  # word tokenize the sentence
    sent_pos = nltk.pos_tag(sent_token)    # tag with part of speech
    sent_filt_word = list()  # create list to store filtered sentence words
    sent_filt_pos = list()   # create list to store the filtered parts of speech
    for item in sent_pos:    # for each item in the sentence
        if len(item) > 1:
            thisTag = item[1]  # grab the part of speech
            if 'NN' in thisTag or 'JJ' in thisTag or 'VB' in thisTag or 'RB' in thisTag or 'CD' in thisTag or 'DT' in thisTag:  # if the tag is an approved part of speech
                thisWord = item[0]
                sent_filt_word.append(p_stemmer.stem(thisWord.lower()))
                sent_filt_pos.append(thisTag)
                allWordList.append(p_stemmer.stem(thisWord.lower()))
                allPOSList.append(thisTag)
    needCite_filtSent.append(sent_filt_word)
    needCite_wordTag.append(sent_filt_pos)

needCite_filtSent[0:2]
needCite_wordTag[0:2]
len(needCite_filtSent)

for sent in noClaim.NotAClaim:
    sent_token = nltk.word_tokenize(sent)  # word tokenize the sentence
    sent_pos = nltk.pos_tag(sent_token)    # tag with part of speech
    sent_filt_word = list()  # create list to store filtered sentence words
    sent_filt_pos = list()   # create list to store the filtered parts of speech
    for item in sent_pos:    # for each item in the sentence
        if len(item) > 1:
            thisTag = item[1]  # grab the part of speech
            if 'NN' in thisTag or 'JJ' in thisTag or 'VB' in thisTag or 'RB' in thisTag or 'CD' in thisTag or 'DT' in thisTag:  # if the tag is an approved part of speech
                thisWord = item[0]
                sent_filt_word.append(p_stemmer.stem(thisWord.lower()))
                sent_filt_pos.append(thisTag)
                allWordList.append(p_stemmer.stem(thisWord.lower()))
                allPOSList.append(thisTag)
    noClaim_filtSent.append(sent_filt_word)
    noClaim_wordTag.append(sent_filt_pos)

noClaim_filtSent[0:2]

def test(a, b):
    return a, b, a * b

[a, b, c] = test(3, 4)
print(a, b, c)
```
### compute word occurrences in sentences
```
import datetime
datetime.datetime.now()

## compute word occurrences in sentences
uniqWords = list(set(allWordList))  # find all unique words in the dataset
wordOccur_claim = list()     # number of times each word occurs in the claim dataset
wordOccur_notClaim = list()  # number of times each word occurs in the not-claim dataset
for i in range(0, len(uniqWords)):  # for each word
    word = uniqWords[i]
    numOfTimes = 0
    for sent in needCite_filtSent:
        if word in sent:
            numOfTimes = numOfTimes + len([j for j, x in enumerate(sent) if x == word])
    wordOccur_claim.append(numOfTimes)
    numOfTimes = 0
    for sent in noClaim_filtSent:
        if word in sent:
            numOfTimes = numOfTimes + len([j for j, x in enumerate(sent) if x == word])
    wordOccur_notClaim.append(numOfTimes)
datetime.datetime.now()
```
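The per-word scans above re-count each word with an `enumerate` comprehension that is equivalent to `sent.count(word)`; a single `collections.Counter` pass per corpus produces the same totals with far less work. A minimal sketch with made-up toy sentences (the variable names only mirror the notebook's):

```python
from collections import Counter

# Toy stand-ins for needCite_filtSent / noClaim_filtSent (stemmed tokens, invented)
claim_sents = [['studi', 'show', 'effect'], ['effect', 'effect', 'larg']]
not_claim_sents = [['we', 'show', 'method']]

def word_totals(sentences):
    # One pass over the corpus replaces the per-word rescan of every sentence
    totals = Counter()
    for sent in sentences:
        totals.update(sent)
    return totals

claim_counts = word_totals(claim_sents)
not_claim_counts = word_totals(not_claim_sents)

uniq_words = sorted(set(claim_counts) | set(not_claim_counts))
word_occur_claim = [claim_counts[w] for w in uniq_words]          # 0 if absent
word_occur_not_claim = [not_claim_counts[w] for w in uniq_words]

print(list(zip(uniq_words, word_occur_claim, word_occur_not_claim)))
```

With the real filtered-sentence lists this avoids the O(unique words x sentences) double loop.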
```
import datetime
datetime.datetime.now()
```
### compute POS occurrences in sentences
```
uniqPOS = list(set(allPOSList)) # find all unique POS tags in the dataset
posOccur_claim = list() # for part of speech
posOccur_notClaim = list()
for i in range(0,len(uniqPOS)): # for each POS tag
word = uniqPOS[i]
numOfTimes = 0
for sent in needCite_wordTag:
if word in sent:
numOfTimes = numOfTimes + len([j for j, x in enumerate(sent) if x == word])
posOccur_claim.append(numOfTimes)
numOfTimes = 0
for sent in noClaim_wordTag:
if word in sent:
numOfTimes = numOfTimes + len([j for j, x in enumerate(sent) if x == word])
posOccur_notClaim.append(numOfTimes)
```
### save all data
```
UniqWords = pd.DataFrame(
{'UniqueWords': uniqWords,
'WordOccurClaim': wordOccur_claim,
'WordOccurNotClaim': wordOccur_notClaim
})
UniqWords.to_pickle('../Data/UniqueWords.pkl')
UniqPOS = pd.DataFrame(
{'UniquePOS': uniqPOS,
'POSOccurClaim': posOccur_claim,
'POSOccurNotClaim': posOccur_notClaim
})
UniqPOS.to_pickle('../Data/UniquePOS.pkl')
NeedCite = pd.DataFrame(
{'NeedCiteWord': needCite_filtSent,
'NeedCitePOS': needCite_wordTag
})
NeedCite.to_pickle('../Data/NeedCiteFilt.pkl')
NotClaim = pd.DataFrame(
{'NotClaimWord': noClaim_filtSent,
'NotClaimPOS': noClaim_wordTag
})
NotClaim.to_pickle('../Data/NotClaimFilt.pkl')
```
# Generate class weights pickle array
### Generate class weights array based on classes histogram
```
import sys; print('Python:',sys.version)
import torch; print('Pytorch:',torch.__version__)
import fastai; print('Fastai:',fastai.__version__)
from fastai.basics import *
from fastai.callback.all import *
from fastai.vision.all import *
from torchvision.utils import save_image
import torch
import torchvision
from torchvision import transforms
from PIL import Image
import PIL
import os
```
## Load dataset
```
#pathToDataSet = '/mnt/c/Users/tomsq/Documents/UnB/2020.2/TG/dataset_v1/'
pathToDataSet = "../dataset_v1/sprint0/"
path_anno = pathToDataSet + 'gt/'
path_img = pathToDataSet + 'done/'
get_y_fn = lambda x : path_anno + f'{x.stem}_GT.png'
fnames = get_image_files(path_img)
print(len(fnames))
# get training and testing files from Multi Label Stratified Split (OPTIONAL)
"""
with open('../var/trainFilenames.pkl', 'rb') as f:
trainFiles = pickle.load(f) #trainFiles is an array with image names that should be in the training phase
with open('../var/testFilenames.pkl', 'rb') as f:
testFiles = pickle.load(f)
train_fnames = []
test_fnames = []
for i in range(0, len(fnames)):
if fnames[i].name in trainFiles:
train_fnames.append(fnames[i])
elif fnames[i].name in testFiles:
test_fnames.append(fnames[i])
print(len(train_fnames), len(test_fnames))
# continue the algorithm only for the these images
fnames = train_fnames + test_fnames
print(len(fnames))
"""
```
## Generate DataSet Histogram
```
totalArray = []
for i in range(0,len(fnames)):
img = Image.open(get_y_fn(fnames[i]))
arr = np.asarray(img)
totalArray += list(np.unique(arr,return_counts=False))
print('Array Generated')
codes = np.loadtxt( pathToDataSet + 'classesNumberComplete.txt', dtype=str, delimiter='\n',encoding='utf')
codes = [code.split(": ")[0] for code in codes] # keep only each class ID, ignoring the name
hist = plt.hist(totalArray, bins=len(codes), range=(0,len(codes)))
dicionario = {}
for code, contagem in zip(codes, hist[0]):
dicionario[code] = int(contagem)
dicionario = {k: v for k, v in sorted(dicionario.items(), key=lambda item: item[1], reverse=True)} #sort
plt.figure(figsize=(30, 10))
plt.bar(*zip(*dicionario.items()))
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()
# After sorting, index 0 is the background class; index 1 is the class that appears the most apart from background.
print('Class that most appears >>> ', list(dicionario.items())[1][0])
print('Appears >>> ', list(dicionario.items())[1][1], ' times')
dataBaseImageNumberThreshold = list(dicionario.items())[1][1]
```
## Class Weights based on dictionary
```
# class weights via sklearn (left commented out): compute_class_weight assumes a single-label classification problem, so its results don't fit this multi-label setting
#from sklearn.utils import class_weight
#class_weights = class_weight.compute_class_weight(class_weight="balanced", classes=np.unique(totalArray), y=totalArray)
#class_weights
# calculate class weights
classWeights = {}
totalValuesSum = sum(dicionario.values())
for key, value in dicionario.items():
classWeights[key] = (1 - (value/totalValuesSum))
#classWeights[key] = 1 / value if value != 0 else 1 # another way of calculating weights
# normalize weights between two values (OPTIONAL)
"""
maxValue = max(classWeights.values())
minValue = min(classWeights.values())
start = 0.5
end = 1
width = end - start
for key, value in classWeights.items():
classWeights[key] = (value - minValue)/(maxValue - minValue) * width + start
"""
# create array from class weights dict
classWeights = {k: v for k, v in sorted(classWeights.items(), key=lambda item: int(item[0]), reverse=False)} #sort
classWeights = {k: v for k, v in classWeights.items() if v != 1.0} # remove classes that do not appear (weight == 1)
classWeightsArray = list(classWeights.values())
print(classWeightsArray)
len(classWeightsArray)
# dump arrays into pickle file
with open('./sprint0/classWeights.pkl', 'wb') as f:
pickle.dump(classWeightsArray, f)
```
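The `1 - frequency` weighting and the optional min-max rescaling above can be sanity-checked on a toy histogram; the counts below are invented for illustration only:

```python
# Toy class histogram (class id -> pixel count); values are invented
counts = {'0': 700, '1': 200, '2': 100}
total = sum(counts.values())

# Weight = 1 - relative frequency, so rarer classes get larger weights
weights = {k: 1 - v / total for k, v in counts.items()}

# Optional min-max rescaling into [0.5, 1], as in the commented-out block above
lo, hi = min(weights.values()), max(weights.values())
scaled = {k: (v - lo) / (hi - lo) * (1 - 0.5) + 0.5 for k, v in weights.items()}

print(weights)  # most frequent class gets the smallest weight
print(scaled)   # rescaled so the smallest weight is 0.5 and the largest is 1.0
```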
# Modeling TRISO Particles
OpenMC includes a few convenience functions for generating TRISO particle locations and placing them in a lattice. To be clear, this capability is not a stochastic geometry capability like that included in MCNP. It's also important to note that OpenMC does not use delta tracking, which would normally speed up calculations in geometries with very large numbers of surfaces and cells. However, the computational burden can be eased by placing TRISO particles in a lattice.
```
%matplotlib inline
from math import pi
import numpy as np
import matplotlib.pyplot as plt
import openmc
import openmc.model
```
Let's first start by creating materials that will be used in our TRISO particles and the background material.
```
fuel = openmc.Material(name='Fuel')
fuel.set_density('g/cm3', 10.5)
fuel.add_nuclide('U235', 4.6716e-02)
fuel.add_nuclide('U238', 2.8697e-01)
fuel.add_nuclide('O16', 5.0000e-01)
fuel.add_element('C', 1.6667e-01)
buff = openmc.Material(name='Buffer')
buff.set_density('g/cm3', 1.0)
buff.add_element('C', 1.0)
buff.add_s_alpha_beta('c_Graphite')
PyC1 = openmc.Material(name='PyC1')
PyC1.set_density('g/cm3', 1.9)
PyC1.add_element('C', 1.0)
PyC1.add_s_alpha_beta('c_Graphite')
PyC2 = openmc.Material(name='PyC2')
PyC2.set_density('g/cm3', 1.87)
PyC2.add_element('C', 1.0)
PyC2.add_s_alpha_beta('c_Graphite')
SiC = openmc.Material(name='SiC')
SiC.set_density('g/cm3', 3.2)
SiC.add_element('C', 0.5)
SiC.add_element('Si', 0.5)
graphite = openmc.Material()
graphite.set_density('g/cm3', 1.1995)
graphite.add_element('C', 1.0)
graphite.add_s_alpha_beta('c_Graphite')
```
To actually create individual TRISO particles, we first need to create a universe that will be used within each particle. The reason we use the same universe for each TRISO particle is to reduce the total number of cells/surfaces needed which can substantially improve performance over using unique cells/surfaces in each.
```
# Create TRISO universe
spheres = [openmc.Sphere(r=1e-4*r)
for r in [215., 315., 350., 385.]]
cells = [openmc.Cell(fill=fuel, region=-spheres[0]),
openmc.Cell(fill=buff, region=+spheres[0] & -spheres[1]),
openmc.Cell(fill=PyC1, region=+spheres[1] & -spheres[2]),
openmc.Cell(fill=SiC, region=+spheres[2] & -spheres[3]),
openmc.Cell(fill=PyC2, region=+spheres[3])]
triso_univ = openmc.Universe(cells=cells)
```
Next, we need a region to pack the TRISO particles in. We will use a 1 cm x 1 cm x 1 cm box centered at the origin.
```
min_x = openmc.XPlane(x0=-0.5, boundary_type='reflective')
max_x = openmc.XPlane(x0=0.5, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.5, boundary_type='reflective')
max_y = openmc.YPlane(y0=0.5, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.5, boundary_type='reflective')
max_z = openmc.ZPlane(z0=0.5, boundary_type='reflective')
region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
```
Now we need to randomly select locations for the TRISO particles. In this example, we will select locations at random within the box with a packing fraction of 30%. Note that `pack_spheres` can handle up to the theoretical maximum of 60% (it will just be slow).
```
outer_radius = 425.*1e-4
centers = openmc.model.pack_spheres(radius=outer_radius, region=region, pf=0.3)
```
Now that we have the locations of the TRISO particles determined and a universe that can be used for each particle, we can create the TRISO particles.
```
trisos = [openmc.model.TRISO(outer_radius, triso_univ, center) for center in centers]
```
Each TRISO object actually **is** a Cell; we can look at the properties of a TRISO just as we would a cell:
```
print(trisos[0])
```
Let's confirm that all our TRISO particles are within the box.
```
centers = np.vstack([triso.center for triso in trisos])
print(centers.min(axis=0))
print(centers.max(axis=0))
```
We can also look at what the actual packing fraction turned out to be:
```
len(trisos)*4/3*pi*outer_radius**3
```
Now that we have our TRISO particles created, we need to place them in a lattice to provide optimal tracking performance in OpenMC. We can use the box we created above to place the lattice in. Actually creating a lattice containing TRISO particles can be done with the `model.create_triso_lattice()` function. This function requires that we give it a list of TRISO particles, the lower-left coordinates of the lattice, the pitch of each lattice cell, the overall shape of the lattice (number of cells in each direction), and a background material.
```
box = openmc.Cell(region=region)
lower_left, upper_right = box.region.bounding_box
shape = (3, 3, 3)
pitch = (upper_right - lower_left)/shape
lattice = openmc.model.create_triso_lattice(
trisos, lower_left, pitch, shape, graphite)
```
Now we can set the fill of our box cell to be the lattice:
```
box.fill = lattice
```
Finally, let's take a look at our geometry by putting the box in a universe and plotting it. We're going to use the Fortran-side plotter since it's much faster.
```
universe = openmc.Universe(cells=[box])
geometry = openmc.Geometry(universe)
geometry.export_to_xml()
materials = list(geometry.get_all_materials().values())
openmc.Materials(materials).export_to_xml()
settings = openmc.Settings()
settings.run_mode = 'plot'
settings.export_to_xml()
plot = openmc.Plot.from_geometry(geometry)
plot.to_ipython_image()
```
If we plot the universe by material rather than by cell, we can see that the entire background is just graphite.
```
plot.color_by = 'material'
plot.colors = {graphite: 'gray'}
plot.to_ipython_image()
```
```
# Import libraries for simulation
import tensorflow as tf
import numpy as np
dimensions = (12,12)
mineProbability = 0.2
# count the number of mines in the 3x3 neighborhood of a given square, including the square itself
def countMines(board,r,c):
count = 0
rows, cols = board.shape
for i in [r-1,r,r+1]:
if i >= 0 and i < rows:
for j in [c-1,c,c+1]:
if j >= 0 and j < cols:
count += int(board[i,j])
return count
# Converts a board of mines into a board of mine counts
def boardMineCounts(board):
mineInfo = np.zeros(board.shape, dtype = int)
rows, cols = board.shape
for i in range(rows):
for j in range(cols):
mineInfo[i,j] = countMines(board,i,j)
return mineInfo
'''
def boardPartialMineCounts(board):
result = boardMineCounts(board)
for index, x in np.ndenumerate(board):
if x: result[index] = -1
elif r.uniform(0, 1) < missingProbability: result[index] = -1
return result
'''
# Generates a random training batch of size n
def next_training_batch(n):
batch_xs = []
batch_ys = []
for _ in range(n):
board = np.random.random(dimensions) < mineProbability
counts = boardMineCounts(board)
batch_xs.append(counts.flatten().astype(float))
batch_ys.append(board.flatten().astype(float))
return (np.asarray(batch_xs), np.asarray(batch_ys))
# Create the model
rows, cols = dimensions
size = rows*cols
x = tf.placeholder(tf.float32, [None, size])
W = tf.Variable(tf.random_normal([size, size], stddev=0.01))
b = tf.Variable(tf.random_normal([size], stddev=0.01))
y = tf.sigmoid(tf.matmul(x, W) + b)
# Placeholder for the 'labels', ie the correct answer
y_ = tf.placeholder(tf.float32, [None, size])
# Loss function
mean_squared_error = tf.losses.mean_squared_error(labels=y_, predictions=y)
# Summaries for tensorboard
with tf.name_scope('W_reshape'):
image_shaped_W = tf.reshape(W, [-1, size, size, 1])
tf.summary.image('W', image_shaped_W, 1000)
with tf.name_scope('b_reshape'):
image_shaped_b = tf.reshape(-b, [-1, rows, cols, 1])
tf.summary.image('b', image_shaped_b, 1000)
_ = tf.summary.scalar('accuracy', mean_squared_error)
# Optimiser
train_step = tf.train.AdamOptimizer().minimize(mean_squared_error)
# Create session and initialise or restore stuff
saver = tf.train.Saver({"W": W, "b": b})
sess = tf.InteractiveSession()
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('.', sess.graph)
tf.global_variables_initializer().run()
# Restore model?
#saver.restore(sess, "./saves1/model-100000")
# Train
for iteration in range(100001):
batch_xs, batch_ys = next_training_batch(100)
summary, _ = sess.run([merged, train_step], feed_dict={x: batch_xs, y_: batch_ys})
writer.add_summary(summary, iteration)
if iteration % 100 == 0:
acc = sess.run(mean_squared_error, feed_dict={x: batch_xs, y_: batch_ys})
print('Accuracy at step %s: %s' % (iteration, acc))
if iteration % 1000 == 0:
save_path = saver.save(sess, './saves1/model', global_step=iteration)
print("Model saved in file: %s" % save_path)
# Test trained model
batch_xs, batch_ys = next_training_batch(1000)
print(sess.run(mean_squared_error, feed_dict={x: batch_xs, y_: batch_ys}))
# Run a single randomised test
mineCounts, mines = next_training_batch(1)
print("mines")
print(mines.astype(int).reshape(dimensions))
print("predicted mines")
result = sess.run(y, feed_dict={x: mineCounts})
predictions = (result > 0.5).astype(int)
print(predictions.reshape(dimensions))
print("errors")
print((predictions != mines.astype(int)).astype(int).sum())
print("----")
print("mine counts")
print(mineCounts.astype(int).reshape(dimensions))
print("predicted mine counts")
print(boardMineCounts(predictions.reshape(dimensions)))
print("errors")
print((mineCounts.astype(int).reshape(dimensions) != boardMineCounts(predictions.reshape(dimensions))).astype(int).sum())
print(sess.run(-b))
```
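For reference, the nested loops in `countMines`/`boardMineCounts` compute a 3x3 neighborhood sum of the mine mask, which can be vectorized with NumPy alone. A sketch (the function name is mine, not the notebook's):

```python
import numpy as np

def board_mine_counts(board):
    # Zero-pad, then sum the nine shifted views covering each 3x3 neighborhood
    b = np.pad(board.astype(int), 1)
    rows, cols = board.shape
    return sum(b[i:i + rows, j:j + cols] for i in range(3) for j in range(3))

board = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [0, 0, 0]], dtype=bool)
print(board_mine_counts(board))
```

This produces the same counts as the loop version, without the per-cell Python overhead.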
# Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!
**You will learn to:** Use regularization in your deep learning models.
Let's first import the packages you are going to use.
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
```
train_X, train_Y, test_X, test_Y = load_2D_dataset()
```
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
## 1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting the `keep_prob` to a value less than one
You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
```
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
```
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
```
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
## 2 - L2 Regularization
The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
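As a quick numeric sanity check of the regularization term in formula (2), separate from the graded function (the weights, $m$, and $\lambda$ below are toy values chosen arbitrarily):

```python
import numpy as np

lambd, m = 0.1, 5
W1 = np.array([[1., -2.], [0., 3.]])
W2 = np.array([[2., 1.]])

# (1/m) * (lambda/2) * sum of squared entries over every weight matrix
sum_sq = np.sum(np.square(W1)) + np.sum(np.square(W2))  # 14 + 5 = 19
L2_cost = (1. / m) * (lambd / 2.) * sum_sq
print(L2_cost)  # approximately 0.19
```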
```
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = None
### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
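This gradient can be verified numerically against the cost term it comes from; a small standalone check with an arbitrary toy matrix (not part of the graded function):

```python
import numpy as np

lambd, m = 0.7, 4
W = np.array([[0.5, -1.0], [2.0, 0.0]])

def reg_term(W):
    # The per-matrix piece of the L2 cost: (1/2)(lambda/m) * sum(W^2)
    return 0.5 * (lambd / m) * np.sum(W ** 2)

analytic = (lambd / m) * W                 # the term added to dW

# Central-difference numerical gradient, entry by entry
eps = 1e-6
numeric = np.zeros_like(W)
for idx in np.ndindex(W.shape):
    Wp, Wm = W.copy(), W.copy()
    Wp[idx] += eps
    Wm[idx] -= eps
    numeric[idx] = (reg_term(Wp) - reg_term(Wm)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be tiny
```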
```
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + None
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + None
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + None
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
```
**Expected Output**:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
```
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
```
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
**What is L2-regularization actually doing?**:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the squared values of the weights in the cost function you drive all the weights to smaller values. Large weights simply become too costly! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
## 3 - Dropout
Finally, **dropout** is a widely used regularization technique that is specific to deep learning.
**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in either the forward or the backward propagation of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
### 3.1 - Forward propagation with dropout
**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
**Instructions**:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 1 (if the entry is less than 0.5) or 0 (if the entry is 0.5 or more), you would do: `X = (X < 0.5)`. Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
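The four steps can be sketched in isolation, outside the graded function (toy activations and a fixed seed, chosen only for illustration):

```python
import numpy as np

np.random.seed(1)          # fixed seed, for reproducibility of this sketch only
keep_prob = 0.5
A1 = np.ones((4, 3))       # toy activations

D1 = np.random.rand(A1.shape[0], A1.shape[1])  # Step 1: uniform random matrix
D1 = (D1 < keep_prob).astype(int)              # Step 2: 1 with prob keep_prob
A1 = A1 * D1                                   # Step 3: shut some neurons down
A1 = A1 / keep_prob                            # Step 4: inverted-dropout scaling

print(A1)  # surviving entries are 1/keep_prob = 2.0, the rest are 0
```

Because surviving activations are divided by `keep_prob`, the expected value of `A1` matches the no-dropout case.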
```
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1
D1 = (D1 < keep_prob).astype(int) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1 * D1 # Step 3: shut down some neurons of A1
A1 = A1 / keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2
D2 = (D2 < keep_prob).astype(int) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2 * D2 # Step 3: shut down some neurons of A2
A2 = A2 / keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
```
**Expected Output**:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
### 3.2 - Backward propagation with dropout
**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3-layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
**Instruction**:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
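These two steps amount to reusing the stored forward-pass mask on the gradient; a minimal sketch (mask and gradient values here are illustrative):

```python
import numpy as np

keep_prob = 0.8
D1 = np.array([[1, 0, 1], [0, 1, 1]])   # mask saved in the cache during forward prop
dA1 = np.ones((2, 3))                    # hypothetical upstream gradient

dA1 = dA1 * D1          # Step 1: shut down the same neurons as in forward propagation
dA1 = dA1 / keep_prob   # Step 2: scale surviving gradients by 1 / keep_prob

print(dA1)  # kept entries become 1.25, masked entries become 0
```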
```
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2 * D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1 * D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (`keep_prob = 0.86`). It means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
```
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
```
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Note**:
- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.
<font color='blue'>
**What you should remember about dropout:**
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5.
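The expected-value argument in the last bullet can be verified numerically — masking each activation with probability `keep_prob` and dividing by `keep_prob` leaves the mean activation approximately unchanged (toy values, not from the assignment):

```python
import numpy as np

np.random.seed(1)
keep_prob = 0.8
a = np.full(1_000_000, 2.0)                    # constant activations with mean 2.0
mask = (np.random.rand(a.size) < keep_prob)    # keep each unit with probability keep_prob
a_drop = a * mask / keep_prob                  # inverted dropout

print(a.mean(), a_drop.mean())                 # both means are close to 2.0
```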
## 4 - Conclusions
**Here are the results of our three models**:
<table>
<tr>
<td>
**model**
</td>
<td>
**train accuracy**
</td>
<td>
**test accuracy**
</td>
</tr>
<tr>
<td>
3-layer NN without regularization
</td>
<td>
95%
</td>
<td>
91.5%
</td>
</tr>
<tr>
<td>
3-layer NN with L2-regularization
</td>
<td>
94%
</td>
<td>
93%
</td>
</tr>
<tr>
<td>
3-layer NN with dropout
</td>
<td>
93%
</td>
<td>
95%
</td>
</tr>
</table>
Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations for finishing this assignment! And also for revolutionizing French football. :-)
<font color='blue'>
**What we want you to remember from this notebook**:
- Regularization will help you reduce overfitting.
- Regularization will drive your weights to lower values.
- L2 regularization and Dropout are two very effective regularization techniques.
# Entanglement renormalization
One can open this notebook in Google Colab (recommended)
[](https://colab.research.google.com/github/LuchnikovI/QGOpt/blob/master/docs/source/entanglement_renormalization.ipynb)
In the given tutorial, we show how the Riemannian optimization on the complex Stiefel manifold can be used to perform entanglement renormalization and find the ground state energy and the ground state itself of a many-body spin system at the point of quantum phase transition. First of all, let us import the necessary libraries.
```
import numpy as np
from scipy import integrate
import tensorflow as tf # tf 2.x
try:
import QGOpt as qgo
except ImportError:
!pip install git+https://github.com/LuchnikovI/QGOpt
import QGOpt as qgo
# TensorNetwork library
try:
import tensornetwork as tn
except ImportError:
!pip install tensornetwork
import tensornetwork as tn
import matplotlib.pyplot as plt
from tqdm import tqdm
tn.set_default_backend("tensorflow")
# Fix random seed to make results reproducible.
tf.random.set_seed(42)
```
## 1. Renormalization layer
First of all, one needs to define a renormalization (MERA) layer. We use the ncon API from the TensorNetwork library for this purpose. The function `mera_layer` takes unitary and isometric tensors (building blocks) and renormalizes a local Hamiltonian, as shown in the tensor diagram below (if the diagram is not displayed here, please open the notebook in Google Colab).

For more information about entanglement renormalization please see
Evenbly, G., & Vidal, G. (2009). Algorithms for entanglement renormalization. Physical Review B, 79(14), 144108.
Evenbly, G., & Vidal, G. (2014). Algorithms for entanglement renormalization: boundaries, impurities and interfaces. Journal of Statistical Physics, 157(4-5), 931-978.
For more information about ncon notation see for example
Pfeifer, R. N., Evenbly, G., Singh, S., & Vidal, G. (2014). NCON: A tensor network contractor for MATLAB. arXiv preprint arXiv:1402.0939.
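For intuition about ncon notation: positive index labels are contracted (summed over), while negative labels remain as open output indices. For example, the structure `[[-1, 1], [1, -2]]` applied to two matrices denotes ordinary matrix multiplication; a minimal illustration of the equivalent contraction in plain NumPy:

```python
import numpy as np

a = np.arange(4.0).reshape(2, 2)
b = np.arange(4.0, 8.0).reshape(2, 2)

# ncon structure [[-1, 1], [1, -2]]: the shared label 1 is contracted,
# while -1 and -2 become the row and column indices of the result.
c = np.einsum('ik,kj->ij', a, b)
print(np.allclose(c, a @ b))  # → True
```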
```
@tf.function
def mera_layer(H,
U,
U_conj,
Z_left,
Z_right,
Z_left_conj,
Z_right_conj):
"""
Renormalizes local Hamiltonian.
Args:
H: complex valued tensor of shape (chi, chi, chi, chi),
input two-side Hamiltonian (a local term).
U: complex valued tensor of shape (chi ** 2, chi ** 2), disentangler
U_conj: complex valued tensor of shape (chi ** 2, chi ** 2),
conjugated disentangler.
Z_left: complex valued tensor of shape (chi ** 3, new_chi),
left isometry.
Z_right: complex valued tensor of shape (chi ** 3, new_chi),
right isometry.
Z_left_conj: complex valued tensor of shape (chi ** 3, new_chi),
left conjugated isometry.
Z_right_conj: complex valued tensor of shape (chi ** 3, new_chi),
right conjugated isometry.
Returns:
complex valued tensor of shape (new_chi, new_chi, new_chi, new_chi),
renormalized two side hamiltonian.
Notes:
chi is the dimension of an index. chi increases with the depth of mera, however,
at some point, chi is cut to prevent exponential growth of indices
dimensionality."""
# index dimension before renormalization
chi = tf.cast(tf.math.sqrt(tf.cast(tf.shape(U)[0], dtype=tf.float64)),
dtype=tf.int32)
# index dimension after renormalization
chi_new = tf.shape(Z_left)[-1]
# List of building blocks
list_of_tensors = [tf.reshape(Z_left, (chi, chi, chi, chi_new)),
tf.reshape(Z_right, (chi, chi, chi, chi_new)),
tf.reshape(Z_left_conj, (chi, chi, chi, chi_new)),
tf.reshape(Z_right_conj, (chi, chi, chi, chi_new)),
tf.reshape(U, (chi, chi, chi, chi)),
tf.reshape(U_conj, (chi, chi, chi, chi)),
H]
# structures (ncon notation) of three terms of ascending super operator
net_struc_1 = [[1, 2, 3, -3], [9, 11, 12, -4], [1, 6, 7, -1],
[10, 11, 12, -2], [3, 9, 4, 8], [7, 10, 5, 8], [6, 5, 2, 4]]
net_struc_2 = [[1, 2, 3, -3], [9, 11, 12, -4], [1, 2, 6, -1],
[10, 11, 12, -2], [3, 9, 4, 7], [6, 10, 5, 8], [5, 8, 4, 7]]
net_struc_3 = [[1, 2, 3, -3], [9, 10, 12, -4], [1, 2, 5, -1],
[8, 11, 12, -2], [3, 9, 4, 6], [5, 8, 4, 7], [7, 11, 6, 10]]
# sub-optimal contraction orders for three terms of ascending super operator
con_ord_1 = [4, 5, 8, 6, 7, 1, 2, 3, 11, 12, 9, 10]
con_ord_2 = [4, 7, 5, 8, 1, 2, 11, 12, 3, 6, 9, 10]
con_ord_3 = [6, 7, 4, 11, 8, 12, 10, 9, 1, 2, 3, 5]
# ncon
term_1 = tn.ncon(list_of_tensors, net_struc_1, con_ord_1)
term_2 = tn.ncon(list_of_tensors, net_struc_2, con_ord_2)
term_3 = tn.ncon(list_of_tensors, net_struc_3, con_ord_3)
return (term_1 + term_2 + term_3) / 3 # renormalized hamiltonian
# auxiliary functions that return initial isometries and disentanglers
@tf.function
def z_gen(chi, new_chi):
"""Returns random isometry.
Args:
chi: int number, input chi.
new_chi: int number, output chi.
Returns:
complex valued tensor of shape (chi ** 3, new_chi)."""
# one can use the complex Stiefel manifold to generate a random isometry
m = qgo.manifolds.StiefelManifold()
return m.random((chi ** 3, new_chi), dtype=tf.complex128)
@tf.function
def u_gen(chi):
"""Returns the identity matrix of a given size (initial disentangler).
Args:
chi: int number.
Returns:
complex valued tensor of shape (chi ** 2, chi ** 2)."""
return tf.eye(chi ** 2, dtype=tf.complex128)
```
## 2. Transverse-field Ising (TFI) model hamiltonian and MERA building blocks
Here we define the Transverse-field Ising model Hamiltonian and the building blocks (disentanglers and isometries) of the MERA network that will be optimized.
First of all, we initialize the hyperparameters of MERA and the TFI Hamiltonian.
```
max_chi = 4 # max bond dim
num_of_layers = 5 # number of MERA layers (corresponds to 2*3^5 = 486 spins)
h_x = 1 # value of transverse field in TFI model (h_x=1 is the critical field)
```
One needs to define Pauli matrices. Here all Pauli matrices are represented as one tensor of size $3\times 2 \times 2$, where the first index enumerates a particular Pauli matrix, and the remaining two indices are matrix indices.
```
sigma = tf.constant([[[1j*0, 1 + 1j*0], [1 + 1j*0, 0*1j]],
[[0*1j, -1j], [1j, 0*1j]],
[[1 + 0*1j, 0*1j], [0*1j, -1 + 0*1j]]], dtype=tf.complex128)
```
Here we define the local term of the TFI Hamiltonian.
```
zz_term = tf.einsum('ij,kl->ikjl', sigma[2], sigma[2])
x_term = tf.einsum('ij,kl->ikjl', sigma[0], tf.eye(2, dtype=tf.complex128))
h = -zz_term - h_x * x_term
```
Here we define initial disentanglers, isometries, and state in the renormalized space.
```
# disentangler U and isometry Z in the first MERA layer
U = u_gen(2)
Z = z_gen(2, max_chi)
# lists with disentanglers and isometries in the rest of the layers
U_list = [u_gen(max_chi) for _ in range(num_of_layers - 1)]
Z_list = [z_gen(max_chi, max_chi) for _ in range(num_of_layers - 1)]
# lists with all disentanglers and isometries
U_list = [U] + U_list
Z_list = [Z] + Z_list
# initial state in the renormalized space (low dimensional in comparison
# with the dimensionality of the initial problem)
psi = tf.ones((max_chi ** 2, 1), dtype=tf.complex128)
psi = psi / tf.linalg.norm(psi)
# converting disentanglers, isometries, and initial state to real
# representation (necessary for the further optimizer)
U_list = list(map(qgo.manifolds.complex_to_real, U_list))
Z_list = list(map(qgo.manifolds.complex_to_real, Z_list))
psi = qgo.manifolds.complex_to_real(psi)
# wrapping disentanglers, isometries, and initial state into
# tf.Variable (necessary for the further optimizer)
U_var = list(map(tf.Variable, U_list))
Z_var = list(map(tf.Variable, Z_list))
psi_var = tf.Variable(psi)
```
## 3. Optimization of MERA
MERA parametrizes quantum state $\Psi(U, Z, \psi)$ of a spin system, where $U$ is a set of disentanglers, $Z$ is a set of isometries, and $\psi$ is a state in the renormalized space.
In order to find the ground state and its energy, we perform optimization of variational energy $$\langle\Psi(U, Z, \psi)|H_{\rm TFI}|\Psi(U, Z, \psi)\rangle\rightarrow \min_{U, \ Z, \ \psi \in {\rm Stiefel \ manifold}}$$
First of all, we define the parameters of the optimization. In order to achieve better convergence, we decrease the learning rate with the iteration number according to an exponential law.
```
iters = 3000 # number of iterations
lr_i = 0.6 # initial learning rate
lr_f = 0.05 # final learning rate
# learning rate is multiplied by this coefficient each iteration
decay = (lr_f / lr_i) ** (1 / iters)
```
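As a quick sanity check, the decay coefficient defined above indeed maps the initial learning rate to the final one after `iters` multiplicative updates:

```python
iters = 3000
lr_i, lr_f = 0.6, 0.05
decay = (lr_f / lr_i) ** (1 / iters)

lr_last = lr_i * decay ** iters  # learning rate after the final iteration
print(lr_last)                   # ≈ 0.05
```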
Here we instantiate the complex Stiefel manifold necessary for Riemannian optimization, together with the Riemannian Adam optimizer.
```
m = qgo.manifolds.StiefelManifold() # complex Stiefel manifold
opt = qgo.optimizers.RAdam(m, lr_i) # Riemannian Adam
```
Finally, we perform an optimization loop.
```
# this list will be filled by the value of variational energy per iteration
E_list = []
# optimization loop
for j in tqdm(range(iters)):
# gradient calculation
with tf.GradientTape() as tape:
# convert real valued variables back to complex valued tensors
U_var_c = list(map(qgo.manifolds.real_to_complex, U_var))
Z_var_c = list(map(qgo.manifolds.real_to_complex, Z_var))
psi_var_c = qgo.manifolds.real_to_complex(psi_var)
# initial local Hamiltonian term
h_renorm = h
# renormalization of a local Hamiltonian term
for i in range(len(U_var)):
h_renorm = mera_layer(h_renorm,
U_var_c[i],
tf.math.conj(U_var_c[i]),
Z_var_c[i],
Z_var_c[i],
tf.math.conj(Z_var_c[i]),
tf.math.conj(Z_var_c[i]))
# renormalized Hamiltonian (low dimensional)
h_renorm = (h_renorm + tf.transpose(h_renorm, (1, 0, 3, 2))) / 2
h_renorm = tf.reshape(h_renorm, (max_chi * max_chi, max_chi * max_chi))
# energy
E = tf.cast((tf.linalg.adjoint(psi_var_c) @ h_renorm @ psi_var_c),
dtype=tf.float64)[0, 0]
# adding current variational energy to the list
E_list.append(E)
# gradients
grad = tape.gradient(E, U_var + Z_var + [psi_var])
# optimization step
opt.apply_gradients(zip(grad, U_var + Z_var + [psi_var]))
# learning rate update
opt._set_hyper("learning_rate", opt._get_hyper("learning_rate") * decay)
```
Here we compare the exact ground-state energy with the MERA-based value. We also plot how the difference between the two evolves with the number of iterations.
```
# exact value of ground state energy in the critical point
N = 2 * (3 ** num_of_layers) # number of spins (for 5 layers one has 486 spins)
E0_exact_fin = -2 * (1 / np.sin(np.pi / (2 * N))) / N # exact energy per spin
plt.yscale('log')
plt.xlabel('iter')
plt.ylabel('err')
plt.plot(E_list - tf.convert_to_tensor(([E0_exact_fin] * len(E_list))), 'b')
print("MERA energy:", E_list[-1].numpy())
print("Exact energy:", E0_exact_fin)
```
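As a cross-check of the finite-size formula above: for large $N$, $\sin(\pi/2N) \approx \pi/2N$, so the exact energy per spin approaches the thermodynamic-limit value $-4/\pi \approx -1.2732$:

```python
import numpy as np

N = 2 * 3 ** 5                                 # 486 spins, as in the notebook
E0 = -2 * (1 / np.sin(np.pi / (2 * N))) / N    # exact finite-size energy per spin
print(E0, -4 / np.pi)                          # the two values agree to ~1e-6
```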
# Analysis notebook
Author: Evan Azevedo
Company: Amberdata
Blog Post: Large Txn's in the Mempool
```
# loading the packages
import os
import json
import requests
from tqdm import tqdm
from datetime import datetime, timedelta, timezone
import pytz
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from dotenv import load_dotenv
# helper functions
def get_key():
"Get the API key from an .env file"
if ".env" not in os.listdir("../"):
print("Configuring API Key...")
key = input("Amberdata API Key: ")
with open("../.env", "w") as f:
f.write(f"AMBERDATA_API_KEY={key}\n")
load_dotenv(verbose=True)
return {
"AMBERDATA_API_KEY": os.getenv("AMBERDATA_API_KEY")
}
def get_response(url, headers=None, queryString=None):
"Get the REST response from the specified URL"
if not headers:
headers = {'x-api-key': api_key["AMBERDATA_API_KEY"]}
if queryString:
response = requests.request("GET", url, headers=headers, params=queryString)
else:
response = requests.request("GET", url, headers=headers)
response = json.loads(response.text)
try:
if response["title"] == "OK":
return response["payload"]
except Exception as e:
return None
# let's load our api key
api_key = get_key()
```
## Reading the data
Now we read in the data from our Raspberry Pi. For now, we simply plot the raw values to see the volume of pending transactions.
```
# read in the data from different sources
results = pd.read_csv("../data/results.csv", sep='; ', engine='python')
# get the value of BTC per pending txn
results["btc"] = results.value // 10**8
# convert to timestamp
results['timestamp'] = pd.to_datetime(results.timestamp, utc=True)
# remove duplicate transactions, keeping only the first
results = results.sort_values('timestamp').groupby('hash').first().reset_index().set_index('timestamp').reset_index()
# plot just to see if there are lots of whale activity
results.set_index('timestamp').btc.plot()
plt.title("Magnitude of pending transactions")
plt.ylabel("BTC")
plt.savefig("../plots/mag_pending.png")
plt.show()
results.btc.describe()
def exchange(x):
if x.within:
return "within"
elif x.on:
return "on"
elif x.off:
return "off"
else:
return "neither"
def post_hoc(addresses):
"parses a list of addresses for a bitmex"
addresses = addresses.strip('][').split(', ')
for address in addresses:
address = address.strip("''")
if (address.startswith("3BMEX") or address.startswith("3BitMEX")):
return True
else:
continue
return False
results['off'] = results['from'].map(post_hoc)
results['on'] = results['to'].map(post_hoc)
results['within'] = results.off & results.on
results['status'] = results.apply(exchange, axis=1)
# get transactions on the exchange
data = results.status[results.status != 'neither'].value_counts()
labels = data.index.values
counts = data.values/data.sum()
x = np.arange(len(labels)) # the label locations
width = 0.55 # the width of the bars
fig, ax = plt.subplots()
p1 = plt.bar(x, counts, width)
# Add some text for labels, title and custom x-axis tick labels, etc.
plt.ylabel('Frequency')
plt.title('Counts of transactions and exchange')
plt.xticks(x, labels)
plt.tight_layout()
plt.savefig("../plots/transactions_on_exch.png")
# how many transactions were related to the exchange
len(results[results.status != 'neither'])/len(results)*100
```
## Getting OHLCV data
```
dfs = []
n_days = 3
for i in range(n_days):
print(f"Iteration: {i}")
# get the start and end dates in timestamps
startDate = results.timestamp.min() + timedelta(i-1)
endDate = startDate + timedelta(1)
# convert to UNIX format
startDate = str(round(startDate.timestamp()*10**3))
endDate = str(round(endDate.timestamp()*10**3)-100)
# the url for our endpoint
url = "https://web3api.io/api/v2/market/ohlcv/btc_usd/historical"
# our query
querystring = {
"timeInterval": "minutes",
"timeFormat": "iso",
'startDate': startDate,
'endDate': endDate,
"exchange": "bitstamp"
}
# the API key
headers = {'x-api-key': api_key["AMBERDATA_API_KEY"]}
# the response for our query
payload = get_response(url, headers, querystring)
# we save the OHLCV data
bfx = payload['data']['bitstamp']
# get the columns and make a dataframe
columns = payload['metadata']['columns']
bfx_df = pd.DataFrame(bfx, columns=columns)
# append the dataframe to a list
dfs.append(bfx_df)
# combine the several days of OHLCV data
ohlcv = pd.concat(dfs)
# save current data to csv
ohlcv.to_csv("../data/ohlcv.csv", index=False)
results.to_csv("../data/results_all.csv", index=False)
```
## Working with the OHLCV data
```
# read the data from our saved csvs
ohlcv = pd.read_csv("../data/ohlcv.csv")
results = pd.read_csv("../data/results_all.csv")
# change the UNIX timestamp to human readable format and set it as index
results["timestamp"] = pd.to_datetime(results.timestamp, utc=True)
# unify the timestamp format and set as index
ohlcv["timestamp"] = pd.to_datetime(ohlcv.timestamp)
# drop OHL from the dataframe
ohlcv.drop(['open', 'high', 'low'], axis=1, inplace=True)
# select only data from our date range
ohlcv = ohlcv[(ohlcv.timestamp <= results.timestamp.max()) & (ohlcv.timestamp >= results.timestamp.min())]
# set timestamp as index
ohlcv.set_index("timestamp", inplace=True)
```
### Adding USD to Pending txns
We would like to add the value of Bitcoin to our results DataFrame, so we can see the value of the transactions in USD.
```
# add a column of timestamps rounded to the minute
results["ts_minute"] = results.timestamp.dt.round('1min')
# rename the ohlcv column for joining
ohlcv_min = ohlcv.reset_index().rename({"timestamp": "ts_minute"}, axis=1)
# join the close column to the results dataframe
df_merged = pd.merge(results, ohlcv_min, on = "ts_minute", how='left').dropna()
# create the usd value column
df_merged["usd"] = df_merged.value/10**8 * df_merged.close
results.groupby('ts_minute').count().value.plot()
plt.title("Data collection uptime")
plt.savefig("../plots/uptime.png")
```
## Plotting exchange activity on price
```
# display the result, sorted by the largest transactions
# sort the values by increasing dollar amounts and
# save only transactions related to BitMEX
df_exch = df_merged.sort_values("usd", ascending=False).query("status != 'neither'")
# plotting the time series of bitcoin price
ohlcv.close.plot()
# add a red line when a large pending transaction occured
lines = [0]*2
for i in range(len(df_exch)):
txn = df_exch.iloc[i]
# set the color according to the status
if txn.status == 'within':
l = plt.axvline(txn[0], color='r', alpha=0.5)
lines[0] = l
elif txn.status == 'on':
l = plt.axvline(txn[0], color='g', alpha=0.5)
lines[1] = l
plt.title("Large BitMEX Pending Transactions")
plt.ylabel("btc_usd")
plt.legend(lines, ("Within", "On"))
plt.tight_layout()
plt.savefig("../plots/large_bitmex_flows.png")
plt.show()
def get_transaction_data(txn_hash):
"gets the data for the specified transaction hash"
# the url with the given hash
url = f"https://web3api.io/api/v2/transactions/{txn_hash}"
# the headers
headers = {
"x-amberdata-blockchain-id": "bitcoin-mainnet",
"x-api-key": api_key["AMBERDATA_API_KEY"]
}
# get the response
response = get_response(url, headers=headers)
# return the response
return response
def calculate_flow(txn_hash, status):
"Calculates the flow on or off the exchange for a given transaction"
# the transaction data
transaction = get_transaction_data(txn_hash)
# input and outputs of the transaction
inputs, outputs = transaction["inputs"], transaction["outputs"]
# check the status of the transaction
if status == "within":
# get the address of the sender sending money out of the exchange
senders = set([sender['addresses'][0] for sender in inputs])
# search through the outputs for the recipient that is not the sender
for output in outputs:
# get the address of the recipient
recipient = output['addresses'][0]
# check if the recipient is a sender
if recipient not in senders:
# add the value to the output
flow = output['value']
elif status == "on":
for output in outputs:
# get the address of the recipient
recipient = output['addresses'][0]
# check if the recipient is a bitmex address
if (recipient.startswith("3BMEX") or recipient.startswith("3BitMEX")):
# add the value to the output
flow = output['value']
return flow
# calculate the flows across the transactions in our exchange
df_exch['flow'] = df_exch.apply(lambda x: calculate_flow(x[1], x[10]), axis=1)
# calculate the value in BTC of the flow
df_exch['flow_btc'] = df_exch.flow//10**8
df_exch.columns
df_exch.iloc[:, [0, 4, 6, 10, 15, 16]]
df_largest = df_exch.iloc[:, [0, 1, 6, 10]].reset_index(drop=True).head(10)
df_largest
df_largest.iloc[3, 1]
# the start and end timestamps
start, end = datetime(2020, 8, 27, 7, 0, 0, tzinfo=pytz.UTC), datetime(2020, 8, 27, 12, 0, 0, tzinfo=pytz.UTC)
# plotting the price
ohlcv[(ohlcv.index > start) & (ohlcv.index < end)].close.plot()
# adding a line for bitmex flows
lines = [0]
for i in range(len(df_exch)):
txn = df_exch.iloc[i]
# grabbing the timestamp in minutes
ts = txn[0]
# plotting the vertical line
if ts > start and ts < end:
l = plt.axvline(txn[0], color='g', alpha=0.5)
lines[0] = l
# add an appropriate title
plt.title(f"BitMEX flow activity for {start.month}/{start.day} from {start.hour}:{start.minute}0 to {end.hour}:{end.minute}0")
# label the y axis
plt.ylabel("btc_usd")
plt.legend(lines, ["On"])
plt.tight_layout()
# save the figure
plt.savefig("../plots/flow_zoom_in_1.png")
# the start and end timestamps
start, end = datetime(2020, 8, 26, 6, 0, 0, tzinfo=pytz.UTC), datetime(2020, 8, 26, 8, 45, 0, tzinfo=pytz.UTC)
# plotting the price
ohlcv[(ohlcv.index > start) & (ohlcv.index < end)].close.plot()
# adding a line for bitmex flows
lines = [0]
for i in range(len(df_exch)):
txn = df_exch.iloc[i]
# grabbing the timestamp in minutes
ts = txn[0]
# plotting the vertical line
if ts > start and ts < end:
l = plt.axvline(txn[0], color='r', alpha=0.5)
lines[0] = l
# add an appropriate title
plt.title(f"BitMEX flow activity for {start.month}/{start.day} from {start.hour}:{start.minute}0 to {end.hour}:{end.minute}")
# label the y axis
plt.ylabel("btc_usd")
plt.legend(lines, ["Within"])
plt.tight_layout()
# save the figure
plt.savefig("../plots/flow_zoom_in_2.png")
```
## Which Txn's went through?
Let's see first if our big $(>\$78 m)$ transaction went through. We can check this easily enough since we recorded the transaction hash, the timestamp, and the address as well as the transaction size. We will be using the Amberdata [transaction](https://docs.amberdata.io/reference#get-address-transactions) endpoint.
```
def parse_addresses(i: int = 0, n: int = 0) -> str:
"""Returns the address from record i in position n"""
# parse the list of addresses
address_list = df_exch.iloc[i, 2].split("['")[-1].split("']")[0]
return address_list.split("', '")[n]
# the top transaction number we consider
i = 0
# the sender's address
sender = parse_addresses(i)
# the transaction hash
txn_hash = df_exch.sort_values('value', ascending=False).iloc[i, 1]
url = f"https://web3api.io/api/v2/addresses/{sender}/transactions"
# format the date string for start and endtimes
start = str(datetime(2020, 8, 26, 8, 25, 0).strftime("%Y-%m-%dT%H:%M:%S.000Z"))
end = str(datetime.today().strftime("%Y-%m-%dT%H:%M:%S.000Z"))
# setting the start and end time and the number of records to return
querystring = {"startDate": start,
"endDate": end,
"page":"0",
"size":"100"}
# headers: getting data from btc mainnet
headers = {
'x-amberdata-blockchain-id': "bitcoin-mainnet",
'x-api-key': api_key["AMBERDATA_API_KEY"]
}
# get the response
response = get_response(url, headers, querystring)
sent_txn = None
# parse the records to see if the transaction was posted
for record in response["records"]:
# if the transaction hash is the same as the posted hash
if record['hash']==txn_hash:
# show the record
sent_txn = record
print(sent_txn)
# display if we found the transaction or not
if sent_txn:
print(sent_txn)
else:
print("Transaction not completed.")
```
As we can see, the transaction was mined into block `645509` the next day, on the 27th of August.
```
sent_txns = []
for i in tqdm(range(len(df_exch))):
# the sender's address
sender = parse_addresses(i)
# the transaction hash
txn_hash = df_exch.iloc[i, 1]
url = f"https://web3api.io/api/v2/addresses/{sender}/transactions"
# format the date string for start and endtimes
start = str(datetime(2020, 8, 26, 8, 25, 0).strftime("%Y-%m-%dT%H:%M:%S.000Z"))
end = str(datetime.today().strftime("%Y-%m-%dT%H:%M:%S.000Z"))
# setting the start and end time and the number of records to return
querystring = {"startDate": start,
"endDate": end,
"page":"0",
"size":"1000"}
# headers: getting data from btc mainnet
headers = {
'x-amberdata-blockchain-id': "bitcoin-mainnet",
'x-api-key': api_key["AMBERDATA_API_KEY"]
}
# get the response
response = get_response(url, headers, querystring)
# parse the records to see if the transaction was posted
try:
for record in response["records"]:
# if the transaction hash is the same as the posted hash
if record['hash']==txn_hash:
# save the record
sent_txns.append(record)
except TypeError:
continue
print(f"Number of completed transactions: {len(sent_txns)}\nTotal transactions: {len(df_exch)}\nPercent completed: {len(sent_txns)/len(df_exch)}")
```
```
__author__ = 'Guillermo Damke <gdamke@gmail.com>, Francisco Förster <francisco.forster@gmail.com>, Alice Jacques <alice.jacques@noirlab.edu>'
__version__ = '20210119' # yyyymmdd;
__datasets__ = ['Iris flower dataset']
__keywords__ = ['Introduction to Machine Learning', 'Supervised Machine Learning', 'La Serena School for Data Science']
```
# Introduction to Supervised Machine Learning - Basic Concepts
*In original form by Francisco Forster, Centro de Modelamiento Matemático (CMM), Universidad de Chile / Instituto Milenio de Astrofísica (MAS). Adapted for NOIRLab Astro Data Lab by Guillermo Damke and Alice Jacques.*
#### This notebook is part of the curriculum of the 2019 La Serena School for Data Science.
## Table of Contents
This notebook presents an introduction to topics in Machine Learning, in the following sections:
* [General concepts in Machine Learning](#1---General-concepts-in-Machine-Learning)
* [Supervised (and Unsupervised) Machine Learning methods](#2---Supervised-and-Unsupervised-Machine-Learning)
* [Metrics to evaluate model performance](#3---Metrics-to-evaluate-model-performance)
* [Diagnostics](#4---Diagnostics)
* [Visual representations of results](#5---Visual-representations-of-results)
# Summary
This notebook introduces several concepts and definitions that are common in Machine Learning. Practical examples of these concepts are presented in a separate notebook.
# 1 - General concepts in Machine Learning
## 1.1 - Overfitting, underfitting, and the bias-variance tradeoff
### Overfitting and Underfitting
Two important concepts in machine learning are **overfitting** and **underfitting**.
If a model represents our data too accurately (**overfitting**), it may not generalize effectively to unobserved data.
If a model represents our data too generally (**underfitting**), it may underrepresent the features of the data.
A popular solution to reduce overfitting consists of adding structure to the model through **regularization**. This favors simpler models through training inspired by **[Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor)**.
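The effect of regularization can be sketched with scikit-learn (an illustrative example, not part of the original curriculum): ridge regression adds an L2 penalty on the coefficients, so an over-flexible model is pulled toward a simpler function. The data, polynomial degree, and penalty strength below are all hypothetical choices for illustration.

```python
# Illustrative sketch: L2 regularization ("ridge") shrinks the coefficients
# of an over-flexible polynomial model, reducing overfitting.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
x = np.sort(rng.uniform(0, 1, 20))[:, None]
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.3, 20)

# A degree-15 polynomial fit, with and without regularization
ols = make_pipeline(PolynomialFeatures(15), LinearRegression()).fit(x, y)
ridge = make_pipeline(PolynomialFeatures(15), Ridge(alpha=1e-3)).fit(x, y)

# The regularized model has a much smaller coefficient norm (a simpler function)
print(np.linalg.norm(ols.named_steps['linearregression'].coef_))
print(np.linalg.norm(ridge.named_steps['ridge'].coef_))
```

Increasing `alpha` shrinks the coefficients further, trading some training accuracy for robustness.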
### Bias
* Quantifies the systematic error of the model: how far, on average, predictions learned from different training sets fall from the true values.
### Variance
* Quantifies how sensitive the model is to small changes in the training set.
### Bias-variance tradeoff
The plot below shows the **bias-variance tradeoff**, which is a common problem in Supervised Machine Learning algorithms. It is related to model selection. A model with high complexity describes the training data well (low training error), but may not effectively generalize when applied to new data (high validation error, i.e., high error in predicting when presented to new data). A simpler model is not prone to overfitting the noise in the data, but it may underrepresent the features of the data (**underfitting**).

## 1.2 - Complexity, accuracy, robustness
In general, we want precise and robust models.
**Simpler models tend to be less accurate, but more robust.**
**More complex models tend to be more accurate, but less robust.**
This tension is usually expressed as the **bias-variance tradeoff** which is central to machine learning.
## 1.3 - Model selection
No one model performs uniformly better than another. One model may perform well in one data set and poorly in another.
## 1.4 - Classification vs. regression
The figure below represents two usual tasks performed with Machine Learning.
* **Classification**: refers to predicting what class or category an object belongs to, given some input data about that object. In this case, the output is a category, class, or label (i.e., a discrete variable).
* **Regression**: refers to predicting an output real value, given some input data. In this case, the output is a continuous variable.

# 2 - Supervised and Unsupervised Machine Learning
In this section, we introduce the two main learning paradigms: Supervised and Unsupervised Machine Learning.
## 2.1 - Predictive or *Supervised Learning*:
Learn a mapping from inputs ${\bf x}$ to outputs $y$, given a **labeled** set of input-output pairs $D=\lbrace{({\bf x_i}, y_i)\rbrace}_{i=1}^N$.
$D$ is called the **training set**.
Each training input ${\bf x_i}$ is a vector of dimension $M$, with numbers called **features**, **attributes** or **covariates**. They are usually stored in a $N \times M$ **design matrix** ${\bf X}$.
An important consideration, as mentioned above:
* When $y$ is **categorical** the problem is known as **[classification](#1.4---Classification-vs.-regression)**.
* When $y$ is **real-valued** the problem is known as **[regression](#1.4---Classification-vs.-regression)**.
### Example of a labeled training set: the "Iris flower dataset".
The **Iris flower dataset** is commonly used in Machine Learning tests and examples of categorical classification. Because of this, the dataset is included in several Python libraries, including the Seaborn library, which we will use below.
The **Iris flower dataset** includes four real-valued variables (length and width of petals and sepals) for 50 samples of each of three species of Iris (versicolor, virginica, and setosa):


#### What does this dataset look like?
Let's read the dataset and explore it with the Seaborn library:
```
import seaborn as sns
%matplotlib inline
sns.set(style="ticks")
dfIris = sns.load_dataset("iris")
print("Design matrix shape (entries, attributes):", dfIris.shape)
print("Design matrix columns:", dfIris.columns)
```
It can be seen that the dataset contains 150 entries with 5 attributes (columns).
We can view the first five entries with the `head` function:
```
dfIris.head()
# Notice that the real-valued variables are in centimeters.
```
The function `info` prints "a concise summary" of a DataFrame:
```
dfIris.info()
```
While the function `describe` is used to "generate descriptive statistics" of a DataFrame:
```
dfIris.describe()
```
For a quick visual exploration of the dataset, we can use the `pairplot` function of the Seaborn library.
We will pass the `hue="species"` argument, so that the three species (labels) in the dataset are represented by different colors.
```
sns.pairplot(dfIris, hue="species");
```
We will train a model to predict the Iris classes in Section 4 of this notebook.
In addition, some applications of Supervised Machine Learning algorithms are presented in the ["04_Intro_Machine_Learning_practical"](https://github.com/astro-datalab/notebooks-latest/blob/master/06_EPO/LaSerenaSchoolForDataScience/2019/04_Intro_Machine_Learning_practical/Intro_Machine_Learning_practical.ipynb) entry of this series.
## 2.2 - Descriptive or *Unsupervised Learning*
Only inputs are given: $D=\lbrace{{\bf x_i}\rbrace}_{i=1}^N$
The goal here is to find interesting patterns, which is sometimes called **knowledge discovery**.
The problem is not always well defined. It may not be clear what kind of pattern to look for, and there may not be an obvious metric to use (unlike supervised learning).
Some applications of Unsupervised Machine Learning algorithms are presented in the ["04_Intro_Machine_Learning_practical"](https://github.com/astro-datalab/notebooks-latest/blob/master/06_EPO/LaSerenaSchoolForDataScience/2019/04_Intro_Machine_Learning_practical/Intro_Machine_Learning_practical.ipynb) entry of this series.
## 2.3 - Reinforcement Learning
A mix between Supervised and Unsupervised Learning: only occasional reward or punishment signals are given (e.g., a baby learning to walk).
# 3 - Metrics to evaluate model performance
## 3.1 - Classification loss
Learning algorithms, and optimization algorithms, need to quantify whether the predicted value from a model agrees with the true value. Learning involves a minimization in which a **loss function** penalizes wrong outcomes.
### Loss function and classification risk:
The most common loss function used for supervised classification is the **zero-one** loss function:
$L(y, \hat y) = \delta(y \ne \hat y)$
where $\hat y$ is the best guess value of $y$. The function is 1 if the guess is different than the true value; and 0 if the guess is the same as the true value.
The **classification risk** of a model is the expectation value of the loss function:
$E[L(y, \hat y)] = p(y \ne \hat y)$
For the zero-one loss function the risk is equal to the **misclassification rate** or **error rate**.
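A minimal sketch of this relationship (with hypothetical labels): averaging the zero-one loss over a sample gives exactly the misclassification rate.

```python
# Sketch: the empirical zero-one risk equals the misclassification rate.
y_true = [1, 0, 1, 1, 0, 1]  # hypothetical true labels
y_pred = [1, 0, 0, 1, 0, 0]  # hypothetical model guesses

# Zero-one loss per sample: 1 if the guess differs from the truth, else 0
losses = [int(yt != yp) for yt, yp in zip(y_true, y_pred)]
error_rate = sum(losses) / len(losses)
accuracy = 1 - error_rate

print(losses)      # [0, 0, 1, 0, 0, 1]
print(error_rate)  # 2/6 ≈ 0.333
```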
## 3.2 - Types of errors
Accuracy and classification risk are not necessarily good diagnostics of the quality of a model.
It is better to distinguish between two types of errors (assuming 1 is the label we are evaluating):
1. Assigning the label 1 to an object whose true class is 0 (a **false positive**)
2. Assigning the label 0 to an object whose true class is 1 (a **false negative**)

(Image from http://opendatastat.org/mnemonics/)
Additionally, correct cases can be separated as:
- Assigning the label 1 to an object whose true class is 1 is a **true positive**.
- Assigning the label 0 to an object whose true class is 0 is a **true negative**.
# 4 - Diagnostics
Applying the concepts introduced above, it is possible to define several diagnostics or metrics in Machine Learning to evaluate the goodness of a given algorithm applied to a dataset.
## 4.1 - Accuracy, contamination, recall, and precision
These four metrics are defined as:
$$\rm accuracy = \frac{\#\ correct\ labels}{total}$$
Note that this is one minus the classification risk (defined in [Section 3.1](#3.1---Classification-loss)).
$$\rm contamination\ =\ \frac{false~ positives}{true~ positives~ +~ false~ positives}$$
$$\rm recall\ =\ \frac{true~ positives}{true~ positives~ +~ false~ negatives}$$
$$\rm precision\ = 1 - contamination = \ \frac{true~ positives}{true~ positives~ +~ false~ positives}$$
Note: Sometimes, **recall** is also called **completeness**.
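The four definitions above can be computed directly from the error counts (a short sketch; the counts are hypothetical):

```python
# Sketch: the four diagnostics from hypothetical raw counts
tp, tn, fp, fn = 40, 50, 10, 20

accuracy = (tp + tn) / (tp + tn + fp + fn)  # 90 / 120 = 0.75
recall = tp / (tp + fn)                     # 40 / 60 ≈ 0.667 (completeness)
precision = tp / (tp + fp)                  # 40 / 50 = 0.8
contamination = 1 - precision               # fp / (tp + fp) = 0.2

print(accuracy, recall, precision, contamination)
```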
## 4.2 - Macro vs. micro averages
The definitions given above can be applied directly in a two-class problem. However, when evaluating the different diagnostics in a **multiclass problem** (i.e., non-binary classification), one has to choose between macro and micro averaging.
**Macro averaging**
Compute diagnostics for every class by taking the average of the class diagnostics.
**Micro averaging**
Compute diagnostics for the total errors without making a distinction between classes (True Positive, False Positive, False Negative).
For example, consider the following 3-class problem:
| Label | TP | FP | FN | Precision | Recall |
| - | - | - | - | - | - |
| c1 | 3 | 2 | 7 | 0.6 | 0.3 |
| c2 | 1 | 7 | 9 | 0.12 | 0.1 |
| c3 | 2 | 5 | 6 | 0.29 | 0.25 |
| Total | 6 | 14 | 22 | | |
| Macro averaged | | | | 0.34 | 0.22 |
| Micro averaged | | | | 0.3 | 0.21 |
In this case, the value for macro precision is:
\begin{align}
\rm Macro_{precision} &= \rm average \big(precision(c1), precision(c2), precision(c3)\big) \\
& = \frac{1}{3} \times \biggl( \frac{3}{3 + 2} + \frac{1}{1 + 7} + \frac{2}{2 + 5} \biggr) = 0.34
\end{align}
And the value for micro precision is:
\begin{align}
\rm Micro_{precision} &= \rm precision(total) \\
& = \frac{6}{6 + 14} = 0.3
\end{align}
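The table's numbers can be reproduced directly (a short sketch using the counts above):

```python
# Worked example: macro vs. micro averaging for the 3-class table above
tp = {"c1": 3, "c2": 1, "c3": 2}
fp = {"c1": 2, "c2": 7, "c3": 5}
fn = {"c1": 7, "c2": 9, "c3": 6}

# Macro: compute per-class precision/recall, then average over classes
macro_p = sum(tp[c] / (tp[c] + fp[c]) for c in tp) / len(tp)
macro_r = sum(tp[c] / (tp[c] + fn[c]) for c in tp) / len(tp)

# Micro: pool the counts over all classes first, then compute once
micro_p = sum(tp.values()) / (sum(tp.values()) + sum(fp.values()))
micro_r = sum(tp.values()) / (sum(tp.values()) + sum(fn.values()))

print(round(macro_p, 2), round(macro_r, 2))  # 0.34 0.22
print(round(micro_p, 2), round(micro_r, 2))  # 0.3 0.21
```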
## 4.3 True positive rate (TPR) and false positive rate (FPR)
These scores are defined as:
$$\rm TPR\ =\ recall\ =\ \frac{true~ positives}{true~ positives~ +~ false~ negatives}$$
$$\rm FPR\ = \ \frac{false~ positives}{false~ positives~ +~ true~ negatives}$$

(image by user Walber in Wikipedia. CC BY-SA 4.0)
## 4.4 - Problems with accuracy
As introduced above, accuracy is defined as:
$$\rm accuracy\ =\ \frac{\#~ Total~ of~ correct~ predictions}{\#~ Total~ number~ of~ predictions}$$
To show why accuracy is not a very useful statistic let's consider the following example.
**Example:** A model to predict whether a person is from a given country (with a population of 37 million people):
*Simple (and wrong) model*: assuming that the world population is 7.5 billion people, predict that a person is from that country with a probability 37/7500.
$$ \rm{correct\ predictions} = (7,500,000,000 - 37,000,000) \times \bigg(1 - \frac{37}{7500}\bigg) + 37,000,000 \times \frac{37}{7500} = 7,426,365,067$$
Then, accuracy becomes:
$$\rm accuracy = \frac{7,426,365,067}{7,500,000,000} = 0.99$$
Our classifier is 99% accurate, but it is clearly too simplistic!
### Precision and recall are better statistics
Let's try precision and recall instead. First, calculate the TP, FP and FN:
True positives (TP): $37,000,000 \times \frac{37}{7500} = 182,533$
False positives (FP): $(7,500,000,000 - 37,000,000) \times \frac{37}{7500} = 36,817,467$
False negatives (FN): $37,000,000 \times \big(1 - \frac{37}{7500}\big) = 36,817,467$
Then, we evaluate **recall** and **precision**:
$$\rm recall = \frac{TP}{TP + FN} = \frac{182,533}{182,533 + 36,817,467} = 0.005$$
$$\rm precision = \frac{TP}{TP + FP} = \frac{182,533}{182,533 + 36,817,467} = 0.005$$
Our classifier has only 0.5% recall and precision!
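The arithmetic above can be checked directly (a sketch using the same assumed population figures):

```python
# Reproducing the arithmetic of the country-classifier example
population, world, p = 37_000_000, 7_500_000_000, 37 / 7500

tp = population * p            # ≈ 182,533
fp = (world - population) * p  # ≈ 36,817,467
fn = population * (1 - p)      # ≈ 36,817,467
tn = (world - population) * (1 - p)

accuracy = (tp + tn) / world
recall = tp / (tp + fn)
precision = tp / (tp + fp)
print(round(accuracy, 2), round(recall, 3), round(precision, 3))  # 0.99 0.005 0.005
```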
## 4.5 - F1 score
A simple statistic which takes into account both recall and precision is the **$\rm \bf F_1$ score**, which is twice their harmonic mean. It is defined as:
$$\rm F_1 = 2 \times \ \frac{1}{\frac{1}{precision}\ +\ \frac{1}{recall}} = 2 \times \ \frac{precision\ \times\ recall}{precision\ +\ recall}$$
## 4.6 - F$_\beta$ score
To give more or less weight to recall vs precision, the $F_\beta$ score is used:
$$\rm F_\beta = (1 + \beta^2) \times \frac{precision\ \times\ recall}{\beta^2\ precision\ +\ recall}$$
$F_\beta$ was derived so that it measures the effectiveness of retrieval with respect to a user who attaches **$\beta$ times as much importance to recall as precision**.
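Both scores can be sketched in a few lines (the precision and recall values below are hypothetical); note how $\beta > 1$ pulls the score toward recall and $\beta < 1$ toward precision:

```python
# Sketch: F1 and F_beta from precision and recall
def f_beta(precision, recall, beta=1.0):
    # F1 is the special case beta = 1 (the harmonic mean times 2)
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

p, r = 0.8, 0.5  # hypothetical values
print(f_beta(p, r))            # F1 = 0.8/1.3 ≈ 0.615
print(f_beta(p, r, beta=2))    # weighs recall more: 2.0/3.7 ≈ 0.541
print(f_beta(p, r, beta=0.5))  # weighs precision more: 0.5/0.7 ≈ 0.714
```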
# 5 - Visual representations of results
## 5.1 - Confusion matrix
Also known as **error matrix**, it is a way to summarize results in classification problems.
The elements of the matrix correspond to the number (or fraction) of instances of an actual class which were classified as another class.
A perfect classifier has the *identity* as its normalized confusion matrix.
For example, a classifier for the Iris flower dataset could yield the following results:

<br></br>
<br></br>

## 5.2 - ROC curve
The **Receiver Operating Characteristic (ROC)** curve is a visualization of the tradeoff between the true positive rate and the false positive rate of a classifier as the discrimination threshold is varied.
It plots the **True Positive Rate (TPR)** vs the **False Positive Rate (FPR)** at various thresholds.

The demo below shows the ROC curve for a classifier as the discrimination between TP and FP varies.
```
from IPython.display import Image
Image(url="Images/roc_curve.gif")
```
This demonstration is described [here](https://arogozhnikov.github.io/2015/10/05/roc-curve.html).
## 5.3 - Area under the curve (AUC) and Gini coefficient (G1)
The AUC is equal to the probability that the classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one.
* A larger AUC indicates a better classification model
* A perfect classifier has AUC = 1
* A random classifier has AUC = 0.5 (note that the **no-discrimination line** is the identity)
* AUC is related to the G1, which is twice the area between the ROC and the no-discrimination line:
$\Large \rm G_1 = 2 \times AUC - 1$

The ROC AUC statistic is normally used to do model comparison.
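Both quantities can be sketched with scikit-learn's `roc_auc_score` (the labels and scores below are hypothetical):

```python
# Sketch: ROC AUC and the Gini coefficient
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]  # hypothetical classifier scores

auc = roc_auc_score(y_true, y_score)
gini = 2 * auc - 1
print(auc, gini)  # 0.75 0.5
```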
## 5.4 - DET curve
An alternative to the ROC curve is the **Detection Error Tradeoff (DET)** curve.
The DET curve plots the **FNR (missed detections) vs. the FPR (false alarms)** on a non-linearly transformed axis in order to emphasize regions of low FPR and low FNR.

# In Conclusion
This has been a brief introduction to concepts in Machine Learning with a focus on classification (Supervised Learning). Special emphasis has been put on introducing a variety of concepts and metrics that should be especially useful for the evaluation of Machine Learning algorithms in classification problems. Finally, we introduced some common visual representations of results, which are useful to summarize model performance.
## Data Mining and Machine Learning
### Logistic Regression
### Libraries: scikit-learn and h2o
#### Edgar Acuna
#### March 2021
```
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import roc_auc_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
%matplotlib inline
import h2o
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.estimators.naive_bayes import H2ONaiveBayesEstimator
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
#h2o.connect()
#h2o.no_progress()
h2o.init(ip="localhost", port=54323)
import warnings
warnings.filterwarnings('ignore')
```
### Example 1: Predicting the final grade in a class
```
df=pd.read_csv("http://academic.uprm.edu/eacuna/eje1dis.csv")
# Converting the predictor table and the class column into matrices
y=df['Nota']
X=df.iloc[:,0:2]
# Creating a numeric "pass" column to represent the classes
lb_make = LabelEncoder()
df["pass"] = lb_make.fit_transform(df["Nota"])
databin=df[['E1','pass']]
print(databin.head())
# Drawing the regression line over the scatter plot
x1=databin.iloc[:,0]
x2=databin.iloc[:,1]
plt.scatter(x1,x2)
plt.plot(x1, np.poly1d(np.polyfit(x1, x2, 1))(x1),color='red')
plt.show()
```
### Applying Logistic Regression to predict Final Grade. Use of sckikit-learn
```
#Applying Logistic Regression to predict Final Grade
X=df[['E1',"E2"]]
y3=df['pass']
# Fitting the logistic regression and computing its accuracy
model = LogisticRegression(solver="newton-cg")
model = model.fit(X, y3)
print("Model coefficients", model.coef_)
#Accuracy
model.score(X,y3)
# Accuracy rate
pred = model.predict(X)
print(pred)
pred1=model.predict_proba(X)
print(pred1[0:5,:])
print(classification_report(y3, pred))
```
#### Plotting the decision boundary
```
from matplotlib.colors import ListedColormap
logis = LogisticRegression(solver="newton-cg")
X1=df.iloc[:,0:2].to_numpy()
y1=df['pass'].to_numpy()
logis.fit(X1,y1)
eje1=np.arange(start = X1[:, 0].min()-1, stop = X1[:, 0].max() + 1, step = 0.1)
eje2=np.arange(start = X1[:, 1].min()-1, stop = X1[:, 1].max() + 1, step = 0.11)
Y1, Y2 = np.meshgrid(eje1,eje2)
pred2=logis.predict(np.c_[Y1.ravel(), Y2.ravel()]).reshape(Y1.shape)
plt.figure(figsize=(10, 10))
plt.pcolormesh(Y1, Y2, pred2,cmap=plt.cm.Paired)
# Plot also the training points#
plt.scatter(X1[:, 0], X1[:, 1], c=y1, edgecolors='k')
plt.xlabel('Ex1')
plt.ylabel('Ex2')
plt.xlim(Y1.min(), Y1.max())
plt.ylim(Y2.min(), Y2.max())
plt.xticks(())
plt.yticks(())
plt.show()
```
### Logistic Regression using library H2o
```
df1=h2o.H2OFrame(df)
myx=['E1','E2']
myy='Nota'
glm_model = H2OGeneralizedLinearEstimator(family= "binomial", lambda_ = 0, compute_p_values = True) # lambda_ is a regularization parameter
glm_model.train(myx, myy, training_frame= df1)
y_pred=glm_model.predict(df1)
print((y_pred['predict']==df1['Nota']).sum()/len(df1))
```
### Example 2. Logistic Regression for Diabetes using sckit-learn
```
url= "http://academic.uprm.edu/eacuna/diabetes.dat"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = pd.read_table(url, names=names,header=None)
#The response variable must be binary (0,1)
y=data['class']-1
X=data.iloc[:,0:8]
# Fitting the logistic regression and computing its accuracy
model = LogisticRegression()
model = model.fit(X, y)
print(model.coef_)
model.score(X,y)
predictions = model.predict(X)
print(classification_report(y, predictions))
```
Estimating the accuracy using 10-fold cross-validation
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model, X, y, cv=10)
scores
# Computing the mean accuracy and a confidence interval
print("CV Accuracy: %0.3f (+/- %0.3f)" % (scores.mean(), scores.std() * 2))
```
### Logistic regression for Diabetes using the H2o library
```
diabetes = h2o.import_file("https://academic.uprm.edu/eacuna/diabetes.dat")
myx=['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8']
diabetes['C9']=diabetes['C9'].asfactor()
myy='C9'
glm_model = H2OGeneralizedLinearEstimator(family= "binomial", lambda_ = 0, compute_p_values = True)
glm_model.train(myx, myy, training_frame= diabetes)
y_pred=glm_model.predict(diabetes)
#print (y_pred['predict']==diabetes['C9']).sum()/float(len(diabetes))
print((y_pred['predict']==diabetes['C9']).sum()/len(diabetes))
```
Estimating the accuracy using 10-fold cross-validation
```
model = H2OGeneralizedLinearEstimator(family= "binomial", lambda_ = 0, compute_p_values = True,nfolds=10)
model.train(myx, myy, training_frame= diabetes)
model.confusion_matrix
```
<table border="0">
<tr>
<td>
<img src="https://ictd2016.files.wordpress.com/2016/04/microsoft-research-logo-copy.jpg" style="width 30px;" />
</td>
<td>
<img src="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/MSR-ALICE-HeaderGraphic-1920x720_1-800x550.jpg" style="width 100px;"/></td>
</tr>
</table>
# Double Machine Learning: Use Cases and Examples
Double Machine Learning (DML) is an algorithm that applies arbitrary machine learning methods
to fit the treatment and response, then uses a linear model to predict the response residuals
from the treatment residuals.
The EconML SDK implements the following DML classes:
* LinearDML: suitable for estimating heterogeneous treatment effects.
* SparseLinearDML: suitable for the case when $W$ is high dimensional vector and both the first stage and second stage estimate are linear.
In this notebook, we show the performance of DML on both synthetic data and observational data.
**Notebook contents:**
1. Example usage with single continuous treatment synthetic data
2. Example usage with single binary treatment synthetic data
3. Example usage with multiple continuous treatment synthetic data
4. Example usage with single continuous treatment observational data
5. Example usage with multiple continuous treatment, multiple outcome observational data
```
import econml
## Ignore warnings
import warnings
warnings.filterwarnings('ignore')
# Main imports
from econml.dml import DML, LinearDML,SparseLinearDML
# Helper imports
import numpy as np
from itertools import product
from sklearn.linear_model import Lasso, LassoCV, LogisticRegression, LogisticRegressionCV,LinearRegression,MultiTaskElasticNet,MultiTaskElasticNetCV
from sklearn.ensemble import RandomForestRegressor,RandomForestClassifier
from sklearn.preprocessing import PolynomialFeatures
import matplotlib.pyplot as plt
import matplotlib
from sklearn.model_selection import train_test_split
%matplotlib inline
```
## 1. Example Usage with Single Continuous Treatment Synthetic Data and Model Selection
### 1.1. DGP
We use the data generating process (DGP) from [here](https://arxiv.org/abs/1806.03467). The DGP is described by the following equations:
\begin{align}
T =& \langle W, \beta\rangle + \eta, & \;\eta \sim \text{Uniform}(-1, 1)\\
Y =& T\cdot \theta(X) + \langle W, \gamma\rangle + \epsilon, &\; \epsilon \sim \text{Uniform}(-1, 1)\\
W \sim& \text{Normal}(0,\, I_{n_w})\\
X \sim& \text{Uniform}(0,1)^{n_x}
\end{align}
where $W$ is a matrix of high-dimensional confounders and $\beta, \gamma$ have high sparsity.
For this DGP,
\begin{align}
\theta(x) = \exp(2\cdot x_1).
\end{align}
```
# Treatment effect function
def exp_te(x):
return np.exp(2*x[0])
# DGP constants
np.random.seed(123)
n = 2000
n_w = 30
support_size = 5
n_x = 1
# Outcome support
support_Y = np.random.choice(np.arange(n_w), size=support_size, replace=False)
coefs_Y = np.random.uniform(0, 1, size=support_size)
epsilon_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Treatment support
support_T = support_Y
coefs_T = np.random.uniform(0, 1, size=support_size)
eta_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Generate controls, covariates, treatments and outcomes
W = np.random.normal(0, 1, size=(n, n_w))
X = np.random.uniform(0, 1, size=(n, n_x))
# Heterogeneous treatment effects
TE = np.array([exp_te(x_i) for x_i in X])
T = np.dot(W[:, support_T], coefs_T) + eta_sample(n)
Y = TE * T + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n)
Y_train, Y_val, T_train, T_val, X_train, X_val, W_train, W_val = train_test_split(Y, T, X, W, test_size=.2)
# Generate test data
X_test = np.array(list(product(np.arange(0, 1, 0.01), repeat=n_x)))
```
### 1.2. Train Estimator
We train models in three different ways, and compare their performance.
#### 1.2.1. Default Setting
```
est = LinearDML(model_y=RandomForestRegressor(),
model_t=RandomForestRegressor(),
random_state=123)
est.fit(Y_train, T_train, X=X_train, W=W_train)
te_pred = est.effect(X_test)
```
#### 1.2.2. Polynomial Features for Heterogeneity
```
est1 = SparseLinearDML(model_y=RandomForestRegressor(),
model_t=RandomForestRegressor(),
featurizer=PolynomialFeatures(degree=3),
random_state=123)
est1.fit(Y_train, T_train, X=X_train, W=W_train)
te_pred1=est1.effect(X_test)
```
#### 1.2.3. Polynomial Features with regularization
```
est2 = DML(model_y=RandomForestRegressor(),
model_t=RandomForestRegressor(),
model_final=Lasso(alpha=0.1, fit_intercept=False),
featurizer=PolynomialFeatures(degree=10),
random_state=123)
est2.fit(Y_train, T_train, X=X_train, W=W_train)
te_pred2=est2.effect(X_test)
```
#### 1.2.4 Random Forest Final Stage
```
from econml.dml import ForestDML
# One can replace model_y and model_t with any scikit-learn regressor and classifier correspondingly
# as long as it accepts the sample_weight keyword argument at fit time.
est3 = ForestDML(model_y=RandomForestRegressor(),
model_t=RandomForestRegressor(),
discrete_treatment=False,
n_estimators=1000,
subsample_fr=.8,
min_samples_leaf=10,
min_impurity_decrease=0.001,
verbose=0, min_weight_fraction_leaf=.01)
est3.fit(Y_train, T_train, X=X_train, W=W_train)
te_pred3 = est3.effect(X_test)
est3.feature_importances_
```
### 1.3. Performance Visualization
```
plt.figure(figsize=(10,6))
plt.plot(X_test, te_pred, label='DML default')
plt.plot(X_test, te_pred1, label='DML polynomial degree=3')
plt.plot(X_test, te_pred2, label='DML polynomial degree=10 with Lasso')
plt.plot(X_test, te_pred3, label='ForestDML')
expected_te = np.array([exp_te(x_i) for x_i in X_test])
plt.plot(X_test, expected_te, 'b--', label='True effect')
plt.ylabel('Treatment Effect')
plt.xlabel('x')
plt.legend()
plt.show()
```
### 1.4. Model selection
For the different models above, we can use the `score` function to estimate final-model performance. The score is the MSE of the final-stage Y residual, which can be seen as a proxy for the MSE of the treatment effect.
```
score={}
score["DML default"] = est.score(Y_val, T_val, X_val, W_val)
score["DML polynomial degree=3"] = est1.score(Y_val, T_val, X_val, W_val)
score["DML polynomial degree=10 with Lasso"] = est2.score(Y_val, T_val, X_val, W_val)
score["ForestDML"] = est3.score(Y_val, T_val, X_val, W_val)
score
print("best model selected by score: ",min(score,key=lambda x: score.get(x)))
mse_te={}
mse_te["DML default"] = ((expected_te - te_pred)**2).mean()
mse_te["DML polynomial degree=3"] = ((expected_te - te_pred1)**2).mean()
mse_te["DML polynomial degree=10 with Lasso"] = ((expected_te - te_pred2)**2).mean()
mse_te["ForestDML"] = ((expected_te - te_pred3)**2).mean()
mse_te
print("best model selected by MSE of TE: ", min(mse_te, key=lambda x: mse_te.get(x)))
```
## 2. Example Usage with Single Binary Treatment Synthetic Data and Confidence Intervals
### 2.1. DGP
We use the following DGP:
\begin{align}
T \sim & \text{Bernoulli}\left(f(W)\right), &\; f(W)=\sigma(\langle W, \beta\rangle + \eta), \;\eta \sim \text{Uniform}(-1, 1)\\
Y = & T\cdot \theta(X) + \langle W, \gamma\rangle + \epsilon, & \; \epsilon \sim \text{Uniform}(-1, 1)\\
W \sim & \text{Normal}(0,\, I_{n_w}) & \\
X \sim & \text{Uniform}(0,\, 1)^{n_x}
\end{align}
where $W$ is a matrix of high-dimensional confounders, $\beta, \gamma$ have high sparsity and $\sigma$ is the sigmoid function.
For this DGP,
\begin{align}
\theta(x) = \exp( 2\cdot x_1 ).
\end{align}
```
# Treatment effect function
def exp_te(x):
return np.exp(2 * x[0])# DGP constants
np.random.seed(123)
n = 1000
n_w = 30
support_size = 5
n_x = 4
# Outcome support
support_Y = np.random.choice(range(n_w), size=support_size, replace=False)
coefs_Y = np.random.uniform(0, 1, size=support_size)
epsilon_sample = lambda n:np.random.uniform(-1, 1, size=n)
# Treatment support
support_T = support_Y
coefs_T = np.random.uniform(0, 1, size=support_size)
eta_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Generate controls, covariates, treatments and outcomes
W = np.random.normal(0, 1, size=(n, n_w))
X = np.random.uniform(0, 1, size=(n, n_x))
# Heterogeneous treatment effects
TE = np.array([exp_te(x_i) for x_i in X])
# Define treatment
log_odds = np.dot(W[:, support_T], coefs_T) + eta_sample(n)
T_sigmoid = 1/(1 + np.exp(-log_odds))
T = np.array([np.random.binomial(1, p) for p in T_sigmoid])
# Define the outcome
Y = TE * T + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n)
# get testing data
X_test = np.random.uniform(0, 1, size=(n, n_x))
X_test[:, 0] = np.linspace(0, 1, n)
```
### 2.2. Train Estimator
```
est = LinearDML(model_y=RandomForestRegressor(),
model_t=RandomForestClassifier(min_samples_leaf=10),
discrete_treatment=True,
linear_first_stages=False,
n_splits=6)
est.fit(Y, T, X=X, W=W)
te_pred = est.effect(X_test)
lb, ub = est.effect_interval(X_test, alpha=0.01)
est2 = SparseLinearDML(model_y=RandomForestRegressor(),
model_t=RandomForestClassifier(min_samples_leaf=10),
discrete_treatment=True,
featurizer=PolynomialFeatures(degree=2),
linear_first_stages=False,
n_splits=6)
est2.fit(Y, T, X=X, W=W)
te_pred2 = est2.effect(X_test)
lb2, ub2 = est2.effect_interval(X_test, alpha=0.01)
est3 = ForestDML(model_y=RandomForestRegressor(),
model_t=RandomForestClassifier(min_samples_leaf=10),
discrete_treatment=True,
n_estimators=1000,
subsample_fr=.8,
min_samples_leaf=10,
min_impurity_decrease=0.001,
verbose=0, min_weight_fraction_leaf=.01,
n_crossfit_splits=6)
est3.fit(Y, T, X=X, W=W)
te_pred3 = est3.effect(X_test)
lb3, ub3 = est3.effect_interval(X_test, alpha=0.01)
est3.feature_importances_
```
### 2.3. Performance Visualization
```
expected_te=np.array([exp_te(x_i) for x_i in X_test])
plt.figure(figsize=(16,6))
plt.subplot(1, 3, 1)
plt.plot(X_test[:, 0], te_pred, label='LinearDML', alpha=.6)
plt.fill_between(X_test[:, 0], lb, ub, alpha=.4)
plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect')
plt.ylabel('Treatment Effect')
plt.xlabel('x')
plt.legend()
plt.subplot(1, 3, 2)
plt.plot(X_test[:, 0], te_pred2, label='SparseLinearDML', alpha=.6)
plt.fill_between(X_test[:, 0], lb2, ub2, alpha=.4)
plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect')
plt.ylabel('Treatment Effect')
plt.xlabel('x')
plt.legend()
plt.subplot(1, 3, 3)
plt.plot(X_test[:, 0], te_pred3, label='ForestDML', alpha=.6)
plt.fill_between(X_test[:, 0], lb3, ub3, alpha=.4)
plt.plot(X_test[:, 0], expected_te, 'b--', label='True effect')
plt.ylabel('Treatment Effect')
plt.xlabel('x')
plt.legend()
plt.show()
```
### 2.4. Other Inferences
#### 2.4.1 Effect Inferences
Besides the confidence interval, we can also output other statistical inferences of the effect, including the standard error, z-test score, and p-value for each sample $X[i]$.
```
est.effect_inference(X_test[:10,]).summary_frame(alpha=0.1, value=0, decimals=3)
```
We could also get the population inferences given sample $X$.
```
est.effect_inference(X_test).population_summary(alpha=0.1, value=0, decimals=3, tol=0.001)
```
#### 2.4.2 Coefficient and Intercept Inferences
We could also get the coefficient and intercept inference for the final model when it's linear.
```
est.coef__inference().summary_frame()
est.intercept__inference().summary_frame()
est.summary()
```
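As a reminder of where coefficient standard errors come from in a plain linear model, here is a hand-rolled OLS sketch on synthetic data (not econml's debiased estimator; all numbers are made up for illustration): the standard errors are the square roots of the diagonal of $\hat\sigma^2 (X'X)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one feature
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# OLS point estimates and standard errors from sigma^2 * (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - X.shape[1])
stderr = np.sqrt(sigma2 * np.diag(XtX_inv))
print(beta_hat, stderr)
```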
## 3. Example Usage with Multiple Continuous Treatment Synthetic Data
### 3.1. DGP
We use the data generating process (DGP) from [here](https://arxiv.org/abs/1806.03467), and modify the treatment to generate multiple treatments. The DGP is described by the following equations:
\begin{align}
T =& \langle W, \beta\rangle + \eta, & \;\eta \sim \text{Uniform}(-1, 1)\\
Y =& T\cdot \theta_{1}(X) + T^{2}\cdot \theta_{2}(X) + \langle W, \gamma\rangle + \epsilon, &\; \epsilon \sim \text{Uniform}(-1, 1)\\
W \sim& \text{Normal}(0,\, I_{n_w})\\
X \sim& \text{Uniform}(0,1)^{n_x}
\end{align}
where $W$ is a matrix of high-dimensional confounders and $\beta, \gamma$ have high sparsity.
For this DGP,
\begin{align}
\theta_{1}(x) = x_1\\
\theta_{2}(x) = x_1^{2}\\
\end{align}
```
# DGP constants
np.random.seed(123)
n = 6000
n_w = 30
support_size = 5
n_x = 5
# Outcome support
support_Y = np.random.choice(np.arange(n_w), size=support_size, replace=False)
coefs_Y = np.random.uniform(0, 1, size=support_size)
epsilon_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Treatment support
support_T = support_Y
coefs_T = np.random.uniform(0, 1, size=support_size)
eta_sample = lambda n: np.random.uniform(-1, 1, size=n)
# Generate controls, covariates, treatments and outcomes
W = np.random.normal(0, 1, size=(n, n_w))
X = np.random.uniform(0, 1, size=(n, n_x))
# Heterogeneous treatment effects
TE1 = np.array([x_i[0] for x_i in X])
TE2 = np.array([x_i[0]**2 for x_i in X]).flatten()
T = np.dot(W[:, support_T], coefs_T) + eta_sample(n)
Y = TE1 * T + TE2 * T**2 + np.dot(W[:, support_Y], coefs_Y) + epsilon_sample(n)
# Generate test data
X_test = np.random.uniform(0, 1, size=(100, n_x))
X_test[:, 0] = np.linspace(0, 1, 100)
```
### 3.2. Train Estimator
```
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import ElasticNetCV
est = LinearDML(model_y=GradientBoostingRegressor(n_estimators=100, max_depth=3, min_samples_leaf=20),
model_t=MultiOutputRegressor(GradientBoostingRegressor(n_estimators=100,
max_depth=3,
min_samples_leaf=20)),
featurizer=PolynomialFeatures(degree=2, include_bias=False),
linear_first_stages=False,
n_splits=5)
T = T.reshape(-1,1)
est.fit(Y, np.concatenate((T, T**2), axis=1), X=X, W=W)
te_pred = est.const_marginal_effect(X_test)
lb, ub = est.const_marginal_effect_interval(X_test, alpha=0.01)
```
### 3.3. Performance Visualization
```
plt.figure(figsize=(10,6))
plt.plot(X_test[:, 0], te_pred[:, 0], label='DML estimate1')
plt.fill_between(X_test[:, 0], lb[:, 0], ub[:, 0], alpha=.4)
plt.plot(X_test[:, 0], te_pred[:, 1], label='DML estimate2')
plt.fill_between(X_test[:, 0], lb[:, 1], ub[:, 1], alpha=.4)
expected_te1 = np.array([x_i[0] for x_i in X_test])
expected_te2=np.array([x_i[0]**2 for x_i in X_test]).flatten()
plt.plot(X_test[:, 0], expected_te1, '--', label='True effect1')
plt.plot(X_test[:, 0], expected_te2, '--', label='True effect2')
plt.ylabel("Treatment Effect")
plt.xlabel("x")
plt.legend()
plt.show()
```
## 4. Example Usage with Single Continuous Treatment Observational Data
We applied our technique to Dominick’s dataset, a popular historical dataset of store-level orange juice prices and sales provided by the University of Chicago Booth School of Business.
The dataset comprises a large number of covariates $W$, but researchers might only be interested in learning the elasticity of demand as a function of a few variables $x$, such
as income or education.
We applied `LinearDML` to estimate orange juice price elasticity
as a function of income, and our results unveil the natural phenomenon that lower-income consumers are more price-sensitive.
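Elasticity here is the slope of log demand with respect to log price, so a log-log regression recovers it; a quick synthetic check of that fact (toy numbers, not the Dominick's data):

```python
import numpy as np

rng = np.random.default_rng(1)
true_elasticity = -1.8  # assumed value for this toy example
log_price = rng.uniform(0.0, 1.0, size=1000)
log_move = 5.0 + true_elasticity * log_price + rng.normal(scale=0.1, size=1000)

# The slope of the log-log regression is the price elasticity of demand
slope = np.polyfit(log_price, log_move, 1)[0]
print(round(slope, 2))
```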
### 4.1. Data
```
# A few more imports
import os
import pandas as pd
import urllib.request
from sklearn.preprocessing import StandardScaler
# Import the data
file_name = "oj_large.csv"
if not os.path.isfile(file_name):
print("Downloading file (this might take a few seconds)...")
urllib.request.urlretrieve("https://msalicedatapublic.blob.core.windows.net/datasets/OrangeJuice/oj_large.csv", file_name)
oj_data = pd.read_csv(file_name)
oj_data.head()
# Prepare data
Y = oj_data['logmove'].values
T = np.log(oj_data["price"]).values
scaler = StandardScaler()
W1 = scaler.fit_transform(oj_data[[c for c in oj_data.columns if c not in ['price', 'logmove', 'brand', 'week', 'store','INCOME']]].values)
W2 = pd.get_dummies(oj_data[['brand']]).values
W = np.concatenate([W1, W2], axis=1)
X=scaler.fit_transform(oj_data[['INCOME']].values)
## Generate test data
min_income = -1
max_income = 1
delta = (1 - (-1)) / 100
X_test = np.arange(min_income, max_income + delta - 0.001, delta).reshape(-1,1)
```
### 4.2. Train Estimator
```
est = LinearDML(model_y=RandomForestRegressor(),model_t=RandomForestRegressor())
est.fit(Y, T, X=X, W=W)
te_pred=est.effect(X_test)
```
### 4.3. Performance Visualization
```
# Plot orange juice elasticity as a function of income
plt.figure(figsize=(10,6))
plt.plot(X_test, te_pred, label="OJ Elasticity")
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.legend()
plt.title("Orange Juice Elasticity vs Income")
plt.show()
```
### 4.4. Confidence Intervals
We can also get confidence intervals around our predictions by passing an additional `inference` argument to `fit`. All estimators support bootstrap intervals, which involves refitting the same estimator repeatedly on subsamples of the original data, but `LinearDML` also supports a more efficient approach which can be achieved by leaving inference set to the default of `'auto'` or by explicitly passing `inference='statsmodels'`.
```
est.fit(Y, T, X=X, W=W)
te_pred=est.effect(X_test)
te_pred_interval = est.const_marginal_effect_interval(X_test, alpha=0.02)
# Plot orange juice elasticity as a function of income
plt.figure(figsize=(10,6))
plt.plot(X_test.flatten(), te_pred, label="OJ Elasticity")
plt.fill_between(X_test.flatten(), te_pred_interval[0], te_pred_interval[1], alpha=.5, label="1-99% CI")
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.title("Orange Juice Elasticity vs Income")
plt.legend()
plt.show()
```
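To illustrate the difference between the two interval styles mentioned above, here is a toy numpy sketch contrasting a bootstrap percentile interval (re-estimating on resamples) with a normal-approximation interval, for a simple sample mean rather than econml's estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=1.0, size=400)

# Bootstrap percentile interval: re-estimate the mean on resampled data
boots = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                  for _ in range(2000)])
boot_lo, boot_hi = np.percentile(boots, [1, 99])

# Normal-approximation interval from the estimator's standard error
se = sample.std(ddof=1) / np.sqrt(sample.size)
norm_lo, norm_hi = sample.mean() - 2.326 * se, sample.mean() + 2.326 * se
print((boot_lo, boot_hi), (norm_lo, norm_hi))
```

The bootstrap requires thousands of refits; the analytic interval needs only one fit, which is why the `'statsmodels'` inference for `LinearDML` is the more efficient option.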
## 5. Example Usage with Multiple Continuous Treatment, Multiple Outcome Observational Data
We use the same data, but in this case we want to fit the demand of multiple brands as a function of the price of each of them, i.e., fit the matrix of cross-price elasticities. This can be done by simply setting $Y$ to be the vector of demands and $T$ to be the vector of prices. We then obtain the matrix of cross-price elasticities.
\begin{align}
Y=[Logmove_{tropicana},Logmove_{minute.maid},Logmove_{dominicks}] \\
T=[Logprice_{tropicana},Logprice_{minute.maid},Logprice_{dominicks}] \\
\end{align}
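To see why stacking demands and prices yields a matrix of effects, here is a toy numpy sketch that recovers an assumed 3×3 elasticity matrix by per-equation least squares (synthetic coefficients, not the orange juice data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed toy elasticity matrix: own-price effects (diagonal) negative,
# cross-price effects (off-diagonal) positive
E = np.array([[-3.0, 0.5, 0.3],
              [0.4, -2.5, 0.6],
              [0.2, 0.7, -2.0]])
n = 2000
log_prices = rng.uniform(0.0, 1.0, size=(n, 3))
log_moves = log_prices @ E.T + rng.normal(scale=0.05, size=(n, 3))

# One least-squares fit per brand's demand equation recovers each row of E
B, *_ = np.linalg.lstsq(log_prices, log_moves, rcond=None)
E_hat = B.T
print(np.round(E_hat, 1))
```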
### 5.1. Data
```
# Import the data
oj_data = pd.read_csv(file_name)
# Prepare data
oj_data['price'] = np.log(oj_data["price"])
# Transform dataset.
# For each store in each week, get a vector of logmove and a vector of logprice for each brand.
# Other features are store specific, will be the same for all brands.
groupbylist = ["store", "week", "AGE60", "EDUC", "ETHNIC", "INCOME",
"HHLARGE", "WORKWOM", "HVAL150",
"SSTRDIST", "SSTRVOL", "CPDIST5", "CPWVOL5"]
oj_data1 = pd.pivot_table(oj_data,index=groupbylist,
columns=oj_data.groupby(groupbylist).cumcount(),
values=['logmove', 'price'],
aggfunc='sum').reset_index()
oj_data1.columns = oj_data1.columns.map('{0[0]}{0[1]}'.format)
oj_data1 = oj_data1.rename(index=str,
columns={"logmove0": "logmove_T",
"logmove1": "logmove_M",
"logmove2":"logmove_D",
"price0":"price_T",
"price1":"price_M",
"price2":"price_D"})
# Define Y,T,X,W
Y = oj_data1[['logmove_T', "logmove_M", "logmove_D"]].values
T = oj_data1[['price_T', "price_M", "price_D"]].values
scaler = StandardScaler()
W=scaler.fit_transform(oj_data1[[c for c in groupbylist if c not in ['week', 'store', 'INCOME']]].values)
X=scaler.fit_transform(oj_data1[['INCOME']].values)
## Generate test data
min_income = -1
max_income = 1
delta = (1 - (-1)) / 100
X_test = np.arange(min_income, max_income + delta - 0.001, delta).reshape(-1, 1)
```
### 5.2. Train Estimator
```
est = LinearDML(model_y=MultiTaskElasticNetCV(cv=3, tol=1, selection='random'),
model_t=MultiTaskElasticNetCV(cv=3),
featurizer=PolynomialFeatures(1),
linear_first_stages=True)
est.fit(Y, T, X=X, W=W)
te_pred = est.const_marginal_effect(X_test)
```
### 5.3. Performance Visualization
```
# Plot orange juice elasticity as a function of income
plt.figure(figsize=(18, 10))
dic={0:"Tropicana", 1:"Minute.maid", 2:"Dominicks"}
for i in range(3):
for j in range(3):
plt.subplot(3, 3, 3 * i + j + 1)
plt.plot(X_test, te_pred[:, i, j],
color="C{}".format(str(3 * i + j)),
label="OJ Elasticity {} to {}".format(dic[j], dic[i]))
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.legend()
plt.suptitle("Orange Juice Elasticity vs Income", fontsize=16)
plt.show()
```
**Findings**: Looking at the diagonal of the matrix, the treatment effect of a brand's own price on its sales is always negative across all brands, but people with higher income are less price-sensitive. By contrast, on the off-diagonal entries, the treatment effect of competitors' prices on a brand's sales is always positive, and income affects this effect differently for different competitors. In addition, compared to the previous plot, the negative own-price effects for each brand are all larger in magnitude than the effect estimated with all brands pooled together, which means we would have underestimated the effect of price changes on demand.
### 5.4. Confidence Intervals
```
est.fit(Y, T, X=X, W=W)
te_pred = est.const_marginal_effect(X_test)
te_pred_interval = est.const_marginal_effect_interval(X_test, alpha=0.02)
# Plot orange juice elasticity as a function of income
plt.figure(figsize=(18, 10))
dic={0:"Tropicana", 1:"Minute.maid", 2:"Dominicks"}
for i in range(3):
for j in range(3):
plt.subplot(3, 3, 3 * i + j + 1)
plt.plot(X_test, te_pred[:, i, j],
color="C{}".format(str(3 * i + j)),
label="OJ Elasticity {} to {}".format(dic[j], dic[i]))
plt.fill_between(X_test.flatten(), te_pred_interval[0][:, i, j],te_pred_interval[1][:, i,j], color="C{}".format(str(3*i+j)),alpha=.5, label="1-99% CI")
plt.xlabel(r'Scale(Income)')
plt.ylabel('Orange Juice Elasticity')
plt.legend()
plt.suptitle("Orange Juice Elasticity vs Income",fontsize=16)
plt.show()
```
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
from torch.utils.data import Dataset, DataLoader
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark= False
m = 2000 # 5, 50, 100, 500 , 1000 , 2000
desired_num = 10000
tr_i = 0
tr_j = desired_num
tr_k = desired_num+1000
tr_i, tr_j, tr_k
```
# Generate dataset
```
np.random.seed(12)
y = np.random.randint(0,3,500)
idx= []
for i in range(3):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((500,))
np.random.seed(12)
x[idx[0]] = np.random.uniform(low =-1,high =0,size= sum(idx[0]))
x[idx[1]] = np.random.uniform(low =0,high =1,size= sum(idx[1]))
x[idx[2]] = np.random.uniform(low =2,high =3,size= sum(idx[2]))
x[idx[0]][0], x[idx[2]][5]
print(x.shape,y.shape)
idx= []
for i in range(3):
idx.append(y==i)
for i in range(3):
    y0 = np.zeros(x[idx[i]].shape[0])  # plotting baseline; keep the label array y intact
    plt.scatter(x[idx[i]], y0, label="class_"+str(i))
plt.legend()
bg_idx = [ np.where(idx[2] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(3):
    y0 = np.zeros(x[idx[i]].shape[0])  # plotting baseline; keep the label array y intact
    plt.scatter(x[idx[i]], y0, label="class_"+str(i))
plt.legend()
foreground_classes = {'class_0','class_1' }
background_classes = {'class_2'}
fg_class = np.random.randint(0,2)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
a.shape
np.reshape(a,(m,1))
from tqdm import tqdm
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in tqdm(range(desired_num+1000)):
np.random.seed(j)
fg_class = np.random.randint(0,2)
fg_idx = 0
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(np.reshape(a,(m,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
# if(j%(int(desired_num/4))==0):
# print("25% done")
mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
mosaic_list_of_images.shape
np.sum(mosaic_list_of_images[0:tr_j], axis=1).shape, np.sum(mosaic_list_of_images[0:tr_j], axis=1)
train = np.sum(mosaic_list_of_images[0:tr_j], axis=1)/m
train[0], train.shape
test = mosaic_list_of_images[tr_j:tr_k,0]/m
test[0], test.shape
avg_image_dataset_1 , labels_1, fg_index_1 = np.sum(mosaic_list_of_images[0:tr_j], axis=1)/m , mosaic_label[0:tr_j], fore_idx[0:tr_j]
test_dataset , labels , fg_index = mosaic_list_of_images[tr_j:tr_k,0]/m , mosaic_label[tr_j : tr_k], fore_idx[tr_j : tr_k]
avg_image_dataset_1[0], test_dataset[0], avg_image_dataset_1.shape, test_dataset.shape
avg_image_dataset_1 = avg_image_dataset_1.reshape(desired_num,1)
test_dataset = test_dataset.reshape(1000, 1)
avg_image_dataset_1.shape, test_dataset.shape
# avg_image_dataset_1 = torch.stack(avg_image_dataset_1, axis = 0)
# mean = torch.mean(avg_image_dataset_1, keepdims= True, axis = 0)
# std = torch.std(avg_image_dataset_1, keepdims= True, axis = 0)
# avg_image_dataset_1 = (avg_image_dataset_1 - mean) / std
# print(torch.mean(avg_image_dataset_1, keepdims= True, axis = 0))
# print(torch.std(avg_image_dataset_1, keepdims= True, axis = 0))
# print("=="*40)
# test_dataset = torch.stack(test_dataset, axis = 0)
# mean = torch.mean(test_dataset, keepdims= True, axis = 0)
# std = torch.std(test_dataset, keepdims= True, axis = 0)
# test_dataset = (test_dataset - mean) / std
# print(torch.mean(test_dataset, keepdims= True, axis = 0))
# print(torch.std(test_dataset, keepdims= True, axis = 0))
# print("=="*40)
x1 = (avg_image_dataset_1)
y1 = np.array(labels_1)
# idx1 = []
# for i in range(3):
# idx1.append(y1 == i)
# for i in range(3):
# z = np.zeros(x1[idx1[i]].shape[0])
# plt.scatter(x1[idx1[i]],z,label="class_"+str(i))
# plt.legend()
plt.scatter(x1[y1==0], y1[y1==0]*0, label='class 0')
plt.scatter(x1[y1==1], y1[y1==1]*0, label='class 1')
# plt.scatter(x1[y1==2], y1[y1==2]*0, label='class 2')
plt.legend()
plt.title("dataset1 CIN with alpha = 1/"+str(m))
x1 = (avg_image_dataset_1)
y1 = np.array(labels_1)
idx_1 = y1==0
idx_2 = np.where(idx_1==True)[0]
idx_3 = np.where(idx_1==False)[0]
color = ['#1F77B4','orange', 'brown']
true_point = len(idx_2)
plt.scatter(x1[idx_2[:25]], y1[idx_2[:25]]*0, label='class 0', c= color[0], marker='o')
plt.scatter(x1[idx_3[:25]], y1[idx_3[:25]]*0, label='class 1', c= color[1], marker='o')
plt.scatter(x1[idx_3[50:75]], y1[idx_3[50:75]]*0, c= color[1], marker='o')
plt.scatter(x1[idx_2[50:75]], y1[idx_2[50:75]]*0, c= color[0], marker='o')
plt.legend()
plt.xticks( fontsize=14, fontweight = 'bold')
plt.yticks( fontsize=14, fontweight = 'bold')
plt.xlabel("X", fontsize=14, fontweight = 'bold')
# plt.savefig(fp_cin+"ds1_alpha_04.png", bbox_inches="tight")
# plt.savefig(fp_cin+"ds1_alpha_04.pdf", bbox_inches="tight")
avg_image_dataset_1[0:10]
x1 = (test_dataset)
y1 = np.array(labels)
# idx1 = []
# for i in range(3):
# idx1.append(y1 == i)
# for i in range(3):
# z = np.zeros(x1[idx1[i]].shape[0])
# plt.scatter(x1[idx1[i]],z,label="class_"+str(i))
# plt.legend()
plt.scatter(x1[y1==0], y1[y1==0]*0, label='class 0')
plt.scatter(x1[y1==1], y1[y1==1]*0, label='class 1')
# plt.scatter(x1[y1==2], y1[y1==2]*0, label='class 2')
plt.legend()
plt.title("test dataset1 ")
test_dataset[0:10]
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
avg_image_dataset_1[0].shape, avg_image_dataset_1[0]
batch = 200
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(test_dataset, labels )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(1,2)
# self.linear2 = nn.Linear(50,10)
# self.linear3 = nn.Linear(10,3)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
def forward(self,x):
# x = F.relu(self.linear1(x))
# x = F.relu(self.linear2(x))
x = (self.linear1(x))
return x
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
outputs = model(inputs)
loss = criter(outputs, labels)
r_loss += loss.item()
return r_loss/(i+1)
def test_all(number, testloader,net):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to(device), labels.to(device)
out.append(labels.cpu().numpy())
outputs= net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
pred = np.concatenate(pred, axis = 0)
out = np.concatenate(out, axis = 0)
print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred) )
print("correct: ", correct, "total ", total)
print('Accuracy of the network on the %d test dataset %d: %.2f %%' % (total, number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list):
print("--"*40)
print("training on data set ", ds_number)
torch.manual_seed(12)
net = Whatnet().double()
net = net.to(device)
criterion_net = nn.CrossEntropyLoss()
optimizer_net = optim.Adam(net.parameters(), lr=0.0005 ) #, momentum=0.9)
acti = []
loss_curi = []
epochs = 1500
running_loss = calculate_loss(trainloader,net,criterion_net)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
# zero the parameter gradients
optimizer_net.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion_net(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_net.step()
running_loss = calculate_loss(trainloader,net,criterion_net)
if(epoch%200 == 0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to(device), labels.to(device)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %.2f %%' % (total, 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
return loss_curi, net
train_loss_all=[]
testloader_list= [ testloader_1 ]
loss, net = train_all(trainloader_1, 1, testloader_list)
train_loss_all.append(loss)
net.linear1.weight, net.linear1.bias
%matplotlib inline
for i,j in enumerate(train_loss_all):
plt.plot(j,label ="dataset "+str(i+1))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
```
from IPython.display import Image
from IPython.core.display import HTML
```
# Welcome to the course Introduction to Data Science
Information about completing the course can be found on <a href="https://jodatut.github.io/2019/">GitHub</a>
The course is lectured by <a href="https://www.linkedin.com/in/arhosuominen/">Arho Suominen</a>
The course assistant is <a href="https://tutcris.tut.fi/portal/fi/persons/erjon-skenderi(133f72d8-69a1-4bfb-b48b-35ba71c3322e).html">Erjon Skenderi</a>
## Expectations for the course - what on earth is introduction to data science!
"Data Scientist" can be translated into Finnish as "tietojen tutkija" - or can it? What is meant by data science, and what expectations do students have for this course?

Data science is built on four broad areas:
- business expertise,
- programming and database skills,
- statistical analysis, and
- data-driven communication and visualization.
Students are expected to have basic skills in these areas. The course aims to go deeper into these topics from a data science perspective, and to show students where in TTY's course offering they can acquire the skills that make up data science expertise.
## Completing the course
Instructions for completing the course can be found on the <a href="https://jodatut.github.io/2018/suorittaminen/">course homepage</a>
## Course exercises and assignments
Instructions for the course assignment can be found on the <a href="https://jodatut.github.io/2018/harjoitustyo/">course homepage</a>
# What is data science?
## Definition
The role of a data scientist in an organization is many-sided. The work has been described as multidisciplinary, combining at least computing, mathematical, and business skills. [Harvard Business Review article:
Data Scientist: The Sexiest Job of the 21st Century](https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century)
"...he started to see possibilities. He began forming theories, testing hunches, and finding patterns that allowed him to predict whose networks a given profile would land in. He could imagine that new features capitalizing on the heuristics he was developing might provide value to users."
Originally, the term [datalogy](https://dl.acm.org/citation.cfm?id=366510) was used for data science. Interesting reading includes, for example, Sveinsdottir and Frøkjær's [text](https://link.springer.com/article/10.1007/BF01941128) on the Copenhagen tradition of computer science education created by Naur, who coined the term datalogy.
To me, statistical expertise felt, partly mistakenly, central to data science. Some have even gone so far as to consider statistics largely [the same](http://www2.isye.gatech.edu/~jeffwu/presentations/datascience.pdf) as data science. It is clear, however, that this very narrow view does not describe data science adequately; rather, statistics should be seen as one part of a data scientist's [skill set](https://arxiv.org/ftp/arxiv/papers/1410/1410.3127.pdf).
Central to data science is the ability to wrangle large datasets and to make use of "programming". One example is the shift away from statistical packages such as [R](https://fi.wikipedia.org/wiki/R_(ohjelmointikieli)) toward programming languages such as [Python](https://fi.wikipedia.org/wiki/Python_(ohjelmointikieli)). Both are in practice "programming languages", but R focuses specifically on statistical computing and producing graphics. So what is the change that has taken place as Python grows in popularity, partly at R's expense?
## Models and concept maps of the topic
The CRISP-DM model provides an open-standard process description of the data science process

The process model also helps in understanding what is expected of a [good data scientist](https://www.schoolofdatascience.amsterdam/news/skills-need-become-modern-data-scientist/). From there it is a short step to exploring the [data science metro map](http://nirvacana.com/thoughts/2013/07/08/becoming-a-data-scientist/).
A fully relevant question, in my view, is whether it is realistic for one person to master such a broad field, and what kinds of emphases one can make within the metro map so that the expertise remains relevant.
# Prerequisites of data science
## Data
Globally, we have an unprecedented amount of data at our disposal. We estimate that by 2025 there will be [163 zettabytes](https://www.forbes.com/sites/andrewcave/2017/04/13/what-will-we-do-when-the-worlds-data-hits-163-zettabytes-in-2025/) of data, or, put another way, we create an [incomprehensible amount of data](https://www.domo.com/learn/data-never-sleeps-5?aid=ogsm072517_1&sf100871281=1) every minute. Is it realistic that we even understand whether this amount of data is useful, or what can be achieved with it?
Data has been at the center of the second wave of artificial intelligence, which focuses specifically on statistical learning. Current AI activity concentrates on machine learning and especially deep neural networks. This is no surprise, since the most significant success stories of recent years are based precisely on these technologies. The enormous amounts of data available, good development tools, and annually growing computing power accelerate this development.
More data is also publicly available than ever before. A good example is [Kaggle](www.kaggle.com), which lets you download datasets of interest for various purposes
```
import pandas as pd
df = pd.read_csv("Mall_Customers.csv")
df.head()
import numpy as np
df.pivot_table(index=["Gender"], aggfunc=np.mean)
```
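If the Mall_Customers.csv file is not at hand, the same per-group mean can be tried on a small inline frame (the column names here are assumed to mirror the Kaggle dataset):

```python
import pandas as pd

# A tiny stand-in for Mall_Customers.csv: pivot_table averages each
# numeric column within each Gender group
df = pd.DataFrame({
    "Gender": ["Male", "Female", "Female", "Male"],
    "Age": [19, 21, 20, 24],
    "Annual Income (k$)": [15, 15, 16, 17],
})
means = df.pivot_table(index=["Gender"], aggfunc="mean")
print(means)
```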
## Computing power
The growth of computing power is clearly one of the most significant drivers of the development of data science. Everything is, one way or another, connected to Moore's law, i.e., our ability to perform computations.

In addition to the growth in raw computing power, the technical solutions for scaling the computing power of a single machine or a cluster have developed significantly. These make even a single machine a remarkably powerful unit of work
```
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/tQBovBvSDvA?start=1808" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
```
## Analysis environment
Different types of computing environments can be roughly divided into six categories. The options range from a personal machine all the way to clusters or cloud solutions.

```
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/4paAY2kseCE" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
```
## Tools
The number of available tools has grown enormously. Previously, mainly statistical computing environments such as [R](https://www.r-project.org/) were used; these are now being complemented or replaced by Python environments. Within Python, key tools include, for example, [Pandas](https://pandas.pydata.org/), [Scikit-learn](https://scikit-learn.org/stable/), and visualization tools such as [Holoviews](http://holoviews.org/)

```
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/k27MJJLJNT4?start=1808" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
```
```
#cell-width control
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
```
# Imports
```
#packages
import numpy
import tensorflow as tf
from tensorflow.core.example import example_pb2
#utils
import os
import random
import pickle
import struct
import time
from generators import *
#keras
import keras
from keras.preprocessing import text, sequence
from keras.preprocessing.text import Tokenizer
from keras.models import Model, Sequential
from keras.models import load_model
from keras.layers import Dense, Dropout, Activation, Concatenate, Dot, Embedding, LSTM, Conv1D, MaxPooling1D, Input, Lambda
#callbacks
from keras.callbacks import TensorBoard, ModelCheckpoint, Callback
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['text.usetex'] = True
mpl.rcParams['text.latex.preamble'] = [r'\usepackage{amsmath}']
SMALL_SIZE = 8
MEDIUM_SIZE = 10
BIGGER_SIZE = 12
plt.rc('xtick', labelsize=BIGGER_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=BIGGER_SIZE) # fontsize of the tick labels
```
# CPU usage
```
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```
# Global parameters
```
# Embedding
max_features = 400000
maxlen_text = 400
maxlen_summ = 80
embedding_size = 100 #128
# Convolution
kernel_size = 5
filters = 64
pool_size = 4
# LSTM
lstm_output_size = 70
# Training
batch_size = 32
epochs = 5
#bookkeeping
thresholds = [0.1,0.3,0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99]
stats = {}
use_multiprocessing = True
workers = 4
shuffle = False
#model_path_prefix = '/home/donald/documents/MT/implementation-and-experiments/'
model_path_prefix = '/home/oala/Documents/MT/implementation-experiments/'
#data_path_prefix = '/mnt/disks/500gb/experimental-data-mini/experimental-data-mini/'
data_path_prefix = '/media/oala/4TB/experimental-data/'
ow_on_pseudorandom = ['ow_on_pseudorandom',model_path_prefix + 'exciting-crazy/experiments/ow-on-pseudorandom/1/best.h5']
ow_on_generator = ['ow_on_generator',model_path_prefix + 'exciting-crazy/experiments/ow-on-generator/1/best.h5']
ow_on_uniform = ['ow_on_uniform',model_path_prefix + 'exciting-crazy/experiments/ow-on-uniform/1/best.h5']
tw_on_pseudorandom = ['tw_on_pseudorandom',model_path_prefix + 'exciting-crazy/experiments/tw-on-pseudorandom/1/best.h5']
tw_on_generator = ['tw_on_generator',model_path_prefix + 'exciting-crazy/experiments/tw-on-generator/1/best.h5']
tw_on_uniform = ['tw_on_uniform',model_path_prefix + 'exciting-crazy/experiments/tw-on-uniform/1/best.h5']
model_paths = [tw_on_pseudorandom, tw_on_generator, tw_on_uniform, ow_on_pseudorandom, ow_on_generator, ow_on_uniform]
#get preprocessing data
#processing_dir = '/mnt/disks/500gb/stats-and-meta-data/400000/'
processing_dir = '/media/oala/4TB/experimental-data/stats-and-meta-data/400000/'
with open(processing_dir+'tokenizer.pickle', 'rb') as handle: tokenizer = pickle.load(handle)
embedding_matrix = numpy.load(processing_dir+'embedding_matrix.npy')
#stats
maxi = numpy.load(processing_dir+'training-stats-all/maxi.npy')
mini = numpy.load(processing_dir+'training-stats-all/mini.npy')
sample_info = (numpy.random.uniform, mini,maxi)
#pre-init dict
for model_path in model_paths:
model_name = model_path[0]
stats[model_name] = {}
for threshold in thresholds:
stats[model_name][threshold] = {}
for threshold in thresholds:
print('threshold:\t'+str(threshold))
for model_path in model_paths:
model_name = model_path[0]
model = load_model(model_path[1])
print('###'+model_name+'###')
if model_name[0:2] == 'tw':
#eval on clean test
data_dir = data_path_prefix + 'evaluation-data/test-onlyclean/only-clean/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': shuffle,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':sample_info}
#generators
test_generator = ContAllGenerator(partition['test'], labels, **params)
# Train model on dataset
#out = model.evaluate_generator(generator=test_generator,
#use_multiprocessing=use_multiprocessing,
#workers=workers)
out = [999,0]
preds = model.predict_generator(generator=test_generator,
use_multiprocessing=use_multiprocessing,
workers=workers)
preds[preds < threshold] = 0
preds[preds != 0] = 1
out[1] = numpy.mean(preds)
#stats[model_name] = {threshold:{'on_clean':out[1]}}
stats[model_name][threshold]['on_clean'] = out[1]
print('on 100% clean\t'+'loss:\t'+str(out[0])+'\tacc:\t'+str(out[1]))
#eval on pseudorandom-noise
data_dir = data_path_prefix + 'evaluation-data/test-onlynoise/pseudorandom-dist/only-noise/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': shuffle,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':sample_info}
#generators
test_generator = ContAllGenerator(partition['test'], labels, **params)
# Train model on dataset
#out = model.evaluate_generator(generator=test_generator,
#use_multiprocessing=use_multiprocessing,
#workers=workers)
out = [999,0]
preds = model.predict_generator(generator=test_generator,
use_multiprocessing=use_multiprocessing,
workers=workers)
preds[preds < threshold] = 0
preds[preds != 0] = 1
out[1] = 1 - numpy.mean(preds)
stats[model_name][threshold]['on_pseudo'] = out[1]
print('on 100% pseudo\t'+'loss:\t'+str(out[0])+'\tacc:\t'+str(out[1]))
#eval on generator noise
data_dir = data_path_prefix + 'evaluation-data/test-onlynoise/generator-dist/only-noise/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': shuffle,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':sample_info}
#generators
test_generator = ContAllGenerator(partition['test'], labels, **params)
# Train model on dataset
#out = model.evaluate_generator(generator=test_generator,
#use_multiprocessing=use_multiprocessing,
#workers=workers)
out = [999,0]
preds = model.predict_generator(generator=test_generator,
use_multiprocessing=use_multiprocessing,
workers=workers)
preds[preds < threshold] = 0
preds[preds != 0] = 1
out[1] = 1 - numpy.mean(preds)
stats[model_name][threshold]['on_generator'] = out[1]
print('on 100% gen\t'+'loss:\t'+str(out[0])+'\tacc:\t'+str(out[1]))
#eval on uniform noise
data_dir = data_path_prefix + 'evaluation-data/test-onlynoise/pseudorandom-dist/only-noise/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': shuffle,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':sample_info}
#generators
test_generator = TwoQuartGenerator(partition['test'], labels, **params)
# Train model on dataset
#out = model.evaluate_generator(generator=test_generator,
#use_multiprocessing=use_multiprocessing,
#workers=workers)
out = [999,0]
preds = model.predict_generator(generator=test_generator,
use_multiprocessing=use_multiprocessing,
workers=workers)
preds[preds < threshold] = 0
preds[preds != 0] = 1
out[1] = 1 - numpy.mean(preds)
stats[model_name][threshold]['on_uniform'] = out[1]
print('on 100% uni\t'+'loss:\t'+str(out[0])+'\tacc:\t'+str(out[1]))
elif model_name[0:2] == 'ow':
#eval on clean test
data_dir = data_path_prefix + 'evaluation-data/test-onlyclean/only-clean/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': shuffle,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':sample_info}
#generators
test_generator = ContAllGenerator_ow(partition['test'], labels, **params)
# Train model on dataset
#out = model.evaluate_generator(generator=test_generator,
#use_multiprocessing=use_multiprocessing,
#workers=workers)
out = [999,0]
preds = model.predict_generator(generator=test_generator,
use_multiprocessing=use_multiprocessing,
workers=workers)
preds[preds < threshold] = 0
preds[preds != 0] = 1
out[1] = numpy.mean(preds)
#stats[model_name] = {threshold:{'on_clean':out[1]}}
stats[model_name][threshold]['on_clean'] = out[1]
print('on 100% clean\t'+'loss:\t'+str(out[0])+'\tacc:\t'+str(out[1]))
#eval on pseudorandom-noise
data_dir = data_path_prefix + 'evaluation-data/test-onlynoise/pseudorandom-dist/only-noise/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': shuffle,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':sample_info}
#generators
test_generator = ContAllGenerator_ow(partition['test'], labels, **params)
# Train model on dataset
#out = model.evaluate_generator(generator=test_generator,
#use_multiprocessing=use_multiprocessing,
#workers=workers)
out = [999,0]
preds = model.predict_generator(generator=test_generator,
use_multiprocessing=use_multiprocessing,
workers=workers)
preds[preds < threshold] = 0
preds[preds != 0] = 1
out[1] = 1 - numpy.mean(preds)
stats[model_name][threshold]['on_pseudo'] = out[1]
print('on 100% pseudo\t'+'loss:\t'+str(out[0])+'\tacc:\t'+str(out[1]))
#eval on generator noise
data_dir = data_path_prefix + 'evaluation-data/test-onlynoise/generator-dist/only-noise/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': shuffle,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':sample_info}
#generators
test_generator = ContAllGenerator_ow(partition['test'], labels, **params)
# Train model on dataset
#out = model.evaluate_generator(generator=test_generator,
#use_multiprocessing=use_multiprocessing,
#workers=workers)
out = [999,0]
preds = model.predict_generator(generator=test_generator,
use_multiprocessing=use_multiprocessing,
workers=workers)
preds[preds < threshold] = 0
preds[preds != 0] = 1
out[1] = 1 - numpy.mean(preds)
stats[model_name][threshold]['on_generator'] = out[1]
print('on 100% gen\t'+'loss:\t'+str(out[0])+'\tacc:\t'+str(out[1]))
#eval on uniform noise
data_dir = data_path_prefix + 'evaluation-data/test-onlynoise/pseudorandom-dist/only-noise/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': shuffle,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':sample_info}
#generators
test_generator = TwoQuartGenerator_ow(partition['test'], labels, **params)
# Train model on dataset
#out = model.evaluate_generator(generator=test_generator,
#use_multiprocessing=use_multiprocessing,
#workers=workers)
out = [999,0]
preds = model.predict_generator(generator=test_generator,
use_multiprocessing=use_multiprocessing,
workers=workers)
preds[preds < threshold] = 0
preds[preds != 0] = 1
out[1] = 1 - numpy.mean(preds)
stats[model_name][threshold]['on_uniform'] = out[1]
print('on 100% uni\t'+'loss:\t'+str(out[0])+'\tacc:\t'+str(out[1]))
else:
print('wrong model')
print(stats)
import pickle
with open('stats-2.pickle', 'wb') as handle: pickle.dump(stats, handle, protocol=pickle.HIGHEST_PROTOCOL)
import matplotlib.pyplot as plt
def get_f1(clean_data, pseudo_data, generator_data, uniform_data):
f1_data = []
rec_data = []
prec_data = []
for i in range(len(clean_data)):
TP = clean_data[i]
#TN = pseudo_data[i] + generator_data[i] + uniform_data[i]
FN = 1 - clean_data[i] + 1e-10
FP = 3 - pseudo_data[i] - generator_data[i] - uniform_data[i] + 1e-10
rec = TP/(TP+FN)
prec = TP/(TP+FP)
rec_data.append(rec)
prec_data.append(prec)
f1_data.append(2*rec*prec/(rec + prec + 1e-10))
return prec_data, rec_data, f1_data
def model_plot(stats, model_name, figsize):
#get data
clean_data = []
pseudo_data = []
generator_data = []
uniform_data = []
thresholds_data = []
for threshold in sorted(stats[model_name].keys()):
thresholds_data.append(threshold)
for data_type in stats[model_name][threshold].keys():
if data_type == 'on_clean':
clean_data.append(stats[model_name][threshold][data_type])
if data_type == 'on_pseudo':
pseudo_data.append(stats[model_name][threshold][data_type])
if data_type == 'on_generator':
generator_data.append(stats[model_name][threshold][data_type])
elif data_type == 'on_uniform':
uniform_data.append(stats[model_name][threshold][data_type])
#get prec, rec, f1
prec_data, rec_data, f1_data = get_f1(clean_data, pseudo_data, generator_data, uniform_data)
print(prec_data, rec_data, f1_data)
#plot
plt.figure(figsize = figsize)
plt.plot(thresholds_data, prec_data, label=r'$\mathrm{Precision~over~all~test~data}$',ls='--',marker='o', color = 'gold', alpha = 0.8)
plt.plot(thresholds_data, rec_data, label=r'$\mathrm{Recall~over~all~test~data}$', ls='--',marker='o', color = 'aqua', alpha = 0.8)
plt.plot(thresholds_data, f1_data, label=r'$\mathrm{F1~over~all~test~data}$',ls='-',marker='o', color = 'springgreen', alpha = 0.8)
plt.title(r'$\mathrm{'+model_name+'}$')
plt.ylim(ymin=-0.1, ymax=1.1)
plt.xlabel(r'$\mathrm{Threshold~value}$',fontsize=15)
plt.ylabel(r'$\mathrm{Precision,~Recall~and~F1}$',fontsize=15)
plt.legend(loc=8)
plt.show()
for model_path in model_paths:
model_name = model_path[0]
model_plot(stats, model_name, (8,4))
stats['ow_on_pseudorandom']
```
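The accuracy-to-precision/recall conversion in `get_f1` above can be sanity-checked in isolation. A minimal sketch with made-up accuracies (illustrative values, not results from these experiments), mirroring the same TP/FN/FP bookkeeping:

```python
# Mirror of the bookkeeping in get_f1, on hypothetical accuracies:
# clean accuracy c gives TP = c and FN = 1 - c; each of the three noise
# accuracies p, g, u contributes (1 - accuracy) false positives.
c, p, g, u = 0.9, 0.8, 0.7, 0.6
TP, FN, FP = c, 1.0 - c, 3.0 - p - g - u
prec = TP / (TP + FP)
rec = TP / (TP + FN)
f1 = 2 * prec * rec / (prec + rec)
print(round(prec, 3), round(rec, 3), round(f1, 3))
```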
# Scripts for the analysis in the paper
```
import sys
import os
import datetime
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.stats as stats
import seaborn as sns
from matplotlib.ticker import FuncFormatter
from seaborn.algorithms import bootstrap
from seaborn.utils import ci
nb_dir = os.path.split(os.getcwd())[0]
if nb_dir not in sys.path:
sys.path.append(nb_dir)
from tmp.utils import formatter
form = FuncFormatter(formatter)
plt.rc('font', family='serif')
plt.rc('text', usetex=True)
sns.set(style="whitegrid", font="serif")
color_mine = ["#F8414A", "#5676A1", "#FD878D", "#385A89", "#FFFACD", "#EFCC00"]
```
## Hateful users are power users
```
%matplotlib inline
import datetime
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.stats as stats
import seaborn as sns
from matplotlib.ticker import FuncFormatter
from seaborn.algorithms import bootstrap
from seaborn.utils import ci
from tmp.utils import formatter
form = FuncFormatter(formatter)
plt.rc('font', family='serif')
plt.rc('text', usetex=True)
sns.set(style="whitegrid", font="serif")
color_mine = ["#F8414A", "#5676A1", "#FD878D", "#385A89", "#FFFACD", "#EFCC00"]
df = pd.read_csv("../data/users_neighborhood_anon.csv")
df = df[df["created_at"].notnull()]
f, axzs = plt.subplots(1, 5, figsize=(10.8, 2))
axzs = [axzs]
boxprops = dict(linewidth=0.3)
whiskerprops = dict(linewidth=0.3)
capprops = dict(linewidth=0.3)
medianprops = dict(linewidth=1)
df["tweet_number"] = df["tweet number"] / (df["tweet number"] + df["retweet number"] + df["quote number"])
df["created_at"] = -(df["created_at"] - datetime.datetime(2017, 12, 29).timestamp())/86400
df["statuses_count"] = df["statuses_count"] / df["created_at"]
df["followers_count"] = df["followers_count"] / df["created_at"]
df["followees_count"] = df["followees_count"] / df["created_at"]
attributes_all = [["statuses_count", "followers_count", "followees_count", "favorites_count", "time_diff"]]
titles_all = [["\#tweets/day", "\#followers/day", "\#followees/day", "\#favorites", "avg(interval)"]]
first = True
for axs, attributes, titles in zip(axzs, attributes_all, titles_all):
for axis, attribute, title in zip(axs, attributes, titles):
N = 6
men = [df[df.hate == "hateful"],
df[df.hate == "normal"],
df[df.hate_neigh],
df[df.normal_neigh],
df[df.is_63_2 == True],
df[df.is_63_2 == False]]
tmp = []
medians, medians_ci = [], []
averages, averages_ci = [], []
for category in men:
boots = bootstrap(category[attribute], func=np.nanmean, n_boot=1000)
ci_tmp = ci(boots)
average = (ci_tmp[0] + ci_tmp[1]) / 2
ci_average = (ci_tmp[1] - ci_tmp[0]) / 2
averages.append(average)
averages_ci.append(ci_average)
boots = bootstrap(category[attribute], func=np.nanmedian, n_boot=1000)
ci_tmp = ci(boots)
median = (ci_tmp[0] + ci_tmp[1]) / 2
ci_median = (ci_tmp[1] - ci_tmp[0]) / 2
medians.append(median)
medians_ci.append(ci_median)
tmp.append(category[attribute].values)
ind = np.array([0, 1, 2, 3, 4, 5])
width = .6
_, n_h = stats.ttest_ind(tmp[0], tmp[1], equal_var=False, nan_policy='omit')
_, nn_nh = stats.ttest_ind(tmp[2], tmp[3], equal_var=False, nan_policy='omit')
_, s_ns = stats.ttest_ind(tmp[4], tmp[5], equal_var=False, nan_policy='omit')
print(title)
print(n_h)
print(nn_nh)
print(s_ns)
rects = axis.bar(ind, averages, width, yerr=averages_ci, color=color_mine,
ecolor="#212823", edgecolor=["#4D1A17"]*6, linewidth=.3)
axis.yaxis.set_major_formatter(form)
axis.set_xticks([])
axis.set_title(title)
axis.set_ylabel("")
axis.set_xlabel("")
axis.axvline(1.5, ls='dashed', linewidth=0.3, color="#C0C0C0")
axis.axvline(3.5, ls='dashed', linewidth=0.3, color="#C0C0C0")
f.legend((rects[0], rects[1], rects[2], rects[3], rects[4], rects[5]),
('Hateful User', 'Normal User', 'Hateful Neigh.', 'Normal Neigh.', 'Suspended', 'Active'),
loc='upper center',
fancybox=True, shadow=True, ncol=6)
f.tight_layout(rect=[0, 0, 1, .95])
f.savefig("../imgs/activity.pdf")
```
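The error bars in these plots come from seaborn's `bootstrap`/`ci` pair. A minimal sketch of what that amounts to on synthetic data (assuming a 95% percentile interval, seaborn's default; these are not dataset values):

```python
import numpy as np

# Percentile bootstrap CI of the mean: resample with replacement, take the
# mean of each resample, then read off the 2.5th/97.5th percentiles.
rng = np.random.RandomState(0)
data = rng.normal(loc=5.0, scale=2.0, size=500)
boots = np.array([np.nanmean(rng.choice(data, size=data.size, replace=True))
                  for _ in range(1000)])
lo, hi = np.percentile(boots, [2.5, 97.5])
center, half_width = (lo + hi) / 2, (hi - lo) / 2  # as in the cells above
print(center, half_width)
```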
## Hateful users have newer accounts
```
df = pd.read_csv("../data/users_neighborhood_anon.csv")
df = df[df["created_at"].notnull()]
men = [df[df.hate == "hateful"],
df[df.hate == "normal"],
df[df.hate_neigh],
df[df.normal_neigh],
df[df.is_63_2 == True],
df[df.is_63_2 == False]]
tmp = []
for category in men:
tmp.append(category["created_at"].values)
f, axs = plt.subplots(1, 1, figsize=(5.4, 3))
sns.violinplot(ax=axs, data=tmp, palette=color_mine, orient="h", linewidth=1)
axs.set_ylabel("")
axs.set_xlabel("")
_, n_h = stats.ttest_ind(men[0]["created_at"].values, men[1]["created_at"].values, equal_var=False)
_, nn_nh = stats.ttest_ind(men[2]["created_at"].values, men[3]["created_at"].values, equal_var=False)
_, s_ns = stats.ttest_ind(men[4]["created_at"].values, men[5]["created_at"].values, equal_var=False)
print(n_h)
print(nn_nh)
print(s_ns)
x = df.created_at.values
x_ticks = np.arange(min(x), max(x)+1, 3.154e+7)
axs.set_xticks(np.arange(min(x), max(x)+1, 3.154e+7))
f.canvas.draw()
axs.set_title("Creation Date of Users")
labels = [datetime.datetime.fromtimestamp(item).strftime('%Y-%m') for item in x_ticks]
axs.set_xticklabels(labels, rotation=35)
axs.set_yticklabels(["", "", "", ""], rotation=20)
f.tight_layout()
f.savefig("../imgs/created_at.pdf")
```
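Each group comparison above relies on Welch's unequal-variance t-test via `scipy.stats.ttest_ind(..., equal_var=False)`. The statistic itself is simple; a stdlib-only sketch on synthetic groups (illustrative, not the paper's data):

```python
import math
import random
import statistics

# Welch's t statistic: difference of means over the combined standard error,
# with each group's variance scaled by its own sample size.
random.seed(42)
a = [random.gauss(0.0, 1.0) for _ in range(300)]
b = [random.gauss(0.5, 1.5) for _ in range(300)]
se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
t = (statistics.mean(a) - statistics.mean(b)) / se
print(t)
```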
## Betweenness Centrality
```
df = pd.read_csv("../data/users_neighborhood_anon.csv")
f, axzs = plt.subplots(2, 3, figsize=(5.4, 3))
boxprops = dict(linewidth=0.3)
whiskerprops = dict(linewidth=0.3)
capprops = dict(linewidth=0.3)
medianprops = dict(linewidth=1)
auxfs = [["median", "median", "median"],
["avg", "avg", "avg"]]
attributes_all = [["betweenness", "eigenvector", "out_degree"],
["betweenness", "eigenvector", "out_degree"]]
titles_all = [["median(betweenness)", "median(eigenvector)", "median(out degree)"],
["avg(betweenness)", "avg(eigenvector)", "avg(out degree)"]]
rects = None
first = True
for axs, attributes, titles, auxf in zip(axzs, attributes_all, titles_all, auxfs):
for axis, attribute, title, aux in zip(axs, attributes, titles, auxf):
N = 4
men = [df[df.hate == "hateful"],
df[df.hate == "normal"],
df[df.hate_neigh],
df[df.normal_neigh],
df[df.is_63_2 == True],
df[df.is_63_2 == False]]
tmp = []
medians, medians_ci = [], []
averages, averages_ci = [], []
for category in men:
boots = bootstrap(category[attribute], func=np.nanmean, n_boot=1000)
ci_tmp = ci(boots)
average = (ci_tmp[0] + ci_tmp[1]) / 2
ci_average = (ci_tmp[1] - ci_tmp[0]) / 2
averages.append(average)
averages_ci.append(ci_average)
boots = bootstrap(category[attribute], func=np.nanmedian, n_boot=1000)
ci_tmp = ci(boots)
median = (ci_tmp[0] + ci_tmp[1]) / 2
ci_median = (ci_tmp[1] - ci_tmp[0]) / 2
medians.append(median)
medians_ci.append(ci_median)
tmp.append(category[attribute].values)
ind = np.array([0, 1, 2, 3, 4, 5])
width = .6
if aux == "median":
rects = axis.bar(ind, medians, width, yerr=medians_ci, color=color_mine,
ecolor="#212823", edgecolor=["#4D1A17"]*6, linewidth=.3)
if aux == "avg":
rects = axis.bar(ind, averages, width, yerr=averages_ci, color=color_mine,
ecolor="#212823", edgecolor=["#4D1A17"]*6, linewidth=.3)
axis.yaxis.set_major_formatter(form)
axis.set_xticks([])
axis.set_title(title)
axis.set_ylabel("")
axis.set_xlabel("")
axis.axvline(1.5, ls='dashed', linewidth=0.3, color="#C0C0C0")
axis.axvline(4.5, ls='dashed', linewidth=0.3, color="#C0C0C0")
first = False
f.tight_layout(rect=[0, 0, 1, 1])
f.savefig("../imgs/betweenness.pdf")
```
## Empath Analysis
```
df = pd.read_csv("../data/users_neighborhood_anon.csv")
df["tweet_number"] = df["tweet number"] / (df["tweet number"] + df["retweet number"] + df["quote number"])
df["retweet_number"] = df["retweet number"] / (df["tweet number"] + df["retweet number"] + df["quote number"])
df["number_urls"] = df["number urls"] / (df["tweet number"] + df["retweet number"] + df["quote number"])
df["mentions"] = df["mentions"] / (df["tweet number"] + df["retweet number"] + df["quote number"])
df["number hashtags"] = df["number hashtags"] / (df["tweet number"] + df["retweet number"] + df["quote number"])
df["baddies"] = df["baddies"] / (df["tweet number"] + df["retweet number"] + df["quote number"])
f, axzs = plt.subplots(3, 7, figsize=(10.8, 4))
attributes_all = [["sadness_empath", "swearing_terms_empath", "independence_empath",
"positive_emotion_empath", "negative_emotion_empath", "government_empath", "love_empath"],
["ridicule_empath", "masculine_empath", "feminine_empath",
"violence_empath", "suffering_empath", "dispute_empath", "anger_empath"],
["envy_empath", "work_empath", "politics_empath",
"terrorism_empath", "shame_empath", "confusion_empath", "hate_empath"]]
titles_all = [["Sadness", "Swearing", "Independence", "Pos. Emotions", "Neg. Emotions", "Government", "Love"],
["Ridicule", "Masculine", "Feminine", "Violence", "Suffering", "Dispute", "Anger"],
["Envy", "Work", "Politics", "Terrorism", "Shame", "Confusion", "Hate"]]
for axs, attributes, titles in zip(axzs, attributes_all, titles_all):
for axis, attribute, title in zip(axs, attributes, titles):
N = 4
men = [df[df.hate == "hateful"],
df[df.hate == "normal"],
df[df.hate_neigh],
df[df.normal_neigh],
df[df.is_63_2 == True],
df[df.is_63_2 == False]]
tmp = []
medians, medians_ci = [], []
averages, averages_ci = [], []
for category in men:
boots = bootstrap(category[attribute], func=np.nanmean, n_boot=1000)
ci_tmp = ci(boots)
average = (ci_tmp[0] + ci_tmp[1]) / 2
ci_average = (ci_tmp[1] - ci_tmp[0]) / 2
averages.append(average)
averages_ci.append(ci_average)
boots = bootstrap(category[attribute], func=np.nanmedian, n_boot=1000)
ci_tmp = ci(boots)
median = (ci_tmp[0] + ci_tmp[1]) / 2
ci_median = (ci_tmp[1] - ci_tmp[0]) / 2
medians.append(median)
medians_ci.append(ci_median)
tmp.append(category[attribute].values)
ind = np.array([0, 1, 2, 3, 4, 5])
_, n_h = stats.ttest_ind(tmp[0], tmp[1], equal_var=False, nan_policy='omit')
_, nn_nh = stats.ttest_ind(tmp[2], tmp[3], equal_var=False, nan_policy='omit')
_, s_ns = stats.ttest_ind(tmp[4], tmp[5], equal_var=False, nan_policy='omit')
print(title)
print(n_h)
print(nn_nh)
print(s_ns)
rects = axis.bar(ind, averages, 0.6, yerr=averages_ci, color=color_mine,
ecolor="#212823", edgecolor=["#4D1A17"] * 6, linewidth=.3)
axis.yaxis.set_major_formatter(form)
axis.set_xticks([])
axis.set_title(title)
axis.set_ylabel("")
axis.set_xlabel("")
axis.axvline(1.5, ls='dashed', linewidth=0.3, color="#C0C0C0")
axis.axvline(3.5, ls='dashed', linewidth=0.3, color="#C0C0C0")
f.legend((rects[0], rects[1], rects[2], rects[3], rects[4], rects[5]),
('Hateful User', 'Normal User', 'Hateful Neigh.', 'Normal Neigh.', 'Suspended', 'Active'),
loc='upper center',
fancybox=True, shadow=True, ncol=6)
f.tight_layout(rect=[0, 0, 1, .95])
f.savefig("../imgs/lexical.pdf")
```
## Is Spam
```
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as stats
import seaborn as sns
from matplotlib.ticker import FuncFormatter
from tmp.utils import formatter
form = FuncFormatter(formatter)
plt.rc('font', family='serif')
plt.rc('text', usetex=True)
sns.set(style="whitegrid", font="serif")
color_mine = ["#F8414A", "#5676A1", "#FD878D", "#385A89", "#FFFACD", "#EFCC00"]
df = pd.read_csv("../data/users_neighborhood_anon.csv")
df["followers_followees"] = df["followers_count"] / (df["followees_count"])
df["number_urls"] = df["number urls"] / (df["tweet number"] + df["retweet number"] + df["quote number"])
df["number hashtags"] = df["number hashtags"] / (df["tweet number"] + df["retweet number"] + df["quote number"])
f, axzs = plt.subplots(1, 3, figsize=(5.4, 2))
boxprops = dict(linewidth=0.3)
whiskerprops = dict(linewidth=0.3)
capprops = dict(linewidth=0.3)
medianprops = dict(linewidth=1)
attributes_all = [
["followers_followees", "number_urls", "number hashtags"]]
titles_all = [
["\#followers/followees", "\#URLs/tweet", "hashtags/tweet"]]
for axs, attributes, titles in zip([axzs], attributes_all, titles_all):
for axis, attribute, title in zip(axs, attributes, titles):
men = [df[df.hate == "hateful"],
df[df.hate == "normal"],
df[df.hate_neigh],
df[df.normal_neigh],
df[df.is_63_2 == True],
df[df.is_63_2 == False]]
tmp = []
medians, medians_ci = [], []
averages, averages_ci = [], []
for category in men:
w_inf = category[attribute].values
non_inf = w_inf[w_inf < 1E308]
tmp.append(non_inf)
_, n_h = stats.ttest_ind(tmp[0], tmp[1], equal_var=False, nan_policy='omit')
_, nn_nh = stats.ttest_ind(tmp[2], tmp[3], equal_var=False, nan_policy='omit')
_, ns_ns2 = stats.ttest_ind(tmp[4], tmp[5], equal_var=False, nan_policy='omit')
print(title)
print(n_h)
print(nn_nh)
print(ns_ns2)
rects = sns.boxplot(data=tmp, palette=color_mine, showfliers=False, ax=axis, orient="v", width=0.8,
boxprops=boxprops, whiskerprops=whiskerprops, capprops=capprops, medianprops=medianprops)
axis.yaxis.set_major_formatter(form)
axis.set_xticks([])
axis.set_title(title)
axis.set_ylabel("")
axis.set_xlabel("")
axis.axvline(1.5, ls='dashed', linewidth=0.3, color="#C0C0C0")
axis.axvline(3.5, ls='dashed', linewidth=0.3, color="#C0C0C0")
f.tight_layout(rect=[0, 0, 1, 1])
f.savefig("../imgs/spam.pdf")
```
## Sentiment
```
df = pd.read_csv("../data/users_neighborhood_anon.csv")
f, axzs = plt.subplots(1, 3, figsize=(5.4, 1.5))
axzs = [axzs]
boxprops = dict(linewidth=0.3)
whiskerprops = dict(linewidth=0.3)
capprops = dict(linewidth=0.3)
medianprops = dict(linewidth=1)
attributes_all = [["sentiment", "subjectivity", "baddies"]]
titles_all = [["sentiment", "subjectivity", "bad words"]]
rects = None
first = True
for axs, attributes, titles in zip(axzs, attributes_all, titles_all):
for axis, attribute, title in zip(axs, attributes, titles):
N = 4
men = [df[df.hate == "hateful"],
df[df.hate == "normal"],
df[df.hate_neigh],
df[df.normal_neigh],
df[df["is_63_2"] == True],
df[df["is_63_2"] == False]]
tmp = []
medians, medians_ci = [], []
averages, averages_ci = [], []
for category, color in zip(men, color_mine):
tmp.append(category[attribute].values)
sns.boxplot(data=tmp, palette=color_mine, showfliers=False, ax=axis, orient="v", width=0.8, linewidth=.5)
ind = np.array([0, 1, 2, 3])
_, n_h = stats.ttest_ind(tmp[0], tmp[1], equal_var=False, nan_policy='omit')
_, nn_nh = stats.ttest_ind(tmp[2], tmp[3], equal_var=False, nan_policy='omit')
_, ns_ns2 = stats.ttest_ind(tmp[4], tmp[5], equal_var=False, nan_policy='omit')
print(title)
print(n_h)
print(nn_nh)
print(ns_ns2)
axis.yaxis.set_major_formatter(form)
axis.set_xticks([])
axis.set_title(title)
axis.set_ylabel("")
axis.set_xlabel("")
axis.axvline(1.5, ls='dashed', linewidth=0.3, color="#C0C0C0")
axis.axvline(3.5, ls='dashed', linewidth=0.3, color="#C0C0C0")
axzs[0][0].set_ylim(-.15, .4)
axzs[0][1].set_ylim(.30, .70)
axzs[0][2].set_ylim(-20, 100)
f.tight_layout(rect=[0, 0, 1, 1])
f.savefig("../imgs/sentiment.pdf")
```
## Mixing
```
from networkx.algorithms.assortativity import attribute_mixing_dict
import networkx as nx
import pandas as pd
g = nx.read_graphml("../data/users_clean.graphml")
df = pd.read_csv("../data/users_neighborhood_anon.csv")
hate_dict = {}
susp_dict = {}
for idv, hate, susp in zip(df.user_id.values, df.hate.values, df.is_63_2.values):
hate_dict[str(idv)] = hate
susp_dict[str(idv)] = susp
nx.set_node_attributes(g, name="hate", values=hate_dict)
nx.set_node_attributes(g, name="susp", values=susp_dict)
mixing = attribute_mixing_dict(g, "hate")
print(mixing)
print(" hate -> hate ", mixing["hateful"]["hateful"] /
(mixing["hateful"]["other"] + mixing["hateful"]["normal"] + mixing["hateful"]["hateful"]) / (544 / 100386))
print(" hate -> normal ", mixing["hateful"]["normal"] /
(mixing["hateful"]["other"] + mixing["hateful"]["normal"] + mixing["hateful"]["hateful"]) / (4427 / 100386))
print("normal -> normal ", mixing["normal"]["normal"] /
(mixing["normal"]["other"] + mixing["normal"]["normal"] + mixing["normal"]["hateful"]) / (4427 / 100386))
print("normal -> hate ", mixing["normal"]["hateful"] /
(mixing["normal"]["other"] + mixing["normal"]["normal"] + mixing["normal"]["hateful"]) / (544 / 100386))
mixing = attribute_mixing_dict(g, "susp")
print(mixing)
print(" susp -> susp ", mixing[True][True] / (mixing[True][True] + mixing[True][False]) / (668 / 100386))
print(" susp -> active ", mixing[True][False] / (mixing[True][True] + mixing[True][False]) / (99718 / 100386))
print("active -> active ", mixing[False][False] / (mixing[False][True] + mixing[False][False]) / (99718 / 100386))
print("active -> susp ", mixing[False][True] / (mixing[False][True] + mixing[False][False]) / (668 / 100386))
del g
df = pd.read_csv("../data/users_anon.csv")
men = [df[df.hate == "hateful"],
df[df.hate == "normal"],
df[df.hate_neigh],
df[df.normal_neigh],
df[df.is_63_2 == True],
df[df.is_63_2 == False]]
for i in men:
print(len(i.values))
confusion = [len(df[(df["hate"] == "hateful") & (df["is_63"])].index),
len(df[(df["hate"] == "normal") & (df["is_63"])].index),
len(df[(df["hate"] == "other") & (df["is_63"])].index)]
print(confusion)
confusion_norm = [confusion[0] / len(df[df["hate"] == "hateful"]),
confusion[1] / len(df[df["hate"] == "normal"]),
confusion[2] / len(df[df["hate"] == "other"])
]
print(confusion_norm)
```
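The ratios printed in the mixing cell are lift scores: the fraction of a group's edges landing on a class, divided by that class's base rate in the graph. A toy example with assumed edge counts (only the 544/100386 hateful base rate comes from the cell above):

```python
# Lift of hateful->hateful edges over random mixing (toy edge counts).
edges_hate_to_hate = 30      # assumed, not from the dataset
edges_from_hateful = 1000    # assumed, not from the dataset
base_rate = 544 / 100386     # hateful users / all users, as in the cell above
lift = (edges_hate_to_hate / edges_from_hateful) / base_rate
print(round(lift, 2))
```

A lift well above 1 means the group links to that class far more often than the class's prevalence alone would predict.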
```
import tensorflow as tf
import numpy as np
import tictactoe as ttt
function_approx = ttt.DNN_Random_Walk(N_A=2)
N_S = 5
N_A = 2
N_episodes = 10
optimizer = tf.keras.optimizers.Adam()
random_walk_env = ttt.RandomWalkEnv()
function_approx = ttt.DNN_Random_Walk(N_A=N_A)
for _ in range(N_episodes):
S = random_walk_env.reset()
with tf.GradientTape() as tape:
S_a = np.zeros((1,N_S), dtype=np.float16)
S_a[0, S] = 1.0
S_tf = tf.Variable(S_a)
probs_tf = function_approx(S_tf)
action = np.random.choice(N_A, p=probs_tf.numpy()[0])
S_new, reward, done = random_walk_env.step(action)
prob_tf = tf.reshape(probs_tf[:,action],(-1,1))
performance = tf.math.log(prob_tf) * reward
loss_value = tf.reduce_sum(-performance, axis=-1)
gradients = tape.gradient(loss_value, function_approx.trainable_weights)
optimizer.apply_gradients(zip(gradients, function_approx.trainable_weights))
optimizer = tf.keras.optimizers.Adam()
random_walk_env = ttt.RandomWalkEnv()
function_approx = ttt.DNN_Random_Walk(N_A=N_A)
#@tf.function
def run_tf(N_episodes, N_A):
for _ in range(N_episodes):
S = random_walk_env.reset()
with tf.GradientTape() as tape:
S_a = np.zeros((1,N_S), dtype=np.float16)
S_a[0, S] = 1.0
S_tf = tf.Variable(S_a)
probs_tf = function_approx(S_tf)
action = np.random.choice(N_A, p=probs_tf.numpy()[0])
S_new, reward, done = random_walk_env.step(action)
prob_tf = tf.reshape(probs_tf[:,action],(-1,1))
performance = tf.math.log(prob_tf) * reward
loss_value = tf.reduce_sum(-performance, axis=-1)
gradients = tape.gradient(loss_value, function_approx.trainable_weights)
optimizer.apply_gradients(zip(gradients, function_approx.trainable_weights))
run_tf(N_episodes, N_A)
optimizer = tf.keras.optimizers.Adam()
random_walk_env = ttt.RandomWalkEnv()
function_approx = ttt.DNN_Random_Walk(N_A=N_A)
# tf.squeeze(tf.random.categorical(logits, 1), axis=-1)
@tf.function
def run_tf(N_episodes, N_A):
for _ in range(N_episodes):
S = random_walk_env.reset()
with tf.GradientTape() as tape:
S_a = np.zeros((1,N_S), dtype=np.float16)
S_a[0, S] = 1.0
S_tf = tf.Variable(S_a)
probs_tf = function_approx(S_tf)
# action = np.random.choice(N_A, p=probs_tf.numpy()[0])
# tf.random.categorical expects logits, not probabilities; taking the log
# makes the sampling match the np.random.choice line above
action = tf.squeeze(tf.random.categorical(tf.math.log(probs_tf), 1))
#S_new, reward, done = random_walk_env.step(action)
reward = 0.0
prob_tf = tf.reshape(probs_tf[:,action],(-1,1))
performance = tf.math.log(prob_tf) * reward
loss_value = tf.reduce_sum(-performance, axis=-1)
gradients = tape.gradient(loss_value, function_approx.trainable_weights)
optimizer.apply_gradients(zip(gradients, function_approx.trainable_weights))
run_tf(N_episodes, N_A)
optimizer = tf.keras.optimizers.Adam()
random_walk_env = ttt.RandomWalkEnv()
function_approx = ttt.DNN_Random_Walk(N_A=N_A)
@tf.function
def run_tf(N_episodes, N_A):
with tf.GradientTape() as tape:
S_tf = tf.zeros((2,2))
run_tf(N_episodes, N_A)
```
<a href="https://colab.research.google.com/github/kevincong95/cs231n-emotiw/blob/master/notebooks/2.4-tj-la-ak-kc-vl-FINAL-ensemble_fc_predictions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Video Sentiment Analysis in the Wild
Ensembling Notebook (with BOOST) | FC | CS231n
### Modalities Used
- Scene (ResNet - three LSTM heads) [model](https://storage.googleapis.com/cs231n-emotiw/models/scene-classifier-resnet-lstm-x3.h5)
- Pose [model](https://storage.googleapis.com/cs231n-emotiw/models/pose-classifier-64lstm-0.01reg.h5)
- Audio [model](https://storage.googleapis.com/cs231n-emotiw/models/openl3-cnn-lstm-tuned-lr.h5)
- Image Captioning [model](https://storage.googleapis.com/cs231n-emotiw/models/sentiment-transformer_sentiment-transformer_756.pth) and [model metadata](https://storage.googleapis.com/cs231n-emotiw/models/sentiment-transformer-16.metadata.bin)
Test Accuracy = **61.6%**
FC Model [model](https://storage.googleapis.com/cs231n-emotiw/models/ensemble-fc-laugh-boost-final-v2.h5)
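The modalities listed above are combined by late fusion: each classifier exposes a feature layer, the per-clip feature rows are concatenated, and a small FC network is trained on top. A minimal sketch of the fusion step (the feature widths here are illustrative, not the exact layer sizes):

```python
import numpy as np

# Hypothetical per-modality features for one clip; widths are illustrative.
audio = np.random.rand(32)   # audio head features
scene = np.random.rand(30)   # scene head features
pose = np.random.rand(128)   # pose head features

# Late fusion: concatenate into a single input row for the FC head
fused = np.concatenate([audio, scene, pose])
```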
### Copy Pre-Processed Files
```
!ls
!nvidia-smi
# FULL_PATH = 'My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw'
FULL_PATH = 'My Drive/cs231n-project/datasets/emotiw'
print("Using final dataset...")
!cp /content/drive/'$FULL_PATH'/train-final-* .
!cp /content/drive/'$FULL_PATH'/val-final-* .
!wget https://storage.googleapis.com/cs231n-emotiw/data/Train_labels.txt
!wget https://storage.googleapis.com/cs231n-emotiw/data/Val_labels.txt
!ls
# RUN THIS FOR FINAL FILES (zip includes root folder)
!unzip train-final-audio.zip
!unzip train-final-faces.zip
!unzip train-final-frames.zip
!unzip train-final-pose.zip
!unzip -d train-final-fer train-final-fer.zip
!unzip val-final-audio.zip
!unzip val-final-faces.zip
!unzip val-final-frames.zip
!unzip val-final-pose.zip
!unzip -d val-final-fer val-final-fer.zip
!ls
```
### Run Classifiers
```
%tensorflow_version 2.x
import tensorflow
print(tensorflow.__version__)
!pwd
import urllib
from getpass import getpass
import os
user = input('User name: ')
password = getpass('Password: ')
password = urllib.parse.quote(password) # your password is converted into url format
cmd_string = 'git clone https://{0}:{1}@github.com/kevincong95/cs231n-emotiw.git'.format(user, password)
os.system(cmd_string)
cmd_string, password = "", "" # removing the password from the variable
!mv train-* cs231n-emotiw
!mv val-* cs231n-emotiw
!mv Train* cs231n-emotiw
!mv Val* cs231n-emotiw
!pwd
import os
os.chdir('/content/cs231n-emotiw')
!pwd
!pip install pytorch-transformers
# Create the concatenated input layer to feed into FC
from src.classifiers.audio_classifier import AudioClassifier
from src.classifiers.frames_classifier import FramesClassifier
from src.classifiers.pose_classifier import PoseClassifier
from src.classifiers.face_classifier import FaceClassifier
from src.classifiers.image_captioning_classifier import ImageCaptioningClassifier, FineTuningConfig
from src.classifiers.utils import get_num_samples
import numpy as np
IS_TINY = False
def run_classifier(layers_to_extract, audio_folder='train-final-audio', frames_folder='train-final-frames', pose_folder='train-final-pose', face_folder='train-final-fer', image_caption_pkl="train-final-captions.pkl", image_caption_prefix="train_", labels_file="Train_labels.txt"):
    audio_classifier = AudioClassifier(audio_folder, model_location='https://storage.googleapis.com/cs231n-emotiw/models/openl3-cnn-lstm-tuned-lr.h5', is_test=False)
    frames_classifier = FramesClassifier(frames_folder, model_location='https://storage.googleapis.com/cs231n-emotiw/models/scene-classifier-resnet-lstm-x3.h5', is_test=False)
    frames_classifier_vgg = FramesClassifier(frames_folder, location_prefix="vgg", model_location='https://storage.googleapis.com/cs231n-emotiw/models/vgg19-lstm-cp-0003.h5', is_test=False, batch_size=4)
    pose_classifier = PoseClassifier(pose_folder, model_location='https://storage.googleapis.com/cs231n-emotiw/models/pose-classifier-64lstm-0.01reg.h5', is_test=False)
    face_classifier = FaceClassifier(face_folder, model_location='/content/drive/My Drive/cs231n-project/models/face-classifier-playground/cp-0001.h5', is_test=False)
    image_captioning_classifier = ImageCaptioningClassifier(image_caption_pkl, image_caption_prefix, model_metadata_location="https://storage.googleapis.com/cs231n-emotiw/models/sentiment-transformer-16.metadata.bin", model_location='https://storage.googleapis.com/cs231n-emotiw/models/sentiment-transformer_sentiment-transformer_756.pth', is_test=False)
    # classifiers = [audio_classifier, frames_classifier, pose_classifier] # face_classifier]
    classifiers = [audio_classifier, frames_classifier, frames_classifier_vgg, pose_classifier, face_classifier, image_captioning_classifier]
    # classifiers = [frames_classifier_vgg]
    sample_to_true_label = {}
    with open(labels_file) as f:
        l = 0
        for line in f:
            if l == 0:
                # Skip headers
                l += 1
                continue
            line_arr = line.split(" ")
            # subtract one to make labels range from 0 to 2
            sample_to_true_label[line_arr[0].strip()] = int(line_arr[1].strip()) - 1
            l += 1
    classifier_outputs = []
    classifier_samples = []
    classifier_dim_sizes = []
    output_dim_size = 0
    num_samples = 0
    sample_to_row = {}
    for i, classifier in enumerate(classifiers):
        output, samples = classifier.predict(layers_to_extract[i])
        output_dim_size += output.shape[1]
        classifier_dim_sizes.append(output.shape[1])
        num_samples = len(samples)
        classifier_outputs.append(output)
        classifier_samples.append(samples)
    X_train = np.zeros(shape=(num_samples, output_dim_size))
    y_train = []
    print(f"Number of samples: {num_samples}")
    print("Dim shapes: ")
    print(classifier_dim_sizes)
    for i, sample in enumerate(classifier_samples[0]):
        sample_to_row[sample] = i
        y_train.append(sample_to_true_label[sample])
    last_classifier_index = 0
    for c, output in enumerate(classifier_outputs):
        samples = classifier_samples[c]
        print(len(output))
        for i, row in enumerate(output):
            sample = samples[i]
            X_train[sample_to_row[sample], last_classifier_index:last_classifier_index+classifier_dim_sizes[c]] += row
        last_classifier_index += classifier_dim_sizes[c]
    return X_train, tf.keras.utils.to_categorical(y_train, num_classes=3)
import tensorflow as tf
# For each classifier, extract the specific desired layer
# (refer to the model summary for the layer names)
layers_to_extract = [
"dense", # Audio
"concatenate_5", # ResNet
"global_average_pooling3d_1", # VGG
"bidirectional_1", # Pose
"dense_27", # FER
"classification_head" # Image Caption
]
prefix = "final"
if IS_TINY:
    prefix = "tiny"
# X_train, y_train = run_classifier(layers_to_extract, audio_folder=f"train-{prefix}-audio", frames_folder=f"train-{prefix}-frames", pose_folder=f"train-{prefix}-pose", face_folder=f"train-{prefix}-fer" ,labels_file="Train_labels.txt")
# X_valid, y_valid = run_classifier(layers_to_extract, audio_folder=f"val-{prefix}-audio", frames_folder=f"val-{prefix}-frames", pose_folder=f"val-{prefix}-pose", face_folder=f"val-{prefix}-fer" , labels_file="Val_labels.txt")
X_train, y_train = run_classifier(layers_to_extract, audio_folder=f"train-{prefix}-audio", frames_folder=f"train-{prefix}-frames", pose_folder=f"train-{prefix}-pose" , face_folder=f"train-{prefix}-fer" , image_caption_pkl="train-final-captions.pkl", image_caption_prefix="train_", labels_file="Train_labels.txt")
X_valid, y_valid = run_classifier(layers_to_extract, audio_folder=f"val-{prefix}-audio", frames_folder=f"val-{prefix}-frames", pose_folder=f"val-{prefix}-pose" , face_folder=f"val-{prefix}-fer" , image_caption_pkl="val-final-captions.pkl", image_caption_prefix="val_", labels_file="Val_labels.txt")
print(X_train.shape)
print(y_train.shape)
!ls
!rm -rf ensemble-scene-scene-pose-audio-face-caption-v1
!mkdir ensemble-scene-scene-pose-audio-face-caption-v1
np.save("ensemble-scene-scene-pose-audio-face-caption-v1/X_train.npy", X_train)
np.save("ensemble-scene-scene-pose-audio-face-caption-v1/y_train.npy", y_train)
np.save("ensemble-scene-scene-pose-audio-face-caption-v1/X_valid.npy", X_valid)
np.save("ensemble-scene-scene-pose-audio-face-caption-v1/y_valid.npy", y_valid)
!zip -r ensemble-scene-scene-pose-audio-face-caption-v1.zip ensemble-scene-scene-pose-audio-face-caption-v1
# !cp ensemble-scene-pose-audio-v1.zip ../drive/'My Drive/Machine-Learning-Projects'/cs231n-project/datasets/emotiw
!cp ensemble-scene-scene-pose-audio-face-caption-v1.zip ../drive/'My Drive'/cs231n-project/datasets/emotiw
```
## START HERE IF YOU DON'T WANT TO REMAKE THE CONCAT ABOVE
```
!cp ../drive/'My Drive'/cs231n-project/datasets/emotiw/train-final-audio.zip .
!unzip -q -d train-final-audio train-final-audio.zip
!ls train-final-audio/train-final-audio/audio-pickle | head
!cp /content/drive/'My Drive/Machine-Learning-Projects'/cs231n-project/datasets/emotiw/train-final-laugh-prob.pkl .
!cp /content/drive/'My Drive/Machine-Learning-Projects'/cs231n-project/datasets/emotiw/val-final-laugh-prob.pkl .
import pickle
train_vid_to_laugh = {}
val_vid_to_laugh = {}
train_laugh_vec = []
val_laugh_vec = []
with open('train-final-laugh-prob.pkl', 'rb') as handle:
    train_laugh_obj = pickle.load(handle)

i = 0
for vid in train_laugh_obj["vids"]:
    train_vid_to_laugh[vid] = train_laugh_obj["actual_preds"][i]
    i += 1
for vid in sorted(train_laugh_obj["vids"]):
    train_laugh_vec.append(train_vid_to_laugh[vid])
import pickle
with open('val-final-laugh-prob.pkl', 'rb') as handle:
    val_laugh_obj = pickle.load(handle)

i = 0
for vid in val_laugh_obj["vids"]:
    val_vid_to_laugh[vid] = val_laugh_obj["actual_preds"][i]
    i += 1
for vid in sorted(val_laugh_obj["vids"]):
    val_laugh_vec.append(val_vid_to_laugh[vid])
print(len(train_laugh_vec))
print(len(val_laugh_vec))
import numpy as np
!cp /content/drive/'My Drive/Machine-Learning-Projects'/cs231n-project/datasets/emotiw/ensemble-scene-scene-pose-audio-face-caption-v1.zip .
!unzip ensemble-scene-scene-pose-audio-face-caption-v1.zip
X_train = np.load("ensemble-scene-scene-pose-audio-face-caption-v1/X_train.npy")
y_train = np.load("ensemble-scene-scene-pose-audio-face-caption-v1/y_train.npy")
X_valid = np.load("ensemble-scene-scene-pose-audio-face-caption-v1/X_valid.npy")
y_valid = np.load("ensemble-scene-scene-pose-audio-face-caption-v1/y_valid.npy")
X_train.shape
# Adding laughter probability as an additional dimension
train_laugh_vec = np.expand_dims(train_laugh_vec, 1)
val_laugh_vec = np.expand_dims(val_laugh_vec, 1)
X_train = np.hstack((X_train, train_laugh_vec))
X_valid = np.hstack((X_valid, val_laugh_vec))
X_train.shape
from sklearn.preprocessing import Normalizer
norm = Normalizer()
X_train_norm = norm.fit_transform(X_train)
X_valid_norm = norm.fit_transform(X_valid)
#
# CONFIGURATION
#
# Define any constants for the model here
#
MODEL_NAME = "ensemble-scene-scene-pose-audio-face-caption-laugh-for-boost-NORMALIZED-v1"
from pathlib import Path
import tensorflow as tf
import matplotlib.pyplot as plt
sizes = [32, 30, 40, 128, 8, 16, 1]
# # UNCOMMENT IF EXCLUDING FER, IMAGE CAP, AND VGG
# mask = []
# for x in range(sum(sizes)):
# if x >= 32 and x < 62:
# mask.append(False)
# elif x < 230:
# mask.append(True)
# else:
# mask.append(False)
# # UNCOMMENT IF EXCLUDING FER, IMAGE CAP, POSE, AND RESNET
# mask = []
# for x in range(sum(sizes)):
# if x >= 62 and x < 102:
# mask.append(False)
# elif x < 62:
# mask.append(True)
# else:
# mask.append(False)
# # UNCOMMENT IF EXCLUDING FER, AND VGG
# mask = []
# for x in range(sum(sizes)):
# if x >= 30 and x < 62:
# mask.append(False)
# elif x < 230:
# mask.append(True)
# elif x >= 238:
# mask.append(True)
# else:
# mask.append(False)
# UNCOMMENT IF EXCLUDING FER, AND RESNET [best]
mask = []
for x in range(sum(sizes)):
    if x >= 62 and x < 102:
        mask.append(False)
    elif x < 230:
        mask.append(True)
    elif x >= 238:
        mask.append(True)
    else:
        mask.append(False)
# # UNCOMMENT IF EXCLUDING FER, IMAGE CAP, AND RESNET [best]
# mask = []
# for x in range(sum(sizes)):
# if x >= 62 and x < 102:
# mask.append(False)
# elif x < 230:
# mask.append(True)
# else:
# mask.append(False)
# UNCOMMENT IF EXCLUDING FER AND IMAGE CAP
# mask = []
# for x in range(sum(sizes)):
# if x < 230:
# mask.append(True)
# else:
# mask.append(False)
# # UNCOMMENT IF EXCLUDING FER
# mask = []
# for x in range(sum(sizes)):
# if x < 230:
# mask.append(True)
# elif x >= 238:
# mask.append(True)
# else:
# mask.append(False)
def build_model():
    def create_model(inputs):
        # x = tf.keras.layers.Dense(hp.Int('units', min_value=8, max_value=128, step=8), activation='relu', kernel_regularizer=tf.keras.regularizers.l2())(inputs)
        # x = tf.keras.layers.Dense(8, activation='relu')(inputs)
        # x = tf.keras.layers.Dropout(0.3)(x)
        x = tf.keras.layers.BatchNormalization()(inputs)
        # x = tf.keras.layers.Dense(64, activation='relu')(inputs)
        # x = tf.keras.layers.Dropout(0.2)(x)
        # x = tf.keras.layers.Dense(32, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.001))(x)
        x = tf.keras.layers.Dense(64, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.02))(x)
        x = tf.keras.layers.Dense(3, activation='softmax', kernel_regularizer=tf.keras.regularizers.l2(0.02))(x)
        model = tf.keras.Model(inputs=inputs, outputs=x)
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                      loss='categorical_crossentropy',
                      metrics=['accuracy'])
        return model

    # inputs = tf.keras.Input(shape=(X_train.shape[1],))
    inputs = tf.keras.Input(shape=(np.count_nonzero(mask),))
    model = create_model(inputs)
    # model.summary()
    return model
Path(f"/content/drive/My Drive/Machine-Learning-Models/cs231n-project/models/{MODEL_NAME}").mkdir(parents=True, exist_ok=True)
checkpoint_path = "/content/drive/My Drive/Machine-Learning-Models/cs231n-project/models/" + MODEL_NAME + "/cp-{epoch:04d}.h5"
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=False,
save_best_only=False,
period=1
)
model = build_model()
import pickle
history = model.fit(
x=X_train[:, mask],
y=y_train,
epochs=50,
callbacks=[cp_callback],
validation_data=(X_valid[:, mask], y_valid)
)
import tensorflow as tf
BEST_FIRST_MODEL_PATH = '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/models/ensemble-scene-scene-pose-audio-face-caption-laugh-v1/cp-0036.h5'
top_model = tf.keras.models.load_model(BEST_FIRST_MODEL_PATH)
top_model.summary()
train_pred = top_model.predict(X_train[:, mask])
val_pred = top_model.predict(X_valid[:, mask])
y_true_val = np.argmax(y_valid, axis=1)
y_pred_val = np.argmax(val_pred, axis=1)
from sklearn import metrics
import pandas as pd
import seaborn as sn
cm=metrics.confusion_matrix(y_true_val,y_pred_val)
import matplotlib.pyplot as plt
classes=['Pos' , 'Neu' , 'Neg']
con_mat = tf.math.confusion_matrix(labels=y_true_val, predictions=y_pred_val).numpy()
con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)
con_mat_df = pd.DataFrame(con_mat_norm,
index = classes,
columns = classes)
figure = plt.figure(figsize=(11, 9))
plt.title("Top Model FC Validation")
sn.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
accuracy = (y_true_val == y_pred_val).mean()
print(f"Accuracy: {accuracy}")
```
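The hand-built index masks above can equivalently be derived from the per-modality feature sizes; a small sketch (the modality names and their ordering are assumptions for illustration, not the notebook's exact layout):

```python
import numpy as np

# Hypothetical per-modality feature widths, in concatenation order
sizes = [32, 30, 40, 128, 8, 16, 1]
names = ["audio", "scene_a", "scene_b", "pose", "fer", "caption", "laugh"]

# Keep every modality except the excluded ones
exclude = {"scene_b", "fer"}
mask = np.concatenate([np.full(s, name not in exclude) for name, s in zip(names, sizes)])
```

This avoids hard-coding index boundaries, so the mask stays correct if a modality's width changes.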
### Add boosting for positive
```
y_train.shape
filtered_y_train = []
filtered_x_train = []
train_pred_pos_neu = np.argwhere(np.argmax(y_train, axis=1) <= 1)
for i in train_pred_pos_neu:
    filtered_y_train.append(y_train[i])
    filtered_x_train.append(X_train_norm[i])
filtered_y_train = np.asarray(filtered_y_train).squeeze()[:, :2]
filtered_x_train = np.asarray(filtered_x_train).squeeze()
filtered_x_train.shape
y_valid.shape
val_pred_pos_neu = np.argwhere(np.argmax(val_pred, axis=1) <= 1)
filtered_y_val = []
filtered_x_val = []
for i in val_pred_pos_neu:
    filtered_y_val.append(y_valid[i])
    filtered_x_val.append(X_valid_norm[i])
filtered_y_val = np.asarray(filtered_y_val).squeeze()[:, :2]
filtered_x_val = np.asarray(filtered_x_val).squeeze()
filtered_x_val.shape
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(max_depth=20, random_state=0 , n_estimators=500)
clf.fit(filtered_x_train, filtered_y_train,)
new_pred = clf.predict(filtered_x_val)
print(clf.score(filtered_x_val, filtered_y_val))
new_pred = np.argmax(new_pred , axis=1)
new_pred
from sklearn.svm import SVC
clf = SVC(gamma='auto')
clf.fit(filtered_x_train , np.argmax(filtered_y_train, axis=1))
clf.score(filtered_x_val , np.argmax(filtered_y_val, axis=1))
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(filtered_x_train , np.argmax(filtered_y_train, axis=1))
clf.score(filtered_x_val , np.argmax(filtered_y_val, axis=1))
new_pred_log = clf.predict(filtered_x_val)
new_pred_log
val_pred_pos_neu_f = val_pred_pos_neu.reshape(1, -1)
counter = 0
y_pred_val_boost = y_pred_val.copy()  # copy, so the original predictions are not overwritten
for i in range(0, len(y_pred_val_boost)):
    if i in val_pred_pos_neu_f:
        y_pred_val_boost[i] = new_pred[counter]
        counter += 1
val_pred_pos_neu_f
y_pred_val_boost
```
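The overwrite loop above can be expressed with vectorised indexing; a small sketch with made-up predictions (classes 0 = Pos, 1 = Neu, 2 = Neg):

```python
import numpy as np

# Two-stage "boost": a second classifier re-decides only the samples that the
# first stage labelled Positive (0) or Neutral (1). Values here are made up.
first_preds = np.array([0, 2, 1, 0, 2])    # first-stage class predictions
second_preds = np.array([1, 0, 1])         # re-predictions for the Pos/Neu subset

boosted = first_preds.copy()               # copy, so first_preds stays intact
pos_neu_idx = np.flatnonzero(first_preds <= 1)
boosted[pos_neu_idx] = second_preds        # overwrite in one vectorised step
```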
## Random Forest
```
from sklearn import metrics
import pandas as pd
import seaborn as sn
cm=metrics.confusion_matrix(y_true_val,y_pred_val_boost)
import matplotlib.pyplot as plt
classes=['Pos' , 'Neu' , 'Neg']
con_mat = tf.math.confusion_matrix(labels=y_true_val, predictions=y_pred_val_boost).numpy()
con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)
con_mat_df = pd.DataFrame(con_mat_norm,
index = classes,
columns = classes)
figure = plt.figure(figsize=(11, 9))
plt.title("Top Model FC Validation")
sn.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
accuracy = (y_true_val == y_pred_val_boost).mean()
print(f"Accuracy: {accuracy}")
```
## Logistic Regression
```
val_pred_pos_neu_f = val_pred_pos_neu.reshape(1, -1)
counter = 0
y_pred_val_boost = y_pred_val.copy()  # copy, so the original predictions are not overwritten
for i in range(0, len(y_pred_val_boost)):
    if i in val_pred_pos_neu_f:
        y_pred_val_boost[i] = new_pred_log[counter]
        counter += 1
from sklearn import metrics
import pandas as pd
import seaborn as sn
cm=metrics.confusion_matrix(y_true_val,y_pred_val_boost)
import matplotlib.pyplot as plt
classes=['Pos' , 'Neu' , 'Neg']
con_mat = tf.math.confusion_matrix(labels=y_true_val, predictions=y_pred_val_boost).numpy()
con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)
con_mat_df = pd.DataFrame(con_mat_norm,
index = classes,
columns = classes)
figure = plt.figure(figsize=(11, 9))
plt.title("Top Model FC Validation")
sn.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
accuracy = (y_true_val == y_pred_val_boost).mean()
print(f"Accuracy: {accuracy}")
```
# WITH TEST
```
from google.colab import drive
drive.mount('/content/drive')
!cp /content/drive/'My Drive/Machine-Learning-Projects'/cs231n-project/datasets/emotiw/test-final-laugh-prob.pkl .
!cp /content/drive/'My Drive/Machine-Learning-Projects'/cs231n-project/datasets/emotiw/ensemble-scene-scene-pose-audio-face-caption-v1-test.zip .
!unzip ensemble-scene-scene-pose-audio-face-caption-v1-test.zip
import numpy as np
X_test = np.load("ensemble-scene-scene-pose-audio-face-caption-v1-test/X_test.npy")
y_test = np.load("ensemble-scene-scene-pose-audio-face-caption-v1-test/y_test.npy")
X_test.shape
## Get the laughs
import pickle
test_vid_to_laugh = {}
test_laugh_vec = []
with open('test-final-laugh-prob.pkl', 'rb') as handle:
    test_laugh_obj = pickle.load(handle)

i = 0
for vid in test_laugh_obj["vids"]:
    test_vid_to_laugh[vid] = test_laugh_obj["actual_preds"][i]
    i += 1
for vid in sorted(test_laugh_obj["vids"]):
    test_laugh_vec.append(test_vid_to_laugh[vid])
print(len(test_laugh_vec))
# Adding laughter probability as an additional dimension
test_laugh_vec = np.expand_dims(test_laugh_vec, 1)
X_test = np.hstack((X_test, test_laugh_vec))
import tensorflow as tf
MODEL_PATH = '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/models/ensemble-scene-scene-pose-audio-face-caption-laugh-v1/cp-0036.h5'
model = tf.keras.models.load_model(MODEL_PATH)
#
# CONFIGURATION
#
# Define any constants for the model here
#
from pathlib import Path
import tensorflow as tf
import matplotlib.pyplot as plt
sizes = [32, 30, 40, 128, 8, 16, 1]
# UNCOMMENT IF EXCLUDING FER, AND RESNET [best] ************
mask = []
for x in range(sum(sizes)):
    if x >= 62 and x < 102:
        mask.append(False)
    elif x < 230:
        mask.append(True)
    elif x >= 238:
        mask.append(True)
    else:
        mask.append(False)
import pickle
history = model.evaluate(
x=X_test[:, mask],
y=y_test
)
import pickle
predictions = model.predict(
x=X_test[:, mask]
)
len(y_test)
len(predictions)
y_true_final = np.argmax(y_test, axis=1)
y_pred_final = np.argmax(predictions, axis=1)
y_pred_final
```
## Without Boosting
```
from sklearn import metrics
import pandas as pd
import seaborn as sn
cm=metrics.confusion_matrix(y_true_final,y_pred_final)
import matplotlib.pyplot as plt
classes=['Pos' , 'Neu' , 'Neg']
con_mat = tf.math.confusion_matrix(labels=y_true_final, predictions=y_pred_final).numpy()
con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)
con_mat_df = pd.DataFrame(con_mat_norm,
index = classes,
columns = classes)
figure = plt.figure(figsize=(11, 9))
plt.title("FC Voting Confusion Matrix with Scene & Audio")
sn.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
accuracy = (y_pred_final == y_true_final).mean()
print(f"Accuracy: {accuracy}")
```
### Add boosting for positive
```
BOOSTING_SECOND_MODEL = tf.keras.models.load_model('/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/models/ensemble-scene-scene-pose-audio-face-caption-laugh-for-boost-BOOST-v2/cp-0049.h5')
y_test.shape
test_pred_pos_neu = np.argwhere(np.argmax(predictions, axis=1) <= 1)
filtered_y_test = []
filtered_x_test = []
for i in test_pred_pos_neu:
    filtered_y_test.append(y_test[i])
    filtered_x_test.append(X_test[i])
filtered_y_test = np.asarray(filtered_y_test).squeeze()[:, :2]
filtered_x_test = np.asarray(filtered_x_test).squeeze()
filtered_x_test.shape
new_pred = BOOSTING_SECOND_MODEL.predict(filtered_x_test[:,mask])
new_pred = np.argmax(new_pred, axis=1)
test_pred_pos_neu_f = test_pred_pos_neu.reshape(1, -1)
counter = 0
y_pred_test_boost = y_pred_final.copy()  # copy, so the original predictions are not overwritten
for i in range(0, len(y_pred_test_boost)):
    if i in test_pred_pos_neu_f:
        y_pred_test_boost[i] = new_pred[counter]
        counter += 1
y_pred_test_boost
from sklearn import metrics
import pandas as pd
import seaborn as sn
cm=metrics.confusion_matrix(y_true_final,y_pred_test_boost)
import matplotlib.pyplot as plt
classes=['Pos' , 'Neu' , 'Neg']
con_mat = tf.math.confusion_matrix(labels=y_true_final, predictions=y_pred_test_boost).numpy()
con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)
con_mat_df = pd.DataFrame(con_mat_norm,
index = classes,
columns = classes)
figure = plt.figure(figsize=(11, 9))
plt.title("FC Voting Confusion Matrix with Scene & Audio")
sn.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
accuracy = (y_true_final == y_pred_test_boost).mean()
print(f"Accuracy: {accuracy}")
```
### OCI Data Science - Useful Tips
<details>
<summary><font size="2">Check for Public Internet Access</font></summary>
```python
import requests
response = requests.get("https://oracle.com")
assert response.status_code==200, "Internet connection failed"
```
</details>
<details>
<summary><font size="2">Helpful Documentation </font></summary>
<ul><li><a href="https://docs.cloud.oracle.com/en-us/iaas/data-science/using/data-science.htm">Data Science Service Documentation</a></li>
<li><a href="https://docs.cloud.oracle.com/iaas/tools/ads-sdk/latest/index.html">ADS documentation</a></li>
</ul>
</details>
<details>
<summary><font size="2">Typical Cell Imports and Settings for ADS</font></summary>
```python
%load_ext autoreload
%autoreload 2
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import logging
logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.ERROR)
import ads
from ads.dataset.factory import DatasetFactory
from ads.automl.provider import OracleAutoMLProvider
from ads.automl.driver import AutoML
from ads.evaluations.evaluator import ADSEvaluator
from ads.common.data import ADSData
from ads.explanations.explainer import ADSExplainer
from ads.explanations.mlx_global_explainer import MLXGlobalExplainer
from ads.explanations.mlx_local_explainer import MLXLocalExplainer
from ads.catalog.model import ModelCatalog
from ads.common.model_artifact import ModelArtifact
```
</details>
<details>
<summary><font size="2">Useful Environment Variables</font></summary>
```python
import os
print(os.environ["NB_SESSION_COMPARTMENT_OCID"])
print(os.environ["PROJECT_OCID"])
print(os.environ["USER_OCID"])
print(os.environ["TENANCY_OCID"])
print(os.environ["NB_REGION"])
```
</details>
```
%load_ext autoreload
%autoreload 2
import sklearn
import joblib
import json
import numpy as np
import pandas as pd
import io
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
```
# Training (look at the large_rf.py file)
```
dataset = make_classification(n_samples=10000, n_features=75, n_informative=25, n_redundant=10, n_classes=10)
#rf_model = RandomForestClassifier(n_estimators=10000, max_depth=40, n_jobs=-1)
#rf_model.fit(dataset[0], dataset[1])
```
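The commented-out training above is what produces the `rf.joblib` artifact that the inference section loads. A minimal sketch of the save/load round trip (sizes are kept tiny here so it runs in seconds, unlike the full `large_rf.py` training):

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Tiny stand-in for the real training job; parameters are illustrative only
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
rf_model = RandomForestClassifier(n_estimators=10, random_state=0)
rf_model.fit(X, y)

joblib.dump(rf_model, "./rf.joblib")   # persist the fitted model to disk
restored = joblib.load("./rf.joblib")  # ...and load it back for inference
```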
# Inference
```
rf_model = joblib.load("./rf.joblib")
rf_model.predict(dataset[0][1].reshape(1,-1))
# add the path of score.py:
import sys
path_to_rf_artifact = "."
sys.path.insert(0, path_to_rf_artifact)
from score import load_model, predict
# Load the model to memory
_ = load_model()
# make predictions on the training dataset:
#predictions_test = predict(input_data, _)
data = json.dumps(dataset[0][1].tolist())
data
predictions_test = predict(data, _)
print(predictions_test)
```
## Lecture 2: Models of Computation
Lecture by Erik Demaine
Video link here: [https://www.youtube.com/watch?v=Zc54gFhdpLA&list=PLUl4u3cNGP61Oq3tWYp6V_F-5jb5L2iHb&index=2](https://www.youtube.com/watch?v=Zc54gFhdpLA&list=PLUl4u3cNGP61Oq3tWYp6V_F-5jb5L2iHb&index=2)
### Problem statement:
Given two documents, **D1** and **D2**, find the distance between them
The distance **d(D1,D2)** can be defined in a number of ways, but we use the following definition:
* For a word 'w' in document D, D[w] is defined as the number of occurrences of 'w' in D
* We create a vector for both documents D1 and D2 in this way
* Given both vectors, we compute the distance **d(D1,D2)** as the following steps:
    - d'(D1,D2): Compute the **inner product** of these vectors
        - ``d'(D1,D2) = sum(D1[w]*D2[w] for all w)``
    - This works well, but the raw value grows with document length. We can normalize it by dividing by the lengths of the two vectors
        - ``d''(D1,D2) = d'(D1,D2)/(|D1| * |D2|)``
        - ``|D| = sqrt(d'(D,D))`` is the Euclidean length of the word-count vector for document D
        - d''(D1,D2) is the cosine of the angle between the two vectors
    - If we take the arccos of d''(D1,D2), we get the angle between the two vectors
        - ``d(D1,D2) = arccos(d''(D1,D2))``
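As a quick worked example of the definitions above: "the quick brown fox" and "the lazy brown dog" share two of their four words, so the angle between them comes out to exactly 60°.

```python
import math
from collections import Counter

# Toy documents; Counter gives the word-count vectors D1[w], D2[w]
c1 = Counter("the quick brown fox".split())
c2 = Counter("the lazy brown dog".split())

inner = sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())   # d'(D1, D2)
norm1 = math.sqrt(sum(v * v for v in c1.values()))          # |D1|
norm2 = math.sqrt(sum(v * v for v in c2.values()))          # |D2|
angle = math.acos(inner / (norm1 * norm2))                  # d(D1, D2), in radians
```

Identical documents give an angle of 0; documents with no words in common give π/2.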
### Steps:
Calculating this requires, broadly, the following steps:

1. **Split document into words** - This can be done in a number of ways. The below list is not exhaustive:
    1. Go through the document; any time you see a non-alphanumeric character, start a new word
    2. Use regex (can run in exponential time, so be very wary)
    3. Use 'split'
2. **Find word frequencies** - A couple of ways to do this:
    1. Sort the words, then count runs of equal words
    2. Go through the words linearly, adding to a count dictionary
3. **Compute distance** as above
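Step 2 admits both counting strategies; a tiny sketch showing they agree:

```python
from collections import Counter
from itertools import groupby

words = ["the", "cat", "sat", "on", "the", "mat"]

# Strategy 1: sort first, then count runs of equal words
counts_sorted = {w: len(list(g)) for w, g in groupby(sorted(words))}

# Strategy 2: one linear pass with a count dictionary (Counter does exactly this)
counts_linear = Counter(words)
```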
```
import os, glob
import copy, math
doc_dir="Sample Documents/"
doc_list=os.listdir(doc_dir)
```
#### Split document into words
```
def splitIntoWords(file_contents: str) -> list:
    word_list = []
    curr_word = []
    for c in file_contents:
        ord_c = ord(c)
        if 65 <= ord_c <= 90 or 97 <= ord_c <= 122 or ord_c == 39 or ord_c == 44:
            if ord_c == 44 or ord_c == 39:
                # drop apostrophes and commas, but keep the current word going
                continue
            curr_word.append(c)
        else:
            if curr_word:
                word_list.append("".join(curr_word).lower())
                curr_word = []
    # remember to append the last word, if there is one
    if curr_word:
        word_list.append("".join(curr_word).lower())
    return word_list
assert len(doc_list) == 2, "Invalid number of documents. Select any two"
for i, doc in enumerate(doc_list):
    if i == 0:
        D1 = splitIntoWords(open(doc_dir + doc, "r").read())
    else:
        D2 = splitIntoWords(open(doc_dir + doc, "r").read())
```
#### Compute word count
```
def computeWordCount(word_list: list) -> dict:
    '''
    This function computes word counts by checking whether the word is in the count dictionary.
    If it is, it increments that count by 1.
    Else, it sets the count to 1.
    '''
    word_count = {}
    for word in word_list:
        if word in word_count:
            word_count[word] += 1
        else:
            word_count[word] = 1
    return word_count

def computeWordCountSort(word_list: list) -> dict:
    '''
    This method computes the word counts by first sorting the list lexicographically.
    If the word is the same as the previous one, it increments count by 1.
    Else, it stores the count for the previous word, resets count to 1 and moves on to the new word.
    '''
    word_list.sort()
    cur = word_list[0]
    count = 1
    word_count = {}
    for word in word_list[1:]:
        if word == cur:
            count += 1
        else:
            word_count[cur] = count
            count = 1
            cur = word
    word_count[cur] = count
    return word_count
```
The above functions are equivalent. You can use either of them to compute the word counts. Below, I use the ```computeWordCount()``` function
#### Compute distance
```
def dotProduct(vec1: dict, vec2: dict) -> float:
    res = 0.0
    for key in set(list(vec1.keys()) + list(vec2.keys())):
        res += vec1.get(key, 0) * vec2.get(key, 0)
    return res

def dotProductFaster(vec1: dict, vec2: dict) -> float:
    # iterate over the smaller dictionary to save lookups
    res = 0.0
    if len(vec1) > len(vec2):
        smaller, larger = vec2, vec1
    else:
        smaller, larger = vec1, vec2
    for key in smaller.keys():
        res += smaller[key] * larger.get(key, 0)
    return res
def normalize(word_count_doc1: dict, word_count_doc2: dict) -> float:
    # |D| is the Euclidean length of the word-count vector: sqrt(d'(D, D))
    return math.sqrt(dotProductFaster(word_count_doc1, word_count_doc1)) * \
           math.sqrt(dotProductFaster(word_count_doc2, word_count_doc2))

def docdist(doc1, doc2):
    D1 = splitIntoWords(open(doc_dir + doc1, "r").read())
    D2 = splitIntoWords(open(doc_dir + doc2, "r").read())
    D1_WC = computeWordCount(D1)
    D2_WC = computeWordCount(D2)
    # Use either of the two functions below
    # Time them to see which is faster
    DotProductValue = dotProduct(D1_WC, D2_WC)
    DotProductValueFaster = dotProductFaster(D1_WC, D2_WC)
    normalizedDPValue = DotProductValueFaster / normalize(D1_WC, D2_WC)
    return math.acos(normalizedDPValue)
print(docdist(doc_list[0], doc_list[1]))
```
# Evaluation of a QA System
[](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial5_Evaluation.ipynb)
To be able to make a statement about the performance of a question-answering system, it is important to evaluate it. Furthermore, evaluation allows us to determine which parts of the system can be improved.
### Prepare environment
#### Colab: Enable the GPU runtime
Make sure you enable the GPU runtime to experience decent speed in this tutorial.
**Runtime -> Change Runtime type -> Hardware accelerator -> GPU**
<img src="https://raw.githubusercontent.com/deepset-ai/haystack/master/docs/_src/img/colab_gpu_runtime.jpg">
```
# Make sure you have a GPU running
!nvidia-smi
```
## Start an Elasticsearch server
You can start Elasticsearch on your local machine instance using Docker. If Docker is not readily available in your environment (e.g., in Colab notebooks), then you can manually download and execute Elasticsearch from source.
```
# Install the latest release of Haystack in your own environment
#! pip install farm-haystack
# Install the latest master of Haystack
!pip install grpcio-tools==1.34.1
!pip install git+https://github.com/deepset-ai/haystack.git
# If you run this notebook on Google Colab, you might need to
# restart the runtime after installing haystack.
# In Colab / No Docker environments: Start Elasticsearch from source
! wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.2-linux-x86_64.tar.gz -q
! tar -xzf elasticsearch-7.9.2-linux-x86_64.tar.gz
! chown -R daemon:daemon elasticsearch-7.9.2
import os
from subprocess import Popen, PIPE, STDOUT
es_server = Popen(['elasticsearch-7.9.2/bin/elasticsearch'],
stdout=PIPE, stderr=STDOUT,
preexec_fn=lambda: os.setuid(1) # as daemon
)
# wait until ES has started
! sleep 30
from haystack.modeling.utils import initialize_device_settings
device, n_gpu = initialize_device_settings(use_cuda=True)
from haystack.preprocessor.utils import fetch_archive_from_http
# Download evaluation data, which is a subset of Natural Questions development set containing 50 documents
doc_dir = "../data/nq"
s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/nq_dev_subset_v2.json.zip"
fetch_archive_from_http(url=s3_url, output_dir=doc_dir)
# make sure these indices do not collide with existing ones, the indices will be wiped clean before data is inserted
doc_index = "tutorial5_docs"
label_index = "tutorial5_labels"
# Connect to Elasticsearch
from haystack.document_store.elasticsearch import ElasticsearchDocumentStore
# Connect to Elasticsearch
document_store = ElasticsearchDocumentStore(host="localhost", username="", password="", index="document",
create_index=False, embedding_field="emb",
embedding_dim=768, excluded_meta_data=["emb"])
from haystack.preprocessor import PreProcessor
# Add evaluation data to Elasticsearch Document Store
# We first delete the custom tutorial indices to not have duplicate elements
# and also split our documents into shorter passages using the PreProcessor
preprocessor = PreProcessor(
split_length=500,
split_overlap=0,
split_respect_sentence_boundary=False,
clean_empty_lines=False,
clean_whitespace=False
)
document_store.delete_documents(index=doc_index)
document_store.delete_documents(index=label_index)
document_store.add_eval_data(
filename="../data/nq/nq_dev_subset_v2.json",
doc_index=doc_index,
label_index=label_index,
preprocessor=preprocessor
)
# Let's prepare the labels that we need for the retriever and the reader
labels = document_store.get_all_labels_aggregated(index=label_index)
```
## Initialize components of QA-System
```
# Initialize Retriever
from haystack.retriever.sparse import ElasticsearchRetriever
retriever = ElasticsearchRetriever(document_store=document_store)
# Alternative: Evaluate DensePassageRetriever
# Note, that DPR works best when you index short passages < 512 tokens as only those tokens will be used for the embedding.
# Here, for nq_dev_subset_v2.json we have avg. num of tokens = 5220(!).
# DPR still outperforms Elastic's BM25 by a small margin here.
# from haystack.retriever.dense import DensePassageRetriever
# retriever = DensePassageRetriever(document_store=document_store,
# query_embedding_model="facebook/dpr-question_encoder-single-nq-base",
# passage_embedding_model="facebook/dpr-ctx_encoder-single-nq-base",
# use_gpu=True,
# embed_title=True,
# max_seq_len=256,
# batch_size=16,
# remove_sep_tok_from_untitled_passages=True)
#document_store.update_embeddings(retriever, index=doc_index)
# Initialize Reader
from haystack.reader.farm import FARMReader
reader = FARMReader("deepset/roberta-base-squad2", top_k=4, return_no_answer=True)
from haystack.eval import EvalAnswers, EvalDocuments
# Here we initialize the nodes that perform evaluation
eval_retriever = EvalDocuments()
eval_reader = EvalAnswers(sas_model="sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
```
## Evaluation of Retriever
Here we evaluate only the retriever, based on whether the gold_label document is retrieved.
```
## Evaluate Retriever on its own
retriever_eval_results = retriever.eval(top_k=20, label_index=label_index, doc_index=doc_index)
## Retriever Recall is the proportion of questions for which the correct document
## (the one containing the answer) appears among the retrieved documents
print("Retriever Recall:", retriever_eval_results["recall"])
## Retriever Mean Avg Precision rewards retrievers that give relevant documents a higher rank
print("Retriever Mean Avg Precision:", retriever_eval_results["map"])
```
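These two metrics can be illustrated on toy data. The sketch below is a hypothetical, simplified version of the calculations, not Haystack's internal implementation; the ranked lists and gold ids are made up:

```python
# Toy illustration of Recall and Mean Average Precision (MAP) for a retriever.
# Each entry pairs a ranked list of retrieved doc ids with the gold doc id.

def recall_at_k(results, k):
    """Fraction of queries whose gold document appears in the top-k results."""
    hits = sum(1 for ranked, gold in results if gold in ranked[:k])
    return hits / len(results)

def mean_avg_precision(results):
    """With one gold doc per query, AP reduces to 1 / rank of the gold doc."""
    ap_sum = 0.0
    for ranked, gold in results:
        if gold in ranked:
            ap_sum += 1.0 / (ranked.index(gold) + 1)
    return ap_sum / len(results)

results = [
    (["d3", "d1", "d7"], "d1"),  # gold at rank 2 -> AP = 0.5
    (["d2", "d9", "d4"], "d2"),  # gold at rank 1 -> AP = 1.0
    (["d5", "d6", "d8"], "d0"),  # gold not retrieved -> AP = 0.0
]
print(recall_at_k(results, 3))      # 2 of 3 queries hit the gold doc
print(mean_avg_precision(results))  # (0.5 + 1.0 + 0.0) / 3 = 0.5
```

Note how MAP rewards the second query (gold at rank 1) more than the first (gold at rank 2), while recall treats both as equal hits.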
## Evaluation of Reader
Here we evaluate only the reader in a closed-domain fashion, i.e. the reader is given one query
and one document, and metrics are calculated on whether the model selects the right position
in this text as the answer span (i.e. SQuAD style)
```
# Evaluate Reader on its own
reader_eval_results = reader.eval(document_store=document_store, device=device, label_index=label_index, doc_index=doc_index)
# Evaluation of Reader can also be done directly on a SQuAD-formatted file without passing the data to Elasticsearch
#reader_eval_results = reader.eval_on_file("../data/nq", "nq_dev_subset_v2.json", device=device)
## Reader Top-N-Accuracy is the proportion of predicted answers that match with their corresponding correct answer
print("Reader Top-N-Accuracy:", reader_eval_results["top_n_accuracy"])
## Reader Exact Match is the proportion of questions where the predicted answer is exactly the same as the correct answer
print("Reader Exact Match:", reader_eval_results["EM"])
## Reader F1-Score is the average overlap between the predicted answers and the correct answers
print("Reader F1-Score:", reader_eval_results["f1"])
```
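The EM and F1 metrics printed above can be sketched on a single prediction/gold pair. This is a simplified, hypothetical version of the SQuAD-style metrics; Haystack/FARM apply additional answer normalization internally:

```python
# Toy illustration of Exact Match (EM) and token-level F1 between a
# predicted answer string and a gold answer string.
from collections import Counter

def exact_match(pred, gold):
    return float(pred.strip().lower() == gold.strip().lower())

def f1_score(pred, gold):
    pred_toks = pred.lower().split()
    gold_toks = gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)  # token overlap
    n_common = sum(common.values())
    if n_common == 0:
        return 0.0
    precision = n_common / len(pred_toks)
    recall = n_common / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Berlin", "berlin"))          # case-insensitive match -> 1.0
print(f1_score("in Berlin Germany", "Berlin"))  # precision 1/3, recall 1 -> 0.5
```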
## Evaluation of Retriever and Reader (Open Domain)
Here we evaluate the retriever and reader together in an open-domain fashion, i.e. a document is considered
correctly retrieved if it contains the answer string anywhere within it. The reader is evaluated purely on the
predicted answer string, regardless of which document it came from or the position of the extracted span.
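The open-domain retrieval criterion can be stated in a few lines. This is a hypothetical, simplified sketch of the rule described above, not Haystack's evaluation code:

```python
# Open-domain criterion: a retrieved document counts as correct if the gold
# answer string occurs anywhere in its text (position does not matter).

def doc_correct_open_domain(doc_text, gold_answer):
    return gold_answer.lower() in doc_text.lower()

docs = [
    "The capital of Germany is Berlin.",
    "Munich hosts the Oktoberfest.",
]
print([doc_correct_open_domain(d, "Berlin") for d in docs])  # [True, False]
```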
```
from haystack import Pipeline
# Here is the pipeline definition
p = Pipeline()
p.add_node(component=retriever, name="ESRetriever", inputs=["Query"])
p.add_node(component=eval_retriever, name="EvalRetriever", inputs=["ESRetriever"])
p.add_node(component=reader, name="QAReader", inputs=["EvalRetriever"])
p.add_node(component=eval_reader, name="EvalReader", inputs=["QAReader"])
results = []
# This is how to run the pipeline
for l in labels:
res = p.run(
query=l.question,
labels=l,
params={"index": doc_index, "Retriever": {"top_k": 10}, "Reader": {"top_k": 5}},
)
results.append(res)
# When we have run evaluation using the pipeline, we can print the results
n_queries = len(labels)
eval_retriever.print()
print()
retriever.print_time()
print()
eval_reader.print(mode="reader")
print()
reader.print_time()
print()
eval_reader.print(mode="pipeline")
```
## About us
This [Haystack](https://github.com/deepset-ai/haystack/) notebook was made with love by [deepset](https://deepset.ai/) in Berlin, Germany
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our other work:
- [German BERT](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](https://apply.workable.com/deepset/)
```
# default_exp models.attentions
```
# VITS Attentions
```
# export
import copy
import math
import numpy as np
import torch
from torch import nn
from torch.nn import functional as F
from uberduck_ml_dev.models.common import LayerNorm
from uberduck_ml_dev.utils.utils import convert_pad_shape, subsequent_mask
class VITSEncoder(nn.Module):
def __init__(
self,
hidden_channels,
filter_channels,
n_heads,
n_layers,
kernel_size=1,
p_dropout=0.0,
window_size=4,
**kwargs
):
super().__init__()
self.hidden_channels = hidden_channels
self.filter_channels = filter_channels
self.n_heads = n_heads
self.n_layers = n_layers
self.kernel_size = kernel_size
self.p_dropout = p_dropout
self.window_size = window_size
self.drop = nn.Dropout(p_dropout)
self.attn_layers = nn.ModuleList()
self.norm_layers_1 = nn.ModuleList()
self.ffn_layers = nn.ModuleList()
self.norm_layers_2 = nn.ModuleList()
for i in range(self.n_layers):
self.attn_layers.append(
MultiHeadAttention(
hidden_channels,
hidden_channels,
n_heads,
p_dropout=p_dropout,
window_size=window_size,
)
)
self.norm_layers_1.append(LayerNorm(hidden_channels))
self.ffn_layers.append(
FFN(
hidden_channels,
hidden_channels,
filter_channels,
kernel_size,
p_dropout=p_dropout,
)
)
self.norm_layers_2.append(LayerNorm(hidden_channels))
def forward(self, x, x_mask):
attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
x = x * x_mask
for i in range(self.n_layers):
y = self.attn_layers[i](x, x, attn_mask)
y = self.drop(y)
x = self.norm_layers_1[i](x + y)
y = self.ffn_layers[i](x, x_mask)
y = self.drop(y)
x = self.norm_layers_2[i](x + y)
x = x * x_mask
return x
class Decoder(nn.Module):
def __init__(
self,
hidden_channels,
filter_channels,
n_heads,
n_layers,
kernel_size=1,
p_dropout=0.0,
proximal_bias=False,
proximal_init=True,
**kwargs
):
super().__init__()
self.hidden_channels = hidden_channels
self.filter_channels = filter_channels
self.n_heads = n_heads
self.n_layers = n_layers
self.kernel_size = kernel_size
self.p_dropout = p_dropout
self.proximal_bias = proximal_bias
self.proximal_init = proximal_init
self.drop = nn.Dropout(p_dropout)
self.self_attn_layers = nn.ModuleList()
self.norm_layers_0 = nn.ModuleList()
self.encdec_attn_layers = nn.ModuleList()
self.norm_layers_1 = nn.ModuleList()
self.ffn_layers = nn.ModuleList()
self.norm_layers_2 = nn.ModuleList()
for i in range(self.n_layers):
self.self_attn_layers.append(
MultiHeadAttention(
hidden_channels,
hidden_channels,
n_heads,
p_dropout=p_dropout,
proximal_bias=proximal_bias,
proximal_init=proximal_init,
)
)
self.norm_layers_0.append(LayerNorm(hidden_channels))
self.encdec_attn_layers.append(
MultiHeadAttention(
hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
)
)
self.norm_layers_1.append(LayerNorm(hidden_channels))
self.ffn_layers.append(
FFN(
hidden_channels,
hidden_channels,
filter_channels,
kernel_size,
p_dropout=p_dropout,
causal=True,
)
)
self.norm_layers_2.append(LayerNorm(hidden_channels))
def forward(self, x, x_mask, h, h_mask):
"""
x: decoder input
h: encoder output
"""
self_attn_mask = subsequent_mask(x_mask.size(2)).to(
device=x.device, dtype=x.dtype
)
encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
x = x * x_mask
for i in range(self.n_layers):
y = self.self_attn_layers[i](x, x, self_attn_mask)
y = self.drop(y)
x = self.norm_layers_0[i](x + y)
y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
y = self.drop(y)
x = self.norm_layers_1[i](x + y)
y = self.ffn_layers[i](x, x_mask)
y = self.drop(y)
x = self.norm_layers_2[i](x + y)
x = x * x_mask
return x
class MultiHeadAttention(nn.Module):
def __init__(
self,
channels,
out_channels,
n_heads,
p_dropout=0.0,
window_size=None,
heads_share=True,
block_length=None,
proximal_bias=False,
proximal_init=False,
):
super().__init__()
assert channels % n_heads == 0
self.channels = channels
self.out_channels = out_channels
self.n_heads = n_heads
self.p_dropout = p_dropout
self.window_size = window_size
self.heads_share = heads_share
self.block_length = block_length
self.proximal_bias = proximal_bias
self.proximal_init = proximal_init
self.attn = None
self.k_channels = channels // n_heads
self.conv_q = nn.Conv1d(channels, channels, 1)
self.conv_k = nn.Conv1d(channels, channels, 1)
self.conv_v = nn.Conv1d(channels, channels, 1)
self.conv_o = nn.Conv1d(channels, out_channels, 1)
self.drop = nn.Dropout(p_dropout)
if window_size is not None:
n_heads_rel = 1 if heads_share else n_heads
rel_stddev = self.k_channels**-0.5
self.emb_rel_k = nn.Parameter(
torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
* rel_stddev
)
self.emb_rel_v = nn.Parameter(
torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
* rel_stddev
)
nn.init.xavier_uniform_(self.conv_q.weight)
nn.init.xavier_uniform_(self.conv_k.weight)
nn.init.xavier_uniform_(self.conv_v.weight)
if proximal_init:
with torch.no_grad():
self.conv_k.weight.copy_(self.conv_q.weight)
self.conv_k.bias.copy_(self.conv_q.bias)
def forward(self, x, c, attn_mask=None):
q = self.conv_q(x)
k = self.conv_k(c)
v = self.conv_v(c)
x, self.attn = self.attention(q, k, v, mask=attn_mask)
x = self.conv_o(x)
return x
def attention(self, query, key, value, mask=None):
# reshape [b, d, t] -> [b, n_h, t, d_k]
b, d, t_s, t_t = (*key.size(), query.size(2))
query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
if self.window_size is not None:
assert (
t_s == t_t
), "Relative attention is only available for self-attention."
key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
rel_logits = self._matmul_with_relative_keys(
query / math.sqrt(self.k_channels), key_relative_embeddings
)
scores_local = self._relative_position_to_absolute_position(rel_logits)
scores = scores + scores_local
if self.proximal_bias:
assert t_s == t_t, "Proximal bias is only available for self-attention."
scores = scores + self._attention_bias_proximal(t_s).to(
device=scores.device, dtype=scores.dtype
)
if mask is not None:
scores = scores.masked_fill(mask == 0, -1e4)
if self.block_length is not None:
assert (
t_s == t_t
), "Local attention is only available for self-attention."
block_mask = (
torch.ones_like(scores)
.triu(-self.block_length)
.tril(self.block_length)
)
scores = scores.masked_fill(block_mask == 0, -1e4)
p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
p_attn = self.drop(p_attn)
output = torch.matmul(p_attn, value)
if self.window_size is not None:
relative_weights = self._absolute_position_to_relative_position(p_attn)
value_relative_embeddings = self._get_relative_embeddings(
self.emb_rel_v, t_s
)
output = output + self._matmul_with_relative_values(
relative_weights, value_relative_embeddings
)
output = (
output.transpose(2, 3).contiguous().view(b, d, t_t)
) # [b, n_h, t_t, d_k] -> [b, d, t_t]
return output, p_attn
def _matmul_with_relative_values(self, x, y):
"""
x: [b, h, l, m]
y: [h or 1, m, d]
ret: [b, h, l, d]
"""
ret = torch.matmul(x, y.unsqueeze(0))
return ret
def _matmul_with_relative_keys(self, x, y):
"""
x: [b, h, l, d]
y: [h or 1, m, d]
ret: [b, h, l, m]
"""
ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
return ret
def _get_relative_embeddings(self, relative_embeddings, length):
max_relative_position = 2 * self.window_size + 1
# Pad first before slice to avoid using cond ops.
pad_length = max(length - (self.window_size + 1), 0)
slice_start_position = max((self.window_size + 1) - length, 0)
slice_end_position = slice_start_position + 2 * length - 1
if pad_length > 0:
padded_relative_embeddings = F.pad(
relative_embeddings,
convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
)
else:
padded_relative_embeddings = relative_embeddings
used_relative_embeddings = padded_relative_embeddings[
:, slice_start_position:slice_end_position
]
return used_relative_embeddings
def _relative_position_to_absolute_position(self, x):
"""
x: [b, h, l, 2*l-1]
ret: [b, h, l, l]
"""
batch, heads, length, _ = x.size()
# Concat columns of pad to shift from relative to absolute indexing.
x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
        # Concat extra elements so that it adds up to shape (len+1, 2*len-1).
x_flat = x.view([batch, heads, length * 2 * length])
x_flat = F.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]))
# Reshape and slice out the padded elements.
x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
:, :, :length, length - 1 :
]
return x_final
def _absolute_position_to_relative_position(self, x):
"""
x: [b, h, l, l]
ret: [b, h, l, 2*l-1]
"""
batch, heads, length, _ = x.size()
        # pad along the column dimension
x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]))
x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
# add 0's in the beginning that will skew the elements after reshape
x_flat = F.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
return x_final
def _attention_bias_proximal(self, length):
"""Bias for self-attention to encourage attention to close positions.
Args:
length: an integer scalar.
Returns:
a Tensor with shape [1, 1, length, length]
"""
r = torch.arange(length, dtype=torch.float32)
diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
class FFN(nn.Module):
def __init__(
self,
in_channels,
out_channels,
filter_channels,
kernel_size,
p_dropout=0.0,
activation=None,
causal=False,
):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.filter_channels = filter_channels
self.kernel_size = kernel_size
self.p_dropout = p_dropout
self.activation = activation
self.causal = causal
if causal:
self.padding = self._causal_padding
else:
self.padding = self._same_padding
self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
self.drop = nn.Dropout(p_dropout)
def forward(self, x, x_mask):
x = self.conv_1(self.padding(x * x_mask))
if self.activation == "gelu":
x = x * torch.sigmoid(1.702 * x)
else:
x = torch.relu(x)
x = self.drop(x)
x = self.conv_2(self.padding(x * x_mask))
return x * x_mask
def _causal_padding(self, x):
if self.kernel_size == 1:
return x
pad_l = self.kernel_size - 1
pad_r = 0
padding = [[0, 0], [0, 0], [pad_l, pad_r]]
x = F.pad(x, convert_pad_shape(padding))
return x
def _same_padding(self, x):
if self.kernel_size == 1:
return x
pad_l = (self.kernel_size - 1) // 2
pad_r = self.kernel_size // 2
padding = [[0, 0], [0, 0], [pad_l, pad_r]]
x = F.pad(x, convert_pad_shape(padding))
return x
```
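The index shuffle inside `_relative_position_to_absolute_position` is easier to see in isolation. Below is a NumPy sketch of the same pad-flatten-reshape trick (assumptions: batch/head dims of 1 and a hand-picked length, purely for illustration; the model above does this in torch):

```python
import numpy as np

def rel_to_abs(x):
    """Convert relative-position logits [b, h, l, 2l-1] to absolute [b, h, l, l].

    Output[..., i, j] equals input[..., i, j - i + (l - 1)], i.e. the logit for
    relative offset j - i, computed using only pads and reshapes (no gathers).
    """
    b, h, l, _ = x.shape
    x = np.pad(x, ((0, 0), (0, 0), (0, 0), (0, 1)))        # [b, h, l, 2l]
    x_flat = x.reshape(b, h, l * 2 * l)
    x_flat = np.pad(x_flat, ((0, 0), (0, 0), (0, l - 1)))  # len (l+1)*(2l-1)
    return x_flat.reshape(b, h, l + 1, 2 * l - 1)[:, :, :l, l - 1:]

# l = 2: relative offsets {-1, 0, +1} are stored at indices {0, 1, 2}.
x = np.array([[[[0, 1, 2], [10, 11, 12]]]], dtype=float)
print(rel_to_abs(x)[0, 0])
# [[ 1.  2.]    row 0: offsets 0, +1
#  [10. 11.]]   row 1: offsets -1, 0
```

The padding skews each row by one position after the flatten/reshape, which is exactly what lines up relative offsets into absolute positions.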
```
import torch
import json
import h5py
import numpy as np
from matplotlib.pyplot import imshow
from PIL import Image, ImageDraw
project_dir = '[Your Project Path]'
image_file = json.load(open(f'{project_dir}/datasets/vg/image_data.json'))
vocab_file = json.load(open(f'{project_dir}/datasets/vg/VG-SGG-dicts.json'))
data_file = h5py.File(f'{project_dir}/datasets/vg/VG-SGG.h5', 'r')
# remove invalid image
corrupted_ims = [1592, 1722, 4616, 4617]
tmp = []
for item in image_file:
if int(item['image_id']) not in corrupted_ims:
tmp.append(item)
image_file = tmp
# load detected results
detected_origin_path = f'{project_dir}/checkpoints/causal-motifs-sgdet-exmp/inference/VG_stanford_filtered_with_attribute_test/'
detected_origin_result = torch.load(detected_origin_path + 'eval_results.pytorch')
detected_info = json.load(open(detected_origin_path + 'visual_info.json'))
# get image info by index
def get_info_by_idx(idx, det_input, thres=0.5):
groundtruth = det_input['groundtruths'][idx]
prediction = det_input['predictions'][idx]
# image path
img_path = detected_info[idx]['img_file']
# boxes
boxes = prediction.bbox
# object labels
idx2label = vocab_file['idx_to_label']
labels = ['{}-{}'.format(idx,idx2label[str(i)]) for idx, i in enumerate(groundtruth.get_field('labels').tolist())]
pred_labels = ['{}-{}'.format(idx,idx2label[str(i)]) for idx, i in enumerate(prediction.get_field('pred_labels').tolist())]
pred_scores = prediction.get_field('pred_scores').tolist()
# groundtruth relation triplet
idx2pred = vocab_file['idx_to_predicate']
gt_rels = groundtruth.get_field('relation_tuple').tolist()
gt_rels = [(labels[i[0]], idx2pred[str(i[2])], labels[i[1]]) for i in gt_rels]
# prediction relation triplet
pred_rel_pair = prediction.get_field('rel_pair_idxs').tolist()
pred_rel_label = prediction.get_field('pred_rel_scores')
pred_rel_label[:,0] = 0
pred_rel_score, pred_rel_label = pred_rel_label.max(-1)
mask = pred_rel_score > thres
pred_rel_score = pred_rel_score[mask]
pred_rel_label = pred_rel_label[mask]
pred_rels = [(pred_labels[i[0]], idx2pred[str(j)], pred_labels[i[1]]) for i, j in zip(pred_rel_pair, pred_rel_label.tolist())]
return img_path, boxes, labels, pred_labels, pred_scores, gt_rels, pred_rels, pred_rel_score, pred_rel_label
def draw_single_box(pic, box, color='red', draw_info=None):
draw = ImageDraw.Draw(pic)
x1,y1,x2,y2 = int(box[0]), int(box[1]), int(box[2]), int(box[3])
draw.rectangle(((x1, y1), (x2, y2)), outline=color)
if draw_info:
draw.rectangle(((x1, y1), (x1+50, y1+10)), fill=color)
info = draw_info
draw.text((x1, y1), info)
def print_list(name, input_list, scores):
for i, item in enumerate(input_list):
        if scores is None:
print(name + ' ' + str(i) + ': ' + str(item))
else:
print(name + ' ' + str(i) + ': ' + str(item) + '; score: ' + str(scores[i].item()))
def draw_image(img_path, boxes, labels, pred_labels, pred_scores, gt_rels, pred_rels, pred_rel_score, pred_rel_label, print_img=True):
pic = Image.open(img_path)
num_obj = boxes.shape[0]
for i in range(num_obj):
info = pred_labels[i]
draw_single_box(pic, boxes[i], draw_info=info)
if print_img:
display(pic)
if print_img:
print('*' * 50)
print_list('gt_boxes', labels, None)
print('*' * 50)
print_list('gt_rels', gt_rels, None)
print('*' * 50)
print_list('pred_labels', pred_labels, pred_rel_score)
print('*' * 50)
print_list('pred_rels', pred_rels, pred_rel_score)
print('*' * 50)
return None
def show_selected(idx_list):
for select_idx in idx_list:
print(select_idx)
draw_image(*get_info_by_idx(select_idx, detected_origin_result))
def show_all(start_idx, length):
for cand_idx in range(start_idx, start_idx+length):
print(f'Image {cand_idx}:')
img_path, boxes, labels, pred_labels, pred_scores, gt_rels, pred_rels, pred_rel_score, pred_rel_label = get_info_by_idx(cand_idx, detected_origin_result)
draw_image(img_path=img_path, boxes=boxes, labels=labels, pred_labels=pred_labels, pred_scores=pred_scores, gt_rels=gt_rels, pred_rels=pred_rels, pred_rel_score=pred_rel_score, pred_rel_label=pred_rel_label, print_img=True)
show_all(start_idx=100, length=1)
#show_selected([119, 967, 713, 5224, 19681, 25371])
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import os
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, KFold, cross_val_score
from sklearn.metrics import mean_absolute_error
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score
from sklearn.model_selection import GridSearchCV
# from .. import data_preprocess
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
```
## Common Code
```
# Files supplied by the competition for model training
X_train = pd.read_csv('../../data/dengue_features_train.csv')
y_train = pd.read_csv('../../data/dengue_labels_train.csv', usecols=['total_cases'])
# Files supplied by the competition for submission
X_test = pd.read_csv('../../data/dengue_features_test.csv')
y_test = pd.read_csv('../../data/submission_format.csv')
X_train[X_train['city']=='sj'].describe()
X_train[X_train['city']=='iq'].describe()
X_test.describe()
def data_preprocess(df):
# drop or encode categorical cols
df_processed = df.drop('week_start_date', axis=1)
df_processed['city'] = df_processed['city'].apply(lambda x : 1 if x=='iq' else 0)
return df_processed
def create_submission_file(pipeline, filename_comment):
next_file_id = generate_next_submission_fileid()
X_test_processed = data_preprocess(X_test)
y_submit_pred = np.rint(pipeline.predict(X_test_processed))
y_test['total_cases'] = y_submit_pred
y_test['total_cases'] = y_test['total_cases'].astype(int)
filename = f'../../data/dengue_submission_{next_file_id}_{filename_comment}.csv'
y_test.to_csv(filename, index = False)
return y_submit_pred, filename
def generate_next_submission_fileid():
files_found = []
for file in os.listdir("../../data"):
if file.startswith("dengue_submission"):
files_found.append(file[18:20])
return f'{int(sorted(files_found).pop()) + 1 :02}'
```
## Notebook-specific code
### Other Estimators to try:
https://www.analyticsvidhya.com/blog/2021/01/a-quick-overview-of-regression-algorithms-in-machine-learning/ <br>
https://scikit-learn.org/stable/modules/classes.html#module-sklearn.ensemble <br>
- AdaBoost
- XGBoost
- SVM
- KNN
- Linear Regression (incl L1 reg)
- Time Series (ARIMA, etc)
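One way to triage that list is a quick cross-validated MAE comparison before any tuning. The sketch below uses synthetic data purely for illustration (an assumption; a real comparison would plug in the dengue features/labels and the full preprocessing pipeline used elsewhere in this notebook):

```python
# Quick cross-validated MAE comparison across candidate regressors.
# Synthetic data stands in for the dengue dataset here.
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=42)

candidates = {
    "adaboost": AdaBoostRegressor(random_state=42),
    "random_forest": RandomForestRegressor(random_state=42),
    "linear": LinearRegression(),
}
scores = {}
for name, est in candidates.items():
    # negate because sklearn reports negative MAE for maximization
    mae = -cross_val_score(est, X, y, cv=3,
                           scoring="neg_mean_absolute_error").mean()
    scores[name] = mae
    print(f"{name}: MAE = {mae:.2f}")
```

Only the one or two best candidates would then be worth a full grid search like the one below.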
```
from sklearn.ensemble import AdaBoostRegressor
def cross_validate(X, y, estimator, cv, scaler=StandardScaler(), imputer=KNNImputer(n_neighbors = 5), dim_reduction=PCA(n_components = 9)):
pipeline = Pipeline(steps=[
('scaler', scaler),
('imputer', imputer),
('dim_reduction', dim_reduction),
('estimator', estimator)
])
#X_train, y_train, X_val, y_val = train_test_split(X, y, test_size=.2, random_state=42)
mae_list_train = []
mae_list_val = []
for train_idxs, val_idxs in cv.split(X, y):
X_train, y_train = X.iloc[train_idxs], y.iloc[train_idxs]
pipeline.fit(X_train, y_train)
y_pred_train = pipeline.predict(X_train)
print(f'Train MAE = {mean_absolute_error(y_train, y_pred_train)}')
mae_list_train.append(mean_absolute_error(y_train, y_pred_train))
X_val, y_val = X.iloc[val_idxs], y.iloc[val_idxs]
y_pred_val = pipeline.predict(X_val)
print(f'Validation MAE = {mean_absolute_error(y_val, y_pred_val)}')
mae_list_val.append(mean_absolute_error(y_val, y_pred_val))
print(f'MAE Train Mean: {np.mean(mae_list_train)}')
print(f'MAE Val Mean: {np.mean(mae_list_val)}')
return pipeline
def tune_model(X_processed, y_train, pipe, params):
# do gridsearch being sure to set scoring to MAE
gridsearch = GridSearchCV(estimator=pipe, param_grid=params, scoring='neg_mean_absolute_error', verbose=2, n_jobs=-1).fit(X_processed, y_train)
print(f'Score with entire training dataset:{-gridsearch.score(X_processed, y_train)}')
best_params = gridsearch.best_params_
print(best_params)
return gridsearch
X_processed = data_preprocess(X_train)
# define pipeline
pipe = Pipeline(steps=[
('scaler', StandardScaler()),
('imputer', KNNImputer()),
('dim_reduction', PCA()),
('estimator', AdaBoostRegressor())
])
# define parameter ranges in dict
# use a double underscore to link a pipeline step with its parameter name -
# - the step label defined in the pipe goes to the left of the '__'
params = {
'imputer__n_neighbors' : np.arange(8,15,1),
'dim_reduction__n_components' : np.arange(4,8,1),
'estimator__n_estimators' : np.arange(20,41,5),
'estimator__learning_rate' : np.arange(.0001, .01, .001)
#'estimator__loss' : ['linear', 'square', 'exponential']
}
grid_pipe = tune_model(X_processed, y_train, pipe, params)
# GridSearch returns the estimator so we can call .predict() on it!
# so just pass the gridsearch object to create_submission_file()
y_pred_sub, filename = create_submission_file(grid_pipe, "adaboost_gridsearch_refined2")
y_pred_sub = pd.read_csv(filename)
y_pred_sub.head()
```
### Before writing tune_model function
Delete this section once the new `tune_model` process is confirmed to work as expected.
```
X_processed = data_preprocess(X_train)
# define pipeline
pipe = Pipeline(steps=[
('scaler', StandardScaler()),
('imputer', KNNImputer()),
('dim_reduction', PCA()),
('estimator', AdaBoostRegressor())
])
# define parameter ranges in dict
# use a double underscore to link a pipeline step with its parameter name -
# - the step label defined in the pipe goes to the left of the '__'
params = {
'imputer__n_neighbors' : np.arange(1,10,2),
'dim_reduction__n_components' : np.arange(2,10,2),
    'estimator__n_estimators' : np.arange(25,101,25),
'estimator__learning_rate' : np.arange(.02, 1.01, .1)
#'estimator__loss' : ['linear', 'square', 'exponential']
}
# do gridsearch being sure to set scoring to MAE
gridsearch = GridSearchCV(estimator=pipe, param_grid=params, scoring='neg_mean_absolute_error', verbose=2, n_jobs=-1).fit(X_processed, y_train)
print(f'Score with entire training dataset:{-gridsearch.score(X_processed, y_train)}')
best_params = gridsearch.best_params_
print(best_params)
# GridSearch returns the estimator so we can call .predict() on it!
# so just pass the gridsearch object to create_submission_file()
y_pred_sub, filename = create_submission_file(gridsearch, "adaboost_gridsearch_2")
filename
y_sub = pd.read_csv(filename)
y_sub.head()
```
## Data:
--> Date (date the crash had taken place)
--> Time (time the crash had taken place)
--> Location
--> Operator
--> Flight
--> Route
--> Type
--> Registration
--> cn/In - ?
--> Aboard - number of people aboard
--> Fatalities - number of deaths among those aboard
--> Ground - number of people killed on the ground (i.e. not aboard)
--> Summary - brief summary of the case
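Given those columns, per-crash survivors aboard follow directly from Aboard minus Fatalities. A small sketch on made-up rows (the real CSV is loaded in the next section):

```python
# Derive survivors aboard from the Aboard and Fatalities columns.
# The rows here are invented for illustration only.
import pandas as pd

df = pd.DataFrame({"Aboard": [4, 53, 20], "Fatalities": [2, 53, 0]})
df["Survivors"] = df["Aboard"] - df["Fatalities"]
print(df["Survivors"].tolist())  # [2, 0, 20]
```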
## Importing Libraries & getting Data
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# plt.style.use('dark_background')
from datetime import date ,timedelta ,datetime
import warnings
warnings.filterwarnings('ignore')
data = pd.read_csv('datasets/Airplane_Crashes_and_Fatalities_Since_1908.csv/Airplane_Crashes_and_Fatalities_Since_1908.csv')
data.head()
data.info()
```
## Handling Missing Values
```
def percent_missing_data(data):
missing_count = data.isnull().sum().sort_values(ascending=False)
missing_percent = 100 * data.isnull().sum().sort_values(ascending=False) / len(data)
missing_count = pd.DataFrame(missing_count[missing_count > 0])
missing_percent = pd.DataFrame(missing_percent[missing_percent > 0])
missing_values_table = pd.concat([missing_count, missing_percent], axis=1)
missing_values_table.columns = ["missing_count", "missing_percent"]
print('The dataset consists of {0} columns , out of which {1} have missing values.'.format(
data.shape[1], str(missing_values_table.shape[0])))
return missing_values_table
percent_missing_data(data)
sns.heatmap(data.isnull() ,yticklabels=False ,cbar=False ,cmap='viridis')
```
## Analysing Date & Time
```
data.Time.head() ,data.Date.head()
# replacing missing data in 'Time' column with 0.00
data['Time'] = data['Time'].replace(np.nan ,'00:00')
# changing format
data['Time'] = data['Time'].str.replace('c: ', '')
data['Time'] = data['Time'].str.replace('c:', '')
data['Time'] = data['Time'].str.replace('c', '')
data['Time'] = data['Time'].str.replace('12\'20', '12:20')
data['Time'] = data['Time'].str.replace('18.40', '18:40')
data['Time'] = data['Time'].str.replace('0943', '09:43')
data['Time'] = data['Time'].str.replace('22\'08', '22:08')
data['Time'] = data['Time'].str.replace('114:20', '00:00')
data['Time'] = data['Date'] + ' ' + data['Time']
def to_date(x):
return datetime.strptime(x, '%m/%d/%Y %H:%M')
data['Time'] = data['Time'].apply(to_date)
print('Date ranges from ' + str(data.Time.min()) + ' to ' + str(data.Time.max()))
data.Operator = data.Operator.str.upper()
data.head()
```
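An alternative to patching each malformed time string by hand, as done above, is letting pandas coerce unparseable values to `NaT`. This is a sketch of that trade-off, not what this notebook does — coercion is less work but silently drops the malformed timestamps:

```python
# pd.to_datetime with errors="coerce" turns unparseable strings into NaT
# instead of raising an exception.
import pandas as pd

times = pd.Series(["09/17/1908 17:18", "07/12/1912 06:30", "c: 18.40"])
parsed = pd.to_datetime(times, format="%m/%d/%Y %H:%M", errors="coerce")
print(parsed.isna().tolist())  # [False, False, True]
```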
# Visualization
## Analysing Total Accidents per Year
```
temp = data.groupby(data.Time.dt.year)[['Date']].count()
temp = temp.rename(columns={'Date':'Count'})
plt.figure(figsize=(10,5))
plt.plot(temp.index ,'Count', data=temp, color='darkkhaki' ,marker='.',linewidth=1)
plt.xlabel('Year', fontsize=12)
plt.ylabel('Count', fontsize=12)
plt.title('Count of accidents by Year', loc='Center', fontsize=18)
plt.show()
```
## Analysing Total Accidents per Month, weekday & hour-of-day
```
import matplotlib.pylab as pl
import matplotlib.gridspec as gridspec
gs = gridspec.GridSpec(2,2)
plt.figure(figsize=(15,10) ,facecolor='#f7f7f7')
# 1st plot (month)
ax0 = pl.subplot(gs[0,:])
sns.barplot(data.groupby(data.Time.dt.month)[['Date']].count().index, 'Date', data=data.groupby(data.Time.dt.month)[['Date']].count(),color='coral',linewidth=2)
plt.xticks(data.groupby(data.Time.dt.month)[['Date']].count().index, [
'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlabel('Month' ,fontsize=10)
plt.ylabel('Count', fontsize=10)
plt.title('Count of Accidents by Month',loc='center', fontsize=14)
#====================================================================#
# 2nd plot (weekday)
ax1 = pl.subplot(gs[1,0])
sns.barplot(data.groupby(data.Time.dt.weekday)[['Date']].count().index, 'Date', data=data.groupby(data.Time.dt.weekday)[['Date']].count() ,color='deepskyblue' ,linewidth=1)
plt.xticks(data.groupby(data.Time.dt.weekday)[['Date']].count().index, [
'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'])
plt.xlabel('Day of Week', fontsize=10)
plt.ylabel('Count', fontsize=10)
plt.title('Count of accidents by Day of Week', loc='Center', fontsize=14)
#====================================================================#
# 3rd plot (hour)
ax2 = pl.subplot(gs[1,1])
hourly = data[data.Time.dt.hour != 0].groupby(data.Time.dt.hour)[['Date']].count()
sns.barplot(x=hourly.index, y='Date', data=hourly, color='greenyellow', linewidth=1)
plt.xlabel('Hour', fontsize=10)
plt.ylabel('Count', fontsize=10)
plt.title('Count of accidents by Hour of Day', loc='Center', fontsize=14)
plt.tight_layout()
plt.show()
```
## Analysing Total Accidents based on Operator (Military)
```
temp = data.copy()
temp['isMilitary'] = temp.Operator.str.contains('MILITARY')
temp = temp.groupby('isMilitary')[['isMilitary']].count()
temp.index = ['Passenger' ,'Military']
temp_ = data.copy()
temp_['Military'] = temp_.Operator.str.contains('MILITARY')
temp_['Passenger'] = temp_.Military == False
temp_ = temp_.loc[:, ['Time', 'Military', 'Passenger']]
temp_ = temp_.groupby(temp_.Time.dt.year)[
['Military', 'Passenger']].aggregate(np.count_nonzero)
colors = ['tan', 'indianred']
plt.figure(figsize=(15, 6))
# 1st plot(pie-plot)
plt.subplot(1, 2, 1)
patches, texts = plt.pie(temp.isMilitary, colors=colors,
labels=temp.isMilitary, startangle=90)
plt.legend(patches, temp.index, loc="best", fontsize=10)
plt.axis('equal')
plt.title('Total number of accidents by Type of flight',
loc='Center', fontsize=14)
# 2nd plot
plt.subplot(1, 2, 2)
plt.plot(temp_.index, 'Military', data=temp_,
color='indianred', marker=".", linewidth=1)
plt.plot(temp_.index, 'Passenger', data=temp_,
color='tan', marker=".", linewidth=1)
plt.legend(fontsize=10)
plt.show()
```
## Analysing Fatalities vs Year
```
Fatalities = data.groupby(data.Time.dt.year).sum()
Fatalities['Proportion'] = Fatalities['Fatalities'] / Fatalities['Aboard']
plt.figure(figsize=(15, 6))
# 1st plot
plt.subplot(1, 2, 1)
plt.fill_between(Fatalities.index, 'Aboard',data=Fatalities, color="skyblue", alpha=0.2)
plt.plot(Fatalities.index, 'Aboard', data=Fatalities,marker=".", color="Slateblue", alpha=0.6, linewidth=1)
plt.fill_between(Fatalities.index, 'Fatalities',data=Fatalities, color="olive", alpha=0.2)
plt.plot(Fatalities.index, 'Fatalities', data=Fatalities,color="olive", marker=".", alpha=0.6, linewidth=1)
plt.legend(fontsize=10,loc='best')
plt.xlabel('Year', fontsize=10)
plt.ylabel('Amount of people', fontsize=10)
plt.title('Total number of people involved by Year', loc='Center', fontsize=14)
# 2nd plot
plt.subplot(1, 2, 2)
plt.plot(Fatalities.index, 'Proportion', data=Fatalities,marker=".", color='firebrick', linewidth=1)
plt.xlabel('Year', fontsize=10)
plt.ylabel('Ratio', fontsize=10)
plt.title('Fatalities / Total Ratio by Year', loc='Center', fontsize=14)
plt.tight_layout()
plt.show()
```
### So 1970-1990 look like scary years in the history of air travel, with a rise in deaths. But the total number of people flying may also have risen, so the proportion of fatalities may actually have fallen.
### To check this, we now analyse another dataset that records the total number of flights and passengers.
# Getting Data from new dataset
```
data_ = pd.read_csv('datasets/API_IS.AIR.DPRT_DS2_en_csv_v2_2766566/API_IS.AIR.DPRT_DS2_en_csv_v2_2766566.csv')
data_.head()
data_.columns
```
## Data Cleaning
```
data_ = data_.drop('Unnamed: 65',axis=1)
data_ = data_.drop(['Country Name', 'Country Code',
'Indicator Name', 'Indicator Code'], axis=1)
data_.head()
data_ = data_.replace(np.nan, 0)
data_ = pd.DataFrame(data_.sum())
data_ = data_.drop(data_.index[0:10])
data_ = data_['1970':'2008']
data_.columns = ['Sum']
data_.index.name = 'Year'
data_.head()
Fatalities = Fatalities.reset_index()
Fatalities.Time = Fatalities.Time.apply(str)
Fatalities.index = Fatalities['Time']
del Fatalities['Time']
Fatalities = Fatalities['1970':'2008']
Fatalities = Fatalities[['Fatalities']]
Fatalities.head()
data_ = pd.concat([data_, Fatalities], axis=1)
data_['Ratio'] = data_['Fatalities'] / data_['Sum'] * 100
data_.Ratio.head()
```
# Visualization (data_)
## Analysing Amount of Passengers, Total Number of Fatalities per Year & Fatalities Ratio
```
gs = gridspec.GridSpec(3,3)
plt.figure(figsize=(30,10) ,facecolor='#f7f7f7')
ax0 = pl.subplot(gs[0,:])
plt.plot(data_.index ,'Sum' ,data=data_ ,marker='.' ,color='crimson',linewidth=1)
plt.xlabel('Year' ,fontsize=12)
plt.ylabel('Amount of passengers', loc='center', fontsize=12)
plt.title('Total amount of passengers by Year', loc='center', fontsize=16)
plt.xticks(rotation=90)
#---------------------------------------------------#
ax1 = pl.subplot(gs[1,:])
plt.plot(Fatalities.index, 'Fatalities', data=Fatalities,
marker='.', color='forestgreen', linewidth=1)
plt.xlabel('Year', fontsize=12)
plt.ylabel('Number of Fatalities', loc='center', fontsize=12)
plt.title('Total number of Fatalities by Year', loc='center', fontsize=16)
plt.xticks(rotation=90)
#---------------------------------------------------#
ax2 = pl.subplot(gs[2,:])
plt.plot(data_.index, 'Ratio', data=data_,
marker='.', color='darkorchid', linewidth=1)
plt.xlabel('Year', fontsize=12)
plt.ylabel('Ratio', loc='center', fontsize=12)
plt.title('Fatalities / Total amount of passengers Ratio by Year',
loc='center', fontsize=16)
plt.xticks(rotation=90)
#---------------------------------------------------#
plt.tight_layout()
plt.show()
```
## Analysing Ratio VS number of deaths
```
fig = plt.figure(figsize=(12, 6))
ax1 = fig.subplots()
ax1.plot(data_.index, 'Ratio', data=data_,
color='darkcyan', marker=".", linewidth=1)
ax1.set_xlabel('Year', fontsize=11)
for label in ax1.xaxis.get_ticklabels():
label.set_rotation(90)
ax1.set_ylabel('Ratio', fontsize=11)
ax1.tick_params('y')
ax2 = ax1.twinx()
ax2.plot(Fatalities.index, 'Fatalities', data=Fatalities,
color='hotpink', marker=".", linewidth=1)
ax2.set_ylabel('Number of fatalities', fontsize=11)
ax2.tick_params('y')
plt.title('Fatalities VS Ratio by Year', loc='Center', fontsize=14)
fig.tight_layout()
plt.show()
```
## Analysing Operator vs Fatality-Count
```
data.Operator = data.Operator.str.upper()
data.Operator = data.Operator.replace('A B AEROTRANSPORT' ,'AB AEROTRANSPORT')
total_count_operator = data.groupby('Operator')[['Operator']].count()
total_count_operator = total_count_operator.rename(columns={'Operator':'Count'})
total_count_operator = total_count_operator.sort_values(by='Count' ,ascending=False).head(20)
plt.figure(figsize=(12,6))
sns.barplot(x='Count' ,y=total_count_operator.index ,data=total_count_operator )
plt.xlabel('Count',fontsize=12)
plt.ylabel('Operator' ,fontsize=12)
plt.title('Total Count by Operator' ,loc='center', fontsize=14)
plt.show()
total_fatality_per_operator = data.groupby('Operator')[['Fatalities']].sum()
total_fatality_per_operator = total_fatality_per_operator.rename(columns={'Operator':'Fatalities'})
total_fatality_per_operator = total_fatality_per_operator.sort_values(by='Fatalities' ,ascending=False).head(20)
plt.figure(figsize=(12,6))
sns.barplot(x='Fatalities' ,y=total_fatality_per_operator.index ,data=total_fatality_per_operator )
plt.xlabel('Fatalities',fontsize=12)
plt.ylabel('Operator' ,fontsize=12)
plt.title('Total Fatalities per Operator' ,loc='center', fontsize=14)
plt.show()
```
## Analysing AEROFLOT (as they had the highest number of fatalities of all the operators)
```
aeroflot = data[data.Operator =='AEROFLOT']
count_per_year = aeroflot.groupby(aeroflot.Time.dt.year)[['Date']].count()
count_per_year = count_per_year.rename(columns={'Date' : 'Count'})
plt.figure(figsize=(12,6))
plt.plot(count_per_year.index ,'Count' ,data=count_per_year ,marker='.' ,linewidth=1 ,color='darkslategray')
plt.xlabel('Year',fontsize=12)
plt.ylabel('Count',fontsize=12)
plt.title('Crash Count of AEROFLOT (per year)',loc='Center',fontsize=14)
plt.show()
```
## Observations:
### --> Even though the number of crashes and fatalities is increasing, the number of flights is also increasing.
### --> We can actually see that the ratio of fatalities to the total number of passengers is trending down (for the 2000s).
### --> However, we cannot decide which operator is safer to fly with without knowing its total number of flights.
### --> The fact that Aeroflot has the largest number of crashes does not mean it is the least safe to fly with; it might simply have the largest number of flights.
```
from IPython.core.display import HTML
HTML('''<style>
.container { width:100% !important; }
</style>
''')
```
# How to Check that a Formula is a Tautology
In this notebook we develop a function <tt>tautology</tt> that takes a formula $f$ from propositional logic and checks whether $f$ is a tautology. As we represent formulas as nested tuples, we first have to import the parser for propositional logic.
```
import propLogParser as plp
```
As we represent propositional valuations as sets of variables, we need a function to compute all subsets of a given set. The module <tt>power</tt> provides a function called <tt>allSubsets</tt> such that for a given set $M$ the function call $\texttt{allSubsets}(M)$ computes a list containing all subsets of $M$, that is we have:
$$ \texttt{allSubsets}(M) = \bigl[A \mid A \in 2^M\bigr] $$
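The source of the <tt>power</tt> module is not shown in this notebook; a minimal sketch of <tt>allSubsets</tt> (a hypothetical re-implementation, assuming only the behaviour described above) could look like this:

```python
# Hypothetical stand-in for power.allSubsets: builds the power set of M
# by letting every subset found so far spawn a copy that also contains
# the next element.
def allSubsets(M):
    subsets = [set()]
    for element in M:
        subsets += [s | {element} for s in subsets]
    return subsets

allSubsets({'p', 'q'})  # yields all 4 subsets of {'p', 'q'}
```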
```
import power
power.allSubsets({'p', 'q'})
```
To be able to compute all propositional valuations for a given formula $f$ we first need to determine the set of all variables that occur in $f$. The function $\texttt{collectVars}(f)$ takes a formula $f$ from propositional logic and computes all propositional variables occurring in $f$. This function is defined recursively.
```
def collectVars(f):
"Collect all propositional variables occurring in the formula f."
if f[0] in ['⊤', '⊥']:
return set()
if isinstance(f, str):
return { f }
if f[0] == '¬':
return collectVars(f[1])
return collectVars(f[1]) | collectVars(f[2])
```
We have discussed the function <tt>evaluate</tt> previously. The call
$\texttt{evaluate}(f, I)$ takes a propositional formula $f$ and a propositional valuation $I$, where $I$ is represented as a set of propositional variables. It evaluates $f$ given $I$.
```
def evaluate(f, I):
"""
Evaluate the propositional formula f using the propositional valuation I.
I is represented as a set of variables.
"""
if isinstance(f, str):
return f in I
if f[0] == '⊤': return True
if f[0] == '⊥': return False
if f[0] == '¬': return not evaluate(f[1], I)
if f[0] == '∧': return evaluate(f[1], I) and evaluate(f[2], I)
if f[0] == '∨': return evaluate(f[1], I) or evaluate(f[2], I)
if f[0] == '→': return not evaluate(f[1], I) or evaluate(f[2], I)
if f[0] == '↔': return evaluate(f[1], I) == evaluate(f[2], I)
```
Now we are ready to define the function $\texttt{tautology}(f)$ that takes a propositional formula $f$ and checks whether $f$ is a tautology. If $f$ is a tautology, the function returns <tt>True</tt>, otherwise a set of variables $I$ is returned such that $f$ evaluates to <tt>False</tt> if all variables in $I$ are <tt>True</tt>, while all variables not in $I$ are <tt>False</tt>.
```
def tautology(f):
"Check, whether the formula f is a tautology."
P = collectVars(f)
A = power.allSubsets(P)
if all(evaluate(f, I) for I in A):
return True
else:
return [I for I in A if not evaluate(f, I)][0]
```
The function $\texttt{test}(s)$ takes a string $s$ that can be parsed as a propositional formula and checks whether this formula is a tautology.
```
def test(s):
f = plp.LogicParser(s).parse()
counterExample = tautology(f)
if counterExample == True:
print('The formula', s, 'is a tautology.')
else:
P = collectVars(f)
print('The formula ', s, ' is not a tautology.')
print('Counter example: ')
for x in P:
if x in counterExample:
print(x, "↦ True")
else:
print(x, "↦ False")
```
Let us run a few tests.
The first example is DeMorgan's rule.
```
test('¬(p ∨ q) ↔ ¬p ∧ ¬q')
test('(p → q) → (¬p → q) → q')
test('(p → q) → (¬p → ¬q)')
test('¬p ↔ (p → ⊥)')
```
# Section: Encrypted Deep Learning
- Lesson: Reviewing Additive Secret Sharing
- Lesson: Encrypted Subtraction and Public/Scalar Multiplication
- Lesson: Encrypted Computation in PySyft
- Project: Build an Encrypted Database
- Lesson: Encrypted Deep Learning in PyTorch
- Lesson: Encrypted Deep Learning in Keras
- Final Project
# Lesson: Reviewing Additive Secret Sharing
_For more great information about SMPC protocols like this one, visit https://mortendahl.github.io. With permission, Morten's work directly inspired this first teaching segment._
```
import random
import numpy as np
BASE = 10
PRECISION_INTEGRAL = 8
PRECISION_FRACTIONAL = 8
Q = 293973345475167247070445277780365744413
PRECISION = PRECISION_INTEGRAL + PRECISION_FRACTIONAL
assert(Q > BASE**PRECISION)
def encode(rational):
upscaled = int(rational * BASE**PRECISION_FRACTIONAL)
field_element = upscaled % Q
return field_element
def decode(field_element):
upscaled = field_element if field_element <= Q/2 else field_element - Q
rational = upscaled / BASE**PRECISION_FRACTIONAL
return rational
# Explained by Leohard Feinar in Slack:
# We want to encode negative numbers even though our number space only runs from `0` to `Q-1`.
# Therefore we define that half of the number space is reserved for negative numbers,
# specifically the upper half between `Q/2` and `Q`.
# The condition `field_element <= Q/2` checks whether the number is positive;
# if so, we simply return it.
# If the number lies in the negative half of the space, we make it negative by subtracting `Q`.
def encrypt(secret):
first = random.randrange(Q)
second = random.randrange(Q)
third = (secret - first - second) % Q
return [first, second, third]
def decrypt(sharing):
return sum(sharing) % Q
def add(a, b):
c = list()
for i in range(len(a)):
c.append((a[i] + b[i]) % Q)
return tuple(c)
x = encrypt(encode(5.5))
x
y = encrypt(encode(2.3))
y
z = add(x,y)
z
decode(decrypt(z))
```
**Deal With Negative Value**
https://stackoverflow.com/questions/3883004/the-modulo-operation-on-negative-numbers-in-python
Unlike C or C++, Python's modulo operator (%) always returns a number
having the same sign as the denominator (divisor).
`(-5) % 4 = (-2 × 4 + 3) % 4 = 3`
```
(-5) % 100 # -5 = -1 X 100 + 95
encode(-5)
n = encrypt(encode(-5))
n
decode(decrypt(n))
```
# Lesson: Encrypted Subtraction and Public/Scalar Multiplication
```
field = 23740629843760239486723
x = 5
bob_x_share = 2372385723 # random number
alices_x_share = field - bob_x_share + x
(bob_x_share + alices_x_share) % field
field = 10
x = 5
bob_x_share = 8
alice_x_share = field - bob_x_share + x
y = 1
bob_y_share = 9
alice_y_share = field - bob_y_share + y
((bob_x_share + alice_x_share) - (bob_y_share + alice_y_share)) % field
((bob_x_share - bob_y_share) + (alice_x_share - alice_y_share)) % field
bob_x_share + alice_x_share + bob_y_share + alice_y_share
bob_z_share = (bob_x_share - bob_y_share)
alice_z_share = (alice_x_share - alice_y_share)
(bob_z_share + alice_z_share) % field
def sub(a, b):
c = list()
for i in range(len(a)):
c.append((a[i] - b[i]) % Q)
return tuple(c)
field = 10
x = 5
bob_x_share = 8
alice_x_share = field - bob_x_share + x
y = 1
bob_y_share = 9
alice_y_share = field - bob_y_share + y
bob_x_share + alice_x_share
bob_y_share + alice_y_share
```
**Multiply by Public Number**
```
((bob_y_share * 3) + (alice_y_share * 3)) % field
def imul(a, scalar):
# logic here which can multiply by a public scalar
c = list()
for i in range(len(a)):
c.append((a[i] * scalar) % Q)
return tuple(c)
x = encrypt(encode(5.5))
x
z = imul(x, 3) # multiplier 3 is public
decode(decrypt(z))
```
# Lesson: Encrypted Computation in PySyft
```
import syft as sy
import torch as th
hook = sy.TorchHook(th)
from torch import nn, optim
bob = sy.VirtualWorker(hook, id="bob").add_worker(sy.local_worker)
alice = sy.VirtualWorker(hook, id="alice").add_worker(sy.local_worker)
secure_worker = sy.VirtualWorker(hook, id="secure_worker").add_worker(sy.local_worker)
x = th.tensor([1,2,3,4])
y = th.tensor([2,-1,1,0])
x = x.share(bob, alice, crypto_provider=secure_worker) # secure_worker provides randomly generated number
y = y.share(bob, alice, crypto_provider=secure_worker)
z = x + y
z.get()
z = x - y
z.get()
z = x * y
z.get()
z = x > y
z.get()
z = x < y
z.get()
z = x == y
z.get()
```
**With fix_precision**
```
x = th.tensor([1,2,3,4])
y = th.tensor([2,-1,1,0])
x = x.fix_precision().share(bob, alice, crypto_provider=secure_worker)
y = y.fix_precision().share(bob, alice, crypto_provider=secure_worker)
z = x + y
z.get().float_precision()
z = x - y
z.get().float_precision()
z = x * y
z.get().float_precision()
z = x > y
z.get().float_precision()
z = x < y
z.get().float_precision()
z = x == y
z.get().float_precision()
```
# Project: Build an Encrypted Database
```
# try this project here!
```
# Lesson: Encrypted Deep Learning in PyTorch
### Train a Model
```
from torch import nn
from torch import optim
import torch.nn.functional as F
# A Toy Dataset
data = th.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True)
target = th.tensor([[0],[0],[1],[1.]], requires_grad=True)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(2, 20)
self.fc2 = nn.Linear(20, 1)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
# A Toy Model
model = Net()
def train():
# Training Logic
opt = optim.SGD(params=model.parameters(),lr=0.1)
for iter in range(20):
# 1) erase previous gradients (if they exist)
opt.zero_grad()
# 2) make a prediction
pred = model(data)
# 3) calculate how much we missed
loss = ((pred - target)**2).sum()
# 4) figure out which weights caused us to miss
loss.backward()
# 5) change those weights
opt.step()
# 6) print our progress
print(loss.data)
train()
model(data)
```
## Encrypt the Model and Data
```
encrypted_model = model.fix_precision().share(alice, bob, crypto_provider=secure_worker)
list(encrypted_model.parameters())
encrypted_data = data.fix_precision().share(alice, bob, crypto_provider=secure_worker)
encrypted_data
encrypted_prediction = encrypted_model(encrypted_data)
encrypted_prediction.get().float_precision()
```
# Lesson: Encrypted Deep Learning in Keras
## Step 1: Public Training
Welcome to this tutorial! In the following notebooks you will learn how to provide private predictions. By private predictions, we mean that the data is constantly encrypted throughout the entire process. At no point is the user sharing raw data, only encrypted (that is, secret shared) data. In order to provide these private predictions, Syft Keras uses a library called [TF Encrypted](https://github.com/tf-encrypted/tf-encrypted) under the hood. TF Encrypted combines cutting-edge cryptographic and machine learning techniques, but you don't have to worry about this and can focus on your machine learning application.
You can start serving private predictions with only three steps:
- **Step 1**: train your model with normal Keras.
- **Step 2**: secure and serve your machine learning model (server).
- **Step 3**: query the secured model to receive private predictions (client).
Alright, let's go through these three steps so you can deploy impactful machine learning services without sacrificing user privacy or model security.
Huge shoutout to the Dropout Labs ([@dropoutlabs](https://twitter.com/dropoutlabs)) and TF Encrypted ([@tf_encrypted](https://twitter.com/tf_encrypted)) teams for their great work which makes this demo possible, especially: Jason Mancuso ([@jvmancuso](https://twitter.com/jvmancuso)), Yann Dupis ([@YannDupis](https://twitter.com/YannDupis)), and Morten Dahl ([@mortendahlcs](https://github.com/mortendahlcs)).
_Demo Ref: https://github.com/OpenMined/PySyft/tree/dev/examples/tutorials_
## Train Your Model in Keras
To use privacy-preserving machine learning techniques for your projects you should not have to learn a new machine learning framework. If you have basic [Keras](https://keras.io/) knowledge, you can start using these techniques with Syft Keras. If you have never used Keras before, you can learn a bit more about it through the [Keras documentation](https://keras.io).
Before serving private predictions, the first step is to train your model with normal Keras. As an example, we will train a model to classify handwritten digits. To train this model we will use the canonical [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
We borrow [this example](https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py) from the reference Keras repository. To train your classification model, you just run the cell below.
```
from __future__ import print_function
import tensorflow.keras as keras
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, AveragePooling2D
from tensorflow.keras.layers import Activation
batch_size = 128
num_classes = 10
epochs = 2
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(10, (3, 3), input_shape=input_shape))
model.add(AveragePooling2D((2, 2)))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(AveragePooling2D((2, 2)))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(AveragePooling2D((2, 2)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
## Save your model's weights for future private prediction
model.save('short-conv-mnist.h5')
```
## Step 2: Load and Serve the Model
Now that you have a trained model with normal Keras, you are ready to serve some private predictions. We can do that using Syft Keras.
To secure and serve this model, we will need three TFEWorkers (servers). This is because TF Encrypted under the hood uses an encryption technique called [multi-party computation (MPC)](https://en.wikipedia.org/wiki/Secure_multi-party_computation). The idea is to split the model weights and input data into shares, then send a share of each value to the different servers. The key property is that if you look at the share on one server, it reveals nothing about the original value (input data or model weights).
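As a toy illustration of this key property (using plain Python integers, not TF Encrypted's actual protocol), a value split into three additive shares modulo a large number can only be recovered by combining all of the shares:

```python
import random

Q = 2**62  # a large modulus; each individual share is a uniformly random number mod Q

def share(secret, n=3):
    # Pick n-1 shares at random, then choose the last one so that all n sum to the secret.
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

shares = share(42)
print(sum(shares) % Q)  # 42 -- but any single share reveals nothing about the secret
```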
We'll define a Syft Keras model like we did in the previous notebook. However, there is a trick: before instantiating this model, we'll run `hook = sy.KerasHook(tf.keras)`. This will add three important new methods to the Keras Sequential class:
- `share`: will secure your model via secret sharing; by default, it will use the SecureNN protocol from TF Encrypted to secret share your model between each of the three TFEWorkers. Most importantly, this will add the capability of providing predictions on encrypted data.
- `serve`: this function will launch a serving queue, so that the TFEWorkers can accept prediction requests on the secured model from external clients.
- `shutdown_workers`: once you are done providing private predictions, you can shut down your model by running this function. It will direct you to shut down the server processes manually if you've opted to manually manage each worker.
If you want learn more about MPC, you can read this excellent [blog](https://mortendahl.github.io/2017/04/17/private-deep-learning-with-mpc/).
```
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import AveragePooling2D, Conv2D, Dense, Activation, Flatten, ReLU, Activation
import syft as sy
hook = sy.KerasHook(tf.keras)
```
## Model
As you can see, we define almost the exact same model as before, except we provide a `batch_input_shape`. This allows TF Encrypted to better optimize the secure computations via predefined tensor shapes. For this MNIST demo, we'll send input data with the shape of (1, 28, 28, 1).
We also return the logits instead of applying softmax, because that operation is expensive to perform under MPC and we don't need it to serve prediction requests.
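Since the served model returns raw logits, a client can recover class probabilities locally after decrypting the result; for example (a plain NumPy sketch, independent of Syft):

```python
import numpy as np

def softmax(logits):
    # Subtract the maximum logit for numerical stability before exponentiating.
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

probs = softmax([2.0, 1.0, 0.1])
print(probs.argmax())  # 0 -- the largest logit wins, and the entries sum to 1
```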
```
num_classes = 10
input_shape = (1, 28, 28, 1)
model = Sequential()
model.add(Conv2D(10, (3, 3), batch_input_shape=input_shape))
model.add(AveragePooling2D((2, 2)))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(AveragePooling2D((2, 2)))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(AveragePooling2D((2, 2)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(num_classes, name="logit"))
```
### Load Pre-trained Weights
With `load_weights` you can easily load the weights you have saved previously after training your model.
```
pre_trained_weights = 'short-conv-mnist.h5'
model.load_weights(pre_trained_weights)
```
## Step 3: Setup Your Worker Connectors
Let's now connect to the TFEWorkers (`alice`, `bob`, and `carol`) required by TF Encrypted to perform private predictions. For each TFEWorker, you just have to specify a host.
These workers run a [TensorFlow server](https://www.tensorflow.org/api_docs/python/tf/distribute/Server), which you can either manage manually (`AUTO = False`) or ask the workers to manage for you (`AUTO = True`). If choosing to manually manage them, you will be instructed to execute a terminal command on each worker's host device after calling `model.share()` below. If all workers are hosted on a single device (e.g. `localhost`), you can choose to have Syft automatically manage the worker's TensorFlow server.
```
AUTO = False
alice = sy.TFEWorker(host='localhost:4000', auto_managed=AUTO)
bob = sy.TFEWorker(host='localhost:4001', auto_managed=AUTO)
carol = sy.TFEWorker(host='localhost:4002', auto_managed=AUTO)
```
## Step 4: Split the Model Into Shares
Thanks to `sy.KerasHook(tf.keras)` you can call the `share` method to transform your model into a TF Encrypted Keras model.
If you have asked to manually manage servers above then this step will not complete until they have all been launched. Note that your firewall may ask for Python to accept incoming connections.
```
model.share(alice, bob, carol)
```
## Step 5: Launch 3 Servers
```
python -m tf_encrypted.player --config /tmp/tfe.config server0
python -m tf_encrypted.player --config /tmp/tfe.config server1
python -m tf_encrypted.player --config /tmp/tfe.config server2
```
## Step 6: Serve the Model
Perfect! Now by calling `model.serve`, your model is ready to provide some private predictions. You can set `num_requests` to limit the number of prediction requests served by the model; if not specified, the model will be served until interrupted.
```
model.serve(num_requests=3)
```
## Step 7: Run the Client
At this point open up and run the companion notebook: Section 4b - Encrypted Keras Client.
## Step 8: Shutdown the Servers
Once the request limit above has been reached, the model will no longer be available for serving requests, but it is still secret shared between the three workers above. You can kill the workers by executing the cell below.
**Congratulations** on finishing Part 12: Secure Classification with Syft Keras and TFE!
```
model.shutdown_workers()
if not AUTO:
process_ids = !ps aux | grep '[p]ython -m tf_encrypted.player --config /tmp/tfe.config' | awk '{print $2}'
for process_id in process_ids:
!kill {process_id}
print("Process ID {id} has been killed.".format(id=process_id))
```
# Keystone Project - Mix and Match What You've Learned
Description: Take two of the concepts you've learned about in this course (Encrypted Computation, Federated Learning, Differential Privacy) and combine them for a use case of your own design. Extra credit if you can get your demo working with [WebSocketWorkers](https://github.com/OpenMined/PySyft/tree/dev/examples/tutorials/advanced/websockets-example-MNIST) instead of VirtualWorkers! Then take your demo or example application, write a blogpost, and share that blogpost in #general-discussion on OpenMined's slack!!!
Inspiration:
- This Course's Code: https://github.com/Udacity/private-ai
- OpenMined's Tutorials: https://github.com/OpenMined/PySyft/tree/dev/examples/tutorials
- OpenMined's Blog: https://blog.openmined.org
# CAT10 BAYESIAN :D :D :D :D :D
```
import sys
from skopt import gp_minimize
from skopt.space import Real, Integer
from utils.post_processing import eurm_to_recommendation_list,eurm_remove_seed, shift_rec_list_cutoff
from utils.pre_processing import norm_max_row, norm_l1_row
from utils.evaluator import Evaluator
from utils.datareader import Datareader
from utils.ensembler import ensembler
from utils.definitions import *
import scipy.sparse as sps
import numpy as np
import os.path
```
# Datareader, evaluator and essentials
```
dr = Datareader(verbose=False, mode = "offline", only_load="False")
ev = Evaluator(dr)
```
# cat10 > settings to tune
```
target_metric = 'ndcg'
best_score = 0
best_params = 0
norm = norm_max_row
verbose = True
# memory_on_disk= False
memory_on_notebook=True
```
### settings NOT to touch
```
cat = 10
start_index = 9000
end_index = 10000
global_counter=0
x0 = None
y0 = None
```
# files and matrices
```
path = ROOT_DIR+'/npz_simo/'
cb_ar_file = path+"cb_ar_offline.npz"
cb_al_file = path+"cb_al_offline.npz"
cb_al_ar_file = path+"cb_al_ar_offline.npz"
cf_ib_file = path+"cf_ib_offline.npz"
cf_ub_file = path+"cf_ub_offline.npz"
cb_ar = norm( eurm_remove_seed( sps.load_npz(cb_ar_file) ,dr)[start_index:end_index] )
cb_al = norm( eurm_remove_seed( sps.load_npz(cb_al_file) ,dr)[start_index:end_index] )
cb_al_ar = norm( eurm_remove_seed( sps.load_npz(cb_al_ar_file) ,dr)[start_index:end_index] )
cf_ib = norm( eurm_remove_seed( sps.load_npz(cf_ib_file) ,dr)[start_index:end_index] )
cf_ub = norm( eurm_remove_seed( sps.load_npz(cf_ub_file) ,dr)[start_index:end_index] )
matrices_names = ['cb_ar', 'cb_al', 'cb_al_ar', 'cf_ib', 'cf_ub']
matrices_array = [ cb_ar, cb_al, cb_al_ar , cf_ib , cf_ub ]
matrices = dict(zip(matrices_names, matrices_array ))
```
# objective function
the number of matrices and their order must be respected
```
def obiettivo( x ):
global best_score,global_counter, best_params, x0, y0
# eurm = x[0]*cb_ar + x[1]*cb_al + x[2]*cf_ib + x[3]*cf_ub
eurm = sum( x[i]*matrix for i,matrix in enumerate(matrices_array))
# real objective function
ris = -ev.evaluate_single_metric(eurm_to_recommendation_list(eurm, cat=cat, remove_seed=False, verbose=False),
verbose=False,
cat=cat,
name="ens"+str(cat),
metric=target_metric,
level='track')
# memory variables
if x0 is None:
        x0 = [x]
y0 = [ris]
else:
x0.append(x)
y0.append(ris)
global_counter+=1
if ris < best_score :
print("[NEW BEST]")
pretty_print(ris,x)
best_score= ris
best_params = x.copy()
best_params_dict = dict(zip(matrices_names,x.copy()))
elif verbose:
pretty_print(ris,x)
return ris
def pretty_print(ris, x):
print(global_counter,"RES:",ris, end="\tvals:\t")
for i in range(len(x)):
print(matrices_names[i],"%.2f" % (x[i]), end="\t")
print( )
```
# parameters
```
# The list of hyper-parameters we want to optimize. For each one we define the bounds,
# the corresponding scikit-learn parameter name, as well as how to sample values
# from that dimension (`'log-uniform'` for the learning rate)
space = [Real(0, 100, name=x) for x in matrices_names]
#"log-uniform",
space
res = gp_minimize(obiettivo, space,
base_estimator=None,
n_calls=300, n_random_starts=100,
acq_func='gp_hedge',
acq_optimizer='auto',
x0=x0, y0=y0,
random_state=None, verbose=False,
callback=None, n_points=10000,
n_restarts_optimizer=10,
xi=0.01, kappa=1.96,
noise='gaussian', n_jobs=-1)
best_score
best_params
```
# plot
```
from skopt.plots import plot_convergence, plot_objective
import skopt.plots
import matplotlib.pyplot as plt
%matplotlib inline
plot_convergence(res)
a = plot_objective(res)
a = skopt.plots.plot_evaluations(res)
```